Previously, expected loss has been directly associated with Basel and IRB, where models are built to predict the expected loss for all assets under the IRB umbrella. In the IRB context, the expected loss is a 12-month EL and therefore considers defaults occurring within the next year. As with all predictive modelling, there is the challenge of predicting the unknown and assigning probabilistic outcomes to all assets. However, after initial difficulties and varied solutions, the industry as a whole has accepted this modelling challenge and there now exist some fairly standard ways of achieving this goal.
These standard approaches mean that comfort has been established outside of the modelling teams that the estimates are robust and accurate. Stakeholders are therefore able to challenge the models on the basis of specifics, not overall approach.
Now there is a new challenge under IFRS 9, where the bar has been raised to consider losses from defaults occurring throughout the remaining life of the asset. This has been rightly seen as a more complex challenge than that previously faced - the censored period of 12 months has been abandoned, and many assume that this is the main problem.
In my view, this is not the case. Picturing the problem we have just highlighted gives us:
This is essentially the same problem, just over a different period. What is the probability of a credit loss from a default occurring in the blue box? Admittedly, more data will be required and the estimates will be less accurate, but the problem, and therefore the solution, are the same.
However, this isn’t actually what IFRS 9 requires. For IRB the requirement is a single figure of expected losses and there is no concern as to where in the blue box the loss occurs; it’s just a homogeneous period of time. Unfortunately this is not the case for IFRS 9. To allow appropriate discounting we now also need to know the probable time spread of the estimated losses.
Additionally, the interaction between default events and other loss drivers will result in very different estimates of losses. Consider an amortising loan, where the balance associated with a default in one month’s time will be significantly different to the balance associated with a default in say, 3 years’ time.
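The point about amortisation can be made concrete with standard annuity arithmetic. The sketch below (all figures hypothetical) shows how the outstanding balance of a fully amortising loan, and therefore the exposure associated with a default, depends on when the default occurs:

```python
def outstanding_balance(principal, annual_rate, term_months, months_elapsed):
    """Balance of a fully amortising loan after a number of monthly payments,
    using the standard annuity formula."""
    r = annual_rate / 12
    # Level monthly payment that repays the loan over the full term.
    payment = principal * r * (1 + r) ** term_months / ((1 + r) ** term_months - 1)
    # Remaining balance after months_elapsed payments have been made.
    return principal * (1 + r) ** months_elapsed - payment * ((1 + r) ** months_elapsed - 1) / r

# A 25-year loan of 200,000 at 4%: exposure at default in 1 month vs 3 years.
b1 = outstanding_balance(200_000, 0.04, 300, 1)
b36 = outstanding_balance(200_000, 0.04, 300, 36)
```

For this illustrative loan the balance at a 3-year default horizon is materially lower than at a 1-month horizon, which is exactly why the timing of the default matters for the loss estimate.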
So now when we picture the problem it looks a bit different:
Suddenly the IFRS 9 line looks a great deal more complicated.
We now have to consider the future loss profile from the current position for all assets. Fortunately we don’t necessarily have to do all of this from scratch. Most organisations do have a way to risk rank the current population, whether it be judgemental risk grading, IRB PD models or operational behavioural scoring. It would therefore seem sensible to use these as a starting point to finding the solution.
The use of this existing information allows us to understand the likely magnitude of the risk over the short period. Looking at the diagram above, we now have to work out how to make this a lifetime view of default likelihood and how to distribute that risk appropriately into discrete time periods.
This step should involve the consideration of anything that alters the distribution of losses, for example the current risk, and/or anything that changes over time and can be predicted, for example the maturity/months on book.
The current risk is clearly the main driver of the magnitude of expected default/loss. However, it also has a large impact on the expected shape of that loss: high-risk assets would be expected to see early emergence of default, whilst lower-risk assets would be expected to see gradual default growth.
So it would now appear that we have a solution for lifetime default estimates – just take the current risk based on some extant measure (e.g. IRB PD) and then distribute/extrapolate based on some pre-defined curves and we have an IFRS 9 compliant approach. Unfortunately this is not true.
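To make the "pre-defined curves" idea concrete, here is a minimal sketch of distributing a lifetime PD into monthly marginal PDs. The Weibull-style survival curve, the shape parameters and all figures are illustrative assumptions, not a prescribed method:

```python
import numpy as np

def marginal_pds(lifetime_pd, term_months, shape):
    """Distribute a lifetime PD into monthly marginal default probabilities
    using an illustrative Weibull-style survival curve.
    shape < 1: front-loaded defaults (high risk); shape > 1: gradual emergence."""
    t = np.arange(term_months + 1) / term_months
    # Survival curve scaled so that survival at the end of term = 1 - lifetime_pd.
    survival = (1 - lifetime_pd) ** (t ** shape)
    # Probability of defaulting in each individual month.
    return -np.diff(survival)

high_risk = marginal_pds(0.20, 60, shape=0.5)   # defaults emerge early
low_risk  = marginal_pds(0.02, 60, shape=1.5)   # defaults build up gradually
```

By construction the monthly figures sum back to the lifetime PD, and the shape parameter controls whether the risk is front-loaded or back-loaded, mirroring the high-risk/low-risk distinction described above.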
The above problem, and therefore solution, is still missing one key element, that of the estimates being forward-looking. It is currently assuming that, whilst we have more blue boxes, they are all the same. As such, we need a new (but fortunately final) picture of the problem:
Now we have the lifetime view, split into discrete periods as before; but we also acknowledge that the environment in each of those periods is different.
The forward-looking element, in my view, is where the real complexity is introduced. The industry is fast trying to enhance its macro-economic forecasting capabilities in light of increased focus from regulators on stress testing over the last few years. However, these methods are still not perfect and are only designed to deliver forecasts at a portfolio or segment level. The ability to perform these calculations appropriately at the account level, ideally required for stage allocation, is a step into the unknown.
There are obviously ways to apply factors based on segment-level forecasts from stress testing solutions, but will these be appropriate? For example, a mortgage currently at 30% LTV will be largely unaffected by a 50% fall in house prices; this won’t be the case for a mortgage currently at 75% LTV (which would move to 150% LTV), where the large amount of negative equity could create a ‘won’t pay’ situation, even if the borrower can pay. Conversely, if the 30% LTV mortgage has very little disposable income, it would be severely affected by a 3% interest rate rise, which may result in a ‘can’t pay’. It is easy to see how it becomes very difficult to model at such an individual level using any kind of ‘curve scaling’.
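The LTV arithmetic in the mortgage example is simple but worth pinning down, since it drives the whole argument. A one-line sketch (the function name and figures are illustrative):

```python
def stressed_ltv(current_ltv, house_price_fall):
    """Loan-to-value after a proportional fall in the property value,
    with the loan balance held constant."""
    return current_ltv / (1 - house_price_fall)

low = stressed_ltv(0.30, 0.50)   # a 30% LTV mortgage moves to 60% -- still well secured
high = stressed_ltv(0.75, 0.50)  # a 75% LTV mortgage moves to 150% -- deep negative equity
```

The same macro shock produces radically different account-level outcomes, which is why a uniform segment-level scaling factor struggles here.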
This therefore leads into a modelling world of survival analysis, decomposition and decaying cubic spline, which will make a few experienced modellers sit up, never mind the teams in finance responsible for providing commentary on the movements. But is there a cause for concern at a potentially high level of sophistication and therefore complexity in underlying models? Based on what we are trying to achieve and existing practices, the answer has to be no.
In the world of IRB there is generally comfort across all areas to be able to explain that ‘RWAs have increased because PDs have increased because we have originated some riskier assets’. No one outside of the modelling department asks (or cares) whether a step-wise logistic regression was used to define the underlying PD models. The important thing is that the output is understandable and therefore challengeable. As such, black-box deployment solutions should be avoided, but sophisticated modelling that delivers a more accurate, compliant result should be embraced, as long as it is commensurate with the size and complexity of the organisation and is understood somewhere internally.
All of the above gets us to a forward looking lifetime PD model, which should be regarded as the most complex element. However it isn’t the total solution. We now need to consider the other standard expected loss elements, namely Exposure at Default (EAD) and Loss Given Default (LGD).
Considering EAD first, there are two main considerations: is there any element of undrawn exposure, and is the asset amortising? If there is any undrawn element then the propensity for this to be drawn, and the associated level of drawings in the event of default, needs to be modelled. However, this isn’t a new challenge and there are well-understood methods for addressing it.
In my view it is likely that the level of exposure at point of default for revolving products is not dependent on the time of the default, and as such if IRB EAD models exist, these could be considered as an appropriate estimate for EAD through the life of the assets (once any downturn considerations have been removed).
For amortising products the IRB EAD becomes useless, due to the regulatory floor at current balance. However, there should be little modelling required for IFRS 9: a simple mechanical extrapolation using the current balance, regular repayment amount and interest rate should suffice. Over-payments (though not early redemption) can be considered here, though my experience has thus far shown that they are not material for defaulting accounts.
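A minimal sketch of such a mechanical extrapolation, month by month, with an optional regular over-payment (the function name, parameters and figures are illustrative):

```python
def ead_profile(balance, annual_rate, payment, months, overpayment=0.0):
    """Project the outstanding balance forward month by month, giving the
    exposure if default were to occur in each future month."""
    r = annual_rate / 12
    profile = []
    for _ in range(months):
        # Interest accrues, then the contractual payment (plus any regular
        # over-payment) is applied; the balance cannot go below zero.
        balance = max(0.0, balance * (1 + r) - payment - overpayment)
        profile.append(balance)
    return profile

# 150,000 balance at 4%, 1,000 monthly repayment, projected over 24 months.
profile = ead_profile(150_000, 0.04, 1_000, 24)
```

No statistical model is involved; the EAD term structure falls straight out of the contractual mechanics, which is the point being made above.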
This therefore leaves LGD, which should be considered in two key segments: assets with associated security/collateral and those without. For assets without security, it is likely that there will be very little differentiation between estimates; therefore, in the absence of extant models, a single portfolio LGD may be most appropriate. For assets with security, the modelling needs to be specific to the asset and dynamic, taking account of the moving exposure (from the EAD estimate) and the expected future values of the security (from the economic forecasts).
Once all of these have been considered and built, the workings of a lifetime expected loss model will be in place, such that for each month in the future there is a probability of defaulting in that month (PD), the estimated exposure should the account default in that month (EAD) and the final loss resulting from the default as a percentage of the EAD (LGD). These can then be multiplied together and summed over either all months (for stage 2) or the first 12 months (for stage 1) to arrive at an impairment allowance for each account.
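The calculation just described can be sketched in a few lines. This is a simplified, account-level illustration; the discounting convention and all input figures are assumptions:

```python
def expected_loss(pd, ead, lgd, monthly_discount_rate, horizon=None):
    """Discounted, probability-weighted sum of monthly losses.

    pd[t]  : marginal probability of default in month t
    ead[t] : estimated exposure if default occurs in month t
    lgd[t] : loss given default in month t, as a fraction of EAD
    horizon: 12 for a stage 1 (12-month) allowance, None for lifetime (stage 2)
    """
    n = len(pd) if horizon is None else min(horizon, len(pd))
    return sum(
        pd[t] * ead[t] * lgd[t] / (1 + monthly_discount_rate) ** (t + 1)
        for t in range(n)
    )

# Flat illustrative inputs over a 24-month remaining life, no discounting:
pds, eads, lgds = [0.01] * 24, [100.0] * 24, [0.4] * 24
lifetime = expected_loss(pds, eads, lgds, 0.0)          # stage 2: 9.6
twelve_month = expected_loss(pds, eads, lgds, 0.0, 12)  # stage 1: 4.8
```

With flat inputs the stage 1 figure is simply half the lifetime figure; in practice the time profiles of PD, EAD and LGD make the two diverge, which is why the term structure matters.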
This will require detailed account/obligor level data to be fed in to allow the calculations to run. However, particularly for the PD and secured LGD, there is also the need to have economic forecasts fed directly into the model estimates.
This means that all organisations will need to have a method for generating economic scenarios for use in their IFRS 9 calculations. Some organisations have asked whether there will be a central scenario store for all institutions to use, produced for example by the PRA. I think the safe answer here is no. There is little/no likelihood of the PRA hanging its hat on an economic scenario for all financial institutions to use to derive their loss allowances – the risk would be too great. The economic scenarios have to be each institution’s view of the economic outlook.
Note the deliberate use of the plural – scenarios. There has been explicit guidance from the IASB that the expectation is that the IFRS 9 impairment allowance will be a probability weighted summation of a range of scenarios. This means that every organisation will need an internal economic outlook as well as variations with associated probabilities.
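The probability-weighted summation itself is straightforward; the difficulty lies in producing the scenarios. A sketch, where the scenario names, weights and per-scenario allowances are entirely hypothetical:

```python
# Each scenario carries the institution's assigned probability weight and
# the allowance calculated under that scenario's economic forecasts.
scenarios = {
    "upside":   {"weight": 0.20, "allowance":  80.0},
    "base":     {"weight": 0.50, "allowance": 120.0},
    "downside": {"weight": 0.30, "allowance": 310.0},
}

# The weights must form a probability distribution over the scenarios.
assert abs(sum(s["weight"] for s in scenarios.values()) - 1.0) < 1e-9

weighted_allowance = sum(
    s["weight"] * s["allowance"] for s in scenarios.values()
)  # 0.2*80 + 0.5*120 + 0.3*310 = 169.0
```

Note that in this illustration the weighted allowance exceeds the base-case figure: because losses respond asymmetrically to the economy, the downside scenario pulls the weighted average up, which is precisely why a single central forecast is not sufficient.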
Overall the challenges posed by lifetime forward looking estimates are significant, but not insurmountable. There will be varied approaches at first while institutions balance the required complexity for a fully compliant solution, with what is appropriate for themselves. Ultimately as with IRB, there will be some convergence on methods, but this could be a way off yet.