Amid the tumultuous economic conditions sparked by the coronavirus pandemic, we face the looming expectation of a recession or economic downturn. Even as governments around the world attempt interventions to soften the blow, job losses, bankruptcies and increased defaults appear all but inevitable. The ominous uncertainty puts risk management top of mind in financial services. What’s a credit risk manager to do under such circumstances?
Risk managers who believe models always work (or place disproportionately strong faith in them) are in for a surprise. In particular, those who treat model development as an exercise in chasing statistics will soon experience the strengths and weaknesses of models firsthand. That doesn’t mean we’ll stop using models – or, moreover, that more complex machine learning (ML) models won’t be the wave of the future. But we have a great opportunity to better understand such models and how to use them judiciously.
When statistical tools aren’t as reliable as you once thought, you must rely on judgment and experience to manage risk. Fortunately, banking industry risk managers are already devising ways to navigate near-term uncertainty – and, as we eye medium-term challenges, they can likewise take steps to prepare.
Near Term: What credit risk managers can do today
The lack of mature data limits the use of analytical solutions for immediate strategy adjustments. In such circumstances, risk managers tend to favor judgmental adjustments to strategies.
Risk mitigation strategies
Currently, many risk managers are adjusting cutoffs and policy rules to control credit. Many are using historical default rates combined with expert-based safety factors as guidelines. For example, default rates experienced during the 2008/9 credit crisis can function as a benchmark, but only as a broad indicator given differences in the current situation. While 2008 brought a liquidity crisis, we did not witness massive unemployment as we do today.
Many banks are examining results of scenario analysis and numbers from stress testing exercises as benchmarks. Keep in mind, predicting a number at this point is difficult given rampant uncertainty – about the length and depth of the economic slowdown as well as the effects and duration of loan deferral programs and government interventions like temporary income subsidies. As a result, most risk managers are understandably navigating by judgment.
To mitigate risk, banks are taking practical steps like suspending pre-approved loan programs, preventing over-limit and delinquent customers from making further card purchases, and proactively offering payment deferrals to select customers. Some bankers are studying profiles of customers asking for payment deferrals and analyzing changes in applicant populations to gain insights for risk decisions. There is increased preparedness on how to create and quickly deploy effective strategies for the anticipated increase in collections activities downstream.
Even in this environment, models can still prove useful. Indeed, most well-built, broadly based models will continue to rank risk – but the probability of default will likely increase across risk bands. Broad-based scorecards built on historically reliable and predictive variables (whose weight-of-evidence trends, for example, are generalized and monotonic) may fare better than models that describe temporary phenomena. In other words, the former are expected to better retain risk ranking when conditions change. The key assumption here, of course, is that the model will rank risk. Expected default rates can then be adjusted based on a combination of previous experience, judgment and other numbers, such as stress testing results.
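One common way to adjust expected default rates while preserving a model’s risk ranking is a uniform shift in log-odds space, with the size of the shift set judgmentally from stress-test results. This is a minimal sketch of that idea; the band labels, baseline rates and shift factor below are illustrative assumptions, not figures from any portfolio:

```python
import math

# Hypothetical baseline default rates by score band (illustrative only)
baseline_pd = {"A": 0.01, "B": 0.03, "C": 0.08, "D": 0.15}

def stress_pd(pd, log_odds_shift):
    """Shift a probability of default in log-odds space.

    A uniform log-odds shift raises every band's expected default
    rate while preserving the rank ordering of the bands.
    """
    odds = pd / (1.0 - pd)
    stressed_odds = odds * math.exp(log_odds_shift)
    return stressed_odds / (1.0 + stressed_odds)

# Judgmental shift informed by stress-test results: here, doubling
# the odds of default in every band (an assumed, not derived, factor)
shift = math.log(2.0)
stressed = {band: stress_pd(pd, shift) for band, pd in baseline_pd.items()}
```

Because the shift is uniform, band A still looks safer than band D after adjustment – only the levels move, which is exactly the behavior wanted when the ranking assumption holds.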
Despite the lack of mature data, near-term opportunities to use analytics remain – including ML models, which can deliver significant benefit by helping banks target early interventions. For example, transactional data from savings and checking accounts might generate early warnings for at-risk customers and small businesses by detecting changes in purchasing behavior, savings and asset depletion, deposit reductions, income loss, and liquidity shifts. Models can detect such events over a relatively short term, and for modeling purposes, banks can use judgmentally determined targets. However, given recent significant shifts in consumer behavior, it is imperative to interpret all correlations and establish causality before rendering decisions.
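As a minimal illustration of such an early-warning screen, the rule-based sketch below flags accounts from transactional aggregates. The field names and thresholds are hypothetical assumptions and would need to be set from a bank’s own data and judgment (or replaced by a trained model once targets mature):

```python
def early_warning_flags(account):
    """Return the early-warning indicators triggered by an account.

    Thresholds and field names are illustrative assumptions, not a
    production policy.
    """
    flags = []
    # Sharp drop in incoming deposits vs. the trailing average
    if account["deposits_this_month"] < 0.5 * account["avg_monthly_deposits"]:
        flags.append("deposit_reduction")
    # Savings balance depleted relative to three months ago
    if account["savings_balance"] < 0.25 * account["savings_balance_3m_ago"]:
        flags.append("savings_depletion")
    # Spending shifting toward cash advances, a common liquidity signal
    if account["cash_advance_ratio"] > 0.30:
        flags.append("liquidity_stress")
    return flags

at_risk = early_warning_flags({
    "deposits_this_month": 1200.0,
    "avg_monthly_deposits": 4000.0,
    "savings_balance": 500.0,
    "savings_balance_3m_ago": 6000.0,
    "cash_advance_ratio": 0.45,
})
```

Even this simple form illustrates the caveat above: each rule captures a correlation with distress, so a flagged account should prompt investigation of cause before any adverse decision.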
Additionally, banks have come to increasingly embrace AI/ML models – and for good reason. In corporate lending, for example, banks may conduct ongoing credit assessments using online data with streaming analytics and AI, as it’s much easier for AI to capture sentiment on this vast, fast-moving data.
Those using AI/ML models must assess whether they’ll be more resilient in turbulent times than simpler scorecards. Credit risk managers may prefer models whose components are known (vs. black box models), and where the relationship of these components to risk is well understood (e.g., WOE curves). In such cases, they may temporarily base decisions on judgmental scorecards until more robust data becomes available.
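For reference, the weight of evidence (WOE) for a bin of a scorecard characteristic is the log ratio of its share of goods to its share of bads; a characteristic whose WOE moves monotonically across bins is the kind of well-understood component described above. The binned counts below are purely illustrative:

```python
import math

def weight_of_evidence(goods, bads, total_goods, total_bads):
    """WOE for one bin: ln(share of goods / share of bads)."""
    return math.log((goods / total_goods) / (bads / total_bads))

# Hypothetical binned characteristic (e.g., utilization bands):
# (bin name, count of goods, count of bads)
bins = [("low", 400, 10), ("medium", 450, 40), ("high", 150, 50)]
total_g = sum(g for _, g, _ in bins)
total_b = sum(b for _, _, b in bins)
woe = {name: weight_of_evidence(g, b, total_g, total_b)
       for name, g, b in bins}
# A monotonic WOE pattern across bins (higher WOE = lower risk) is the
# kind of relationship a risk manager can sanity-check by eye.
```

This transparency is precisely what black-box models lack: when the WOE curve of each component is visible, a manager can judge whether the relationship still makes sense in changed conditions.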
Model monitoring also warrants attention in this climate. Bankers anticipate high instability but recognize this may be a short-term phenomenon and are looking to delay action until things stabilize. For example, will we see a V- or L-shaped recession? How long will significant unemployment last? Bankers expect to exercise manual overrides of reports in the short term for the same reason.
Risk managers expect portfolio performance reports to remain satisfactory in the near term, as these reports are retrospective and won’t incorporate the downturn’s effects for many months. Specifically, default rates for accounts opened in the past, or as of a historical date, may not change for many months due to payment holidays.
In exploring risk mitigation strategies, cutoffs and models, let us remember this: on the other side of each transaction is a person who may be struggling for reasons beyond their control.
Customer experience should always be a priority – and even more so in tough times. Anticipate hardship and find ways to help customers. In Canada, for example, banks are foregoing balloon payments at the end of payment deferral periods and instead spreading missed payments over months to reduce customer burden. Banks have also temporarily reduced interest rates and increased “tap” transaction limits.
Take care of your customers. A little understanding and flexibility will earn loyalty and could help stem permanent losses.
Medium Term: How credit risk managers can prepare
Reliable delinquency data will take time. In countries where banks have allowed payment deferrals, this may take nine to 15 months. Until then, risk managers will likely rely on early defaults and extrapolation of vintages and roll rates to manage default rate expectations (outside of judgment). Note there may be a difference in the courses of action for business models compared to those used for regulatory purposes. This will likely get clearer as regulators get a better understanding of future impact.
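As a simplified sketch of roll-rate extrapolation, one can chain observed one-month bucket-to-bucket migration rates to project an eventual charge-off rate. The bucket names and rates below are illustrative assumptions, and the chaining deliberately ignores cures, which a real analysis would model:

```python
# Observed one-month roll rates: the share of balances that migrate
# from one delinquency bucket to the next. Values are illustrative.
roll_rates = {
    "current->30dpd": 0.04,
    "30dpd->60dpd": 0.35,
    "60dpd->90dpd": 0.55,
    "90dpd->charge_off": 0.80,
}

def chained_charge_off_rate(rates):
    """Fraction of current balances expected to roll all the way to
    charge-off, multiplying the one-month roll rates together and
    ignoring cures (a deliberate simplification)."""
    result = 1.0
    for rate in rates.values():
        result *= rate
    return result

projected = chained_charge_off_rate(roll_rates)
```

During deferral programs the early roll rates themselves are distorted, which is why such extrapolations serve as a bridge to judgment rather than a substitute for mature delinquency data.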
In the medium term, risk managers will face model replacement decisions. Evaluating model performance will depend on factors like the materiality and level of change expected over the long term. At that time, models can be redeveloped, reweighted or just recalibrated. Some risk managers who employ scorecard-type models expect those models to retain their risk ranking capabilities better than more complex models and will continue using them. It will be interesting to test this using actual data in coming months.
In cases where models must be redeveloped due to significant changes in applicant populations, economic conditions, and long-term, industry-specific impacts, speed will be paramount. Many banks today have a nine- to 12-month cycle to build and deploy models. While this was tolerable (but not ideal) in a low-default, economic-boom environment – a rising tide lifts all boats, as they say – it could be disastrous in a downturn.
For banks with hundreds of models and billions of dollars of decisions monthly, the ability to develop and deploy models several months earlier could save hundreds of millions in reduced write-offs. The same is true for risk strategies around originations, credit limit management, fraud and collections. How long do you want to wait before deploying a better strategy?
To better prepare, risk managers should create model inventories, assign an expected level of impact to each model or model group, and evaluate their ability and capacity to rebuild and deploy models quickly. They may also want to prioritize models based on their projected bottom-line impact.
Current uncertainty gives risk managers lead time to prepare, and they should use it to assess and optimize how they build and deploy models. A step-by-step analysis of model development allows risk managers to understand current processes and eliminate bottlenecks for more efficiency. Consider these commonly identified challenges:
- Data sources. Financial institutions have multiple data sources that are poorly connected to each other.
- Data extraction. Analysts repeat the same data cleansing, matching, merging, derived variable creation exercises, etc., for each model built. This is not only inefficient but prevents capture of repeatable corporate IP.
- Recoding. Code for models and data management must be rewritten for deployment and validation, leading to delays and inconsistencies in interpretation.
- Model management. Model validation/governance is detached from the model development work (vs. a shared platform), resulting in delays and misunderstandings.
- Platform detachment. Decisioning platforms are detached from the analytical environment and require recoding of models and strategies for deployment. This hinders the bank’s ability to react quickly to changes.
As we in the credit risk field continue down the uncertain path before us, we recognize the honest answer to many current questions is, “We don’t know.” But that should not stop us from making reasonable predictions about the future.
Our data will be new and immature, produced by an unexpected economic downturn. Automated processes will help enhance efficiencies, but data issues will require human judgment for model development. As such, building and deploying thousands of models with robots is unlikely to produce desired results. If there’s one thing we can deduce from historical experience, it’s that infrastructure investments to create an integrated model development, deployment, validation and decisioning ecosystem will always pay dividends.
Naeem Siddiqi is a Senior Advisor in the Risk Research and Quantitative Solutions division at SAS, where he advises C-level bankers on issues around credit scoring and decisioning, risk strategy, climate change risk, AI/ML in credit risk and modernizing analytics infrastructures. He has worked in retail credit risk management since 1992, both as a consultant and as a risk manager at financial institutions, during which time he has educated bankers in more than 20 countries on the art and science of credit scoring.
Naeem is also the author of Intelligent Credit Scoring: Building and Implementing Better Credit Risk Scorecards (Wiley and Sons, 2017) and Credit Risk Scorecards: Developing and Implementing Intelligent Credit Scoring (Wiley and Sons, New York, 2005). He has an honors bachelor’s degree in engineering from Imperial College of Science, Technology and Medicine at the University of London and an MBA from York University in Toronto.