Suddenly fruitful after years of sparse adoption, the long-awaited flowering of artificial intelligence (AI) and machine learning is upon us. Risk management and compliance leaders can expect these advanced analytic technologies to propel productivity-enhancing applications for years to come. But how did we get to this point? And as we enter the third decade of the 21st century, what can we anticipate right around the corner?
AI owes its recent gains largely to the accumulation of big data assets and the continually declining cost of computing. Together they are the catalyzing spark behind the growth of AI, machine learning, computer vision, natural language processing and other analytics varieties.
The role AI/ML plays in risk management and compliance applications saw a major transition between 2017 and 2019. Two years ago, not all organizations were considering AI, and those that did were primarily looking for use cases. These early AI adopters appeared to be driven more by a desire to participate in the technology than to fill an organic need with it.
Today’s landscape is significantly different. Nearly all enterprise-sized financial institutions are engaged in AI-associated machine learning (ML) projects and leveraging them for real business – improving customer experience, fraud detection and, of course, risk and compliance functions. In the risk and compliance arena, we can expect new applications of AI in credit decisioning, model risk management and governance, and stress testing. ML will also drive natural language processing and intelligent automation to assist many GRC tasks.
An AI bouquet
While some believe that AI will eliminate the roles of certain workers in the coming years, that outcome is highly unlikely in risk and compliance. Instead of displacing employees, AI and ML will make them more productive and efficient. Advanced analytics will take over many of workers’ more routine tasks, freeing highly skilled staff to turn their talents to more productive duties. There will also be significant and quantifiable improvements in decision-making as the industry advances in some key arenas.
Productionizing machine learning in credit decisioning begins. There have historically been two key challenges in the adoption of machine learning in credit decisioning: first, that ML models are hard to explain; and second, that ML models sometimes don't achieve the accuracy improvements that make them worth pursuing.
Explainability is being addressed by a wide community of ML experts, as it is essential for ML adoption across a number of industries. Cross-industry efforts have produced a variety of techniques for explaining ML models, as well as for generating constrained, interpretable ML models. Methods for generating credit scorecards from a variety of ML models are under development, including a model-agnostic approach that will support scorecard generation from any model.
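One widely used model-agnostic explanation technique is permutation importance: score a feature by how much randomly shuffling it degrades the model's accuracy. The sketch below is illustrative only, using a hypothetical "black-box" approval rule in place of a real credit model; the feature roles and data are invented for the example:

```python
import numpy as np

def permutation_importance(predict, X, y, n_repeats=5, seed=0):
    """Model-agnostic: needs only a predict function, never the model's internals."""
    rng = np.random.default_rng(seed)
    base = (predict(X) == y).mean()
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])          # break the feature's link to the outcome
            scores.append((predict(Xp) == y).mean())
        drops.append(base - np.mean(scores))
    return np.array(drops)

# hypothetical black-box approval model: only feature 0 (say, utilization) matters
rng = np.random.default_rng(0)
X = rng.uniform(0, 1, (500, 2))
y = (X[:, 0] < 0.5).astype(int)
black_box = lambda A: (A[:, 0] < 0.5).astype(int)
importance = permutation_importance(black_box, X, y)
```

Because the technique touches only inputs and predictions, the same code works unchanged whether the model behind `predict` is a scorecard, a gradient boosting ensemble, or a neural network.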
The challenge of meeting accuracy expectations is being tackled by gathering and incorporating more data into credit decision models. The volume and nature of the data traditionally fed into credit scoring models were tailored to linear models; non-linear models like neural networks and gradient boosting had little to find in that data beyond what the linear models found.
However, as new data (banking transaction data, for example) becomes available and is incorporated into decisions, the accuracy of more advanced machine learning will be significantly better than that of traditional linear models. Banks' desire to extend credit into markets previously blocked by lack of credit history will drive the incorporation of this new data and the adoption of ML models.
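A toy illustration of why richer data favors non-linear models: when the predictive signal lives in an interaction between features, a linear threshold cannot fully capture it, while a simple non-linear rule can. The synthetic data below is purely illustrative and stands in for the kind of interaction effects new data sources can introduce:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.integers(0, 2, size=(1000, 2))
y = X[:, 0] ^ X[:, 1]          # the signal is purely an interaction between features

# a linear threshold is correct on only roughly 75% of cases here
lin_pred = (X.sum(axis=1) >= 1).astype(int)
lin_acc = (lin_pred == y).mean()

# a non-linear rule captures the interaction exactly
nonlin_pred = (X[:, 0] != X[:, 1]).astype(int)
nonlin_acc = (nonlin_pred == y).mean()
```

On data where the signal is mostly additive, the two approaches tie, which is consistent with the point above: non-linear models only pull ahead when the data contains structure for them to find.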
Increased adoption of cognitive technologies. Two years ago the idea of incorporating natural language processing (NLP) or computer vision into risk processes was hard to imagine. More recently, use cases have emerged, and these technologies will move into production over the next few years.
The automation of document reading is one use case already widely considered – whether in credit risk, counterparty credit, or regulatory risk and compliance. Computer vision is used to extract content from printed or PDF documents; ML is then used to classify that content, and NLP is applied to identify entities, trends and sentiment.
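The classify-then-extract stages of such a pipeline can be sketched in miniature. The functions below are deliberately simplistic stand-ins (keyword matching and regular expressions in place of trained ML and NLP models, and an invented document string) to show the shape of the flow, not a real implementation:

```python
import re

def classify(text):
    # toy stand-in for an ML document classifier: route by keyword
    if any(w in text.lower() for w in ("collateral", "guarantee", "covenant")):
        return "credit-agreement"
    return "other"

def extract_entities(text):
    # toy stand-in for NLP entity extraction: find dates and amounts via regex
    return {"dates": re.findall(r"\d{4}-\d{2}-\d{2}", text),
            "amounts": re.findall(r"\$[\d,]+", text)}

# in a real pipeline, this text would come from computer-vision extraction of a PDF
doc = "Collateral of $1,500,000 pledged on 2019-06-30."
label, entities = classify(doc), extract_entities(doc)
```

In production, each stand-in would be replaced by a trained model, but the routing structure (extract, classify, then pull entities for downstream risk review) stays the same.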
In addition, natural language generation (NLG) is increasingly used by risk models to communicate their output to analysts and consumers. Self-explaining and self-documenting models are likely to be the norm over the next few years.
Model risk management. Governance of ML models and governance by ML models will increase in the coming years, pushing machine learning in model risk management (MRM) in two directions:
- There must be oversight for ML models that takes into account their specific nature and risks relative to traditional models, including monitoring for bias, validating transparency and continuous performance monitoring. This also puts more scrutiny on data management and data quality as part of model management.
- Interestingly, ML will be increasingly leveraged to improve model governance. For example, ensuring that training data and production data match is key to ML model risk. ML models will be used to measure similarity, and continuous performance management can be automated with ML anomaly detection. Automated documentation and alerts via NLG will also be leveraged.
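One concrete way to check that training data and production data match is the population stability index (PSI), which compares the distribution of model scores at training time with the distribution seen in production; a common rule of thumb flags values above 0.25 as material drift. A minimal sketch, with synthetic score distributions standing in for real portfolios:

```python
import numpy as np

def psi(expected, actual, bins=10):
    """Population Stability Index: compares production scores to training scores."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    e_pct = np.clip(e_pct, 1e-6, None)   # floor to avoid log(0)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(42)
train_scores = rng.normal(600, 50, 10_000)   # scores seen at training time
stable_prod  = rng.normal(600, 50, 10_000)   # production population, unchanged
drifted_prod = rng.normal(560, 50, 10_000)   # production population after drift
```

Run continuously against incoming scores, a check like this is exactly the kind of automated, always-on governance described above: a drift alert can trigger revalidation, an NLG-generated summary, or both.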
Applying ML within the MRM process itself makes it easier to productionize ML models and to manage their proliferation. It also helps integrate MRM into the model development process, so that models are built with governance considerations from the outset.
Machine learning used throughout risk processes, not in isolation. Until recently, much of the ML exploration in risk focused on single models with specific purposes, each to be replaced by, or built with, ML. Now ML is viewed as a common thread running through an entire risk process.
For example, as one financial institution worked to develop a market risk application, it became obvious that ML would play a role throughout the entire stress testing process. First, unsupervised ML would be used for anomaly detection on market inputs. Next, recurrent neural networks would be considered for generating time series for macroeconomic forecasts. Third, for any risk factor lacking adequate data quality or liquidity, proxies would be identified using k-means clustering. Then, to achieve on-demand stress results, pricing functions would be approximated with neural networks. Finally, stress outputs would also go through anomaly detection to identify portfolios of interest.
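The proxy step in that pipeline can be sketched simply: cluster liquid risk factors on a few characteristics, then map each illiquid factor to its nearest cluster. The sketch below uses a minimal hand-rolled k-means and invented two-dimensional factor features (volatility and rate sensitivity are assumed labels, not the institution's actual inputs); a real project would use a library implementation:

```python
import numpy as np

def kmeans(X, k, iters=20):
    """Minimal k-means; farthest-point initialization keeps initial centers spread out."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = ((X[:, None] - np.array(centers)[None]) ** 2).sum(-1).min(axis=1)
        centers.append(X[np.argmax(d)])
    centers = np.array(centers)
    for _ in range(iters):
        labels = ((X[:, None] - centers[None]) ** 2).sum(-1).argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

# hypothetical features (volatility, rate sensitivity) for liquid risk factors
rng = np.random.default_rng(1)
liquid = np.vstack([rng.normal([0.10, 0.20], 0.02, (20, 2)),
                    rng.normal([0.40, 0.80], 0.02, (20, 2))])
labels, centers = kmeans(liquid, k=2)

# an illiquid factor is proxied by the nearest cluster of liquid factors
illiquid = np.array([0.38, 0.78])
proxy_cluster = int(((centers - illiquid) ** 2).sum(-1).argmin())
```

The design choice worth noting is that the proxy assignment reuses the cluster centers rather than individual factors, so a thinly traded factor inherits the behavior of a whole group of well-observed ones.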
Cultivating the future of AI and machine learning
As familiarity with machine learning models increases – and with access to software that makes it easy to incorporate the models – risk professionals can expect ML to spread through entire risk and governance processes rather than just in individual and isolated models.
While widespread AI adoption may be decades in the making, this game-changing technology is quickly establishing itself as an enterprise computing must-have. Those who invest now are likely to see their efforts come up roses.
Katherine Taylor is an AI Specialist in the SAS Global Technology Practice for Artificial Intelligence and Machine Learning. She has many years of experience in analytics, primarily in banking and energy.