We’ve all heard it before: “Win or go home.” Whether in business or on the playing field, the pressure to win is intense. And in today’s financial services industry, the winner can literally take all. As banks struggle to adapt in the throes of digital disruption, executives find themselves squeezed to use artificial intelligence (AI) or machine learning (ML) models to power their digital transformation initiatives forward.
Why the frantic push to deploy AI and ML models now?
The industry’s use of computational finance models to make decisions is nothing new. Models confer a competitive edge because they save time; help establish repeatable, reliable processes; and produce fast results from more data – which usually means better results. But traditional statistical models can handle only a limited number of dimensions.
Unlike traditional statistical models, machine learning models can consume vast amounts of unstructured data, spot patterns and translate them into usable information. These models improve automatically through experience – by “learning” – which results in greater accuracy and predictability over time. Not only are such capabilities enticing, they’re becoming imperative in an industry driven by continual change, digital interactions and a “need it now” consumer mindset.
A recent survey by SAS and the Global Association of Risk Professionals (GARP) found that, over the next three to five years, businesses expect to significantly increase adoption of AI and ML models to support key risk business cases.1 Banks are also using machine learning models for marketing, fraud detection and anti-money laundering.
But as with all good things, there’s a catch. AI and ML models need more governance than traditional models do. Winning in the digital space also requires a well-drilled team that communicates clearly across all players – data scientists, modelers, validators, auditors and managers alike. That’s much easier said than done.
Business leaders: Failing to prepare is preparing to fail
Deploying AI models in haste can have serious consequences. Losses occur when poorly functioning models are used in areas like loan origination, debt management and pricing. Some firms have even gone out of business by deploying models that weren’t properly managed and tested. Consider well-documented model-risk-based failures, such as the Flash Crash of 2010, the London Whale or Knight Capital’s losses.
New types of models raise the potential for operational risks due to unexpected impacts on an otherwise stable, well-managed business. When you add concerns around regulations, personal data privacy, “black box” transparency and explainable AI, sobering questions arise.
Business leaders in firms adopting these new techniques must demand a clear understanding of models in development and deployment, at an enterprise level. Obtaining this comprehensive view calls for business leaders to seek out an explanation of the purpose and history of models in business terms – not technical modeling jargon.
Fortunately, you can seize the advantages of these models and score big with AI while simultaneously defending against risks. By using rigorous model governance on both sides of the field – for both defense and offense – you can propel your model-dependent digital transformation initiatives into action, with confidence. Here are four ways to make it happen.
Unite model ecosystem silos
Expert model developers and business managers across divisions or initiatives often fail to communicate. A strong model governance framework invites ongoing dialog and documentation as a routine part of the model life cycle.
Defensive plays. Having a central place to document essentials and routinely communicate across traditional silos positions you to easily answer questions from auditors and regulators about which AI or ML models you’re using, why you chose them, and where and how they’re being used.
Offensive plays. Having a comprehensive view of models across the entire organization means you can, at any time, see which models are affecting your business – and in what ways – and how they’re performing within the broader ecosystem. This keeps everyone on the same page, working toward the same high-level organizational goals.
Gain transparency into the inner workings of models
It’s important to understand the design of ML models, especially in terms of the variables the models employ. But it’s not easy to know which data complex ML models are using, because the data that feeds them changes all the time. And since AI and ML models are conceptually different from traditional models, you’ll need new approaches to understand and explain the nitty-gritty of how they work.
Defensive plays. Understanding what data was used to train your models, how it changed between training and usage cycles, and how the output was benchmarked against traditional models creates a strong defensive line. Internal auditors and external regulators will always demand a certain level of transparency and “explainability” for your ML models. By supplying proof, you can counter the tough questions raised about fairness or possible bias in the core of your model data.
Offensive plays. Having tools that shed light on the detailed, inner workings of ML models makes it easier for you to explain the “black box” to stakeholders, which promotes acceptance of the models. And acceptance is crucial if you need to quickly gain buy-in to develop more ML models for promising new digital transformation initiatives – or to simply deal with changes in the market. Explainability is also a prerequisite for incorporating new, nontraditional data sources like social media when you want to enrich model accuracy.
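One model-agnostic technique for shedding light on a black box is permutation importance: shuffle one input at a time and watch how much the model’s score degrades. The sketch below uses a trivial stand-in model so it is self-contained; the class and metric are illustrative, not from any particular toolkit.

```python
import numpy as np

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Shuffle each feature in turn and measure the drop in score.
    A larger drop means the model leans more heavily on that feature."""
    rng = np.random.default_rng(seed)
    baseline = metric(y, model.predict(X))
    drops = []
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature/target link
            scores.append(metric(y, model.predict(Xp)))
        drops.append(baseline - np.mean(scores))
    return np.array(drops)

class LinearModel:
    # stand-in for any opaque model exposing predict()
    def predict(self, X):
        return 2.0 * X[:, 0]

rng = np.random.default_rng(1)
X = rng.normal(size=(500, 2))
y = 2.0 * X[:, 0]  # only feature 0 drives the target
neg_mse = lambda y_true, y_pred: -np.mean((y_true - y_pred) ** 2)
drops = permutation_importance(LinearModel(), X, y, neg_mse)
print(drops)  # feature 0 shows a large drop; feature 1 stays near zero
```

Because the technique never looks inside the model, the same few lines can explain a gradient-boosted tree or a neural network to a stakeholder in plain terms: “these are the inputs the model actually depends on.”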
Spot connections and potential snowballing risks
Multiple models are connected across your organization, but traditional model management tools may only give insight into how individual models are performing. Without a broader model governance system, that isolated view limits your awareness of dependencies among models.
Defensive plays. Knowing all the connections between models is essential to understanding how an issue in one model could spread across the entire modeling ecosystem and do extensive damage. Getting an immediate heads-up when an individual model is starting to fail is your first line of defense against such enterprise-wide impacts. It’s also a way to avoid “black swans” – that is, events that come as a surprise, have major effects and are often inappropriately rationalized after the fact.
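Those connections are naturally modeled as a directed graph, where an edge from model A to model B means B consumes A’s output. A simple breadth-first walk then answers the question “if this model fails, what else is at risk?” The model names below are invented for illustration.

```python
from collections import deque

# Illustrative model-dependency graph: an edge A -> B means
# model B consumes model A's output.
feeds = {
    "credit_score":   ["loan_pricing", "fraud_score"],
    "loan_pricing":   ["portfolio_risk"],
    "fraud_score":    ["portfolio_risk", "aml_alerts"],
    "portfolio_risk": [],
    "aml_alerts":     [],
}

def downstream_impact(graph, failing_model):
    """Breadth-first walk: every model reachable from the failing one
    is potentially compromised and should be flagged for review."""
    impacted, queue = set(), deque([failing_model])
    while queue:
        node = queue.popleft()
        for child in graph.get(node, []):
            if child not in impacted:
                impacted.add(child)
                queue.append(child)
    return impacted

print(sorted(downstream_impact(feeds, "credit_score")))
# ['aml_alerts', 'fraud_score', 'loan_pricing', 'portfolio_risk']
```

Wiring an alert on a single model’s performance metrics to a traversal like this turns an isolated “model is drifting” signal into an enterprise-wide heads-up.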
Offensive plays. Having a broad view of models throughout your ecosystem highlights all the ways in which one model’s output feeds other downstream models. With this interconnected view, you can vastly improve your firm’s model risk governance while freeing developers to confidently build new models, retrain existing models or incorporate new data types.
Lower costs through improved resource allocation and efficiency
Because AI and ML models need constant governance, managing them the old way simply won’t cut it. The dynamic nature of machine learning models means they require frequent performance monitoring, constant data review and benchmarking, a deeper contextual understanding of the model inventory, and detailed contingency plans.
Defensive plays. By defining the methodologies used to aggregate and report model risk throughout your model development platforms, you’ll see how various data points come together to paint a picture of model risk exposure – for business managers, executives, auditors and regulators. A strong model governance solution also defends against runaway costs, which come in the form of expensive resources used to track, test and govern the models, as well as the systems, workflows and infrastructure required to run them. By using a comprehensive, integrated model risk management environment to centrally manage, track and test models, you can identify which models are critical – then focus your resources accordingly.
Offensive plays. Simply put, strong model governance is the smartest way to sustain digital transformation efforts that require AI and ML models. Improved transparency, governance and resource allocation increase efficiency and lower costs by streamlining processes from model development through validation, deployment and ongoing governance. Model governance also invites critique of processes and methodologies so you can fine-tune and achieve even better accuracy and efficiency.
Looking ahead, effective governance of machine learning models will only become more critical as the global marketplace grows more nuanced and multidimensional, and as data volumes and complexity grow. The industry will increasingly use models for an expanding variety of business cases – particularly complex AI and machine learning models. Adding strong model governance to your digital transformation playbook now – as part of an overall model risk management ecosystem – is key to gaining the competitive edge you’ll need to win in this new landscape.
David Asermely is the Global Lead of Model Risk Management at SAS, responsible for product design, support, partner strategy and more. Passionate about translating data into actionable intelligence, David combines the best technologies and design principles to help financial services organizations improve modeling efficiency and quality.
David holds two master’s degrees from the University of Massachusetts Amherst. Prior to joining SAS, he managed the Bank of New York Mellon’s Global Performance and Risk Analytics product set. Connect with him on Twitter, @davidasermely.