
How the Crisis in Ukraine Could Impact Machine Learning Models

As the conflict worsens, the critical need for financial organizations to have checks in place to safeguard artificial intelligence (AI) and prevent bias will become clear

Written by Fion Lee-Madan, Fairly AI

When a major event destabilizes financial markets, not having a sound governance, risk and compliance (GRC) strategy for machine learning (ML) models becomes an increasing risk to banks and other organizations.

The pandemic is a prime example — it sent shockwaves throughout the financial industry and jeopardized the validity of financial ML models.

The current crisis in Ukraine is another example. As the conflict worsens and the resulting economic fallout continues, the critical need for financial organizations to have checks in place to safeguard artificial intelligence (AI) and prevent bias will become clear.

You may have heard stories of grandparents trying to make their first-ever online purchases during the pandemic, only to be declined because they had never used their credit cards to shop online before. The ML algorithms used by credit card companies flagged the attempted purchases as fraudulent. During a time of disaster, these individuals were simply trying to adapt. Unfortunately, they were unfairly impacted because they were older and inexperienced at shopping online.

Russia’s invasion of Ukraine is causing seismic economic consequences around the globe. Given that events in Eastern Europe are unfolding at a dramatic rate, let's examine how the Ukraine crisis could impact ML models used by the banking industry.

In just a few short weeks, finance has become both a defense strategy and a casualty in the conflict. Examples include:

  • Economic sanctions
  • Foreign banks restricting Russian banks from accessing the global payment system SWIFT
  • Supply disruptions in oil markets driving price increases
  • Increased inflation worries
  • Economic uncertainty and market instability

These and other major financial disruptions will impact data and financial ML models. Now, more than ever, banks need to be conscious of how unreliable data might affect projections and vigilant in ensuring that ML models in production don't negatively impact disadvantaged groups. (For the purposes of this article, an ML model is defined as a type of AI model.)

In finance, the cost of model risk is measured in financial loss and reputational harm. Model risk is particularly punitive when associated with any of the many forms of bias, because such occurrences attract heightened visibility. Certain concepts, expressed by terms such as fairness, bias, or trustworthiness, play a more significant role in the types of ML models often used in finance than they do in traditional quant pricing or risk models.

This is partly because many current financial applications of ML involve assessing the creditworthiness (i.e., probability of default (PD) and loss given default (LGD)) of loan and credit card applicants. ML models are also heavily used for anti-money laundering (AML) and fraudulent-transaction monitoring.
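To make the bias concern concrete: one common automated control on credit decisions is comparing approval rates across demographic groups. The sketch below is a minimal, hypothetical example in Python; the column names ("age_band", "approved") and the toy data are illustrative assumptions, not a description of any particular bank's pipeline or of Fairly AI's product.

```python
import pandas as pd

# Hypothetical credit decisions, split by an illustrative age band.
decisions = pd.DataFrame({
    "age_band": ["under_65", "under_65", "under_65", "65_plus", "65_plus", "65_plus"],
    "approved": [1, 1, 0, 0, 0, 1],
})

# Approval rate per group, and the gap between the best- and worst-treated group
# (a simple demographic-parity style check).
rates = decisions.groupby("age_band")["approved"].mean()
parity_gap = rates.max() - rates.min()

print(rates)
print(f"Approval-rate gap between groups: {parity_gap:.2f}")
```

A large gap does not prove unfairness on its own, but it is exactly the kind of automated signal that should route a model to human review rather than letting it keep making decisions unchecked.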

The Ukraine crisis has pushed oil, gas and food prices to new levels. The New York Times reported what historical data has already told us: “poorer people spend a higher share of incomes on food and heating,” said Ian Goldin, a professor of globalization and development at Oxford University.

Data represents historical conditions. Because ML models are trained on historical data, it is difficult to alter a model's underlying assumptions once conditions change, which adds complexity to managing the model risk of these “black-box” ML models.

Oversight and the ability to act fast are essential. Proper governance processes, risk assessments and automated controls on models and data are what allow model owners to respond quickly. For example, if ML models have never seen data at the new levels of oil, gas and food prices caused by the Ukraine crisis, those models may become unfit for purpose and should be flagged for review immediately.
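As a rough illustration of such an automated control, incoming feature values can be compared against the range observed in the training data, and any model receiving out-of-range inputs can be flagged for review. The sketch below is a simplified assumption-laden example; the feature names, tolerance and toy values are hypothetical and not Fairly AI's implementation.

```python
import pandas as pd

def out_of_range_features(train: pd.DataFrame, new_batch: pd.DataFrame,
                          tolerance: float = 0.05) -> list:
    """Return the features whose new values fall outside the range seen in
    training, widened by a small tolerance."""
    flagged = []
    for col in train.columns:
        lo, hi = train[col].min(), train[col].max()
        pad = tolerance * (hi - lo)
        if (new_batch[col] < lo - pad).any() or (new_batch[col] > hi + pad).any():
            flagged.append(col)
    return flagged

# Hypothetical macro features seen during training vs. a wartime batch.
train_df = pd.DataFrame({"oil_price": [55, 62, 71, 80],
                         "food_index": [98, 101, 104, 107]})
incoming_df = pd.DataFrame({"oil_price": [128], "food_index": [140]})

drifted = out_of_range_features(train_df, incoming_df)
if drifted:
    print(f"Flag model for review; features outside training range: {drifted}")
```

In practice a production control would use richer drift statistics than a simple range check, but the principle is the same: the check runs automatically, and a breach pauses or escalates the model rather than waiting for losses to surface.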

Major events over the last 14 years, including the 2008 financial crisis and the 2020 COVID-19 pandemic, have taught banks and regulators to rethink how to prepare efficiently for financial market disruptions in advance. There is too much volume, complexity and velocity in ML models for them to be managed with traditional resources and processes.

Using automated software systems as a technology backbone to help teams standardize and scale model risk management across the three lines of defense is a priority for banks that have already embraced ML.


Fion Lee-Madan is the COO and technical co-founder of Fairly AI, a Toronto-Kitchener-Waterloo-based startup providing risk management for artificial intelligence systems. FAIRLY is a governance, risk and compliance solution built to help businesses accelerate responsible AI models to market.
