Banking Exchange Magazine

How Financial Services Companies Can Become Responsible Stewards of AI

Rapid advances in technology, mobile phone penetration, new players and massive investments in technology are transforming the financial services industry

Written by Phaedra Boinodiris, Trust in AI Business Transformation Leader, IBM Global Business Services
Rapid advances in technology, mobile phone penetration, new players and massive investments in technology are transforming the financial services industry. The industry is witnessing the emergence of new technology-enabled business models that support increased operational efficiency, decrease costs and create new partnership and funding opportunities.

Banks and financial services firms are among the cornerstones of society, with the stated goals of fostering growth, promoting entrepreneurship and reducing inequality. And yet, according to the United Nations, women have the same access to financial services as men in just 60 percent of the countries assessed, and the same access to land ownership in just 42 percent.

The allocation of venture capital is one of the primary factors determining who takes products to market, which startups succeed or fail, and who gets to participate in shaping our collective economy. According to a recent Santa Clara University study of 48,000 companies, being a man is the primary factor determining funding. Being a man matters more for startup funding than attending a top university or even having prior startup exits.

In order to truly reduce inequality, foster inclusive growth and promote entrepreneurship for all, the financial industry requires a swift transformation built on deep digitalization that draws on new technologies, including data and responsible AI. The inequities described above persist despite rising awareness of gender bias in the investment and banking spheres. Knowing that bias against women and minorities exists is not enough. It is time to address this inequality systemically.

The introduction of AI to the finance industry presents many opportunities that could in turn lead to a fairer, more efficient and more resilient financial system. With a holistic approach to data and AI, financial services can institute a program that addresses culture, ethical practices and governance mandates, while remaining true to the industry's original intentions. Financial services can successfully deploy AI while also addressing the legitimate concerns of all consumers (not just those who have been historically privileged), governments and regulators regarding fair decisions and the treatment of personal information. Financial institutions that nurture the culture needed to adopt and scale AI safely, and that leverage tools to mitigate industry bias, will be positioned to ensure responsible governance and best practices.

Adopting a new AI culture

Responsible AI is more than just responsible design, development and use of the technology. It speaks more broadly to organizational operating structures and culture. Organizations need to start by building an awareness of what responsible AI is, why it is unique and what the specific challenges are. The intent is not necessarily to create subject matter experts, but rather to create responsible AI stewards who can understand, anticipate and mitigate the relevant issues.

Financial institutions must prioritize diversity, equity and inclusion goals so that the teams embarking on AI projects are diverse enough to curate, develop and test models, and ultimately to serve on the organization's AI ethics board. Organizations must nurture cultures and governance requirements that produce AI that is ethical by design. For example, machine learning models for credit decisioning can exhibit gender bias, unfairly discriminating against female borrowers if left unchecked. Specialized design thinking workshops can help designers and developers think through the unintended consequences of such a credit-decisioning application and create designs and feedback loops that directly ameliorate negative outcomes. The same facilitators can introduce clients to design thinking if it is not a process they currently invest in. Design thinking is foundational to a responsible AI approach.
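One way such a team can check a credit-decisioning model for the gender bias described above is to compare approval rates across groups. The sketch below computes a disparate-impact ratio on entirely hypothetical decisions; field names, group labels and the 0.8 red-flag threshold are illustrative assumptions, not a prescribed method.

```python
# Minimal sketch: measuring gender bias in a credit-decision model's
# outcomes via the disparate-impact ratio. All data is hypothetical.

def disparate_impact(decisions, groups, privileged="male"):
    """Ratio of favorable-outcome rates: unprivileged / privileged.
    Values near 1.0 suggest parity; below ~0.8 is a common red flag."""
    def rate(group):
        outcomes = [d for d, g in zip(decisions, groups) if g == group]
        return sum(outcomes) / len(outcomes)
    unprivileged = next(g for g in set(groups) if g != privileged)
    return rate(unprivileged) / rate(privileged)

# Hypothetical approval decisions (1 = approved) by applicant gender
decisions = [1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
groups = ["male", "male", "male", "male", "male",
          "female", "female", "female", "female", "female"]

ratio = disparate_impact(decisions, groups)
print(f"Disparate impact: {ratio:.2f}")  # 0.4 / 0.6 approval rates
```

A feedback loop of this kind, run before and after each model retraining, is one concrete way the workshops above can turn awareness of bias into a measurable gate.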

AI governance

The realization of AI's potential in financial services depends heavily on access to personal data for model training. That is why, to address the risks of data mishandling and discrimination, European authorities have set standards for ethical decision making and data privacy. [1] The General Data Protection Regulation (GDPR) puts individuals in control of their data, while also posing a number of challenges for the industry when developing AI solutions, such as obtaining individuals' consent for AI experimentation and deleting or anonymizing all data that is not necessary for the specific purpose for which it was collected.
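The purpose-limitation and minimization requirements above can be sketched as a simple preprocessing step: keep only the fields the stated purpose needs, pseudonymize direct identifiers, and drop everything else. The field names, the allow-list and the salt below are illustrative assumptions, not a compliance recipe.

```python
# Minimal sketch of GDPR-style data minimization for a lending purpose.
# Keep only fields needed for the stated purpose; pseudonymize direct
# identifiers; silently drop everything else. All names are hypothetical.

import hashlib

PURPOSE_FIELDS = {"income", "loan_amount", "credit_history_len"}  # needed for scoring
IDENTIFIERS = {"name", "email"}  # direct identifiers: never store raw

def minimize(record: dict, salt: str = "rotate-me") -> dict:
    out = {}
    for key, value in record.items():
        if key in PURPOSE_FIELDS:
            out[key] = value  # necessary for the declared purpose
        elif key in IDENTIFIERS:
            # Salted hash as a crude pseudonym; real systems use managed keys
            out[key] = hashlib.sha256((salt + str(value)).encode()).hexdigest()[:12]
        # all other fields are dropped: not necessary for this purpose
    return out

applicant = {"name": "A. Borrower", "email": "a@example.com",
             "income": 52000, "loan_amount": 12000,
             "credit_history_len": 7, "favorite_color": "blue"}
print(minimize(applicant))
```

The design choice worth noting is that dropping is the default: a field survives only if it is explicitly justified by the purpose, which mirrors how the regulation frames necessity.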

Recent developments in the U.S. and on the international stage suggest we are moving into a new phase in regulatory approaches to AI. In the U.S., the National Institute of Standards and Technology (NIST) at the Department of Commerce has proposed a federal AI engagement plan that calls for federal agencies to move forward on a range of AI standards, including some that can form the basis of a regulatory approach.

Work across stakeholders to build your organization's governance structure (committee structures and charters, roles and responsibilities), and create policies and procedures for data and model management. For both human and automated governance, use frameworks for healthy dialogue that help you publish policy the organization supports.

Trustworthy tools for AI engineering

Using specialized tools, financial services firms can demonstrate the contribution of collected data to a model's result (purpose) and remove features that do not carry sufficient information (minimization), while still performing well at reducing the number of false positives.
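The minimization step just described can be sketched as ranking features by their contribution to the model's decisions and dropping those below a threshold. The importance scores, feature names and cutoff below are illustrative assumptions; a real pipeline would compute the scores from a trained model (for example, via permutation importance).

```python
# Minimal sketch of feature minimization: drop features that contribute
# little information to the model. Scores here are hypothetical stand-ins
# for model-derived importance values.

importances = {
    "income": 0.41,
    "credit_history_len": 0.33,
    "loan_amount": 0.21,
    "zip_code": 0.03,      # low information, and a potential bias proxy
    "browser_type": 0.02,  # low information, irrelevant to the purpose
}

THRESHOLD = 0.05  # keep only features that contribute meaningfully

kept = [f for f, score in importances.items() if score >= THRESHOLD]
dropped = [f for f in importances if f not in kept]
print("kept:", kept)
print("dropped:", dropped)
```

Note that minimization and fairness reinforce each other here: low-information fields such as a postal code are often exactly the proxies through which bias leaks into a model.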

Examples of these forensic tools include those IBM donated to the open source community via the Linux Foundation:

Pillar | Toolkit | Feature
Explainability | AI Explainability 360 | Helps comprehend how machine learning models predict labels by various means throughout the AI application lifecycle
Fairness | AI Fairness 360 | Examines, reports and mitigates discrimination and bias in machine learning models
Robustness | Adversarial Robustness 360 Toolbox (ART) | Defends and verifies AI models against adversarial attacks
Transparency | AI FactSheets 360 | Assembles documentation about an AI model's features, such as purpose, performance and datasets
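The transparency row above centers on documentation. A minimal sketch of such a record, in the spirit of the AI FactSheets 360 approach, is shown below; every field name and value is an illustrative assumption, not a template from the toolkit.

```python
# Minimal sketch of a model "factsheet": a structured record of a model's
# purpose, data and performance that can be published alongside the model.
# All field names and values are hypothetical.

import json

factsheet = {
    "model_name": "credit-decision-v1",
    "purpose": "Score consumer loan applications for default risk",
    "intended_users": ["loan officers"],
    "training_data": {
        "source": "internal loan book, 2015-2020 (hypothetical)",
        "excluded_fields": ["gender", "zip_code"],
    },
    "performance": {"auc": 0.81, "false_positive_rate": 0.07},
    "fairness_checks": {"disparate_impact": 0.93, "threshold": 0.80},
}

print(json.dumps(factsheet, indent=2))
```

Keeping the record machine-readable, as here, lets governance committees diff successive versions of a model's documentation the same way engineers diff code.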


Phaedra Boinodiris is an expert in Responsible Artificial Intelligence (AI) at IBM. She is currently pursuing her PhD in AI and Ethics at NYU and holds a BA and an MBA from the University of North Carolina at Chapel Hill.
