Artificial intelligence regulation should be developed carefully so that it does not disrupt the operations of companies, such as banks, that are already using the technology, according to the American Bankers Association (ABA).
In a letter to the National Telecommunications and Information Administration (NTIA), the ABA pointed out that artificial intelligence (AI) and machine learning technologies were already embedded in functions such as fraud prevention, cybersecurity, risk management, and lending.
Many banks were exploring how to use AI applications “in a safe and sound manner within the existing risk management framework”, the ABA added, while regulators in various jurisdictions were assessing how to regulate these technologies.
The letter was in response to a request for comment from the NTIA regarding accountability policies and procedures for artificial intelligence. The federal agency plans to draft a report on potential AI regulation based on the responses it receives.
The ABA emphasized that distinctions should be made between AI developers, deployers, and end users as “the tools and motives of these entities are different”, requiring different regulatory treatment.
“Banks typically deploy the technology and should continue to be supervised by the existing federal banking regulators,” the association said. “However, there must be an additional layer to oversee the vendors that would actually be building and maintaining the AI programs.
“This is especially important because community and regional banks are more likely to use vendors to meet their demands. As with any technologically adjacent business, a subset of AI vendors that know and understand the business of banking will organically develop.
“These will have many similarities to the core and cloud service providers, and will likely require an even greater level of regulatory involvement.”
Some of the biggest banks in the US are actively exploring how AI can enhance their operations, including by hiring for technology positions.