As artificial intelligence in multiple forms continues to become a greater part of financial services, people aren’t worrying about the robots taking over so much as whether AI could do harm.
Pedro Bizarro, chief science officer and co-founder at Feedzai, an AI development company specializing in anti-fraud applications, thinks the technology could do good by improving processes. But he also thinks that AI, mishandled, could cause harm. The combination of algorithms, data, and data science going into an AI application could determine whether the end result is positive or detrimental.
Bizarro said that often the engineers and scientists who develop AI know better than anyone else what the potential flaws and risks are in what they are building.
“They don’t bring them up,” said Bizarro, “because they don’t think anybody cares.”
What could go wrong?
In some ways, the potential harm that could be done by badly built AI brings to mind the types of problems that arise in the context of fair lending. Bizarro, speaking at a session at the recent Money 20/20 conference, sketched out the risks.
“We don’t want models to make decisions based on what you are, but what you have done,” said Bizarro. Age, gender, and similar attributes shouldn’t be weighed; one’s behavior should.
“Anything that is behavioral is more correct than something that is the person himself,” said Bizarro. When AI has been permitted to venture in bad directions, in keeping with the idea that it will learn as it goes, it must be retrained. AI’s results must be evaluated to permit course corrections, according to Bizarro.
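The principle Bizarro describes can be sketched in a few lines of code. This is a minimal illustration, not Feedzai’s actual pipeline, and the field names are hypothetical: before a fraud model ever sees a transaction, attributes describing who the person is are stripped out, leaving only signals describing what was done.

```python
# Hypothetical illustration: keep behavioral signals, drop demographic ones.
# Field names are invented for the example.
PROTECTED_ATTRIBUTES = {"age", "gender", "zip_code"}

def behavioral_features(transaction: dict) -> dict:
    """Return only features describing what the user did,
    dropping attributes describing who the user is."""
    return {k: v for k, v in transaction.items() if k not in PROTECTED_ATTRIBUTES}

txn = {
    "age": 34,                          # who the person is -> excluded
    "gender": "F",                      # who the person is -> excluded
    "zip_code": "02139",                # who the person is -> excluded
    "amount": 250.0,                    # what was done -> kept
    "merchant_category": "electronics", # what was done -> kept
    "txns_last_24h": 7,                 # what was done -> kept
}

print(behavioral_features(txn))
# {'amount': 250.0, 'merchant_category': 'electronics', 'txns_last_24h': 7}
```

In practice, excluding protected attributes outright is only a first step, since behavioral features can still act as proxies for them, which is one reason Bizarro stresses ongoing evaluation and retraining.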
Responsibility and accountability
Professional and personal responsibility should become part of the human element of AI as banking and other industries become further involved with the technology, Bizarro believes.
The first part of his argument is that data scientists, engineers, and others involved in AI development should take the equivalent of a Hippocratic Oath. This concept, borrowed from the medical world, would address the duties of a practitioner in making sure that AI doesn’t hurt anyone.
That is one step, but the next step goes further: addressing malpractice.
“Accidents happen,” said Bizarro, but they have effects that must be addressed.
Even when a doctor has taken the oath, errors or worse dictate a penalty, he explained. AI has progressed sufficiently that “we should be considering accountability,” Bizarro said. “If anything, it’s something we should have started thinking about a while ago.”
The paradox of AI, Bizarro added, is that while the general public is only beginning to be aware of the role AI can and will play in their lives, the technology has already been at work for some time.
“The general public doesn’t even realize that they’ve already been using machine learning, for example, with their web searches” and more, Bizarro said.