Insider fraud is changing, and banks must adjust their strategies to keep up with it. This is my main takeaway from the recently released Treasury Fraud and Controls survey from Strategic Treasurer. Not only is a course correction necessary to counter this potentially devastating threat; banks also need to bring their corporate partners along with them to truly make a dent in this new phase of internal data and financial theft.
I say this is a new phase because the report shows new data about the relationship between remote work and insider threats. The rise of this threat has been well-documented as the pandemic scattered workers and their devices to decentralised locations. However, the Strategic Treasurer report shows a new profile. Instead of the predictable result of data leakage and financial losses, the report correlates remote work with social engineering fraud, specifically business email compromise. By the numbers, companies indicated that reliance on remote work spawned three different fraud types. Business email compromise affected 64% of respondents, followed by data theft (39%) and external fraud (38%).
The numbers are significant and concerning. However, I believe this next wave of insider fraud will not have a direct line from remote working to data theft and financial crimes. It will result from several enabling factors related to remote work, such as remote hiring and lax management or oversight of the hiring process, that will accelerate insider fraud. And as the data shows, social engineering threats like business email compromise will join data and financial theft as the most dangerous results.
I see three ways banks can take action to mitigate insider fraud and its financial and reputational damage. In the process, banks can extend tools, technology and best practices to their corporate customers, who are less prepared to fight this newest scourge.
The first, as I’ve already alluded to, is in the hiring process. It’s hard to find reliable data about the number of people hired remotely since the pandemic, or even over the past year. What’s not hard to find is evidence that although remote work is decreasing, it is here to stay. A Stanford University WFH Research project, updated monthly, shows that 28.7% of all work in March 2023 was done remotely. That’s down from 61% at the height of the pandemic. It stands to reason that remote hiring is still a significant factor. The problem is that meeting a prospective hire remotely lacks the body language and face-to-face interaction that inform first impressions. There is no question that increased investment in background checks, reference checks and other screening tools is essential to make sure that the person on the Zoom screen is trustworthy.
That’s one element of remote hiring. The other is the continuous training and education that needs to happen so the new hire understands the dos and don’ts of navigating today’s data-driven organisation. A hardened criminal will most likely be picked up by standard screening checks, although I have heard anecdotal evidence that organised crime gangs are now actively trying to place fake “employees” in businesses. Training can prevent instances of what looks like “innocent” insider fraud, such as the new grad who sends a few key sales contacts to a friend. But there’s no excuse for negligent management practices that can enable benign or malicious insider behaviour, so it’s important to differentiate between these activities. Obviously, the malicious actors are rare, but when they act, the results can be catastrophic.
The second action point moves beyond suspicious individual employee behaviour and into monitoring a menu of key business applications. Log files will show suspicious interactions, but they’re difficult to read, manual to process and only catch a bad actor after the fact. It can be more effective to monitor the junctions and business applications related to data and money movement where fraudulent behaviour can occur. Some examples include a payment system through which an insider can make unauthorised transactions, a customer data warehouse where an insider can steal identities resulting in the aforementioned business email compromise, or even a compliance platform where fraudulent activity can potentially be disguised.
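To make this concrete, here is a minimal sketch of rule-based monitoring at one such junction, a payment system. The thresholds, field names and rules below are illustrative assumptions of mine, not anything prescribed in the survey; a real deployment would tune them to the institution's own approval policies.

```python
from datetime import datetime

# Hypothetical policy values -- illustrative, not from the article.
APPROVAL_LIMIT = 10_000        # payments above this need a second approver
BUSINESS_HOURS = range(8, 19)  # 08:00-18:59 local time

def flag_payment(amount, timestamp, approved_by):
    """Return a list of reasons a payment event warrants review.

    An empty list means nothing suspicious was detected at this junction.
    """
    reasons = []
    if amount > APPROVAL_LIMIT and len(approved_by) < 2:
        reasons.append("high-value payment without dual approval")
    if timestamp.hour not in BUSINESS_HOURS:
        reasons.append("initiated outside business hours")
    if len(set(approved_by)) < len(approved_by):
        reasons.append("same user approved twice")
    return reasons

# Example: a large overnight payment approved by a single user.
event = flag_payment(25_000, datetime(2023, 5, 12, 2, 30), ["user_a"])
```

The point of checks like these is that they run at the moment money moves, rather than in an after-the-fact trawl through log files.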
The third action is better usage of data. The Strategic Treasurer report showed that the usage of AI and ML continues to grow in fraud detection and defence. In 2022 the percentage of respondents using AI for fraud prevention was only 11%; in 2023 it jumped to 24%, and I expect to see another jump next year. There’s no question in my mind that ChatGPT, as well as projects from Microsoft and Google, have raised the profile of AI. It has been an “aha” moment. But now we must translate that momentum into intelligent integration with current infrastructure. This will improve how the volumes of data being processed for fraud detection and prevention are handled. AI can use predictive analytics to identify potential fraud before it occurs by analysing historical data and identifying patterns within it. At Bottomline, we are a vendor with a responsibility to help banks and businesses mitigate fraud, and thus we are investing heavily in this area.
Let’s end by fast-forwarding one year. When I see a 2024 research report that details fraud and financial crimes for banks and their corporate customers, I would like to see — and I believe we can see — a lower incidence of social engineering fraud and internal data theft. To get there, we will need better collaboration at every level. First, we need more cooperation between financial institutions and corporates through training, technology and best practices. Second, we need better internal collaboration between IT and fraud defence teams to replace manual investigations of suspicious employee behaviour with automated system-based monitoring of important applications. And finally, we need stronger collaboration between fraud prevention teams and human resources to promote thorough background checks and increase face-to-face hiring.
You’ll notice that although I am a technology practitioner, I’m not saying technology is the silver bullet to stop insider fraud. I’m humble enough to say that powerful technology is only good if it is used by properly trained practitioners who work with the right people to contribute to the greater good — to the fraud and financial crime community. Banks have done a tremendous job in the last 10 to 15 years to establish practitioners within the risk pool. Now it’s time to raise corporates to the same level.
By Omri Kletter, Global VP, Fraud and Financial Crime, Bottomline