Nobody ever said regulation was easy. That's the takeaway, at least, from recent efforts to prevent criminal activity, fraud, and harm to investors at financial institutions. Banks are routinely revising their compliance procedures to ensure they are operating within the letter of the law at the local, state, federal, and international levels. An array of industry authorities are setting best practices for regulatory compliance, and customers have placed reputation squarely near the top of their priorities when it comes to choosing a bank to do business with.
With these competing demands evolving and ratcheting up costs, institutions routinely look to automation to make sense of it all. It was only a matter of time, then, before they embraced artificial intelligence, raising a host of new ethical and practical questions.
With these issues in mind, I joined experts on banking, regulation, and technology on February 28th for a Fordham Law School symposium on Artificial Intelligence, Machine Learning, and Law in the Financial Services Sector. What I found most impressive wasn't the in-depth discussion of algorithms or balance sheets. Rather, it was the disparate banking stakeholders who offered perspectives on the subject. Compliance and AI, simply put, have become a cross-functional exchange of ideas in financial services.
To illustrate the point, consider those joining me on a panel regarding “AI and Machine Learning for Regulatory Compliance”: Donna Daniels of EY, Gary Barnett of the Securities and Exchange Commission, Solon Barocas of Microsoft Research, and Michael Lammie of PwC, who offered opening remarks. Moderated by N. Cameron Russell, executive director of the Fordham Center on Law and Information Policy, the discussion drew on expert viewpoints from consulting, regulation, research, and law, not to mention NextAngles providing a solutions-oriented perspective.
What was remarkable was that, for a group focused on regulatory compliance, we broadly agreed that AI's value to financial services will grow exponentially in the years ahead — beyond easing compliance burdens such as anti-money-laundering and “Know Your Customer” rules.
Early in our discussion, it was noted that, beyond regulatory compliance, AI-based solutions allow banks to code, organize, and leverage customer data more efficiently. Much as institutions can use technology to connect transactions to bad actors and scan sanctions lists, they can cross-reference customer transactions and accounts to identify new business opportunities and tailor offerings to customer needs.
NextAngles is an example: After integrating the solution with existing operational systems, a bank can eliminate “false positive” compliance alerts, freeing professionals to focus on actual threats to the bank's reputation. An extension of, rather than a replacement for, “end-to-end” solutions, AI capabilities are offered as a managed service, pulling in data from multiple institutions and adjusting solutions to each client's needs.
Consultants equally recognize the value of AI and machine learning. Donna Daniels of EY observed that emerging solutions can guard institutions from risk. She raised the point that AI technologies incorporate predictive functions; rather than just reacting to possible violations, the solutions can spot potentially troubling patterns (such as suspicious transactions) before they rise to the level of a compliance violation or financial crime. “As analytics gets smarter and smarter, things will rise to the top,” she rightly noted.
That possibility could facilitate an important convergence of business functions to allow banks to operate more efficiently and profitably — which even regulators appreciate. Gary Barnett, the SEC's deputy director in its Division of Trading and Markets, acknowledged the tremendous potential for financial firms of all stripes to utilize the technology to become smarter—and of course more compliant.
One concern shared by most participants related to the monitoring of bank employees and the risk of a false positive, in which a system wrongly identifies an employee as committing a violation. Barocas, of Microsoft, has studied how to build ethical safeguards into AI and acknowledged that indicators of fraud tend to be life events such as financial distress. These, he said, “are strong indicators, but are also likely to be the cases that are false-positives.”
What we concluded was that as the technology becomes ubiquitous over the next five years, it will transform each of our fields as they relate to financial institutions. Financial services, after all, is just one field where AI will continue to transform business and make it more profitable, efficient, and safe. And that was something we could all agree upon.
Image courtesy of Fordham Law News.