
GOP lawmakers, financial leaders ‘leery’ of rushing AI rules on the sector

Federal agencies and members of Congress should tread lightly when introducing new regulations for artificial intelligence in the banking, capital markets and housing sectors, GOP lawmakers and a handful of industry experts argued during a House Financial Services Committee hearing Tuesday.

Rep. Patrick McHenry, the North Carolina Republican who chairs the committee, said in his opening statement that Congress should be “leery of rushing legislation” as lawmakers attempt to “tackle the thorny questions AI presents” — a perspective that came up repeatedly over the course of the four-hour hearing.

“It’s far better we get this right rather than to be first,” McHenry said. “This committee should examine whether current regulation needs to be clarified and carefully consider if targeted legislation to close regulatory gaps may be needed.”

Calling U.S. financial regulations “technology neutral,” McHenry sought to strike a balance between holding financial firms to existing consumer protection laws regardless of AI’s effect on the sector and ensuring that regulators are “equipped” to evolve with “this new technological frontier.”

Those comments broadly reflected the tenor of a bipartisan report from a dozen committee members on AI applications in financial services and housing. The report, released last week, was the culmination of a six-month examination of federal regulators’ relationship with AI as well as how the technology impacts capital markets, housing and insurance, financial institutions and nonbank firms, and national security.

Key bipartisan takeaways in the report included calls for the committee to consider reforms to data privacy laws, ensure that regulators have the tools and focus they need for oversight, and play a “leading role” in overseeing the adoption of AI in financial services and housing.

During Tuesday’s hearing, many witnesses attempted to thread the needle on the regulatory topic. Vijay Karunamurthy, chief technology officer at Scale AI, said “new regulations may not be necessary,” but only a “thorough and comprehensive gap analysis” would be able to determine what is or is not applicable. 

“If gaps exist, we must fill them,” Karunamurthy said. “We believe that should be done with risk-based, sector-specific regulations.” 

Other witnesses echoed Karunamurthy’s call for targeted regulations that avoid a patchwork approach that keeps companies guessing. John Zecca, executive vice president and global chief legal, risk and regulatory officer for NASDAQ, said that while the stock exchange opposes the creation of a central regulator on AI, it does view the National Institute of Standards and Technology’s AI risk management framework positively. 

“Existing regulations and regulatory structures should be leveraged where possible. Like prior technological innovations, the adoption of AI technology does not necessarily demand sweeping regulatory changes,” Zecca said. “We support leveraging NIST to coordinate across the government.”

GOP Reps. Bill Huizenga of Michigan and Dan Meuser of Pennsylvania pressed Zecca on his views of the European Union’s landmark AI Act, which is set to take effect Aug. 1. Zecca reiterated his concerns about a single AI regulator, warning that its agenda could conflict with that of a markets regulator.

Frederick Reynolds, FIS Global’s deputy general counsel for regulatory legal and chief compliance officer, largely agreed, calling the European approach to AI in financial services “fairly restrictive” so far.

That approach “can stifle innovation and make it harder, I think, for companies to adopt this new technology,” he said. “And I think it creates an unlevel playing field across different geographies because the technology that will work well in the United States may not work very well in Europe.”

On a more local level, financial institutions are largely looking for regulatory clarity when it comes to AI. Elizabeth Osborne, chief operations officer of Great Lakes Credit Union, said the existing regulation governing her organization is “technology agnostic,” which represents a “great starting point.” But community financial institutions like hers need more, she said.

“We have the guardrails in place to enable financial institutions of all sizes to proceed with entering into relationships, like our current relationship with interface AI,” she said. “One area that would be helpful is if our regulators provided a statement or clarity in terms of where AI fits in that regulation.” 

While most of the witnesses were game to answer questions about what’s working in the AI financial regulatory space and what they think would present problems for their respective industries, one discussion point fell flat. 

Rep. Andy Barr, R-Ky., asked if any of the panelists cared to comment on what they’ve seen on AI-related regulations from the Consumer Financial Protection Bureau, an agency that he said “has shown itself to be more inclined to move the market by intimidation and supposition, rather than by evidence and administrative procedure.”

“Anyone worried about CFPB stifling innovation in artificial intelligence?” Barr asked, followed by silence. “No? Maybe? OK, crickets.” 

Written by Matt Bracken

Matt Bracken is the managing editor of FedScoop and CyberScoop, overseeing coverage of federal government technology policy and cybersecurity.

Before joining Scoop News Group in 2023, Matt was a senior editor at Morning Consult, leading data-driven coverage of tech, finance, health and energy. He previously worked in various editorial roles at The Baltimore Sun and the Arizona Daily Star.

You can reach him at [email protected].