Banks spent $31.3 billion on AI in 2024. That's the second-largest industry investment in AI globally, behind only software and IT services. And only 29% of financial institutions report meaningful cost savings from those investments.

That gap is the entire story of AI in banking right now.

Fraud detection and code generation are working. American Express prevents $2 billion in annual fraud losses with its neural network models. JPMorgan attributes $1.5 billion in annual business value to AI-driven capabilities. These numbers are auditable and real. But generative AI — which 98% of banks use or plan to use — has officially entered Gartner's "Trough of Disillusionment." An S&P Global survey found 42% of companies abandoned most of their AI initiatives in 2025, up from 17% the year before.

What follows is where the $31 billion is actually going, which use cases have proven ROI, and what separates the banks generating returns from the ones running 14 chatbot pilots simultaneously and calling it an AI strategy.

$31.3B
global bank AI spending in 2024 — projected to reach $97 billion by 2027
IDC / Industry estimates, 2024
29%
of financial institutions report meaningful cost savings from AI investment
BCG / McKinsey, 2025
42%
of companies abandoned most of their AI initiatives in 2025 — up from 17% in 2024
S&P Global Market Intelligence, 2025
12%
of AI initiatives in financial services are fully deployed enterprise-wide; 62% remain in pilot
Riverbed Global Survey, 2025

Where the $31 Billion Is Going

Banking is not spending $31 billion on one thing. The investment clusters around six distinct use cases, and the ROI profile varies dramatically across them.

Fraud detection and prevention is deployed at 87% of global financial institutions — up from 72% in early 2024. The ROI is proven and quantified. This is the most defensible AI investment in the industry.

Software engineering and code generation has become the leading enterprise AI use case. Banks report 10–20% developer productivity gains, with Goldman Sachs projecting 3–4x productivity improvements from its autonomous coding pilots. It's internal, controllable, and measurable against existing velocity baselines.

Customer-facing virtual assistants hold the largest revenue share of AI agents in financial services at 32.5%. Bank of America's Erica is the most mature example, but most virtual assistant deployments remain narrowly scoped and far from the full-service agent functionality being marketed.

Compliance and regulatory monitoring — automated document review, regulatory parsing, AML surveillance — commands significant and growing investment, driven directly by regulatory pressure and the cost of manual compliance work.

The final two are risk management and credit decisioning (credit memo drafting, early-warning systems) and enterprise knowledge and productivity platforms (document summarization, internal Q&A).

The trajectory is steep regardless of ROI clarity. Financial sector AI spending is projected to grow from $35 billion in 2023 to $97 billion by 2027 — a 29% compound annual growth rate. GenAI spending specifically hit $6 billion in 2024 and is forecast to reach $85 billion by 2030. 80% of US banks increased their AI budgets for 2025.
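The growth figure is easy to verify. As a quick arithmetic check, treating 2023 to 2027 as a four-year compounding window:

```python
# Sanity check on the projected growth rate: $35B (2023) -> $97B (2027).
# CAGR = (end / start) ** (1 / years) - 1, over a 4-year span.

start, end, years = 35e9, 97e9, 4
cagr = (end / start) ** (1 / years) - 1
print(f"Implied CAGR: {cagr:.1%}")  # -> Implied CAGR: 29.0%
```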

What the Biggest Banks Are Actually Doing

The competitive landscape in bank AI is defined by a handful of institutions whose technology budgets dwarf most companies' total revenue. Their disclosures are the clearest picture of what mature bank AI actually looks like.

JPMorgan Chase is running the most aggressive AI program in the industry. Its 2024 technology budget reached $17 billion, rising to $18 billion in 2025, with roughly $1.3 billion earmarked specifically for AI. The centerpiece is LLM Suite — a proprietary generative AI platform using models from OpenAI and Anthropic — which onboarded 200,000 employees within eight months of launch. That's the largest Wall Street AI deployment on record. The bank has 450+ proofs of concept in development. Other tools include ChatCFO (an LLM for the finance team that automates financial modeling), IndexGPT (thematic investing using GPT-4 and NLP), and Coach AI (helping wealth advisors access information 95% faster). JPMorgan's AI-driven tools contributed to a 20% year-over-year increase in gross sales in Asset & Wealth Management. The bank employs over 2,000 AI experts — more than the next seven largest banks combined, by its own account.

Bank of America allocated $4 billion to new technology initiatives including AI in 2025, nearly a third of its $13 billion total technology budget. Its virtual assistant Erica has logged 3.2 billion total interactions since its 2018 launch, with roughly 700 million in 2024 alone and 98% of customer inquiries resolved without human handoff. The bank estimates Erica's 2 million daily interactions save the equivalent of 11,000 staffers' daily work. Internally, 90% of BofA's 213,000 employees actively use Erica for Employees, reducing IT service desk queries by over 50%.

Morgan Stanley has built its AI program around its exclusive strategic relationship with OpenAI in wealth management. Its AI @ Morgan Stanley Assistant, powered by GPT-4, serves 16,000+ financial advisors. Document retrieval efficiency jumped from 20% to 80% and query times dropped from 30+ minutes to seconds after launch. 98% of advisor teams have adopted it. The bank's Debrief tool attends Zoom meetings, generates notes and action items, drafts follow-up emails, and saves everything to Salesforce. Nearly 50% of all Morgan Stanley employees now access some form of GenAI tool.

Goldman Sachs opened its multi-model GS AI Platform to all 46,500 employees in June 2025, achieving over 50% adoption with staff generating 1 million GenAI prompts per month. The platform spans GPT variants, Google Gemini, Meta LLaMA, and Anthropic Claude. Goldman became the first major bank to pilot an autonomous AI software engineer in July 2025. CEO David Solomon captured the emerging reality at a January 2025 conference: AI can now draft "95% of an S-1 filing in minutes" — a task that previously took a six-person team two weeks. "The last 5% now matters because the rest is now a commodity."

A clear pattern runs across all four. Each institution uses a hybrid architecture: proprietary platforms built internally, running external foundation models through secure sandboxed environments with zero-data-retention policies. The build-vs-buy question has been effectively resolved. The answer at scale is both.

The banks winning the AI race all invested heavily in data infrastructure and cloud migration years before generative AI arrived. That foundation is what separates a $1.5 billion annual AI return from a stuck pilot.

Fraud Detection: Where the ROI Is Essentially Proven

If there is one domain where AI's value in banking is not a projection, it's fraud detection. The urgency driving investment here is real and growing.

Consumer fraud losses reached $12.5 billion in 2024 — up 25% over 2023. The FBI documented $16.6 billion in total internet crime losses. Per LexisNexis, every dollar lost to fraud now costs financial institutions more than $5 in total — up 25% in four years. Synthetic identity fraud attempts grew 153% from late 2023 to early 2024. Deepfake-related incidents in North America surged 1,740%.

Against this, AI-powered systems are outperforming traditional rule-based approaches by a margin that isn't close. Traditional transaction monitoring generates false positive rates of 30–70% — meaning most flagged transactions are legitimate. In AML, false positive rates run as high as 90%. These aren't just operational costs. Each false positive requires human review. At scale, the labor cost is enormous and the customer friction is significant.

AI systems are collapsing those rates while finding more actual fraud:

  • HSBC reduced false positives by 60% while detecting 2–4x more genuinely suspicious activity, after deploying Google Cloud's AML AI.
  • JPMorgan reports a 95% reduction in AML false positives.
  • Mastercard's Decision Intelligence Pro achieves an 85% false positive reduction, with fraud detection improvements averaging 20% and reaching 300% in some cases.
  • DBS Bank cut false positives by 90% and investigation times by 75%.
  • PSCU, serving 1,500 credit unions, saved $35 million over 18 months.
  • Eastern Bank achieved a 67% decrease in false positives within its first year of adoption.

American Express has maintained the industry's lowest fraud rates for 14 consecutive years. Its 10th-generation neural network processes $1.2 trillion across 8 billion transactions at 2-millisecond latency and saves the company $2 billion annually in prevented fraud losses. Visa blocked $40 billion in attempted fraud in fiscal year 2023.

The practical deployment model has settled into a hybrid approach. Rules handle known fraud patterns and provide the regulatory explainability required for SAR filings. ML models handle novel pattern detection and continuous adaptation. Most banks deploy AI in shadow mode alongside existing rules first, then gradually shift decision authority as confidence builds.
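The shadow-mode pattern can be sketched in a few lines. This is an illustrative toy, not any bank's production system: the rule, the stand-in model score, and the thresholds are all invented for the example. The point is the structure: rules keep decision authority while the model's verdict is computed in parallel and divergences are logged for review.

```python
from dataclasses import dataclass, field

@dataclass
class Transaction:
    amount: float
    country: str
    velocity_1h: int  # transactions in the past hour

@dataclass
class ShadowModeMonitor:
    """Rules decide; the ML model runs in shadow and divergences are logged."""
    divergences: list = field(default_factory=list)

    def rule_flag(self, tx: Transaction) -> bool:
        # Known-pattern rules: explainable, usable to justify SAR filings.
        return tx.amount > 10_000 or tx.velocity_1h > 20

    def model_score(self, tx: Transaction) -> float:
        # Stand-in for an ML model score in [0, 1]; a real deployment
        # would call the trained model here.
        score = min(tx.amount / 50_000, 1.0) * 0.6
        score += min(tx.velocity_1h / 30, 1.0) * 0.4
        return score

    def decide(self, tx: Transaction, threshold: float = 0.5) -> bool:
        flagged = self.rule_flag(tx)                 # authoritative decision
        shadow = self.model_score(tx) >= threshold   # logged, not acted on
        if flagged != shadow:
            self.divergences.append((tx, flagged, shadow))
        return flagged

monitor = ShadowModeMonitor()
tx = Transaction(amount=12_000, country="US", velocity_1h=3)
print(monitor.decide(tx))  # rules flag it (amount > 10k); model disagrees, so the divergence is logged
```

Shifting decision authority later then means swapping which of the two verdicts `decide` returns, once the divergence log shows the model is trustworthy.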

The explainability challenge remains real. Regulators require justification for every SAR filing and non-filing decision. Techniques like SHAP and LIME make black-box models interpretable, but they add complexity. This is not a blocker — it's a cost that needs to be designed into the architecture from the start, not retrofitted after deployment.
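To make the explainability requirement concrete, here is a deliberately simplified attribution sketch. Production systems use SHAP or LIME over non-linear models; for a linear score, each feature's contribution is simply weight times value, which is the intuition those techniques generalize. All features and weights here are invented:

```python
# Toy per-feature attribution for a linear risk score. The ranked
# "reason codes" are the shape of explanation a SAR narrative or
# adverse action notice requires. Features and weights are invented.

weights  = {"amount_zscore": 0.9, "new_beneficiary": 1.4, "night_hours": 0.5}
features = {"amount_zscore": 2.1, "new_beneficiary": 1.0, "night_hours": 0.0}

contributions = {f: weights[f] * v for f, v in features.items()}
score = sum(contributions.values())

# Rank features by contribution: the top entries become the reason codes.
reasons = sorted(contributions, key=contributions.get, reverse=True)
print(f"score={score:.2f}, top reasons={reasons[:2]}")
```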

A Regulatory Environment That's Permissive and Uncertain

The regulatory landscape for AI in banking is simultaneously more open and more confusing than it has ever been. For institutions making multi-year investment decisions, that combination creates real strategic risk.

The foundational framework is still SR 11-7 and OCC Bulletin 2011-12, the interagency Model Risk Management guidance written in 2011. That guidance predates modern AI by a decade and never mentions artificial intelligence. Yet it governs every AI model banks deploy. Industry groups have called repeatedly for updates; none have been issued. Banks are applying 2011 guidance to 2026 technology and hoping the interpretation holds up.

The CFPB was the most active AI regulator under the Biden administration. Its 2022 and 2023 circulars declared that "companies are not absolved of their legal responsibilities when they let a black-box model make lending decisions" — requiring adverse action notices to reflect the AI's actual reasoning, not generic approximations. In May 2025, the Trump administration withdrew those circulars. The underlying statutes — ECOA and FCRA — remain fully in force. What changed is the specific interpretive guidance, not the law itself. Private litigation risk may actually fill the enforcement void left by withdrawn regulatory guidance.

The SEC signaled its intent clearly in March 2024 with its first AI-specific enforcement actions. Investment advisers Delphia and Global Predictions were charged for AI-washing — falsely claiming to use AI in their investment processes. The penalties were small ($225,000 and $175,000), but the examination priority is permanent. Regulators will punish misrepresentation about AI capabilities.

The most comprehensive recent development is Treasury's Financial Services AI Risk Management Framework, published in February 2026. Built through a public-private partnership and mapped to the NIST AI RMF, it establishes 230 control objectives across governance, data management, model development, validation, monitoring, third-party risk, and consumer protection. It's voluntary. But it's the most detailed blueprint available for what "adequate AI governance in banking" looks like, and examiners will reference it.

FinCEN's proposed AML modernization rule explicitly encourages banks to adopt AI for "greater precision in assessing customer risk" and "reducing false positives" — one of the few instances of a regulator actively pushing toward AI adoption rather than just managing the risks of it.

One finding that should be required reading for every bank CTO: a Richmond Fed study found that banks with higher AI investments incur greater operational losses. A 10% increase in AI investment correlated with a 4% increase in quarterly operational losses, driven by external fraud, client problems, and system failures. The effect was strongest at banks with weak risk management practices. AI amplifies operational weaknesses. It doesn't fix them.

The Gap Between C-Suite Messaging and Actual Results

The most important story in bank AI right now isn't about the technology. It's about the gap between what executives say publicly and what the internal data shows.

Publicly: 99% of executives plan AI investment. 76% call it critical for survival. Jamie Dimon compared AI to "the printing press, the steam engine, electricity, computing, and the Internet" in his 2024 annual letter.

Privately: BCG's 2024 data shows only 29% of financial institutions report meaningful cost savings. McKinsey's 2025 survey of 44 banking institutions found "considerable skepticism over the technology's potential to boost productivity, often reflecting previous experiences where tech rollouts did not achieve the expected gains." Deloitte's Financial AI Adoption Report found only 38% of AI projects in finance meet or exceed ROI expectations.

The average organization scrapped 46% of AI proofs of concept before reaching production. RAND puts AI project failure rates at 80% — double the failure rate of non-AI IT projects. Only 12% of AI initiatives in financial services are fully deployed enterprise-wide. An S&P Global analysis of roughly 550 bank earnings call transcripts found that while 43% reported internal AI deployment, only 9% indicated use in external-facing, customer-impacting systems.

The failure modes are consistent. Data quality tops every obstacle list — 43% of CDOs cite it as the primary barrier. Banks sit on decades of data fragmented across dozens of systems of record with different schemas and identifiers. Cleaning and connecting that data before AI can use it reliably is the actual project, often larger than the AI project itself.
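A toy example makes the fragmentation problem concrete. The records, field names, and normalization rules below are invented; real pipelines use probabilistic record linkage, but every approach depends on the kind of canonicalization shown here:

```python
# The same customer held under different schemas and identifiers in two
# systems of record. Before any AI can use this data, the records must
# resolve to one entity. All data and field names are invented.

core_banking = [{"cust_id": "C-1001", "name": "SMITH, JOHN A", "dob": "1980-03-12"}]
card_system  = [{"acct": "44-9921", "holder": "John Smith", "birth": "03/12/1980"}]

def normalize(name: str, dob: str) -> tuple:
    # Canonical key: lowercase (surname, forename) plus ISO date.
    if "," in name:                         # "SMITH, JOHN A" style
        last, first = [p.strip() for p in name.split(",", 1)]
        first = first.split()[0]            # drop middle initial
    else:                                   # "John Smith" style
        parts = name.split()
        first, last = parts[0], parts[-1]
    if "/" in dob:                          # MM/DD/YYYY -> YYYY-MM-DD
        m, d, y = dob.split("/")
        dob = f"{y}-{m}-{d}"
    return (last.lower(), first.lower(), dob)

key_a = normalize(core_banking[0]["name"], core_banking[0]["dob"])
key_b = normalize(card_system[0]["holder"], card_system[0]["birth"])
print(key_a == key_b)  # -> True: the two records resolve to one customer
```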

Legacy system compatibility ranks second. EY's financial services survey found 68% of CTOs cite it as their most significant obstacle, with AI initiatives experiencing average delays of 12–18 months. Talent is third: AI initiatives at 65% of financial institutions are delayed an average of 14 months due to talent scarcity. JPMorgan employs more than 2,000 AI experts. That number is not achievable for mid-tier or community banks.

One US bank was running 14 chatbot pilots simultaneously, all focused on cost reduction, with not one addressing onboarding, fraud prevention, or revenue generation. That's not an AI strategy. It's pilot accumulation without a problem definition. The pattern that kills projects — solving for the wrong metric — shows up in bank AI exactly as it does everywhere else.

Bank of America's Global Fund Manager Survey registered an unprecedented signal: for the first time in 20 years, institutional investors believe companies are overinvesting — with AI spending specifically in their crosshairs. First Internet Bank's CEO captured the shift: "The excitement is there but there is also an air of caution that wasn't there 12 months ago."

The Jobs Question

Bloomberg projects that US banks could cut 200,000 jobs over the next three to five years. JPMorgan has projected AI could eventually reduce headcount by at least 10% across divisions. These projections get significant coverage.

As of late 2025, America's largest banks had not made significant workforce reductions. Bank of America employed four fewer workers year-over-year. JPMorgan increased headcount by 2,000. Goldman added roughly 1,800 staff.

The actual displacement has been quiet rather than visible. Entry-level and back-office positions are not being backfilled through attrition. Junior analyst roles in areas like equity research and investment banking are being restructured. The jobs disappearing are the ones that don't appear in layoff announcements — they're simply not replaced when someone leaves.

Goldman Sachs CEO David Solomon's observation about S-1 filings applies broadly: if AI can now draft 95% of a task in minutes, the economics of hiring six people for two weeks to produce that draft change permanently. The question is not whether this affects employment. It's over what timeframe, and in which roles first.

What the Evidence Actually Supports

Three conclusions hold up across the data.

Traditional AI and ML are delivering proven, quantified value in narrow domains. Fraud detection is the clearest example. JPMorgan's $1.5 billion in AI-driven value, American Express's $2 billion in annual fraud savings, and Bank of America's Erica handling 3.2 billion interactions represent real, auditable operational results. Developer productivity gains from code generation tools are similarly measurable. These are not projections.

Generative AI in banking is real but not yet proven at scale in regulated environments. The major banks have built impressive platforms and onboarded hundreds of thousands of employees. But most GenAI use cases are internal productivity tools — code generation, document summarization, knowledge retrieval — not the customer-facing, revenue-generating transformation being marketed. The 42% abandonment rate for AI initiatives reflects structural challenges in data quality and legacy infrastructure that capital alone doesn't solve.

The regulatory environment creates a paradox for planning. The deregulatory posture has removed specific AI guidance while the underlying statutory obligations remain in force. Banks have more operational freedom but less clarity about where the legal boundaries are. Treasury's new FS AI RMF provides a comprehensive voluntary framework. The institutions navigating this best are treating the removed guidance as a minimum standard for their internal governance, not as a ceiling that's been lifted.

The banks generating returns share three characteristics that matter more than their technology choices: they invested in data infrastructure and cloud migration before the GenAI wave arrived, they centralized AI governance while letting individual teams experiment, and they measured success in incremental productivity gains rather than transformational leaps.

For institutions without that foundation, the most important question isn't which AI to buy. It's whether the prerequisite data and infrastructure work is complete enough to make any AI investment return anything at all. That question needs a documented answer before any AI project starts.

Evaluating AI for your financial services organization?

The AI Readiness Assessment maps your specific use cases against the ROI evidence above, identifies your data and governance gaps, and produces a pre-investment projection grounded in what's actually working in the industry.


Sources

  1. IDC — Financial services AI spending forecast, $35B–$97B (2023–2027)
  2. BCG — "Closing the AI Value Gap in Financial Services" (2024)
  3. McKinsey & Company — "The State of AI in Banking" (2025)
  4. S&P Global Market Intelligence — AI initiative abandonment survey (2025)
  5. Riverbed Technology — "Global AI Survey: Financial Services" (2025)
  6. Deloitte — "Financial AI Adoption Report" (2025)
  7. American Banker — "AI Innovation of the Year: JPMorgan LLM Suite" (2025)
  8. JPMorgan Chase — Annual Report, AI technology investments and outcomes (2024–2025)
  9. Bank of America — Annual Report, Erica metrics and technology investment (2024–2025)
  10. Morgan Stanley — AI @ Morgan Stanley deployment data (2024–2025)
  11. Goldman Sachs — GS AI Platform launch communications (June 2025)
  12. LexisNexis Risk Solutions — "True Cost of Fraud" (2024)
  13. FBI Internet Crime Complaint Center (IC3) — "2024 Internet Crime Report"
  14. HSBC / Google Cloud — AML AI deployment results (2024)
  15. Mastercard — Decision Intelligence Pro performance data (2024)
  16. American Express — Annual Report, fraud prevention metrics (2024)
  17. Visa — Annual Report, fraud prevention metrics (FY2023)
  18. Federal Reserve Bank of Richmond — "AI Investment and Operational Losses in Banking" (2024–2025)
  19. CFPB — "Adverse Action Notification Requirements and the Equal Credit Opportunity Act" circular (2022); withdrawn May 2025
  20. SEC — "In the Matter of Delphia (USA) Inc." and "In the Matter of Global Predictions Inc." (March 2024)
  21. US Department of the Treasury — "Financial Services AI Risk Management Framework" (February 2026)
  22. FinCEN — AML modernization rule proposal (June 2024)
  23. Informatica — "2025 Chief Data Officer Survey"
  24. EY — "Financial Services Technology Priorities Survey" (2025)
  25. Gartner — "Hype Cycle for Artificial Intelligence" (2025)