In brief

  • Federal AI use has grown rapidly, but adoption remains heavily concentrated among a handful of large agencies.
  • Key bottlenecks include a shortage of AI-specialized talent, a risk-averse agency culture, and procurement rules ill-suited to fast-moving AI systems.
  • Public trust is a critical hurdle, with only 17% of Americans believing AI will benefit the country, making transparency essential to building confidence.

The use of artificial intelligence across the U.S. federal government has expanded dramatically in recent years, but significant obstacles—from talent shortages to public skepticism—are slowing the technology's responsible integration into government services, according to a new report from the Brookings Institution.

The report, published Wednesday, draws on AI use case inventories from 2023 to 2025, federal jobs data, Office of Management and Budget (OMB) memoranda, and interviews with current and former federal technologists across eight agencies.

The numbers tell a story of rapid acceleration. In 2025, 41 agencies documented more than 3,600 individual AI use cases—69% above the total reported in 2024 and five times the number reported in 2023. The applications span a wide range of government functions: More than half of the Social Security Administration's reported use cases support service delivery and benefits processing, while over half of the Department of Justice's inventory supports law enforcement efforts.

Yet the growth is far from evenly distributed. For the past three years, five large agencies accounted for over half of all reported AI use cases, and large agencies contributed 76% of the total inventory in 2025. Smaller agencies are barely keeping pace: The 11 small agencies that reported in 2025 collectively submitted just 60 use cases, representing only 2% of the total inventory.

The report identifies several structural barriers holding back broader adoption. One of the most pressing is a lack of specialized talent. Of more than 56,000 technical job listings posted by the federal government since 2016, just over 1,600—fewer than 3%—explicitly reference AI capabilities.

A Biden-era hiring surge aimed to close this gap, but workforce reductions in early 2025 may have undercut those efforts: at least 25% of AI-specific job listings were posted from 2024 onward, meaning many of the resulting hires were among the newest employees and therefore the easiest to dismiss.

Beyond staffing, the report points to a deeply ingrained culture of risk aversion inside federal agencies. Nearly 60% of all AI use cases are either in the pilot or pre-deployment stage, suggesting the federal AI landscape is still in a rapid growth phase—one that requires dedicated time for education and experimentation that many agencies struggle to carve out. The report also notes that the Trump administration's explicit linkage of AI deployment to workforce cuts through the Department of Government Efficiency (DOGE) may be reinforcing that hesitancy.

Accountability gaps are another concern. More than 85% of high-impact AI use cases deployed in 2025 lack some of the required information about risk mitigation measures, despite explicit OMB requirements that it be reported.

Public confidence poses yet another challenge. According to recent Pew Research Center data, about half of Americans now say they are more concerned than excited about the growing prominence of AI, up from 37% four years prior, and just 17% of the American public believes AI will positively impact the U.S. in the next two decades.

The report warns that the stakes are high. Public trust in the federal government remains near historic lows, with recent data showing only 16% of Americans saying they trust Washington to do what is right most or nearly all of the time. Against that backdrop, the authors argue that poorly executed AI deployments could cause serious damage—but that well-designed applications focused on tangible service improvements could, conversely, help rebuild confidence in government institutions.

To get there, Brookings recommends expanding AI literacy training across agencies, reforming procurement rules that were designed for more static software systems, strengthening transparency practices around high-risk AI use, and prioritizing use cases that produce clear, positive benefits for the public.
