AI is upending business, our personal lives, and much more in between—including the operation of the U.S. government. In total, The Washington Post reported 2,987 uses of AI across the executive branch last year, hundreds of which are described as “high impact.”
Some agencies have embraced the technology wholeheartedly. NASA has gone from 18 reported AI applications in 2024 to 420 in 2025; the Department of Health and Human Services, overseen by Robert F. Kennedy Jr., now reports 398 uses, up from 255 a year ago. The Department of Energy has seen a fourfold increase in AI usage, with a similar jump at the Commerce Department. Agencies were effectively given the green light in April 2025, when the White House announced it was eliminating barriers to AI adoption across the federal government. They appear to have taken that invitation seriously.
Those numbers may raise eyebrows, or trigger concern among observers worried about bias and hallucinations and still mindful of the chaotic AI-enabled government overhaul associated with the quasi-official Department of Government Efficiency during Elon Musk's brief orbit near the center of power.
“It’s not clear using AI for most government tasks is necessary, or preferable to conventional software,” cautions Chris Schmitz, a researcher at the Hertie School in Berlin. “The digital infrastructure of the U.S. government, like that of many others, is a deeply suboptimal, dated, path-dependent patchwork of legacy systems, and using AI for ‘quick wins’ is frequently more of a Band-Aid than a sustainable modernization.”
Others who have worked at the center of government digital innovation argue that alarmism may be misplaced. In fact, they say, experimenting with AI can be a form of smart governance—if done carefully. “It’s become apparent that we never really properly moved government into the internet era,” says Jennifer Pahlka, cofounder and chair of the board at the Recoding America Fund and former U.S. deputy chief technology officer under the Obama administration. “There have been real problems that have come out of that where government is just not meeting the needs of people in the way that it should.”
Pahlka believes that experimentation with AI in government is “probably somewhat appropriate” given how early we are in the generative AI era. Testing is necessary to understand where—and where not—the technology can improve operations. “What you want, though, is ways of experimenting with this that gives you very clear and effective feedback loops, such that you are catching problems before it’s rolled out to large numbers of people or to have a large impact,” she says.
Still, it is far from certain that AI systems will produce outcomes that serve all Americans equally. Denice Ross, executive fellow in applied technology policy at the University of California, Berkeley, warns that rigorous evaluation is essential. “The way government would find out if a tool is doing what it’s supposed to for the American people is by collecting and analyzing data about how it performs, and the outcomes for different populations,” says Ross, who served as chief data scientist in the White House from 2023 to 2024.
The core issue, she says, is whether a given system is actually helping the people it’s meant to serve, or whether “some people [are] being left behind or harmed.” The only way to know is to look closely at the data. That might mean discovering, for example, that a tool works fine for digitally fluent users but falls short for people without high-speed internet or for older Americans.
Public participation is also critical. “Getting the conditions for legitimate government AI use right is hard, and this work by and large has not been done,” the Hertie School’s Schmitz argues, noting that “there has been no real democratic negotiation of the legal basis for automated decision-making or build-out of oversight structures, for example.”
There are also reasons to be cautious about rushed or poorly structured AI deployments, including reported plans at the Department of Transportation to experiment with tools like Google Gemini. Philip Wallach, a senior fellow at the American Enterprise Institute, argues that while the government should be exploring how rapid advances in AI can serve the public, it must do so without sacrificing democratic accountability. The priority, he suggests, should be preserving accountable human judgment in government decision-making before momentum and political expediency crowd it out.
Looking at the government's overall AI strategy, Pahlka says she sees some grounds for cautious optimism. From what she can tell, many of the early efforts appear focused on applying AI to bureaucratic bottlenecks and process slowdowns where it could meaningfully boost productivity. If that focus holds, she suggests, the payoff could be substantial.
Still, she believes more care and attention to detail are needed, something the Trump White House has not always demonstrated. "What I'm not sure I see is a questioning of the processes themselves," she says, explaining that, in her view, thoughtful AI adoption requires asking whether a process should exist in its current form at all, not simply whether AI can accelerate one step within it.
That distinction matters because poorly implemented AI can have real consequences. Government’s track record with large-scale technology deployments is uneven, and layering AI onto flawed systems could cause undue harm. “We have consistently rolled out technology in government in ways that have harmed people because we do not have test and learn frameworks as the fundamental way of approaching these problems,” Pahlka says.
If done right, however, the opportunity is significant. AI could help government function more effectively, and more equitably, for everyone.