How a Judge Ruled That DOGE's Use of ChatGPT to Cancel Grants Was Both Unconstitutional and Reckless


The Department of Government Efficiency (DOGE) recently faced a major legal setback when a federal judge ruled that its decision to cancel over $100 million in grants was unconstitutional. The ruling highlighted DOGE's controversial method of using ChatGPT to determine whether grants were related to diversity, equity, and inclusion (DEI) initiatives. This Q&A breaks down the key aspects of the case, the judge's reasoning, and what it means for government accountability.

What exactly did DOGE do that the judge found illegal?

DOGE terminated more than $100 million in grants from the National Endowment for the Humanities (NEH). The judge, Colleen McMahon, ruled that the process was unconstitutional because DOGE used ChatGPT to scan grant applications and flag any content related to diversity, equity, and inclusion (DEI). According to the court, this approach made decisions based on protected characteristics rather than on the merits or compliance of the grants. In her 143-page decision, McMahon stated, "It could not be more obvious that DOGE used the mere presence of particular, protected characteristics to disqualify grants from continued funding." This reliance on an AI tool without proper human oversight or legal review violated constitutional protections, making the cancellations invalid.

Source: www.theverge.com

Why did using ChatGPT make this process illegal?

The illegality stemmed from two main issues. First, DOGE outsourced a government decision that required human judgment and compliance with federal law to an AI system with no understanding of legal standards or context. ChatGPT, as a language model, can match keywords and phrases but cannot evaluate whether a grant actually runs afoul of any rule governing DEI programs. Second, the judge found that targeting grants based on protected characteristics (such as race, gender, or ideology) is discriminatory under the Constitution. Even if the intention was to cut waste, using AI to filter grants by DEI content effectively punished groups that promote diversity, a violation of equal protection principles. The court emphasized that automation does not excuse unconstitutional actions.
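To see why this kind of screen cannot weigh context, consider a hypothetical sketch of a naive keyword flagger of the sort the court described. This is purely illustrative (the function, keyword list, and sample texts are invented here, not DOGE's actual prompt or code): it flags any grant description containing a DEI-associated term, regardless of what the grant actually does.

```python
# Hypothetical, illustrative keyword screen. It flags text containing
# DEI-associated terms with no sense of purpose, merit, or context.
DEI_KEYWORDS = {"diversity", "equity", "inclusion", "dei"}

def flag_dei(description: str) -> bool:
    """Return True if any DEI keyword appears anywhere in the text."""
    words = description.lower().split()
    return any(word.strip(".,;:()") in DEI_KEYWORDS for word in words)

# A grant whose purpose is promoting diversity...
promotes = "A program to advance diversity in regional theater."
# ...and one that merely mentions the word in passing.
mentions = "An archive of Civil War letters, noting the diversity of sources."

print(flag_dei(promotes))  # True
print(flag_dei(mentions))  # True: flagged identically, context ignored
```

Both descriptions are flagged the same way, which is exactly the failure the ruling identified: presence of a word is treated as proof of purpose.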

Was this lawsuit filed by a specific group?

Yes, the lawsuit was filed in 2025 by a coalition of humanities organizations and advocacy groups. These groups argued that DOGE's use of ChatGPT to cancel grants was arbitrary and targeted programs that support underrepresented voices in the humanities. The plaintiffs included museums, universities, and cultural nonprofits that had received or applied for NEH grants. They claimed that the cancellations not only harmed their operations but also chilled free speech and academic inquiry. The case became a test of how far executive agencies can go in using AI to make funding decisions, and the court agreed with the plaintiffs that the process violated due process and anti-discrimination laws.

What did the judge's 143-page ruling specifically say?

Judge Colleen McMahon's ruling was highly detailed. She documented how DOGE instructed ChatGPT to classify grants as "DEI-related" or not, then used those classifications to terminate funding. The judge wrote that the procedure was "dumb and illegal," noting that AI could not differentiate between a grant that promotes diversity and one that merely mentions diversity in passing. She also criticized DOGE for failing to provide grantees with a chance to respond or appeal. The ruling ordered the reinstatement of the cancelled funds and barred DOGE from using similar AI-driven methods without proper safeguards. McMahon stressed that while technology can aid efficiency, it cannot replace constitutional checks and balances.


What does this ruling mean for other government uses of AI?

This decision sets an important precedent. It warns agencies that automated decision-making must comply with the same rules as human decisions. Any use of AI to evaluate grants, benefits, or enforcement actions must include transparency, accountability, and an appeals process. The ruling also reinforces that agencies cannot use AI to target protected categories indirectly. For example, using an AI scan to flag "DEI" terms is essentially a proxy for discriminating against racial or gender-related programs. Moving forward, courts may require agencies to demonstrate that AI tools are free from bias and used only as aids, not as final arbiters. The judge's strong language suggests that reckless reliance on AI without legal review will not be tolerated.

How much money was involved in the cancelled grants?

The total amount of grants cancelled by DOGE exceeded $100 million, according to the court documents. These were grants from the National Endowment for the Humanities, which supports research, education, and public programs in history, literature, and culture. The cancellation affected a wide range of projects, from archival preservation to educational initiatives at museums and universities. Many of these projects were in the middle of their work, causing immediate financial distress. The judge's order to reinstate the funds meant that the NEH had to restore the grants and continue funding as originally planned. This substantial financial impact was a key reason the court ruled that the harm to grantees was irreparable.

What is the Department of Government Efficiency (DOGE) and its role?

The Department of Government Efficiency (DOGE) is a federal agency created to streamline government operations and reduce wasteful spending. In this case, DOGE was tasked with reviewing NEH grants to ensure they aligned with efficiency goals. However, the agency overstepped by canceling grants based on an AI analysis of DEI content, rather than evaluating actual efficiency or compliance. Critics argue that DOGE's approach conflated "efficiency" with ideological targeting. The agency defended its actions as cost-cutting, but the court found that the method violated the law. The ruling clarifies that while DOGE's mission is valid, it must operate within constitutional boundaries and cannot use AI to make discriminatory funding decisions.
