
How to Create AI Policies for Your Organization: Lessons from a $70 Million Government Failure

Updated: Jun 23


The White House just provided a costly lesson in AI governance failure, and it happened publicly in front of the entire country.


The White House's disconnect between AI outputs and accuracy
To what degree the MAHA report relied on AI is unclear, but it is apparent that no protocol was in place to verify outputs and correct obvious mistakes.

Last week, Health and Human Services Secretary Robert F. Kennedy Jr. released the "MAHA Report: Making Our Children Healthy Again," a 70-page federal document intended to showcase "gold-standard science" and serve as the foundation for America's health policy. Instead, it became a masterclass in what happens when organizations use AI without proper oversight.



This isn't just an embarrassing mistake: it's a federal agency at the highest level using AI to fabricate scientific evidence for public health policy, undermining public trust in the process.


The Anatomy of the AI MAHA Failure


What happened with the MAHA report is textbook AI misuse. The Washington Post found that "some of the citations that underpin the science in the White House's sweeping 'MAHA Report' appear to have been generated using artificial intelligence, resulting in numerous garbled scientific references and invented studies".


When pressed for an explanation, White House Press Secretary Karoline Leavitt dismissed the fake citations as "some formatting issues", a characterization far removed from the serious repercussions the report will have.


Why This Matters Beyond Politics


Regardless of your political affiliation, the MAHA report represents something every organization should fear: what happens when AI is used without guardrails in high-stakes situations.


The parallels to private sector risks are obvious. Replace "federal health policy" with "legal brief," "financial report," "compliance document," or "client presentation," and you have the same potential for catastrophic failure. As one bioethicist noted, "It's the kind of thing that gets a senior researcher into deep trouble, potentially losing their funding. It's the kind of thing that leads to a student getting an F. It's inexcusable".


The MAHA report failure highlights three critical AI governance principles that organizations ignore at their peril:


  1. Human verification is non-negotiable for high-stakes outputs. Someone clearly fed research topics into an AI system and published the results without verification. In any organization, that's a policy failure, not a technology failure (a quick illustration of one possible verification check follows this list).

  2. AI hallucinations aren't bugs—they're features you must plan for. AI systems don't "know" when they're making things up. They generate plausible-sounding content whether it's true or not. The MAHA report's fake studies were convincing enough to fool whoever was supposed to be reviewing them.

  3. Accountability chains matter more than technology capabilities. The real question isn't whether AI was used to create the MAHA report. The question is: who was responsible for ensuring accuracy, and how did fabricated citations make it through their review process?
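
Principle 1 is also the easiest to back up with tooling. Below is a minimal, purely illustrative sketch (assuming Python, the third-party requests library, and Crossref's public works API; the matching heuristic and the 0.6 similarity threshold are invented for this example) that flags cited titles with no close match in a real bibliographic index so a human reviewer can check them before anything is published. Passing a script like this is not verification by itself; someone still has to confirm the source says what the document claims.

```python
# Illustrative sketch only: flag citations whose titles have no close match
# in Crossref's public index. The index query is real (api.crossref.org);
# the similarity heuristic and 0.6 threshold are assumptions for this example
# and are NOT a substitute for a human reviewer reading each source.
from difflib import SequenceMatcher

import requests


def crossref_candidates(title: str, rows: int = 3) -> list[str]:
    """Return the closest-matching titles Crossref knows about."""
    resp = requests.get(
        "https://api.crossref.org/works",
        params={"query.bibliographic": title, "rows": rows},
        timeout=10,
    )
    resp.raise_for_status()
    items = resp.json()["message"]["items"]
    return [item["title"][0] for item in items if item.get("title")]


def needs_human_review(cited_title: str, threshold: float = 0.6) -> bool:
    """True when no indexed work closely resembles the cited title."""
    best = max(
        (
            SequenceMatcher(None, cited_title.lower(), c.lower()).ratio()
            for c in crossref_candidates(cited_title)
        ),
        default=0.0,
    )
    return best < threshold


if __name__ == "__main__":
    # Paste the cited titles from a draft here; anything flagged goes to a
    # named reviewer, which is the accountability chain from principle 3.
    draft_citations = ["<cited article title goes here>"]
    for title in draft_citations:
        if needs_human_review(title):
            print(f"REVIEW NEEDED: no close match found for: {title}")
```

The point isn't this particular tool; it's that a policy can require a concrete, repeatable check, owned by a named person, before AI-assisted work leaves the building.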


The Broader Pattern of AI Governance Gaps


This isn't the first time we've seen AI misuse create real-world consequences. As we previously discussed, attorneys have recently faced court sanctions for using fabricated case citations created by ChatGPT in legal briefs. The Utah law firm case follows the same pattern: AI without governance leading to professional humiliation and damaged credibility.


What makes the MAHA report particularly striking is the scale and visibility of the failure. This isn't a small law firm making a mistake. This is the federal government publishing AI-generated misinformation as the basis for national health policy.


And just like with the law firm sanctions, the response reveals a fundamental misunderstanding of what went wrong. Calling fabricated scientific citations "formatting issues" is like calling a data breach a "computer problem". It completely misses the governance failure that enabled the incident.


Learning from the Highest-Profile AI Failure Yet


The MAHA report disaster offers a brutal but valuable lesson: AI governance isn't optional anymore, even for the most powerful organizations in the world. If the federal government can accidentally publish AI-fabricated research as official policy, your organization can certainly fall into similar traps.


The good news is that this failure is entirely preventable. Organizations with clear AI policies simply don't have these problems: policies that require human verification for public documents, prohibit unsupervised AI use for factual claims, and establish accountability for AI-assisted work.


The bad news is that every day your organization operates without AI governance is another day you're rolling the dice with your reputation. Your staff are already using AI tools. The question is whether you have the policies to ensure they're using them responsibly.


Don't wait for your organization's MAHA moment. Schedule a consultation and let's build AI governance that prevents disasters instead of explaining them.


When the White House can't get AI governance right, it's time to admit that good intentions aren't enough. You need actual policies.
