RFK Jr.’s MAHA Scandal: The Report’s Fake Citations Reveal A Deeper Problem
Why Even A Few AI-Generated Citations Undermine the Entire Report
Last week, the MAHA coalition and Robert F. Kennedy Jr. released their long-anticipated MAHA report on the causes of disease in children.
It quickly blew up online, not because of groundbreaking insights, but rather because it contained seven completely fabricated citations (out of ~500 total). Each of those 7 fake studies was hallucinated by an AI tool.[1]
Journalists quickly caught on, contacted the supposed authors, and received denials; the authors had never heard of the papers they were credited with writing. They had, however, published in similar subject areas in the past, which is why MAHA’s AI tool listed them as authors on the fabricated studies.
Some corners of the internet have brushed this aside as no big deal, saying that these seven fictitious citations are minor mistakes - “7 fake papers out of nearly 500 total references is a small fraction”. On paper, the accuracy rate might still look impressive: over 98% correct citations!
But, that math misses the point entirely.
The presence of any fake citations is revealing about the lack of expertise among the individuals who produced the report:
If you're a professional in biomedical science - or any rigorous academic field - you know that truly understanding the literature is a core element of your expertise. It means reading the full papers, not just pulling a convenient sentence or two from a summary or abstract and then citing the paper to bolster your point. It means grasping the nuance, strengths, and limitations of each study you cite - and how it fits within the context of the broader literature on that subject - which can only be understood by carefully reading the study in detail.
It is hard to imagine that the report’s authors met that standard of rigorous academic detail if they couldn’t be bothered to even check whether some of the studies were real publications in the first place. It would seem that their academic rigor stopped at the level of “does this paper’s title support the point I wanted to make?” - which is antithetical to the process of good science.
Think about a PhD candidate preparing to defend their thesis. You could ask them a random question about their topic, and they’d probably be able to recite the author’s name, publication year, and even direct you straight to a certain Figure within that paper (from memory). They might even walk you through the key findings of that paper without needing to check their notes. My own PhD defense was years ago, and I still remember specific figures from specific papers related to the topic of my dissertation. That is the level of expertise in a field that is common among actual experts.
So, the real question here isn’t whether 98% of the citations in the MAHA report were real (which is technically true). To me, the question is more about what it says about how those citations were chosen. If the authors were comfortable letting an AI spit out completely fabricated references, how closely did they examine the rest of the cited studies?[2]
For a report intended to shape public policy, and influence billions of dollars in funding, this is a huge red flag. Policy decisions can’t be based on selectively reading the literature, cherry-picking evidence to support preexisting beliefs, or even worse, letting an AI randomly generate "evidence" that sounds like it backs up your ideas.
This whole debacle suggests the MAHA team is not taking evidence-based policymaking seriously at all. When you begin with the conclusions you want and then find (or fabricate) references afterward, you abandon rigorous science entirely.
In response to this criticism, I commonly get: “Well, don’t you think that the MAHA goals of making food more healthy and reducing chronic disease are the right path forward, despite these shortcomings?”
Clearly, the answer is yes - these goals are critically important to public health. But it is exactly because these goals are so important that we should all demand genuine scientific rigor, not superficial pretenses of evidence.
The presence of fake references isn’t a minor detail, even if they do make up a minority of total citations. It’s proof of a broken, unserious approach - confirmation bias masquerading as science.
-@dr.noc
[1] As indicated by the inclusion of "OAICITE" placeholder text, clearly referencing OpenAI's citation format.
[2] I bet they didn’t read past the study abstract in most cases.
People who are reacting to this the wrong way seem to act like AI-fabricated "research" citations are just an "Oopsies!" that can easily happen to anyone, like other errors such as inputting a data point wrong or misspelling someone's name. But how would a fake AI-generated study make it into a research paper in the first place? Only by someone using AI to pull data for them and to interpret it for them. Even for a college student writing a research paper for a class assignment to be turned in (and not to be published), that sounds extremely sloppy at best.
There’s this concept known as “honor.” If you have perpetrated such an egregious offense as this (among the thousands of dishonorable, illegitimate actions and statements from RFK Jr. and his band), you have forfeited any claim to engagement with serious policy. You have dishonored yourself, and you should be banished, if you do not banish yourself. If not, the entire process is a joke.