My Hallucination Papers
I did not write these scientific papers—though, according to ChatGPT, I could have.
I admire Google Scholar’s efficiency in crawling the scientific literature, automatically identifying our papers and their citations. It feels like magic, and it sets Google Scholar apart from other indexing services, which often require manual reference entry, have limited coverage of the top conferences in my area, or miss recent preprints.
Recently, however, I’ve noticed something unusual: Google Scholar has increasingly attributed papers to me that I never wrote. Why are these nonexistent papers suddenly appearing on my profile?
As you might have guessed, these ghost papers are likely hallucinations produced by ChatGPT and other large language models (LLMs).
These citations appear in questionable literature reviews published in dubious journals by profit-driven publishers. The pattern is clear: papers published in 2023 or later (after the LLM boom) suddenly cite a paper I supposedly co-authored years ago, one that never existed.
Take this citation:
This paper was never written. In fact, my esteemed PhD supervisor, Prof. Monard, had long since retired by 2018. Yet this fabricated reference first appeared in a well-cited MDPI paper:
Unsurprisingly, other papers citing my nonexistent work also cite Aldoseri et al., illustrating how bad science spreads like a disease. For example:
Another fabricated reference:
This bibliographic hallucination originated from yet another MDPI paper:
In short, LLMs and Google Scholar have unintentionally created a powerful tool for identifying bad research: one hallucinates references, and the other indexes them.
Researchers eager to integrate LLMs into their workflows, or to reuse references without verifying their legitimacy, should be cautious. Blindly trusting citations copied from other papers, without careful verification, risks perpetuating scientific misinformation.
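The good news is that checking whether a reference exists at all takes only a few lines of code. Below is a minimal sketch that looks up a free-text citation through Crossref’s public REST API; the example citation string and the helper name lookup_citation are purely illustrative, not part of any particular tool or workflow.

```python
import requests

# Crossref's public works endpoint; query.bibliographic accepts a free-text citation.
CROSSREF_WORKS_URL = "https://api.crossref.org/works"

def lookup_citation(citation: str, rows: int = 3):
    """Return the top Crossref matches for a free-text citation string."""
    response = requests.get(
        CROSSREF_WORKS_URL,
        params={"query.bibliographic": citation, "rows": rows},
        timeout=10,
    )
    response.raise_for_status()
    return response.json()["message"]["items"]

if __name__ == "__main__":
    # Illustrative citation string; replace it with the reference you want to verify.
    citation = "Some Title, A. Author and B. Author, Journal of Examples, 2018"
    for item in lookup_citation(citation):
        title = item.get("title", ["<no title>"])[0]
        print(f"{item.get('DOI', '<no DOI>')}  {title}")
    # If nothing plausible comes back, the reference deserves a much closer look.
```

A match with a plausible DOI, title, and author list is not proof that the citation is accurate, but no match at all is a strong hint that the reference may have been hallucinated.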