The Chicago Sun-Times, the largest newspaper in Chicago by circulation, is under fire after publishing a summer reading list riddled with AI-generated errors, including multiple fake books wrongly attributed to real, bestselling authors.
In what’s being called a glaring editorial oversight, the Sun-Times’ recently released “Summer Reading List for 2025” featured 15 book recommendations, but only five of them actually exist. The remaining ten titles were entirely fictional, generated by artificial intelligence and mistakenly credited to well-known authors.
Some of the fake listings included:
- *Tidewater Dreams* by Isabel Allende
- *The Last Algorithm* by Andy Weir
These books do not exist but were presented in the list as if they did, complete with authors, titles, and brief descriptions, misleading readers and sparking outrage across social media.
The reading list was compiled by Marco Buscaglia, who later admitted to using AI to help generate the recommendations. In an interview with 404 Media, Buscaglia took full responsibility for the blunder:
“I do use AI for background at times but always check out the material first. This time, I did not, and I can’t believe I missed it because it’s so obvious. No excuses. On me 100 percent, and I’m completely embarrassed.”
While the apology is candid, the error has reignited serious discussions about editorial standards and the role of AI in journalism.
This incident is a textbook example of what experts refer to as an “AI hallucination”—when artificial intelligence tools like ChatGPT generate plausible-sounding but entirely false content. These hallucinations have appeared in various fields, from search engine results to legal documents citing non-existent court cases.
In this case, the AI didn’t just invent fake book titles; it also assigned them to real, living authors, creating a potentially damaging blend of fiction and reality that slipped past editorial review.
As newsrooms increasingly experiment with AI to assist in content creation, editorial oversight remains non-negotiable. Tools like ChatGPT and others can be valuable for drafting and ideation, but they’re not replacements for human fact-checking and journalistic integrity.
This slip-up by a reputable publication is a reminder that AI’s convenience comes with risk. The ability of language models to produce credible-sounding but completely false information underscores the critical need for responsible AI use in journalism.
The Chicago Sun-Times has yet to issue a formal correction or public statement beyond Buscaglia’s apology. Meanwhile, readers and media experts are calling for greater transparency and clear AI-use policies within journalistic institutions.
In a world where misinformation spreads fast and trust in the media is already fragile, this kind of mistake, however unintentional, can further erode public confidence.
The Chicago Sun-Times incident is not just a story about fake books. It’s a cautionary tale about the limits of AI and the irreplaceable value of human oversight. As AI tools become more integrated into creative and editorial workflows, the question isn’t whether we should use them, but how we can use them responsibly.
Because while AI can write a list, it takes a human to know what’s real.