TLDR: Libraries nationwide face a new challenge as patrons increasingly request non-existent books and media titles that were “hallucinated” by artificial intelligence systems and subsequently published in newspaper reading lists. The trend has frustrated librarians and raised concerns about information literacy and the responsible use of AI in journalism.
Libraries across the country are reporting a significant increase in requests for books and media that do not exist, a direct consequence of artificial intelligence systems generating fictional titles that have appeared on prominent newspaper reading lists. This phenomenon, often referred to as “AI hallucination,” is creating considerable frustration for library staff and raising questions about the integrity of published content.
The issue gained widespread attention following an incident in May 2025, when the Chicago Sun-Times, along with the Philadelphia Inquirer, published a syndicated “Heat Index” special section. This multi-page advertorial supplement, which offered summer activity ideas and recommendations, included a “Summer reading list for 2025” featuring numerous fabricated book titles attributed to real, acclaimed authors. Of the 15 recommended titles in the Chicago Sun-Times list, only five were genuine; the remaining ten were entirely made up.
Marco Buscaglia, a Chicago-based freelance writer, confirmed to The Washington Post that he used AI chatbots while creating content for the “Heat Index” section. He admitted to failing to fact-check the AI-generated output, stating, “Stupidly, and 100 per cent on me, I just kind of republished this list that [an AI program] spit out. Usually, it’s something I wouldn’t do… I definitely failed in that task.”
Librarians are on the front lines of this new challenge. Reference librarian Eddie Kristan told 404 Media about his firsthand experience with patrons asking for titles from these AI-generated lists. Kristan expressed deep concern, saying, “It’s really, really frustrating, and it’s really setting us back as far as the community’s info literacy.”
Examples of the fictional books include “Hurricane Season” by Brit Bennett, “Nightshade Market” by Min Jin Lee, “The Longest Day” by Rumaan Alam, “Boiling Point” by Rebecca Makkai, “Migrations” by Maggie O’Farrell, and “The Rainmakers” by Percival Everett. Beyond books, the AI-generated content in the supplements also included quotes from purported experts and professors who appear to be non-existent, such as a Cornell University food anthropologist named Catherine Furst and a FirepitBase.com editor named Daniel Ray.
The Chicago Sun-Times quickly responded to the controversy, stating it was investigating the matter and emphasizing that the content was not created or approved by its newsroom. The newspaper later called the incident “a learning moment for all of journalism.” The Sun-Times Guild, the newspaper’s union, also voiced its disturbance, referring to the situation as “this slop syndication” and expressing concern over the inclusion of AI-generated content alongside human work.
This incident is not isolated; it highlights a broader pattern of AI-generated content blunders in media. Previous cases include Sports Illustrated reportedly publishing articles under AI-generated author bylines, CNET issuing major corrections to AI-assisted stories, and a Microsoft travel article recommending the Ottawa Food Bank as a tourist attraction. Together, these cases underscore the critical need for rigorous fact-checking and clear editorial guidelines when integrating artificial intelligence into content creation workflows.