
AI Chatbots Fail News Accuracy Test, BBC Study Reveals

BBC study finds leading AI chatbots consistently distort news content, raising concerns about information accuracy and trust.

  • AI chatbots are getting news wrong more often than right.
  • Trusted brands like the BBC are losing control of their content.
  • The problem is industry-wide, affecting all major AI platforms.

A new BBC study reveals that AI assistants struggle with news-related questions, often providing inaccurate or misleading information.

BBC journalists reviewed answers from four AI assistants: ChatGPT, Microsoft’s Copilot, Google’s Gemini, and Perplexity.

Journalists submitted 100 questions about current news and asked the chatbots to cite BBC articles as sources.

Here’s what they found:

  • 51% of responses had significant problems.
  • 91% of responses contained at least some issues.
  • 19% of responses citing BBC content contained factual errors, such as incorrect dates and statistics.
  • 13% of quotes from BBC articles were altered or fabricated.
  • AI assistants often had trouble distinguishing fact from opinion and providing necessary context.

BBC journalists concluded:

“AI assistants cannot currently be relied upon to provide accurate news, and they risk misleading the audience.”

Examples of mistakes found include:

  • Google’s Gemini incorrectly claimed that “The NHS advises people not to start vaping,” when the NHS actually recommends vaping as a way to quit smoking.
  • Perplexity and ChatGPT made errors about TV presenter Dr. Michael Mosley’s death.
  • Several AI assistants wrongly stated that political leaders were still in office after stepping down or being replaced.

Why Does This Matter?

The BBC points out that frequent errors create concerns about AI spreading misinformation. Even accurate statements can mislead when presented without context.

From the report:

“It is essential that audiences can trust the news to be accurate, whether on TV, radio, digital platforms, or via an AI assistant. It matters because society functions on a shared understanding of facts, and inaccuracy and distortion can lead to real harm.”

These findings align with another study I covered this week, which examined public trust in AI chatbots. It found that trust is evenly divided, with a distinct preference for human-centric journalism.

What This Means For Marketers

The BBC’s findings highlight key risks and limitations for marketers using AI tools to create content.

  1. Accuracy matters: Content needs to be accurate to build trust. AI-generated content with errors can harm a brand’s reputation.
  2. Human review is essential: While AI can simplify content creation, human checks are vital for spotting mistakes and ensuring quality.
  3. AI may lack context: The study shows that AI often struggles with providing context and distinguishing facts from opinions. Marketers should be aware of this limitation.
  4. Proper attribution is required: When using AI to summarize or reference sources, ensure you credit and link to the correct pages.

As AI becomes more common, marketers should consider informing audiences when and how they use it in order to maintain trust.

While AI has potential in content marketing, it’s important to use it wisely and with human oversight to avoid damaging your brand.


Featured Image: elenabsl/Shutterstock
