Google has reportedly told the EU it won’t add fact-checking to search results or YouTube videos, nor will it use fact-checks to influence rankings or remove content.
The decision puts Google at odds with the EU’s effort to make its disinformation rules binding on major platforms.
Google Says No to EU’s Disinformation Code
In a letter to Renate Nikolay of the European Commission, Kent Walker, Google’s president of global affairs, said fact-checking “isn’t appropriate or effective” for Google’s services.
The EU’s strengthened Disinformation Code, which is set to become a formal code of conduct under the Digital Services Act (DSA), would require platforms to display fact-checks alongside search results and YouTube videos and to factor them into their ranking systems.
Walker argued that Google’s existing moderation tools, such as SynthID watermarking and AI content disclosures on YouTube, are already effective.
He pointed to Google’s handling of last year’s elections as evidence the company can manage misinformation without integrating fact-checks.
Google also confirmed it plans to withdraw from all fact-checking commitments in the EU’s voluntary Disinformation Code before those commitments become mandatory under the DSA.
Context: Major Elections Ahead
This refusal from Google comes ahead of several key European elections, including:
- Germany’s Federal Election (Feb. 23)
- Romania’s Presidential Election (May 4)
- Poland’s Presidential Election (May 18)
- Czech Republic’s Parliamentary Elections (Sept.)
- Norway’s Parliamentary Elections (Sept. 8)
These elections will be an early test of how well tech platforms handle misinformation without binding fact-checking requirements.
Tech Giants Backing Away from Fact-Checking
Google’s decision follows a broader industry trend.
Last week, Meta announced it would end its fact-checking program on Facebook, Instagram, and Threads and shift to a crowdsourced model similar to Community Notes on X (formerly Twitter).
Elon Musk has drastically reduced moderation efforts on X since buying the platform in 2022.
What It Means
As platforms like Google and Meta pull back from active fact-checking, concerns are growing about how misinformation will spread, especially during elections.
Tech companies maintain that transparency tools and user-driven features are sufficient, but critics argue those measures fall short of combating disinformation.
Google’s pushback signals a growing divide between regulators and platforms over how to manage harmful content.
Featured Image: Wasan Tita/Shutterstock