A member of the SEO community expressed the opinion that misinformation in medical search results is as harmful to users as spam content. And if that’s true, why doesn’t Google penalize misinformation sites with the same vigor it penalizes sites for spam? Google’s Danny Sullivan offered an explanation.
Should Misleading Information Be Treated Like Spam?
Joe Hall (@joehall), a member of the search marketing community, framed the question of misinformation in the search results within the context of a bad user experience and compared it to spam.
One of the reasons why Google cracks down on spam is because it’s a poor user experience, so it’s not unreasonable to link misinformation with spam.
Joe Hall isn’t alone in linking misleading information with spam. Google does too.
Google Defines Misleading Content as Spam
Google’s own Webmaster Guidelines define misleading information as spam because it harms the user experience.
“A rich result may be considered spam if it harms the user experience by highlighting falsified or misleading information. For example, a rich result promoting a travel package as an Event or displaying fabricated Reviews would be considered spam.”
If a user searches for “this” and is taken to a page of content about “that,” according to Google’s own guidelines, that’s considered spam.
Is Misleading Different From Misinformation?
Some may quibble that there’s a difference between the words misleading and misinformation.
But this is how the dictionary defines those words:
Merriam-Webster Definition of misleading:
“…to lead in a wrong direction or into a mistaken action or belief often by deliberate deceit… to lead astray : give a wrong impression…”
Merriam-Webster Definition of Misinformation:
“…incorrect or misleading information”
Regardless of whether you believe there’s a gulf of difference between misleading and misinformation, the end result is the same: an unfulfilled user and a bad user experience.
Google’s Algorithm Designed to Fulfill Information Needs
Google’s documentation on its ranking updates states that the purpose of the changes is to fulfill users’ information needs. The reason Google wants to send users to sites that fulfill their information needs is because that’s a “great user experience.”
Here’s what Google said about their algorithms:
“The goal of many of our ranking changes is to help searchers find sites that provide a great user experience and fulfill their information needs.”
If a site that provides quality information creates a great user experience, then it’s not unreasonable to say that a site that provides misleading information creates a poor user experience.
The word “egregious” means shockingly bad, an appropriate word to describe a site that provides misleading information for sensitive medical-related search queries.
So, if it’s true that misleading information creates a poor user experience, then why isn’t Google tackling these sites in the same way it takes down spam sites?
If misleading information is as bad as or worse than spam, why doesn’t Google hand out its most severe penalties (like manual actions) to sites that are egregious offenders? Joe Hall tweeted:
“If you are found to spread misinformation about COVID19 vaccines… Then you shouldn’t be in Google’s index at all. It’s time that G puts it’s money where its mouth is in regards to content quality.”
Joe next tweeted about the seeming futility of the algorithm or concepts like E-A-T for dealing with misinformation and the difference between how Google treats spam and misinformation:
“Forget Core Quality Updates, YMYL, and EAT, just kick them out of the index. Sick of seeing Google put the hammer down for things like buying links… But consistently turns a blind eye to content that causes real harm to people.”
Google Responds to Issue of Misinformation in SERPs
Google’s Danny Sullivan insisted that Google is not turning a blind eye to misinformation. He affirmed Google’s commitment to showing useful information in the SERPs.
We don't turn a blind eye. Just because something is indexed is entirely different from whether it ranks. We invest a huge amount of resources to ensure we're returning useful, authoritative information in ranking. See also: https://t.co/SRUFrTcg56 and https://t.co/cTveD8XNxp
— Danny Sullivan (@dannysullivan) December 10, 2020
The end result is the same. Our systems look to reward quality. If you are posting misinformation, you're not rewarded, because you don't rank well. If you try to artificially boost your relevance, you're not rewarded, because you get a manual action and don't rank well.
— Danny Sullivan (@dannysullivan) December 10, 2020
Joe Hall continued:

“Bottom line, protecting your user’s life/health should take a higher precedence than punishing link buyers.”
Danny Sullivan responded:

“It already does. You are choosing to deliberately focus on the fact that we take manual action on *some* things in *addition to* automated protections to make it seem like our existing ranking systems are somehow not trying to show the best and most useful info we can.”
Google’s Danny Sullivan then followed up with:
“It seems like you equate manual action in the case of some spam attempts as somehow like we’re not working across all pages all the time to fight both spam and misinformation. We are.”
Joe Hall returned to ask why misleading sites aren’t penalized in the same manner as spam:
I understand that. The point I'm trying to make is why isn't there a manual penalty for spreading disinformation that can kill people? Why is it that manual penalties are only reserved for links? Algorithms don't carry the same message that manual penalties do.
— Joe Hall 🦡 (@joehall) December 11, 2020
Danny explained in two tweets the challenge of manually reviewing millions of misleading sites and of ranking breaking news:
“There are millions of pages with misinformation out there. We can’t manually review all existing pages, somehow judge them & also review every new page that’s created for topics that are entirely new. The best way to deal with that is how we do, a focus on quality ranking…
Remember the whole 15% of queries are new thing. That’s a big deal. Some new story breaks, uncertain info flows, misinfo flows along with authority info that flows. Our systems have to deal with this within seconds. Seconds. Over thousands+ pages that quickly emerge…”
Next, Danny asserted that automated systems do far more of the heavy lifting against spam than manual actions do.
“Yes, we will take manual actions in addition to the automated stuff, but that’s a tiny amount and also something where a manual approach can work, because it’s pretty clear to us what’s spam or not.”
Google and Misinformation
It’s uncertain whether Google’s algorithms are doing a good job surfacing high quality information in the search results.
But the question of whether Google should elevate how it treats misinformation is a valid one. Particularly for YMYL queries on medical topics, blocking misinformation in those search results seems to be as important as blocking spam.