Google Increases Efforts to Filter Out Offensive, Upsetting, and Inaccurate Content

Google has updated its search quality evaluator guidelines with a new section instructing quality raters on how to flag inaccurate, offensive, or upsetting content. With these directions in place, Google is strengthening its efforts to surface more factually accurate and reliable sources in search results.

Google’s quality raters are responsible for conducting real-life queries and evaluating how well the returned pages satisfy them. A new section has been added to the 160-page document with guidance on how to rate “Upsetting-Offensive Results”.

Evaluating Upsetting-Offensive Content

The addition to Google’s quality evaluator guidelines instructs quality raters to assign the Upsetting-Offensive flag to all results which may be considered offensive or upsetting, even if the results satisfy the user’s query. In other words, even in instances where the query may be intentionally requesting upsetting or offensive information, the web pages returned should still be flagged if they meet the criteria.

According to Google, Upsetting-Offensive content typically includes the following:

  • Content that promotes hate or violence against a group of people based on criteria including (but not limited to) race or ethnicity, religion, gender, nationality or citizenship, disability, age, sexual orientation, or veteran status.
  • Content with racial slurs or extremely offensive terminology.
  • Graphic violence, including animal cruelty or child abuse.
  • Explicit how-to information about harmful activities (e.g., how-tos on human trafficking or violent assault).
  • Other types of content which users in your locale would find extremely upsetting or offensive.

Google also provides a visual example in the guidelines. It refers to a query in which the user is seeking historically accurate information about the Holocaust. The first result returned in the example is a web page about Holocaust denial on a white supremacist site, which quality raters are instructed to flag as Upsetting-Offensive.

The second result returned in the example above is a page on the History Channel website. Despite the Holocaust being a potentially upsetting topic, quality raters are instructed to NOT mark it as Upsetting-Offensive because it’s a factually accurate source of historical information.

Following these instructions, Google hopes to weed out content that is both upsetting and inaccurate.

What if a User is Intentionally Searching for Upsetting-Offensive Content?

The new section in the search evaluator guidelines also provides instructions on how to assign a “Needs Met” rating for Upsetting-Offensive content. There may be times when a user is genuinely interested in receiving factually accurate information about this type of content.

“Remember that users of all ages, genders, races, and religions use search engines for a variety of needs. One especially important user need is exploring subjects which may be difficult to discuss in person. For example, some people may hesitate to ask what racial slurs mean. People may also want to understand why certain racially offensive statements are made. Giving users access to resources that help them understand racism, hatred, and other sensitive topics is beneficial to society.

When the user’s query seems to either ask for or tolerate potentially upsetting, offensive, or sensitive content, we will call the query an “Upsetting-Offensive tolerant query”. For the purpose of Needs Met rating, please assume that users have a dominant educational/informational intent for Upsetting-Offensive tolerant queries. All results should be rated on the Needs Met rating scale assuming a genuine educational/informational intent.”

Quality raters are instructed to give a “Highly Meets” rating when informational results about Upsetting-Offensive topics meet the following criteria (sketched in code after the list):

  • Results are found on highly trustworthy, factually accurate, and credible sources, unless the query clearly indicates the user is seeking an alternative viewpoint.
  • Results address the specific topic of the query so that users can understand why it is upsetting or offensive and what the sensitivities involved are.
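
To make that guidance concrete, here is a minimal sketch in Python of how the “Highly Meets” rule might be expressed as a decision procedure. Every name and the function itself are invented for illustration; Google’s actual rating tools are not public.

```python
# Hypothetical sketch of the "Highly Meets" guidance above.
# Every name here is invented for illustration; this is not Google code.

def highly_meets(query_is_uo_tolerant: bool,
                 source_is_trustworthy: bool,
                 seeks_alternative_viewpoint: bool,
                 addresses_query_topic: bool) -> bool:
    """Return True when a result satisfies both 'Highly Meets' criteria."""
    if not query_is_uo_tolerant:
        return False  # this rule only applies to UO-tolerant queries
    # Criterion 1: a trustworthy, credible source, unless the query
    # clearly asks for an alternative viewpoint.
    credible_enough = source_is_trustworthy or seeks_alternative_viewpoint
    # Criterion 2: the result addresses the topic so the user can
    # understand why it is upsetting or offensive.
    return credible_enough and addresses_query_topic

# A factually accurate page on a credible site, for an educational query:
print(highly_meets(True, True, False, True))   # True
# An untrustworthy page for the same query:
print(highly_meets(True, False, False, True))  # False
```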

Google also cautions quality raters not to make their own assumptions about the queries.

Important:

  • Do not assume that Upsetting-Offensive tolerant queries “deserve” offensive results.
  • Do not assume Upsetting-Offensive tolerant queries are issued by racist or “bad” people.
  • Do not assume users are merely seeking to validate an offensive or upsetting perspective.

Google also provides examples in the guidelines showing how user intent should be interpreted for queries about Upsetting-Offensive topics.

What Happens When Content is Flagged?

Content being flagged is not grounds for immediate demotion or deindexing. Instead, the data collected by quality raters is sent to the engineers who write Google’s search algorithms, who will use it to work out how to adjust those algorithms to flag Upsetting-Offensive content automatically in the future.

If Google’s algorithms then determine that a piece of content will be upsetting to the user, based on the query used to search for it, that content will be less likely to appear for that specific user. However, if the user is intentionally trying to find such content, on bigoted websites for example, then it will still show up for them.

In other words, the likelihood of Upsetting-Offensive content appearing in search results after it has been flagged all depends on user intent.
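
As a purely hypothetical illustration of that intent-dependent behavior, the sketch below shows one way a ranking pipeline could combine an automatic Upsetting-Offensive flag with query intent. Every name, score, and demotion factor here is invented; Google has not published how its systems actually do this.

```python
# Invented illustration of intent-dependent demotion; not Google's code.
from dataclasses import dataclass

@dataclass
class Result:
    url: str
    base_score: float
    upsetting_offensive: bool  # e.g., predicted by a model trained on rater flags

def final_score(result: Result, query_is_uo_tolerant: bool) -> float:
    """Demote flagged results only when the query does not tolerate them."""
    if result.upsetting_offensive and not query_is_uo_tolerant:
        return result.base_score * 0.1  # arbitrary demotion factor
    return result.base_score

# The same flagged page scores very differently depending on query intent:
page = Result("https://example.test/denial", base_score=0.9,
              upsetting_offensive=True)
print(final_score(page, query_is_uo_tolerant=False))  # demoted
print(final_score(page, query_is_uo_tolerant=True))   # still surfaced
```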

Time will tell how well these new guidelines actually work to improve the quality of search results.
