Google knows it has a problem. The quality of its search results was rarely questioned until the recent fake news epidemic. Previously, Google’s algorithm used a page’s popularity to determine both its value and its relevance. But with fake news circulating at such scale, incorrect and sometimes offensive content was surfacing as truth. Google’s response is Project Owl.
The first change involves adjustments to the autocomplete algorithm, the part of the system that completes a word or phrase as you type into the search box. Normally, the suggestions are the most popular queries that begin with what you have entered so far. Unfortunately, in this post-truth world, some offensive search terms sit at the very top. Google’s change adds a reporting mechanism that lets users flag offensive and hateful suggestions.
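The basic idea of popularity-ranked autocomplete, combined with a blocklist built from user reports, can be sketched as follows. This is a minimal illustration, not Google’s actual algorithm; the query log, counts, and reported terms are all hypothetical.

```python
from collections import Counter

# Hypothetical query log with popularity counts (illustrative data only).
query_log = Counter({
    "weather today": 120,
    "weather tomorrow": 95,
    "web development": 80,
    "weekend events": 40,
})

# Hypothetical blocklist populated from user reports, mimicking the
# reporting mechanism Project Owl introduces for bad predictions.
reported_terms = {"weekend events"}

def autocomplete(prefix, k=3):
    """Return up to k of the most popular queries starting with prefix,
    excluding any that users have reported."""
    matches = [
        (query, count) for query, count in query_log.items()
        if query.startswith(prefix) and query not in reported_terms
    ]
    # Rank by popularity, most popular first.
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [query for query, _ in matches[:k]]

print(autocomplete("we"))
# ['weather today', 'weather tomorrow', 'web development']
```

Once a term lands in `reported_terms`, it simply stops appearing as a suggestion, no matter how popular it is.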
The second change targets the featured snippet that appears at the top of many search results. This snippet pulls content from the top-ranked pages and, as the “are women evil” results demonstrated, can present complete fiction as an answer. The fix follows the same approach: a feedback mechanism that lets users report bad snippets, helping Google learn which types of answers to avoid.
The final change improves general search quality through human search quality raters. The rating team works through search terms and flags content that is fake, harmful, hateful, or offensive. The team also rates content positively; authoritative content will get a boost in the next update.
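One way rater signals could feed back into ranking is as a re-scoring step: boost results labeled authoritative, demote results flagged as fake or offensive. The sketch below is purely illustrative; the labels, URLs, scores, and weights are hypothetical and not Google’s actual formula.

```python
# Hypothetical search results with base relevance scores and rater labels.
results = [
    {"url": "example-news.com/story",  "score": 0.90, "label": "flagged"},
    {"url": "encyclopedia.org/topic",  "score": 0.85, "label": "authoritative"},
    {"url": "random-blog.net/post",    "score": 0.80, "label": None},
]

def rerank(results, boost=0.2, penalty=0.5):
    """Apply rater-derived adjustments, then sort by adjusted score."""
    adjusted = []
    for r in results:
        score = r["score"]
        if r["label"] == "authoritative":
            score += boost      # rater-verified quality gets a boost
        elif r["label"] == "flagged":
            score -= penalty    # content flagged as fake/offensive is demoted
        adjusted.append({**r, "adjusted": score})
    return sorted(adjusted, key=lambda r: r["adjusted"], reverse=True)

print([r["url"] for r in rerank(results)])
# ['encyclopedia.org/topic', 'random-blog.net/post', 'example-news.com/story']
```

Note that the flagged page started with the highest base score; the rater-derived penalty is what pushes it below the other two.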