With growing criticism over misinformation in search results, Google is taking a harder look at potentially upsetting or offensive content, tapping humans to help its computer algorithms deliver more factually accurate and less inflammatory results.
The humans are Google's 10,000 independent contractors who work as what Google calls quality raters. They are given searches based on real queries to score the results, and they operate according to guidelines provided by Google.
On Tuesday they were handed a new one: to hunt for "Upsetting-Offensive" content, such as hate or violence against a group of people, racial slurs or offensive terminology, graphic violence including animal cruelty or child abuse, or explicit information about harmful activities such as human trafficking, according to guidelines posted by Google.
The goal: to steer people with queries such as "did the Holocaust happen" to trustworthy websites, and not to websites that engage in falsehoods or hate speech.
The Internet giant is using data from quality raters to spot demonstrably inaccurate information, Paul Haahr, a Google senior engineer involved with search quality, said in an interview with industry blog Search Engine Land. Haahr told Search Engine Land that Google is avoiding the term "fake news" because it is too vague.
How it works: Google, for example, advises its quality raters that a search result from the white supremacist website Stormfront denying the Holocaust happened should be flagged as upsetting or offensive content, while a result from the History Channel describing what happened during the Holocaust should not.