Google’s ‘Project Owl’ — a three-pronged attack on fake news & problematic content

Google hopes to improve search quality by better surfacing authoritative content and by enlisting feedback about search suggestions and Featured Snippets answers.

Google knows it has a search quality problem. It has been plagued since November with concerns about fake news, disturbing answers and offensive search suggestions appearing at the top of its results. “Project Owl” is an effort by the company to address these issues, with three specific actions being announced today. In particular, Google is launching:

- A new feedback form for search suggestions, plus formal policies about why suggestions might be removed.
- A new feedback form for “Featured Snippets” answers.
- A new emphasis on authoritative content to improve search quality.

We’ll get into the particulars of each of those items below. First, some background on the issue they aim to fix.

Project Owl & problematic content

Project Owl is Google’s internal name for its endeavor to fight back against problematic searches. The owl name was picked for no specific reason, Google said. Still, the owl as a symbol of wisdom is apt: Google’s effort seeks to bring some wisdom back into areas where it is sorely needed.

“Problematic searches” is the term I’ve been using for situations where Google is coping with the consequences of the “post-truth” world. People are increasingly producing content that reaffirms a particular world view or opinion, regardless of actual facts. In addition, people are searching in enough volume for rumors, urban myths, slurs or derogatory topics that they’re influencing the search suggestions Google offers, in offensive and possibly dangerous ways.

These are problematic searches because they don’t fall into the clear-cut areas where Google has typically taken action. Google has long dealt with search spam, where people try to manipulate its results outside acceptable practices for monetary gain. It has had to deal with piracy. It has had to deal with poor-quality content showing up for popular searches.

Problematic searches aren’t any of those things. Instead, they involve fake news, where people completely make things up. They involve heavily biased content. They involve rumors, conspiracies and myths. They can include shocking or offensive information. They pose an entirely new quality problem for Google, hence my dubbing them “problematic searches.”

Problematic searches aren’t new, but they typically haven’t been a big issue because they are relatively infrequent. In an interview last week, Pandu Nayak, a Google Fellow who works on search quality, spoke to this:

“This turns out to be a very small problem, a fraction of our query stream. So it doesn’t actually show up very often or almost ever in our regular evals and so forth. And we see these problems. It feels like a small problem,” Nayak said.

But over the past few months, these searches have grown into a major public relations nightmare for the company. My story from earlier this month, A deep look at Google’s biggest-ever search quality crisis, provides more background on this.

All the attention has registered with Google.

“People [at Google] were really shellshocked by the whole thing. That, even though it was a small problem [in terms of number of searches], it became clear to us that we really needed to solve it. It was a significant problem, and it’s one that we had, I guess, not appreciated before,” Nayak said.

Suffice it to say, Google appreciates the problem now.
Hence today’s news, which stresses that Google is taking real action it hopes will make significant changes.

Improving Autocomplete search suggestions

The first of these changes involves Autocomplete. This is when Google suggests topics to search on as someone begins to type in a search box. It was designed as a way to speed up searching. Someone typing “wea” probably means to search for “weather.” Autocomplete, by suggesting that full word, can save the searcher a little time.

Google’s suggestions come from the most popular things people search on that are related to the first few letters or words someone enters. So while “wea” brings up “weather” as a top suggestion, it also brings back “weather today” or “weather tomorrow,” because those are other popular searches beginning with those letters that people actually conduct. (A simple sketch of this mechanism appears at the end of this article.)

Since suggestions come from real things people search on, they can unfortunately reflect unsavory beliefs that people hold or problematic topics they are researching. Suggestions can also potentially “detour” people into areas far afield from what they were originally interested in, sometimes in shocking ways. This was illustrated last December, when the Guardian published a pair of widely discussed articles looking at disturbing search suggestions, such as “did the holocaust happen.”

Google has had issues like these for years, but the new attention has finally prompted it to take action. Last February, Google launched a limited test allowing people to report offensive and problematic search suggestions. Today, that system is going live for everyone, worldwide.

A new “Report inappropriate predictions” link will now appear below the search box. Clicking that link brings up a form that lets people select one or more predictions and report them in one of several categories: hateful, sexually explicit, violent or including dangerous and harmful activity, plus a catch-all “Other” category. Comments are allowed.

The categories correspond to new policies that Google is publishing for the first time about why it may remove some predictions from Autocomplete. Until now, Google has never published its reasons for removing suggestions. The new policies cover non-legal removals; legal reasons include removal of personally identifiable information, removals ordered by a court and those deemed to be piracy-related, as we’ve previously covered.

Will this new system help? If so, how? That remains to be seen. Google stressed that it hopes the feedback will be most useful for making algorithmic changes that improve all search suggestions, rather than for a piecemeal approach that deals with problematic suggestions on an individual basis. In other words, reporting an offensive suggestion won’t cause it to immediately be removed.
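To make the Autocomplete mechanism described above a bit more concrete, here is a minimal sketch of how prefix-based, popularity-ranked suggestions could work. This is purely illustrative and is not Google’s actual system; the query log, the counts and the function names are all invented for the example.

```python
from collections import Counter

# Hypothetical query log mapping queries to how often they were searched.
# A real system draws on aggregate search volume; these counts are invented.
QUERY_LOG = Counter({
    "weather": 5_000_000,
    "weather today": 2_200_000,
    "weather tomorrow": 1_400_000,
    "weather radar": 900_000,
    "weak": 120_000,
})

def suggest(prefix: str, limit: int = 4) -> list[str]:
    """Return the most popular logged queries that start with `prefix`."""
    prefix = prefix.lower().strip()
    matches = [(q, n) for q, n in QUERY_LOG.items() if q.startswith(prefix)]
    # Rank purely by popularity. A production system would also weigh
    # freshness, locale, personalization and policy filters.
    matches.sort(key=lambda pair: pair[1], reverse=True)
    return [q for q, _ in matches[:limit]]

print(suggest("wea"))
# ['weather', 'weather today', 'weather tomorrow', 'weather radar']
```

The sketch also shows why suggestions can go wrong: whatever people search on in volume becomes a candidate suggestion, however offensive, unless a policy filter removes it first. That gap is exactly what the new reporting form and removal policies are meant to address.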