Our op-ed: Regulating what is “best” in search?
Google’s Marissa Mayer wrote in the Financial Times today about the impact on consumers of governments potentially regulating search results. Because the article is behind the FT’s paywall, we thought we’d share the complete text here (also, check out search analyst Danny Sullivan’s take on this issue).
Do not neutralise the web’s endless search
By Marissa Mayer
Published: July 14 2010
Think about the word “jaguar” – what comes to mind? The animal? The car? A sports team? Now ask yourself: what is the best piece of literature ever written about jaguars? What about the best piece of literature ever written containing the word jaguar?
How do you define what is best? What characteristics and attributes should be taken into account? Which should not? There is a debate brewing, reported in the Financial Times this week, about whether standards are needed to ensure fairness – or what is “best” – in internet search results.
Search engines use algorithms and equations to produce order and organisation online where manual effort cannot. These algorithms embody rules that decide which information is “best”, and how to measure it. Clearly, deciding which of any set of products or services is best is subjective. Yet in our view, the notion of “search neutrality” threatens innovation, competition and, fundamentally, your ability as a user to improve how you find information.
When Google was launched in 1998, its fundamental innovation was the PageRank algorithm. It gave users a new way of deciding which available information was best – and it was one of many hundreds of techniques that search engines have since deployed to improve the ranking and relevance of their results.
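PageRank’s core idea is that a page matters more when other important pages link to it, with scores flowing repeatedly along links until they settle. The short sketch below illustrates that idea on a toy three-page web; the graph, damping factor and function names are illustrative assumptions, not Google’s actual implementation or production weights.

    # Illustrative sketch of the PageRank idea: score flows along links, and
    # pages linked to by high-scoring pages end up with high scores themselves.
    # The toy graph, damping factor and iteration count are arbitrary examples.

    def pagerank(links, damping=0.85, iterations=50):
        """links maps each page to the list of pages it links out to."""
        pages = list(links)
        n = len(pages)
        rank = {p: 1.0 / n for p in pages}                # start with a uniform score
        for _ in range(iterations):
            new_rank = {p: (1.0 - damping) / n for p in pages}
            for page, outlinks in links.items():
                if not outlinks:                          # dangling page: spread evenly
                    for p in pages:
                        new_rank[p] += damping * rank[page] / n
                else:
                    share = damping * rank[page] / len(outlinks)
                    for target in outlinks:
                        new_rank[target] += share         # pass score along each link
            rank = new_rank
        return rank

    if __name__ == "__main__":
        toy_web = {"a": ["b", "c"], "b": ["c"], "c": ["a"]}
        for page, score in sorted(pagerank(toy_web).items()):
            print(page, round(score, 3))

Running this gives the highest score to “c”, which is linked to directly by both other pages; real search engines, as the article notes, combine hundreds of such signals rather than relying on any single one.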
Yet searching the web has never been more complex. Type “World Cup” into Google today and you will see millions of results, ranging from recent news articles to images of players. Often the answer is not a web page: sports scores, news, pictures and tweets about matches are included. Such results stem from an upgrade to Google’s technology launched in 2007, which made it possible to include media such as maps, books or videos on a results page. Our goal is to provide our users with the best and most effective answer. Consider the search “how to tie a bowtie”: answers to these kinds of searches benefit from the inclusion of different media (diagrams, videos), sometimes from a Google service (books, maps).
To make matters more difficult, a quarter of all daily searches on Google have never been seen before. Each presents a new challenge, so our engineers need constantly to improve and update our algorithms; on average, we make one or two changes every day. Even then, some problems require a more hands-on approach: for example, we occasionally have to flag malicious programmes manually or remove links to child pornography or spam sites.
The world of search has developed a system in which each search engine uses different algorithms, and, with many search engines to choose from, users elect to use the engine whose algorithm most closely approximates their personal notion of what is best for the task at hand. The proponents of “search neutrality” want to put an end to this system, introducing a new set of rules in which governments would regulate search results to ensure they are fair or neutral.
Here the practical challenges would be formidable. What is fair in terms of ordering? An alphabetical listing? Equally, new results would need to be incorporated – not just new web pages, but new media types such as tweets or audio streams. Without competition and experimentation between companies, how could the rules keep up? There is no doubt that such regulation would stifle the advance of the science behind search engines.
Abuse would be a further problem. If search engines were forced to disclose their algorithms and not just the signals they use, or, worse, if they had to use a standardised algorithm, spammers would certainly use that knowledge to game the system, making the results suspect.
But the strongest argument against rules for “neutral search” is that they would make the ranking of results on each search engine similar, creating a strong disincentive for each company to find new, innovative ways to seek out the best answers on an increasingly complex web. What if a better answer for your search on, say, the World Cup or “jaguar” were to appear on the web tomorrow? And what if a new technology as powerful as PageRank were developed, transforming the way search engines work? Neutrality rules that force standardised results would remove the potential for innovation and turn search into a commodity.
We know that Google plays an important role in people’s access to information. We also welcome scrutiny and want to ensure everyone understands how we work. Yet we believe the best answer for a particular search changes constantly: it changes because the web changes, because users’ expectations and tastes evolve and because the media never stay still. Proponents of search neutrality, however, are effectively saying that they know what is “best” for you. We think consumers should be able to decide for themselves – with an array of internet search engines to choose from, each providing its very best.
The writer is vice-president of search product and user experience at Google.