Introducing the 2009 class of Google Policy Fellows

This summer promises to be an exciting time to work on Internet and technology policy issues, and we can't wait to see how our impressive 2009 Google Policy Fellowship class helps shape the debate. The students this year are bloggers, engineers, lawyers, journalists, activists, and web-entrepreneurs representing 14 universities in three countries.

Our host organizations selected the 14 fellows from a stunning applicant pool of over 600 students -- twice as many applications as last year. If last summer's experience is any indication, the fellows will be out there blogging, meeting with policymakers, publishing papers, and much more. Be sure to look for their work this summer on the host organizations' websites. Here's the full list of students and the host organizations where they'll be working this summer:
Congratulations to our fellows, and a hearty thanks to the hundreds of talented individuals who applied.

Introducing our European public policy blog

I'm happy to share the news that we recently launched our own continental spinoff of this blog -- the Google European Public Policy Blog. It's edited here in Brussels, the capital of the European Union, but it will draw on Google's public policy resources from across the entire continent.

Though we will continue to cross-post some news from across the Atlantic here on the Google Public Policy Blog, the European policy blog will be the best Google resource for European policymakers and other policy wonks.

Check out the welcome post from Simon Hampton and Susan Pointer, Google's Directors for European Public Policy and Government Affairs. And we hope you'll keep reading.

Google D.C. Talk Friday: A Conversation with Jeff Jarvis

For those of you in D.C., we hope you'll join us this Friday for our next Google D.C. Talk with author Jeff Jarvis.

In his new book What Would Google Do?, Jeff reverse-engineers Google to discern its core practices, strategies and attitudes. Among his recommendations: make mistakes well, manage abundance, trust the people, and be transparent.

Ultimately, the book isn't really about Google, though. It's a candid assessment of where today's companies are failing as well as a survival guide for succeeding -- in fields as diverse as automobiles, power plants, media companies, and health care.

Google's Bob Boorstin will be talking with Jeff about the book -- and Jeff will be taking questions from you too. We hope you can make it.

Google D.C. Talks presents
A Conversation with Jeff Jarvis, What Would Google Do?
Friday, April 3, 2009
10:00 AM - 11:00 AM ET
Google DC
1101 New York Avenue NW
2nd Floor
Entrance on Eye Street
Washington, DC

Citizen participation that scales: a call to action

(Cross-posted from the Official Google Blog)

At Google we hold weekly town hall-style meetings with our founders, CEO, and guest speakers, which always feature a Q&A session. Managing Q&A is a unique challenge with an audience of thousands, in offices around the world, who inevitably want to ask more questions than we have time to answer. To help address this challenge, we developed Google Moderator, built on App Engine.

Moderator gives participants a way to submit questions and vote for the ones they want answered. And thanks to the scale that App Engine provides, this application can now support tens of thousands of people at once. This gives everyone the chance to be heard in a way that gives priority to the issues that matter most to the broader group.
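As an illustrative sketch (a toy model, not Moderator's actual implementation), the submit-and-vote mechanic can be captured in a few lines:

```python
from collections import defaultdict

class QuestionSeries:
    """Toy model of a Moderator-style Q&A series: anyone may submit a
    question, each user gets one vote per question, and the questions
    with the most votes surface to the top."""

    def __init__(self):
        self.questions = {}            # question id -> text
        self.votes = defaultdict(set)  # question id -> voter ids
        self._next_id = 0

    def submit(self, text):
        qid = self._next_id
        self._next_id += 1
        self.questions[qid] = text
        return qid

    def vote(self, qid, user):
        # storing voters in a set makes a repeat vote a no-op
        self.votes[qid].add(user)

    def top(self, n=3):
        ranked = sorted(self.questions,
                        key=lambda q: len(self.votes[q]), reverse=True)
        return [self.questions[q] for q in ranked[:n]]

series = QuestionSeries()
q1 = series.submit("What about broadband policy?")
q2 = series.submit("When is lunch?")
for user in ("alice", "bob", "carol"):
    series.vote(q1, user)
series.vote(q2, "bob")
series.vote(q2, "bob")  # duplicate vote from the same user is ignored
print(series.top(1))    # the broadband question wins, 3 votes to 1
```

The one-vote-per-user rule plus vote-count ordering is what lets the group's priorities, rather than the loudest voices, decide which questions rise.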

As you may have heard, the White House is hosting an online town hall meeting on Thursday and has asked people to submit questions for the president and vote on which ones they think he should answer.

We think technology can be a force for greater accountability and access between citizens and their elected officials. We're excited that the White House has chosen to use the power of cloud-based applications like Google Moderator and App Engine to scale the president's direct dialogue with the American people.

To take part in this experiment in citizen participation, please visit:

What policymakers should know about "cloud computing"

Most technology observers see "cloud computing" -- the movement of computer applications and data storage from the desktop to remote servers -- as the next big thing in computing. We held a "Google D.C. Talk" on the topic last September, and we've written quite a bit about cloud computing on this blog. But for many policymakers, the concept of cloud computing and its implications are still not well understood.

To help cut through the confusion, Jeffrey Rayport and Andrew Heyward of the research firm Marketspace today released a paper based on a series of interviews with dozens of people from business, academia, and government. The paper, "Envisioning the Cloud: The Next Computing Paradigm," was commissioned by Google.

The study's authors talk about the economic and social benefits of the cloud computing model, as well as policy guidelines for regulators and lawmakers in Washington who want to help foster the development of cloud computing. They recommend that policymakers look closely at these issues:

  • Full connectivity - Users have to be connected to the Internet for cloud-based services to flourish -- all users, all the time. Government can encourage availability and adoption of cloud computing through tax incentives to technology providers, subsidies to low-income users, and regulation of the wireless spectrum.

  • Open access - Access to the basic infrastructure of cloud computing should not be based on discriminatory pricing or provide an unfair advantage to certain users. Congress can take the lead in clarifying openness rules for the marketplace and the FCC should actively enforce existing laws designed to ensure open access.

  • Security - Cloud computing providers must make a compelling case to users that their data is safe. While competitive market forces will drive service providers to differentiate themselves on security, the government can play a role by aggressively enforcing cyber-crime laws.

  • Privacy - Cloud computing providers must address three specific concerns about privacy: protecting data from unauthorized access by government; restricting its exploitation for commercial purposes; and safeguarding it from the prying eyes of competitors. One goal of privacy legislation should be to shield consumer data from inappropriate government scrutiny and to define what rights companies have to use data about their users for commercial purposes.

It's always tempting to suggest that the next new technology will be disruptive, game-changing, or revolutionary. The Internet certainly was. It remains to be seen whether cloud computing will deliver the same magnitude of changes and benefits (or more), but it unquestionably holds a lot of promise.

Cloud computing can be a great equalizer, putting powerful computing tools that were once available only to large institutions in the hands of individuals and small businesses. It can promote competition, accelerate innovation, enhance productivity, deliver cost savings, strengthen data security and maybe even help our environment.

That is a lot to live up to, but we hope that by continuing this conversation we can separate hype from reality and offer some practical suggestions aimed at getting the most out of what cloud computing has to offer.

Google D.C. Talk tomorrow: The Future of Cloud Computing

Recently, there's been a lot of talk about the possibilities of web-based services in "the cloud," a new evolution in computing that gives individual users the power to access and deploy applications and services through the Internet.

Tomorrow morning, we'll be hosting a discussion on the future of cloud computing and technology policy at the Newseum in D.C.

Jeffrey Rayport, principal at the Marketspace consulting group and one of the nation's leading experts on digital strategy and marketing, and Andrew Heyward, former President of CBS News, will present the findings of their new study. They will assess the possibilities, risks and returns of cloud computing; the next steps in moving forward; and potential implications for technology policy.

Bernard Golden (CEO of HyperStratus) will offer commentary and Chris Dorobek (co-anchor of The Daily Debrief on Federal News Radio 1500 AM, managing editor of, and editor in chief of the blog) will moderate the discussion.

Google D.C. Talks presents
"Envisioning the Cloud: The Next Computing Paradigm
and its Implications for Technology Policy"

Friday, March 20, 2009
10:00 AM - 11:30 AM ET
The Newseum
555 Pennsylvania Ave. NW, Seventh Floor
Washington, D.C. 20001
RSVP Here or e-mail Dorothy Chou at

We hope you can join us tomorrow. All are invited to submit questions in advance via Google Moderator.

Why the next generation Internet Protocol matters

Just as a unique number is associated with your telephone line, your computer is assigned an Internet Protocol (IP) address when you connect to the Internet. Unfortunately, under the current Internet protocol, IPv4, the Internet is projected to run out of IP addresses in 2011. While technologies such as Network Address Translation (NAT) can provide temporary workarounds, they undermine the Internet's open architecture and "innovation without permission" ethos, allowing network intermediaries to exert undue control over new applications.

Effective adoption of the next generation protocol -- IPv6 -- will provide a real, sustainable solution. By expanding the number of IP addresses -- enough for three billion addresses for every person on the planet -- IPv6 will clear the way for the next generation of VoIP, video conferencing, mobile applications, "smart" appliances (Internet-enabled heating systems, cars, refrigerators, and other devices) and other novel applications.
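The scale of the expansion is easy to check with back-of-the-envelope arithmetic (the ~6.8 billion world-population figure below is our assumption for 2009):

```python
ipv4_space = 2 ** 32    # IPv4: about 4.3 billion addresses in total
ipv6_space = 2 ** 128   # IPv6: about 3.4e38 addresses

# IPv6 multiplies the address space by 2^96, roughly 7.9e28
print(f"IPv6/IPv4 ratio: {ipv6_space // ipv4_space:.2e}")

population = 6_800_000_000  # rough 2009 world population (assumption)
print(f"Addresses per person: {ipv6_space // population:.2e}")
```

The per-person figure comes out to roughly 5 x 10^28 -- vastly more, in fact, than the three billion per person the text guarantees.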

In a 2005 report prepared for the National Institute of Standards & Technology, RTI International estimated that U.S. adoption of IPv6 would produce annual benefits in excess of $10 billion.

Unfortunately, IPv6 presents a classic chicken-and-egg problem. The benefits of any one network operator, device vendor, application and content provider, or Internet user adopting IPv6 are limited if there is not a critical mass of other adopters. As a result, adoption lags.

The best way to kickstart IPv6 support is to adopt it, and governments are uniquely positioned here. Governments can take advantage of their roles as network operators, content providers, and consumers of Internet services to spur rapid, effective adoption of IPv6. As owners of large IP-based networks, they can transition both their externally- and internally-facing services to IPv6. They can also choose to purchase Internet services only from entities that commit to deploying native IPv6. In addition, governments can consider subsidizing or otherwise financially supporting IPv6 -- for example, by conditioning funding for broadband deployment on the use of IPv6 and by funding research on innovative IPv6-based applications.

The private sector also has a critical role to play, of course. Here at Google we're hosting a conference this week to support IPv6 implementation. We began offering Web Search over IPv6 in March 2008, and we recently announced our Google over IPv6 initiative, which provides users seamless access to most Google services over IPv6. At this week's conference, participants will share IPv6 implementation experience, advice, and associated research, and hopefully take one more step towards sustaining a healthy, open Internet.

Why we believe in geospatial data sharing

(Cross-posted from the Google Lat Long Blog)

In several recent posts, we've highlighted our ongoing efforts to partner with public sector organizations to add their map content to Google Maps and Google Earth. We undertake these partnerships because, by definition, organizations like local governments are the most authoritative sources of geospatial data for their jurisdictions. But partnering with governments is a challenge of sheer numbers. Run the numbers for just the U.S. -- many federal agencies with geospatial data, 50 state governments, some 3,000 counties, and over 30,000 cities and towns -- and you quickly get an idea of the volume of relationships you'd have to develop and manage to add data from all governments to a service like Google Maps.

It's therefore no surprise that we at Google are very supportive of organizations that seek to streamline access to and simplify the sharing of geospatial data. One such organization is the National States Geographic Information Council (NSGIC), the association of U.S. state government GIS agencies. Among NSGIC members' objectives is coordinating the collection and sharing of data within their jurisdictions. Because of the efforts of many NSGIC members, we've managed to efficiently add aerial imagery and other datasets for entire states to our services.

For example, the State of Arkansas improved the resolution and currency of imagery statewide. The following screenshots, taken from Google Earth's Historical Imagery feature, show the Clinton Presidential Library in Little Rock, Arkansas, while under construction and following its dedication. The latter view was provided by the State of Arkansas.

Clinton Presidential Library under construction, December 2002, DigitalGlobe

Clinton Presidential Library post-dedication, December 2005, State of Arkansas

Another NSGIC objective, shared by U.S. federal agencies and others, is producing nationwide datasets as part of a National Spatial Data Infrastructure, such as through the Imagery for the Nation program. We've joined others in the technology industry in endorsing such efforts.

NSGIC recently held a conference in Annapolis, Maryland, where we had the opportunity to present to the association's state government members. The purpose of our presentation was to address the recurring questions we get from GIS agencies about the types of geospatial data we welcome and the steps involved in partnering with us. One result is that we've published answers to an initial group of questions, and we'll be adding others soon.

We applaud the work of GIS agency managers and policymakers at all levels of government who are working to ensure that the public's investment in geospatial data is not only shared and used across agencies and governments, but also made readily available to the public through free services like Google Maps. We look forward to collaborating with NSGIC and other organizations to advance such efforts in data sharing.

The economics of open access

For one of the two broadband deployment programs created by last month's stimulus package, the legislation states that "Priority for awarding such funds shall be given to project applications for broadband systems that will deliver end users a choice of more than one service provider." Telecom wonks call this "open access" -- while one entity builds and owns the physical network infrastructure, other competing companies are allowed to use the infrastructure to offer Internet access and other services to consumers.

In Europe and elsewhere in the world, regulations that require incumbent telecom companies to operate on an open access basis are quite common. By enabling more competition, open access can enhance consumer choice, lower prices, and ultimately drive infrastructure improvements. Open access can also catalyze innovation, because competing providers can develop new broadband data services. For instance, Stockholm's Stokab network is used to provide not only Internet access, but also telemedicine, e-learning, and a multiplicity of other services (link via Tim Poulus).

Regardless of the public policy rationale, are there reasons why infrastructure providers should embrace the open network model? Some certainly think so. British Telecom, for instance, restructured itself in early 2006 to operate its infrastructure on an open access basis, and Swisscom is building a super high-speed fiber-to-the-home network that will allow multiple competitors to serve each household. The CEO of Dutch telecom company KPN recently stated, "In hindsight, KPN made a mistake back in 1996. We were not too enthusiastic to be forced to allow competitors on our old wireline network. That turned out not to be very wise. If you allow all your competitors on your network, all services will run on your network, and that results in the lowest cost possible per service. Which in turn attracts more customers for those services, so your network grows much faster. An open network is not charity from us, in the long run it simply works best for everybody."

If you want an in-depth discussion of how open access can make good business sense, check out this insightful presentation from Yankee Group analyst Benoit Felten (the first part is embedded below). Felten runs a tremendous telecom blog called Fiberevolution and his thoughts on open access are summarized here.

Introduction to the ad auction

(Cross-posted from the Inside AdWords Blog)

When we go to conferences or read posts in forums, we find that advertisers sometimes know more about advanced features than about the basics of how AdWords works. So, we've decided to take some time to get back to basics and talk about how the AdWords auction actually works. To help you, we've brought along our Chief Economist, Hal Varian, to walk you through the auction and explain how your maximum cost-per-click (CPC) bid and Quality Score determine how much you actually pay for an ad click on Google's search results pages.

When people think of an auction, they often think of a prize being sold to the highest bidder. But the AdWords auction works a little differently: the winner pays only the minimum amount necessary to maintain its position on the page. In other words, you pay just enough to beat the advertiser ranked below you. In fact, our quality-based pricing system means you'll often pay less than your maximum bid.

How exactly does this work? We'll leave that to Hal to explain.

If you have trouble viewing this video, you can watch it here.
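For readers who prefer code to video, here's a rough sketch of the mechanics: a generalized second-price auction with quality scores. This is a simplified model, not Google's production system; the one-cent increment and the minimum price for the last position are illustrative assumptions.

```python
def run_auction(bidders):
    """bidders: list of (name, max_cpc_bid, quality_score) tuples.
    Ads are ranked by Ad Rank = bid * quality score; each winner pays
    just enough to keep its position over the ad ranked below it."""
    ranked = sorted(bidders, key=lambda b: b[1] * b[2], reverse=True)
    results = []
    for i, (name, bid, qs) in enumerate(ranked):
        if i + 1 < len(ranked):
            _, next_bid, next_qs = ranked[i + 1]
            # pay the next ad's rank divided by your own quality score,
            # plus a one-cent increment (an illustrative assumption)
            price = min(bid, next_bid * next_qs / qs + 0.01)
        else:
            price = 0.01  # assumed minimum price for the last position
        results.append((name, round(price, 2)))
    return results

print(run_auction([("A", 4.00, 8), ("B", 3.00, 10), ("C", 2.00, 6)]))
# → [('A', 3.76), ('B', 1.21), ('C', 0.01)]
```

Note how quality lowers prices: A bids $4.00 but pays $3.76, and a higher quality score would lower its price further, since the price is the runner-up's rank divided by the winner's own quality score.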

Giving consumers control over ads

In her post to the Official Google Blog this morning, Susan Wojcicki, VP of Product Management, announced that we are making interest-based advertising available in beta for our AdSense partner sites and YouTube. Interest-based advertising uses information about the web pages people visit to make the online ads they see more relevant. Relevant advertising, in turn, has fueled the content, products and services available on the Internet today.

Providing such advertising has proven to be a challenging policy issue for advertisers, publishers, internet companies and regulators over the last decade. On the one hand, well-tailored ads benefit consumers, advertisers, and publishers alike. On the other hand, the industry has long struggled with how to deliver relevant ads while respecting users' privacy.

Last month, the U.S. Federal Trade Commission released its principles for online advertising. Likewise, other organizations interested in consumer protection and privacy also recently issued guidelines: The Network Advertising Initiative released its 2008 Self-Regulatory Code of Conduct in December; the Center for Democracy and Technology released its Threshold Analysis for Online Advertising Practices in January; and the Internet Advertising Bureau in the U.K. announced its Good Practice Principles last week. There is a consistent message in all of these guidelines: Consumers need and deserve greater transparency and choice when it comes to online advertising.

As Google prepared to roll out interest-based advertising, we talked to many users, privacy advocates and government experts. By listening to them and by relying on the creativity of our engineers, we built a product that's not only consistent with industry groups' privacy principles, but also goes beyond their requirements. We are pleased that our launch of interest-based advertising includes innovative, consumer-friendly features to provide meaningful transparency and choice for our users:
  • Transparency in the right place and at the right time. When users see online ads today, they often don't know what information is being collected, who provided the ad, and sometimes who the advertiser is. We already clearly label most of the ads provided by Google on the AdSense partner network and on YouTube. With one click on the labels, users can get more information about how we serve ads, and the information we use to show ads. This year we will expand the range of ad formats and publishers that display labels that provide a way to learn more and make choices about Google's ad serving.
  • Meaningful, granular, and user-friendly choice. For the first time, people will have a say in the types of ads they see by using our new Ads Preferences Manager. With this tool, users can view, add and remove the categories that are used to show them interest-based ads (sports, travel, cooking, etc.) when they visit one of our AdSense partners' websites or YouTube. To provide greater privacy protections to users, we will not serve interest-based ads based on sensitive interest categories. For example, we don’t have health status interest categories or interest categories designed for children.
  • Tools that respect users’ choices. With one click in the Ads Preferences Manager or in the advertising section of our Privacy Center, users can opt out of interest-based ads altogether, although it means they will probably see advertising that's less relevant and useful on our partners' websites or YouTube. The opt-out is achieved by attaching an "opt-out cookie" — a small file containing a string of characters that stores a preference for opting out — to a user's browser. Opt-out cookies in the industry, however, have traditionally not been permanent. So Google's engineers also developed tools to make our opt-out cookie permanent, even when users clear other cookies from their browsers.
  • Transparency beyond privacy policies. With interest-based advertising, we’re continuing to explore new ways of communicating with our users on privacy. We've revamped the advertising section of our Privacy Center. And the Ads Preferences Manager features a video, embedded below, that explains in plain language how interest-based advertising works. All of the videos on the Google Privacy Channel on YouTube are open for comment and we look forward to hearing feedback from our users.
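The mechanics of an opt-out cookie are straightforward to sketch. The example below is illustrative only: the cookie name, value, and domain are made up, not Google's actual opt-out cookie.

```python
from http.cookies import SimpleCookie

def build_opt_out_cookie():
    """Build a Set-Cookie header value for a hypothetical opt-out cookie."""
    cookie = SimpleCookie()
    cookie["id"] = "OPT_OUT"                     # the preference itself
    cookie["id"]["domain"] = ".example-ads.com"  # hypothetical ad domain
    cookie["id"]["path"] = "/"
    # A far-future expiry approximates a "permanent" opt-out. Clearing
    # browser cookies would still remove it, which is why the post
    # mentions extra tools to keep the opt-out in place.
    cookie["id"]["expires"] = "Fri, 01 Jan 2038 00:00:00 GMT"
    return cookie["id"].OutputString()

print(build_opt_out_cookie())
```

The server sends this value in a `Set-Cookie` response header; on later requests, the presence of the cookie tells the ad server not to serve interest-based ads to that browser.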

We’ve built our business by earning and keeping the trust of our users. And we’ll continue our dialogue with them and with other stakeholders as we develop new products to make the ads we show our users more relevant and useful.

Encouraging European e-commerce

Europe benefits from a single currency and a single market, yet few Europeans use the Internet to shop for deals outside of their home countries. The problem isn't an aversion to e-commerce. More and more Europeans are buying online; 33% of Europeans shopped online last year, up from 27% in the previous year. Meanwhile, the figure for purchases abroad remained almost stable at a mere 7%, according to a report released today by the European Commission.

A provocative study from the Commission's Directorate General for Health and Consumers dissects the barriers to cross-border e-commerce. The Commission found that only about a third of European consumers said they were willing to purchase goods and services in another language -- and only about two-thirds of European online merchants are prepared to sell in more than one language.

Google is working hard to help consumers and merchants overcome these language barriers. Free tools like Google Translate and Google Dictionary allow shoppers to navigate the continent's fragmented, multilingual retail universe. Google Toolbar also contains a translation feature. With a single click, these tools make foreign language websites understandable.

Merchants can use these free tools to add machine translation to their websites. Machine translation is particularly useful for small businesses, which often lack the resources to build multilingual sites and which the Commission says "appear to have been particularly reluctant to embrace the opportunities of e-commerce to sell cross-border."

Technology, of course, cannot by itself create a seamless single European online market, and language isn't the only barrier to cross-border e-commerce. As the Commission rightly notes, regulators themselves must work to end the continent's differences in consumer, copyright, and tax systems. But better information can provide a big boost. At Google, we'll continue to advance tools that connect consumers and businesses across the European Union's many languages.

Open standards for a smart electric grid

How would you feel if your gas station taped over the meters on the gas pump, preventing you from seeing how much gasoline you had just bought or how much you had to pay? What if you ran your credit card through but didn't see a receipt -- or at least not until the end of the month? This is how we buy electricity today.

But there's good news: Congress recently provided $4.5 billion to build a smarter electricity grid that can empower consumers with information about their electricity consumption. Studies have shown that giving consumers such energy information in real time can reduce energy use by 5 to 15 percent.
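To make that range concrete, here is the arithmetic for a hypothetical household; the usage and rate figures below are illustrative assumptions, not numbers from the studies cited.

```python
# What a 5-15% reduction means for a household using 900 kWh a month
# at $0.11 per kWh (both figures are illustrative assumptions).
monthly_kwh = 900
rate_per_kwh = 0.11

for pct in (0.05, 0.15):
    saved_kwh = monthly_kwh * pct
    saved_dollars = saved_kwh * rate_per_kwh
    print(f"{pct:.0%} reduction: {saved_kwh:.0f} kWh, "
          f"${saved_dollars:.2f} saved per month")
```

Small per-household savings like these add up quickly across millions of homes, which is the policy argument for building consumer information into the grid.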

Edward Lu, Google's Program Manager for Advanced Projects and a former astronaut, testified today on this topic before the Senate Energy and Natural Resources Committee.

Ed stressed that energy information should be provided to consumers in as close to real time as practical. And that information should be provided using open, non-proprietary standards that drive innovation and competition, and that will guard against technology obsolescence as the smart grid evolves.

The smart grid is essentially a nascent energy Internet. Thanks to the open protocols and standards on which it was built, the Internet has grown into a thriving ecosystem, delivering innovative products and services to billions of users worldwide. Applying the same principles of openness to our nation's electric grid would create a smarter platform for products and services, helping consumers conserve energy and save money.

Check out Ed's full testimony and a video recording of the hearing. Also, check out Google PowerMeter to get a preview of the sorts of consumer applications that could be built around a smart grid.