Wednesday, November 25, 2009

Today in Milan

Today in Milan, the Milan Public Prosecutor's Office will make its closing arguments for why four Google employees, including me, should be held personally criminally liable for content created by four Italian high school students and uploaded to Google Video. I have no idea what the prosecutors will say in court today, and my lawyers have told me not to set foot in Italy, so I wanted to provide some factual background on this case.

In terms of timeline, the Prosecutors present their case today, November 25. The Google employees' lawyers will present their defense on December 14 and a verdict should be issued on December 23.

The Judge hearing this case is Judge Magi, who recently convicted 23 Americans, mostly CIA agents, as reported by the New York Times:

In a landmark ruling, an Italian judge on Wednesday convicted a base chief for the Central Intelligence Agency and 22 other Americans, almost all C.I.A. operatives, of kidnapping a Muslim cleric from the streets of Milan in 2003.

http://www.nytimes.com/2009/11/05/world/europe/05italy.html

Today's trial stems from an incident in 2006, when teenagers at a school in Turin filmed themselves bullying a disabled schoolmate and uploaded the video to Google Video. Google removed the video promptly after being notified. Even so, last summer the Public Prosecutor brought the following criminal charges against four Google employees, including me. Each of us faces one or both charges:

Charge A: Criminal defamation against the Vivi Down Association, an association that represents individuals with Down syndrome

Charge B: Failure to comply with the Italian Privacy Code

It should be obvious, but none of us Google employees had any involvement with the uploaded video. None of us produced, uploaded or reviewed it.

The video, shot by a student in a classroom, depicts a boy being harassed by teenagers, including one who makes reference to the Vivi Down Association. A teacher was allegedly present during part of the filming. Four youths, aged 16 and 17, from the Technical Institute in Turin were involved in creating and uploading the video; one of them actually filmed it. The teenagers uploaded it to Google Video, which at the time was Google's online video-sharing service and a host for user-generated content. The Vivi Down Association, and later the family of the boy who was filmed, filed a claim against Google in Milan, which is how Google was initially brought into the case. The family of the boy later withdrew from the case. Google complied with law enforcement requests to help identify the bullies, who were subsequently punished.

The Prosecutor then chose to charge individual Google employees. Today he will present his case.


Tuesday, November 24, 2009

Ciao, Italia!

I won't be attending my trial in Milan in person; I'll be represented by outside counsel. I believe each of my three co-defendants has reached the same conclusion. As for me, I'm under clear instructions from my outside counsel not to set foot in Italy at all. That's a tragedy, since I love Italy. It means I won't be speaking at this privacy conference in Bologna in May, which still seems to be advertising me as a speaker:

http://www.sassuolo2000.it/2009/11/17/bologna-la-privacy-al-tempo-di-facebook-8-incontri-ad-alma-graduate-school/

It also means I won't go hiking with friends in the Dolomites this summer.

Why? Well, Italy has a legal concept that is unknown in Anglo-Saxon countries: namely, that an employee of a company can be held personally criminally liable for the actions or inactions of the corporation he works for. Moreover, Italy has criminalized much of its data protection law, meaning that routine data protection questions can give rise to criminal prosecutions. As everyone in the field of privacy knows, data protection laws are full of sweeping statements that need to be interpreted with judgment and common sense. But imagine the consequences if every data protection decision made by a company can be second-guessed by a public prosecutor with little knowledge of privacy law. Does that mean a data protection lawyer working for a company runs the risk of personal arrest, indictment and prosecution for routine business practices? Well, I guess you can see why I've been advised not to set foot in Italy. I'm sure such prosecutions will remain rare, and perhaps my current prosecution will be the last of its type. But maybe not. And working for one of the world's most visible Internet companies puts me at more risk than most of my colleagues in the field of data protection, as the current prosecution has shown.

Italy is my favorite country in the world to visit. What a shame.

Ciao, Italia!

Monday, November 23, 2009

On Trial in Italy

I'm relieved that the Google "privacy" trial in Italy is finally underway. This week, the Milan Public Prosecutor will make his case why four random Google employees should be held personally criminally liable for a video that some high-school kids in Turin made and uploaded to Google Video.

For me, I've lived under this Sword of Damocles for two years now. It began in January 2008, when I was invited to speak at a privacy conference at the University of Milan. I was approaching the university on foot when I heard someone call my name. I turned around and saw a man in plain clothes, who told me to wait a minute while he spoke into a cell phone; within seconds, I found myself on the sidewalk surrounded by five Italian policemen. I had no idea what was going on. I was scared. I couldn't understand much, but I did understand that they wanted to take my passport, asked me to sign some documents, and wanted to escort me to a judge. I was allowed to place a call to my Italian colleagues at Google, who thankfully were able to rush to the scene and talk to the policemen. I was escorted by the policemen on foot through central Milan, with tourists and locals alike stopping to stare at the scene. My colleagues told the policemen that I was due to deliver a speech at the privacy conference shortly. After much discussion, it was agreed that I would be allowed to deliver the speech, after surrendering my passport and signing the various documents being served on me, and that I would be interrogated by the Public Prosecutor afterwards.

And so, I was allowed to deliver this talk. If I look a little distracted, now you know why. [between us, I had to stop to vomit, but that part has been edited out.]


This whole Italian prosecution has been an ordeal. I just want it to be over soon. After two years, well, it's finally underway.

Guys in Ties, thinking about children and privacy

First, thanks to a bunch of you for sending me notes, encouraging me to keep blogging. I will.

I recently joined a group of privacy experts working with a Spanish foundation dedicated to children's issues to think about how to help protect kids' privacy online, in particular on social networking services. We've just had our inaugural meeting, a brainstorming session, so it's too early to say which approach the group will take. For my part, I recommended a crowd-sourcing approach: encourage (sponsor?) an open-ended contest inviting people to create YouTube videos in which kids talk to other kids about privacy. I doubt a top-down approach would work, with governments or corporations lecturing kids about what they should or should not do online. I think kids will respond more to videos made by other kids, who talk about sharing with their friends, what happens if they share personal stuff with the wrong people, and how to make good choices.

If you have a better idea about how to approach the challenge of sensitizing kids to the privacy risks of posting stuff online, please let me know, and I'll take it to the group.

Sunday, November 22, 2009

I've been taking a break


I've been taking a break from blogging. In case you wondered why, it's because I was rattled to see an Italian public prosecutor scour my blog and print out copies of it to help him indict and prosecute me and some of my Google colleagues under some "privacy" criminal theory. I'm all for free speech, and I love a robust debate of privacy issues, but seeing your own words combed through by a prosecutor looking for evidence to convict you in criminal court is enough to give anyone pause.

I'll start blogging again soon. At least I know I have one reader.

Thursday, April 16, 2009

The Cloud: policy consequences for privacy when data no longer has a clear location

Cloud computing has become one of the more influential tech trends of our day. The Cloud is roughly analogous to remote computing: computing and storage move away from your personal device to servers run by companies. A simple example is the online photo album, which lets users move their pictures off personal computers and into a secure, accessible space on the Web. Some Cloud services, like Hotmail, have been around for roughly a decade, and others have appeared since; almost all of Google's services, for example, run in the Cloud. As these services become more widely used, it's important to ask how our privacy laws and regimes should deal with this new phenomenon.

Some privacy laws, such as the EU Data Protection Directive, base regulation in part on the location of data. If data is in the Cloud, where exactly is that? Data in the Cloud exists within the physical infrastructure of the Internet: on the servers of the companies offering these services, as well as on users' own machines. Cloud services are built on the premise that data held in the Cloud can be accessed and shared from anywhere, at any time, from any Internet-enabled device.

To know the "location" of data in the Cloud, you'd need to understand the architecture of data centers, among other things. Some companies, like Google, have data centers in multiple locations. A data center is a building that houses many, many computers, not too different from the ones you may have in your home. Companies try to pick places that, among other things, have a skilled workforce, reasonable local business regulation, and proximity to low-cost, abundant sources of electricity. They tend not to disclose many specific details about these data centers, for a couple of reasons. First, the data center industry is highly competitive, and companies try not to disclose details that might give competitors a leg up. Second, because users' personal information is stored on these computers, companies take the privacy and security of this data seriously and keep these buildings well secured, so that no one can simply walk out with a computer holding your credit card information. The geographical location of data centers can also be chosen to enhance the speed of a service: serving European users from a European data center, for example, can be faster than having the data cross the Atlantic. Finally, having data centers in different locations allows companies to optimize computing power, automatically shifting work from one location to another depending on how busy the machines are.

Moreover, cloud applications are architected not to lose users’ data and to respond to queries quickly. Applications therefore usually replicate users’ data in more than one place. No Internet user would be happy if they lost access to all their email or calendar information, for example, just because the power goes out in some data center location. Applications may dynamically load balance their users among different data centers, so that the location of a particular user's data may change over time.
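The replication and load-balancing behavior described above can be sketched in a few lines of toy Python. This is purely illustrative, under invented names (`DataCenter`, `CloudService`); it is not a description of Google's, or anyone's, actual architecture. The point it demonstrates is that a user's data lives in several places at once, and that the place serving a given request changes over time.

```python
REPLICATION_FACTOR = 2  # keep each user's data in at least two locations

class DataCenter:
    def __init__(self, name):
        self.name = name
        self.load = 0    # requests currently being served here
        self.store = {}  # user_id -> data

class CloudService:
    def __init__(self, centers):
        self.centers = centers

    def put(self, user_id, data):
        # Replicate to the least-loaded centers, so a single outage
        # (say, a power failure) doesn't cut off access to the data.
        targets = sorted(self.centers, key=lambda c: c.load)[:REPLICATION_FACTOR]
        for c in targets:
            c.store[user_id] = data
        return [c.name for c in targets]

    def get(self, user_id):
        # Serve from whichever replica is currently least busy; the
        # "location" answering the question shifts from request to request.
        replicas = [c for c in self.centers if user_id in c.store]
        best = min(replicas, key=lambda c: c.load)
        best.load += 1
        return best.name, best.store[user_id]

svc = CloudService([DataCenter("eu-west"), DataCenter("us-east"), DataCenter("asia")])
print(svc.put("alice", "photo album"))  # the data now sits in two locations
print(svc.get("alice"))                 # served from the idlest replica
```

Even in this toy model, "where is Alice's data?" has two answers at any moment, and which copy serves her next request depends on load at that instant.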

For all these reasons, it's actually very hard to answer the apparently simple question: "where's my data?" Indeed, it's becoming problematic that existing EU data protection laws were largely written in an era when data had an easily identifiable location. For example, EU law restricts the transfer of personal data outside the EU to any jurisdiction that lacks "adequate" data protection. In the past, "transfer" meant the physical shipment of data, such as sending a computer tape or paper files to an office in a faraway location. Nowadays, however, almost any activity on the Internet involves a transfer of data outside the EU. Sending a document to a colleague in New York, for example, can technically be considered a transfer of data outside the EU. In today's era of connectivity, strict and literal application of these laws would cause more than a headache for companies and regulators: it would cause the Internet to shut down.

In this Internet age, when data flows around the planet at the click of a mouse, everyone agrees we need a better model of privacy protection. Data doesn't start and stop at national borders when it travels on the Information Super-highway. From a privacy perspective, the important question is not "where is my data?" but rather "who holds my data, and what are their privacy policies?" For a user, the important thing is to research and understand the data protection policies of the company that holds the data, regardless of its location.

I’ve looked at various laws around the world, and I’m impressed by the far-sighted model adopted in Canada’s privacy laws. I can’t do better than just quote the Office of the Privacy Commissioner:

http://www.privcom.gc.ca/information/guide/2009/gl_dab_090127_e.asp

"European Union member states have passed laws prohibiting the transfer of personal information to another jurisdiction unless the European Commission has determined that the other jurisdiction offers "adequate" protection for personal information. In contrast to this state-to-state approach, Canada has, through PIPEDA, chosen an organization-to-organization approach that is not based on the concept of adequacy… [U]nder PIPEDA, organizations are held accountable for the protection of personal information transfers under each individual outsourcing arrangement…

Regardless of where the information is being processed - whether in Canada or in a foreign country - the organization must take all reasonable steps to protect it from unauthorized uses and disclosures while it is in the hands of the third party processor. The organization must be satisfied that the third party has policies and processes in place, including training for its staff and effective security measures, to ensure that the information in its care is properly safeguarded at all times. ... [O]rganizations must in their own best interests, as well as those of their customers, do what they can to protect the information."

Canada’s approach works to preserve privacy protections, and to hold data collectors accountable for privacy protections regardless of the location of data. Canada has blazed a trail that will help guide us in the age of the Cloud.

Friday, March 6, 2009

A picture of your house on the Internet for all to see



I did a little OpEd in the French paper Liberation on Google's Street View and privacy. Only fair, I guess, to put a picture of my own house on this blog. I confess, I did hesitate a minute before posting it. In any case, I do believe in taking one's own medicine, or eating one's own dogfood, as the case may be.

A hundred years from now, which advances will have marked our era? Our political progress, such as the creation of the European Union? Scientific breakthroughs?
In our view, if there is one advance in the making since the end of the twentieth century that could mark our generation's passage on earth, it is the sharing of knowledge. Brought about by the Internet, the democratization of access to information at the turn of the millennium is a revolution that will probably be remembered for a very long time. In an opinion piece published on February 13 in Libération, Odile Belinga and Etienne Tête raised a number of criticisms of Street View, the new Google Maps feature that lets users navigate virtually through major French cities. The two authors claim that the service does not respect individuals' privacy and compare it to video surveillance.
Every day, Street View lets thousands of users navigate in 360 degrees through photographs taken in the street at eye level. Internet users around the world can travel virtually: prepare their next trip to Rome, stroll down the Ramblas in Barcelona, explore their own city, or simply scout the address of their next apartment. It is also a remarkable tool for showcasing a city's heritage or promoting a merchant's business. The point is to contribute to the open and beneficial ecosystem the Internet makes possible. The many partners who have chosen to join the service (Télérama, Cityvox, the Paris Convention and Visitors Bureau…) have understood this well.
Does Street View respect privacy? The question is entirely legitimate. And the answer is yes. Let us first recall an obvious point: on the Internet, information, like the competition, is just one mouse click away. In other words, without users' interest and trust, a site is worth very little. And that trust must not be betrayed.
The photographs displayed in Street View are perfectly lawful. They contain only images of public roads and reveal no information that was not already visible to passers-by. Arguments that a mapping service like ours cannot use such images in the name of "intimacy" fundamentally call into question the very notion of public space. Indeed, they distort the sphere of intimate life to which the law rightly grants heightened protection.
Street View images are the same as those any passer-by could take in the street with a camera. Images of this kind, covering cities around the world, are already published in all sorts of formats on the Web. Aware that this service gathers such images in one place, Google voluntarily decided to take additional precautions by creating technology that automatically blurs faces and license plates, whose implementation the CNIL has, moreover, welcomed. Going further, in the case of an unblurred or imperfectly blurred face, anyone can request removal of the images in question by clicking a simple button. The photos carry no date or time and are not real-time views. In short, anything but surveillance cameras!
Let us be curious, let us question; that is what drove our exchanges with the CNIL before Street View launched in France. But let us not fear, as a matter of principle, progress and the technological advances it brings. Take the recent example of Google Flu Trends: before calling their doctors, many Internet users type "flu symptoms" into their search engine. Multiplied across millions of individuals, that query allowed Google to develop a flu-outbreak forecasting tool capable of running up to ten days ahead of the health authorities, simply by observing the geographic areas reported in connection logs. Let us be curious, let us be vigilant, but let us not be afraid of the Internet.
Far more than a vehicle for threats, which are as real on the Internet as in the physical world, it is above all an extraordinary tool that makes our daily lives easier.

Monday, February 9, 2009

Lead Data Protection Authority

Lead Data Protection Authority:  how EU data protection regulation can catch up with other areas of European law

Being a global company means having employees, partners and users who interact on a worldwide basis without geographical or jurisdictional limitations.  Maximising efficiency is a key driver so most global companies attempt to adopt a consistent way of doing business internationally.  Whilst cultural differences may have an impact on some activities, economic globalisation encourages a uniform and coherent approach to most operations, from sales practices to compliance protocols.  However, global companies still have to comply with diverse laws across jurisdictions and be accountable to many national regulators.  All of these trends become even more pronounced for companies doing business over the Internet. 

In the European Union, some industry sectors benefit from regulatory regimes specifically aimed at simplifying how players within those sectors comply with cross-jurisdictional rules. For example, pharmaceutical companies may rely on simplified procedures to have their products evaluated and authorised across the EU. One solution is the "decentralised procedure", by which a company goes directly to a national authority to obtain permission to market its products in that member state, and then seeks to have other member states accept the first member state's approval. This procedure applies where an authorisation for a pharmaceutical product does not yet exist in any member state.

Alternatively, pharmaceutical companies may in some instances rely on the mutual recognition procedure, by which the assessment and marketing authorisation of one member state should be mutually recognised by other concerned countries within the EU.  Under the mutual recognition procedure, the pharmaceutical company submits its application to the chosen country, which will carry out the assessment work and approve or reject the application.  The other countries then have 90 days to decide whether they approve or reject the decision made by the original country.

Similarly, financial services firms can seek authorisation in one member state and obtain “passport rights” to enable them to carry on financial services in other member states.  When a financial services provider wishes to establish a branch or provide services in several EU countries, notification of such intention is submitted to the regulatory authority in the home member state.  This notification is then forwarded to the regulator in the member states in which the operator intends to open the branch or provide its services. As a result, a particular product licensed in the home member state becomes automatically recognised in all other member states and may therefore be sold across borders free of undue bureaucratic controls.

Some areas of law – such as e-commerce – also follow the “country of origin” principle.  This principle establishes that where an action or service is performed in one country but received in another, the applicable law is the law of the country where the action or service is performed.  For example, if a company sells products online across Europe but it is formally established as a limited company under the laws of one member state, that commercial activity will normally be subject to the law of that country.

Data protection regulatory complexities

The jurisdictional rules under the EU data protection directive do not work like that.  When a company handles personal information about employees, customers, suppliers and others, it will be subject to the different privacy and data protection regimes in force in each EU jurisdiction.  In the European Union, data protection laws will establish a number of very specific requirements and compliance will be overseen by the data protection authorities of each member state.  This means that the use of personal information by that company will be regulated in slightly different ways across the EU.

All European directives pursue the same overriding objective: achieving harmonisation across EU member states whilst respecting the national legislative power of each jurisdiction.  This is normally achieved by establishing a set of principles that each member state incorporates into its own legislation within the parameters of the directive.  When a directive, like the 1995 data protection directive, creates a complex regulatory regime involving an independent regulator, member states devise suitable structures that provide for the establishment and operation of that regulator.

This approach to data protection regulation has caused a number of complexities that undermine the two-fold aim of the directive, namely protecting the fundamental rights and freedoms of natural persons and facilitating the free flow of personal data between member states. The fact that laws and regulators differ makes pan-European compliance more difficult, and hence less effective. At the same time, disjointed regulatory approaches create inefficiencies, business barriers and unnecessary expense for companies seeking to comply with all applicable laws and regulations.

The lead authority concept

Whilst legislative harmonisation may not be achieved without radical constitutional changes, the experience of simplified oversight in some industry sectors shows that adopting a lead regulator approach is not only possible but desirable.  The most promising step in this direction within the data protection regime is the “lead authority” concept that was created for the purpose of assessing and approving Binding Corporate Rules (“BCR”) applications.  In 2005, the Article 29 Working Party adopted a co-ordinated approval mechanism that allows companies seeking the approval of their BCR to fast-track their submissions through all of the relevant EU data protection authorities.  This mechanism entails choosing an “entry point” data protection authority which will be the official point of contact with the candidate until the BCR are ready for approval in that country, and then will assist the relevant organisation to gain approval throughout the European Union.  More recently, a group of data protection authorities within the Article 29 Working Party launched the BCR mutual recognition procedure, so that approval by one authority will automatically lead to approval of the same BCR by the others. 

Whilst for some organisations it may be obvious which data protection authority should act as the lead authority, where it is not clear which authority should become the entry point, the co-ordinated approval mechanism establishes that organisations must consider the following factors to determine the most appropriate data protection authority:

· The location of the corporate group's European headquarters or office with data protection responsibilities.

· The location of the company which is best placed to lead the BCR application and, if necessary, enforce compliance.

· The place where any key operational decisions in terms of the purposes and means of the data processing are made.

· The EU country from which most international transfers originate.

Extending the concept beyond BCR

Both the co-ordinated approval mechanism for BCR and the mutual recognition procedure are contributing to making BCR a much more credible and attractive option for organisations using personal data on a global basis.  The fact that the approval stage itself focuses on meeting one single set of standards and expectations – even when these are high – allows those organisations to concentrate their compliance efforts in a consistent and effective way.  In other words, companies can devote their attention to ensuring that they apply the right standards and achieve a workable level of privacy and data protection, rather than to dealing with the diverse expectations of a plethora of similar regulators.

Given that BCR systems include policies and procedures affecting the whole range of data protection obligations and rights, it should also be possible to take the lead authority concept beyond BCR and apply it to data protection compliance generally.  The criteria to determine the most appropriate data protection authority for BCR applications could also be used to identify the most suitable authority overall.  If the single regulator idea has worked in heavily regulated sectors like health care and banking, it is not inconceivable that the same idea could work very effectively in the area of data protection compliance.

If this were the case, global companies collecting, using and sharing data in the EU could not only benefit from the harmonisation of legal standards but from the simplification of regulatory activities across the EU.  The national regulators themselves would be able to operate in a much more focussed way.  These efficiency gains would ultimately translate into a greater and more realistic level of protection for individuals.  So the case for a lead data protection regulator to oversee the data activities of pan-European organisations is one that the EU data protection authorities themselves, as well as the EU Commission, should be making their own.  

Thursday, January 15, 2009

Launching another "global" forum to talk about privacy

There is a new buzz these days in privacy circles: the idea of global standards seems to be gaining momentum. On January 12, privacy commissioners, along with a handful of invited academics, advocates and CPOs, met in Barcelona for an inaugural meeting to launch work on a "Joint Proposal for a Draft of International Standards for the Protection of Privacy and Personal Data." http://www.privacyconference2008.org/adopted_resolutions/STRASBOURG2008/resolution_international_standards_en.pdf

There have been several very serious attempts at developing international, or regional, privacy standards. The oldest, and perhaps most successful, was the OECD Privacy Guidelines of 1980. Essentially all privacy laws in the world today derive from the OECD's work. The OECD was so successful because it kept the guidelines at a sufficiently high level that they were not rendered obsolete by technological developments, and because it refrained from mixing implementation issues into them, wisely recognizing that its member countries have very different legal and regulatory regimes.

The EU Data Protection Directive of 1995 is probably the most complete and detailed set of regional privacy laws in the world.  Because the Directive was very focused on European Common Market issues, it took great strides to harmonize pan-European regulatory and implementation issues.  Since many of these implementation issues, such as the mandatory creation of an "independent" data protection authority, are unique to the European legal and regulatory context, the Directive itself is not suitable for broad global adoption, except in countries with European colonial traditions, like Hong Kong.  

APEC continues its work on a Privacy Framework, building on the OECD Privacy Guidelines and adding new and effective concepts of "accountability" and "harm".  APEC is the most exciting initiative underway anywhere in the world in terms of new thinking about how to move forward on global privacy standards.  Singapore, as this year's revolving host country, will host further meetings to build on the strong progress that's been made in past years.  

I attended most of this week's meeting in Barcelona. It's too early to tell whether this initiative, sponsored by the data protection commissioners, will have legs in terms of moving the debate forward. The inaugural meeting on January 12 was mostly attended by Europeans. The documents cited as reference points were mostly European. The overwhelming majority of participants were European data protection authorities, who are naturally very familiar with the EU Data Protection Directive and come to the table imbued with the European approach. A sprinkling of North Americans rounded out the participants, which left me thinking that this "global" meeting represented countries with something like 10% of the global population. This particular initiative will sadly fail in the international arena if it simply turns into an exercise in which European commissioners try to convince the rest of the world to adopt something like the EU Data Protection Directive. They've already been doing that for over a decade, so there's little incremental benefit in continuing down that path.

I think the world needs minimum international privacy standards, as I've blogged many times before. OECD and APEC are also promising forums to advance the debate.  In parallel, Europe will continue its reflections on how to modernize its own data protection concepts, and perhaps, streamline some of its rather inefficient bureaucracy.  Europe would certainly be more credible as a global leader, if it got its own data protection house more up to date and efficient.  [I'll be contributing to that effort in a separate forum.]  In the meantime, if I were from a country with no pre-existing tradition of privacy laws, I would be looking to the OECD and APEC for inspiration.  In any case, competition is good, even in the sphere of privacy policy thinking.  

Wednesday, October 29, 2008

Lessons from the failure of global financial regulation

The financial crisis has everyone talking about global financial regulation. Why didn’t regulations work? And how can regulation be reformed to prevent future melt-downs? Who should regulate in a global context? In a sense, these are the same questions I’ve been pondering for years, in the context of global privacy regulation. Like many people in the privacy community, I’ve been calling for better global privacy standards now, so that we’re not faced with a crisis later.

What lessons have we learned from the financial regulatory crisis that are relevant for privacy?

The issues are global. The crisis is global. Financial and data flows are global. Money, in all its diverse forms, flows across borders, making all of finance inter-connected. Global financial flows are now essentially digital data traffic. When it comes to money, and data, countries are not islands, as Iceland has clearly demonstrated. And if there’s anything that flows globally even more quickly than money, it’s data.

You can identify problems before they turn into crises. In retrospect, the problems were pretty obvious, even if people were enjoying the party at the time too much to want to sober up enough to confront them. It’s fashionable to claim that you can only identify a bubble in retrospect. I think that’s nonsense: I knew Florida condos were a bubble when my house painter bought a condo there, on which the annual maintenance fees alone exceeded his annual income, as he proudly told me, but he was unworried, “because real estate prices only go up.” Similarly, in the world of privacy, we already know what the issues are… so, the only real question is whether we need to wait for a crisis to muster the willpower to drive change.

Regulations that are out-of-date are useless. The financial crisis is exposing lots of regulations from other eras that have proven useless. I hardly need to remind readers of the bizarre patchwork of regulations that apply differently, or not at all, to banks, to investment banks, to special financial vehicles, to hedge funds, etc. Similarly, much of the world’s privacy regulations were designed for a pre-Internet world. Having regulations that are out-of-date means that they are either not applied at all, or applied poorly, or simply “re-interpreted” according to the tastes of individual regulators, like the German “regulator” who blithely declared all search engines to be “illegal”, whatever that means. So, having European data protection regulations that require things like “prior authorizations” from “supervisory authorities” before an international transfer of data is quaint (at best), or dangerous (at worst), in the age of the Internet. In fact, I think it’s dangerous to base international data protection rules on obsolete fictions, like the fiction that data flows somehow stop at borders.

Solutions have to be global. Without global solutions, we create the risk of regulatory havens, like tax havens, where actors can engage in regulatory arbitrage, moving from highly-regulated to lightly-or non-regulated spheres, be they countries or industries (e.g., the move from banks to hedge funds). Much of the privacy debate in recent years has been almost exclusively trans-Atlantic. For example, if you read the work of the EU Working Party data protection regulators over the last decade, you would come away with the impression that they are obsessed with privacy issues of US companies and the US government, while almost completely ignoring any privacy issues relating to data flows to or from anywhere else on the planet, such as India, to cite but one example. But surely, even EU data protection authorities in the anti-American ideological camp (perhaps I should use the German word “Anti-Amerikanismus”) will recognize that the US provides much more solid legal protections for personal data than the vast majority of countries on the planet. So, the obsession with the trans-Atlantic data flows issues is actually becoming dangerous, if it blinds us to the global nature of data flows. That’s one reason why I’m so excited about the APEC initiative, a process where many countries with no tradition of privacy laws are coming together to define privacy standards that are up-to-date, multi-national, and forward-looking. APEC is the most positive thing to happen in the world of global privacy standards since the EU Data Protection Directive of 1995.

Enforcement has to be local. While regulations need to be thought of in global terms, enforcement has to be local, to remain anchored in local legal and regulatory traditions. Some have suggested that we should create “super-regulators” with global mandates, like a mini-UN agency. Personally, I think international bodies have a strong role to play in driving forward international standards, but I’ve watched too many international meetings descend into farce to have much hope that they can function as day-to-day regulators. Moreover, different countries cannot have the same regulatory structures, often because of fundamental constitutional reasons. The US simply cannot have an independent Federal Data Protection Authority in the French mode, because the US Constitution wouldn’t allow it. So, calls for global harmonization of regulatory structures are doomed. The French can try to convince French-speaking Ivory Coast of the need to create a French-style data protection authority, and they may succeed, but that’s not a formula for global success. Whether that’s good for the Ivory Coast is another question entirely. The Spanish can try to convince Spanish-speaking Colombia of the need to create a Spanish-style data protection authority, and they may succeed, but they can’t expect a country with a very different constitutional structure, like the US, to follow that lead. There are some people who honestly believe that you can’t have privacy without an EU-style data protection authority…well, hey, they might want to open their eyes wider.

Regulatory experimentation is a good thing. No one really has all the answers. The US experimented with Security Breach Notifications laws, and they generally seem to work, so Europe is adopting them too. Europe experimented with the creation of dedicated privacy Data Protection Authorities, and many countries around the world, from Argentina to New Zealand, have adopted them since. Maintaining some level of regulatory experimentation, even as we move towards global privacy standards, is a healthy foundation for the innovation in privacy frameworks that we need.

There’s no “Mission Accomplished” moment. Moving towards global privacy standards will be a multi-year process, with steps forward, and back, with vigorous debates, with ideology, with pragmatism, with passion. It’s a process, hopefully with progress in a more or less straight line, towards ensuring better privacy protections in our new global reality. Some people will stress the need for a legal framework and legal enforcement powers; others will stress the usefulness of self-regulatory standards. That’s fine, and it reflects traditions: some peoples expect the government to solve most of their problems; others expect the private sector to do most of the work. One thing is certain; we’ll need to carry on this debate virtually, without expensive global summits or conferences, since thanks to the global financial crisis, none of us can afford to travel anymore. Oh well: blogging is great and free.

Friday, September 19, 2008

Why would Germans claim their "privacy" laws prevent them from publishing a list of victims of Nazi terror?

There was a short report in the BBC today which struck me:  

"The federal archive in Berlin has for the first time compiled a list of some 600,000 Jews who lived in Germany up to 1945 and were persecuted by the Nazis.

The names and addresses, which took four years to compile, will be made available to Holocaust groups to help people uncover the fate of relatives.

Archive officials from the Remembrance, Responsibility and Future Foundation said the list was not yet definitive and would require further work.

It will not be released to the public because of Germany's privacy laws, but will be passed on to museums and institutions, including Israel's national Holocaust memorial, Yad Vashem.

"In handing over this list, we want to make a substantial contribution to documenting the loss that German Jewry suffered through persecution, expulsion and destruction," said Guenter Saathof, the head of the foundation."

I'm a privacy legal expert, and it's baffling to me why German "privacy" laws would prevent this list from being published to the Internet.   This is a valuable historical document.  Putting it on the Internet would allow people around the world to study it.  I would like to see if my grandfather is on the list.  I could check if his address in Berlin was indeed correct.  I think this information belongs to humanity.  

Now, of course, I can imagine certain privacy issues.  A very very small number of people included in the list may still be alive.  Privacy laws are only meant to protect living human beings, after all, not dead people or their reputations after death.  Other laws, like libel laws, can apply after death, but privacy laws cannot.  So, I would call on the Foundation to publish its work on the Internet.  I think it is wrong to cite "privacy" laws as a reason not to make this information public.   

Because, after all, whose "privacy" are we protecting now, for a list which includes names and addresses from something like 70 years ago, and most of whom have been dead for over half a century?

This is the sort of nonsense that gives German privacy law a bad name.     

Friday, August 29, 2008

Relax: the Faroe Islands have adequate data protection

Lots of people in Europe are trying to figure out how to reduce bureaucracy and red tape.  Let's face it:  we Europeans face some of the highest tax burdens in the world, with some of the highest numbers of public servants as a percentage of the general population anywhere on the planet.  So, let me pick a little example, to make a point. 
 
In this Internet age, when data flows around the planet at the click of a mouse, everyone agrees we need to be talking about global privacy standards.  Data doesn't start and stop at national borders when it travels on the Information Super-highway.  So, all the time and effort that has been spent in recent years, trying to segregate the world's countries into "adequate" and "not adequate" regimes in terms of data protection, has become largely obsolete and pointless.  Data doesn't stop, take a look around, and wait to find out if the European Commission has categorized a country as having "adequate" data protection.   The whole process is becoming a bit tired and irrelevant.  Last year, the European privacy regulators adopted an opinion, concluding that Jersey and the Faroe Islands have "adequate" data protection. 
 
Indeed, Jersey and the Faroe Islands.  I haven't been to either.  I'm sure they're lovely places.  I think they do fishing in the Faroe Islands.  As for Jersey, I have some sense of the kind of data that goes to places that are known as international tax havens.  International tax havens as a rule have "privacy" laws, and it's pretty obvious why.  I'm perfectly prepared to accept that these islands have solid data protection laws.  But why aren't we talking about more important topics, like Japan, for example, to name a country that is widely viewed as having very strong data protection practices, even if they're different than Europe's?  
 
Let's face it.  This process, reviewing a country's data protection regime, to ensure that it exactly mirrors Europe's, before awarding it a bureaucratic seal of approval, is a process that is out-of-date.  It doesn't reflect the realities in the world:  under current opinions, Argentina, Romania and Bulgaria are "adequate", but Japan is not!  Does anyone in the real world believe that personal data is better protected in Argentina, Romania or Bulgaria than in Japan?  And if our taxpayer-paid government leaders are spending their time writing opinions about the adequacy of data protection in the Faroe Islands, it's fair to ask whether our taxes are being wisely spent.

Monday, June 16, 2008

Talking to Monsieur Tout-le-Monde

I think privacy professionals need to get out more. I mean, talk to real people, average consumers, normal Internet users. Most of us privacy officers spend most of our time talking to each other, or to privacy regulators, or to privacy advocates, or to company privacy department colleagues. But, at the end of the day, the people whose privacy we're trying to protect are not the specialists. So, I've made a personal resolution to try to spend less time engaging in abstruse academic privacy debates, and more time giving simple privacy advice, for general audiences, with practical tips. Anyway, I'm trying. Here's a radio interview for France Info, which, I'm told, reaches 4 or 5 million people. In French:


http://www.france-info.com/spip.php?article146650&theme=81&sous_theme=109

Sunday, May 18, 2008

Talking about privacy

I think it's really important to contribute to robust public debates about online privacy issues. And I think it's really important to use the YouTube video platform to bring these talks to the widest group of people who might be interested in them. So, here are some of my recent talks: at Harvard, at Google, and at the University of Milan, in that order.

http://youtube.com/watch?v=JNu1OtkWrOY

http://youtube.com/watch?v=2IKBke1puFw

http://youtube.com/watch?v=ZkN12ZR9dvE




Friday, February 15, 2008

Can a website identify a user based on IP address?

There is a public debate about whether IP addresses should be considered to be “personally-identifiable data” (to use the US phrase) or “personal data” (to use the European phrase). The question is: when can a person be identified by an IP address? This is a question of significant import, since it’s relevant to every single web site on the planet, and indeed to every single packet of data being transferred on the Internet architecture. I’ve blogged about this before, but the debate has evolved:

http://peterfleischer.blogspot.com/2007/02/are-ip-addresses-personal-data.html

Last year, the Article 29 Working Party of EU data protection authorities published an official Opinion on the concept of personal information which included a thorough analysis of what is meant by “identified or identifiable” person. The Opinion pointed out that someone is identifiable if it is possible to distinguish that person from others. The recitals that precede the EU data protection directive explain that to decide which pieces of information qualify as personal information, it is necessary to consider all the means likely reasonably to be used to identify the individual. As the Working Party put it, this means that a mere hypothetical possibility to single out an individual is not enough to consider that person as identifiable. Therefore, if taking into account all the means likely reasonably to be used, that possibility does not exist or is negligible, a person should not be considered as identifiable and the information would not be considered as personal data.

Two recent decisions from the Paris Appeals Court followed this logic. The Court concluded that 'the IP address doesn't allow the identification of the persons who used this computer since only the legitimate authority for investigation (the law enforcement authority) may obtain the user identity from the ISP' (27 April ruling). The Court recognized in the same decision that 'it should also be reminded that each computer connected to the Internet is identified by a unique number called "Internet address" or IP address (internet protocol) that allows to find it among connected computers or to find back the sender of a message'. In its 15 May ruling, the Court considered that 'this series of numbers indeed constitutes by no means an indirectly nominative data of the person in that it only relates to a machine, and not to the individual who is using the computer in order to commit counterfeit.' The Court conclusion was then that this collection of IP addresses does not constitute a processing of personal data, and consequently was not subject to CNIL prior authorization, as required by the French Data Protection Act. The CNIL has protested loudly that these court decisions are incorrect, but the CNIL’s own position of declaring “all” IP addresses to be personal data, regardless of context, seems to be incorrect to me.

Paris Appeal Court decision - Anthony G. vs. SCPP (27.04.2007): http://www.legalis.net/jurisprudence-decision.php3?id_article=1954
Paris Appeal Court decision - Henri S. vs. SCPP (15.05.2007): http://www.legalis.net/jurisprudence-decision.php3?id_article=1955
IP address is a personal data for all the European DPAs (2.08.2007): http://www.cnil.fr/index.php?id=2244


Let’s take Google as an example. Like all websites, Google servers capture the IP addresses of their visitors. If a user is using non-authenticated Google Search (i.e., not using a Google Account to log in), then Google collects the user’s IP Address along with the search query and the date and time of the query. Can Google determine the identity of the person using that IP Address on the basis of that information alone? No. The IP Address may locate a single computer, or it may locate a computer network using Network Address Translation. Where the IP Address locates a single computer, can Google identify the person using that computer? The answer is still “no”. The IP Address makes it possible to send data to one specific computer, but it does not disclose which actual computer that is, let alone who owns it. To get to that level of granularity, it would be necessary for Google to ask the ISP that issued the IP Address for the identity of the person who was using it. Even then, the ISP can only identify the account holder, not the person who was actually using the computer at any given time.

Also, the ISP is prohibited under US law from giving Google that information, and there are similar legal prohibitions under European laws. Surely, illegal means are not “reasonable” means in the terms of the Directive.

So the reality is that like any other web site on the Internet that logs the IP Address of the computer used to access that site, the chances of Google being able to combine an IP Address with other information held by the ISP that issued that IP Address in order to identify anyone are indeed negligible.

However, let’s hypothesize for now that Google could ask the ISP for that information. Could the ISP give Google the identity of the person? Again, the answer is “hardly.” Why is it so difficult? First, an ISP can only link an IP Address to an account. That means that if there are multiple people, like a family, logging into the same account, only the account holder’s name is associated with the IP Address.

Second, ISPs are given a finite number of IP Addresses to assign to their subscribers. At this point there are not enough IP Addresses to cover the number of users who wish to access the Internet, so many ISPs have resorted to dynamic IP Addresses. This means that a user could be assigned a different IP Address as often as every time they access the Internet. In order for the ISP to track the account that was connected to an IP Address, the ISP needs the actual date and time of use.
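To make the point concrete, here is a minimal sketch of the lookup an ISP would have to perform. The lease log and account names are entirely hypothetical; the point is simply that under dynamic assignment, an IP address alone is ambiguous, and the ISP needs a timestamp to resolve it to an account.

```python
from datetime import datetime

# Hypothetical DHCP lease log: (ip, lease_start, lease_end, account_holder).
# Under dynamic assignment, the same IP is reused by different subscribers.
LEASES = [
    ("203.0.113.7", datetime(2008, 2, 1, 8, 0), datetime(2008, 2, 1, 20, 0), "account-1042"),
    ("203.0.113.7", datetime(2008, 2, 1, 20, 0), datetime(2008, 2, 2, 9, 0), "account-2217"),
]

def account_for(ip, at):
    """Return the account that held `ip` at time `at`, or None if unknown."""
    for lease_ip, start, end, account in LEASES:
        if lease_ip == ip and start <= at < end:
            return account
    return None  # without a matching lease, the IP identifies nothing

# The same IP maps to two different accounts on the same day:
print(account_for("203.0.113.7", datetime(2008, 2, 1, 12, 0)))  # account-1042
print(account_for("203.0.113.7", datetime(2008, 2, 1, 23, 0)))  # account-2217
```

And even then, the lookup yields an account holder, not the person at the keyboard.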

Finally, almost all big organizations have their own private network that sits behind a firewall. They may use static or dynamic IP addresses, but in either case these are not visible outside the organization, because they use Network Address Translation (NAT). NAT enables multiple hosts on a private network to access the Internet using a single public IP Address. NAT is also a standard feature in routers for home and small-office Internet connections.
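A small sketch using Python's standard `ipaddress` module illustrates this. The internal addresses below come from the private (RFC 1918) ranges that NATed networks use; the "shared public IP" is a made-up documentation address standing in for the NAT gateway that a website's logs would actually record.

```python
import ipaddress

# Hosts behind NAT use private (RFC 1918) addresses; a website only ever
# sees the single shared public address of the NAT gateway.
internal_hosts = ["192.168.1.10", "192.168.1.11", "10.0.0.5", "172.16.4.2"]
shared_public_ip = "203.0.113.50"  # hypothetical gateway address

for host in internal_hosts:
    # Private addresses are never routed on the public Internet.
    assert ipaddress.ip_address(host).is_private

# From the website's perspective, all four hosts are indistinguishable:
visible = {shared_public_ip for _ in internal_hosts}
print(visible)  # a single address, many users behind it
```

So an IP address in a server log may stand for one machine, or for an entire office.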

So again, on the balance of probabilities and taking into account any factors identified by the Working Party as relevant, the most obvious conclusion is that the IP Addresses obtained by Google and other websites are not sufficiently significant or revealing to qualify as personal data from the point of view of the EU data protection directive.

Some people have raised the question whether the government/law enforcement can identify an individual user from an IP address from Google’s logs. Google on its own cannot tie any IP to any specific ISP account or any specific computer. We simply know that the IP address locates a computer that is accessing our system. We don’t know who is using that computer. So, in order for someone to tie the IP to an account holder, there have to be at least two subpoenas issued: one to Google and a separate one to the ISP.

Others have suggested that IP addresses should be considered “personal data”, on the mistaken understanding that looking up an IP address in a “whois” directory allows IP addresses to be tied to identifiable human beings. But in reality, if you look up an IP address in a whois directory, you usually get the name of the organization that manages the IP address. So, normally, Google could determine that a user’s queries come from a particular IP address owned by, say, Comcast, but Google has no way of knowing the name or organization of the human being behind the IP address.
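A sketch makes this concrete. The whois response text below is a simplified, hypothetical excerpt (real responses vary by regional registry), but the structure is representative: the record names the organization that manages the address block, not any subscriber.

```python
# Hypothetical excerpt of a whois response for an IP address block.
# Real registry output differs in format, but names an organization.
WHOIS_RESPONSE = """\
NetRange: 203.0.113.0 - 203.0.113.255
OrgName: Example Broadband ISP
Country: FR
"""

def managing_org(whois_text):
    """Extract the managing organization; nothing here identifies the end user."""
    for line in whois_text.splitlines():
        if line.startswith("OrgName:"):
            return line.split(":", 1)[1].strip()
    return None

print(managing_org(WHOIS_RESPONSE))  # Example Broadband ISP
```

The lookup ends at the ISP's name; the human being behind the address stays out of reach.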

A different question altogether is whether identifiability should equate to individualization. As discussed above, identifiability is about the likelihood of an individual being distinguished from others. But for this distinction to merit the protection afforded by privacy laws, it must be necessary to establish a link between the person and their right to privacy. For example, during the course of an online transaction between a retail web site and a customer, that customer’s identity will be protected by data privacy laws that impose obligations on the website operator (like seeking the customer’s consent for ancillary uses of customer information) and give rights to the individual (like allowing the customer to opt out of direct marketing). However, if someone who visits the web site for the first time (therefore prior to any transaction taking place) is presented with a local language version of the web site as a result of the geographical identifier associated to the IP Address used to access the site, there will be an element of individualization that does not involve identifying the person. In other words, unless and until that user becomes a registered customer, the web site operator will not be able to identify that individual. But the language appearing on the pages accessed by anyone using that IP Address may be different from the language presented to those using an IP Address associated with a different geographic location.
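The geo-IP example above can be sketched in a few lines. The prefix-to-language table is invented for illustration (real sites consult a geolocation database), but it shows the mechanism: the site individualizes the response based on the visitor's network, while learning nothing about who the visitor is.

```python
import ipaddress

# Hypothetical prefix-to-locale table; real sites use a geo-IP database,
# but the principle is the same: the lookup individualizes without identifying.
GEO_TABLE = {
    ipaddress.ip_network("203.0.113.0/24"): "fr",   # say, a French ISP's block
    ipaddress.ip_network("198.51.100.0/24"): "de",  # say, a German ISP's block
}

def default_language(ip_string, fallback="en"):
    """Pick a UI language from the visitor's network, knowing nothing about the person."""
    ip = ipaddress.ip_address(ip_string)
    for network, lang in GEO_TABLE.items():
        if ip in network:
            return lang
    return fallback

print(default_language("203.0.113.99"))  # fr
print(default_language("192.0.2.1"))     # en (no match, fallback)
```

Nothing in this lookup could satisfy a data protection "right of access" request, because there is no individual to tie the lookup to.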

Should privacy laws apply in this situation? There is an obvious danger in trying to apply privacy laws as we understand them today in terms of notice, choice, access rights or data transfer limitations, to these types of cases. For example, there is no way that websites can provide consumers with a so-called right of access to IP-address-based logs, since such databases provide no way of authenticating a user. Individualization of Internet users is a logical and beneficial result of the way in which Internet technology works and sometimes it is also indispensable in order to comply with legal obligations such as presenting or blocking certain information in certain territories. Attempting to impose privacy requirements to situations that do not affect someone’s right to privacy will not only hamper technological development, but will entirely contradict the common sense principles on which privacy laws were founded. Privacy laws should be about protecting identifiable individuals and their information, not about undermining individualization. No doubt some people think that the cause of privacy is advanced, if data protection is extended to ever-broader categories of numerical locators like IP addresses. But let’s think hard about when these numbers can identify someone, and when they can’t. Black and white slogans are usually wrong. The real world is more complicated than that.

Wednesday, December 5, 2007

Transparency, Google and Privacy

A group called One World Trust sent a survey to Google. A lot of people ask us to fill out surveys. I’m not sure who at Google they sent it to. In fairness, until yesterday, I had never heard of One World Trust, and it’s possible that whoever received it hadn’t either. Since we didn’t respond to their request for a survey, though, Google was ranked bottom in terms of transparency, in particular with regards to privacy. And Robert Lloyd, the report’s lead author, went so far as to say Google “did not co-operate (with the report) and on some policy issues, such as transparency towards customers, they have no publicly available information at all.” All this according to the FT http://us.ft.com/ftgateway/superpage.ft?news_id=fto120420070313216545.

But filling out surveys is not how a company proves transparency to its customers. It does so by making information public. We’ve gone to extraordinary lengths to publish information for our users about our privacy practices. In many respects, I feel we lead the industry. Here are just a few examples:

We were the first search company to start anonymising our search records (a move the rest of the industry soon followed), and we published our exchange of letters with the EU privacy regulators, explaining these issues in great depth. http://googleblog.blogspot.com/2007/06/how-long-should-google-remember.html

We engineer transparency into our technologies like Web History, which allow users to see and control their own personal search history.
https://www.google.com/accounts/ServiceLogin?hl=en&continue=http://www.google.com/psearch&nui=1&service=hist

We’ve also gone to extraordinary lengths to explain our privacy practices to users in the clearest ways we can devise.
Our privacy policies: http://www.google.com/privacy.html
Our privacy channel on YouTube, with consumer-focused videos explaining basic privacy concepts:
http://googleblog.blogspot.com/2007/08/google-search-privacy-plain-and-simple.html
http://googleblog.blogspot.com/2007/09/search-privacy-and-personalized-search.html
Our Google blogs on privacy: http://googleblog.blogspot.com/2007/05/why-does-google-remember-information.html
Our Google public policy blogs on privacy: http://googlepublicpolicy.blogspot.com/

With each of these efforts, we were the first, and often the only, search engine to embrace this level of transparency and user control. And lots of people in Google are working on even more tools, videos and content to help our users understand our privacy practices and to make informed decisions about how to use them. Check back to these sites regularly to see more.

So, really, is it fair for this organization to claim that “on some policy issues, such as transparency towards customers, they have no publicly available information at all”? Perhaps next time, they can follow up their email with a comment on our public policy blog, or a video response on our Google Privacy YouTube channel. Or even send a question to our Privacy Help Site:
http://www.google.com/support/bin/request.py?contact_type=privacy . Well, so much for the report’s claim that Google doesn’t have a feedback link.

Tuesday, October 23, 2007

Online Advertising: privacy issues are important, but they don’t belong in merger reviews

As the European Commission and the US Federal Trade Commission review Google’s proposed acquisition of DoubleClick, a number of academics, privacy advocates and Google competitors have argued that these competition/anti-trust authorities should consider “privacy” as part of their merger review. That’s just plain wrong, as a matter of competition law. It’s also the wrong forum to address privacy issues. If online advertising presents a “harm to consumers”, let’s try to figure out what exactly the harm is, figure out which online advertising practices to change, and then apply those principles to all the participants in the industry. But we shouldn’t bootstrap privacy concerns onto a merger review. That’s like evaluating a merger of automakers by looking at the gas mileage of their cars. We don’t invoke antitrust law to prevent a merger of car companies, because we think the industry should build cars that use less gas.

Some advocates state that online advertising “harms” consumers. So they reason that the merger of Google and DoubleClick would “harm” consumers more, to the extent that it enables more targeted advertising. But these same critics rarely cite specific examples of consumer “harms”, and indeed, I’m having trouble identifying what they might be. The typical use of ad impression tracking now is to limit the number of times a user is exposed to a particular ad. That is, after you have seen an image of a blue car six or seven times, the ad server will switch to an image of a red car or to some other ad. This means that a user will see different ads, rather than re-seeing the same ad over and over again. As someone who is sick of seeing the same ads over and over again on television, I think that’s good for both viewers and advertisers. There are also new forms of advertising enabled by the Internet that may allow for more effective matching between buyers and sellers. Again, I prefer to see relevant ads, if possible. I go to travel sites a lot, and I’m happy to see travel ads, even when I’m not on a travel site. I don’t want to see ads for children’s toys, and I dislike the primitive nature of television when it shows me such blatantly irrelevant ads.

We all dislike unsolicited direct marketing by phone. So, we created a regulatory “do not call” solution. But without knowing which precise practices of online advertising create a “harm”, it’s impossible to discuss a potential solution. Moreover, a website that offers its services or content for free to consumers (e.g., a news site), tries to generate revenue from advertising to pay its journalists’ salaries and other costs. Shouldn’t such websites also have a say in whether they should be forced to offer their free content to consumers without the ability to match ads to viewers according to some basic criteria? It’s very clear (but worth reiterating) that free sites are almost always more respectful of privacy than paying sites, because of the simple fact that paying sites must collect their users’ real identities and real credit card numbers, while free sites can often be used anonymously.

Now, some legal observations relating to European laws on merger reviews. The overriding principle protected by those laws is consumer welfare: referring to those aspects of a transaction that affect the supply and demand for goods/services (i.e., that affect quantity, quality, innovation choice, etc.). The reference in Article 2(1)(b) ECMR to "the interests of the intermediate and ultimate consumers, and the development of technical and economic progress provided that it is to consumers' advantage and does not form an obstacle to competition" must therefore be read in this context – consumer interests are relevant to the merger assessment only for the purpose of assessing whether the degree of competition that will remain post-transaction will be sufficient to guarantee consumer welfare.

The fact that non-competition issues, such as privacy, fall outside the scope of ECMR is consistent with the general consensus that merger control should focus on the objective of ensuring that consumer welfare is not harmed as a result of a significant impediment to effective competition. Introducing non-competition related considerations into a merger analysis (e.g., environmental protection or privacy) would lead to a potentially arbitrary act of balancing competition against potentially diverging interests. Accordingly, policy issues, such as privacy, are not suitably addressed in a merger control procedure, but should be dealt with separately.

Indeed, privacy interests are addressed in Directive 95/46 and Directive 2002/58 (both of which are based on Article 14 EC and Article 95 EC), Article 6 TEU and Article 8 ECHR, and Google must abide by its legal obligations under these instruments. Such instruments are also far more efficient in addressing privacy issues than the ECMR, as they are industry-wide in scope. Internet privacy issues are relevant to the entire industry as they are inextricably linked to the very nature of the technology used by every participant on the Internet. Information is generated in relation to virtually every event that occurs on the Internet, although the nature of the data, the circumstances in which it is collected, the entities from whom and by whom it is collected, and the uses to which it is put, vary considerably. This situation pre-dates Google’s proposed acquisition of DoubleClick and is not in any way specific to it. More importantly, any modification of the status quo in terms of the current levels of privacy protection must involve the industry as a whole, taking account of the diversity of participants and their specific circumstances.

Google has always been, and will continue to be, willing to engage in a wider policy debate regarding Internet privacy. Issues of privacy and data security are of course of great importance to Google, as maintaining user trust is essential to its success. As a large and highly visible company, Google has strong incentives to maintain robust privacy and security practices in order to safeguard user data and preserve user trust. These concerns are one of the reasons why Google has thus far chosen not to accept display ad tags from third parties. The proposed transaction will not change Google's commitment to privacy, and Google is in fact currently developing a new privacy policy to address the additional data gathered through third-party ad serving. Similarly, a number of Google's competitors have announced new and supposedly improved policies to protect consumer privacy, highlighting the robustness of recent competition on privacy issues. There is no reason to suggest that such competition will diminish if Google acquires DoubleClick; to the contrary, such competition appears to be intensifying.

Privacy is an important issue in the world of online ads. But it is not an issue for a competition law review.

Can you “identify” the person walking down the street?


I recently posted a blog on Google’s Lat Long Blog about Street View and privacy.
http://google-latlong.blogspot.com/2007/09/street-view-and-privacy.html

I’d like to add a few personal observations to that post.

Some people might have wondered why Google posted a blog about what a future launch of Street View would look like in some non-US countries, especially since, so far, it only includes images from 15 US cities. We felt the need to respond to concerns that we had heard recently, in particular from Canada’s privacy regulators, that a launch of the US-style Street View in Canada might not comply with Canadian privacy regulations. We wanted to be very clear that we understood that privacy regimes in some countries, such as Canada and, for that matter, much of Europe, differ from the US tradition of “public spaces”, and, of course, that we would respect those differences if and when we launched Street View in those countries.

Basically, Street View is going to try not to capture “identifiable faces or identifiable license plates” in its versions in places where the privacy laws probably wouldn’t allow them (absent consent from the data subjects, which is logistically impossible), in other words, in places like Canada and much of Europe. And for most people, that pretty much solves the issue. If you can’t identify a person’s face, then that person is not an “identifiable” human being in privacy law terms. If you can’t identify a license plate number, then that car is not something that can be linked to an identifiable human being in privacy law terms.

How would Street View try not to capture identifiable faces or license plates? Probably through some combination of blurring technology and reduced image resolution. The quality of face-blurring technology has certainly improved recently, but there are still some unsolved limitations with it. As one of my engineering colleagues at Google explained it to me: “Face detection and obscuring technology has existed for some time, but it turns out not to work so well. Firstly, face recognition misses a lot of faces in practice, and secondly, a surprising number of natural features (bits of buildings, branches, signs, chance coincidence of all of the above) look like faces. It’s somewhat surprising when you run a face recognition program over a random scene and then look closely at what it recognises. These problems are also exacerbated by the fact that you have no idea of scale, because of the huge variations in distance that can occur.”
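To make the "obscuring" half of that pipeline concrete: once a detector has returned a bounding box (the detection step is the hard, error-prone part my colleague describes), blurring it is straightforward. Here is a minimal pure-Python sketch, assuming a grayscale image represented as a list of pixel rows; the function and its parameters are illustrative, not any production system's code:

```python
# Illustrative sketch: box-blur a rectangular region of a grayscale image
# (a list of rows of integer pixel values), as one might do to a face
# bounding box returned by a detector. Names are assumptions for this post.

def box_blur_region(image, x, y, w, h, radius=1):
    """Return a copy of `image` with the box (x, y, w, h) box-blurred."""
    height, width = len(image), len(image[0])
    out = [row[:] for row in image]
    for j in range(y, min(y + h, height)):
        for i in range(x, min(x + w, width)):
            # Replace each pixel with the average of its in-bounds neighbours.
            total, count = 0, 0
            for dj in range(-radius, radius + 1):
                for di in range(-radius, radius + 1):
                    jj, ii = j + dj, i + di
                    if 0 <= jj < height and 0 <= ii < width:
                        total += image[jj][ii]
                        count += 1
            out[j][i] = total // count
    return out
```

The blur itself is easy; the open problems are entirely in deciding *where* to apply it, which is exactly where false negatives (missed faces) and false positives (branches that look like faces) come from.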

Lowering image resolution is another way to avoid capturing identifiable faces or license plates. If the resolution is low enough, identifying them becomes hard or even impossible. Unfortunately, any such reduction in resolution would of course also degrade the things we do want to show, such as buildings. So it’s a difficult trade-off.
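The trade-off is easy to see in code: downsampling is applied uniformly, so the same averaging that makes a license plate unreadable also softens the building behind it. A minimal sketch of 2x2 average downsampling, again on a grayscale image as a list of rows, with all names my own:

```python
# Illustrative sketch of resolution reduction: halve an image's resolution
# by averaging each non-overlapping 2x2 block of pixels. The blur it causes
# falls on faces, plates, and buildings alike - hence the trade-off.

def downsample_2x(image):
    """Return `image` at half resolution via 2x2 block averaging."""
    out = []
    for j in range(0, len(image) - 1, 2):
        row = []
        for i in range(0, len(image[0]) - 1, 2):
            block = (image[j][i] + image[j][i + 1] +
                     image[j + 1][i] + image[j + 1][i + 1])
            row.append(block // 4)
        out.append(row)
    return out
```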

Some privacy advocates raise the question of how to circumscribe the limits of “identifiability”. Can a person be considered identifiable, even if you cannot see their face? In pragmatic terms, and in privacy law terms, I think not. The fact is that a person may be identifiable to someone who already knows them, on the basis of their clothes (e.g., wearing a red coat) plus context (in front of a particular building), but they wouldn’t be “identifiable” to anyone in general. Others raise the issue of whether properties (houses, farms, ranches) should be considered “personal data”, so that their owners or residents could request their deletion from geo services like Google Earth. Last month, various German privacy officials made these arguments in a Bundestag committee hearing. They reasoned that a simple Internet search can often combine a property’s address with the names of the property’s residents. Others see this reasoning as a distortion of privacy concepts, which were not meant to be extended to properties. The consequence of that reasoning would be satellite and Street View imagery of the world full of holes, as some people (disproportionately, celebrities and the rich, of course) would try to block their properties from being discoverable.

Google will have to be pragmatic, trying to solve privacy issues in a way that doesn’t undermine the utility of the service or the ability of people to find and view legitimate global geographic images. I personally would like to see the same standard of privacy care applied to Street View across the globe: namely, trying not to capture identifiable faces or license plates, even in the US, regardless of whether that’s required by law or not. But I recognize that there are important conflicting principles at play (i.e., concepts of “public spaces”), and “privacy” decisions are never made in a bubble.

We’re engaged in a hard debate, inside Google and outside: what does privacy mean in connection with images taken in “public spaces”, and when does a picture of someone become “identifiable”? Can we have a consistent standard around the world, or will we have to have different standards in different countries based on local laws and culture? This isn’t the first time (and I hope, not the last time) that Google has launched a new service, letting people access and search for new types of information. Those of us in the privacy world are still debating how to address it.

I think the decisions taken by the Street View team have been the right ones, even for the US launch, at least at this point in time, and given the current state of technology. But a more privacy-protective version in other countries (and someday, maybe in the US too?) would be a good thing, at least for privacy.

Tuesday, October 16, 2007

I like the anonymity of the big city

For much of history, people lived in small communities, where everyone knew them, and they knew everyone. Identity was largely inherited and imposed, and the ability of people to re-invent themselves was quite limited. You were father, farmer, drunkard, and everyone knew it.

The big city changed all that, by offering anonymity and choice. Against the background of anonymity, people can choose their identity, or choose multiple identities, often by choosing the community of other people with whom they live, work or play. In the city, you can choose to cultivate multiple identities: to mingle with bankers or toddlers by day, to play rugby or poker by night, to socialize with rabbis or lesbians, and to do all this while choosing how anonymous to remain. Maybe you’re happy to use your real name with your bank colleagues, but delight in the anonymity of a large nightclub. And you can share different parts of your identity with different communities, and none of them need know about the other parts, if you don’t want them to. Work and home, family and friends, familiarity and exploration: the city allows you to create your identity against a background of anonymity.

Like the city, but on a much, much bigger scale, the Web allows people to create multiple digital identities, and to decide whether to use their “real” identity, or pseudonyms, or even complete anonymity. With billions of people online, and with the power of the Internet, people can find information and create virtual communities to match any interest, any identity. You may join a social networking site with your real name or a pseudonym, finding common interests with other people on any conceivable topic, or exploring new ones. You may participate in a breast cancer forum, sharing as much or as little information about yourself as you wish. You may explore what it means to be gay or diabetic, without wanting anyone else to know. Or you may revel in your passion to create new hybrids of roses with other aficionados. The Web is like the city, only more so: more people, more communities, more knowledge, more possibility. And the Web has put us all in the same “city”, in cyberspace.

Life is about possibilities: figuring out who you are, who you want to be. Cities opened more possibilities for us to create the identities we choose. The Web is opening even more.