Friday, March 18, 2011

The right to forget or the right to spin?

Viviane Reding has been publicising one of the more poetic planks of the upcoming Data Protection Directive reforms, the so-called "right to forget" or, from the French (who dreamt it up), the droit à l'oubli.

The right to forget is intriguing and seems to have caught the public attention of more than geeks and DP nerds. In boring Anglo-Saxon, it sounds much less exciting. The right to delete your personal data, wherever it is held - eg on Facebook - is what it's about. Put that way it doesn't sound that new. After all the DPD already gives you the right in art 14 to
" object at any time on compelling legitimate grounds relating to his particular situation to the processing of data relating to him, save where otherwise provided by national legislation. Where there is a justified objection, the processing instigated by the controller may no longer involve those data;"
In the UK DPA 98, s 10, that gets translated as the right to stop processing where it is "causing or is likely to cause substantial damage or substantial distress to him or to another" and this is "unwarranted". As is often the case, there is an argument that this is a rather limited expression of the DPD, especially when case law is considered. There's also a connected right to demand that your personal data is not processed for the purposes of direct marketing.

But this doesn't add up to an unqualified right to have data deleted, nor to have this done for no reason at all except that it's your data. This is what the "right to forget" or "right to delete" movement is about.

Pangloss initially found the right to forget very appealing, but has got more conflicted as time has gone on. The trouble most often cited is that your personal data is very often also someone else's personal data. If I post a picture of both of us at a party on FB, do you have the right to delete it? What about my freedom of expression, my right to tell my own story? With pictures, you can imagine solutions - pixellate out the person objecting or crop it. Perhaps the compromise is that I have the right to post the photo but you have the right to untag yourself from it. (Though this will not suit some.)

But what about where I say "I was at Jack's last night and he was steaming drunk"? Does Jack have the right to delete this data, even if it's on my profile? This is where the Americans indeed start to get steamed up - since their culture and legal system have repeatedly preferred free speech to privacy rights.

Unsurprisingly this is one of the scenarios Peter Fleischer, chief privacy officer of Google, had in mind when he described the right to forget last week as "foggy thinking", claimed that "this raises difficult issues of conflict between freedom of expression and privacy" and more or less implied that this could be dealt with perfectly well by traditional laws of libel. In an ideal world this might be so: but we don't live in that world, but in one where ordinary citizens, as opposed to celebrities, almost never get to use laws like libel because they're simply far too costly and scary.

Would Jack sue for libel in the above example? No, almost never. But he might ask FB to take it down (if he was aware it existed). This is another of Fleischer's worries - that intermediaries like ISPs and hosts would get inextricably and expensively involved in the "right to forget". Here his real agenda becomes fairly apparent - Google's success is entirely based on their right to remember as much as possible about us. We are back here in another version of the cookie and data retention wars, passim.

I am a fan of the Google chocolate factory, as anyone reading this blog will surely have gathered - but it is a mite disingenuous to read Fleischer's (beautifully written) post without bearing in mind what seems to be Google's real worry, cited at the bottom of his list: that search engines will find themselves called on to implement what people often want far more than a right to delete, namely a "right for their data not to be found" - ie, for it to be expunged from Google's web results.

Fleischer says correctly (and with commendable understatement) that "This will surely generate legal challenges and counter-challenges before this debate is resolved." Imagine the reaction of TripAdvisor, for example, when thousands of people who run hotels and restaurants try to have the site removed from Google rankings because it has personal data about them that they're not overly fond of..? More sympathetically, many readers of this blog will know decent people who have tried for years to get results removed from Google - unfair and illegitimate reviews, catty remarks from ex partners, professionals whose working life is blighted by abusive remarks by disgruntled ex clients. There should, I think, be clear remedies for them, not dependent on the ad hoc discretion of the site in question, depending on what mood it's in that day. On the other hand, I don't want a world where politicians or demagogues can get their dodgy past involvements with fascism or the BNP or whatever quietly deleted or rendered unfindable on Google (this is a turf war which already goes on day in, day out on the edits on Wikipedia).

A big problem (as with all DP issues) is the cross-border, applicable law or jurisdiction aspects. Fleischer's column cites a rather sensationalist example - when a German court ordered references to a murder by a German citizen removed from a US-based Wikipedia page because those convictions under German law were "spent". In fact rules about rehabilitation of offenders and spent convictions are common - certainly the UK has similar - and all that is unusual about this case is the attempt of the German courts to extend jurisdiction to publications hosted abroad. Indeed, as some US states have "rights of publicity" protecting celebrity image and some don't, one imagines they must already have evolved a degree of expertise in the international private law of privacy/publicity rights. (What if Elvis's image on T-shirts is protected in Tennessee but not in Virginia? Can the Tennessee estate sue the Virginia T-shirt factory that uses his image without paying?)

But certainly an EU right to forget will almost invariably engage us in the same kind of angst and threats of "data wars" over extraterritoriality that the Eighth DP Principle on export of personal data already has - not something to look forward to. It is noticeable that Reding fires off an early salvo on this when her spokesperson says, not for the first time, that companies "can't think they're exempt just because they have their servers in California or do their data processing in Bangalore. If they're targeting EU citizens, they will have to comply with the rules."

In reality, Pangloss suspects any right to forget that makes it through the next few years of horse trading will look much more limited and less existential than most of the ideas in the blogoverse - more like the right FB has already conceded, to delete rather than simply deactivate your profile, for example. Reding's speech itself seems in practice to be more about how FB sets its defaults than anything else: a default opt-out from letting third parties tag your photos, rather than opt-in, would seem a pretty limited and sensible demand.

Being more aspirational, Pangloss still has a soft spot for one interpretation of the "right to forget" which Fleischer rather derides as technically impossible - self expiring data. I'd love to hear from any techies who know more about this topic.
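For the techies mentioned above: one way "self-expiring data" is sometimes sketched is crypto-erasure - store the data only in encrypted form, give the *key* an expiry date, and delete the key when it falls due, leaving the surviving ciphertext as unreadable noise. The toy sketch below illustrates the idea only (all names are invented, and XOR with a one-time pad stands in for real encryption; a real system would face the hard problem of keys and plaintext copies leaking before expiry):

```python
import secrets
import time

class ExpiringStore:
    """Toy sketch of 'self-expiring data' via key deletion (crypto-erasure).

    Data is stored encrypted; only the key carries an expiry time. Once the
    key is deleted, the ciphertext is unreadable, so the data is effectively
    'forgotten' even if copies of the ciphertext survive elsewhere.
    """

    def __init__(self):
        self._keys = {}   # item_id -> (key, expiry timestamp)
        self._blobs = {}  # item_id -> ciphertext

    def put(self, item_id, data: bytes, ttl_seconds: float):
        key = secrets.token_bytes(len(data))  # one-time pad, illustration only
        self._blobs[item_id] = bytes(a ^ b for a, b in zip(data, key))
        self._keys[item_id] = (key, time.time() + ttl_seconds)

    def get(self, item_id):
        self._expire()
        if item_id not in self._keys:
            return None  # key gone: data is unrecoverable
        key, _ = self._keys[item_id]
        return bytes(a ^ b for a, b in zip(self._blobs[item_id], key))

    def _expire(self):
        now = time.time()
        for item_id, (_, expiry) in list(self._keys.items()):
            if now >= expiry:
                del self._keys[item_id]  # ciphertext may remain, but is noise
```

The attraction for a "right to forget" is that nobody has to chase down every copy of the data; deleting one small key suffices.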

But the debate that has caught the public imagination goes wider than just DP law, and it is about whether we want to live in an online spin society.

There has been a certain amount of information coming out lately about how the Internet is not what it once was. Once we thought the Web was a conduit to unmediated news and opinions from real people, that it would enable direct democracy and change the world. But recent evidence has been that when it really matters - in matters of politics and revolutions and celebrities and ideology - a lot of what seems to be the "honest bloggers" or commenters or posters are actually paid spinners, employed and trained in the blogging and astro turfing schools of China and Russia and Iran and now, we hear this week, the US.

The right to forget can in some ways be seen as the individual, non-corporate, non-state version of this. Rewriting history has been described by many people as Orwellian: we are at war with Eastasia, we have always been at war with Eastasia. That is chilling (in all senses of the word, including speech :-). The reality, as I have already said, is likely to be considerably less overwhelming (or effective). But this is still a debate we need to start having.

Tuesday, March 15, 2011

Online behavioural advertising: threat or menace?

Pangloss has recently been engaged in high level summit talks with her usual sparring partner Cybermatron on this rather current topic (which Pangloss is teaching, and about which Cybermatron is organising a workshop): as usual CyberM takes the privacy moral high ground that it is simply wrong for businesses and marketers to "follow you around the Web" without clear informed consent, while Pangloss is reduced to her usual confused, "er, um, yes it's a bit squicky but does it really need regulation? is it that significant in the nature of things compared to tsunamis, revolution in Africa and control orders? isn't it a matter that could better be dealt with by code and co-regulation, rather than regulation which would be territorially limited and probably merely favour US over EU digital industry?"

The latter approach certainly seems to be taking centre stage. Today I hear on Twitter that Microsoft, still maker of the most popular browser in the world, have agreed to install a Do Not Track opt-out mechanism into IE v9; this follows Firefox doing something roughly similar, leaving only Chrome (Google) and Safari (Apple) of the major desktop browsers as outliers.

Will this self-regulatory, "code" solution, which has been heavily advocated by the FTC in the US, be successful? It is very relevant to us in Europe right now, where a similar system is being promoted by the ad industry, especially the IAB and EASA. They suggest an "awareness raising icon" or "big red button", which would be put on the sites of participating websites, and would then lead users who clicked on it to an opt-out registry by which means they could indicate "do not track me" to the ad networks. These are the networks which collect data via third party cookies and other techniques such as Flash cookies, and then distribute the ads to participant websites. (Slightly worryingly, Pangloss has heard of this development anecdotally via attendee accounts of meetings held with the European Commission in December and March, but cannot seem to trace an official document on the Web about it. These accounts seem to indicate that the Commission is already heavily behind these initiatives, which is all the more reason for a proper public debate.)

In an ideal universe, such a user-choice driven system could be good. It might allow users (like Cybermatron) who want to protect themselves from online data collection and profiling to do so: and let those who are either quite happy about it all (the majority "don't cares"); or feel that web 2.0 businesses need a revenue stream to survive, which targeted ads supply, and that the genie is already out of the bottle re their personal data (moi, on a bad day); or who actually like targeted ads (these people must exist somewhere, though Pangloss has never met them); or who feel they can protect themselves from ads using filter products like AdAware or Firefox anti-ad plugins (the techy fringe, and distinctly not including my mum), to go on doing their thing.

But as usual it's a little more complicated than that (c Ben Goldacre, 2011). The WSJ notes firstly:

It still isn't clear how effective the privacy protection tools in Microsoft's browser will be. The do-not-track feature automatically sends out a message to websites and others requesting that the user's data not be tracked.

But the system will only work if tracking companies agree to respect visitors' requests. So far, no companies have publicly agreed to participate in the system.

The piece goes on to quote the IAB moaning that their members have no systems set up to respond to "Do Not Track" requests. This strikes me as getting into protesteth-too-much territory: if the advertising industry wants to avoid mandatory regulation with, perhaps, stiff fines, they will get their act together on this pronto or face the worse alternative. One imagines similar fears are driving Microsoft and Firefox. It is interesting that Google, who make Chrome and who benefit by far the most from the online advertising market, appear to be dragging their feet.
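The "message" the WSJ describes is, technically, nothing more than an extra HTTP header ("DNT: 1") which the browser attaches to every outgoing request - which is exactly why the whole scheme stands or falls on whether tracking companies choose to honour it. A minimal sketch using Python's standard library (the URL is a placeholder, and no request is actually sent here; note that urllib normalises the header name to "Dnt" internally):

```python
import urllib.request

# Build a request carrying the Do Not Track preference. Sending the header
# costs the browser nothing; the scheme depends entirely on the tracking
# company at the receiving end choosing to respect it.
req = urllib.request.Request(
    "http://example.com/page",
    headers={"DNT": "1"},  # 1 = user objects to being tracked
)

# urllib stores header names capitalised, so "DNT" becomes "Dnt".
print(req.get_header("Dnt"))  # → 1
```

Contrast this with the opt-out *cookie* approach: a cookie must be set per ad network and vanishes if the user clears cookies, whereas the header travels with every request to every site.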

So what are the problems? Pangloss has been trying to get her head around this, with a bit of help from Ms Matron and Alex Hanff's blog on PI.

First, that good old chestnut: consumer ignorance, inertia and techno-inability. Most consumers don't click on buttons to opt out from behavioural tracking, just like they don't go looking for privacy settings on Facebook. They have better things to do: like go looking for the goods and services they went online for in the first place, or, on FB, looking to see what friends are having cool parties. There also seems to be some debate about just how big the "big red button" will be, but that's really the least of the problems.

(Interestingly, Pangloss has spent some time lately helping her much maligned mother with computing matters and observed that she (my mum, that is) just does not have the habit, which most readers here of younger generations will have acquired without noticing, of searching all around a webpage for cues. She would never even notice the big red button unless it was as big as a Comic Relief red nose. But I digress.)

And in fact US research already bears this out re the behavioural ads opt-out button. Hanff states:
"TRUSTe carried out an experiment to measure the effectiveness of the (US Do-Not-Track) icon. Over 20 million people visited an experimental web page, of which 0.6% of unique visitors interacted with the icon. TRUSTe shouted that this was a wonderful success, but I think the sane among us would argue the opposite is true."

If this is true, I'd certainly agree.

A second, connected, problem is: what is the effect of an opt-out indication, even if someone gets around to making one, by Do Not Track button or otherwise? You might well think it means that you have chosen for data collected about you not to be profiled and mined, ie not to be tracked: but in fact the US experience so far may be just that the data collection and mining still goes on, but you don't get the targeted ads. This rather misses the point, and I'm pretty sure everyone, including the NAI and IAB, knows this :-)
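The gap between "don't show me targeted ads" and "don't track me" can be made concrete. In this toy model (all names invented for illustration), the opt-out flag is consulted only at ad-serving time, so the profile keeps growing regardless - exactly the weaker reading described above:

```python
class AdNetwork:
    """Toy model of an ad network where 'opt-out' only suppresses
    targeted ad delivery, while data collection carries on."""

    def __init__(self):
        self.profiles = {}    # user_id -> list of observed interests
        self.opted_out = set()

    def opt_out(self, user_id):
        self.opted_out.add(user_id)

    def observe(self, user_id, interest):
        # Note: no opt-out check here -- tracking continues regardless.
        self.profiles.setdefault(user_id, []).append(interest)

    def serve_ad(self, user_id):
        # The opt-out is honoured only at this point.
        if user_id in self.opted_out or user_id not in self.profiles:
            return "generic ad"
        return f"ad about {self.profiles[user_id][-1]}"


network = AdNetwork()
network.opt_out("alice")
network.observe("alice", "running shoes")  # still recorded, despite opt-out

print(network.serve_ad("alice"))      # → generic ad
print(network.profiles["alice"])      # → ['running shoes']
```

An opt-out that actually meant "do not track" would put the check inside `observe`, not `serve_ad` - a one-line difference in code, and all the difference in the world for privacy.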

And a third problem is that given inertia, the problem is not really solved by the button, charming as it is, but by the underlying default set up of consumer browsers like IE, Firefox and Chrome. If the default is no tracking without saying "yes, please." (ie opt-in) then those who really want targeted ads can indeed opt-in, argues Cybermatron, and leave the rest of us alone. Less determined people like me say, well if no one ever clicks buttons if they don't have to, then no one will opt in to targeted ads bar a few maniacs, and web 2.0 will go bankrupt. I don't want that. Hmm. (It is also worth noting at this point that browsers are mostly written by companies whose fortunes are fairly heavily dependent on online advertising. Also hmm.)

Matron's solution is that web 2.0 can survive on serving ads without using ad networks, behavioural tracking and data mining - good old-fashioned second party cookie tracking, where one site uses what it learns about you to serve you more relevant ads. The likes of Amazon used to do quite nicely on this alone, using algorithms like "People like you who bought X also liked Y". Users can also fairly successfully block second party cookies themselves using most browsers, without having to rely on believing ad networks will implement do-not-track opt-out registers, not just save the data for later and hide the ads.
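The Amazon-style "people who bought X also liked Y" approach needs no cross-site tracking at all: a single site can compute it from its own order history. A minimal co-occurrence sketch (the data and function name are invented for illustration, and real recommenders are of course far more sophisticated):

```python
from collections import Counter

def also_bought(orders, item, top_n=3):
    """Count how often other items co-occur with `item` in the same
    order, and return the most frequent -- the classic single-site
    'people who bought X also liked Y' recommendation."""
    co_counts = Counter()
    for basket in orders:
        if item in basket:
            for other in basket:
                if other != item:
                    co_counts[other] += 1
    return [name for name, _ in co_counts.most_common(top_n)]

# Invented order history: each set is one customer's basket.
orders = [
    {"1984", "Brave New World", "Animal Farm"},
    {"1984", "Animal Farm"},
    {"1984", "Brave New World"},
    {"Dune", "Brave New World"},
]

print(also_bought(orders, "1984"))
```

The point is that everything here is first-hand data the site already holds about its own customers - no ad network, no third party cookie, no profile following you around the Web.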

But such evidence as has been available to the public in recent years seems to point, unfortunately, to second party cookie tracking not being good enough for economic success. Google has by far the largest share of the online ad delivery market because, via its AdWords programmes, its near monopoly of search terms in many countries and its affiliates like YouTube and Android, it can collect far more targeting info about users than any other single site. The empirical evidence seems to be: more targeted info means more click-throughs means more money for the online industries in question.

One of the notable phenomena is that for companies like Amazon, advertising was a second-string activity, really mainly marketing their own services. By contrast, the web 2.0 market - Google, Facebook, last.fm etc etc - charges nothing, so has to make money out of selling something, ie ads for other services and companies. This can only be achieved in any realistic way via third party cookies, ad networks and the like, goes a fairly obvious argument. Is it coincidence that third party advertising networks began to take over the market at almost the same time web 2.0 unpaid activity became the great success story of the Web? Seems unlikely, but who knows?

In short, we need more data. Economic data on who makes money from which forms of targeted marketing, and who doesn't. Technical data on how effective an opt-out cookie can be anyway (what, for example, would its effect be on Flash and zombie cookies? What happens if you delete your opt-out cookie?). Technical and social data on how valid the underlying data profiles are which ad networks use to deliver targeted ads: are their predictions reasonable out of context (some in-game data collection has reportedly tagged people as "risk taking" or "aggressive")? Are they verifiable and transparent? Can they be misused (eg used to target addicts or the young with inherently risky offerings)? Can they be de-anonymised?

Since the latter seems increasingly likely (see Paul Ohm's seminal work passim), I have suggested before that such anonymised data profiles should benefit from some if not all of the same protection as "personal data" under some rubric like "potentially personal data". Notably this might make data profiles even where not tagged by name subject to subject access requests, and deletion requests where damage or distress was shown (or even not at all if we get the much ballyhooed right to forget).

Finally, for us lawyers, I think the biggest challenge is to dig ourselves out of the regulatory hell we are in, where the DPD and PECD (and the media, exceptionally unhelpfully) present us with a mish-mash of consent, "explicit consent", prior consent, informed consent, opt-in and opt-out consent. To a very large extent these distinctions are now pretty meaningless for their purpose, ie, to protect users by giving them control over the processing of their personal data, which otherwise goes on without their knowledge and consent. Eg, "sensitive personal data" is supposed to be specially protected by a requirement of "explicit consent" in the DPD scheme, but a common lawyer would argue a site like Facebook gets exactly that - via the registration, login or "I accept the terms and conditions" box - without any real sense of any added protection.

Hanff (above) argues forcefully that the amended PECD, which is due to be implemented across the EU shortly, now requires prior opt-in, and that an opt-out system of the "big red button" type will thus be illegal. But sympathetic as I am to his outrage, this is not what the new law says.

Art 5(3) of the PECD now says that placing cookies is only allowed where "the user has given his or her consent, having been provided with clear and comprehensive information." In some EU countries, notably the UK, consent can be given by implication. If the article said "explicit" consent then this would not be possible - but, contrary to some very bad BBC reporting, and according to BIS's version of the amended PECD, there is no use in amended art 5(3) of the word "explicit". (Nor, by the way, is there in art 14 on locational data, which remains unamended by the new changes. This seems exceptionally odd.)

Furthermore, under EU law generally, it seems that the settings of a browser which has not been altered to opt out can, very unfortunately, probably be seen as giving that consent by implication, as this is what has been expressly put into the recitals of the amended PECD. Most browsers do by default accept second, and sometimes third, party cookies. In some browsers, such as the version Pangloss has of Firefox, this distinction is not made - cookies are accepted and users can choose to go in and delete them individually. On such an analysis, most browsers will be set to "give consent", and the "big red button" is merely providing users with an opportunity to withdraw the consent they have already given, and is perfectly legal.

This is not a good analysis for privacy or consumers. It is not what those who fought for the changes in art 5(3) probably thought they were getting. But it is a plausible interpretation. Of course, existing national laws and national implementations may alter its meaning "on the ground" ; and I suspect we will see substantial cross EU disharmony emerging as a result. None of which will in fact help the digital industries.

What do we need out of regulation, rather than this fumbling about opt-in and opt-out? Neelie Kroes has some ideas:

First and foremost, we need effective transparency. This means that users should be provided with clear notice about any targeting activity that is taking place.

Secondly, we need consent, i.e. an appropriate form of affirmation on the part of the user that he or she accepts to be subject to targeting.

Third, we need a user-friendly solution, possibly based on browser (or another application) settings. Obviously we want to avoid solutions which would have a negative impact on the user experience. On that basis it would be prudent to avoid options such as recurring pop-up windows. On the other hand, it will not be sufficient to bury the necessary information deep in a website’s privacy policies. We need to find a middle way.[italics added]

On a related note, I would expect from you a clear condemnation of illegal practices which are unfortunately still taking place, such as ‘re-spawning’ of standard HTTP cookies against the explicit wishes of users.

Fourth and finally: effective enforcement. It is essential that any self-regulation system includes clear and simple complaint handling, reliable third-party compliance auditing and effective sanctioning mechanisms. If there is no way to detect breaches and enforce sanctions against those who break the rules, then self-regulation will not only be a fiction, it will be a failure. Besides, a system of reliable third party compliance auditing should be in place.

That "middle way" solution, which involves real opt-in consent but not endless pop-up windows requesting consent, sounds a lot to me like mandating that browser manufacturers set browsers by default to reject cookies, so users can demonstrate real consent by changing that setting: the same strategy that I rejected above as impractical, as the death of revenue to web 2.0. Maybe there is some more subtle version of Kroes's "middle way" I don't know about - I sincerely hope so. (Techy answers again very welcome!!)

But if Ed Vaizey can, for example, suggest, as he did this week, that all computers sold in the UK should be shipped with software set by default to filter out all "porn" (however he plans to define that, and good luck with that), then why can't a similar command be sent out re the relatively simple privacy settings of browsers? Pangloss suspects that in reality neither will happen, especially given that computers and handsets alike are mostly assembled outside the EU. It looks like the cookie and OBA wars, both in and outside of Europe, still have a fair way to go...

Friday, March 04, 2011

A few more dates for diaries

The Strathclyde LLM in Internet Law and Policy is happy to present a public lecture by Daithi MacSithigh of the University of East Anglia on March 25th 2011, in Room 7.42, 7th floor, Graham Hills Building, 40 George Street, Glasgow, commencing at 5.00pm. The event is free, but please email Linda at linda.nicolson@strath.ac.uk to let us know if you are planning to attend.

The title is "The medium is still the message: Angry Birds, the Met Opera & broadband bills".

Pangloss is really looking forward to that :)

Also for central-belt Scots - put April 14th 2011 evening in your diary, when Strathclyde Law School and the Franco-Scots Alliance will be co-hosting an event on the current state of anti filesharing legislation in the UK and France - myself and Nicolas Jondet (currently teaching IP law at Strathclyde, and local expert on HADOPI) representing these jurisdictions respectively. Venue TBD but Old College in Edinburgh likely. Given the current events around the Digital Economy Act - judicial review, Hargreaves Review - as well as in France this could be lively :)

GikII goes Gothenburg!

From Mathias Klang, who is bravely taking the helm..

GikII VI, Göteborg, Sweden 2011

Freedom, openness & piracy?
26-28 June 2011
IT University
Göteborg, Sweden

Call for Papers

Is GikII a discussion of popular culture through the lens of law – or is it about technology law, spiced with popular culture? For five years and counting, GikII has been a vessel for the leading edge of debate about law, technology and culture, charting a course through the murky waters of our societal uses and abuses of technology.

For 2011, this ship full of seriously playful lawyers will enter for the first time the cold waters of the north (well, further north than Scotland) and enter that land of paradoxes: Sweden. Seen by outsiders as well-organised suicidal Bergman-watching conformists, but also the country that brought you Freedom of Information, ABBA, the Swedish chef, The Pirate Bay and (sort of…) Julian Assange. We offer fine weather, the summer solstice and a fair reception at the friendly harbour of Göteborg.

So come one, come all… Clean your screens, look into the harder discs of your virtual and real lives, and present your peers with your ideas on the meaning of our augmented lives. Confuse us with questions, dazzle us with legal arguments, and impress us with your GikIIness. If you have a paper on (for example) regulation of Technology & Futurama, soft law in World of Warcraft, censoring social media & Confucius, the creative role of piracy on latter day punk or plagiarism among the ancient Egyptians – We are the audience for you (for a taste of past presentations see the Programme section).

Application process

Please send an abstract not exceeding 500 words to Professor Lilian Edwards (Lilian.Edwards@strath.ac.uk) or Dr Mathias Klang (klang@ituniv.se). The deadline for submissions is 15 April 2011. We will try to have them approved and confirmed as soon as possible so that you can organise the necessary travel and accommodation.

Registration

As with previous years, GikII is free of charge, and therefore there are limited spaces available, so please make sure you submit your paper early. Priority is always given to speakers, but there are some limited spaces available for students and non-speakers. Registration is open through Eventbrite.

Friday, February 25, 2011

Wikileaks, online intermediaries and privatised censorship

Below, a ppt of a talk I gave at a very interesting workshop on the whole spectrum of Wikileaks-associated issues, convened with admirable alacrity by Rachel Crawfurd-Smith at Edinburgh.



Reminded of this as I am currently sitting at Georgia Tech in Atlanta, which kindly invited me to its Workshop on Free and Open Communication on the Internet. It becomes more and more apparent that although in Iran, China and Libya the state may openly censor the Internet, censorship exists in the developed world too, but more often happens "under the wire" - in the form of voluntary removal or blocking of content by privately owned hosts or ISPs (eg YouTube, Amazon, BT) - either because of covert pressure from states or, more commonly perhaps, because there is no money, or actual commercial risk, in hosting upsetting content. This raises a key issue: what, if any, are the social responsibilities of private bodies to sacrifice their own profits to preserve human rights?

I don't think I'm particularly cynical here: I know thoughtful, clever, aware and socially conscious people working high up in, inter alia, Google, Microsoft, HP and various ISPs (and at the IWF). But I genuinely don't see why a not particularly evil private company would not choose to enforce its terms of service differentially, eg so as not to lose important advertisers. These choices are hard enough for newspapers with traditional journalistic values - which are not part of the substratum of most ISPs and hosts.

Wendy Seltzer from the Berkman Centre points to the Global Network Initiative group of companies, and at this workshop we've heard of a great many promising Google initiatives (or co-initiatives): Chilling Effects; the Transparency Report; and the fact that Blogger has good anti-DDoS protection and thus hosts many activist blogs. But with no offence to Google (where some of my best friends, etc, etc), these acts of charity too have to be placed in the context of Google's own worldwide efforts to win public and regulatory support for its other battles - with Viacom, with the copyright industry, with Italy over privacy etc. When it becomes useful for Google's profit margin and existence to work against free speech rather than for it - what corporate social value will take precedence then?

Monday, January 10, 2011

Welcome to 2011!

Happy new year, gentle readers, slightly belatedly, and for Pangloss it's all new indeed: new job, new title (Professor of E-Governance), new workplace (Strathclyde Law School) and new abode (back in Auld Reekie). All of this makes me very happy, if in the short term slightly dishevelled, abandoned, hyper and, well, fill in the adjective of your own choice :)

Please note AGAIN my new email address is lilian.edwards@strath.ac.uk and my snail address should you conceivably need it is

School of Law

Faculty of Humanities and Social Sciences

Graham Hills Building, Level 7 (GH 7.13)

50 George Street

Glasgow G1 1QE


If any of you can remember the achingly long time ago before the festive season, the burst pipes (oh so don't ask) and the Snowpocalypse, you may remember we were a little exercised about Wikileaks. The nice people at Practical Law Company (PLC) asked me to write a briefing on what issues might be involved for the UK legal system, and you too can read it for free here. Basically I think the key issues are:

- were criminal offences committed of DDOS by UK residents? (almost certainly yes)
- is merely downloading a tool which can be used to help commit DDOS a crime? (yes, though proof of intent may be tricky)
- can IP addresses of attackers be captured & UK ISPs be asked to help identify such persons (yup)
- can ISPs in UK conceivably be asked to block Wikileaks sites or domain names? (A. probably not, unless by some back door means such as invoking copyright laws under s 97A of the CDPA, or by some hitherto latent common law power which would need at least a High Court application in England & Wales or Court of Session in Scotland, and still be pretty uncertain).

The last point, though it seems far-fetched, is a topical one given the ill-judged comments by Ed Vaizey just before Christmas suggesting that all online "adult sexual materials" sites should be blocked "at source" by UK ISPs, with only adults then allowed to opt back in. Beyond the obvious difficulties of definition of such sites, over-blocking, under-blocking, the herculean task of assembling such a list (most of which will be overseas), evasion, ULL-jumping, VPNs, proxy servers, the fact that kids are better than adults at hacking this, etc ad nauseam, the simple fact is that such blocking solutions don't work and don't scale in practical terms unless you're willing to devote the resources, and the Stalinist control, of a country like China to such a pursuit. Just look at Australia for the trouble it has caused there, in a smaller country with far fewer ISPs and far more history of state censorship than here.

I'm all for thinking of the children, really (actually, to be honest, as a child's rights lawyer on the side I also wonder if anyone has paid attention to the emergent minor child's right to autonomy, see Gillick, see future possible ECHR applications..?) but right now this seems like an expensive, embarrassing, largely pointless red herring to pursue. IF parents want to stop kids accessing porn, there are many good products out there to allow them to do it at home, eg Net Nanny and its ilk. The Daily Mail will like it though :-)

But more than ALL that, what worries me is the huge possibility for scope creep here. As I have noted often, often before, once you have one scheme in place for blocking huge numbers of URLs without transparency or accountability, what is the temptation to start adding other URLs you don't like to it? High, in my cynical opinion. (And whatever the government means by blocking sites "at source", this will have to involve an Internet Watch Foundation-style blocklist - because every single adult site closed down by its host service in the UK will simply shift to a host abroad in under 24 hours. Indeed the Telegraph story seems to clearly indicate an IWF-type list would be used: "Ministers now want companies to use the same technology to stop children accessing adult images".)

So on a brighter, more positive start to the new year, here are a few events I plan to be at, be running, be speaking at, and so forth:

Workshop on Free and Open Communication on the Internet (FOCI), to be held February 24-25, 2011 at Georgia Tech in Atlanta, Georgia (invited expert speaker)

BILETA, Manchester Metropolitan University, 11th-12th April

3rd Web Science Conference, Koblenz, Germany - June 15-17

GikII in Gothenburg, Sweden!! GikII goes Scandinavian hardcore :), contact Matthias Klang for info - 27-28 June

SCL Policy Forum, London, Herbert Smiths, September 15-16th - I'm curating this one on a theme of the new shape of European regulation as the DPD, ECD and other major instruments head for reform.

Sunday, December 12, 2010

Wikileaks drips on: some responses

Again like every self respecting blogger on the planet, I have written a short comment for the Grauniad on Wikileaks.

The main thrust of the point I was making is that settling a dispute of major public consequence by covert and non-legitimate bully-boy tactics - covert pressure on hosts, payment services and DNS servers, plus DDOS attacks on Wikileaks hosts, from the US-sympathising side; and anonymous DDOS attacks on sites like Amazon, Mastercard and the site of Assange's alleged rape victims' lawyer, from the Wikileaks-sympathising side - is the wrong thing to do on BOTH counts. The point of a civilised society is supposed to be that disputes are settled by transparent, legitimate and democratic, judicial or political processes. This has not been a particularly popular point with almost anybody, and it may indeed be naive (as some commenters have accused), but it is also, I still think, both correct and in need of saying, in the current frenzy around the First Great Infowar etc (it's 1996 all over again, yet again..).

One commenter asks, not entirely unreasonably, why it is justified for Amazon to take down content without going to court but "vigilantism" if the forces of Anonymous take down content extra-judicially by DDOS attacks. The confusion here is in the word "justified". Amazon are justified, I argue, because since they host the content, they could be held legally liable for it (on a variety of grounds) if they did not take it down having been given notice. That could lead to damages against them, injunctions blocking their site to customers (at their busiest time of the year) or even a prison sentence for their CEO. As a (liberal-sympathising) friend in industry said to me, that last does tend to focus the mind. To state the bleeding obvious, Anonymous, by contrast, are not liable for the content they bring down.

But that meant I was saying Amazon were justified in a risk-management sense, and a legal sense, not an ethical sense. Was it Amazon's highest ethical duty to defend freedom of speech, or to be responsible to their shareholders and their employees? That's a harder question. Many used to feel companies had no ethical duties at all, though that view has faded in an era of corporate social responsibility (even if CSR is still rarely if ever a legal obligation). Amazon's role is perhaps confused because they are best known as a consumer site selling books, ie complicit in freedom of expression. Would we feel as aggrieved if Wikileaks had gone to a cloud host known only for B2B hosting? Perhaps, but what reason would there then be for expecting a host to behave like a newspaper?

What this leads us to, as many, many commentators have pointed out, is a renewed understanding that freedom of speech online is worryingly dependent on the good intentions of intermediaries whose core values and business models are not based on journalistic ethics, as was true for traditional news outlets in the offline age. This is hardly news: it has been making headlines since at least 1996, when a Bavarian court convicted the CEO of Compuserve for distributing Usenet newsgroups to Europe, some of which happened to contain pornographic files. That incident, among many others, led to rules restricting the liability of hosts and intermediaries, in both the EU and US, which did quite well till around the early 2000s but are now struggling (not least because of pressure from both the copyright and the child safety lobbies for less, not more, immunity). Not coincidentally, these rules are now being actively reviewed by, among others, the EU, the OECD and WIPO. The really interesting question now will be what effect Wikileaks as a case study has on those debates.

Wednesday, December 01, 2010

Veni Vidi Wikileaks

Since every other blogger in the universe has discussed how the US is going to stop Wikileaks, perhaps it's time for Pangloss to enter the fray, with the not terribly unexpected news that Amazon (in its cloud hosting services capacity) have indeed decided to stop acting as the new temporary host to Wikileaks, which moved there following the devastating DDOS attacks on its own server (thanks to Simon Bradshaw for pointing me at this news).

This is interesting in all kinds of ways.

First, the initial move to Amazon was a clever one. In the old days, a concerted and continuing DDOS attack on a small site might have seen them off - nowadays there are plenty of commercial reasonably priced or free cloud hosts. So cloud computing can be seen as a bulwark for freedom of speech - vive les nuages!

Second, though, of course, what strokes your back can also bite it, and here we have Amazon suddenly coming over shy. This appears to be entirely the sensible legal thing for them to do, and anyone accusing them of bad behaviour should be accused right back of utter naivete. Amazon are now on notice from the government that they host material which breached US national security, and so, according to the US Espionage Act as quoted in the Guardian piece, would fairly clearly have been at risk of guilt as a person who "knowingly receives and transmits protected national security information" if they had not taken down. (Though see a contrary view here.)

While Assange, as an Australian not a US citizen, and a journalist (of sorts), might have had defences against the charges quoted (as canvassed in the Grauniad piece), Amazon, interestingly, would, it seems, not. They are American and, by definition for other useful purposes (eg CDA s 230(c) - see below - and ye ancient Prodigy case), not the sort of publisher who gets First Amendment protections. And Amazon has its CEO and its major assets in the US, also unlike Assange. I think that makes takedown a no-brainer for Amazon. (Also interestingly, CDA s 230(c), which normally gives hosts complete immunity in matters of liability which might affect press freedom (such as defamation by parties hosted), does not apply to federal criminal liability.)

But as Simon B also pointed out, there are lots of other cloud suppliers, lots in Europe even. What if Wikileaks packs up and moves again? Would any non-US host be committing a crime? That would depend on the local laws: but certainly it would be hard to see how the US Espionage Act could apply, or at any rate what effective sanctions could be taken against a foreign host service if a US court ruled it was guilty of a US crime.

Which leaves anyone wanting to stop access to Wikileaks, as Technollama has already canvassed, the options of, basically, blocking and (illegal) DDOS (separating the existence of the Wikileaks site from any action against Assange as an individual). Let's concentrate, as lawyers, on the former.

Could or would the UK block Wikileaks if the US asked?

Well, there is an infrastructure in place for exactly that. It is the IWF blacklist of URLs which almost all UK ISPs are instructed to block, without need for court order or warrant - and which is encrypted as it goes out, so no one in public (or in Parliament?) would need to know. This is one of the reasons I get so worked up about the current IWF when people ask me if I won't think of the children.

There is also the possibility, as we saw just last week, of pressure being exerted not on ISPs but on the people who run domain name servers and the registrars that keep domain names valid. Andres G suggests that the US might exert pressure on ICANN to take down wikileaks.org, for example. Wikileaks doesn't need a UK domain name to make itself known to the world, but interestingly, only last week we also saw a suggestion from SOCA (not very well reported) that they should have powers effectively to force Nominet, the UK registry, to close down UK domain names being used for criminal purposes. Note, though, if you follow the link, that that power could only be used if the domain was breaking a UK criminal law.

But there is a really simple, non-controversial way to give UK courts the power to block Wikileaks. Or there may be soon.

Section 18 of the Digital Economy Act 2010 - remember that? - allows for regulations to be made for "the granting by a court of a blocking injunction in respect of a location on the internet which the court is satisfied has been, is being or is likely to be used for or in connection with an activity that infringes copyright."

Section 18, at present, needs a review and regulations to be made before it can come into force. In the new political climate this may perhaps never happen - who knows. But what if that had been seen to?

Wikileaks documents are almost all the copyright of someone, like the US government, and are being used, ie copied (bien sur), without permission. Hence, almost certainly, a fully realised s 18 could be used to block the Wikileaks site. Of course there is some possibility, from the case of Ashcroft v Telegraph Group [2001] EWCA Civ 1142, that a public interest/freedom of expression defence to copyright infringement might be pleaded - but this is far less developed than it is in libel, and even there it is not something people much want to rely on.

So there you go : copyright, the answer to everything, even Julian Assange :-)

Oh and PS - oddly enough the US legislature is currently considering a bill, COICA, which would also allow them to block the domain name of sites accused of encouraging copyright infringement. Handy, eh? (Though on this one point, the UK DEA s 18 is even less restrictive than COICA, which requires the site to be blocked to be "offering goods and services" in violation of copyright law - which is not even to a lawyer a description that sounds very much like Wikileaks.)

EDIT: Commenters have pointed out that official government documents in the US, unlike in the UK, do not attract copyright. However the principle stands firm: embarrassing UK docs leaked by Wikileaks certainly would be open to attack on copyright grounds, including DEA s 18, and it is quite possible some of the current Wikileaks documents quote extensively from material copyright to individuals (and Wikileaks prior to the current batch of cables almost certainly contained copyright material).

Interestingly, Amazon did in fact, subsequent to this piece, claim they removed Wikileaks from their service not because of US pressure, but on grounds of breach of terms of service: see the Guardian, 3 December 2010:

"for example, our terms of service state that 'you represent and warrant that you own or otherwise control all of the rights to the content… that use of the content you supply does not violate this policy and will not cause injury to any person or entity.' It's clear that WikiLeaks doesn't own or otherwise control all the rights to this classified content. Further, it is not credible that the extraordinary volume of 250,000 classified documents that WikiLeaks is publishing could have been carefully redacted in such a way as to ensure that they weren't putting innocent people in jeopardy. Human rights organisations have in fact written to WikiLeaks asking them to exercise caution and not release the names or identities of human rights defenders who might be persecuted by their governments."

The copyright defence is alive and well :-)

Job opportunity

In all the excitement, I almost forgot what I came here to post..!

As an advisory board member of ORG, I was asked to help spread the word about a new job opportunity with the Open Rights Group, where for the first time we're looking to hire someone with some kind of legal background. If you’re a London-based law student, trainee in waiting, or other legal type with an interest in IT and/or IP law, then you may want to check the following new job at ORG:

Copyright Campaigner

The Open Rights Group, a fast-growing digital rights campaigning organisation, are looking for a Copyright Campaigner to take our campaigning on this fast moving area to a new level.

You will work as a full time campaigner to reform copyright and protect individuals from inappropriate enforcement laws like ‘three strikes’. We’re after someone with a passion for this area who has a proven ability to organise and deliver effective campaigns.

We are looking for someone with excellent communication skills, good organisational and planning skills, who works well in a team environment and is able to prioritise their own work without depending on line management. You will be able to demonstrate commitment to our digital rights.

The job is full-time for one year, with the possibility of extension to a second year. Salary: £30,000.

Welcome back to me

Yes, it's been a very long time since I blogged here. I'm not exactly Technollama am I? *hangs head in shame*. I could list the usual litany of less-fun-than-blogging things I've been doing (work for the OECD and WIPO, teaching, moving jobs, being snowed in, ORG, etc etc) but really it seems that blawging is a habit it's easy to lose if you're not careful, but also (I hope) easy to resume. (Also, let's face it, Twitter. Wonderful fun, dangerously seductive and Bad for Blawgs.)

So to kick off, a reprise of my annual not-very-serious predictions for next year, from the SCL journal site (where many more such can be found.)

"1. France will pass a law forbidding French companies from using cloud computing companies based anywhere other than France. Germany will ban cloud computing as unfair competition with German companies. Ireland will consider putting its banking in the cloud, but realise there's no point as they have no money left.

2. A Google off-shore water-cooled server farm will be kidnapped by Somali pirates, towed to international waters, repurposed as encrypted BitTorrent client and take over 95% of the world's traffic in infringing file-sharing (with substantial advertising revenue, of course) (thanks to Chris Millard for this one). (Meanwhile the Irish will attempt to nationalise all the Google servers they still host on shore to pay for bailing out the banks.)

3. TalkTalk will lose their judicial review case against the Digital Economy Act, but the coalition will find some very good reason to delay bringing in the Initial Obligations code, and the technical measures stage will quietly wither on the vine, as rights-holders realise it will cost them more to pay for it than they will gain in royalties.

4. 120% of people of the world including unborn children, all except my mother, will join Facebook. Mark Zuckerberg will buy Ireland and turn it into a Farmville theme park, with extra potatoes.

5. 4chan allied with Anonymous will hack Prince William's e-mail inbox on the eve of the Royal Wedding, revealing he is secretly in love with an older, plainer and less marriageable woman than Kate Middleton (possibly Irish), and also illicitly downloads Lady Gaga songs. In retaliation, the coalition passes emergency legislation imposing life imprisonment as the maximum penalty for DDOS attacks, and repeals the Digital Economy Act."

Lawrence Eastham kindly says that he can see at least 2 of these coming true. I imagine one of those is no 3, but which do you think the other was, dear readers?

As a final amuse-bouche before I return to Proper Things, have a picture of someone skiing to the local shops this morning. Yes. Not Seefeld. Sheffield. Merry Xmas!





Sunday, October 03, 2010

OK I lied: this is the last robot post..




For the last five days I was trying to remember the saddest, most anthropomorphic [NOTE: canomorphic??] piece of robot culture I'd ever seen...

Why We Shouldn't Date Robots

OK I'll stop about the robots after this, promise, but I can't resist this one someone's sent me.

Futurama - Don't date robots from John Pope on Vimeo.

Friday, October 01, 2010

Edwards' Three Laws for Roboticists




A while back I blogged about how delighted I was to have been invited by the EPSRC to a retreat to discuss robot ethics, along with a dozen and a half or so other experts drawn not just from robotics and AI itself but also from industry, the arts, media and cultural studies, performance, journalism, ethics, philosophy, psychology - and, er, law (ie, moi).

The retreat was this week, and two and a half days later, Pangloss is reeling with exhaustion, information overload, cognitive frenzy and sugar rush :-) It is clear this is an immensely fertile field of endeavour, with huge amounts to offer society. But it is also clear that society (not everywhere - cf Japan - but in the UK and US at least - and not everyone - some kids adore robots) has an inherited cultural fear of the runaway killer robot (Skynet, Terminator, Frankenstein yadda yadda), and needs a lot of reassurance about the worth and safety of robots in real life, if we are to avoid the kind of moral panics and backlashes we have seen around everything from GM crops to MMR vaccinations to stem cell surgery. (Note I have NOT here used a picture of either Arnie or Maria from Metropolis, those twin peaks of fear and deception.)

Why do we need robots at all if we're that scared of them, then? Well, robots are already being used to perform difficult, dirty and dangerous tasks that humans do not want to do, don't do well, or could not do because it would cause them damage, eg in polluted or lethal environments such as space or undersea. (What if the Chilean miners had been robots?? They wouldn't now be asking for cigarettes and alcohol down a tube..)

Robots are also being developed to give basic care in home and care environments, such as providing limited companionship and doing menial tasks for the sick or the housebound or the mentally fragile. We may say (as Pangloss did initially) that we would rather these tasks be performed by human beings as part of a decent welfare society: but with most of the developed world facing lower birth rates and a predominantly ageing population, combined with a crippling economic recession, robots may be the best way to assure our vulnerable a bearable quality of life. They may also give the vulnerable more autonomy than having to depend on another human being.

And of course the final extension of the care-giving robot is the famous sexbot, which might provide a training experience for the scared or blessed contact for the disabled or unsightly - or might introduce a worrying objectification/commodification of sex, and sex partners, and an acceptance of the unacceptable, like rape and torture, into our society.

Finally, and most controversially, robots are to a very large extent being funded at the cutting edge by military money. This is good, because robots in the frontline don't come back in body bags - one reason the US is investing extensively. But it is also bad, because if humans on the frontline don't die on one side, we may not stop and think twice before launching wars, which in the end will have collateral damage for our own people as well as risk imposing devastating casualties on human opposition from less developed countries. We have to be careful in some ways to avoid robots making war too "easy" (for the developed-world side, not the developing, of course - robots so far at least are damn expensive).

Three key messages came over:

- Robots are not science fiction. They already exist in their millions and are ubiquitous in the developed world, eg robot hoovers and industrial robots in car factories; care robots are even being rolled out in UK hospitals, eg in Birmingham. However, we are at a tipping point, because until now robots of any sophistication have mostly been segregated from humans, eg in industrial zones. The movement of robots into home, domestic and care environments, sometimes interacting with the vulnerable, children and the elderly especially, brings with it a whole new layer of ethical issues.

- Robots are mostly not humanoid. Again science fiction brings with it a baggage of human-like robots like Terminators, or even more controversially, sex robots or fembots as celebrated in Japanese popular culture and Buffy. In fact there is little reason why robots should be entirely humanoid, as it is damn difficult to do - although it may be very useful for them to mimic, say, a human arm or eye, or to have mobility. One development we talked a lot about was military applications of "swarm" robots. These resemble a large number of insects far more than they do a human being. Other robots may simply not resemble anything organic at all.

- But robots are still something different from ordinary "machines" or tools or software. First, they have a degree of mobility and/or autonomy. This implies a degree of sometimes threatening out-of-control-ness. Second, they mostly have the capacity to learn and adapt. This has really interesting consequences for legal liability: is a manufacturer liable in negligence if it could not "reasonably foresee" what its robots might eventually do after a few months in the wild?

Third, and perhaps most interestingly, robots increasingly have the capacity to deceive the unwary (eg dementia patients) into believing they are truly alive, which may be unfortunate (would you give an infertile woman a robot baby which will never grow up? would you give a paedophile a sex robot that looked like a child to divert his antisocial urges?). Connectedly, they may manipulate the emotions and alter behaviour in new ways: we are used to kids insisting on buying an entire new wardrobe for Barbie, but what about when they pay more attention to their robot dog (which needs nothing except to be plugged in occasionally) than their real one, so that it starves to death?

All this brought us to a familiar place, of wondering if it might be a good start to consider rewriting Asimov's famous Three Laws of Robotics. But of course Asimov's laws are - surprise!! - science fiction. Robots cannot, and in the foreseeable future will not, be able to understand, act on, be forced to obey, and most importantly reason with, commands phrased in natural language. But - and this came to me lit up like a conceptual lightbulb dipped in Aristotle's imaginary bathtub - those who design robots - and indeed buy them and use them and operate them and modify them - DO understand law and natural language, and social ethics. Robots are not subjects of the law, nor are they responsible agents in ethics; but the people who make them and use them are. So it is laws for roboticists we need - not laws for robots. (My thanks to the wonderful Alan Winfield of UWE for that last bit.)

So here are my Three Laws for Roboticists, as scribbled frantically on the back of an envelope. To give context, we then worked on these rules as a group, particularly a small sub-group including Alan Winfield, as mentioned above, and Joanna Bryson of the University of Bath, who added two further rules relating to transparency and attribution (I could write about them too, but this is already too long!).

It seems possible that the EPSRC may promote a version of these rules, both in my more precise "legalese" form and in a simpler, more public-communicative style, with commentary: not, obviously, as "laws", but simply as a vehicle to start discussion about robotics ethics, both in the science community and with the general public. It is an exciting thing for a technology lawyer to be involved in, to put it mildly :)

But all that is to come: for now I merely want to stress this is my preliminary version and all faults, solecisms and complete misunderstandings of the cultural discourse are mine, and not to be blamed on the EPSRC or any of the other fabulously erudite attendees. Comments welcome though :)

Edwards' Three Laws for Roboticists

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.

2. Humans are responsible for the actions of robots. Robots should be designed & operated as far as is practicable to comply with existing laws & fundamental rights and freedoms, including privacy.

3. Robots are products. As such, they should be designed using processes which assure their safety and security (which does not exclude their having a reasonable capacity to safeguard their integrity).


My thanks again to all the participants for their knowledge and insight (and putting up with me talking so much), and in particular to Stephen Kemp of the EPSRC for organising and Vivienne Parry for facilitating the event.


Phew. Time for t'weekend, Pangloss signing off!