A UK-based cyberlaw blog by Lilian Edwards. Specialising in online privacy and security law, cybercrime, online intermediary law (including eBay and Google law), e-commerce, digital property, filesharing and whatever captures my eye :-) Based at the Law School of Strathclyde University. From January 2011, I will be Professor of E-Governance at Strathclyde University, and my email address will be lilian.edwards@strath.ac.uk.
Sunday, October 03, 2010
OK I lied: this is the last robot post...
I had been trying for the last five days to remember the saddest, most anthropomorphic (or should that be canomorphic?) piece of robot culture I'd ever seen...
Why We Shouldn't Date Robots
Futurama - Don't date robots from John Pope on Vimeo.
Friday, October 01, 2010
Edwards' Three Laws for Roboticists
A while back I blogged about how delighted I was to have been invited by the EPSRC to a retreat to discuss robot ethics, along with a dozen and a half or so other experts drawn not just from robotics and AI itself but also from industry, the arts, media and cultural studies, performance, journalism, ethics, philosophy, psychology - and, er, law (i.e., moi).
The retreat was this week, and two and a half days later, Pangloss is reeling with exhaustion, information overload, cognitive frenzy and sugar rush :-) It is clear this is an immensely fertile field of endeavour, with huge amounts to offer society. But it is also clear that society (not everywhere - cf Japan - but in the UK and US at least - and not everyone - some kids adore robots) has an inherited cultural fear of the runaway killer robot (Skynet, Terminator, Frankenstein, yadda yadda), and needs a lot of reassurance about the worth and safety of robots in real life, if we are to avoid the kind of moral panics and backlashes we have seen around everything from GM crops to the MMR vaccine to stem cell research. (Note I have NOT here used a picture of either Arnie or Maria from Metropolis, those twin peaks of fear and deception.)
Why do we need robots at all if we're that scared of them, then? Well, robots are already being used to perform difficult, dirty and dangerous tasks that humans do not want to do, do not do well, or could not do without coming to harm, eg in polluted or lethal environments such as space or undersea. (What if the Chilean miners had been robots? They wouldn't now be asking for cigarettes and alcohol down a tube...)
Robots are also being developed to give basic care in home and care environments, such as providing limited companionship and doing menial tasks for the sick, the housebound or the mentally fragile. We may say (as Pangloss did initially) that we would rather these tasks be performed by human beings as part of a decent welfare society: but with most of the developed world facing lower birth rates and a predominantly ageing population, combined with a crippling economic recession, robots may be the best way to assure our vulnerable a bearable quality of life. They may also give the vulnerable more autonomy than having to depend on another human being.
And of course the final extension of the care-giving robot is the famous sexbot, which might provide a training experience for the scared, or blessed contact for the disabled or unsightly - or might introduce a worrying objectification/commodification of sex, and sex partners, and an acceptance of the unacceptable, such as rape and torture, into our society.
Finally, and most controversially, robots are to a very large extent being funded at the cutting edge by military money. This is good, because robots in the frontline don't come back in body bags - one reason the US is investing extensively. But it is also bad, because if humans on the frontline don't die on one side, we may not stop and think twice before launching wars, which in the end will have collateral damage for our own people as well as risk imposing devastating casualties on human opposition from less developed countries. We have to be careful in some ways to avoid robots making war too "easy" (for the developed world side, not the developing, of course - robots so far at least are damn expensive).
Three key messages came over:
- Robots are not science fiction. They already exist in their millions and are ubiquitous in the developed world: robot hoovers, industrial robots in car factories, and care robots being rolled out even in UK hospitals, eg in Birmingham. However we are at a tipping point, because until now robots of any sophistication have mostly been segregated from humans, eg in industrial zones. The movement of robots into home, domestic and care environments, sometimes interacting with the vulnerable - children and the elderly especially - brings with it a whole new layer of ethical issues.
- Robots are mostly not humanoid. Again, science fiction brings with it a baggage of human-like robots like Terminators or, even more controversially, sex robots or fembots as celebrated in Japanese popular culture and Buffy. In fact there is little reason why robots should be entirely humanoid, as it is damn difficult to do - although it may be very useful for them to mimic, say, a human arm or eye, or to have mobility. One development we talked a lot about was military applications of "swarm" robots. These resemble a large number of insects far more than they do a human being. Other robots may simply not resemble anything organic at all.
- But robots are still something different from ordinary "machines" or tools or software. First, they have a degree of mobility and/or autonomy. This implies a degree of sometimes threatening out-of-control-ness. Second, they mostly have the capacity to learn and adapt. This has really interesting consequences for legal liability: is a manufacturer liable in negligence if it could not "reasonably foresee" what its robots might eventually do after a few months in the wild?
Third, and perhaps most interestingly, robots increasingly have the capacity to deceive the unwary (eg dementia patients) into believing they are truly alive, which may be unfortunate (would you give an infertile woman a robot baby which will never grow up? would you give a paedophile a sex robot that looked like a child, to divert his antisocial urges?). Connectedly, they may manipulate the emotions and alter behaviour in new ways: we are used to kids insisting on buying an entire new wardrobe for Barbie, but what about when they pay more attention to their robot dog (which needs nothing except to be plugged in occasionally) than their real one, so that it starves to death?
All this brought us to a familiar place, of wondering if it might be a good start to consider rewriting Asimov's famous Three Laws of Robotics. But of course Asimov's laws are - surprise!! - science fiction. Robots cannot, and in the foreseeable future will not, be able to understand, act on, be forced to obey and, most importantly, reason with commands phrased in natural language. But - and this came to me lit up like a conceptual lightbulb dipped in Archimedes' imaginary bathtub - those who design robots - and indeed buy them and use them and operate them and modify them - DO understand law and natural language, and social ethics. Robots are not subjects of the law, nor are they responsible agents in ethics; but the people who make them and use them are. So it is laws for roboticists we need - not laws for robots. (My thanks to the wonderful Alan Winfield of UWE for that last bit.)
So here are my Three Laws for Roboticists, as scribbled frantically on the back of an envelope. To give context, we then worked on these rules as a group, particularly a small sub-group including Alan Winfield, as mentioned above, and Joanna Bryson of the University of Bath, who added two further rules relating to transparency and attribution (I could write about those too, but this post is already too long!).
It seems possible that the EPSRC may promote a version of these rules, both in my more precise "legalese" form and in a simpler, more public-communicative style, with commentary: not, obviously, as "laws", but simply as a vehicle to start discussion about robotics ethics, both in the science community and with the general public. It is an exciting thing for a technology lawyer to be involved in, to put it mildly :)
But all that is to come: for now I merely want to stress this is my preliminary version and all faults, solecisms and complete misunderstandings of the cultural discourse are mine, and not to be blamed on the EPSRC or any of the other fabulously erudite attendees. Comments welcome though :)
Edwards' Three Laws for Roboticists
1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill, except in the interests of national security.
2. Humans are responsible for the actions of robots. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.
3. Robots are products. As such they should be designed using processes which assure their safety and security (which does not exclude their having a reasonable capacity to safeguard their integrity).
My thanks again to all the participants for their knowledge and insight (and putting up with me talking so much), and in particular to Stephen Kemp of the EPSRC for organising and Vivienne Parry for facilitating the event.
Phew. Time for t'weekend, Pangloss signing off!
Wednesday, August 11, 2010
Do robots need laws? A summer post :)
I can so use this for the EPSRC Robotics Retreat I am going to in September!! (via io9 with thanks to Simon Bradshaw)
Another slightly more legal bit of robotics that's been doing the rounds is this robots.txt file from the last.fm site. Robots.txt files, for the non-techies, are small text files which give instructions to software agents or bots as to what they are allowed to do on a site. Most typically, they tell Google and other search engines whether or not they are allowed to make copies of the site ("spider" it). No prize at all for the first person to realise what famous laws the last three Disallow lines are implementing :-) (A little sketch of how an obedient bot consults the file follows after the listing.)
User-Agent: *
Disallow: /music?
Disallow: /widgets/radio?
Disallow: /show_ads.php
Disallow: /affiliate/
Disallow: /affiliate_redirect.php
Disallow: /affiliate_sendto.php
Disallow: /affiliatelink.php
Disallow: /campaignlink.php
Disallow: /delivery.php
Disallow: /music/+noredirect/
Disallow: /harming/humans
Disallow: /ignoring/human/orders
Disallow: /harm/to/self
Allow: /
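For anyone who wants to see what "obeying" robots.txt actually looks like in practice, here is a minimal Python sketch using the standard library's robots.txt parser. It is only an illustration: it assumes last.fm still serves the file above, and "PanglossBot" is an invented user-agent name (the User-Agent: * rules apply to any bot not named specifically).

from urllib import robotparser

# Fetch and parse last.fm's robots.txt with Python's built-in parser.
rp = robotparser.RobotFileParser()
rp.set_url("http://www.last.fm/robots.txt")
rp.read()

# A well-behaved bot asks permission before acting on each path.
# "PanglossBot" is a hypothetical user-agent name for illustration.
for path in ("/harming/humans",
             "/ignoring/human/orders",
             "/harm/to/self",
             "/music/Radiohead"):
    ok = rp.can_fetch("PanglossBot", "http://www.last.fm" + path)
    print(path, "->", "allowed" if ok else "disallowed")

The joke, of course, is that compliance with robots.txt is entirely voluntary: the file cannot stop a badly behaved crawler any more than Asimov's laws could stop a badly built robot. It only constrains bots whose makers programmed them to ask first - which rather neatly anticipates the point below about laws for roboticists, not robots.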
This all raises a serious underlying question (sorry), which is: how should the law regulate robots? We already have a surprising number of them. Of course it depends what you call a robot: Wikipedia defines one as "an automatically guided machine which is able to do tasks on its own, almost always due to electronically-programmed instructions".
That's pretty wide. It could mean the software agents or bots that, as discussed above, spider the web, make orders on auction sites like eBay, collect data for marketing and malware purposes, learn stuff about the market, etc - in which case we are already awash with them.
Do we mean humanoid robots? We are, of course, getting there too - see eg the world's leading current example, Honda's ASIMO, which might one day really become the faithful, affordable, un-needy helpmate of our 1950s Campbellian dreams. (Although what happens to the unemployment figures then?? I guess not that much of the market is blue-collar labour anymore?) But we also already live in a world of ubiquitous non-humanoid robots: in the domestic sector, the fabulous Roomba vacuum cleaner, beloved of geeks (and cats); in industry, automated car assembly robots (as in the Picasso ads); and, of course, the emerging military robots.
Only a few days ago, the news broke of the world's first robot allegedly able to feel emotions (although I am sure I heard of research prototypes of this kind at Edinburgh University's robotics group years back). Named Nao, the machine is programmed to mimic the emotional skills of a one-year-old child.
"When the robot is sad it hunches its shoulders forward and looks down. When it is happy, it raises its arms, looking for a hug.
The relationships Nao forms depend on how people react and treat it |
When frightened, it cowers and stays like that until it is soothed with gentle strokes on the head.The relationships Nao forms depend on how people react and treat, and on the amount of attention they show."
Robots of this kind could be care-giving companions not only for children, but perhaps also in the home or in care homes for lonely geriatrics and long-term invalids, whose isolation is often crippling. (Though again I Ludditely wonder if it wouldn't be cheaper just to buy them a cat.)
Where does the law come into this? There is of course the longstanding fear of the Killer Robot: a fear which Asimov's famous first law of robotics was designed to repel. (Smart bombs are another possibility which already exist to some extent - oddly they don't seem to create the same degree of public distrust and terror, only philosophical musings in obscure B-movies.) But given that general-purpose, ambulant, humanoid, artificially intelligent robots are still very much in the lab, only Japan seems so far to have even begun to think about creating rules securing the safety of "friendly AIs" in real life, and even there a quick Google suggests no further progress since draft guidelines were issued in 2007.

But the real legal issues are likely to be more prosaic, at least in the short term. If robots do cause physical harm to humans (or, indeed, property), at the moment the problem seems more akin to one for the law of torts, or maybe product liability, than murder or manslaughter. We are a long way away yet from giving rights of legal personality to robots. So there may be questions like: how "novel" does a robot have to be before there will be no product liability because of the state-of-the-art defence? How much does a robot have to have a capacity to learn and determine its own behaviours before what it does is not reasonably foreseeable by its programmer? Do we need rules of strict liability for robot behaviour by its "owners" - as Roman law did, and Scots law still does, for animals, depending on whether they are categorised as tame or wild? And should that liability fall on the designer of the software, the hardware, or the "keeper", ie the person who uses the robot for some useful task? Or all three? Is there a better analogy to be drawn from the liability of master for slave in the Roman law of slavery, as Andrew Katz brilliantly suggested at a GikII a while back?
In the short(er) term, though, the key problems may be around the more intangible but important issue of privacy. Robots like Nao above are likely to be extensively used as aids to patients in hospitals, homes and care homes; this is already happening in Japan and South Korea and, judging from some conference papers I have heard, even in the US. Such robots are useful not just because they give care but because they can monitor, collect and pass on data. Is the patient staying in bed? Are they eating their food, taking their drugs and doing their exercises? Remote sensing by telemedicine is already here; robot aides take it one step further. All very useful, but what happens to the right to refuse to do what you are told, with patients who already have limited autonomy? Do we want children to be able to remotely surveil their aged parents 24/7 in nursing homes, rather than trek back to visit them, as we are anecdotally told already occurs in the likes of Japan?
There are real issues here about consent, and welfare vs autonomy, which will need to be thrashed out. More worryingly still, information collected about patients could be directly channelled to drug or other companies - perhaps in return for a free robot. We already sell our own personal data to pay for web 2.0 services without thinking very hard about it - should we sell the data of our sick and vulnerable too?
Finally, robots will be a hard problem for data protection. If robots collect and process personal data, eg of patients, are they data controllers or processors? Presumably the latter, in which case very few obligations pertain to them except concerning security. This framework may need adjusting, as the ability of the human "owner" to supervise what they do may be fragile, given learning algorithms, bugs in software and changing environments.
What else do we need to be thinking about?? Comments please :-)