William Safire tells an amazing story in his column in today's New York Times. He says that in the early 1980's, the U.S. government hid malicious code in oil-pipeline-control software that the Soviet Union then stole and used to control a huge trans-Siberia pipeline. The malicious code manipulated the pipeline's valves and other controls in a way that caused a huge explosion, ruining the pipeline.
After that, Safire reports, "all the software [the Soviet Union] had stolen for years was suddenly suspect, which stopped or delayed the work of thousands of worried Russian technicians and scientists."
I should emphasize that there is as yet no corroboration for this story, and that it appears in an editorial-page column and not on the news pages of the Times (where it would presumably be subject to more stringent fact-checking, especially in light of the Times' recent experience).
From a purely technical standpoint, this sort of thing is definitely possible. Any time you rely on somebody else to write your software, especially software that controls dangerous equipment, you're trusting that person not to insert malicious code. Whether it's true or not, Safire's story is instructive.
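The risk is easy to see even in a tiny sketch. The following toy example is entirely hypothetical (the function, names, and numbers are invented for illustration); it shows how a "logic bomb" hidden by a software supplier can pass ordinary testing and still misbehave long after deployment:

```python
# Hypothetical toy illustration of a "logic bomb" in control software.
# The function behaves normally under testing, then misbehaves once a
# trigger condition is met -- here, a simple cycle counter.

def set_valve_pressure(requested_psi, cycle_count):
    """Return the pressure actually applied to a (fictional) valve."""
    SAFE_LIMIT = 500  # invented safety ceiling, in psi
    if cycle_count > 10_000:          # hidden trigger: after many cycles...
        return requested_psi * 4      # ...silently over-pressurize
    return min(requested_psi, SAFE_LIMIT)  # normal, safe behavior

# Under ordinary acceptance testing the function looks correct:
print(set_valve_pressure(450, cycle_count=100))     # 450
# Long after deployment, the same call becomes dangerous:
print(set_valve_pressure(450, cycle_count=20_000))  # 1800
```

Nothing in routine testing would exercise the trigger branch, which is why reviewing code from untrusted sources, not just testing it, matters.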
Posted by Edward W. Felten at 01:24 PM | permanent link | Comments (5) | Followups (0)
The Tennessee Super-DMCA is back. Here's the text of the latest version.
Like the previous version, which died in a past legislative session, this bill looks like an attempt to broaden existing bans on unauthorized access to cable TV and phone service. The old version was much too broad. The new version is worded more carefully, with exceptions for "multipurpose devices". I haven't read it carefully enough to tell whether there are remaining problems.
Tennessee Digital Freedom is a good source for information and updates on this bill.
Posted by Edward W. Felten at 12:37 PM | permanent link | Comments (0) | Followups (0)
Julian Dibbell, at TerraNova, points out an issued U.S. Patent that seems to cover digital property systems of the type used by many multiplayer online games:
How naive must one be, in this day and age, to spend months debating the question of virtual property without once wondering whether the question itself (or at any rate the phenomenon underlying it) wasn't already somebody's intellectual property?
Speaking only for myself, I confess the thought never crossed my mind. Not until last week, that is, when I received a friendly email from veteran game designer Ron Martinez, who alerted me to U.S. patent 6,119,229, "Virtual Property System," filed April 1997, granted September 2000, and jointly held by Martinez, Greg Guerin, and the famous cryptographer Bruce Schneier.
As if it weren't freaky enough that someone could own the concept of digital property, check this out: the patent arguably covers the U.S. Patent system itself, as administered by the PTO, at least with respect to patents on network technology.
Don't believe me? Let's read the text of Claim 1 of the patent against the U.S. Patent system. I'll intersperse the language of the claim (in ordinary typeface) with explanations of where each element can be found in the patent system (in italics). Ready? Here goes.
What is claimed is:
1. A digital object ownership system [the U.S. patent system], comprising:
a plurality of user terminals, each of said user terminals being accessible by at least one individual user [PCs on the Internet];
at least one central computer system, said central computer system being capable of communicating with each of said user terminals [the Patent Office's servers];
a plurality of digital objects [U.S. patents], each of said digital objects having a unique object identification code [the patent number], each of said digital objects being assigned to an owner [patents have owners], said digital objects being persistent such that each of said digital objects is accessible by a particular user both when said user's terminal is in communications with said central computer system and also when said terminal is not in communication with said central computer system [patents still exist even when users aren't reading them on the Net], said object having utility [the ability to bring an infringement suit] in connection with communication over a network [assuming the patent covers subject matter connected to communication over a network], said utility requiring the presence of the object identification code and proof of ownership [infringement suit requires the use of the patent number and a proof of ownership of the patent];
wherein said objects are transferable among users [patent ownership can be transferred]; and
wherein an object that is transferred is assigned to the new owner [when transferred, patent belongs to the new owner].
Yikes! Perhaps the patent system itself is prior art that would invalidate this claim, or at least narrow its scope. This is too much to contemplate on a Friday afternoon.
Posted by Edward W. Felten at 04:30 PM | permanent link | Comments (4) | Followups (1)
Reflecting on the recent argument about Howard Dean's old smartcard speech, Larry Lessig condemns the kind of binary thinking that would divide us all into two camps, pro-privacy vs. pro-national-security. He argues that Dean's balanced speech was (perhaps deliberately) misread by some, with the goal of putting Dean into the extreme pro-national-security/anti-privacy camp.
There is a special circle in hell reserved for those who try to destroy the middle ground on issues like this. Dean was clearly trying to take a balanced position, and it's unfair to ignore the pro-privacy part of his speech to paint him as anti-privacy. Dean was advocating a reasonable balance.
But it's not enough simply to want balance. You also have to figure out how to achieve it, or at least approximate it, by adjusting the available policy levers. And that can be difficult, especially if those levers are weak or hard to understand. Opting for balance is not the end of the policy process, but the beginning.
Rather than accusing politicians like Dean of wanting the worst for America, we can do much more good by helping them understand what the policy levers do and why it might not be such a good idea to pull that one they're reaching for.
Posted by Edward W. Felten at 01:56 PM | permanent link | Comments (2) | Followups (1)
A group of ex-NSA security experts, hired by the state of Maryland to evaluate the state's Diebold electronic voting systems, found the systems riddled with basic security flaws. This confirmed two previous studies, one led by Johns Hopkins researchers and one by SAIC. Here are some excerpts from John Schwartz's New York Times story:
Electronic voting machines made by Diebold Inc. that are widely used in several states have such poor computer security and physical security that an election could be disrupted or even stolen by corrupt insiders or determined outsiders, according to a new report presented today to Maryland state legislators.
...
The authors of the report said that they had expected a higher degree of security in the design of the machines. "We were genuinely surprised at the basic level of the exploits" that allowed tampering, said Mr. Wertheimer, a former security expert for the National Security Agency.
William A. Arbaugh, an assistant professor of computer science at the University of Maryland and a member of the Red Team exercise, said, "I can say with confidence that nobody looked at the system with an eye to security who understands security."
Read the second (on-line) page of the NYT story for a litany of problems the team found. In short, they could easily corrupt individual voting machines so that they counted votes for the wrong candidate or not at all; they could introduce false vote counts for whole precincts into the central vote-tallying server; or they could use well-known hostile exploits to seize control of the servers remotely.
Diebold's response?
In a statement released today, Bob Urosevich, president of Diebold Election Systems, said this report and another by the Science Applications International Corporation "confirm the accuracy and security of Maryland's voting procedures and our voting systems as they exist today."
Mr. Urosevich added: "With that said, in our continued spirit of innovation and industry leadership, there will always be room for improvement and refinement. This is especially true in assuring the utmost security in elections."
University of Maryland professor Bill Arbaugh, one of the study participants and a genuine security expert, gets the last word: "It seemed everywhere we scratched, there was something that's pretty troubling."
Posted by Edward W. Felten at 05:57 AM | permanent link | Comments (2) | Followups (1)
Declan McCullagh at CNet news.com criticizes a speech given by Howard Dean about two years ago, in which Dean called for aggressive adoption of smartcard-based state driver's licenses and smartcard readers. Declan highlights the privacy-endangering aspects of the smartcard agenda, and paints Dean as a hypocrite for pushing that agenda while positioning himself as pro-privacy.
Larry Lessig (among others) argues that Declan mischaracterized Dean's speech, and urges people to read the text of Dean's speech. Others have compared this incident to Declan's infamous role in manufacturing the "Al Gore claims to have invented the Internet" meme back in 2000.
There is certainly a disconnect between the tone of Declan's article and that of Dean's speech. Reading the speech, we see Dean genuflecting properly, and at length, to the importance of privacy. We don't hear about that in Declan's article.
But Declan's omissions aren't the whole story. The first half of Declan's piece quotes extensively from Dean's speech, and it portrays accurately the technical proposal that Dean was endorsing. Declan's reaction to that technical agenda is not unreasonable. For example, a National Academy study report on national ID technologies took a position closer to Declan's than to Dean's.
The fact is that there is a deep disconnect between the different sections of Dean's speech. It's hard to reconcile the privacy-is-paramount part of the speech with the smartcards-everywhere part. At least, it's hard to reconcile them if you really understand the technology. Dean makes a compelling argument that computer security is important, and he makes an equally compelling argument in favor of preserving privacy. But how can we have both? Enter the smartcard as deus ex machina. It sounds good, but unfortunately it's not a technically sound argument.
Now, nobody expects state governors to understand technology well enough to spot the technical flaws in Dean's speech. Probably, nobody advising Dean at the time had the knowledge to notice the problem. That's not good; but it hardly makes Dean unique.
At bottom, what we have here is a mistake by Dean, in deciding to give a speech recommending specific technical steps whose consequences he didn't fully understand. That's not good. But on the scale of campaign gaffes, this one seems pretty minor.
[Disclaimer: My longstanding policy is to avoid partisan politics on this blog. I'm commenting on this issue because of my expertise in computer security, and not to make a political point or to urge anyone to vote for or against Dean.]
Posted by Edward W. Felten at 04:14 PM | permanent link | Comments (4) | Followups (1)
Some people have argued that the Senate file pilfering could not have violated the law, because the files were reportedly on a shared network drive that was not password-protected. (See, for instance, Jack Shafer's Slate article.) Assuming those facts, were the accesses unlawful?
Here's the relevant wording from the Computer Fraud and Abuse Act (18 U.S.C. 1030):
Whoever ... intentionally accesses a computer without authorization or exceeds authorized access, and thereby obtains ... information from any department or agency of the United States ... shall be punished as provided in subsection (c) ...
...
[T]he term ''exceeds authorized access'' means to access a computer with authorization and to use such access to obtain or alter information in the computer that the accesser is not entitled so to obtain or alter
To my non-lawyer's eye, this looks like a judgment call. It seems not to matter that the files were on a shared server or that the staffers may have been entitled to access other files on that server.
The key issue is whether the staffers were "entitled to" access the particular files in question. And this issue, to me at least, doesn't look clear-cut. The fact that it was easy to access the files isn't dispositive -- "entitled to access" is not the same as "able to access". (An "able to access" exception would render the provision vacuous -- a violation would require someone to access information that they are unable to access.)
The lack of password protection cuts in favor of an entitlement to access, if failure to protect the files is taken to indicate a decision not to protect them, or at least an indifference to whether they were protected. But if the perpetrators knew that the failure to use password protection was a mistake, that would cut against entitlement. The rules and practices of the Senate seem relevant too, but I don't know much about them.
The bottom line is that unsupported claims that the accesses were obviously lawful, or obviously unlawful, should be taken with a large grain of salt. I'd love to hear the opinion of a lawyer experienced with the CFAA.
(Disclaimer: This post is only about whether the accesses were lawful. Even if lawful, they appear unethical.)
Posted by Edward W. Felten at 09:10 AM | permanent link | Comments (9) | Followups (1)
Charlie Savage reports in today's Boston Globe:
Republican staff members of the US Senate Judiciary Committee infiltrated opposition computer files for a year, monitoring secret strategy memos and periodically passing on copies to the media, Senate officials told The Globe.
From the spring of 2002 until at least April 2003, members of the GOP committee staff exploited a computer glitch that allowed them to access restricted Democratic communications without a password. Trolling through hundreds of memos, they were able to read talking points and accounts of private meetings discussing which judicial nominees Democrats would fight -- and with what tactics.
We already knew there were unauthorized accesses; the news here is that they were much more extensive than had previously been revealed, and that the results of the snooping were leaked to the media on several occasions.
Committee Chairman Orrin Hatch (a Republican) has strongly condemned the accesses, saying that he is "mortified that this improper, unethical and simply unacceptable breach of confidential files may have occurred on my watch."
The accesses were possible because of a technician's error, according to the Globe story:
A technician hired by the new judiciary chairman, Patrick Leahy, Democrat of Vermont, apparently made a mistake [in 2001] that allowed anyone to access newly created accounts on a Judiciary Committee server shared by both parties -- even though the accounts were supposed to restrict access only to those with the right password.
An investigation is ongoing. It sounds like the investigators have a pretty good idea who the culprits are. Based on Sen. Hatch's statement, it's pretty clear that people will be fired. Criminal charges seem likely as well.
UPDATE (Friday, January 23): Today's New York Times runs a surprisingly flat story by Neil A. Lewis. The story seems to buy the accused staffer's lame rationalization of the accesses, and it treats the investigation, rather than the improper acts being investigated, as the main news. The headline even refers, euphemistically, to files that "went astray". How much of this is sour grapes at being beaten to this story by the Globe?
Posted by Edward W. Felten at 02:13 PM | permanent link | Comments (3) | Followups (1)
Four respected computer scientists, members of a government-commissioned study panel, have published a report critical of SERVE, a proposed system to let overseas military personnel vote in elections via a website. (Links: the report itself; John Schwartz story at N.Y. Times; Dan Keating story at Washington Post.) The report's authors are David Jefferson, Avi Rubin, Barbara Simons, and David Wagner. The problem is not in the design of the voting technology itself, but in the simple fact that it is built on ordinary PCs and the Internet, leaving it open to all of the standard security attacks that ordinary systems face:
The real barrier to success is not a lack of vision, skill, resources, or dedication; it is the fact that, given the current Internet and PC security technology, and the goal of a secure, all-electronic remote voting system, the [program] has taken on an essentially impossible task. There really is no good way to build such a voting system without a radical change in overall architecture of the Internet and the PC, or some unforeseen security breakthrough.
SERVE advocates have two responses. The first is simple stonewalling (for example, saying "We have addressed all of those problems", which is just false). I'll ignore the stonewalling. The second response, which does have some force, says that SERVE is worth pursuing as an experiment. An experiment would have some value in understanding user-interface issues relating to e-voting; and the security risk would be acceptable as long as the experiment was small.
The authors of the report disagree, because they worry that the "experiment" would not be an experiment at all but just the first phase of deployment of a manifestly insecure system. If an experiment is done, and no fraud occurs -- or at least no fraud is detected -- this might be taken as showing that the system is secure, which it clearly is not.
This reminds me of an analogy used by the physicist Richard Feynman to criticize NASA's safety culture after the Challenger space shuttle accident. (Feynman served on the Challenger commission, and famously demonstrated the brittleness of the rubber O-ring material by dunking it in his glass of ice water during a hearing.) Feynman likened NASA to a man playing Russian Roulette. The man spins the cylinder, puts the gun to his head, and pulls the trigger. Click; he survives. "Aha!" the man says, "This must be safe."
UPDATE (Saturday, January 24): The Washington Post site has a chat with Avi Rubin, one of the report's authors.
Posted by Edward W. Felten at 06:44 AM | permanent link | Comments (6) | Followups (0)
Every so often, somebody gets the idea that computers should detect viruses in the same way that the human immune system detects bio-viruses. Faced with the problem of how to defend against unexpected computer viruses, it seems natural to emulate the body's defenses against unexpected bio-viruses, by creating a "digital immune system."
It's an enticing idea -- our immune systems do defend us well against the bio-viruses they see. But if we dig a bit deeper, the analogy doesn't seem so solid.
The human immune system evolved to stave off viruses that themselves arose by natural evolution. Confronted by an engineered bio-weapon, our immune systems don't do nearly so well. And computer viruses really are more like bio-weapons than like naturally evolved viruses. Computer viruses, like bio-weapons, are designed by people who understand how the defensive systems work, and are engineered to evade the defenses.
As far as I can tell, a "digital immune system" is just a complicated machine learning algorithm that tries to learn how to tell virus code apart from nonvirus code. To succeed, it must outperform the other machine learning methods that are available. Maybe a biologically inspired learning algorithm will turn out to be the best, but that seems unlikely. In any case, such an algorithm must be justified by performance, and not merely by analogy.
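To make the point concrete, here is a deliberately naive sketch of the kind of learned classifier a "digital immune system" boils down to. Everything here is invented for illustration (the toy training data, the bigram features, the overlap score); real detectors use far richer features, but they are judged the same way, by error rate, not by resemblance to biology:

```python
# Toy sketch: a "digital immune system" reduced to its essentials -- a
# classifier that learns to separate virus code from nonvirus code.
# All data and the scoring rule are hypothetical.

from collections import Counter

def byte_bigrams(code: bytes) -> Counter:
    """Count overlapping 2-byte substrings, a crude code 'feature'."""
    return Counter(code[i:i + 2] for i in range(len(code) - 1))

def train(samples):
    """samples: list of (code, is_virus). Returns per-class bigram counts."""
    counts = {True: Counter(), False: Counter()}
    for code, is_virus in samples:
        counts[is_virus] += byte_bigrams(code)
    return counts

def classify(counts, code: bytes) -> bool:
    """Label by which class's training bigrams overlap the sample more."""
    grams = byte_bigrams(code)
    score = {c: sum(min(n, counts[c][g]) for g, n in grams.items())
             for c in (True, False)}
    return score[True] > score[False]

# Invented two-sample "training set":
training = [(b"mov eax, ebx; jmp payload", True),
            (b"print('hello'); return 0", False)]
model = train(training)
print(classify(model, b"jmp payload"))   # True
```

An adversary who knows the feature set can simply write virus code whose bigrams look like the benign class; that engineered evasion, not learning ability, is where the immune-system analogy breaks down.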
Posted by Edward W. Felten at 11:56 AM | permanent link | Comments (9) | Followups (0)