My posting on the Copenhagen conference, and its downgrading of global warming, provoked a neat hostile comment: you (Posner) criticize these economists for opining outside their fields, but isn’t that what you do all the time? Well, yes, but here’s my defense: you don’t have to be an expert in a field to criticize the experts, provided you know enough about the field to understand what the experts are saying and writing, to be able to spot internal contradictions and other logical lapses, sources of bias, arguments obviously not based on knowledge, carelessness in the use of evidence, lack of common sense, and mistaken predictions. These are the analytical tools that judges, who in our system are generalists rather than specialists, bring to the task of adjudicating cases in specialized fields of law.
I don’t have to be a climate scientist to realize that assembling a group of economists none of whom is a specialist in the science, politics, or economics of global warming, and asking them to reach a consensus on where to rank global warming among the world’s worst ills without conducting any research of their own, but instead by discussing a position paper commissioned from another economist by the organizer of the conference, is not a rational procedure. And there is more (see the Copenhagen Consensus website). The organizer, Bjorn Lomborg, a statistician and controversialist, not an economist, gave the conference participants a week to discuss 17 projects (three involving climate, the others involving health issues, malnutrition, water purity, and other disparate topics) and rank them. The results were publicized before any analytical or evidentiary backup was, and the very idea of pressing for consensus (unanimity) suggests, as in the case of the 9/11 Commission’s similar consensus drive, a basic lack of intellectual seriousness.
I thank commenter Craig for discovering that my review of the 9/11 Commission’s report, to be published tomorrow in the New York Times Book Review section, is now online. The review was written before Senator Roberts’ proposal to break up the CIA, but offers several reasons for thinking that the failure to prevent the 9/11 attacks, if it was indeed a culpable failure rather than an inevitable one, was primarily a managerial rather than a structural failure.
Issues of government organization are baffling. Where you have a boundary, you have a turf war; and if you erase the boundary, you lose diversity and competition, and with it the power of intelligent control. If only one person reports to you, you’re pretty much at his mercy; he’ll tell you just as much as he wants to.
I suggest in my review (despite my general skepticism about structural solutions) carving the domestic intelligence function out of the FBI and creating a stand-alone domestic intelligence agency, similar to England’s MI5; and I point out that MI5 and MI6 (England’s counterpart to the CIA) work well together because they’re both intelligence agencies. The FBI doesn’t work well with the CIA, because the FBI is not an intelligence agency, but a criminal investigation agency, in other words a plainclothes police department.
MI5 has no power of arrest; the power to arrest terrorists is lodged in the Special Branch of Scotland Yard, Scotland Yard being England’s counterpart to the FBI. Presumably MI5 has some of the same problems in coordinating with the Special Branch as the CIA does in coordinating with the FBI; in both cases, you have an intelligence agency working with a criminal investigation agency. But I think--though I could well be wrong--that a section of the FBI that was, like the Special Branch of Scotland Yard, specialized to arresting and otherwise assisting in the criminal prosecution of terrorists would make a better fit with a domestic intelligence agency modeled on MI5 than the current counterterrorism branch of the FBI makes with the rest of the FBI. Because the dominant culture of the FBI is and probably always will be that of criminal investigation, intelligence officers lodged in the FBI will always seem odd men out; a person wanting a career in intelligence will not be attracted to working for a police department. But it is quite otherwise with someone wanting a career in the criminal investigation and prosecution of terrorists, a perfectly respectable and exciting field of police work. Such a unit in the FBI could hold its head high, and would at the same time have strong incentives to cooperate with a domestic intelligence agency.
Global Warming (III): The Public Intellectuals Weigh in
The following description of a recent conference on the world’s worst ills, featuring several economists who had been awarded the Nobel Prize in economics, enables me to sink global-warming skeptics and academic public intellectuals with only one salvo.
“An international panel of economists brought together to rank the world’s worst ills ended a weeklong conference in Copenhagen Saturday by listing HIV/AIDS, hunger, trade barriers and malaria as the world’s most pressing problems while relegating global warming to the near-bottom of the list. The eight economists at the Copenhagen Consensus — among them Nobel laureates Robert Fogel, Douglass North and Vernon Smith — were invited by maverick environmentalist Bjorn Lomborg to spend a hypothetical $50 billion in ways that would produce the most results. The panel unanimously gave HIV/AIDS top priority and recommended spending $27 billion to fight it, saying that although the costs were ‘considerable, they are tiny in relation to what can be gained.’ The issue deemed second in importance, malnutrition, was allocated $12 billion. The panel ranked trade liberalization third but allocated no funds to expand it. Fourth-ranked malaria received $13 billion.”
Each of these illuminati was paid $30,000 to attend the conference.
The AIDS epidemic is serious, but it does not threaten to destroy civilization; it is also readily controllable, without medical intervention, by avoidance of promiscuous sex. Malnutrition and malaria are serious problems too, but one effect of eliminating them would be to cause a population surge, which would in turn increase global warming, because added population means added energy demands (met primarily by burning fossil fuels) and added food demands (met in part by deforestation). Unlike AIDS, malnutrition, and malaria, global warming, especially if abrupt, could terminate civilization.
As I mentioned in a previous posting, the global climate equilibrium is fragile. At the end of a cold period known as the “Younger Dryas,” some 11,000 or 12,000 years ago, the earth’s temperature rose by about 14 degrees Fahrenheit in the space of roughly a decade. The climate was very cold when the surge started (it was the end of the last ice age), so no harm to human beings was done (rather the contrary); but imagine a similar surge today. Suppose the ice sheets that cover Greenland and Antarctica melted, raising ocean levels to the point at which most coastal regions, including many of the world’s largest cities, would be inundated. Or suppose the dilution of salt in the North Atlantic, as a result of the melting of the north polar ice cap (the ice of which is largely salt free), diverted the Gulf Stream away from the continent of Europe. The dense salty water of the North Atlantic blocks the Atlantic currents from carrying warm water from the South Atlantic due north to the Arctic, instead deflecting the warm water east to Europe; that warm-water current is the Gulf Stream. If reduced salinity in the North Atlantic allowed the warm water to resume its natural northward path, bypassing Europe, the climate of the entire European continent would become like that of Siberia, and Europe’s agriculture would be destroyed.
Worse is possible. I mentioned in a previous posting the possibility of a runaway methane greenhouse effect, and here I add that it might be augmented by the effect of higher atmospheric temperatures in increasing the amount of water vapor in the atmosphere, because water vapor is another greenhouse gas. It is even conceivable that because increased rainstorms mean more clouds, and some clouds prevent sunlight from reaching the earth without blocking the heat reflected from the earth’s surface, global warming could (paradoxically) precipitate a new ice age—or worse. Falling temperatures might cause more precipitation to take the form of snow rather than rain, leading to a further drop in surface temperatures and creating more ice, which reflects sunlight better than seawater and earth (both of which are darker than ice) do. Surface temperatures might fall so far as to engender a return to “snowball earth.” The snowball-earth hypothesis is that 600 million years ago, and maybe at earlier times as well, the earth, including the equatorial regions, was for a time entirely covered by a layer of ice several kilometers thick except where the tips of volcanoes peeped through.
The hypothesis is controversial. It is unclear whether the conditions required for the initiation of snowball earth were ever present, and whether current or foreseeable conditions could cause such initiation. What is suggestive about the example is the ominous tipping or feedback effect that it illustrates (the domain of chaos theory). A relatively small change, such as an increase in rainfall caused by global warming, or an increase in the fraction of precipitation that takes the form of snow rather than rain, could trigger a drastic temperature spiral. The runaway greenhouse effect involving methane illustrates the same process in reverse, and, as in the rainfall example, one spiral can trigger the opposite spiral; that is the essence of a chaotic system.
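The tipping logic can be made concrete with a toy numerical sketch (an illustration of feedback in general, not a climate model; the function name, gain values, and step count are my own, chosen only to show the structure):

```python
# Toy linear-feedback model (illustrative only, not a climate model).
# A constant forcing is applied each step, and the accumulated response
# feeds back on itself with a fixed gain.
def response(forcing, gain, steps=200):
    delta = 0.0
    for _ in range(steps):
        delta = forcing + gain * delta
    return delta

# gain < 1: the response converges, to forcing / (1 - gain)
stable = response(forcing=1.0, gain=0.5)    # settles near 2.0

# gain >= 1: each increment amplifies the next -- a runaway spiral
runaway = response(forcing=1.0, gain=1.1)   # grows without bound
```

The structural point: whether a small push damps out or amplifies without limit turns entirely on the feedback gain--which is precisely what is uncertain about the climate.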
The probability of abrupt global warming that would precipitate the disasters I have described is unknown, but presumably small; yet economists know that to figure the expected cost of a risk, you must consider the consequences if the risk materializes, and not just the low probability that it will materialize. This point was missed by the “Copenhagen Consensus.” Nor is there any indication that the participants had studied the relevant science or conducted any cost-benefit analyses in deciding how to allocate their hypothetical $50 billion. Nor, I believe, were any of the economists experts in the economics of climate change. Economics is a large field, and the fact that one has received a Nobel Prize for work in economic history (Fogel), experimental economics (Smith), or the history of institutions (North) is no warrant of competence to opine on the economics of climate change.
In an article in WIRED called Insanely Destructive Devices, Larry Lessig discusses one of the greatest of possible techno-disasters, a terrorist-engendered smallpox epidemic. What gives it a technological dimension is that experiments have shown that genetic alteration of the smallpox virus, utilizing biotechnological techniques and equipment that are inexpensive and widely available, including in Third World countries, could make the “juiced up” virus not only more lethal than “ordinary” smallpox (which kills a “mere” 30 percent of its victims) but also, and more important, impervious to smallpox vaccines (and there is no cure for smallpox). Smallpox is highly contagious, and because its initial symptoms are not distinctive, the disease could spread so far before it was discovered--for example, if released by aerosolizers placed in major airports around the world--that quarantining would be instituted too late to be effective, even if health workers and security personnel could be induced, without vaccine protection, to enforce a quarantine. (There is a full discussion in Chapter 1 of my forthcoming book Catastrophe: Risk and Response.)
Lessig despairs of being able to come up with a technological or regulatory solution to this threat. Instead he suggests, with unmistakable reference to the foreign policy of the Bush Administration, that we should forswear “our present course of unilateral cowboyism,” which is “produc[ing] generations of angry souls seeking revenge on us”; we should “focus on ways to eliminate the reasons to annihilate us.” I don’t think so. The “reasons” are too various. Think of the Unabomber; what could we have done to remove his particular grievance? Think of the Islamist terrorists, for whom Western values, including the emancipation of women, are our greatest offense. And of course once we decide that the way to prevent terrorism is to change our way of life, we create new incentives for people who want us to change our way of life to resort to terrorism.
Lessig’s is a counsel of despair, and a premature one. Although it is extremely difficult to prevent bioterrorism, it should not be impossible to reduce the risk of it substantially. Measures include an international organization patterned on the International Atomic Energy Agency (a solution the Administration has resisted), tighter restrictions on access to laboratory supplies of lethal pathogens and toxins, and such simple measures as not allowing airports to install aerosol fresheners, as they are doing--for nothing would be easier than for a terrorist to replace such a freshener with an aerosol dispenser containing a lethal pathogen.
Yet there is the standard, and very serious, dual-use dilemma. To develop effective vaccines against variants of lethal pathogens such as the smallpox virus, we need to create samples of those variants--the “juiced up,” bio-reengineered bacteria and viruses. But such samples are potential weapons, and the techniques used to create them are techniques that bioterrorists could utilize. The more people we train to create new vaccines, the more people there are with knowledge that can be put to evil uses.
The solution to this dilemma is not obvious, but one possibility is to shift much of the research on new vaccines from open university facilities to closed university affiliates, such as MIT’s Lincoln Labs, which conduct classified research under more secure conditions than are found or feasible in the ordinary university setting.
Good comments, mostly supportive, though some skeptical along these lines: climate models are complex, climate science is uncertain, the experts may be wrong. All true; but reading the skeptical literature, I am reminded of the debates in the 1960s over the effects of cigarette smoking on human health. The evidence for serious ill effects was already very strong, but there were skeptics, some financed by the tobacco industry, who said such things as: the evidence is statistical; the mechanism by which nicotine and tars cause changes in lung tissue is not well understood; in short, we can’t be certain that there are these effects--the implication being that we should do nothing. Similar points are made today, often by energy companies or persons in their pay, with the similar insinuation that, given uncertainty, we should do nothing.
That is a non sequitur. We rarely have the luxury of being able to act on certainties; you’d be a fool if, credibly informed that unless you had an operation to repair an aneurysm you had a 99 percent chance of dying within a week, you responded that you only act when you’re certain. In my last posting, I speculated that a 1 percent chance of criminal punishment might deter certain copyright violations, and I didn’t mean that only the irrational would be deterred.
What would be irrational would be to conclude, from the fact that a minority of scientists deride global-warming fears, that we should ignore the problem. Indeed, if you look at their grounds for skepticism, you may become more alarmed about global warming rather than less so. For what you will learn is that their skepticism is based mainly on the existence of profound uncertainties about climate, and those uncertainties cut both ways, implying added rather than diminished risk. For example, skeptics point out that in the earth’s prehistory there have been periods (one roughly 10,000 years ago) in which the concentration of carbon dioxide in the atmosphere spiked, even though cavemen didn’t drive SUVs. Yes, and should one of those non-human-induced spikes coincide with our human-induced spike, we will be in real trouble.
I mentioned in passing, in the preceding posting, risk aversion. If you would rather pay $100 certain than run a 1 percent risk of a $9,999 loss, even though the expected cost of such a risk is only $99.99, then you’re risk averse (think of the $100 as an insurance premium). The greater the variance in possible outcomes, the more upset the risk averse are likely to be. The more uncertainty there is about climate, the greater the variance in possible consequences of increasing the atmospheric concentration of carbon dioxide and of other greenhouse gases. (Methane, for example, is even more heat-retentive than carbon dioxide, and is being released into the atmosphere in increased quantity because of the melting of the Alaskan and Siberian permafrost--and you can see what a dangerous feedback effect is possible, as more methane in the atmosphere raises surface temperatures, which melts more permafrost, releasing still more methane.) So people who are risk averse--and that is most of us when we are facing potential disaster on the scale that global warming might inflict--will not be reassured by people who ground their global-warming skepticism in nothing solider than a reminder that other things besides human activity affect climate; those other things seem as likely to exacerbate the effects of human activity as to offset them.
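The arithmetic of the insurance example can be spelled out (a minimal sketch using the numbers above; the function name is my own):

```python
# Expected cost of a risk = probability of the loss x size of the loss.
def expected_cost(probability, loss):
    return probability * loss

ec = expected_cost(0.01, 9_999)   # about $99.99
premium = 100.00                  # the certain payment you prefer to the risk
risk_premium = premium - ec       # the extra a risk-averse person will pay
```

The greater the variance in possible outcomes, the larger the premium a risk-averse person will pay over the bare expected cost--which is why uncertainty about climate should alarm rather than soothe.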
…the Justice Department is conducting criminal investigations of file-sharing networks. This development illustrates a point I made in a previous posting (a Lessig point) about the relationship of substitution between law and technology. The Grokster decision last week, if it holds up, will facilitate circumvention of copyright law by file sharing, by placing the sellers of the software for such sharing beyond the reach of copyright law. The liability of the sharers themselves is not affected; and, as we know, hundreds of them have already been sued by the recording industry. But copyright law also authorizes criminal sanctions. The Justice Department has not yet indicated an interest in prosecuting individuals who download just an occasional copyrighted song. But it is sometimes possible to deter unlawful behavior by a very slight threat of prosecution. Economists have the useful concept of an “expected cost.” If there is a 1 percent probability that you will incur a $100 cost, the expected cost is $1 ($100 x .01); a risk-neutral person will spend up to $1 to avoid it, and a risk-averse person somewhat more. If the expected cost is an expected cost of punishment, it may be very great even if the probability of punishment is slight: many kids will stop downloading copyrighted songs if they think there is a 1 percent probability that they will be sent to prison for 6 months. So we can think of what the DoJ is doing (whether you like it or not) as the law pushing back against technology--trying to defeat technological circumvention of law by jacking up legal sanctions, in this instance possibly to a higher level than the RIAA can achieve with its civil suits.
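The deterrence arithmetic can be sketched the same way (the $50,000 figure is a hypothetical dollar equivalent of six months in prison, supplied by me purely for illustration; it appears nowhere above):

```python
# Expected punishment cost = probability of prosecution x severity of the
# sanction (expressed here as a dollar equivalent).
def expected_sanction(p_prosecution, sanction_dollar_equivalent):
    return p_prosecution * sanction_dollar_equivalent

deterrent = expected_sanction(0.01, 50_000)   # roughly $500 of expected cost
value_of_copying = 20.00                      # assumed gain from the copied songs
deterred = deterrent > value_of_copying       # a slight threat can still deter
```

The point is that a severe sanction keeps the expected cost high even when the probability of prosecution is tiny, so a handful of prosecutions can deter far more broadly than civil suits priced at the value of the songs.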
Good comments, e.g., by “yozhik” and by Prof. Castronova, the economist of the virtual world phenomenon; and for a rich discussion of the laws that govern or should govern virtual worlds, see Balkin. Also a paper by Lastowka and Hunter.
…is the name of a 1998 novel by Dan Brown, the author of The Da Vinci Code. Digital Fortress is a cyberthriller about the National Security Agency (NSA), which monitors and intercepts electronic communications worldwide. In the book as in real life, the agency is concerned with encryption technologies that can prevent it from decoding the communications that it intercepts. (One of the triumphs of modern technology is the unbreakable code; it used to be that even the cleverest codes could, with enough time and effort, be decoded.) The agency would like all such technologies to contain a “backdoor” that would enable it, and only it, to decode all intercepted messages.
The book has a number of unrealistic features (I very much doubt, for example, that the NSA employs hit men), but it flags a genuine problem, which is that privacy is an equivocal good. This statement will shock many people, for whom “privacy,” like “liberty” and “justice,” signifies an unalloyed good. In fact all that “privacy” means, in the case of communications at any rate, is concealment, which obviously can serve bad as well as good purposes; few civil libertarians are so doctrinaire as to deny that there are some situations in which wiretapping of phone conversations is legitimate. So what if telephone or other electronic communications are so effectively encrypted that wiretapping (or wireless tapping) is impossible? It would be another example, analytically symmetrical with that of the use of encryption to protect (and extend) copyright protection, of technology upsetting a balance deliberately struck by the law, in this case between freedom and safety. Hence the case for the back door. The problem is how to control the back door. In the case of conventional, nonencrypted phone conversations, the government has to obtain a warrant to wiretap; the (unspoken) assumption is that evidence of criminal activity can usually be obtained without wiretapping, then used as the basis for applying to a judicial officer for a warrant to obtain further conclusive evidence. In the case of foreign intelligence surveillance, however, the assumption is that winnowing an enormous mass of unfiltered communications may be the only way of obtaining evidence of some terrorist or other enemy threat, and if so it would be dangerous to forbid the NSA to read intercepted communications without a warrant. But if the NSA has unlimited authority to read communications, then no communications are really private.
My inclination--it is only that; I am not an expert in these matters--would be to let the NSA have its back door. I think that people who worry a lot about invasions of communicative privacy sometimes overlook the fact that communications are never really private. There is always the possibility that the person at the other end of the communication, the person you trust not to disclose its contents to anyone else, will betray you, or that he will make a copy of the communication and it will come into the hands of someone who wishes you ill. In the case of email, we all know by now that a message is likely to sit, forever, on several servers and terminals. So communicative privacy is inherently qualified, imperfect, incomplete; and the question is whether knowledge that your communications may be decoded, scanned, and perhaps stored by the NSA is going to inhibit you or inflict psychological distress; the answer to both questions probably is no.
I don’t doubt that there are potential dangers in allowing government surveillance. Think now of the NSA’s interceptions being filed under the names of the participants in the intercepted communication and placed in a database along with other information about each individual, including for example his commuting patterns gleaned from the E-Z Pass database. Eventually there would be an incredibly detailed dossier on every person in the U.S. The value of such dossiers for preventing terrorism and detecting crime would be immense; but so would be the potential political and psychological consequences if everyone knew that the government was in effect tracking his every move.
At last, high-level Administration acknowledgment that global warming is real, and that human activity (mainly the burning of fossil fuels, principally oil, natural gas, and coal, and deforestation in Third World countries) is a principal cause because such activity emits carbon dioxide. (See also Times article.)
Greenhouse gases, such as carbon dioxide, in the atmosphere trap heat reflected from the earth and by doing so maintain a temperate climate. But since the Industrial Revolution, and in particular since about 1970, economic and population growth has resulted in greatly increased emissions of carbon dioxide, resulting in greatly increased atmospheric concentrations of the gas (the effect of emissions is largely cumulative, because it takes a long time for carbon dioxide to be removed from the atmosphere by absorption by the oceans), producing in turn higher global temperatures. As I explain in my forthcoming book Catastrophe: Risk and Response, because the global climate equilibrium is fragile, abrupt global warming is possible, though unlikely, in the near future. It would not be as abrupt as depicted in “The Day After Tomorrow,” but it might be abrupt enough to have catastrophic consequences within a decade or even less, consequences that might include a rise in ocean levels that would inundate most of the world’s coastal areas, where most of the largest cities and much of the world’s population are found.
The current global-warming problem is an artifact of technology (though not of the newest technology), which has not only made carbon the basis of most of our energy but has contributed to a great increase in the number and wealth of people, and hence to a great increase in the demand for energy. But technology may bail us out, either by developing feasible, economical substitutes for carbon-based energy sources or, through advances in nanotechnology (molecular-scale engineering), by creating carbon-dioxide-devouring nanomachines to cleanse carbon dioxide from the atmosphere. Unfortunately, as is so generally the case, technology has a downside; for example, concern has been expressed that the weaponization of nanotechnology could further destabilize the geopolitical system, and even that nanomachines might accidentally be created that were incredibly voracious self-replicators--superweeds that might devour all organic matter on the planet. See Nanotechnology.
First example: how technology will bring us to the world of The Matrix.
The Matrix is a video online world so realistic that if one’s “avatar” (one’s electronic self, the player in the video world) is killed, one dies of shock. The current video online worlds, in which you create and manipulate your avatar by means of a computer screen and a mouse or joystick, are insufficiently realistic to cause many deaths; I know of only one, described in a great article by James Meek: ‘In October 2002 a 24-year-old man, Kim Kyung-jae, died of a DVT-like illness after playing an online game, Mu, virtually nonstop for three and a half days. “I told him not to spend so much time on the internet,” his mother told the BBC. “He just said, ‘Yes, Mum’, but kept on playing.” (According to Lance Stites of NCsoft the company has taken steps to encourage players to keep the distinction between real and virtual worlds clear. Now, messages appear periodically on screen reminding subscribers to “stretch your legs and see the sunshine once in a while”.)’ But already there is a video game in which you wear a headset that enables you to manipulate your avatar by brain waves. More Matrix-like still is a technology under development whereby chips implanted in the brains of paralyzed people will enable them to operate computers by thought alone: they ‘will have a cable sticking out of their heads to connect them to computers, making them look something like characters in “The Matrix.”’ Implants.
Even in the current, primitive stage of online video world technology, literally millions of people are participating, many obsessively; the use of real money to purchase game money with which to buy equipment, clothing, and other assets in the video world is already a big business. A few years hence, people will be interacting in the video world by brainwaves alone, and in that “no hands” context they may forget who and where they are. The social consequences could be immense, and the political as well if government obtains control of the chips implanted in people’s brains to enable them to play and of the signals communicated to those chips. It will take many years to create a video online world as complex as that of The Matrix, where millions of avatars interact in a stunningly realistic simulation of a 20th century big city. But short of that, people will find it increasingly difficult to distinguish between the actual and virtual worlds in which they participate.
The law is slowly beginning to notice the video online world phenomenon; there is even a recent case in China in which an online player sued the video game company for allowing a hacker to steal the player’s virtual possessions!
The big question--what if any social controls should be placed on the evolution of video online worlds--is baffling and as far as I am aware has attracted little attention.
As Larry Lessig has long and presciently emphasized, law and technology are substitute methods of protecting an interest. You can sue a trespasser; but it may be cheaper just to put up a strong fence. We used to think that if the technological substitute was adequate, it would be superior to the legal; and in fact the law often imposes self-help requirements to discourage lawsuits. And we never (or rarely) used to think that technology could upset a balance struck by the law; we thought law could cope with any technological changes. The dizzying advances of modern technology have destroyed these assumptions.
File sharing is the obvious example. On the one hand, encryption technology and Internet distribution (that is, selling directly to the consumer rather than through a dealer, enabling the seller to impose by contract additional restrictions on the use of his product beyond those imposed by copyright law) may progress to a point at which the fair use privilege of copyright law is extinguished. (Hence Lydia Loren’s interesting suggestion that it should be presumptively deemed copyright misuse for a copyright holder to impose, by contract or presumably by encryption, restrictions over and above those authorized by copyright law.) It would be like having a fence and gate so secure that the fire department couldn’t enter one’s premises to fight a fire; in such a case the fence would give the homeowner greater rights than trespass law, which would permit such entry.
On the other hand, Grokster-like services greatly reduce the cost of infringing copyright. The copyright owners retain (even if the Ninth Circuit’s Grokster decision stands) their right to sue the direct infringers, i.e., the people downloading recordings of copyrighted songs, without a license, into their computers, but this imposes litigation costs that the copyright owners did not have to bear when unauthorized copying of recordings was sufficiently costly to discourage most infringers without having to threaten them with a lawsuit.
We are in the presence of an arms race between encryption and copying technologies; if the latter prevails in this competition, copyright law will be ousted from one of its domains.
With all due respect for the interests of the recording industry and the file sharers, I regard this particular interaction of law and technology as relatively trivial in its overall social consequences. I am much more concerned about the ability, or rather inability, of the law and other policy instruments to cope with the issues thrown up by the relentless progress of science and technology. I’ll give examples in subsequent postings.
A further thought, prompted in part by the release yesterday of the Schlesinger panel’s report of its investigation of the Abu Ghraib scandal.
Under the present system of intelligence, the CIA, although it is not the largest intelligence agency, is the leading agency, and its director is understood to be the government’s senior intelligence officer; he briefs the President, and is responsible for keeping the President and the other top officials informed. If a National Intelligence Director is layered on top of the CIA, its director, and the other agencies, as recommended by the 9/11 Commission, and if in addition, as suggested by Senator Roberts, the CIA is broken up into three parts, who will brief the President? The NID will be too busy supervising 18 agencies, which will mean worrying about spy-satellite launchings, creating “back doors” to encrypted Internet communications, monitoring the Coast Guard’s intelligence activities, etc., etc. So will the responsibility for keeping the President informed devolve on the head of one of the CIA fragments? But won’t he be too low-level an official to be able to marshal all the intelligence resources of government?
The basic problem with the recommendations is the attempt to solve managerial problems with structural solutions. This was recognized by the Schlesinger panel. Its report explains that the Abu Ghraib interrogation fiasco was the result of specific mistakes in planning, analysis, training, deployment, supervision, and personnel, made by specific individuals up and down the chain of command, who are named. The mistakes were not the product of a deficient structure. For the most part, this is likewise the case with respect to the failure to detect Al Qaeda’s 9/11 plot and respond to the attacks. Inadequate screening of visa applicants, deficiencies in building-evacuation plans, misunderstood rules regarding sharing of intelligence between criminal investigators and intelligence officers--the list of remediable management failures goes on and on, but the closest to a structural failure that I discern is the lodging of domestic terrorist surveillance in the FBI, which seems to have a deep-seated prosecutorial mindset that is inconsistent with effective preventive surveillance of potential terrorists.
Doug Lichtman, a very able IP professor at the University of Chicago Law School, took sharp issue with my brief note on patent fair use, emailing me that my “quick reference to patent fair use…is problematic for the simple reason that, often, the key market for research tools is to sell those tools to other researchers. If a researcher’s use of a patented research tool is fair use, that would significantly degrade the incentive to create those research tools in the first place. Moreover, even if your approach works, it is in sharp conflict with the Bayh-Dole instinct that society might very well be better off in a world where academic researchers patent their work. As you know, that legislation was passed in response to evidence that university breakthroughs were sitting on the shelves both because (a) they could not be owned exclusively under old NIH rules; and (b) universities had too little incentive to bring their work to the attention of industry. Overall, patent fair use and the research exception are an important topic, but your short sentence seems to unfairly duck the many hard issues.”
These are difficult issues, to which I can’t do full justice here. Lichtman and I differ on the importance of patents as motivators of research. The effects of patents on innovation are extremely complex, an important consideration being that when a field becomes blanketed by patents, as is happening with research tools, inventors are forced into what can be costly and protracted negotiations for licenses in order to be able to use and build on previous innovations. So we have to consider carefully what alternatives there are to patents for motivating innovation in pharmaceutical and other research. It turns out that there are many alternatives, including government grants, university grants (universities have their own resources--Harvard has an endowment of $20 billion), the commercial advantages of a head start, and trademarks.
And are we really better off in a world in which academic researchers can patent their work? Maybe so, but a countervailing factor is that the patentability of academic research deflects academic researchers from basic to applied research, which may have long-run consequences for innovation that are adverse.
Here is a very worrisome problem concerning fair use. It has to do with a dichotomy long noted by legal thinkers between the law on the books and the law in action. They often diverge. And fair use is an example of this divergence. As I said in an earlier posting, fair use often benefits rather than harms the copyright holder. However, it doesn’t always; moreover, even if a copyright holder is not going to lose, and is even going to gain, sales from a degree of unlicensed copying, if he thinks he can extract a license fee, he’ll want to claim that the copying is not fair use; and finally, because the doctrine has vague contours, copyright owners are inclined to interpret it very narrowly, lest it expand by increments.
The result is a systematic overclaiming of copyright, resulting in a misunderstanding of copyright’s breadth. Look at the copyright page in virtually any book, or the copyright notice at the beginning of a DVD or VHS film recording. The notice will almost always state that no part of the work can be reproduced without the publisher’s (or movie studio’s) permission. This is a flat denial of fair use. The reader or viewer who thumbs his nose at the copyright notice risks receiving a threatening letter from the copyright owner. He doesn’t know whether he will be sued, and because the fair use doctrine is vague, he may not be altogether confident about the outcome of the suit.
The would-be fair user is likely to be an author, movie director, etc., and he will find that his publisher or studio is a strict copyright policeman. That is, since a publisher worries about expansive fair uses of the books he publishes, he doesn’t want to encourage such uses by permitting his own authors to copy from other publishers’ works. So you have a whole “law in action” invented by publishers, including ridiculous rules such as that any quotation of more than two lines of a poem requires a copyright license.
Here’s a reductio ad absurdum of folding in the face of copyright overclaiming: “While interviewing students for a documentary about inner-city schools, a filmmaker accidentally captures a television playing in the background, in which you can just make out three seconds of an episode of ‘The Little Rascals.’ He can’t include the interview in his film unless he gets permission from the copyright holder to use the three seconds of TV footage. After dozens of phone calls to The Hal Roach Studios, he is passed along to a company lawyer who tells him that he can include the fleeting glimpse of Alfalfa in his nonprofit film, but only if he’s willing to pay $25,000. He can’t, and so he cuts the entire scene.” Jeffrey Rosen, “Mouse Trap: Disney’s Copyright Conquest,” New Republic, Oct. 28, 2002, p. 12 (emphasis added). Clearly, copying the three-second “fleeting glimpse” was fair use, but who knows how the studio would have responded if the filmmaker hadn’t cut the scene?
What to do about such abuses of copyright? One possibility, which I raised hypothetically in my opinion in WIREdata, pp. 11-12, is to deem copyright overclaiming a form of copyright misuse, which could result in forfeiture of the copyright. For a fuller discussion, see the very interesting paper by Kathryn Judge, not available online but obtainable by emailing her at kjudge@stanfordalumni.org.
The underlying problems are two: the asymmetry in stakes in disputes between owners of valuable copyrights and people who are either public domain publishers or don’t anticipate that the works they’re creating will have great commercial value; and the vagueness of the fair-use doctrine. I have suggested that this vagueness can be reduced by a categorical approach, under which types of use are given essentially blanket protection from claims of copyright infringement. If only one could define “glimpse”!