permanent link to this entry Stardate 20040606.1631

(Captain's log): So I spend my afternoon responding to letters from people who think they see easy answers after all, despite everything I've written about the problems of energy generation, or who have other kinds of questions about energy. Care to see some of them?

John wrote:

I read your fascinating and informed viewpoints on energy solutions and am wondering, since you expertly trashed all the other solutions, what do you think is the near and long term solution for our energy needs? How much longer do you think we have doing things the way we are now?

One thing I've learned as an engineer is that it's a pointless waste of time to try to speculate too far into the future, or to try to design for far future problems.

We can't know what problems the next generation of engineers will face, and we also can't predict what new tools and approaches will be available to them which might be part of the solution.

Engineers working for the telephone company in 1950 could not possibly have foreseen the kinds of things we use the telephone system for now, and they equally had no chance of predicting the way modern switches work.

State of the art in 1950 was mechanical switches in crossbars. How could they have predicted that in the year 2000 there would be no moving parts in the modern equivalent (ignoring cooling fans and hard disk spindles)?

It would have been utter futility for such an engineer in 1950 to try to design something he expected would still be in use in the year 2000.

What new energy uses will there be in 2050? What will be the equivalent for energy in 2050 to computer data in 2000 for the phone system?

And what new tools and techniques will be available? What will be the equivalent for energy in 2050 to semiconductors in 2000 for the phone system?

There is absolutely no way to answer either of those questions. So since we can't characterize the problem of energy in 2050 and don't have any idea what kinds of tools would be available to solve that problem, then there ain't really a damned lot we actually can talk about intelligently – except for admitting that it's a waste of time to even try.

Rob wrote:

I think you've proved quite well that there isn't any simple solution to the creation of a lot of new energy from new sources. The market has seen to this, of course. If there was a good alternative, we'd be using it.

There is an answer, however: use less energy. I'm not talking about crippling the economy or making people ride bikes, I'm talking about increasing the efficiency of our most energy-consuming practices. We've done a lot for the average gas mileage of cars over the last twenty years and we obviously could do more. It would be hard to insulate our existing buildings, but new ones could be far, far more insulated than they are now (I wonder if anyone is working on commercializing aerogel for use as an insulator for buildings? Ah, turns out they already have). LED's are already starting to replace incandescent lighting and that will no doubt continue. Flat panel displays are nice and they use a lot less electricity than CRT's.

If we went all-out, I imagine that we could (over twenty years or so) cut our energy needs by at least 25% and 50% isn't out of the question. In the long run it would save us money and it wouldn't hurt the environment any, either.

I'm afraid not. It is impossible to achieve that much gain solely through technological changes like that.

I don't mean "infeasible" or "impractical", I mean it is physically impossible. To get a 50% gain solely through technology improvements we'd have to revoke the laws of thermodynamics and figure out how to change the universal electrical constant. I don't expect to see that happen in my lifetime.

The second law of thermodynamics imposes an absolute limit on how efficient energy usage can be, but that level is an asymptote, a level you can never actually achieve. Each successive step towards that level is harder than the step before it. There comes a point of diminishing returns where the curve becomes ferociously steep, and any additional significant gains are extremely difficult and expensive. In most of our current energy use, we're already near that level.

There are some specific areas where major gains are possible. We can still make small overall gains. But we cannot improve by 25% overall.

There are some areas where new technologies may come along to replace older ones yielding substantial increases in efficiency, such as Rob's example of replacing incandescent lights with LEDs. But in most of our energy usage, existing technologies are already about as good as they'll get. We may develop new approaches which might have substantial advantages in other ways, but they won't be significantly more efficient in terms of energy usage.

Improvement in efficiency is not an infinitely deep well.

This is like file compression. Putting a BMP file inside a ZIP usually results in huge compression, but if that ZIP file is placed inside another ZIP file, it doesn't compress any further. There are limits to compression. It turns out that one reason for this was Shannon's recognition that representation of information inherently involves a certain amount of energy, and that some aspects of information manipulation are affected by the laws of thermodynamics. Because of that, data compression beyond a certain point is equivalent to invention of perpetual motion. It would create more energy than it uses, and that's against the rules. (And there's no way to cheat on the rules.)

Likewise, there are distinct limits on how efficient certain kinds of energy transactions can be. Once you get near the limit, the game is over.

Rob's example of LEDs and light bulbs is a rare anomaly, not a typical case. And those rare anomalies, even taken collectively, would not reduce overall energy consumption by 25%.

Individual gains in small areas don't gain as much overall as you might think, because the effect is diluted. Percentage gain doesn't add up; you have to calculate weighted averages.

For instance, if it turned out that 10% of electrical power was used for incandescent lights, and if a switch to LEDs would reduce that by 50%, and if electricity represented about 20% of total energy consumption (all numbers I just made up) then overall energy usage would decrease by 1% -- after a complete cutover, which would probably actually take years.

Except that a complete cutover is unlikely. Fluorescent lights are much more efficient than incandescent lights, and they've been a practical alternative for decades. They're used in a lot of places, but have not killed off the incandescent light bulb.

And I suspect that the numbers I made up are all much too high. If, instead, incandescent lights are 5% of the electrical load (and it's probably even lower), and electricity is 15% of total energy usage, and if the gain using LEDs is 30%, and if you only achieve 40% cutover, then you only gain 0.09% of overall power consumption.
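The dilution arithmetic is worth making explicit. A quick sketch, using the same made-up numbers from the two scenarios above:

```python
# Scenario 1: the optimistic made-up numbers.
incandescent_share = 0.10  # fraction of electrical power used for incandescents
led_gain = 0.50            # efficiency improvement from switching to LEDs
electric_share = 0.20      # electricity's share of total energy consumption

overall_1 = incandescent_share * led_gain * electric_share
print(f"{overall_1:.2%}")  # 1.00%

# Scenario 2: more plausible numbers, plus only a 40% cutover.
overall_2 = 0.05 * 0.30 * 0.40 * 0.15
print(f"{overall_2:.2%}")  # 0.09%
```

Each fraction multiplies against the others, so a headline "50% improvement" in one niche shrinks to a rounding error in the overall total.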

To reduce overall energy usage by 25% you'd have to make extremely significant gains in nearly every area where we use energy now, and you'd have to gain much more than 25% in some significant areas to balance other areas where your improvement was less than 25%.

But in at least half of the areas where we use energy now, it's already about as efficient as it practically can be. (We're already using ZIP files rather than BMPs; no more compression is to be had.) In most of the rest only small gains are possible. It's not like conservation and efficiency are anything new; we've been working on them for a long time already, especially since the oil embargo in the late 1970's. At this point we've picked almost all the low hanging fruit, and a lot more which isn't low hanging.

For example, there's no real way to reduce line losses in long distance electric power lines, or line losses in local power distribution, or losses in transformers due to heating. The technology of transport and distribution of electric power has been mature for 75 years and already operates at the practical limit of efficiency. Any non-negligible gains would require a breakthrough in theory (i.e. development of room-temperature superconductors costing about the same as existing wire), and a 25% gain would violate the laws of physics.

Big electrical generation plants already have implemented all the efficiency gains which are economically justified based on 30 year investment/payoff cycles. Any gains beyond that will be unacceptably expensive. Most of those plants are already about as efficient as they can reasonably be made. And those pesky laws of thermodynamics forbid a gain of 25% over current performance, because those plants are closer than that to the theoretical limit.

Gains of the magnitude Rob proposes cannot be achieved via efficiency gains from technology changes exclusively. There may be specific areas where gains of 25% are within reach, but a 25% overall gain is physically impossible. A 10% overall gain is so far up the difficulty curve as to not be funny. Even a 5% gain overall would be extremely tough, and might take decades to complete, since it would happen as part of the normal turnover of the capital plant (which is not fast).

And because deployment of any such technologies would be slow, this runs into the fact that the problem isn't standing still. If we improve efficiency by 5% over 20 years, but energy usage grows 2% per year compounded, total energy use will be a lot higher in 20 years than it is now in absolute terms. The rate of improvement in overall efficiency of energy usage will never be as great as the rate at which our consumption of energy is expected to increase, absent economic cataclysm.
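A quick sketch of that race between efficiency and compounded growth, using the same numbers:

```python
# 5% efficiency gain phased in over 20 years, versus 2%/year demand growth.
years = 20
demand_growth = 1.02 ** years  # demand multiplier after 20 years (~1.49x)
efficiency = 0.95              # 5% overall efficiency improvement

total = demand_growth * efficiency
print(f"{total:.2f}x today's consumption")  # ~1.41x
```

Even after banking the entire efficiency gain, absolute consumption is up more than 40%; conservation buys a few years of growth, not a solution.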

If technological improvements in efficiency won't solve the problem, then neither will reconfiguration of the system. David writes:

I hesitate to write you on the topic of energy generation because I suspect you’ve already thought about this idea and rejected as so obviously wrong as to not even require mentioning. I don’t want to bother you with another couple of stupid ideas, but I don’t recall your addressing them. What the hell, I’ve been wrong about more things than I care to count, so this won’t be a new experience for me.

In your latest posting, you talk about the problems of scaling up solutions to useful size. Have you thought about the advantages of scaling down power generation? If I recall correctly, we lose between 20 and 50% of the power generated at power plants from inefficiencies of transmission lines and transformers. Small generators located at the site of power usage could eliminate that loss. These mini-generators could range in size from a central air conditioning sized unit, up to the size of a small building. They could provide power for a local consumer such as a single home, an apartment block, a factory, an office building or a hospital. If these small generators are highly efficient (a big if, I know), then they should be able to produce electricity less expensively than central plants by eliminating transmission losses.

Power generation is subject to considerable economies of scale, which is one of the reasons why the existing power generation system relies on fewer big plants instead of a swarm of small ones. Per unit of power generation capacity, big plants cost a lot less than small ones do.

But what's more important is that they are also more efficient.

Distributed small power plants would reduce transmission line losses by reducing the distance the power was transmitted. But those losses are not as major as David thinks, and small plants would be less efficient in producing electric power. You would lose more than you would gain. The overall result is worse, not better. That's especially true when you're comparing power generation over ranges varying by 6 or 7 orders of magnitude (i.e. a 1 kilowatt generator for a single home compared to a 1 gigawatt power plant).

Utilization of energy means taking advantage of energy differences. There's an inherent energy difference between a high place and a lower one, and there's inherent energy difference between a warm place and a cool one. The laws of thermodynamics tell us that the amount of energy which can be converted to useful work is a fraction of that difference, irrespective of the absolute energy levels of each.

We take things from the high place and move them to the low one in a fashion which permits us to harness some of the energy release to perform work, and that's how hydro power operates. Hydro is actually a form of solar power; sunlight powers the weather, which moves water from the ocean to the mountains. As that water flows downward again to the sea, we use some of the energy which is released as it falls.

Likewise, we move heat from the warm place to the cool place and likewise harness some of it, and that is how a coal or nuclear power plant operates. (In a sense, coal is also a form of solar power, and nuclear arguably is too, but let's not get into that.)

Up to a certain point, the greater that differential, the more efficient a power system can be at harnessing it. A greater differential means there's more energy present in absolute terms, but it's also possible to utilize a larger percentage of the energy; it's a squared advantage up to the point where efficiency gains begin to level off.

In hydro the term for that difference is "head". The head is the distance from the turbines up to the surface of the lake behind the dam. If the head is not very great (e.g. 2 meters) then there isn't very much energy available per unit of water processed, and the percentage of the energy which can be harvested will be relatively low.

With that kind of head, you can't use turbines at all. There isn't enough pressure; the water moves too slowly, and flows through the turbine and around the blades without applying enough pressure to the blades to make them move. With that small a head you have to use something like a water wheel. A water wheel works with low head but doesn't scale up, and is inherently less efficient overall than turbines operating at optimum conditions. (Also, turbines scale far better for volume of water processed.)

As the head increases, the pressure differential across the turbine goes up, and turbines harvest more of the energy with less being wasted as turbulence. (Understand that this is far from a complete discussion of the factors involved; there are other ways in which turbine efficiency rises with speed and pressure because of non-linear scaling e.g. bearing resistance as a function of speed, or non-linear characteristics of fluid flow.) So up to a certain point, the greater the difference between the "high place" and the "low place", the more efficient the conversion will be.

There are also some economies of scale associated with volume; when more water is used, there are ways in which efficiency improves.

A bigger energy differential also helps heat engines such as boilers and steam turbines, which can become more efficient as the temperature difference between the "warm place" and "cool place" increases.

Big dams usually have more head than small dams. Big heat-engine power plants usually run at higher temperatures.

You could make a small heat-engine plant use the same temperature differential as a big one, but there are other scaling factors, and the overall efficiency will still be worse. One such scaling factor is known as "cube-square", and in this case what it means is that the heat loss of a boiler is proportional to its surface area, which rises as the square, but the energy production capacity is a function of the volume of the boiler, which increases as the cube. If you double the size of the plant, there's four times as much surface area but eight times as much volume, so each unit of energy generation volume suffers losses through half as much surface area.
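The cube-square relationship is easy to see numerically. A sketch, modeling the boiler as a simple cube so the geometry is obvious:

```python
# Cube-square scaling: doubling every linear dimension quadruples surface
# area (where heat leaks out) but multiplies volume (capacity) by eight,
# so heat loss per unit of capacity is cut in half at each doubling.
for scale in (1, 2, 4):
    area = 6 * scale ** 2   # surface area of a cube with side `scale`
    volume = scale ** 3     # enclosed volume
    print(scale, area / volume)  # loss per unit capacity: 6.0, 3.0, 1.5
```

The same relationship holds for any shape, which is one reason big boilers beat small ones regardless of how cleverly the small ones are built.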

It turns out there are also economic reasons for gains in economy of scale, because the costs of some things don't scale linearly with size or capacity. There are things you can include in the design of big plants to improve their efficiency which don't make economic sense for small plants.

And there are some kinds of approaches which don't work at all below a particular level; you do them big or you don't do them.

Those are just some of the technical reasons why distributing power generation is not helpful. There are also economies of scale in costs. For instance, you need fewer employees per unit generation capacity to operate large plants than to operate small plants.

And implementing such a change would take decades, because the turnover of capital plant is really slow when it comes to this kind of thing.

Let's not even talk about the regulatory hurdles involved in trying to get approval to build hundreds of small power plants close to end users, all of whom know lawyers.

Carey wrote:

I was happy to see you revisit energy but, disappointed with the waste of your genius explaining why crackpot schemes won't work. If only you could forget about the bogeyman and see the possibilities of a modern nuclear energy program. Someone must have a vision before nuclear can be re-sold to America. We have the greens, Kyoto, NGO's whipping up emotion, we have real power problems because of the dearth of a clear, shining alternative. Where is our Adm Rickover? I think you may be the one to do it.

That's not engineering, that's marketing. I don't have the skillset needed for marketing. I've worked with good marketers, and I know their job is harder than mine and it's more important. I also know I couldn't do it.

I don't want to switch from writing about crackpot energy to writing about nuclear energy; I want to stop writing about all energy so I can write about other things. This is not really a subject I find interesting.

Andrew Olmstead wrote:

Clearly the solution to our energy problems is to hook up some kind of collection facility to the people who like to send you letters about energy. There appears to be an inexhaustible supply of those, after all. ;)

Sometimes it seems that way. There are five or six subjects which always make me cringe a bit when I write about them, just because the response will be so voluminous and so predictable. Energy is one of them. Another is Ayn Rand/Objectivism.

There's also Libertarianism. (Don't you DARE claim that their grand plan would founder on the Tragedy of the Commons! Since they would privatize everything, there would not be any commons any more and no tragedy is possible! And don't bother pointing out that the "tragedy of the commons" doesn't necessarily refer to things which are "collectively owned" or "owned by government", and that it doesn't even make sense to talk about "ownership" when talking about some kinds of commons. They're not interested in details like that.)

And I always get hammered any time I express skepticism about grand plans to colonize space.

What they all have in common is that the beliefs held by those indignant and predictable letter writers are indistinguishable from religion, even if they're not about subjects classically considered to be religious. They have a grand vision, a dream, and they don't want some plodding bore like me to take it from them.

There was a time when I was even a bit reticent to reveal that I am an atheist, for fear of receiving letters from Christians who wanted to save my soul.

Of course, tweaking Mac owners is the same, but that's a guilty pleasure. (I shouldn't. I know I shouldn't. It's petty.)

Anyway, getting back to energy, let me emphasize just as strongly as I can that I have not listed all the issues and problems. If you didn't see me mention it, it doesn't mean that it is "the answer", it just means it was yet another problem I didn't feel like talking about.

There is no "answer". When I said "no", I really meant "no".

Daddy has spoken. Don't go ask your mother. (Mother Nature is the one who is responsible for those damned laws of thermodynamics which keep getting in the way.)


permanent link to this entry Stardate 20040604.1353

(Captain's log): I guess I should have predicted this. Every time I talk about energy generation, the letters pour in suggesting innovative/cool/strange alternatives which the letter writer thinks might actually solve the problem. Usually I try to write back to point out one or two major reasons why it won't, but then they will answer those objections (they think) and triumphantly ask, "So now what do you think?"

Since my original list of objections wasn't exhaustive, about all I can do is to wearily point out other objections which I hadn't bothered listing in the previous letter.

For instance: why not put a solar power plant on the moon instead of in geosynchronous orbit? Because it would be in darkness half the time, and because it would no longer hang over the power receiver on earth, so even when it was in sunlight it would only be able to beam power down about a third of the time, and because the moon is a lot further away so the power beam would spread more and cover a much bigger area which would increase wastage and potentially affect the health of people living there, and because the moon is a lot further away and the cost of putting such a facility on the moon would be even greater, in money and energy, than putting it in geosynchronous orbit. And even that is not an exhaustive list. I did not, for instance, mention any issues relating to security and the hazard such a system would represent if it malfunctioned or was taken over by inimical forces.

James wrote to me suggesting that wind power and/or some sort of terrestrial conversion of solar power actually can be scaled up enough to make a difference.

I responded by linking to another article of mine not included in the last post about this. What that article discussed was the fact that electric power has unique properties, and one of the most important is that at any given instant the amount of electric power being generated will always exactly match the amount of power being consumed. If you don't deliberately balance the system, the laws of physics will do the balancing for you in ways you won't like.

Electric power has to be generated at the time it is needed, and the electric power grid overall has to have the ability to add generation capacity as demand rises, and to reduce generation when demand falls again. Demand actually rises and falls by as much as 30% every day.

The biggest drawback of wind/solar is that they generate power when conditions permit them to do so, not when demand requires them to do so. And there's no practical way to store electric energy in adequate quantities to deal with this without unacceptable losses or unreasonable capital and/or operating expense. (This is a major flaw of most of the fad alternate electrical energy sources we hear so much about.)

It is by no means the only serious objection I have to solar/wind, but it is a major one and was the only one I listed in my response to him.

James thought that could be solved by tacking on another piece. He responded, in part:

The really large scale solar/wind power generation model assumed some big invention in storage. I suggested it in the same spirit as you suggested your 40km deep geothermal idea. Which will be more practical in the future? A machine that converts CO2 + water + electricity into methane/propane/hexane, or the really deep hole that produces steam? Certainly not obvious to me.

I don't think James is looking at this the way an engineer must look at things. (Engineers have a term for pieces added to a system to fix existing problems which create entirely new problems at least as severe. We call 'em "Kludges".)

The last time I got involved in discussing alternative energy sources I kept running up against the fact that most of my readers didn't understand the scale of the problem, and didn't understand the fundamental problem of scaling.

Something which may work really well at one scale may not work at all well when scaled up 6 orders of magnitude, even if it is possible to scale it up that far.

For instance, in the last article I mentioned that there would be huge losses involved in conversion of electricity in a solar power satellite into microwaves for downlink. TMLutas found an online reference to a circuit which was able to convert DC to microwaves with 90% yield. That's all well and good, but their approach was designed to operate at 20 watts. It isn't possible to scale it up nine orders of magnitude to 20 gigawatts.

"Converting electric power, CO2 and water into methane" is certainly an appealing idea. I don't think we actually know how to do that now with anything like an acceptable yield, if we know how to do it at all. But as James says, we presume that one result of the "Energy Manhattan Project" is development of such a technology. Unfortunately, as tough as that might be, it's the easy part.

What would we need to design and build in order to implement that technology at useful scales? In one of the articles linked last time, I defined "useful scales" as being average power levels at least 1% of current US power consumption. The US right now consumes about 3.3 terawatts of power average, and any energy technology which can't deal with 1% of that will have negligible political effect.

Let's do some rough calculations to get some idea of the size of the problem, shall we? What does this system look like when operating at 35 gigawatts, about 1% of average US energy consumption?

I wanted to get some idea of the mass involved in this, but the original calculation I did for this article was totally botched. Reader Tim sent me enough information to correct it:

3.5*10^10 joules/second

* 3600 seconds/hour

* 0.3 conversion efficiency

/ 1055 joules/BTU

/ 21,000 BTU/pound of methane

/ 2205 pounds per metric ton

== 774 metric tons of methane per hour.
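That chain transcribes directly into code. A sketch of both versions of the efficiency step, since (as Tim's letters below explain) multiplying by 0.3 where one should divide is exactly the factor-of-eleven difference between the two answers:

```python
joules_per_hour = 3.5e10 * 3600  # 35 GW sustained for one hour

# As printed above: multiply by the 0.3 conversion efficiency.
as_printed = joules_per_hour * 0.3 / 1055 / 21000 / 2205
print(f"{as_printed:.0f} metric tons/hour")  # 774

# Dividing instead: producing electric output requires MORE thermal input.
divided = joules_per_hour / 0.3 / 1055 / 21000 / 2205
print(f"{divided:.0f} metric tons/hour")     # ~8600, close to Tim's 8550
```

The ratio between the two is (1/0.3)/0.3 ≈ 11, which matches Tim's "factor of 11" remark.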

Update 20040605: Tim's first letter made clear that I totally loused up the first calculation I included in this article. Somehow or other I managed to arrive at an answer of 411,000 metric tons of methane per hour, which I intuitively recognized as way too high. But when I looked at his calculations, I couldn't reproduce his final result. So I used some of his conversion constants and came up with what you see above. I wrote to Tim and told him I couldn't reproduce his number and suggested that I thought that in one step he had multiplied where he should have divided.

He has written back. He cross-checked his calculation and confirmed his answer, so I'm lost.

Here's his original letter:

From table you cited:
methane = 21000 BTU/lb
3412 BTU = 1 KWH
assume electricity conversion of .3

3.5e10 W / 0.3 conversion = 116GW heat

116 GW * 3.412 (BTU/WH) / 21000 (BTU/lb) / 2.205 (lb/kg)

= 8550 metric tons / hour

Your heat to energy factor was multiplied instead of divided and you used the density for liquid methane instead of gas at STP.

methane ~ 7e-4g / cc.

His second letter:

Cross checked calculations a different way:
Made sure all units cancelled: 10500 BTU(therm) / KWH (electric)
         (see Misc table at bottom of DOE page)
1000 BTU(therm) / ft^3 methane heat from table
35.3 ft^3 / m^3
0.7165 kg / m^3 density of methane at standard temp and pressure

So
35e9 [W(elec)] * 10.5 [BTU (therm) / WH (elec)] * 1/1000 [ft^3 / BTU
(therm)] * 1/35.3 [m^3/ft^3] * 0.7165 [kg / m^3]

= 7459 T
cancel all units.
difference with 8550 primarily due to therm -> elec conversion estimate

Also you multiplied by .3 to get from elec to therm energy for a factor of
11 so 770T * 11 = 8550

Further convert to equivalent barrels of oil (from tables)
18600 BTU/lb oil
21000 BTU/lb methane
1 barrel oil = 0.136 metric tons
= 61922 bbl/hr
~ 1.5 million bbls per day

The huge Norco refinery in Laplace, LA (near me) produces 240,000 bbl / day product.

Just to make things even more exciting, Garrett calculated the result yet another way. Here's what he wrote:

I finally found a use for my chemistry textbook. I figured I'd work out the whole methane energy thing for you, just to save a little time.

I used chemistry instead of estimated figures. This way we can assume 100% efficient energy conversion. This avoid the whole "but we can just make it more efficient later on" argument which at best might get you an order of magnitude.

Formula:

CO2 + 2H2O    --->    2O2  + CH4

From bond energy table in my textbook, energy of formation:

CO2:   2 C=O bonds        @ 745 kJ/mol   = 1490 kJ/mol
2H2O:  2 * 2 O-H bonds    @ 467 kJ/mol   = 1868 kJ/mol
2O2:   2 O=O bonds        @ 495 kJ/mol   =  990 kJ/mol
CH4:	 (from book)      @ 1652kJ/mol   = 1652 kJ/mol

For the reaction:

delta H     = 990 + 1652 - 1490 - 1868
            = 2642-3358
            =-716 kJ/mol      (Reaction)

Given 35 GW power requirement,
35 GW = 35 * 60 * 60 = 126000 GJ energy
=1.26 * 10^11 KJ
=1.76 * 10 ^8 mol

Now we can figure out the mass of the required items:

CO2: 44.01 g/mol  * 1.76 * 10^8 mol  = 7745760000 g = 7746 metric tons
H2O: 2 * 18.016   * 1.76 * 10^8 mol  = 6342 metric tons
O2:  2 * 32       * 1.76 * 10^8 mol  = 11264 metric tons
CH4: 16.42 * 1.76 * 10^8 mol         = 2890 metric tons

For those not familiar with this kind of chemistry, a mole ("mol") is a physical quantity of a given chemical in which the count of the number of molecules is equal to Avogadro's number, 6.023 * 10^23, the number of hydrogen atoms in one gram. Molecular weight is always expressed in units of grams per mole. Thus a mole of some chemical is a quantity whose weight in grams is equal to the molecular weight. Presenting these kinds of numbers in terms of moles makes sense, since if a given reaction would use one molecule each of two different chemicals, then at scale it would also use 1 mole of each. It also cleans up a lot of calculations because you can express the energies involved using fairly normal units of joules or kilojoules instead of having to talk about zeptojoules.

All of Garrett's calculations are based on one hour, and he presumes 100% efficiency (as he states). If we scale his result based on my assumption of 30% yield, then it would mean that 867 tons of methane would be produced per hour, consuming about 2320 tons of CO2. That's not identical to my results of 774 tons and 2127 tons respectively, but it's pretty close, and the difference could be explained by the fact that my calculation relied on a book value for the energy equivalent of methane per unit mass, whereas Garrett calculated that directly and presumably didn't get quite the same number.
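Garrett's stoichiometry also transcribes into a few lines. A sketch using his book values as given (his 16.42 g/mol for CH4 comes from his textbook; the more common figure is about 16.04, which accounts for some of the small discrepancies):

```python
# Moles of reaction per hour at 35 GW, from Garrett's delta H of 716 kJ/mol.
kj_per_hour = 1.26e11      # 35 GW sustained for one hour, in kJ
kj_per_mol = 716           # his computed reaction energy
mol = kj_per_hour / kj_per_mol  # ~1.76e8 mol

g_to_tons = 1e-6           # grams -> metric tons
co2 = 44.01 * mol * g_to_tons  # ~7745 metric tons of CO2 consumed
ch4 = 16.42 * mol * g_to_tons  # ~2890 metric tons of methane produced
print(round(co2), round(ch4))
```

At 100% yield, every hour of operation has to be fed nearly eight thousand tons of CO2, which is the supply problem in a nutshell.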

I give up. I keep getting loused up by trying to mix metric units with the weird-ass units used by DOE. <begin rant> Why in the hell are they still using "British Thermal Units" to represent energy? What is wrong with joules? Not even the English still use English weights and measures, and a BTU is a stupid unit anyway: it's the amount of energy required to raise the temperature of one pound of water from 39 degrees Fahrenheit to 40 degrees Fahrenheit. Almost all the normal units in the English system are primary, based on direct measure of arbitrarily-chosen physical phenomena (e.g. the length of some monarch's foot), which is why virtually every calculation done in units from the English system will be loaded with strange conversion constants. The rare exceptions are actually famous (a pint's a pound the whole world round!).

In MKS very few units are primary. Three are totally arbitrary: an arbitrary unit of length (the "meter"), an arbitrary unit of mass (the "kilogram"), and an arbitrary unit of duration (the "second"). A small number of others are based on measurable physical phenomena: if equal current flows through two parallel conductors which are 1 meter apart in vacuum, then 1 ampere of current through each wire will result in 2*10^-7 newtons of force between them per meter of length. 100 Kelvin degrees is defined as the difference between the melting point and boiling point of water in defined conditions, and 0 degrees Kelvin is "absolute zero", the temperature at which all molecular motion would cease and the pressure of helium would be exactly zero.

The values of most of the other units are derived such that the appropriate conversion constants are "1": In a current of 1 ampere, 1 coulomb of charge passes a given point per second (DC, anyway). If a current of 1 ampere dissipates 1 joule per second of heat when passing through a resistor, then there is 1 ohm of resistance, 1 watt of power is consumed, and there will be 1 volt of electric potential across the resistor. If a capacitor has one volt applied across it, and if at steady state it holds a charge of 1 coulomb, then it has 1 farad of capacitance. That makes life a lot easier, and I don't generally get anything like as lost as this when I do calculations entirely within MKS.
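That "all the conversion constants are 1" point can be written out directly; here's a toy sketch of the relationships named above (the variable names are mine, purely for illustration):

```python
# With everything in MKS, the electrical relationships carry no fudge factors.
current_amperes = 1.0
resistance_ohms = 1.0

volts = current_amperes * resistance_ohms    # Ohm's law: V = I * R
watts = volts * current_amperes              # power dissipated: P = V * I
charge_coulombs = current_amperes * 1.0      # Q = I * t: 1 ampere flowing for 1 second
farads = charge_coulombs / volts             # capacitance: C = Q / V

print(volts, watts, charge_coulombs, farads)  # every one of them is 1.0
```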

Besides which, the English system doesn't even have any units for electricity. There are no units in the English system comparable to watts, volts, amperes, ohms, farads, and henries. So any time you talk about electric power you have to work in metric units anyway. (Actually, those are the official units for the English system, but they don't fit nicely – not that anything fits nicely anyway. 1 watt is 1 joule per second. It's also 0.000948 BTU's per second. Isn't that convenient?) <end rant>
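The arithmetic behind that last jab is easy to check; a quick sketch, using the standard book value of roughly 1055 joules per BTU (a figure not stated above):

```python
# 1 BTU is about 1055 joules -- a standard book value, not from the text above.
JOULES_PER_BTU = 1055.0

# 1 watt is 1 joule per second; express that in the DOE's preferred units.
btu_per_second = 1.0 / JOULES_PER_BTU
print(f"1 watt = {btu_per_second:.6f} BTU/s")  # about 0.000948. Convenient, isn't it?
```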

I don't see anything wrong with the calculation I did, so I'm utterly mystified as to why my answer is so much different than Tim's.

It probably isn't worth pursuing much further. For the purposes of the argument presented by this post, what I needed to establish was that "a hell of a lot" of methane would get produced, and therefore "a hell of a lot" of CO2 would be needed, and there's no doubt about that. The only confusion right now is the precise value of the technical term a hell of a lot. Even using my lowball numbers, it doesn't appear that there is any practical way to collect and deliver the quantity of CO2 required. If we use Tim's numbers instead, that conclusion is strengthened.

Garrett's calculation presumed 100% efficiency in order to avoid distraction by debate on what the conversion efficiency actually would be. I assumed 30% because that's not atypical for these kinds of industrial processes, but it's interesting that if someone tried to claim that the conversion efficiency was higher, then that would mean the CO2 supply problem would be all the worse. With Garrett's (unrealistic) assumption of 100% yield, about 186 thousand tons of CO2 would be needed per 24 hours of operation.

No matter how you calculate this, the driving factor is that 35 gigawatts is a truly huge amount of power, and anything you do at power levels of 35 gigawatts will be big and complex. Ideas which appear to make sense at kilowatt levels won't usually look as good when you think in gigawatts.

How much conversion capacity would any given machine really have? How many would we actually need to operate at 35 gigawatts? It doesn't seem likely that one such machine could operate at a power level of more than a few megawatts, and that means you'd need thousands of them.

How much would they cost? What kind of capital investment are we looking at, here?

Methane is CH4 and has a molecular weight of about 16. Carbon makes up about 75% of the mass. 774 metric tons of methane contains about 580 metric tons of carbon.

CO2 has a molecular weight of about 44: one carbon @ 12, and 2 oxygen @ 16 each. That means that carbon makes up about 27% of the mass of CO2.

Therefore, to produce 774 metric tons of methane you will consume 2127 metric tons of CO2.

That would be about 51,000 tons per 24 hours of operation, assuming no wastage.
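The chain of mass ratios above collapses into a few lines; here's a sketch of the same arithmetic, using the post's own tonnages and standard molecular weights (the helper name co2_needed is mine, for illustration):

```python
# Molecular weights (approximate): CH4 = 12 + 4*1 = 16, CO2 = 12 + 2*16 = 44.
MW_CH4, MW_CO2 = 16.0, 44.0
carbon_fraction_ch4 = 12.0 / MW_CH4   # 0.75 -- carbon is 75% of methane by mass
carbon_fraction_co2 = 12.0 / MW_CO2   # about 0.27

def co2_needed(methane_tons):
    """Metric tons of CO2 consumed to supply the carbon in a given mass of methane."""
    carbon = methane_tons * carbon_fraction_ch4
    return carbon / carbon_fraction_co2

hourly = co2_needed(774)   # about 2128 tons of CO2 per hour
daily = hourly * 24        # about 51,000 tons per 24 hours of operation
print(f"{hourly:.0f} tons/hour, {daily:.0f} tons/day")
```

Running Tim's figure of 8550 metric tons of methane per hour through the same function gives about 23,500 tons of CO2 per hour, matching the update below.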

[Update: If Tim's numbers are right, then production of 8550 metric tons of methane per hour would require 23,500 metric tons of CO2 per hour, or 564,000 metric tons of CO2 per 24 hours.]

How pure must the CO2 be for this conversion process? What is the actual source of the CO2? How do we collect it and purify it to the required degree? How do we transport it to the conversion plant(s)?

What kind of equipment would be needed to collect that much CO2, purify it to the required grade, and deliver it? How much additional capital investment would we have to make to provide this essential feedstock?

Commercial CO2 actually usually comes from breweries, where it's collected from fermentation tanks and is already highly concentrated. It can also be produced by treating carbonate minerals with acid, and in some cases they make it by burning natural gas (ahem). Trying to collect CO2 out of the exhaust gas emitted by a coal-burning plant is a lot harder. Separating CO2 from a mixture of other gases (e.g. nitrogen) is a pain. You can bubble the mixture through water, and some of the CO2 will go into solution as carbonic acid, but most of it won't, and getting the CO2 back again is also a pain. You can use fractional distillation, but to do that you have to cool the mixture to cryogenic temperatures, since you're taking advantage of differences in melting point and boiling point. But that's far too energy intense.

I'm having a hard time believing there's any easy way to collect, purify, and deliver an adequate quantity of CO2 on an ongoing basis.

When we consider the capital investment, let's keep in mind that this is not an energy source. It's an energy storage mechanism which James proposes we implement in order to patch a serious drawback of solar/wind generation: the fact that they generate power when conditions permit them to do so rather than when customers need power. Solar and wind energy supposedly are "free", but it's beginning to look like this "free" energy will be damned expensive.

This is a good demonstration of the difference between what is possible and what is feasible. Even if it were possible to implement such a system at this kind of scale, it would have a huge capital cost and huge operational expenses. What would the true cost be per unit methane produced, counting amortized capital cost and real operating expenses? It's hard for me to believe that it would be a price I was willing to pay. (And how many years or decades would it take to build it up to an operational capacity of 35 gigawatts?)

Core taps are a lot easier. The big problem is digging the hole, but that's a problem associated with building the power plant, not with operating it. Again, we hypothesize that the Energy Manhattan Project develops a way of drilling such holes which is fast enough and cheap enough to be practical.

Once the hole has been drilled, somehow or other, actual operation would be pretty similar to operation of a coal-fired power plant of comparable capacity. The only real difference would be the heat source used to boil water to produce steam; everything beyond that would be the same. Turbines and generators which operate at those kinds of power levels are current technology and are well understood. No existing power plant generates 35 gigawatts, but there are a lot of power plants which produce 1 gigawatt, and many hydro projects produce much more than that.

It's unlikely that any single core tap would produce 35 gigawatts. (We should be so lucky.) They're more likely to operate at power levels comparable to large hydro, 2-6 gigawatts. But even if a single core tap actually did produce 35 gigawatts, there's no real problem with adding parallel turbines and generators to deal with that much steam. 35 gigawatts is bigger than anything we do now, but only about one order of magnitude bigger. There would not be any serious challenge in scaling existing generation technology up to that level.

The only significant feedstock consumed by a coretap power plant would be cooling water. Per unit energy yield, it would use about the same amount of cooling water as a comparable fission power plant.

That requires it to be built near a major river, but that is not a very severe constraint. The river collects and delivers adequate quantities of water to the plant, so we would not have to create infrastructure to do so. The basic cooling problem for a coretap is not significantly different than for coal and nuclear plants. Existing cooling technologies could reasonably be scaled up one order of magnitude were that required.

Unlike the methane-conversion system discussed above, a core tap is a primary power generation system. Since we don't know how the hole is drilled, there's no way to estimate the capital cost to build such a plant, and therefore the amortized cost per megawatt-hour. But we do know that the operating cost would be much lower. It would be comparable to the operating cost of a fission plant, and much lower than the operating cost of a coal-fired plant (which includes the cost of the coal which is burned).

So even if it is not obvious to James which of these two would turn out to be more practical, it seems very obvious to me. Speaking as an engineer, I'm a lot less afraid of the basic problem of drilling a really deep hole than I am of the problem of trying to operate a system which requires collection, purification, delivery, and consumption of 51,000 metric tons of CO2 per day. I can't take that idea seriously until someone explains a bit better how that part works.

There are a lot of ways of storing electric energy. You can use flywheels, or you can pump water up a hill, or you can use storage batteries, or you can electrolyze water into hydrogen and oxygen for later consumption in fuel cells. If you want to take drugs, you can consider even stranger approaches such as monstrous magnetic fields in superconducting magnets. But none of them can be scaled up to these kinds of power levels with acceptable efficiency and acceptable capital cost and operating cost per unit capacity. If you wanted to generate power with wind/solar during the day for consumption at night, you'd have to store several hours worth of energy. At our 35 gigawatt threshold of political significance:

  35 gigawatts * 3600 joules per watt-hour * 4 hours == 5.04*10^14 joules

500 trillion joules stored every afternoon and released and consumed every night, cheaply and efficiently. I sure as hell don't know any way to do that. Maybe in a thousand years someone will figure out a way, but it won't be possible soon enough to affect the war.
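The stored-energy figure is just power times time, with the watt-hour conversion folded in; a sketch, treating the four hours as the assumed overnight reserve:

```python
power_watts = 35e9         # 35 gigawatts, the threshold of political significance
seconds_per_hour = 3600    # hence 3600 joules per watt-hour
storage_hours = 4          # assumed reserve: several hours of evening demand

energy_joules = power_watts * seconds_per_hour * storage_hours
print(f"{energy_joules:.3g} joules")  # 5.04e+14 -- about 500 trillion joules
```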

And that's just one of the critical problems associated with large-scale reliance on solar/wind as a major source of electric power. There are many others, any one of which would be enough to prevent them from operating at average power levels of 35 gigawatts soon enough to have political consequences relative to this war.

By the same token, I can see many serious problems with the idea of growing algae for use as biomass, and with all the other ideas people have mailed me. (Generation of electricity with a skyhook? Please!) If I responded to your particular suggestion then I would only have described one or two objections, enough to make the point.

Last time this came up I was reduced to begging my readers to stop sending such suggestions. Please don't make me do that again, OK? Let me try to clearly state my conclusion:

There is no technology for generation, transmission, conversion or storage of energy which we currently understand or could plausibly develop which would be efficient enough, and which could be deployed soon enough, cheaply enough, and at a scale large enough, to significantly aid us in winning this war. And if it can't do those things, I don't care about it.

Update: Several readers made the same excellent point: even if some magic technological advance made it possible for the US to totally cease importation of Arab oil, other nations would continue buying. If the idea is "choking off terror dollars", then it fails because their money works just as well for supporting terrorism as ours does.

And as Jon points out, it isn't even clear that successfully choking off the dollars would make any difference.

Update: The first version of this article which was posted had the wrong conversion factor from cubic feet to cubic centimeters. I'm still not sure I got my calculation right; intuitively, the numbers seem to be one or two orders of magnitude too high.

Update 20040605: Oh, brother. It was high by a lot more than 2 orders of magnitude. My original calculation came to the conclusion that 35 gigawatts would yield 411,000 metric tons of methane per hour, instead of the current answer of 774 metric tons per hour.

Update: I added an extended discussion of the calculation inline rather than appending it.

Update: I know where to get enough CO2!! Buy it on the "carbon dioxide trading market"!!

Update: Myria comments (and offers a nice analogy).

Update 20040606: Alas, yet more here.


permanent link to this entry Stardate 20040604.1053

(On Screen): Reuters is unbiased. Just ask them, they'll tell you.

The AP offers us this report:

Iraq Report Gives Coalition Some Praise
U.N. Report Credits Coalition With Ending Saddam's Atrocities but Notes Iraq Prisoner Abuse

The U.N. human rights commission credited the U.S.-led coalition Friday with ending years of systematic violations by Saddam Hussein's regime but also cited concerns about prisoner abuse by coalition forces.

A report by the U.N. High Commissioner for Human Rights said the coalition's invasion of Iraq "removed a government that preyed on the Iraqi people and committed shocking, systematic and criminal violations of human rights."

In particular, the commission noted Iraqis have greater freedom of expression now than they did under Saddam's regime.

But Reuters sees through this unconscionably biased propaganda, and instead tells us the real story:

UN Says Coalition Troops Violated Rights in Iraq

The United Nations' top human rights official said on Friday U.S.-led occupation forces had committed "serious violations" of international humanitarian law in Iraq and had ill-treated ordinary Iraqis.

Acting High Commissioner for Human Rights Bertrand Ramcharan said coalition troops were able to act with impunity and urged appointment of an independent figure to monitor their behavior.

In a report for the world body's Human Rights Commission, Ramcharan also indicated that U.S. male and female soldiers accused of gross abuses of detainees at Baghdad's Abu Ghraib prison could be guilty of war crimes.

Whew! For a moment I was afraid we might have lost sight of what was really going on in Iraq. Good thing Reuters was there to set us straight, wasn't it?


permanent link to this entry Stardate 20040604.0518

(On Screen): Our special "allies" in Europe have spent the last year demanding that we withdraw all our troops from Iraq. Last summer, for example, they demanded that we complete our withdrawal by the end of last December.

And in the latest (why-are-we-) wrangling in the UN to get another Security Council resolution approved, our most special "allies" are demanding a hard date for withdrawal.

In other news, we learn that the Pentagon is making plans to withdraw the 1st Infantry Division and the 1st Armored Division from Germany, replacing them with the Stryker Brigade. And "experts and allied officials" warn us that we're being premature, and that withdrawing our divisions from Germany could be a serious mistake.

After all, we have only been occupying Germany militarily for sixty years, right?


permanent link to this entry Stardate 20040603.1434

(On Screen): In my perusals of my referer logs, I noticed that Greg Burch had linked to an old article of mine, for which I thank him. Unfortunately, I don't agree with his post in which he did so. I started writing a letter to him explaining why, and it got longer and longer and so I decided to post it instead.

Greg comes to a conclusion which many others have also reached. It's been a pretty regular fixture in my mailbox for a long time. He explains it this way:

We've got years, perhaps decades, of violent conflict with the Islamic world ahead of us. Sooner or later we'll have to realize that the only thing that is making this necessary is our dependence on oil from the Middle East. The culture that is attempting to destroy us is on artifical life support through the money pumped into the Middle East for oil. If that stopped, our enemy would wither and die, or change.

To begin with, I don't think that this war was caused by our use of Arab petroleum. It would have happened eventually anyway.

But even if it was caused by our use of Arab oil, that doesn't mean it will end if we cease using Arab oil.

And in any case, it isn't actually possible for us to stop relying on Arab oil without drastic and painful changes in lifestyle combined with commission of economic suicide. It wouldn't be possible for us to even come close to maintaining our current GDP.

This is a fundamentally difficult problem, and most of the constraints are physical, practical, and insurmountable. This is one of those problems which look really simple to solve, but only if you don't look really closely.

As I mentioned, I've gotten a lot of email over the last couple of years from people who came to the same conclusion as Greg has. In response, I've written a lot of posts about various aspects of this in the past to show how intractable it is, and I'll start with a roundup of them, along with summaries of the points they make.

The war wasn't caused by our reliance on Arab oil.

There aren't any credible alternatives to Arab oil (general discussion).

Conservation isn't the answer.

Discussions of the flaws of alternative energy sources: geothermal, solar, wind, solar satellites, tides, fission, hydrogen, biomass

There are no other alternatives, either. Most people don't understand the size of the problem. There are only four potential sources of energy which are large enough (core taps, solar satellites, fusion, direct conversion of matter to energy) and none of them are practical now.

We aren't going to replace Arab oil with turkey guts.

Ethanol doesn't make sense as a fuel.

More details on why we won't be using electric power for vehicles. (Also includes a discussion of some aspects of terrestrial solar power.)

More discussion of the inherent problems of all forms of biomass (including ethanol and "biodiesel" and conversion of turkey guts).

In that last article, I gave this list of five properties any proposed alternative energy source must have if it is to make any real difference.

1. It has to be huge (in terms of both energy and power)
2. It has to be reliable (not intermittent or unschedulable)
3. It has to be concentrated (not diffuse)
4. It has to be possible to utilize it efficiently
5. The capital investment and operating cost to utilize it has to be comparable to existing energy sources (per gigawatt, and per gigajoule).

Note: energy and power are not the same thing. Power (measured in watts) is the amount of energy (measured in joules) produced or consumed per unit time. #1 above requires both that the total energy embodied in the resource be huge and that it be possible to utilize that energy at a very high rate (i.e. at high power levels). If there are significant practical limits on the rate, then it cannot offset the rate at which we use petroleum. If the total energy is limited, it will get used up too rapidly.
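The distinction is worth a worked number; a small sketch (the figures are invented for the example, not drawn from this post):

```python
# Power is the rate; energy is the total. A 100-watt load running for 10 hours:
power_watts = 100.0
hours = 10.0

energy_joules = power_watts * hours * 3600   # 3.6 million joules
# The same energy could instead be delivered at 1000 watts for 1 hour --
# identical energy, ten times the power. Requirement #1 demands both be huge.
print(f"{energy_joules:.2e} joules")
```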

We engineers have a saying: "Anything is possible for the man who doesn't have to do it himself." I can understand why Greg wants to find some sort of technological fix for the political problem we face. But there is no point in initiating something like a "Manhattan Project" for this.

This seems like a modification of a rhetorical device I heard again and again in the 1970's and 1980's: "If we can put a man on the moon, why can't we...?" e.g. "why can't we provide clean drinking water for everyone on the planet?"

For me as an engineer, the answer was always obvious: they were not the same kinds of problems.

There are three things needed in any engineering development project: will, resources, and engineering implementation, but the third can only happen after the first two. John F. Kennedy made a speech challenging the US to try to put a man on the moon before 1970; after JFK was assassinated, that created the political will to embark on the Apollo project, and to spend whatever it took to make it happen. That's when the engineers started working on it.

But not all projects are engineering development projects. In the case of "clean drinking water", the engineering is pretty simple. The real problem is political: finding the will and the resources. So the honest answer to the rhetorical question is, "Because we don't care as much about it." That's not an engineering problem.

The problem of "providing clean drinking water" is fundamentally easy from an engineering point of view. The goal of putting a man on the moon was a very tough and challenging engineering problem but it was not obviously impossible to do it (though the time constraints were pretty severe). The goal of the original Manhattan Project was also tough and challenging.

But "eliminating our reliance on Arab oil" transcends "tough and challenging" and resides in the lofty realm which engineers call "nontrivial". (Translated for laymen, that means, "Forget about it. You won't live to see it happen.")

When the American space program began in the early 1960's, they knew specifically what they wanted to accomplish, and they had a pretty good general idea of how it would be done. Likewise, in 1942 when the decision was made to begin the "Manhattan Project", there was a theoretical basis for nuclear weapons and they had a pretty good general idea of what would be needed to make them work.

In both cases the project mission was to convert the theory and general approach into detailed engineering practice. There's no doubt they were both really tough problems, but the people on those projects knew where they were trying to go and had a pretty good idea before they began about how to get there.

However, a hypothetical "eliminate reliance on Arab oil" project would not have any theoretical basis for a solution. There are no alternative energy sources which satisfy those five requirements which we have or could readily develop the technology to utilize.

Greg is looking for a miracle. He wants the people on such a project to come up with some surprising answer. He doesn't have any hints for them about where they should look for it; he merely tasks them with finding it.

Unfortunately, this is a bit like the people who say we should "win without war". I wish it were possible, but what's missing is any idea of how we would actually do that. I don't think it can be done.

And I don't think we can substantially reduce our reliance on Arab petroleum. There is no "tough but achievable" solution. All alternatives are either well understood and easy (and already being used) or are intolerably difficult, and there are no as-yet-undiscovered alternatives which would satisfy those five criteria.

This, too, ultimately isn't an engineering problem.

Fusion won't solve the problem. Fusion has been "on the brink of success" all my life, and there's no telling when they'll actually make it work, or if they ever will. But if they did, I think it virtually certain that commercial fusion power generation will fail #5. The kind and amount of equipment which will be needed even for a moderate sized generation facility would make the capital cost and maintenance expense unacceptably high.

Solar satellites won't solve it, either. Solar Satellite power technology will fail #5 even more badly than fusion. It will also badly fail #4.

When it comes to power generation, the job's not done until the energy reaches the end user. The challenge of energy delivery is particularly severe for solar satellite technology.

Generally speaking, every time energy is converted from one form to another a lot of it will be lost (because of the Second Law of Thermodynamics). All technologies which generate power and deliver it to end users involve such conversions. A coal-fired electrical generation plant burns coal to produce heat, converts heat to pressure by applying a lot of that heat to a boiler to produce steam, converts pressure into mechanical motion (with a turbine), converts mechanical motion into electricity (with a dynamo), and then delivers the electricity with long distance power lines, which usually requires multiple voltage/current conversions using transformers or motor-generators. Many of those conversions are very efficient but some of them involve pretty significant losses.

The efficiencies of every step have to be multiplied together to calculate the overall system efficiency. If you have five steps and each one wastes 20%, then each step has an efficiency of 0.8, and the overall system efficiency will be 0.8*0.8*0.8*0.8*0.8 == 0.328, meaning about 33% of the original energy would be delivered to end users, with the remaining 67% being lost. But if each of those five steps wasted 30% instead of 20%, the overall system would only deliver 17% of the original energy. The more conversions required, and the worse the efficiency on those conversions, then the lower the efficiency of the overall system.
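That multiplication is worth seeing directly; here's a sketch reproducing the two cases above (the function name is mine, for illustration):

```python
def system_efficiency(stage_efficiencies):
    """Overall yield of a chain of conversions: the product of the per-stage yields."""
    result = 1.0
    for eff in stage_efficiencies:
        result *= eff
    return result

print(system_efficiency([0.8] * 5))  # about 0.328 -- 33% delivered, 67% lost
print(system_efficiency([0.7] * 5))  # about 0.168 -- only 17% delivered
```

Note how unforgiving the chain is: shaving 10 points off each of five stages roughly halves what the end user receives.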

Solar satellite power generation is particularly poor in this regard. Sunlight is concentrated using mirrors (with some losses) onto a boiler (with some of the light reflecting instead of being converted to heat, and some of the heat radiating away via black-body radiation). The next few steps are the same as for a coal plant: steam drives a turbine, which drives a dynamo, which generates electricity. At that point, all you have to do is to deliver it, but that is not easy with solar satellites.

The electric power would have to be converted to microwaves (with a lot of losses). That would be beamed down to earth (with losses from atmospheric reflection, scattering and absorption). Most of the beam would strike the receiver but some would not because of beam spreading. (Also, the beam would tend to wander a bit because of atmospheric refraction, which also makes stars "twinkle".) The receiver would have to capture the microwaves that struck it and somehow convert them back into electricity, and every way I know to do this has dreadfully poor yields.

Microwaves are not the only approach to the downlink, but every approach I know of for the downlink either cannot handle the power levels involved, or is terribly inefficient. Compared to terrestrial electrical power generation technologies, solar satellites inherently require more conversions, many of which have poor efficiency, and the overall system efficiency will necessarily be far worse. I would be surprised if the system had a yield as high as 5%. I would tend to think it would be even lower.

On the other hand, the energy which would have to be expended to create a solar power satellite would be huge compared to the energy needed to build a terrestrial power generation facility. Would it break even before it reached the end of its operating life? Would it actually produce more energy than it cost? I'm not so sure it would.

The capital cost to create a solar satellite would also dwarf the cost of terrestrial power plants which delivered comparable amounts of power, but the satellites and terrestrial power generators would sell their power on the same market at the same price. Could a solar satellite produce enough revenue during its operational life to repay its capital cost? It seems unlikely.

In other words, solar satellites are possible but are not feasible. We'd be much better off spending our money to build more coal-fired generation plants. We could produce much more power while spending much less money.

But they wouldn't be as romantic.

I suppose there's a place for romance. I suppose there's a place for trophy projects. I understand why the French and Brits created the Chunnel. It was an amazing technological achievement – but it's never going to pay off the investment needed to produce it. It's clear now that they'd probably have been better off considering some other approach.

I think core taps (described in this post) could ultimately satisfy all five criteria, but there are essential technological precursors for core taps which don't exist yet. The most important problem is that current drilling technology is subject to scaling problems which mean it can never reach the necessary depths. As drill shafts get longer, the total mass of the shaft rises and turning resistance increases (from friction on a greater surface). The strength of the metal used in the shaft can't really be increased. So as the shaft gets longer, one of three things will eventually happen: either the drill will seize up because too much force is needed to overcome shaft friction and too little force will be transmitted to the drill bit to keep cutting, or the drill shaft will start to twist, or the drill shaft will actually break outright.

One of the things which is needed is an entirely new drilling technology which can reach the depths required, but it's not obvious what it might be. My best guess is that it will be based on a high powered laser, but there's more to the problem than that. (When you drill, you not only need to cut, you need to remove. How does a laser remove?) Whatever the solution might turn out to be, it's likely to bear about as much resemblance to current drilling as transistors bear to vacuum tubes.

In 1942, the people on the original Manhattan Project had a definite destination in mind and a pretty good idea of how to get there.

But people assigned to Greg's new Manhattan Project would have no such destination. All they'd have would be a nebulous goal. That's not the same thing. What's worse, they'd have every reason to believe there was no way to achieve that goal.

Greg continued:

I'll say it as clearly as I can: If we're at war -- and we are -- where is the "Manhattan Project"? Where are our leaders? Why isn't developing technologies that will free us from dependence on oil our number one priority as a civilization?

There is no such project because it would be a waste of time and money. I'm sorry, but it's really that simple.

Or rather, it can be stated that simply, but the actual reason is very complicated.

Update: In fact, if we actually became bound and determined to drastically reduce consumption of Arab petroleum, and didn't want to destroy our economy doing so, then there are two quite obvious solutions. First would be to start using Canadian oil sands. The other would be for us to start building coal gasification plants (to utilize America's vast deposits of black or brown coal). The technologies involved in both of those are well understood; neither would require a "Manhattan Project". Right now, the main reason we aren't doing either is that they're not cost-competitive with petroleum.

When I referred to the problem as "nontrivial" above, I was responding to it in the terms Greg was thinking about: development of some brand new energy source which would miraculously make it so we no longer needed much petroleum. That remains nontrivial.

Update 20040604: TMLutas thinks I'm too pessimistic.

He thinks he's totally discredited this article by pointing out that solar satellites would use solar cells instead of mirrors and boilers. Actually, in high-power designs, boilers and turbines have surprisingly good efficiency, much better than the 15% he quotes for solar cells, which waste the majority of the light which strikes them because the frequency is wrong. That's not where I think the efficiency problem lies, anyway. The problem is the power downlink to the ground, especially the conversion to RF and back to electricity in the receiver. They'll both be terrible.

He thinks he's found a citation for 90% efficiency in conversion of DC to microwave RF. Unfortunately, what he has found isn't relevant to this problem. It's easy to do that if you're only talking about a few watts. It is not at all easy if you're talking a gigawatt. No one is going to get 90% conversion of electric power to microwave transmission at gigawatt power levels. No one is going to come remotely close.

This is one of the few remaining applications where semiconductors have not yet displaced vacuum tubes. In a modern TV transmitter rated for 500 KW or 1 MW, everything is transistorized right up to the very last amplification stage, which uses vacuum tubes the size of garbage cans.

The satellite downlink will have to generate and transmit as much RF as a thousand such TV stations. Doing that is difficult. Doing that with 90% efficiency is "nontrivial".

Doing it at microwave frequencies merely adds to the fun, because extremely high frequency applications are also extremely unforgiving. I'm not really sure just how you'd generate microwave RF at gigawatt power levels, quite frankly, but whatever approach gets used, it ain't gonna achieve 90% conversion. Not gonna happen.

Lutas concludes, SDB took on an almost impossible task, proving that something cannot be done feasibly. No, I'm afraid not. I don't contend that these things are and will always remain infeasible (though the ones I discussed are definitely infeasible right now). What I contend is that they cannot be done soon enough, or at a large enough scale, to have any political effect on this war.

Update: More here.


permanent link to this entry Stardate 20040602.1521

(On Screen): Responding to my article about sources of bias in news reporting (and its followup here), Andrew Olmstead points out that the military has a similar problem. If all intelligence was dumped on senior commanders they'd drown in data, but when the data is filtered by subordinates there's always a degree of bias and distortion involved.

The fundamental problem of "drowning in data" is an interesting one. It's a consequence of developments in basic technologies relating to storage and transmission of information, because as those have advanced we have ended up with an embarrassment of riches. It's increasingly easy to know something important without realizing you do, because it is a very small needle lost in a very big haystack. Librarians were among the first to have to deal with this seriously, and the Dewey Decimal System is one of the great unheralded achievements of the modern era.

Sometimes there are several pieces of information each of which appears uninteresting but which taken together are critically important. It seems to be the case that various intelligence groups in the US government had enough hints collectively to have discovered and foiled the 9/11 hijacking plot, but did not do so because the significance of that information wasn't recognized, and the pieces weren't put together before the fact. There's been a lot of criticism of that by outsiders using 20:20 hindsight, but most of them don't realize just how tough a problem this is.

It comes up all kinds of places. I spent most of my career as an engineer designing tools for other engineers to use. I spent six years at Tektronix designing logic analyzers in the late 1970's and early 1980's. A logic analyzer (LA) is an instrument which permits an engineer to observe the behavior of digital circuits similar to the way an oscilloscope permits an engineer to observe the behavior of analog circuits. During that six years I participated in two complete product development cycles (the 7D02 and the 1240), and while working on the second one I came to the realization that the true power of test and measurement equipment (such as LAs) was not a function of how much useful data they could capture so much as it was a function of how much useless data they could exclude and discard. The ideal T&M instrument would capture and display exactly the data which was most interesting and useful to the engineer without displaying anything else. Like all ideals, this is unachievable, but the better the instrument is at this, the more useful it will be perceived to be. On the other hand, if the critical data you need is lost in a huge sea of irrelevance, then it's much less useful.

Claude Shannon rigorously examined the basic question, "What is information?" in the late 1940's while working at Bell Labs. He developed what we now call "Information Theory", and there may be no single theoretical work which is more important and less well known. For many electrical engineers and computer programmers it's central and vital, but few laymen have ever heard of Shannon and quite a lot of programmers don't know his name.

One of Shannon's fundamental insights was that transmission is not the same as information. He concentrated particularly on the fundamental properties of bit streams (his 1948 paper introduced the word "bit" as shorthand for binary digit) and concluded that information was a function of surprise or unpredictability. When someone receives a message encoded as a string of bits, if based on the value of the bit stream up to a given point the receiver has no better than a 50:50 chance of predicting the next bit, then that bit contains maximal information. At the other extreme, if the receiver can predict the next bit unfailingly, then that bit contains no information at all.

That can be demonstrated by providing sequences of letters and asking the reader to guess the letter which comes next. Try these two:

I WENT TO THE STORE AND BOUGHT _

THE PRESIDENT OF THE UNITED STA_

The first one is very difficult to predict; the second one is extremely easy. The actual letter which will be received next in the first message will contain a lot of information; the next letter received for the second message will contain almost none.
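That intuition can be made exact: an event with probability p carries -log2(p) bits of information when it happens. This is just the standard formula, not anything specific to Shannon's paper; a quick sketch:

```python
import math

def surprisal_bits(p):
    """Bits of information carried by observing an event of probability p."""
    return -math.log2(p)

print(surprisal_bits(0.5))    # a 50:50 next bit carries exactly 1.0 bit
print(surprisal_bits(0.999))  # a nearly certain next bit carries almost nothing
```

A perfectly predictable bit (p = 1) carries exactly zero bits, matching the extreme case above.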

Shannon developed a mathematically rigorous way of evaluating the amount of information present in a given bit stream, although it was actually all turned upside down from how I am describing it. What he analyzed and rigorously described was redundancy rather than information, and he developed a way of calculating what he called the "entropy" of a bitstream, which was a measure of the extent to which that bit stream carried less than the maximum amount of information possible for a bitstream of that length.
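For the curious, the quantity is easy to approximate from symbol frequencies. One caution: in the usual modern convention the number computed below is information content in bits per symbol (high for unpredictable streams, zero for perfectly redundant ones), and redundancy is the gap between it and the maximum for the alphabet. This is a crude zeroth-order sketch that ignores context entirely:

```python
import math
from collections import Counter

def bits_per_symbol(text):
    """Zeroth-order entropy estimate: average surprisal per symbol,
    computed from symbol frequencies alone (ignoring the context
    that real English supplies in abundance)."""
    n = len(text)
    freqs = (count / n for count in Counter(text).values())
    return -sum(p * math.log2(p) for p in freqs)

print(bits_per_symbol("AAAAAAAA"))  # perfectly predictable: zero
print(bits_per_symbol("ABABABAB"))  # two equiprobable symbols: 1.0 bit each
print(bits_per_symbol("I WENT TO THE STORE AND BOUGHT"))
```

Because it ignores context, this estimate overstates the information in English text; a first-order or higher model (conditioning on preceding letters, as Shannon did in his experiments) gives much lower figures.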

Shannon's work provides the theoretical basis for file compression algorithms. Compression algorithms convert long bit sequences which have a relatively high entropy into shorter bit sequences with much lower entropy in such a way that they can be interpreted later to reproduce the original higher entropy bit sequence. But there's a limit to compression which you reach when the compressed bit stream has entropy of zero, beyond which you cannot go. (That's why putting a ZIP file inside another ZIP file doesn't cause any further compression.) If you want to compress even further you have to discard information.
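The ZIP-inside-ZIP point is easy to demonstrate with any DEFLATE implementation (zlib here; a sketch, not a claim about any particular archiver):

```python
import zlib

# Redundant English text compresses dramatically on the first pass.
original = b"I WENT TO THE STORE AND BOUGHT " * 100
once = zlib.compress(original, 9)

# The output of the first pass has almost no redundancy left to exploit,
# so a second pass can't shrink it -- it typically grows slightly, because
# the container format's own overhead is pure parasitic redundancy.
twice = zlib.compress(once, 9)

print(len(original), len(once), len(twice))
```

The second length is a small fraction of the first; the third is no smaller than the second.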

A compression algorithm called "Huffman encoding" was developed by David Huffman back in 1952. (Old enough that any patents are long since expired, which is all I care about.) Since then the basic approach it used has been developed further, and modern compression algorithms come pretty close to the theoretical limit, producing output bitstreams with extremely low entropies. The specification for the PNG image format includes that kind of compression, which means that PNG files are much smaller than BMP files, which are not compressed at all.
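For those who have never seen it, the heart of Huffman's algorithm fits in a dozen lines: repeatedly merge the two least-frequent subtrees, so common symbols end up near the root with short codes. A minimal sketch (code-table construction only, no bit packing):

```python
import heapq
from collections import Counter

def huffman_codes(text):
    """Build a Huffman code table: repeatedly merge the two least-frequent
    subtrees; symbols in the merged halves get a 0 or 1 prefix.
    (The integer tiebreaker keeps the heap from comparing dicts.)"""
    heap = [(count, i, {sym: ""})
            for i, (sym, count) in enumerate(Counter(text).items())]
    heapq.heapify(heap)
    next_id = len(heap)
    while len(heap) > 1:
        c1, _, left = heapq.heappop(heap)
        c2, _, right = heapq.heappop(heap)
        merged = {s: "0" + code for s, code in left.items()}
        merged.update({s: "1" + code for s, code in right.items()})
        heapq.heappush(heap, (c1 + c2, next_id, merged))
        next_id += 1
    return heap[0][2]

codes = huffman_codes("ABRACADABRA")
print(sorted(codes.items()))
```

Run on "ABRACADABRA", the five As get a one-bit code while the rarer letters get three bits each, and no code is a prefix of another, so the resulting bit stream decodes unambiguously.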

JPG files are much smaller yet, but that's only possible because JPEG encoding is "lossy"; it deliberately throws away information in order to further reduce filesize. The group that developed the JPEG format included experts in human visual perception, and the JPEG format is designed to discard information which is imperceptible to human eyes. The image reproduced from decoding a good quality JPG file created by someone who knows what they're doing is not actually identical to the original image which was encoded into JPG, but we can't tell them apart with our eyes.

The codecs used in cell phones likewise achieve phenomenal compression rates on sound because they discard information which is imperceptible to human ears. (For example, they don't encode phase relationships between harmonics. We can't hear that.)

All of this work was strongly influenced by Papa Shannon. The only reason the people who did this could play the game was because Shannon drew all the lines for the playing field. But Information Theory is far more profound and has much broader application than that.

Shannon's theory includes much else. For instance, he realized that any encoding of information required energy, and the more information which was communicated, the more energy was required to represent and communicate it. Because of this the laws of thermodynamics kick in, causing all sorts of interesting consequences. (And that's the reason he used the term "entropy" as described above.)

Information theory ultimately radically changed cryptography. Computers made entirely new kinds of ciphers practical, and Information Theory showed how to make them strong.

Shannon developed a mathematically rigorous definition of "information", but it operates at the level of encoding. (Because what he actually developed was a rigorous definition of redundancy, and "information" is everything which isn't redundant.) So far as I know he did not seriously examine the question of "information" at the level of semantics. He analyzed representation and transmission of information; he did not examine information in the sense of meaning.

In the first example message I gave above, the next letter received will contain less information than it could. Our ability to predict that next letter depends very strongly on the knowledge we have. Assume that we limit the choice for the next letter to the 26 capital letters and space. If all 27 choices are equally likely, then the letter we actually received would carry the theoretical maximum amount of information. But those 27 choices are not all equally likely in that particular message.

If the message had been something like this, then the next character to arrive might have been totally unpredictable:

YW0gYSBiaWcgYW5pbWF0aW9uIGZhbiBidXQgSS
BoYXZlbid0ID0NCgk+d2F0Y2hlZCB0b28gbXVj
aCBhbmltZSBhbmQgSSdtIG5vdCBz_

But the actual message was in English, and English is quite redundant. That's why we have a much better than 1/27th chance of predicting the next letter.

Based on our knowledge of the English language, we can determine that some letters are more likely than others to be next, though no single letter stands out as an obvious choice overall. Our knowledge of English Language syntax aids our prediction because we know the next letter is the first letter of a word, and we know that some letters are far more common at the beginning of words than others. For example, the chance that the next letter will be T is well below 50:50, but it is far more likely to be T than to be X, though the chance of it being X is not zero. (The next word could be "Xerxes", the name of someone that the sender of the message was buying something for.)

Since we understand the early part of the message, we can analyze the kinds of things the sender might want to say, and thus which words might appear next. Our knowledge of stores and the things they sell helps us to reduce the solution space even further (although the next word isn't necessarily what was bought). If we knew who the speaker was and knew more about the circumstances in which that message was sent, we might well be able to further refine our prediction since we would have a better idea of what kinds of things that person might have wanted to buy on that particular occasion, or why they decided to visit the store.

At the semantic level, the amount of information carried by the next letter is very much a function of who is receiving it and how much they know. The more knowledge brought to bear, the better the prediction will be, and the less information the letter would actually convey once it arrived. That can't be determined through direct examination of the message itself since two different receivers of the exact same message wouldn't necessarily have the same amount of knowledge to apply.

For some receivers this message might contain almost no information. If this message was sent to someone who had accompanied the sender to the store, they would already know what got bought.

And if the full message had previously been sent to us, we would have little difficulty with predictions if we received it a second time.

Redundancy isn't necessarily bad. Suppose that the whole message ended up being this:

I WENT TO THE STORE AND BOUGHT XANANAS.

The letter in that position actually turned out to be X but we know that it is wrong. We know what letter should have been there. We use the redundancy inherent in the English Language to detect and correct the error. Not all redundancy necessarily supports error checking or error correction, but Shannon was able to show that all error checking and error correction required redundancy, and to show how they related to each other.
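The simplest engineered version of this is the parity bit: one bit of deliberate redundancy that lets a receiver detect (though not locate or correct) any single-bit error. A toy sketch:

```python
def add_parity(bits):
    """Append one redundant bit so the total count of 1s is even."""
    return bits + [sum(bits) % 2]

def looks_intact(bits):
    """An odd count of 1s proves at least one bit got flipped in transit."""
    return sum(bits) % 2 == 0

word = add_parity([1, 0, 1, 1])
assert looks_intact(word)

word[2] ^= 1                   # corrupt one bit in transit
assert not looks_intact(word)  # the redundancy exposes the error
```

Note the limits: two flipped bits cancel out and go undetected, and even a detected error can't be repaired. Correcting errors, as Shannon showed, requires spending still more redundancy (as Hamming codes do).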

The main reason Shannon analyzed redundancy was that he wanted to understand how to utilize a communication channel to communicate information reliably while optimizing the amount of information sent on that channel. (That was, after all, his employer's business.) One of the things he learned was that there was a tradeoff between efficient utilization of channel bandwidth and reliable delivery. It was possible for a channel to be both inefficient and unreliable, but it was impossible to use the entire bandwidth of the channel without risking undetectable corruption of data. Reliable delivery required redundancy, and redundancy was parasitic on efficiency.

Shannon tells us how to quantitatively evaluate the entropy of a given message, which tells us how redundant it is. But at the semantic level, evaluating "how much" information is really present involves other factors which can't be evaluated objectively or quantitatively.

One is relevance: even if someone sends us a message which we would have a hard time predicting when partially transmitted, that doesn't mean we'll actually care about what it tells us. If someone sends me the score for a basketball game I hadn't heard about, the message would contain a lot of information. But since I don't give a damn about basketball, I wouldn't be interested.

Information density is a function of unpredictability, but relevance is a function of usefulness. That basketball score would surprise me (in the sense that I would not be able to predict it) but would not be useful to me at all. And there isn't really any correlation between information density and information relevance. A message which carries very little information could still be extremely useful, while a very dense message with a lot of information could be a total bore.

This is more about the receiver than the message. What is useless to me might be useful to someone else, and vice versa. (There are people who really do care about basketball, I'm told.)

Another semantic problem is veracity. Some messages tell the truth, while others do not. And this, too, doesn't correlate. Whether the information in a message is deceptive or not has nothing to do with the information density of the message, or its relevance to its receiver.

I don't think it's possible to evaluate information at the semantic level with the kind of rigor Shannon applied at the lower level of physical encoding. I don't see any way to analyze things like relevance quantitatively, since they're fundamentally subjective.

When we deal with the problem of veracity, we have to rise even above semantics and consider things like intent and competence. We can't just evaluate what was sent; we also have to think about who sent it. Do we think the message sender might be deliberately trying to deceive us? Is it possible that he is sincere but misinformed? (And are we even sure who sent it? That gets us into authentication, another can of worms.)

These are problems which cannot be ignored, even though there isn't any obvious "solution" to any of them. It's ultimately a problem which can only be addressed by induction, and inductive results are never totally certain.

When I worked on logic analyzers, we put a lot of effort into providing easy, powerful ways for our user to discard information which he considered to be "uninteresting". We couldn't do it for him automatically, because there was no way for us to differentiate "useful" data from "useless" data.

Unfortunately, sometimes the user himself doesn't know what data will be useful. That's a particular problem with instruments designed to monitor operations so as to permit analysis of unexpected failures. Since there's no way of knowing what kind of failure will ultimately be monitored – after all, if it could have been predicted then it would have been prevented – there's no way to know ahead of time which of the data that can potentially be collected will be important. The best you can do is to decide that some kinds of data you can collect have a good chance of being useful, and to capture lots and lots of it. After the fact, you have to wade through it looking for a needle or two in a stadium full of straw. (What makes it all the more fun is that you may not even recognize the needle when you see it.)

That was one of the reasons why it took so long to identify the cause of the power blackout in the Northeast last August, as I explained in this post. But at least the information they had to sort through could be trusted (unless they suspected instrument failure). Almost all of it was irrelevant, but none of it was deliberately intended to deceive.

When it comes to information filtration involved in news or in military operations, the problem is all the worse because some sources are trying to deceive us. As Andrew explains, the way the military deals with this is pretty good, but it isn't perfect. I don't think it is possible for it to be perfect.

And there isn't any perfect solution regarding the press, either.


permanent link to this entry Stardate 20040601.1007

(Captain's log): This coming weekend will be the 60th anniversary of the Normandy landing in France, and President Bush has little choice but to attend. It's almost as if he had received something like this in the mail:

Chirac will be the host, and he will act smug and superior. He will try to leverage the situation for his own benefit. Bush isn't going because he has any great urge to talk to Chirac. I suspect that Bush would much rather visit the dentist than visit Chirac. As President of the United States, Bush will be in Normandy on June 6 to honor the Americans buried there, and he will have to tolerate humiliation by Chirac to do so. There's little he can do to avoid at least informally meeting with Chirac. He can't be rude even if Chirac is.

There are always speeches given after these kinds of informal meetings. Chirac has become a bit notorious for taking advantage of such speeches to blindside those who visit him. I remember at least one case where Tony Blair visited, and had to stand dispassionately while Chirac gave a speech which made clear that Chirac thought Blair was misguided. Doubtless Chirac intends to do the same to Bush this time. Don't be too surprised if Chirac's speech emphasizes how the Normandy landing was made by a multilateral coalition, and how important it is to maintain such alliances, and how the presence of American dead in Normandy shows the deep and eternal bond of friendship between France and the nation of cowboys who, quite naturally, really should listen to what their French friends advise them to do blah blah blah...

I have a little fantasy. I don't expect it to happen. But I imagine to myself Bush delivering this speech, when it is his turn at the microphone.

Sixty years ago, American soldiers fought on this ground to save it from fascism. They went overseas to a strange land, full of people speaking a strange and incomprehensible language, and they fought to save those people from brutal tyranny, and to prevent that tyranny from reaching the shores of their American homeland to threaten the loved ones they left behind. They liberated that nation, and then most of them went home. They fought not to create an empire, but to prevent creation of one.

Most of them went home afterwards, but some of them, too many of them, remained behind. Some of them, too many of them, never saw their loved ones again. They died here, and they were buried here, far from home. They rest forever among those they freed. They sacrificed everything to save people they did not know who were unable to save themselves. These men deserve to be honored for what they did, for what they believed, and for the price they paid.

They should rest among friends, among those who understand and are grateful for the sacrifice they made. They cannot know what we do, or what we say, but we still owe it to them to live up to the example they set for us. We owe it to them to not waste their sacrifice; we owe it to them to refuse to lightly discard the precious gift of liberty they gave everything to preserve. They should rest among friends who understand that no price is too steep for us to pay to preserve our liberty.

I am saddened that they have no such friends here. The grandsons of these men are once again fighting overseas in a strange land, which is full of people speaking a strange and incomprehensible language, in order to free those people from brutal tyranny and to prevent that tyranny from threatening their loved ones at home. I am saddened that these brave soldiers, who fought and died to free the people of a nation not their own, must now rest among people who condemn their grandsons as they, too, fight and die to free the people of a nation not their own. I am saddened that they must rest among people who feel only contempt for the values they died to preserve. I am saddened that the people these men died for do not believe anyone else should be given the precious gift of liberty they themselves were given by these dead soldiers. These men deserve better than that.

So when I return to Washington I will ask Congress for money to move them all back home, to the land they held dear, so they can rest forever among the people they loved. They will not see America when they arrive, but America will see them and will welcome them home. It is the least we can do for these soldiers who gave so much for us all. They should rest among people who love what they loved, who value what they valued, and who will if necessary fight to defend the liberty these men fought and died to defend.

Bush won't do anything like this. He will express gratitude and respect for the soldiers buried there and will avoid reference to current events. Chirac, on the other hand, will use the occasion to lecture Bush, and Bush will have no choice but to put up with it. And when I read about it, I'll grind my teeth. But there's nothing to be done about it. France holds some of our war dead hostage; the President must travel there to honor them. But I hate the fact that the people they died for scorn everything those soldiers believed, and that the French in their ingratitude and resentment now condemn exactly that which gave them their freedom to express that condemnation.

History she is a bitch.

Update 20040602: David Boxenhorn comments.

Update: Tim quotes part of a speech Bush actually made today at the Air Force Academy. And I fully agree with Dave when he says, [Bush] may be correct, he may be wrong, but what he said is what I believe.

Meanwhile, Murdoc has his own suggestion for what Bush should say.

Update: David pointed this out:

French officials fear George Bush will inflame anti-American sentiment in France this weekend by linking the D-Day landings with the invasion of Iraq.

Advisers close to Jacques Chirac have let it be known that any reference to Iraq during the 60th anniversary of the Allied invasion of France on Sunday would be ill-advised and unwelcome.

Both presidents will address second world war veterans and VIPs during a service at the American cemetery in Colleville-sur-Mer, Normandy.

"He'd better not go too far down the road of making a historical comparison because it's likely to backfire on him," said a source close to President Chirac.

Given that I still think Chirac himself will try to take advantage of the situation, this is pretty amazing.


permanent link to this entry Stardate 20040601.0627

(On Screen): Regarding Richard Clarke, I only have one question:

Why hasn't this man been indicted for perjury?


permanent link to this entry Stardate 20040531.1216

(Captain's log): Frank writes:

I scanned your media bias piece. Interesting. However, I think at the end you may have made a mistake: I don't know about Murrow -- he was before my time -- but history is increasingly less kind to Walter Cronkite regarding the Vietnam War, and rightfully so, in my opinion. We know now that we won Tet decisively, but Cronkite portrayed it as a failure and ignited popular angst against the war to the critical level that eventually led to withdrawal and defeat. Moreover, plenty of Vietnamese blood was spilled during and after the draw-down of forces, and I lay no small portion of that blood at Cronkite's feet. And then there is the matter of the entire Vietnamese population today living under the jack boot of communist thugs. How much responsibility for this should be assigned to Walter Cronkite? Plenty, in my view -- although I reserve some for John Kerry as well.

What we may never know (has Cronkite ever publicly discussed it in detail?) is whether Cronkite's expressions were rooted in a personal anti-war agenda, or in a deep-seated conviction -- however wrong -- that the war was lost. I'll give Cronkite the benefit of the doubt that he may have been convinced that the war was lost, but he was still *wrong* nonetheless. I also wonder if he cares about the unintended consequences of being the original "anti-war" agenda journalist, and about the lasting effects that his diatribe has had on the journalism profession. My suspicion is that, like most effete liberal elitist snobs, he thinks he elevated the profession and it's just fine with him if most of the current reporting on the war is shaped to serve the anti-war agenda, regardless of how morally bankrupt such positions might be.

Walter Cronkite may have, at one time, been "the most trusted man in America." But that was then, and this is now. Today, I don't think Cronkite could get ratings on any of the big three television networks. He may still be loved by liberal intellectuals. But that's a tiny (and shrinking) segment of the population. Flyover country is just glad that he's not showing his face too much. Let him spend his time trying to chase energy-producing windmills from his vistas overlooking Martha's Vineyard. The rest of the country will move forward, ever skeptical of the reporting of mainstream media outlets -- on the war and most other matters -- all thanks to Walter Cronkite.

He was responding to the final paragraph of that piece:

I don't think there's any easy answer. All of us just have to get used to it, and to remember that much of what we're being told is either false or is distorted. The days of Murrow and Cronkite are gone. We citizens have to start thinking of news reporting as being about as reliable as advertising.

What I meant by that was that the days when we deeply respected top news reporters are over. The people of this nation used to hold certain top reporters in very high esteem, and to see them as men of integrity, men motivated by noble purpose. That doesn't mean they were invariably right; no man ever is. But they were trusted and respected. No media personality today has anything like the kind of reputation those two had.

I am aware of Cronkite's role in the overall result of the Tet Offensive, though I think some now are giving it too much credence. There's something of a tendency in humans to try to finger one person, or one event, or one specific thing as being "the cause" of important results. Sometimes this is an unconscious mistake, and sometimes it is deliberate. Sometimes it is deliberately misused. It shows up in many ways, such as in those who now claim that Iraqi WMDs were somehow the reason for the invasion of Iraq, and our failure to locate them somehow proves that the reason was invalid.

Reality is rarely that straightforward. Indeed, sometimes the factors which contribute to a tragic sequence of events are extremely varied and complicated. It is, for example, very difficult to explain and understand all the factors and events which combined together to bring about the tragedy we now refer to as "World War I" in Europe. Among them are the advent of the industrial revolution and especially the industrialization of Germany, the actual political formation of "Germany" by Bismarck, and the French humiliation in the Franco-Prussian War. But that is far from a complete list of contributing factors, and the combination of contributing factors had emergent qualities which are not obvious without close study.

I definitely agree that the Tet Offensive was the turning point in the Viet Nam War. I also agree that there was an astounding disparity between the military reality and the political effect it had in the US. (I wrote about it here.)

I also understand that Cronkite's performance was one of the more important factors in the overall political result, in part because of his unique position of having a very tall podium from which to speak, and being widely respected. But I don't think it's correct to lay substantially all of the responsibility for the political turn-around at his feet.

Regardless, Cronkite's performance relating to the Tet Offensive doesn't really affect the point I was trying to make. There was a time when we believed that top media personalities were men of integrity. Now we must view them as being as trustworthy as used car salesmen, and polls show that most of us actually do rate them that way.

What I think is rather strange is that polls within the industry also tend to show that journalists have much different opinions about themselves than the general public has about them. Journalists think they are centrist; the public sees them as slanting heavily to the left. (Understand that these are all generalizations.) Journalists think they are in touch with the public zeitgeist and that they tend to pursue issues the public cares about; the public doesn't tend to agree with that.

Even more interesting is that journalists not only have a higher opinion of themselves than the public has, but journalists seem to think the public holds them in higher esteem than the public actually does.

There's quite a difference in how certain professions are broadly viewed. Generally speaking, polls have shown that the clergy are respected the most. Scientists and doctors tend to come in next, and my own profession of engineering is slightly behind that. (Engineers are not generally viewed as being evil, but I think we are viewed as being careless and narrow. We're not seen as being motivated by higher principles or attempts to work for the greater good. The general public is dimly aware of just how much their lives have been improved by the fruits of engineering, but doesn't generally understand just how pervasive it is. They're also aware that the fruits of engineering have caused problems, and some are afraid of just how little they really understand about what we engineers are doing. Engineers are creating their future, and they don't understand what we're making and don't contribute to the decisions we make which will affect their future. So engineers are held in respect in part because so much of what we make actually works so well, or is so nifty and cool, but we're also feared and resented because we give our loyalty to our employers rather than to the public at large.)

Lawyers traditionally have rated much worse, and politicians have always rated very poorly. These ratings tend to be very stable over time, though some events can shake them. (I would not be surprised, for instance, to learn that the reputation of the clergy has been seriously shaken by the revelations about widespread pedophilia among Catholic priests, and about the way the church worked to cover it up.) In the last few decades, the one profession whose reputation has changed the most has been journalism, whose overall reputation has declined radically. Indeed, in some polls, journalists rated lower than politicians.

It's interesting that journalists overall have not really come to grasp just how badly their reputation has decayed. But that kind of disconnect can't last, and we're beginning to see a certain amount of soul-searching within the profession about it.

This is actually a classic example of "spoiling the commons". Over a long period of time, the acts of individual reporters, taken with the intent of promoting their own reputations, have had the cumulative effect of seriously degrading the reputation of their profession and their industry.

In many ways, Watergate has to be seen as a turning point. In some ways it was the greatest triumph in the history of modern journalism. An activist media, seeking the truth, helped reveal, and then kept the pressure on to continue revealing, crimes committed by the top officials of the government of one of the most powerful nations in the world, and through that eventually brought about the downfall of that leadership. Journalists rightly credit their profession as being critically important in the sequence of events which eventually led to Nixon's resignation as President of the United States.

But if it was a triumph for the journalistic profession, it also sowed the seeds of its decline. It inspired a generation of new journalists all of whom had the ambition of becoming the next Woodward or Bernstein. They wanted to do it again. Ric wrote to me:

Woodward and Bernstein got a Pulitzer for the Watergate stories. More importantly, the Press learned that if they were persistent enough they could destroy, or at minimum contribute greatly to destroying, a President. This is real power, albeit that of the bully: "I can destroy this, therefore I control it." That they would not attempt to exercise that power may be too much to expect, given human psychology.

I don't know that it was really motivated by the same thing that motivates bullies. I think it was more a case of "to a man with a hammer, everything looks like a nail." This was really the only way the Press had of influencing events, and the only way for individual journalists to make names for themselves.

It's generally accepted now that Cronkite was wrong in his evaluation of the Tet Offensive. It's not as easy to say whether the overall evaluation of the Viet Nam war was correct. However, I think that whether he was right or wrong, Cronkite actually was trying to do the right thing. If he came to oppose the Viet Nam war, it was because he actually thought the US was wrong to be fighting it. I don't think Cronkite was primarily motivated by ambition to make a name for himself, or any goal of "making a difference" by bringing down an administration and otherwise gumming up the gears.

However, since Watergate, it seems that more and more individual members of the press are primarily motivated by personal ambition. And that's why this is a case of "spoiling the commons": in their quest to gain fame and respect personally, they collectively acted in ways which seriously damaged the reputation of their profession and industry.

Woodward and Bernstein were not the first journalists to help bring down an influential politician, by any stretch. One of the reasons that Murrow holds an honored place in the history of the profession was because of his negative coverage of the McCarthy witch-hunts. (This, like so many other historical events, has come down to us in our popular wisdom in rather abstract and distorted terms. There's no question that McCarthy took things too far, and no question that he was ambitious and unscrupulous. There's no question that anti-Communist hysteria in the early 1950's was an overreaction, and no question that it unjustly ruined the lives of many people. On the other hand, there actually was a problem, and a better man than McCarthy in his position would have gone down in history as a hero.)

Murrow was already famous and respected; he had the same kind of "big podium" that Cronkite had in the 1960's. Murrow used his unique position to expose McCarthy's abuses, and was a critical factor in bringing about McCarthy's political destruction.

There were two critical differences between Murrow and Woodward/Bernstein. For one thing, the latter helped bring down a President. But even more critically, Woodward and Bernstein were unknowns at the time. They became famous because of their Watergate reporting. And because of that, they inspired a generation of ambitious journalists who wanted to do the same thing.

The only problem was that the kind of big story and deep and important perfidy which was present in the Nixon administration was actually not all that common. Woodward and Bernstein deserve most of the credit they've been given, but in part their achievement was based on luck. They were lucky enough to face a hugely important story, and they saw it and reported it.

The other journalists inspired by them needed/wanted equivalently big stories – but were not as lucky. So as my reader SJ put it, journalists became anxiety pimps. We saw a seemingly endless progression of mountains-out-of-molehills presented by a breathless and near-hysterical press, immediately dubbed this-gate, that-gate, something-else-gate, another-thing-scam, and so on. In fact, it was never totally clear whether coining a new xxxx-gate name for some scandal was serious or ironic, and it became something of a joke after a while. But it still goes on.

Challeron wrote:

I was also glad to see you mention Edward R. Murrow and Walter Cronkite at the end of the essay, since you had early on mentioned "the mission of the press"; but I was kinda hoping that you'd note that it was William Paley, the founder of CBS, who allowed his News Division to run at an operating loss for many years, and who felt that news should *never* be a for-profit enterprise. Fred Friendly's "Due To Circumstances Beyond Our Control" is an excellent read about "the corrosive effect of money on the news business, the sensationalization of news reporting, and the viewing public's appetite for quality broadcasting" (from the Amazon.com book description).

That, too, has been a factor. If reporters are more and more motivated by ambition, in television they are also being encouraged to be sensationalist by networks which crave ratings and the big advertising revenues they bring. That has led to abuse, such as the "exploding pickup truck" scandal. (NBC's 60-minutes-alike showed film of a pickup truck exploding after a test crash, as part of a feature which reported that the trucks were misdesigned and excessively dangerous. It was later revealed that the reporters who set up and filmed that demonstration had concealed an explosive charge in the truck which they set off after the collision to make sure the resulting film was spectacular.)

Part of the problem is that in their endless pursuit of the next "big story", reporters tend to dive into areas they don't really understand, and don't do their homework. Barry wrote:

Many journalists write articles about issues, and they haven't the foggiest idea about the subject. They pretend to know instead. Now this leads to the various biases you described, but frankly many journalists aren't the sharpest knives in the drawer.

This has certainly been a problem, and is something of an emergent result of the combination of reporter ambition and network finances. Journalists used to be assigned to "beats", to particular areas which they worked on for years. There was even something of an apprentice system involved, with junior reporters working for senior reporters on that "beat", and then being promoted when the senior reporter retired. That gave them time to learn about the "beat" and become knowledgeable about it.

But it also meant they spent a lot of time covering that "beat" even when nothing important was happening, and they did not have the opportunity to write stories which hit the front page. Ambitious reporters want more flexibility than that to try to cover whatever seems "hot" at any given moment, and the financial reality inside networks is that they don't want to carry and pay for the kind of staff that would require.

This has been a particular problem in coverage of military affairs and especially in coverage of wars. A year ago, the news organizations had to come up with a lot of reporters to send to the Middle East to cover the impending invasion of Iraq. Most of them were completely ignorant about military affairs.

Many of them ended up "embedded", and were assigned to their units long before combat began. That gave them time to acclimate, and to get to know the men in their units and to learn more about the day-to-day activities and procedures of the military, which was all to the good. But it didn't give them knowledge of what combat was actually like, or deep understanding of weapons and other aspects of military affairs. It did tend to make their reporting somewhat more sympathetic to the units they were attached to. (And later some who opposed the war criticized that reporting as being too sympathetic.) But it didn't keep them from making a lot of mistakes.

In particular, reporters tended to massively exaggerate the significance of Fedayeen activities in the rear of the advancing 3rd Infantry Division and 1st MEF. If you've never seen combat before, actually being under fire is terrifying and some of these reporters were in units which actually did get shot at. A short engagement between your infantry platoon and 20 or 30 hostiles using automatic weapons, RPGs and mortars can seem like a pretty big thing, especially if one of the soldiers you have gotten to know gets killed or seriously wounded. Psychologically speaking, a 60 mm mortar round which detonates near enough to you to cause incontinence is bigger than a daisy-cutter you've only read about.

There were also major technical errors. An Iraqi attack made with Frog surface-to-surface missiles was initially reported to have used SCUD missiles instead. That was a pretty important mistake on the political level, because Frog missiles did not have the range to violate Iraq's 1991 ceasefire agreement, whereas SCUDs did.

Since reporters no longer spend the time learning about the "beat" they're covering, they have a harder time finding stories. They don't really know where to look, and they have more trouble understanding what they see. There's a greater chance that they will misjudge the significance of events they try to cover (with a general tendency to assign too great a significance).

This often tends to encourage development of symbiotic relationships between reporters and advocate groups. The reporter seeks a sensational story to tell, and extremist activists have a sensational story they want told, even if it isn't really true. Reporters often come to rely on such people for technical advice and tips, because they have proved themselves valuable in the past.

That's because they gave the reporter information which made for a sensational story. It mattered much less whether the story was actually true or was badly overblown. (And those advocates were more than willing to provide advice and tips to the reporters for free, an added benefit.)

What we the public see resulting from this is activist-group propaganda reported as straight news. That's gotten more and more common as time has gone on. And since it has primarily been leftist activists who have been successful at this, especially in areas such as environmentalism and international politics, this has contributed to the general perception by the public of a strong and increasing leftward slant by the press. It is not always the case that such reporters are actually biased (or as much biased as they seem) so much as that they have come to rely on "sources" which are biased and which have an agenda.

When reporters are careless about this, it can become apparent. One example of that was "9/11 families opposing the war", when outside observers (i.e. bloggers) noticed that a small number of specific people kept appearing in news reports. It turned out they were all associated with one particular anti-war advocacy group, which some reporters had come to rely on as a reliable source of sensational quotes.

Yet another factor is related to a problem which afflicted industries for a long time. It has a formal name (I remember reading about it) but I'm not sure what it is. [Update: it is the "principal-agent problem", and in this manifestation it refers to the fact that the managers make their decisions for their own benefit, not for the benefit of their employer.] It had to do with the fact that ambitious executives tended to move around pretty rapidly from position to position within a corporation, often holding a position for less than two years (and sometimes only for a couple of quarters), and the "best" ones tended to rise rapidly. If a manager was given responsibility for a division, then if that division seemed to do very well, that manager would be promoted and given larger responsibilities.

It is possible to make any division seem to look better than it really is, and to cause it to seem to perform particularly well for a while, if one is not interested in the long term. In particular, the "bottom line" of a division can be improved by decreasing long term investment. In the short run that increases gross profit, but it can't be maintained.

However, by the time the division begins to suffer because of inadequate investment, the manager who made those decisions will have moved on, and will have stuck someone else with the problem. (In fact, it can look like it was the later manager's fault. When Allen was manager, gross profits were high; later, under Bob's management it began to get worse. That doesn't make Bob look very good, even though it was actually Allen's decision to underinvest which is responsible. It still can make Allen look better, since performance under him seemed so much better.)
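The incentive can be sketched with a toy model. (The numbers and the decay rate here are purely hypothetical, made up for illustration; they aren't drawn from any real company.)

```python
# Toy illustration of how cutting long-term investment flatters a manager's
# short-term results while the decline shows up on a successor's watch.

def run_division(invest_plan, capacity=100.0):
    """Return the reported gross profit for each year.

    Each year the division earns revenue equal to its capacity and pays
    whatever it chooses to invest; capacity decays 10% per year unless
    investment rebuilds it.
    """
    profits = []
    for invest in invest_plan:
        profits.append(capacity - invest)   # reported gross profit this year
        capacity = capacity * 0.9 + invest  # decay, offset by investment
    return profits

# "Allen" skips investment for his two years, then "Bob" takes over and
# resumes normal investment.
profits = run_division([0, 0, 10, 10])
# Allen's years (100.0, 90.0) look better than steady-state (90.0/year),
# but Bob inherits a weakened division and reports 71.0 in his first year.
```

Comparing against `run_division([10, 10, 10, 10])`, which reports a flat 90.0 every year, makes the point: Allen's numbers beat the steady-investment manager's, even though he is the one who damaged the division.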

This was a pretty serious endemic problem in American industry in the 1960's and 1970's. One way to reduce that is for managers to not be shifted around as much; if a manager knows he'll be responsible for the division for five to ten years, he is much less likely to neglect essential investments or to manage for short-term performance. Another way to reduce that is to use a more enlightened evaluation of manager performance, taking into account not just how much gross profit they managed to wring out of the division but also the fundamental shape the division is in to keep producing gross profit.

Reporters flitting from story to story are subject to something of the same kind of temptation. Even if they're exaggerating and distorting the story, they are likely to have moved on (and upwards) before the story implodes. And since their news organizations are increasingly interested more in ratings than in accuracy, exaggerated and sensationalist reporting which is later punctured isn't really viewed as being a problem. As long as there is a continuing flow of new sensationalist reporting to keep ratings up, the networks are happy.

When managers in industry succumbed to this temptation, the overall result was a leveling-off of productivity growth in the US in some industries, and other related economic effects. Some companies were so debilitated that they ended up bankrupt or were sold off. (For the workers, that often was the same thing, since the new owner often ended up shutting down much of what they had acquired.)

I think that as reporters do the same thing, the overall consequences are at least as bad for us, and in some ways may be worse. But there isn't the same dynamic in place in journalism which will cause the industry to fix it, the way there was in industry.

It is one of the major factors which has led to a serious decline in the reputation of the press. And because press credibility has been deeply eroded, and because they scream so loudly even about little things, they have deeply damaged their ability to deal with the next Watergate, a really big story where reporters really would make a critical and valuable difference.

If there's a really big story which is being suppressed, and if the press discovers it and reveals it, they do us a great service. But if they try incessantly to turn every small story that comes along into a big one, they do us no favors. And as this kept happening, the press gained the reputation of being the boy who cried "wolf!" too often.

Frankly, this is not in anyone's best interest in the long run (except perhaps for the next Nixon). A well-respected crusading press, which practices restraint and judgment, which shines its light on big issues without overblowing small ones, is a major asset in helping to maintain a free society and in preventing the gradual development of government tyranny. Even with the best of intentions it wouldn't always be right, just as Cronkite probably was not right in his interpretation of the Tet Offensive.

But I'll forgive honest mistakes, even when they're big ones. (And even if Cronkite was wrong in interpreting the significance of the Tet Offensive, he was surely not wrong in recognizing that it was an important event, a "big" story.)

Unfortunately, what we have now is a press which in my opinion routinely makes mistakes, and whose mistakes can't really be considered "honest". It's not that they routinely deliberately lie, though there is some of that; it's more that they routinely exhibit what the SCOTUS referred to as a "reckless disregard as to truth or falsity".

Increasingly stories are evaluated primarily on the basis of sensationalism rather than on verity or significance. "Will this produce a great headline?" has become more important than "Is this true and accurate and complete?" or "Is this important?"

Unfortunately, I don't believe there is a structural solution to this. Barry wrote:

Something that is always uppermost in my mind when I read about the "newspapers" is that I pay to read a paper that claims to present the news. Why are my rights infringed in that, the majority of reports I am "forced" to read, the news comes interwoven with the reporter's nuance, omission of some of the facts, bias/agenda, his quotes and whatever? This is especially serious when it comes to making an educated choice for my representative in the political process. I do not have the freedom to choose as I am not presented with the naked facts. I am in fact led by the nose according to the agenda at play.

Maybe it would be in the interests of everybody if the media were forced to follow the British legal system and structure their paper, or news hour (getting rid of the sound bites might be in the best interests of the country), in which the facts are presented on the first pages and thereafter the opinions of the "opposition" and "defence". Just watching the BBC reporter Peter Hounam talking about Vanunu one is given the impression that the Israeli govt., unfairly hounded Vanunu. What does not get stated is that Vanunu on accepting the job offered signed a legal document pledging not to disclose information that fell under the secrets act. This kind of omission is typical of the great majority and certainly does not permit us to "fairly" judge a situation.

The solution Barry suggests is not really a solution. (He himself complains about BBC coverage of a specific case in Israel.)

This is a cure which is potentially far worse than the disease. The problem is that there's no real objective way to decide what is a "fact", or to decide what is "relevant". No matter who makes that decision, many of those choices will be controversial. Ultimately someone will have to decide what is a "fact" and what is not, and what is "relevant" and what is not, and no matter who it is, sometimes they'll make mistakes and much of the time others will disagree with their judgment. And no matter who it is, their decisions will be colored by their opinions and their ideology and their self interest.

Is it really in our best interests to take that decision out of the hands of people in the media, to give it to someone else? Does that solve the problem, or just shift it?

And who do we rely on to make those decisions? In particular, do we want it to be the government?

We in the US sure as hell don't; that's why a free press is one of the rights guaranteed by the First Amendment.

At best, Barry's proposal moves the problem without actually solving it. At worst, it eliminates all the benefits we could get from a free press by subjecting it to direct government control.

When I said that I didn't think there was any easy answer to this problem, I really was serious. I said, "We citizens have to start thinking of news reporting as being about as reliable as advertising." As someone else said (in a reference I can't locate right now) it's worse than that, because there are legal penalties for false advertising.

Other commentary from: Jonathan Gewirtz, Demosophia, Sarah, Amritas, Jeff Darcy, Laurence Simon, Ken

Update 20040601: Bill Quick comments.

Update: Hyspeed points out a stunningly brilliant parody of current media reactions to the war in the form of a fictional opinion piece from a London newspaper from May 31, 1941. This was before the German invasion of the USSR began, at a time when the UK had been fighting alone for a year.

Update 20040602: More here.


permanent link to this entry Stardate 20040530.2253

(On Screen): Writing from South Africa, someone named S'thembiso Sangweni complains about President Bush in terms which are very familiar.

When our own homegrown voice of reason, Nelson Mandela, warned before the invasion of Iraq that US President George W Bush should be reined in because he was "a president who can't think", the world and the UN Security Council - toothless against the White House - looked the other way.

Now the American promise to bring the good life to the Iraqi people has been reduced to car bombs, thick black smoke from oil centres and pictures showing members of American and other allied forces ridiculing and imposing their will on those who see the world differently from them.

What a novel perspective! How unusual and fresh!

He concludes:

Bush must be ordered back to base, or else UN secretary-general Kofi Annan will remain just a man who is good at poetic speeches but thin on poetic justice.

Ordered? Who exactly is going to issue this order to the government of the United States? And what, exactly, will they do if we tell them to take a hike?

I have a telegram for Mr. Sangweni: the difference between a request and an order is that there are consequences for disobeying an order. You may request that we "return to base", and we'll take it under advisement. But if you think you can order us to do such a thing, then you are full of yourself. (Or full of something else.)

Kofi Annan is not as described by Sangweni. Rather, he is a man whose son is deeply implicated in the biggest embezzlement scandal of all time, and he is a man who leads an organization which needs the US to remain a member (and to contribute nearly a quarter of its operating funding) a hell of a lot more than the US needs to be a member.

The UN's reputation in the US is near an all time low, and a decision by our government to leave it would be very popular. We are tired of being lectured by self-important pipsqueaks. If the UN orders the US to do anything, we're out of there.

And it will be out of here. New York City has better uses for that land. (It could, for instance, better serve the city as a garbage dump. It would certainly smell better.)

Mice really think that cats should wear bells. But who shall give the order to the cat? And what shall they do if the cat refuses?

Update 20040531: Richard responds from South Africa.

