Note: These pages make extensive use of the latest XHTML and CSS Standards. They ought to look great in any standards-compliant modern browser. Unfortunately, they will probably look horrible in older browsers, like Netscape 4.x and IE 4.x. Moreover, many posts use MathML, which is currently only supported in Mozilla. My best suggestion (and you will thank me when surfing an ever-increasing number of sites on the web which have been crafted to use the new standards) is to upgrade to the latest version of your browser. If that's not possible, consider moving to the Standards-compliant and open-source Mozilla browser.

June 24, 2004

Back From the Dead

As some readers may recall, a month ago I seized the opportunity and replaced my ailing G3 iBook with a new G4 model. When the new Apple Store opened, we decided to take the old machine in and see if it could be fixed. I’m happy to report that, not only did Apple fix it, they fixed it for free.

So now, my wife has a new computer, and I have three MacOSX machines to manage (Golem and the two iBooks).

One of the things you have to deal with when you have multiple machines is keeping your data in sync. That’s where rsync is your friend. Among that data are the many gigabytes of music in my iTunes Music Library. To simplify life, I have the command

rsync --progress -v -au -e ssh --delete "/Users/distler/Music/iTunes/iTunes Music" golem.ph.utexas.edu:"/Users/distler/Music/iTunes/iTunes\ Music"

aliased to put_music, which keeps things nicely up to date. Less obvious, however, is how to fix things so that my wife can listen to the library of music that resides in my account on her iBook. The obvious Unixy things don’t seem to work. The only thing that does seem to work is to leave myself logged onto her machine, with iTunes running and “Share My Music” enabled. This just seems wrong.

Posted by distler at 11:27 AM | Permalink | Followups (1)

June 19, 2004

We’re Number 17!

Matt Mullenweg made it all the way to number 1 in a Google search for “Matt”, and made good on his promise to take down his web site, if only temporarily.

Curious, I decided to find out how I rank in a search for “Jacques.” Turns out that I’m number 17, following

  1. Jacques Villeneuve
  2. Brian Jacques
  3. Jacques Cartier
  4. Jacques Derrida
  5. Jacques Torres Chocolates
  6. Jean-Jacques Rousseau
  7. The Jacques-Edouard Berger Foundation
  8. Jacques Maritain
  9. Jacques Brel
  10. Jacques Cousteau
  11. Jacques Whitford Engineering
  12. Transat Jacques Vabre (a race from Havre to Bahia)
  13. Jacques Vapillon
  14. Jacques Delors
  15. Jacques Attali
  16. Jacques Tati

This is fairly remarkable, as I’ve heard of most of the other 16, whereas I’m pretty sure none of them have heard of me.

Unlike Matt, I promise not to take my website down in the unlikely event that I ever reach number 1.

Posted by distler at 08:27 PM | Permalink | Post a Comment

June 16, 2004

Forcing Comment Previews

I’m only about 6 months behind in writing this up.

Back when I first implemented Comment Validation, I relied on a very simple bit of social engineering to ensure that comments got run through the Validator before posting. Near the <textarea> where you enter/edit your comment, there’s a PREVIEW button, but no POST button. The POST button only appears on the Comment Preview form when you’ve previewed the comment, and it has successfully Validated.

If you then scroll down and edit the comment, you are supposed to click on PREVIEW again. But if you actually look at the code I suggested for the Comment Preview page, you quickly realize that nothing prevents you from scrolling back up and clicking on POST instead. I ran with this system for 7 months without a single invalid comment being posted. But just because nobody’s yet come around and rattled the door handle, there is no reason to leave the backdoor perpetually unlocked.

Besides, spambots and crapflooding scripts bypass the comment form entirely, and POST directly to the comment CGI script. They’re immune to social engineering. A better solution was called for.

So, 6 months ago, I created a plugin to solve the problem. Introducing MTHash.

Installation is easy:

  1. The plugin’s only prerequisite is the Digest::SHA1 Perl module.
  2. Put the plugin in your plugins directory.
  3. Create a text file called salt.txt with some random gibberish in it, and place that, too, in your plugins directory.

The plugin offers two new MT Container Tags:

<MTSHA1Hash>...</MTSHA1Hash>
Replaces its contents by an SHA1 hash of the contents.
<MTSHA1SaltHash>...</MTSHA1SaltHash>
Replaces its contents by an SHA1 hash of the contents salted with the aforementioned salt.txt file’s contents. Unlike the previous one, this can’t be pre-computed without access to the salt.txt file.

OK, so how do we use it?

  1. In the Comment Preview form, we add a hidden form field
    <input type="hidden" name="validated" value="<MTSHA1SaltHash><MTCommentPreviewBody convert_breaks='0'><$MTEntryID$><MTCommentPreviewIP></MTSHA1SaltHash>" />
  2. Then we modify lib/MT/App/Comments.pm
    --- lib/MT/App/Comments.pm.orig Fri May 28 00:42:21 2004
    +++ lib/MT/App/Comments.pm      Thu Jun  3 00:09:50 2004
    @@ -240,6 +288,25 @@
         if (!$q->param('text')) {
            return $app->handle_error($app->translate("Comment text is required."));
         }
    +    require Digest::SHA1;
    +    my $sha1 = Digest::SHA1->new;
    +
    +    $sha1->add($q->param('text'));
    +    $sha1->add($q->param('entry_id'));
    +    $sha1->add($user_ip);
    +    my $salt_file = MT::ConfigMgr->instance->PluginPath .'/salt.txt';
    +    my $FH;
    +    open($FH, $salt_file) or die "cannot open file <$salt_file> ($!)";
    +    $sha1->addfile($FH);
    +    close $FH;
    +
    +    my $digest = $sha1->b64digest . "=";
    +
    +    if ($q->param('validated') ne $digest) {
    +    return $app->handle_error($app->translate(
    +            "Please preview your modified entry before posting it."));
    +    }
    +
         my $comment = MT::Comment->new;
         if ($commenter) {
            $comment->commenter_id($commenter->id);

That’s it! If you modify your comment, now you must preview it again before posting.
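The check the patch implements can be sketched in Python (illustrative only: the plugin itself is Perl, and the function name here is made up):

```python
import base64
import hashlib

def comment_token(text, entry_id, ip, salt):
    # Same recipe as the plugin: SHA1 over the comment text, the entry ID,
    # the commenter's IP, and the contents of salt.txt.
    sha1 = hashlib.sha1()
    for part in (text, entry_id, ip, salt):
        sha1.update(part.encode("utf-8"))
    # Digest::SHA1's b64digest() drops base64 padding, and the plugin then
    # appends a literal "="; for a 20-byte SHA1 digest that is the same as
    # ordinary padded base64.
    digest = base64.b64encode(sha1.digest()).decode("ascii")
    return digest.rstrip("=") + "="

# On preview, the token is embedded in the hidden "validated" field; on
# POST, the CGI script recomputes it and compares.  Editing the comment
# after previewing changes the hash, forcing a re-preview.
salt = "random gibberish from salt.txt"
token = comment_token("Hello, world", "42", "10.0.0.1", salt)
assert token == comment_token("Hello, world", "42", "10.0.0.1", salt)
assert token != comment_token("Hello, world!", "42", "10.0.0.1", salt)
```

A spambot POSTing directly to the comment script has no way to produce a valid token without knowing the contents of salt.txt.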

As a side benefit, spambots, which POST directly to the Comment CGI script, don’t work any more. Some of my friends who’ve been using this technique (without the Validation part) tell me it works wonders against spambots.

I wouldn’t know; I don’t get comment spam anymore (so far, only 15 spam comments in over 8 months).

Posted by distler at 09:59 AM | Permalink | Followups (6)

June 11, 2004

Higgs Up

[via Sean Carroll] Now, I know I’m not supposed to believe any physics papers published in Nature, but this one — if true — is pretty interesting. It seems that DØ has been reanalyzing their data on the top quark, and their new analysis pushes the central value for the mass up by 4 GeV. The previous world average value was $m_t = 174.3 \pm 5.1$ GeV. The new value is $178.0 \pm 4.3$ GeV.

This may not sound like much of a change, but if you recall, the Higgs mass in the MSSM depends rather sensitively on $m_t$. The one-loop contribution, up to logarithmic corrections, goes like $m_t^4$,
$$m_h^2 < m_Z^2 \cos^2(2\beta) + \frac{6 \lambda_t^2 m_t^2}{4\pi^2} \log(m_{\tilde{t}}/m_t)$$
where $\lambda_t = m_t/H$ is the top Yukawa coupling, $m_{\tilde{t}}$ is the stop mass, and $\tan(\beta) = H/\tilde{H}$.

To accommodate the current experimental lower bound, $m_h > 114$ GeV, one needs a very heavy stop, $m_{\tilde{t}} > 850$ GeV. And, even so, one has an upper limit of something like $m_h < 135$ GeV.

Shift the top mass up by 4 GeV, and the upper limit on $m_h$ also shifts up to $m_h < 140$ GeV. But, more significantly, the central “best-fit” value goes up from the now-experimentally-excluded 96 GeV to 117 GeV. Just out of range of what would have been seen at LEP, but about the first thing they’ll see at the LHC.
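The sensitivity to $m_t$ can be illustrated by plugging numbers into the one-loop formula above (a rough sketch only: the inputs $H \approx 174$ GeV, $m_{\tilde{t}} = 850$ GeV and $\cos^2(2\beta) = 1$ are illustrative assumptions, and this leading-log estimate undershoots the bounds quoted in the text, which come from a full higher-order calculation):

```python
from math import pi, log, sqrt

def higgs_mass(m_t, m_stop=850.0, H=174.0, cos2_2beta=1.0, m_Z=91.19):
    """Leading-log upper bound on the MSSM Higgs mass (GeV):
    m_h^2 < m_Z^2 cos^2(2 beta) + 6 lam_t^2 m_t^2 / (4 pi^2) * log(m_stop/m_t)."""
    lam_t = m_t / H  # top Yukawa coupling, lam_t = m_t / H
    mh2 = m_Z**2 * cos2_2beta + 6 * lam_t**2 * m_t**2 / (4 * pi**2) * log(m_stop / m_t)
    return sqrt(mh2)

# Shifting m_t from 174.3 to 178.0 GeV moves even this crude bound up by
# a couple of GeV, since the correction term scales like m_t^4.
delta = higgs_mass(178.0) - higgs_mass(174.3)
```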

Plot of Higgs mass versus tan beta, for old and new values of top quark mass

The mass of the lightest Higgs boson in the minimal supersymmetric standard model (MSSM). The predicted value is shown as a function of the parameter tan β (the other MSSM parameters are chosen such that they maximize the resulting value of the Higgs mass). The predicted Higgs mass is sensitive to the value of the top-quark mass used in the calculation. The solid line indicates the prediction using the new measurement of the top-quark mass from the DØ Collaboration; the white band indicates the uncertainty of the prediction that results from the error on the top-quark mass. The dashed line shows the situation before the new measurement (the previous experimental error of 5.1 GeV/c² is not shown). Based on the new value of the top-quark mass, an upper bound on the mass of the lightest MSSM Higgs boson of about 140 GeV/c² is established. [Nature 429 (10 June 2004) p. 614.]

Posted by distler at 12:06 AM | Permalink | Followups (6)

June 06, 2004

Hot Tech

Camelbak FlashFlo Hydration System

It’s June and summer has arrived in Austin. This week, we saw a couple of 40 ℃ days. And there will be many more 38+ ℃ days before the weather gets nice again at the end of September.

Yesterday, when I embarked on my customary weekend 10.1 mile run (Mopac to the Longhorns Dam) on Town Lake¹, I had a new high-tech gadget, to accompany the heart monitor and the iPod: a Camelbak FlashFlo. The new gadget was purchased at S’s insistence, out of fear, I suppose, that I will keel over in the middle of one of my workouts². It may have been “only” 34 ℃, but I must say that sipping cold water, while listening to Eric Dolphy, made the run much more pleasant.

Mopac to Longhorns Dam
Mopac to Longhorns Dam running trail on Town Lake.
(Mopac to 1st St. is also indicated.)

The trick with the Camelbak seems to be to fill it with water, and stick the whole contraption in the freezer. Just remember to take it out far enough in advance, so that it’s half-defrosted by the time you hit the trail.

Summer in Austin may be tough, but now I’m ready.


¹ During the week, I do the 4.1 mile Mopac to 1st Street loop, with 10 lb handweights.

² Whatever else you might say, it’s still a heckuva lot easier than my old workout.

Posted by distler at 03:01 PM | Permalink | Followups (1)

June 05, 2004

Bogdanorama

Back when I was just starting this weblog, the Bogdanov brothers provided fodder for a number of highly amusing posts. It was a good way to get things rolling and, while this weblog has gone on to bigger (and hopefully better) things, I still maintain a soft spot for old Igor and Grichka B. and their antics. Many of you probably feel the same way.

Back in December, I received some correspondence which clearly indicated something was afoot on the Bogdanov front. And now, dear readers, your patience has been rewarded. The brothers have a new book out, Avant le Big Bang. It’s currently number 9 in sales on amazon.fr.

In it, they apparently claim that their erstwhile critics have retracted their criticisms. Fabien Besnard has been following up with the allegedly former critics.

I almost got to play along. Fortunately, my habitually prickly demeanor kept me out of trouble.

Posted by distler at 08:18 PM | Permalink | Followups (11)

June 03, 2004

itexToMML News

A minor update to my MovableType itexToMML plugin. With this update, the itex to MathML with parbreaks filter is much smarter about skipping over block-level tags. There’s no change to the itex2MML executable, which was last updated at the end of April.

If you’ve tried commenting on this blog, you’ll also have noticed that I have Textile with itex to MathML and Markdown with itex to MathML filters available. If you have Textile 1.1 and/or Markdown 1.0b5 installed on your system, you can get the same functionality on your blog by

  1. Applying this patch to Textile and/or this patch to Markdown.
  2. Installing the TextileMarkdownMML plugin.

(Anyone who wants to take a crack at getting this to work with Textile 2.x is welcome to try.)

Finally, a minor update to my WordPress patch, which enables my WordPress plugin to work nicely with the existing WordPress text filters (wptexturize, Textile1, Textile2 and Markdown). For those who’ve already applied my previous patch, the new bit is

--- wp-includes/functions-formatting.php.orig   Sun May 16 17:14:14 2004
+++ wp-includes/functions-formatting.php        Thu May 20 12:27:40 2004
@@ -103,7 +104,7 @@
        $content = preg_replace('/<category>(.+?)<\/category>/','',$content);
        // Converts lone & characters into &#38; (a.k.a. &amp;)
-       $content = preg_replace('/&([^#])(?![a-z]{1,8};)/i', '&#038;$1', $content);
+       $content = preg_replace('/&([^#])(?![a-z]+;)/i', '&#038;$1', $content);
        // Fix Word pasting
        $content = strtr($content, $wp_htmltranswinuni);
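The point of loosening `{1,8}` to `+` is that some MathML entity names run longer than eight characters, so the old pattern would mangle them into literal `&#038;` text. A quick sketch of the two patterns’ behavior (in Python for illustration; WordPress applies the PHP equivalent):

```python
import re

old = re.compile(r'&([^#])(?![a-z]{1,8};)', re.I)  # original WordPress pattern
new = re.compile(r'&([^#])(?![a-z]+;)', re.I)      # patched pattern

text = 'x &rightarrow; y & z'  # one long MathML entity, one lone ampersand

# The old pattern escapes the entity (its name is 10 letters, so the
# 1-8 letter lookahead fails) as well as the lone '&':
print(old.sub(r'&#038;\1', text))  # x &#038;rightarrow; y &#038; z
# The new pattern leaves the entity alone, but still escapes the lone '&':
print(new.sub(r'&#038;\1', text))  # x &rightarrow; y &#038; z
```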

Posted by distler at 12:43 PM | Permalink | Followups (10)

June 02, 2004

Review of Large-N Duality

I’m always on the lookout for good review articles on various topics in string theory. For one thing, graduate students are always asking for them. For another, I find them useful for catching up on topics that I may not have been paying sufficiently close attention to.

It’s not often that I find one which is wholly satisfactory. Today, however, I get to sing the praises of Marcos Mariño’s Chern-Simons Theory and Topological Strings.

Marcos starts with a review of Chern-Simons Theory and topological string theory, and builds up to a review of Gopakumar and Vafa’s large-N duality, geometric transitions and the construction of the topological vertex.

All within a digestible 46 pages. Really a great read, about an eminently important subject. Highly recommended.

Posted by distler at 01:05 AM | Permalink | Post a Comment

May 29, 2004

Movin’ On Up

MT 3.0

Yes, if you look down at the sidebar, you see that Musings and the String Coffee Table have been upgraded to MT 3.0. That’s 29 plugins¹ and 657 lines² of (unified diff) patches to the MT source code. Actually, 657 lines is a good bit shorter than my accumulated patches for MT 2.661. 6A actually fixed several of the bugs on my list in 3.0, while introducing, so far, only two new ones.

Remarkably, the whole thing seems to work.

(Well, OK, the comment-posting code was busted for a while, and people could preview, but not post their comments. Thanks to Srijith for catching this.)

iBook

I type these words on my new 14" G4 iBook. My aging G3 iBook was suffering from the dreaded backlight problems, and intermittent trackpad wonkiness. Keeping an external monitor and a USB mouse plugged in kinda defeats the concept of “laptop.” So, when I had a chance to purchase a brand new 1 GHz 14" iBook for under $1000 (ya gotta know the right people ;-), I decided to go for it. It arrived yesterday. I installed the RAM upgrade (to 640 MB), booted the machine in FireWire Disk Mode, and proceeded to clone my old machine’s hard drive onto the new one³. Upon rebooting, iTunes demanded that I authorize the new machine, Mathematica demanded that I re-enter my License Code (it demands that, whenever you so much as sneeze), but everything else worked flawlessly.

My only miscalculation was the naive presumption that I could re-use the Airport card from the old machine. Nope! “Airport Extreme-Ready” means incompatible with the old cards. Guess I’ll be adding another $100 to the cost of this baby …


¹ The 30th plugin, MT-Blacklist, awaits a 3.0-compatible update, and another was rendered superfluous by 3.0. Six of the 29 were plugins of mine.

² More precisely, that was: 14 files patched, 289 lines added and 29 lines removed from the MT codebase. This doesn’t count patches to other people’s plugins. I just copied the plugins over from my MT 2.x plugins directory.

³ N.b. I had MacOSX 10.3.4 (build 7H63) installed, a later build than the one included with the machine. This is important.

Posted by distler at 10:48 AM | Permalink | Followups (4)

May 27, 2004

High Energy Supersymmetry

I’ve really gotta stop posting about the Landscape. Posts on the subject rot your teeth and attract flies.

Still, it helps to sort out one’s thinking about anthropic ideas, which definitely clash with the sort of “explanation” we have become used to in field theory. In any scientific framework, one needs to understand what’s just given — input data, if you will — what needs to be “explained” and (most importantly) what counts as an explanation. There’s a temptation to mix and match: to invoke the anthropic principle to explain some things, and “technical naturalness” to explain others. But that is simply inconsistent; a statistical distribution in the space of couplings does not favour technically-natural ones over others.

Consider the question: why is the QCD scale so much lower than the Planck scale (or the GUT scale)?

We are accustomed to saying that this large hierarchy is natural because it arises from renormalization-group running. The QCD coupling starts out moderately small at the GUT scale ($\alpha_{GUT} \sim 1/25$), and increases only logarithmically as we go down in energy.

But, in the Landscape, there’s a probability distribution for values of $\alpha_{GUT}$, which might just as easily be $1/10$, or $1/150$. What sounded like a virtue now sounds like a vice. The ratio $\Lambda_{QCD}/M_{GUT}$ depends exponentially on $\alpha_{GUT}$, and so is an exquisitely sensitive function of the moduli — exactly the sort of thing about which it is hard to make statistical predictions.

Instead, there’s an anthropic explanation for the value of $\Lambda_{QCD}$. Namely, the proton mass (which is essentially determined by the QCD scale) is tightly constrained. Vary $m_p/M_{Pl}$ by a factor of a few, and stars cease to exist. Hence $\alpha_{GUT}$ must be pretty close to $1/25$; otherwise, we aren’t here.

Similarly, point out Arkani-Hamed and Dimopoulos, the electroweak scale cannot be vastly different from $\Lambda_{QCD}$. For the ratio enters into the neutron-proton mass difference. If the neutron were lighter than the proton, there would be no atoms at all. If it were much heavier, all heavy elements would be unstable to beta decay, and there would be only hydrogen. Either way, we would not exist.

If the electroweak scale is anthropically-determined, is there any reason to expect any beyond-the-Standard-Model particles below the GUT scale? We don’t need low-energy supersymmetry to make $M_{EW}/M_{Pl} \ll 1$ natural. Arkani-Hamed and Dimopoulos posit a scenario where supersymmetry is broken at a high scale, with squarks and sleptons having masses in the $10^9$ GeV range (more on that below), whereas the “'inos” (the higgsino, the gluino, the wino, zino and photino) survive down to low energies.

Light fermions are, of course, technically natural. But there’s no reason to expect the theory to have approximate chiral symmetries. So technical naturalness is not, in this context, an explanation for the light fermions. Instead, Arkani-Hamed and Dimopoulos argue that low-energy supersymmetry does have one great virtue — it ensured the unification of couplings around $10^{16}$ GeV. The “'inos” contribute to the $\beta$-function at 1 loop, so the 1-loop running in this model is exactly as in the MSSM. The squarks and sleptons contribute at 2 loops (as they come in complete $SU(5)$ multiplets, their 1-loop contribution does not affect the unification of couplings), and removing them from low energies actually improves the fit somewhat.

Arguing for coupling constant unification sounds equally bogus until you turn the argument on its head (thanks to Aaron Bergman for helping me see the light). Assume that at short distances one has grand unification. Then one needs light “'inos” so that the 3-2-1 couplings flow to their anthropically-allowed values at long distances.

Once we’ve abandoned low-energy SUSY breaking, why not let the SUSY breaking scale be all the way up at the GUT scale? The reason is, again, anthropic. The gluino is a light colour-octet fermion, and hence very long-lived (it decays only via gravitino exchange). If you push the SUSY breaking scale up too high, the long-lived gluino creates problems for cosmology. Arkani-Hamed and Dimopoulos favour a SUSY-breaking scale $M_S \sim 10^9$ GeV.

This gives a big improvement over low-energy SUSY in the context of the landscape. Flavour-changing neutral currents are no longer a problem. And it ameliorates, but does not really solve, the problem of proton decay.

Proton decay via tree-level squark exchange
Proton decay via squark exchange, with two R-parity violating vertices.

Recall that there’s no reason for the generic vacuum on the Landscape to respect R-parity. R-parity is respected only on very high codimension subvarieties of the moduli space (if it’s present at all). So, generically, one expects R-parity violating terms in the superpotential to be unsuppressed. Since the squarks are much lighter than $M_{GUT}$, the dominant contribution to proton decay comes from squark exchange, and the proton lifetime is roughly
$$T \simeq \left(\frac{M_S}{\lambda M_{GUT}}\right)^4 \times 10^{32}\ \text{years}$$
where $\lambda$ is the strength of the R-parity violating Yukawa couplings.

For TeV-mass squarks, the anthropic bound on the proton lifetime gives $\lambda < 10^{-9}$, whereas the observational bound is $\lambda < 10^{-13}$. Pushing the squark masses up to $10^9$ GeV, the bound on $\lambda$ is no longer so absurdly small. The anthropic bound is $\lambda < 10^{-3}$, and the observational bound is $\lambda < 10^{-7}$, but there is still a 4-orders-of-magnitude discrepancy which needs explaining.
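Those numbers can be sanity-checked against the lifetime scaling quoted for squark exchange (a sketch; $M_{GUT} = 10^{16}$ GeV is assumed, and the observational proton-lifetime bound is taken as roughly $10^{32}$ years):

```python
M_GUT = 1e16  # GeV (assumed GUT scale)

def proton_lifetime_years(M_S, lam):
    # T ~ (M_S / (lambda * M_GUT))^4 x 10^32 years, with lam the
    # R-parity violating coupling and M_S the squark mass scale
    return (M_S / (lam * M_GUT)) ** 4 * 1e32

# TeV-mass squarks need lam ~ 1e-13 to keep the proton alive ~1e32 years;
# with 1e9 GeV squarks, the same lifetime tolerates lam ~ 1e-7.
tev_case = proton_lifetime_years(1e3, 1e-13)
heavy_case = proton_lifetime_years(1e9, 1e-7)
```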

I think that’s still a serious challenge for the anthropic principle. Why are the R-parity violating Yukawa couplings 4 orders of magnitude smaller than required by the anthropic bound?

A possible way out was suggested to me by Nima in the course of our email conversation. The lightest superpartner (one of the neutralinos) decays as well through R-parity violating interactions (a similar diagram to the one which led to proton decay, but with one R-parity violating and one R-parity preserving vertex, instead of two R-parity violating vertices). If we want the lightest superpartner to furnish a candidate for the dark matter (leading to structure formation, and hence to us), we need its lifetime to be at least comparable to the age of the universe. For $M_{lsp}$ of a few hundred GeV, to get a lifetime of $10^{17}$ seconds, one ends up requiring $\lambda \lesssim 10^{-7}$.

Perhaps it is the existence of dark matter that “explains” the nearly exact R-parity in our universe. I’m still pretty sceptical, but I’m keeping an open mind. So, there may well be more posts on this subject in the future …

Posted by distler at 12:59 AM | Permalink | Followups (11)

May 26, 2004

Now More User-Friendly

If you’ve ever used the W3C Validator, you probably noticed a couple of things:

  1. The error messages produced by the onsgmls parser are pretty obscure.
  2. The latest version of the Validator attempts to improve the situation by including its own, more verbose error messages, in addition to the terse ones from onsgmls. These messages are not necessarily the clearest, but they are a big improvement.

If you’ve ever commented on this blog, you know that we run comments through a local copy of the Validator, yielding the same obscure error messages as the “old” W3C Validator. Alexei Kosut seems to have lost interest in his MTValidate plugin (at least, he never answered any of my emails). So I decided to update the plugin to use the new, more user-friendly error messages.

The result is mtvalidate-0.2. To install,

  1. There are some Perl module prerequisites. Get your webhost to install them, or use CPAN to install them in your extlib directory.
  2. Make sure you have the onsgmls SGML parser installed on your system. It comes with RedHat Linux, it’s available via fink for MacOSX,
    fink install opensp3
    and for other OS’s, you can always download and compile the source code.
  3. Download and uncompress the sgml-lib directory and put it inside the validator directory.
  4. Edit the SGML_Parser line in validator/config/validator.conf to reflect the location of onsgmls.
  5. Move MTValidate.pl and the validator directory into your MovableType plugins directory.
  6. Follow my previous instructions to enable validation of comments. Alexei has instructions to enable validation of entries.

Let me know how you like the new, “improved” error-reporting. And let the W3C know if you have any suggestions for improving the error messages.

Posted by distler at 11:45 PM | Permalink | Followups (4)

May 24, 2004

Rampant Paranoia

Now that MacOSX has been smitten with two remote protocol handler vulnerabilities in less than a week, people are running a bit scared. Jason Harris claims to have found a new one, in which a hostile attacker gets LaunchServices to register a new URI scheme, for which a surreptitiously-downloaded hostile application is the default handler.

Mr. Harris provides two sample exploits, differing in the protocol used to download the hostile application to the victim’s machine. If successful, they are supposed to create a file, “owned.txt” in the victim’s home directory. When I tried the exploits in Mozilla, the hostile attempts were blocked with the messages, “malware is not a registered protocol.” and “guardian 452 is not a registered protocol.” No disk was remote-mounted (I do have the “disk://” protocol disabled using the RCDefaultApp PreferencePane) and no file was downloaded via FTP.

I was equally unsuccessful in getting either exploit to work in Safari, though no helpful diagnostic error message was given. I’m not saying there’s no possibility of an exploit here (though I’m somewhat incredulous that the mere act of downloading an application — not launching it, not installing it in /Applications/, merely downloading it — would be enough to get LaunchServices to register it as the default handler for some unknown URI scheme), but it’s a bit premature of Mr. Harris to claim

Because this sample exploit registers its own URI scheme, none of the methods people had been using involving disabling certain scripts, moving Help.app or changing the ‘help’ URI scheme would protect against it. At this time, only Paranoid Android provides protection from it.

Dick Cheney makes me paranoid, but the author of my beloved Chicken of the VNC? Nah…

Update (5/25/2004): John Gruber has a more thorough analysis of this new “threat”. According to John, the hostile application gets registered with LaunchServices when it is displayed in the Finder (still sounds wacky to me, but if you say so …). That would happen, for instance, if you had the Finder assigned as your ftp:// helper. Me, I have that task assigned to Mozilla. If the hostile application doesn’t get registered, it can’t be used to attack you.

I find this “display an application in the Finder, and it’s automatically registered as a URI handler” — if true — to be very disturbing. Only applications in /Network/Applications, /Applications and $HOME/Applications should be automatically registered as URI handlers. That’s true of Services; why should URI handlers be different?

Update (6/7/2004): The 2004-06-07 Security Update has a more comprehensive fix for this whole class of problems. Kudos to Apple for their quick work on the issue and for their forthright and comprehensible explanations of their fixes.

Posted by distler at 01:46 AM | Permalink | Post a Comment

May 22, 2004

WordPress 1.2, MathML Goodness

WordPress 1.2 has just been released. Congratulations to Matt and his team for numerous improvements and a shiny new plugin architecture!

In celebration of the event, I’m releasing an itexToMML plugin for WordPress 1.2 and above. This brings easy-to-use mathematical authoring to the WordPress platform.

Installation involves a few simple steps.

  1. First, you need to download and install the itex2MML binary. There are precompiled binaries for Linux and Windows and a precompiled MacOSX binary is included with my source distribution.
  2. Edit line 22 of the plugin to reflect the location where you installed the binary. By default, it says
    $itex2MML = '/usr/local/bin/itex2MML';
  3. Install the plugin as wp-content/plugins/itexToMML.php
  4. Apply the following patch, which makes sure that the installed text-filtering plugins — wptexturize, Textile (1 and 2) and Markdown — play nice with MathML content. (These changes will, hopefully, be in the next release of WordPress.)
  5. Activate the plugin in the administrative interface.
  6. Start serving your blog with the correct MIME Type.
  7. If you want people to be able to post comments in itex, add the requisite list of MathML tags to your mt-hacks.php file.

That’s the good news.

The bad news is that WordPress 1.2 has a serious bug, which renders the plugin nearly useless for serious work. Like its ancestor, b2, WordPress eats backslashes. Type “\\a” in the entry form, and “\a” gets posted to your blog. Re-edit the post, and “\a” gets turned into “a” when re-posted. Since TeX relies heavily on backslashes, this is a pretty debilitating feature. Hopefully, it’ll get fixed soon.

The other thing that is less than ideal is that enabling the plugin is all-or-nothing. When enabled, all your posts and comments get filtered through itexToMML, even those with no math in them. That’s rather wasteful of resources.

But, again, I’m pretty sure that this will have to change in subsequent versions of WordPress. Forget about the people using itexToMML. Consider the choice of text filters for composing posts. Currently, there are four: wptexturize (the default), Textile1, Textile2 and Markdown. Say you have been using Textile for a while and decide one day to switch to Markdown. Guess what? You can’t! If you disable Textile and enable Markdown, this choice applies to all your posts. But the syntaxes of these two markup dialects are incompatible. Your old posts will break horribly if you switch. Once you’ve accumulated a body of posts using one text filter, you are basically stuck, regardless of whether something better comes along, tempting you to switch.

MovableType lets you assign a choice of text filter to each of your posts individually. If you decide one day to switch from Textile to Markdown, your old posts don’t break, because they still get processed with Textile. I added the ability to assign a choice of text filter to each comment in MT. That way, commenters can compose their comments in their favourite idiom, rather than yours.

It seems to me that, once you start giving people a choice of text filters for formatting their posts, it’s inevitable that you’ll need to allow them to make that selection on a per-post basis. WordPress actually allows multiple text filters to be applied to (every) post. If you want to use itexToMML with Textile formatting, you just activate both plugins. In MovableType, I had to create a third text filter plugin, whose sole purpose was to daisy-chain the other two together. It will be cool to see how WordPress eventually handles this. Perhaps there will be a set of checkboxes in the composition window, letting you select which text filters apply to the post you’re composing.

But all that is for the future. Right now, WordPress users have a shiny new toy to play with. I hope they enjoy my small addition to the party.

MIME Types for WordPress

Those familiar with this blog will know that to get MathML to render in Gecko-based browsers (Netscape 7, Mozilla, Firefox,…) and in IE/6 with the MathPlayer 2.0 plugin, you need to serve your pages as application/xhtml+xml. My MovableType solution involves using mod_rewrite to set the HTTP Content-Type headers.

In WordPress, as in any PHP-based system, it’s probably preferable to set the headers directly in your PHP code. It would be great if someone wrote up a definitive guide to doing this in WordPress. Unfortunately, most of the existing instructions, like Simon Jessey’s, are written under the misapprehension that the correct thing to do is to set the Content-Type based on the Accept headers sent by the browser.

This is wrong. It may be “morally correct,” but it doesn’t actually work with real-world browsers.

Both Camino and Opera 7.5 include application/xhtml+xml in their Accept headers. Both cough up hairballs when served XHTML+MathML content with that MIME type. IE/6, with the MathPlayer 2.0 plugin installed, handles application/xhtml+xml (either straight XHTML or XHTML+MathML) just fine, even though it doesn’t say so in its Accept headers.

The only correct thing to do is to send the MIME type based on the User-Agent string sent by the browser. Anybody want to take a crack at writing up some instructions for WordPress?

Update (5/24/2004): Josh comes through with the following PHP code,

// Serve application/xhtml+xml to user agents known to handle XHTML+MathML:
// Gecko-based browsers, the W3C Validator, and IE/6 with MathPlayer.
// Exclude Chimera/Camino and KHTML, which choke on that MIME type,
// unless the Camino build advertises itself as MathML-Enabled.
if ( (preg_match("/Gecko|W3C_Validator|MathPlayer/", $_SERVER["HTTP_USER_AGENT"])
      && !preg_match("/Chimera|Camino|KHTML/", $_SERVER["HTTP_USER_AGENT"]))
    || preg_match("/Camino.*MathML-Enabled/", $_SERVER["HTTP_USER_AGENT"]) ) {
    header("Content-type: application/xhtml+xml; charset=utf-8");
    print('<?xml version="1.0" encoding="utf-8"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1 plus MathML 2.0//EN" "http://www.w3.org/Math/DTD/mathml2/xhtml-math11-f.dtd" >
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en">
');
} else {
    // Everyone else gets the document served as text/html.
    header("Content-type: text/html; charset=utf-8");
    print('
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Strict//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-strict.dtd">
<html xmlns="http://www.w3.org/1999/xhtml" xml:lang="en" lang="en">
');
}

to be placed in wp-blog-header.php or at the top of whatever pages need to be served with the correct MIME type. Either way, you need to remove the hard-coded DOCTYPE declaration and opening <html> tag in the affected pages.

Posted by distler at 09:04 AM | Permalink | Followups (10)

May 19, 2004

Remote Exploit for MacOSX

[Via Jay Allen] This is the sort of thing one expects from our friends in Redmond.

  1. Help Viewer.app will happily run scripts on your local machine.
  2. Help Viewer.app is the default helper application for help://... URIs.
  3. Ergo, an evil webmaster can execute scripts on your computer by redirecting an (innocent-looking) link that you click on to a help:runscript=/path/to/some/script.scpt URL.

By itself, this is limited to executing scripts or applications that are already on your machine (this includes the ubiquitous OpnApp.scpt, which can execute shell commands). For extra fun, Mr. Evil can get you to remote-mount a disk image by redirecting you to a disk://... URL, and then use the previous trick to run an application on the mounted disk image.

That is really, really, evil.

Workaround: Use the RCDefaultApp PreferencePane to disable the help://... helper application. And, similarly, disable disk://... and disks://... .

Update (5/21/2004): Apple has released an update to Help Viewer.app to address this issue, Security Update 2004-05-24 (also available through Software Update):

HelpViewer: Fixes CAN-2004-0486 to ensure that HelpViewer will only process scripts that it initiated. Credit to lixlpixel <me@lixlpixel.com> for reporting this issue.

Update (5/22/2004): John Gruber points out another vulnerability, this time in Terminal.app’s handling of the telnet:// URI scheme. Following a link like

telnet://-npath%2Fto%2Fsome%2Ffile

will overwrite any file you have write access to (abusing telnet’s -n trace-file option). Best to disable that URI scheme too, until Apple fixes Terminal.app. (It’s fixed in 10.3.4.)

Posted by distler at 01:56 AM | Permalink | Followups (4)

May 17, 2004

del Pezzo

Seiberg Duality is one of the mysterious and wonderful features of strongly-coupled N=1 supersymmetric gauge theories to have emerged from the interplay between string theory and gauge theory. In the purely gauge-theoretic context, it’s a bit of a black art to construct Seiberg dual pairs of gauge theories. A stringy context in which a large class of examples can be found, and hence where one can hope to find a systematic understanding, is D-branes on local del Pezzo surfaces.

Let X = K_S be the noncompact Calabi-Yau 3-fold which is the total space of the canonical bundle of a del Pezzo surface, S (ℙ², with k = 0,…,8 points blown up). X has a minimal-sized surface, S_0 (≅ S, embedded via the zero section). “Compactify” Type IIB on X, and consider space-filling D3-branes and D5-branes wrapped on cycles of S_0. Varying the Kähler moduli of X is an irrelevant deformation of the resulting 4D gauge theory. So, studying the different D-brane descriptions which arise as one moves in the Kähler moduli space gives a concrete description of Seiberg Duality. (I’m lying slightly, here, but part of the mystery of the subject is understanding exactly when that’s a lie.)
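In LaTeX notation, the geometry just described is

```latex
X \;=\; \mathrm{Tot}\left(K_S\right), \qquad
S \;=\; \mathrm{Bl}_{p_1,\dots,p_k}\,\mathbb{P}^2, \quad 0 \le k \le 8,
\qquad S_0 \;\cong\; S \;\hookrightarrow\; X \ \ \text{(zero section)}
```

with the D3-branes filling spacetime and the D5-branes wrapping 2-cycles of S_0.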

At certain loci in the moduli space, a nonabelian gauge invariance is manifest, and one has a quiver gauge theory with massless bifundamentals and, typically, some gauge-invariant superpotential for them. There’s a close relation between D^b(X), the derived category of sheaves on X (in which the aforementioned D-branes are objects), and the derived category of quiver representations (with relations given by the derivatives of the superpotential).

There’s a rich literature on this subject, but two recent papers provide a good entrée into it for those (like yours truly) who haven’t been following the literature in much detail. Chris Herzog argues that admissible mutations of strongly exceptional collections of coherent sheaves (which, in turn, form a basis of objects in the derived category) correspond to Seiberg Duality. Aspinwall and Melnikov discuss the same issue from the point of view of tilting equivalences between the derived categories of quiver representations (an approach pioneered by Berenstein and Douglas).

Posted by distler at 02:13 PM | Permalink | Post a Comment