Julian Sanchez


All Ethics Are Secular Ethics

April 23rd, 2012 · 19 Comments

In an exchange at Slate with Will Saletan, Ross Douthat writes:

[T]he more purely secular liberalism has become, the more it has spent down its Christian inheritance—the more its ideals seem to hang from what Christopher Hitchens’ Calvinist sparring partner Douglas Wilson has called intellectual “skyhooks,” suspended halfway between our earth and the heaven on which many liberals have long since given up. Say what you will about the prosperity gospel and the cult of the God Within and the other theologies I criticize in Bad Religion, but at least they have a metaphysically coherent picture of the universe to justify their claims. Whereas much of today’s liberalism expects me to respect its moral fervor even as it denies the revelation that once justified that fervor in the first place. It insists that it is a purely secular and scientific enterprise even as it grounds its politics in metaphysical claims. (You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.) It complains that Christian teachings on homosexuality do violence to gay people’s equal dignity—but if the world is just matter in motion, whence comes this dignity? What justifies and sustains it? Why should I grant it such intense, almost supernatural respect?

Now, I know Ross has read his Euthyphro, but since he talks here as though he hasn’t, I’ll go ahead and make the obvious point: Invoking God doesn’t actually get you very far in ethics, because ascribing “goodness” to a deity or its laws is meaningless unless there’s some independent criterion for this. At best, God gets you two things: First, a plausible prudential internal motivation to behave “morally” (because God will punish you if you don’t), though of the same formal sort as the motivation you might have to obey a powerful state or a whimsical alien overlord. Second, a potential form of “expert validation” for independent moral truths we lack direct epistemic access to, as when we accept certain propositions on the grounds that mathematicians or scientists have confirmed them, even if most of us are incapable of comprehending the detailed proof.  But invoking God doesn’t solve any of the problems that secular moral philosophers grapple with—it’s essentially just a way of gesturing at a black box, wherein we’re assured the answer lies, and asserting that we needn’t worry our pretty little heads about it.

If divine commandments are not supposed to be mere arbitrary rules we obey out of fear, then every question Ross thinks confronts the secular moralist reappears within a theistic framework. Why does being made in the image of God, whatever that entails, imbue people with dignity? Why would it obligate us to treat them (or refrain from treating them) in certain ways? Why should we believe that supernatural properties can supply us with the appropriate sort of reasons if natural properties cannot? As with cosmological questions, appealing to God defers the questions rather than answering them. In the moral case, one might add, it seems to do so in a rather unattractive way: It turns out that the reasons we have to respect other persons are rather like the reasons we have to respect property—flowing not from anything intrinsic to the object, but from the consideration due some third party who is the real source of value.

One way to highlight what’s wrong with this picture is by reflecting on the familiar but confused idea implicit in the observation: “You will not find the principle of absolute human equality in evolutionary theory, or universal human rights anywhere in physics.”  Nor an account of plate tectonics in calculus, a diagnosis of schizophrenia in game theory, or the concept of Turing completeness in astronomy. It is not some kind of contingent disappointment that physics and biology have not discovered dutyons mixed in somewhere with the bosons and protons, or failed to detect the Rights Field generated by the human body: The kinds of facts studied by the natural sciences are, more or less by definition, not normative facts. But the same goes for supernatural facts. If there is a God, we still need ethics to get us across the gap from is to ought. Facts about the divine, if we had any, would just join natural facts in the pool of data for secular moral philosophy. [Addendum: This graf is not meant to take a position on the more contentious question of whether any natural facts—including facts about mental states—could be normative facts.]

Ross is certainly correct that we owe a debt to thinkers in the Christian tradition—who in turn owe one to pagan thinkers of ancient Greece and Rome—but it’s far less clear that the value of their contributions rests crucially on their theistic metaphysical trappings. Aquinas thought that moral law could be derived by human reason from reflection on natural facts. John Locke may have peppered his political philosophy with a generous dose of theology, but it’s not at all obvious that what was always most interesting and compelling in his arguments requires supernatural support. For that matter, Newton was famously quite devout, and thought the physical laws he described ordained by God. But it turns out that F = ma even after you reject that premise: Physical law (like moral law?) does not require a lawgiver. None of which is to deny that there are plenty of hard problems left for modern moral philosophers to solve, but they’re mostly problems that were obscured rather than seriously addressed by theology.

Ross closes with a pitch to modern liberals who wish to preserve ideals like human rights, suggesting that “for all its strange claims and confounding commandments, [Christianity] might still provide a better home for humankind than whatever destination our civilization is headed for.”  This gets us to the odd circularity that’s always at the heart of moralized defenses of religion. The notion seems to be that someone not (yet) convinced of Christian doctrine would have strong reasons—strong humanistic reasons—to hope for a world in which human dignity and individual rights are respected. But then why aren’t these reasons enough to do the job on their own? If Christian doctrine is true, then external considerations are irrelevant to the truth of whatever normative beliefs it supports. If it is false, and our moral beliefs are unsustainable without this false premise, then we should be glad to be rid of false and unjustifiable beliefs. If we think it would be awful to discard those beliefs, then that awfulness is sufficient reason to hang onto them without any religious scaffolding. The only coherent version of this argument is that people who don’t think about it very hard will more readily believe that  the religious version of this story provides reasons to respect rights, and comport themselves accordingly. If that were true, it might lead us to hope most people continue to hold the relevant false beliefs, but such pessimism seems premature.


Tumblr Killed the Blog Star? Some Thoughts on Interpreting Online Trends.

April 19th, 2012 · 4 Comments

This recent xkcd comic, “Ablogalypse,” implies that Tumblr is on its way to outpacing blogs in popularity or cultural relevance. I’m not at all sure that’s what the graph in question shows, though.

Presumably in the early days of the mass Internet you had a much higher proportion of novice users entering search terms like “Buffy website” or “game review website,” because the whole idea of a website was novel enough to seem like it needed to be included in the specification of what you were looking for—but over time people would have realized this was superfluous.

Something a bit similar has probably happened with blogs, partly out of this sort of familiarity (people realize it’s redundant to search for “Instapundit blog” or “Gawker blog,” for example) but also partly because we’ve integrated blogs into the media ecosystem so fully that a blog hardly registers as a discrete entity, distinct from, well, a website. Most major newspapers and magazines now run (or have acquired) at least one blog, and more often several, with writers producing content on the same site in various forms. The distinctiveness of the form also seems less important as more traditional reported news stories are, quite incidentally, delivered in a “blog” format. So what people are now likely to think, and link, is “Writing over at Slate, Dave Weigel argues…” without splitting hairs about whether the particular item appeared as part of the Weigel blog or was classified as a full-fledged Slate article.

In other words, we’ve all finally gotten it through our heads that all those panels on “blogging versus journalism” were based on a weird category error: Blogging was essentially just a particular kind of display format for writing, which could be pro or amateur, journalism or opinion, a personal essay or a corporate press release. So we understand that whether a piece of content happens to be delivered in a blog format is probably one of the least relevant things about it. That’s especially the case now that so much of our media consumption is driven by social media and aggregators—which means you’re apt to click straight through to a particular item without even noting the context in which it’s delivered, even on sites that do still maintain some kind of meaningful segregation of “articles” and “blog posts.”

As a practical matter, moreover, the ubiquity and integration of blogs mean that “blog” is a much less useful search term for narrowing down your results: When everyone casually references blog posts, but the actual blogs at publications often aren’t labeled as blogs (at The Atlantic, for instance, they’re called “Voices” and “Channels”), including the term is as likely to distort your results as to get you what you’re looking for.

Tumblr, by contrast, is still ultimately one domain, and distinctive enough that if you saw something on a Tumblr, you’re apt to remember that it was a Tumblr, both from contextual clues about the site itself and because there are still some very characteristic types of content we associate with Tumblrs. So including “Tumblr” in your search terms is actually a really good way to quickly narrow your results so that you find that new Tumblr about Mad Men/funny animated GIFs/stupid things people tweet, as opposed to other kinds of sites, which will have different types of characteristic content.

OK, so why dwell at such length on a doodle? Because there’s a general point here about how to interpret trends in online activity—whether it’s Google, Twitter references, Facebook likes, or whatever.  The frequency trend over time can’t actually be interpreted straightforwardly without thinking a little bit about both broader changes in the media ecosystem you’re examining and how changing user behavior fits into the specific purposes of the technology you’re tracking. With search, the question isn’t just “are people interested in term X?” but also “is term X a useful filter for generating relevant results given the current universe of content being indexed?” You could, for instance, see a spike in searches for terms like “band” or “music”—not because people are suddenly more interested in bands or music, but because a bunch of popular bands have recently chosen annoyingly generic names like Cults, Tennis, and Girls. (For the same reason, you’d expect a lot more people to search “Girls HBO” than “The Sopranos HBO” or “Game of Thrones HBO”—just looking at the incidence of HBO would give you a misleading picture of people’s interest in HBO programming.)

In the other direction, there’s the familiar snowball effect, perhaps easiest to observe in real time on Twitter: Once a term is trending on Twitter, you can rest assured its incidence will absolutely explode, through a combination of people reacting to the fact (“Oh no, why is Dick Clark trending? Did he die?” or “Who’s this Dick Clark guy?”) and people self-consciously including it in tweets as an attention-grabbing or spamming mechanism, since users are more likely to do searches on terms they see are trending. In principle, then, you could have a set of terms with very similar frequencies in the Twitter population—until one breaks into the trending list by a tiny initial margin and then rapidly breaks away from the pack.
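To make that feedback dynamic concrete, here’s a toy simulation—purely my own illustration, with invented parameter names and values, not anything drawn from Twitter’s actual trending algorithm. A few terms start with nearly identical mention rates; once one drifts above a hypothetical “trending” threshold, the extra visibility feeds back into its own rate and it breaks away from the pack.

```python
import random

# All of these numbers are invented for illustration.
BASELINE = 100.0         # starting mentions per interval for every term
TREND_THRESHOLD = 110.0  # hypothetical rate needed to crack the trending list
TREND_BOOST = 1.25       # extra growth once a term is visibly trending


def simulate(terms, ticks=12, seed=1):
    """Track mention rates for a set of terms over several intervals."""
    rng = random.Random(seed)
    rates = {term: BASELINE for term in terms}
    for _ in range(ticks):
        for term in terms:
            noise = rng.uniform(0.95, 1.08)  # ordinary random fluctuation
            # Once a term is trending, visibility feeds back into its own rate.
            boost = TREND_BOOST if rates[term] >= TREND_THRESHOLD else 1.0
            rates[term] *= noise * boost
    return rates


if __name__ == "__main__":
    final = simulate(["term_a", "term_b", "term_c"])
    for term, rate in sorted(final.items(), key=lambda kv: -kv[1]):
        print(f"{term}: {rate:.0f} mentions/interval")
```

Run it with a few different seeds and the point above falls out: which term “wins” is essentially arbitrary, because a tiny initial edge—not any underlying difference in interest—does nearly all the work.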

We’ve got such unprecedented tools for quantifying the zeitgeist in realtime that it’s become pretty common to use these sorts of metrics as a rough-and-ready estimate of public interest in various topics over time. Probably most of the time, that works out fine—but it can also give wildly misleading results if we don’t pause to think about how other factors, like context and user purposes, tend to affect the trends.


Everything Is a Bulletin Board!

April 6th, 2012 · No Comments

An Arizona man named William Hall is headed back to prison for violating the terms of his parole, which apparently included the following rather dated language:

I will not use an electronic bulletin board system, Internet relay chat channel, DCC chat channel, instant messaging, newsgroup, user group, peer to peer (e.g.Napster, Gnutella, Freenet, etc).

Hall argued (unsuccessfully) that his use of Facebook and MySpace should not have constituted a violation of those terms. Given that the purpose of the restriction was to prevent him from repeating his offense—attempted sexual exploitation of a minor—it seems pretty natural that those sites would indeed be included, and the facts I’ve looked at suggest Hall not only realized as much, but deliberately sought to circumvent software blocks imposed by his supervisors, so I’m not at all inclined to dispute the result in this particular case. Still, this exchange between the court and Hall’s surveillance officer (or S.O.) caught my eye:

Q: Can you show me where on [the Guidelines] [Hall] is not allowed to use social networking sites?

S.O.: Yes, No. 13. I will not use any electronic bulletin board system[s] and social networking [is] underneath bulletin board system[s] because [users] can post comments and that’s what a bulletin board system is.

The thing is, if any site where users can post comments—whether or not that’s the site’s primary function—counts as an “electronic bulletin board system,” that term now encompasses nearly all of the most popular or useful sites on the Net: Amazon, eBay, YouTube, Craigslist, Wikipedia, IMDb, Flickr, news sites like NYTimes.com, most blogs… even many corporate websites have some kind of community forum or chat-support function. One can, of course, use most of those sites without accessing the bulletin board function—though Craigslist is a fuzzy case—and even on Facebook or MySpace, plenty of people just maintain a more-or-less static profile page and photo host, without using the chat or messaging features. Intuitively, you want to apply the rule to exclude the use of “bulletin board” style functions, not entire sites, given how ubiquitous those functions are now. But then you’ve got to figure out which functions those are. A Facebook wall seems bulletin board–ish when comments from others are enabled, but if they’re turned off, the wall is just a string of recent status updates. The chat function presumably counts as “instant messaging,” but what about private messages to friends? Is that “posting a comment,” or is it just another form of e-mail, which is conspicuously excluded from the forbidden list, presumably because it’s more or less impossible to work as a white-collar professional without using e-mail? Does reading a solution to a technical problem on a discussion board (perhaps after clicking through a Google link that doesn’t make clear what sort of site the answer is located on) count as “using” a board, or does that require signing up for an account and posting messages?

Many of the other categories raise further questions. Phones and text messaging are presumably allowed, but Apple’s new Messages client effectively obliterates the distinction: Text messaging is instant messaging. Gmail is presumably allowed, while Gchat is not—but you’d be forgiven for failing to see much difference between a threaded e-mail exchange and a Gchat log, and presumably if Google felt like it, they could obliterate the distinction within the user interface entirely. Skype is a VoIP service—presumably equivalent to ordinary phone service—but also has text chat functionality. Listservs operate over e-mail, but are functionally equivalent to “newsgroups” or “user groups” and are typically also accessible via Web-based interfaces that look like any other electronic bulletin board.

Since the great generative power of the Internet rests in its end-to-end architecture, which enables new and unexpected functions to emerge in a distributed way—often from the choices of users behaving in ways the platform creators did not anticipate—it shouldn’t be terribly surprising that a list based on decade-old categories would lead to substantial ambiguity. Which seems like a bit of a problem when the classifications determine how well a parolee can reintegrate into society as a productive professional and community member—not to mention whether they get to remain free or return to lockup. At the very least, you’d think that would be a reason to regularly update the boilerplate, but it also calls into question whether regulating parolees by means of these broad technological categories is really the right way to go about it.


“Girls Around Me,” Privacy, and the Semiotics of Creepiness

April 3rd, 2012 · 4 Comments

Kashmir Hill is a little disturbed by the public reaction to a controversial iPhone app called “Girls Around Me,” which mined data from the social location platform Foursquare and public profiles on sites like Facebook to create what one breathless critic dubbed “a tool for rapists and stalkers.” Writes Hill:

For one, how do we know that the women who could be found on this map did not want to be visible in this way? A recent Pew study found that women are the savvier sex when it comes to privacy settings, visiting them and ramping them up at much higher rates than men. Those Bostonians who popped up on Brownlee’s map may want to be publicly broadcasting where they are. There are, after all, dating apps, such as Blendr, that do offer exactly that to both men and women. Sometimes we can be found because we want to be found. [....]

The women “exposed” by ‘Girls Around Me’ have chosen to be on Foursquare, and the company tells me that the app was only able to pull up a woman’s Facebook profile if she chose to link it to her Foursquare account. In rejecting and banishing the app, we’re choosing to ignore the publicity choices these women have made (assuming, as Brownlee does, that they did not intend to be that public), in the name of keeping them safe. And we make the ugly assumption that men who might want to check out women in the area have nefarious intentions. If you extend this kind of thinking ‘offline,’ we would be calling on all women to wear burkas so potential rapists and stalkers don’t spot them on the streets and follow them home.

Framed as a privacy issue, the reaction is indeed a little strange. There is no reason to join Foursquare, nor to actively link it to your public Facebook profile, unless you want to publicly share that information: That is the point of the service. Nor, frankly, is it all that much more difficult to do manually what the app enabled, by examining nearby locations in Foursquare’s own official client to see where women (or men) have checked in—so it seems like a stretch to say this is one of those cases where technically public information is being aggregated in a radically game-changing way.

What seems more likely is that the reaction to the app is substantially a result of, as Hill puts it, the “design of ‘Girls Around Me,’ consisting of Bond-style silhouettes of naked ladies dancing and posing provocatively.” Suppose Foursquare were rebranded as “Hookupsquare” (or “Stalkersquare”), or Facebook absorbed by AdultFriendFinder, but everything else about the software remained the same. One assumes they would be a good deal less popular, despite being identical in terms of the information flows they enabled. The “creepiness” would lie entirely in the uses of that information they appear to endorse, or the dimensions of public information they bring to the fore.

One reason labels are important here is that, when it comes to sex, we often like to maintain a deliberate measure of vagueness about exactly what we’re doing. Strangers flirting at a bar or club seldom open with “I’m hoping we might be able to sleep together a little later,” however clear it might be to all concerned that this is ultimately the point—and someone who did would probably seem pretty creepy, even if you’d been harboring the same hope. Shifting online, a big part of the appeal of Facebook is that it readily serves many of the functions of a dating site without defining the core activity as searching for a romantic partner. Keeping the romantic or sexual functions of an interaction—or a platform for interaction—in the background actually ends up serving those functions by creating a space for intimacy to develop without suggesting that all other communication is just some tedious prelude to fluid exchange.

Privacy is probably a bit of a red herring here, then, but it may seem natural to cast it in those terms because the feeling of objectification may overlap with one of the harms we identify with privacy violations. Why do we dislike the idea of being spied on in the shower—even by people who’ve seen us naked before, and so aren’t really obtaining “new information”? Presumably it’s the same thing we find “creepy” about the guy who’s conspicuously ogling body parts in public—which may make someone feel more physically exposed even though, technically, they already were. And we react differently to the same observation depending on how overt it is. Someone who notices they’re being checked out by a furtive glance—or, if we’re in an ’80s movie, the slowly lowered sunglasses—may not mind, or even regard it as complimentary, at the same time as they’d be repulsed by open leering. Why? Because the attempt to be subtle (even if not so subtle as to escape notice) recognizes the observed both as a sexual body and as a subject with a reciprocal gaze of their own. The leer encompasses its object only as object.

If we think about problems of observation (whether the gaze is digital or physical) primarily in terms of control over information flows, the backlash against “Girls Around Me” can seem confused: It doesn’t render private information more public, and it doesn’t substantively alter the purposes for which that information can be easily used. That doesn’t mean the feeling that there’s something creepy or objectionable about the app is misguided, though: It just means not all issues in this sphere are usefully shoehorned into a privacy rubric.


On Snobbery and Books for Grown-Ups

April 3rd, 2012 · 24 Comments

Joel Stein is being roundly booed as a snob for opining in a recent Times roundtable that “Adults Should Read Adult Books” and steer clear of young adult fare. Maybe out of pure contrariness, I’m inclined to offer a qualified defense. It has to be qualified because, let’s face it, I’m a 33-year-old man with an extensive comic book library.  I even read all the Harry Potter and Hunger Games books, and I can’t see why that’s any worse a light entertainment than watching an action movie—which takes about as long. Nor—since he mentions the shame of seeing an adult crack one of these tomes on an airplane—are they appreciably less sophisticated or intellectually challenging than any number of spy thrillers, conspiracy yarns, and other airport bookshop staples. None of them contain prose as clunky or appalling as what nominal “adult” author Dan Brown churns out. They even provide a broad form of common culture, an easy source of metaphors, because many more of us have time to blow through one of them on a lazy Sunday than can commit to tackling Ulysses or Infinite Jest—which means it’s hard to believe there’s some kind of one-to-one displacement effect.

All that said, while there’s often a surprising amount of thematic sophistication to mine in literature aimed at kids and teens, let’s not kid ourselves that it’s equivalent to what you’ll find in the best literary fiction. Well-rounded adults need their share of that too, and some of the most rewarding of it can be hard going. Infinite Jest is an enormously fun book in many ways, but nobody’s going to honestly call it an easy read. Serious literature is too enjoyable to take an eat-your-lima-beans approach, but like most really worthwhile things, it can be difficult, and there probably is some danger of getting so accustomed to a diet of effortless page-turners that we lose our ability to digest richer food.

Most of us, let’s admit, are fundamentally lazy: After working hard all day, who wants to work in their spare time? Even if we’d be glad we did it at the end, it can be hard to motivate ourselves to pick up Joyce when Rowling beckons, promising fewer demands. So in the same way that a little bit of physical vanity can be healthy, if that’s what it ultimately takes to get you out to the gym or the yoga studio a few times a week, maybe a dollop of snobbery is beneficial in the long run if that’s what pushes us to bear the initial mental strain of reading challenging fiction. Sure, ideally we wouldn’t need it: The intrinsic long-run rewards of the activity would be motivation enough. Ideally everyone would behave decently because it’s the right thing to do, and not out of fear of public shaming or legal penalties.

But realistically, we all need and employ all sorts of social commitment mechanisms to help us overcome akrasia and short-term bias—to do what we reflectively know is better for us in the long term, rather than always and only what’s immediately pleasurable. (Of course, once we’re in the habit of working the relevant muscles, we get more immediate pleasure out of the “difficult” activity too.)  That is, ultimately, a huge part of what it means to grow up, to be an adult: Taking the long view rather than acting on your immediate desires and impulses—but the internal fortitude to do this develops through, and is sustained by, all sorts of external sanctions. Perhaps even more so when it comes to our fictional diets, because it’s hard not to notice that you’re developing a paunch and getting winded climbing stairs after a few years of subsisting on junk food and skipping the gym—whereas the way our thinking and personalities are flattened when they’re starved of nutritious fiction can be hard to notice until you get back in the habit and realize what you’ve been missing.

So I’ll go ahead and say I think we’d probably be worse off in a world completely bereft of this kind of cultural snobbery. It’s hard to resist poking fun at the pretentious undergrad lugging some William Gaddis doorstop to the local café so everyone can see what they’re reading—but I’m not sure I’d prefer a world where grown men and women didn’t feel slightly sheepish about settling in with teen lit day after day instead. This probably isn’t an issue for the sort of wordsmith public intellectuals who felt inclined to comment on Stein’s squib: Of course they’re going to read plenty of adult fiction, and of course they’re right to bristle at anyone who’d sneer at them for throwing something a bit lighter into the mix. But that’s not a given for most adults, and a little nagging voice in the back of the head that says “Hey, you’re a grown-ass man/lady, shouldn’t you challenge yourself a bit?” is probably a net cultural asset.


Political Metastasis

March 30th, 2012 · 27 Comments

Browsing a conservative news site the other day, I was struck by the sheer oddness of that familiar genre of political commentary that treats liberals and conservatives, not just as groups of people with systematic disagreements on policy questions, but as something like distinct subspecies of humanity. The piece that triggered this was something along the lines of “Five Reasons Liberals Are Awful People,” and it had almost nothing to do with any concrete policy question, or ultimately even the broad-brush contours of liberal political thought: It was a string of assertions about broad types of character flaws purportedly shared by liberals, of which their policy views were only a symptom. The same day, I chanced across a piece by Chris Mooney—based on his new book The Republican Brain—making a similar sort of argument from the other side by drawing on recent social science. Then just yesterday, my friend Conor Friedersdorf tweeted a request for good summaries of the liberal view of the right to privacy, and I was again struck by how odd it sounded: Scholars have advanced a whole array of views on the question, and while certainly liberals and conservatives would tend to find different ones more congenial, it seemed like an unhelpful way to map the terrain or illuminate the key points on which various thinkers diverge.

Without denying that political and policy differences are likely to track deeper differences in temperament—differences that shape our preferences and behavior across many domains—it’s worth recalling that the binary nature of our political discourse, featuring two main parties with corresponding ideologies, is a highly contingent feature of our electoral rules. As libertarians never tire of pointing out, there is no particularly compelling philosophical reason that one’s views on abortion, foreign military intervention, environmental regulation, tax policy, and criminal justice should cluster in the particular pattern we find among Republican and Democratic partisans. So we ought to be awfully skeptical about the (growing?) tendency to treat this binary divide as reflecting some essential fact about human nature, or as providing a frame within which to understand all intellectual or cultural life.

Cracking open Will Kymlicka’s excellent Contemporary Political Philosophy: An Introduction, I find he actually makes this point right at the outset: “Our traditional picture of the political landscape views political principles as falling somewhere on a single line, stretching from left to right… [and] it is often thought that the best way to understand or describe someone’s political principles is to try to locate them somewhere on that line.” But of course, as anyone who has taken a course in political philosophy can tell you, that’s not what the main divisions look like at all: The syllabus will not contain a section on “liberal political philosophy” or “conservative political philosophy.” More likely, you’ll see a section on the various flavors of utilitarianism (act vs. rule, aggregate vs. average), maybe Kantian and Lockean rights theories and their progeny, communitarianism, contractualism of at least the Rawlsian variety—with Gauthier and Buchanan thrown in if the professor is feeling ecumenical. Again, you may be slightly more likely to find conservatives or liberals gravitating to one view or another, but thinkers with very different practical political commitments may be quite close at the theoretical level, and vice versa. Friedrich Hayek famously declared himself to be in almost complete agreement with the egalitarian John Rawls when it came to the normative fundamentals.

In legal theory, interpretive schools of thought fit somewhat better into “conservative” and “liberal” compartments, but there are plenty of exceptions: Yale’s Jack Balkin, for instance, is a vocal proponent of progressive originalism. More importantly, while people undoubtedly do sometimes choose an interpretive theory by working backwards from the policy preferences they’d like to justify, this categorization tends to obscure the underlying arguments for each approach, and is in any event highly contingent on the controversies that happen to be politically salient at any given time.

It starts to seem, as Albert Camus once put it, that we’ve made the mind into an armed camp—in which not only politicians and legislative proposals, but moral philosophies, artworks, even scientific theories, have to wear the insignia of one or the other army. This obviously oversimplifies—a taxonomy with two categories is not particularly rich—but also obscures the internal faultlines within each domain in a way that’s guaranteed to undermine our understanding. We’re at the point where people are morally certain about the empirical facts of what happened between Trayvon Martin and George Zimmerman on the basis of their general political worldviews. This isn’t exactly surprising—we are tribal creatures who like master narratives—but it feels as though it’s gotten more pronounced recently, and it’s almost certainly making us all stupider.

Addendum: On a related note, Kevin Drum notes an obvious problem for Chris Mooney’s thesis: Basic temperaments are supposed to be universal, but many of the political phenomena Mooney identifies as functions of those temperaments are pretty unique to American conservatives. Their European counterparts, for instance, don’t tend to exhibit the same hostility to the results of mainstream climate research or evolutionary biology. Even if people with different personality types tend to gravitate toward one local tribe or another, there’s obviously an enormous amount of contextual variation in what that will actually amount to.


Aren’t There Photos of George Zimmerman’s Supposed Injuries?

March 29th, 2012 · 31 Comments

The latest development in the Trayvon Martin case is the leak of police surveillance footage showing a not-conspicuously-injured George Zimmerman being ushered into the Sanford police station on the night of the shooting, calling into question the account that puts Zimmerman on the receiving end of a brutal pummeling that made him fear for his life. Now we’ve got people squabbling over fuzzy tape trying to determine whether some blob on the back of his head is a wound or a shadow, how much he might have been cleaned up by medics at the scene, and on, and on…

This all seems unnecessary. When I was jumped about a year ago, the police who came to the scene took close-up photos of every visible injury—all, mercifully, quite minor—presumably so they could prove battery if they ended up catching the kids. This seems to be pretty standard procedure, and it’s unfathomable that they wouldn’t do the same in a case where those injuries are the main physical evidence backing a claim of self defense in the shooting of an unarmed teenager. I am not intimately familiar with Florida’s records laws, but it would also be pretty standard to have privacy exemptions barring the release of potentially sensitive photographs, such as those showing bodily injuries of identifiable crime victims. But in this case it would seem to be in Zimmerman’s interest to waive that protection if the photographs actually show serious injury.

At the very least, it seems as though someone should ask the obvious question: Did police take close-up photos of whatever injuries Zimmerman sustained on the night of the shooting? If they did not, it would be incredibly suspicious. Assuming there are photos, even if they can’t be released to the public, has a state medical examiner or forensic scientist at least independently reviewed them to see whether they suggest a beating of such severity that a reasonable person would think lethal force was a necessary response? If not, that sounds like an obvious first step that might go a ways toward clarifying what really happened.


Trayvon Martin and the Moral Clarity Hypothesis

March 27th, 2012 · 16 Comments

Sanford police are pushing back in the face of public criticism, saying that witnesses have corroborated George Zimmerman’s account of his fatal encounter with Trayvon Martin. Given how many salient facts about the case seem to have been missed in the initial investigation—Zimmerman’s history of arrests for violence, the failure to test the admitted shooter for drugs or alcohol at the scene, the account given by Martin’s girlfriend of their cell conversation during the minutes leading up to the confrontation—that’s no reason to back off calls for a more thorough independent investigation. But it does reinforce my worry that when facts are incomplete, we tend to gravitate toward (and even insist upon) the least ambiguous narrative template available, ideally featuring one completely reprehensible villain and one completely innocent victim.

The most obviously repellent version of this has come in the form of attempts to shoehorn Martin—you know, the unarmed dead teenager—into the crude stereotype of a young thug. Any admission that maybe young black men are routinely subject to unfair “profiling,” that maybe racism exists as something other than a charge to unfairly hurl at white conservatives, would be a victory for “the left”—so it cannot be allowed! Hence we’re treated to reports that Martin used a lot of vulgarity on Twitter or maybe smoked pot, as though these were capital crimes, or even slightly unusual activities for a 17-year-old.

Much more understandably, considering who was shot dead in this encounter, there seems to be a tacit supposition that if Zimmerman was a racial-profiling jerk who mistook himself for Batman, and if he approached Martin (against the advice of a 911 dispatcher) to question him about his “suspicious” presence in the neighborhood for no better reason than his age and race, then it must also be Zimmerman who turned the confrontation violent. And maybe it was. But it also seems entirely possible that Martin—whether from fear of assault or anger and frustration at being treated like a criminal just for Walking While Black—really did strike first, and a panicked Zimmerman (head injury perhaps clouding his already poor judgment) genuinely thought he was defending himself when he fired.

That wouldn’t in any way mean that Martin “deserved” to get shot, or that Zimmerman wasn’t also seriously in the wrong, or that the use of lethal force was a justifiable form of self-defense against an unarmed assailant under the circumstances. It would just mean the situation was complicated, and that it’s possible for both parties to have been partly in the wrong even if one person was more culpable—or even a generally worse human being. But a whole lot of people seem impossibly confident that things must have played out one way or the other, as though it were a matter of moral principle rather than (possibly unknowable) fact.

A commenter on an earlier post suggested that one factor in people’s varied reactions here may be what psychologists call the Just World Hypothesis. Briefly: People want to believe the world is basically fair, and that good people don’t suffer for no reason, so they strive mightily to rationalize that suffering as somehow earned or justified. That (along with a generous helping of plain old racism) may work for the folks who are at such pains to lay all the blame on the slain Martin. But a slightly different story might account for the general impulse to insist that, wherever the blame falls, it falls wholly—all or nothing.

Probably there’s already some other name for the phenomenon I’m about to describe, but I’ll call it the Moral Clarity Hypothesis. This allows that the world is not always perfectly just, but still maintains a strict moral order by insisting on perfect injustice as the sole alternative. Life may not be fair, but at least it isn’t arbitrary. The moral ledger is always balanced: For every good person that suffers, some bad person is culpable in direct proportion to that suffering; for every unjustified harm, there is a corresponding wrong. The result is a kind of compensatory absolution—a way of sparing the unfairly injured the added insult of ascribing responsibility, however small a share. You see something like this, I think, across a number of domains: Either the poor deserve their lot because they’re feckless and lazy or it’s an injustice inflicted by malign plutocrats.

As a corollary, every story has exactly one moral: If the most important lesson to learn from the killing of Trayvon Martin is that racial stereotyping—by citizens and perhaps also police—remains a pervasive problem with catastrophic consequences for young black men, then anything that complicates that picture is a rationalization, a distraction, an attempt to make excuses and blame the victim. Or: If the picture actually is more complicated in any way, we can breathe easy knowing that racism was abolished in 1964, there’s no need to question a criminal justice system that incarcerates black men at rates that would make Stalin blush, and any indications to the contrary can be put down to a leftist plot to score political points.

Maybe it turns out that this case really is that simple: That Zimmerman launched an unprovoked assault on Martin, and bears complete, unqualified responsibility for all that ensued. Even at this late date, I hope a more thorough and independent investigation can provide everyone concerned with more certainty about what actually happened. But the facts of this particular case aren’t stakes in some allegorical battle. They won’t confirm or refute any overarching point about Race in America—only the details of one horrifying night in February in one Florida town.  Which means we don’t have to insist that reality become a cartoon to validate our moral commitments.


And May the Demographic Odds Be Ever in Your Favor. Or Not.

March 26th, 2012 · 17 Comments

Over the weekend, a depressing number of supposed Hunger Games fans expressed attitudes ranging from surprise to undisguised racist hostility at the discovery that black actors had been cast to play the characters Rue and Thresh in the movie. As more attentive fans were quick to point out, these reactions were not only ugly but obtuse: The characters are pretty clearly described in the book as having dark brown skin, and it’s strongly intimated that the agrarian District 11 from which they hail overlaps with the contemporary American South.  (True, the author doesn’t describe them as “African American” because… what’s an American? We’re in Panem, remember?)

The book doesn’t dwell on this, though, and a reader skimming along at a fast clip could be forgiven for missing the two quick references. The deeper stupidity here is the assumption that the default race of any character is Caucasian when it’s not stated explicitly, and that casting a person of color in this case would represent some kind of deviation from the book’s implicit characterization. This would be wrongheaded for an adaptation of a book set in the present, but at least quasi-understandable:  The social realities of people of color in contemporary America are different in a variety of ways, enough so that we do generally expect authors to make at least passing reference to a major character’s minority status.

It makes no sense at all, however, in a dystopian sci-fi novel (implicitly) set two or three centuries in the future. First, we have no real idea what the racial dynamics of Panem are like, so there’s no particular reason to think Suzanne Collins would need to make note of it if Katniss were of (say) Korean or Chicana descent. Second, and maybe more to the point, non-Hispanic whites are already projected to constitute less than half of the U.S. population in 2050, long before the earliest possible date for the events of the book. (Incidentally, reactions to the unfolding Trayvon Martin story reveal a surprising number of Americans struggling with the notion that the adjectives “white” and “Hispanic” might apply to the very same person. Kindly refer to the photo atop this blog if you’re among them.) Unless the war—and possibly other apocalyptic events—that precede the events of the book had some kind of wildly skewed demographic effects, you’d think our default expectation would be that a randomly chosen character of unspecified race won’t match your basic Anglo phenotype. If anything, then, the filmmakers probably should have gone a good deal heavier on the non-white actors—though I shudder to think how vile the Twitter reaction would have been if they had.


Undercover Atheists?

March 26th, 2012 · 17 Comments

Writing at The American Prospect a few weeks back, Patrick Caldwell expressed puzzlement at the view, seemingly widespread on the right, that the hegemonic forces of secularism are somehow forcing believers out of the public square:

When I first read Santorum’s comments though, I was mostly struck by how off base his statement is from the actual reality of our political class. People who lack a specific faith are the ones typically closed out from government service. Out of 538 members of Congress, California Rep. Pete Stark is the only self-avowed atheist. For as much as Republicans opine about the secularist goals of Obama’s presidency, he has stocked his cabinet with Catholics and other gentiles. The highest court of the land has six Catholics and three Jews.

A Gallup poll last December had 15 percent of Americans list their religious preference as none, atheist, or agnostic, though another Gallup poll from earlier in the year found that 7 percent claim to have no belief in God. By either measure, Americans lacking allegiance to an organized religion are vastly underrepresented among public officials.

It’s worth saying, first, that Santorum’s complaint is not so much about religious people being somehow hounded from public office, but about the secularism of mainstream political discourse. Which is just to say that we generally expect political actors in a pluralistic country to offer justifications for their preferred policies that do not hinge on one’s sharing a particular interpretation of a particular sacred text. Santorum thinks it should be perfectly sufficient to say: “It should be illegal because the Bible condemns it,” and he’s irritated that even believers mostly feel obligated to focus on religiously neutral “public reasons” that could be accepted by people who don’t acknowledge the authority of (that reading of) the Christian Bible. He’s not empirically wrong about this (and a good thing!); he just has a repugnant, medieval vision of how things ought to be.

That aside, though, I suspect “self-avowed” is a key qualifier in the passage quoted above. Whatever they check off on their census forms, the political class in D.C. have always struck me as pretty secular. Maybe they’re just quiet about their faith—praying quietly in private, regularly attending worship services on the weekend without making much fuss about it. And I certainly wouldn’t claim that people I happen to know socially are anything like a representative sample of “the D.C. political class.” Still, if you asked me to guess what percentage of the under-40 political professionals in this town—hill staffers, pundits, journalists, wonks, and activists—are agnostic or atheist in their private beliefs, I’d hazard a number much higher than 15 percent. If you expand that definition to encompass what I’d call “operational atheists”—people who might tell a pollster they’re whatever faith they grew up in, and might “believe” in some vague abstract sense, but whose nominal religion plays no discernible role in their thinking or everyday life—you’re probably well over 50 percent.

Of course, there are obvious reasons for Congress to be unrepresentative.  Given the widespread popular prejudice against atheists, they’re probably disproportionately likely to self-select into think tanks and magazines and various other supporting roles. And I wouldn’t be surprised if some smart, ambitious young people with political aspirations either consciously or subconsciously made a pragmatic decision, maybe at some point in college, that there was no real benefit in subjecting this particular corner of their belief systems to special scrutiny. Most of us, after all, hold thousands of beliefs with little better warrant than “I’m pretty sure I read that somewhere”—so it would be easy enough for a would-be politico to conclude there’s no sense rocking this particular epistemic boat.

But it’s still very, very hard for me to believe that there’s really only one atheist in the United States Congress. Not everyone who concludes, in an hour of quiet reflection, that religious doctrines are probably false feels compelled to shout it from the rooftops as loudly as Christopher Hitchens or Richard Dawkins. Lots of them are even perfectly happy to go through the motions at appropriate occasions, for the sake of family (presumably not everyone who converts at marriage has a genuine theological epiphany) or because they enjoy the sense of community, or even just because the ceremonial trappings have grown familiar and comfortable.  People fake it—so routinely that a Google search for “coming out atheist” brings up an incredible deluge of stories and discussions about people making the decision to leave the closet after years of going along to get along… or not. YouTube is packed with similar testimonials. Historically, even intellectuals felt obliged to play along: David Hume (to pick one famous example from a rich pool) halfheartedly professed to be persuaded by the “Argument from Design”—then gave all the most devastating arguments in his “Dialogues Concerning Natural Religion” to the skeptic who demolishes that argument.  It strains credulity to think there aren’t at least a few—and maybe more than a few—comparable closet cases in a profession where success depends on convincing this cycle’s electorate that you’re deeply committed to whatever it is they believe… even if it’s the opposite of what the last electorate believed.

It’s something of a cliché at this point to talk about the “paranoid style” of conservative politics—and the seeming migration of that paranoia from the fringe to the mainstream. But maybe in part it has roots in a perfectly common real-life experience that must, to believers, seem a bit like something out of Invasion of the Body Snatchers: The bright young child everyone was so proud to ship off to a prestigious university comes back over break subtly different somehow… dutifully says grace at supper, but seems (for reasons you can’t quite nail down—maybe just that hint of a smirk?) to be humoring the ritual. For Americans who (mistakenly) take faith to be a sort of minimum prerequisite for moral conduct, this has to seem like the ultimate form of deception: Lying about even the general possibility of being honest. What had been understood as a kind of polite dissimulation—yes, of course your newborn is the most beautiful baby in the history of babies—starts to look downright insidious.

Previously, faith could more or less be taken for granted—maybe the candidate makes a passing reference to the church they regularly attend—and that’s all there is to it, really, because of course everyone’s a believer of one stripe or another. Increasingly, it’s clear that isn’t so—that there are actually quite a lot of unbelievers, many of them effectively operating in stealth mode. This was probably always the case, but outside the academy and a few urban enclaves, nobody was terribly vocal about it—you certainly didn’t have anything like a visible public “movement.” Suddenly, if you’re someone who thinks of faith as a minimal prerequisite for decency, what was previously tacitly understood has to be signaled with extra vigor.

A comparison with gay rights may be apt here: Go back a few decades and the idea is so marginal that nobody really thinks of it as a political issue. (Note that in some of the most virulently homophobic societies, you also see a lot more physical affection between men than would be normal in the U.S., possibly because it’s so beyond the pale that nobody worries about sending the wrong signals.)  Roll forward another decade or two and it’ll be so normalized that nobody can quite understand why there was ever a fuss about it: Every city has plenty of nice gay families, and everyone can see they’re not fundamentally different from the nice straight family next door. You get “culture wars” in the middle: When a phenomenon is prevalent enough to seem threatening, but not yet (visibly) prevalent enough that it becomes obvious it’s not actually a threat at all.

I’ve always found the more aggressive, proselytizing sort of atheism a bit distasteful: Do we really need a People-who-don’t-play-chess club or a non-basketball-team? As a writer or pundit or whatever I am, it’s no surprise that I’ll occasionally bring up this aspect of my worldview, but since most of us don’t think our fellow citizens have souls that need saving, shouldn’t the modal atheist just go on quietly not-believing, and hope polite circumspection on these issues catches on? Maybe, though, there’s a case for being a little more vocal—for coming out secular—at this particular historical moment, in the interest of hastening the journey across the valley between invisibility and normalcy.
