July 16, 2004
Defending the Rings
I guess I shouldn't be surprised that after 8 months this post is still attracting comments.
Fellini Twilight
Michael Blowhard posted this little observation a few days ago on his blog:
Odd to think that many young film fans have no idea who Federico Fellini was. For decades, Fellini was a huge and iconic presence in the filmmaking and filmgoing worlds, the very incarnation of the film director as star. He made some good movies too.
He then provides a link to an article about Fellini, in order to “help a few film fans fill in some of their film-history blanks.”
Now, I’d question Michael’s observation just based on this piece of anecdotal evidence: all my young film buff friends and I know exactly who Fellini was. And so did just about every film buff I met as an undergraduate and as a grad student. Most of us had even seen some of his movies.
Maybe Michael is talking about really young film fans—teenagers, people in their early twenties, etc. In this case, not knowing about Fellini would be normal, not odd. After all, a young film fan in the 1970s would have known who Fellini was, but she might not have known who Jean Renoir or Josef von Sternberg were.
Or, more likely, Michael’s “film fan” is much more casual about his movie watching than my “film buff.” It’s not at all surprising that folks who’re really into, say, American Indie movies or Hong Kong action movies or British period pictures would have no idea who Fellini was, but, then, they’re not likely to know who Howard Hawks, Nicholas Ray, Jean Renoir, or Ernst Lubitsch were either.
However, I do think Michael is onto something: the film buffs of my generation who know who Fellini was really don’t seem to give a fig about him or his work. On the one hand, this is unfortunate, because in the 1950s he made one great movie after another. On the other hand, it’s understandable, because his reputation—his iconic presence—was built around the movies he made from 1960 on, most of which, unless you (a) grew up with them or (b) are looking for examples of 1960s-style excess, are now unwatchable.
Fellini was important to filmgoers of the 1960s and 1970s, not because of the individual films he made, but because of what he meant and how he acted out—in real life and in his movies—his struggles to create cinematic art. Take away Fellini’s “iconic presence” and 8½ isn’t much more than a confused-looking Marcello Mastroianni.
Fellini’s self-dramatization was important to the filmgoing scene. However, most of the time, a director gets into the “film history canon” not because of his iconic presence but because of the movies he made. And the kind of movies Fellini is most famous for don’t resonate as much with the film buffs of today. Apart from Woody Allen (one of Fellini’s most devoted followers), there’s no contemporary filmmaker who’s into the whole making-art-that-expresses-the-struggle-of-making-art thing. In fact, that stuff (at least in movieland) is, thankfully, kind of passé.
Canons change. Moreover, an era’s canon isn’t set by the cognoscenti of that era but by the cognoscenti of the following generations. Film buffs of the 1960s might find it odd that Fellini’s 8½ isn’t as well thought of by today’s film buffs as Jean-Pierre Melville’s Le Cercle rouge.
Over time, I wouldn’t be surprised to find Fellini’s early films seeping into the collective consciousness of film buffs, but Fellini’s importance as an iconic presence is gone forever.
July 14, 2004
Performance Anxieties
For a while now, I’ve wanted to write a post about why the conventional wisdom about what makes good acting—especially in movies—is usually wrong. As per usual, I procrastinated, but two seemingly unrelated pieces have inspired me to put down my thoughts on the subject. The first piece was Terry Teachout’s attempt to take Marlon Brando down a peg or two. The second was Johnny Bacardi’s account of the acting in Spider-Man 2, which seems to jibe with what most people who have seen the movie are saying: Alfred Molina is great, Tobey Maguire is good, and James Franco is, at best, bearable and, at worst, terrible. (FYI: I’ll save my specific comments on Brando and Spider-Man 2 for later posts.)
Many smart people spend a good deal of time thinking and writing about movies, but hardly anyone pays much attention to the acting. Whether writing popular movie reviews or academic film criticism, most writers will gladly deal with the specifics of a movie’s plot, theme, script, and technique, but they’ll describe the acting only in superficial generalities (e.g., “Tom Hanks was very believable” or “Keanu Reeves was wooden” or “Russell Crowe got to the heart of the character”).
I’m tempted to explain this by saying that writing about the specifics of a performance is simply harder than writing about the specifics of a screenplay. But I don’t think that’s precisely true. It is harder for most writers to write about acting, but not because of any inherent difference in difficulty. Rather, writers, being writers, find it easier to treat a movie as a piece of writing. They find it easier to write about the literary qualities that movies share with other kinds of writing (plot, theme, dialogue, etc.) than about the qualities movies share with the interpretative/performing arts.
I’d argue that only a writer could have come up with something like the auteur theory, because only a writer would look at a movie as the realization of a single person’s vision. A casual moviegoer would more likely, and more rightly, see a movie as the result of collaboration between the actors and some unseen technicians.
To offer some completely anecdotal evidence: not once in my career as a cinema studies grad student did I encounter (1) a piece of writing about acting or (2) someone who was working on a piece of writing about acting or (3) someone who was interested in writing about acting. I did come across a lot of people writing about star personae, but those writers weren’t interested in acting per se but rather in how a star’s image affected the reading of a movie.
However, it isn’t only writers, critics, and academics who have a blind spot when it comes to acting. Most people seem to judge performances not on the acting but on what I like to call the ahc-ting. In one of my favorite blog posts, Michael Blowhard decries what he calls “writin’”: writing that, word by word and sentence by sentence, calls attention to its own importance and literary greatness. Following Michael, I’d like to use the word “ahc-ting” to describe the kind of acting in which every line reading and gesture is meant to call attention to the meaning and importance of the performance. Ahc-ting doesn’t necessarily require scenery chewing or going over the top. It can and often does involve both, but some of the most egregious ahc-ting is very subtle. Unfortunately, it seems that most people now judge good acting by the standards of ahc-ting. Ahc-ting is supposed to impress people, and much of it is impressive. However, even when I’ve been impressed, I haven’t much liked it. Here are a few suggestions that might help save us all from ahc-ting:
Actors should not show how hard they are working.
Every performance requires a certain amount of effort. Some require a great deal. However, I never really enjoy a performance when an actor doesn’t let me forget how much effort it’s taking.
Method actors—even very good ones—will often start showing off their homework when they have nothing else to do. John Cassavetes’s movies are filled with very good actors who, because they’re not given any specific direction and they’re not trying to play specific characters, end up flailing around, wrestling with their inner demons for no greater purpose than to show the audience how much effort they’re putting into the whole affair.
Movie stars will often make this mistake. For example, Tom Cruise is always taking roles—Ron Kovic in Born on the Fourth of July or Frank T.J. Mackey in Magnolia—where he has to strain and sweat and strut in order to show that he’s a “real” ahc-tor and not just a pretty face.
Meryl Streep does this too, albeit with more class and skill than most. Streep’s performances are all very technically accomplished, but there’s usually not much more you can say about them. It’s not so much a case of her showing off how hard she’s working as of her having nothing to show the audience except her preparation—she approaches a role as if it were an exercise in an acting class. I usually don’t mind watching Streep because, unlike Tom Cruise, for example, she is talented and she does know a lot about the nuts and bolts of acting. However, most of her performances aren’t that enjoyable.
99% of the time actors should just be themselves.
For some actors, mimicry can be a very effective technique (see Sir Laurence Olivier and Daniel Day-Lewis), but mimicry itself isn’t acting. However, many people seem to believe that mimicry is at the heart of acting—that actors must try to “be” someone else and that acting is a kind of self-transformation. This is an unfortunate delusion. Acting is about playing a part, which is accomplished not by a combination of physical and emotional contortions, but by saying and doing and pretending to want the same things a character is supposed to say and do and want.
In general, actors should avoid mimicry for two major reasons:
1) Most actors simply aren’t good at it. There’s nothing less compelling than someone expending a lot of effort trying to “transform” themselves into someone else when it is painfully obvious that no such transformation is possible. I like Russell Crowe in movies like L.A. Confidential and Master and Commander, but his attempt to portray John Nash in A Beautiful Mind is completely preposterous. Crowe’s performance consists entirely of flexing his fingers, stuttering, and giving off an “uncomfortable” vibe.

2) Even actors who are technically proficient mimics have a tendency to turn mimicry into a stunt. Dustin Hoffman’s performance in Rain Man is a perfect example. Hoffman does an excellent job of impersonating an autistic person, but the performance would have been just as effective if it were 5 minutes long. Jim Carrey nails the Andy Kaufman routines in Man on the Moon, but, again, so what?
The least enjoyable kind of mimicry occurs when the impersonation itself is the entire point of the performance. In these cases the audience is meant to look at the characters and say “I can’t believe that’s really so-and-so.” Once that’s been said, really, there’s usually nothing more to say about these kinds of performances. A recent example is Charlize Theron’s performance in Monster. Though Theron is a competent actor, the big draw of watching Monster wasn’t her performance, per se, but that you couldn’t believe it was her under all that make-up.
There are few things I find more annoying than listening to someone criticize an actor because he (or she) always plays himself (or herself). As far as I am concerned “playing yourself” is neither good nor bad. What matters is how interesting and enjoyable the performance is, not its novelty. Moreover, it makes sense to cast actors in roles that fit them. In other words, it makes sense to cast actors within their range, and we shouldn’t care how great an actor’s range is if he (or she) is always good within it.
The “playing himself” criticism is also bogus because many actors manage to play many different characters while always “playing themselves.” James Stewart, for example, is perfectly believable as an uptight, oversensitive urban clerk (The Shop Around the Corner), a small-town man at the end of his rope (It’s a Wonderful Life), an aw-shucks farm boy in the big city (Mr. Smith Goes to Washington), an ornery cattle-driving outlaw (The Far Country), and an obsessive whack-job (Vertigo, but also, with more subtlety, The Man Who Knew Too Much). Yet, though all these characters are certainly different, in each performance Stewart is just as certainly “himself.” Or, rather, Stewart manages to play a wide variety of characters by emphasizing different aspects of himself as appropriate.
Rather than thinking of good actors as people who can convincingly “be someone else,” I’d argue that good actors are people who can convincingly find themselves in the roles they play.
It’s just a method.
Now, I don’t want to dump on method acting alone. My beef is with any kind of “one-true-way” acting criticism. For example, my reply to Olivier’s anti-method quip to Dustin Hoffman, who was going a little overboard in his method-y preparations for Marathon Man—“Why don’t you try acting?”—would be to suggest that what Hoffman was doing was part of the acting process; it just wasn’t the same process Olivier had been taught. (As the proof of the pudding is in the eating, it is perhaps useful to point out that neither man’s preparations did him much good in Marathon Man: Olivier comes across as a constipated B-movie Nazi and Hoffman is a passive, uninteresting slug. The movie would have been better with Christopher Lee and Richard Dreyfuss, but that might have thwarted John Schlesinger’s plan of making a thoroughly miserable film.)
Unfortunately, when most people think of “good acting” they are thinking of only one or two things: a serious Method performance (Sean Penn in Mystic River) and/or a serious British Classical performance (Sir Anthony Hopkins in Silence of the Lambs). These two styles use different techniques, but they share the same goal: naturalistic performance. Truth be told, both styles do a pretty good job of achieving naturalism on a regular basis, but that still leaves the question of whether “naturalism” itself should be the goal of a performance.
Of course, I don’t think it should be. Nowadays, audiences and critics tend to look down on overt theatricality, which is a real shame, as many of the greatest actors of the first half of the 20th century gave theatrical-style performances: Orson Welles, Katharine Hepburn, Humphrey Bogart (in The Treasure of the Sierra Madre, for example), James Cagney, Ida Lupino, etc. Gary Oldman is one of the few genuine hams who gets any respect from contemporary critics.
Almost everything is forgivable if (a) the actor is having fun and (b) the audience is having fun.
A lot of people complain about actors who “chew the scenery.” Now, this can be a legitimate complaint, but a lot of the time I find over-the-top hamming extremely enjoyable. However, this is dependent on tone and context. For example, I got a real kick out of Al Pacino’s shenanigans in The Devil’s Advocate, where he gets so angry he catches on fire, but I couldn’t stand them in Scent of a Woman, where we’re meant to take them seriously. Likewise, I thought that Sir Anthony Hopkins was a hoot in Bram Stoker’s Dracula, but that his scenery chewing in The Silence of the Lambs was preposterous. His Hannibal Lecter was far too actor-y to fit into a movie that is trying to be a realistic psychological thriller. (Compare Hopkins’s performance in Lambs to Brian Cox’s understated, but much creepier, take on Lecter in Manhunter.)
And I could go on and on…
I guess I’ll stop here for now, but I could certainly add a few more, like: line reading isn’t everything (too many people overlook the physical side of a performance), beautiful people can be fine actors (and they shouldn’t need to make themselves ugly to prove it), range isn’t everything (the Christopher Walken Rule)…
Though this post was basically an excuse to air some pet peeves, I hope that it will inspire you to question the conventional wisdom about good acting and be more willing to stand up for performances that you actually find enjoyable—not just the ones you’re supposed to find impressive.
One last thing: attentive readers might have noted that though I complained about writers who don't write about the specifics of acting, I used generalities and made vague assertions throughout this post. Although I intend to get around to some pro-active writing about acting soon, until then you should check out this book, which features some of the best writing on acting I've ever read.
June 29, 2004
The Intentional Fallacy Fallacy
Most people even vaguely familiar with arts criticism have come across the concept of “the intentional fallacy.” It’s a pretty reasonable notion: just because an artist intended a work of art to mean something doesn’t necessarily make it so. This makes sense for a number of reasons: intentions in art, as in every other human endeavor, often go awry; an artist, like the rest of us, might not be able to give (or might not want to give) a completely truthful account of her intentions; art-making ability isn’t the same thing as art-appreciation ability, and neither is the same as the faculty for articulating art-appreciation. Because of these and other, more technically philosophical, factors, the acceptance of the intentional fallacy among critics hasn’t been very controversial.
However, certain critics have a tendency to take the intentional fallacy to ridiculous extremes. Not satisfied with its straightforward meaning—that an artist’s take on her own work doesn’t trump everyone else’s take on the work—some people think we should throw out notions of intentionality altogether when dealing with a work of art. This kind of critic thinks we should look at a work of art entirely on its own, without any reference to the person or persons who brought it into being.
Critics who go to these extremes have fallen into the intentional fallacy fallacy: the mistaken belief that it’s even possible to deal with a work of art without any reference to the intentions of its creator(s). These critics often like to talk about the notion of “authorship” as a necessary fiction. However, it is their notion of “the death of the author” that is actually fictional. (Perhaps a useful fiction, especially for critics trying to carve out some interpretative space for themselves, but a fiction nonetheless.)
First, all works of art must be intentional, and critics have to accept them as intentional before any criticism can take place. There’s a difference in the way we would talk about a sculpture—formed by human hands—and a rock formation that looks the way it does because of the weather. At the very least, we know that we will be able to find some human meaning in the sculpture, while a close reading of the rock formation will only yield geological and meteorological information.
Second, the choices the artist makes—choices that are intended by the artist—directly affect the way we engage with and respond to the work of art. It would be impossible for anyone to completely ignore these choices and still be able to talk about the work.
For example, the first thing an attentive reader will notice on picking up Frederick Turner’s book Genesis is that Turner decided to tell his sci-fi story about the colonization of Mars as an epic poem in iambic pentameter. Now, the meaning a reader sees in this choice might not be the same as the one Turner intended (to rekindle the epic, heroic tradition that has faded in the modern age, etc.), but the reader must acknowledge (1) that Turner did make an intentional choice and (2) that the choice has some meaning.
(Of course, you might ask about the unintentional stuff that ends up in a finished work of art. I’d answer that the only way that stuff gets there in the first place is because of intentional choices that the artist had already made. The intentional choices must come first in order for the work of art to contain unintentional parts. For example, Howard Hawks, Leigh Brackett, and Jules Furthman might not have intended that Rio Bravo be a celebration of grass roots American democracy, but they did intend to make a western that showed off all their actors’ talents and gave each of the characters an equally important part in the story.)
Critics have embraced the intentional fallacy fallacy for the same reason they have shunned value judgments. Looking at a work honestly—that is, looking at the artist’s choices behind her art—opens up the possibility that the choices the artist makes can be good ones or bad ones. They can be the right choice or the wrong choice, better or worse than any number of other possible choices. For art therapists and grade school art teachers, there are no wrong choices. An analogy is in order: if I have no particular place to go, there’s no such thing as a wrong path. However, as soon as I have a destination, there is such a thing as a wrong path. (This is not to say that there is only one right path—I can drive to the Berkshires on the highway or I can take the scenic route.) Another analogy: if I’m playing a game with no rules, there’s no way to tell whether I’m playing it better or worse than anyone else. However, once I start playing a game with rules I will quickly discover that there are better and worse ways of playing (I did so last night when my girlfriend introduced me to Mancala).
Now, there are people who might say—jumbling my analogies together—that art is an activity that has no destination and follows no rules. Statements along those lines are total B.S. Art-making’s rules, which implicitly contain its goal (its destination), are what separate an organized activity like art-making from, say, just sitting around doing nothing (which, incidentally, is how most self-styled artists spend their time). However, just so you know that I’m not some kind of art-authoritarian, and in order to return to the topic of this post: one of the most important (intentional) choices the artist makes is the set of rules (with or without variations) she’ll use.
This is probably easiest to see when it comes to writing verse. The would-be poet chooses the form, meter, rhyme scheme, etc. for his poem. He sets up his own limits in order to better express himself. This is yet another example of one of the earliest discoveries about human nature: meaningful freedom requires limitations, and things only acquire meaning within a system of limitations.
(This leads to the related issue of “meta-criticism”: criticism that deals with these rules themselves. After all, some sets of rules just aren’t very good. However, I'll save that discussion for later.)
When dealing with art, I think the best term for these systems of limitations is genre, taken in its broadest (non-pejorative) sense. (Zane Grey’s Riders of the Purple Sage and Larry McMurtry’s Sin Killer are both novels of the western genre; Philip Roth’s The Anatomy Lesson and Larry McMurtry’s Duane’s Depressed are novels of the male-midlife-crisis genre.) The artist’s choice of genre—her intention to work in one genre (representational sculpture) instead of another (abstract sculpture)—is one of the first things a critic, or, indeed, anyone trying to engage with the work, needs to acknowledge.
One final warning: though the intentional fallacy rightly points out that an artist’s interpretation doesn’t trump anyone else’s, only someone who holds the intentional fallacy fallacy would make the mistake of discounting, out of hand, the artist’s take on her own work. For example, Jean-Luc Godard and Robert Altman are notorious B.S.-artists, and nothing they say about their own movies should be taken (a) seriously or (b) at face value. However, Jean Renoir is quite articulate about his own work, and while his interpretation of The Rules of the Game may not be the definitive one, it’s still very compelling.