Chuck Klosterman on Film and Television
A Collection of Previously Published Essays
Scribner
New York London Toronto Sydney
SCRIBNER
A Division of Simon & Schuster, Inc.
1230 Avenue of the Americas
New York, NY 10020
www.SimonandSchuster.com
Essays in this work were previously published in Sex, Drugs, and Cocoa Puffs copyright © 2003, 2004 by Chuck Klosterman, Chuck Klosterman IV copyright © 2006, 2007 by Chuck Klosterman, and Eating the Dinosaur copyright © 2009 by Chuck Klosterman.
All rights reserved, including the right to reproduce this book or portions thereof in any form whatsoever. For information address Scribner Subsidiary Rights Department, 1230 Avenue of the Americas, New York, NY 10020.
First Scribner ebook edition September 2010
SCRIBNER and design are registered trademarks of The Gale Group, Inc., used under license by Simon & Schuster, Inc., the publisher of this work.
For information about special discounts for bulk purchases, please contact Simon & Schuster Special Sales at 1-866-506-1949 or [email protected].
The Simon & Schuster Speakers Bureau can bring authors to your live event. For more information or to book an event contact the Simon & Schuster Speakers Bureau at 1-866-248-3049 or visit our website at www.simonspeakers.com.
Manufactured in the United States of America
ISBN 978-1-4516-2478-6
Portions of this work originally appeared in Esquire and on SPIN.com.
Contents
From Sex, Drugs, and Cocoa Puffs
The Awe-Inspiring Beauty of Tom Cruise’s Shattered, Troll-like Face
From Chuck Klosterman IV
From Eating the Dinosaur
This Is Emo
No woman will ever satisfy me. I know that now, and I would never try to deny it. But this is actually okay, because I will never satisfy a woman, either.
Should I be writing such thoughts? Perhaps not. Perhaps it’s a bad idea. I can definitely foresee a scenario where that first paragraph could come back to haunt me, especially if I somehow became marginally famous. If I become marginally famous, I will undoubtedly be interviewed by someone in the media,1 and the interviewer will inevitably ask, “Fifteen years ago, you wrote that no woman could ever satisfy you. Now that you’ve been married for almost five years, are those words still true?” And I will have to say, “Oh, God no. Those were the words of an entirely different person—a person whom I can’t even relate to anymore. Honestly, I can’t imagine an existence without _____. She satisfies me in ways that I never even considered. She saved my life, really.”
Now, I will be lying. I won’t really feel that way. But I’ll certainly say those words, and I’ll deliver them with the utmost sincerity, even though those sentiments will not be there. So then the interviewer will undoubtedly quote lines from this particular paragraph, thereby reminding me that I swore I would publicly deny my true feelings, and I’ll chuckle and say, “Come on, Mr. Rose. That was a literary device. You know I never really believed that.”
But here’s the thing: I do believe that. It’s the truth now, and it will be in the future. And while I’m not exactly happy about that truth, it doesn’t make me sad, either. I know it’s not my fault.
It’s no one’s fault, really. Or maybe it’s everyone’s fault. It should be everyone’s fault, because it’s everyone’s problem. Well, okay… not everyone. Not boring people, and not the profoundly retarded. But whenever I meet dynamic, nonretarded Americans, I notice that they all seem to share a single unifying characteristic: the inability to experience the kind of mind-blowing, transcendent romantic relationship they perceive to be a normal part of living. And someone needs to take the fall for this. So instead of blaming no one for this (which is kind of cowardly) or blaming everyone (which is kind of meaningless), I’m going to blame John Cusack.
I once loved a girl who almost loved me, but not as much as she loved John Cusack. Under certain circumstances, this would have been fine; Cusack is relatively good-looking, he seems like a pretty cool guy (he likes the Clash and the Who, at least), and he undoubtedly has millions of bones in the bank. If Cusack and I were competing for the same woman, I could easily accept losing. However, I don’t really feel like John and I were “competing” for the girl I’m referring to, inasmuch as her relationship to Cusack was confined to watching him as a two-dimensional projection, pretending to be characters who don’t actually exist. Now, there was a time when I would have thought that detachment would have given me a huge advantage over Johnny C., inasmuch as my relationship with this woman included things like “talking on the phone” and “nuzzling under umbrellas” and “eating pancakes.” However, I have come to realize that I perceived this competition completely backward; it was definitely an unfair battle, but not in my favor. It was unfair in Cusack’s favor. I never had a chance.
It appears that countless women born between the years of 1965 and 1978 are in love with John Cusack. I cannot fathom how he isn’t the number-one box-office star in America, because every straight girl I know would sell her soul to share a milkshake with that motherfucker. For upwardly mobile women in their twenties and thirties, John Cusack is the neo-Elvis. But here’s what none of these upwardly mobile women seem to realize: They don’t love John Cusack. They love Lloyd Dobler. When they see Mr. Cusack, they are still seeing the optimistic, charmingly loquacious teenager he played in Say Anything, a movie that came out more than a decade ago. That’s the guy they think he is; when Cusack played Eddie Thomas in America’s Sweethearts or the sensitive hit man in Grosse Pointe Blank, all his female fans knew he was only acting… but they assume when the camera stopped rolling, he went back to his genuine self… which was someone like Lloyd Dobler… which was, in fact, someone who is Lloyd Dobler, and someone who continues to have a storybook romance with Diane Court (or with Ione Skye, depending on how you look at it). And these upwardly mobile women are not alone. We all convince ourselves of things like this—not necessarily about Say Anything, but about any fictionalized portrayals of romance that happen to hit us in the right place, at the right time. This is why I will never be completely satisfied by a woman, and this is why the kind of woman I tend to find attractive will never be satisfied by me. We will both measure our relationship against the prospect of fake love.
Fake love is a very powerful thing. That girl who adored John Cusack once had the opportunity to spend a weekend with me in New York at the Waldorf-Astoria, but she elected to fly to Portland instead to see the first U.S. appearance by Coldplay, a British pop group whose success derives from their ability to write melodramatic alt-rock songs about fake love. It does not matter that Coldplay is absolutely the shittiest fucking band I’ve ever heard in my entire fucking life, or that they sound like a mediocre photocopy of Travis (who sound like a mediocre photocopy of Radiohead), or that their greatest fucking artistic achievement is a video where their blandly attractive frontman walks on a beach on a cloudy fucking afternoon. None of that matters. What matters is that Coldplay manufactures fake love as frenetically as the Ford fucking Motor Company manufactures Mustangs, and that’s all this woman heard. “For you I bleed myself dry,” sang their blockhead vocalist, brilliantly informing us that stars in the sky are, in fact, yellow. How am I going to compete with that shit? That sleepy-eyed bozo isn’t even making sense. He’s just pouring fabricated emotions over four gloomy guitar chords, and it ends up sounding like love. And what does that mean? It means she flies to fucking Portland to hear two hours of amateurish U.K. hyperslop, and I sleep alone in a $270 hotel in Manhattan, and I hope Coldplay gets fucking dropped by fucking EMI and ends up like the Stone fucking Roses, who were actually a better fucking band, all things considered.
Not that I’m bitter about this. Oh, I concede that I may be taking this particular example somewhat personally—but I do think it’s a perfect illustration of why almost everyone I know is either overtly or covertly unhappy. Coldplay songs deliver an amorphous, irrefutable interpretation of how being in love is supposed to feel, and people find themselves wanting that feeling for real. They want men to adore them like Lloyd Dobler would, and they want women to think like Aimee Mann, and they expect all their arguments to sound like Sam Malone and Diane Chambers. They think everything will work out perfectly in the end (just like it did for Helen Fielding’s Bridget Jones and Nick Hornby’s Rob Fleming), and they don’t stop believing, because Journey’s Steve Perry insists we should never do that. In the nineteenth century, teenagers merely aspired to have a marriage that would be better than that of their parents; personally, I would never be satisfied unless my marriage was as good as Cliff and Clair Huxtable’s (or at least as enigmatic as Jack and Meg White’s).
Pundits are always blaming TV for making people stupid, movies for desensitizing the world to violence, and rock music for making kids take drugs and kill themselves. These things should be the least of our worries. The main problem with mass media is that it makes it impossible to fall in love with any acumen of normalcy. There is no “normal,” because everybody is being twisted by the same sources simultaneously. You can’t compare your relationship with the playful couple who lives next door, because they’re probably modeling themselves after Chandler Bing and Monica Geller. Real people are actively trying to live like fake people, so real people are no less fake. Every comparison becomes impractical. This is why the impractical has become totally acceptable; impracticality almost seems cool. The best relationship I ever had was with a journalist who was as crazy as me, and some of our coworkers liked to compare us to Sid Vicious and Nancy Spungen. At the time, I used to think, “Yeah, that’s completely valid: We fight all the time, our love is self-destructive, and—if she was mysteriously killed—I’m sure I’d be wrongly arrested for second-degree murder before dying from an overdose.” We even watched Sid & Nancy in her parents’ basement and giggled the whole time. “That’s us,” we said gleefully. And like I said—this was the best relationship I ever had. And I suspect it was the best one she ever had, too.
Of course, this media transference is not all bad. It has certainly worked to my advantage, just as it has for all modern men who look and talk and act like me. We all owe our lives to Woody Allen. If Woody Allen had never been born, I’m sure I would be doomed to a life of celibacy. Remember the aforementioned woman who loved Cusack and Coldplay? There is absolutely no way I could have dated this person if Woody Allen didn’t exist. In tangible terms, she was light-years out of my league, along with most of the other women I’ve slept with. But Woody Allen changed everything. Woody Allen made it acceptable for beautiful women to sleep with nerdy, bespectacled goofballs; all we need to do is fabricate the illusion of intellectual humor, and we somehow have a chance. The irony is that many of the women most susceptible to this scam haven’t even seen any of Woody’s movies, nor would they want to touch the actual Woody Allen if they ever had the chance (especially since he’s proven to be an über-pervy clarinet freak). If asked, most of these foxy ladies wouldn’t classify Woody Allen as sexy, or handsome, or even likable. But this is how media devolution works: It creates an archetype that eventually dwarfs its origin. By now, the “Woody Allen Personality Type” has far greater cultural importance than the man himself.
Now, the argument could be made that all this is good for the sexual bloodstream of Americana, and that all these Women Who Want Woody are being unconsciously conditioned to be less shallow than their sociobiology dictates. Self-deprecating cleverness has become a virtue. At least on the surface, movies and television actively promote dating the nonbeautiful: If we have learned anything from the mass media, it’s that the only people who can make us happy are those who don’t strike us as being particularly desirable. Whether it’s Jerry Maguire or Sixteen Candles or Who’s the Boss or Some Kind of Wonderful or Speed Racer, we are constantly reminded that the unattainable icons of perfection we lust after can never fulfill us like the platonic allies who have been there all along.2 If we all took media messages at their absolute face value, we’d all be sleeping with our best friends. And that does happen, sometimes.3 But herein lies the trap: We’ve also been trained to think this will always work out over the long term, which dooms us to disappointment. Because when push comes to shove, we really don’t want to have sex with our friends… unless they’re sexy. And sometimes we do want to have sex with our blackhearted, soul-sucking enemies… assuming they’re sexy. Because that’s all it ever comes down to in real life, regardless of what happened to Michael J. Fox in Teen Wolf.
The mass media causes sexual misdirection: It prompts us to need something deeper than what we want. This is why Woody Allen has made nebbish guys cool; he makes people assume there is something profound about having a relationship based on witty conversation and intellectual discourse. There isn’t. It’s just another gimmick, and it’s no different than wanting to be with someone because they’re thin or rich or the former lead singer of Whiskeytown. And it actually might be worse, because an intellectual relationship isn’t real at all. My witty banter and cerebral discourse is always completely contrived. Right now, I have three and a half dates worth of material, all of which I pretend to deliver spontaneously.4 This is my strategy: If I can just coerce women into the last half of that fourth date, it’s anyone’s ball game. I’ve beaten the system; I’ve broken the code; I’ve slain the Minotaur. If we part ways on that fourth evening without some kind of conversational disaster, she probably digs me. Or at least she thinks she digs me, because who she digs is not really me. Sadly, our relationship will not last ninety-three minutes (like Annie Hall) or ninety-six minutes (like Manhattan). It will go on for days or weeks or months or years, and I’ve already used everything in my vault. Very soon, I will have nothing more to say, and we will be sitting across from each other at breakfast, completely devoid of banter; she will feel betrayed and foolish, and I will suddenly find myself actively trying to avoid spending time with a woman I didn’t deserve to be with in the first place.
Perhaps this sounds depressing. That is not my intention. This is all normal. There’s not a lot to say during breakfast. I mean, you just woke up, you know? Nothing has happened. If neither person had an especially weird dream and nobody burned the toast, breakfast is just the time for chewing Cocoa Puffs and/or wishing you were still asleep. But we’ve been convinced not to think like that. Silence is only supposed to happen as a manifestation of supreme actualization, where both parties are so at peace with their emotional connection that it cannot be expressed through the rudimentary tools of the lexicon; otherwise, silence is proof that the magic is gone and the relationship is over (hence the phrase “We just don’t talk anymore”). For those of us who grew up in the media age, the only good silence is the kind described by the hair metal band Extreme. “More than words is all I ever needed you to show,” explained Gary Cherone on the Pornograffiti album. “Then you wouldn’t have to say that you love me, cause I’d already know.” This is the difference between art and life: In art, not talking is never an extension of having nothing to say; not talking always means something. And now that art and life have become completely interchangeable, we’re forced to live inside the acoustic power chords of Nuno Bettencourt, even if most of us don’t necessarily know who the fuck Nuno Bettencourt is.
When Harry Met Sally hit theaters in 1989. I didn’t see it until 1997, but it turns out I could have skipped it entirely. The movie itself isn’t bad (which is pretty amazing, since it stars Meg Ryan and Billy Crystal), and there are funny parts and sweet parts and smart dialogue, and—all things considered—it’s a well-executed example of a certain kind of entertainment.5 Yet watching this film in 1997 was like watching the 1978 one-game playoff between the Yankees and the Red Sox on ESPN Classic: Though I’ve never sat through the pitch sequence that leads to Bucky Dent’s three-run homer, I know exactly what happened. I feel like I remember it, even though I don’t. And—more important—I know what it all means. Knowing about sports means knowing that Bucky Dent is the living, breathing, metaphorical incarnation of the Bo Sox’s undying futility; I didn’t have to see that game to understand the fabric of its existence. I didn’t need to see When Harry Met Sally, either. Within three years of its initial release, classifying any intense friendship as “totally a Harry-Met-Sally situation” had a recognizable meaning to everyone, regardless of whether or not they’d actually seen the movie. And that meaning remains clear and remarkably consistent: It implies that two platonic acquaintances are refusing to admit that they’re deeply in love with each other. When Harry Met Sally cemented the plausibility of that notion, and it gave a lot of desperate people hope. It made it realistic to suspect your best friend may be your soul mate, and it made wanting such a scenario comfortably conventional. The problem is that the Harry-Met-Sally situation is almost always tragically unbalanced. Most of the time, the two involved parties are not really “best friends.” Inevitably, one of the people has been in love with the other from the first day they met, while the other person is either (a) wracked with guilt and pressure, or (b) completely oblivious to the espoused attraction. 
Every relationship is fundamentally a power struggle, and the individual in power is whoever likes the other person less. But When Harry Met Sally gives the powerless, unrequited lover a reason to live. When this person gets drunk and tells his friends that he’s in love with a woman who only sees him as a buddy, they will say, “You’re wrong. You’re perfect for each other. This is just like When Harry Met Sally! I’m sure she loves you—she just doesn’t realize it yet.” Nora Ephron accidentally ruined a lot of lives.
I remember taking a course in college called “Communication and Society,” and my professor was obsessed by the belief that fairy tales like “Hansel and Gretel” and “Little Red Riding Hood” were evil. She said they were part of a latent social code that hoped to suppress women and minorities. At the time, I was mildly outraged that my tuition money was supporting this kind of crap; years later, I have come to recall those pseudo-savvy lectures as what I loved about college. But I still think they were probably wasteful, and here’s why: Even if those theories are true, they’re barely significant. “The Three Little Pigs” is not the story that is fucking people up. Stories like Say Anything are fucking people up. We don’t need to worry about people unconsciously “absorbing” archaic secret messages when they’re six years old; we need to worry about all the entertaining messages people are consciously accepting when they’re twenty-six. They’re the ones that get us, because they’re the ones we try to turn into life. I mean, Christ: I wish I could believe that bozo in Coldplay when he tells me that stars are yellow. I miss that girl. I wish I was Lloyd Dobler. I don’t want anybody to step on a piece of broken glass. I want fake love. But that’s all I want, and that’s why I can’t have it.
1. Hopefully Charlie Rose, if he’s still alive.
2. The notable exceptions being Vertigo (where the softhearted Barbara Bel Geddes gets jammed by sexpot Kim Novak) and My So-Called Life (where poor Brian Krakow never got any play, even though Jordan Catalano couldn’t fucking read).
3. “Sometimes” meaning “during college.”
4. Here’s one example I tend to deploy on second dates, and it’s rewarded with an endearing guffaw at least 90 percent of the time: I ask the woman what religion she is. Inevitably, she will say something like, “Oh, I’m sort of Catholic, but I’m pretty lapsed in my participation,” or “Oh, I’m kind of Jewish, but I don’t really practice anymore.” Virtually everyone under the age of thirty will answer that question in this manner. I then respond by saying, “Yeah, it seems like everybody I meet describes themselves as ‘sort of Catholic’ or ‘sort of Jewish’ or ‘sort of Methodist.’ Do you think all religions have this problem? I mean, do you think there are twenty-five-year-old Amish people who say, ‘Well, I’m sort of Amish. I currently work as a computer programmer, but I still believe pants with metal zippers are the work of Satan.’”
5. “A certain kind” meaning “bad.”
What Happens When People Stop Being Polite
Even before Eric Nies came into my life, I was having a pretty good 1992.
I wasn’t doing anything of consequence that summer, but—at least retrospectively—nothingness always seems to facilitate the best periods of my life. I suppose I was ostensibly going to summer school, sort of; I had signed up for three summer classes at the University of North Dakota in order to qualify for the maximum amount of financial aid, but then I dropped two of the classes the same day I got my check. I suppose I was also employed, sort of; I had a work-study job in the campus “geography library,” which was really just a room with a high ceiling, filled with maps no one ever used. For some reason, it was my job to count these maps for three hours a day (I was, however, allowed to listen to classic-rock radio). But most importantly, I was living in an apartment with a guy who spent all night locked in his bedroom writing a novel he was unironically titling Bits of Reality, which I think was a modern retelling of Oedipus Rex. He slept during the afternoon and mostly subsisted on raw hot dogs. I think his girlfriend paid the rent for both of us.
Now, this dude who ate the hot dogs… he was an excellent roommate. He didn’t care about anything remotely practical. When two people live together, there’s typically an unconscious Odd Couple relationship: There’s always one fastidious guy who keeps life organized, and there’s always one chaotic guy who makes life wacky and interesting. Somehow, the hot dog eater and I both fit into the latter category. In our lives, there was no Tony Randall. We would sit in the living room, drink a case of Busch beer, and throw the empty cans into the kitchen for no reason whatsoever, beyond the fact that it was the most overtly irresponsible way for any two people to live. We would consciously choose to put out cigarettes on the carpet when ashtrays were readily available; we would write phone messages on the walls; we would vomit out the window. And this was a basement apartment.
Obviously, we rarely argued about the living conditions.
We did, however, argue about everything else. Constantly. We’d argue about H. Ross Perot’s chances in the upcoming presidential election, and we’d argue about whether there were fewer Jews in the NBA than logic should dictate. We argued about the merits of dog racing, dogfighting, cockfighting, affirmative action, legalized prostitution, the properties of ice, chaos theory, and whether or not water had a discernible flavor. We argued about how difficult it would be to ride a bear, assuming said bear was muzzled. We argued about partial-birth abortion, and we argued about the possibility of Trent Reznor committing suicide and/or being gay. We once got into a vicious argument over whether or not I had actually read all of an aggrandizing Guns N’ Roses biography within the scope of a single day, an achievement my hot dog–gorged roommate claimed was impossible (that particular debate extended for all of July). Mostly, we argued about which of us was better at arguing, and particularly about who had won the previous argument.
Perhaps this is why we were both enraptured by that summer’s debut of MTV’s The Real World, an artistic product that mostly seemed like a TV show about people arguing. And these people were terrible arguers; the seven cast members thrown into that New York loft always made ill-conceived points and got unjustifiably emotional, and they all seemed to take everything much too personally. But the raw hot dog eater and I watched these people argue all summer long, and then we watched them argue again in the summer of 1993, and then again in the summer of 1994. Technically, these people were completely different every year, but they were also exactly the same. And pretty soon it became clear that the producers of The Real World weren’t sampling the youth of America—they were unintentionally creating it. By now, everyone I know is one of seven defined strangers, inevitably hoping to represent a predefined demographic and always failing horribly. The Real World is the real world is The Real World is the real world. It’s the same true story, even when it isn’t.
I tend to consider myself an amateur Real World scholar. I say “amateur” because I’ve done no actual university study on this subject, but I still say “scholar” because I’ve stopped watching the show as entertainment. At this point, I only watch it in hopes of unlocking the questions that have haunted man since the dawn of civilization. I’ve seen every episode of every season, and I’ve seen them all a minimum of three times. This, of course, is the key to appreciating The Real World (and the rest of MTV’s programming): repetition. To really get it, you have to watch MTV so much that you know things you never tried to remember. You can’t try to deduce the day-to-day habits of Jon Brennan (he was the cowboy dude) from RW 2: Los Angeles. That would be ridiculous. You can’t consciously try to figure out what he likes and what he hates and how he lives; these are things you have to know without trying. You just have to “know” he constantly drinks cherry Kool-Aid. But you can’t try to learn that, because that would make you a weirdo. This kind of knowledge is like a vivid dream you suddenly pull out of the cosmic ether, eight hours after waking up. If someone asks you when Montana from RW 6: Boston exposed her breasts, you just sort of vaguely recall it was on a boat; if someone asks you who the effeminate black guy from Seattle slapped in the face, you inexplicably know it was the chick with Lyme disease. Yet these are not bits of information you actively acquired; these are things picked up the same way you sussed out how to get around on the subway, or the way you figured out how to properly mix Bloody Marys. One day, you just suddenly realize it’s something you know. And—somehow—there’s a cold logic to it. It’s an extension of your own life, even though you never tried to make it that way.
In 1992, The Real World was supposed to be that kind of calculated accident; it was theoretically created as a seamless extension of reality. But somewhere that relationship became reversed; theory was replaced by practice. During that first RW summer, I saw kids on MTV who reminded me of people I knew in real life. By 1997, the opposite was starting to happen; I kept meeting new people who were like old Real World characters. I’ve met at least six Pucks in the past five years. This doesn’t mean they necessarily talk about snot or eat peanut butter with their hands; what it means is they play The Puck Role. In any given situation, they will provide The Puck Perspective, and they will force those around them to Confront The Puck Paradigm. If nothing else, The Real World has provided avenues for world views that are both specialized and universal, and it has particularly validated world views that are patently unreasonable.
Part of me is hesitant to write about cast members from The Real World in any specific sense, because I realize few Americans have studied (or even seen) all twelve seasons of the show. You hear a lot of people say things like they watched most of the first two seasons, or that they watched every season up until Miami, or that they never started watching until the San Francisco season, or that they’ve only seen bits and pieces of the last three years and tend to get the casts mixed up. For most normal TV watchers, The Real World is an obsession that fades at roughly the same rate as denim. I’ve noticed that much of the program’s original 1992 audience gets especially bored whenever a modern cast starts to talk like teenage aliens.1 Last year, an old friend told me she’s grown to hate the Real World because, “MTV used to pick people for that show who I could relate to. Now they just have these stupid little kids who act like selfish twits.” This was said by a woman—now a responsible twenty-nine-year-old software specialist—who once threw a drink into the face of her college roommate for reasons that could never be explained. It’s hard for most people to hang with a show that so deeply bathes in a fountain of youth.
However, another part of me realizes there’s no risk whatsoever in pointing out specific RW cast members, even to people who’ve never seen the show once: You don’t need to know the people I’m talking about, because you know the people I’m talking about. And I don’t mean you know them in the ham-fisted way MTV casts them (i.e., “The Angry Black Militant”2 or “The Gay One”3 or “The Naive Virginal Southerner Who’s Vaguely Foxy”4). When I say “you know these people,” it’s because the personalities on The Real World have become the only available personalities for everyone who’s (a) alive and (b) under the age of twenty-nine.
Our cultural preparation for a Real World universe actually started in movie theaters during the eighties, particularly with two films that both came out in 1985: The Breakfast Club and St. Elmo’s Fire. These seminal portraits were what The Real World was supposed to be like, assuming MTV could find nonfictional people who would have interesting conversations on a semiregular basis. Like most RW casts, The Breakfast Club broke teen culture into five segments that were laughably stereotypical (and—just in case you somehow missed what they were—Anthony Michael Hall pedantically explains it all in the closing scene). St. Elmo’s Fire used many of the same actors, but it evolved their personalities by five years and made them more (ahem) “philosophically complex.” Here is where we see the true genesis of future Real Worldians. With Judd Nelson, we have the respected social climber doomed to fail ethically;5 with Andrew McCarthy, the sensitive, self-absorbed guy who works hard at being bitter.6 Rob Lowe is the self-destructive guy we’re somehow supposed to envy;7 Emilio Estevez is the romantic that all chumps are supposed to identify with, mostly because he’s obsessed with his own obviousness.8 Demi Moore is fucked up and pathetic,9 but Mare Winningham is even more pathetic because she aspires to be fucked up.10 Ally Sheedy is too normal to have these friends11 (or, I suppose, to be in this particular movie).
If we were to combine these two films—in other words, if we were to throw the St. Elmo’s kids into all-day Saturday detention—we’d have a pretty good Real World. It’s been noted that one of the keys to Alfred Hitchcock’s success as a filmmaker was that he didn’t draw characters as much as he drew character types; this is how he normalized the cinematic experience. It’s the same way with The Real World. The show succeeds because it edits malleable personalities into flat, twenty-something archetypes. What interests me is the way those archetypes so quickly became the normal way for people of my generation to behave.
It’s become popular for Real World revisionists to claim that the first season was the only truly transcendent RW, the argument being that this was the singular year its cast members actually acted “real.” In a broad sense, that’s accurate: Since that first Real World was entirely new, no one knew what it was going to look like (or how it would be received). Nobody in the original New York loft was able to formulate an agenda on purpose. Logically, this should make for great television. In practice, it doesn’t translate: In truth, RW 1 is mostly dull. It was fascinating in 1992 because of the novelty, but it doesn’t stand up over time.
I’ll concede that the cast on the first Real World were the only ones who didn’t constantly play to the camera; only hunky model Eric Nies did so on an episode-to-episode basis, but one gets the impression this was just his normal behavior. While the actual filming was taking place, I have no doubt the seven loft-dwellers were clueless about what the final product would look like on television; that certainly fostered the possibility for spontaneous “reality,” and there are glimpses of that throughout RW 1. The problem is that hard reality tends to be static: On paper, the conversations from that virgin Real World would make for a terrible script. In fact, the greatest moments from the first Real World are when nothing is going on at all—the awkwardness becomes transfixing, not unlike the sensation of sitting in an airport and watching someone read a newspaper. Yet if every cast of The Real World had been as “real” as that first New York ensemble, the show would have only lasted two seasons.
Ironically, the reason RW flourished is because its telegenic humanoids became less complex with every passing season. Multifaceted people do not translate within The Real World format. Future cast members figured this out when that initial season finally aired and it was immediately obvious that only two personalities mattered: Alabama belle Julie and angry African-American Kevin. The only truly compelling episode from the first season came in week eleven, when Julie and Kevin had an outdoor screaming match over a seemingly random race issue.12 But the fight itself wasn’t the key. What was important was the way it galvanized two archetypes that would become cornerstones for late-twentieth-century youth: the educated automaton and the likable anti-intellectual. Those two personality sects are suddenly everywhere, and they’re both children of The Real World.
Obviously, Kevin embodies the former attitude and Julie embodies the latter. And—almost as obviously—neither designation is particularly accurate. Kevin became a solid hip-hop writer for Vibe and Rolling Stone, and he’s far less robotic than he appears on The Real World. Meanwhile, Julie was never a backwater hick (I interviewed her in 1995, and I honestly suspect she might be the savviest person in the show’s history). But within the truncated course of those thirteen original episodes, we are led to believe that (a) Kevin is obsessed with racial identity and attempts to inject his blackness into every conversation, while (b) Julie adores anything remotely new and abhors everything remotely pretentious.
Kevin’s Huey Newton–like image can’t be blamed entirely on him: The Real World is unnaturally obsessed with race. And what’s disheartening is that The Real World is so consumed with creating racial tension that it often makes black people look terrible: If your only exposure to diversity was Coral and Nicole from the 2001 “Back to New York” RW cast, you’d be forced to assume all black women are blithering idiots. This is partially because the only black characters who get valuable RW airtime are the ones who refuse to talk about anything else. It’s the same situation for homosexual cast members—their Q factor is completely dependent on how aggressively gay they’re willing to act. In that first NYC season, Norman is immediately identified as bisexual, but he’s not bisexual enough; he only gets major face time when he’s dating future TV talk-show host Charles Perez. Future queer cast members would not make this mistake; for people like AIDS victim Pedro Zamora and Dan from RW 5: Miami, being gay was pretty much their only personality trait. Perhaps more than anything else, this is the ultimate accomplishment of The Real World: It has validated the merits of having a one-dimensional personality. In fact, it has made that kind of persona desirable, because other one-dimensional personalities can more easily understand you.
If you believe Real World producers Mary-Ellis Bunim and Jon Murray, they don’t look for troublemakers when they make casting decisions. They insist they simply cast for “diversity.” But this is only true in a macro sense—they want obvious diversity. They want physical diversity, or sexual diversity, or economic diversity. What they have no use for is intellectual diversity. A Renaissance man (or woman) need not apply to this program. You need to be able to deduce who a given Real Worlder represents socially before the second commercial break of the very first episode, which gives you about eighteen minutes of personality. It was very easy to make RW 1 Kevin appear one-dimensional, even if that portrayal wasn’t accurate; he gave them enough “race card” material to ignore everything else. Thus, Kevin became the inadvertent model for thousands and thousands of future Real World applicants—these are the people who looked at themselves in the mirror and thought, “I could get on that show. I could be the _____ guy.”
The “_______” became almost anything: race, gender, geographic origin, sexual appetite, etc. There was suddenly an unspoken understanding that every person in the Real World house was supposed to fit some kind of highly specific—but completely one-dimensional—persona. In his memoir A Heartbreaking Work of Staggering Genius, Dave Eggers writes about how he tried to get on Real World 3: San Francisco, but was beaten out by Judd. Coincidentally, both of those guys were cartoonists. But the larger issue is that they were both liberal and sensitive, and they were both likely to be the kind of guy who would fall in love with a female housemate who only perceived him as a good friend. This is exactly the person Judd became; there is now a famous13 scene from that third season where Judd is rowing a boat and longingly stares at roommate Pam and her boyfriend, Christopher, as they paddle alongside in a similar watercraft. Months after the conclusion of RW 3, Pam broke up with Chris and fell in love with Judd, which is (a) kind of bizarre, but mostly (b) exactly what MTV dreams of having happen during any given season. Whenever I see repeat episodes of RW 3, I find myself deconstructing every casual conversation Judd and Pam have, because I know a secret they don’t—eighteen months later, they will have sex. It’s sort of like seeing old Judas Priest videos on VH1 Classic and looking for signs of Rob Halford’s homosexuality.
The Judd-Pam undercurrent is part of the reason I consider Real World 3: San Francisco the best-ever RW, but that’s not the only reason. Central to my affinity for RW 3 is a wholly personal issue: The summer it premiered was the summer following my college graduation. I had just moved to a town where I knew almost no one, and my cable was installed the afternoon of The Real World season premiere. The first new friends I made were Cory and Pedro, and I rode with them on a train to California. And I pretty much hated both of them (or at least Cory) immediately.
In truth, there wasn’t any member of RW 3 I particularly liked, and I couldn’t relate to any of them, except maybe Rachel (and only because she was a bad Catholic). But I became emotionally attached to these people in a very authentic way, and I think it was because I started noticing that the cast members on RW 3 were not like people from my past. Instead, they seemed like new people I was meeting in the present.
Because The Real World has now been going on for a decade—and because of Survivor and Big Brother and The Mole and Temptation Island and The Osbournes—the idea of “reality TV” is now something everyone understands. Without even trying, American TV watchers have developed an amazingly sophisticated view of postmodernism, even if they would never use the word postmodern in any conversation (or even be able to define it).14 However, this was still a new idea in 1994. And what’s important about RW 3 is that it was the first time MTV quit trying to pretend it wasn’t on television.
Here’s what I mean by that: I once read a movie review by Roger Ebert for the film Jay and Silent Bob Strike Back. Early in the review, Ebert makes a tangential point about whether or not film characters are theoretically “aware” of other films and other movie characters. Ebert only touches on this issue casually, but it’s probably the most interesting philosophical question ever asked about film grammar. Could Harrison Ford’s character in What Lies Beneath rent Raiders of the Lost Ark? Could John Rambo draw personal inspiration from Rocky? In Desperately Seeking Susan, what is Madonna hearing when she goes to a club and dances to her own song? Within the reality of one specific fiction, how do other fictions exist?
The Real World deals with an identical problem, but in a completely opposite way: They have a nonfiction situation that is supposed to have no relationship to other nonfictions. They have to behave as if what they’re doing hasn’t been done before. Real Worlders always get into arguments, but you never hear them say, “Oh, you’re only saying that because you know this is going to be on TV,” even though that would be the best comeback 90 percent of the time. No one would ever compare a housemate to a cast member from a different season, even when such comparisons seem obvious. The kids talk directly into the camera every single day, but they are ceaselessly instructed to pretend as if they are not being videotaped whenever they’re outside the confessional. Most of all, they never openly recognize that they’re part of a cultural phenomenon; they never mention how weird it is that people are watching them exist. Every Real World cast exists in a vacuum.
That illusion started to crack in RW 3. That’s also when the show’s mentality started to leak into the social bloodstream.
The reason this occurred in San Francisco is because two of the housemates, Puck and Pedro, never allowed themselves to slip into The Real World’s fabricated portrait of reality; they were always keenly cognizant of how they could use this program to forward their goals. Depending on your attitude, Pedro’s agenda was either altruistic (i.e., personalizing the HIV epidemic), self-aggrandizing (he was doggedly focused on achieving martyrdom status), or a little of both (which is probably closest to the mark). Meanwhile, Puck’s agenda was entirely negative, any way you slice it; he wanted to become the show’s first “breakout star” (a Real World Fonzie, if you will), and he succeeded at that goal by actively trying to wreck the entire project. In a show about living together, he tried to be impossible to live with. But in at least one way, Pedro and Puck were identical: Both of these guys immediately saw that they could design their own TV show by developing a script within their head. They fashioned themselves as caricatures.
Ironically, they both attacked each other for doing this. By the ninth episode, Puck was breaking the fourth wall by suggesting that Pedro was trying to force his message down the throats of viewers; no one had ever implied something like this before. Without being too obvious, The Real World producers relaxed the reins and gave up on the notion that this show was somehow organic; a decision was made to let Puck and Pedro fight over the future identity of The Real World. Puck represented the idea of a show where everyone was openly fake and we all knew it was a sham; Pedro represented the aesthetic of a show where what we saw was mostly fake, but we would agree to watch it as if it was totally real. It was almost a social contract. To feel Pedro’s pain (as Bill Clinton supposedly did), you had to suspend your disbelief—a paradoxical requirement for a reality program.
In the end, Puck’s asinine subversion turned everyone against him with too much voracity. He was jettisoned from the house in episode eleven, appearing only sporadically for the remainder of the season. Pedro remained in the residence and became MTV’s shining moment of the 1990s; he proved himself as an educational hero with a mind-blowing flair for the dramatic (the fact that he died the day after the final episode aired is almost as eerie as Charles Schulz dying the day before the final Peanuts strip ran in newspapers). Though the second half of the RW 3 season (after Puck’s departure) is considerably less entertaining than its first half, it’s probably good Puck was booted. He would have destroyed the show. In fact, whenever a member of a Real World cast has tried to subvert the premise of the program—Puck, Seattle’s Irene,15 Hawaii’s Justin16—they’ve never made it through an entire season. If they did, it would have turned something charmingly silly into a complete farce. But as long as that unspoken agreement remains between the show and the audience—they pretend to be normal people, we pretend to believe them—The Real World works as both bubblegum sociology and a sculptor of human behavior… which brings me back to what I was saying about how almost everyone I meet has suddenly turned into a Real World cast member.
It all became clear in 1994, during RW 3: I had just graduated from college the previous spring and was residing in Fargo, a town I was logistically familiar with despite knowing virtually no one who lived there. However, Fargo is only an hour’s drive from Grand Forks, North Dakota (the college town where I attended school), so I drove back to “rock” every other weekend. I’d cut out of work early and arrive in G.F. around 4:30 P.M.; I’d spring for a case of Busch pounders (I was now making $18,500 a year and was therefore unspeakably rich) and I’d sit around with a revolving door of acquaintances in someone’s shithole apartment. We’d load up on Busch until it was time to go to the local uncool sports bar (Jonesy’s) at 8:00, which was where you went before hitting the hipster bar (Whitey’s) at around 10:20. Not unlike the summer of 1992, there was no real activity: We’d just sit around and listen to the dying days of grunge, fondly reminiscing about things that had happened in the very recent past. But sometimes I’d notice something weird, especially if strangers stumbled into our posse: Everyone was adopting a singularity to their self-awareness. When I had first arrived at college in 1990, one of the things I loved was the discovery of people who seemed impossible to categorize; I’d meet a guy watching a Vikings-Packers game in the TV room, only to later discover that he was obsessed with Fugazi, only to eventually learn that he was a gay born-again Christian. There was a certain collegiate cachet to being a walking contradiction. But somehow The Real World leaked out of those TV sets when Puck shattered the glass barrier between his life and ours. People started becoming personality templates, devoid of complication and obsessed with melodrama. I distinctly recall drinking with two girls in a Grand Forks tavern while they discussed their plan to “confront” a third roommate about her “abrasive” behavior. How did that become a normal way to talk?
Who makes plans to “confront” a roommate? To me, it was obvious where this stuff came from: It came from Real World people. It was Real World culture. It’s a microcosm of the United Nations, occupied by seven underdeveloped countries trying to force the others to recognize their right to exist.
During that very first summer of The Real World, everyone kept telling me I should try to get on RW 2. They gave the same advice to my hot dog–eating roommate. I suspect this was meant to be a compliment to both of us; when people tell you that you should be on a reality program, they’re basically saying you’re crazy enough to amuse total strangers. I was always flattered by this suggestion, and I used to fantasize about being cast on The Real World, imagining that it would make me famous. What I failed to realize is that being a former member of The Real World is the worst kind of fame. There is no financial upside; it offers no artistic credibility or mainstream adoration or easy sex. Basically, the only reward is that people will (a) point at you in public, and (b) ask you about absolutely nothing else until the day you die, when your participation in a cable television program becomes the lead item in your obituary. You will be the kind of person who suddenly gets recognized at places like Burger King, but you will still be the kind of person who eats at places like Burger King.
Once you’ve been on TV, nothing else matters. If Flora from Miami wrote the twenty-first-century version of Anna Karenina, she’d still be known as the loud-mouthed bitch who fell through the bathroom window. Almost a dozen ex–Real Worlders have pursued careers in music, all with a jump-start from MTV. None have succeeded; their combined album sales would be dwarfed by Arrested Development’s live album. Eric Nies and Puck managed to stay in the spotlight for a few extra milliseconds, but they both went bankrupt. It appears that the highest residual success one can achieve from a Real World stint is that of being asked to compete in a Real World/Road Rules challenge. All these people are forever doomed to the one-dimensional qualities that made them famous nobodies. The idea that they could do anything else seems impossible.
This is why I could never be on The Real World, no matter how much I love watching it. I could never filter every experience through my singular, self-conscious individuality. Yet part of me fears this will happen anyway; I fear that The Real World’s unipersonal approach will become so central to American life that I’ll need a singular persona just to make conversation with whatever media-saturated robot I end up marrying. Being interesting has been replaced by being identifiable. I guess my only hope is to find myself an Alabama Julie, whose wonderfully one-dimensional naïveté will be impressed by the unpretentious way I vomit out the window.
1. An obvious example: White kids using the word phat unironically.
2. Kevin from RW 1, Kameelah from RW 6, Coral from RW 10, etc.
3. Norman, Beth, Pedro, Dan, Chris, et al.
4. Julie, Elka, that big-toothed Mormon, the girl with perfect lips from Louisiana, and Trishelle.
5. Joe from Miami.
6. Judd from San Francisco.
7. Dominic from L.A.
8. Kind of like that dork from Hawaii who fell in love with the alcoholic lesbian and then dated her sister.
9. Theoretically Ruthie, the drunk chick from Hawaii—although (in truth) she was actually more reasonable than everyone else in that house.
10. Cory in San Fran, all the other girls from Hawaii, Tonya from Chicago, and every other female who spends at least two episodes of any season staring at a large body of water.
11. Julie from the first NYC cast, the blonde from New Orleans, Kevin in the second set of New Yorkers, and Frank from Vegas.
12. I say “seemingly” because this argument appears totally superficial—until you find out the context: It happened during the Rodney King riots in Los Angeles, a fact that MTV never mentioned. As a rule, The Real World does not deal with the issue of context very well, consciously skewing it much of the time. When David (the black comedian in Los Angeles) was kicked out for “sexually harassing” future NBA groupie Tami in RW 2, the viewing audience is given the impression that he had been living in the house for weeks. In truth, it happened almost immediately after everyone moved in.
13. Relatively speaking.
14. This is partially because everyone who does use postmodern in casual conversation seems to define it differently, usually in accordance with whatever argument they’re trying to illustrate. I think the best definition is the simplest: “Any art that is conscious of the fact that it is, in fact, art.” So when I refer to something as postmodern, that’s usually what I mean. I realize some would suggest that an even better definition is “Any art that is conscious of the fact that it is, in fact, product,” but that strikes me as needlessly cynical.
15. This was that chick with Lyme disease.
16. This was the gay law student with the spiky hair.
Being Zack Morris
Sometimes I’m a bad guy, but I still do good things. Ironically, those good things are often a direct extension of my badness. And this makes me even worse, because it means my sinister nature is making people unknowingly smile.
Here’s one example: I was once dating a girl in a major American city, and I was also kind of pursuing another girl in another major American city. I had just received one of those nifty “CD burners” for my computer, so I started making compilation albums for friends and particularly for lady friends. Like most uncreative intellectual men, almost all of my previous relationships had been based on my ability to make incredibly moving mix cassettes; though I cannot prove it, I would estimate that magnetic audiotape directly influenced 66 percent of my career sexual encounters. However, the explosion of CD burning technology has forced people like me to create CDs instead of cassettes, which is somewhat disheartening. The great thing about mix tapes was that you could anticipate the listener would have to listen to the entire thing at least once (and you could guarantee this by not giving them a track listing). Sequencing was very important. The strategy was to place specific “message” songs in between semimeaningless “rocking” songs; this would transfix, compliment, and confuse the listener, which was always sort of the goal. However, once people started making their own CDs, the mix tape suddenly seemed cheap and archaic. I had no choice but to start making CDs, even though they’re not as effective: People tend to be more impressed by the packaging of the jewel case than the songs themselves, and they end up experiencing the music no differently than if they had thoughtlessly purchased the disc at Best Buy (i.e., they skip from track to track without really studying the larger concept behind the artistic whole).
ANYWAY, I was making a mix disc for one of these women (I will never admit which), and it was my intention to find eighteen songs that reflected key elements of our relationship, which I thought I did. But as I looked at the track selection, it suddenly dawned on me that these songs were just as applicable to my other relationship. My feelings for “Woman A” were completely different than my feelings for “Woman B,” but the musical messages would make emotional sense to both, despite the fact that these two women were wildly dissimilar. So I ended up making two copies of this album and sending one to each woman, using all the same songs and identical cover art (computers make this entirely too easy). I expressed identical romantic overtures to two different people with one singular movement. And they both received their discs on the same day, and they both loved them.1
Part of me will always know this was a diabolical thing to do. However, I’m mostly struck by the fact that all my deepest, most sincere feelings are so totally stereotypical that they pretty much apply to every girl I find even vaguely attractive. My feelings toward every woman I’ve ever loved can be completely explained by Paul McCartney’s “Maybe I’m Amazed,” Rod Stewart’s “You’re in My Heart,” and either Matthew Sweet’s “Girlfriend” or Liz Phair’s “Divorce Song” (depending on how long we’ve known each other). My feelings about politics and literature and mathematics and the rest of life’s minutiae can only be described through a labyrinth of six-sided questions, but everything that actually matters can be explained by Lindsey fucking Buckingham and Stevie fucking Nicks in four fucking minutes. Important things are inevitably cliché, but nobody wants to admit that. And that’s why nobody is deconstructing Saved by the Bell.
Saved by the Bell is like this little generational secret that’s hyperfamiliar to people born between 1970 and 1977, yet generally unremarkable to anyone born after (and completely alien to all those born before). It was an NBC sitcom that ran for four years (1989 to 1993) after an initial thirteen-episode season on the Disney Channel (where it was originally titled Good Morning, Miss Bliss). The show spawned two spin-offs—Saved by the Bell: The College Years and Saved by the Bell: The New Class—and also included a six-episode summer run (usually referred to as the “Malibu Sands” miniseason) and two made-for-TV movies (one set in Hawaii, the other in Las Vegas).
It was a program about high school kids.
I realize that is not much expository information. Typically, one tries to explain TV shows in terms of “context”—if someone asked me to describe The X-Files, for example, I would seem like a moron if I said, “It was a program about two people who mostly looked for aliens.” That would never qualify as a significant description. I would have to write about how the supernatural religiosity of The X-Files personified a philosophical extension of its audience, and how the characters represented two distinct perspectives on modern reality, and how the sexual chemistry between Mulder and Scully was electrified by their lack of physical intimacy. All this abstract deconstruction is necessary, and it’s necessary because The X-Files was artful. However, I have never watched even one episode of The X-Files, because I’m not interested. I’m not interested in trying to understand culture by understanding that particular show, and that’s part of the social contract with appreciating anything artful. You can’t place something into its aforementioned “context” unless you know where (and how) to culturally file it, and I honestly don’t care where The X-Files belongs in the American zeitgeist. Dozens of smart people told me how great this show was, and I’m sure they were right. But I’m satisfied with assuming that program was about two people who mostly looked for aliens, so—as a consequence—the show meant nothing to me. I “don’t get it.”
That’s not the case with Saved by the Bell. Saved by the Bell wasn’t artful at all. Now, that doesn’t mean it’s bad (nor does it mean it’s good). What it means is that you don’t need to place Saved by the Bell into any context to experience it. I didn’t care about Saved by the Bell any more than I cared about The X-Files, but the difference is that I could watch Saved by the Bell without caring and still have it become a minor part of my life, which is the most transcendent thing any kind of art can accomplish (regardless of its technical merits).
When I first saw Saved by the Bell, I was a senior in high school. It was on Saturday mornings, usually right when I woke up (which I think was either 11:00 or 11:30 A.M.). It was supposedly the first live-action show NBC ever broadcast on a Saturday morning, an idiom that had previously been reserved for animation. I would watch Saved by the Bell the same way all high school kids watch morning television, which is to say I stared at it with the same thoughtless intensity I displayed when watching the dryer. I watched it because it was on TV, which is generally the driving force behind why most people watch any program. However, I became a more serious Saved by the Bell student when I got to college. I suspect this kind of awakening was not uncommon, as universities always spawn little cultures of terrible TV appreciation: When I was a sophomore, the only non-MTV shows anyone seemed to watch were Saved by the Bell, Life Goes On (that was the show about the retarded kid), Quantum Leap, the Canadian teen drama Fifteen, and Days of Our Lives. And what was interesting was that everybody seemed to watch them together, in the same room (or over the telephone), and with a cultic intensity. We liked the “process” of watching these shows. The idea of these programs being entertaining never seemed central to anything, which remains the most fascinating aspect of all televised art: consumers don’t demand it to be good. It just needs to be watchable. And the reason that designation can be applied to Saved by the Bell has a lot to do with the fundamental truth of its staggering unreality.
Saved by the Bell followed the lives of six kids at a California high school called Bayside. Architecturally, the school consisted of one multipurpose classroom, one square hallway, a very small locker room, and a diner owned by a magician. The six primary characters were as follows:
Zack Morris (Mark-Paul Gosselaar): Good-looking blond kid with the ability to talk directly to the camera like Ferris Bueller; possessed a cell phone years before that was common; something of an Eddie Haskell/James Spader type, but with a heart of gold.
Samuel “Screech” Powers (Dustin Diamond): Über-geeky Zack sycophant.
Albert Clifford “A.C.” Slater (Mario Lopez): Good-looking ethnic fellow; star wrestler; nemesis of Zack—except in episodes where they’re inexplicably best friends.
Kelly Kapowski (Tiffani-Amber Thiessen): Sexy girl next door; love interest of Zack.
Jessica “Jessie” Spano (Elizabeth Berkley): Sexy 4.00 overachieving feminist; love interest of A.C.
Lisa Turtle (Lark Voorhies): Wildly unlikable rich black girl; vain clotheshorse; unrequited love interest of Screech.
Every other kid at Bayside was either a nerd, a jock, a randomly hot chick, or completely nondescript; it was sort of like Rydell High in Grease. There were several noteworthy kids from the Good Morning, Miss Bliss era who simply disappeared when the show moved to NBC (this is akin to what happened to people like Molly Ringwald and Julie Piekarski when The Facts of Life changed from an ensemble cast to its signature Blair-Jo-Natalie-Tootie alignment). Tori Spelling portrayed Screech’s girlfriend Violet in a few episodes, Leah Remini served as Zack’s girlfriend during the six episodes set at the Malibu beach resort, an unbilled Denise Richards appeared in the final episode of the Malibu run, and a now-buxom Punky Brewster played a snob for one show in the final season. Weirdly, a leather-clad girl named Tori (Leanna Creel) became the main character for half of the last season when Thiessen and Berkley left the show, but then they both reappeared at graduation and Creel was never seen again (I’ll address the so-called “Tori Paradox” in a moment).
But—beyond that—the writers of Saved by the Bell always seemed to suggest that most adolescents are exactly the same and exist solely as props for the popular kids, which was probably true at most American high schools in the 1980s.2 The only other important personality in the Bayside universe is Mr. Belding (Dennis Haskins), who is a principal of the John Hughes variety; there is no glass ceiling to his stupidity. However, Belding differs from the prototypical TV principal in that he tended to be completely transfixed by the school’s most fashionable students; he really wanted Zack to like him, and Belding and Morris would often join forces on harebrained schemes.
On the surface, Saved by the Bell must undoubtedly seem like everything one would expect from a dreadful show directed at children, which is what it was. But that’s not how it was consumed by its audience. There was a stunning recalibration of the classic “suspension of disbelief vs. aesthetic distance” relationship in Saved by the Bell, and it may have accidentally altered reality (at least for brief moments).
Here’s what I mean: In 1993, Saved by the Bell was shown four times a day. If I recall correctly, two episodes were on the USA Network from 4:00 to 5:00 P.M. CST, and then two more were on TBS from 5:05 to 6:05. It’s possible I have these backward, but the order doesn’t matter; the bottom line is that I sometimes watched this show twenty times a week. So did my neighbor, a dude named (I think) Joel who (I think) was studying to become a pilot. Sometimes I would walk over to Joel’s place and watch Saved by the Bell with him, and he was the type of affable stoic who never spoke. He was one of those quiet guys who would offer you a beer when you walked into his apartment, and then he’d silently drink by himself, regardless of whether you joined him or not. Honestly, we never became friends. But we sort of had this mute, parasitic relationship through Saved by the Bell, and I will always remember the singular significant conversation we had: We were watching an episode where Belding was blackmailing Zack into dating his niece, and Joel suddenly got real incredulous and asked, “Oh, come on. Who the fuck has that kind of relationship with their high school principal?”
Of all the things that could have caused Joel to bristle, I remain fascinated by his oddly specific observation. I mean, Bayside High was a school where students made money by selling a “Girls of Bayside” calendar, and it was a school where oil was discovered under the football team’s goalposts. This is a show where Zack had the ability to call time-out and stop time in order to narrate what was happening with the plot. There is never a single moment in the Saved by the Bell series that reflects any kind of concrete authenticity. You’d think Zack’s unconventional relationship with an authority figure would be the least of Joel’s concerns. However, this was the only complaint he ever lodged against the Saved by the Bell aesthetic, and that’s very telling.
Now, I realize there is some precedent for this kind of disconnect: Trekkies generally have no problem with the USS Enterprise moving at seven times the speed of light, but they roll their eyes in disgust if Spock acts a little too jovial. Within any drama, we all concede certain unbelievable parameters, assuming specific aspects of the story don’t go outside the presupposed reality. But I think Joel’s take on Saved by the Bell is different than the usual contradiction. What it made me realize is that people like Joel (and like me, I suppose) were drawn to this unentertaining show because we felt like we knew what was going to happen next. Understanding Saved by the Bell meant you understood what was supposed to define the ultrasimplistic, hyperstereotypical high school experience—and understanding that formula meant you realized what was (supposedly) important about growing up. It’s like I said before: Important things are inevitably cliché. Zack’s relationship with Belding—and his niece—was just too creative, and bad television is supposed to be reassuring. Nobody needs it to be interesting.
Take a show like M*A*S*H, for instance. M*A*S*H consciously aspired to be “good television.” Its goal was to be intellectually provoking (particularly over its final four seasons), so almost every plot hinged on a twist: The North Korean POW was actually more ethical than the South Korean soldier, Colonel Potter’s visiting war buddy was actually corrupt, a much-decorated sergeant was actually killing off his black platoon members on purpose, etc., etc., etc. The first ten minutes of every M*A*S*H episode set strict conditions; the next twenty minutes would illustrate how life is not always as it seems.3 This—in theory—is clever, and it’s supposed to teach us something we don’t know. Meanwhile, Saved by the Bell did the opposite. The first ten minutes of every episode put a character (usually Zack) in a position where he or she was tempted to do something that was obviously wrong, and their friends would warn them that this was a mistake. Then they would do it anyway, learn a lesson, and admit that everyone was right all along. Saved by the Bell wasn’t ironic in the contemporary sense (i.e., detached and sardonic), and it wasn’t even ironic in the literal sense (the intentions and themes of the story never contradicted what they stated ostensibly). You never learned anything, and you weren’t supposed to.
Take the episode from the gang’s senior year, where they went to a toga party hosted by a bloated jock nicknamed Ox. They all get drunk, but Zack claims to be able to drive Lisa’s car home.4 Before they climb into the vehicle, they all note how this is dangerous, because Zack might wreck the car. And (of course) he does just that. Obviously, NBC would claim this was a “message” episode, and it was supposed to show teenagers that alcohol and the highway are a deadly combination. But there’s really no way anyone would learn anything from Zack’s booze cruising. There’s no kid in America who doesn’t know that drinking and driving is dangerous, and there’s no way that you could argue Saved by the Bell made this sentiment any more “in your face” than when Stevie Wonder sang “Don’t Drive Drunk.” It served no educational purpose, and it served no artistic purpose. But what it did was reestablish everyone’s moral reality. If Saved by the Bell was a clichéd, uncreative teen sitcom (and I think we would all agree that it was), it needed to deliver the clichéd, uncreative plot: If these kids drink and drive, they will have to have a bad accident—but no one will actually die, because we all deserve a second chance. As I watched that particular episode in college, I took satisfaction in knowing that American morality was still basically the same as it had been when I was thirteen years old. It proved I still understood how the mainstream, knee-jerk populace looked at life, even though my personal paradigm no longer fit those standards.
Saved by the Bell was well-suited for conventional moralizing, because none of the characters had multifaceted ethics (or even situational ethics). Every decision they made was generated by whatever the audience would expect them to do; it was almost like the people watching the show wrote the dialogue. This was damaging to the Saved by the Bell actors, all of whom went to ridiculous lengths to avoid being typecast as their TV identities once the show ended. Berkley was the most adamant about her reinvention, taking the lead role in the soft-porn box-office failure Showgirls, which even her costars couldn’t fathom. “I wouldn’t see why you’d want to go so far afield to change your image that you’d take a role so demanding or drastic as that,” said a remarkably candid Screech in a 2002 interview with The Onion A.V. Club. “It pretty much was just the exploitation of a Saturday-morning icon, I feel. I don’t think that the movie had any more substance than, ‘Hey, we should go check it out to see the girl from Saved by the Bell naked!’ That’s pretty much what everyone went to the theater to see.”
Yet Berkley was not alone; she was merely the only one who exposed her nipples. Thiessen elected to become the new Shannen Doherty on Beverly Hills, 90210 and smoked pot in her very first episode. Lopez portrayed a homosexual as the star of Breaking the Surface: The Greg Louganis Story. Diamond started a prog rock band (!) who call themselves Salty the Pocket Knife. Gosselaar may have actually made the most disturbing transition, as he dyed his hair black and joined the cast of NYPD Blue, one of the most serious police dramas on TV; he essentially became an altogether different person. Only Lark Voorhies moved in a “logical” direction, taking a role on the soap opera The Bold and the Beautiful.
I’m not sure what all that signifies, really. I suppose it just proves how trapped these people must have felt, although some of that is clearly their own fault; Zack, Slater, Screech, and Kelly all appeared in the lone season of Saved by the Bell: The College Years, and Screech played a faculty member for most of the seven-season run of Saved by the Bell: The New Class. Those latter two shows—neither of which I watched consistently—made for a comfortable transition of loss: I saw the Saved by the Bell characters constantly, then periodically, and then not at all. It was actually a lot like my relationship with the friends from college who used to watch the show with me; I once saw guys like Joel constantly, then periodically, and then never. Which brings me to the aforementioned “Tori Paradox,” a desperate move by the Saved by the Bell producers that accidentally became the program’s most realistic avenue (and probably the clearest example of how there’s nothing more true than a cliché).
The Tori Paradox is a little like the season of Dukes of Hazzard when Bo and Luke were momentarily replaced by their cousins Coy and Vance, two guys who were exactly like them (so much so that the blond guy still preferred to drive). Here’s the crux of the incongruity: For half of the “senior year” at Bayside, Jessie (Berkley) and Kelly (Thiessen) are completely part of the action, just as they’d been for the last three seasons. However, they’re suddenly absent for twelve consecutive episodes, having been replaced by “Tori,” an attractive, brassy brunette in a black leather jacket who displays elements of both their personalities. Within moments of her arrival, Tori is completely absorbed into the Bayside gang; she’s romantically pursued by Zack and Slater and generally behaves as if she has always been one of their closest friends. This lasts until the graduation episode (aired in prime time), when Kelly and Jessie suddenly reappear as if nothing ever happened. Meanwhile, Tori does not appear at graduation and is not even mentioned.
The motivation for these moves was purely practical; Berkley and Thiessen wanted to leave the cast, but NBC wanted to squeeze out a dozen more episodes of a show that was now quite popular (and being rerun four times a day on other networks). NBC essentially shot the graduation special (and another prime-time movie, Saved by the Bell Hawaiian Style), embargoed them for later use, and queued up the Tori era. It was the easiest way to extend the series. However, this rudimentary solution created a seemingly unfathomable scenario: Since both the “Tori episodes” and the “Kelly/Jessie episodes” were shown concurrently—sometimes on the same day—we were evidently supposed to conclude that these adventures were happening at the same time. Whenever we were watching Zack’s attempts to scam on Tori, we were asked to assume that Kelly and Jessie were in the lunch room or at the mall or sick, and it was just a coincidence that nobody ever mentioned them (or introduced them to Tori, or even recognized their existence).
On paper, this seems idiotic, borderline insulting, and—above all—unreal. But the more I think back on my life, the more I’ve come to realize that the Tori Paradox might be the only element of Saved by the Bell that actually happened to me. Whenever I try to remember friends from high school, friends from college, or even just friends from five years ago, my memory always creates the illusion that we were together constantly, just like those kids on Saved by the Bell. However, this was almost never the case. Whenever I seriously piece together my past, I inevitably uncover long stretches where somebody who (retrospectively) seemed among my closest companions simply wasn’t around. I knew a girl in college who partied with me and my posse constantly, except for one semester in 1993—she had a waitressing job at Applebee’s during that stretch and could never make it to any parties. And even though we all loved her, I can’t recall anyone mentioning her absence until she came back. And sometimes I was the person cut out of life’s script: That very same semester, all my coworkers at our college newspaper temporarily decided I was a jerk and briefly froze me out of their lives; we later reunited, but now—whenever they tell nostalgic stories from that period—I’m always confused about why I can’t remember what they’re talking about… until I remember that I wasn’t included in those specific memories. A few years later I started hanging out with a girl who liked to do drugs, so the two of us spent a year smoking pot in my poorly lit apartment while everyone else we knew continued to go out in public; when I eventually rejoined all my old acquaintances at the local tavern, I could kind of relate to how Kelly Kapowski must have felt after Tori evaporated. Coming and going is more normal than it should be.
So what does that mean? Maybe nothing. But maybe this: Conscious attempts at reality don’t work. The character of Angela on ABC’s short-lived drama My So-Called Life was byzantine and unpredictable and emotionally complex, and all that well-crafted nuance made her seem like an individual. But Angela was so much an individual that she wasn’t like anyone but herself; she didn’t reflect any archetypes. She was real enough to be interesting, but too real to be important. Kelly Kapowski was never real, so she ended up being a little like everybody (or at least like someone everybody used to know). The Tori Paradox was a lazy way for NBC to avoid thinking, but nobody watching at home blinked; it was openly ridiculous, but latently plausible. That’s why the Tori Paradox made sense, and why it illustrated a greater paradox that matters even more: Saved by the Bell wasn’t real, but neither is most of reality.
1. Until now, I suppose.
2. This is less true now, since unpopular kids are more willing to wear trench coats to school and kill everybody for no good reason.
3. In fact, M*A*S*H followed this template so consistently that these twists ultimately became completely predictable; whenever I watch M*A*S*H reruns, I immediately assume every guest star is a flawed hypocrite who fails to understand the horror of televised war. It should also be noted that there is one Saved by the Bell script that borrows this formula: When beloved pop singer Jonny Dakota comes to Bayside High to film an antidrug video, we quickly learn that he is actually a drug addict, although that realization is foreshadowed by the fact that Jonny is vaguely rude.
4. It’s been several years since I’ve seen this episode, but what I particularly remember about it is that—while intoxicated—all the kids sing a song in the car… and in my memory, the song they sing is Sweet’s “Fox on the Run.” However, that just can’t be. It was probably something like “Help Me Rhonda.”
Sulking with Lisa Loeb on the Ice Planet Hoth
It’s become cool to like Star Wars, which actually means it’s totally uncool to like Star Wars. I think you know what I mean by this: There was a time in our very recent history when it was “interesting” to be a Star Wars fan. It was sort of like admitting you masturbate twice a day, or that your favorite band was They Might Be Giants. Star Wars was something everyone of a certain age secretly loved but never openly recognized; I don’t recall anyone talking about Star Wars in 1990, except for that select class of übergeeks who consciously embraced their sublime nerdiness four years before the advent of Weezer (you may recall that these were also the first people who told you about the Internet). But that era has passed; suddenly it seems like everyone born between 1963 and 1975 will gleefully tell you how mind-blowingly important the Star Wars trilogy was to their youth, and it’s slowly become acceptable to make Wookie jokes without the fear of alienation. This is probably Kevin Smith’s fault.
What’s interesting about this evolution is that the value of a movie like Star Wars was vastly underrated at the time of its release and is now vastly overrated in retrospect. In 1977, few people realized this film would completely change the culture of filmmaking, inasmuch as this was the genesis of all those blockbuster movies that everyone gets tricked into seeing summer after summer after summer. Star Wars changed the social perception of what a movie was supposed to be; George Lucas, along with Steven Spielberg, managed to kill the best era of American filmmaking in less than five years. Yet—over time—Star Wars has become one of the most overrated films of all time, inasmuch as it’s pretty fucking terrible when you actually try to watch it. Star Wars’s greatest asset is that it’s inevitably compared to 1983’s Return of the Jedi, quite possibly the least-watchable major film of the last twenty-five years. I once knew a girl who claimed to have a recurring dream about a polar bear that mauled Ewoks; it made me love her.
However, the middle film in the Star Wars trilogy, The Empire Strikes Back, remains a legitimately great picture—but not for any cinematic reason. It’s great for thematic, social reasons. It’s now completely obvious that The Empire Strikes Back was the seminal foundation for what became “Generation X.”1 In a roundabout way, Boba Fett created Pearl Jam. While movies like Easy Rider and Saturday Night Fever painted living portraits for generations they represented in the present tense, The Empire Strikes Back might be the only example of a movie that set the social aesthetic for a generation coming in the future. The narrative extension to The Empire Strikes Back was not the Endor-saturated stupidity of Return of the Jedi; it was Reality Bites.
I concede that part of my bias toward Empire probably comes from the fact that it was the first movie I ever saw in a theater. This is a seminal experience for anyone, and I suppose it unconsciously shapes the way a person looks at cinema (I initially assumed all theatrical releases were prefaced by an expository text block that was virtually incomprehensible). The film was set in three static locations: The ice planet Hoth (which looked like North Dakota), the jungle system Dagobah (which was sort of like the final twenty minutes of Apocalypse Now), and the mining community of Cloud City (apparently a cross between Las Vegas and Birmingham, Alabama). It’s often noted by critics that this is the only Star Wars film that ends on a stridently depressing note: Han Solo is frozen in carbonite and torn away from Princess Leia, Luke gets his paw hacked off, and Darth Vader has the universe by the jugular. The Empire Strikes Back is the only blockbuster of the modern era to celebrate the abysmal failure of its protagonists. This is important; this is why The Empire Strikes Back set the philosophical template for all the slackers who would come of age ten years later. George Lucas built the army of clones that would eventually be led by Richard Linklater.
Now, I realize The Empire Strikes Back was not the first movie all future Gen Xers saw. I was eight when I saw Empire, and I distinctly remember that a lot of my classmates had already seen Star Wars (or at least its first theatrical rerelease) and of course they all loved it, mostly because little kids are stupid. But Empire was the first movie that people born in the early seventies could understand in a way that went outside of its rudimentary plotline. And that’s why a movie about the good guys losing—both politically and romantically—is so integral to how people my age look at life.
When sociologists and journalists started writing about the sensibilities that drove Gen Xers, they inevitably used words like angst-ridden and disenfranchised and lost. As of late, it’s become popular to suggest that this was a flawed stereotype, perpetuated by an aging media who didn’t understand the emerging underclass.
Actually, everyone was right the first time.
All those original pundits were dead-on; for once, the media managed to define an entire demographic of Americans with absolute accuracy. Everything said about Gen Xers—both positive and negative—was completely true. Twenty-somethings in the nineties rejected the traditional working-class American lifestyle because (a) they were smart enough to realize those values were unsatisfying, and (b) they were totally fucking lazy. Twenty-somethings in the nineties embraced a record like Nirvana’s Nevermind because (a) it was a sociocultural affront to the vapidity of the Reagan-era paradigm, and (b) it fucking rocked. Twenty-somethings in the nineties were by and large depressed about the future, mostly because (a) they knew there was very little to look forward to, and (b) they were obsessed with staring into the eyes of their own self-absorbed sadness. There are no myths about Generation X. It’s all true.
This being the case, it’s clear that Luke Skywalker was the original Gen Xer. For one thing, he was incessantly whiny. For another, he was exhaustively educated—via Yoda—about things that had little practical value (i.e., how to stand on one’s head while lifting a rock telekinetically). Essentially, Luke went to the University of Dagobah with a major in Buddhist philosophy and a minor in physical education. There’s not a lot of career opportunities for that kind of schooling; that’s probably why he dropped out in the middle of the semester. Meanwhile, Luke’s only romantic aspirations are directed toward a woman who (literally) looks at him like a brother. His dad is on his case to join the family business. Most significantly, all the problems in his life can be directly blamed on the generation that came before him, and specifically on his father’s views about what to believe (i.e., respect authority, dress conservatively, annihilate innocent planets, etc.).
Studied objectively, Luke Skywalker was not very cool. But for kids who saw Empire, Luke was The Man. He was the guy we wanted to be. Retrospectively, we’d like to claim Han Solo was the single-most desirable character—and he was, in theory. But Solo’s brand of badass cool is something you can’t understand until you’re old enough to realize that being an arrogant jerk is an attractive male quality. Third-graders didn’t want to be gritty and misunderstood; third-graders wanted to be Mark Hamill. And even though obsessive thirty-year-old fans of the trilogy hate to admit it, these were always kids’ movies. Lucas is not a Coppola or a Scorsese or even a De Palma—he makes movies that a sleepy eight-year-old can appreciate.2 That’s his gift, and he completely admits it. “I wanted to make a kids’ film that would… introduce a kind of basic morality,” Lucas told author David Sheff. And because the Star Wars movies were children’s movies, Hamill had to be the center of the story. Any normal child was going to be drawn to Skywalker more than Solo. That’s the personality we swallowed. So when all the eight-year-olds from 1980 turned twenty-one in 1993, we couldn’t evolve. We were just old enough to be warped by childhood and just young enough not to realize it. Suddenly, we all wanted to be Han Solo. But we were stuck with Skywalker problems.
There’s a scene late in The Empire Strikes Back where Luke and Vader are having their epic light-saber duel, and one particular shot is filmed from behind Mark Hamill. Within the context of this shot, Darth Vader is roughly twice the physical size of Luke; obviously, the filmmakers are trying to illustrate a point about the massive size of the Empire and the relative impotence of the fledgling Jedi. Not surprisingly, they all go a bit overboard: Vader’s head appears larger than Luke’s entire torso, which sort of overextends any suspension of disbelief a rational adult might harbor. But to a wide-eyed youngster, that image looked completely reasonable: If Vader is Luke’s father (as we would learn minutes later), then Vader should seem as big as your dad.
As the scene continues, Luke is driven out onto a catwalk, where he loses his right hand and is informed that he’s the heir to the intergalactic Osama bin Laden. He more or less tries to commit suicide. Now, Luke is saved from this fate (of course), and since this is a movie, logic tells us that (of course) Vader will fall in the next installment of the series, even though it will take three years to get there. This is all understood. But that understanding is an adult understanding. As an eight-year-old, the final message of The Empire Strikes Back felt remarkably hopeless: Luke’s a good person, but Luke still lost. And it wasn’t like the end of Rocky, where Apollo Creed wins the split decision but Rocky wins a larger victory for the human spirit; Darth Vader beats Luke the way Ike used to beat Tina. A psychologist once told me that—over the span of her entire career—she had never known a man who didn’t have some kind of creepy, unresolved issue with his father. She told me that’s just an inherent part of being male. And here we have a movie where the hero is fighting every ideology he hates, gets his ass kicked, and is then informed, “Oh, and by the way: I’m your dad. But you knew that all along.”
In this same scene, Darth Vader tells Skywalker he has to make a decision: He can keep fighting a war he will probably lose, or he can compromise his ethics and succeed wildly. Many young adults face a similar decision after college, and those seen as “responsible” inevitably choose the latter path. However, an eight-year-old would never sell out. Little kids will always take the righteous option. And what’s intriguing about Gen Xers is they never really wavered from that decision. Luke’s quandary in The Empire Strikes Back is exactly like the situation facing Winona Ryder in 1994’s Reality Bites: Should she stick with the nice, sensible guy who treats her well (Ben Stiller), or should she roll the dice with the frustrating boho bozo who treats her like crap (Ethan Hawke)? For a detached adult, that answer seems obvious; for people who were twenty-one when this movie came out, the answer was just as obvious but completely different. As we all know, Winona went with Hawke. She had to. When Gene Siskel and Roger Ebert reviewed Reality Bites, I recall them complaining that Ryder picked the wrong guy; as far as I could tell, choosing the wrong guy was the whole point.
You don’t often see Reality Bites mentioned as an important (or even as a particularly good) film, but it grows more seminal with every passing year. When it was originally released, all its Gap jokes and AIDS fears and Lisa Loeb songs merely seemed like marketing strategies and ephemeral stabs at insight. However, it’s amazing how one film so completely captured every hyper-conventional ideal of such a short-lived era; Reality Bites is a period piece in the best sense of the term. And in the same way I have a special place in my heart for the first film I saw inside a movie house, I reserve a special place in my consciousness for the first film so unabashedly directed toward the condition of my own life. I was graduating from college the spring Reality Bites was released, and—though it didn’t necessarily seem like a movie about me—it was clearly a movie for me. Eighteen months earlier, everyone I knew had seen Cameron Crowe’s Singles, which we initially viewed as a youth movie. When we went back and rented Singles in the summer of 1994, I was suddenly struck by how old its cast seemed. I mean, they had full-time jobs and wanted to get married and have babies. Singles was just a normal romantic comedy that happened to have Soundgarden on the soundtrack. Reality Bites was an equally mediocre movie, but it validated a lot of mediocre lives, most notably my own. As I stated earlier, all the clichés about Gen Xers were true—but the point everyone failed to make was that our whole demographic was comprised of cynical optimists. Whenever my circa-1993 friends and I would sit around and discuss the future, there was always the omnipresent sentiment that the world was on the decline, but we were somehow destined to succeed individually. Everyone felt they would somehow be the exception within an otherwise grim universe. This is why Ryder had to pick Hawke.
Winona made the kind of romantic decision most people my age would have made in 1994: She pursued a path that was difficult and depressing, and she did so because it showed the slightest potential for transcendence. Not coincidentally, this is also the Jedi’s path. Adventure? Excitement? The Jedi craves not these things. However, he does crave something greater than the bloodless existence of his father. Quite simply, Winona Ryder is Luke Skywalker, only with a better haircut and a killer rack.
Part of the reason so many critics think The Empire Strikes Back is the best Star Wars movie is just a product of how theater works: Empire is the second act of a three-act production, and the second act is usually the best part. The second act contains the conflict. And as someone born in the summer of 1972, I’ve sort of come to realize I’m part of a second-act generation. The most popular three-act play of the twentieth century is obvious: The Depression (Act I), World War II (Act II), and the sock-hop serenity of Richie Cunningham’s 1950s (Act III). The narrative arc is clear. But the play containing my life is a little more amorphous and a little less exciting, and test audiences are mixed: The first act started in 1962 and has a lot of good music and weird costumes, but the second act was poorly choreographed. Half the cast ran in place while the other half just sat around in coffeehouses, and we all tried to figure out what we were supposed to do with a society that had more media than intellect (and more irony than personality). Maybe the curtain on Act II fell with the World Trade Center. And as I look back at the best years of my life, I find myself wondering if maybe I wasn’t unconsciously conditioned to exist somewhere in the middle of two better stories, caught between the invention of the recent past and the valor of the coming future. Personally, I don’t think I truly understand invention or valor; they seem like pursuits that would require a light saber.
Within the circuits of my mind, the moments in The Empire Strikes Back I most adore are whenever Yoda gives his little Vince Lombardi speeches, often explaining that—in life—there is no inherent value to effort. “Do, or do not,” says the greenish Muggsy Bogues. “There is no try.” And that’s an inspiring sentiment. It’s the kind of logic that drives the world. But in my heart of hearts, the part of the film I can’t shake is when Luke Skywalker and Han Solo are riding around Hoth on tauntauns, which are (for all practical purposes) bipedal space horses. When things get rough, Han Solo cuts open the belly of a tauntaun and stuffs Luke inside the carcass; he saves him from a raging blizzard by encasing him in a cocoon of guts. I assume we’re supposed to find this clever and disgusting (or maybe even inventive and heroic). But I just know I’d rather be inside the belly of the beast.
1. I know nobody uses the term Generation X anymore, and I know all the people it supposedly describes supposedly hate the supposed designation. But I like it. It’s simply the easiest way to categorize a genre of people who were born between 1965 and 1977 and therefore share a similar cultural experience. It’s not pejorative or complimentary; it’s factual. I’m a “Gen Xer,” okay? And I buy shit marketed to “Gen Xers.” And I use air quotes when I talk, and I sigh a lot, and I own a Human League cassette. Get over it.
2. Case in point: When Episode I—The Phantom Menace came out in 1999, all the adults who waited in line for seventy-two hours to buy opening-night tickets were profoundly upset at the inclusion of Jar Jar Binks. “He’s annoying,” they said. Well, how annoying would R2D2 have seemed if you hadn’t been in the third fucking grade? Viewed objectively, R2D2 is like a dwarf holding a Simon.
The Awe-Inspiring Beauty of Tom Cruise’s Shattered, Troll-like Face
Last night I awoke at 3:30 A.M. with a piercing pain in my abdomen, certain I had been infected by some sort of Peruvian parasite that was gnawing away at my small intestine. It felt like the Neptunes had remixed my digestive tract, severely pumping up the bass. Now, the details of my illness will not be discussed here, as they are unappetizing. However, there was one upside to this tragedy: I was forced to spend several hours in my bathroom reading old issues of Entertainment Weekly, which inadvertently recalibrated my perception of existence.
As a rule, I do not read film reviews of movies I have not seen. Honestly, I’ve never quite understood why anyone would want to be informed about the supposed value of a film before they actually experience it. Somewhat paradoxically, I used to earn my living reviewing films, and it always made me angry when people at dinner parties would try to make conversation by asking if they should (or shouldn’t) see a specific film; I never wanted to affect the choices those people made. When writing reviews, I actively avoided anything that could be perceived as an attempt at persuasion. Moreover, I never liked explaining the plot of a movie, nor did I think it was remotely interesting to comment on the quality of the acting or the innovation of the special effects.
Perhaps this is why many people did not appreciate my film reviews.
However, the one thing I did like discussing was the “idea” of a given film, assuming it actually had one. This is also why I prefer reading film reviews of movies I’ve already seen; I’m always more interested in seeing if what I philosophically absorbed from a motion picture was conventional or atypical, and that can usually be deduced from what details the critic focuses on in his or her piece. This was particularly true on the morning of my cataclysmic tummy ache, when I stumbled across EW’s January 4, 2002, review of Vanilla Sky.
I am keenly aware that I am the only person in America who thought Vanilla Sky was a decent movie. This was made utterly lucid just forty-five seconds after it ended: As I walked out of the theater during the closing credits, other members of the audience actually seemed angry at what they had just experienced (in the parking lot outside the theater, I overheard one guy tell his girlfriend he was going to beat her for making him watch this picture!). Over the next few days, everything I heard about Vanilla Sky was about how it was nothing but a vanity project for Tom Cruise and that the story didn’t make any sense; the overwhelming consensus was that this was an overlong, underthought abomination. This being the case, I was not surprised to see EW’s Owen Gleiberman give Vanilla Sky a grade of D+. His take seemed in step with most of North America. However, I found myself perturbed with one specific phrase in O.G.’s review:
The way that the film has been edited, none of the fake-outs and reversals have any weight; the more that they pile up, the less we hold on to any of them. We’re left with a cracked hall of mirrors taped together by a What is reality? cryogenics plot and scored to [director] Cameron Crowe’s record collection.
The phrase I take issue with is the prototypically snarky “What is reality?” remark, which strikes me as a profoundly misguided criticism. That particular question is precisely why I think Vanilla Sky was one of the more worthwhile movies I’ve seen in the past ten years, along with Memento, Mulholland Drive, Waking Life, Fight Club, Being John Malkovich, The Matrix, Donnie Darko, eXistenZ, and a scant handful of other films, all of which tangentially ask the only relevant question available for contemporary filmmakers: “What is reality?” It’s insane of Gleiberman to suggest that posing this query could somehow be a justification for hating Vanilla Sky. It might be the only valid reason for loving it.
By now, almost everyone seems to agree that the number of transcendent mass-consumer films shrinks almost every year, almost to the point of their nonexistence. In fact, I’m not sure I’ve heard anyone suggest otherwise. Granted, there remains a preponderance of low-budget, deeply interesting movies that never play outside of major U.S. cities; Todd Solondz’s twisted troika of Welcome to the Dollhouse, Happiness, and Storytelling is an obvious example, as are mildly subversive minor films like Pi and Ghost World. P. T. Anderson and Wes Anderson make great films that get press and flirt with commerce. However, the idea of making a sophisticated movie that could be brilliant and commercially massive is almost unthinkable, and that schism is relatively new. In the early seventies, The Godfather films made tons of money, won bushels of Academy Awards, and—most notably—were anecdotally regarded as damn-near perfect by every non-Italian tier of society, both intellectually and emotionally. They succeeded in every dimension. That could never happen today; interesting movies rarely earn money, and Oscar-winning movies are rarely better than good. Titanic was the highest-grossing film of all time and the 1998 winner for Best Picture, so you’d think that might be an exception—but I’ve never met an intelligent person who honestly loved it. Titanic might have been the least watchable movie of the 1990s, because it was so obviously designed for audiences who don’t really like movies (in fact, that was the key to its success). At this point, winning an Oscar is almost like winning a Grammy.
I realize citing the first two Godfather films is something of a cheap argument, since those two pictures are the pinnacle of the cinematic art form. But even if we discount Francis Ford Coppola’s entire body of work, it’s impossible to deny that the chances of seeing an über-fantastic film in a conventional movie house are growing maddeningly rare, which wasn’t always the case. It wasn’t long ago that movies like Cool Hand Luke or The Last Picture Show or Nashville would show up everywhere, and everyone would see them collectively, and everybody would have their consciousness shaken at the same time and in the same way. That never happens anymore (Pulp Fiction was arguably the last instance). This is mostly due to the structure of the Hollywood system; especially in the early 1970s, everybody was consumed with the auteur concept, which gave directors the ability to completely (and autonomously) construct a movie’s vision; for roughly a decade, film was a director’s medium. Today, film is a producer’s medium (the only director with complete control over his product is George Lucas, and he elects to make kids’ movies). Producers want to develop movies they can refer to as “high concept,” which—somewhat ironically—is industry slang for “no concept”: It describes a movie where the human element is secondary to an episodic collection of action sequences. It’s “conceptual” because there is no emphasis on details. Capitalistically, those projects work very well; they can be constructed as “vehicles” for particular celebrities, which is the only thing most audiences care about, anyway. In a weird way, film studios are almost requiring movies to be bad, because they tend to be more efficient.
However, there’s also a second reason we see fewer important adult films in the twenty-first century, and this one is nobody’s fault. Culturally, there’s an important cinematic difference between 1973 and 2003—and it has to do with the purpose movies serve. In the past, film validated social evolution. Look at Jack Nicholson: From 1969 to 1975, Nicholson portrayed an amazing array of characters—this was the stretch where he made Easy Rider, Five Easy Pieces, Carnal Knowledge, The Last Detail, The King of Marvin Gardens, Chinatown, and One Flew Over the Cuckoo’s Nest. It might be the strongest half decade any actor ever had (or at least the strongest five-year jag since the fall of the studio system). And what’s most compelling is that all the people he played during that run were vaguely unified by a singular quality. For a long time, I could never put my finger on what that was. I finally figured it out when I came across a late-eighties profile on Nicholson in The New York Times Magazine: “I like to play people who haven’t existed yet,” Jack said. “A future something.”
Nicholson was particularly adroit at embodying those future somethings, but he was not alone. This was what good movies did during that period—they were visions of a present tense that was just around the corner. When people talk about the seventies as a Golden Era, they tend to talk about cinematic techniques and artistic risks. What they should be discussing is sociology. The filmmaking process is slow and expensive, so movies are always the last idiom to respond to social evolution; the finest films from the seventies were really just the manifestation of how art and life had changed in the sixties. After a generation of being entertained by an illusion of simplicity and the clarity of good vs. evil, a film like Five Easy Pieces offered the kind of psychological complexity people were suddenly relating to in a very personal way. What people like Nicholson were doing was introducing audiences to the new American reality: counterculture as the dominant culture.
Unfortunately, that kind of introduction can’t happen in 2003. It can’t happen because reality is more transient and less concrete. It’s more difficult for a film to define and validate the current of popular culture, because that once linear current has been splintered; it’s become a cracked Volvo windshield, spider-webbing itself in a manner that’s generally predictable but specifically chaotic (in other words, we all sort of know where the national ethos is going, but never exactly how or exactly why or exactly when). Cinematically, this creates a problem. Traditional character models like “The Everyman” and “The Antihero” and “The Wrongly Accused” are no longer useful, because nobody can agree on what those designations are supposed to mean anymore (in Christopher Nolan’s Memento, all three of those labels could simultaneously be applied to the same person). Modern movies can no longer introduce impending realities; they can’t even explain the ones we currently have. Consequently, there’s only one important question a culturally significant film can still ask: What is reality?
I’ll concede that Vanilla Sky poses that question a little too literally at times, inasmuch as I vaguely recall one scene where Tom Cruise is riding in an elevator and someone looks at him and literally asks, “What is reality?” The cryogenics subplot is also a tad silly, since it sometimes seems like an infomercial for Scientology and/or an homage to Arnold Schwarzenegger’s Total Recall. But fuck it—I’m not going to overcompensate and list a bunch of criticisms about a movie I honestly liked, and I did like Vanilla Sky. And what I liked was the way it presented the idea of objectivity vs. perception, which is ultimately what the “What is reality” quandary comes down to. In Vanilla Sky, Cruise plays a dashing magazine publisher. He likes to casually bang Cameron Diaz, but he falls in love with the less attainable Penelope Cruz (it is a credit to Cruz that she makes this situation seem plausible; Penelope is so cute in this film that I found myself siding with Cruise and thinking, “Why the hell would anyone want to have sex with a repulsive hosebag like Cameron Diaz?”). When Diaz figures out that Cruise has been unfaithful, she goes bonkers and tries to kill them both by driving a car off a bridge. She dies, but Cruise escapes—with a horribly disfigured face. Despite his grotesque appearance, he still pursues a satisfying relationship with Cruz and intends to repair his mangled grill through a series of plastic surgeries. Against all odds, his life (and his face) improves. But then it turns out that the diabolical Diaz is still alive… or maybe not… maybe she and Cruz are actually the same person… or maybe neither exists, because this is all a fantasy. Frankly, whatever answer Crowe wanted us to deduce is irrelevant. What matters is that Cruise ultimately has to decide between a fake world that feels real and a real world that feels like torture.
Cruise chooses the latter, although I’m not sure why. Keanu Reeves makes the same choice in The Matrix, electing to live in a realm that is dismal but genuine. Like Vanilla Sky, the plot of The Matrix hinges on the premise that everything we think we’re experiencing is a computer-generated illusion: In a postapocalyptic world, a band of kung fu terrorists wage war against a society of self-actualized machines who derive their power from human batteries, all of whom unknowingly exist in a virtual universe referred to as “the matrix.” The Matrix would suggest that everything you’re feeling and experiencing is just a collective dream the whole world is sharing; nobody is actually living, but nobody’s aware that they aren’t.
For Reeves’s character, Neo, choosing to live in the colorless light of hard reality is an easy decision, mostly because The Matrix makes the distinction between those two options very clear: reality may be a difficult brand of freedom, but unreality is nothing more than comfortable slavery. Cruise’s decision in Vanilla Sky is similar, although less sweeping; his choice has more to do with the “credibility” of his happiness (his fake life would be good, but not satisfying). Both men prefer unconditional reality. This is possibly due to the fact that both films are really just sci-fi stories, and science fiction tends to be philosophy for stupid people.1 Every protagonist in a sci-fi story is ultimately a moral creature who does the right thing, often resulting in his own valiant destruction (think Spock in The Wrath of Khan). But what’s intriguing about Keanu and Cruise is that I’m not sure I agree that choosing hard reality is the “right thing.” The Matrix and Vanilla Sky both pose that question—which I appreciate—but their conclusions don’t necessarily make logical (or emotional) sense. And that doesn’t mean these are bad movies; it just forces us to see a different reflection than the director may have intended. It probably makes them more intriguing.
The reason I think Cruise and Reeves make flawed decisions is because they are not dealing with specific, case-by-case situations. They are dealing with the entire scope of their being, which changes the rules. I would never support the suggestion that ignorance is bliss, but that cliché takes on a totally different meaning when the definition of “ignorance” becomes the same as the definition for “existence.”
Look at it this way: Let’s assume you’re a married woman, and your husband is having an affair. If this is the only lie in your life, it’s something you need to know. As a singular deceit, it’s a problem, because it invalidates every other truth of your relationship. However, let’s say everyone is lying to you all the time—your husband, your family, your coworkers, total strangers, etc. Let’s assume that no one has ever been honest with you since the day you started kindergarten, and you’ve never suspected a thing. In this scenario, there is absolutely no value to learning the truth about anything; if everyone expresses the same construction of lies, those lies are the truth, or at least a kind of truth. But the operative word in this scenario is everyone. Objective reality is not situational; it doesn’t evolve along with you. If you were raised as a strict Mormon and converted into an acid-eating Wiccan during college, it would seem like your reality had completely evolved—but the only thing that would be different is your perception of a world that’s still exactly the same. That’s not the situation Cruise and Reeves face in these movies. They are not looking for the true answer to one important question; they are choosing between two unilateral truths that apply to absolutely everything. And all the things we want out of life—pleasure, love, enlightenment, self-actualization, whatever—can be attained within either realm. They both choose the “harder” reality, but only because the men who made The Matrix and Vanilla Sky assume that option is more optimistic. In truth, both options are exactly the same. Living as an immaterial cog in the matrix would be no better or worse than living as a fully-aware human; existing in a cryogenic dreamworld would be no less credible than existing in corporeal Manhattan.
The dreamworld in Richard Linklater’s staggering Waking Life illustrates this point beautifully, perhaps because that idea is central to its whole intention. Waking Life is an animated film about a guy (voiced by Wiley Wiggins) who finds himself inside a dream he cannot wake from. As the disjointed story progresses, both the character and the audience conclude that Wiggins is actually dead. And what’s cool about Waking Life is that this realization is not the least bit disturbing. Wiggins’s response is a virtual nonreaction, and that’s because he knows he is not merely in a weird situation; he is walking through an alternative reality. Instead of freaking out, he tries to understand how his new surroundings compare to his old ones.
There are lots of mind-expanding moments in Waking Life, and it’s able to get away with a lot of shit that would normally seem pretentious (it’s completely plotless, its characters lecture about oblique philosophical concepts at length, and much of the action is based on people and situations from Linklater’s 1991 debut film, Slacker). There are on-screen conversations in Waking Life that would be difficult to watch in a live-action picture. But Waking Life doesn’t feel self-indulgent or affected, and that’s because it’s a cartoon: Since we’re not seeing real people, we can handle the static image of an old man discussing the flaws of predestination. Moreover, we can accept the film’s most challenging dialogue exchange, which involves the reality of our own interiority.
The scene I’m referring to is where Wiggins’s character meets a girl and goes back to her apartment, and the girl begins explaining her idea for a surrealistic sitcom. She asks if Wiggins would like to be involved. He says he would, but then asks a much harder question in return: “What does it feel like to be a character in someone else’s dream?” Because that’s who Wiggins realizes this person is; he is having a lucid dream, and this woman is his own subconscious construction. But the paradox is that this woman is able to express thoughts and ideas that Wiggins himself could never create. Wiggins mentions that her idea for the TV show is great, and it’s the kind of thing he could never have come up with—but since this is his dream, he must have done exactly that. And this forces the question that lies behind “What is reality?”: “How do we know what we know?”
This second query is what brings us to Memento, probably the most practical reality study I’ve ever seen on film. The reason I say “practical” is because it poses these same abstract questions as the other films I’ve already mentioned, but it does so without relying on an imaginary universe. Usually, playing with the question of reality requires some kind of Through the Looking Glass trope: In Waking Life, Wiggins’s confusion derives from his sudden placement into a dream. Both The Matrix and Vanilla Sky take place in nonexistent realms. Being John Malkovich is founded on the ability to crawl into someone’s brain through a portal in an office building; Fight Club is ultimately about a man who isn’t real; eXistenZ is set inside a video game that could never actually exist. However, Memento takes place in a tangible place and merely requires an implausible—but still entirely possible—medical ailment.
Memento is about a fellow named Leonard (Guy Pearce) who suffers a whack to the head and can no longer create new memories; he still has the long-term memories from before his accident, but absolutely no short-term recall. He forgets everything that happens to him three minutes after the specific event occurs.2 This makes life wildly complicated, especially since his singular goal is to hunt down the men who bonked him on the skull before proceeding to rape and murder his wife.
Since the theme of Memento is revenge and the narrative construction is so wonderfully unconventional (the scenes are shown in reverse order, so the audience—like Leonard—never knows what just happened), it would be easy to find a lot of things in this movie that might appear to define it. However, the concept that’s most vital is the way it presents memory as its own kind of reality. When Memento asks the “What is reality?” question, it actually provides an answer: Reality is a paradigm that always seems different and personal and unique, yet never really is. Its reality is autonomous.
There’s a crucial moment in Memento where Leonard describes his eternal quest to kill his wife’s murderers, and the person sitting across from him makes an astute observation: This will be the least satisfying revenge anyone will ever inflict. Even if Leonard kills his enemies, he’ll never remember doing so. His victory won’t just be hollow; it will be instantaneously erased. But Leonard disagrees. “The world doesn’t disappear when you close your eyes, does it?” he snaps. “My actions still have meaning, even if I can’t remember them.”
What’s ironic about Leonard’s point is that it’s completely true—yet even Leonard refuses to accept what that sentiment means in its totality. Almost no one ever does.
I’m not sure if anyone who’s not a soap opera character truly gets amnesia; it might be one of those fictional TV diseases, like environmental illness or gum disease. However, we all experience intermittent amnesia, sometimes from drinking Ketel One vodka3 but usually from the rudimentary passage of time. We refer to this phenomenon as “forgetting stuff.” (Reader’s note: I realize I’m not exactly introducing ground-breaking medical data right now, but bear with me.) Most people consider forgetting stuff to be a normal part of living. However, I see it as a huge problem; in a way, there’s nothing I fear more. The strength of your memory dictates the size of your reality. And since objective reality is fixed, all we can do is try to experience—to consume—as much of that fixed reality as possible. This can only be done by living in the moment (which I never do) or by exhaustively filing away former moments for later recall (which I do all the time).
Taoists constantly tell me to embrace the present, but I only live in the past and the future; my existence is solely devoted to (a) thinking about what will happen next and (b) thinking back to what’s happened before. The present seems useless, because it has no extension beyond my senses. To me, living a carpe diem philosophy would make me like Leonard. His reality is based almost entirely on faith: Leonard believes his actions have meaning, but he can’t experience those meanings (or even recall the actions that caused them). He knows hard reality is vast, but his soft reality is minuscule. And in the film’s final sequence, we realize that he understands that all too well; ultimately, he lies to himself to expand it. In a sense, he was right all along; his actions do have meaning, even if he doesn’t remember them. But that meaning only applies to an objective reality he’s not part of, and that’s the only game in town.
It’s significant that the character of Rita in David Lynch’s Mulholland Drive is a woman with total amnesia, and that we’re eventually forced to conclude that she might be nothing but a figment of another character’s imagination.4 It’s almost like Lynch is saying that someone who can’t remember who they are really doesn’t exist. Once your reality closes down to zero, you’re no longer part of it. So maybe that’s the bottom line with all of these films. Maybe the answer to “What is reality?” is this: Reality is both reflexive and inflexible. It’s not that we all create our own reality, because we don’t; it’s not that there is no hard reality, because there is. We can’t alter reality—but reality can’t exist unless we know it’s there. It depends on us as much as we depend on it.
We’re all in this together, people.
Semidepressing side note: Eight months after reading Owen Gleiberman’s review in EW, I woke up with another tummy ache in the middle of the night (this time at a Days Inn in Chicago). The only thing I had to read was the July 15 issue of Time magazine that came with the hotel room, so I started looking at the “letters” page. All the letters were about Tom Cruise (Time had just done a cover story on Cruise after the release of Steven Spielberg’s film version of Philip K. Dick’s Minority Report). These are two of them:
I was glad to see Tom Cruise, the most respected person in show biz, on the cover of Time. In general, Hollywood actors contribute little to society, other than mere amusement. Cruise, however, is different. He isn’t simply another mindless entertainer. He is a role model who overcame his childhood problems by being confident and motivated.
—Daniel Liao, Calgary
Was this article intended to help Tom Cruise regain his clean-cut image? He lost so much of it when he left his wife and children. Most alpha males need to control the women in their lives, and since Nicole Kidman has come into her own, it appears that Cruise moved on to a woman he had more control over. Are you going to be doing a cover on Kidman? She is the one who has had to go through the humiliation of being dumped by a famous husband and deal with being a single mom. She is a much more interesting person.
—Susan Trinidad, Spanaway (Wash.)
My first reaction to these letters was guttural: “Since when does Time publish letters that are written by rival publicists?” However, as I sat there in pain, feeling as though my stomach was being vacuumed through the lower half of my torso and into the bowels of this Illinois hotel, I was struck by the more frightening realization: These are not publicists! They are just everyday people, and they are some of the people I am trying to understand reality alongside. Somehow, there are literate men in Canada who believe Tom Cruise is a respected role model and “different” from all the other actors who contribute nothing but “mere amusement.” Somehow, there are women in the Pacific Northwest who think Nicole Kidman is interesting and wonderful and an icon of single motherhood, and that little five-foot-seven Cruise is an “alpha male” (even though everyone I know halfway assumes he’s gay). These are the things they feel strongly about, because these are things they know to be true.
We don’t have a fucking chance.
1. As opposed to this essay, which tends to be philosophy for shallow people.
2. Unfortunately, this does create the one gaping plot hole the filmmakers chose to ignore entirely, probably out of necessity: If Leonard can’t form new memories, there is no way he could comprehend that he even has this specific kind of amnesia, since the specifics of the problem obviously wouldn’t have been explained to him until after he already acquired the condition.
3. Holland’s #1 memory-destroying vodka!
4. For those of you who’ve seen Mulholland Drive and never came to that conclusion, the key to this realization is when the blonde girl (Naomi Watts) masturbates.
How to Disappear Completely and Never Be Found
I’m having a crisis of confidence, and I blame Jesus.
Actually, my crisis is not so much about Jesus as it is about the impending rapture, which I don’t necessarily believe will happen. But I don’t believe the rapture won’t happen, either; I really don’t see any evidence for (or against) either scenario. It all seems unlikely, but still plausible. Interestingly enough, I don’t think there is a word for my particular worldview: “Nihilism” means you don’t believe in anything, but I can’t find a word that describes partial belief in everything. “Paganism” is probably the closest candidate, but that seems too Druidesque for the style of philosophy I’m referring to. Some would claim that this is kind of like “agnosticism,” but true agnostics always seem too willing to side with the negative; they claim there are no answers, so they live as if those answers don’t exist. They’re really just nihilists without panache.
Not me, though. I’m prone to believe that just about any religious ideology is potentially accurate, regardless of how ridiculous it might seem (or be). Which is really making it hard for me to comment on Left Behind.
According to the blurb on its jacket, the Left Behind book series has more than 40 million copies in print, which would normally prompt me to assume that most of America is vaguely familiar with what these books are about. However, that is not the case. By and large, stuff like Left Behind exists only within that bizarre subculture of “good people,” most of whom I’ve never met and never will. These are the kind of people who are fanatically good—the kind of people who’ll tell you that goodness isn’t even that much of an accomplishment.
Left Behind is the first of eleven books about the end of the world. It was conceptualized by Dr. Tim LaHaye, a self-described “prophecy scholar,” and written by Jerry B. Jenkins, a dude who has written over a hundred other books (mostly biographies about moral celebrities like Billy Graham and Walter Payton). The novel’s premise is that the day of reckoning finally arrives and millions of people just disappear into thin air, leaving behind all their clothes and eyeglasses and Nikes and dental work. All the humans who don’t evaporate are forced to come to grips with why this event happened (and specifically why God did not select them). The answer is that they did not “accept Christ as their personal savior,” and now they have seven years to embrace God and battle the rising Antichrist, a charismatic Romanian named Nicolae Carpathia, who is described by the author as resembling “a young Robert Redford.”
Everything that happens in Left Behind is built around interpretations of Paul’s letters and the Book of Revelation, unquestionably the most fucked-up part of the Bible (except maybe for the Book of Job). It’s the epitome of a cautionary tale; every twist of its plot mechanics screams at the reader to realize that the clock is ticking, but it’s not too late—there is still time to accept Jesus and exist forever in the kingdom of heaven. And what’s especially fascinating about this book is that it’s a best-selling piece of entertainment, even though it doesn’t offer intellectual flexibility; it’s pop art, but it has an amazingly strict perspective on what is right and what is wrong. In Left Behind, the only people who are accepted by God are those who would be classified as fundamentalist wacko Jesus freaks with no intellectual credibility in modern society. Many of the Left Behind characters who aren’t taken to heaven—in fact, almost all of them—seem like solid citizens (or—at worst—“normal” Americans). And that creates a weird sensation for the Left Behind reader, because the post-Rapture earth initially seems like a better place to live. Everybody boring would be gone. One could assume that all the infidels who weren’t teleported into God’s kingdom must be pretty cool: All the guys would be drinkers and all the women would be easy, and you could make jokes about homeless people and teen suicide and crack babies without offending anyone. Quite frankly, my response to the opening pages of Left Behind was “Sounds good to me.”
Things in Left Behind get disconcerting pretty rapidly, however, and part of what I found disconcerting was that its main character is a reporter named Buck Williams, which was also the name of a retired NBA power forward regularly described as the league’s hardest worker. As a result, I kept imagining this bearded six-foot-nine black guy as the vortex of the story, which really wouldn’t have been that much of a stretch, especially since the real Buck Williams was involved with the “Jammin’ Against the Darkness” basketball ministry. If the Rapture came down tonight, I’m guessing Buck would be boxing out J.C. by breakfast.
A mind-numbing percentage of pro athletes are obsessed with God. According to an episode of Bryant Gumbel’s Real Sports on HBO, some studies suggest that as many as 40 percent of NFL players consider themselves “born again.” This trend continues to baffle me, especially since it seems like an equal number of pro football players spend the entire off-season snorting coke off the thighs of Cuban prostitutes and murdering their ex-girlfriends.
That notwithstanding, you can’t ignore the relationship between pro sports and end-of-days theology, and its acceleration as an all-or-nothing way of life. In the 1970s, the template for a religious athlete was a player like Roger Staubach of the Dallas Cowboys, someone who was seen as religious simply because everybody knew he was Catholic. The contemporary roster for God’s Squad is far more competitive; if you’re the kind of fellow who’d be “left behind,” you don’t qualify. These are guys like Kurt Warner of the St. Louis Rams, a person who would consider being called a zealot complimentary.
Warner is an especially interesting case, because his decision to become “born again” appears to have helped his career as a football player. Here was a guy who couldn’t make an NFL roster, was working in a grocery store, and was married to a dying woman. And then—inexplicably—his life completely turns around and he becomes the best quarterback in the NFL (and his wife lives!). Warner gives all the credit for this turnaround to his “almighty savior Jesus Christ,” and that explanation seems no less plausible than any other explanation. In fact, I find that I sort of want to believe him. In the fourth quarter of Super Bowl XXXVI, Warner made a break for the end zone against the New England Patriots; at the time, the Rams were down 17–3, and it was fourth and goal. Warner was hit at the one-yard line and fumbled, and a Patriot returned the ball ninety-nine yards for what seemed to be a game-clinching touchdown. However, this play was erased—quite possibly wiped clean by the hand of God. For no valid reason, Patriots linebacker Willie McGinest blatantly tackled Ram running back Marshall Faulk on the weak side of the play, forcing the referee to call defensive holding. I remember thinking to myself, “Holy shit. That made no sense whatsoever. I guess God really does care about football.” St. Louis retained possession and Warner scored two plays later, eventually tying the game with a touchdown pass to Ricky Proehl with under two minutes remaining.
I’m not sure why God would care about a football game, but he certainly seemed interested in this one. It looked like Warner’s faith was tangibly affecting the outcome, which is a wonderful notion. However, New England ultimately won Super Bowl XXXVI on the final play—a forty-eight-yard field goal, kicked by a guy who grew up in South Dakota and is related to Evel Knievel. You can’t question God, though: The following Monday, I happened to catch a few minutes of The 700 Club, and a Patriot wide receiver was talking about how God is awesome. With competitive spirituality, it’s always a push.
• • •
Part of the never-ending weirdness surrounding Left Behind was the 2000 movie version that starred Kirk Cameron, still best known as Mike Seaver from the ABC sitcom Growing Pains. Cameron portrays the aforementioned Buck Williams, a famous broadcast journalist (this is a slight alteration from the book, where Williams is a famous magazine writer). If one views the literary version of Left Behind as mechanical and didactic, the film version would have to be classified as boring and pedantic. But—once again—there’s something oddly compelling about watching this narrative unfold, and it’s mostly because of Kirk’s mind-bending presence.
It’s always peculiar when someone famous becomes ultrareligious (Prince being the most obvious example), but it’s especially strange when he or she actively tries to advocate that religiosity. Cameron says he became a “believer” when he was seventeen or eighteen, but nobody really cared until he got involved with Left Behind and suddenly became the biggest Christian movie star in America (which—truth be told—is kind of like being the most successful heroin dealer on the campus of Brigham Young University). His wife is also in Left Behind, and she portrays a (relatively) immoral flight attendant named Hattie Durham.
When interviewed about Left Behind when it was first released, Cameron usually played things pretty close to the vest and always stressed that he wanted the film to deliver a point of view about the Bible, but also to work as a commercially competitive secular thriller. However, I did find this mildly controversial exchange from an interview Cameron did with some guy named Robin Parrish on a Christian music site operated by about.com:
How accurate do you think Left Behind is? I mean obviously, there won’t be a real-life Buck or Hattie or whoever. But the events that transpire in the story, how accurate do you think they are? The movie or the book?
Both.
I think one of the most appealing aspects of the Left Behind story is that these are events that could be happening today or tomorrow. It’s very realistic. The events that happen in the story parallel, I think very realistically, the events depicted in the Bible. And whether you’re a pre-Trib Rapture believer, or a mid-Trib, or a post-Trib…
Yeah, is there anything that people who don’t believe in a pre-Tribulation Rapture can take away from this movie?
I’d encourage those people to take a look at the Left Behind film project Web site, which has answers to those kinds of questions. You know… I’m not a pre-Trib or post-Trib expert at defending this kind of stuff, but personally I think the movie is very accurate and in line with the Bible. There are some things in prophecy that we’re just going to have to wait and see how they happen, that we’re not going to really know until they do. The Bible says that Jesus is coming soon though, so I think more important than the pre-Trib or post-Trib debate is all of us being ready before either one happens.
Now, I have no real understanding of what a “pre-Tribulation Rapture” is supposed to signify symbolically; it refers to a Rapture that happens before the technical apocalypse, but I’m not exactly sure how that would be better or worse than a “mid-Tribulation” or “post-Tribulation” Rapture. Honestly, I don’t think it’s important. However, this point is important: Kirk Cameron thinks the idea of 100 million Christians suddenly disappearing is “very realistic.” And I don’t mention this to mock him; I mention this because it’s the kind of realization that significantly changes the experience of watching this movie. In the film, Buck Williams goes from being a normal, successful person to someone who ardently wants the world to realize that there is no future for the unholy and that we must prepare for the political incarnation of Satan; apparently, the exact same thing happened to Cameron in real life. In his mind, he has made a docudrama about a historical event that merely hasn’t happened yet. This is not a former teen actor forced to star in an amateurish production because he needs the money; this is a former teen actor who consciously pursued an amateurish production with the hope of saving mankind. Relatively speaking, all those years he spent with Alan Thicke and Tracey Gold must seem like total shit.
There is something undeniably attractive about becoming a born-again Christian. I hear atheists say that all the time, although they inevitably make that suggestion in the most insulting way possible: Nothing offends me more than those who claim they wish they could become blindly religious because it would “make everything so simple.” People who make that argument are trying to convince the world that they’re somehow doomed by their own intelligence, and that they’d love to be as stupid as all the thoughtless automatons they condescendingly despise. That is not what I find appealing about the Born-Again Lifestyle. Personally, I think becoming a born-again Christian would be really cool, at least for a while. It would sort of be like joining the Crips or the Mossad or Fugazi.
Every rational person will tell you that all the world’s problems ultimately derive from disputes that are perceived by the warring parties as “Us vs. Them.” That seems sensible, but I don’t know if it’s necessarily true; all my problems come from the opposite scenario. I was far more interesting—and probably smarter, in a way—when I refused to recognize the existence of the color gray in my black-and-white universe. When I was twenty-one, I was adamantly anti-abortion and anti–death penalty; these were very clear ideas to me. However, things have since happened in my life, and now I have no feelings about either issue. And I’m sincere about that; I really have no opinion about abortion or the death penalty. Somehow, they don’t even seem important. But that’s what happens whenever you start to understand that most things cannot be emotively understood: You’re able to make better conversation over snifters of brandy, but you become an unfeeling idiot. You go from believing in objective reality to suspecting an objective reality exists; eventually, you start trying to make objectivity mesh with situational ethics, since every situation now seems unique. And then someone tells you that situational ethics is actually an oxymoron, since the idea of ethics is that these are things you do all the time, regardless of the situation. And pretty soon you find yourself in a circumstance where someone asks you if you believe that life begins at conception, and you find yourself changing the subject to NASCAR racing.
This is not a problem for the born again. There are no other subjects, really; nothing else—besides being born again—is even marginally important. Every moment of your life is a search-and-rescue mission: Everyone you meet needs to be converted and anyone you don’t convert is going to hell, and you will be partially at fault for their scorched corpse. Life would become unspeakably important, and every conversation you’d have for the rest of your life (or until the Rapture—whichever comes first) would really, really, really matter. If you ask me, that’s pretty glamorous. And Left Behind pushes that paradigm relentlessly. Another one of its primary characters—airline pilot Rayford Steele—becomes born again after he loses his wife and twelve-year-old son. However, his skeptical college-aged daughter Chloe doesn’t make God’s cut, so much of the text revolves around his attempts to convert Chloe to “The Way.” And the main psychological hurdle Steele must overcome is the fact that he’s not an obtrusive jackass, which Left Behind says we all need to become. “Here I am, worried about offending people,” Rayford thinks to himself at the beginning of chapter 19. “I’m liable to ‘not offend’ my own daughter right into hell.” The stakes are too high to concern oneself with manners.
This is ultimately what I like about the Born-Again Lifestyle: Even though I see fundamentalist Christians as wild-eyed maniacs, I respect their verve. They are probably the only people openly fighting against America’s insipid Oprah Culture—the pervasive belief system that insists everyone’s perspective is valid and that no one can be judged. As far as I can tell, most people I know are like me; most of the people I know are bad people (or they’re good people, but they consciously choose to do bad things). We deserve to be judged.
I realize that liberals and libertarians and Michael Stipe are always quick to quote the Bible when you say something like that, and they’ll tell you, “Judge not, lest ye be judged.” And that’s a solid retort for just about anything, really. But the thing with born agains is that they want to be judged. They can’t fucking wait. That’s why they’re cool.
As I just mentioned, Rayford Steele loses his young son in Left Behind’s Rapture. As it turns out, every young child in this book vanishes, including infants in the process of being born. This is to indicate that they are “innocents” and have done no wrong. And oddly, this was the aspect of Left Behind I found most distasteful.
First of all, it kind of contradicts the book’s premise, since we are constantly told that the ONLY way to get into heaven is to accept Christ, which no four-year-old (much less a four-month-old) could possibly comprehend. Granted, this is mostly a technicality, and I’m sure it’s intentional (for most exclusivist born-again groups, the technicalities are everything; the technicalities are what save you). But my larger issue is philosophical: Why do we assume all children are inherently innocent? Innocent of what? I mean, any grammar school teacher will tell you that “kids can be cruel” on the playground; the average third-grader will gleefully walk up to a six-year-old with hydrocephalus and ask, “What’s wrong with you, Big Head?” And that third-grader knows what he’s doing is evil. He knows it’s hurtful. Little boys torture cats and cute little girls humiliate fat little girls, and they know it’s wrong. They do it because it’s wrong. Sometimes I think children are the worst people alive. And even if they’re not—even if some smiling toddler is as pure as Evian—it’s only a matter of time. He’ll eventually become the fifty-year-old car salesman who we’ll all assume is morally bankrupt until he proves otherwise.
As far as I can tell, the nicest thing you can say about children is that they haven’t done anything terrible yet.
So let’s get to the core question in Left Behind: If the Rapture happened tonight, who gets called up to the Big Show? Judging from the text, the answer is “No one I know, and probably no one who would read this essay.” Left Behind is pretty clear about this, and the authors go to great lengths to illustrate how many of the people passed over by God are fair, moral, and—for the most part—more heroic than prototypical humans. This is a direct reflection of the primary audience for hard-core Christian literature; one assumes those readers would typically possess those same characteristics and simply need a little literary push to become “higher” Christians.
The best example in Left Behind is Rayford Steele, the person with whom we’re evidently supposed to “relate.” Buck Williams is the star and the catalyst (especially in the film version), but his main purpose is to move the plot along and provide the conflict. It’s through Rayford that we are supposed to understand the novel’s theme and experience. The theme is that you’re good, but being good is not enough; the experience is that you cannot be saved until you allow yourself to surrender to faith, even though that’s not really how it works for Rayford.
On the very first page of Left Behind, we learn that Rayford has a bad marriage, and it’s because his wife had developed an “obsession” with religion. We also learn that—twelve years prior—Rayford drunkenly kissed another woman at the company Christmas party and has never really forgiven himself. However, that guilt does not stop him from secretly lusting after the aforementioned Hattie Durham, even though he never actually touches her (interestingly enough, Rayford and Hattie do have a physical relationship in the film version of Left Behind, presumably because director Victor Sarin didn’t think moviegoers would buy the whole Jimmy Carter “I’ve lusted in my heart” sentiment).
Suffice it to say that Rayford would generally be described as a very decent person in the secular universe, which is how most Left Behind readers would likely view themselves. However, he can’t see the espoused “larger truth,” which is that there is only a future for those who take the Kierkegaardian leap and believe everything the Bible states (and as literally as possible).
Rayford can’t do this until his life is destroyed, so his conversion isn’t all that remarkable (it actually seems like the most reasonable decision, considering the circumstances). In many ways, this is the book’s most glaring flaw: It demands blind faith from the reader, but it illustrates faith as a response to terror. And since Left Behind isn’t a metaphor—it presents itself as a fictionalized account of what will happen, according to the Book of Revelation—the justification for embracing Jesus mostly seems like a scare tactic. It’s not a sophisticated reason for believing in God.
Of course, that’s also the point: There is no sophisticated reason for believing in anything supernatural, so it really comes down to believing you’re right. This is another example of how born agains are cool—you’d think they’d be humble, but they’ve got to be amazingly cocksure. And once you’ve crossed over, you don’t even have to try to be nice; according to the born-again exemplar, your goodness will be a natural extension of your salvation. Caring about orphans and helping the homeless will come as naturally as having sex with coworkers and stealing office supplies. If you consciously do good works out of obligation, you’ll never get into heaven; however, if you make God your proverbial copilot, doing good works will just become an unconscious part of your life.
I guess that’s probably the moment where I just stop accepting all this born-again bullshit, no matter how hard I try to remain open-minded. Though I obviously have no proof of this, the one aspect of life that seems clear to me is that good people do whatever they believe is the right thing to do. Being virtuous is hard, not easy. The idea of doing good things simply because you’re good seems like a zero-sum game; I’m not even sure if those actions would still qualify as “good,” since they’d merely be a function of normal behavior. Regardless of what kind of god you believe in—a loving god, a vengeful god, a capricious god, a snooty beret-wearing French god, whatever—one has to assume that you can’t be penalized for doing the things you believe to be truly righteous and just. Certainly, this creates some pretty glaring problems: Hitler may have thought he was serving God. Stalin may have thought he was serving God (or something vaguely similar). I’m certain Osama bin Laden was positive he was serving God. It’s not hard to fathom that all of those maniacs were certain that what they were doing was right. Meanwhile, I constantly do things that I know are wrong; they’re not on the same scale as incinerating Jews or blowing up skyscrapers, but my motivations might be worse. I have looked directly into the eyes of a woman I loved and told her lies for no reason, except that those lies would allow me to continue having sex with another woman I cared about less. This act did not kill 20 million Russian peasants, but it might be more “diabolical” in a literal sense. If I died and found out I was going to hell and Stalin was in heaven, I would note the irony, but I really couldn’t complain. I don’t make the fucking rules.
Just to cover all my doomed bases, I watched a few other apocalyptic movies after Left Behind: I rented The Omega Code and revisited The Rapture. The latter film—a 1991 movie starring Mimi Rogers—was a polarizing attempt to make the end of the world into a conventionally entertaining film, and I still think it’s among the decade’s more interesting movies (at least for its first seventy-five minutes). The Rapture opens with Rogers as a bored sex addict, and it ends with her dragging her child into the desert to wait for God’s wrath. Part of the reason so many critics like this film is because writer/director Michael Tolkin “goes all the way” and resists the temptation to end the film with an unclear conclusion. That’s commendable, but I wonder what the response would have been if Rogers hadn’t questioned God at the very end; her character essentially wants to know why God plays with people like pawns and created a totally fucked world when making a utopia would have been just as easy (and though I realize these are not exactly the most profound of existential questions, it’s hard to argue that they’re not the most important ones, either).
Within the scope of mainstream filmmaking—it was released on the same day as the Joe Pesci vehicle The Super—The Rapture clearly seems like a religious movie. But it’s really not, because it doesn’t have a religious point of view. When push comes to shove, Tolkin’s script adopts a staunchly humanistic take: The Mimi Rogers character asks God why his universe doesn’t make sense. Like most people, she thinks life should be a democracy and that God should behave like an altruistic politician who acts in our best interests. You hear this all the time; critics of organized religion constantly say things like, “There is no way a just God would send a man like Gandhi to hell simply because he’s not a Christian.” Well, why not? I’m certainly pulling for Gandhi’s eternal salvation, but there’s no reason to believe there’s a logic to the afterlife selection process. It might be logical, and it might be arbitrary; in a way, it would be more logical if it was totally arbitrary. But the idea of questioning God’s motives will always be a fiercely American thing to do; it’s almost patriotic to get in God’s face. I’m pretty sure a lot of my friends would love the opportunity to vote against God in a run-off election. Even I’d be curious to see who the other candidate might be (probably Harry Browne).
In contrast, 1999’s The Omega Code is much like Left Behind in that it doesn’t really offer any options besides buying into the whole born-again credit union. Since both stories are so dogged about the Book of Revelation, they share lots of plot points (i.e., two Israeli prophets screaming about the Second Coming, the construction of a church on The Dome of the Rock in Jerusalem, a miracle agricultural product that will end world hunger, etc.). The main difference is that The Omega Code has ties with Michael Drosnin’s The Bible Code, arguably the goofiest book I’ve ever purchased in a lesbian bookstore. Drosnin’s book claims the Torah is actually a three-dimensional crossword puzzle that predicted (among other things) the assassination of Yitzhak Rabin; more importantly, it allows computer specialists to learn just about anything—the date of the coming nuclear war (2006), the coming California earthquake (2010), and the best Rush album (2112). I have no idea why I bought this book (or why it was assumed to be of specific interest to lesbians), but it forms the narrative thread for The Omega Code, a movie that was actually less watchable than Left Behind. Surprisingly, The Omega Code earned about three times as much as Left Behind ($12.6 million to $4.2 million), even though it was made with a much smaller budget ($8 million and $17.4 million, respectively).
I’m not sure why The Omega Code made more at the box office than Left Behind; it’s kind of like trying to deduce why Armageddon grossed more than Deep Impact. But the most plausible explanation is that Left Behind tried a marketing gamble that failed: It was released on video before it was released in theaters. At the end of the VHS version of Left Behind, there is a “special message” from Kirk Cameron. Kirk appears to be standing in the Amazon rain forest while explaining why the movie went to Blockbuster before it went to theaters. “You are part of a very select group,” Cameron tells us, “and that group makes up less than one percent of the country… [but] what about the other 99 percent of the country?” The scheme by Left Behind’s production company (an organization that calls itself Cloud 10) was to have every core reader of Left Behind see the film in their living room in the winter of 1999 and then instruct each person to demand it be played theatrically in every city in America when it was officially released on February 2, 2000. “We need you to literally tell everyone you know,” Kirk stressed in his video message.
I was working as the film critic for the Akron Beacon Journal in early 2000, and—all during January—I kept getting phone calls from strangers, telling me I needed to write a story about some upcoming movie that I had never heard of; I’ve now come to realize that these were Left Behind people. I can’t recall if the film ever opened in Akron or not. Regardless, there is a part of me that would like to see this as an example of how Left Behind is different from other kinds of entertainment. Its audience truly felt it had a social and spiritual import that far exceeded everything else that opened that same weekend (such as Freddie Prinze Jr.’s Head Over Heels). And I’m sure that some of the people who called me that January truly did believe that a Kirk Cameron flick could save the world, and that it was their vocation to make sure all the sinners in suburban Ohio became aware of its existence.
However, I can’t ignore my sinking suspicion that the makers of this movie merely assumed their best hope for commercial success was to manipulate the very people who never needed a movie or a book to learn how to love Jesus. They took people who wanted to rescue my soul and turned them into publicists. Which makes me think the people at Cloud 10 are probably a few tiers below Stalin, too.
There are eleven books in the Left Behind series, and many have excellent subtitles like The Destroyer Is Unleashed and The Beast Takes Possession, both of which may have been Ronnie James Dio records. I am not going to read any more of them, mostly because I know how they’re going to end. I mean, doesn’t everybody? I went back and read the Book of Revelation, which doesn’t make much sense except for the conclusion—that’s where it implicitly states that Jesus is “coming soon.” Of course, Jesus operates within the idiom of infinity, so “soon” might be 30 billion years. Sometimes I find myself wishing that the world would end in my lifetime, since that would be oddly flattering; we’d all be part of humanity’s apex. That’s about as great an accomplishment as I can hope for, since I just don’t see how I will possibly get into heaven, Rapture or otherwise.
When I was a little boy, I used to be very thankful that I was born Catholic. At the time, my Catholicism seemed like an outrageous bit of good fortune, since I considered every other religion to be fake (I considered Lutherans and Methodists akin to USFL franchises). Over time, my opinions on such things have evolved. But quite suddenly, I once again find myself thankful for Catholicism, or at least thankful for its more dogmatic principles. I’m hoping all those nuns were right: I’m angling for purgatory, and I’m angling hard.
CALL ME “LIZARD KING.” NO … REALLY. I INSIST.
When I was leaving Val Kilmer’s ranch house, he gave me a present. He found a two-page poem he had written about a melancholy farmer, and he ripped it out of the book it was in (in 1988, Val apparently published a book of free-verse poetry called My Edens After Burns). He taped the two pages of poetry onto a piece of cardboard and autographed it, which I did not ask him to do. “This is my gift to you,” he said. I still possess this gift. Whenever I stumble across those two pages, I reread Val Kilmer’s poem. Its theme is somewhat murky. In fact, I can’t even tell if the writing is decent or terrible; I’ve asked four other people to analyze its merits, and the jury remains polarized. But this is what I will always wonder: Why did Val Kilmer give me this poem? Why didn’t he just give me the entire book? Was Kilmer trying to tell me something?
The man did not lack confidence.
CRAZY THINGS SEEM NORMAL, NORMAL THINGS SEEM CRAZY
(JULY 2005)
“I just like looking at them,” Val Kilmer tells me as we stare at his bison. “I liked looking at them when I was a kid, and I like looking at them now.” The two buffalo are behind a fence, twenty-five feet away. A 1,500-pound bull stares back at us, bored and tired; he stomps his right hoof, turns 180 degrees, and defecates in our general direction. “Obviously, we are not seeing these particular buffalo at their most noble of moments,” Kilmer adds, “but I still like looking at them. Maybe it has something to do with the fact that I’m part Cherokee. There was such a relationship between the buffalo and the American Indian—the Indians would eat them, live inside their pelts, use every part of the body. There was almost no separation between the people and the animal.”
Val Kilmer tells me he used to own a dozen buffalo, but now he’s down to two. Val says he named one of these remaining two ungulates James Brown, because it likes to spin around in circles and looks like the kind of beast who might beat up his wife. I have been talking to Kilmer for approximately three minutes; it’s 5:20 P.M. on April Fool’s Day. Twenty-four hours ago, I was preparing to fly to Los Angeles to interview Kilmer on the Sunset Strip; this was because Val was supposedly leaving for Switzerland (for four months) on April 3. Late last night, these plans changed entirely: suddenly, Val was not going to be in L.A. Instead, I was instructed to fly to New Mexico, where someone would pick me up at the Albuquerque airport and drive me to his 6,000-acre ranch. However, when I arrived in Albuquerque this afternoon, I received a voicemail on my cell phone; I was now told to rent a car and drive to the ranch myself. Curiously, his ranch is not outside Albuquerque (which I assumed would be the case, particularly since Val himself suggested I fly into the Albuquerque airport). His ranch is actually outside of Santa Fe, which is seventy-three miles away. He’s also no longer going to Switzerland; now he’s going to London.
The drive to Santa Fe on I-25 is mildly Zen: there are public road signs that say “Gusty Winds May Exist.” This seems more like lazy philosophy than travel advice. When I arrive in New Mexico’s capital city, I discover that Kilmer’s ranch is still another thirty minutes away, and the directions on how to arrive there are a little confusing; it takes at least forty-five minutes before I find the gate to his estate. The gate is closed. There is no one around for miles, the sky is huge, and my cell phone no longer works; this, I suppose, is where the buffalo roam (and where roaming rates apply). I locate an intercom phone outside the green steel gate, but most of the numbers don’t work. When an anonymous male voice finally responds to my desperate pleas for service, he is terse. “Who are you meeting?” the voice mechanically barks. “What is this regarding?” I tell him I am a reporter, and that I am there to find Val Kilmer, and that Mr. Kilmer knows I am coming. There is a pause, and then he says something I don’t really understand: “Someone will meet you at the bridge!” The gate swings open automatically, and I drive through its opening. I expect the main residence to be near the entrance, but it is not; I drive at least two miles on a gravel road. Eventually, I cross a wooden bridge and park the vehicle. I see a man driving toward me on a camouflaged ATV four-wheeler. The man looks like a cross between Jeff Bridges and Thomas Haden Church, which means that this is the man I am looking for. He parks next to my rental car; I roll down the window. He is smiling, and his teeth are huge. I find myself staring at them.
“Welcome to the West,” the teeth say. “I’m Val Kilmer. Would you like to see the buffalo?”
“I’ve never been that comfortable talking about myself, or about acting,” the forty-five-year-old Kilmer says. It’s 7:00 P.M. We are now sitting in his lodge, which is more rustic than I anticipated. We are surrounded by unfinished wood and books about trout fishing, and an African kudu head hangs from the wall. There seem to be a lot of hoofed animals on this ranch, and many of them are dead. Kilmer’s friendly ranch hand (a fortyish woman named Pam Sawyer) has just given me a plateful of Mexican food I never really wanted, so Val is eating it for me. He is explaining why he almost never gives interviews and why he doesn’t like talking about himself, presumably because I am interviewing him and he is about to talk about himself for the next four hours. “For quite a while, I thought that it didn’t really matter if I defended myself [to journalists], so a lot of things kind of snowballed when I didn’t rebuke them. And I mainly didn’t do a lot of interviews because they’re hard, and I was sort of super-concerned. When you’re young, you’re always concerned about how you’re being seen and how you’re being criticized.”
I have not come to New Mexico to criticize Val Kilmer. However, he seems almost disturbingly certain of that fact, which is partially why he invited me here. Several months ago, I wrote a column where I made a passing reference to Kilmer being “Advanced.”1 What this means is that I find Kilmer’s persona compelling, and that I think he makes choices other actors would never consider, and that he is probably my favorite working actor. This is all true. However, Kilmer took this column to mean that I am his biggest fan on the planet, and that he can trust me entirely, and that I am among his closest friends. From the moment we look at his buffalo, he is completely relaxed and cooperative; he immediately introduces me to his children, Mercedes (age thirteen) and Jack (age ten). They live with their British mother (Kilmer’s ex-wife Joanne Whalley, his costar from Willow) in Los Angeles, but they apparently spend a great chunk of time on this ranch; they love it here, despite the fact that it doesn’t have a decent television. Along with the bison, the farmstead includes horses, a dog, two cats, and (as of this afternoon) five baby chickens, one of which will be eaten by a cat before the night is over. The Kilmer clan is animal crazy; the house smells like a veterinarian’s office. Jack is predominantly consumed with the chicks in the kitchen and the trampoline in the backyard. Mercedes is an artist and a John Lennon fan; she seems a little too smart to be thirteen. When I ask her what her favorite Val Kilmer movie is, she says, “Oh, probably Batman Forever, but only because it seems like it was secretly made by Andrew Lloyd Webber.”
For the first forty-five minutes I am there, the five of us—Kilmer, his two kids, Pam the ranch hand, and myself—occupy the main room of the ranch house and try to make casual conversation, which is kind of like making conversation with friendly strangers in a wooden airport. Mercedes has a lot of questions about why Kilmer is “Advanced,” and Val mentions how much he enjoys repeating the word Advanced over and over and over again. He tells me about an Afterschool Special he made in 1983 called One Too Many, where he played a teenage alcoholic alongside Mare Winningham (his first teenage girlfriend) and Michelle Pfeiffer (a woman he would later write poetry for). I mention that he seems to play a lot of roles where he’s a drug-addled drunk, and he agrees that this is true. In fact, before I got here, I unconsciously assumed Val would be a drug-addled drunk during this interview, since every story I’ve ever heard about Kilmer implies that he’s completely crazy; he supposedly burned a cameraman with a cigarette on the set of The Island of Dr. Moreau. There are a few directors (most notably Joel Schumacher) who continue to paint him as the most egocentric, unreasonable human in Hollywood. As far as I can tell, this cannot possibly be accurate. If I had to describe Kilmer’s personality in one word (and if I couldn’t use the word Advanced), I would have to employ the least incendiary of all potential modifiers: Val Kilmer is nice. The worst thing I could say about him is that he’s kind of a name-dropper; beyond that, he seems like an affable fellow with a good sense of humor, and he is totally not fucked up.
But he is weird.
He’s weird in ways that are expected, and he’s weird in ways that are not. I anticipated that he might seem a little odd when we talked about the art of acting, mostly because (a) Kilmer is a Method actor, and (b) all Method actors are insane. However, I did not realize how much insanity this process truly required. That started to become clear when I asked him about The Doors and Wonderland, two movies where Kilmer portrays self-destructive drug addicts with an acute degree of realism; there is a scene late in Wonderland where he wordlessly (and desperately) waits for someone to offer him cocaine in a manner that seems painfully authentic. I ask if he ever went through a drug phase for real. He says no. He says he’s never freebased cocaine in his life; he was simply interested in “exploring acting,” but that he understands the mind-set of addiction. The conversation evolves into a meditation on the emotional toll that acting takes on the artist. To get a more specific example, I ask him about the “toll” that he felt while making the 1993 Western Tombstone. He begins telling me about things that tangibly happened to Doc Holliday. I say, “No, no, you must have misunderstood me—I want to know about the toll it took on you.” He says, “I know, I’m talking about those feelings.” And this is the conversation that follows:
CK: You mean you think you literally had the same experience as Doc Holliday?
Kilmer: Oh, sure. It’s not like I believed that I actually shot somebody, but I absolutely know what it feels like to pull the trigger and take someone’s life.
CK: So you’re saying you understand how it feels to shoot someone as much as a person who has actually committed a murder?
Kilmer: I understand it more. It’s an actor’s job. A guy who’s lived through the horror of Vietnam has not spent his life preparing his mind for it. Most of these guys were borderline criminal or poor, and that’s why they got sent to Vietnam. It was all the poor, wretched kids who got beat up by their dads, guys that didn’t get on the football team, guys who couldn’t finagle a scholarship. They didn’t have the emotional equipment to handle that experience. But this is what an actor trains to do. So—standing onstage—I can more effectively represent that kid in Vietnam than a guy who was there.
CK: I don’t question that you can more effectively represent that experience, but that’s not the same thing. If you were talking to someone who’s in prison for murder, and the guy said, “Man, it really fucks you up to kill another person,” do you think you could reasonably say, “I completely know what you’re talking about”?
Kilmer: Oh yeah. I’d know what he’s talking about.
CK: You were in Top Gun. Does this mean you completely understand how it feels to be a fighter pilot?
Kilmer: I understand it more. I don’t have a fighter pilot’s pride. Pilots actually go way past actors’ pride, which is pretty high. Way past rock ’n’ roll pride, which is even higher. They’re in their own class.
CK: Let’s say someone made a movie about you—Val Kilmer—and they cast Jude Law2 in the lead role. By your logic, wouldn’t this mean that Jude Law—if he did a successful job—would therefore understand what it means to be Val Kilmer more than you do?
Kilmer: No, because I’m an actor. Those other people that are in those other circumstances don’t have the self-knowledge.
CK: Well, what if it was a movie about your young life? What if it was a movie about your teen years?
Kilmer: In that case, I guess I’d have to say yes. No matter what the circumstances are, it’s all relative. I think Gandhi had a sense of mission about himself that was spiritual. He found himself in political circumstances, but he became a great man. Most of that story in the film Gandhi is about the politics; it’s about the man leading his nation to freedom. But I know that Sir Ben Kingsley understood the story of Gandhi to be that personal journey of love. It would be impossible to portray Gandhi as he did—which was perfectly—without having the same experience he put into his body. You can’t act that.
CK: Okay, so let’s assume you had been given the lead role in The Passion of the Christ. Would you understand the feeling of being crucified as much as someone who had been literally crucified as the Messiah?
Kilmer: Well, I just played Moses [in a theatrical version of The Ten Commandments]. Of course.
CK: So you understand the experience of being Moses? You understand how it feels to be Moses? Maybe I’m just taking your words too literally.
Kilmer: No, I don’t think so. That’s what acting is.
I keep asking Kilmer if he is joking, and he swears he is not. However, claiming that he’s not joking might be part of the joke. A few weeks after visiting the ranch, I paraphrased the preceding conversation to Academy Award–winning conspiracy theorist Oliver Stone, the man who directed Kilmer in 1991’s The Doors and 2004’s Alexander. He did not find our exchange surprising.
“This has always been the issue with Val,” Stone said via cell phone as his son drove him around Los Angeles. “He speaks in a way that is propelled from deep inside, and he doesn’t always realize how the things he says will sound to other people. But there is a carryover effect from acting. You can never really separate yourself from what you do, and Val is ultrasensitive to that process.”
Stone says Kilmer has substantially matured over the years, noting that the death of Kilmer’s father in 1993 had an immediate impact on his emotional flexibility. “We didn’t have the greatest relationship when we made The Doors,” he says. “I always thought he was a technically brilliant actor, but he was difficult. He can be moody. But when we did Alexander, Val was an absolute pleasure to work with. I think part of his problems with The Doors was that he just got sick of wearing leather pants every day.”
Kilmer and his two kids are playing with the cats. Because there are two of these animals (Ernest and Refrigerator), the living room takes on a Ghost and the Darkness motif. While they play with the felines, Val casually mentions he awoke that morning at 4:00 A.M. to work on a screenplay, but that he went back to bed at 6:00 A.M. His schedule is unconventional. A few hours later, I ask him about the movie he’s writing.
“Well, it’s a woman’s story,” he says cautiously. “It’s about this woman who was just fighting to survive, and everything happened to her.”
I ask him if this is a real person; he says she is. “Her first husband died. Her own family took her son away from her. She marries a guy because he promises to help her get the son back, and then he doesn’t. The new husband is a dentist, but he won’t even fix her teeth. She ends up divorcing him because he gets captured in the Civil War. She meets a homeopathic guy who’s probably more of a mesmerist hypnotist. For the first time in her life, at forty-two years old, she’s feeling good. But then she slips on the ice and breaks every bone in her body, and the doctor and the priest say she should be dead. But she has this experience while she’s praying and she gets up. People literally thought they were seeing a ghost. And then she spent the rest of her life trying to articulate what had happened to her. How was she healed? That’s what the story is about: the rest of her life. Because she lived until she was ninety and became the most famous lady in the United States.”
His vision for this film is amazingly clear, and he tells me the story with a controlled, measured intensity. I ask him the name of the woman. He says, “Mary Baker Eddy. She died in 1910.” We talk a little more about this idea (he’d love to see Cate Blanchett in the lead role), but then the conversation shifts to the subject of Common Sense author Thomas Paine, whom Kilmer thinks should be the subject of Oliver Stone’s next movie.
It is not until the next morning that I realize Mary Baker Eddy was the founder of Christian Science, and that Val Kilmer is a Christian Scientist.
“Well, that is what I am trying to be,” he says while we sit on his back porch and look at the bubbling blueness of the Pecos River. “It is quite a challenging faith, but I don’t think I’m hedging. I just think I am being honest.”
There are many facets to Christian Science, but most people only concern themselves with one: Christian Scientists do not take medicine. They believe that healing does not come from internal processes or from the power of the mortal mind; they believe healing comes from the Divine Mind of God. This is how Kilmer was raised by his parents while growing up in Los Angeles. This belief becomes more complex when you consider the context of the Kilmer family: the son of an engineer and a housewife, Val had two brothers. They lived on the outskirts of L.A., neighbors to the likes of Roy Rogers. Over time, the family splintered. Val’s parents divorced, and he remains estranged from his older brother over a business dispute that happened more than ten years ago (“We have a much better relationship not speaking,” Val says). His younger brother Wesley died as a teenager; Wesley had an epileptic seizure in a swimming pool (Val was seventeen at the time, about to go to school at Juilliard). I ask him if his brother’s epilepsy was untreated at the time of his death.
“Well, this is a complicated answer,” he says. “He was treated periodically. There is a big misnomer with Christian Science; I think maybe that misnomer is fading. People used to say, ‘Christian Science. Oh, you’re the ones that don’t believe in doctors,’ which is not a true thing. It’s just a different way of treating a malady. It could be mental, social, or physical. In my little brother’s case, when he was diagnosed, my parents were divorced. My father had him diagnosed and Wesley was given some medical treatment for his epilepsy. When he was in school, they would stop the treatment. Then periodically, he would go back and forth between Christian Science and the medical treatment.”
I ask him what seems like an obvious question: Isn’t it possible that his brother’s death happened when he was being untreated, and that this incident could have been avoided?
“Christian Science isn’t responsible for my little brother’s death,” he says, and I am in no position to disagree.
We’re still sitting on his porch, and his daughter walks past us. I ask Val if he would not allow her to take amoxicillin if she had a sore throat; he tells me that—because he’s divorced—he doesn’t have autonomous control over that type of decision. But he says his first move in such a scenario would be to pray, because most illness comes from fear. We start talking about the cult of Scientology, which he has heard is “basically Christian Science without God.” We begin discussing what constitutes the definition of religion; Kilmer thinks an institution cannot be classified as a religion unless God is involved. When I argue that this is not necessarily the case, Val walks into the house and brings out the Oxford English Dictionary; I’m not sure how many working actors own their own copy of the OED, but this one does. The print in the OED is minuscule, so he begins scouring the pages like Sherlock Holmes. He pores over the tiny words with a magnifying glass that has an African boar’s tusk as a handle. He finds the definition of religion, but the OED’s answer is unsatisfactory. He decides to check what Webster’s Second Unabridged Dictionary has to say, since he insists that Webster’s Second was the last dictionary created without an agenda. We spend the next fifteen minutes looking up various words, including monastic.
So this, I suppose, is an illustration of how Val Kilmer is weird in unexpected ways: he’s a Christian Scientist, and he owns an inordinate number of reference books.
I ask Val Kilmer if he agrees that his life is crazy. First he says no, but then he (kind of) says yes.
“I make more money than the whole state of New Mexico,” he says. “If you do the math, I’ve probably made as much as six hundred thousand or eight hundred thousand people in this state. And I know that’s crazy. You know, I live on a ranch that’s larger than Manhattan. That’s a weird circumstance.” Now, this is something of a hyperbole; the island of Manhattan is 14,563 acres of real estate, which is more than twice as large as Val’s semiarid homestead. But his point is still valid—he’s got a big fucking backyard, and that’s a weird circumstance. “The thing I’m enjoying more is that there are lots of things that fame has brought me that I can use to my advantage in a quiet way. For example, a friend of mine is an amazing advocate for trees. He’s so incredible and selfless. He’s planted [something like] twenty million trees in Los Alamos. I actually got to plant the twenty-millionth tree. And we got more attention for doing that simply because I’ve made some movies and I’m famous.”
Kilmer’s awareness of his fame seems to partially derive from his familiarity with other famous people. During the two days we spend together, he casually mentions dozens of celebrities he classifies as friends—Robert De Niro, Nelson Mandela, Steve-O. Val tells me that he passed on the lead role in The Insider that eventually went to Russell Crowe; he tells me he dreams of making a comedy with Will Ferrell, whom he considers a genius. At one point, Kilmer does a flawless Marlon Brando impersonation, even adjusting the timbre of his voice to illustrate the subtle difference between the ’70s Brando from Last Tango in Paris and the ’90s Brando from Don Juan DeMarco. We talk about his longtime camaraderie with Kevin Spacey, and he says that Spacey is “proof that you can learn how to act. Because he was horrible when he first started, and now he’s so good.” We talk about the famous women he’s dated; the last serious relationship he had was with Daryl Hannah, which ended a year ago. During the 1990s, he was involved with Cindy Crawford, so I ask him what it’s like to sleep with the most famous woman in the world. His short answer is that it’s awesome. His long answer is that it’s complicated.
“Cindy is phenomenally comfortable in the public scene,” Kilmer says. “I never accepted that responsibility. If you’re the lead in a film, you have a responsibility to the company and the studio. With a great deal of humor, Cindy describes herself as being in advertising. She’s an icon in it; we actually talked about her image in relation to the product. And I was uncomfortable with that. We got in a huge fight one night because of a hat she was wearing. The hat advertised a bar, and I used to be so unreasonable about that kind of thing. I had a certain point of view about the guy who owned the bar, and I was just being unreasonable. I mean, she knows what she’s doing, and she’s comfortable with it. But I knew we were going to go to dinner and that we’d get photographed with this hat, and I was just hard to deal with. It was a really big deal.”
This is the kind of insight that makes talking to an established movie star so unorthodox: Kilmer remembers that his girlfriend wearing a certain hat was a big deal, but he doesn’t think it was a big deal that the girlfriend was Cindy Crawford. Crazy things seem normal, normal things seem crazy. He mentions that he is almost embarrassed by how cliché his life has become, despite the fact that the manifestation of this cliché includes buffalo ownership. However, there are certain parts of his life that even he knows are strange. This is most evident when—apropos of nothing—he starts talking about Bob Dylan.
“I am a friend of Bob’s, as much as Bob has friends,” Kilmer says. “Bob is a funny guy. He is the funniest man I know.” Apparently, Dylan loved Tombstone so much that he decided to spend an afternoon hanging out in Kilmer’s hotel room, later inviting Val into the recording studio with Eric Clapton and casting him in the film Masked and Anonymous. Much like his ability to mimic Brando, Kilmer is able to impersonate Dylan’s voice with detailed exactness and loves re-creating conversations the two of them have had. What he seems to admire most about Dylan is that—more than anything else—Bob Dylan never appears to care what anyone thinks of him. And that is something Val Kilmer still cares about (even though he’d like to argue otherwise).
“I never cultivated a personality,” he says, which is something I am skeptical of, but something I cannot disprove. “Almost everyone that is really famous has cultivated a personality. I can safely say that no one who has ever won an Oscar didn’t want to win an Oscar. I think that Bob Dylan would have loved to win a Grammy during all those years when he knew he was doing his best work. Advanced or not, he was certainly ahead of his time, and he was more worthy than whoever won … Dylan was doing stuff that was so new that everyone hated it. Like when he started playing the electric guitar, for example: he toured for a year, and he was booed every night. Onstage, I could never take three performances in a row and be booed. I just don’t think I’m that strong. I think that I would just go to the producers of the play and say, ‘Well, we tried, but we failed to entertain here.’ But Dylan spent a year being booed. They were throwing bottles at him. And he still can’t play it! Forty years later, he is still trying to play the electric guitar. I mean, he has a dedication to an ideal that I can’t comprehend.”
On the shores of the Pecos River, nothing is as it seems: Kevin Spacey was once a terrible actor, Bob Dylan remains a terrible guitar player, and Val Kilmer is affable and insecure. Crazy things seem normal, normal things seem crazy. Gusty winds may exist.
1. For a protracted explanation of Advancement, see “Advancement,” available in Chuck Klosterman IV or the ebook collection Chuck Klosterman on Living and Society or as an individual ebook essay.
2. I have no idea why I would cast Jude Law in this role, particularly if Heath Ledger were available.
Q: You have been wrongly accused of a horrific crime: Due to a bizarre collision of unfortunate circumstances and insane coincidences, it appears that you have murdered a prominent U.S. senator, his beautiful young wife, and both of their infant children. Now, you did not do this, but you are indicted and brought to trial.
Predictably, the criminal proceedings are a national sensation (on par with the 1994 O. J. Simpson trial). It’s on television constantly, and it’s the lead story in most newspapers for almost a year. The prosecuting attorney is a charming genius; sadly, your defense team lacks creativity and panache. To make matters worse, the jury is a collection of easily confused sheep. You are found guilty and sentenced to four consecutive life terms with virtually no hope for parole (and—since there were no procedural mistakes during the proceedings—an appeal is hopeless).
This being the case, you are (obviously) disappointed.
However, as you leave the courtroom (and in the days immediately following the verdict), something becomes clear: the “court of public opinion” has overwhelmingly found you innocent. Over 95 percent of the country believes you are not guilty. Noted media personalities have declared this scenario “the ultimate legal tragedy.” So you are going to spend the rest of your life amidst the general population of a maximum-security prison … but you are innocent, and everyone seems to know this.
Does this knowledge make you feel (a) better, (b) no different, or (c) worse?
DON’T LOOK BACK IN ANGER
In 1989, my favorite television show was The Wonder Years. This was because The Wonder Years was the only TV program that allowed me to be nostalgic at the age of seventeen; when you haven’t even been alive for two decades, it’s hard to find media experiences that provide opportunities to reminisce about the past. One of the things I particularly loved about The Wonder Years was Kevin Arnold’s incessant concern over the manner in which certain people liked him. (This person was usually Winnie Cooper, but also Becky Slater and Madeline Adams.) The core question was always the same: Did these girls “like him,” or did they “like him like him”? And Kevin’s plight raises some larger queries that apply to virtually every other aspect of being alive, especially for an American in the twenty-first century. How important, ultimately, is likability? Is being likable the most important quality someone can possess, or is it the most inherently shallow quality anyone can desire? Do we need to be liked, or do we merely want to be liked?
I started rethinking Kevin Arnold’s quest for likability while I was reading The New York Times on the day after Christmas.
On the back page of the Times’s “Year in Review” section, there was a graphic that attempted to quantify a phenomenon countless people have discussed over the past three years—the decline in how much other countries “like” the United States. The Times printed a poll comparing how the international opinion of America (in a general sense) evolved between May of 2003 and March of 2004. The results were close to what you’d likely anticipate. In May of 2003, 70 percent of British citizens viewed the U.S. in a manner they described as “favorable.” That number had dropped to 58 percent by March of ’04. In Germany, the “favorable” designation fell from 45 percent to 38 percent over the same time span; in France, 43 to 37. Interestingly (and perhaps predictably), America is now more popular in places like Turkey and Jordan (in Jordan, the percentage of people who saw the U.S. as “very unfavorable” used to be 83 percent, but now that number is down to 67).
The explanation behind these figures, I suppose, is rather obvious; many nations—particularly European ones—don’t like America’s military policy, so they subsequently don’t like America. Meanwhile, countries with a vested interest in America’s occupation of Iraq and Afghanistan have started to like us more. This became a hot issue during the election, as ardent John Kerry supporters insisted that George W. Bush needed to lose his reelection bid because “other countries hate us now.” Yet the more I think about this point, the more I find that argument to be patently ridiculous. There are easily a thousand valid reasons why Bush shouldn’t be president, but how other nations feel about America is not one of them. Americans allow other nations to exercise the kind of sweeping ethnocentrism we would never accept among ourselves.
There are 1.3 billion people in China. We are generally taught to assume that most of these 1.3 billion people are nice, and that they are hardworking, and that they produce their share of handsome low-post NBA athletes who pass out of the double-team exceptionally well. However, these 1.3 billion people also have a problem we’re all keenly aware of; these 1.3 billion people are governed by an administration that has a propensity for violating human rights. As Americans, we are philosophically against this practice. But if someone were to say, “Hey, have you heard about those human rights violations in rural Beijing? I fucking hate the Chinese!” we would immediately assume said person was a closed-minded troglodyte (who would be hating the same people who are having their human rights violated).
From a very young age, we are taught that people are not all the same, and that it’s wrong to hate whole countries based on specific stereotypes. Remember that “freedom fries” fiasco that was supposed to illustrate our anti-French sentiment before we went to war with Iraq? Do you recall how every intellectual in America decried that practice as idiotic? The reason intellectuals made that decree was because this practice was idiotic. No intelligent American took that kind of childish symbolism seriously. It made no sense to hate France (or potatoes) simply because the French had a different foreign policy than the United States, and any conventional liberal would have told you that. But what’s so confusing is that those same left-leaning people are the Americans most concerned about the possibility of France not liking us, or of the British liking us less, or of the Netherlands thinking we’re uncouth. These are the same kind of people who travel from New York to Ireland and proceed to tell strangers in Dublin that they’re actually from Canada. They lie because they are afraid someone might not like them on principle. But why should we care if shortsighted people in other countries are as stupid as the shortsighted rednecks in America?
I can totally understand why someone in Paris or London or Berlin might not like the president; I don’t like the president, either. But don’t these people read the newspaper? It’s not like Bush ran unopposed. Over 57 million people voted against him. Moreover, half of this country doesn’t vote at all; they just happen to live here. So if someone hates the entire concept of America—or even if someone likes the concept of America—based solely on his or her disapproval (or support) of some specific U.S. policy, that person doesn’t know much about how the world works. It would be no different than someone in Idaho hating all of Brazil, simply because their girlfriend slept with some dude who happened to speak Portuguese.
In the days following the election, I kept seeing links to Web sites like www.sorryeverybody.com, which offered a photo of a bearded idiot holding up a piece of paper that apologized to the rest of the planet for the election of George W. Bush. I realize the person who designed this Web site was probably doing so to be clever, and I suspect his motivations were either (a) mostly good or (b) mostly self-serving. But all I could think when I saw it was, This is so pathetic. It’s like the guy on this Web site is actually afraid some anonymous stranger in Tokyo might not unconditionally love him (and for reasons that have nothing to do with either of them). Sometimes it seems like most of American culture has become a thirteen-year-old boy who wants to be popular so much and wants to go to the Snowball Dance so bad and is just so worried about his reputation among a bunch of self-interested classmates whose support is wholly dependent on how much candy he shares.
Now, I am not saying that I’m somehow happy when people in other countries blindly dislike America. It’s just that I’m not happy if they love us, either. I don’t think it matters. The kind of European who hates the United States in totality is exactly like the kind of American who hates Europe in totality; both people are unsophisticated, and their opinions aren’t valid. But our society will never get over this fear; there will always be people in this country who are devastated by the premise of foreigners hating Americans in a macro sense. And I’m starting to think that’s because too many Americans are dangerously obsessed with being liked.
We’re like a nation of Kevin Arnolds; being likable is the only thing that seems to matter to anyone. You see this everywhere. Parents don’t act like parents anymore, because they mainly want their kids to like them; they want their kids to see them as their two best friends. This is why modern kids act like animals. At some point, people confused being liked with being good. Those two qualities are not the same. It’s important to be a good person; it’s not important to be a well-liked person. It’s important to be a good country; it’s not important to be a well-liked country. And I realize there are problems with America, and I’m not necessarily sure if the United States is a good place or a bad place. But the reality behind those problems has no relationship to whether or not France (or Turkey, or Winnie Cooper) thinks we’re cool. They can like us, they can like us like us, or they can hate us. But that is their problem, not ours.
—Esquire, 2005
Q: How would your views about war, politics, and the role of the military change if all future conflicts were fought by armies of robots (that is to say, if all nations agreed to conduct wars exclusively with machines so that human casualties would be virtually nonexistent)?
ROBOTS
Like most middle-class white people who will never be shot at, I’m fascinated by the hyper-desperate, darkly realistic, paper-chasing world of postmodern hip-hop. I’ve learned a lot about life from watching MTV Jams; my understanding of the African American experience comes from street-hardened artists who have looked into the mouth of the lion and scoffed like soldiers. These are people like Shawn Carter (“Jay-Z”), Terius Gray (“Juvenile”), Nasir Jones (“Nas”), and Arturo Molina Jr. (“Frost”), who is technically Mexican American. And, to a lesser extent, Will Smith (“The Fresh Prince of Bel-Air”).
Smith is an intriguing figure, sort of. Unlike his peers, Will Smith has eloquently evolved with the culture that spawned him. Though once merely peeved by his mother’s fashion directives (1988’s “Parents Just Don’t Understand”), he has grown into a mature artist who’s willing to confront America’s single greatest threat: killer robots.
This summer (2004), Smith will star in I, Robot, the cinematic interpretation of nine short stories by Isaac Asimov. When I was in the sixth grade, Asimov struck me as a profoundly compelling figure, prompting me to subscribe to Isaac Asimov’s Science Fiction Magazine, a monthly publication I quit reading after the second installment. (The stories seemed a little implausible.) I did, however, unleash a stirring oral book report on I, Robot, a literary collection that was punctuated by Asimov’s now famous Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
3. Do not talk about Fight Club.
Now, I don’t think I’m giving anything away by telling you that the robots in I, Robot find a loophole to those principles, and they proceed to slowly fuck us over. This is a story that was written half a century ago. However, it paints a scenario we continue to fear. I, Robot was published in 1950, but writers (or at least muttonchopped Isaac) were already terrified about mankind’s bellicose relationship with technology. If we have learned only one thing from film, literature, and rock music, it is this: humans will eventually go to war against the machines. There is no way to avoid this. But you know what? If we somehow manage to lose this war, we really have no excuse. Because I can’t imagine any war we’ve spent more time worrying about.
The Terminator trilogy is about a war against the machines; so is The Matrix trilogy. So was Maximum Overdrive, although that movie also implied that robots enjoy the music of AC/DC. I don’t think the Radiohead album OK Computer was specifically about computers trying to kill us, but it certainly suggested that computers were not “okay.” 2001: A Space Odyssey employs elements of robot hysteria, as does the plotline to roughly 2,001 video games. I suspect Blade Runner might have also touched on this topic, but I honestly can’t remember any of the narrative details; I was too busy pretending it wasn’t terrible. There is even a German electronica band called Lights of Euphoria whose supposed masterpiece is an album titled Krieg gegen die Maschinen, which literally translates as “War Against the Machines.” This means that even European techno fans are aware of this phenomenon, and those idiots generally aren’t aware of anything (except who in the room might be holding the ketamine).
I’m not sure how we all became convinced that machines intend to dominate us. As I type this very column, I can see my toaster, and I’ll be honest: I’m not nervous. As far as I can tell, it poses no threat. My relationship with my toaster is delicious, but completely one-sided. If I can be considered the Michael Jordan of My Apartment (and I think I can), my toaster is LaBradford Smith. I’m never concerned that my toaster will find a way to poison me, or that it will foster a false sense of security before electrocuting me in the shower, or that it will politically align itself with my microwave. My toaster does not want to conquer society. I even played “Dirty Deeds Done Dirt Cheap” in my kitchen, just to see if my toaster would become self-aware and go for my jugular; its reaction was negligible. Machines have no grit.
It appears we’ve spent half a century preparing for a war against a potential foe who—thus far—has been nothing but civil to us; it’s almost like we’ve made a bunch of movies that warn about a coming conflict with the Netherlands. In fact, there isn’t even evidence that robots could kick our ass if they wanted to. In March, a clandestine military group called DARPA (Defense Advanced Research Projects Agency) challenged engineers to build a driverless vehicle that could traverse a 150-mile course in the Mojave Desert; the contest’s winner was promised a cash prize of $1 million. And you know who won? Nobody. Nobody’s robot SUV could make it farther than 7.4 miles. Even with the aid of a GPS, robots are pretty moronic. Why do we think they’ll be able to construct a matrix if they can’t even drive to Vegas?
I suspect all these dystopic “man versus machine” scenarios are grounded in the fact that technology is legitimately alienating; the rise of computers (and robots, and iPods, and nanomachines who hope to turn the world into sentient “gray goo”) has certainly made life easier, but they’ve also accelerated depression. Case in point: if this were 1904, you would not be reading this essay; you would be chopping wood or churning butter or watching one of your thirteen children perish from crib death. Your life would be horrible, but your life would have purpose. It would have clarity. Machines allow humans the privilege of existential anxiety. Machines provide us with the extra time to worry about the status of our careers, and/or the context of our sexual relationships, and/or what it means to be alive. Unconsciously, we hate technology. We hate the way it replaces visceral experience with self-absorption. And the only way we can reconcile that hatred is by pretending machines hate us, too.
It is human nature to personify anything we don’t understand: God, animals, hurricanes, mountain ranges, jet skis, strippers, etc. We deal with inanimate objects by assigning them the human qualities we assume they might have if they were exactly like us. Consequently, we want to think about machines as slaves, and we like to pretend those mechanized slaves will eventually attempt a hostile takeover.
The truth, of course, is that we are the slaves; the machines became our masters through a bloodless coup that began during the Industrial Revolution. (In fact, this is kind of what I, Robot is about, although I assume the Will Smith version will not make that clear.) By now, I think most Americans are aware of that reality; I think any smarter-than-average person already concedes that (a) we’ve lost control of technology, and (b) there’s nothing we can do about it. But that’s defeatist. Openly embracing that despair would make the process of living even darker than it already is; we’d all move to rural Montana and become Unabombers. We need to remain optimistic. And how do we do that? By preparing ourselves for a futuristic war against intelligent, man-hating cyborgs. As long as we dream of a war that has not yet happened, we are able to believe it’s a war we have not yet lost.
But perhaps I’m wrong about all this. Perhaps we humans are still in command, and perhaps there really will be a conventional robot war in the not-so-distant future. If so, let’s roll. I’m ready. My toaster will never be the boss of me. Get ready to make me some Pop-Tarts, bitch.
—Esquire, 2004
Q: Is there any widespread practice more futile than attempting to predict society’s future relationship with technology?
CHAOS
Let’s pretend we could end world hunger with drugs.
Let’s pretend someone invented a single, inexpensive pill that would make eating unnecessary forever. You swallow this pill once, and you’re never hungry again; you’d always remain your ideal weight, and you’d always be in perfect health. And let’s assume this pill could be manufactured anywhere (and by anyone), and it would be impossible to regulate or control. Nobody would ever again starve to death in Africa; nobody would ever need to spend money on groceries or slaughter livestock. All the problems that come with the acquisition and consumption of food would disappear.
This, it can be safely argued, would be positive for mankind.
But let’s add one caveat to this hypothetical: let’s say all this happened suddenly. Let’s say the worldwide distribution of this pill happened in the span of six weeks. For the next ten years, the world would be insane. Millions of farmers would be instantly unemployed. Anybody who makes a living by selling, moving, or preparing food would be obsolete. With no need for farmland, the real estate market would be completely reinvented overnight. Without the structure of meals, day-to-day activities would be drastically different, and it would take decades for this to normalize. In the long run, this evolution would be good for society—but society would not be prepared for the transition. No one has constructed a social framework for a foodless world, and the result would be chaos.
I bring this up because a similar thing is (probably) going to happen to the advertising industry.
Bob Garfield is the advertising critic for Ad Age and the co-host of NPR’s On the Media. He has begun to propagate a theory he’s (somewhat ominously) dubbing “The Chaos Scenario.” The concept is remarkably simple: Garfield basically looked at two trends that everyone recognizes and suddenly realized their combination was going to make the contemporary media implode.
“This scenario is—potentially—the most wonderful thing imaginable for media consumers in a democracy,” Garfield argues, “unless, of course, you’re actively in the marketing or media industry right now and you’re over the age of forty-five. Then you’re fucked big-time.”
Here are the bare bones of Garfield’s “Chaos Scenario” (which I may be oversimplifying, but—truth be told—that’s pretty much what I do for a living), based on two suppositions: The first supposition is that network television is in trouble (and there’s a lot of data to prove this). The population of the United States has increased by 30 million people during the past ten years, but the network audience has managed to decrease by 2 percent over that same span. In 1980, most Americans had three or four channels; now, many have three hundred. With the exception of the Super Bowl, it’s virtually impossible to stop people from changing the channel whenever they see a commercial. Moreover, a growing percentage of Americans now have TiVos and DVRs, and 70 percent of those consumers don’t watch commercials at all. All of this is making it less and less practical for advertisers to use TV as the way to reach people, especially since the cost of advertising on TV keeps increasing. In short, TV advertising is dying—and it’s dying rapidly.
The second supposition is that the advertising role currently played by TV will eventually be adopted by the Internet. Here again, everyone seems to agree that this is inevitable; the only problem is that no one knows how it will work. At the moment, Web advertising (at least to me) seems completely useless. I never pay attention to pop-up ads, and the notion of businesses sponsoring bloggers seems akin to lighting $100 bills on fire and feeding them to a newborn zonkey. Still, my assumption is that companies will eventually find a way to do this effectively. The Internet will replace TV—but that replacement will happen slowly.
In other words, we have one medium that’s collapsing posthaste, and its replacement medium is still under construction. So what happens during the gap in between? What happens when people realize that advertising on TV is a waste of money, but there isn’t any clear alternative? According to Garfield, the answer is media chaos. And it’s coming fast.
“I’m thinking 2010,” he says. “There’s simply no capacity for the on-line world to represent all those old-world advertisers. TV is in trouble; newspapers and magazines are in trouble for similar reasons. This is a really good time to be in the billboard business.”
What you need to remember is that television only exists because of the commercials, and that’s always how it has been. Consciously, we all know this; unconsciously, we sort of convince ourselves otherwise. When we think about TV, we tend to think of TV shows; we think about programming. But if you think about TV as a semi-random collection of advertisements that are simply connected by constructed narratives, it starts to seem like a very dangerous business model. A show like Desperate Housewives is merely underwritten by Tide and Dial soap and Target stores. Underwriting these increasingly expensive shows is earning companies less and less money. At some point in the very near future, companies will realize this business practice is not cost effective, so they’ll just stop underwriting everything. There won’t be anyone to pay for these shows, so there simply won’t be any programming.
This is when the aforementioned big-time fucking occurs.
Now, I know what you’re thinking: “Won’t everything just become pay-per-view? Won’t everything just be on-demand programming?” Possibly—but not right away. Maybe when everyone’s computer is also their television (or vice versa). But that system isn’t set up yet. And even if it could happen instantaneously, that would create a new problem: people want to know about Tide and Dial, and the companies who make them need a way to get consumers that information. The retail economy depends on it.
This is the chaos.
“But wait,” you may be saying to yourself. “What about product placement? Couldn’t the networks just combine advertising and programming into one animal? Isn’t that happening right now?” Well, sort of. Reality TV already relies on that convergence. Shows like The Apprentice and Queer Eye for the Straight Guy are mostly high-end infomercials. There was an episode of Survivor: Palau where a tribe was “rewarded” with Citrus Splash Scope; there were myriad moments on the third season of Project Greenlight where weirdo director John Gulager swilled Stella Artois.
Garfield thinks this is proof that his theory is already in motion.
“Branded entertainment is step number one in chaos theory because audiences will go crazy,” he says. “People hate it when they recognize product placement. It’s strange: Americans will willingly give up huge elements of their civil liberties, but they won’t stand for the corruption of their Hollywood-produced crap.”
On this point I tend to disagree: I don’t think most people born after 1970 have an intense aversion to branded entertainment, and I don’t think people born after 1980 even notice. Personally, I’d accept more branding. Take Arrested Development, for example: Arrested Development is (arguably) the most sophisticated American sitcom ever produced; somewhat predictably, it looks like it will almost certainly be cancelled. This is because it doesn’t earn enough revenue through advertising. But what if the characters on Arrested Development spent the totality of every episode drinking Coca-Cola, and what if they periodically mentioned how refreshing Coke tasted? Or what if all the characters always wore Coca-Cola shirts for the entire program? What if Arrested Development became like stock-car racing, and everything not directly associated with the storyline featured a logo? Would I still watch it?
I think I would. I mean, I still watch The Apprentice.
And this, I suppose, would be Mr. Garfield’s chaos: a fuzzy world where commercials do not exist, yet everything is a commercial (all the time). And its only alternative would be no TV, which means we’d have no public awareness of Tide or Dial or Citrus Splash Scope.
I can’t wait for 2010.
Also, please buy my new magic food pill. It’s terrific.
—Esquire, 2006
Q: While traveling on business, your spouse (whom you love) is involved in a plane crash over the Pacific Ocean. It is assumed that everyone onboard has died. For the next seven months, you quietly mourn. But then the unbelievable happens: it turns out your spouse has survived. He/She managed to swim to a desert island, where he/she lived in relative comfort with one other survivor (they miraculously located most of the aircraft’s supplies on the beach, and the island itself was filled with ample food sources). Against all odds, they have just been discovered by a Fijian fishing boat.
The two survivors return home via helicopter, greeted by the public as media sensations. Immediately upon their arrival, there is an international press conference. And during this press conference, you cannot help but notice how sexy the other survivor is; physically, he/she perfectly embodies the type of person your mate is normally attracted to. Moreover, the intensity of the event has clearly galvanized a relationship between the two crash victims: they spend most of the interview explaining how they could not have survived without the other person’s presence. They explain how they passed the time by telling anecdotes from their respective lives, and both admit to having virtually given up on the possibility for rescue. At the end of the press conference, the two survivors share a tearful good-bye hug. It’s extremely emotional.
After the press conference, you are finally reunited with your spouse. He/She embraces you warmly and kisses you deeply.
How long do you wait before asking if he/she was ever unfaithful to you on this island? Do you never ask? And if your mate’s answer is “yes,” would that (under these specific circumstances) be acceptable?
4, 8, 15, 16, 23, 42
Because I’ve written about reality television on numerous occasions during the past ten years, I am often asked questions about it. Roughly 90 percent of these queries are different versions of the same question: “When will all of this be over?” There seems to be a universally held belief that reality television is a doomed fad, and that the genre has oversaturated the marketplace to an almost unfathomable degree, and that it is only a matter of moments before it all goes away. Everyone also seems to agree that reality programming is becoming more and more scripted, which means we are usually just watching contrived scenarios performed by untrained actors. Moreover, everything else on TV seems to be getting better; televised dramas have never been as complex or engrossing as they are right now. So when you consider how many people hate reality television (and how watchable its unreal competition has become), it’s hard to fathom why slow-paced, quasi-authentic, semi-humiliating game shows can still survive as mainstream entertainment.
And yet they do. And they’re still popular, and they’re still being created, and they’re still prompting ostensibly intelligent people to ask questions such as, “I wonder what kind of world Supernova will sonically inhabit?” Clearly, this genre is more tenacious than logic would dictate. If you’re one of those people still asking “When will all of this be over?” you’re obsessing over the wrong question. The more compelling issue is why people still care about reality TV, especially when there are so many fictional alternatives that are so obviously superior. The easy answer would be to say that this is because most people are stupid, but that conclusion is reductionist, condescending, and wrong; unfortunately, the correct answer is even more depressing. And that explanation is best illustrated through a key ideological difference between Survivor and Lost.
Superficially, Survivor and Lost have a lot in common: both depict disparate groups of people trying to exist on a deserted island while learning how to exist with one another. Lost is probably the best network drama in the history of television1 (the only other candidate might be Twin Peaks). The standards and practices of ABC prevent Lost from being as realistic or deep2 as an HBO series (it’s not as vulgar or intense as The Sopranos or as morally incendiary as Six Feet Under), but the writing is sophisticated and weirdly creative; one of the narrative threads during the first season is loosely based on an alternative universe where Oasis broke up after (What’s the Story) Morning Glory?3 It’s impossible to predict the arc of the Lost narrative, or even to guess how long that arc will last. Conversely, Survivor is static; the program is now entering its thirteenth season, and all of the previous twelve have been slightly different species of the same animal. Its cast members have been ingrained with the history and language of the show, and they all take their strategic cues from prior seasons. Survivor should not be able to compete with a program like Lost, which is better in almost every conceivable way. But it does. And that’s because Survivor has an advantage that Lost could never construct: Survivor—like most reality programming—is powered by the overwhelming significance of jealousy in everyday life. Which is why it still feels real to people, even when they know it mostly isn’t.
At its core, Lost is an adventure story of the classical variety, which is to say it centers on the notion of The Great Man. The main character is a workaholic doctor (Matthew Fox’s Jack Shephard) who’s expected to take care of everyone else, regardless of what the problem is; we slowly learn that his only weakness is overcommitment. The doctor’s ally/nemesis is a wise old man (Terry O’Quinn’s John Locke) who can slay boars with knives and read people’s minds; upon crashing onto the island’s sand, this unassuming cripple becomes the spiritualist equivalent of Nietzsche’s superman. Both of these characters are admirable. On Lost, greatness is everything4—and that makes the show likable. But it also reminds people that Lost is fake, and it suggests that the story will rarely show them glimpses of their own life (which, ultimately, is art’s main function). The wholly constructed world of Lost is how life should be, but isn’t. Meanwhile, the semi-constructed world of Survivor mirrors the way life actually is: every season, the mediocre majority unifies to destroy the unrivaled. After that, it becomes a popularity contest based on lying.
If Dr. Jack and Mr. Locke were characters on Survivor, neither would have any chance of winning. On Survivor, being a successful leader is akin to a death sentence; with the exceptions of Ethan Zohn from Season 3 and Tom Westman from Season 10, the strongest players always lose.5 The game is actively designed to penalize greatness. The perfect Survivor contestant is the kind of paradoxical individual who should not exist: an understated, noncontroversial, virtually invisible person who—for some unknown reason—really, really wants to be on TV. Most importantly, the perfect Survivor contestant needs to be “un-great.” That is the key to winning $1 million. And from a programming perspective, that’s also why audiences will always relate to reality vehicles like Survivor, even when its tangible content seems dull and artificial.
Lost is high-minded and confusing, which makes it entertaining. Survivor is unavoidably immoral, which makes it more prescient. The American world view is predicated on (and measured by) individual success—but success can’t happen to everyone. Greatness is generally not shared. As a result, people are increasingly comfortable with the concept of equalizing the playing field through collusion and resentment. This is why JV basketball players sit on the bench and pray for members of the varsity to tear their ACLs. This is why the only sector of media that seems to be thriving is the coverage of celebrity gossip, an industry flooded with failed sycophants who couldn’t get jobs in real journalism. This is why choosing to position himself as un-great on purpose helped a man win the presidential election in 2000 and 2004. And this is why Survivor still has a place in society: it validates the practice of getting to the top by dragging everyone else to the middle.
Certainly, not all reality shows are based on this kind of philosophical nihilism; Bravo’s Project Runway consistently rewards genuine talent, and MTV’s anachronistic The Real World still (somewhat curiously) has no objective beyond getting cast on the show itself. There are also tangential elements of Survivor that make it specifically unique (for some reason, it’s always interesting to watch strangers starve for nonaesthetic motivations). But its disenchanting sociology is the core explanation as to why reality TV does not disappear. Its dialogue might seem coached and its action might seem staged, but the characters’ motives inevitably strike audiences as sadly plausible. Lost is awesome, but only as long as the storyline remains intense; the moment it gets boring,6 no one will care. All of its Great Men will suddenly seem like improbable caricatures. But Survivor doesn’t have to be interesting in order to be important. All it needs to show is the mendacity of the desperately average, and we will always understand why it is real.
—Esquire, 2006
1. I regret writing this sentence. However, this is not because I retrospectively disagree with my opinion after seeing Lost’s third season; it’s because this single sentence seemed to be the only thing many readers were able to remember about the entire column. Within the context of the piece, it does not matter if Lost is the best show of all time or the thirty-eighth best show of the twenty-first century. My personal opinion about the program’s entertainment value is completely unrelated to the larger point of the essay. Unfortunately, I failed to realize that most people would rather argue over the superficial merits of what they like (or dislike) than consider why they unconsciously like (or dislike) anything.
2. Chiefly, it is difficult to imagine how crash landing on a tropical island filled with polar bears and mystical smoke monsters would not prompt somebody to occasionally ask, “What the fuck is going on here?”
3. Although in this universe (a) they are called Drive Shaft, (b) Liam is the older brother, (c) the group’s biggest song sounds more “Rock ’N’ Roll Star” than “Wonderwall,” and (d) Noel is named Charlie, plays bass, and is not cool.
4. Further proof of this can be seen during the second season of Lost, when the expanded cast added a “Great Crazy Woman” (Michelle Rodriguez) and a “Great Nigerian Stoic” (Adewale Akinnuoye-Agbaje).
5. Interestingly, the 2006 “Race War” edition of Survivor (which the publication of this column directly preceded) may have proved me wrong on this point. The thirteenth season of the series was dominated by two contestants (Yul Kwon and Ozzy Lusth) who were both incredibly skilled and (mostly) honest and forthright. Yul ended up winning the $1 million on a 5–4 jury vote; it was probably the show’s best season since the year 2000. Is this good news for America? Perhaps. Probably not, but perhaps.
6. Which is a risk every time an episode dwells on golf, the baby, or women Hurley is attracted to who don’t listen to the Hold Steady.
Q: The world is ending. It’s ending quickly, and it’s ending dramatically.
It will either end at noon on your fiftieth birthday, or it will end two days after you die (from natural causes) at the age of seventy-five.
Which apocalyptic scenario do you prefer?
TELEVISION
Those who fail to understand the past are doomed to repeat it. However, what if that’s your goal? What if that’s exactly what you want out of life? What if repeating the past—and then repeating it again and again—is the only thing that makes you happy?
If this is indeed the case, you should do what I did: watch VH1 Classic for twenty-four consecutive hours. Nothing wages war with insomnia like wall-to-wall videos from the Reagan era. More important, there is much that can be learned from such an experience. It’s kind of like what Matthew McConaughey said in Dazed and Confused: “I get older, they stay the same age.” Now, I realize he was talking about high-school girls and I’m referring to Duran Duran videos. But what’s really the difference?
12:02 P.M.: The afternoon begins with Tom Petty and the Heartbreakers’ “So You Want to Be a Rock & Roll Star,” which is one of those videos where the band just performs a song live and we’re supposed to like it. If I were a DJ, this would be a snarky song to play immediately following “Rock ’N’ Roll Star” by Oasis. Tom is wearing a sports coat featuring the planets of the solar system (Saturn most prominently) and he’s smiling constantly; I guess he likes his job. This is followed by a clip from an unsmiling Roger Waters, who sings about beautiful women walking their dogs on the Sunset Strip. I sense this shall be a day of paradoxes.
12:23 P.M.: “Every Little Thing She Does Is Magic” is currently on my twenty-one-inch life window, and the Police are dancing around the recording studio like a trio of nappy-haired gnomes. Everyone remembers that the Police wrote a bunch of great songs, but does anyone remember how often they wore stupid hats? Perhaps that was the style of the time. I can totally understand why Stewart Copeland always wanted to punch Sting in the throat. Up next is Duran Duran’s “Planet Earth.” Obviously, the guys in Duran II don’t wear hats because they’re “New Romantic–Looking.” As a consequence, one of the Andys in this band is dressed like a gay pirate and appears to be sporting my sister’s least successful haircut from the spring of 1986.
12:57 P.M.: Okay, so now we’re into the hyper-trippy “All Right Now” by Free, and it’s raising a few questions. Logically, there is no way that everyone in the 1970s was a drug addict; that just couldn’t have been the case. However, this Free footage was clearly made exclusively for people who were completely high. Does this mean that the normal mind-set of mainstream culture in 1970 was akin to the way stoned people view the world in 2003? I mean, maybe even straight people thought they were stoned in the 1970s; maybe that’s how everyone felt, all the time. This might explain how Jimmy Carter got elected; it also might explain why his presidency was tainted by an attack from a giant swimming rabbit. Music can teach us many things.
1:16 P.M.: We’ve moved into the “all-request hour.” Someone has requested Franke and the Knockouts’ “Sweetheart,” which is a song I never even knew existed. I like to imagine that some unemployed slacker is sitting in his double-wide trailer in the middle of the afternoon, and he’s thinking, Ah, yes. All my hard work has finally paid off. At long last, Franke and the Knockouts are back in the public consciousness. And I would wager $10,000 that this person is named Franke.
1:24 P.M.: “Dust in the Wind,” my all-time favorite song about dirt and air, is pulling at my heartstrings while the bearded fellows in Kansas pull at their violin strings. You know, nobody makes truly sad songs anymore. Outside my bedroom window, some city employees are tearing up the sidewalk with jackhammers and playing the new 50 Cent record on a boom box; from what I can deduce, the first four tracks are about killing people and the fifth is about drinking Bacardi. Now, I realize 50 is reflecting the reality of the streets and the frailty of human existence, but didn’t Kansas already do that? Nothing lasts forever, except the earth and sky. I should have become a farmer.
1:52 P.M.: Lindsey Buckingham is trapped in a fish tank and killing his doppelganger with mind bullets. Part of me is tempted to suggest that this low-fi technology and guileless chutzpah is cool and/or Advanced and/or better than videos of the modern age, but I just can’t do it. This is pretty idiotic. No wonder Stevie Nicks had to do all that blow. I’m sure she saw this video in 1982 and thought to herself, I used to share my shawl with that guy?
2:05 P.M.: There may be better guitar riffs than the opening of AC/DC’s “Highway to Hell,” but there can’t be more than five. The singer’s hat sure is stupid, though. I wish Stewart Copeland would punch him in the throat.
2:09 P.M.: I’m watching film footage of Jimi Hendrix’s “Crosstown Traffic,” and it’s one of those montages where we see old scenes of traffic and frigid homeless people and crazy dudes in wheelchairs, and this is supposed to replicate the experience of traveling across Manhattan in 1968. However, I remain certain that this song must be a metaphor for a stubborn woman Hendrix was trying to sleep with, because there is no way Jimi Hendrix would ever be bothered by gridlock. I can’t fathom a scenario where Jimi would have needed to be on time, unless he was late for a taping of The Dick Cavett Show.
2:25 P.M.: One of my all-time favorite video tropes is the “Spontaneous Sex Party in the Classroom” conceit, best personified by Van Halen in “Hot for Teacher” but also exemplified by what I’m watching right now, which is “Sexy + 17” by the Stray Cats. Sadly, the girls in this video don’t look seventeen. This is more like the last few seasons of Happy Days, where a thirty-six-year-old Fonzie would snap his fingers and be groped by two high-school cheerleaders, both of whom were roughly thirty-three.
2:29 P.M.: Speaking of Van Halen, “Sexy + 17” is followed by “(Oh) Pretty Woman,” which is a narrative about a semi-hot woman being sexually tortured by two dwarves, only to be rescued by a cowboy (Eddie Van Halen), a jungle savage (Alex Van Halen), a Samurai warrior (Michael Anthony), and Napoleon Bonaparte (David Lee Roth). This was released in 1982, so I guess this was Van Halen’s Village People period.
2:35 P.M.: Things are really getting excellent: Poison’s “Fallen Angel” is illustrating the cautionary tale of a small-town girl who moves to Los Angeles and immediately becomes a whore. The moral of our story? Never go anywhere and never try anything. Stay home and buy more Poison records.
2:55 P.M.: Blue Öyster Cult (“Godzilla”) and Led Zeppelin (“Whole Lotta Love”) just had a heavyweight heavy-off, and (much to my surprise) the rubber radioactive monster pounds the mudshark out of the German war blimp.
3:05 P.M.: I am informed by VH1 Classic that “We … are … the ’80s,” and this is proven by Lionel Richie’s willingness to dance on the ceiling. I think this video came out at roughly the same point when Lionel hosted the American Music Awards and kept inexplicably repeating the word outrageous, the most overt (and least successful) attempt in pop history to create a national catchphrase. This video ends with Rodney Dangerfield bugging his eyes out and saying, “I shouldn’t have eaten that upside-down cake!” Now that’s a catchphrase.
3:20 P.M.: With the exception of a waifish brunette wearing a negligee and placing her foot in a basin of water, R.E.M.’s video for “The One I Love” is remarkably similar to the opening credits of the old PBS science show 3-2-1 Contact. I’m calling a copyright attorney.
4:02 P.M.: Driven by hot-blooded lust, Gloria Estefan is crawling into my lap and insisting the rhythm is going to get me (tonight). We’ll just see about that, Gloria. You pipe down!
4:08 P.M.: When one really thinks about it, the message of Culture Club’s “Karma Chameleon” is rather brilliant, inasmuch as the song examines the disconnect between the actions of a lover and how those deeds are interpreted. However, this disconnect is significantly downplayed by the video, inasmuch as it appears to glamorize riverboat gambling.
4:18 P.M.: “Raspberry Beret,” the best Prince song ever recorded, is followed by the Bangles’ “Manic Monday,” the best Prince song ever recorded by somebody else. Prince supposedly gave “Manic Monday” to Susanna Hoffs in the hope that she would sleep with him. If I were Prince, that’s all I would ever do—I’d write airtight singles for every female musician I ever met. As far as I can tell, the reason you write great songs is to become a rock star, and the reason you become a rock star is to have sex with beautiful, famous women. Why not cut out the middleman? Prince is a genius.
4:48 P.M.: Here is what I am learning from “Our House” by Madness: never invite ska musicians into your home, because they’re all too fucking happy. “Our House” and Eddy Grant’s “Electric Avenue” were my favorite songs in fifth grade. Man, I am so glad I got into Mötley Crüe.
5:11 P.M.: Ian Astbury wears sunglasses while singing “Whiskey Bar” with two surviving members of the Doors. Time to get nervous.
5:23 P.M.: In 1984, .38 Special released a record called Tour de Force. Do you think they were serious about this? I mean, do you think they were sitting in the studio, working on tunes like “If I’d Been the One,” and they eventually just looked each other in the eyes and said, “This is it. This is our tour de force.” I’m sure this must have happened, because why else would you make a video where a bunch of wild horses run through a prairie fire?
5:49 P.M.: I’m quite enjoying Michael Sembello’s “Maniac.” However, I’m a tad baffled: How did Flashdance ever get produced theatrically? The movie itself isn’t necessarily bad (it’s kind of good, sort of). But how did anyone pitching the script ever get past the segment of the description where he’d have to say, “Okay, now here’s the key—this girl is also a professional welder.” Because I’m sure every studio executive responded by saying, “She’s what? A professional wrestler?” And then the guy pitching the script would have to go, “No no no, I said welder,” and they’d have a twenty-minute conversation about how to strike an arc and why watching a woman do this would be sexy. Which it is, but that doesn’t make it any easier to explain.
6:00 P.M.: The Metal Mania hour opens with “Summertime Girls” by Y&T, which makes me wish my apartment was an ’84 Caprice Classic. Beautiful women are wearing black leather outfits on the sands of Malibu, and that can’t be comfortable. Luckily, they remove them in order to don black lingerie, which is evidently what they wear when they play beach volleyball. I can totally relate to this.
6:07 P.M.: Go ahead and call me sentimental if you must, but I will always prefer the Def Leppard videos where the drummer still has both his arms.
6:12 P.M.: Memory is a strange thing. Example: I completely recalled that the Scorpions’ “Rock You Like a Hurricane” video was about the band being locked inside a steel cage while hundreds of sex-starved women tried to sexually attack them. However, I had somehow blocked out the fact that this video also involved leopards.
6:25 P.M.: If I have a persecution complex (and I do), it undoubtedly came from watching Twisted Sister videos, namely “I Wanna Rock.” If left to my own devices, I would have never realized how much society was actively trying to stop me from listening to Twisted Sister.
6:42 P.M.: I’m watching “Girls, Girls, Girls” right now. One of the strip clubs Mötley Crüe mentions in this song is the Body Shop on the Sunset Strip, and every time I’m in L.A. I end up walking right past it. Part of me has always wanted to go in there, mostly because of this song. But I never do, mostly because of this song.
6:55 P.M.: King Kobra. Kool.
7:01 P.M.: By some act of God, today’s episode of Headline Act is about KISS. Paul Stanley gives me advice on how to live my life before playing “Rock and Roll All Nite.” Gene Simmons explains that the KISS Army is a volunteer army. True dat.
7:32 P.M.: An interesting aside has just occurred to me: VH1 Classic shows no commercials (just promos for VH1). It’s been a long time since I’ve watched this much television without someone trying to sell me something. However, I suppose VH1 is selling me something; they’re selling nostalgia, which means they’re selling my own memories back to me, which means they are selling me to me. I am both the commodity and the consumer. So what’s the margin on that?
7:40 P.M.: Whitney Houston tells me she gets so emotional, baby, and I believe her. This video came out years before she went fucknuts, but she already seems pretty bizarre and skeletal. Fourteen minutes later, Aretha Franklin sings “Freeway of Love.” She is half as bizarre and forty times less skeletal.
8:06 P.M.: Okay, here’s something I failed to anticipate: it turns out VH1 Classic operates on some kind of “block system,” because they just played Tom Petty’s “So You Want to Be a Rock & Roll Star” (again), and now they’re playing the same Roger Waters shit I saw at 12:05. I am now going to have to spend the next eight hours rewatching the exact same videos I just spent the previous eight hours watching, in the exact same sequence. If I were a member of al Qaeda, this would be enough to make me talk.
8:28 P.M.: This is all so idiotically meta. Because this is VH1 Classic, all these videos are things I saw in the distant past. They make me think of junior high. But because I just finished watching these same exact clips eight hours ago, my window for nostalgia is much smaller. I am now nostalgic for things that just happened. So the second time I see Fine Young Cannibals’ “Good Thing,” it makes me nostalgic for 12:30, which was when I had General Tso’s chicken for lunch. Yeah, those were GTs.
8:57 P.M.: Let me be honest about something: I am not the first person who came up with the idea of watching rock videos and writing about the experience. In 1992, a brilliant guy named Hugh Gallagher locked himself in a hotel room and watched MTV for seven straight days (this is back when MTV still played videos). I recall him writing that a Black Crowes antiheroin video made him want to do heroin. That’s nothing. He should have watched this Free video twice!
9:10 P.M.: There is no way Derek Wittenburg can handle Clyde Drexler off the dribble, and Thurl Bailey cannot match up with Akeem Olajuwon on the block. I am certain Houston will win the 1983 NCAA championship. Oh, fuck … this is ESPN Classic. Sorry.
9:16 P.M.: The first time I saw Triumph’s video for “Somebody’s Out There,” I failed to notice that it inexplicably involved a woman looking into a microscope. Maybe all this repetition will pay dividends.
10:19 P.M.: In the world of Deep Purple’s “Knocking at Your Backdoor,” windmills are remarkably prominent.
10:31 P.M.: Earlier today, I saw Van Halen’s “(Oh) Pretty Woman” and merely thought it was strange. Upon further review, this is the craziest thing I have ever seen in my entire life.
10:51 P.M.: I was wrong about something else this afternoon: upon further review, Zeppelin is substantially heavier than BÖC. I had just been distracted by the two-minute drum solo near the end of “Godzilla.” It also dawned on me (during Jimmy Page’s solo) that some yet-to-be-invented band should make an homage to early-’70s psychedelic acid-rock videos that transpose live performance with still photography.1
12:07 A.M.: Well, it’s now been twelve hours since I started this project. I think I’m holding up okay, although it’s a lot less fun to watch videos when you always know what’s coming up next. This is becoming a Groundhog Day fiasco. But static stimuli make you consider curious things: like, was Boy George attractive? And I don’t mean attractive as a man, nor do I mean attractive as a woman. It’s more like, was he attractive as a human? And why does the answer to that question suddenly seem so different than the answer to the first question?
12:46 A.M.: Corey Hart looks exactly like a kid I attended basketball camp with in seventh grade. That guy had no game whatsoever. He did, however, upturn the collar of his IZOD, just as Corey does in the video from “Never Surrender,” which I’m now watching. I think that kid from basketball camp was named “Corey,” too. Or maybe it was Monroe. Oh well, let’s move on.
1:20 A.M.: I have very mixed feelings about this REO Speedwagon video (“Roll with the Changes”). That whole era—1979 to 1983, roughly—was definitely the worst period in the history of rock music. But I think it’s probably my favorite era of rock music, and my reasons for feeling this way are complex. At risk of getting all pseudo-Zen, I don’t like listening to “Roll with the Changes,” but I like the way it sounds. And I don’t like looking at REO Speedwagon, but I like the way they look. The bottom line, I suspect, is that there was never another time where the gap between “totally great” and “completely terrible” was so minuscule, which is why I’m glad VH1 Classic exists.
2:04 A.M.: “Let It Go” by the Japanese metal band Loudness includes ample footage of industrial power saws slicing through tree trunks (we’re back to the aforementioned Metal Mania hour). It would be fascinating to interview the director of this video today, because I’d love to hear him try and explain what he was trying to convey with this imagery. There is no plausible explanation: this is “heavy metal.” It’s not “heavy lumber” (and even if this film was conceptualized by some forward-thinking Tokyo auteur who spoke no English whatsoever, there’s no way he could misinterpret that). Is this supposed to mean the music of Loudness will attack the listener with the frenzied power of sharpened steel? If so, I guess that makes us the trees.
2:14 A.M.: Never before have I been so well informed about VH1’s programming schedule. If you have any questions about upcoming episodes of Driven, feel free to e-mail me at [email protected].
2:41 A.M.: “Girls, Girls, Girls,” again, again, again. What we learn from this video is that there are two kinds of strippers in this world: those who smile and those who don’t. The ones who don’t are apparently trying to seem sultrier, but I prefer the ones who smile. I get the impression the guys in Mötley Crüe spend less time worrying about this, though.
3:00 A.M.: New (old) videos start in an hour. I am so … oh, I don’t know. Stoked?
3:05 A.M.: The triumphant return of that thirty-minute KISS retrospective I already watched at 7:00 P.M. Paul Stanley compares KISS in 1972 to a “baby piranha.” Later, he discusses the concept of freedom and its application to the video for “Tears Are Falling.” He’s a goddamn prophet.
4:01 A.M.: Oh my god. Oh my god oh my god oh my god. It’s Tom Petty’s “So You Wanna Be a Rock & Roll Star.” This block of All-Star Jams is starting over again. This can’t be happening. But it’s happening. Oh my god.
4:41 A.M.: I’ve now seen Paul Carrack’s “Don’t Shed a Tear” thrice, and it’s not getting any better. I hate this. I hate Paul Carrack. I hate myself. But I will not shed a tear, because Paul Carrack understands me better than I understand myself.
4:47 A.M.: Some VJ named Eddie Trunk just implied that “Spill the Wine” by Eric Burdon and War helped end the Vietnam War.
5:42 A.M.: The video for Taco’s “Puttin’ on the Ritz” is not remotely akin to the way I remember it from Friday Night Videos. It seems to be set in a postapocalyptic haunted mansion, occupied by goth witches and tuxedo-clad warlocks wielding Darth Vader’s light sabers. I suddenly have an urge to locate my twelve-sided die and roll up some hit points.
6:02 A.M.: At long last, a format change: since it’s now “officially” Tuesday, we have entered Tuesday Two Play, which means I get to see two consecutive videos by every artist who appears. We begin with Bruce Springsteen doing “My Hometown” (live, with Clarence Clemons on tambourine) and “Thunder Road.” Back in reality, the sun has risen in the east, and people I will never know are jogging outside my bedroom window. The world is a foreign place. I do not belong here.
6:29 A.M.: I drift into shallow slumber and awake from a horrifying dream: a thin man is waving a bouquet of flowers at me, and I am struck into a coma. When the coma is shattered, I find myself half naked, confused about my sexual identity, and overcome by sadness. I think a train may be involved, and possibly a double-decker bus. But then I realize something else: I’ve been awake this whole time. These are just Smiths videos.
7:45 A.M.: Neil Young and Pearl Jam keep on rockin’ in the free world. Van Halen asks me if this is love while swigging Jack Daniel’s from the stage, and I have no valid answer. An underage girl on the beach says she wants candy, and it seems dirty. And it is.
10:03 A.M.: I’m running out of material. I just watched David Bowie’s “Ashes to Ashes,” and all I could think to write was, “Hmm … this looks like a Dr. Who episode.” I think I actually made the joke yesterday. Now I’m watching a pair of Steve Winwood videos and I can’t remember what these songs are titled, even when I’m hearing the chorus. I feel eight hundred years old.
10:17 A.M.: Whenever I’d listen to Toto’s “Africa,” I always assumed the song was using Africa as a metaphor. However, this video suggests the song is literally about the continent itself (and maybe about an African American travel agent, although I can’t be sure), so now I’m confused. It definitely has a globe, though. Also, what does “Africa” have to do with the movie Fatal Attraction? I swore I just heard some VJ talking about that movie (and its relationship to Toto). I struggle.
10:55 A.M.: Here’s an idea: Why doesn’t someone create a network called CNN Classic, which could be a twenty-four-hour channel of old news broadcasts? They could air old episodes of 60 Minutes and the wall-to-wall coverage that was shown during memorable national disasters and random episodes of World News Tonight from the 1970s. They could rebroadcast all the news reports from the day Robert F. Kennedy was shot and the real-time news feeds from the 1986 Challenger explosion. This idea seems unspeakably brilliant to me, and I honestly can’t believe I’m the only person who ever got high and came up with it.
11:35 A.M.: Okay, we’re less than thirty minutes away from the end of this joy ride, and I’m watching a Bryan Ferry video that’s primarily composed of unicorn footage from the movie Legend. I should retire right now. This is undoubtedly the apex of my career as a journalist.
11:58 A.M.: Well, this is it. The end of the road. And who do I see when I reach nirvana? No, not Nirvana; it’s Cher (“Heart of Stone”), and I think she’s singing about people who died in Vietnam. And—somehow—this makes perfect sense to me. Nature has created no being as irrepressible as Cher, a woman who keeps coming back in order to remind us that she used to be somebody (and will therefore be somebody forever). This is why VH1 Classic exists—it’s a network for people who live exclusively in the past and the future, forever ignoring the present tense. Which means it’s for pretty much everybody over the age of eighteen and under the age of forty-five. And when I see Cher again at 7:58 P.M., this will still be true, just as it was eight hours ago.
—SPIN.com, 2003
1. And this band should come from Scandinavia and be called “Dungen.”
“Ha ha,” he said. “Ha ha.”
1 Sometimes writing is difficult. Sometimes writing is like pounding a brick wall with a ball-peen hammer in the hope that the barricade will evolve into a revolving door. Sometimes writing is like talking to a stranger who’s exactly like yourself in every possible way, only to realize that this stranger is boring as shit. In better moments, writing is the opposite of difficult—it’s as if your fingers meander arbitrarily in crosswise patterns and (suddenly) you find yourself reading something you didn’t realize you already knew. Most of the time, the process falls somewhere in between. But there’s one kind of writing that’s always easy: Picking out something obviously stupid and reiterating how stupid it obviously is. This is the lowest form of criticism, easily accomplished by anyone. And for most of my life, I have tried to avoid this. In fact, I’ve spent an inordinate amount of time searching for the underrated value in ostensibly stupid things. I understand Turtle’s motivation and I would have watched Medellin in the theater. I read Mary Worth every day for a decade. I’ve seen Korn in concert three times and liked them once. I went to The Day After Tomorrow on opening night. I own a very expensive robot that doesn’t do anything. I am open to the possibility that everything has metaphorical merit, and I see no point in sardonically attacking the most predictable failures within any culture. I always prefer to do the opposite, even if my argument becomes insane by necessity.
But sometimes I can’t.
Sometimes I experience something so profoundly idiotic—and so deeply universal—that I cannot find any contrarian perspective, even for the sole purpose of playful contrarianism. These are not the things that are stupid for what they are; these are the things that are stupid for what they supposedly reflect about human nature. These are things that make me feel completely alone in the world, because I cannot fathom how the overwhelming majority of people ignores them entirely. These are not real problems (like climate change or African genocide), because those issues are complex and multifaceted; they’re also not intangible personal hypocrisies (like insincerity or greed), because those qualities are biological and understandable. These are things that exist only because they exist. We accept them, we give them a social meaning, and they become part of how we live. Yet these are the things that truly illustrate how ridiculous mankind can be. These are the things that prove just how confused people are (and will always be), and these are the things that are so stupid that they make me feel nothing. Not sadness. Not anger. Not guilt. Nothing.
These are the stupidest things our society has ever manufactured.
And—at least to me—there is one stupid idea that towers above all others. In practice, its impact is minor; in theory, it’s the most fucked-up media construction spawned by the twentieth century. And I’ve felt this way for (almost) my entire life.
I can’t think of anything philosophically stupider than laugh tracks.
2 Perhaps this seems like a shallow complaint to you. Perhaps you think that railing against canned laughter is like complaining that nuclear detonations are bad for the local bunny population. I don’t care. Go read a vampire novel. To me, laugh tracks are as stupid as we get. And, yes, I realize this phenomenon is being phased out by modernity. That’s good. There will be a day in the future when this essay will make no sense, because canned laughter will be as extinct as TV theme songs. It will only be used as a way to inform audiences that they’re supposed to be watching a fake TV show from the 1970s. But—right now, today—canned laughter is still a central component of escapist television. The most popular sitcom on TV, Two and a Half Men, still uses a laugh track, as does the (slightly) more credible How I Met Your Mother and the (significantly) less credible The Big Bang Theory. Forced laughter is also central to the three live-action syndicated shows that are broadcast more than any other, Friends, Home Improvement, and Seinfeld. Cheers will be repeated forever, as will the unseen people guffawing at its barroom banter. And I will always notice this, and it will never become reassuring or nostalgic or quaint. It will always seem stupid, because canned laughter represents the worst qualities of insecure people.
Now, I realize these qualities can be seen everywhere in life and within lots of complicated contexts. Insecurity is part of being alive. But it’s never less complicated than this. It’s never less complicated than a machine that tries to make you feel like you’re already enjoying something, simply because people you’ll never meet were convinced to laugh at something else entirely.
2A I am not the first writer who’s been perversely fascinated with fake laughter. Ron Rosenbaum1 wrote a story for Esquire in the 1970s titled “Inside the Canned Laughter War” that chronicled attempts by Ralph Waldo Emerson III2 to sell American TV networks on a new laughter device that was intended to usurp the original “Laff Box” designed by Charlie Douglass for the early fifties program The Hank McCune Show. Rosenbaum’s piece is apolitical, mainly memorable for mentioning that the voices heard on modern laugh tracks were often the same original voices recorded by Douglass during pre-ancient radio shows like Burns and Allen, which would mean that the sound we hear on laugh tracks is the sound of dead people laughing. As far as I can tell, this has never been proven. But it must be at least partially true; there must be at least a few people recorded for laugh tracks who are now dead, even if their laughter was recorded yesterday. People die all the time. If you watch any episode of Seinfeld, you can be 100 percent confident that somebody chuckling in the background is six feet underground. I assume this makes Larry David ecstatic.
During the height of the Laff Box Era (the 1970s), lots of TV critics railed against the use of canned laughter, so much so that TV shows began making a concerted effort to always mention that they were taped in front of a live audience (although even those live tapings were almost always mechanically sweetened). At the time, the primary criticism was that laugh tracks were being used to mask bad writing—in Annie Hall, Woody Allen’s self-styled character chastises a colleague working in the TV industry for adding counterfeit hilarity to a terrible program (“Do you realize how immoral this all is?”). Less concrete aesthetes argued that the Laff Box obliterated the viewer’s suspension of disbelief, although it’s hard to imagine how realistically invested audiences were ever supposed to feel about Mork and Mindy. I concede that both of these condemnations were accurate. But those things never bothered me. Laugh tracks never detracted from bad writing, and they never stopped me from thinking the cast of Taxi weren’t legitimate taxi drivers. Those issues are minor. What bothers me is the underlying suggestion that what you are experiencing is different than whatever your mind tells you is actually happening. Moreover, laugh tracks want you to accept that this constructed reality can become the way you feel, or at least the way you behave. It’s a concept grounded in the darkest of perspectives: A laugh track assumes that you are not confident enough to sit quietly, even if your supposed peer group is (a) completely invisible and (b) theoretically dead.
1A I lived in eastern Germany for four months of 2008. There were a million weird things about living there, but there was one that I didn’t anticipate: Germans don’t fake-laugh. If someone in Germany is laughing, it’s because he or she physically can’t help themselves; they are laughing because they’re authentically amused. Nobody there ever laughs because of politeness. Nobody laughs out of obligation. And what this made me recognize is how much American laughter is purely conditioned. Most of our laughing—I would say at least 51 percent—has no relation to humor or to how we actually feel.
You really, really notice this in German grocery stores. When paying for food in Leipzig, I was struck by how much of my daily interaction was punctuated by laughter that was totally detached from what I was doing. I would buy some beer and cookies and give the clerk a twenty-euro note; inevitably, the clerk would ask if I had exact change, because Germans are obsessed with both exactness and money. I would reach into my pocket and discover I had no coins, so I would reply, “Um—heh heh heh. No. Sorry. Ha! Guess not.” I made these noises without thinking. Every single time, the clerk would just stare at me stoically. It had never before occurred to me how often I reflexively laugh; only in the absence of a response did I realize I was laughing for no reason whatsoever. It somehow felt comfortable. Now that I’m back in the U.S., I notice this all the time: People half-heartedly chuckle throughout most casual conversations, regardless of the topic. It’s a modern extension of the verbalized pause, built by TV laugh tracks. Everyone in America has three laughs: a real laugh, a fake real laugh, and a “filler laugh” they use during impersonal conversations. We have been trained to connect conversation with soft, interstitial laughter. It’s our way of showing the other person that we understand the context of the interaction, even when we don’t.
This is not the only reason Germans think Americans are retarded, but it’s definitely one of them.
2B Part of what makes the notion of canned laughter so mysterious is the way it continues to exist within a media world that regularly rewards shows that don’t employ it. Virtually every high-end, “sophisticated” comedy of the early twenty-first century—Arrested Development, It’s Always Sunny in Philadelphia, Curb Your Enthusiasm, The Simpsons, 30 Rock—is immune to canned laughter, and it’s difficult to imagine any of those shows supplemented with mechanical, antiseptic chuckling. Very often, the absence of a laugh track serves as a more effective guidepost than the laughter itself—audiences have come to understand that any situation comedy without canned laughter is supposed to be smarter, hipper, and less predictable than traditional versions of the genre. This comprehension began with the Korean War sitcom M*A*S*H, a series that started with the removal of canned laughter from scenes in the hospital operating room (so as not to mitigate the reality of people bleeding to death) and eventually excluded it from the entire broadcast altogether (in order to remind audiences that they were watching something quasi-political and semi-important). But this collective assumption raises two questions:
1. If TV audiences have come to accept that comedic shows without laugh tracks are edgier and more meaningful, is it not possible that the reverse would also be true (in other words, does removing the laugh track change the way a viewer preconceives the show, regardless of its content)?
2. If all the best comedies are devoid of fake laughter, why would anyone elect to use them at all (under any circumstance)?
What’s interesting about these two queries is the way their answers are connected. The answer to the first question is, “Absolutely.” If you watch a comedy that forgoes contrived laughter, you will unconsciously (or maybe even consciously) take it more seriously. Jokes will be interpreted as meaner, weirder, and deeper than however they were originally written. When Liz Lemon says something on 30 Rock that isn’t funny, there’s always the paradoxical possibility that this was intentional; perhaps Tina Fey is commenting on the inanity of the “sitcom joke construct” and purposefully interjecting a joke that failed, thereby making the failure of her joke the part that’s supposed to be funny. The Office and Curb Your Enthusiasm deliver “the humor of humiliation” without contextual cues, so the events can be absorbed as hilarious in the present and cleverly tragic in the retrospective future. These are things we all immediately understand the moment we start watching a TV comedy without a laugh track: The product is multidimensional. We can decide what parts are funny; in fact, the program can even be enjoyed if none of the parts are funny, assuming the writing is propulsive or unusual (this was the case with Aaron Sorkin’s Sports Night, an ABC satire that debuted with a laugh track but slowly eliminated the chuckles over its two-year run). We all take laughless sitcoms more seriously because they seem to take us more seriously. They imply that we will know (and can actively decide) what is (or isn’t) funny.
Which directs us to the answer of question two.
The reason a handful of very popular sitcoms still use canned laughter—and the reason why veteran network leaders always want to use laugh tracks, even though doing so immediately ghettoizes their programming—is due to a specific assumption about human nature. The assumption is this: Normal people don’t have enough confidence to know what they think is funny. And this, sadly, is true. But it’s not their fault.
2C Friends (at least during the early stages of its ten-season run) was taped in front of a live studio audience. This, of course, does not make its laughter (deserved or undeserved) any less fake: Studio audiences are prompted to laugh at everything, want to laugh at everything, and are mechanically fixed (“sweetened”) whenever they fail to perform at optimal levels of outward hilarity assessment. Friends had a laugh track the same way The Flintstones had a laugh track—it’s just that the prefab laughs you heard on Friends were being manufactured on location, in real time. For anyone watching at home, there was no difference.
Now, the best episodes of Friends were funny. The worst episodes were insulting to baboons. But the vast majority fall somewhere in between. Here is an example of a Friends script from season two; this episode was titled “The One Where Old Yeller Dies” and takes place when the series was still a conventional sitcom (as opposed to more of a serial comedy, which started during season three). The mention of a character named “Richard” refers to Tom Selleck, who played Monica’s boyfriend for much of that season. This is the first scene following the opening credits . . .
[Scene: Inside Monica and Rachel’s apartment. Richard is on the balcony smoking and Monica is on the phone.]
MONICA: Hey, have you guys eaten, because uh, Richard and I just finished and we’ve got leftovers . . . Chicken and potatoes . . . What am I wearing? . . . Actually, nothing but rubber gloves.
[Chandler and Joey come sprinting into the apartment from across the hall.]
JOEY: Ya know, one of these times you’re gonna really be naked and we’re not gonna come over.
MONICA: Alright, I’ve got a leg, three breasts and a wing.
CHANDLER: Well, how do you find clothes that fit?
JOEY: Oh, hey, Monica, we’ve got a question.
MONICA: Alright, for the bizillionth time—yes, I see other women in the shower at the gym, and no, I don’t look.
JOEY: No, not that one. We’re trying to figure out who to bring to the Knicks game tonight. We have an extra ticket.
The degree to which you find this passage funny is directly proportional to (a) how familiar you are with this show and (b) how much you recall liking it. Like almost all successful TV ensembles, the plots on Friends weren’t a fraction as important as the characters and who played them—especially as the seasons wore on, the humor came from our familiarity with these characters’ archetypes. People who liked Friends literally liked the friends. Audiences watched the show because they felt like they had a relationship with the cast. The stories were mostly extraneous. But there still had to be a story somewhere. There still had to be something for these people to do, so the show adopted a structure. This is the structure of the previous scene, minus the dialogue:
[Scene: Inside Monica and Rachel’s apartment. Richard is on the balcony smoking and Monica is on the phone.]
MONICA: STATIC INTRO, PLUS JOKE
(small laugh)
[MOMENT OF PHYSICAL COMEDY]
(exaggerated laugh)
JOEY: JOKE BASED IN PREEXISTING KNOWLEDGE OF CHARACTER’S PERSONA
(laugh)
MONICA: SETUP
CHANDLER: OLD-TIMEY JOKE
(laugh)
JOEY: MINOR PLOT POINT
MONICA: UNRELATED JOKE
(laugh)
JOEY: BEGINNING OF STORY ARC FOR EPISODE
Using this template, it seems like anyone could create their own episode of Friends, almost like they were filling out a Mad Libs. And if those Mad Libs lines were actually said by Courteney Cox, Matt LeBlanc, and Matthew Perry, the result would probably be no less effective (were they especially absurd, the net might even be positive). The key to this kind of programming is never what people are saying. The key is (a) which people are doing the talking, and (b) the laugh track.
There are important assumptions we bring into the show as viewers; we are assuming that this is escapist (read: nonincendiary) humor, we are assuming the characters are ultimately good people, and we’re assuming that our relationship to Friends mirrors the traditional relationship Americans have always had with thirty-minute TV programs that employ canned laughter. It’s not always funny, but it’s in the “form of funny.” And because we’re not stupid, we know when to chuckle. But we don’t even have to do that, because the laugh track does it for us. And over time, that starts to feel normal. It starts to make us laugh at other things that aren’t necessarily funny.
1B Earlier in this essay I mentioned how I’ve believed that canned laughter was idiotic for “(almost) my entire life.” The key word there is almost. I did not think laugh tracks were idiotic when I was five. In fact, when I was five, I thought I was partially responsible for the existence of laugh tracks. I thought we all were.
At the time, my assumption was that the speaker on my parents’ Zenith television was a two-way system—I thought it was like a telephone. When I watched Laverne and Shirley or WKRP in Cincinnati and heard the canned laughter, my hypothesis was that this was the sound of thousands of other TV viewers in random locations, laughing at the program in their own individual living rooms. I thought their laughter was being picked up by their various TV consoles and being simultaneously rebroadcast through mine. As a consequence, I would sometimes sit very close to the television and laugh as hard as I could, directly into the TV’s speaker. I would laugh into my own television.
My family thought I just really, really appreciated Howard Hesseman.
And I did. But I mostly wanted to contribute to society.
3 In New York, you get used to people pretending to laugh. Go see a foreign movie with poorly translated English subtitles and you will hear a handful of people howling at jokes that don’t translate, solely because they want to show the rest of the audience that they’re smart enough to understand a better joke was originally designed to be there. Watch The Daily Show in an apartment full of young progressives and you’ll hear them consciously (and unconvincingly) over-laugh at every joke that’s delivered, mostly to assure everyone else that they’re appropriately informed and predictably leftist. Take a lunch meeting with anyone involved in any form of media that isn’t a daily newspaper, and they will pretend to laugh at everything anyone at the table says that could be theoretically classified as humorous, even if the alleged joke is about how airline food isn’t delicious. The only thing people in New York won’t laugh at is unfamous stand-up comedians; we really despise those motherfuckers, for some reason.
It’s possible the reason people in New York laugh at everything is because they’re especially polite, but that seems pretty unlikely. A better explanation is that New York is the most mediated city in America, which means its population is the most media-savvy—and the most media-affected—populace in the country. The more media someone consumes (regardless of who they are or where they live), the more likely they are to take their interpersonal human cues from external, nonhuman sources. One of the principal functions of mass media is to make the world a more fathomable reality—in the short term, it provides assurance and simplicity. But this has a long-term, paradoxical downside. Over time, embracing mass media in its entirety makes people more confused and less secure. The laugh track is our best example. In the short term, it affirms that the TV program we’re watching is intended to be funny and can be experienced with low stakes. It takes away the unconscious pressure of understanding context and tells the audience when they should be amused. But because everything is laughed at in the same way (regardless of value), and because we all watch TV with the recognition that this is mass entertainment, it makes it harder to deduce what we think is independently funny. As a result, Americans of all social classes compensate by living like bipedal Laff Boxes: We mechanically laugh at everything, just to show that we know what’s supposed to be happening. We get the joke, even if there is no joke.
Is this entirely the fault of laugh tracks? Nay. But canned laughter is a lucid manifestation of an anxious culture that doesn’t know what is (and isn’t) funny. If you’ve spent any time trolling the blogosphere, you’ve probably noticed a peculiar literary trend: the pervasive habit of writers inexplicably placing exclamation points at the end of otherwise unremarkable sentences. Sort of like this! This is done to suggest an ironic detachment from the writing of an expository sentence! It’s supposed to signify that the writer is self-aware! And this is idiotic. It’s the saddest kind of failure. F. Scott Fitzgerald believed inserting exclamation points was the literary equivalent of an author laughing at his own jokes, but that’s not the case in the modern age; now, the exclamation point signifies creative confusion. All it illustrates is that even the writer can’t tell if what they’re creating is supposed to be meaningful, frivolous, or cruel. It’s an attempt to insert humor where none exists, on the off chance that a potential reader will only be pleased if they suspect they’re being entertained. Of course, the reader really isn’t sure, either. They just want to know when they’re supposed to pretend that they’re amused. All those extraneous exclamation points are like little splatters of canned laughter: They represent the “form of funny,” which is more easily understood (and more easily constructed) than authentic funniness. I suppose the counter-argument is that Tom Wolfe used a lot of exclamation points, too . . . but I don’t think that had anything to do with humor or insecurity. The Wolfe-Man was honestly stoked about LSD and John Glenn. I bet he didn’t even own a TV. It was a different era!
Build a machine that tells people when to cry. That’s what we need. We need more crying.
1. Rosenbaum would later write a nonfiction book titled Explaining Hitler, controversial for suggesting that Hitler was (possibly) an un-evil infant.
2. Yes, they were related.
Tomorrow Rarely Knows
1 It was the 1990s and I was twenty, so we had arguments like this: What, ultimately, is more plausible—time travel, or the invention of a liquid metal with the capacity to think? You will not be surprised that Terminator 2 was central to this dialogue. There were a lot of debates over this movie. The details of the narrative never made sense. Why, for example, did Edward Furlong tell Arnold that he should quip, “Hasta la vista, baby,” whenever he killed people? Wasn’t this kid supposed to like Use Your Illusion II more than Lōc-ed After Dark? It was a problem. But not as much of a problem as the concept of humans (and machines) moving through time, even when compared to the likelihood of a pool of sentient mercury that could morph itself into a cop or a steel spike or a brick wall or an actor who would eventually disappoint watchers of The X-Files. My thesis at the time (and to this day) was that the impossibility of time travel is a cornerstone of reality: We cannot move forward or backward through time, even if the principles of general relativity and time dilation suggest that this is possible. Some say that time is like water that flows around us (like a stone in the river) and some say we flow with time (like a twig floating on the surface of the water). My sense of the world tells me otherwise. I believe that time is like a train, with men hanging out in front of the engine and off the back of the caboose; the man in front is laying down new tracks the moment before the train touches them and the man in the caboose is tearing up the rails the moment they are passed. There is no linear continuation: The past disappears, the future is unimagined, and the present is ephemeral. It cannot be traversed. So even though the prospect of liquid thinking metal is insane and idiotic, it’s still more viable than time travel.
I don’t know if the thinking metal of tomorrow will have the potential to find employment as Linda Hamilton’s assassin, but I do know that those liquid-metal killing machines will be locked into whatever moment they happen to inhabit.
It would be wonderful if someone proved me wrong about this. Wonderful. Wonderful, and sad.
2 I read H. G. Wells’s The Time Machine in 1984. It became my favorite novel for the next two years, but solely for textual reasons: I saw no metaphorical meaning in the narrative. It was nothing except plot, because I was a fucking sixth grader. I reread The Time Machine as a thirty-six-year-old in 2008, and it was (predictably) a wholly different novel that now seemed fixated on archaic views about labor relations and class dynamics, narrated by a protagonist who is completely unlikable. This is a trend with much of Wells’s sci-fi writing from this period; I reread The Invisible Man around the same time, a book that now seems maniacally preoccupied with illustrating how the invisible man was an asshole.
Part of the weirdness surrounding my reinvestigation of The Time Machine was because my paperback copy included a new afterword (written by Paul Youngquist) that described Wells as an egomaniac who attacked every person and entity he encountered throughout his entire lifetime, often contradicting whatever previous attack he had made only days before. He publicly responded to all perceived slights levied against him, constantly sparring with his nemesis Henry James and once sending an angry, scatological letter to George Orwell (written after Orwell had seemingly given him a compliment). He really hated Winston Churchill, too. H. G. Wells managed to write four million words of fiction and eight million words of journalism over the course of his lifetime, but modern audiences remember him exclusively for his first four sci-fi novels (and they don’t remember him that fondly). He is not a canonical writer and maybe not even a great one. However, his influence remains massive. Like the tone of Keith Richards’s guitar or Snidely Whiplash’s mustache, Wells galvanized a universal cliché—and that is just about the rarest thing any artist can do.
The cliché that Wells popularized was not the fictional notion of time travel, because that had been around since the eighteenth century (the oldest instance is probably a 1733 Irish novel by Samuel Madden called Memoirs of the Twentieth Century). Mark Twain reversed the premise in 1889’s A Connecticut Yankee in King Arthur’s Court. There’s even an 1892 novel called Golf in the Year 2000 that (somewhat incredibly) predicts the advent of televised sports. But in all of those examples, time travel just sort of happens inexplicably—a person exists in one moment, and then they’re transposed to another. The meaningful cliché Wells introduced was the machine, and that changed everything. Prior to the advent of Wells’s imaginary instrument, traveling through time generally meant the central character was lost in time, which wasn’t dramatically different from being lost geographically. But a machine gave the protagonist agency. The time traveler was now moving forward or backward on purpose; consequently, the time traveler now needed a motive for doing so. And that question, I suspect, is the core reason why narratives about time travel are almost always interesting, no matter how often the same basic story is retold and repackaged: If time travel was possible, why would we want to do it?
Now, I will concede that there’s an inherent goofballedness in debating the ethics of an action that is impossible. It probably isn’t that different than trying to figure out if leprechauns have high cholesterol. But all philosophical questions are ultimately like this—by necessity, they deal with hypotheticals that are unfeasible. Real-world problems are inevitably too unique and too situational; people will always see any real-world problem through the prism of their own personal experience. The only massive ideas everyone can discuss rationally are big ideas that don’t specifically apply to anyone, which is why a debate over the ethics of time travel is worthwhile: No one has any personal investment whatsoever. It’s only theoretical. Which means no one has any reason to lie.
2A Fictionalized motives for time travel generally operate like this: Characters go back in time to fix a mistake or change the conditions of the present (this is like Back to the Future). Characters go forward in time for personal gain (this is like the gambling subplot1 of Back to the Future Part II). Jack the Ripper used H. G. Wells’s time machine to kill citizens of the seventies in Time After Time, but this was an isolated (and poorly acted) rampage. Obviously, there is always the issue of scientific inquiry with any movement through time, but that motive matters less; if a time traveler’s purpose is simply to learn things that are unknown, it doesn’t make moving through time any different than exploring Skull Island or going to Mars. My interest is in the explicit benefits of being transported to a different moment in existence—what that would mean morally and how the traveler’s goals (whatever they may be) could be implemented successfully.
Here’s a question I like to ask people when I’m 5/8 drunk: Let’s say you had the ability to make a very brief phone call into your own past. You are (somehow) given the opportunity to phone yourself as a teenager; in short, you will be able to communicate with the fifteen-year-old version of you. However, you will only get to talk to your former self for fifteen seconds. As such, there’s no way you will be able to explain who you are, where or when you’re calling from, or what any of this lunacy is supposed to signify. You will only be able to give the younger version of yourself a fleeting, abstract message of unclear origin.
What would you say to yourself during these fifteen seconds?
From a sociological standpoint, what I find most interesting about this query is the way it inevitably splits between gender lines: Women usually advise themselves not to do something they now regret (i.e., “Don’t sleep with Corey McDonald, no matter how much he pressures you”), while men almost always instruct themselves to do something they failed to attempt (i.e., “Punch Corey McDonald in the face, you gutless coward”). But from a more practical standpoint, the thing I’ve come to realize is that virtually no one has any idea how to utilize such an opportunity, even if it were possible. If you can’t directly explain that you’re talking from the future, any prescient message becomes worthless. All advice comes across like a drunk dialer reading a fortune cookie. One person answered my question by claiming he would tell the 1985 incarnation of himself to “Invest in Google.” That sounds smart, but I can’t imagine a phrase that would have been more useless to me as a teenager in 1985. I would have spent the entire evening wondering how it would be possible to invest money into the number 1 with one hundred zeros behind it.
It doesn’t matter what you can do if you don’t know why you’re doing it.
2B I’ve now typed fifteen hundred words about time travel, which means I’ve reached the point where everything becomes a problem for everybody. This is the point where we need to address the philosophical dilemmas embedded in any casual discussions about time travel, real or imagined. And there are a lot of them. And I don’t understand about 64 percent of them. And the 36 percent I do understand are pretty elementary to everyone, including the substantial chunk of consumers who are very high and watching Anna Faris movies while they read this. But here we go! I will start with the most unavoidable eight:
1. If you change any detail about the past, you might accidentally destroy everything in present-day existence. This is why every movie about time travel makes a big, obvious point about not bringing anything from the present back in time, often illustrated by forcing the fictionalized time traveler to travel nude. If you went back to 60,000 BC with a tool box and absentmindedly left the vise grip behind, it’s entirely possible that the world would technologically advance at an exponential rate and destroy itself by the sixteenth century.2 Or so I’m told.
2. If you went back in time to accomplish a specific goal (and you succeeded at this goal), there would be no reason for you to have traveled back in time in the first place. Let’s say you built a time machine in order to murder the proverbial “Baby Hitler” in 1889. Committing that murder would mean the Holocaust never happened. And that would mean you’d have no motive for going back in time in the first place, because the tyrannical Adolf Hitler—the one you despise—would not exist. In other words, any goal achieved through time travel would eliminate the necessity for the traveler to travel. In his fictional (and pathologically grotesque) oral history Rant, author Chuck Palahniuk refers to this impasse as the Godfather Paradox: “The idea that if one could travel backward in time, one could kill one’s own ancestor, eliminating the possibility said time traveler would ever be born—and thus could never have lived to travel back and commit the murder.” The solution to this paradox (according to Palahniuk) is the theory of splintered alternative realities, where all possible trajectories happen autonomously and simultaneously (sort of how Richard Linklater describes The Wizard of Oz to an uninterested cab driver in the opening sequence of Slacker). However, this solution is actually more insane than the original problem. The only modern narrative that handles the conundrum semi-successfully is Richard Kelly’s Donnie Darko, where schizophrenic heartthrob Jake Gyllenhaal uses a portal to move back in time twenty-eight days, thereby allowing himself to die in an accident he had previously avoided. By removing himself from the equation, he never meets his new girlfriend, which keeps her from dying in a car accident that was his fault. More important, his decision to die early stops his adolescence from becoming symbolized by the music of Tears for Fears.
3. A loop in time eliminates the origin of things that already exist. This is something called “the Bootstrap Paradox” (in reference to the Robert Heinlein story “By His Bootstraps”). It’s probably best described by David Toomey, the author of a book called The New Time Travelers (a principal influence on season five of Lost). Toomey uses Hamlet as an example: Let’s suppose Toomey finds a copy of Hamlet in a used-book store, builds a time machine, travels back to 1601, and gives the book to William Shakespeare. Shakespeare then copies the play in his own handwriting and claims he made it up. It’s recopied and republished countless times for hundreds of years, eventually ending up in the bookstore where Toomey shops. So who wrote the play? Shakespeare didn’t. Another example occurs near the end of Back to the Future: Michael J. Fox performs “Johnny B. Goode” at the school dance and the tune is transmitted over the telephone to Chuck Berry3 (who presumably stole it). In this reality, where does the song come from? Who deserves the songwriting royalties?
4. You’d possibly kill everybody by sneezing. Depending on how far you went back in time, there would be a significant risk of infecting the entire worldwide population with an illness that mankind has spent the last few centuries building immunity against. Unless, of course, you happened to contract smallpox immediately upon arrival—then you’d die.
5. You already exist in the recent past. This is the most glaring problem and the one everybody intuitively understands—if you went back to yesterday, you would still be there, standing next to yourself. The consequence of this existential condition is both straightforward and unexplainable. Moreover . . .
6. Before you attempted to travel back in time, you’d already know if it worked. Using the example from problem number 5, imagine that you built a time machine on Thursday. You decide to use the machine on Saturday in order to travel back to Friday afternoon. If this worked, you would already see yourself on Friday. But what would then happen if you and the Future You destroyed your time machine on Friday night? How would the Future You be around to assist with the destroying?
7. Unless all of time is happening simultaneously within multiple realities, memories and artifacts would mysteriously change. The members of Steely Dan (Donald Fagen and Walter Becker) met at Bard College in 1967, when Fagen overheard Becker playing guitar in a café. This meeting has been recounted many times in interviews, and the fact that they were both at Bard College (located in Annandale-on-Hudson) is central to songs like “My Old School,” which was recorded in 1973. But what if Fagen built a time machine in 1980 and went back to find Becker in 1966, when he was still a high school student in Manhattan? What would happen to their shared personal memories of that first meeting in Annandale? And if they had both immediately moved to Los Angeles upon Becker’s graduation, how could the song “My Old School” exist (and what would it be about)?
8. The past has happened, and it can only happen the way it happened. This, I suppose, is debatable. But not by Bruce Willis. In Terry Gilliam’s Twelve Monkeys, Willis goes back in time to confront an insane Brad Pitt before Pitt releases a virus that’s destined to kill five billion people and drive the rest of society into hiding (as it turns out, Pitt is merely trying to release a bunch of giraffes from the Philadelphia Zoo, which is only slightly more confusing than the presence of Madeleine Stowe in this movie). What’s distinctive about Twelve Monkeys is that the reason Willis is sent back in time is not to stop this catastrophe from happening, but merely to locate a primitive version of the virus so that scientists can combat the existing problem in the distant future (where the remnants of mankind have been forced to take refuge underground). Willis can travel through time, but he can’t change anything or save anyone. “How can I save you?” he rhetorically asks the white-clad dolts who question his sudden appearance in the year 1990. “This already happened. No one can save you.” Twelve Monkeys makes a lot of references to the “Cassandra complex” (named for a Greek myth about a young woman’s inability to convince others that her prophetic warnings are accurate), but it’s mostly about predestination—in Twelve Monkeys, the assumption is that anyone who travels into the past will do exactly what history dictates. Nothing can be altered. What this implies is that everything about life (including the unforeseen future) is concrete and predetermined. There is no free will. So if you’ve seen Twelve Monkeys more than twice, you’re probably a Calvinist.
These are just a handful of the (nonscientific) problems with going backward in time. As far as I can tell, there really aren’t any causality problems with going forward in time—in terms of risk, jumping to the year 2077 isn’t that different than moving to suburban Bangladesh or hiding in your basement for five decades. Time would still move forward on its regular trajectory, no differently than if you were temporarily (or permanently) dead. Your participation in life doesn’t matter to time. This is part of the reason that futurists tend to believe traveling forward in time is more plausible than the alternative—it involves fewer problems. But regardless of the direction you move, the central problem is still there: Why do it? What’s the best reason for exploding the parameters of reality?
With the possible exception of eating a dinosaur, I don’t think there is one.
3 “Even back when I was writing really bad short stories in college,” a (then) thirty-four-year-old Shane Carruth said in an interview with himself, “I always thought the time machine is the device that’s missed most. Without even saying it out loud, that’s the thing people want the most: The ability to take whatever it is that went wrong and fix it.”
Carruth is the writer, director, producer, and costar of the 2004 independent film Primer, the finest movie about time travel I’ve ever seen. The reason Primer is the best (despite its scant seventy-eight-minute run time and $7,000 budget) is because it’s the most realistic—which, I will grant, is a peculiar reason for advocating a piece of science fiction. But the plausibility of Primer is why it’s so memorable. It’s not that the time machine in Primer seems more authentic; it’s that the time travelers themselves seem more believable. They talk and act (and think) like the kind of people who might accidentally figure out how to move through time, which is why it’s the best depiction we have of the ethical quandaries that would emerge from such a discovery.
Here’s the basic outline of Primer: It opens with four identically dressed computer engineers sitting around a table in a nondescript American community (Primer was shot around Dallas, but the setting is like the world of Neil LaBute’s In the Company of Men—it’s a city without character that could literally be anywhere). They speak a dense, clipped version of English that is filled with technical jargon; it’s mostly indecipherable, but that somehow makes it better. They wear ties and white shirts all the time (even when they’re removing a catalytic converter from a car to steal the palladium), and they have no interests outside of superconductivity and NCAA basketball. The two brightest engineers—Abe (David Sullivan) and Aaron (Carruth)—eventually realize they have assembled a box that can move objects backward through a thirteen-hundred-minute loop in time. Without telling anyone else, they build two larger versions of the (staunchly unglamorous) box that can transport them to the previous day.4 Their initial motive is solely financial—they go back a day, drive to the local library, and buy stocks over the Internet that they know will increase in value over the next twenty-four hours. They try to do nothing else of consequence (at least at first). They just sit in a hotel room and wait. “I tried to isolate myself,” Abe says when describing his first journey into the past. “I closed the windows, I unplugged everything—the phone, the TV and clock radio. I didn’t want to take the chance of seeing someone I knew, or of seeing something on the news . . . I mean, if we’re dealing with causality, and I don’t even know for sure . . . I took myself out of the equation.”
If this sounds simple, I can assure you that it is not. Primer is hopelessly confusing and grows more and more byzantine as it unravels (I’ve watched it seven or eight times and I still don’t know what happened). Characters begin to secretly use the time machine for personal reasons and they begin multiplying themselves across time. But because these symmetrical iterations are (inevitably) copies of other copies, the system starts to hemorrhage—Abe and Aaron find themselves bleeding from their ears and struggling with handwriting. When confusing events start to happen in the present, they can’t tell if those events are the manifestations of decisions one of them will eventually make in the future. At one point, no one (not Abe, Aaron, or even the viewer) is able to understand what’s going on. The story does not end in a clear disaster, but with a hazy, open-ended scenario that might be worse.
What’s significant about the two dudes in Primer is how they initially disregard the ethical questions surrounding time travel; as pure scientists, they only consider the practical obstacles of the endeavor. Even when they decide to go back and change the past of another person, their only concern is how this can still work within the framework they’re manipulating. They’re geniuses, but they’re ethical Helen Kellers. When they’re traveling back for financial purposes, they discount their personal role in the success of the stocks they trade; since stocks increase in value whenever people buy them, they are retroactively inflating the value of whatever commodities they select (not by much, but enough to alter the future). When Abe and Aaron start traveling back in time to change their own pasts, they attempt to stoically ignore the horrifying reality they’ve created: Their sense of self—their very definition of self—is suddenly irrelevant. If you go back in time today and meet the person who will become you tomorrow, which of those two people is actually you? The short answer is, “Both.” But once you realize that the short answer is “Both,” the long answer becomes “Neither.” If you exist in two places, you don’t exist at all.
According to the director, Primer is a movie about the relationship between risk and trust. This is true. But it also makes a concrete point about the potential purpose of time travel—it’s too important to use only for money, but too dangerous to use for anything else.
1A I used to have a fantasy about reliving my entire life with my present-day mind. I once thought this fantasy was unique to me, but it turns out that this is very common; many people enjoy imagining what it would be like to reinhabit their past with the knowledge they’ve acquired through experience. I imagine the bizarre things I would have said to teachers in junior high. I think about women I would have pursued and stories I could have written better and about how interesting it would have been to be a genius four-year-old. At its nucleus, this is a fantasy about never having to learn anything. The defining line from Frank Herbert’s Dune argues that the mystery of life “is not a question to be answered but a reality to be experienced.” My fantasy offers the opposite. Nothing would be experienced. Nothing would feel new or unknown or jarring. It’s a fantasy for people who want to solve life’s mysteries without having to do the work.
I am one of those people.
The desire to move through time is electrifying and rational, but it’s a desire of weakness. The real reason I want to travel through time is because I’m a defeatist person. The cynical egomaniac in Wells’s original novel leaves the present because he has contempt for the humanity of his present world, but he never considers changing anything about his own role in that life (which would obviously be easier). Instead, he elects to bolt eight hundred thousand years into the future, blindly hoping that things will have improved for him. It’s a bad plan. Charlton Heston’s character in Planet of the Apes5 tries something similar; he hates mankind, so he volunteers to explore space, only to crash back on a postapocalyptic earth where poorly dressed orangutans employ Robert’s Rules of Order. This is a consistent theme in stories about traveling to the future: Things are always worse when you get there. And I suspect this is because the kind of writer who’s intrigued by the notion of moving forward in time can’t see beyond their own pessimism about being alive. People who want to travel through time are both (a) unhappy and (b) unwilling to compromise anything about who they are. They would rather change every element of society except themselves.
This is how I feel.
This is also why my long-standing desire to build a time machine is not just hopeless but devoid of merit. It has nothing to do with time. I don’t think it ever does (for me, H. G. Wells, Shane Carruth, or anyone else). It takes a flexible mind to imagine how time travel might work, but only an inflexible spirit would actually want to do it. It’s the desire of the depressed and lazy.
On side two of the Beach Boys’ Pet Sounds, Brian Wilson laments that he “just wasn’t made for these times” (“these times” being 1966). He probably wasn’t. But he also didn’t want to be. I assume Wilson would have preferred dealing with the possibility of thinking liquid metal before he would accept the invisible, nonnegotiable shackles of the present tense. Which—sadly, and quite fortunately—is the only tense any of us will ever have.
1. This subplot refers to the actions of a character named Biff (Thomas F. Wilson) who steals a sports almanac from the future in order to gamble on predetermined sporting events in the present. There’s a popular urban legend about this plot point involving the Florida Marlins baseball team: In the film, Biff supposedly bets on a Florida baseball team to win the World Series in 1997, which actually happened. The amazing part is that Back to the Future Part II was released in 1989, four years before the Florida Marlins even had a major league franchise. Unfortunately, this legend is completely false. The reference in the movie is actually a joke about the futility of the Chicago Cubs that somehow got intertwined with another reference to a (fictional) MLB opponent from Miami whose logo was a gator. I realize that by mentioning the inaccuracy of this urban legend, I will probably just perpetuate its erroneous existence. But that’s generally how urban legends work.
2. For whatever reason, I’ve always assumed vise grips would be extremely liberating for Neanderthals.
3. Semi-unrelated (but semi-interesting) footnote to this paradox: Before Fox plays “Johnny B. Goode” at the high school dance, he tells his audience, “This is an oldie . . . well, this is an oldie from where I come from.” Chuck Berry recorded “Johnny B. Goode” in 1958. Back to the Future was made in 1985, so the gap is twenty-seven years. I’m writing this essay in 2009, which means the gap between 1985 and today is twenty-four years. That’s almost the same amount of time. Yet nobody would ever refer to Back to the Future as an “oldie,” even if he or she were born in the 1990s. What seems to be happening is a dramatic increase in cultural memory: As culture accelerates, the distance between historical events feels smaller. The gap between 2010 and 2000 will seem far smaller than the gap between 1980 and 1970, which already seemed far smaller than the gap between 1950 and 1940. This, I suppose, is society’s own version of time travel (assuming the trend continues for eternity).
4. This is too difficult to explain in a footnote, but one of Carruth’s strengths as a fake science writer is how he deals with the geography of time travel, an issue most writers never even consider. Here, in short, is the problem: If you could instantly travel one hour back in time, you would (theoretically) rematerialize in the exact same place from which you left. That’s how the machine works in the original Time Machine. However, the world would have rotated 15 degrees during that missing hour, so you would actually rematerialize in a totally different spot on the globe. Primer manages to work around this problem, although I honestly don’t understand the solution as much as I see the dilemma.
5. I realize Planet of the Apes isn’t technically about time travel. Time moves at its normal rate while the humans are in suspended animation. But for the purposes of the fictional people involved, there is no difference: They leave from and return to the same geographic country. The only difference is the calendar.
About Chuck Klosterman
To learn more about Chuck Klosterman, visit authors.simonandschuster.com/Chuck-Klosterman
Sign up for Chuck Klosterman e-mail alerts and be the first to find out about the latest books, tour events, and more.
Other websites:
- Chuck Klosterman Wikipedia Page (http://en.wikipedia.org/wiki/Chuck_Klosterman)
More eBooks by Chuck Klosterman
Eating the Dinosaur by Chuck Klosterman An exploration of pop culture and sports that takes a Klostermaniacal look at expectations, reality, media, and fans.
Downtown Owl by Chuck Klosterman Downtown Owl is an engaging, darkly comedic portrait of small-town life that conveys the power of local mythology and the experience of rural America.
Fargo Rock City by Chuck Klosterman Chuck Klosterman’s hilarious memoir of growing up as a shameless metalhead in Wyndmere, North Dakota (population: 498).
Chuck Klosterman IV by Chuck Klosterman Chuck Klosterman’s collection of things that are true, things that might be true, and something that isn’t true at all.
Killing Yourself to Live by Chuck Klosterman A 6,557-mile road-trip meditation on rock ‘n’ roll, death, girlfriends, and nostalgia for the recent past.
Sex, Drugs, and Cocoa Puffs by Chuck Klosterman A collection of essays, ostensibly about art, entertainment, infotainment, sports, politics, and kittens, but—really—it’s about us. All of us. |