Category Archives: Science

Two 2001s

My favorite science fiction movies are the ones that don’t spend two and a half hours yelling and throwing things at my face. This is why I recently watched 2001: A Space Odyssey again. It’s quiet and slow and never boring.

(I assume anyone reading this knows the story: a monolith forcibly evolves a prehuman; millions of years later, the discovery of another monolith prompts a mission to Jupiter; the ship’s computer goes crazy and kills most of the crew; yet another monolith turns the survivor into a magic space baby. Level up!)

The Movie

Movie poster

2001 looks strikingly different from current Hollywood science fiction. At the moment the coolest future Hollywood can imagine is one drained of all colors but dirty gray, dim gunmetal blue, and body-fluid orange. Apparently sometime in the 21st century the visible spectrum will contract Seasonal Affective Disorder. But then, nearly every Hollywood future is either an apocalypse to struggle through or a dystopia for a self-absorbed hero to topple explodily, so I understand why the color graders are depressed. 2001 has its share of beige and sterile white, but, y’know, it’s a cheerful sterile white. And it’s joined by the computer core where David Bowman lobotomizes Hal, lit with the dark red of an internal organ; and Bowman’s mysterious minty-fresh hotel room; and a rack of spacesuits that might have been sponsored by Skittles bite-size candy. 2001’s future might be worth looking forward to–new worlds to explore, new life forms to discover, magic space babies to evolve into. It feels that way partly because the future is pretty. Listen to the score: the spaceships don’t dock, they waltz.

Studios tend to pigeonhole SF as an action genre, and tend to assume too little of action movie audiences. I often bail on these movies for being too loud, too fast, and too dumb. It’s interesting how little 2001 explains, and how little it needs to. 2001 gives just enough information to suggest what’s happening, and trusts the audience to make connections. The movie doesn’t tell us why Hal kills Discovery’s crew. Hal proudly tells an interviewer that the Hal 9000 computer has never made an error. Hal reads Bowman’s and Poole’s lips as they debate shutting him down for repairs following his mistaken damage report. We can work it out for ourselves.1 I’m mildly jarred when, later, Hal explains the lip reading to Bowman–Bowman didn’t know about it, so the dialogue makes sense, but the audience already knows from the way the movie cut between shots of the crew and Hal’s eye. Watching 2001 I get used to not listening to needless explanations. Over half the movie has no dialogue at all.2 It’s the most effective demonstration in sci-fi film of the principle of “show, don’t tell.”

2001 spends a surprising amount of time watching people run through commonplace routines, the kind of action most movies gloss over. The “Dawn of Man” sequence shows how the prehumans live before the monolith shows up because we need to see how they began in order to understand how they’ve changed. But you might wonder why 2001 shows every detail of Heywood Floyd’s trip to the moon–sleeping on the shuttle, eating astronaut food, going through the lunar equivalent of customs, and calling his daughter on a videophone. When the film moves to the Discovery, the plot doesn’t start rolling again until we know David Bowman’s routine, too.

In 1968, science fiction was in the thick of the New Wave, a label given to the younger SF writers who paid more attention to good prose, rounded characters, and just generally the kinds of ordinary literary qualities that make fiction readable. These were never entirely absent from science fiction, but the “golden age” of the genre was dominated by a functional, didactic style exemplified by the work of Isaac Asimov. Some fans call science fiction the “literature of ideas.”3 In golden age SF, the ideas were king and everything else existed only to serve them. The characters were mouthpieces for the ideas. The prose was kept utilitarian–“transparent” was the usual term–to transmit the ideas with minimal friction. Writers used less of the implicit worldbuilding that dominates modern SF, relying on straightforward exposition to describe the world and particularly the scientific gimmicks they’d built their stories around. Stories often described in intricate detail actions that, to the characters, were routine.

Translate these expository passages to film and you have the scenes of Heywood Floyd taking a shuttle to the moon. This is not inherently bad. I’d argue that it takes more talent and effort to write a good book in this grain than it does to write traditional fiction, but a witty or eloquent writer can do as he or she pleases.4 So can filmmakers as proficient as Stanley Kubrick and Douglas Trumbull, 2001’s effects supervisor. I can see why some viewers lose patience with 2001, but I personally am not bored.

The Book

Book cover

Of course, 2001 does exist in prose. The movie was a collaboration between Stanley Kubrick and Arthur C. Clarke; while Kubrick wrote the script, Clarke wrote the novel. The movie was so far ahead of its time that, even with its 1968-era design work, it still looks fresh. So when I followed up this viewing of the movie with Clarke’s novel, which I hadn’t read in so long that I’d forgotten it completely, I was surprised to find it so old-fashioned. In retrospect, I shouldn’t have been: the movie’s virtuoso style is built from an old-fashioned plan. I just hadn’t noticed until I reread the novel, which seems unaware the New Wave ever happened. Clarke’s prose lacks the style of Kubrick’s direction, and without actors to give them life the characters are revealed as perfunctory sketches, functions rather than subjects. They demonstrate aspects of the universe, and witness its wonders, but the universe itself is what matters.

It’s striking how much space Clarke devotes not to telling us what someone is doing now, but to what they would do, could do, were doing, or had been doing. Sometimes 2001 reads like the kind of nonfiction book you’d give to children to teach them how grown-up jobs work. (“At midday, he would retire to the galley and leave the ship to Hal while he prepared his lunch. Even here he was still fully in touch with events, for the tiny lounge-cum-dining room contained a duplicate of the Situation Display Panel, and Hal could call him at a moment’s notice.”) Clarke’s purpose is not only, and maybe not even primarily, to tell a story. He wants to make the audience understand Heywood Floyd and Dave Bowman’s entire universe. Clarke’s 2001 is the opposite of Kubrick’s. The movie is gnomic, open to multiple interpretations. The book wants to tell us something, and it’s going to make it absolutely clear.

Unfortunately Clarke, while not really bad, doesn’t quite have the writing chops to deliver compelling exposition–or at least he wasn’t exercising them here. Still, the book occasionally improves on the movie, mostly by expanding on it. 2001 is over two hours long, but the book has more space.5 The movie’s “Dawn of Man” segment keeps its distance from its subjects. We don’t get to know any of the prehumans. We can’t really even tell them apart. The novel can get inside the mind of the prehuman Moon-Watcher. After Hal, he’s the most vivid character in the book. (On the other hand, it says something about 2001 that the most vivid characters are an ape-man and a paranoid computer.)

In both film and book, the climax of the “Dawn of Man” comes when Moon-Watcher intuits the concept that makes him, in Clarke’s words, “master of the world”: the tool. Specifically, a weapon–Moon-Watcher needs to hunt and to defend himself from leopards and rival bands of prehumans.

The movie then makes its famous jump cut from bone to satellite. The book tells us something the movie doesn’t: the satellite is part of an orbital nuclear arsenal. If the unease running under the surface of the novel’s middle section seems muted now, it’s because it needed no emphasis in 1968: it went without saying that humanity might soon bomb itself into extinction.

When Bowman returns to Earth as the Starchild, his first act is to destroy the nuclear satellites–and then the book repeats the “master of the world” line. If humanity made its first evolutionary leap when it picked up weapons, says 2001, its next great leap won’t come until it learns to put them down again. Apparently this theme appeared in an earlier draft of the movie’s script but didn’t make it into the final film. It doesn’t feel like the movie is missing anything–it is, after all, already pretty full–but the novel is better for the symmetry.

Evolviness

A monolith.

The theme shared by both novel and movie is evolution–and here we come to the original reason I started this essay: evolution in science fiction is weird. Several crazy evolutionary oddities crop up over and over in SF, and 2001 makes room for them all.

People who don’t know much biology often think evolution tries to build every species into its Platonically perfect ideal form. In this view, humans aren’t just more complex or more self-aware than the first tetrapod to crawl out of the ocean: we’re more “evolved.” This is as pernicious as it is foolish–in the early 20th century, true believers in evolutionary “progress” used the idea to justify eugenics and “scientific” racism.

Despite that, in many science fiction stories evolution has a direction. This direction is usually entirely unlike the direction the eugenicists were thinking of. Not usually enough, mind you–a few of these “perfect” humanoids look disturbingly blonde–but science fiction people mostly evolve into David McCallum in the Outer Limits episode “The Sixth Finger”–people with big throbbing brains and, more importantly, godlike powers. (Also, at least on TV, they tend to glow.) Depending on the story, this might be a metaphor for either social and technological progress (if the more highly evolved beings are wise and authoritative, like Star Trek’s Organians), or absolute power corrupting absolutely (if they’re assholes like Star Trek’s Gary Mitchell). The crew of the Enterprise met guys like this in every other episode of Star Trek–Gene Roddenberry’s universe has more alien gods than H. P. Lovecraft’s. In media SF huge heads are optional; powers aren’t. Especially in comic book sci-fi–think of the X-Men, the spiritual descendants of A. E. van Vogt’s Slan. Sometimes people evolve into “energy” or “pure thought”–or, in modern stories, minds uploaded as software. In Clarke’s own Childhood’s End the human race joins a noncorporeal telepathic hive-mind. In 2001, the Starchild destroys the Earth’s orbiting nukes with a thought. In science fiction, sufficiently evolved biology is indistinguishable from magic.

The suddenness of David Bowman’s transformation brings up another point: in science fiction, evolution happens very fast, not in gradual steps but in leaps. It works like the most extreme form of punctuated equilibrium you’ve ever seen–a species coasts for a few million years in placid stability, until bam: a superbaby is born! With three eyes and an extra liver and telepathy! In biology, this is known as saltation, or more colloquially as the “Hopeful Monsters” hypothesis. Nobody takes it seriously… except in science fiction, where you actually can make an evolutionary leap in a single generation. This is the premise of Childhood’s End, and The Uncanny X-Men, and Slan. Theodore Sturgeon used it in More Than Human. Sometimes this is a metaphor for the way an older generation struggles to understand its children. More often it’s simply artistic license. Evolution takes millions of years, but fiction, unless it’s as untraditionally structured as Last and First Men, deals with individual human lives. To talk about evolution, SF writers collapse its time scale to match the scale they have to work with.

Some SF, particularly in TV and film, twists saltation even further away from standard evolutionary theory: evolutionary leaps don’t just happen between generations. People can evolve–or devolve–in midlife, as David Bowman is evolved by the monolith. In the world of Philip K. Dick’s The Three Stigmata of Palmer Eldritch, for instance, you can go in for “evolutionary therapy” and come out with a Big Head. In my plot summary I used the words “level up” ironically, but it really is like these writers are powering up their Dungeons and Dragons characters–suddenly, the hero knows more spells. Written SF usually tells these stories with some kind of not-exactly-evolutionary equivalent–in Poul Anderson’s Brain Wave all life on Earth becomes more intelligent when the solar system drifts out of an energy-dampening field; the protagonists of Anderson’s “Call Me Joe” and Clifford Simak’s “Desertion” trade their human bodies for “better” bodies built to survive on alien worlds. TV shows just go ahead and let their characters “mutate.” Countless episodes of Star Trek and Doctor Who are built around this premise, the most awe-inspiring being Star Trek: Voyager’s legendarily awful “Threshold”, in which flying a shuttle too fast causes Paris and Janeway to evolve into mudskippers.6 This is, again, artistic license: stories focus on individuals, not species. The easiest way to write a story about biological change is through metaphor, by putting an individual character through an impossible evolutionary leap.

Most of the leaps I’ve cited in the last three paragraphs have something in common: they don’t involve natural selection. In science fiction, evolutionary leaps are triggered by outside forces. Sometimes an evolutionary leap is catalyzed by a natural phenomenon, like the “galactic barrier” encountered by Gary Mitchell on Star Trek. The latest trend in evolutionary catalysts is technological. Vernor Vinge has proposed that humanity is heading for a “Singularity”, when exponentially accelerating technological breakthroughs lead to superintelligence, mind uploads, immortality, and just generally a future our puny meatspace brains cannot predict or comprehend. The Singularity is the hard SF equivalent of ascension to Being of Pure Thoughtdom, leading to the less kind term “the Rapture of the nerds.” Plausible or not,7 the occasional singularitized civilization is de rigueur in modern space opera (not always under that name; for instance, lurking in the background of Iain Banks’s Culture universe are civilizations who’ve “sublimed”). Short of the Singularity, a good chunk of contemporary far-future SF involves transhumans or posthumans, people who’ve enhanced their bodies and minds technologically. A pioneering novel in this vein was Frederik Pohl’s Man Plus, about a man whose body is rebuilt to survive on Mars.

Singularities and transhumanism put humans in charge of our own evolution. 2001 puts human evolution in the hands of aliens, as do many other stories, including Childhood’s End. Octavia Butler’s Dawn and its sequels deal with humanity’s assimilation into a species of gene-trading, colonialist aliens. Both books are about humanity’s future evolution, but just as often the aliens have guided us from the beginning of human history, as in 2001–the idea even took hold outside of fiction, in Erich von Däniken’s crackpot tome Chariots of the Gods?. Star Trek explained the similarity of its mostly-humanoid species with an ancient race of aliens who interfered with evolution on many planets, including Earth. Nigel Kneale’s Quatermass and the Pit is a cynical take on the same concept.

So. Having (at tedious length) established that science fiction tends to get evolution (usually deliberately) wrong, what does it mean? Specifically, what does it mean for the ostensible subject of this essay, 2001?

Tales of weird evolution rarely depict change as evil.8 More often they’re about human potential. Evolution is more often a metaphor for progress and growth, personal or social: the Organians are “more evolved” than us not because they can turn into talking lightbulbs but because they possess more knowledge and wisdom. Stories of evolutionary leaps are about the hope that we can become more than we are, the growing pains we suffer in transition, and occasionally the fear that we might not be able to handle our new knowledge and abilities.

It gives me pause, though, that in science fiction growth is so often represented by a kind of evolution that doesn’t exist. As I’ve mentioned, there are narrative reasons for these oddities. An epiphany is more dramatic, and more suited to a story taking place in a limited time-frame, than a geologically slow ongoing process of becoming. And an epiphany can’t come out of nowhere–it needs a specific cause, one more narratively satisfying than the laws of biology. But what we end up with are stories of personal and social progress in which we don’t grow ourselves–we’re grown by outside forces. Our growth as human beings is an emergent property of accelerating technological change, or it’s granted to us by gods and monoliths. In 2001: A Space Odyssey Moon-Watcher doesn’t discover tools himself–the monolith implants the concept in his mind. The human race has to prove it’s worthy of the next step in evolution by traveling to the moon and then to Jupiter, but when David Bowman arrives the secrets of the universe are given to him.

The evolution metaphor in 2001–and in science fiction in general–is a weird, confused, disquieting tangle of optimism, hope, and cynicism. Humanity has the potential to be more than we are, but not by our own effort and not through any process we can control or understand. It’s like science fiction thinks we can’t get from here to wisdom without a miracle in between.


  1. The book explicitly explains Hal’s behavior, and its explanation differs from the one the movie leads us to infer. ↩

  2. The trivia section of 2001’s IMDb page gives the dialogue-free time as 88 minutes out of 141. ↩

  3. I am not one of them. The description is uselessly vague. What book isn’t about ideas? ↩

  4. This is why I bought Mark Twain’s Autobiography, on the face of it a thousand pages of random Grandpa Simpsonesque rambling: Mark Twain’s grocery lists are worth reading. ↩

  5. Sorry. ↩

  6. As a capper, they proceed to have baby mudskippers together. ↩

  7. I’m on the “not” side, myself. ↩

  8. When they are, they’re usually horror stories. Often they focus on a mad scientist who’s devolving people, or evolving animals, e.g. The Island of Dr. Moreau. ↩

Kevin Huizenga, The Wild Kingdom

Walk into a comics shop1 and you’ll see rack upon rack of detailed and carefully rendered mainstream comics–“mainstream,” in comic-shop terms, meaning the style and aesthetic typical of superhero comics. Comics that methodically delineate every hair on a character’s head yet seem to know about as much about the way the human body moves as an octopus man from the planet Xoth. Comics that obsessively-compulsively render every sidewalk crack and windowpane of a street scene, but fail to clearly communicate what’s happening there. Comics whose draftsmanship is at times photo-perfect, but that miss everything that would invest their art with meaning, emotion, or life.

And then there’s Kevin Huizenga, whose comics look like the button-eyed, pipe-cleaner-limbed 1930s newspaper strips of Bud Fisher and E. C. Segar, and are among the most realistic comics currently published. Before I go further, I want to make it clear that this is not genre-bashing. Only a superficial (and dull) interpretation of “realism” would equate it with realistic subject matter. Anyway, although most of Huizenga’s comics are set in suburbia, he often uses fantasy–Curses collects several magic-realist tales, and he’s serializing a post-apocalyptic comic on What Things Do. What I mean is that Huizenga’s cartooning is more evocative–better at seeing and understanding the essence of an experience and translating it into marks on a page.

Take Ganges #3. Glenn Ganges, Huizenga’s all-purpose protagonist, spends the first chunk of the book trying to drift off to sleep and getting stuck halfway there. A hypnagogic state, it’s called. On the second page of Ganges #3 Glenn walks out to his front yard. It’s a clear moonlit night, and the page gives, without even the benefit of full color, an excellent impression of the way light falls on a clear moonlit night. As his thoughts wander, Glenn absent-mindedly walks up a tree. He’s dreaming. And the feeling of reading this page reminds me of how actual dreams feel: the disjointedness, the way one element of the narrative (Glenn’s thoughts) refuses to acknowledge another (the suspension of gravity), the acceptance of surreal events as literally unremarkable (Glenn walks back into the house, observes himself sleeping, and climbs into his own head as though it’s just what you do on a restless night). Dream sequences in comics aren’t usually like this, partly because they usually serve the kind of narrative function they do in movies–i.e., to develop characters or themes through allegory–and partly because drawing a dream that feels like a dream is hard.

Cover art

Huizenga pulls off the same trick at the beginning of the book I’m actually attempting, however circuitously, to review: The Wild Kingdom. Huizenga has given real thought to how the defining features of dreams could be translated to the page. For instance, how do you depict the sudden time-skips that are typical of dreams? Because here’s the thing: skipping over time is how comics normally work. Moment-to-moment panel transitions, to use Scott McCloud’s categorization, are less common than action-to-action or scene-to-scene. So comics have to work to depict actual narrative discontinuity. Huizenga solves the problem by showing Glenn seeing himself at different moments in a single panel as he approaches a house. Then there’s the way that dreams tend to jumble together things that seem to belong to different levels of reality, which Huizenga represents by collaging photographs into his cartoony drawings.2

I shouldn’t spend too much time on this dream sequence, though. It’s just a prologue. What The Wild Kingdom is really about… well, that doesn’t become clear for some time. In a good way–this is one of Huizenga’s more challenging works. The cover design and the binding resemble a mid-20th-century children’s science book. There’s a mock-serious introduction and a fake table of contents. There are paintings of songbirds on the endpapers. So when Glenn wakes up and proceeds to spend Saturday puttering around his suburban home, readers might assume The Wild Kingdom has already wandered off premise. But that’s the point of the book’s first chunk: suburbia is a wild kingdom, a point reinforced when you flip to the back of the book to discover the songbirds on the endpapers are taken from an ad for Ethyl gasoline.

We usually define “nature,” or “the wild,” as what exists where people don’t. Here we have a city, and here we have a farm, and over here, in this stand of trees along the creek, where no one mows the grass, that’s Nature. We tend to assume, when we’re not particularly thinking about it, that “nature” has clear borders, like a square on a chessboard. It’s more complicated than that, of course, as anyone who’s confronted a suburban lawn after a month’s neglect knows. Cities are also ecosystems. There’s a lot going on in cities that’s not under our control–that is, in other words, wild. To say wildlife survives in the cities is understating the case–those pigeons, raccoons, squirrels and feral cats are thriving. Right under our noses are enough predator-prey dramas to keep Marlin Perkins busy for years.

Glenn is woken by a mosquito, and finds a stag beetle in the basement; the beetle is subsequently hassled by a cat. There’s a worm in his apple; he tosses it to a squirrel. On a drive, he sees another car run over a pigeon. A hawk stops to pick up its remains. Again, everything here is closely observed and efficiently communicated. On one page a pigeon pecks at a couple of chili fries. There are seventeen closely packed drawings of just the bird’s head and the fries, without panel borders. The pared-down drawings and the page structure read with a staccato rhythm a lot like the jerky head-bobbing of an actual pigeon. Also interesting: the panels showing Glenn’s car as he drives often enter Glenn’s point of view. Nearby cars are well-defined; so are objects in Glenn’s view as he watches the road, like stoplights and telephone wires. The buildings and trees to the side of the road are built mostly from motion lines, with a few sharp details jumping out from the background–the flashbulb images Glenn picks up out of the corner of his eye. These panels are at once pictures of Glenn out for a drive and maps of where his attention is.

The common factor is that these panels are both representational and… diagrammatic, let’s say. In fact, at times Huizenga’s comics include actual diagrams, some accurate and some parodies (as are the diagrams in The Wild Kingdom).

Just when you think you might be getting a handle on The Wild Kingdom, there’s a commercial break. It’s in color. And much more oblique. And it seems to jump around a lot, like the book has lost its attention span. It starts with a Glenn-substitute attempting to ponder some deep questions, but within a couple of pages the book moves on to Hot New Things, and repeated promises that “you’ll be saved from your own life,” and naked dancing Technicolor people shouting “Yeah!”, and Walt Whitman with an exciting new way to make money. This is a different kind of wild: the mental noise that distracts us from the deep attention to the world demonstrated by the black and white pages. This is the wildness of Glenn’s mind when it’s out of control and bereft of attention span. This section is about wanting, and desire, and how the ubiquitous mass media and relentless advertising that surround us like air sublimate our more nebulous desires into a need for the Hot New Thing. Because, honestly, isn’t the Hot New Thing easier and more fun to think about than the deep questions? It saves us from our own lives!

After a few pages of this, The Wild Kingdom calms down, gradually going from bright colors to muted colors back to black and white. It returns to the nature theme of the first section in a series of short pieces which include pictures of “fancy pigeons” and clip-and-save trading cards covered in bizarre “facts” about the animals we’ve seen. Then the book introduces Maurice Maeterlinck, and its theme comes together. Maeterlinck was a symbolist playwright who also wrote three books of natural history. The closest thing to a statement of purpose in The Wild Kingdom is a long quotation from The Life of the Bee (available on Project Gutenberg), from which I’ll quote part of a paragraph:

Let our heart, if it will, in the meanwhile repeat, “It is sad;” but let our reason be content to add, “Thus it is.” At the present hour the duty before us is to seek out that which perhaps may be hiding behind these sorrows; and, urged on by this endeavour, we must not turn our eyes away, but steadily, fixedly, watch these sorrows and study them, with a courage and interest as keen as though they were joys. It is right that before we judge nature, before we complain, we should at least ask every question that we can possibly ask.

The first-glance take on The Wild Kingdom might be that it’s about a conflict between nature and the suburbs–but, again, these are “sides” that don’t really exist; even in the city, nature is there. The natural world is the example The Wild Kingdom uses to make its real point. What this book is really about, I think, is attention.

Recently there was a psychological experiment that got a lot of publicity. You might have heard about it. People are asked to watch a video of basketball players and count how many times the players pass the ball. About half the people who try this become so intent on the task that they fail to notice when a guy wanders through the game wearing a gorilla suit. The human brain is not an outstanding multitasker; we can do it, but if we juggle too many tasks at once we’re just a little bit worse at all of them.3 There are limits to how much we can focus on, how much input we can take in, at once. I know the brain-as-computer metaphor is massively overused, but at this time of night I can’t think of a more succinct way to put it: the human mind has only so much bandwidth and can run only so many processing cycles at once.

So it’s not such a great thing when too many of our processing cycles are taken up with toothpaste, chili fries, and endless anticipation of Hot New Things. I don’t want to sound too disapproving. I like Hot New Things; one thing dour anti-consumerists don’t always get is that sometimes everyday life is a grind, and a certain amount of daydreaming about Hot New Things can be one way of coping. Every once in a while we need to be saved from our own lives.

Moderation, though, is key. Daydreams are good; it’s not so healthy to let manufactured anxieties (Are our teeth white enough? How clean is the carpet, really?), catchy slogans, and secondhand narratives colonize our attention entirely. To return to the computer analogy, it’s the difference between playing a game on your computer and letting a virus dominate its processing cycles. The Wild Kingdom’s focus on the natural world hidden in the urban landscape is a reminder that it’s important to pay attention to the world that’s actually there, around us. It’s important not to be so focused on our destination that we don’t see the bird about to be crushed by the wheel. It’s almost Buddhist: Huizenga is asking us to be here now.

The Wild Kingdom ends by returning to the hawk that, in the first third of the book, took off with the pigeon. The hawk lands on an electric transformer and electrocutes itself. This, through a series of Rube Goldberg disasters, leads to an apocalypse. The camera pulls away from Earth–peaceful, from so far away–only to see it collide with another planet. This is, I think, a memento mori: a reminder that some things may not always be around, and we won’t always be around, either, and we should pay attention now, and ask every question that we can possibly ask.

Which brings us back to the beginning of this essay, and the question of how Huizenga invested a simply drawn, cartoony book like The Wild Kingdom with so much more conviction than the pseudo-photorealistic comics a few shelves over.

It helps to look.


  1. If you do, you’re braver than I. The comics shop in the town where I live is relatively neat and clean and I’m still not comfortable going in: however pleasant the store is, I’m just too creeped out by the merchandise. (Want to increase sales, comics companies? Try coming up with products that don’t look sleazy.) ↩

  2. Also, I find that The Wild Kingdom’s dream sequence really captures the way dreams often involve a sense of menace without containing anything obviously menacing. Although maybe that only says something about what my dreams are like. ↩

  3. Although multitasking feels easy. Which is why so many people crash their cars while texting: yeah, they know other people can’t handle it, but… ↩

Links to Things

As sometimes happens, especially as winter is coming on, I’m exhausted. There will probably be no new comics this week. I may manage a post or two on the blog. In the meantime, here are some links. I can’t remember at this point how I found them:

  • A New York Times story from 1896 celebrated the death of the three-volume novel. (To read the actual story you’ll have to download a PDF.)

    The system had a deleterious effect upon literature because it required every novelist to spread and pad out his story so that it would fill three volumes, without reference to the normal length of the story he had to tell. Anthony Trollope, in his autobiography, acknowledges this necessity and naively explains his own methods of padding. The result was a school of fiction which was verbose on compulsion, and in which writers had to beat out their stories as thin as possible that they might spread out over the greatest space.

  • An interesting Dutch newspaper comic panel, introduced by the blog The Fabuleous Fifties. The art has a great design sense, and it’s amazing the effects the artist gets with a few simple pen lines. The tastefully colored Sunday strips put the character into a surreal environment, and then deliver a great sight gag in every panel.

  • A review of Zak Sally’s Like a Dog, which made me very interested in getting this book:

    Sally quit his band, settled down, bought his own press and has become comfortable with the process of making and publishing comics. He’s quick to deflate his own sense of self-satisfaction, along with the idea that anyone’s got it figured out. In the end, he says, “it’s the work that counts”. It’s what mattered when comics frightened him, and it’s what matters now that he’s more settled. While Sally wanted to provide the reader context and his own view on his work (because he liked that sort of thing reading other collections), his opinion about his art was no more or less valid than the reader’s.

  • Seth on cartooning:

    I often find that when I’m drawing, only half my mind is on the work — watching proportions, balancing compositions, eliminating unnecessary details.

    The other half is free to wander. Usually, it’s off in a reverie, visiting the past, picking over old hurts, or recalling that sense of being somewhere specific — at a lake during childhood, or in a nightclub years ago. These reveries are extremely important to the work, and they often find their way into whatever strip I’m working on at the time. Sometimes I wander off so far I surprise myself and laugh out loud. Once or twice, I’ve become so sad that I actually broke down and cried right there at the drawing table.

  • Peter Watts explains why scientists don’t always write happy emails:

    Science doesn’t work despite scientists being asses. Science works, to at least some extent, because scientists are asses. Bickering and backstabbing are essential elements of the process. Haven’t any of these guys ever heard of “peer review”?

Some Interesting Links

I won’t try to write full posts about any of these; my brain is so listless this weekend they would likely turn out as empty bloviation.

**1.** Strange Horizons has posted [an article about how fiction becomes urban legend] [tfv]. In 1888 Ambrose Bierce published a hoax article about three abrupt vanishings. Like the [Angels of Mons] [aom], the story passed into folklore (or maybe into [fakelore] [fake]), being reproduced in book after book of weird mysteries.

My favorite detail–indicating the level of “scholarship” that goes into these volumes–is the author who salted his reference books with misinformation to detect plagiarists.

**2.** [There’s a strain of symbiotic bacteria in your elbow] [elbow]:

>The crook of your elbow is not just a plain patch of skin. It is a piece of highly coveted real estate, a special ecosystem, a bountiful home to no fewer than six tribes of bacteria. […] They are helping to moisturize the skin by processing the raw fats it produces, says Julia A. Segre of the National Human Genome Research Institute.

These are not generic bacteria, and the article isn’t using elbows as a random example body part. These bacteria *specifically evolved to live in elbows*.

>Dr. Segre reckons that there are at least 20 different niches for bacteria, and maybe many more, on the human skin, each with a characteristic set of favored commensals. The types of bacteria she found in the inner elbow are quite different from those that another researcher identified a few inches away, on the inner forearm. But each of the five people Dr. Segre sampled harbored much the same set of bacteria, suggesting that this set is specialized for the precise conditions of nutrients and moisture that prevail in the human elbow.

**3.** Kit Whitfield examines one of the rarely-identified [stock characters] [stock] of modern fiction: the [Macho Sue] [whit]. (Via [Slacktivist] [slack].)

[tfv]:
[fake]:
[aom]:
[elbow]:
[slack]:
[whit]:
[stock]:

Habits

The _New York Times_ has an article on habit. Specifically on developing new habits. I have habits I’d like to get rid of myself. Or not so much *habits*, exactly, as a deep, deep rut. I spend hours every day with my brain on automatic pilot and I’m trying to take the controls a little more often. So this looked interesting:

>[B]rain researchers have discovered that when we consciously develop new habits, we create parallel synaptic paths, and even entirely new brain cells, that can jump our trains of thought onto new, innovative tracks.

>[…]

>But don’t bother trying to kill off old habits; once those ruts of procedure are worn into the hippocampus, they’re there to stay. Instead, the new habits we deliberately ingrain into ourselves create parallel pathways that can bypass those old roads.

Unfortunately it’s a Business section article, stuffed with inane marketbabble from people like Dawna Markova, “an executive change consultant for Professional Thinking Partners.” Wouldn’t it be great if American culture could break whatever dysfunctional habit leads us to think “executive change consultant” is in any way a sane job description?

The lede promises an exploration of the neuroscience of habit. The further you read the more obvious it gets that this is a shopworn veneer over an ad for Markova’s and business partner M. J. Ryan’s books and consultancy. Is that a press release I see, peering out from behind the faux woodgrain shelf liner?

Anyway. Back to the oversimplified neuroscience:

>Researchers in the late 1960s discovered that humans are born with the capacity to approach challenges in four primary ways: analytically, procedurally, relationally (or collaboratively) and innovatively. At puberty, however, the brain shuts down half of that capacity, preserving only those modes of thought that have seemed most valuable during the first decade or so of life.

The capacity to approach challenges sounds kind of like the alignment system from the Dungeons and Dragons games. (My level 3 ranger is analytically relational!)

(Incidentally, did you know that if you do a Google search on the single word “alignment,” that Wikipedia article is the second link in the search results? Yes, it scares me, too.)

Here the article gets into standardized testing. Our cultural romance with standardized testing baffles me; we took the damn things all the time when I was a kid. I get the impression that No Child Left Behind encourages schools to arrange their curricula around maximizing their test scores. I sometimes suspect we’re raising a generation of dull, regimented multiple-choice drones who understand nothing more important than the best way to fill in bubbles with a number two pencil.

And so does the _New York Times_. The article hypothesizes that we spend so much time training students to take standardized tests that their brains order themselves around the kinds of thought useful for standardized testing–“analysis and procedure,” in consultant-speak. What’s interesting is that this article, in advising business types, automatically assumes the reader is “an analytical or procedural thinker”–the kind of thinker they associate with standardized bubble-fillers. Maybe if I were a more innovative thinker I wouldn’t have bothered reading this far down.

Just By Existing, You’re Messing With People’s Heads

In the New York Times, Daniel Goleman introduces us to mirror neurons:
>The most significant finding was the discovery of “mirror neurons,” a widely dispersed class of brain cells that operate like neural WiFi. Mirror neurons track the emotional flow, movement and even intentions of the person we are with, and replicate this sensed state in our own brain by stirring in our brain the same areas active in the other person.

>Mirror neurons offer a neural mechanism that explains emotional contagion, the tendency of one person to catch the feelings of another, particularly if strongly expressed. This brain-to-brain link may also account for feelings of rapport, which research finds depend in part on extremely rapid synchronization of people’s posture, vocal pacing and movements as they interact. In short, these brain cells seem to allow the interpersonal orchestration of shifts in physiology.

Isn’t it reassuring to know that, when you’re feeling down, you can make everyone around you miserable, *just because you’re there?* Why not go out, and spread the gloom?

Stare bleakly into the face of the driver as you board the bus. Suddenly, he’s aware of time passing, and the certain knowledge of his inevitable decline. Maybe he’ll drive the bus into a lamppost.

Stop for coffee. As the barista gives you your latte, she turns pale and her hands begin to shake. You make your way to a table and as you pass the conversations trail off and fall silent. From somewhere comes the sound of an indrawn breath and a stifled cry. Tonight, these people will return to their homes and dream of sad kittens. Sure, you’re depressed… but as long as there are mirror neurons, you’ll always know how to make everyone else feel *just as bad as you do.*

Hooray for *schadenfreude!*

A Fable for the Puzzled

>Richard Sternberg, a staff scientist at the National Institutes of Health, is puzzled to find himself in the middle of a broader clash between religion and science—in popular culture, academia and politics.

>Sternberg was the editor of an obscure scientific journal loosely affiliated with the Smithsonian Institution, where he is also a research associate. Last year, he published in the journal a peer-reviewed article by Stephen Meyer, a proponent of intelligent design, an idea which Sternberg himself believes is fatally flawed.

Let’s imagine you’re an English professor.

One day, as you prepare for class, a colleague—for no particular reason, we’ll call him Leroy—rushes in, glowing with excitement. He’s had an amazing insight! It will revolutionize the study of English literature and sell millions of books and government grants will spontaneously appear in his wallet and maybe women will talk to him!

You lean closer, dropping your frappuccino. What is this bold new idea? Leroy is glad you asked. He smiles giddily, drunk on his own cleverness. It has to do with James Joyce, he says. James Joyce was a leprechaun! And Finnegans Wake is a coded message intended to lead the careful reader to his stash of Lucky Charms!
