This pond lily grew all night and broke the surface this morning. I bought it at a nursery and submerged it in my fish pond, which I spent Saturday cleaning out and getting ready for the summer. My previous pond lily didn’t make it through the winter. All week I have watched this new one reach for the sun.
It is difficult to draw things which exist in two media: how to differentiate what is submerged from what is emergent? How to depict the sheer wonder of that bifurcated existence? This occupied me for the length of time it took to do the sketch.
I am reminded of William Stafford’s poem “Connections”:
Last week my good friend Kurt posted a great piece on his blog about “Driverless Cars and Bodiless Brains” (http://resourceinsights.blogspot.com/2018/03/driverless-cars-and-bodiless-brains.html#more). He is referring of course to the news that Uber’s prototype driverless vehicle just ran over and killed a pedestrian in Tempe, Arizona. Kurt points out the folly of thinking that AI will be able to replicate the complicated systems that make up human cognition, in part because the researchers are under the false impression that intelligence is isolated in the brain. Kurt points out
human cognition is not a thing. It cannot be reproduced without reproducing the entire system within which it operates. Human cognition emerges out of the system we live within rather than merely being embedded in it. Cognition is a process rather than a result. But so are the whole host of other processes we attribute to humans: feeling, judging, willing, and perceiving.
I was musing on this as I fingered the nine of clubs that I found in the alley a couple of days ago. It was frozen to the asphalt and glittering with ice crystals when I pried it up. It hardly looked like a playing card, as it was mottled with filth and swollen from the March snow-melt to something like the thickness of a cracker. I put it on a window ledge on my front porch, which faces south and is in the direct path of the spring sun. By yesterday the card had dried out and resembled itself again.
I am–like many of my readers, I suspect–superstitious about cards encountered randomly. I once picked up a tarot card on the road by my student dwelling in Freiburg, Germany. It was the King of Pentangles, which, I found when I looked it up, signifies a person who is a “natural manager and businessperson” who “has the Midas touch.” Now, obviously, I didn’t believe a playing card was magically diagnosing my personality or predicting my future. Certainly I am not in any way a “natural manager,” nor do I have the ability to turn everything I touch to gold. Yet I also didn’t believe the card to be meaningless. It seemed somehow an important omen at a very critical point in my life when I needed to feel that the universe was on my side. I wove the card into my story, and its meanings were ambiguous and varied enough to allow for that. If the card represented a character who handles a task “competently, drawing on his wide range of skills and practical knowledge” as well as being “always dependable and responsible,” that was an aspirational invitation, if not really a prophecy.
What meaning should I draw from this nine of clubs found in my alley? The back of the card tells me it is from a pack sold as part of a magic kit–there is a goofy drawing of a cartoon magician. Some kid, probably, practicing sleight-of-hand on his way home from school, let the nine of clubs slip away. Now the deck is useless to him, I suppose, but he left me a mystery to contemplate. According to one internet source, the nine of clubs represents a person who “has all the power tools they need to construct a successful and rewarding life. Developing a strong work ethic and clear direction are essential for unlocking their inherent potential.” It would, of course, be hard to argue with that. Another site developed this idea further:
The 9 of Club is known as the “Adventurer’s Card”. They like to gamble and are always willing to take a chance. They are intensely curious and when they apply their adventurous spirit to the field of knowledge, they are capable of making profound discoveries that benefit others on a universal scale.
Well, that does sound propitious. Or, more to the point, I feel it is propitious because I am at a time in my life when I sense that I may need to take more chances, to be more adventurous. I am at the tail end of my teaching career, my children are grown up now, and I am feeling the call of adventure. So of course I ally myself with the (admittedly trite and randomly encountered) meanings of this card. I’ll keep it pinned to my bulletin board as a reminder of my psychic situation.
To think of an object as significant in some way is to risk apophenia, “the tendency to perceive connections and meaning between unrelated things.” For any serious Modern, that would describe most of what we think about all day. Because a Modern must scrupulously police his or her thinking for any kind of confusion between facts (empirical truths about Nature) and values (superfluous preferences, prompted by the imagination), pretty much everything we find significant is a mild version of apophenia, a kind of “white lie” we live with but shouldn’t really avow. But that is a skepticism too binary to be descriptive, and it tends to make rigorous Moderns feel either phantasmal or nihilistic. Luckily most of us are not rigorous Moderns, and we instinctively know that we value things because we want to put them to use. The tangled complex of body, mind and world that each of us is relies on narrative, on poetry, on art, on any number of pragmatically fetishized objects or images to marshal our actions and focus our aims.
The term apophenia was coined in 1958 by psychiatrist Klaus Conrad to describe truly debilitating mental states, such as those experienced in early stages of schizophrenia. He defined the condition as “unmotivated seeing of connections [accompanied by] a specific feeling of abnormal meaningfulness.” Anyone who has encountered a person in the grip of psychic mania or a paranoid episode can affirm the misery of too much meaning. And certainly one of the advantages of the Modern world-view has been its heightened critique of human tendencies to over-invest the world with significance. Experimental empiricism provides a powerful critique of human fallacies such as anthropomorphism, hasty generalization, the “gambler’s fallacy,” and confirmation bias. But by the same token, one of the debilitating effects of Modern skepticism is a specific feeling of abnormal meaninglessness, brought on by the fact/value divide. Because Moderns, since the 1620s, have been obsessed by the desire to construct a true picture of the underlying structure of the natural world, they are continually discounting the other instrumental functions of cognition. What is thought “for”? Their answer can only be “dividing fact from fancy.” But such a neat cleavage does not take into account–or at least is not nearly critical enough of–the provisional status of facts and the fanciful uses we make of them.
Science tends to begin in skepticism and end in certainty–starting with a hypothesis and ending with facts. But, as Bruno Latour has pointed out, that is only one cycle in an endlessly looping process. The move from openness and skeptical questioning to certainty is always provisional, and always at the cost of some left-over contradictions and confusions which must be set aside in the interest of going forward. Yet the truly innovative spirit of science is the promise that at some point the case will be reopened, as new information, or even just the excluded old information, must be accounted for. Still, we are often tempted to treat our facts, our theories, our systems, as more solid than they can ever be. We become superstitious about our certainties. And this encourages us to build systems that we promise “cannot fail.” Until they do, as just happened in Tempe.
My nine of clubs card speaks to me of the mysterious significance of the world. No thing is meaningless, because everything is connected and everything is speaking. But neither can we grasp fully what a thing is saying. There is too much potential meaning, and very little understanding. So we proceed in fear and trembling–or at least with a healthy dose of irony. My card is supposed to signal adventure: there is an adventure for you.
UPDATE: for a fascinating artistic take on this subject, see the collaborative project “Agency Apophany”:
This guy was sitting a row behind me at a recent performance by the student orchestras at the University of Minnesota (my daughter plays viola in one of them). I sketched him for a good 15 minutes, during which he never once looked up from his phone. Something about the intensity of his engrossment prompted me to give him a halo and the Latin epithet which means “Behold the Man.” Ecce Homo is both a brand of late medieval piety–one in which devout people focused on depictions of Christ being exposed to the mockery of the Jerusalem crowd (“Behold the man!” is what Pilate says as he leads Jesus before the rabble)–and the title of Nietzsche’s (self-mocking, self-promoting) last book. Nietzsche liked the phrase because it is laudatory and ironic at the same time–it conjures both triumph and ridicule and is tinged with martyrdom. I attached it to this man because I see him, locked in amor fati with his phone, as a martyr of sorts. A sacrifice to the complexities of late Modernity.
Modernity–privileging as it does mind over matter while simultaneously claiming only matter exists–specializes in producing loneliness. Moderns wander homeless among the material agents that invent and sustain them; the assumed inauthenticity of their subjective life leads them to doubt the authenticity of others–with whom they connect mainly through the brutal–but mathematisable–competition of the marketplace which Marx called “the icy water of egotistical calculation” that “resolved personal worth into exchange value.” No wonder mid-twentieth-century Moderns were frightened of being alone in a crowd (viz. books with titles like The Lonely Crowd)–this is not a formula for happy fellow-feeling. But at least loneliness left Moderns time to think. Now the crowd never leaves them alone. Thanks to the internet, Moderns are trapped in an incessant conversation with millions of others frantically vying to be “liked” and clicked on and responded to, all in a virtual space, far from those physical bodies doing the liking and clicking. Alone and crowded! A strange, anxiety-soaked achievement.
This man I have sketched is neither here nor there. He is not here because he is oblivious to his actual surroundings–but neither is he lost in his thoughts. He is somewhere between mind and world, on a flat screen, watching texts scroll by like marching insects. His problem is not alienation but engulfment. He is networked to a crowd whose every trivial thought is hurled at him with increasing rapidity. If he is on Snapchat or Instagram or Twitter or Facebook, then it is a preening, flattering crowd, intent on reducing him to a court sycophant. Like a character out of Dangerous Liaisons, he must present his best life, powdered, bewigged, rouged, to the judgment of others equally dedicated to self-promotion and capable of turning on him in a vicious, seething horde. Is he reading his news feed? Is he checking his bank account? Is he bidding for a snowblower on eBay? Is he looking for love on Tinder? Then it is a wheedling, conniving crowd, pressing in on him, amplifying his desires in hopes of robbing him of his time and money. The “icy water of egotistical calculation” has been heated and accelerated to a boiling torrent.
Worse yet, this man is addicted to the onslaught. Social media turns every smart phone into a dopamine-fueled slot machine. The intermittent reinforcement which the network affords mimics chemical dependency; statistically, this young man will likely check his phone 75 times a day and he’ll spend three hours–and very possibly up to eight–staring into it. He will check his phone within minutes of waking and it will be the last thing he sees before he falls asleep. This is a level of devotion lovers of old could not boast of–there is even a one-in-ten chance he has looked at his phone while having sex.
This is a radical twisting of what was originally meant by “the text.”
The picture below is Albrecht Dürer’s rendering of St. Jerome, the man who translated the Vulgate Bible:
Notice the similarities in concentration: the downward look, the half-closed lids. But there is a profound difference between what the two men are doing. St. Jerome leans over a text that is utterly still. The text is not vaporous or reactive; it is inert. Whatever moves is his own doing, and what he is doing is shifting his attention between a text he is reading and a text he is creating as he translates the Bible from the original Hebrew and Greek into Latin–the common language of his day. He is combining what he receives with what he understands to create new content, but that new text will always only be a still object, a surface with marks on it. This object–this Latin text–will be replicated throughout the late Classical and Medieval world, but slowly, painstakingly, in the scriptorium (and later in the print shop). Human hands will copy strings of letters onto sheets which other hands will collate and sew into books, each one bearing the weight of time and materiality.
Also, notice that Jerome is alone. A book is an object that speaks, but haltingly, locally, singly. Books can network minds, yes, but they remain objects. Each book is unique; it bears traces of contact–marginalia, lunch stains, wormholes–and smells of vellum and dust. A book is a solid node in a ghostly network–like a bus stop on the spirit line. It is built of thoughts yanked from the stream of consciousness, but they have been halted and stilled and incarnated: one can shelter in their solidity. Others can gather there too–but only one at a time. A book is infinitely patient and will stand to one side for millennia, passively awaiting further visitors. The structure of printed text does not shift with time, or with interaction. That contact between the still and the moving–between text and person–is what has made the text so meaningful to civilization–stabilizing language, codifying belief, bolstering the state, and, with the spread of literacy, forming the Modern person. That is to say, the very idea of the individual, which is so central to Western notions of religion and politics, is in large part a function of literacy. As Walter Ong has said, unlike oral cultures, for whom language is irremediably social, in text-based cultures “reading written or printed texts turns individuals in on themselves.” This is why Protestantism championed public education, the goal of which was to turn each person into an individual soul in solitary meditation over the Bible. Reading was a sacred rite of passage. Subsequently reading became a political rite of passage: democracy in its modern form is unthinkable without literacy, nor is industrial society. Science, law, politics, religion, all rely on texts–no wonder the United Nations has defined literacy as a fundamental human right.
Yet the written text owes its structure to oral culture. In his discussion of the shift from oral cultures to text-based cultures, Ong points out that oral cultures use elegance of structure to aid memory: without writing, “How could you ever call back to mind what you had so laboriously worked out?” he asks. The answer is to “Think memorable thoughts.” Oration is characterized by “mnemonic patterns, shaped for ready oral recurrence,” patterns such as balance, antithesis, rhythm, anaphora, consonance. Once writing began, these patterns were transferred: what we call “good writing” has the hallmarks of memorable speech. The figures and tropes of rhetoric which have been part of the literary curriculum for thousands of years are based on hundreds of thousands of years of oral practice. Eloquence aids memory, quite simply, and even though written texts to some extent made memory irrelevant, the physical difficulty encountered in reproducing and transporting books, and their irreducible singularity (only one person at a time can read an individual book), meant that memory still played an important role in written structure. When books are rare, one often has to remember them rather than own them. Moreover, reading (like oration) is linear, and longer texts require memory for comprehension. Reading the Bible, for example, necessitates that readers hold ideas and events from previous chapters in mind as they work their way through the vast labyrinth of stories, proverbs, laws, and prophecies.
In sum, the physical text, like the oration that preceded it, had to be carefully constructed in order to be comprehensible. Since artful construction takes time and effort, it was axiomatic that published writing was reserved for important content. Which is not to say that all writing was eloquent. The origin of writing is in record-keeping: advanced agricultural societies required a way to keep track of their storehouses of grain and pottery and weapons, so memory was outsourced to the clay tablets of scribes. No one needed, in a literate age, to spend a great deal of time making an eloquent grocery list–though in oral times one would do so if one wanted to remember the items. The creation of un-artful language is thus to some extent a function of literacy.
One sees a gradual abandonment of memory-based writing as civilizations age; a good illustration being the decline in the use of poetry for mundane content. Many Greek texts on natural history, economics, agronomy, etc. were written as didactic poems. Lucretius’ On the Nature of Things is an example from the Roman period of a work of philosophy set to verse. As late as the 18th century the naturalist Erasmus Darwin (Charles Darwin’s grandfather) versified a botanical treatise called “The Loves of the Plants”–but today it would be unthinkable that a scientist would turn to verse to express his or her ideas.
But what happens when technology begins to invent new ways of recording and disseminating speech? Ong claims that with the coming of radio, phonographs, telephones, television and magnetic tape, technology had created something new, which he called “secondary orality.” This new form of orality differed greatly in scale:
Like primary orality, secondary orality has generated a strong group sense, for listening to spoken words forms hearers into a group. . . . But secondary orality generates a sense for groups immeasurably larger than those of primary oral culture–McLuhan’s “global village.” Moreover, before writing, oral folk were group-minded because no feasible alternative had presented itself. In our age of secondary orality, we are group-minded self-consciously and programmatically. The individual feels that he or she, as an individual, must be socially sensitive.
Much of the twentieth century was consumed with this new orality, as broadcast media played an increasingly important role in social organization. At first, media reinforced eloquence, as transmissions were not repeatable; but with the rise of recording technologies, oral communication was freed from memory–and to some extent from eloquence, especially since it was occurring alongside a vast production of written texts. In secondary orality, one exploits the emotional immediacy of personal delivery but is no longer disciplined by the formal demands of primary orality.
This leads to a noticeable shift in expectations for public communication. The fact that one does not have to be memorable when communicating means that one can be “informal”–i.e. unstructured, non-repetitive, spontaneous. When informality is allied to the warmth of presence, the result is a kind of illusion of intimacy. Intimate speech is elliptical, allusive, telegraphic because the close relation between communicants provides context. Many things are unsaid because they remain in the storehouse of shared experience–events, beliefs, aspirations can all be “pointed to” without being described. When public communication mimics this intimacy in its embrace of informality, inarticulacy begins to signal close–therefore authentic–relationships.
One can trace this by comparing the rising informality in the discourse of American presidents: from, for example, the speeches of Wilson, which were extremely formal and had to be written down and disseminated by newspapers, to the “fireside chats” of Franklin D. Roosevelt, which made use of the “secondary orality” of radio and happened right in people’s living rooms–to the television version of such a chat Jimmy Carter gave in 1977 in a beige cardigan (demonstrating that visual information was beginning to play an increasingly important role). In the twenty-first century, it is a short distance from George W. Bush deliberately mispronouncing the word “nuclear” to the ungrammatical, misspelled Tweets of Donald J. Trump. Clearly something quite profound has happened: Trump’s supporters feel a degree of intimacy with him purely as a result of the medium he is using.
I would argue that social media like Twitter and Instagram and Snapchat have moved us to what I would call “tertiary orality”: a stage where public communication lacks the memorable structure of orality because it leans on the permanence of text and lacks explicit content because it pretends to intimacy. If secondary orality convinced us to be “group-minded self-consciously and programmatically” in ever-larger groups, tertiary orality hijacks language and uses it primarily as an incessant reinforcement of group belonging. Tertiary orality mimics intimacy because its use is not to develop thoughts or arguments but simply to mark one’s place in a social network. The average text takes less than five seconds to read, so there is literally no there there. Yet 913,242,000 of these minimal texts are sent every hour of every day worldwide. It is not their content but their status as action that matters. This is in part because human beings can’t actually be intimate with more than about 50 people. Social media, with its capitalistic love of expansion, tries to employ the discourse of intimacy to push the user past the possibility of intimacy. If Marshall McLuhan famously said “the medium is the message,” we might say the message is “I text, therefore I am connected.” Affirmation, confirmation, are all that is required for the most part. Like the reassuring caws of agitated flocks of crows, most messaging is conducting the primary business of holding the group together. The result is the social enshrinement of inarticulacy and superficiality as signifiers of belonging. This would explain the widespread use of emojis, which essentially replace linguistic structures with visual symbols, and it would certainly explain Snapchat messaging, which is largely sub-literate. It would also explain the over-use of exclamation points, all-caps, and other textual devices that are the equivalent of speaking more loudly when you think you are not understood.
It is a language of likes and dislikes, a language that points to content but never conveys it. In this it resembles most the parlance of advertising (which got to false intimacy as a mode long before the internet). What is texting and Snapchatting and Instagramming but the parlance of the commodified personality?
To return, finally, to the young man depicted above, the martyr who began this conversation: he is scrolling and clicking and swiping right–he is passing on gossip and terse rejoinder and trivial commentary on his status that once would have been reserved for the ephemera of breath. And yet does he have time for reading the texts painstakingly carpentered to outlast the flood? Does he have time to be alone with either his own thoughts, or the thoughts once deemed worthy of remembrance?
We have entered the age that we might call “the tyranny of the thumb.” What can’t be said by a thumb isn’t worth saying. Many will admit that this is frustrating, even debilitating at times. Yet we must love it! Moderns must accept the Modern! Ninety-five percent of people under 25 have phones–a truly remarkable market penetration. I mentioned in my first paragraph the phrase amor fati, which is translated as “love of one’s fate.” Nietzsche was an expositor of this idea, as he puts it in Ecce Homo:
My formula for greatness in a human being is amor fati: that one wants nothing to be different, not forward, not backward, not in all eternity. Not merely bear what is necessary, still less conceal it–all idealism is mendacity in the face of what is necessary–but love it.
It seems that we are to take this world of phantom intimacy, this swarming embrace of a needy and demanding virtual crowd, as the condition of our time. I feel that gravitational pull as I walk through the world, among those who gaze into their hands. Yet I still don’t carry a phone most days (and when I do it is an antique flip phone). I have never learned to text.
Something the Minnesota musician Charlie Parr said about the nature of time has stuck in my mind: in an on-line interview he describes time as being like the curl of waste aluminum coming off of a lathe, an image he gets from watching a craftsman mill a resonator cone for a steel guitar. Time isn’t linear, Parr says, it twists and turns and folds back on itself in unpredictable ways. We experience certain moments again and again, while others are left behind. Sometimes it even seems that the past is still ahead of us.
I am connecting this with my sketch of a building off of Newbury Street–Boston’s boutique district, which comprises 19th-century Back Bay mansions with slate roofs and sandstone gargoyles re-purposed as Anthropologie and Juicy Couture stores. It is only four blocks from Newbury to the John Hancock Tower, a 60-story exemplar of the international mid-century modern style which overtook most cities of the world after the Second World War. The minimalist outlines and maximalist scale of the Hancock are typical of architectural modernism–the glass monolith looms over the Back Bay, an inscrutable alien presence, defiantly asserting itself as utterly dominant over its surroundings. This is how Moderns see the future.
For Moderns, the arrow of history flies in one direction and carries nothing with it. The future is a liberation from the shackles of the past: the theology of enlightenment requires that the future be the paradise that justifies this rupture. We are all hurtling toward freedom, toward a hard-edged world in which every decision is rational and every consequence known, a world freed from the shadows of superstition and fear.
This is reflected in the cities Moderns imagine they will inhabit. The designs of visionary architects like Mies van der Rohe and Le Corbusier depict vast towers surrounded by sky and empty space. There are no previous styles in the radiant city of the future. There is no past. Like the Heavenly Jerusalem in the Book of Revelation, the city of the Moderns would descend and supplant everything existing before it with crystalline order.
But all attempts to realize this future have had unintended consequences, not least being surprising resistance from the past. Many people instinctively recoiled from the Modern vision of the city; it seemed to them that on the one hand the human will dominated the landscape, but on the other humans had become insects, crawling in the shadow of their own triumph. Architects were surprised to discover that people actually liked things built at human scale, a lesson driven home in the 1960s as critics like Jane Jacobs began to push back against the modernist consensus. The historic preservation movement gained tremendous influence from the mid-’60s on. When the John Hancock Tower went up in 1976, architects had to respond to public outcry that the building would cast a shadow on Trinity Church, a National Historic Landmark. The past, it seems, had a vote when it came to determining the shape of the future.
Charlie Parr’s metaphor expresses this well: if time is not linear, but clumpy, recursive, then we are not so easily shed of it. Parr’s experience of time is informed by trauma–he has never quite gotten over the death of his father. His grief over that event has not been left behind, it keeps confronting him; it is in fact the mainspring of his art, as he was driven to write songs in reaction to his sorrow. But it is not only trauma that stays with us. One could also reference joy, or beauty, or any kind of intense fulfillment, the experience of which bobs in our wake but then is re-encountered ahead, flooding future moments with nostalgia but also with value. The memory of both joy and pain gives consciousness its significance and ensures that time is not the simple ticking of a chronometer. The human personality is almost entirely composed of memories–when we lose them we are no longer ourselves. What is death but the end of memories?
I thought of this recently while reading the reviews of Blade Runner 2049, the long-awaited sequel to Ridley Scott’s 1982 film. I have watched the original Blade Runner many times. When it came out it was startling in part because of its complex vision of the urban future. The setting, Los Angeles in 2019, is rendered as a disorienting composite of rotting, abandoned buildings and sleek ultra-modern towers, of film noir detectives in raincoats riding in flying cars and Asian street vendors selling bio-engineered snakes. While science fiction films had sometimes depicted the city of the future as dystopian, in general the look had been in line with Modern fantasies–massive domes and pylons, streamlined trains and airships. Most had no street life to speak of, and certainly none featured dilapidated neighborhoods decorated in superseded architectural vocabularies.
A central source of Blade Runner’s hybridity is the set itself; for budget reasons, the movie was filmed on a pre-existing Warner Brothers set which had stood for years in a studio lot in Burbank and had been used in countless movies, including The Maltese Falcon. The set was called “New York Street Scene” because it reproduced a typical block of 1920s Manhattan. When director Ridley Scott began the pre-production design of his movie, he and futurist Syd Mead (whom Scott had originally hired to design his flying cars) decided to update the New York set by applying an overlay of ducts and pipework to the original architecture, giving the impression of a world which was retrofitted, even jury-rigged. The future was a tangled encrustation appliqued over a dilapidated past.
This felt intuitively right: by the 1980s Americans were living in environments that were visually chaotic. When they went downtown they walked past 19th-century brick storefronts, Art Deco banks, and sleek glass modernist boxes. Americans lived in neighborhoods where Queen Anne mansions mixed with craftsman bungalows and cookie-cutter ranch houses from the 1950s; on the highway they drove past the “Googie” architecture of fast food restaurants and shopping plazas, often juxtaposed to farm houses and weathered barns. Beginning in the ’60s, huge swaths of urban infrastructure had been simply knocked down for parking lots or to make room for freeways, giving every city the look of having survived a bombing. In addition, many Americans had lived long enough to see once-modern buildings beginning to age, both stylistically and structurally. The average American city in 1980 seemed stuck between a past that had been half demolished and a future that had only been partially realized–or had been partially realized and then abandoned.
The bewildering visual contradictions of Blade Runner are thus the contradictions we all live with: Deckard, the eponymous Blade Runner, has an apartment with both a personal computer and an old-fashioned piano, on the surface of which, next to his futuristic gun, are old photographs in antique frames and a stack of classical sheet music. There is a glowing orb of Saturn above an oriental rug. Past and present are all jumbled together–as they are in the street down below, where vehicles out of sci-fi comics share space with jingling bicycles ridden by people wearing coolie hats.
Of course, the contradictions of past and present are central to the movie in other ways. In particular, the plot centers on the question of what makes someone really human. Apparently a good way to tell an artificial human from a real one is to test for memory–at the beginning of the movie, Detective Holden says to the replicant Leon, whom he is interrogating: “Describe in single words, only the good things that come into your mind about… your mother.” This rather Freudian query is met with gunfire. It seems that the lack of a past is one of the things driving replicants crazy. Later, industrialist/inventor Tyrell tells Deckard his corporation is endowing replicants with artificial memories–“If we gift them with a past, we create a cushion or a pillow for their emotions and consequently we can control them better,” he says.
So central is this idea that memory is what determines humanity that at the climax of the movie, replicant Roy Batty, Deckard’s nemesis, asserts his moral superiority over the men who are trying to erase him by asserting the validity of his real–not his manufactured–memories: “I’ve seen things you people wouldn’t believe,” he says, “Attack ships on fire off the shoulder of Orion. I watched C-beams glitter in the dark near the Tannhäuser Gate. All those moments will be lost in time, like tears in rain.” This speech confirms that Roy, the artificial man, is completely human–more so than Deckard, as it turns out.
I titled this post with an oft-repeated quote from Faulkner. It seems more significant now than ever. Modernity assumes a clean future, divested of the past’s tendrils. The past is now pulling hard: American politics is being driven by ghosts from the Civil War and by the nostalgia of the white working class; the “Me, too” movement is dredging up abuses which the abusers thought were long buried; even our climate system is refusing to forget the past, as the accumulating effluents of our fossil fuel era begin inexorably to work our doom. Nothing is thrown away, nothing is forgotten. The more we try to make a clean break to the Utopian future, the more catastrophic the blowback, it seems. The model for our Modern moment, as Bruno Latour has said, is not Prometheus but Oedipus.
Apparently Netflix has decided to reboot the 1960s series Lost in Space. Coincidentally, I have been watching the series on Hulu–something to do while washing dishes. But since a side interest of mine is science fiction film, I’ve actually been doing a lot of thinking about the show and its place in American culture.
Two television series made a huge impression on me when I was in grade school in the 1960s. One was Star Trek, Gene Roddenberry’s paean to the space age (1966-69). That show has worked its way deep into the Modern psyche, spawning a vast empire of spin-offs and novelizations and Comic Con costumery. The other was Lost in Space (1965-68), which has inspired only one really terrible movie (1998). But I have to admit, as I re-watch the serial episodes of Lost in Space, that it was just as important to my young mind as the more cerebral and adult Star Trek. In fact, I was surprised how viscerally I remembered those early episodes which depicted the Robinson family, their pilot Don West, the stowaway Dr. Smith, and the barrel-shaped, lobster-clawed robot that served as Smith’s comic foil as they struggled to survive on a desert planet an unknown number of light-years from Earth.
What strikes me forcefully, seeing the show as an adult, is how utterly preposterous the premise of the show was and yet how completely appropriate it was to the American experience in the 1960s. For starters, sending a nuclear family into space to colonize a distant planet in a ship no bigger than a split-level suburban house is crazy–the fuel and supplies alone would require a vessel many times bigger. The pie-shaped “Jupiter 2” spaceship bears a superficial resemblance to the flying saucers of Forbidden Planet and The Day the Earth Stood Still, two 1950s films that made science fiction respectable in a decade of bug-eyed-monster movies intended for teenagers at drive-ins. But in producer Irwin Allen’s version, the ship was like a magician’s hat–the Robinson family is constantly pulling various items of heavy equipment–including a full-sized caterpillar tractor vehicle–out of a space that could barely have contained the family’s luggage. Yet the size of the ship is oddly appropriate: the real forebear of the Jupiter 2 is Disneyland’s “House of the Future.” This was a sleek modular pod built entirely of plastic components–everything from the fiberglass outer shell to the polyester couch pillows–and stuffed with technological innovations such as a microwave oven, an “ultrasonic dishwasher,” an intercom system, and modular sinks that raised and lowered to accommodate the height of the user. The House of the Future was a joint project of Disney, the Monsanto Corporation and M.I.T.–and was intended to be a 3-D sales pitch for the home products that would soon flood the market. The Robinson family is on some level doing what most American families did throughout the 1950s and 1960s–leaving our crowded post-war American cities to colonize the wilderness of subdivided farmland, moving into ranch houses and split-level Cape Cods that looked as if they had been dropped from the sky.
In that sense, the “space” of Lost in Space is part of a social fantasy. After twenty years of collectivism, which enabled America to survive first the Great Depression and then the Second World War, America was reverting to its individualist ethos with a vengeance. And that meant the return of the frontier as a trope. One of the most popular genres in this period was the Western–I remember very well the ubiquitous cowboy shows on television–Bonanza, Gunsmoke, Maverick, The Rifleman, etc.–and the CinemaScope horse epics in the movie houses. Another important TV show of the time returned Americans to even earlier frontier experiences: I was an avid fan of Daniel Boone, which was itself inspired by the Disney film Davy Crockett, both starring Fess Parker. America has always been able to romanticize the frontier as the source of its virtue. The “untamed wilderness” is the necessary condition for American individualism, as unclaimed spaces provide the opportunity for advancement without the visible presence of politics. A man and his family can carve out a living in the forest or on the plains, or so the story goes. Frontier space is the theater for the realization of the libertarian dream, since there is nothing to interfere with the natural relationship between hard work and bountiful results (the prior claims of indigenous people notwithstanding).
But by the 1960s we had run out of wilderness–so we naturally turned to spaces beyond the planet. It is no accident then that John F. Kennedy called space the “New Frontier,” a phrase Captain Kirk echoes in Star Trek‘s opening monologue, when he calls space the “final frontier.” The “space” everyone is talking about is not a place, but an ideology of expansion. In America this usually translates into a denial of the frustrations and limitations of community. When social conflicts become acute, Americans are tempted to imagine they will “light out for the territories,” as Huck Finn puts it. Just start over somewhere further west. This was the story of the 1960s, as the suburbs became the easy solution to the social problems of the American city. The Robinsons are, according to the story line of the show’s pilot, fleeing from an overpopulated, polluted earth, trying to start over again in a new land–and this was just a mirror image of the flight from urban spaces.
It is important at this point to remember that the precursor to Lost in Space was the Swiss Family Robinson–the 1812 novel by Johann David Wyss. Wyss’s book is the source not only of the Robinson family’s name but also the show’s basic plot predicament. In Wyss’s story, a family en route to Australia is shipwrecked on an island in the East Indies. They manage to salvage supplies from their ship and set up camp on the island, surviving by dint of their fortitude, intelligence, and faith. In fact the book was intended to teach children “about family values, good husbandry, the uses of the natural world and self-reliance,” according to Wikipedia. The assumption underlying these didactic lessons is that unclaimed spaces are testing grounds for God’s providence; the world is so constructed that well-disciplined and rational Protestants can thrive.
This is the piety of empire. The characters in Wyss’s book survive because they salvage guns and domestic animals from the ship they came in. They proceed to dominate the “empty” space around them with their technology and their agriculture. This is the project of Europe throughout the 18th and 19th centuries, and this is how we envision the extension of our civilization into outer space. It is our “manifest destiny” to spread ourselves across the stars. Here is a grainy still of the Robinson family replicating the Protestant piety of the original story. In Episode 5 of season one, the family has survived a perilous sea journey and when they are safely on land, Maureen Robinson hands her husband a Bible and they form this somewhat awkward tableau.
The difference between the Swiss Family Robinson and the Space Family Robinson was that the Swiss Family Robinson was making itself at home on earth, a planet on which it had evolved and to which it was admirably adapted. The characters in Lost in Space inhabit a singularly hostile world–it has an eccentric orbit which causes it to swing wildly between hot and cold. Nor did they bring with them domestic animals or the tools to create the kind of agriculture they would need to survive. There are scenes showing them planting a little garden bed, the kind that suburbanites toy with in the back yard. But the planet’s hostile climate would have made any farming problematic.
And that is the point I made at the outset: this expedition is not really equipped to be self-sufficient, any more than the new suburban pioneer is equipped to use his yard for anything but an ornament. The actual biological basis of America’s highly industrialized civilization had, by the 1960s, become invisible to most people. Food appeared in grocery stores, water from a tap. Whereas the original Swiss Family Robinson story was intended to teach “husbandry” and methods of self-sufficiency, its 20th-century successor was perfunctory at best in its treatment of such subjects. The original story was intended to teach the reader something about life on earth. Lacking an earth, Lost in Space must fall back on family drama and a constant supply of hostile aliens–much as the westerns of the 1950s and ’60s were never about animal husbandry but about fighting Indians or “bad guys.”
I have always found it curious that the setting of John Ford’s classic westerns is Monument Valley on the Arizona-Utah border. Having lived in the Valley myself I know it is high desert, not much good for farming (without irrigation) and not even great for ranching. The Navajo manage to scratch out a living with sheep and goats. But Ford was attracted to the cinematic qualities of the place–its barrenness, its fantastic rock formations. These formed a compellingly alien backdrop to his stories of bloody confrontation. In similar fashion, Lost in Space envisions its foreign planet as largely desert. Though there are jungle scenes, and there is an ocean, the Jupiter 2 sets down in a landscape very similar to that of a John Ford movie. The spaceship becomes the lonely ranch house, the outpost of civilization, ever vulnerable to hostile interlopers.
Space, that is, has become increasingly empty in the American imagination. Like the “white spaces” in 19th-century European maps of Africa, the landscape is blank to signify its availability. Something we can write our destiny on in large letters. Something we can fill with our desires and dreams. The problem is that such spaces exist only in the imagination. A desert is only seemingly blank–its geology and ecology are real and complex and easily disrupted. Even outer space is not really empty–it is, for example, full of fierce radiation which, as we are finding out in our plans for a mission to Mars, makes any long-term voyage through it very problematic for living beings. Thus our willful simplification of space can hide many dangers–and can also become a kind of self-fulfilling prophecy. As I write this entry, the American territory of Puerto Rico has been utterly devastated by Hurricane Maria. People are trying to survive without electricity–thus without clean water or food or air conditioning or medical help. What was in the tourist brochures a tropical paradise has become a hostile desert–the forests stripped, the cities decimated, roads and power lines out of commission, and (most tellingly) the small but promising agricultural sector utterly debilitated. Overnight Puerto Rico has become the dystopian future that many contemporary science fiction novels predict. And of course the energy that amplified Maria’s destruction was provided by our collective inability–and when I say “our” I mean Americans–to face the reality of climate change. Ignorance makes its own deserts.
To be lost in space is to not know where you are. We imagine we are in a providential narrative of inevitable progress and perpetual expansion. That very assumption is not only blinding us to our true position, it is actively working to make the one space we really occupy more and more hostile to us. We are increasingly living on an alien planet–one which we ourselves have alienated.
The one surviving meme from Lost in Space depicts the robot waving his arms wildly and shouting “Danger, Will Robinson.” The comic paradox of a supposedly unemotional machine hysterically gyrating like an overwrought metallic version of Oliver Hardy is what makes the meme stick in the mind, I suspect. But the robot’s warning takes a more sinister tone in my own consciousness. Part of the robot’s function is to act as a kind of cybernetic guard dog for the family. Its sensors are always interrupting the Robinsons’ sense of equanimity by detecting dangers just beyond their ken. The happy nuclear family does not know what is coming. Will it heed the warning? Will we, lost as we are?
Two weeks ago I took a group of poetry students aboard a river boat for a morning’s excursion. I had brought my guitar along, just on the off chance that it might prove useful. About half-way through the trip one of my students picked it up and gave us an impromptu concert, mostly playing the folk songs of Gregory Alan Isakov. As we sat looking at the high river bluffs sliding past us in the thinning light of September, the plaintive blend of guitar and voice seemed to infuse the valley with significance. Now, my formerly Modern self would be quick to call that a projection of a human feeling onto a blank universe; but these days I question that kind of knee-jerk dualism. The song, singing wistfully (as most modern folk songs do) of the provocations and limits of desire, awakened my awareness of time and loss and the preciousness of the moment, a feeling inseparable from the swift passage of the river through the rugged scars of its own corrosive past, and from the green canopy of life, perennially knitting those wounds into a home.
This experience reminded me of a moment some years back, when I attended the Great Dakota Gathering, an event held in our river town every year. It is a time when indigenous people who originally inhabited the land are invited to return for a weekend of traditional dancing, drumming and prayer. The drum circle, with its attendant singers, is my favorite part of the weekend. The sound of Dakota voices bouncing off the bluffs opens a gulf in the day, revealing a vertiginous view of the deep past. We can read about Indian removal policies, but to hear the echo of a nearly-lost language amplified by limestone cliffs is to connect viscerally with what came before, and what remains. The Dakota experience was and is shaped by this Mississippi valley, and the valley was shaped by the Dakota–who farmed it, burned it, hunted it. The return of the Dakota sound brings me that reality.
Loss is real and continual, as is growth and adaptation, erosion and alluvial deposition. To feel something about a landscape is to acknowledge it as a source of consciousness–giver of metaphors and plots, provider of the coordinates and the subject of our narratives. Landscape is both agent and stage. Antagonist and dramaturge. Framework and substance. To feel this is not to “project.” Moods, as I have been saying, contain information.
There is an odd little poem by William Stafford I’d like to insert here. At first glance it seems hopelessly naive, and the reader may understandably resist the “Assurance” the title promises. But, as usual with Stafford, the poem gets a bit more pithy as you read it over again.
To be non-Modern is to reject the false divide between human and world. This does not mean that you’ll be okay. Stafford’s poem says you are in the middle of a storm. You are like Lear, with the whole world pouring down on you. You are “aimed since birth” which implies there will be no rest until you quiver in the bull’s eye (and thus end your mortal career, to quote Thoreau). But you are not alone. Especially if you can hear the deep sound of the landscape. This is the only earth you get; you can’t escape it because at bottom you are it. It is intrinsic to your dreams, it is the shape of your intentions.
This quote is from Paul Gauguin–I wrote it down while visiting the “Gauguin: Artist as Alchemist” exhibit at the Art Institute of Chicago. The exhibit displayed aspects of Gauguin which you usually don’t see–his work as a wood carver, print-maker and potter. Beside many of his more famous paintings were objects he had chiseled from wood or sculpted from clay. I was struck by the pair of shoes he had made for himself during his time in Brittany–and I was intrigued by the idea that a sound could guide a painter. As I reflected on it, I realized that it explains a lot about Gauguin’s work. He wanted his paintings to be heard, not seen.
I have always been a fan of Gauguin–when I was a young man I kept a print of his painting Merahi metua no Tehamana on the wall of whatever apartment I was renting. I was happy to see the original of that painting in the Chicago exhibit, and it was as I remembered: a brooding young woman in a blue-and-white striped “mother Hubbard” dress, a red gardenia behind her ear, stares at the viewer. Behind her are mythical figures and mysterious glyphs.
What is remarkable about Gauguin is his use of flattened perspective and bold, simple colors: he turned his back on the meticulous realism of the French Academy, much as the Impressionists had done before him. But his paintings are not just “impressionistic.” The Impressionists were keen to give a full account of light–adding time and motion to painting, emphasizing the ephemeral effects of light, privileging color and texture over outline. They wanted to find a way out of the static realism which Western art seemed trapped in. But their solution still assumed the Modern notion of objectivity–they just developed a more fluid kind of objectivity, one that reflected a sophisticated understanding of how human perception grasped the world. Gauguin’s interests were different: he was looking for a way to express the intensity with which the world grasped human perception. He was interested in the world’s agency. He was interested in myth.
The difference between seeing the world and hearing the world is profound. Sight is directive: you look at something. Sight encourages the notion that the world is secondary to your intelligence and your will. Hearing reverses that dynamic: you listen to something. You are a recipient of sound. In fact, you often hear a sound before you know what its source is. This is why a sound can frighten us–sound is omnidirectional, while sight is unidirectional. Sound is also penetrative, corporeal. Light waves are invisible, but sound waves are palpable–there is no separation between a sound and your hearing of it. You can pretend to be a disembodied viewer, but you can’t be a disembodied hearer. Sounds are profoundly wedded to their environment, as they are shaped by the space around them. While every sound has a distinct source, sound waves can be immersive, engulfing: can echo and reverberate. When a sound reaches you it not only bears information about its source but also about the space it has traversed.
We always acknowledge that images–real as they seem–are on some level merely a trick. They are made of reflected light. The source of a color or a shape is ultimately the sun–which makes vision oddly spectral. It is imaginary–from the Latin “imago,” meaning “image or likeness.” The world is appearance, and appearances deceive. But sound is always genuine. It begins with its source. We can mistakenly identify a sound, but we don’t feel that it is an illusion. Even when it is a recording, it is still a sound, not the image of a sound.
Part of that is because sound is already an abstraction–it lacks the comprehensiveness of sight. It carries less information–which is why it’s easier to store music on your computer than it is to store movies. But it is also less likely to be confused for something bigger than what it is, which means (in systems theory terms), it can point away from itself to the larger, ever-unknowable world. To say that something speaks implies it has to be interpreted. A picture may be worth a thousand words, but that means that a picture can dominate the conversation and silence the viewer.
Gauguin simplified his paintings, reduced them from elaborate imitations of the “real” to more abstract designs; his colors and lines speak more directly, the way sounds do. But like sounds, his designs are not complete worlds. His paintings point, not to themselves, but to the mysterious world that is their origin, the way sounds turn us toward an environment and ask to be interpreted. A photograph of a Tahitian woman may resemble her quite accurately, but it doesn’t sound like anything. It is silent in the sense that there is nothing more for the viewer to say.
One of my justifications for this blog is to express a central claim: our technology has fooled us into believing we can accurately depict, and therefore control, the world. But the meaning of the Anthropocene–and its looming global blowback–is that the world has always been opaque to us. If we now live in the age of “unforeseen consequences,” then we have never seen clearly enough. There is more truth in a sketch than in a photo, therefore: the sketch contains the trace of the actor, and exposes the incompleteness of the action. A painting may say more than a photo because it says so much less. What it does say, you might be able to hear.
I was just at the La Crosse Folk Festival in Wisconsin where my daughter and I participated in a song writing contest (we came in fourth, in case you are wondering). I was reflecting, as I listened to a variety of compositions played on a sunny afternoon in a big circus tent, on the power of the guitar to communicate human emotions. This guy, for example, was way down inside himself and wanted us to go there with him, through the medium of his instruments.
The guitar has only recently become the prime instrument for the extroversion of feeling. Guitars have always been around, but became popular at the turn of the 20th century (surpassing fiddles, for example) primarily because they were fairly easy to manufacture and make available inexpensively through mail-order catalogues. Guitars could reach out-of-the-way communities at a reasonable price: poor Appalachian whites, or southern blacks, for example, took up the instrument with enthusiasm. And it turns out a guitar is an excellent portable accompaniment to the human voice. As one online commentator put it, “It’s chordal like a piano, though not as strongly so. It’s expressive and vocal like a sax, though again not as strongly. It covers a pretty broad range of notes across what people ordinarily sing in. You can get many sounds ranging from legato almost violin tones to percussion.” A guitar is the human instrument par excellence, which has led to its current status as the most popular instrument in the world.
This reminded me of a thumbnail sketch I made of a John Singer Sargent painting this summer. The work was El Jaleo, a Spanish tavern scene. It was a study in chiaroscuro, depicting a flamenco dancer at the climax of her performance, but sitting against the wall in the background are a line of guitarists and a singer, all of whom drive the dance to its emotional crescendo. I sketched one of the guitarists, because he seemed to perfectly represent the use of the guitar as a kind of emotional prosthesis. I also sketched the singer, whose head is thrown back, eyes closed, throat vibrating with the intensity of the moment.
You can view the painting if you google it, but here is Sargent’s own preliminary sketch for the painting (in keeping with this blog’s sketchy aesthetic):
Of course we are left to imagine the music, but we have all been similarly affected by such a moment–when the human voice, accompanied by the voice-like guitar, seems to put us in a heightened state.
This took me in turn to the poet Wallace Stevens, and his poem “The Man with the Blue Guitar.” It’s a long and complicated poem, but here’s the bit people tend to remember:
The man bent over his guitar,
A shearsman of sorts. The day was green.
They said, "You have a blue guitar,
You do not play things as they are."
The man replied, "Things as they are
Are changed upon the blue guitar."
And they said then, "But play, you must,
A tune beyond us, yet ourselves,
A tune upon the blue guitar
Of things exactly as they are."
If you are a Modern, you believe that “things exactly as they are” must mean their material substantiality: that is, things are real insofar as they are exterior to the mind and measurable. Stevens’ poem takes issue with this. Moderns are obsessed with epistemology, and this makes them poor empiricists sometimes. They are quick to divide the real from the unreal, based on a prejudice for hard objects. Well, a Modern might say, we might measure your heart rate, the presence of certain chemicals in the blood. . . the frequency of the sounds you are hearing. . . but the song itself isn’t “real”–it is an epiphenomenon, a mere by-product of physical processes. But is the moment of union we feel when the guitar seems to pluck our very heartstrings unreal? That would imply that it could not be a cause, it could not make things happen. And yet–to get back to the song-writing contest– here was a tent full of people, brought together by a mutual interest in–not the guitars themselves, which are undoubtedly “real,” but not interesting as simply wooden boxes, and not sounds alone, which are also “real” but not uniformly interesting (as we wouldn’t be gathering to hear five-year-olds bang on guitars)–but rather in particular patterns of sound that make particular affects arise in human beings. Patterns make a difference and difference is, well, information. A song is as much information as an equation is, and its results are even reproducible, which explains record sales.
But what kind of information? Oddly enough, the emotional affects stimulated by the guitar seem to increase one’s sense of reality. Song is pregnant with meaning–that feeling of congruence between the inner and outer world. Usually that congruence is not a proposition or claim so much as a quality, a tone, a value. Grief, or melancholia, or nostalgia, for example. Joy, sympathy, affirmation. Such feelings, once evoked, speak to our existential condition: our limits, our agency, and the conflict between agency and limitation. It’s interesting that Stevens claims the guitarist is a “shearsman of sorts”–meaning, he is at first reducing something, clipping it. That is of course essential to art: first there must be a frame, a boundary, into which the composition can fit. Without the limits of art–the musical scale, the six strings, the finite number of frets, the enclosed volume of air in the guitar’s body–there can be no coherent pattern. Once there is a pattern, the relations within the pattern–harmonies and dissonances, tonal dynamics, etc.–can create meaning. And when musical patterns are combined with the separate patterns of human language, as happens in song-writing, we have a doubling of meaning. On the one hand, music points beyond language, to those existential conflicts that arise from our being individual organisms embedded in a larger tissue of life; on the other, language, with its capacity for narrative and for propositional statements, connects the more abstract, universal feeling tones to particular characters. The lyric “I” of the singer reconnects the abstraction of music with the human particular.
I spent some part of my summer reading Cary Wolfe’s What Is Posthumanism? Wolfe is trying to explain the systems theory of Niklas Luhmann, another one of those prominent European thinkers we never hear about. Wolfe summarizes systems theory with the provocative statement that “systems increase their contacts with their environments paradoxically by virtualizing them.” To put this in human terms, we create little virtual models of the world in our heads in order to understand the boundlessly complex world outside of our heads–these virtual models are how we contact the world. But they can never be the world–the world won’t fit in your head. So far, that’s pretty much philosophy since Kant. But Luhmann is big on process: he claims “Meaning is the unity of actualization and virtualization, or re-actualization and re-virtualization, as a self-propelling process.” We are constantly readjusting our models because they are never enough. Fated to mistake our virtual worlds for the real world, we nevertheless hunger for something beyond. For more reality, that is. For systems that come at the world through meaning (like human beings), Luhmann says that “Meaning becomes for them the form of the world, and consequently overlaps the difference between system and environment.” In other words,
Two curators in the cafe of the Gardner Museum in Boston. They are listening to a consulting conservator “mansplain” the process of preserving a Roman sarcophagus, part of the permanent collection of the museum. The Gardner Museum is the former private residence of Isabella Stewart Gardner and it features a spectacular open courtyard in which a variety of classical bits and pieces disport themselves among the hosta and ferns. One of these is a Roman tomb with elaborate carvings. Here is a sketch by a better artist:
Apparently the skylight above the courtyard developed a leak and the sarcophagus has been stained by rainwater: hence the need for a protracted lunch with a conservator on a hot day in July.
Why worry about the up-keep on a stone coffin two thousand years old? Primarily for its beauty. The twining figures, so vivid and sensual, bear baskets brimming with grapes as they dance in attitudes of ecstasy around what once was a corpse, but now is only an expressive absence. How much work went into this erotic celebration of physical life, all for the purpose of commemorating someone now out of it. Perhaps it is that contradiction that adds to the power of the sculpted figures. They express our central dilemma as human animals: we have the imagination to understand our mortality, but we can’t change it. We can envision eternity, but not experience it. We can sculpt beautiful works that seem to transcend time and decay, but we ourselves vanish from the midst of them.
Which gets me to a favorite Rilke poem, from his Sonnets to Orpheus. In Sonnet 10 he is remembering sarcophagi he saw both in Rome and in the fields outside of Arles:
You, who have never left my feelings,
I greet you, antique sarcophagi
that the cheerful water of Roman days
flowed through like a wandering song —
or those others so open, like the eye
of an early awakening shepherd
full of silence and bee balm,
around which charmed butterflies whirl.
All that’s been seized from doubt,
I greet you: re-opened mouths
that knew what silence was.
Do we know it, friends? Or not?
Both answers build the hesitating hour
in the human face.
The tombs have been re-purposed as aqueducts and flowerbeds; once final and sealed, they are now open and fluent. But they also retain their old meaning. The final image of the human face shaped by its vacillation between knowing and not knowing is a powerful statement about what makes us human: our double consciousness, flitting (like butterflies) between death and life, between past and present, between being and becoming. We, too, are re-purposed tombs: we carry the form of our ancestors, through whose mouths we speak our living moment; after which we will return to the rich silence that is our earth.
The rough sketch below is of one of the totem poles in the Great Hall of the Field Museum in Chicago. It is considered an example of totemism, the belief that humans have kinship with the natural world. As James Frazer put it, a totem “is an intimate relation which is supposed to exist between a group of kindred people on the one side and a species of natural or artificial objects on the other side, which objects are called the totems of the human group.” The word derives from an Ojibwe root, “ote,” meaning “uterine kin.” A totem is literally descended from one’s mother, and is therefore subject to rules of family relationship. Totemism is part of the animist, pre-agricultural world-view that we humans have held for 98% of our existence. Hunting and gathering peoples, as all of us were until about 9,000 years ago, acknowledged kinship with the animals and plants around them; this is reflected in many current Native American beliefs, summed up in the frequently-used Lakota slogan, “We are all relatives” (though for a more nuanced look at this, see http://www.lakotacountrytimes.com/common/PastArchives/1237.html). It is only when humans develop intensive monocultures that we propose the fiction of separateness, a fiction which grows in relation to our perceived mastery over the biological world. Which is all to say, Darwin’s theory of evolution would have astounded no-one from the pre-agricultural world: it was obvious that we are related to other species. That this idea was–and still is–such an affront to the dignity of “civilized” Westerners is a testament to the rigidity of agricultural (and pastoral) ideology.
One of the consequences of seeing humans as separate and distinct from animals and plants is loneliness. The person who carved this totem pole was completely enmeshed in a web of relations. His life was restricted by taboos and obligations–but he was never isolated.
The fiction of separateness arises in the 17th century, with the triumph of reductive rationalism and neo-Lucretian atomism. In the Cartesian age, only the human mind was important–animals were no better than automatons, and the world was like a great clock–active but dead. This mindset enabled endless expansion: it cleared the world of agency. All the intricate relations that once were posited–even for Medieval westerners–were swept away, giving Moderns a free field of action. If we were not kin to the plants and animals we were exploiting for profit, there was no sense of taboo in our action, no check to our expansion (and nothing to save us from collapse, as it turns out). We cling to this self-glorifying separatism, and that is why Darwin is scandalous–not just for religious fundamentalists, but also for Moderns who embrace a special human destiny. Many people pay lip service to evolution, but they actually interpret it as teleology: inevitable progress, leading up to (surprise!) humans. The actual theory does not posit that: it simply says that we must take life as a whole to be a vast machine for generating adaptive responses to the corrosive winds of chance and entropic decay. I always recall one crucial sentence in On the Origin of Species: “Let it also be borne in mind how infinitely complex and close-fitting are the mutual relations of all organic beings to each other and to their physical conditions of life; and consequently what infinitely varied diversities of structure might be of use to each being under changing conditions of life.”
Our main competition is not each other: it is Death. No being on the planet can exist alone. We are locked in webs of competition and mutuality. Every decade we find new evidence of the inter-relatedness of all beings. Yet our ideology, born of agricultural dominance, continues to drive us to act as if we alone mattered.