Back on the Safety Net

Here’s to healthcare—yesterday, apparently, marks the beginning of my health insurance coverage through the Freelancers Union, a fine internet-based organization that helps out the growing ranks of independent workers. It comes six months, just about to the day, from when my graduate school insurance cut off and I joined the 48 million uninsured Americans. My plan is a limited one, but at least I am back in the safety net. If I discover tomorrow that I have a rare disease requiring tremendously expensive treatments, it won’t be the ruin of me or my family. Things would have been different a week ago. I can even begin to consider going to the doctor if I feel a little ill.

The net is an odd metaphysical fact of modern societies, an imperceptible contractual reality that holds the scepter of life or death.

In his Homo Sacer, the Italian philosopher Giorgio Agamben explores the relationship between sovereign power and the capacity to distinguish “bare life” from everything else. Bare is the life that can be killed without committing a crime, yet cannot be sacrificed—a middle space between human and animal. He argues that the logic of European political power, since Rome at least, has revolved around the capacity to draw the line between bare life and human citizenry.

Being a citizen means having a net, or rather, being had by the net, being held in it. Killing a citizen is a crime, so citizens can consider themselves protected. The state serves as their divine, invisible bodyguard. A presence that can be felt but not seen, except in its works, on the lethal injection table.

Several years ago, I had a fascinating conversation with a friend in the Coast Guard about maritime law. He explained how utterly bare a ship without a flag is. In international waters, destroying it is not a crime. All that protects ships are the flags they carry and the web of treaty arrangements between countries that agree to recognize the sovereignty of one another. However beautiful your boat, without a flag, you are no longer a thing of intrinsic value (though your boat may be).

The net, Agamben makes clear, is the definition of citizenship—or more, of species. We stand as equals alongside those who share in the net that protects our lives from being mere flesh, that insists on its value and its worthiness of being saved.

In the United States, the healthcare net has been fashioned as an economic problem, even though the rest of the post-industrial world has shown it possible to make health a human right. This owes, in fact, to an historical mistake. The healthcare system developed in a postwar industrial culture that could depend on more or less stable employment. Benefits were distributed through employers, negotiated by unions, and regulated by government. Now, however, Americans can expect to work an increasing number of jobs over the course of their lives. Currently, I work three at once, none of which offers benefits. Though we have become a post-industrial economy, resting on the shifty ground of the service sector, the safety net has failed to keep up.

Politics has cast this historical mistake as a crisis of individual responsibility—the 48 million are apparently not responsible enough to pay for their healthcare costs. But of course the hurdles are innumerable and particular to every case. In my case, it took six months for me to assemble the paperwork I needed in order to join the Freelancers Union program, which, quite absurdly, was about a third the cost of buying the same policy directly from the insurance company as an individual. Some laziness was certainly involved, but it is amazing how little laziness it takes to be so utterly unprotected.

Politics then says the problem is economic. To keep the quality of care high, we need to ensure there is adequate market incentive for innovation and efficiency in the medical industry. By virtue of mathematical equilibria, there can be no better system than an open market. But an open market means that some people can lose.

No. The problem is metaphysical, which is to say a matter of human rights. If we are to be fellow citizens, protecting us from bare life must be the priority above all. The human problem must not be subordinated to the economic one. Just as free speech and the right to a trial must not be sold to the highest bidder, nor should the safety net that declares our fleshy, fragile lives worth protecting.

All the Web a Wiki

For a person who does lots of absorbing and creating on the internet, a big new thing can feel incredibly daunting. The specter of Being Behind always lurks as a possibility in the nightmare of waking up to discover that the internet has moved on and left you behind like an old Web 1.0 site. The changing internet means changing habits, ways of working, and language. Being in the present takes constant effort, a constant willingness to uproot and retool, to learn to absorb even more needless—but suddenly necessary—information. Fall behind and, well, you’ll know it. You’ll be like your parents.

This is the kind of feeling that hit me last week when I began to explore the burgeoning world of web annotation.

Don’t worry, it hasn’t hit the big-time yet, and perhaps it never will. But just the possibility is overwhelming enough to make one want to move to the desert. It also has the potential to make the internet a whole lot more interesting.

I am drawn to the internet—to keeping this blog, for instance—because of its capacity for conversation. I love conversation and will go to great lengths to get it. But you’ve got to admit, the possibilities for conversation on a blog are pretty limited. For the most part, it is the kind of conversation only the blogger can enjoy. The big main post sits on its throne at the top of the page while the mere little comments (usually in a smaller typeface) twiddle their thumbs at the bottom, hoping someone will look at them before posting yet another bit of inflammatory nonsense. It isn’t conversation, it’s a peanut gallery.

Web annotation changes the picture. Basically a ramped-up combination of a highlighting pen and social bookmarking, it enables users to write notes on webpages they visit and share what they write with friends and strangers. Web annotation tools usually take the form of browser plug-ins, as well as a profile page (like Facebook or del.icio.us), where you can manage and share your annotations.
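
To make that concrete: underneath the plug-in, an annotation is really just a small data record pinned to a page. Here is a minimal sketch in TypeScript of what one of these tools might store—the field names are my own illustration, not any actual service’s schema:

```typescript
// A minimal sketch of the record a web annotation tool might keep.
// All field names here are hypothetical, not any real plug-in's format.
interface Annotation {
  url: string;          // the page being annotated
  selection: string;    // the highlighted passage of text
  note: string;         // the marginal comment the user wrote
  author: string;       // who wrote it, for the profile page
  createdAt: Date;      // when it was written
  sharedWith: string[]; // friends (or "public") allowed to see it
}

// For example, the record a browser plug-in might save when I
// highlight a sentence and scribble something next to it:
const example: Annotation = {
  url: "https://example.com/some-essay",
  selection: "the annotations become the content",
  note: "This is the wiki-fication of everything.",
  author: "nathan",
  createdAt: new Date(),
  sharedWith: ["public"],
};
```

The sharing side, then, is just a matter of who gets to query whose records—which is where the social networking comes in.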

On the one hand, it is a neat personal tool to help you remember things that came to mind while reading. There is no substitute for a library full of books with your own scribbles in them, so why not have a scribbled-up web too? On the other, the social networking aspect means that web annotation can change the way we approach the perennial problem of finding and organizing all the data that’s out there. In the process, though, an annotated web means a dizzyingly wider worldly web.

The direction annotation takes the web—when carried to extremes, as everything on the internet is—leads to a phase shift. The current regime of Web 2.0 distinguishes between user-generated and “regular” content. A Web 3.0 based in annotation would blur this distinction. Cleverly organized, the miracle of Wikipedia could be replicated everywhere: an endless barrage of annotations, organized into something incredibly useful at a very small cost. It is the wet-dream of postmodern theorists: the annotations become the content.

(Meanwhile, this goes in rather the opposite direction from one of the other proposals for Web 3.0, the semantic web. Rather than becoming more structured for machine comprehension, as the semantic web would have it, the web becomes less structured and even more webby. Machine comprehension, which will always be a holy grail, would have to be accomplished by making computers better at processing human language—which would be available in abundance on an annotated web.)

It is another thing to make your head spin.

The Voyage of the Beagle

I wrote what follows at the beginning of the year, upon first arriving in New York. I sent it in to a contest, and it didn’t win, so I thought it might be fair game to share here. It is called, “The Voyage of the Beagle.” I’m actually not sure if this is the final draft because I can’t seem to open the final file.

People take evolution to mean different things.

Now, at the end of what has already come, I take it to mean New York—my present voyage of the Beagle, going on day by day, not knowing what I do. There is a dream I keep having; never at night, for at night my dreams are taken by practical matters like finding a place to live and an income. The dream I mean, interpolated over such ordinary, ordinary life, looks like this:

Evolutionary tree

An upside-down tree, as if uprooted and inverted by the giant inhabitants of skyscrapers, with each person and thing around me, and me, forming its ends. We differ in song, shape, and size, displaying our differences like cocks. On the trains, all the variety of human sounds comes muffled through too-loud headphones.

On the trains, we are pushing and pulling in crowdedness, in flesh. The same people combine and splinter, one city and different neighborhoods, one train and different stops. Like that, combining and isolating, we speciate and hybridize, forming new branches in the tree. Sometimes all of a sudden, sometimes gradually. On my bike, weaving between the taxis and soaring over the bridges, the branches curve and split and open.

Technology is everywhere, on people of every sort. I want more of it myself with a newly unapologetic need. It makes me see how Henry Miller could have hated the place so much, reserving for it his worst vitriol. He seemed to know he was alive as long as he could find new words of condemnation for New York. But I have been to his hideouts—to Paris, to Big Sur, and to Greece—and his hatred only makes me desire this place more. A strange thing happens every day here, the way it wouldn’t quite or couldn’t in Virginia or Rhode Island or California, where I have lived before, where so far I have been forming. In the buildings and the strangers (stranger after stranger!) there is the occasional pregnant teenager, I inside of her and she inside of me, whose missing period has yet to come. She knows not what she bears within her.

But maybe that is because I moved here so recently that I have acquired nothing except for food and a single book about Charles Darwin’s birds. Life, in so little time, hasn’t had time to distinguish itself from dreams. And what seems so hard to believe, but what should be so simple, is that it matters (for evolutionary purposes above all) that now, twenty-three years old if a day, I am in New York.

But now I am the city boy
who doesn’t understand
the struggle
between nature and man.
Found taped to the side of a subway car.

* * *

A year younger than I am now and two days after Christmas, Charles Darwin set out on the H.M.S. Beagle in 1831. He joined on as gentleman companion to the captain, Robert FitzRoy, a melancholy firebrand with missionary ambitions for the South American savages. For almost five years, they sailed around the world making maps for the British admiralty, exploring strange new worlds. Along the way, Darwin collected creatures and observations, writing home to his family and to the English scientific societies, sending along crates full of taxidermy.

What he saw in those years took a lifetime to unravel and interpret. Two years after his return, the first picture of an evolutionary tree appeared in his notebooks. The Origin of Species, which exposed the theory of evolution to the public, was not printed for another twenty years. In the meantime he kept on refining, hiding, fearing the consequences of his ideas, and suffering from debilitating illness. He held the voyage in his mind, not permitting it to complete itself.

Yet what did the voyage hinge on? The smallest things, the slightest branches of an inverted tree, growing by the falling of gravity. It depended on a letter that arrived in Shrewsbury, and on Captain FitzRoy’s ridiculous phrenological instincts. Darwin wrote in his autobiography, many years later:

The voyage of the Beagle has been by far the most important event in my life and has determined my whole career; yet it depended on so small a circumstance as my uncle offering to drive me 30 miles to Shrewsbury, which few uncles would have done, and on such a trifle as the shape of my nose.

So much depends upon the precious and ridiculous, on what supports the branches of the hanging tree.

Darwin returned from the voyage a celebrity among the genteel naturalist set. Always “ambitious to take a fair place among scientific men,” he recalls in his Autobiography that, after reading a congratulatory letter at a port of call, he scaled the mountains of Ascension Island “with a bounding step and made the volcanic rocks resound under my geological hammer!” What he did with the subsequent decades, of course, made him the most significant of all his peers. Though by then confined to bed with a repulsive illness, Darwin became a hero and a villain, revealing the world in the same stroke that he vanquished a certain charm ascribed to it.

This discovery, this evolution, weaves together the insignificant and the grandiose. Darwin describes a creation that happens by a sequence of accidents; creatures’ chance variations are vetted by surrounding circumstance, by the island or city they find themselves in, that they depend on to survive. A few millimeters on a bird’s beak or the peppering of a moth is enough to be a difference. Then Darwin learned grandiosity from Charles Lyell, the geologist whose books he carried with him on the Beagle. Lyell saw from the present into imperceptible deep time. The layers of immovable rocks could tell how they’d once moved, and were moving, through a world whose scales of time hardly notice the lives of people. We’ll have to be strangers if we are to understand time. We’ll have to go to foreign places, big cities, open oceans, and silent caves to know anything about our familiar homes.

Still more than a decade before publishing the Origin, Darwin could write of his journey as if it were uncompleted. Compared to the later Autobiography’s tone, the voice sounds cautious rather than triumphant. He insists on the voyage’s real mundaneness, the passing elation met by routine unpleasantness particular to life on an ocean, or on any unfamiliar adventure:

No doubt it is a high satisfaction to behold various countries and the many races of mankind, but the pleasures gained at the time do not counterbalance the evils. It is necessary to look forward to a harvest, however distant that may be, when some fruit will be reaped, some good effected.

The whole voyage was a point of departure, the beginning of an event that had not yet come. On the sea, so in the city.

* * *

I have been to New York before. My father would come here often when I was little to see operas, and occasionally he would bring my mother and me along—I squirmed and resisted. I remember a Thanksgiving Day parade, with all the giant balloons overhead. I remember the hotel where we always stayed, across the street from Lincoln Center, and I remember the tiny rooms where I heard the sounds of taxis all through the night.

These memories come in discrete segments, glimpses of trips that themselves were short, separated by no-man’s-lands of time in between and by forgotten things. Another hotel room, during high school, where my girlfriend and I tried to sleep away the distance between us while our friends groped and moaned in the other bed. In college I would end up in New York, for one reason or another, when my life felt fallen apart or my heart was broken. My heart has broken at least three times in New York, all so far before actually moving here. In college, for consolation, I learned my favorite churches, which became pilgrimage spots—Fifth Avenue Presbyterian, St. Bartholomew’s, and Corpus Christi in Morningside, among them.

The memories lie scattered, and as such they return when I see the places that now recall them. Their sequence is random, mutating with each recollection, though their foundations, I have no choice but to believe, actually happened.

Scientists have learned that each time memories are remembered they are reinvented, touched by the mark of the present. There is a chemical that, when introduced, can stop the act of remembering present events. If it is given to a rat while the rat is getting an electric shock, the rat won’t remember to avoid the shock next time around. But if a rat got the shock without the chemical, and has learned to avoid the hot wire, that memory can be erased. Inject the same chemical while showing the wire to the rat, reminding it of the shock, and from then on it forgets to be afraid. Memories depend on how they are tended—recollection opens the door to mutation, an imperfect rebuilding influenced by the conditions of the present.

In mutation Darwin found the diversity of life, from which new possibilities come. Miscopying and recombining, new genetic codes emerge and try their luck at life. Remembering, we ejaculate, sending forth, leaving ourselves fuller and emptier. As people, what we know of ourselves is what we know of our memories. Dividing, remembering, recalling, and forgetting—all in the shape of a tree—I come to inhabit my own species.

It could be quite appalling that Darwin considered his voyage of the Beagle incomplete in itself. Isn’t a voyage a feat of cosmic ordination that should stand as it was, in need of nothing else? Still awaiting the Origin’s publication, those memories were unfulfilled and waiting for their harvest; only after could they be “the most important event” in Darwin’s life. Now, as my voyage of the Beagle is only beginning—or ending—in the city, my own memories fulfill themselves as accessories. Scattered as they are, they congeal in combinations of the present, according not to how they occurred but to how they are now recalled. They go on to form and shape each other, later memories now acting on earlier ones in defiance of chronology and common sense. They make a tree, dividing and inventing, beginning from a common root and branching off in new combinations, spread across the landscape of the city and dividing it into parts.

In her famous attack on the modernist planners, Jane Jacobs set forth an evolutionary ecology of the city; she wrote it while living in mid-century Greenwich Village. After finishing the book on Darwin’s birds, I borrowed hers from a friend. It brings to life the strangers that surround me here—they are the city’s guardians and its possibilities. Our anonymous eyes watch over each other. They graft the worlds they make for themselves onto the public landscape. This is what separates the species of city from that of towns and villages, where strangers mean trouble and upset the balance. The multitude of strangers is why a strange thing happens to me every day. I keep seeing old friends amidst the strangers: a miraculous event. Or hearing a new song or being invited to a place I’d never been before. The certain probability of these events, which turn my mind to improbable memories, lies exactly in the uncertainty, the strangeness of strangers. Jacobs’s strangers fill my imagination as I walk on these mortal streets, making heroes of the passers-by.

In the city as in Darwin’s evolution, the power of explaining erodes. The hope of easily explaining oneself, or delivering an apology, slithers away from the tree of knowledge. Explanations last for only an instant of the deep time on either end of the present, and the present that needs them keeps rebuilding its needs like new out of old dreams and old genes. This is why Jacobs insists that we should not build our cities around what we know now, planned and measured monotony in every place, but for the inventiveness of future multitudes. Her heretical argument for diversity is orthodox, Darwinian ecology.

These moments of my voyage, unfinished wholes, look toward their unseen conclusion and their unseen beginnings. Like ghosts. Still in pieces, I feel exuberantly whole.

This false memory once taunted me, also mapped over the inverted tree. I have an older sister (whom I do not have), and she talks down to me:

“I am older than you,” she explains. I hear the words without being able to touch them. “You are younger than me. Before you were I was.” And this sort of thing, which she says lightly in the heavy air.

“Once you did not exist. Nobody had even thought of your name or what you would look like,” she continues. “Until I did, Mama did, Dad did.” She stops and picks up a ball from inside my crib, where I am helpless.

“But there was nothing before me.”

* * *

I came here for a reason: because Mary Elston is here. So many other people too, but first of all because of her. I spent the last year and a half in California by the ocean unable to look at the ocean without missing my Mary, still in Rhode Island, where I had left her. Now I have come, living out of suitcases and sleeping on the floors of friends, exclaiming about the pregnancy of the moment, and together we feel like a glass house.

After missing each other for so long, the closeness is overwhelming—we might each be made of only the other’s dreams, clothed in skin. We get tired and annoyed between moments of happiness. Between my drifting, her working, her family, and all our scattered obligations, all the time that remains for us together has been sleep. So we argue about that. I say something terrible, somehow. Another unspeakable thing.

The other day we had a great bout in the street, by 20th and Broadway. I would shout one thing and she would shout another, in hushed shouts, wet by tears. You’ve seen it if you’ve seen New York. In this place of everyone all at once, there are lovers fighting everywhere, and Mary and I are among them too, in the rain, left behind, in the maze of streets with only a map. There was no use protecting our voices from the hundreds of passing strangers, remembering that they are our safety. Compassion can be found in them, in their unflinching passing, their undisturbed faces. One time or another, they have been here too before returning to normal, self-collected anonymity as strangers. If someone had stopped and tried to say something, to help us, we would have felt truly alone.

After that Mary and I shared a cautious hot chocolate in the cafe of the housewares store across the street, trying to be kind again, trying to remember how. And now that was the last time I saw her. I’ve called, and I’ve written, and this has happened before. Yet now is a new mutation, a new recollection of the same old complaints that we can never seem to settle. A new opportunity for mutation, and a new feeling-out of our evolutionary landscapes. With every recollection, every recall, comes the possibility of forgetting.

It was then that I realized: speciation—the creation of a new species in nature—is an act of forgetting.

Speciation is at once the central and the remotest part of Darwin’s theory. The possibility of creating new species by variation and natural selection, he argued, is what makes all the multitudes of creatures possible. Yet in The Origin of Species, Darwin could cite no example of a species observed originating. Following the habit of Lyell’s deep time, he assumed that such processes are so painstakingly slow that they cannot be observed in human lifetimes. The closest Darwin thought he could come was his examination of selective breeding for dogs and livestock.

Since then, researchers have found ways to notice signs of speciation among creatures. It appears to happen in different ways, whether by geographical isolation, by population bottlenecks, or by the discovery of a new niche. In any case, a population that was once a single species divides to become two or more. First, the diverging groups forget that they are able to hybridize—to produce offspring together—and keep to their own. Over generations, this isolation causes the groups to vary so much that they become physically unable to hybridize. Their bodies, as well as their minds, forget what once united them. Though no one has experienced it, I imagine a wrenching horror in being pulled apart like that, gene from gene, creature from creature, relative from relative, memory from memory.

In the last century, these processes have been observed much more closely than Darwin was able to imagine. Careful studies of populations during times of shifting ecology—catastrophic weather or the introduction of a new species, for instance—make natural selection plain to see. It turns out that species are far less stable than people have usually imagined. They are preserved less by their own inertia than by the fragile ecology that they depend on to survive. In the event of cataclysm, sudden changes can occur, from mass extinctions to explosions of new forms. The scale of these events can be global, or they can be so isolated as to go unnoticed. My book on birds says that “for all species, including our own, the true figure of life is a perching bird, a passerine, alert and nervous in every part, ready to dart off in an instant.”

The anxiety of speciation runs through this silence Mary and I have now made. Ready to dart, awaiting the unfolding of ecology between us. I am waiting to move my boxes from her house to my new apartment, as she waits to shift downtown. An opportunity to dart away? The tremendous tragedy of separation sounds comical in its enormity, its bare factuality: speciation is the failure to hybridize. We would not create young. We would not create a world together. We would not be together. But we could come out of this silence in any of a million forms, so we are waiting to see which and working on our helpless plans.

Again, and I can’t even begin to say how: it matters that I am in New York.

An Exercise in Becoming

For the last three posts I have been exploring the process of becoming. An outgrowth of that, as far as the site goes, has been a rather radical transformation. Rather than being hosted at Small’s Clone Industries, where The Row Boat has lived since it began in 2005, it now lives at www.therowboat.com, a home of its own. I began SCI a long time ago in an effort to play around with the internet, developing art projects and hobbies. Of the several sites hosted there, The Row Boat has come to dominate my attention, as well as web traffic (though still it is a rather quiet corner).

On this domain, The Row Boat can come into itself, it can self-actualize. No longer one among several hobby projects, it can claim to be decidedly public, leaving something private behind to wander and wonder, probably to fester.

Everything in the systemic public must serve a function, so I have also begun developing a “Readings” section (for now, it is in the sidebar at left) where I will keep track of worthwhile bits and pieces I come across that may do some good for others. In the future, that section may develop, as we all do in this ever-changing cosmos, into something else.

Becoming a Professional

Previously, in “Becoming a Person,” I wrote, with no great originality:

Incidentally, coherent personhood has been the assumption behind rational government (all but Louis XIV’s L’état, c’est moi), especially republican democracy. Voting, opinion polls, representation, and constitutions all depend on the assumption that citizens are coherent persons.

The same goes, of course, for all rational organizations. Since Charlie Chaplin in Modern Times, we know well enough that working in industrial systems means becoming at least somewhat cog-like. It demands a spiritual shift, so to speak, because one’s ultimate aims, or at least proximate aims, become reconfigured.

Because of my own difficulties at day jobs—seemingly part of the transition from academic life to the workaday—I have been polling friends and family about professionalism. My father wrote, among other generous things:

You must learn to enjoy being useful as a first step.

This is truly a shift of scale, a shift of orientation. My difficulties at work, I realize, stem from the fact of spending the work day looking out for my own interests, and being mainly interested in them. But doing things right means thinking differently. Sitting in the office, my interests have to become one with the office, one with the people I work with. It enables them to trust me and me to trust myself. Professionalism is a technology, in that it makes one useful to others; the person becomes technology.

With the discovery of any new technology there is something gained and something lost. My generation has grown up being acutely aware of this, to the point of cliché. When I first saw the movie Fight Club in high school, nothing could possibly feel more true and obvious than that a successful young professional life, decorated by Ikea and so forth, was empty at its root. Spontaneity gets systematically eliminated. Danger, too. The fundamental facts of existence, death and so forth, are put aside as less significant concerns than the minutiae of office life. The only difference between this and outright slavery is that slaves are aware of their condition; professionals are under the delusion that they are, in fact, self-actualized and coherent persons.

Over at Garzuela, Kurt has been tossing these concerns around as well (beware of obscenity):

Do you like my new office? I think that it’s good to get some professionalism out of my system, so I can start to live like a real human. Let’s talk about dehumanization of the human race, and how we are being pushed to act more like mindless drones in order to be financially stable in the world? Actually, I think that some people will find it refreshing, and I actually landed the most professional job I’ve ever had yet, as a result of this kind of behavior. Or maybe it was getting this behavior out of my system, that allowed for me to get into my niche. Maybe I should say moist professional job, put hand on top of her head, and slowly guide it toward my crotch. Careful not to let her know that I’m unzipping my pants with my other hand, and getting a sweet ding a ling ready for her.

The obscenity to beware of is a prerequisite of such resistance against professionalism. Just like the fistfights in Fight Club, it shocks the system, or shocks the person out of the system.

Yet these things I grew up knowing don’t feel quite known anymore. I’m twenty-three years old, seeking my fortune and so forth, and a little professionalism has become required. Without it, I definitely can’t do my job. Without it, I keep messing up in little tasks, letting my personal interests overshadow my functional purpose. Doing the job right demands a little … inauthenticity, though in my life so far, evading professions, I have always denied the possibility of that concept. How, I have thought, can one be other than oneself, or be more or less oneself?

Professionalism demands a line of separation between person and public face, a segregation of spheres so that each might be coherent—all in such a way that invents, for the first time, the dialectic of authenticity. It becomes a question possible to ask: Am I being authentic? Authentic to what?

Becoming a Person

The New York Times Week in Review, blessedly (and following the fabulous journal The New Atlantis), quotes William James on attention. The point, naturally, is yet another condemnation of our relentlessly multitasking, over-busy mental society. But there is much more at work in this pregnant piece:

To James, steady attention was thus the default condition of a mature mind, an ordinary state undone only by perturbation. To readers a century later, that placid portrayal may seem alien—as though depicting a bygone world. Instead, today’s multitasking adult may find something more familiar in James’s description of the youthful mind: an “extreme mobility of the attention” that “makes the child seem to belong less to himself than to every object which happens to catch his notice.” For some people, James noted, this challenge is never overcome; such people only get their work done “in the interstices of their mind-wandering.”

While perhaps on the one hand he is making the wizened philosopher’s usual gripes against the young, there is something bolder going on as well. James is talking about the constituent things of personhood, the requisites. As a person matures—that is, comes into being himself—he becomes a person by making clear the line between self and world. No longer does the world utterly dictate the person; now the person begins to be, with focused attention, nothing but himself.

This is a common idea, of course, and not James’s invention, only his little implication. We expect—ideally—a coherence in people that mirrors the coherence we expect in ideas. And coming into that personhood is a process. It develops and “comes into its own.” Half a century after James, the popularizing psychologist Carl Rogers could write a book called On Becoming a Person. Like the whole swath of 20th century popular psychology, Rogers’s goal is control, or in his terms, “self-actualization.” Being a person means being an agent, a dominion, a soul, which rules over personhood and its extensions. Becoming so coherent as to be actual.

Incidentally, coherent personhood has been the assumption behind rational government (all but Louis XIV’s L’état, c’est moi), especially republican democracy. Voting, opinion polls, representation, and constitutions all depend on the assumption that citizens are coherent persons.

Attention, of course, has had religious value across traditions for all time—the capacity of centering prayer and meditation to alter consciousness is well attested to. And so, meanwhile, is this sense of personal coherence that goes with it. In the Sermon on the Mount, Jesus teaches, “Let your ‘Yes’ mean ‘Yes,’ and your ‘No’ mean ‘No.’ Anything more is from the evil one.” Mobility of belonging, therefore, to mix up James’s terms, is not only childish but demonic. The proper religious subject, the one who belongs to God, is a coherent one, a trustworthy one, a unitary one.

But what of the rights of non-persons?—what if one refused to accept these terms? Say I will let myself be determined by the objects I encounter. Or, as Heidegger and Foucault might lead us to say, accept that in fact I am determined by the objects I encounter. With all due respect to James’s well-wrought assumptions, I find that, in its place, this personless existence may be philosophically defensible, fun, necessary, inevitable, and possibly even coherent. Especially, oddly enough, in religious terms. Some have insisted that one cannot learn about the beliefs of others without trying to enter bits of their personhood. (Ever since the nineteenth-century white Spiritualists who believed that dead Native Americans and slaves were entering their bodies, the politics of such endeavors has been questionable.) And then of course the necessary and uninformed references to Taoism, Buddhism, and apophatic mysticism.

But, as the wordless mystics know, the problems of non-personhood in a world of persons are endless.

Becoming a Generation

My generation continues to … flounder. Our biggest news lately was the Iowa caucus, when Barack Obama made a surprising showing, which the exit polls attributed to the youth vote—students had come back early to their campuses to caucus. The next day, as the whole show moved to New Hampshire, Hillary Clinton started making her speeches with fresh youngsters behind her. At least before everyone started talking about white, working class men, this seemed like it might be an era for the young. It turned out to be only fifteen minutes.

I recently noticed a new blog out of CampusProgress.org, Pushback, filled with youthful commentary mixing culture and politics. Perfectly legitimate, right? Reminds me of an ill-fated project I was involved in, Voting Is the New Apathy, except well-funded and determined to succeed.

We (I) were (was) raised and educated in the shadow of those who were young in the 60s and 70s, for whom generation represented an identity. They told us stories of activism, idealism, and world-changing, then asked over and over why we aren’t the same way. The litany goes: Iraq is just as insane as Vietnam—why don’t you care like we did? Yet my generation has resolutely decided not to define its identity as such. Except for the Iowa caucus, age has not dictated politics; we have embraced, politically at least, the categories set by our parents. Content with that, we spend hours and hours on the internet.

Maybe it is time to put the “millennialism” back in “millennials”?

The Theory of Double Truth

Have you ever had the desire, the urge, the dangerous little need to contradict yourself for its own sake? Or for the sake of something quite unspeakable? The words “paradox” and “contradiction” come eerily close to being synonyms—they mean the same encounter of irreconcilables—yet they connote different moods. A contradiction is the dumbest, most obvious falsehood, while we treat paradoxes as exalted mysteries. The two words themselves, meaning the same thing but different, are a contradiction, or a paradox. For complicated reasons, we make decisions about when something looks like one and when it looks like the other. Contradictions can be dismissed; paradoxes cannot. Paradoxes thrust themselves into our desire.

The theory of double truth, to speak historically, was a heresy in medieval Christian Europe. Often synonymous with “Latin Averroism” (after Ibn Rushd, or Averroes, the Arab philosopher), it grew out of the thirteenth century’s encounter with Islamic philosophy, and through it, ancient Greek thought. After several centuries of possessing only the barest scraps of Plato and Aristotle, Christendom had gone its own way, ceasing to address the questions the ancients’ ideas raised. So when Aristotle’s Metaphysics suddenly appeared, and “the interpreter” Averroes insisted that Aristotle’s system meant that the universe is eternal and souls are not, some valiant thinkers decided they had little choice but to agree. Among these were Siger of Brabant and Boethius of Dacia. They did so knowing, however, that Catholic dogma forbade such conclusions, and in those days there was no arguing with Catholic dogma. The only choice was to accept both truths, the dogmatic and the philosophical, at once.

In 1277, the bishop of Paris, Stephen Tempier, condemned those who “hold that something is true according to philosophy but not according to the Catholic faith, as if there are two contrary truths, and as if in contradiction to the truth of Sacred Scripture there is a truth in the doctrines of the accursed pagans.” Both Siger and Boethius lost their teaching posts. The following decade, Siger was supposedly murdered by a secretary with a pen. Neither admitted to practicing the theory of double truth, and both constantly sought to reconcile or explain the inconsistencies between philosophy and sacred doctrine. But the condemnation stuck. Still, for reasons we don’t quite understand, Dante depicts Siger in heaven with Thomas Aquinas (who opposed him in life) singing his praises.

Nobody quite admits to believing the theory, even those who are perpetually drawn to it. Somewhere, there has to be a resolution to the apparent contradiction—a single truth beneath the appearance of two. Trusting in that, what seems like a contradiction is really a paradox.

The theory is no stranger to Jewish thought. The Talmud tells of Elisha ben Abuyah, a brilliant scholar who was seduced by the Hellenic culture and ideas that surrounded him. There are stories of Elisha as both a great sage and a rather insane heretic. Apparently, he was both. Attested to in the books carried by Jews everywhere the diaspora took them, he became an icon of the problem of identity or assimilation. Rabbis argued about whether he ever made it to heaven. Just as Dante did with Siger, some thought he deserved a special place there. Others declared him an outcast, or, as the twentieth-century rabbi Milton Steinberg would depict him, as a leaf driven to fall from his tree. For short, the rabbis in the Talmud call Elisha אחר—the “other.”

It was another Jew who truly carried the theory of double truth into the twentieth century, and in doing so made it politically aware. Leo Strauss, a German-born philosopher, re-read double truths in great thinkers of history like Plato and Maimonides. Such men, in order to accomplish their political goals and avoid the persecution of power, had to tell two truths. They taught an exoteric one, safe and acceptable to the world, believable enough, and good for ordinary society. But within it they hid an esoteric teaching, one that paid no homage to the gods of the world. Both teachings were true, and needed to be said. Ordinary folks needed the exoteric to live by, and philosophers needed the esoteric to understand.

It reminds me of an altar in a Catholic church I once visited in Guatemala. The first time I went, I sat in the pews in silence. Nobody was there. It was an ordinary, Spanish-style church. The second time, it was morning, and people were coming in for their prayers. I saw them, one by one, go behind the altar. Finally I went back myself and saw that it was covered in feathers and candle wax and symbols of a different religion entirely.

At the University of Chicago, Strauss built himself a following, and some of his students went on to become prominent neoconservatives—notably Bush’s deputy secretary of defense, Paul Wolfowitz. Adam Curtis’s gripping documentary, The Power of Nightmares, argues that neoconservatives used evangelical Christianity as the exoteric guise behind their esoteric greedy nihilism.

George Orwell famously associated double truth—”doublethink”—with the dystopic world of his 1984. Promulgating double truth—indeed, filling the world with righteous paradoxes—becomes the policy of totalitarianism. Blur people’s ability to see contradiction, and they will believe anything. They will become utterly subservient to power.

Then again, another anti-totalitarian novelist celebrates the theory. Orhan Pamuk’s Snow, which closed the deal on his Nobel Prize, tells of a man named Ka caught in the bloody, black-or-white mix of Turkish political culture. As a poet from Istanbul, he cannot escape being a member of the secular bourgeoisie, which controls the army and the revolutionary legacy of Ataturk. Nevertheless, he becomes intrigued by the world of provincial Islamists, who carry hopes for a revolution of their own under the banner of ancient religion. What distinguishes Ka in the novel’s world is his desire for both, for the double truth. He falls in love with each, separate and together. And it does him in. Turkey has become a place where the theory of double truth is especially and undoubtedly dangerous.

That is the thing about double truth. It is dangerous. On a number of fronts, I am deeply drawn to it, inhabiting contradictions, playing many roles, and trusting each as real. And for the moment I can write about these sensations here and there, a few people read them, and not much happens. But times can change quickly. When they do, in one way or another, the rules concerning double truth change too. What makes a paradox and what makes a contradiction gets mixed up. Pamuk’s Ka gets in trouble for double truth among Turks, but in Europe Pamuk gets the Nobel. To Christians the Trinity is a paradox, while for Muslims it is a foolish contradiction. The difference can be deadly.

The appeal of double truth, however, has persisted throughout history in whatever forms it can safely find. It must. Within it probably lies, in fact, a single truth, a cohesive reason why people are drawn to opposing things. In simplest terms: We people are complex, and we do not fit into our own logics. Our worlds are not satisfied with single truths, though they might hold one’s attention for a little while. Now, a cycle. The single truth that explains double truth will fail to satisfy. It is the illusion, and double truth the truth.

The Local Neighborhood Conspiracy

Religion Dispatches has just put up my review of Jeff Sharlet’s book, The Family, about a secret Christian political organization headquartered in my hometown of Arlington, Virginia.

Like the emperor’s new clothes, power is invisible to those who don’t happen to know about it. One could, as I did, spend eighteen years growing up less than a mile away from one of the great centers of theocratic power in the United States without knowing it was there. Tucked away in a quiet suburban neighborhood (as begin so many horror stories), its global influence can’t be seen from the street. I’ve been there and looked.

More at Religion Dispatches (link | pdf).

Don’t You Love It When Your Day Is in a Play?

I don’t know how many of you all out there have been spending your days like me, combing through proofs for and against the existence of God and trying to write clever things about them. But if you are, have I got a play for you: The Honest-to-God True Story of the Atheist, now playing at Under St. Mark’s in the East Village. Even if you’re not like me, I bet you’d like it. It is really funny. The story of a charming Viagra salesman and an existentially-troubled white couple wrapped around the story of an internet-celebrity atheist guy who steals a statue of the baby Jesus from a Nativity scene to prove that God doesn’t exist (things go wrong). As the trio begins to tell the story of the atheist, they talk about how obnoxious atheists always are. “But they’re right, obviously.” Yeah, obviously. And acting out a bit of internet urban legend, playwright Dan Trujillo manages to throw in a pretty good rendition of Augustine’s answer to the problem of evil. Everybody is funny, and everybody, bizarrely, is right.

The cast is fantastic, and the harmonies they sing warm one’s eternal soul. The only problem is, the way they’re dressed, you’d never believe it was around Christmas time, when a nativity scene might be out. At least while seeing it in New York.
