You Can’t Navigate out of a MOOC

My career in libraries began as a detour from religious studies, with a part-time graduate student job at the UVA Library’s Electronic Text Center. Once I started down this path, for better or worse, I never returned to a traditional academic career. It was for the most part a happy change, absorbing XML and becoming initiated into the secrets of the command line. Having otherwise happily drunk the DH Kool-Aid, I was left with only a trace of a lingering aftertaste of concern. As much as I was a proponent of encoded texts, digital humanities, and the potential for transformation of scholarship through them, that did not make me, as it turned out, automatically a fan of all things digital. (Nor did it make me, as I was introduced to Tim Berners-Lee by my dad, “a fan of the internet.”) Despite otherwise qualifying as a techno fan-boi, the exception to the rule for me was online education. On further reflection, I realize I was not merely unenthusiastic about online education, but in fact quietly loathed it. What exactly was so off-putting about it remained unclear to me for some time. It was not just that it was not the same as face-to-face teaching. Was it perhaps for the opposite reason: exactly because of how close it was? So close as to show how discontinuous they really were? Like the computer-generated Tom Hanks character(s) in The Polar Express (the example from the NPR show on the subject), close enough to the original to be palpably too close. Unsettlingly close. So similar, but simultaneously, and indeed precisely because of the similarity, manifestly dissimilar. The MOOC for me was the uncanny valley of higher education.

My undisputed fair-mindedness and evenhandedness prompted me to then ask whether the same couldn’t be said of electronic books, relative to their paper-and-ink originals. It could, but usually isn’t. Physical books are more often praised than electronic texts denigrated. Perhaps one reason is that the better the electronic reading experience becomes, the less there is to complain about. And it’s worth noting that the virtues extolled by digital humanists with respect to encoding are often, and somewhat surprisingly, not for the deliverables produced—edited electronic texts or curated exhibits or collections—but for the process through which they are created. And it is the process—the making—that they especially want to share with students.

The apparent logical inconsistency bothered me more than it might have because I have been regularly—less so more recently—called upon to defend all things digital to colleagues mostly still bound to a codex paradigm. (Hey, you’re the ‘digital’ guy, right? What do you think of [insert digital-based non sequitur]…?) I had not really considered the source of this logical inconsistency, however, until I attended a lecture by Michael Suarez in the spring of 2012. Editor in Chief of Oxford Scholarly Editions Online, director of the Rare Book School and Honorary Curator of Special Collections at the University of Virginia, and holder of a doctorate and several master’s degrees, he is as qualified as anyone to speak on books, digital and otherwise. I knew several colleagues would be attending, and for once I would not have to be the digital guy; my colleagues were unlikely to address him with that honorific. That night, I could let someone else defend electronic texts, and let the digital pessimists try to dismiss the arguments of the esteemed speaker with a smug platitude. I would not have missed this lecture for the world.

Suarez began: “The digital world is upon us, and that’s a good thing.” And that was the extent of his defense of all things digital. That statement was shortly followed by the start of the accompanying slide show, and an image projected on the screen. He continued: “Ladies and gentlemen, this is Antoine-Jean Gros’ painting, ‘Napoleon Visits the Plague-stricken at Jaffa.’” He expounded on the history and iconography of the painting for a few minutes, seemingly building up to a point about the painter’s use of symbolism from the gospel of Luke, or the secular appropriation of messianic imagery. But just short of such a payoff, he instead stopped, advanced the slide to a different photograph of the same painting, one with a noticeably yellowish cast to it, and began again: “Ladies and gentlemen: this is Antoine-Jean Gros’ painting, ‘Napoleon Visits the Plague-stricken at Jaffa…’” with a nearly word-for-word recitation of the commentary accompanying the previous slide. The misdirection had the desired effect, and there were a few giggles from the audience. Though the color values and aspect were obviously different, both were clearly photographs of the same work of art. A third slide appeared, once again a noticeably different photograph of the same painting: “Ladies and gentlemen: this is Antoine-Jean Gros’ painting, ‘Napoleon Visits the Plague-stricken at Jaffa,’” continuing this time slyly amid the laughter with “and I see from the flicker of recognition in your eyes that a number of you are already familiar with this painting…” Not how I had hoped the evening was going to go.


His presentation was less a criticism of digital remediation per se than of the entire class of derivative representations, including, for instance, images of paintings in art textbooks (real books!), an academic staple long before ‘the digital age.’ That said, his lecture had mainly digital representations in its sights. It was less formal argument than rhetorical flourish, presenting a series of examples that could speak for themselves as to the impoverished experience of digital representations that (sometimes literally) paled in comparison to the originals. His graciousness and intellectual even-handedness somehow made the whole experience even worse for me. He acknowledged the validity of other positions throughout his presentation, asking, for instance, “isn’t something (a digital representation) better than nothing?” But he was persistent in his assertion of a loss in the transition to the derivative digital image and digital text, which he had, it’s true, anticipated in his introductory remarks: “We spend so much time talking about what the digital world can do for us…that I worry that we lose sight of what the digital world can’t do for us.”

At the same time, he made a positive case for what he believed engagement with the tangible original can do for us: seeing paintings in person, handling (well, very carefully handling) incunabula. There’s no denying he made his case, as far as that goes. Perhaps he dwelled a little too much on the visceral pleasures of the physical text, expressing moderate indignation at a writer referring to the ‘fetishization’ of the printed book. While the term ‘fetishization’ has undoubtedly itself become fetishized, I think there is something to its application to the awkwardly P.D.A.-like descriptions of physical texts. And it has to be conceded that one can make the point about the inadequacy of the digital surrogate much more forcefully using paintings as exemplars than books (because the content—words—of a written work can be much more readily extricated from its physical form than the image can). And like the TEI itself, Dr. Suarez’s discussion focused mainly on antique works and digital surrogates of them, setting aside for the most part the fact that many new books and other works are now themselves born digital. The argument is weakened when talking prospectively about what might be lost in deciding whether to publish a new book in a digital format instead of print.

Still, these complaints don’t really detract from his main point: things can be learned from originals that can’t be learned from surrogates, analog or digital. And felt, and sensed and experienced. And there is a loss in the experience of reading in moving from paper pages to the screen, from analog media to digital technology. This is something we all know. Don’t we? But in fact Suarez was less than explicit in describing what the benefits of direct experience of such objects actually are, and so fell short of describing exactly what the nature of the loss is. I suspect this is in part because such benefits are very difficult to articulate, if they can be articulated at all. And also because it’s possible these ephemeral benefits are only partly derived from experiencing the objects. Perhaps the benefits are derived as much (or more) from the teacher who shares the object with the student, as from the student experiencing it?

*        *        *        *        *        *

I have a number of pet peeves about the way the word ‘technology’ is used in the 21st century. The one that grates most is the truism that millennials ‘understand technology’. When really all that is meant is that they’re comfortable using Facebook, Instagram, or the platform du jour. Not that they’re any more capable of developing a Facebook, web site, or database than previous generations. To my mind, being an intuitive user of technology falls pretty short of ‘understanding’ it. Another, somewhat less domesticated peeve of mine is that ‘technology’ seems often to be used in a much more restricted sense, of electronic or digital technology exclusively, as opposed to the broad range of meaning the word really possesses—as though ‘technology’ didn’t really exist prior to the digital era. Which you begin to appreciate when someone starts talking to you about the technology that was used to build the pyramids, which you gradually realize refers to the lasers and spaceships the aliens employed in the construction process, as opposed to slave-powered lever and pulley systems.

This use of ‘technology’ as synonymous with digital technology suggests a historical discontinuity that isn’t really there. Of course, the technology of the 20th and 21st centuries is dramatically different from what had gone before. But it’s manifestly not the case that prior eras lacked technology entirely. And of course, I would not suggest Dr. Suarez traded on this ahistoricism to make his point. Still, it may have provided a more receptive context for his argument, in which technology is primarily modern and alienating, while its counterpart—craftsmanship?—belongs to a more human past. The assumption of a radical discontinuity between the digital and previous eras exaggerates the sense of the discontinuity between codex and eBook. By extension, the greater the discontinuity, the greater the theoretical loss in the transition from one medium (or technology) to another. While I acknowledge a loss, it may not be much more than the loss of today to tomorrow.

It was with some of these considerations in mind that I raised my hand during the question and answer period at the end of Dr. Suarez’s lecture. If there is a loss in the transition from the codex to the electronic text now, I asked, wouldn’t there also have been a corresponding loss for Christians who adopted the codex format for their scriptures, over the scroll, much earlier than the rest of late classical culture? I added that it was Harry Gamble, also of UVA, who framed the preference of the early church for the codex over the scroll as deriving from the former’s capacity for random access, needed for theological purposes to associate Old and New Testament passages.

Dr. Suarez was extremely gracious in his response to the question (and in conversation after the lecture), re-phrasing (and improving) my question to clarify it for the rest of the audience and supplying the pertinent theological term—typology—that I had failed to recall. The point was that the random access afforded by the codex meant the reader could enter the Biblical text at any point (as opposed to the serial access of the scroll), and at multiple points in succession. For early Christian interpreters of the Bible, this easily allowed the association of key prophetic passages from the Hebrew Bible with those from the New Testament in which the prior passages were seen to be fulfilled. As Dr. Suarez put it:

“People wanted to be able to put a finger in a passage in John, and go back to a passage in Jeremiah and say, ‘Oh, I get it’—flip, flip, flip—and you can’t do that with a scroll, right? It’s a lot harder to do with a scroll. Anybody who’s used microfilm understands the problems and possibilities of a scroll, because that’s what microfilm is, is a scroll, right?”

As he continued he did not shy away from the real import of my question:

“But then I think the truth behind the question is: ‘Well, that was a form of remediation, and there was great gain there.’ And I think that that’s right, there was profound gain there, even as there’s great gain in the digital environment—tremendous gain. Let nobody leave this room and think we must become luddites any way. The digital domain is here to stay, and it’s a good thing for us. But as I began at the beginning to say, it will change the structures of knowledge, and it will change the structures of the academy…”

As thoroughly as Dr. Suarez acknowledged the point, he did not further unpack the character of the gain, at least for Christian theology, in the church’s adoption of the codex as a piece of information technology. The comparison of the transitions from scroll to codex, and then from codex to electronic text, makes the technological dimension of the codex more obvious. Seen this way, it isn’t a matter of technology vs. something not technological, just a matter of one technology displacing another. And I thought Dr. Suarez would at the very least have to acknowledge that here was an exception to the rule of loss in the transition from scroll to codex, given what I assumed to be his theological commitments. The prophecy-fulfillment structure was an essential component of the orthodox faith, which bound together the Hebrew Bible and the Christian gospels and epistles to make the Christian Bible, comprising Old and New Testaments. A parallel structure bound the God of the Hebrew Bible with God’s son, witnessed in the New Testament, making the codex of the Christian Bible itself an analog of sorts of the Trinitarian faith. The Biblical codex thus arguably became a structural model of, and parallel to, the content contained in that particular volume.

Despite this, Dr. Suarez maintained there was a loss in the Christian adoption of the codex for the Bible. Perhaps he was willing to live with one set of theological commitments being challenged in order to protect another. Which is to say, I think that the ineffable important something he alluded to, threatened in forgoing experience of the physical original—painting or text—for the digital surrogate, was in some sense spiritual. And if so, I agree that there is a loss. But I think the potential loss has less to do with the physical objects themselves (though something to do with them too) than with important human connections mediated through those physical, visceral experiences of originals. The loss in the transition from physical book to ebook is less the loss of the aroma and tactile pleasures of the codex than the loss of the experience of one person handing that book to another, physically and metaphorically in one deft act.

*        *        *        *        *        *

I always assumed an inconsistency in my commitment to electronic texts on the one hand, and disdain for online education—MOOC or 1.0—on the other. That despite being forward-thinking in some areas, I had regressive, luddite tendencies in others. I no longer think these tendencies are in conflict or even inconsistent. At DHSI 2014, I went to dinner one evening with a group from my class, and I took an informal poll. All the others at dinner were currently teachers or recently had taught, and not one had anything good to say about MOOCs. And yet they were all there to take a class in XSL as a tool for transforming TEI-encoded texts—not luddites. What I have come to realize about digital humanities, which has brokered an encounter between literary works and XML, and other works in the humanities and other technologies, is that its technological dimension is not in contrast to, but explicitly supports, its pedagogical dimension. Because to a person, these scholars all involve their students in projects to encode and remediate texts through any number of technologies. And in involving and engaging their students in this way, with these projects, they share the works with them and afford them new perspectives on the works and their structure not possible before these technologies. But the emphasis here is not on the technology, but the handing-on, the traditio of humanities scholarship.
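The kind of encoding work these scholars share with their students can be sketched in a few lines of TEI. The fragment below is purely illustrative (the element names `sp`, `speaker`, and `l` are standard TEI, but the text and attribute values are my own invented example): it shows how features a printed page conveys only implicitly, such as who is speaking and where each verse line begins, become explicit, machine-readable structure.

```xml
<!-- A hypothetical TEI fragment: the speaker and verse-line
     structure, implicit on a printed page, are encoded explicitly. -->
<sp who="#hamlet">
  <speaker>Hamlet</speaker>
  <l n="55">To be, or not to be, that is the question:</l>
  <l n="56">Whether 'tis nobler in the mind to suffer</l>
</sp>
```

A few lines of XSL can then transform such a fragment into HTML, plain text, or a concordance; the remediation happens in the student’s own hands.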

I love the TEI. For me, the process of encoding texts has everything to do with making the implicit explicit, capping a process from which first emerged spaces between words, then punctuation, and then chapters and other readerly instruction above and beyond the narrative itself. But there are other things that do not lend themselves to explicit articulation. In the background of my dissertation lay a conviction I could not bring fully to the fore (precisely for lack of explicit evidence): that the primary character of the tradition rejected by Protestantism was not some specifiable content or other, which happened by historical accident to be transmitted verbally instead of textually, but content—if it can even be said to be that—that was possibly inconsequential relative to the significance of its being literally handed on. This conviction derives for me from a number of sources, but the most immediately relevant was pointed to in Harry Gamble’s Books and Readers in the Early Church, which referenced an article by Loveday Alexander, “The Living Voice: Skepticism Towards the Written Word in Early Christian and in Greco-Roman Texts.” In it, Alexander explores the skeptical disposition in antiquity toward written texts, and she includes a remarkable quote from one of the best known thinkers (and educators) of the classical period. Whether or not Galen held that there are no stupid questions, he clearly believed some questions facilitated and others inhibited learning:

Ἀληθὴς μὲν ἀμέλει καὶ ὁ λεγόμενος ὑπὸ τῶν πλείστων τεχνιτῶν ἐστι λόγος, ὡς οὐκ ἴσον οὐδ᾽ ὅμοιον εἴη παρὰ ζώσης φωνῆς μαθεῖν ἢ ἐκ συγγράμματος ἀναλέξασθαι.

The people who ask this sort of question [i. e. theoretical questions] are those who are not learning from a teacher, but are like those who—according to the proverb—try to navigate out of a book.

[De libr. propr. 5, Kühn XIX 33 /
Scripta Minora II 110.25-27]

Throughout the talk, Dr. Suarez repeated his concern with the materiality of books, noting for instance that “materiality always instantiates meaning. Materiality effects meanings, and materiality affects meanings. It makes meanings and it influences meanings…” And it is largely in the loss of this materiality, in the transition from printed to digital book, that he locates the potential for loss. (Though he also acknowledges that the digital has a physical dimension.) My issue ultimately is not whether there is a potential for loss of meaning in the transition to the digital domain, but where the greatest potential for loss of meaning is located. Suarez locates it in the physical interaction between the person and the object—book, painting or other work of art. Despite a shared appreciation for the original, I want to suggest the greater potential for loss is not in the loss of the physical object, but in the loss of the physical (I don’t know what else to call it) interaction between people around objects (like texts).

It is not that I have been remiss in delaying the proper subject of this post—MOOCs—until this point. MOOCs are instead a convenient example of the more substantial threat to meaning as we enter the digital world. And of course, it’s not MOOCs per se. The more sinister threat to meaning—to education, at least, but certainly to knowledge and definitely to wisdom—is from for-profit, online education, the payday lender of higher education.

In a context where the escalating costs of higher education have caught the attention of the White House, and institutions like the University of Virginia (incidentally, Suarez’s institution) are concerned to follow the example of Liberty University’s profitable online course offerings [“Sullivan oustermath: A timeline of UVA in tumult”], the threat to meaning, I think, is even more to personal interaction in education than it is in the transition to the digital book.

Posted in Book History, Digital Humanities, Education

DHSI: Summer of Love


A good conference might feature provocative presentations or lectures. Or for a business convention, the introduction of an exciting product or new program, with head-setted presenters pumping up the crowd. (See Steve Ballmer, “Developers! Developers! Developers!”) This is different. The atmosphere here at the Digital Humanities Summer Institute (DHSI) 2014 is one of prevailing happiness, the kind that comes from being thoroughly immersed in something you love. And that spills over to an appreciation for your co-participants and teachers (who are also co-participants). As Ray Siemens predicted in his opening remarks, people quickly come to really like their fellow students, and the happy quotient only continues to rise through the week. His predictions are based on past experience, as this marks the 14th year of the event.

As lovely as it is in Victoria, and as excellent as the program is, it’s not just DHSI that accounts for the generally elevated generosity of spirit that seems to be the order of the day here. Hearing the institute’s milestone, I realized my involvement in digital humanities (DH) is also in its 14th year—even if I didn’t always know to call it that—starting as a graduate student assistant at the University of Virginia Library’s Electronic Text Center in 2000. The excitement, engagement and collegiality there was the elephant in the room too, obvious enough that I felt compelled to point it out (“What’s up with all the people being really great and interesting and nice?”). David Seaman said he liked to think of it as the happiest place on earth. For a while, at least, it was.

The generally welcoming, engaging and cooperative character of the DH community is often remarked on, maybe most memorably in a 2010 post on Tom Scheinfeldt’s Found History blog:

One of the things that people often notice when they enter the field of digital humanities is how nice everybody is… our most commonly used bywords are “collegiality,” “openness,” and “collaboration.” We welcome new practitioners easily and we don’t seem to get in lots of fights. We’re the Golden Retrievers of the academy.
[Why digital humanities is “nice”]

Besides his nice turn of phrase (repeated in the Birds-of-a-Feather session: Future Tense—Where Are We Going and Who Is With Us?), Scheinfeldt is the only one (that I’m aware of) to speculate that the nice phenomenon is anything more than just—well, nice. He suggests it can be traced back to a distinctive aspect of DH, its organization around method as opposed to theory. Arguments over theory and evidence can be interminable, whereas methodological differences are finally determined in practice. Opposed positions don’t become entrenched. There is peace throughout the kingdom.

A similar connection between the excitement and enticement of DH (if not directly its charitable character) and its shift away from theory can be seen in a 2010 post by Stephen Ramsay:

…to me, there’s always been a profound — and profoundly exciting and enabling — commonality to everyone who finds their way to dh. And that commonality, I think, involves moving from reading and critiquing to building and making.
[On Building]

Ramsay’s post in turn prompted comments (from Ryan Heuser) even more explicit in the language of love and liberation that flows in the wake of the shift to making and building in DH:

But building is the opposite of detachment. Building is a form of creation. Creation is the ultimate participation…We in DH know…we are building. And we love it and learn from it.

The rejection of an artificial (at best) or deceptive (at worst) stance of objectivity, also seems to follow from the shift to method and making in DH and away from theory, both in these comments, and prominently in the DHSI Institute Lecture by Aimée Morrison, “DH as Fan Practice: Remix, Re-use, Re-Write,” where she asserted: “We are interested…” (in both senses) in our subjects, and argued for the superiority of scholarship by those engaged in their subjects.

Engaging in DH clearly is not just a matter of moving from reading to making, but also involves a dramatic shift in world-view. And while the significance of the turn to method and making is widely acknowledged, most accounts of digital humanities take as their point of departure the intersection of technology and the humanities, whether from inside or outside the community. So Adam Kirsch’s recent shot across the bow defines DH (in its minimalist form) as “the application of computer technology to traditional scholarly functions.” [“Technology Is Taking Over English Departments”] The Australasian Association for Digital Humanities defines “digital humanities as being the application of computing technology and techniques to build greater understanding of our diverse social and cultural archives…” (which appeared on a slide in Paul Arthur’s Institute Lecture, “Building Digital Humanities Communities,” as DHSI 2014 drew to a close.)

The equation computers + humanities = digital humanities is obviously true in one sense. But I believe the you-got-your-chocolate-in-my-peanut-butter account of DH is much less significant to understanding DH than it might superficially seem. And technology may be something of a red herring to understanding DH, even to DH understanding itself. If the language of love and liberation makes DH sound like a movement (and not just the attachment of a tool to a set of disciplines), I believe that’s because it is. At its core, technology, while enabling, is peripheral (and sometimes, a peripheral) to DH. And while the technology component of DH makes it appear shiny and new, I believe it belies the fact that DH in some important respects is a conservative movement, an effort to reclaim aspects of scholarship that were gradually marginalized as theory came to predominate in many disciplines. It is essentially a reform movement in the humanities. And as it matures, the implications of this shift away from theory will emerge, as they are now emerging in DH engagement with scholarly publishing and tenure reform. Activities in these areas are not digital humanities work as such. But the world-view that comes with the shift away from theory is evident in the engagement with them.

The Tuesday BOF session at DHSI took as its point of departure the assertion from Mike Witmore at the 2013 MLA: “If the digital humanities is successful, it will disappear.” On the account here, the digital humanities will not render all the humanities digital. Success instead might mean a much more profound change, rippling not only through the humanities but higher education generally, as a world view of scholarship no longer dominated by theory, but engaged, interested and inhabited, takes hold.

Posted in Digital Humanities

Strangely Familiar

My favorite (if one can have a ‘favorite’ in a set of hated things) mis-used phrase, by virtue of its ubiquity and the frequency of its use and mis-use, is: “blabbeddy blah blah blah, which begs the question….”

I became acquainted with the phrase when I was in high school, working on ‘The Question Game’ scene from Tom Stoppard’s Rosencrantz & Guildenstern Are Dead at the long-since defunct Children’s Theatre School, once half of the Children’s Theatre Company & School in Minneapolis. The game requires answering a question only with another question. A direct answer, statement or non sequitur forfeits the game. As does begging the question—assuming the conclusion in an argument, as famously in Voltaire’s Candide, when Pangloss opines that “Opium induces sleep because it has a soporific quality” (the standard example of “begging the question” and the one used in the Wikipedia article on the subject). But the phrase is now almost always used instead just as a way of setting up a question: it’s almost certain now Clowney will be taken as the first pick in the draft, which begs the question: will Johnny Football go second?

That’s definitely the long-term favorite/most-hated phrase. But it recently lost the top spot to another, a phrase not so much abused as misunderstood. One I’ve heard kicking around for years, but which I only just recently realized was being used in such a way that the loss of knowledge of its meaning in the wider public oddly serves the principle it represents (like René Girard’s scapegoating mechanism).

Familiarity breeds contempt. As with “which begs the question,” the degree to which this phrase is misunderstood seems to correlate, interestingly, with its increased rate of adoption, which in turn correlates with the level of confidence exhibited by the speakers now invoking it. I suspect its new-found popularity is due to its utility—it provides a less boring restatement of the truism that repeated exposure to a given stimulus will result in ennui.

But that’s not what it means—or at least what it meant. ‘Familiarity’ in this phrase means something very particular: not just acquaintance in general but, Sir, you are too familiar! As does ‘breeds,’ which does not mean ‘generate’ in a generic way, but in the 19th-century sense of ‘breeding’ and ‘good breeding,’ referring to child-rearing. Not universally, but among the higher classes. And finally, ‘contempt’ means something specific too. Disdain, yes, but of a particular sort: the kind which leads to the unraveling of the social fabric, i.e., if you allow your social inferiors to address you by your Christian name, other infractions will follow, eventually resulting in the open challenge of the prevailing social order. Chaos will reign.

There are lots of reasons why most of us today (at least in the U.S.A.) would have forgotten the meaning of this phrase. Perhaps the most obvious is that the classless American society at some point dictated that we should all call each other by our first names, eliminating more formal conventions that acknowledged social distinctions. Out in the open. In theory proclaiming our equality. I’m hardly the first to suggest that social distinction may become more entrenched and insidious when it travels in disguise.

In a few places in our society, it still remains somewhat out in the open. Not to the level, or say the constancy, of judges wearing wigs, but still involving dress-up in ridiculous medieval costume at least once a year in a parade. And it’s not the Society for Creative Anachronism. What could be a more obvious display of class distinction than that (in costume to this day still referred to as regalia)? So it is, somewhat ironically, chiseled into the collegiate gothic stone of the American university. Along with the colorful costume, there are less innocent manifestations of medieval-style class separation in the university. If tenure is the distinction that defines the class, the rite of passage to that status is still called ‘peer review.’ Like the phrase cautioning against extra-class relations, the real meaning of this phrase also hides in plain sight. Perhaps owing to the phrase ‘a jury of one’s peers,’ broadly disseminated through legal procedural television dramas, ‘peer’ seems to mean to Americans just the ordinary citizenry, and the word has become innocuous. It does not immediately bring Debrett’s to mind. But in the context of the academy and ‘peer review,’ it means almost the opposite: judgment by a very exclusive group, the members of the guild. And the decisions of that group determine whether or not the judged gain admission to the guild. That said, among the offenders, I’ve also heard tenured professors say, “…which begs the question—”

Posted in Education, Language, Literature

Genealogy as Paranoia

This post was first going to be just a comment on the first (and previously, only) post on this blog, prompted by a column providing a case in point for part of my contention in my post on 9/11 [Dana Milbank, “The Weakest Generation?” in The Washington Post]—especially as yet another 9/11 anniversary was looming. Then I saw the news that there was a new novel from Thomas Pynchon, Bleeding Edge, released, not coincidentally, on 9/11. Also just before Banned Books Week (last week)—and Pynchon has more than a few connections to banned books. And disaster loomed as the possibility of a job that might have obviated Not My Job reared its ugly head. Fortunately, this fate was averted, and the blog abides. This is a post about all the above.

Bleeding Edge was released on 9/11/2013 because (according to one review) 9/11 is one of its themes. Which is interesting for me, as it was lines from Pynchon’s Gravity’s Rainbow that I first thought of when I heard the news on 9/11/2001:

There is a Hand to turn the time,
Though thy Glass today be run,
Till the Light that hath brought the Towers low
Find the last poor Preterite one . . .

Of course, then I thought of the book’s first, much more apt line: “A screaming comes across the sky—” (and why didn’t I think of that line first?), in reference to the V-2 rockets whose arc is referenced in the novel’s title. My first reaction to the idea of a Pynchon novel about 9/11, apart from the rockets and the Towers Brought Low, is that it seems a little redundant, because the narratives of 9/11 already seem Pynchon-esque (minus the jokes). A little truth-being-stranger-than-fiction…

The connections between 9/11 and the lines from Gravity’s Rainbow may seem tenuous to anyone else. But maybe they were just there, in my case, because they’re always there. Because I was haunted by Gravity’s Rainbow for a long time—not in a literary sense. In an actual sense.

Which brings us to Banned Books Week. There’s every reason to think I’m about to talk about Gravity’s Rainbow in this connection, because, as Theodore Kharpertian notes in A Hand To Turn The Time (1997): “Gravity’s Rainbow … has been a source of awe, bewilderment, and disgust to its readers.” There are truly disgusting things in this novel, and the book must have been banned in more than a few places. But it’s a banned book penned by an actual person fictionalized in Gravity’s Rainbow that I’m thinking of. Pynchon draws Gravity’s Rainbow to its cataclysmic conclusion with the suggestion that readers join in the song (above), one “They never taught anyone to sing”: “a hymn by William Slothrop, centuries forgotten…” The character William Slothrop is introduced early on in the novel as the 17th-century ancestor of the protagonist of Gravity’s Rainbow, Tyrone Slothrop:

He wrote a long tract … called On Preterition. It had to be published in England, and is among the first books to’ve been not only banned but also ceremonially burned in Boston. Nobody wanted to hear about all the Preterite, the many God passes over when he chooses a few for salvation. William argued holiness for these ‘second Sheep,’ without whom there’d be no elect. You can bet the Elect in Boston were pissed off about that. And it got worse. William felt that what Jesus was for the elect, Judas Iscariot was for the Preterite. Everything in the Creation has its equal and opposite counterpart. How can Jesus be an exception? Could we feel for him anything but horror in the face of the unnatural, the extracreational? Well, if he is the son of man, and if what we feel is not horror but love, then we have to love Judas too. Right? How William avoided being burned for heresy, nobody knows.

‘William Slothrop,’ then, turns out to be a fictionalized version of the historical 17th-century William Pynchon, who did indeed write an heretical book, which was not only ‘banned’ but publicly burned. Under the banner of “Celebrate Banned Books Week,” a web page from the Springfield City Library in Massachusetts proclaims, “Another Springfield First! The first book banned in the New England colonies was written by William Pynchon, founder of Springfield, Massachusetts.” Its actual content (I have yet to read it myself) falls somewhat short of the later Pynchon’s canonization of Judas, but it was enough to provoke a strong reaction from the Puritan orthodoxy. The book was “The Meritorious Price of our Redemption” (1650), and as the Springfield City Library page notes, “It was said at the time that the title page itself was sufficient to prove the heretical nature of the arguments…” Anyone with a passing familiarity with Reformation doctrines will recognize that any concession to merit represented a creeping Roman influence. And as a former Congregationalist and Catholic convert, I appreciate that my antecedent had Catholic tendencies as well.

I didn’t learn about Thomas Pynchon’s fictionalized ancestor from the Springfield City Library. (And interestingly, the Springfield City Library’s web page on William Pynchon has no reference to William’s famous descendant!) It was actually as I was reading Gravity’s Rainbow for the first time, when I was about 23, while visiting my grandparents’ house in Riverside, Connecticut to help with organization before the house was sold. We went through various things, most of which were already identified, including a photocopy of a portrait that had hung on the wall of their living room for years, of some (previously) nameless (to me) Puritan ancestor, who I learned was William Pynchon: witch trial judge, founder of Springfield, and author of an heretical book. It seemed incredibly obvious, however unlikely, that Pynchon had fictionalized our apparently mutual ancestor.

Finding oneself, so to speak, in the story is unusual enough. To be found (or lost?) in this novel, however, had a unique set of considerations. Because it was not, after all, a mindless pleasure, but maybe the opposite: something like the rocket, aimed at the reader, with explosive intent. In a novel with historical paranoia as a major theme, the discovery of oneself in the novel is disconcerting, just as it was for Constant Slothrop:

On the old schist of a tombstone in the Congregational churchyard back Home in Mingeborough, Massachusetts, the hand of God emerges from a cloud, the edges of the figure here and there eroded by 200 years of seasons’ fire and ice chisels at work, and the inscription reading:

 In Memory of Conftant Slothrop, who died March ye 4th 1766, in ye 29th year of his age.
Death is a debt to nature due, Which I have paid, and fo muft you.

Constant saw, and not only with his heart, that stone hand pointing out of the secular clouds, pointing directly at him

Posted in Genealogy, Literature

9/11: The Way We Were

I started writing this post more than a year ago. I managed to set up the blog on 9/11/12, and almost finish the post. But not quite. It’s not that I’m a Luddite. I just can’t keep up. But as far as the post being somewhat topical, it’s almost as appropriate to post on the day after we elected a President as it might have been on the anniversary of 9/11.

When the 10th anniversary of the 9/11 attacks rolled around, I felt I should do something to observe the anniversary. A lot like people felt shortly after 9/11: that they should do something. What I did then was to attend a community conversation at the University of Virginia, sponsored by the then-new (now-defunct) Center on Religion and Democracy, where a panel of academics discussed the need to understand our neighbors at home, especially those of the Muslim faith, as well as to understand Islam abroad in its mainstream and marginal forms. (What my co-worker, a cartoonist starting out, did as her “doing something” was to contribute to 9-11 Emergency Relief: A Comic Book to Benefit the American Red Cross. And I made a cameo, as a cartoon, in the strip she contributed.)

© 2001 Jen Sorensen. Used with permission.

What I did for the 10th anniversary was to attend a community conversation sponsored by the new Danforth Center on Religion and Politics at Washington University in St. Louis, where a panel of academics had pretty much the same discussion. It seemed not much had changed since 9/11/2001.

Of course, a few things have changed. Osama bin Laden is dead, which changed something. And before that, we had an economic meltdown, which may have changed everything. And before that, someone wrote a book about 9/11, and this book made me realize that we shouldn’t have been trying to understand Islam better, or Al-Qaeda. The real question we should have been asking all along is how well they had understood us.

The book was Lawrence Wright’s The Looming Tower: Al-Qaeda and the Road to 9/11 (2006), the most perceptive, useful, and relevant discussion of Al-Qaeda I’ve read in all the years since 9/11 (amid many discussions that were superficial at best). It was widely understood and easily recognizable almost from the beginning that the immediate goal of the attack on the twin towers was to kill a large number of Americans, and in the process cause enough physical destruction to the financial district of New York to throw the financial system into chaos. It was in recognition of this that George Bush encouraged us to resist terrorism through shopping.

What never seems to be raised—even now, incredibly—in these discussions is that, as Wright argued in his book, the confusion of the financial system created by the attacks was only the most obvious part of a broader goal: to execute an attack violent and spectacular—and frankly cinematic—enough to provoke the United States into mounting a full-blown military counter-attack in the Islamic world, one that would eventually drain the resources of the United States and ultimately destroy its economy. In light of the two invasions undertaken by the United States, into Afghanistan and then Iraq, the accumulation of massive federal debt as a result, and the likely not-coincidental economic collapse of 2008 that followed, all we can say of Al-Qaeda’s broader goals in the attack is: mission accomplished. Nearly.

The other thing most people remember about 9/11—apart from the shock at the terrible loss—is the sense of unity we all felt as Americans. In retrospect, the very short-lived sense of unity. It seems ironic that we’d go from that sense of unity to a state of political division that now seems unprecedented. On further reflection, it doesn’t seem ironic at all.

Al-Qaeda understood us well enough to anticipate their attack would provoke a response on a scale we could not ultimately afford, which certainly was a major factor in sending our financial system to the brink. What I wonder is if their calculations extended to how we would get there. That a sense of American exceptionalism was operative is obvious. But the real trigger to our reaction was less our sense of shock and outrage at the attack than our inclination to sentimentality—especially when it comes to our idea of ourselves. Already within days (if not on the day) of the 9/11 attacks, the comparisons with Pearl Harbor began. And the idea that 9/11 was this generation’s Pearl Harbor. And with it, implicit comparisons of this generation to the Greatest Generation. And a desire to meet the challenge just like they had: a desire for things to be the way they used to be back then. And then the “embedded reporters” in the first days of the Iraq invasion, stating that the defeated forces of Hussein’s army were simply “melting away”—as if the army were the Wicked Witch recently doused with water. As if they weren’t in fact escaping, toting significant weaponry away with them that the Americans would meet again. And we gave in to that sentimental impulse because we so badly wanted to think of ourselves as being the way we used to be.

There had been a build-up to this on the right in America in the ’90s, calling for a return to an era that seemed less morally ambiguous. But toward the end of that decade—just before 9/11—there was a brief window when it seemed that America was ready to recognize that we couldn’t just go back, but that, at the same time, maybe there was room for dialogue. Maybe there had been too much sex, drugs, and rock and roll. Maybe education was important. And to unthinkingly try to go back to the way things were was like the middle-aged guy chasing after the teenaged cheerleader, pathetically trying to get back to a place that is gone. We seemed to be on the verge of rejecting that self-destructive sentimentality, while also rejecting the alternative of resignation. A kind of truce in the culture wars seemed to be represented in the media. The clearest example was the sitcom that introduced Téa Leoni, The Naked Truth—that was basically its plot line: the story of a party girl turned nun. Somewhat less straightforwardly, this was represented in three movies released as the decade drew toward a close. In order of appearance: Pleasantville (1998), Election (1999), and American Beauty (1999).

It’s the first movie, Pleasantville, that took these issues on most directly. The main characters of the movie—twin brother and sister—represent the two divergent trends: the nerdy, studious, and shy brother, David, represents the longing in America for simpler times. At the outset of the movie he is planning to stay home to watch a marathon of his favorite TV show, Pleasantville, an amalgam of all the TV shows of the period in which America represented itself in its simpler times: Father Knows Best, The Andy Griffith Show, and Leave It to Beaver. His fantasy of a simpler time comes to life when he is visited by the symbolic guardian of those times (played by Don Knotts), in the guise of a TV repairman, who gives David a special TV remote that transports him to the actual town of Pleasantville, in all its black-and-white glory.

Complicating the plot, his sister is transported with him as they struggle over control of the TV remote. America the over-sexed, irresponsible, and underachieving is represented by David’s sister, Jennifer. The film traces both children’s deficiencies, to some extent, to their broken home. The drama unfolds as the two assume the roles of the children in the family around which the show is centered, and their entrance gradually begins to bring color to the colorless town. With color, the peaceful town becomes progressively disordered and divided between those embracing the changes and those wanting to return the town to its monochrome stability. The story reaches a climax when it is revealed that even those calling for a return to the old Pleasantville have also been colorized, and a new peace emerges. Despite its transformation, the town is still recognizably the same, and clearly distinct from the reality from which the twins were transported. But the twins themselves reverse roles, with the wayward sister choosing to remain (“I’ll probably get better grades”) and go to college, while the brother chooses to leave his fantasy of a simpler time to engage the real world as it is, having learned in the quiet town to come out of his shell and stand up for what he believes.

There is an unreasonable amount of overlap in these films, from Reese Witherspoon’s appearance in two of them, to the symbolism of the rose—obviously in American Beauty, but also in Pleasantville, where it is the first object to make the over-the-rainbow transformation into Technicolor. Above all, they share a high school at the center of their respective cinematic worlds, which is also the main indicator that these films are “about” America. It isn’t that high school is a perfect representation of America. It’s through high school’s complete failure as a representation of an ideal America that it achieves an almost perfect representation of the actual America in its hypocrisy.

That hypocrisy is most pronounced in the fiction of student government, which drives the drama of Election. Though all three films have high schools at their center, Election and American Beauty also both feature men approaching middle age as their main characters. Both characters, typical of that stage of life, have Lost Their Way. Unlike Lester Burnham in American Beauty, whose speeches describe his efforts to break out of the hypocrisy, Jim McAllister’s self-narration at the opening of Election glories in its platitudinous virtues. It is Jim McAllister’s weak attempt to rebel against those hypocrisies that sets the drama in motion, leading to an attempt to alter the outcome of the student body presidential election and obstruct his nemesis, the student Tracy Flick, on her seemingly irresistible upwardly mobile trajectory in life. The way in which Jim McAllister almost seems to actually believe the description of his life and vocation makes it painful to listen to:

It’s hard to remember how the whole thing started, the whole election mess. What I do remember is that I loved my job. I was a teacher—an educator. And I couldn’t imagine doing anything else. The students knew it wasn’t just a job for me. I got involved. And I cared. And I think I made a difference.

Painful as that speech is to hear, it is almost unbearable to hear nearly the same speech at the end of the movie, when he has lost all of what little he had at the beginning. Having moved to New York and become a museum guide, Jim draws on yet another cliché to hide the reality of his life from himself: “I mean, that’s what’s great about America. You can always start over.” But as Jim leads a tour group of schoolchildren, and the eager hand of a precocious nine-year-old shoots up in response to a question, in the style of Tracy Flick, the audience knows it’s not possible for Jim McAllister to start over.

Though Jim McAllister’s lusting after a teenage object of desire in Election is not as overt as Lester Burnham’s, his transference of desire from Tracy to his wife’s best friend is the first thread in the unraveling of his life. Jim’s proclamation of faith in the American gospel of self-reinvention seems drowned out by the echo of Tracy’s statement from the beginning of the movie: “You can’t interfere with destiny. That’s why it’s destiny. And if you try to interfere, the same thing’s just gonna happen anyway. And you’ll just suffer.” But the movie isn’t about the inevitability of winners and losers. It isn’t even about the reality that underlies Jim’s banal professions of faith, just as he isn’t the real protagonist of the story. If there is a protagonist in the movie, it is the audience. The movie isn’t saying things could be otherwise. Or that they couldn’t. What is important is that we know that these conventions of civic virtue are frauds. And our knowing—and laughing at it—may be resolution enough.

It is a cautionary tale to some extent. The characters of Jim McAllister in Election and Lester Burnham in American Beauty readily invite comparison: both are approaching middle age, both lust after under-age objects of desire, and both surrender to their desires to some extent. Most importantly, both feel trapped in the lives they have made for themselves, and are seeking a way out. The critical difference is that Jim McAllister never states his discontent directly, and acts only in the most passive ways. He never acknowledges his own hypocrisy—and that is what makes the movie painfully funny. Lester Burnham, by contrast, undergoes successive changes in which he acknowledges the fraud his life has become ever more forcefully, and actively—if misguidedly—takes steps to change it: “I feel like I’ve been in a coma for the past twenty years. And I’m just now waking up.” [American Beauty quotes from IMDB]

But the film begins with the conclusion. In a voice-over narration, Lester Burnham tells us that he’s going to be shot today, and the rest of the movie shows the stages of his transformation leading up to his death. He remembers a time when his life wasn’t a fraud, especially as a teenager, recalling a summer when all he did was flip burgers and get laid. He successively sheds the externals of his middle-aged, suburban life and attempts to recapture the life of his teenage self: resigning from his job as a magazine writer, getting a job flipping burgers in a fast food restaurant, and trading in his Corolla for a 1970 Firebird:

Carolyn Burnham: Uh, whose car is that out front?

Lester Burnham: Mine. 1970 Pontiac Firebird. The car I’ve always wanted and now I have it. I rule!

And he indulges in sexual fantasies—in a flood of rose petals—revolving around Angela, the teenage friend of his daughter, who is on the school’s dance team with her.

And the truth is, although somewhat obscured by the charisma of Kevin Spacey’s performance, that throughout almost the whole movie, almost to the last frames, Lester Burnham is mostly an ass. His one saving grace is his attempt to live an authentic life, however puerile his attempts to do so might be. Lester is not alone in his inauthentic life. He is alienated from his whole existence, but especially in his relationships with his wife and daughter. As in the other movies, the high school his daughter attends is significant. But the locus of the drama is the street where the Burnham family lives. It represents America in its political divisions almost as clearly—though somewhat more subtly—as Pleasantville does: the Burnham family home is flanked on one side by the household of a gay couple (representing the left), and on the other by a new family that moves in, headed by a homophobic (and suppressed homosexual) retired military officer (representing the right).

The specific events leading up to Lester’s death largely revolve around the tensions among these households. Janie, Lester’s daughter, becomes involved with Ricky, the son of Colonel Fitts. She finds in Ricky some connection that she lacks in her alienated relationship with her father, telling Ricky at one point, “I need a father who’s a role model, not some horny geek-boy who’s gonna spray his shorts whenever I bring a girlfriend home from school.” It’s in statements like these that this movie integrates cultural rhetoric from both the right and the left: while the main character, Lester, is on a journey of discovery, and the narrative indicates it is an important one, the film does not hide the juvenile selfishness of this journey, or his failure as a role model and a father. In the last few minutes of the movie, Lester finally re-embraces his identity in his actual stage in life, and as a father, when, about to obtain his object of desire, Angela, she tells him she is a virgin. Lester suddenly sees her for who she is, a child looking for attention, and he remembers his own daughter, and asks Angela, “How’s Jane? … I mean, how’s her life? Is she happy? Is she miserable? I’d really like to know.”

Lester’s transformation is complete when he is shot to death by Colonel Frank Fitts, after Lester rejects his advances. The film concludes with a voice-over by Lester, after he has been killed, echoing an earlier speech of Ricky Fitts’s describing the beauty he saw in a plastic bag he videotaped dancing in a whirlwind:

I had always heard your entire life flashes in front of your eyes the second before you die. First of all, that one second isn’t a second at all, it stretches on forever, like an ocean of time… For me, it was lying on my back at Boy Scout camp, watching falling stars… And yellow leaves, from the maple trees, that lined our street… Or my grandmother’s hands, and the way her skin seemed like paper… And the first time I saw my cousin Tony’s brand new Firebird… And Janie… And Janie… And… Carolyn. I guess I could be pretty pissed off about what happened to me… but it’s hard to stay mad, when there’s so much beauty in the world. Sometimes I feel like I’m seeing it all at once, and it’s too much, my heart fills up like a balloon that’s about to burst… And then I remember to relax, and stop trying to hold on to it, and then it flows through me like rain and I can’t feel anything but gratitude for every single moment of my stupid little life…

But it’s the very last line, in retrospect, that seems to point to the then-near future: the closing of that decade and the momentary almost-truce in the war with ourselves:

You have no idea what I’m talking about, I’m sure. But don’t worry… you will someday.

Posted in Culture, Movies, Politics

Hello world!

Posted in Uncategorized