That uneasy stare at an alien nature

Keywords

computing, history, future, strangeness / informatique, histoire, futur, étrangeté

How to Cite

McCarty, W. (2009). That uneasy stare at an alien nature. Digital Studies/le Champ Numérique, 1(1). DOI: http://doi.org/10.16995/dscn.135




...I have seen nothing, which more completely astonished me, than the first sight of a Savage; It was a naked Fuegian his long hair blowing about, his face besmeared with paint. There is in their countenances, an expression, which I believe to those who have not seen it, must be inconceivably wild.... [Charles Darwin, Correspondence, i.396]

Here no relation, in the sense of message or narrative, can be established. The other is ‘inconceivably wild’. But that which is inconceivable is also here a mirror image. Beer 1996: 23, 25

The new gadget seems magical and mysterious. It arouses curiosity: How does it work? What does it do to us? To be sure, when the television sets will have appeared on the birthday tables and under the Christmas trees, curiosity will abate. Mystery asks for explanation only as long as it is new. Let us take advantage of the propitious moment. Arnheim 1957/1935: 188

1. Back to Mount Pisgah

The title of this paper is taken from Northrop Frye’s The Educated Imagination (22). Frye reminds us that although we have come a long way in humanizing the natural world, the reminder of something other remains disturbingly with us. Once upon a time the gods and heroes of old were the human face of nature. In time we ourselves took their place as measure of all things. Even now, configured posthuman by some, thought to be close to other primates or radically distinct from them, we are still using our literary and artistic abilities, Frye observes, to do “the same job that mythology did earlier … filling in its huge cloudy shapes with sharper lights and deeper shadows.” Ovid’s Metamorphoses, Francisco Goya’s Saturno, Francis Bacon’s Pope Innocent X, Seamus Heaney’s “Field of vision,” and all the rest remind us that as the writer and artist Bruno Schulz wrote,

In a work of art the umbilical cord linking it with the totality of our concerns has not yet been severed, the blood of the mystery still circulates; the ends of the blood vessels vanish into the surrounding night and return from it full of dark fluid…. [T]he work operates at a premoral depth, at a point where value is still in statu nascendi.... If art were merely to confirm what had already been established elsewhere, it would be superfluous. The role of art is to be a probe sunk into the nameless. (368-70)[2]

As good a definition of our job in the humanities as I know is to interpret the troubling strangeness communicated by art, then with it to explicate the extraordinariness of the world we have made routine. As practitioners of computing we view the strangeness of reality by way of a quite different art, which forces us to rigorously describe the extraordinary products of imagination as if they were perfectly routine, or as mathematicians say, “trivial” – not just deemed so out of boredom, stupidity, inattention, or other human failing, but found exactly and always in accordance with an algorithmic rule.

Anyone who has taught programming or who can remember first encounters will know the strangeness with which the perfection of such a rule, itself a product of the human imagination, is apt to strike the novice – close enough to routine behaviour to give the mind some purchase, yet without question beyond the human pale. But how can this be? Bruno Schulz’s psycho-embryonic imagery and Frye’s mythography both suggest an answer: that this strangeness is a paradoxical quality of something not simply beyond our ordinary selves but also simultaneously, embryonically us in statu nascendi, “in the state of being born,” as Schulz wrote. Thus a human invention, such as Turing’s machine, properly takes on the full ambiguity of the etymological sense of invenio: encounter, come upon, find; devise, contrive, plan. Some inventions more than others have this quite distinct oddness about them, this sense of being both from somewhere else and from ourselves – like one’s own children, appreciated with all the wonder and disturbance they bring.

Here I wish to argue that computing is profoundly and perpetually ambiguous in this sense, and so wonder-producing. I want to explicate this wonder and ask what its value is for scholarship. But by now, I expect, my patient reader is wanting to know why I make such a case for computing, especially in a Festschrift. My answer is simply this: As a pioneer in the field the man honoured here first encountered computing when its strangeness was utterly unavoidable. The beginner’s Pisgah-sight he and others among us were afforded then, before progress in hardware and software engineering made that daunting encounter of strangeness and wonderment mostly a thing of the past, constitutes not just an interesting historiographical problem to work on, with a fascinating history to recover, but an urgently needed source of guidance for us now. That history is our future.


2. Primitive conditions

For Ian, as for me, computers were by that time of first encounter already products of commercial industry,[3] with an impressive record of successful application in the physical sciences and engineering, and with formidable promise elsewhere. Computing by then was on a roll, despite the gathering evidence that the towering mountains of difficulty already encountered by the early 1960s were a far better guide to computing’s future in the humanities than the small hills of problem-solving imagined in the 1950s.[4] Word reaching the humanities from the small cadre of scholars involved with computing already for decades, as well as from popular accounts in the press, was that something hugely important was going on. But only a very few of us could then see its relevance to our scholarly concerns.

For very good reason. By the early 1980s, when as a graduate student in English at the University of Toronto I first met Ian, computing was rebarbative to a degree few now can recall, despite all its progress and might in the world of techno-science and commerce. The machines themselves were physically inaccessible. At Toronto, for example, computing machinery was in the hands of the University of Toronto Computing Services (UTCS), housed in the McLennan Physical Laboratories building, kept safe behind glass walls and cooled by a noisy forced-air system whose conduits ran underneath a “floating floor.” The machines were very expensive and complicated to operate, requiring highly trained operators (among whom I was one, years before, at the Lawrence Radiation Laboratory), a clerical staff, managers, and “customer engineers” from DEC and IBM. For the ordinary user, computing was thus mediated and interpreted by professional technicians and administrators whose background training was far more likely to be in science or commerce than in the humanities. Long gone by then were the early days when computers were the personal instruments of those whose problems they helped to solve; still to come were the days when ordinary people could actually touch a computer, play with it, experiment. The sense of computing’s entrapment by a grey administrative bureaucracy and its deployment for dull if not sinister purposes was entertainingly depicted in 1984 by Steven Levy, in Hackers: Heroes of the Computer Revolution, and by the Orwellian commercial video circulated just prior to the release of the first Apple Macintosh.[5] To many, especially in North America, it seemed (to borrow Justice Oliver Wendell Holmes’ famous phrase) a “clear and present danger.”[6] From the beginning of commercialization into the early 1980s, advertisements for computer systems were almost entirely aimed at managers of “data processing,” as the commercial brochures of the time demonstrate.[7]

Computers were also very slow. Until time-sharing mainframes became common in the mid to late 1970s, the effective speed of computing, however fast the machine, was measured by users in “turnaround time”, i.e. the interval in hours or days between submitting a “job” across a counter-top in the form of punched cards and receiving the printout. More often than not (in my case at least) the printout would reveal a card-punch error or similarly trivial mistake and so compel error-prone correction, submission of a new job, another wait and so on. Even when, during my PhD research, a time-sharing system became accessible from home by dial-up acoustic modem from a “dumb” terminal at very slow speeds,[8] a printout still had to be fetched from the computing centre. Then “microcomputers” and dot-matrix printing arrived, and with them, computing centres began to recede into the infrastructure.

It is easy to forget how great the physical and cultural distance to be traversed was for the humanist scholar before computers became personal, how foreign the environment once you got there and, in our terms now, how meagre the reward. It is easy to forget how much violence we did to our inherited ways of working by accommodating ourselves to the “batch-mode” style imposed by commercial operating systems then.[9] It is easy to forget the excitement among those who braved such difficulties, and (for those who could program or who worked closely with programmers) the bracing challenges to be clever, the triumphs of success, the wild thoughts of what might now become possible. It is easy to forget the extent to which disciplinary provincialism on the part of one’s colleagues exacerbated the difficulties, indeed made computing toxic to a budding academic’s career and a stain on an established scholar’s reputation.


3. Excavating the cognitive strangeness

Recollecting all those difficulties, however accurately, has its perils too. By doing so I am at risk of suggesting that the Pisgah-sight to be recovered is a product of the technical and institutional mismatch between scholars and computers. Decades of brilliant progress in hardware and software engineering and decades of skillful marketing have made that mismatch far less obvious than it once was. Progress has made the graphical user interface our springboard of attention – something we attend from with too little thought about how it shapes our view of things. The Web, its “Real Soon Now” putatively semantic successor and equally careless talk of “information” to be had there have further distanced us from the strangeness of the cognitive interface between computing and scholarship. In a rush to domesticate innovation (whose perceived threat is easily recoverable from the literature)[10] we have buried this strangeness in familiarity.

From careful recollection and perusal of the literature I would suggest, however, that during the incunabular period of computing, awkward infelicities and cognitive strangeness led one to the other. The former served the novice in the humanities as forced introduction to a new, equally imaginative but radically different language game, new social relations and unfamiliar disciplines. The latter required of him or her new ways of thinking and a sometimes uncomfortable, always minute probing of mostly tacit research methods. It also led to incisive questioning of the claims made for computing and so to a critical perspective from which its place within humanistic discourse could begin to be explored, as we then did.


4. View from a moving centre

At the beginning of Open Fields Dame Gillian Beer comments:

Encounter, whether between peoples, between disciplines, or answering a ring at the bell, braces attention. It does not guarantee understanding; it may emphasize first (or only) what’s incommensurate. But it brings into active play unexamined assumptions and so may allow interpreters, if not always the principals, to tap into unexpressed incentives. Exchange, dialogue, misprision, fugitive understanding, are all crucial within disciplinary encounters as well as between peoples. Understanding askance, with your attention fixed elsewhere, or your expectations focused on a different outcome, is also a common enough event in such encounters and can produce effects as powerful, if stranger, than fixed attention. (2)

The creation of institutional centres for humanities computing[11] afforded those of us in them a perspective on scholarship much like that of a dean or research officer, whose panoptic view comes with the job.[12] We were and still are peripatetic, in respect of the job frequently in and out of the older disciplines rather than of them, with estranged attention not on but through their discourses to common research methods. Our interdisciplinary standing point I have elsewhere compared to the ethnographic perspective of a trader in intellectual goods, for whom disciplines are as cultures to the social anthropologist (McCarty, Humanities Computing 114-57; “Tree, Turf, Centre, Archipelago”). This makes us participant-observers of research (i.e., direct participants in our own research, to some degree indirect participants of others’ and always observers of both). For the digital humanist who is methodologically self-aware, that is, the roles of native and anthropologist are cohabiting states of mind.

At Toronto, before Ian’s Centre was founded, the scholar needing help would walk the distance to the UTCS Humanities Support Group, of which Ian was prime mover.[13] For me this meant a physical stroll down St George Street from the Robarts Library (institutional centre of the humanities) to the McLennan building – and a metaphorical journey from the familiar slog of research to often demanding, exciting and enlightening conversations with Lidio Presutti, a stubbornly exacting and brilliant programmer. The experience indeed afforded a Pisgah-sight of that which has remained central to humanities computing. Harold Short has called this educating interplay the “bilateral curiosity” of collaborators (Humanities Computing 121). What has changed or begun to change since Ian’s Centre was disassembled are the institutional arrangements allowing for this interplay to take place on level ground, among equals: not support, as then was, but collaboration, as now is, at least in London.


5. Modern and postmodern times

In attempting to move from a purely chronological telling to a genuine history one looks for clues to the unchanging (such as this interplay of curiosities) as well as for synchronous events and ideas. For me, as Academic Liaison Officer, then Assistant Director in Ian’s Centre at Toronto (1984-1996), the best clue was the evident collision of the promo-man’s hyped-up claims with the solid demands of scholarly interpretation (Humanities Computing 3-4). The clue took on real substance through many conversations with working scholars and so led me to the cognitive dissonance of analytic computing for the humanities. My own decade-long research project, An Analytical Onomasticon to the Metamorphoses of Ovid,[14] became in effect my exploratory response. I used textual markup to implement minute acts of critical interpretation, simultaneously constraining my interpretative self according to the twin computational demands of complete explicitness and absolute consistency. The result was an inner wrestling match between two personae, an interpreter of literature on the one hand, a “computer” on the other. The latter was the stranger, and so more interesting, role because it allowed me to become, however briefly, and so to understand from the inside, the perfect mathematical factory-worker Alan Turing depicts at the beginning of his seminal paper, “On computable numbers, with an application to the Entscheidungsproblem” (231). In the paper Turing’s analogy quickly drops out of sight, of no further use to him in explicating the disembodied, flawlessly step-wise process that is his aim. But it nevertheless evokes a socially potent imaginary which runs tellingly alongside the development of automation in the twentieth century and the fears which I mentioned earlier.

Much historical spade-work remains to be done, but evidence of this imaginary surfaces, for example, in Charlie Chaplin’s tragicomic farce Modern Times, which appeared in that same rather amazing year of 1936.[15] Chaplin plays an assembly-line factory worker who in his struggle to conform to the machine is driven insane. The surreal truth in Chaplin’s depiction had by that time become deeply familiar in the daily experience of real factory-workers, whose workplace reflected Frederick Winslow Taylor’s programmatic elevation of “the system” above the individual (Principles of Scientific Management, published in 1911). Taylor in turn was indebted to Charles Babbage’s On the economy of machinery and manufactures (published in 1832), the writing of which was as Babbage says in the Preface “one of the consequences that have resulted from the calculating engine” for which he is celebrated. Babbage’s economics may be irretrievably Victorian, but his cognitive model, articulated through Taylorian ideas of automation, continues well into this century via the assembly line itself and the theorizing of operations research (or operational research, as it is called in the U.K.).[16]

Both rampant fear of automation and besotted love of it, especially visible in the early years of computing, are deterministic in character. But simultaneous with if not prior to both dystopian and utopian fantasies, quite opposite and far more practical and intimate ideas of computing arose in the work of Vannevar Bush, Douglas Engelbart and others whose aim was to augment human intelligence, as Engelbart wrote, rather than to suppress it or render it otiose.[17] Thus we can see from the very outset of computing machinery two opposed tendencies and arguments: one to replace humans in their current role, making them masters of electronic slaves or slaves to electronic masters; the other ultimately to join the two, hence the transformation underway in art from the 1960s and more recently in medicine (See Hacking, “The Cartesian vision fulfilled”, and Hayles). I will return to the master/slave dialectic below.

The principle involved here has been well articulated by Terry Winograd and Fernando Flores in Understanding Computers and Cognition:

All new technologies develop within the background of a tacit understanding of human nature and human work. The use of technology in turn leads to fundamental changes in what we do, and ultimately in what it is to be human. We encounter deep questions of design when we recognize that in designing tools we are designing ways of being (xi).

What was and is at issue, then, is something like the historical confrontation of humankind with a bi-directional self-imaging device, or rather with an indefinitely extensible scheme for constructing such devices. I will leave aside the question of whether or not computing is in this respect or any other “unprecedented.”[18] I prefer rather to join computing with the other media of expression we use to show ourselves to ourselves, then ask about the particular ways in which computing inflects self-reflection. Its use both as device and as metaphor to reflect on ideas about the mind is obvious and, we might say, the low-hanging fruit of early research. But it is clear from the vocabulary of John von Neumann’s First Draft of a Report on the EDVAC that, as Arthur W. Burks has written, his scheme used “idealised switches with delays, derived from the logical neurons of Warren McCulloch and Walter Pitts.”[19] From the beginning, that is, the most influential architecture we have for computing machinery reflected the architecture of cognition as it was then understood in brain science.[20] Furthermore, as computing systems emerged and developed into complex layered structures of subsystems, they furnished a receptive medium for existing ideas of cognitive modularity, already worked out in considerable detail by the phrenologists. Early twentieth-century imagery is vividly provided, for example, by Fritz Kahn in his multivolume treatise, Das Leben des Menschen, and by other forms catalogued in Dream Anatomy (Sappol).
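Burks’s phrase can be made concrete in a few lines. The sketch below is mine, in modern Python rather than anything von Neumann or McCulloch and Pitts would have recognised; it omits the delay their neurons carried, and the function names and thresholds are illustrative assumptions only. It shows the core of the “logical neuron”: a unit that fires when the weighted sum of its binary inputs reaches a threshold, an idealised switch from which the familiar logic gates follow.

```python
# A McCulloch-Pitts "logical neuron" reduced to its arithmetic core:
# a threshold switch over binary inputs (the delay of the original
# formulation is omitted; names and thresholds are illustrative).

def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) iff the weighted sum of binary inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

# The sense in which such neurons are "idealised switches":
# ordinary logic gates fall out as special cases.
def AND(a, b): return mp_neuron([a, b], [1, 1], threshold=2)
def OR(a, b):  return mp_neuron([a, b], [1, 1], threshold=1)
def NOT(a):    return mp_neuron([a], [-1], threshold=0)

for a in (0, 1):
    for b in (0, 1):
        print(f"AND({a},{b})={AND(a, b)}  OR({a},{b})={OR(a, b)}  NOT({a})={NOT(a)}")
```

Nothing more is needed for a network of such units to compute any Boolean function, which is the sense in which one circuit of switches could plausibly stand for both a brain and a computing machine.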

My point here is not a conspiracy of theories, though a confluence of thinking is evident, but the inheritance of strangeness that had made our relation to the world, specifically as vested in automata, a cause of wonder and fear for millennia. The point of my point is the resonance that connects it to the unavoidable, formidable otherness which both Ian and I confronted when we began our turbulent love affairs with computing.


6. Estrangement, reduction and emergence

Computing in this respect is, as I said earlier, bi-directional: it not only appears strange, it also makes strange. Brian Cantwell Smith has pointed out that to do anything useful at all a computer requires a model of something to direct its actions (25ff). Hardware failures and software bugs aside, the correctness of these actions is the correctness of the model, i.e. its truth to the world. Truing it is in part the ongoing work of computer science. From that work comes a digital infrastructure on which we increasingly depend, with hugely consequential effects that we are only beginning to understand. Apart from the trade-off demanded of us, the rigorous strangeness of Turing’s machine makes “correctness” in any absolute sense an incoherent concept, as Cantwell Smith argues. In other words, the attempt to rule-bind the world inevitably fails. Particular failures can be fiddled – indeed, this is what computational modelling (the process) consists of. The incorrigible failures can, however, also be put to work to illumine the irregular, the non-trivial, the extraordinary, the anomalous. Hence computing as an agent of what the Russian Formalists called ostranenie or “defamiliarization” – the artistic technique of creating a pre-interpretative “vision” of an object rather than serving as a means of knowing it (Shklovsky 18), or “how we freshen things that have become banal, rather than banalize things that have become revolutionary” (Bruner, “Life and Language” 22).

What we might then call the negative theology of modelling is a subject I have written on extensively. But for present purposes, all I need to note is that with any kind of cultural artefact, modelling of this sort quickly runs aground. Indeed, in the humanities that is its goal: to drive a certain set of conditions for the interpretation of an artistic or literary work to their breaking point – as happened with the Analytical Onomasticon. If the reduction of the artefact’s actual data to computationally tractable elements is not a problem (as it isn’t usually with text), then the problem lies with how we choose them and, far more arbitrarily, what we deem to be their relevant context. The culprit here is the ill-defined notion of “context,” which sounds like data but turns out to be an indefinitely regressive, changeable semiotic penumbra (McCarty, “Beyond the Word”).

Something fundamental is clearly wrong. I am reminded of Horatio’s and the sentinels’ attempt to engage with the ghost of Hamlet’s father:

Bernardo. ‘Tis here.
Horatio. ‘Tis here.
Marcellus. ‘Tis gone.

There is, one might say, a strangeness to the strangeness of a reductive analysis ending in a vanished ghost of meaning, or as Beer has magnificently put it, the epistemophilia of “driving back towards an unaskable question” that “fuels the anxieties and pleasures” of our work (30). If, for example, we look from a methodological point of view to the increasingly successful research of John Burrows in computational stylistics, we see the traditional recursive hermeneutics of critical reading and reductive analysis. Apart from the statistical techniques, innovation lies first in an experimental method used to test and improve upon ideas about the text’s stylistic features and their fit with the tests used on them. Innovation, shared with natural language processing, also lies in the very view of text as stochastic rather than deterministic.[21]
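To suggest what such a statistical technique looks like in practice: Burrows is best known for his “Delta” measure of stylistic difference, the mean absolute difference between two texts’ z-scored frequencies of very common words. The sketch below is mine, not drawn from Burrows 2009; the tokenisation, the toy corpus and the choice of marker words are assumptions for illustration only.

```python
# An illustrative reduction of Burrows' Delta: compare two texts by the
# z-scores of their common-word frequencies relative to a reference corpus.
import re
from statistics import mean, stdev

def rel_freqs(text, vocab):
    """Relative frequency of each vocabulary word in the text."""
    words = re.findall(r"\w+", text.lower())
    return {w: words.count(w) / len(words) for w in vocab}

def delta(corpus, text_a, text_b, vocab):
    """Mean absolute difference of z-scored frequencies (corpus needs >= 2 texts)."""
    profiles = [rel_freqs(t, vocab) for t in corpus]
    pa, pb = rel_freqs(text_a, vocab), rel_freqs(text_b, vocab)
    diffs = []
    for w in vocab:
        mu = mean(p[w] for p in profiles)
        sigma = stdev(p[w] for p in profiles) or 1.0  # guard against zero spread
        diffs.append(abs((pa[w] - mu) / sigma - (pb[w] - mu) / sigma))
    return mean(diffs)

corpus = ["the cat sat on the mat", "a dog and a cat", "the dog ran to the gate"]
print(delta(corpus, corpus[0], corpus[1], vocab=["the", "a", "and"]))
```

The point is only the stochastic premise: texts are treated as samples from word-frequency distributions, and “style” as a statistical signature of those distributions, to be tested and refined experimentally.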

Relatively early in the history of text analysis, attempts to realise what computational linguist Margaret Masterman called “a telescope of the mind” had been so unsuccessful as to provoke a melancholic round of disciplinary self-examination that has been a feature of the discourse in text-analysis ever since. The clearest, most theoretically sophisticated response of the last century was Susan Wittig’s (“The Computer and the Concept of Text,” published in 1978), taken up and developed in this one by Jerome McGann (“Marking Texts of Many Dimensions,” published in 2004). Quoting Masterman indirectly, Wittig concluded that the problem was an inadequate concept of text – hence what I have called the Wittig-McGann question, “what is text?” We can read this inadequate concept directly from the text-analytic software we have, which processes delimited character-strings to measure their frequency, proximity and juxtaposition. Its central format, the keyword-in-context (KWIC), functions visually as an interface which redefines our relation to text not as readers but as language-sleuths (hence the migration of the tool into corpus linguistics). The strength of text analysis as a whole lies in the insight that it affords into some of what must be happening between the first moments of eye contact with the page and engagement with whatever story is being told. But that is all that it does. Even further away from reading, closer to the machine, more abstract and alienating, is the later notion instantiated in the work of the Text Encoding Initiative (TEI) that text is “an ordered hierarchy of content objects” – the “OHCO Thesis”, as it is known.[22]
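The poverty of that concept can be seen in miniature. A minimal KWIC routine of my own devising (the window size and formatting are arbitrary assumptions) shows how thoroughly such software treats text as delimited character-strings rather than as something read:

```python
# A minimal keyword-in-context (KWIC) display: each occurrence of a
# keyword is shown with a fixed window of neighbouring words, the text
# having first been reduced to a sequence of delimited character-strings.
import re

def kwic(text, keyword, window=4):
    """Print every occurrence of `keyword` (lowercased) with `window` words of context."""
    words = re.findall(r"\w+", text.lower())
    for i, w in enumerate(words):
        if w == keyword:
            left = " ".join(words[max(0, i - window):i])
            right = " ".join(words[i + 1:i + 1 + window])
            print(f"{left:>30} [{w}] {right}")

sample = ("In a work of art the umbilical cord linking it with the totality of our "
          "concerns has not yet been severed. The role of art is to be a probe "
          "sunk into the nameless.")
kwic(sample, "art")
```

Everything the concordancer “knows” is in that vertical alignment; whatever happens between eye and page remains the reader’s affair.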

Thus the estrangements of computing force the Wittig-McGann question. McGann’s own response and mine are sufficiently strange in themselves to raise a number of other questions having mostly to do with first attempts to devise a language capable of bridging current literary-critical theory to software (McCarty, “Beyond the Word”). Wittig points out that methods then extant were the products of doing just that with the positivistic New Critical idea of text. Beginning with reader-response criticism, ideas of text have been formed within a much broader cultural shift of cosmologies, from the reductive to the emergent. Despite the huge successes of reductive science, reaching a peak at about the same time as computing began, disquiet surfaced, for example in Erwin Schrödinger’s What is Life? (published in 1944). At the end of the century biologist Robert Rosen, his thoughts orbiting Schrödinger’s project, wrote that “our universes are limited, not by the demands of problems that need to be solved but by extraneous standards of rigor. The result ... is a mind-set of reductionism, of looking only downward toward subsystems, and never upward and outward.”[23] At about the same time two Nobel laureates, physicist Robert Laughlin and chemist Roald Hoffmann, both proclaimed, in Laughlin’s words, the beginning of an “Age of Emergence” (Laughlin 205-21; Hoffmann 18-21). Even if we speak in terms of a world “out there,” even if we observe but simple things, Hoffmann points out that what we find “is refractory to reduction, and if we insist that it must be reducible, all that we do is to put ourselves into a box. The box is the limited class of problems that are susceptible to a reductionist understanding. It’s a small box.” (21)

Allow me to leave this question by observing only that the Pisgah-sight of a computing that would model how we handle text as an emergent phenomenon of reading is anything but dim. How we get to the place where such computing is possible is quite another matter.

7. The success of not getting there

My objective has been to put some of the strangeness of computing – to quote Jerome Bruner’s wonderful phrase in a new context, an “alternativeness of human possibility” we might otherwise never have confronted – sufficiently before your eyes as to give you some appreciation of the far less ramified Pisgah-sight which motivated us in former times (“Possible Castles” 53). Yet, with some brilliant exceptions, scholars as a whole have treated the estranging potential of computing as an embarrassment and have either turned away from the challenge it poses or engaged themselves in a cover-up. They have had little to nothing to say that might instruct those for whom the poverty-stricken notions of user-friendliness, semantic compatibility and ubiquitous computing are sufficient goals. They have preached, and still today preach, the mainstream assimilation of our young field as if there were no value whatever in a methodological ship of exploration with which to investigate the interrelations among disciplines and enrich them in the process. They have considered as success the reduction of computing’s estranging vision to the totally ordinary, pen-in-the-pocket utility. At the same time everyone, even Microsoft, understands that computing isn’t quite what it should be, that it remains annoyingly alien to where we want to go today.

That computers annoy and disappoint is a commonplace and fact of life. In Lee Friedlander’s book of photographs, At Work, those stuck in front of a computer screen seem among the unhappiest. Are we, who put ourselves there with intent, better off? Reporting on his research when it had already been in progress for over three decades, Roberto Busa argued strongly against the then still persistent notion that computing was about saving scholarly labour (“The Annals of Humanities Computing,” published in 1980). For all walks of life the promise of “labour-saving” from automation had shown itself to require heavy qualification: what often tended to happen was that labour was shifted somewhere else, or the job to someone else. The most threatened were and are middle-level management, i.e. those whose jobs are most like the one Turing used as a metaphor. Among scholars one imagines that no one has ever had a problem with getting relief from genuine drudgery, such as making concordances – although Ione Dodson Young famously expressed nostalgic regret on completing “undoubtedly the last concordance to be made without the assistance of a computer” (vii; cf. Raben). Rather it was the temptation to define what the computer could do, and was for, as drudgery, hence, again, the machine as perfect slave which one could use without moral opprobrium. Yielding to this temptation, Louis Milic declared in the inaugural issue of Computers and the Humanities that there was “a real shortage of imagination among us” and a squandering of computing’s potential (4). Sir Geoffrey Vickers noted in an unsigned review in the Times Literary Supplement a few years later that computers could help us to resolve “the major epistemological problem of our time” – how the playing of more or less tacit roles differs from the mechanical following of rules that can and should be spelled out – if the exponents of computing “can resist the temptation to bury it. The temptation will be dangerously strong,” Sir Geoffrey concluded. “Slave labour is so seductive” (585).

When computers were expensive, massive and required expert care, scholars were easily tempted (as a defensive move?) to bestow the subservient role of computing on technical staff, and did so. Today even printers seldom need fixing, but as long as computing is something someone else does for the established scholar somewhere other than within a collaboration of equals, the problem will remain. The way out and toward a better arrangement is to take Douglas Engelbart’s path even more seriously than we have.

Should it be possible completely to recapture that Pisgah-sight which I have been chasing, I suppose it would prove a description of where we are and are trying to be, but in a form quite different from the one we imagined back then. How can the past be connected to the present but that we connect it, choosing from or compelled by our predicaments to see a scene with different highlights and different shadows from those anyone would have been interested in seeing, and so seen, at another time? In honouring Ian Lancashire as pioneer in Canada and abroad, from abroad, by putting him, and sometimes myself, on Mount Pisgah, I am with Thomas Fuller’s seventeenth-century phrase invoking the great biblical metaphor of an exodus toward a visionary Promised Land never actually reached. Not to put too fine a point on the matter, computing is like that, and so perhaps keeps us in a pioneering state of mind. But for Ian, back when computing made life very difficult, Dame Gillian’s further description, with words from Keats’ The Fall of Hyperion, comes to mind:

Darwin, energetically observing and writing before the establishment of genetic theory, had to have the patience of the pioneer – the patience not to know for sure within his lifetime ‘Whether the dream now purposed to rehearse / Be poet’s or fanatic’s’, whether it would prove to be authentic or delusive. (14).

A Festschrift signifies authenticity. We can only hope for the future, but this time we have a guide.


Works Cited

ALPAC (Automatic Language Processing Advisory Committee). Language and Machines: Computers in Translation and Linguistics. Publication 1416. Washington DC: National Academy of Sciences, National Research Council, 1966. Web. 5 Jan. 2009. <www.nap.edu/openbook.php?isbn=ARC000005>.

Anon. “Keepers of rules versus players of roles”. Times Literary Supplement 21 May 1971: 585. Print.

Arnheim, Rudolf. “A Forecast of Television”. Film as Art. Berkeley: U of California P, 1957/1935. 188-98. Print.

Babbage, Charles. On the economy of machinery and manufactures. London: C. Knight, 1835. Print.

Beer, Gillian. Open Fields: Science in Cultural Encounter. Oxford: Oxford UP, 1996. Print.

Bessinger, Jess B., Jr, Stephen M. Parrish and Harry F. Arader, eds. Literary Data Processing Conference Proceedings September 9, 10, 11 – 1964. Armonk, NY: IBM Corporation, 1964. Print.

Bruner, Jerome. “Possible Castles”. Actual Minds, Possible Worlds. Cambridge, MA: Harvard UP, 1986. 44-54. Print.

——. “Life and Language in Autobiography”. Alfred Korzybski Memorial Lecture 1988. General Semantics Bulletin 54 (1988). Web. 9 Mar. 2009. <www.generalsemantics.org/misc/akml/akmls/57-bruner.pdf>.

Burks, Arthur W. “From ENIAC to the Stored-Program Computer: Two Revolutions in Computers”. A History of Computing in the Twentieth Century. Ed. N. Metropolis, J. Howlett and Gian-Carlo Rota. New York: Academic, 1980. 311-44. Print.

Burrows, John. “Never say always again: Reflections on the Numbers Game”. Text and genre in reconstruction: Effects of digitization on ideas, behaviours, products & institutions. Ed. Willard McCarty. Cambridge: Open Book, 2009. Forthcoming. Print.

Busa, R. “The Annals of Humanities Computing: The Index Thomisticus”. Computers and the Humanities 14 (1980): 83-90. Print.

Campbell-Kelly, Martin and William Aspray. Computer: A History of the Information Machine. New York: Basic Books, 1996. Print.

Ceruzzi, Paul E. A History of Modern Computing. Cambridge, MA: MIT, 1998. Print.

Dreyfus, Hubert. Alchemy and Artificial Intelligence. Report P-3244. Santa Monica: Rand, 1965. Web. 5 Jan. 2009. <www.rand.org/pubs/papers/P3244/>.

Engelbart, D. C. Augmenting Human Intellect: A Conceptual Framework. Summary Report AFOSR-3233 on SRI Project 3578. Palo Alto, CA: Stanford Research Institute, 1962. Web. 5 Jan. 2009. <www.bootstrap.org/institute/bibliography.html>.

Fogel, Ephim G. “The Humanist and the Computer: Vision and Actuality”. In Bessinger, Parrish and Arader 1964: 11-24; rpt. American Behavioral Scientist 9.4/5 (1965): 37-40. Print.

Friedlander, Lee. At Work. Göttingen: Steidl, 2002. Print.

Frye, Northrop. The Educated Imagination. The Massey Lectures, Second Series. Toronto: CBC, 1963. Print.

Fuller, Thomas. A Pisgah-sight of Palestine and the Confines Thereof, with The History of the Old and New Testament acted thereon. London: J.F., for John Williams, 1650. Print and Web; rpt. Early English Books Online. Web. 5 Jan. 2009. <eebo.chadwyck.com/home>; rpt. the Jewish National and University Library. Web. 5 Jan. 2009. <aleph500.huji.ac.il/nnl/dig/books/bk001038198.html>.

Gandy, Robin. “The Confluence of Ideas in 1936.” In The Universal Turing Machine: A Half-Century Survey. Ed. Rolf Herken. Computerkultur II. 2nd ed. Wien: Springer Verlag, 1995. 51-102. Print.

Hacking, Ian. The Taming of Chance. Cambridge: Cambridge UP, 1990. Print.

——. “The Cartesian vision fulfilled: analogue bodies and digital minds”. Interdisciplinary Science Reviews 30.2 (2005): 153-66. Print.

Hayles, N. Katherine. How we became posthuman: Virtual bodies in cybernetics, literature, and informatics. Chicago: U of Chicago P, 1999. Print.

Hoffmann, Roald. The Same and not the Same. New York: Columbia UP, 1995. Print.

Kahn, Fritz. Das Leben des Menschen: eine volkstümliche Anatomie, Biologie und Entwicklungsgeschichte des Menschen. Stuttgart: Kosmos, 1923-29. Print.

Lancashire, Ian. Computer Applications in Literary Studies: A Userbook for Students at Toronto. Toronto: Department of English, 1983. Print.

——. “Letter from Toronto”. Computers and the Humanities 19.4 (1985): 251-3. Print.

Laughlin, Robert B. A Different Universe: Reinventing Physics from the Bottom Down. New York: Basic Books, 2005. Print.

Levy, Steven. Hackers: Heroes of the Computer Revolution. 2nd edition. New York: Penguin, 2001. Print.

Mahoney, Michael S. “Software as Science – Science as Software”. History of Computing: Software Issues. Ed. Ulf Hashagen, Reinhard Keil-Slawik and Arthur Norberg. Berlin: Springer Verlag, 2002. 25-48. Print.

——. “The histories of computing(s)”. Interdisciplinary Science Reviews 30.2 (2005): 119-35. Print.

Manning, Christopher D. and Hinrich Schütze. Foundations of Statistical Natural Language Processing. Cambridge, MA: MIT Press, 1999. Print.

Masterman, Margaret. “The Intellect’s New Eye”. In TLS 1962: 38-44. Print.

McCarty, Willard. Humanities Computing. Basingstoke: Palgrave, 2005. Print.

——. “Tree, Turf, Centre, Archipelago – or Wild Acre? Metaphors and Stories for Humanities Computing.” Literary and Linguistic Computing 21.1 (2006): 1-13. Print.

——. “Beyond the word: Modelling literary context.” Text Technology (2009). Forthcoming. Web. 5 Jan. 2009. <staff.cch.kcl.ac.uk/~wmccarty/>.

McCulloch, Warren S. “What Is a Number, that a Man May Know It, and a Man, that He May Know a Number?” Embodiments of Mind. 2nd edition. Cambridge, MA: MIT, 1970. 1-18. Print.

McGann, Jerome. “Marking Texts of Many Dimensions”. In A Companion to Digital Humanities. Ed. Susan Schreibman, Ray Siemens and John Unsworth. Oxford: Blackwell, 2004. 198-217. Print.

Milic, Louis T. “The Next Step.” Computers and the Humanities 1.1 (1966): 3-6. Print.

Newell, Allen. “Metaphors for Mind, Theories of Mind: Should the Humanities Mind?” The Boundaries of Humanity: Humans, Animals, Machines. Ed. James J. Sheehan and Morton Sosna. Berkeley: U of California P, 1991. 158-97. Print.

Nold, Ellen W. “Fear and Trembling: The Humanist Approaches the Computer.” College Composition and Communication 26.3 (1975): 269-73. Print.

Nyce, James M. and Paul Kahn. From Memex to Hypertext: Vannevar Bush and the Mind’s Machine. Boston: Academic, 1991. Print.

P[egues], F[ranklin] J. “Editorial: Computer Research in the Humanities”. The Journal of Higher Education 36.2 (1965): 105-8. Print.

Price, Derek J. deS. “Gods in Black Boxes.” In Computers in Humanistic Research: Readings and Perspectives. Englewood Cliffs, NJ: Prentice-Hall, 1967. 3-7. Print.

Raben, Joseph. “The death of the handmade concordance.” Scholarly Publishing 1.1 (1969): 61-9. Print.

Renear, Allen. “Out of Praxis: Three (Meta)Theories of Textuality.” In Electronic Textuality: Investigations in Method and Theory. Ed. Kathryn Sutherland. Oxford: Oxford UP, 1997. 107-26. Print.

Rosen, Robert. “Effective Processes and Natural Law.” In The Universal Turing Machine: A Half-Century Survey. Ed. Rolf Herken. Wien: Springer Verlag, 1995. 485-98. Print.

——. Essays on Life Itself. Complexity in Ecological Systems Series. New York: Columbia UP, 2000. Print.

Sappol, Michael. Dream Anatomy. Washington DC: Department of Health and Human Services, 2006. Web. 31 Dec. 2008. <www.nlm.nih.gov/exhibition/dreamanatomy/>.

Schrödinger, Erwin. What is Life? The physical aspect of the living cell. Cambridge: Cambridge UP, 1944. Print. Rpt. What is Life? The Physical Aspect of the Living Cell with Mind and Matter and Autobiographical Sketches. Cambridge: Canto, 1992. Print.

Schulz, Bruno. “An Essay for S. I. Witkiewicz.” In The Collected Works of Bruno Schulz. Ed. Jerzy Ficowski. 2nd edition. London: Picador, 1998. 367-70. Print.

Shepard, Paul. The Others: How Animals Made Us Human. Washington DC: Shearwater Books, 1996. Print.

Shklovsky, Victor. “Art as technique.” In Russian Formalist Criticism: Four Essays. Trans. Lee T. Lemon and Marion J. Reis. 2nd edition. Lincoln, NE: U of Nebraska P, 1965. 3-24. Print.

Simon, Herbert A. The Sciences of the Artificial. 3rd edition. Cambridge, MA: MIT, 2001. Print.

—— and Allen Newell. “Heuristic Problem Solving: The Next Advance in Operations Research.” Operations Research 6.1 (1958): 1-10. Print.

——. “Reply: Heuristic Problem Solving.” Operations Research 6.3 (1958): 449-50. Print.

Smith, Brian Cantwell. “The Limits of Correctness.” ACM SIGCAS, Computers and Society 14-15.1-4 (1985): 18-26. Print. Rpt. “The Limits of Correctness in Computers.” Computers, Ethics & Social Values. Ed. Deborah G. Johnson and Helen Nissenbaum. Englewood Cliffs, NJ: Prentice-Hall, 1995. 456-69. Print.

Taylor, Frederick Winslow. The Principles of Scientific Management. New York: Harper and Brothers, 1911. Print.

TLS. Freeing the Mind: Articles and Letters from The Times Literary Supplement during March-June, 1962. London: The Times Publishing Company, 1962. Print.

Trades Union Congress. Trade Unions and Automation. London: Trades Union Congress, 1956. Print.

Turing, Alan. “On Computable Numbers, with an Application to the Entscheidungsproblem.” Proceedings of the London Mathematical Society 42 (1936): 230-65. Print.

von Neumann, John. First Draft of a Report on the EDVAC. Contract W-670-ORD-4926. Philadelphia, PA: Moore School of Electrical Engineering, University of Pennsylvania, 1945. Web. 5 Jan. 2009. <www.virtualtravelog.net/entries/2003-08-TheFirstDraft.pdf>.

——. The Computer and the Brain. Silliman Memorial Lecture, Yale University. New Haven, CT: Yale UP, 1958. Print.

Winograd, Terry and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Boston: Addison-Wesley, 1986. Print.

Wittig, Susan. “The Computer and the Concept of Text.” Computers and the Humanities 11 (1978): 211-15. Print.

Young, Ione Dodson, ed. A Concordance to the Poetry of Byron. Vol. 1. Austin, TX: Pemberton, 1965. Print.



[*] The present essay is in effect a rough narrative toward a history of computing in the study of literature, from the end of the 1940s to the public release of the World Wide Web in 1991. This history is intended to place literary computing in the social, technical and intellectual contexts of the time and will be based primarily on the reflections of contemporary practitioners and of both intra- and extramural commentators. Its historiographical model is Mahoney 2002. Any such attempt may seem outrageously premature, but the greatness of the need for a genuinely historical view of humanities computing (i.e. beyond the chronological) and the perpetual newness of computing itself will, I hope, excuse the attempt. The author would greatly appreciate comments, criticisms and suggestions. My thanks to an anonymous reviewer for encouragement and a helpful pointer, to members of Humanist for numerous suggestions along the way, to Ray Siemens for the persistence and editorial care and, as always, to Michael S. Mahoney for guidance even now.

[2] “W dziele sztuki nie została jeszcze przerwana pępowina łącząca je z całością naszej problematyki, krąży tam jeszcze krew tajemnicy, końce naczyń uchodzą w noc otaczającą i wracają stamtąd pełne ciemnego fluidu.... Ale sztuka operuje w głębi przedmoralnej, w punkcie, gdzie wartość jest dopiero in statu nascendi.... Gdyby sztuka miała tylko potwierdzać, co skądinąd już zostało ustalone – byłaby niepotrzebna. Jej rolą jest być sondą zapuszczoną w bezimienne.” „Bruno Schulz do St. I. Witkiewicza” (1935), www.brunoschulz.org/1935.htm (30/12/08).

[3] In addition to Campbell-Kelly and Aspray 1996 and Ceruzzi 1998 see the sales brochure collection, “Selling the Computer Revolution,” Computer History Museum, www.computerhistory.org/ (5/1/09).

[4] See esp. the triumphalist predictions of Simon and Newell in early 1958 for the decade to follow (Simon and Newell 1958a and 1958b), which already seemed highly dubious by the time of Dreyfus’ critique (1965) and the infamous “black book” on machine translation, ALPAC 1966.

[5] The video has its own Wikipedia entry, “1984 (advertisement)”, at en.wikipedia.org/wiki/1984_(television_commercial) (8/3/09) and may be found at numerous locations online.

[6] The phrase was first used in Schenck v. United States, 249 U.S. 47 (1919), for which see supreme.justia.com/us/249/47/case.html (9/3/09). A representative sense of the threat as then perceived may be obtained e.g. from Harper's (www.harpers.org) from the mid 1960s onward.

[7] See note 3.

[8] At the time, acoustic modems ran at 300, then 1200, then 2400 bits per second, i.e. 1, 4 or 8 ten-thousandths of the download speed commonplace at the time of writing.
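Worked out, and assuming the roughly 3 Mbit/s download speed those fractions imply (my inference, not a figure given in the text):

```latex
\frac{300}{3\times 10^{6}} = 1\times 10^{-4}, \qquad
\frac{1200}{3\times 10^{6}} = 4\times 10^{-4}, \qquad
\frac{2400}{3\times 10^{6}} = 8\times 10^{-4},
```

i.e. 1, 4 and 8 ten-thousandths respectively.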

[9] For a definition of “batch” see foldoc.org (2/1/09); note the strenuous objections to batch-mode computing from early hackers, whose “Hands-On Imperative” is well documented in Levy 1984.

[10] Preliminary sampling of literature from the trades demonstrates considerable early concern, for example in the annual Reports of the (British) Trades Union Congress, www.unionhistory.info/reports/ (5/1/09); see esp. for 1955: 247-50; 1956: 351-61; 1957: 249-50 and the special report of 1956. For the humanities, expression of anxiety about computing takes many forms; a desultory sampling from the mid 1960s yields reassurance in Fogel 1964 (in the Proceedings of the IBM conference on “Literary Data Processing”) and in Pegues 1965, reporting on that conference, who reassures his reader that scholars will not be replaced; half-playful references, such as Nold 1975, to the “fear and trembling” with which colleagues approach the machine; serious historical treatments of “Gods in Black Boxes”, the title of Price 1967. Assuming that ignorance betokens anxiety, Lancashire 1983 gives considerable evidence of the measured calmness he applied to the common mixture of fears and misconceptions. A simple Google search for “fear computer” in these sophisticated times yields over 4 million hits, some of which are on target, including an entry for “fear of computer” at www.phobia-fear-release.com/fear-of-computer.html (5/1/09).

[11] I make a distinction between the “digital humanities”, by which I denote the activities in the disciplines across the humanities, and “humanities computing”, the singular practice which views and interacts with all of the former together.

[12] I owe the decanal view to Professor Wayne McKenna, Executive Dean of the College of Arts, University of Western Sydney, and a distinguished scholar in computational stylistics.

[13] The Humanities Support Group, consisting of John Bradley (now my colleague at King’s College London) and Lidio Presutti, is described in Lancashire, “Letter from Toronto” 251.

[15] That year Alonzo Church, Stephen Kleene, Alan Turing and Emil Post all separately proposed the mathematical idea from which computing later arose; see Gandy 1995/1994.

[16] See Simon and Newell 1958a and 1958b; cf. the related work of both, esp. Simon 2001/1996; Newell 1991.

[17] See esp. Engelbart 1962, his letter of 1962 to Bush, Bush’s “As We May Think” and other materials in Nyce and Kahn 1991; note the coincidence of the starting points for the arguments in Bush’s article and in the anonymous Introduction to TLS 1962: 4-9.

[18] Note Michael Mahoney’s point, that “even surprises wind up having a history, first because the new most often comes incrementally embedded in the old, and because in the face of the unprecedented we look for precedents” (2005: 120).

[19] Burks 1980: 341; cf. McCulloch 1970/1961; von Neumann 1958; and the comment by Robert Rosen on “[o]ne of the most remarkable confluences of ideas in modern scientific history” (1995: 485).

[20] One could say that at the outset the computer modelled the brain. However, the neurological ideas of McCulloch and Pitts were developed in the confluence of several disciplines, including electrical engineering, computer science, and mathematics as well as neurophysiology and psychology. Thus it would be better to say that machine and body, outside and inside, have been in uneasy correspondence for a long time.

[21] See esp. Hacking 1990; in natural language processing Manning and Schütze 1999; in computational stylistics Burrows 2009.

[22] Renear 1997 summarizes the development of OHCO from textual markup strategies; see also the panel discussion with McGann at www2.iath.virginia.edu/ach-allc.99/proceedings/hockey-renear2.html (5/1/09).

[23] Rosen 2000: 2, written before his death on 30 December 1998.


Authors

Willard McCarty (King's College London)

Licence

Creative Commons Attribution 4.0
