This paper, like many in this collection, explores roles for computing in the Humanities. The very act of blending a highly technical and often non-academic subject (computing) with an academic one has resulted in the emergence of the field of activity nowadays called the Digital Humanities (DH) or Humanities Computing (HC), which brings together people with technical skills and those of an academic orientation. Those who have followed the DH over the 60 years of its existence have seen many developments grow out of the generally fruitful collaboration between technologists and academics who have mastered various aspects of computing. The nature of this relationship, however, has sometimes been fraught, and vulnerable to “prejudices and misconceptions” that, if set aside, can “challenge the ego but open the mind” (Piez). This paper explores some aspects of this relationship from the perspective of the technologist rather than that of the academic.
All the papers in this volume, except mine, are written by those who come to Digital Humanities from academia, and view it from the perspective of the academic. This is natural, given that DH must remain connected to scholarly interests if it is to have any relevance at all. This paper, however, is different, since I do not have the training of a humanist, and my post at King’s College London (KCL) is neither a traditional academic one nor a conventional technical support position. In terms of this duality I am a technologist – in this case a developer: someone who develops resources of various kinds on computers – and I have worked for almost all my professional life in one way or another as a developer in the Digital Humanities. In this way my situation is unusual. However, my experience at KCL has shown that the combining of academic and technical expertise which characterises much of our work has produced valuable results that might not have been achieved otherwise. Thus, in this paper I will intertwine two issues: (i) I will propose a model of how computing can be applied to Humanities scholarship in ways that I believe can extend the established boundaries of the DH, and (ii) I will explain how my role as developer affects my contribution to the DH, and perhaps brings a useful new perspective to it.
Perhaps it would be useful to begin with a little personal history. My first experience of what we now call Digital Humanities came when I developed a KWIC concordance package called COGS at the University of Toronto for its IBM mainframe in 1977. Although I maintained an interest in applying computing to the Humanities throughout my time at the University of Toronto (1977-1997), my official involvement in DH came and went, since my work title or work unit at the University never contained the words “Digital Humanities.” Nonetheless, this did not prevent what was, from my perspective, a very fruitful relationship for many years (in particular up to the year of the first joint ACH/ALLC conference in 1989 at Toronto) with Ian Lancashire and members of Toronto’s Centre for Computing in the Humanities. During this time, my involvement in the DH culminated (in 1989-92) in the design and implementation (with my colleague, Lidio Presutti) of several versions of the text analysis software TACT. In the conference proceedings book The Dynamic Text (vi), Professor Lancashire described my work on TACT up to that point in the following terms: "[he] designed the system, wrote much of the code, supervised the project, and wrote the [then-available 171-page] documentation." Thanks to Ian Lancashire’s initiative, resources were made available to me that permitted the development of TACT to proceed. These extended beyond hardware and programming resources to include provision for access to interested academics – for which I am particularly grateful.
There followed a few years of hiatus during which my involvement in work of this kind was barely recognised by the University at all. Indeed, the vice president responsible for computing at the time evidently believed that the kind of work that TACT development represented was not an appropriate use of the human resources he controlled, and it was this, in part at least, that led to my separation from TACT development. However, with my move in 1997 to King’s College London and my placement in King’s remarkable Centre for Computing in the Humanities (CCH) – headed with great vision by Harold Short – I have been in a position to focus on applying computing to the Humanities.
At King’s College London my role remains primarily that of a developer, but here the relationship between me and our scholarly collaborators is perhaps clearer – it is centered on the development of the resources that emerge from close, long-term collaboration between CCH staff and Humanities researchers. The interaction between the technical and scholarly staff is extended and often detailed. The projects are defined in terms of Humanities-scholarly goals, and these always remain the priority. However, applying technology to these scholarly projects almost always involves situations in which ideas flow in both directions between the academic and technical teams. Insights gained from the richness of the materials and the scholarly depth of understanding that our academic partners bring to the development task have of course enlightened and informed our understanding as technical developers, but it is also true that our understanding of the potential significance of computing’s ways of modelling and representing data – grown out of contact over many years with many different projects – has significantly affected how the scholarly work was thought about, carried out, and presented.
My work on TACT shows something of the potential role of the developer in the Digital Humanities. For instance, I did not approach the task of designing it from the perspective of a personal scholarly user, because I was not a textual researcher. Nonetheless, TACT was not designed and developed in isolation from a Humanities community. In the earliest days of TACT’s design, most of the ideas of what TACT could do were based on experiences I had already gained in working with users of my first mainframe-based concordance-generating system COGS, and by reviewing the potential uses and some aspects of the design of the Oxford Concordance Program (OCP). Then, by examining the software and reading early writing by DH pioneers such as J.B.B. Smith (including his software Arras), Rosanne Potter, and Susan Hockey, I was able to recognise other ways that computing could support this kind of text analysis. By the time I had serious access to potential academic users at the University of Toronto and began to get their input into how the software should operate, much of it had already been designed.
If much of what TACT does came from the work of these pioneers, what, then, was my contribution as developer? TACT’s development began in the early days of personal computers, and with that development came the shift from batch to interactive computing. It is difficult for most of today’s computer users to understand the significance of this. The Oxford Concordance Program (OCP), for example, had already explored, in many sophisticated ways, the potential of machine-generated concordances, but its designers thought of this function as what is known in computing circles as a batch operation; that is, to use OCP you would create the text in one file, write instructions for OCP in another computer file, and submit both files to a queue of “jobs” that the mainframe would work through. After the passage of perhaps a few hours (as you waited for your job to get its turn) the mainframe would generate a result file containing the textual concordance, which you could then print or consult.
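The transformation at the heart of such a program is easy to sketch. The fragment below is my own toy illustration – not OCP’s actual algorithm, command language, or output format – of the keyword-in-context (KWIC) listing that a batch run would compute over the whole text at once:

```python
# A toy KWIC (keyword-in-context) generator: list every occurrence of a
# word together with a fixed window of surrounding words. Real batch
# concordancers such as OCP offered far richer selection and sorting.

def kwic(text, keyword, window=3):
    """Return one context line per occurrence of keyword in text."""
    words = text.lower().split()
    lines = []
    for i, w in enumerate(words):
        if w.strip('.,;:!?"') == keyword:
            left = words[max(0, i - window):i]
            right = words[i + 1:i + 1 + window]
            lines.append(" ".join(left + [words[i]] + right))
    return lines

# In a batch setting this whole computation ran once, offline, and the
# resulting listing was printed or consulted afterwards:
for line in kwic("The dynamic text, in essence, indexed "
                 "and concorded itself; the text became dynamic.", "text"):
    print(line)
```

The contrast with TACT is not in this computation itself, which is the same, but in when and how often it happens: interactively, on demand, rather than once per submitted job.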
Personal computing presented a radically different computing paradigm from the batch approach. In it one found a computing world in which the user had the potential of dynamically interacting with the material. Furthermore, it was widely believed in the computing industry at the time that this dynamic interaction would transform the nature of computing for most applications and allow the computer to enrich the user’s experience of the material they worked with. Indeed, the Macintosh, with its graphical interface and interactive-oriented applications such as a word processor, offered a radically different experience of computing from the world in which batch software such as OCP operated. Hence, TACT was designed from the ground up to support an interactive way of doing word-oriented textual analysis, and the mere fact that TACT’s Usebase needed to be interactive affected almost all the design thinking that went into it. For example, much went on inside TACT to ensure that the various displays were linked, so that the TACT user could rapidly switch between displays and not get lost. Many decisions about TACT’s design had to be made to support, as far as possible, the sense of immediate response to a user’s actions – and this on personal computers approximately 1000 times slower than those available today. The development of this sense of integration between the components, for the purpose of supporting user interaction to enrich the user’s experience of their text, was at the centre of much of my work on Usebase. Near the end of TACT’s development in 1994 the newer components (several of which had not been developed during the time of my involvement) diverged from this interactive orientation in the interest of extending TACT’s function set in various useful ways, but the user experience one sees in Usebase still best represents my view of what made TACT development interesting to me.
From my perspective, then, the time of my involvement in TACT represented the parallel development of two independent but linked agendas. For those academic users who wanted to do word-oriented text analysis, TACT provided a dynamic tool to support this work. Indeed, after the broad outline of TACT and its dynamic nature was sorted out, the feedback and suggestions provided by scholarly users (from Toronto’s CCH, along with other early adopters) contributed invaluable insights into how TACT’s overall design could be refined to make it more useful to its intended user community. My focus as developer, however, was always on how the interactive potential of personal computers could be usefully engaged to support this kind of activity. This joining together of two different agendas in one project is described well in Willard McCarty’s paper “Humanities computing: essential problems, experimental practice” – in which he mentions Peter Galison’s idea of the “Trading Zone”: a point of contact at which different communities interact, hopefully for mutual benefit. In this light, it is useful to recognise that the ideas in TACT also flowed in both directions – TACT was influenced by what others were doing in computer-assisted text analysis and by the experience of working with an interested user community. However, thinking about computer-assisted text analysis in the academic world was also affected by TACT and its contemporaries. The set of related ideas that arose in the minds of several in the Humanities from the experience of using this kind of software was at some point given the name of dynamic text.
This concept, of a text that “in essence, indexed and concorded itself” (Siemens para 9), seems to have been first proposed by Ian Lancashire at the time of the ACH/ALLC 1989 conference (Siemens references Lancashire’s 1989 work of that name, and the conference itself was named “The Dynamic Text” by Lancashire, who was its program chair and organiser). This phrase is, I think, a neat encapsulation of the significance of combining textual computation strategies such as automatic concordancing, which arose out of the batch computing world, with human–computer interaction.
From the number of references to TACT in subsequent Digital Humanities writings, it is clear that TACT has had some influence on the development of the discipline. However, it seems to me that its influence – indeed perhaps even the influence of the DH agenda as a whole – on broader Humanities scholarship in general has been very small. As developer I was in a position to build a tool like TACT, but I was, naturally enough, not in a position to promote its use as a way of working on scholarly textual issues inside the Humanities. If TACT was to affect the way scholarship was being done, this task needed to be done by academic users of TACT, who could show how TACT could usefully stimulate the development of new and useful research results. It now seems clear, however, that software from the DH world, including TACT, has had little or no significant impact on the current Humanities research mainstream. This is particularly striking when one realises that an ongoing goal of the DH has always been evangelical – from the very beginning DH practitioners, scholarly and technical, have been fired by a vision that the application of computing could radically affect many aspects of Humanities scholarship.
So why did TACT have so little impact, and why do many other interesting software developments from the DH community also seem to have had so little effect? In TACT’s case, some causes of its limited impact doubtless arose out of limitations or deficiencies in TACT’s design – at least some of which were introduced while trying to deal with the severe limitations of the early PC technology on which TACT was designed to operate. There are, however, perhaps even more important reasons, some of which I have discussed in “What you (fore)see is what you get: Thinking about usage paradigms for computer assisted text analysis”. The main contention of that paper was that different communities who use computing to support their research often bring different pre-existing mental models or paradigms of how the computer can be useful to support scholarship. These paradigms, which differ significantly between the Digital Humanities community and the scholarly community at large, define and perhaps limit the range of possible roles that each community can think of for computing.
One of the models I discussed that has grown out of the Digital Humanities community, and seems largely foreign to the broader Humanities community, is the automatic transformation paradigm – a view of a role for computing in the Humanities that has perhaps the longest history, since it is evidently the one applied by Digital Humanities pioneers almost 50 years ago. It emphasised the computer’s ability to perform certain kinds of automatic transformations on a text – ranging from word-oriented transformations such as the KWIC concordance (provided by a host of text analysis tools, including software as old as OCP, and by TACT) and the distribution display, to the markup-oriented transformations performed by XML-based tools such as XSLT. The automatic aspect of the transformation paradigm – that the computer does the transformation for the user – is always dependent, of course, on the computer being able to automatically recognise the material that it is to transform. Consequently, almost all transformation tools developed in the DH community have focused on items in a text that are straightforward for the machine to detect – words in alphabetic languages and markup in schemes such as XML.
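The distribution display mentioned here is equally mechanical to illustrate. The sketch below is again my own (not TACT’s actual implementation): divide a text into equal segments and count a word’s occurrences in each, giving a crude picture of where in the text the word clusters:

```python
# A toy distribution display: count a keyword's occurrences in each of
# n equal segments of a text. (Trailing words that do not fill a full
# segment are ignored in this crude sketch.)

def distribution(text, keyword, segments=4):
    words = [w.strip('.,;:!?"').lower() for w in text.split()]
    size = max(1, len(words) // segments)
    return [words[i:i + size].count(keyword)
            for i in range(0, size * segments, size)]
```

A chart of such counts lets a reader see at a glance whether a word is spread evenly through a text or concentrated in one part of it – and, as with the concordance, the computation depends entirely on the machine being able to detect the word automatically.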
As arguably powerful as this paradigm is, at least for certain kinds of textual work, it has in fact affected relatively few scholars. Commenting on the kinds of insights produced by the software that emerged from the automatic transformation paradigm, Ray Siemens says that the word-oriented nature of tools like TACT might produce a “new kind of critical reader,” but:
“That said, this type of critical reading is not obviously in keeping with current trends in critical and textual theory that place emphasis on reading in a historical context and also on what Peter Shillingsburg has recently referred to as the "event-ness" of the historical textual edition.”
Indeed, as far back as 1978, critics such as Susan Wittig (“The Computer and the Concept of Text”) were already observing that the critical underpinnings of the DH software available at the time (and still recognisable in much of DH software development) were based on the critical context of the New Criticism and were, already in 1978, not very relevant to the then-current critical world of semiotics. In fact, the problem with word-oriented software like TACT and its descendants is that it is based, indeed must be based, on an understanding of text that no longer forms the basis of current critical textual theory; furthermore, the issues that are currently of interest to literary critics do not draw on aspects of the text that the computer can automatically detect, even when extended by current developments in, say, computational linguistics or artificial intelligence.
Markup (the other basic DH paradigm of how computers can support scholarship) seems closer to being applicable to current critical concerns. It is scholar-driven, and the fact that the scholar can add markup that is of scholarly interest makes it plausible that more interesting results will begin to appear in time. However, so far, the practice seems to have had little impact except in some rather specific areas of scholarship such as the preparation of scholarly editions.
So, if Digital Humanities has not much affected scholarship, why are computers in the offices of most of those in academia, at least in Europe and North America? Indeed, most humanists acknowledge that they use computers, although they often don’t see them as affecting their research in any material way. They use the machine for word processing and, more recently, for email and web browsing. In “What you (fore)see” I noted that World Wide Web access is thought of by most humanists in the context of a conduit model: focusing on the web’s ability to support the scholar’s own endeavours by delivering useful materials almost instantaneously to the desktop. Indeed, as online archives continue to develop in technical and scholarly sophistication, the World Wide Web has the prospect of playing a larger and larger role in scholarly research, since scholars are likely to rely more and more on sources they see there as the basis for their work. However, from the perspective of the Digital Humanities, much of which has always been premised on the idea of the transformational role of the computer in Humanities scholarship, there seems to be a sting in this tail: once the resources have been delivered to the desktop with a web browser, the researcher is still likely to use them in more or less exactly the same way as print ones. The potential of the digital resource is hardly available, if at all, to the end user. This is, indeed, one of the major disappointments arising from the digital library developments of the past ten years identified by Brockmann et al in their study of the effect of digital libraries on scholarship for the American Council of Library and Information Resources, entitled Scholarly Work in the Humanities and the Evolving Information Environment.
This disparity between how those within the Digital Humanities and those scholars who are not in the community see the role of the computer is, in fact, a substantial one. From the perspective of this paper, this gulf perhaps explains why a kind of parallel gulf has also opened up between my interests as developer and those of many (although not all) in the DH community who continue to develop software tools to support word-oriented research of the kind that TACT supports.
Within Digital Humanities, the tradition of developing transformation-oriented and markup-oriented software (driven by those researchers whose interests are well-served by this approach) continues. To the extent that this work supports a viable and vibrant research agenda – as it does – this orientation is, of course, natural enough and even arguably desirable. However, as long as much DH development is driven in this way, it is also natural enough that the software that is produced will be tightly tied to this particular research agenda. Following the evangelical nature of the Digital Humanities mentioned earlier, many of these researchers have attempted to promote the benefits of this approach to others within the larger Humanities community. We have now worked this way for quite a few years, and it is generally acknowledged that the overall impact has been small. If I understand Ray Siemens’ comment above correctly, it seems to me that in fact the larger community sees the possible benefits of much of the established DH work when applied to their interests as minor or nonexistent.
Here, then, is the dividing point between DH as practiced by most DH-oriented academics today and my recent perspective as a developer. Many in the discipline – whether they practice under a transformation paradigm (and this includes most who are interested in word-oriented work of the kind that TACT enables) or a markup paradigm – find that what they do, and the tools that they build, might well further their own particular research agenda; but since this very agenda is not perceived by their colleagues as directly relevant to much other current Humanities scholarship, the software that emerges from it is seen as similarly irrelevant.
It would seem that the time has come to step back from these established transformation and markup traditions within DH, and look at what non-DH scholars actually do to see if there are potential roles for computing that can usefully assist them – roles that are different from those implicit in the transformation and markup paradigms. Perhaps it is time to be a developer without an agenda that is attached to either of the established DH models. If one wants to apply computers effectively to what scholars do, one needs to have an understanding of what it is that scholars do, and then determine how what they do can be furthered by the computer.
In the business world, when computing is introduced into a new aspect of an existing organisation, it is not unusual for the software developers to be brought in from outside the organisation itself. As outsiders, the computing professionals have the task of understanding what the organisation does in some detail and then using their expertise to propose ways in which the various abstract models of what computers do could be applied to the organisation’s task at hand. It is important to understand that being an outsider is not necessarily a serious disadvantage here, as long as there is a sufficient design phase involved, and access is provided within the organisation so that the developers can develop an independent understanding of it. Indeed, this outsider’s view is often considered beneficial, as the new eyes of the developer can sometimes see applications that have been missed by those inside the organisation.
It will doubtless seem presumptuous to the scholar in the Humanities that someone would propose this approach be applied to the act of scholarship. Computing analysis, when applied to an existing business organisation, normally has a clearly defined structure to work from: there is generally a formal system in place (documentation, training manuals) that can be used as the basis for understanding what is going on. Scholarship, on the other hand, is often thought of as an individual, idiosyncratic activity which is not suitable for systemisation or formal structuring. Although an outsider to scholarship myself, I have had a long association with scholars and at least some aspects of their activities, and it appears to me that although there are aspects of scholarship that are truly personal and internal, there is evidence that some aspects of the process of scholarship are widely shared, and that some of these could benefit from computing support.
First of all, there has been a history of computing research that has explored how to apply computing to support intellectual work. Some would argue that aspects of it began with Vannevar Bush’s insightful article in the July 1945 issue of the Atlantic Monthly magazine entitled “As We May Think,” in which Bush proposes a hypothetical (and pre-digital) machine to support research called the Memex. Furthermore, there is a traceable connection from the Memex through to the work of Douglas C. Engelbart, who founded and was head of the Augmentation Research Center at Stanford Research Institute (Menlo Park, California) from 1962 to 1968. Engelbart’s work resulted in several of what have become fundamental components of computing: the mouse and the Graphical User Interface (GUI). Indeed, a direct intellectual line can be traced from his work through to the first highly successful implementation of modern personal computing that we see in the Macintosh. However, Engelbart’s work started well before the Macintosh was thought of, and the first public viewing of his work (through a demonstration of their system, by then called NLS) happened at the Fall Joint Computer Conference in San Francisco in 1968: a hugely influential event that some technologists have since dubbed “the mother of all demos”.
The Engelbart approach is described in the conceptual framework he published in his “Augment” report in 1962. In this document he describes what he claimed was “... a new and systematic approach to improving the intellectual effectiveness of the individual human being” (iii).
From the very beginning of this report he explains how he thinks his approach can work to enhance human intellectual work:
By "augmenting human intellect" we mean increasing the capability of a man to approach a complex problem situation, to gain comprehension to suit his particular needs, and to derive solutions to problems. Increased capability in this report is taken to mean a mixture of the following: more-rapid comprehension, better comprehension, the possibility of gaining a useful degree of comprehension in a situation that previously was too complex, speedier solutions, better solutions, and the possibility of finding solutions to problems that before seemed insoluble.
We refer to a way of life in an integrated domain where hunches, cut-and-try, intangibles and the human "feel for the situation" usefully co-exist with powerful concepts, streamlined terminology and notation, sophisticated methods, and high-powered electronic aids. ... the first phase of [the] program [is] aimed at developing means to augment the human intellect. These "means" can include many things -- all of which appear to be but extensions of means developed and used in the past to help man apply his native sensory, mental, and motor capabilities... (1)
He defines a basis for this approach to tool design, given the name H-LAM/T: an acronym meaning “human augmented by language, artefacts and methodology in which he is trained” (11).
Engelbart goes on to model some aspects of what we might now recognise as a precursor to a word processor system – a function for computers that, as far as I am aware, was completely unthought-of at the time – and then says:
“even so apparently minor an advance could yield total changes to an individual's repertoire hierarchy that would represent a great increase in over-all effectiveness. Normally the necessary equipment would enter the market slowly, changes from the expected would be small, people would change their ways of doing things a little at a time, and only gradually would their accumulated changes create markets for more radical versions of the equipment.” (16)
Of course, the kind of intellectual work described by either Engelbart or Bush does not exactly correspond to what happens in Humanities scholarship. Engelbart’s engineering-oriented language, describing engineering problem-solving, probably sounds inappropriate when applied to the work of research in the Humanities. Nonetheless, two elements are significant: his recognition that the computer could support our existing human-oriented ways of doing things, and his insistence that the machine should support the process of intellectual engagement rather than merely the presentation of results. Perhaps Engelbart’s H-LAM/T approach can be applied to scholarship if we find the appropriate aspect of scholarly work.
Let us begin by looking at the task of scholarship in a very broad way, and assuming that the goal of much scholarship is centered on the task of interpretation. This involves taking a text or a set of texts and either developing an overarching view of some aspect of them, or fitting aspects of them into some pre-existing model or critical view. John Lavagnino touches on this issue in his article “Reading, Scholarship, and Hypertext Editions,” while discussing the role of reading in scholarship. He describes one role for reading in scholarship as
something that is rarely mentioned in any kind of literary scholarship: on reading as an involving process, not as interpretation or decoding. It is reading as an experience and not as mere collection of data: it can lead to interpretation, but only by way of generating reactions that we subsequently seek to describe or explain.
The role of reading as part of scholarly research is widely established; Brockmann et al likewise report it as one of scholars’ primary activities (Scholarly Work in the Humanities). At the other end, as it were, of the scholarly task is the act of writing – perhaps an article to appear in a scholarly journal. The purpose of the writing is to convey what the researcher has discovered to others. What happens in between? Both Lavagnino and Brockmann give us some hints. Lavagnino speaks of reading “generating reactions”. Brockmann quotes a scholar interviewed for their study who reports that, for texts he really needs to understand in depth, he writes:
“If … I really have to study, learn and absorb what’s in [something I’m reading], I make a photocopy and I write in the margins. And I underline, too. But I almost never underline without writing in the margin…Otherwise, I can find myself simply underlining, rather than absorbing”
Lavagnino suggests that the generation of reactions from reading is the first step towards the development of an interpretation. The act of noticing things in a text as we read it starts a process that potentially develops over time into a textual interpretation. Brockmann further suggests that a common activity for many scholars is to record these reactions in a note or annotation attached to the spot in the text where the reaction was generated.
There has been significant discussion about the role of annotation in reading and research in computing science – where the interest seems to have arisen as a result of the appearance of the tablet computer: a machine upon which the user can write virtually on digital materials with a pen-like device on the surface of the computer’s screen. It would appear that the first step of scholarship, reading, results in the generation of reactions. It makes some sense to think that the recording of these reactions as annotations might be a common scholarly activity and constitute a sort of second rung on the scholarly ladder, as it were. What are the subsequent rungs?
Although there has been some serious thinking about annotation within both the scholarly and computing worlds, there seems to have been rather less attempt to connect the activity of annotation to the broader tasks of scholarship. A few writers suggest that once the reactions have been noted there is the task of sifting through the notes to try to develop some sort of structure – let us call it a model – into which they fit. Clearly this will be most usefully done when there are a good number of reactions to deal with overall, particularly if one is developing an interpretive model for the first time rather than fitting new materials into a pre-existing system. Intermixed with the act of sorting through the reactions is more reading, and further generation of reactions that must, in the end, be fitted in as well.
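The process just described – reactions anchored to spots in texts, then sifted into a structure – can be given a minimal, hypothetical data model. This sketch is my own illustration; the field names and the mechanical grouping-by-theme are assumptions, not a description of any real annotation system:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Annotation:
    source: str       # the text being read
    location: int     # the anchor: e.g. a line or page number
    note: str         # the recorded reaction
    themes: list = field(default_factory=list)  # labels assigned while sifting

def sift(annotations):
    """Group annotations by theme: a mechanical stand-in for the far
    more interpretive act of sorting reactions into a model."""
    model = defaultdict(list)
    for ann in annotations:
        for theme in ann.themes:
            model[theme].append(ann)
    return dict(model)
```

Note that the resulting structure is keyed by theme rather than by position in any text: a first hint that the emerging model is organised as an overview rather than in the temporal order of the reading that produced it.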
The interpretation arises, of course, from the reading, but there are a couple of important relationships between the two that I’d like to note here. First, whereas reading is an activity that happens over time – and thus the generation of reactions that will eventually contribute to an interpretation can be ordered by time as well – the resulting interpretation is more like an overview, and does not reflect the temporal order of the reactions that contributed to it. The interpretation will contain key ideas and observations that developed out of a cross-section, as it were, of the reactions. It is more like a back-of-the-book index than a table of contents, and it is therefore not suited to presentation in the temporal order represented by the reading of the texts that initially motivated it – indeed, it usually cannot be expressed as markup layered on top of the set of texts from which it is derived. Furthermore, it is often not naturally represented as a one-dimensional object either, even if, in the end, it is presented in the linear order of an argument in an article or a book. The interpretive model in the head of the researcher who writes the article is evidently often not linear, and, indeed, some part of the difficulty of writing a scholarly article emerges from the challenge of casting this non-linear interpretation into the linear presentation required of an argument in a written document.
Several of the projects at KCL’s CCH work in ways that try to deal with some aspects of the interpretive model of the scholar. One such project is the Prosopography of Anglo-Saxon England (PASE), run as a collaborative project between CCH and Janet Nelson FBA (King's College London) and Simon Keynes FBA (Cambridge), with research staff consisting of David Pelteret (King’s College London), Francesca Tinti (Cambridge), and Alex Burghart (King’s College London). The goals of this project are ambitious: "to provide a comprehensive biographical register of recorded inhabitants of Anglo-Saxon England (c. 450-1066)" (from the project website). Behind this project is a relational database which contains collections of objects gathered from the textual sources read by PASE researchers. Since PASE is not a textual edition project, its structure is not textual but instead represents issues of interest to the prosopographer-researcher, containing entities such as Person, Event, Office, Source, etc. Material is added to the database as the sources are read, and each item added is linked to the spot in the source at which the researcher decided some suitable information was provided. In this sense it is “annotatory” – at bottom, all the material in PASE is linked, as if through a large set of annotations, to locations in the documents that have survived from the Anglo-Saxon period. On the other hand, these links lead not to textual annotations of the kind that one might write in the margins of a book, but into the large, interlinked structure of the database.
The database structure contains objects like persons or sources because these things are the elements of interest to the prosopographer. We have found it helpful to consider the database design as modelling some aspects of how the prosopographer thinks rather than what the source says. There is more discussion of the PASE database structure, and some views on the significance of our approach for the creation and use of prosopographies, in our article “Texts into databases.” A further discussion in the context of XML rather than database technologies appears in my article “Documents and data.”
The PASE researchers did not take the material in the database and interpret it into a series of articles on each individual. Instead, PASE is published digitally online, and PASE users are given mediated access to the database that has been created. As a result, the database material is presented directly to the PASE user, and stands on its own. Although there is no representation of the interpretation as modelled in the database (as one would expect to see in an article about a person in a traditional prosopography) – and, hence, no opportunity for the editors to inject into the materials, through the work of writing a summary article, a personal perspective on the individuals – there are some compensations. First of all, the computer knows about the structure of the material, and can therefore use any of the structural elements as a way into it. In a traditional prosopography one often finds the bulk of the work in the articles about people, ordered by each person’s name. If you know the name, you can find the person. However, if you wanted to consult the prosopography for other purposes (“find me all people who were described somewhere in the sources as being involved in a marriage”), you are out of luck unless a suitable index is included. In contrast, it is in the very nature of a database that you can use any element of its structure as an entry point to find any other related materials. For example, suppose you were interested in the role of individuals in a marriage ceremony, as described in Anglo-Saxon sources. The PASE structure has an object called an event, which is divided into categories, including a marriage event. Thus, the user can ask the database to find all people who are linked to the existing marriage events just as easily as asking it to find individuals by their names.
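This kind of any-entry-point querying can be sketched with a toy relational schema. All table, column, and person names below are invented for illustration and do not reflect PASE’s actual database design:

```python
import sqlite3

# A toy schema loosely inspired by the PASE entities discussed above;
# every name here is invented for illustration.
conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE person (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE event  (id INTEGER PRIMARY KEY, category TEXT);
    CREATE TABLE person_event (person_id INTEGER, event_id INTEGER);
""")
conn.executemany("INSERT INTO person VALUES (?, ?)",
                 [(1, "Eadgifu"), (2, "Eadred"), (3, "Wulfstan")])
conn.executemany("INSERT INTO event VALUES (?, ?)",
                 [(10, "marriage"), (11, "battle")])
conn.executemany("INSERT INTO person_event VALUES (?, ?)",
                 [(1, 10), (2, 10), (3, 11)])

# "Find me all people involved in a marriage": the event category, not
# the person's name, serves as the entry point into the material.
rows = conn.execute("""
    SELECT p.name FROM person p
    JOIN person_event pe ON pe.person_id = p.id
    JOIN event e ON e.id = pe.event_id
    WHERE e.category = 'marriage'
    ORDER BY p.name
""").fetchall()
print([name for (name,) in rows])  # → ['Eadgifu', 'Eadred']
```

The point of the sketch is structural: any table reachable through the joins can play the role that the alphabetical name-ordering plays in a printed prosopography.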
A central component of structuring in the PASE database is the authority list. The database design constantly encouraged the researchers who constructed it to categorise the materials they found in various ways; we have already mentioned that events were categorised into groups such as marriage, birth, death, battle, etc. Offices were given names, and all references to offices held by individuals were meant to be assigned one of the office types. The task of structuring materials by assigning categories has a philosophical history going back at least to Aristotle, and making the assertion that two things are, in some way, the same is identified by William James (in Chapter 12 of The Principles of Psychology) as “the very keel and backbone of our thinking.” He continues in a way that is relevant to this paper by recognising that the sense of sameness he is talking about is “from the point of view of the mind’s structure alone, and not from the point of view of the universe.” Later he says that “the mind makes continual use of the notion of sameness, and, if deprived of it, would have a different structure from what it has.” Many have taken up classification, ontology and taxonomy development, and other similar activities as one of the bases for the intellectual modelling of materials of study, and we have seen a growth of interest in some areas of the Digital Humanities in this aspect of computer modelling. See, for example, the CIDOC Conceptual Reference Model (Crofts), which has been developed by the International Council of Museums to formalise cultural heritage information into what is called, in the computing business, a "domain ontology." The TEI, a keystone of much DH thinking, has recognised this issue for many years, and contains a number of tags that support the representation of classification schemes.
Classification is sometimes a difficult task, and in doing it one must be careful not to do violence to the subtleties of the materials one is studying. Although most of the time the model we arrived at for recording the PASE materials worked well, the PASE researchers were very aware of places at which the complexities or ambiguities of the materials they were reading meant that it did not. The database contained note fields in which remarks could be recorded about things that fit badly. Even so, the introduction of the structure, and its failure from time to time to express the complexities of the material properly, did not make its formal development useless – indeed, it may help to clarify where the scholarly thickets in the materials are, and provide at least a starting point for seeing what makes them so difficult to deal with.
We have placed PASE in the context of an annotation-like project, and have asserted that the PASE database, since it has been designed specifically to suit the prosopographical task, provides a model of some part of how the prosopographer thinks about his/her task. It therefore suits the task of the project editors and associated researchers who wish to build PASE. What would happen if we opened it up to allow PASE users to add their own annotations as well?
In the same way as it was useful in PASE to think about the structure behind the textual annotations with which PASE begins, we would need to think about what structure would suit PASE’s user. Clearly, the needs of users would not be the same as those of the editors of the PASE database, because each individual user would approach PASE with his/her own goals and interests, rather than with the goal of producing PASE in the first place. So, what do we need to think about in providing an annotation tool for the PASE user? First, from PASE’s perspective, these personal annotations would usually be triggered by something that a user had found in the PASE materials. Presumably, anything in the presentation of the structure might conceivably generate a reaction, and therefore require an annotation. Thus, user annotations might well apply to different elements of the PASE structure. Furthermore, if traditional research models (based on written annotations) are any guide, each annotation would be considerably less formal than what is imposed by PASE’s structure, and might often take the form of a written comment with no evident formal structure. These personal annotations could, presumably, continue to accumulate even after the project materials were published. Finally, although we have here just begun to think about annotations to support the PASE user, in fact most users would want to annotate materials other than PASE alone. A digital mechanism to support user annotation would be far more useful if it could apply to any number of digital resources, and would therefore have to sit outside all of them.
The prosopographical structure we developed in concert with our historian colleagues was one that modelled their task quite well. For a user, however, the issues that are of interest, and the perspective taken on them, are likely to be very personal – indeed, as noted earlier in this paper, a personal, idiosyncratic view is often prized in the Humanities. Thus, although PASE’s scholarly work is in some sense annotative, the entire set of structures and categories that has been developed by and for the prosopography is likely to be inappropriate for the individual user’s work. As George Landow remarks, recalling Ted Nelson’s observation about classifications: a classification strategy is not necessarily bad, but different people need different ones (Landow 74).
PASE is an historical project which, although of course based on primary texts, involves the researchers reading them from an historical rather than a literary perspective. Do the considerations we have presented so far (a text base, annotations attached to these texts, structuring of the annotation material into a representation of an interpretation) still apply when we consider literary scholarship, and, where they do not, what has to change? We have already mentioned that the task of interpretation seems to be central to both historical and literary critical work, but it seems likely that there would be a number of differences between an interpretation made by an historian and one made by a textual critic.
In this paper, however, I intend to focus on the similarities more than the differences, since I think among the similarities are some aspects suitable for computer modelling. The most important similarity is that, ultimately at least, important aspects of a textual interpretation can be thought of in some sort of structural sense (this much, at least, must be true if the critic is planning to describe his/her interpretation in a traditional article, where an argument structure must be made evident), and that an important part of developing the interpretation is developing this structure.
Certain scholars who have thought about the issues of interpretation seem to me to have come to similar conclusions. See, for example, the remark by Susan Brown and Patricia Clements in their article, “Tag Team”:
We don't, of course, think that books are merely linear: the footnoted, annotated scholarly text is its own kind of hypertext. But we do believe, with Jerome McGann, that computers can help to dispel "the illusion that eventual relations are and must be continuous, and that facts and events are determinate and determinable" (McGann 1991: 197). We think that the computing tools we are using should help to make evident the patterns and meanings immanent in massed historical detail.
Some literary critics who have taken up an interest in hypertext have come to similar conclusions. Indeed, several implications of hypertext for scholarship have been extensively described by George Landow in his influential work Hypertext 2.0. Broadly speaking, Landow focuses on the implications of hypertext’s ability to connect bits of text from two different documents through the link. Associated with this is a sense that the link connects a chunk of text in one place to a chunk of text in another, and here he borrows terminology from Roland Barthes’ S/Z, applying the name lexia to these chunks, which Landow reports Barthes describing in S/Z as “blocks of signification of which reading grasps only the smooth surface” (64).
Although Landow thinks of hypertext primarily in the context of the authorship of hypertexts, he does sometimes, as in this part of his book, touch upon the role of the reader/user of a hypertext. Furthermore, he points out that something interesting happens when one blends the roles of reader and writer into “wreaders.” For an analyst of hypertext and scholarship it is natural to focus, as Landow does, on the principal hypertextual act of the reader: the creation of links. It seems that for Landow the links carry the burden of capturing the reader’s response. Landow seems to think of the text chunks which form the anchors for the hypertextual link, the lexia, as chunks in primarily pre-existing texts that the reader/scholar is in the process of working over. However, a significant amount of the material in Hypertext 2.0 also describes processes by which the reader writes his own text and links it to other pre-existing materials, and in the section “Hypertext, Scholarly Annotation, and the Electronic Scholarly Edition” (pp 69-73) he acknowledges a role for scholarly annotation and interprets it in the context of hypertext, although he immediately frames it in the context of collaboration between researchers.
As useful as Landow’s hypertextual paradigm is for drawing our attention to the fact that texts (or is it our interpretation of them?) exhibit, let us say, a conceptual structure which links between lexia draw out for us, it also misses what seems to me an important point, and the point becomes clearer when we look at the PASE project – one that is annotation-oriented in nature, but not in terms of the lexia model. In PASE one does not think of the links as being between different lexia per se, but as running from the text to a set of digital things – not, for example, from a reference to a person to a set of lexia about the person, but to the digital representation of aspects of the person him/herself. I believe that we get farther working with a structure that digitally represents some aspect of the interpretation that is developing, or has developed, in the mind of the scholar, rather than with a paradigm that focuses on the linking between lexia. In the end, the annotation, if it is to play an ongoing scholarly role, has to fit into a broader, perhaps more abstract, conceptual framework.
It is perhaps useful to think about the act of scholarship as a process, and to focus on developing tools and strategies that support that process. If an interpretation can be represented in some sort of structure, then the most important aspect of how the computer might support the development of that structure is to recognise that, during most of the time the machine is in use, the full structure has not yet emerged – indeed, it may never fully stabilise in the mind of the critic. The dynamic nature of texts (to me more a question of the dynamic interpretation of texts) is a favourite subject of critics like Jerome McGann. As he and Dino Buzzetti remark in their piece “Critical Editing in a Digital Horizon”: “The structure of [text’s] content very much depends on some act of interpretation by an interpreter, nor is its expression absolutely stable.” Even in the context of what is perhaps a more conservative scholarly activity, the production of a digital edition, the sense that one evolves a structure into which an interpretation can fit is also seen in John Lavagnino’s contribution to the same volume:
The appropriate scholarly tools for the early exploratory stages of a project may be pen and paper, or chalk and a large blackboard, or a word processor; some will find that the precision and formality required for TEI-encoded texts is not helpful at a stage when you may be entertaining many conflicting ideas about what sort of information will be in your edition and how it will be structured. (“Electronic Textual Editing”)
Here Lavagnino seems to be warning the scholar embarking on a large-scale scholarly markup activity that the “precision and formality” of a scheme like TEI – in spite of TEI’s many opportunities for use in individual scholarly activity – will, from the very beginning, impose a kind of structure on what you are doing before you are fully ready or able to take it on. Advice from within the TEI community about scholarly editing projects, focused on perhaps the fundamental application of TEI markup, is doubly relevant when the interpretative act is not of the kind usually involved in scholarly editing.
If the machine is to be helpful during the development of the interpretation, it needs to handle materials in a way that does not depend on a predefined structure into which they must fit. The user has, in the end, to evolve the structure. How can the machine assist with this task?
After the first two rungs of scholarship, reading and annotation, we can begin to see a pattern for the third rung – the task of taking the observations from the text and trying to organise them in ways that capture what one eventually wants to say about the materials – synthesis. The synthesis task not only allows you to find out how to say what you have already decided you want to say; it allows you to discover what it is that is to be said, as you summarise your materials and stand back from them to better consider what they might all mean. This is, perhaps, the essential but messy part of scholarship. There is evidence of how it is done in our colleagues’ offices, where papers that have been read are put in piles, with, wherever possible, each pile representing a related set of papers. There is a considerable amount of shuffling of materials to try to find a set of relationships between them that best represents the emerging interpretation. William S. Brockmann et al. mention, in Scholarly Work in the Humanities, one scholar’s attempt to type short notes into a word processor and then use its textual arrangement capabilities to help with this task – and they also acknowledge that a word processor is evidently not the most appropriate tool for this kind of work! They report that one of their interviewees remarked, “[Synthesis] happens in that space between the reading and the note taking and the writing, because it’s what precipitates the writing…” (25).
There are pieces of software from the Social Sciences, such as NVivo and Atlas.ti, that support this third rung of scholarship: the development of an interpretation of texts for the purposes of the social scientist. I have discussed this, and touched on some of the issues of scholarship as process, in “Finding a Middle Ground,” and the approaches described there have significantly affected my own thinking. A further model (described in that same article) for using technology to support scholarly work is the humble 4x5 card, with its classic strategies of sorting and laying out the cards as a way of promoting the gradual development of an overview of the materials contained on them, as a stage towards the development of an interpretation. In these examples, and perhaps others, the focus needs to shift from the model itself – what emerges from the interpretation – to a kind of meta-model: a model of how one develops and expresses models of the materials one is interested in.
In a meta-model we think less about what kind of things a model needs in order to represent the material of interest; instead the focus is on the kind of things we need in order to create models. There are two classic computing meta-models. One, represented by modelling environments as diverse as the relational database and software development tools such as UML, assumes that one defines classes of objects that are of interest in the model, and defines ways that the classes relate to each other. Relationships can vary widely: one important kind is the “isA” relationship, such as a “cat” is a kind of “pet.” Other relationships establish connections between different classes. If we have a class of animals and a class of foods, then we might want our model to assert a relationship between the two, such as “animals eat food.” Software built on top of the model then supports the introduction of instances of these classes of objects and relationships between them. PASE’s database is built upon this kind of approach.
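A minimal sketch of this class-oriented meta-model, using the “cat isA pet” and “animals eat food” relationships just described (the class and instance names are mine, chosen only for illustration):

```python
# Classes of objects, with "isA" relationships expressed as inheritance
# and an inter-class relationship ("animals eat food") as an association.
class Food:
    def __init__(self, name):
        self.name = name

class Animal:
    def __init__(self, name):
        self.name = name
        self.eats = []          # relationship: animals eat food

class Pet(Animal):              # isA: a pet is a kind of animal
    pass

class Cat(Pet):                 # isA: a cat is a kind of pet
    pass

# Software built on the model then works with instances of the classes
# and the relationships between them.
kibble = Food("kibble")
felix = Cat("Felix")
felix.eats.append(kibble)

assert isinstance(felix, Pet) and isinstance(felix, Animal)
print([food.name for food in felix.eats])  # → ['kibble']
```

The essential point is that everything the model can say must be expressed as an instance of a defined class or a defined relationship; the next paragraph’s markup meta-model relaxes exactly this constraint.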
The second meta-model is the markup model. The established markup paradigm for the Digital Humanities, represented most clearly in the TEI, takes an hierarchical approach to structure. Tagging is added on top of an existing text, and is nested into hierarchical groups. One interesting aspect of XML-based systems such as TEI is that there is a place for document text that is not identified with any particular tag – so-called “mixed content.” Unlike the first meta-model, based on collections of classes of objects, where everything has to be fitted into structures defined within the system, XML allows tagged materials in a text to be mixed with untagged materials. In a paragraph of text the tagging might identify references to persons, for example, but leave other textual materials unidentified. This aspect of XML makes it attractive for capturing interpretive material for texts: it allows the base material being tagged to remain, as it were, semi-structured, and does not impose the need to structure all the text.
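The mixed-content idea can be illustrated with a small, TEI-like fragment (simplified, and not valid TEI) in which tagged person references sit amid untagged running text:

```python
import xml.etree.ElementTree as ET

# A TEI-like fragment (simplified, not valid TEI): references to persons
# are tagged, while the surrounding running text carries no tags at all.
fragment = """<p>In that year <persName>Eadred</persName> succeeded
his brother <persName>Edmund</persName> as king.</p>"""

p = ET.fromstring(fragment)

# The tagged material can be extracted as structured data...
names = [el.text for el in p.iter("persName")]
print(names)  # → ['Eadred', 'Edmund']

# ...while the untagged "mixed content" remains in place around it.
print(p.text.strip())  # → 'In that year'
```

Nothing obliges the encoder to account for every word: the untagged text simply stays as it is, which is precisely what a class-oriented database cannot do.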
Although there are important differences between the two approaches, and these differences have triggered discussions over many years about which one is “better” within the Digital Humanities community, there is an important similarity. In both the class-oriented model (in which materials of interest must be fit into a set of classes) and the markup approach by which material is tagged using a defined set of tags, the process introduces a kind of clarity that might well be the aim of scholarship in the end, but certainly does not always characterise it during its development. In both models a class or tag can, of course, always be defined as “I don’t know what this is, but it is interesting.” The use of such a tag or class could be useful in that it preserves some material for future reference, when perhaps it will be clearer what should be done with it.
However, sometimes it is possible to express something about your materials without initially attaching a label to them. A label may, in the end, be useful and desirable for all the materials of interest in an interpretation, but during the process of developing the interpretation one may not immediately come to mind. It is here that the use of 2-dimensional space as a way of thinking about one’s material becomes relevant. One often hears of researchers who have recorded materials on 4x5 cards using a large flat surface to lay out the cards in various stacks, and have tried to position the stacks in ways that assist the development of an overall sense of the materials present. The pieces of software I mentioned earlier that support textual analysis for the Social Sciences (NVivo and Atlas.ti) also support a spatial metaphor for organising the materials that the social scientist discovers in textual materials. The “hyper-space” metaphor is often used (although, it seems to me, underexplored) in discussions about hypertext, and Landow describes the use of what he calls “concept maps” and “image maps” in Hypertext 2.0, where related ideas are presented to the user laid out in a 2D space (137-144). The spatial layout is one of the ways of looking at your material that allows you to stand back from the details and perhaps begin to see overall structure. It is, in this sense, compatible with the “overview” nature of interpretation.
The use of space for this kind of organisation – to allow one to stand back from the materials and see them as a whole – is not Cartesian in nature. The exact (x,y) co-ordinate of each item placed on the surface – each pile of 4x5 cards, or each item in Landow’s concept map – is not important. What is important is the spatial relationship between items. Things placed close together might be more closely related than those placed further apart. Notice, as well, that the positioning of the objects says something about them without imposing either a naming model (such as that needed in the class-oriented, ontological approach described above) or the hierarchical ordering characteristic of markup modelling. This is not to say that the model works in opposition to a conceptual-naming approach – indeed, attaching names to important related concepts is often a significant and desirable outcome. With the 2D layout model, however, material can begin to be structured before a label for the concepts it might contain has become evident.
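A tiny sketch of why only relative position matters in such a layout: shifting every item by the same offset changes all the co-ordinates but none of the relationships. The note names and positions below are invented for illustration:

```python
from math import dist

# Items laid out on a 2-D surface; only relative proximity is meaningful.
notes = {
    "marriage charter": (1.0, 1.0),
    "dowry record":     (1.5, 1.2),
    "battle annal":     (8.0, 7.5),
}

def nearest(layout, name):
    """Return the other item placed closest to the named one."""
    here = layout[name]
    others = [(n, dist(here, pos)) for n, pos in layout.items() if n != name]
    return min(others, key=lambda pair: pair[1])[0]

# Translating the whole layout alters every (x, y) but leaves
# "what is near what" – the only meaningful information – unchanged.
shifted = {n: (x + 100, y - 50) for n, (x, y) in notes.items()}
assert nearest(notes, "marriage charter") == nearest(shifted, "marriage charter")
print(nearest(notes, "marriage charter"))  # → 'dowry record'
```

The layout thus encodes a relation between items without demanding that the relation be named, which is exactly the deferral of labelling the paragraph above describes.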
Readers familiar with my current work will doubtless recognise in the above some of the basis for the development of my prototype software Pliny – a piece of software designed specifically to explore the issues that arise when trying to use the computer to support the tasks of Humanities scholarship. Pliny itself is available from its project website (http://pliny.cch.kcl.ac.uk/) and has recently been described in some detail in Bradley, “Pliny: A model.” As a prototype, Pliny is meant more to promote a discussion about a particular role for computing in scholarship than to be a completed software application. I have, then, tried to make it fit into the world of discussion and argument that characterises traditional scholarly publishing. However, as a created artefact, it does not, and indeed cannot, present itself in the traditional form of scholarly argument as set out in a piece of writing. Pliny is open-source and is, in a sense, fully open to scrutiny by anyone able to read the programming language in which it is expressed. However, although a critical engagement with the code would result in a kind of critique focused on the code-level of the software, it does not naturally translate into a critical engagement with what Pliny tries to do and how well it does it – the discussion that I had hoped Pliny would promote.
How, then, can a piece of software like Pliny participate in discussions that arise from the Digital Humanities? It shares some characteristics with other artefact-based ventures such as the creation of an artwork or the preparation of a performance of a piece of music or a piece of theatre. Like an artwork, it requires critical engagement to be effective – particularly that of others interested in the DH. These critics can engage with what Pliny is trying to do and assess its significance in its own terms. Admittedly, these are still early days, but so far, frankly, Pliny has failed to seriously engage others in a discussion about the issues it attempts to address. Are they the wrong issues? Perhaps, although the positive response I get from others when I speak about Pliny suggests that it is not entirely off the mark. The difficulty arises when one tries to get any kind of actual sustained engagement about what Pliny represents by getting those involved in the Humanities generally, or even the Digital Humanities more specifically, to spend the time to explore Pliny itself so that they can discover what it does right, and perhaps more importantly what it misses – to actually explore it by using it and then responding to it! Critical engagement with a piece of theatre or an artwork also must begin by responding to the piece itself – by engaging with it. However, this can only be done after there has been an extensive degree of critical preparation. Although seeing a play seemingly involves sitting in the theatre for only a few hours, an interesting critical response can only emerge when that particular experience is weighed against many other experiences in the theatre. Engagement with Pliny is more difficult, then, for two reasons – there is little or no cultural context into which a response to Pliny can be situated, and even if one did exist, perhaps an interesting response to Pliny cannot emerge from only a couple of hours of experience with it.
In the end, perhaps my aim of promoting a kind of extended critical engagement between the humanist and developer in the way that Pliny represents cannot be realised – particularly when the discussion needs to be outside the box of any particular established Humanities discipline, and not so much driven by its particular current concerns.
The scholar cannot expect to be the master of all the technologies required to do significant software tool development, or even, I maintain, to be aware of all the potential that technology might bring to the work of scholarship. This means that the scholar who is doing innovative work, such as developing a new scholarly tool, needs to work with the technologist/developer. If the innovative vision came purely from the mind of the scholar, then the technologist would merely be providing a service to help the scholar realise that vision. However, our experience at CCH has shown us that the innovation in fact flows in both directions. The interaction has more the aspect of work between different but somewhat equal collaborators.
The developer alone cannot expect to understand the tasks of scholarship well enough to know how best, in detail, tools should be built to support those tasks as effectively as possible. Ideally, then, the partnership is essential, and in some cases must be based on some sense of equality of contribution.
Wendell Piez, in Humanist (Vol. 18, No. 760), touches upon some of these issues:
“the Developer role is a different thing from the role of the scholar, making its own demands and constituting a very special kind of contribution.”
“HC will have to adapt itself increasingly to more of a collaborative model. That there can be a very effective and powerful "core collaboration" between a scholar and another person -- call it Lead Developer if you like -- has actually been recognized for some time within HC.”
“The danger of this kind of arrangement goes a bit beyond ordinary collaborations, because the skills required to do the development work (including but not limited to ‘programming’ [...]) are far afield and remote from what Humanists are commonly called on to do.”
“Such a collaboration works well when prejudices and misconceptions about the unknown are set aside, challenging the ego but opening the mind ... another reason those of us who have been ‘bit by the bug’ of such work like it so much.”
Within the institutional framework of academia, then, Humanities scholarship has been framed as an activity for solitary scholars, and the system of recognition and rewards attached to scholarship has been designed institutionally around this view. The Digital Humanities, from its earliest beginnings to today, shows that we actually need a model of scholarship based on the varied talents of various specialists. CCH provides a model of an operation that recognises the benefit of collaboration between scholars and technical experts: when both groups work in ways that allow the insights of the other to act on the project with a degree of equality, they produce the richest and best results. The role of the technical specialist at CCH is not simply to provide a service that implements the ideas of the scholar. Instead, ideas flow in both directions: from discipline specialist to technical specialist, certainly, but also from technical specialist to discipline specialist. The resulting product combines intellectual contributions from both groups.
If we need specialists from these various fields to work together, we must properly recognise their joint role in this kind of academic activity. If we can get the rewards and incentives to recognise this essential fact more fully, we can further develop the richness and sophistication of scholarship based on the Digital Humanities and its methods. In addition, and possibly more importantly, we can do this in a context that, by properly recognising the variety of intellectual input that these kinds of projects require, reduces some of the pains that the lone-scholar model produces.
Bradley, John. “Finding a Middle Ground between ‘Determinism’ and ‘Aesthetic Indeterminacy’: a Model for Text Analysis Tools.” Literary and Linguistic Computing 18.2 (2003): 185-207. Print.
——. “Documents and Data: Modelling Materials for Humanities Research in XML and Relational Databases.” Literary and Linguistic Computing 20.1 (2005): 133-151. Print.
——. “What you (fore)see is what you get: Thinking about usage paradigms for computer assisted text analysis.” Text Technology 14.2 (2005): 1-19. Web. Sept 2006 <http://texttechnology.mcmaster.ca/pdf/vol14_2/bradley14-2.pdf>.
——. “Pliny: A model for digital support of scholarship.” Journal of Digital Information (JoDI) 9.26 (2008). Web. <http://journals.tdl.org/jodi/article/view/209/198>.
—— and Harold Short. "Texts into databases: the Evolving Field of New-style Prosopography." Literary and Linguistic Computing 20.Suppl 1 (2005): 3-24. Print.
—— and Paul Vetch. “Supporting annotation as a scholarly tool: experiences from the Online Chopin Variorum Edition.” Literary and Linguistic Computing 22.2 (2007): 225-42. Print.
Brockman, William S., Laura Neumann, Carole L. Palmer, and Tonyia J. Tidline. Scholarly Work in the Humanities and the Evolving Information Environment. Council on Library and Information Resources, 2001. Print.
Brown, Susan, and Patricia Clements. “Tag Team: Computing, Collaborators, and the History of Women’s Writing in the British Isles.” Technologising the Humanities / Humanitising the Technologies. Ed. R.G. Siemens and W. Winder. CHWP, April 1998. Print. Jointly published with TEXT Technology 8.1 (1998); Web. <http://www.chass.utoronto.ca/epc/chwp/orlando/index.html>.
Bush, Vannevar. “As we may think.” Atlantic Monthly July 1945. Web. Sept 2005 <http://www.theatlantic.com/doc/194507/bush>.
Buzzetti, Dino, and Jerome McGann. “Critical Editing in a Digital Horizon.” Electronic Textual Editing. Ed. John Unsworth, Katherine O’Brien O’Keeffe, and Lou Burnard. Forthcoming. Preprint; Web. Oct 2005 <http://www.tei-c.org.uk/Activities/ETE/Preview/index.xml>.
Crofts, Nick, Martin Doerr, Tony Gill, Stephen Stead, and Matthew Stiff, eds. Definition of the CIDOC object-oriented Conceptual Reference Model. November 2003 (version 3.4.9). Web. 15 June 2004 <http://cidoc.ics.forth.gr/docs/cidoc_crm_version_3.4.9.pdf>.
Engelbart, Douglas C. Augmenting Human Intellect: A Conceptual Framework. Menlo Park, CA: Stanford Research Institute Report, 1962. Web. Nov 2006 <http://www.bootstrap.org/augdocs/friedewald030402/augmentinghumanintellect/AHI62.pdf>.
Faulhaber, Charles B. "Textual Criticism in the 21st Century." Romance Philology 45 (1991): 123-148. Print.
James, William. The Principles of Psychology, Vol. 1. New York: Dover, 1890. Print.
Lancashire, D. Ian, ed. The Dynamic Text: Conference Guide. Toronto: Centre for Computing in the Humanities, 1989. Print.
——. "Working with Texts." Paper delivered at the IBM Academic Computing Conference, Anaheim, 23 June 1989. Noted in Faulhaber (128, 135). Print.
Landow, George P. Hypertext 2.0. Baltimore: Johns Hopkins UP, 1997. Print.
Lavagnino, John. “Reading, Scholarship, and Hypertext Editions.” The Journal of Electronic Publishing 3.1 (1997). Print.
——. “Electronic Textual Editing: When not to use TEI.” Electronic Textual Editing. Ed. John Unsworth, Katherine O’Brien O’Keeffe, and Lou Burnard. Forthcoming. Web. Oct 2005 <http://www.tei-c.org.uk/Activities/ETE/Preview/index.xml>.
McCarty, Willard. Humanities computing: essential problems, experimental practice. Stanford University and the University of Georgia, April 2000. A preliminary version of a paper by that title published in Literary and Linguistic Computing 17.1 (2002): 103-25. Web. <http://www.kcl.ac.uk/humanities/cch/wlm/essays/stanford/>.
——. “‘Knowing true things by what their mockeries be’: Modelling in the Humanities.” CHWP A.24, September 2003. Web. <http://www.chass.utoronto.ca/epc/chwp/CHC2003/McCarty2.htm>.
McGann, Jerome. "History, Herstory, Theirstory, Ourstory." Theoretical Issues in Literary History. Ed. David Perkins. Cambridge: Harvard UP, 1991. Print.
Prosopography of Anglo-Saxon England. Co-directors Janet Nelson and Simon Keynes. Web. <http://www.pase.ac.uk>.
Piez, Wendell. “beyond being dubious and gloomy.” Online posting. 3 May 2005. Humanist 18.760. Web.
Pliny: A note manager. Web. <http://pliny.cch.kcl.ac.uk/>.
Siemens, Ray G. "Disparate Structures, Electronic and Otherwise: Conceptions of Textual Organisation in the Electronic Medium, with Reference to Electronic Editions of Shakespeare and the Internet." Early Modern Literary Studies 3.3 / Special Issue 2 (1998): 6.1-29. Web. <http://purl.oclc.org/emls/03-3/siemshak.html>.
Wittig, S. “The Computer and the Concept of Text.” Computers and the Humanities 11 (1978): 211-215. Print.