CARAT–Computer-Assisted Reading and Analysis of Texts: The Appropriation of a Technology

Keywords

digital humanities, computer, reading, text analysis, information technology, philosophy / humanités numériques, ordinateur, lecture, analyse de texte, technologie d'information, philosophie

How to Cite

Meunier, J.-G. (2009). CARAT–Computer-Assisted Reading and Analysis of Texts: The Appropriation of a Technology. Digital Studies/le Champ Numérique, 1(3). DOI: http://doi.org/10.16995/dscn.263


Introduction

For many of us, it has become obvious that our higher intellectual and academic activities are, to use a fashionable expression, “embedded in computers.” Increasingly, we read, write, find, and send texts with and through computers. But we maintain a kind of love-hate relationship with digitality. Sometimes our creativity feeds upon its powers; other times our despair breeds on its superficiality. One day, we are under its spell; the next, we dismiss it as nothing more than a gadget. Such intimacy with computers obscures our understanding of their dynamics. This is even more the case with the applications we as digital humanists are most familiar with: computer-assisted reading and analysis of texts, or CARAT for short (Meunier et al. “A Model”).

Although the title of this paper might suggest it, my aim is not to provide a journalistic or historical narration of my personal experience in this subfield, nor of its development over the last 40 years. Historians would do a better job at that. My intention here is to explore some of the regions of this specific subfield of computer technology. Specifically, I would like to offer a conceptual schema that may help us to better understand this CARAT technology, which originated with the ingenuity of its incubators, but whose development rests upon the thoughtful vision of its current and future users.

My personal reflections emerge out of a serious difficulty encountered in this subfield that is well expressed by many, but most forcefully by Geoffrey Rockwell: “Computing, in the humanities, has been plagued by resistance.”[1] Susan Hockey and John Unsworth, among others, summarize the basic arguments behind this resistance:

1) The computer cannot replace human expert interpretation in reading and analyzing texts.

2) Available functionalities remain too elementary.

3) The ergonomics of these reading and analysis tasks have not yet been well designed.

In other words, the results obtained so far are superficial and uninteresting for an expert reader and analyst of text. These arguments are valid, and I take them as inspiration for my own reflections toward a better understanding of CARAT, its internal rationality, and its uses. Moreover, I hope these reflections may lead to better-adapted designs for CARAT and so help diminish resistance to it.

So, what is CARAT essentially? To answer this question, let us first remind ourselves that CARAT, whatever its varieties, is above all a type of technology. It is, as its name indicates, a digital technology, which really means that it is an information processing technology. But CARAT puts its own signature on this technology. Since we know it is distinct from artificial intelligence technologies, communication technologies, and even from information retrieval technologies, our question now becomes: what is this digital information technology as it is used in the realm of the humanities and the social sciences, and what is its specific rationale — that is, its own set of inherent arguments and norms?


1.0 A three-level architecture for digital technology

Extracting an explanatory architecture for any technology is not an easy task, and even less so when the technology is a digital information-processing technology. In keeping with good philological practice, a look at the Greek origin of the word technology may help us in our reflection. According to its Greek origin, technology is a technè, which is a form of poiesis: a human creative action with a purpose and a meaning, but one that is realized through physical means (physis). In this sense, a technology is more than an instrument or a tool. In a particular technology, the real action is not located in the tool itself, but in what the user does or realizes in the world with the tool. This points to an important dimension of technology: it is a mediation or a medium by which we interact with the world. This has been underlined by Martin Heidegger and repeated often in computer science by Terry Winograd and Fernando Flores and in the field of digital humanities by Willard McCarty in terms of manipulative action.

This understanding of a technology can be translated into modern cognitive terms: a technology extends the various cognitive actions by which an agent interacts with its world. To illustrate, let us take the example of a camera. We can describe this technology solely in terms of the interaction a user has with it. But this is a restrictive description. We could describe it in a much richer way if we were to say that it is a means by which a photographer interacts visually with a given world, person, or object. In saying this, we change the focus of analysis from the interaction between a human and his tool to the interaction between a human and his world.

This shift of focus allows for a three-part explanation. First, we can explain the technology in terms of the cognitive tasks the photographer has to realize in order to capture a scene. Second, we can explain it in terms of the various functionalities the camera must possess in order to accomplish the desired outcome. Finally, we can explain the camera in terms of the physical structure (analogue or digital) it must have so as to afford these cognitive tasks and functions. In other words, we can understand a technology on three levels[2] of explanation: 1) the cognitive level, 2) the functional level, and 3) the physical level.

These three levels or layers are parallel to the conceptualisations that Daniel Dennett, David Marr, Zenon Pylyshyn, Allen Newell, and Robert M. Harnish have each proposed to explain or describe the architecture of an “intelligent” digital technology, although in a different context.[3] In explaining a technology, these levels have to be distinguished because each one identifies different types of regularities, invariants, rules or laws. Although these three levels present their own theoretical difficulties when applied in other domains (such as a computational theory of mind), they are, nonetheless, heuristic in understanding a technology such as CARAT, which is but an instance of a digital technology.


2.0 The cognitive level

The first level of explanation approaches the technology in terms of the cognitive operations a user must realize with it. These operations may be of different sorts. Some may be perceptual, others representational; some may rest upon reasoning, others upon background experiences. Our camera example can illustrate this. Certain camera functions could be described in technical terms, such as: “for speed, press R2”, “for light intensity, press R3 & R5.” But such statements would ultimately be useless for human users if they did not correspond to the user’s understanding of the concepts of speed and light intensity for taking a photo.

CARAT must also be explained in this respect. It is a technology that pertains to a specific set of human cognitive operations such as reading, analysing, understanding or interpreting texts. These operations are all unique to human beings: no animal accomplishes these actions. These operations are numerous and complex, and we only have a faint idea of what they are; this remains an active field of research in psychology, philosophy, literature, and linguistics. But one thing that all these operations have in common is that they are applied to a special type of object called “texts” whose constituents are symbols, and where each symbol has the essential property of “standing for” something else in some orderly fashion. In classical terms these textual objects and their symbolic constituents are representational, intentional, or meaningful objects: that is, they are structured semiotic objects. In more contemporary terms, they are informational objects.

CARAT is a technology that aims at assisting these operations, if not mirroring them. It is therefore, in Allen Newell and Herbert Simon’s terms, a system that “manipulates symbols” (Newell and Simon). This is precisely why it is said to be an “information processing technology.” In contrast to robots, which manipulate causally produced signals, a CARAT technology manipulates pure symbols. In this, CARAT is a perfect example of John Searle’s Chinese Room: it takes as input a text made out of symbols, manipulates them as such without any understanding or interpretation, and delivers another set of symbols as output. In certain cases, some of these symbols have been reorganized, reduced, or added from other sources. For instance, concordances and XML encoding are symbol-manipulation systems. These operations take texts made out of symbols as input and deliver some other set of symbols as output, some of which originate from other texts. In this sense CARAT is essentially a symbol-manipulation machine.
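To make this concrete, here is a minimal sketch in Python, with an invented rule table and sample text, of what such an externally rule-driven symbol manipulator looks like: the program rewrites input symbols according to rules supplied from outside, and at no point consults their meaning.

```python
# A minimal sketch of CARAT as a rule-driven symbol manipulator.
# The rules are given from outside; the program never "interprets" them.

# Externally supplied rewrite rules (hypothetical example), in the
# manner of Searle's Chinese Room: symbol in, symbol out.
RULES = {
    "liberte": "LIBERTY",
    "egalite": "EQUALITY",
}

def manipulate(symbols, rules):
    """Apply the rule table to each symbol; pass unknown symbols through.
    No meaning is consulted: this is pure symbol manipulation."""
    return [rules.get(s, s) for s in symbols]

text = "liberte egalite fraternite".split()
print(manipulate(text, RULES))
# ['LIBERTY', 'EQUALITY', 'fraternite'] -- the interpretation of both
# input and output remains entirely with the human user.
```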

This description of CARAT as a symbol-manipulation technology entails two important principles. First, CARAT manipulates symbols according to rules. These rules are not generated by the technology itself but are given to it, if not imposed on it. Secondly, the technology does not interpret the symbols. Only the person who offers them as input into the CARAT machine or picks them up as output can interpret them. What this ultimately means is that CARAT technology is an externally rule-driven technology. It is not an interpretation technology by itself. The rules and the interpretation belong to or originate in the humans who use the technology as an extension of their own cognitive operations. In this sense, it is not a hermeneutical machine as such, for linguistic and textual interpretation are uniquely human competences. The machine merely manipulates the symbols to assist human interpretation and to open up new cognitive horizons. Moreover, if the human user is an expert, he will employ the technology to manipulate symbols in such a way that he can control the rules of interpretation.

This explanation of the cognitive level allows us now to refine our understanding of the two embedded sub-technologies of CARAT: reading and analysis.


2.1 The cognitive level of the reading technology

One often forgets that, at base, text analysis is a reading activity. If we are to understand CARAT technology, we must also be more specific regarding what it is to read the symbols of a text, and specifically, to read them as an expert. In its basic meaning, reading is simply the parsing of a sequence of alphanumerical symbols on a sheet of paper. Digital scanners or optical character recognition systems read text in this sense. But real reading refers to the cognitive operations that start with symbol parsing but ultimately end with an understanding of the content of a text.

Expert reading involves complex cognitive processes. A casual reader may breeze through Shakespeare or Durkheim in a comfortable chair by a fireplace, sipping cognac, but an expert reader who reads these books will do so in a much different manner. An expert constructs representations of her understanding of a text and applies complex reasoning to them. And she will add many other cognitive operations. For instance, she may have to negotiate the various linguistic dimensions of the text while at the same time relating her reading to her general and professional background knowledge. She may in certain contexts supplement her reading by adding technical comments or criticisms, or even by producing a paraphrase of some passages or a summary. In another situation she might relate the text to some other repository of expert commentators, or she may associate a particular term with an external source: a dictionary, a thesaurus, or an encyclopaedia. Hence, reading explodes into multi-cognitive operations that include memorizing, organizing, comparing, structuring, synthesizing, associating, abstracting, generalizing, and deducing ideas, themes, topics, concepts, theses, and so on. Paradoxically, this expert reading may even involve writing, specifically creative writing, by recording immediate thoughts, ideas, and intuitions about the content of a text. In other words, when we look closely at them, the cognitive operations of an expert reader fan into a diversity of granular tasks and itineraries (combinations of basic tasks) where each one is often the result of internalized disciplinary habits and customs.

A reading technology must assist these various cognitive operations. Unfortunately, CARAT technology has not always paid much attention to the many cognitive operations embedded in expert reading. Jacques Virbel, Bernard Stiegler, and others were among the first to remind us of the complexity of the cognitive tasks underlying electronic text reading specifically. Christian Vandendorpe insisted on the implicit cognitive operations underlying the reading of a codex, and Herre Van Oostendorp and Sjaak de Mul underlined the numerous cognitive aspects of electronic texts. Jerome McGann reformulated this in terms of the revolution that the electronic document has imposed on our reading and analysis. Some social science computing technologies have been slightly more sensitive to this cognitive dimension of CARAT: for instance, qualitative analysis (Glaser and Strauss), with computer applications such as NVivo (formerly NUD*IST), Atlas.ti, and QDA Miner (Barry; Gibbs; Lewis and Maas). Today, Lexist is one of the rare tools that directly assist expert reading of e-texts. Unfortunately, its scope is limited to ancient texts. More recent projects in the USA and Canada, such as WordHoard, MONK, and TAPoR, offer more integrated environments for scholarly reading of electronic text. Pliny is another recent technology specially dedicated to reading assistance (Bradley). More and more, formal content-annotation technologies aim to assist some aspect of expert reading (Cieri and Bird; Calhoun et al.).

In the future, if technology is really to assist the process of expert reading, a better understanding of these operations is required. This will allow the development of applications that correspond to the reading operations of expert readers. And with the growing wave (if not tsunami) of electronic books, CARAT will have to develop a finer capacity to assist this expert reading of text.


2.2 The cognitive level of text analysis technology

This first, cognitive level has an impact on the second sub-technology of CARAT, that is, text analysis. Text analysis is not in itself the same as text reading: it rests upon different and specific cognitive operations. These operations essentially pertain to some technical apprehension of the meaning of a text. They decompose and recompose this meaning mostly by deduction and abduction from the text. And this is realized through some descriptive and normative procedures learned and practiced in a variety of scholarly disciplines. This analysis is often at the heart of what one calls an interpretative methodology, and CARAT must be faithful to this methodology.

In many CARAT projects, users have understood analysis in terms of parsing tools that are in some cases linguistically oriented (lemmatization, morphology, syntax and semantic categorization), and in other cases numerically oriented (statistics, classification, etc.). These techniques give users a powerful new perspective on a text. Indeed, these techniques unconsciously impose many elements of the computer paradigm onto humanities and social science methodologies: that is, they force the interpretative methodologies to fit into various formal computational processes. Although in the past these techniques were rigorous and heuristic, the results obtained were not very convincing for expert interpreters of text. The analysis produced did not correspond closely to the complex interpretive process that experts employ. For many, this was disappointing, and it had the effect of halting the appropriation[4] of the technology.

It seems to me that if the analysis strategies of CARAT technology are to have a future, they must mirror the real cognitive operations experts employ in analyzing texts as much as possible. These operations are various and complex, as when philosophers conduct conceptual analysis; when literary scholars critically deconstruct narration, style, discourse, or rhetoric; when ethnologists and sociologists apply qualitative analysis; when psychologists build schematizations of interviews they have conducted; or when theologians practice their exegesis. All these practices involve complex cognitive operations that have to be identified, modeled, and combined. Each of these operations is specific to an interpretative practice and a disciplinary methodology. Only once we have unveiled these cognitive operations can CARAT hope to produce adequate computer design and therefore attain some degree of maturity.

This level of explanation changes priorities in CARAT research. Indeed, it requires that before we design and use a CARAT technology we must have a better understanding of what expert readers and analysts do in their practice so the technology can really assist them in their work. In other words, be it reading or analyzing, an authentic CARAT technology is one that must adapt itself to the complexity of the expert’s interpretation processes, not the other way around. This will happen only if we more systematically analyse and model the processes expert readers and analysts perform in their interpretation of texts. To me, this seems to have been lacking in the CARAT technology developed in the past 50 years. Our appropriation of digital technology may have been too centered on what we can do with it, not what it can do for us. It is probably for this reason that technology has provoked some disappointment, if not resistance.


3.0 The functional level

The second level of the architecture approaches the technology in functional terms. Indeed, if it is essential to examine the cognitive level to understand a technology, it is also necessary to understand how these cognitive operations are translated into functions that are implemented by a computational technology: that is, into formal computational functions. In other words, a technology must also be explained from what Daniel Dennett calls the “design stance,” or what Zenon Pylyshyn and Allen Newell both call a functional level. To go back to our camera example, to explain this technology, one must also indicate what functions it must possess in order to take a picture, and this independently of how it will be implemented physically. For instance, all cameras must have a light-sensitive function called an aperture and a distance-measurement function called a focus.

For many technologies, the translation of a cognitive description into computational functions is not an easy task: there exists no automatic technique for doing this.[5] In fact, this process of translation is itself a research endeavour. Intuition, creativity, simulation, and testing are at the heart of this research. This is even truer when the functions to be identified pertain to the tasks of reading and analysis of text using computers.

At a high level of abstraction we may distinguish three main classes of functions that realize the manipulation of symbols pertinent to the reading and analysis of text. The first class of functions has the purpose of simply organizing texts in such a way as to represent them differently. A typical example of this type of function is the electronic encoding of a manuscript into an electronic image facsimile. The encoded text is a re-presentation of the original. The second class of functions is one that reorganizes a text in one manner or another. These are actually classification functions. The third class is made up of those functions that add symbols to an original set of symbols. These are categorization functions. Typical examples are annotation (or mark-up) functions, of which XML syntactic and semantic tagging are obvious examples.
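A schematic sketch in Python of these three classes of functions; the sample texts and category labels are invented for illustration and do not describe any particular CARAT package:

```python
# The three classes of CARAT functions, sketched abstractly.

def organize(text: str) -> list[str]:
    """Organization: re-present the text without changing its symbols,
    here as a list of lines (compare an image facsimile or a new layout)."""
    return text.splitlines()

def classify(tokens: list[str]) -> dict[str, list[int]]:
    """Classification: reorganize existing symbols, here by grouping the
    positions at which each word occurs (an index is one such reorganization)."""
    index: dict[str, list[int]] = {}
    for i, tok in enumerate(tokens):
        index.setdefault(tok, []).append(i)
    return index

def categorize(tokens: list[str], lexicon: dict[str, str]) -> list[tuple[str, str]]:
    """Categorization: add new symbols to the original ones, here a tag
    drawn from an external lexicon (compare XML semantic mark-up)."""
    return [(tok, lexicon.get(tok, "UNTAGGED")) for tok in tokens]

tokens = "to be or not to be".split()
print(classify(tokens))
# {'to': [0, 4], 'be': [1, 5], 'or': [2], 'not': [3]}
print(categorize(tokens, {"be": "VERB"})[:2])
# [('to', 'UNTAGGED'), ('be', 'VERB')]
```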

Finally, this functional level not only describes the technology in terms of some basic functions; it also describes it in terms of the combination of these functions, that is, through the way complex functions are built out of more elementary ones. These “itineraries” or processing paths are sequences of well-defined elementary functions. Each path then becomes an algorithm that directs the computer “analysis” of the text. In this sense, CARAT technology is rule-driven through its algorithms. The originality and creativity built into each algorithm is how the computer results ultimately become useful for human reading and analysis of text.


3.1 Functional design for expert reading

Since we are not completely aware of what the real cognitive operations of expert reading are, the existing computer-reading technologies (e.g. word processors, e-books) have been developed without much attention to the various cognitive operations involved in reading an electronic text. These often offer only a small set of basic functions (underlining, commenting, correcting, etc.). More fundamentally, because we still have difficulty reading texts directly from the computer screen, we print our digital texts and revert back to well-mastered hand-based functions. A more adapted technology with more sophisticated functions is required if reading is really to be computer-assisted. These functions are of the three types we have presented above. Some of them help organize text for reading; others reduce the text; while still others add something new to it.

A first set of functions presents the text in some new way: for example, highlighting its editorial structure, multi-colour underlining, personalised indentation of paragraphs, etc. All these tools help the user “see” the text anew. The second set of functions selects and reorganises the text or parts of it in one manner or another. They may recombine the paragraphs, delete parts of a document through some guided or controlled means, or transform a document into some other form, such as a spreadsheet. The third set of functions adds something to the text, augmenting it in some way. In one sense they categorize parts of the text by linking them with other texts (personal notes, dictionaries, commentaries, etc.).

In this perspective, an adequate CARAT technology must offer a multiplicity of computer functions and combinations of functions so that they correspond to the myriad of cognitive operations an expert reader accomplishes in the reading process. Only then can it really assist expert reading.


3.2 Functional design of a text analysis technology

Analysis technology is also influenced by a functional design. Its quality depends on the translation of the various cognitive operations involved in analysing a text. Unfortunately, as said earlier, we lack adequate understanding of the cognitive operations underlying the various methodologies of text analysis. So, in our translation of these cognitive operations into functions, we can only hypothesize what they might look like using the same three main functions identified above: organization, classification and categorization.

The first set of functions organizes a text in some manner so as to prepare it for further analysis. Examples of these are functions that segment or tokenize text so that it can be used as input for parsers, or functions that multiply the visualisations of the text (e.g. windows for rapid juxtaposition of segments of text). The second set is more important. It includes classifying functions, which reorganize in some manner the content of the text. Typical examples are functions that lemmatize, produce a lexicon or concordance, or classify segments of text. More sophisticated functions touch directly upon the semantic and pragmatic content of the text. For instance, they help produce indexes, summaries, syntheses, or statistical graphing to represent the results of text mining or thematic analysis. Some visualisation functions are also of this type, offering graphic reductions of the text’s content. These functions do not in themselves interpret a text, but they assist the expert in rapidly exploring the various components of the texts.
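As one concrete instance of a classifying function, here is a minimal keyword-in-context (KWIC) concordance sketch in Python; the window size and the sample sentence are arbitrary choices of the illustration, not features of any existing tool:

```python
# A minimal keyword-in-context (KWIC) concordance: a classic classifying
# function that reorganizes the text around each occurrence of a keyword.

def concordance(tokens: list[str], keyword: str, window: int = 3) -> list[str]:
    """Return each occurrence of `keyword` with `window` words of context."""
    lines = []
    for i, tok in enumerate(tokens):
        if tok == keyword:
            left = " ".join(tokens[max(0, i - window):i])
            right = " ".join(tokens[i + 1:i + 1 + window])
            lines.append(f"{left} [{tok}] {right}")
    return lines

tokens = "the reader reads the text and the text rewards the reader".split()
for line in concordance(tokens, "text"):
    print(line)
# reader reads the [text] and the text
# text and the [text] rewards the reader
```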

The most important functions of analysis are annotation and categorisation. Here annotation consists essentially of adding some type of categorical interpretative symbols to the original text. These functions go farther than just assisting the reading: they assist the expert in his interpretation of the text in the most pertinent manner. And because each annotation modifies the original, a simple textual corpus is transformed into a multi-layered series of secondary texts that can be explored in themselves.

These categorisation and annotation functions can be numerical, linguistic or even iconic (e.g. an arrow). Statistical analyses are typical of numerical annotation functions (Meunier et al. “A Model”; Nedjah et al.). A word frequency count is actually a mapping of a word onto a number. Linguistically-sensitive functions map segments of text onto syntactic, semantic or pragmatic categories, and discourse analysis functions map the text onto editorial, communicative, social, and political categories.

Some of these annotations can be generated automatically via parsers, but most of the deep and rich ones are done manually; these are often personalized notes or statements added to the text by the expert reader. These annotations help the interpreter’s construction and expression of his understanding of the text. Because of this personal aspect of interpretation, experts will not have confidence in any automation of this deep type of annotation. They do not believe automation is even possible. Naturally, all these annotations will require meta-functions that allow revision, correction, deletion, and management.
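One can picture such layered annotation, together with its meta-functions, along the following lines; this Python sketch uses invented names and is a model of the idea, not the design of any existing annotation tool:

```python
# A minimal model of multi-layered annotation with meta-functions.
# Each layer maps a text span (start, end) to an interpretative label.

from dataclasses import dataclass, field

@dataclass
class AnnotationLayer:
    name: str                                    # e.g. "syntax", "my-notes"
    spans: dict[tuple[int, int], str] = field(default_factory=dict)

    def annotate(self, start: int, end: int, label: str) -> None:
        """Add or revise an annotation over the span [start, end)."""
        self.spans[(start, end)] = label

    def delete(self, start: int, end: int) -> None:
        """Meta-function: remove an annotation."""
        self.spans.pop((start, end), None)

@dataclass
class AnnotatedCorpus:
    text: str
    layers: dict[str, AnnotationLayer] = field(default_factory=dict)

    def layer(self, name: str) -> AnnotationLayer:
        """Meta-function: fetch or create a named layer of secondary text."""
        return self.layers.setdefault(name, AnnotationLayer(name))

corpus = AnnotatedCorpus("To be or not to be")
corpus.layer("my-notes").annotate(0, 18, "existential theme")  # manual, personal
corpus.layer("syntax").annotate(0, 5, "INF")                   # parser-generated
```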

Finally, and of utmost importance, the technology must offer means of combining functions into chains, methodological itineraries, or workflows. This is often where much of the creativity is to be found in the analysis process. For instance, one particular course of analysis might first require lemmatisation, then the production of a concordance, then a statistical analysis of words. Another might first require lemmatisation, then classification, then graphical mapping. One quickly sees that specific and original functions and combinations of functions are essential constituents of CARAT technology and its practice. In fact, there are rarely two identical itineraries of analysis. Rather, each combination of functions offers a computational groundwork for the interpretive process of analysis.
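Such itineraries can be modelled as straightforward function composition. The following Python sketch, with deliberately crude placeholder steps standing in for real lemmatizers and statistical tools, chains elementary functions into analysis paths of this kind:

```python
# Itineraries as composed functions: each workflow is a pipeline of
# elementary steps, and each distinct pipeline is a distinct "algorithm".

from functools import reduce
from collections import Counter

def pipeline(*steps):
    """Compose steps left to right into a single callable itinerary."""
    return lambda data: reduce(lambda acc, step: step(acc), steps, data)

# Placeholder elementary functions (hypothetical stand-ins for real tools).
tokenize = str.split
lemmatize = lambda toks: [t.lower().rstrip("s") for t in toks]  # crude stub
frequency = Counter                                             # word -> count

# Itinerary 1: lemmatisation, then statistical analysis of words.
stats_itinerary = pipeline(tokenize, lemmatize, frequency)

# Itinerary 2: the same elementary steps recombined differently,
# e.g. lemmatisation followed by alphabetical classification.
classify_itinerary = pipeline(tokenize, lemmatize, sorted)

print(stats_itinerary("Readers read texts and texts reward readers"))
# Counter({'reader': 2, 'text': 2, 'read': 1, 'and': 1, 'reward': 1})
```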

In the digital humanities and social sciences many such functions have been created, explored, and tested. The library of text analysis software (a particular set of programmed combinations of functions) seems to expand constantly; we can get many of them off the shelf in ready-made software packages. But as is the case for computer-assisted reading, their usefulness for analysis depends on their adaptability and correspondence to the variety of cognitive operations involved in analysis.

Although such a description of CARAT’s functional design may seem very general and abstract, such that anything could fit into it, it allows us to see that a CARAT technology rests upon a formal or algorithmic structure that is not only made of basic functions but, most importantly, of a combinational structure of these functions. Hence CARAT technology possesses a systematicity that gives the interpretative process in the humanities and social sciences a new and different epistemological status. CARAT technology no longer appears as a stack of elements; rather, it possesses an architecture that allows parameterized, controlled and formalised manipulations of these functions and their combination. It brings literary and philological interpretation into the paradigm of simulation. This allows replicability and hence offers an unforeseen “experimental” dimension to interpretative processes. Although these CARAT structures are not always easy to identify concretely, they are at least possible in theory. In this sense, each research project by the community of CARAT users can be seen as explorative discovery and testing of these CARAT structures. It is important to emphasise this functional design, since many participants in the development of CARAT technology have come to believe there is no science involved, just pure intuition and analytical talent.

Developing these sets of functions and their combinations is a complex scientific research endeavour. Often humanists have intuitions about their own cognitive operations but lack the ability to translate these intuitions into functional models. If this could be corrected, it would facilitate a dialogue with the hard sciences and eventually have a strong economic impact. The development of a linguistic tagger, for example, is a greater technological feat than constructing a robotic grasping hand: it is much more difficult to manipulate symbols than to manipulate physical objects. On another level, if we look at the end of the last century, the manipulation of texts by computers has surely influenced our economy more significantly than the object-manipulating arm of the space shuttle.


4.0 The physical level

The last level of this explanatory architecture of a digital technology pertains to its physical support or digital appropriation. This level explains how the cognitive tasks and functions realized by a technology are constrained by time, space, and causality. Or, to say this more concretely, these functions must be physically implemented in a positive/negative electronic digital flow machine called a computer. For CARAT, this means two important things: 1) the paper text (a sheet of wood pulp) or the oral utterance (sound waves) now has a new physical support, the electronic document; and 2) the functions that manipulate this document, on either the atomic or the composite level, can be realized in modules and flexible programs.

The transformation of paper into digital format has been identified by several authors as the major revolution in information technology of the last century. George Landow and Paul Delany have insisted on the promise of this new physicality, the digital text:

Electronic text-processing marks the next major shift in text-based information technology after the printed book. It promises (or threatens) to produce effects on our culture just as radical as those first produced by movable type, and later by high-volume steam-driven printing technology. The characteristic effect of the digitised word derives from the central fact that computing stores information in the form of electronic code rather than physical marks on a physical surface (6).

One amazing but unforeseen dimension of this transformation of the paper text into electronic form is that it has allowed not only the insertion of textual symbols but also audio and visual symbols. What were traditionally three distinct semiotic forms find themselves on the same physical support or medium. This radically changes the nature of the “textual” corpus itself. Now, a “corpus” may include linked texts, images, sounds, and even animations. It will not be surprising to find on the same digital support an English sentence expressed in a string of text, a sound file, and perhaps even a video clip, all of which make a coherent contribution to the meaning of the whole. This new type of “multimedia text,” precisely because of this change of physical support, opens up new technological challenges for reading and analysis.

But even more significantly, digital texts themselves are changing, increasingly becoming entangled with the various other formats of texts that are to be found in cyber-libraries and on websites, in electronic mail, even in blogs, wikis, and social networking sites. All these changes require new, flexible implementations of reading and analysis functions. If the functional design has allowed the creation of a host of functions and combinations of functions, the electronic support, for its part, has allowed the conception of computer programs that can implement these functions in a variety of languages and platforms. In their early incarnations, CARAT programs were often compact and not very flexible. They are now becoming more and more modular, flexible, and generic. This in turn has had a great effect on the technology of reading and analysis.


4.1 The physicality of reading and analysis technology

Initially, the advent of electronic texts was so seductive that it may well have dominated our understanding of the role of computer technology in the humanities and social sciences. In its beginnings, CARAT research was focused on transferring paper documents into electronic form. This was the pinnacle of research. Ten years ago, many of us justified our CARAT research projects by pointing to the immense ocean of electronic documents, mostly on the Web. This ocean has in fact swelled to billions of electronic-text pages. There are now large-scale public initiatives to digitize texts. La Bibliothèque Nationale de France, the British Museum, the Library of Congress, and most recently Google have all taken leadership in this. The profusion of electronic documents and archives has created an unprecedented cyber-library that is now easily available for reading and analysis.

Unfortunately, even with the best of intentions, many cyber-library projects often appear to approach digitization tasks only in terms of storage and retrieval. Expert reading and analysis are not primary considerations in their design. Moreover, these libraries may even produce an unwieldy proliferation of cumbersome texts. Even a basic query of an e-text library returns a huge number of documents, often wrapped up in various meta-codes and meta-tags, sometimes themselves wrapped up with linguistic tags (some morphological, some syntactic, and even some basic semantic), all of which need to be accepted, filtered, or deleted before one can perform any sophisticated reading or analysis. These cyber-libraries, as rich as they may be for storage and retrieval strategies, significantly burden if not constrain the dynamics of expert reading and, hence, analysis. One may easily drown in this ocean of digital documents, even more so if they are tagged. The gothic silence of library bookshelves, aisles, and halls has been replaced by the noisy mazes of virtual multimedia grottoes. In this new kind of library, CARAT reading is a great challenge.

The task of analysis is also constrained by the constant change of technology. Because of the electronic support on which the technology works, new software proliferates. Moreover, no single model of CARAT fits all possible requirements of analysis. Functions and combinations of functions are less and less to be found in closed and proprietary software. More and more they take the form of modules and combinations of modules, all of which can be implemented on generic and flexible computer platforms, such as T2K,[6] GATE (Cunningham et al.) or SATIM (Meunier et al. “Classification”), some of which are open access or open source (Sinclair). These new computer platforms allow expert analysts to choose particular software modules for a specific task and combine them in original ways so as to produce new interpretative paths or itineraries simply by modifying modules and combinations of modules. Navigation in this cyber-sphere of modules and workflows requires new types of text management tools, and because of the immensity of these tasks, new means will be required for sharing the modules, workflows, and expertise that accompany them.

Finally, constantly changing technology poses the very special challenge of obsolescence. For example, many computer platforms can no longer read the punch-card texts, or even some older document formats, that scholars so meticulously typed in, not to mention first-generation text analysis software. The manipulation of these older programs, modules, and platforms requires not only expertise in the cognitive and functional aspects of reading and analysis, but also some basic understanding of computer programming and engineering so as to recover this digital legacy. So, if we are not sensitive to the physical level of CARAT, that is, to the complexity of the electronic medium, we risk handing down a legacy of cumbersome, or worse yet, obsolete tools for reading and analysis to future generations; and what we originally thought would be an economical project may turn into a financial monster because of the reengineering required for updating and enhancing the technology.


Conclusion: the challenge of appropriation

In presenting this conceptualization of CARAT, I have defended the idea that its rationale rests upon a three-level explanation or description: one level pertaining to the cognitive tasks to be realized, a second relating to the functions to be manipulated, and a third to the physical environment. These three levels of description may help us understand the last fifty years in the history of CARAT technology in terms of a persistent effort by the humanities and social sciences to implement a technology whose function is to assist the higher intellectual activities of expert reading and analysis of text.

At first glance, it seems obvious to me that many of the initial projects launched in CARAT adopted the technology primarily in terms of the physical level, where the focus was on transforming the paper text into an electronic medium. Afterwards, the appropriation shifted to focus on certain functional operations. Such was the focus of the various encoding projects (SGML, XML, TEI), parsing-oriented programs (for concordances, statistical analysis, linguistic tagging), and, most recently, text mining strategies. Finally, some attention has been given to the cognitive level through the exploration of various methodologies (authorship attribution, stylistic analysis, thematic analysis, conceptual analysis, etc.). However, because the integration of these three levels at the design stage has in many cases been neglected, the results obtained by CARAT technology have not seemed to attain the richness and profundity that experts expect.

It also now seems to me that the future of CARAT depends on the integration of these three levels. If CARAT technology cannot accomplish this, its adoption within the wider scholarly community will stall in the face of continued resistance. The long and hard process of adapting and adopting this technology by humanities and social sciences communities is common to all types of technology. Often, because one does not have a clear idea where a specific technology started or how it developed, it becomes difficult to foresee its future. All one sees is what Michael Tomasello calls the ratchet effect, wherein a technological project always builds a technology from or on a previous one and transforms the present technology into a future one. This easily produces insecurity. Where are we going? Were we not content with what we had? CARAT technology is not just a new way of surfing through our literary, historical, and social text heritage: it is an original way of diving into the layers of human intelligence and creativity. It is the challenge of digital humanities research to integrate, through computer technology, the ideas, concepts, methods and styles that have traditionally characterized the humanities and social science practices in exploring such intelligence and creativity.



Works Cited

Barry, Christine A. “Choosing Qualitative Data Analysis Software: Atlas/ti and Nudist Compared.” Sociological Research Online 3.3 (1998). Web. 17 Nov 2008. <http://www.socresonline.org.uk/socresonline/3/3/4.html>.

Bradley, John. “Thinking Differently About Thinking: Pliny and Scholarship in the Humanities.” Digital Humanities 2007 Abstracts. Web. 17 Nov 2008. <http://www.digitalhumanities.org/dh2007/abstracts/xhtml.xq?id=124>.

Calhoun, Sasha, Malvina Nissim, Mark Steedman, and Jason Brenier. “A Framework for Annotating Information Structure in Discourse.” Proceedings of the Workshop on Frontiers in Corpus Annotation II: Pie in the Sky. Ann Arbor, MI, 2005. 45-52. Web. 17 Nov 2008. <http://acl.ldc.upenn.edu/W/W05/W05-03.pdf#page=9>.

Cieri, Christopher and Steven Bird. “Annotation Graphs and Servers and Multi-Modal Resources: Infrastructure for Interdisciplinary Education, Research and Development.” Proceedings of the ACL 2001 Workshop on Sharing Tools and Resources 15 (2001): 23-30. Print.

Cunningham, Hamish, Diana Maynard, Kalina Bontcheva, and Valentin Tablan. “GATE: A Framework and Graphical Development Environment for Robust NLP Tools and Applications.” Proceedings of the 40th Anniversary Meeting of the Association for Computational Linguistics. Philadelphia, PA, 2002. Web. 17 Nov 2008. <http://eprints.aktors.org/90/01/acl-main.pdf>.

Dennett, Daniel C. The Intentional Stance. Cambridge, MA: MIT, 1987. Print.

Gibbs, Graham R. Qualitative Data Analysis: Explorations with NVivo. London: Open UP, 2002. Print.

Glaser, Barney G. and Anselm L. Strauss. The Discovery of Grounded Theory: Strategies for Qualitative Research. Chicago: Aldine, 1967. Print.

Harnish, Robert M. Minds, Brains, Computers: An Historical Introduction to the Foundations of Cognitive Science. Oxford: Blackwell, 2002. Print.

Heidegger, Martin. Being and Time. Trans. John Macquarrie and Edward Robinson. Oxford: Blackwell, 1962. Print.

Hockey, Susan M. Electronic Texts in the Humanities: Principles and Practice. Oxford: Oxford UP, 2000. Print.

Landow, George P. and Paul Delany. The Digital Word: Text-Based Computing in the Humanities. Cambridge, MA: MIT, 1993. Print.

Lewis, R. Barry and Steven M. Maas. “QDA Miner 2.0: Mixed-Model Qualitative Data Analysis Software.” Field Methods 19.1 (2007): 87-108. Print.

Marr, David. Vision: A Computational Investigation into the Human Representation and Processing of Visual Information. San Francisco, CA: Freeman, 1982. Print.

McCarty, Willard. Humanities Computing. London: Palgrave, 2005. Print.

McGann, Jerome. Radiant Textuality: Literature After the World Wide Web. New York: Palgrave, 2001. Print.

Meunier, Jean Guy, Ismail Biskri, and Dominic Forest. “A Model for Computer Analysis and Reading of Text (CARAT): The SATIM Platform.” Text Technology 14.2 (2005): 123-51. Print.

Meunier, Jean Guy, Ismail Biskri, and Dominic Forest. “Classification and Categorization in Computer Assisted Reading and Analysis of Texts.” Handbook of Categorization in Cognitive Science. Ed. Claire Lefebvre and Henri Cohen. New York: Elsevier, 2005. 955-978. Print.

Nedjah, Nadia, Luiza de Macedo Mourelle, and Janusz Kacprzyk, eds. Intelligent Text Categorization and Clustering. Berlin: Springer, 2009. Print.

Newell, Allen. Unified Theories of Cognition. Cambridge, MA: Harvard UP, 1987. Print.

Newell, Allen and Herbert A. Simon. Human Problem Solving. Englewood Cliffs, NJ: Prentice-Hall, 1972. Print.

Pylyshyn, Zenon W. Computation and Cognition: Toward a Foundation for Cognitive Science. Cambridge, MA: MIT, 1984. Print.

Rockwell, Geoffrey. “What is Text Analysis, Really?” Literary and Linguistic Computing 18.2 (2003): 209-219. Print.

Searle, John R. Minds, Brains, and Science. Cambridge, MA: Harvard UP, 1984. Print.

Sinclair, Stéfan. “Humanities Computing Resources: A Unified Gateway and Platform.” COCH/COSH 2002. University of Toronto, 26-28 May 2002. Abstract. Web. 17 Nov 2008. <http://web.viu.ca/siemensr/C-C/2002/abstracts.htm#Sinclair>.

Stiegler, Bernard. “Machines à écrire et matières à penser.” Genesis 5 (1994): 25-49. Print.

Tomasello, Michael. The Cultural Origins of Human Cognition. Cambridge, MA: Harvard UP, 1999. Print.

Unsworth, John. “What is Humanities Computing and What is Not?” Jahrbuch für Computerphilologie 4 (2002): 71-83. Print.

Vandendorpe, Christian. Du papyrus à l’hypertexte : essai sur les mutations du texte et de la lecture. Montréal: Boréal, 1999. Print.

Van Oostendorp, Herre and Sjaak de Mul, eds. Cognitive Aspects of Electronic Text Processing. Advances in Discourse Processes 58. Norwood, NJ: Ablex, 1996. Print.

Virbel, Jacques. “Reading and Managing Texts on the Bibliothèque de France Station.” The Digital Word: Text-Based Computing in the Humanities. Ed. George P. Landow and Paul Delany. Cambridge, MA: MIT, 1993. 31-51. Print.

Winograd, Terry and Fernando Flores. Understanding Computers and Cognition: A New Foundation for Design. Reading, MA: Addison-Wesley, 1986. Print.



Endnotes

[1] Rockwell has been at the heart of this critique of digital humanities: “We have a model of computer-assisted literary text analysis that is guided by a view of what a text is and how we should use it that does not match the practice of many contemporary literary critics. … [T]ext-analysis tools and the practices of literary computer analysis have not had the anticipated impact on the research community” (210).

[2] There are many variants to these “layers” or levels for analyzing an architecture. These layers have to be understood in logical terms: that is, how one layer rests upon another.

[3] The three levels are slightly differently named by these authors: Newell, for instance, names the first the “knowledge level”; Pylyshyn calls it the “representational level”; Dennett names it the “intentional stance.” The second level is called the “functional level” by Pylyshyn and Newell, the “design stance” by Dennett, and the “algorithmic level” by Marr. The third level is named the “physical level” by all.

[4] By “appropriation” we mean, following the French sense of the term, the slow process of making personal and internal something that was foreign and external to us.

[5] Formally, a function is simply a mapping of one set of objects onto another set. The whole difficulty is to model and implement this concept of mapping on a specific domain and to define its specific operations and parameters.
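In standard notation, such a mapping, and a simple instance of it (the frequency example is illustrative), can be written as:

```latex
% A function as a mapping between two sets (illustrative notation).
\[ f : X \to Y, \qquad x \mapsto f(x) \]

% Example: a word-frequency count maps each word w of a text's
% vocabulary V onto a natural number.
\[ \mathrm{freq} : V \to \mathbb{N}, \qquad
   \mathrm{freq}(w) = \bigl|\{\, i \mid t_i = w \,\}\bigr| \]
% where t_1, \dots, t_n is the token sequence of the text.
```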

[6] Data to Knowledge (D2K) and Text to Knowledge (T2K) are generic computer platforms for data and text mining developed by the Automated Learning Group of the NCSA laboratory at the University of Illinois.
