Texpert Systems

Keywords

humanities computing, humanism, expert systems, meaning, interpretation, reusable code, ordinateurs dans les sciences humaines, humanisme, systèmes experts, sens, interprétation, réutilisation de code

How to Cite

Winder, W. (1997). Texpert Systems. Digital Studies/le Champ Numérique, (5). DOI: http://doi.org/10.16995/dscn.197


Critical schools, like philosophical ones, are better thought of as programming models.
Northrop Frye

Most humanists approach humanities computing in a very practical manner -- from the bottom up. They begin quite simply by using a word processor and their library's electronic catalogue. Encouraged by the easy gains these tools offer, they move on to e-mail and to exploring the World Wide Web. Many stop there, as the law of diminishing returns dictates, since an array of purely technical problems makes further exploration of the electronic medium considerably less rewarding. Others brave the difficulties and can rightfully be called computing humanists -- however awkward and unwelcome that title may be -- when they begin to explore seriously how the computer might be coaxed into doing tasks that are directly pertinent to their field of research. Finally, a small segment of these computing humanists has been converted to computing fundamentalism. They believe that the computer has a place in just about everything we do, and that it is the source of a shift in perspective that, if measured on some Richter scale for paradigm shifts, has only the Renaissance and the Gutenberg revolution for a parallel.[1]

As a member of this latter, extremist sect, I would like to present some of the articles of faith that guide those working on interpretation in the electronic medium. I will also outline a framework for understanding the electronic text, called the expert systems approach, which I believe occupies the strategic theoretical ground of the field as a whole.

Unlike the bottom-up perspective that brings most humanists to computers, the fundamentalist's perspective is (oddly enough) very much top-down. It starts with the overarching faith that the founding issues of the humanities, textuality and meaning, can be usefully studied through the computer. This is purely a question of faith, because it is not obvious that the computer will ever be able to treat meaning. In fact, one could argue that the core issues of the humanities are by definition independent of any kind of technology.

Regardless of the outcome of that debate, the fundamentalist's perspective has the force of argument; it brings us back to some very old questions, with some hope of getting new answers. We can ask anew "What is a text?" and "What is meaning?" because suddenly there is a new breed of texts to be read, electronic texts, and a new kind of reader, the computer.

It is indeed with the electronic text that the expert systems approach begins.

Electronic text

Text can be defined as the common ground between reader and author. We might define the electronic text analogously as the common ground between computer hardware and software and the reading humanist. Any text-like object can be more or less accessible to computers or people.

Some texts, such as computer programs, are designed with the computer "reader" in mind. The form and content of software programs are such that the computer, even more so than a human reader, will be sensitive to the orientation and logical flow of its "discourse". Low-level computer programs are opaque for most human readers, and programmers are employed precisely to translate between a readable human description of a task and the unreadable computer code that will do it. On the other hand, a typical novel is designed for the human reader alone, and the computer is only sensitive to what is most superficial, such as alphabetic character sequences, formatting, and typesetting.

Since there are different degrees of integration between the computer and the human reader, there are different kinds of electronic texts. One advanced kind, the "intelligent text", is that future digitized novel whose reading will be as pertinent and as accessible to computer software and hardware -- and indeed crucial to the computer industry itself -- as it is for the interpreting humanist reader. The digitized texts we know today are but foreshadowings of that future intelligent text, which will offer computers and people equal access to its informational and interpretative content.

Models and measures of meaning

The crucial, unanswered question is whether electronic texts will ever offer a level playing field for machines and people; can texts dialogue in the same way with both kinds of readers?

However we set about levelling the textual playing field, the theoretical challenge is ultimately the same: we must somehow formalize and explain the meaning of texts, for the most informative and pertinent aspects of texts lie primarily in their various shades of meaning. As long as machines remain insensitive to meaning, they will be oblivious to the major part of a fictional text's information.

Some profound and devilishly complex questions plague work in this area. For instance, in what sense would we ever be willing to concede that meaning might be the same for people and machines? A robotic car that stops at a red light and goes on green might be said to understand much of the core content of these simple signs. However, human meaning goes beyond such a core, utilitarian meaning: it is always accompanied by alternate meanings that current machines certainly cannot understand. At a stop light, people may prefer green to red, or be depressed over increasing governmental intervention in the highway system, or find it appropriate that their Taurus should charge, rather than stop on red. Only a machine from some distant science-fiction future could ever have such thoughts about stop lights.

The theoretical challenge for this field of study thus consists of evaluating how, and to what extent, a text's meaning can be treated by computer. Transposed to the methodological domain, the essential problem is to design the experimental framework for measuring meaning and understanding. How do we know if anybody really understands what a stop light means? We may not ever be able to fully explain what a sign means, but perhaps that is not needed to measure a machine's understanding of texts.

On this issue, the computational critic is in a situation much like that of the English professor, who may not be able to tell her students what Hamlet really means, but who knows whether or not a student understands the text. As students do from time to time, computers take us to task and ask us to state explicitly the meaning of the text, or at least to give some guidelines that indicate why one interpretation is more plausible than another, or why one interpretation is worth an "A" and another a "C". In other words, to satisfy the computer and the inquiring student, we must propose a conceptual framework that allows us to judge whether someone does or does not understand a text, and to what degree understanding or misunderstanding exists. Literature teachers in particular have much experience in evaluating readings, but generally their practical experience is not formalized in a way a computer could use, nor even well enough that students will always accept their grading practices!

The dialectical model

The expert system framework is a systematic approach to the problem of meaning evaluation. It is a very general and abstract dialectical model based on questions and answers. The most fundamental semantic assumption is that we can only know that someone understands a text when his or her responses to questions about the text follow a regular pattern.[2] The ability to answer questions is one thing computers have in common with people, at least superficially so. Expert systems are one of the most developed forms of automated question and answer systems, but in fact even a lowly bibliographical database "responds" in a certain sense when we query the library's holdings.
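
The link between regular response patterns and understanding can be made concrete with a small sketch. The Python fragment below is purely illustrative and belongs to no existing system; the question set and the scoring function (Question, HAMLET_QUESTIONS, score_reader) are invented for the example. A reader whose answers fall within the pattern that competent readers converge on scores high; an idiosyncratic reader scores low.

    from dataclasses import dataclass

    @dataclass
    class Question:
        prompt: str
        expected: set   # answers that competent readers converge on

    # A toy question set; the contents are invented for illustration.
    HAMLET_QUESTIONS = [
        Question("Who kills Polonius?", {"hamlet"}),
        Question("Does Hamlet trust the ghost immediately?", {"no"}),
    ]

    def score_reader(answers, questions):
        """Fraction of answers that fall within the expected pattern."""
        hits = sum(a.strip().lower() in q.expected
                   for a, q in zip(answers, questions))
        return hits / len(questions)

    print(score_reader(["Hamlet", "no"], HAMLET_QUESTIONS))    # 1.0: regular pattern
    print(score_reader(["Ophelia", "yes"], HAMLET_QUESTIONS))  # 0.0: idiosyncratic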

There is an obvious intuitive link between meaning, questions, and answers. But there is a need to make that link more explicit, since it is a crucial component of reading computers.

As a starting point, let us say that meaning can be recognized empirically through the effect it has on our discursive practice. One definition often given of meaning is meaning as the constant of translation: it is what the translator systematically tries to keep intact when translating. In some sense, we know meaning exists because translation exists, and we can see meaning in action by comparing translations. Good translators tend to produce statistically similar translations of the same text, translations recognizable as such despite considerable variation according to the genre of the text (poetry will be more variable than cookbooks). For the most part, meaning informs a translation. If we wished to formalize the role of meaning in translation, we would say that it induces a statistically pertinent correlation between the discourse of the source text and that of the target text.
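
A crude operational sketch of this idea, under the assumption that simple word overlap can stand in for the statistically pertinent correlation just described (the sentences and the jaccard function are invented for the example): independent translations of the same passage resemble one another more than they resemble a translation of an unrelated passage, and that systematic resemblance is the empirical trace of the meaning both preserve.

    def jaccard(a, b):
        """Word-overlap similarity between two sentences (0 = disjoint, 1 = identical)."""
        wa, wb = set(a.lower().split()), set(b.lower().split())
        return len(wa & wb) / len(wa | wb)

    # Two independent renderings of the same source sentence...
    t1 = "the two friends set off to fish at dawn"
    t2 = "the two friends went out to fish at dawn"
    # ...and a sentence translated from an unrelated source.
    other = "the recipe calls for two eggs and some flour"

    print(jaccard(t1, t2))     # relatively high: meaning as the constant of translation
    print(jaccard(t1, other))  # relatively low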

Critics are also translators of sorts, and it is the relative constancy of their interpretations that leads us to associate a core meaning with a given text. In the sciences, we find that truth is based on a repetition in the behaviour of nature: scientists perform experiments and recognize nature's repeatable responses. In the semiotic sciences, which in the extended sense include all the humanities, truth lies in the repetition of semiotic behaviour: the humanities and social sciences study meaning as a constant of human nature.

In the expert system framework, we will define the meaning of any text as the clustering of particular discursive behaviour around the text. We can say that we know what the text means when we can predict[3] how interpreters will structure their dialogue around the text. Obviously, that prediction is statistical and partial -- never absolute and complete. Any given interpreter can react in bizarre, miraculous, or idiosyncratic ways. But it is also clear that on certain questions most interpreters will respond in similar ways, just as most translators translate in similar ways.
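
A worked example of that statistical, partial prediction, with invented data: take a sample of one-word answers to a single interpretative question about a text, treat the modal answer as the provisional core meaning, and treat the agreement rate as a measure of how predictable responses are.

    from collections import Counter

    # Invented sample: six interpreters answer "What is the story about?"
    answers = ["loss", "loss", "friendship", "loss", "war", "loss"]

    counts = Counter(answers)
    core, freq = counts.most_common(1)[0]     # the answer most interpreters cluster on
    agreement = freq / len(answers)           # how predictable responses are

    print(core, round(agreement, 2))          # loss 0.67 -- partial, never absolute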

Of course, the simplified theory of meaning outlined here requires considerable amplification and adjustment. For instance, it does not account for the fact that excellent interpretations and excellent translations are typically at once rare and true. In other words, the simplistic statistical interpretation presented here -- in which "good" discourse is the core, common discourse -- would lead us to conclude that "exceptional" could only mean "erroneous". This vision of interpretation and meaning is obviously too monolithic and requires refinement in several directions.[4]

It is ultimately an act of faith to believe that such difficulties can be weeded out of a computational approach to meaning. These challenges do not in any way shake the fundamentalist's belief that computation, though rooted in simple behaviourist or operationalist principles, is not limited by them. Peircean pragmatism and semiotics are good examples of how behaviourism and operationalism can be transcended through semiotic logic.

Expertise and Expert Systems

The expert system approach to text interpretation is precisely a framework for analyzing and modelling the questions and answers that bracket a text. It involves 1) capturing the source text; 2) capturing, making explicit, and formalizing the textual expertise of human interpreters; 3) defining and evaluating degrees of meaning and the plausibility of interpretations; and 4) implementing a query system for interpretative questions on the computer. Expert systems are presently fairly common in technical areas such as medicine, geology, and finance; in the humanities, there are examples in archeology (Gardin 1987), philology (Derniame et al. 1989; Miller et al. 1990), law (Gardner 1987), and literary interpretation (Miall 1990).
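
As an indication of what step 4 might look like in miniature, here is a toy forward-chaining rule system in Python. It is a sketch of the general expert-system technique, not a reconstruction of any of the systems cited above; the facts and rules are invented stand-ins for the captured text (step 1) and the formalized expertise of steps 2 and 3.

    # Toy facts about a hypothetical short story, as captured in steps 1-2.
    facts = {"the protagonist dies at the end", "the story is set in wartime"}

    # Toy interpretative rules: if the conditions hold, the conclusion is plausible.
    rules = [
        ({"the protagonist dies at the end"},
         "plausible reading: the story is a tragedy"),
        ({"the story is set in wartime",
          "plausible reading: the story is a tragedy"},
         "plausible reading: war destroys private life"),
    ]

    def infer(facts, rules):
        """Forward-chain: apply rules until no new conclusion can be added."""
        derived = set(facts)
        changed = True
        while changed:
            changed = False
            for conditions, conclusion in rules:
                if conditions <= derived and conclusion not in derived:
                    derived.add(conclusion)
                    changed = True
        return derived - set(facts)

    print(infer(facts, rules))   # the two plausible readings, derived in turn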

In the literary domain, such approaches inaugurate what might be called the neo-Wissenschaft period of literary criticism. Northrop Frye described the Wissenschaft period of the late 20s and early 30s as follows:

Just as societies have to go through a food-gathering stage before they enter a food-cultivating one, so there had to be a stage of gathering information about literature that might be relevant to it, even when there was still no clear idea of what literature was or how to arrive at any structural principle that would direct research from the heart of literature itself. This period of literary scholarship, which was dominant until about 1935, is sometimes called the Wissenschaft period, and its great scholars amassed an awesome amount of information. Its imaginative model was the assembly line, to which each scholar "contributed" something, except that the aim was not to produce a finite object like a motor car, but an indefinitely expanding body of knowledge. (Frye 1991: 4)

In contrast, the neo-Wissenschaft approach that expert systems technology makes possible is indeed product-oriented and more intimately collaborative than its predecessor. The intelligent text can potentially change the nature and outcome of the Wissenschaft approach.

Amassing knowledge is relatively simple, as our towering research libraries demonstrate; organizing, retrieving, and understanding the interrelations of the information is another matter, and one whose mastery is not automatic, as our shaky understanding sometimes reveals. If the Wissenschaft period came to a relative end, it is largely because it collapsed under the weight of the data it produced. Structuralism, psychocriticism, and other critical schools that appeared after the Wissenschaft period were principally ways of organizing and viewing the data already collected. These systems of organization, however enlightening, remain purely conceptual, purely toy models; none can treat in any real way the mass of data we have accumulated about the literary text.

An eloquent example can be found in Greimas's work on Maupassant (Greimas 1976). One would hope that his excellent study of Maupassant's short story "Two Friends", a 274-page monument of interpretative labour, would clear the way for other studies of Maupassant. However, Greimas never produced anything else comparable, and even in the collected works of the Paris School one would probably find only a dozen of Maupassant's stories studied at all, out of the 310 that he wrote.

It seems that the more interpretative information we produce, the less likely we are ever to build on it. There is a certain logic to that: if I wish to study another four-page short story by Maupassant, I will have to read not only the four pages of the story, but now also the 274 pages of Greimas's study. It is little wonder that no one has been tempted to complete what Greimas began by systematically applying his work to the rest of Maupassant.

Reuse

A comparable problem exists in the programming world, where code reusability has become the central concern of the software industry. Programmers have always spent a large portion of their effort rewriting code that has already been written a thousand times by a thousand other programmers. Reusable code, such as that of object-oriented languages, does not need to be totally rewritten for each new variant of a task: it is "intelligent" code that can easily be adapted to new task environments.
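
A minimal illustration of that kind of reuse, with invented classes (Concordance and FrenchConcordance are hypothetical, not drawn from any library): the counting routine is written once and inherited unchanged, and only the behaviour that varies, tokenization, is overridden for the new task environment.

    class Concordance:
        """Builds a word-frequency table for a text."""
        def tokens(self, text):
            return text.lower().split()

        def count(self, text):
            freq = {}
            for tok in self.tokens(text):
                freq[tok] = freq.get(tok, 0) + 1
            return freq

    class FrenchConcordance(Concordance):
        """Reuses count() as is; only tokenization changes, to split elided articles."""
        def tokens(self, text):
            return text.lower().replace("l'", "l' ").split()

    print(Concordance().count("the cat saw the dog"))
    print(FrenchConcordance().count("L'ami et l'eau"))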

The neo-Wissenschaft period, to which the computer naturally leads us, brings with it precisely these issues of retrieval and reuse. Contributing another article or book to our already groaning library shelves seems at times excessive. The challenge today is to be just as efficient at retrieving the information we produce as we are at stockpiling it.

Contributing to a knowledge base, such as the one described by Neuhaus (1991), is a kind of publishing that goes well beyond what a journal article can aspire to. Print journals simply cannot offer the same degree of retrievability and reusability.

Retrievability and reusability will no doubt prove all the more crucial in light of the print explosion on the Internet. Knowledge-based publishing is one way to ensure that publishing on the Internet is truly collaboration and not simply reduplication. Under such a system, one can confirm whether a given item of information has already been published or not.
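
One can sketch such a duplication check under the assumption that the knowledge base stores each item under a normalized key; the claims, the normalization scheme, and the publish function below are all hypothetical.

    knowledge_base = {}

    def normalize(claim):
        """Collapse case and whitespace so near-identical claims share a key."""
        return " ".join(claim.lower().split())

    def publish(claim, contributor):
        """Record a claim only if it is new; return True when it is accepted."""
        key = normalize(claim)
        if key in knowledge_base:
            return False          # already published: collaborate, do not reduplicate
        knowledge_base[key] = contributor
        return True

    print(publish("Maupassant wrote roughly 310 short stories", "contributor A"))  # True
    print(publish("maupassant wrote roughly  310 short stories", "contributor B")) # False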

We have arrived here, I believe, at the principal article of faith of fundamentalist humanities computing. Computer technology, like the codex itself, is the basis of a new incarnation of dialogue, and the source of renewed collaboration. The reincarnation of dialogue is the ultimate object of humanities research and the source of humanists' fascination. We find that reincarnation in the visual arts, where artists constantly redefine the meaning of art; we find it in the diverse languages and cultures of the world, and in scholarly exchange. Today we find a new incarnation of dialogue in the electronic medium. It is that new dialogue that makes humanities computing a fundamentally humanistic enterprise, to the point that one day the computer will surely come to be known as the "humanists' machine".


Notes

[1] Thus, Delany and Landow state that the electronic text represents "the most fundamental change in textual culture since Gutenberg" (Delany & Landow 1993: 5, quoted in Siemens 1996: 49 n.1).

[2] This perspective is essentially Turing's: how do we know if a computer is intelligent? Turing's answer (known as the Turing test) is: by its answers to our questions. If the machine's answers are indistinguishable from those of a human respondent, then the machine is at least as intelligent as a person.

[3] But see Thom 1993.

[4] Such refinements would require that the notion of "common" discourse be put in a broader, diachronic context. For instance, excellent translations and interpretations, which might be rare patterns of discourse that surround the source text at a particular moment, are in some sense diachronically "common" or frequent because of their impact on future interpretative discourse, since excellent interpretations tend to be reproduced. In other words, they are strong enough to generate a new perspective on the text that influences future translations.


Bibliography

  • DELANY, Paul and George P. LANDOW (1993). "Managing the Digital Word: The Text in an Age of Electronic Reproduction", The Digital Word: Text-Based Computing in the Humanities (ed. Paul Delany & George P. Landow), Cambridge: MIT Press: 3-28.
  • DERNIAME, O., J. GRAFF, M. HENIN, S. MONSONEGO & H. NAÏS (1989). "Vers un Système Expert pour l'Analyse des Textes de Moyen Français", Computers and the Humanities, 20: 253-61.
  • FRYE, Northrop (1991). "Literary and Mechanical Models", Research in Humanities Computing (ed. I. Lancashire). Oxford: Oxford UP: 3-13.
  • GARDIN, Jean-Claude (1987). Systèmes experts et sciences humaines: le cas de l'archéologie, Paris: Eyrolles.
  • GARDNER, Anne von der Lieth (1987). An Artificial Intelligence Approach to Legal Reasoning, Cambridge: MIT Press.
  • GREIMAS, A.-J. (1976). Maupassant. La sémiotique du texte: exercices pratiques, Paris: Seuil.
  • MIALL, David (1990). "An Expert System Approach to the Interpretation of Literary Structure", Interpretation in the Humanities: Perspectives from Artificial Intelligence (ed. Richard Ennals & Jean-Claude Gardin), Chichester: Ellis Horwood: 196-214.
  • MILLER, George et al. (1990). "Introduction to WordNet: An On-line Lexical Database", International Journal of Lexicography, 3.4: 235-44.
  • NEUHAUS, H. Joachim (1991). "Integrating Database, Expert System, and Hypermedia: the Shakespeare CD-ROM Project", Literary and Linguistic Computing, 6.3: 187-91.
  • SIEMENS, Ray (1996). "The New Scholarly Edition in the Academic Marketplace", Text Technology, 6.1: 35-50.
  • THOM, René (1993). Prédire n'est pas expliquer, Paris: Flammarion.

Authors

William Winder (University of British Columbia)

Licence

Creative Commons Attribution 4.0
