Introduction: Emerging/Emergent Materiality

This study aims to present a theoretical model for the digitization of literary texts, taking the novel as a case study, in addition to providing a detailed description of a prototype (still under development) called iNovel.1 The study starts from the proposition that we need a theoretical model, one that is both comprehensive (dealing with the major issues) and practical (easily translated into design choices), to understand the shift in material condition when literary texts are migrated to the digital. To achieve this, an overview of the evolution of the concept of materiality and the different models developed or proposed by scholars is necessary. For this particular topic, the focus will fall on the work of Johanna Drucker, Jerome McGann, and Matthew Kirschenbaum, with additional reference to the work of the critics G. Thomas Tanselle and D. F. McKenzie.2

In their engagement with the materiality of texts, textual and literary critics seem to have moved from opposite directions to ultimately meet in the middle. The textual critics’ engagement with the physical aspects of texts led them to pay more attention to interpretational aspects by recognizing the inseparability of the two activities ([re]production and interpretation). Literary theories, each in their own way, moved from interpretational concerns to a serious consideration and inclusion of physical and material factors, reaching more or less the same conclusions.

The concept of materiality, especially as it applies to written texts, has been bound up with theories about language, signification, and literature – in other words, with philosophy and criticism – which were in turn intertwined with the actual production of literature and art (Drucker 1994, 9). The concept also developed along disciplinary, or practical, lines, as we will see with textual criticism. Therefore, the concept evolved with each critical and philosophical school. Johanna Drucker’s The Visible Word contextualizes the study of experimental typography in modern art, providing a useful overview of writing and materiality (Drucker 1994, 9–47). In broader terms, the attention paid to typography in modernist art and literature signaled a redefinition of materiality and its role in signification. Underpinning this was an urge to stress the autonomy of the work of art in terms of its “capacity to claim the status of being rather than representation.” To do so, the materiality of form was asserted as a primary rather than a subordinate condition (Drucker 1994, 10).

The experimental practices and the theoretical developments within literature, art, and criticism foregrounded the fact that the material aspects of texts cannot be separated from meaning production. A parallel development occurred in the field of textual studies towards an understanding of the intersection between the material reproduction of texts and the interpretation of these texts. We find this theme in the works of major theorists in the field like G. Thomas Tanselle and D. F. McKenzie. In “The Nature of Texts,” Tanselle talks about a “widespread failure to grasp the essential nature of the medium of literature [language],” which is “neither visual nor auditory” and is “not directly accessible through paper or sound waves” (Tanselle 1989, 11–38). Tanselle’s argument relies heavily on the dichotomy between the work and the text. According to him, literary works are sequential arts (alongside music, dance, and film) which “depend on sequence or duration.” These “can survive only as repetitions, which in turn require instructions for their reconstitution…which are not the same thing as the work itself” (Tanselle 1989, 22). From this, Tanselle concludes that “literary texts are analogous to musical scores in providing the basis for the reconstitution of works” (Tanselle 1989, 23). Somewhat ironically, Tanselle uses this view of language to argue against the split between textual and literary activities: “We can never know exactly what such works consist of and must always be questioning the correctness – by one or another standard – of the texts whereby we approach them” (Tanselle 1989, 25). The inaccessibility, and therefore indeterminacy, of literary works, which is due to the nature of the medium of language, makes the textual activity itself a form of interpretation. Therefore, “the act of interpreting the work is inseparable from the act of questioning the text” (Tanselle 1989, 32).

Tanselle emphasizes the fact that the engagement with the text and its materiality involves an act of interpretation. Moreover, his definition of texts as “instructions for the reconstitution of works” includes a gesture towards the dynamicity of texts, as noted earlier. He also affirms the specificity of each physical reproduction of the text: no two copies are the same; any change in the instructions would change the work (Tanselle 1989, 45). D. F. McKenzie, on the other hand, anchors meaning (the statement, the work) completely in the material text. McKenzie’s redefinition of bibliography, in his Bibliography and the Sociology of Texts, departs from traditional views which exclude “symbolic” meaning from the concern of bibliographers, as he sees a “relation between form, function, and symbolic meaning” that cannot be ignored (McKenzie 1984, 10). Thus, he starts by challenging the rift between transcendent meaning and its material form. To him, being involved in “interpretive” questions is an automatic outcome of being engaged in the recording of physical forms, which is bibliography’s main job (McKenzie 1984, 61). In an indirect response to Tanselle’s theory, McKenzie offers the following proposition:

Whatever its metamorphoses, the different physical forms of any text, and the intentions they serve, are relative to a specific time, place, and person. This creates a problem only if we want meaning to be absolute and immutable. In fact, change and adaptation are a condition of survival, just as the creative application of texts is a condition of their being read at all. (McKenzie 1984, 60–1)

However, both McKenzie and Tanselle seem to agree on the indeterminacy and changeability of texts – though Tanselle believes that there is an original statement whose recovery constitutes a noble enterprise and a form of historical enquiry. But whether or not we believe in, or aim at, an ideal Work, we are left with the fact that the main activity is the production of works, or approximations of works, through the production of texts and their documents.

The principles established above about the material basis of symbolic meaning and the interpretational nature of textual production are necessary. However, it is also imperative for a comprehensive model, such as the one aimed at here, to understand the internal mechanisms by which a text produces meaning and the rules that govern the reciprocal influence between texts and their interpreters. Here lies the significance of the work of Jerome McGann, whose theories about textuality combine the perspectives of textual criticism, literary criticism, and digital humanities; his approach to the issue of materiality is therefore informed by expertise in all these fields. I would like to focus on three themes from his work: text as an autopoietic system, rethinking textuality and the algorithmic nature of text, and his theory about textual dimensions/dementians.

McGann defines texts as “autopoietic mechanisms operating as self-generating feedback systems that cannot be separated from those who manipulate them” (McGann 1991, 15). McGann’s autopoietic theory reveals the dynamic nature of traditional or print texts. One recurrent theme in his work is rethinking (print) textuality in relation to digital technology, or what McGann calls “a thoroughgoing re-theorizing of our ideas about books and traditional textualities in general” (McGann 1991, 149). He calls for this in order to discover the dynamic nature of texts – all texts. There is a useful summary of McGann’s ideas in this regard in “Rethinking Textuality,” which starts with a list of conclusions about texted documents that came as a result of an experiment with Johanna Drucker (McGann 1991, 138–9). One of the main claims there is that “the rationale of a textualized document is an ordered ambivalence,” and that “any heuristic distinction,” perhaps in the manner of Tanselle’s theory, “between bibliographic and semantic elements obscures the field of textual meaning by identifying the signifying field (“its rationale”) with the semantic field.” Like McKenzie and Tanselle, McGann states that “marking is interpretational” (McGann 1991, 138), an idea he had pointed out in earlier works like The Textual Condition, where he claims that “(t)he idea that editors ‘establish’ the texts that critics then go on to ‘interpret’ is an illusion” (McGann 1991, 27).

With the help of McGann’s ideas, we have no difficulty in understanding that all texts are algorithmic. As his autopoietic theory shows, texts are far from static. McGann also talks about textual dimensions/dementians. In “Marking Texts of Many Dimensions,” McGann identifies the following dimensions in any text: 1) linguistic, 2) graphical/auditional, 3) documentary, 4) semiotic, 5) rhetorical, and 6) social (McGann 2004). To remain consistent with McGann’s work as a whole, we should not impose any heuristic distinction among these but see them as operating within a dynamic field. We should also remember that this model includes both traditional and digital textualities and even other forms of language; the second dimension, for example, includes oral forms of language. McGann suggests that other dimensions might be proposed or imagined, and this is indeed tempting. I am thinking of the factors related to the logistics of reading (navigation, for example, or what we might call allied textual functions).

McGann’s theories about texts show their heterogeneity and dynamicity, but we are still unclear about what happens when the material conditions shift, especially with the migration of texts to the digital. Do they lose their material basis and become immaterial, as the phenomenological experience of readers and some capabilities like transmissibility and reproducibility indicate? Matthew Kirschenbaum offers a full-fledged theory about the materiality of new media in Mechanisms: New Media and the Forensic Imagination. He approaches the materiality of digital media from the specific vantage points of computer forensics and textual criticism, from which there has been “very little consideration” (Kirschenbaum 2008, 16). Before looking for evidence and counterexamples, Kirschenbaum, in a manner similar to McGann’s approach, goes on to illustrate another peculiarity of his method, explaining the keyword in the title, “mechanisms”: “Here I propose a close reading, a reading both close to and of a piece with the machine – a machine reading whose object is not text but a mechanism or device” (Kirschenbaum 2008, 88). This focus on the mechanism or device becomes clearer with the theory he proposes about materiality, and the binary classification into forensic and formal materialities, which “are perhaps better brought to rest on the twin textual and technological bases of inscription (storage) and transmission (or multiplication)” (Kirschenbaum 2008, 15).

The two categories are useful on many planes. First, forensic materiality, which exists on the physical level of inscription and represents the “potential for individualization inherent in matter,” demonstrates that digital media are not only material in the general sense of having a material basis, but also in the specific sense of each object being unique and forensically tractable. I think that Kirschenbaum’s notion of “formal materiality” more comprehensively and specifically demonstrates the nature of the digital and of computation in general. The “recursive ability to mimic or recapitulate” almost any formal environment gives the computer its comprehensibility, but it also creates an assumption of immateriality.

Kirschenbaum also introduces a number of notions which, taken holistically, can further explain why code-based objects tend to behave immaterially, especially at the phenomenological level of the user’s experience. One of them is “system opacity,” the fact that new media are removed from the human eye (Kirschenbaum 2008, 40). At a later point, Kirschenbaum describes digital inscription as “a form of displacement” which “remove[s] digital objects from the channels of direct human intervention” (Kirschenbaum 2008, 86). In light of this observation, let us compare traditional books and their digital counterparts. Print books serve multiple functions: they are the “carrier” of the text and the platform of reading. On the sensory level, the experience of reading primarily involves sight and touch – direct intervention. With e-texts or e-books, the interaction is not completely direct, and there is a loss of one sensory type of data, that of touch. The carrier and the platform are independent from the text; the text and its carrier do not seem to exist in one perceptual space. The electronic platform becomes a container of the text rather than being one with it, as in traditional books. We tend to think of the text as immaterial because our notion of materiality is tied to what is physically accessible, or that which has a complete sensory basis.

Besides Kirschenbaum’s notions, I add the notion of “visiocentrism,” which I think is the paradigmatic basis of formal materiality (with its roots in formalism and geometry). By this I mean the centrality of the visual in computational representations (formal modeling/materiality). This takes place both paradigmatically, by centralizing form, and perceptually, by centralizing sight. Things are represented by a reproduction of their visual imprint – scanning is prototypical here. Thus, a page on the screen is a rectangular white page, demarcated by virtue of visual differentiation.

As a concluding remark, I propose considering the outcome of migrating texts to the digital in terms of a rearrangement of the textual landscape. The formal modeling of the computer creates a break between the text and its platform and carrier. Therefore, the text seems to be independent of its forensic, physical basis, which remains, even if only seemingly, hidden, opaque, and undetectable. Another way to see this rearrangement is as a break between storage and multiplication. In Hayles’s words, we say that storage is now separate from performance (Hayles 2005, 164), and, borrowing from Drucker, we say that the text and its configured format are no longer “inextricably intertwined” as is the case in print media (Drucker 2009, 147). We can also say, with the help of Kirschenbaum’s model, that these different elements/functions no longer exist in the same perceptual space. The physical/forensic is moved to another perceptual level of interaction, away from the human “naked” senses.

How can the understanding of materiality gained from all these theories and models help the practical agenda of this research? It is useful first to remember that we are working within a humanistic endeavor and way of knowing which, as Kirschenbaum reminds us, “assigns value to time, history, and social or material circumstance – even trauma and wear” (Kirschenbaum 2008, 23). Thus, discovering the materiality of digital media simply cannot be a “scientific” discovery. There is also a need to understand the issue of materiality from the user’s point of view, the phenomenological level. The different theories, especially Kirschenbaum’s, help us see that digital texts have their artifactual existence (forensic materiality) and that they can be individualized. However, the significance of this fact on the phenomenological level of the reader’s experience is limited, due to the rearrangement and the mechanism of formal modeling. True to its name, forensic materiality remains only forensically significant, and ordinary users will continue to interact with formal models, continually dissolving them.

Within the framework of these remarks, I am going to propose a hybrid model for understanding materiality and its relation to textuality. This model tries to combine the most relevant aspects of the theories previously discussed and their underlying models. Drucker’s model, which responds to the paradox in Saussurian linguistics and its semiotic model, as well as to subsequent critiques of that model, like the Marxian and Derridean, is a good starting point. Here she introduces this model:

Such a model includes two major intertwined strands: that of a relational, insubstantial, and nontranscendent difference and that of a phenomenological, apprehendable, immanent substance. Each of these must be examined in turn, as well as the relation they have to each other in interpretation. The basic conflict here – of granting to an object both immanence and nontranscendence – disappears if the concept of materiality is understood as a process of interpretation rather than a positing of the characteristics of an object. The object, as such, exists only in relation to the activity of interpretation and is therefore granted its characteristic forms only as part of that activity, not assumed a priori or asserted as a truth. (Drucker 1994, 42)

The model is obviously challenging, especially in terms of “tak[ing] into account the physical, substantial aspects of production as well as the abstract and system-defined elements” (Drucker 1994, 43). But, as Drucker suggests, we can view materiality as an interpretational outcome, not an inherent, self-evident characteristic of objects. I see this as a working definition of emergence. This ties in with a later notion developed by Drucker, probabilistic materiality, meant to indicate “a shift from a concept of entity (textual, graphical, or other representational element) to that of a constitutive condition (a field of codependent relations within which an apparent entity or elements emerges)” (Drucker 2009, 150–1). In short, materiality is the outcome of a process, of the interaction of different elements.

Drucker’s model has affinity with the concept of materiality developed by N. Katherine Hayles. Here Hayles introduces her concept:

I want to clarify what I mean by materiality. The physical attributes constituting any artifact are potentially infinite; […] From this infinite array a technotext will select a few to foreground and work into its thematic concerns. Materiality thus emerges from interactions between physical properties and a work’s artistic strategies. For this reason, materiality cannot be specified in advance, as if it preexisted the specificity of the work. An emergent property, materiality depends on how the work mobilizes its resources as a physical artifact as well as on the user’s interactions with the work and the interpretive strategies she develops—strategies that include physical manipulations as well as conceptual frameworks. In the broadest sense, materiality emerges from the dynamic interplay between the richness of a physically robust world and human intelligence as it crafts this physicality to create meaning. (Hayles 2002, 33)

Hayles stresses the fact that materiality is not an inherent, and thus preexistent, quality but comes as a result of a form of human intervention, like interpretation. However, as Kirschenbaum notes, Hayles’s model seems to exclude “formal materiality,” or “the computationally specific phenomenon of formal materiality, the simulation or modeling of materiality via programmed software processes” (Kirschenbaum 2008, 9). The space between “physical properties” and artistic and “interpretative strategies” is wide enough to include formal materiality. What is lacking in Hayles’s model, however, is the intermediary: media itself and its technologies, most importantly the computer.

Kenneth Thibodeau’s classification of digital objects can also be engaged along these lines:

Every digital object is a physical object, a logical object, and a conceptual object, and its properties at each of those levels can be significantly different. A physical object is simply an inscription of signs on some physical medium. A logical object is an object that is recognized and processed by software. The conceptual object is the object as it is recognized and understood by a person, or in some cases recognized and processed by a computer application capable of executing business transactions. (Thibodeau 2001, 6)

The physical object is another conceptualization of the forensic/documentary level. The logical and conceptual categories could encompass the rest of McGann’s dimensions. The concept of logical objects is close to formal and computational modeling.

I would like to end this part by proposing a hybrid model based on the following premises: first, that texts are multi-dimensional and heterogeneous, and that the relation among their various dimensions, codes of significance, or levels is not heuristic; second, that marking is interpretation and interpretation is marking; and third, that everything comes as a result of the interaction, or the epistemological space created, between human beings and the world in the former’s attempt to create and assign meaning. Therefore, my model has two basic levels: the physical (substantial, forensic, documentary) and the human (interpretational, phenomenological, artistic, conceptual, insubstantial). On the middle ground, and true to its name, lies media (and technology) as an intermediary and an indispensable element. The following table illustrates this model.

Table 1

Model for the multidimensionality of texts.

Theorist | THE WORLD | MEDIA/TECH | HUMANS
Thibodeau | Physical object | Logical object | Conceptual object
McGann | Documentary | Graphical | Linguistic, Rhetorical, Semiotic, Social
Kirschenbaum | Forensic Materiality | Formal Materiality |
Drucker | Physical/Substantial | Material | Interpretational/Insubstantial

It can be noted that some dimensions, or categories, lie in between different levels, and that formal materiality encompasses a large space that includes media and humans, and thus a spectrum of dimensions. This is a good illustration of the computer’s powerful position due to its ability at formal modeling, which gives it comprehensibility. This model will be a major guideline in the following parts of this research.

The Impasse of E-Book Design

In the early 1990s, after stressing the fact that the computer needs different conventions from those for print, Jay Bolter made this announcement:

The electronic book must instead have a shape appropriate to the computer’s capacity to structure and present text. Writers are in the process of discovering that shape, and the process may take decades, as it did with Gutenberg’s invention. The task is nothing less than remaking the book. (Bolter 2000, 3)

The book, like almost all cultural and social forms, is being remade, and this remaking is indeed a process of discovery. We might disagree on the evaluation of a certain e-book interface, but we can easily agree that it is not the ultimate e-book. What this also indicates is frustration with current results, especially as compared to the early hype. This might be due to technical limitations. Our task, though, is to point out the underlying models and paradigms that ultimately guide the technical choices. To bypass the current impasse and discover a new form for the e-book, an alternative approach is required.

The conclusion of the previous section about emergent materiality is one starting point. A general guideline, therefore, is the acknowledgement of the multidimensionality and heterogeneity of textual forms, especially the book. The traditional book has a “conceptual, virtual or phenomenal existence” (Drucker 2009, 169). Similarly, it creates, alongside its physical space, a “conceptual space in the minds of writers and readers” (Bolter 2000, 35). The page, for example, should be seen as “a complex series of marked and unmarked space” (McGann 2004, 176). We cannot deal with the book as a monolithic, one-dimensional form.

The paradigms that have handicapped e-book possibilities need to be revised. A major one is striving after perfect emulation of the traditional book. As a result, “electronic presentation often mimics the kitschiest elements of book iconography, while potentially useful features of electronic functionality are excluded,” and e-books are simply designed “to simulate in flat-screen space certain obvious physical characteristics of traditional books” (Drucker 2009, 166–7). In addition, the belief underlying perfect emulation – that simulation is a simple and complete reproduction – is fraught with misapprehensions about computation and materiality, especially about formal modeling, which forms the basis of simulation.

Understanding the evolution of books in the context of writing technologies is also a necessary step towards a better model for designing e-books free from misconceptions. The e-book belongs to a cluster of e-hyphenated genres which, as their names show, maintain the connection with old forms, and this has brought these old forms into new light. Now we understand the book as a technology, and as a historically situated form. As Landow says, the advent of hypertextuality has enabled us “to see the book as unnatural, as a near miraculous technological innovation and not as something intrinsically and inevitably human” (Landow 2006, 25). This view – which can be termed historical, historicist, or, following Foucault, archaeological – has paid off, and it builds on previous attempts at understanding language technologies, especially writing – for example, Walter Ong’s study of orality and literacy discussed earlier. Writing, and then print, as Ong shows, shaped the evolution of art and literary genres and contributed to the rise of modern science, in addition to their role in forming the reading activity itself and bringing about “a clientele of reading public” (Ong 1982, 135, 158). With this realization, it is easier to see why the book is a “near miraculous innovation.” However, as Ong’s argument demonstrates, the book’s comprehensive and massive impact is inseparable from the technological and material specificity of writing and print.

As Landow’s previously quoted comment shows, computer technology has exposed the technological nature of books, which resonates well with McGann’s theory about the algorithmic nature of texts. The computer is another addition to a cluster of technologies, each of which has its own possibilities and limitations. The shift from one technology to the next can be a form of remediation, to use Bolter’s term. Sometimes we tend to overstate the impact of the latest technology. Thus, Hobbes reminds us that “print is only a phase in the history of textual transmission, and that we may be at risk of over-stating its importance” (McKenzie 1984, 62). The same can be said about the computer, especially in terms of seeing it as radicalizing the nature of texts, an assumption we should be cautious about.

These language technologies are supposed to mark and encode natural language, in McGann’s terms:

Print and manuscript technology represent efforts to mark natural language so that it can be preserved and transmitted. It is a technology that constrains the shapeshiftings of language, which is itself a special-purpose system for coding human communication. Exactly the same can be said of electronic encoding systems. In each case constraints are installed in order to facilitate operations that would otherwise be difficult or impossible. (McGann 2004, 9)

Writing, and therefore its different types of impact on the word, thought, and consciousness, evolved with each technological modification. This is why Ong differentiates between the overall impact of writing and that of print: writing “moves speech from the oral-aural to a new sensory world, that of vision” (Ong 1982, 135, 85), while “print both reinforces and transforms the effects of writing on thought and expression” (Ong 1982, 117). Furthermore:

Print situates words in space more relentlessly than writing ever did. Writing moves words from the sound world to a world of visual space, but print locks words into position in this space. Control of position is everything in print. (Ong 1982, 121)

Though each of the language technologies has its own uniqueness, the computer still poses a challenge, and it is safe to consider it a special case. The following generalized classification, which fits the computer easily alongside other “techniques of writing,” should be reconsidered: “This new medium [the computer] is the fourth great technique of writing that will take its place besides the ancient papyrus roll, the medieval codex, and the printed book” (Bolter & Grusin 2000, 6). There is no problem in classifying the computer as a writing technology, but we need to affirm that the computer is at least a unique technology and, in this sense, does not perfectly fit into this teleological rendition of writing techniques. While the roll, the codex, and the printed book are all inscription technologies, invented for the sole purpose of technologizing the word, the computer is only marginally a writing technology. Because the computer’s uses, unlike print’s, are diverse and not centered on writing, its effects are not simply a matter of “reinforcing” in the way print has reinforced writing. This is another reason to discard perfect emulation. Being used for writing – and reading – is a byproduct of the computer’s complex mechanism of operation and its susceptibility to different uses, exemplified by formal modeling.

Like all cultural forms, books serve, and reflect, dominant traditions of knowledge production and circulation. Drucker refers to the scholastic tradition as the context for the invention of many features that have become traditional and iconic. She states that these “formal features have their origin in specific reading practices,” and that they are “functional, not merely formal” (Drucker 2009, 171). This brings us to the conclusion that perpetuating forms means preserving, albeit indirectly, certain practices and conditions of knowledge production. The implication here is that the book and its features are multi-dimensional, and we cannot be preoccupied with one or a few dimensions at the expense of others. A comprehensive, and hybrid, model is needed.

To further illustrate some of the previous points and to show some of the complications of migrating from one medium to another, let us take the page as an example. The page is a space, both physical and conceptual, where words are locked and controlled. Its rectangular shape is basically the outcome of engineering considerations related to efficiency, economy, and usability, in addition to features related to alphabetic writing like linearity. The page is also a unit of measurement and a tool for navigation; it marks a specific, measurable point in the whole artifact (through its number). It also, individually and collectively, serves as a lever for navigating within the artifact through flipping. The rectangular shape, together with the material and physical qualities of paper, allows all these functions. The attempt at simulating the page digitally is based on the misconception that form is ideal and can be abstracted from its material condition – that a paper page can simply be modeled on screen. The resulting rectangular space that represents a page, whether created through photocopying, scanning, or simulation, like pages in Word or other word processors, simply CANNOT function as it does in print: “The replication of such features in electronic space, however, is based on the false premise that they function as well in simulacral form as in their familiar physical instantiation” (Drucker 2009, 173). Of course, this is different from claiming that the same functions cannot still be achieved in the digital medium – again, with a certain acknowledged loss. We need, following Drucker, to differentiate between functionalities and forms, with the latter being reproducible. This is why scrolling, for example, which achieves the functionality of navigating, has been the more plausible choice for online texts, whether the page space is simulated or not.

The previous insights and conclusions should be translated into guidelines and then technical choices, which collectively comprise the alternative approach. Johanna Drucker’s suggested approach can be a good starting point:

1) start by analyzing how a book works rather than describing what we think it is, 2) describe the program that arises from a book’s formal structures, 3) discard the idea of iconic metaphors of book structure in favor of understanding the way these forms serve as constrained parameters for performance. (Drucker 2009, 170)

The first two steps can be achieved via the theoretical model for materiality I proposed previously. A description of the book’s functionality and formal structure can be difficult if taken in a generic sense that includes all books. This is a reason to be specific.

Specificity is thus another guideline that can be added. Acknowledging all the complications of migrating to the digital should lead us to be specific. In other words, because designing an e-book that perfectly emulates the traditional book – one with the ability to perform all of the latter’s complex functions and uses – is not possible, we had better be more specific. This goes in line with Catherine Marshall’s sound conclusion at the end of her study of electronic books:

The lesson to take away from these examples is not to throw up our hands in frustration and limit development to building the perfect paper emulator; there would be little point in that … instead of developing the perfect paper emulator, we can develop capabilities based on our understanding of specific disciplinary practices, specific types of reading, and specific genre of material. We can tame the vagaries of our imaginations by checking in with our readers. (Marshall 2010, 148)

The target will be specific and independent “capabilities.” This would also encourage targeting those capabilities specific to the digital medium – in this case, capabilities beyond those of the book. This requires re-aligning each of these capabilities with the textual dimensions to see how the shift influences them.

This alternative approach can also be an opportunity to rethink the social role of the book and its genres. Marshall makes a similar point while citing Dominick in the context of discussing the genre of textbooks:

The important lesson to take away here is that there is little point in carrying an unsuccessful genre forward from print to digital. Rather, this transition might provide exactly the right opportunity to rethink the form and its social role. As he winds up his dissertation, Dominick predicts that in two generations, textbooks as we know them will disappear completely. (Marshall 2010, 137)

Of course, the disappearance of textbooks that Dominick talks about refers to their gradual replacement by newly rising genres. It goes without saying that we cannot talk about new genres independently of a new set of pedagogical paradigms.

iNovel, the Prototype

The goal of the prototype3 is to apply the theoretical model that has been explained and to demonstrate its applicability. The choice of the novel genre is due to the fact that it represents a dominant print book genre, and it is a genre that “rose” with the advent of printing. I will start this section with a historical overview. I will also discuss a number of themes related to reading as an activity and to the book form in general.

In The Rise of the Novel, a classic on the genesis of the novel as a genre, Ian Watt points to a number of “favorable conditions” (Watt 2001, 9) that coincided to cause the novel genre to rise. What Watt mainly does is show the interconnectedness and interdependence between formal conventions and the wider social, cultural, and material context:

[B]oth the philosophical and the literary innovations must be seen as parallel manifestations of larger change – that vast transformation of Western civilization since the Renaissance which has replaced the unified world picture of the Middle Ages with another very different one–one which presents us, essentially, with a developing but unplanned aggregate of particular individuals having particular experiences at particular times and at particular places. (Watt 2001, 31)

Some of the factors he enumerated that contributed to the rise of the novel are “rejection of universals and the emphasis on particulars” (Watt 2001, 15), “increase in feminine leisure” (Watt 2001, 44), and “the decline of literary patronage and the rise of booksellers as the new middlemen” (Watt 2001, 52–3).

Most of these were translated into formal conventions like the use of “actual and specific time and place,” “descriptive and denotative” language and a “copious particularity of description and explanation” (Watt 2001, 29). In elaboration, Watt explains that:

What is often felt as the formlessness of the novel, as compared, say, with tragedy or the ode, probably follows from this: the poverty of the novel’s formal conventions would seem to be the price it must pay for its realism. The new literary balance of power… tended to favor ease of entertainment at the expense of obedience to traditional critical standards (Watt 2001, 49).

We can easily see the affinity with Walter Ong’s work on orality and literacy, despite the difference in their focus. Both link formal conventions to the wider context (historical, social, philosophical and technological). It is useful to try to combine ideas from both of them. Here is a table that does so.

Table 2

Contextual factors in the rise of the novel.

Historical/Epistemological Context/Cultural Parameters | Technological Possibilities/Constraints | Formal Features
Realism – rejection of universals/study of particulars/the rise of booksellers | Print codex/reading as an activity/writing as a solipsistic activity/repeated visual statement | Lengthy narrative
Actual and specific time and place | Closure | Linear climactic plot
Descriptive and denotative use of language | Control of space | Temporal sequence of events
Authentic account of the actual experiences of the individual | | Round characters

The main lesson we can learn is that a genre is created at the intersection of history and technology. Through his argument, Ong differentiates between orally based thought and chirographically and typographically based thought (Ong 1982, 36–48). The novel belongs to the second type, while a genre like the epic shows the dynamics of orality. Here is another table I prepared based on these ideas.

Table 3

The epic versus the novel.

Orally based thought | The Epic | Typographically based thought | The Novel
Additive | | Subordinate |
Aggregative | Episodic | Analytic |
Redundant | Repetitive | Accumulative/progressive | Linear plot
Conservative | | |
Close to the human lifeworld | Performed in front of an audience | | Read in isolation
Agonistically toned | Dogmatic | | Ideological/political
Empathetic and participatory | | Distanced |
Homeostatic | | |
Situational | | Abstract |

Both tables help us see the network of relations in which a genre is born. The lesson to be learned is that migrating a genre from one technology to another entails a break in this network. The digitization of a novel from the 19th century includes a break with several elements from the original network in which the work was produced. Migration of texts, which remains a necessity, is better thought of as a process of translation with a certain amount of loss and gain. The best strategy is to acknowledge the shift in its different levels (historical, philosophical, and technological) and do that systematically and self-consciously.

Basic Facts about Reading

In this part I will be drawing on Catherine Marshall’s up-to-date study of the e-book, Reading and Writing the Electronic Book. As she makes clear, an e-book “can refer to hardware, software, content prepared to be read on the screen, or a combination of the three” (Marshall 2010, 33). This validates the previous comment about reading as taking place in an ecosystem of devices. Because reading is becoming more diverse and multifarious, the definition of the e-book is not stable. It can simply be a PDF, a Kindle file, or software for browsing documents.

When we think of novels, we tend to think of immersive reading as the prototypical mode. But reading is a diverse activity with multiple forms and objectives. Marshall provides the following table illustrating the different types of reading (Marshall 2010, 20).

Table 4

Types of reading.

Type | Characterization
Reading | Canonical careful reading. The reader traverses the text linearly. The aim is comprehension.
Skimming | Faster than canonical reading. Traversal is still linear, but comprehension is sacrificed for speed. The aim is to get the gist of the text.
Scanning | Faster than skimming. Traversal becomes non-linear; the reader looks ahead and back in the story. The aim is often triage or to decide on further action.
Glancing | Pages are turned very quickly; the reader spends almost as much time turning pages as looking at them. The aim is to detect important page elements (e.g. beginnings and endings of articles, photos or figures, headings) until something holds sufficient interest to transition to another type of reading.
Seeking | The reader scans quickly for a particular page element (e.g. proper nouns) with an aim orthogonal to full comprehension.
Rereading | A meta-type included in the table as a reminder that any type of reading may occur multiple times.

Another classification of reading is that of Heyer, summarized by Vandendorpe in his discussion of reading on screen: “Heyer proposes three different modes of reading or gathering information, based on metaphors borrowed from the ways our ancestors gathered food: grazing, browsing, and hunting” (Vandendorpe 2008, 11). If we could point to an emerging type or types of reading among these, it would definitely be browsing and hunting. In this sense, a website is most comparable to a magazine. Vandendorpe adds that:

…the hunting mode is also becoming fairly common due to the availability of answers to any given question a user might happen to think of. In the future, if this trend continues, the novel as a literary genre could well become an endangered species, despite its long history. (Vandendorpe 2008, 43)

The claim about the novel as an endangered genre should not concern us here. The significant claim remains that the hunting and browsing modes, which are generally shorter and more pragmatic, are becoming the norm, mainly due to the new technology for reading.

Besides its different types, reading as an activity requires a number of functionalities usually embedded in the reading platform/device. These are annotation; navigation, with its two sub-categories of moving and orienting; clipping; and bookmarking (Marshall 2010, 51–67). Instead of viewing them as rigid forms, these functionalities are best thought of as “corollary to specific text genres and activities” (Marshall 2010, 51).

The Interface of iNovel

The prototype I am going to describe here is still under development and has not been programmed or tested yet. However, it will serve the present purpose of suggesting specific ways to put the principles discussed so far into use. Specifically, the main objective is to reach a form for digitizing novels, and a platform for reading them, that seeks to explore the possibilities of the digital medium without being limited by the urge to emulate the print codex. To this end, I will use Johanna Drucker’s suggestion: approaching the book as a program that creates a performative space for the activity of reading, and viewing forms as “constrained parameters of performance” (Drucker 2009, 170) rather than as iconic molds. The different forms of the codex book (layout, pagination, table of contents), besides its material character (which emerges as a result of the readers’ interaction with the physicality of the book), are meant to facilitate a specific form or, to use the phrase I prefer, “ritual of reading.” The reader ends up performing this ritual by following the parameters of the form. The end product can be described as a program. Starting with the codex book, the program behind it can be described as follows: creating a performative space by “carrying,” storing, and organizing the text, and by making the reading activity practical through easy navigation and orientation. In addition, the book as a logical object serves the purpose of simultaneously presenting an argument in logical sequence and making it debatable.
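
To make the idea of the book as a program more concrete, here is a minimal sketch in TypeScript (a language chosen only for illustration; all names are hypothetical and nothing here describes the actual implementation) of the codex program just outlined: ordered storage of the text plus navigation and orientation as constrained parameters of performance.

// A minimal, hypothetical sketch of "the book as a program": the codex
// carries, stores, and organizes text, and makes reading practical
// through navigation and orientation.
interface CodexBook {
  title: string;
  pages: string[];          // the carrier: ordered storage of the text
}

interface ReadingState {
  currentPage: number;      // orientation: where the reader is
}

// Navigation: turning a page is the codex's basic parameter of performance
// (compare the <TurnPage> pseudo-code in Table 5 below).
function turnPage(book: CodexBook, state: ReadingState, delta: 1 | -1): ReadingState {
  const next = Math.min(Math.max(state.currentPage + delta, 0), book.pages.length - 1);
  return { currentPage: next };
}

// Orientation: page numbers answer "you are here" / "you have read so far".
function orientation(book: CodexBook, state: ReadingState): string {
  return `Page ${state.currentPage + 1} of ${book.pages.length}`;
}

// Usage: the reader performs the ritual by following the parameters.
const book: CodexBook = { title: "Hard Times", pages: ["Chapter I...", "Chapter II..."] };
let state: ReadingState = { currentPage: 0 };
state = turnPage(book, state, 1);
console.log(orientation(book, state)); // "Page 2 of 2"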

The next step is to look for the functionalities behind the familiar forms in order to envision alternative ways of achieving them. The following table lists the different forms and their corresponding functionalities in the codex novel.

Table 5

Forms and functionalities of the codex novel.

Form | Functionality | Dimension concerned/type of object/parties concerned | Alternative way to achieve it | The code involved (translated into pseudo-html)
Table of contents | Mapping of content, guidance into the text, non-linear navigation | Rhetorical, graphical, semiotic, logical | Links and nodes, 3D map of content | <work> <chapter1 href="work:chapter1"> <section1> <segment1> </segment1> </section1> </chapter1> <chapter2> <section1> … </work>
Page numbers | Orienting | Graphical, logical | Slider | <PageNo alt="You are here" "You have read so far">
The page (turning) | Moving, navigating | Physical | Scrolling, automatic flow of content | <TurnPage> x = x + 1 </TurnPage>
The page (material unit) | Organizing, content carrier | Physical, graphical, linguistic | A block of the screen, a narrative or thematic unit | <work> <chapter> <page> <paragraph> </paragraph> </page> </chapter> </work>
Paragraphs | Content carrier, organizing the message | Graphical, linguistic | A horizontal line, a block of text united thematically or narratively |
White spaces | Organizing space, defining units | Physical, graphical | |
Cover | Orienting, introducing | Graphical, logical | Loading page |
Title page | Introducing, defining | | Webpage title | <title> <author> <genre, content>
Chapters | Organizing content/message | Graphical, logical | Thematic or narrative divisions |
Titles | Introducing, defining | Logical, linguistic | |
Font (type, size) | | Graphical | |
Lines | Organizing content | Graphical, linguistic | |
Level of language | | Linguistic, social | |
The book as a physical object | Individuality, character, unity | Physical | The interface |
The book as a work | Individuality, unity | Logical | |
Words, sentences, the collective linguistic text | Carrying content, communicating messages | Linguistic | |
Plot | | Logical, semiotic | |
Characters | | Logical | |
Setting | | Logical | |
Narrator | | | |
POV | | | |
Themes | | | |

The table includes a variety of forms, from those related to layout and materiality to those related to genre. I have left the classifying and grouping of these elements to the third column, which specifies the textual code each of them relates to. The table is meant to be neither exhaustive nor strictly technical, as some items or their designated functionality might be controversial. The purpose is to explain, with concrete examples, the idea of the book as a program and of forms as parameters of performance. The table also helps expose the problematic nature of emulating or simulating print forms while disregarding the functionalities behind them.

As shown in the table, I propose some alternative ways to perform the same functionalities digitally and describe the program behind some of them in pseudo-code. By this, I try to illustrate the program, or the group of coded events (“parameters of performance”), behind each form. The information in the table is necessary, but reaching new forms always requires a great deal of experimentation, similar to the experimentation that surely took place before the print codex took its final form. This urge towards experimentation is embraced in the prototype.
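
As one example of how a row of the table could be elaborated beyond pseudo-code, the following hypothetical TypeScript sketch treats the table of contents as “links and nodes”: a nested work structure from which a flat map of content can be generated for non-linear navigation. The structure and every name in it are assumptions made for illustration only.

// A hypothetical elaboration of the first row of Table 5: the table of
// contents re-imagined as links and nodes rather than a printed list.
interface TextNode {
  id: string;               // e.g. "chapter1" or "segment1"
  label: string;            // what the reader sees in the map of content
  children: TextNode[];     // chapters contain sections, sections contain segments
}

// Build a flat list of links from the nested work structure, so that any
// node can be jumped to directly (non-linear navigation).
function collectLinks(node: TextNode, prefix = ""): { href: string; label: string }[] {
  const href = prefix ? `${prefix}/${node.id}` : node.id;
  const own = [{ href, label: node.label }];
  return own.concat(...node.children.map(child => collectLinks(child, href)));
}

// Usage with a tiny hypothetical work.
const work: TextNode = {
  id: "work", label: "Hard Times",
  children: [
    { id: "chapter1", label: "The One Thing Needful",
      children: [{ id: "segment1", label: "Gradgrind's classroom", children: [] }] },
    { id: "chapter2", label: "Murdering the Innocents", children: [] },
  ],
};
console.log(collectLinks(work).map(l => l.href));
// ["work", "work/chapter1", "work/chapter1/segment1", "work/chapter2"]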

Another starting point for the prototype is a different conception of the reading activity, one more akin to the digital medium. The print codex served, and also created, a certain ritual of reading. The computer, similarly, has its own ritual of reading on screen, closer to the browsing and hunting modes, as previously noted. The interface of iNovel is meant to serve this mode of reading – browsing and hunting – by creating a performative space that encourages it. The reader is expected to browse and navigate short segments of text one at a time rather than engage in long stretches of silent reading.

The general view of the interface of iNovel is shown in Figure 1. The interface has different sections, each serving a purpose. The central space is reserved for the text, which can be viewed in four formats: iFormat, PDF, plain, and experimental. The video clip icon on the bottom right corner becomes active when there is a video clip available for the section being read, usually a clip from a movie adaptation. On the left, there is a section entitled “Characterbook” which will list the characters, and on the right a space is reserved for students’ notes and annotations, which will be shared among them. There are two timelines, Historyline and Storyline, located above and below the text. The icon at the bottom right corner opens the iCritic tools window.

Figure 1

iNovel general interface.
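
As a rough illustration of how the sections shown in Figure 1 might hang together in code – a sketch only, in TypeScript, with every field name hypothetical and no claim about the actual implementation – the interface can be thought of as a single state object whose parts correspond to the areas just described.

// Hypothetical top-level state for the iNovel interface sketched in Figure 1.
type TextFormat = "iFormat" | "pdf" | "plain" | "experimental";

interface INovelState {
  format: TextFormat;             // central reading space
  currentSegmentId: string;       // which iPage/segment is displayed
  videoClipAvailable: boolean;    // activates the clip icon for the segment
  characterbook: string[];        // character names listed on the left
  sharedNotes: string[];          // students' shared notes on the right
  historylinePosition: number;    // orientation on the timeline above the text
  storylinePosition: number;      // orientation on the timeline below the text
  iCriticOpen: boolean;           // whether the iCritic tools window is open
}

// Example initial state for Hard Times.
const initialState: INovelState = {
  format: "iFormat",
  currentSegmentId: "book1/chapter1/segment1",
  videoClipAvailable: false,
  characterbook: ["Thomas Gradgrind", "Sissy Jupe"],
  sharedNotes: [],
  historylinePosition: 0,
  storylinePosition: 0,
  iCriticOpen: false,
};
console.log(initialState.format); // "iFormat"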

To illustrate with a real example, Figure 2 shows the interface with Charles Dickens’ Hard Times as the active text. The experimental tab inside the central section allows students to choose from among a number of experimental formats. One of these might be having the text flash one word at a time, or appear as groups of lines separated horizontally, as shown in Figure 3.

Figure 2

iNovel interface with Hard Times as the active text (Historyline adapted from David A Purdue).

Figure 3

The Experimental Tab.

Other formats can be added or imagined. The experimentation will help students become conscious of the graphical aspects of texts and of the fact that units like lines, paragraphs, and pages are not ideal or transcendental. Such unusual presentation of the text will also help demonstrate the rearrangement process among the different dimensions or layers of the text – for example, the corresponding shift in the documentary and auditional levels. A button with the command “reveal code” could be added to iFormat and/or the plain text or the PDF version. The idea here is to reveal the deeper layers, much like the command that reveals the source code of HTML pages. The code here can refer to all the active but invisible codes, like those related to genre, graphics, layout, etc.
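
One way the word-at-a-time experimental format could be sketched – hypothetically, as a simple timer-based scheduler in TypeScript, without any claim about the planned implementation – is shown below; it makes visible how the usual graphical units of the page disappear.

// A hypothetical sketch of the "one word at a time" experimental format:
// the segment is broken into words and shown sequentially, so that the
// usual graphical units (line, paragraph, page) visibly fall away.
function flashWords(
  segment: string,
  show: (word: string) => void,   // e.g. render the word in the central space
  wordsPerMinute = 200
): void {
  const words = segment.split(/\s+/).filter(w => w.length > 0);
  const delayMs = 60000 / wordsPerMinute;
  words.forEach((word, i) => {
    setTimeout(() => show(word), i * delayMs);
  });
}

// Usage: print the opening of Hard Times one word at a time to the console.
flashWords("Now, what I want is, Facts.", word => console.log(word), 300);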

The “Characterbook” lists characters in the manner of Facebook, and each of them has his or her own profile. Those who are involved in the section being read will have a small green dot as a sign of their activity. Figure 4 shows the starting profile of Thomas Gradgrind. As the example shows, the character profiles start with minimal information and remain editable so that students can add information as they progress through the novel. Presenting characters in this way will make reading, and interacting with the characters, more enjoyable.

Figure 4

Sample Character Profile.
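
A character profile of the kind shown in Figure 4 might be modeled as a small editable record – a hypothetical TypeScript sketch, assuming a flag for characters active in the current section and a list of student-added facts.

// Hypothetical data model for a Characterbook entry.
interface CharacterProfile {
  name: string;
  activeInCurrentSection: boolean;   // drives the small green dot
  facts: string[];                   // starts minimal; students add to it
}

// Students extend a profile as they progress through the novel.
function addFact(profile: CharacterProfile, fact: string): CharacterProfile {
  return { ...profile, facts: [...profile.facts, fact] };
}

// Usage: Thomas Gradgrind's starting profile, then a student-added fact.
let gradgrind: CharacterProfile = {
  name: "Thomas Gradgrind",
  activeInCurrentSection: true,
  facts: ["A man of facts and calculations."],
};
gradgrind = addFact(gradgrind, "Runs the model school in Coketown.");
console.log(gradgrind.facts.length); // 2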

The pagination with “iPage” is based on an uneven division of the text into narrative and/or thematic sections, which are usually short, ranging from one or a few lines to a paragraph or two. The presentation of the text in each iPage varies depending on the nature of the text, whether it is narration, description of a place, or dialogue. This is meant to go beyond the imitation of print and to make reading the novel closer to browsing, because readers engage with a short segment of the text at a time and the variety of the presentation breaks the flow of the text. In addition, styles can vary in presenting segments of the text inside each iPage. For example, the line “The emphasis was helped” is repeated three times in the opening of Hard Times; this can be rendered by keeping the line on screen while shifting the text that follows it. This might serve as a good example of how the different dimensions of a text (documentary, linguistic, auditional) work together towards a certain effect. In this context, let us remember Hayles’s note that “the materiality interacts dynamically with linguistic, rhetorical, and literary practices to create the effects we call literature” (Hayles 2002, 31). It is also safe to assume that the effect created by the iFormatting in the previous example is a modified version of the original, due to the material shift. The iFormat tries to include multimedia in a creative way by making them tools in the narrative presentation of the text, sometimes in real time. For example, the image of the classroom can be revealed gradually with the flow of the text that includes the narrator’s description of the scene. Another example is the dialogue between Gradgrind and Sissy, which can be presented in a comics-like manner.
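
The iPage division could be represented as an ordered list of short segments, each carrying a presentation hint – again a hedged TypeScript sketch with hypothetical fields, not a specification of the final iFormat.

// Hypothetical model of iPage segmentation: short narrative or thematic
// units, each with a hint about how it should be presented.
type Presentation = "narration" | "description" | "dialogue" | "comic" | "anchored-repeat";

interface IPage {
  id: string;
  text: string;
  presentation: Presentation;
  anchorLine?: string;   // a line kept on screen while the following text shifts
}

const openingPages: IPage[] = [
  { id: "1.1.1", text: "Now, what I want is, Facts.", presentation: "narration" },
  { id: "1.1.2",
    text: "The speaker's square wall of a forehead...",
    presentation: "anchored-repeat",
    anchorLine: "The emphasis was helped" },
];
console.log(openingPages[1].anchorLine); // "The emphasis was helped"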

Both timelines provide orientation and background knowledge. They allow information to be presented visually, which aids absorption. The Historyline is designed to help students contextualize the work historically and in relation to other works by the author. The Historyline in the image above is adapted, but it can be designed in countless ways. Its general form contains two sections, one about the author and another about the period, with the work in focus highlighted, as the example shows. The Plotline, on the other hand, has different objectives, as shown by the sample segment in Figure 5.

Figure 5

Plotline section.

The Plotline does different things: first, it serves as a table of contents by containing links to the different chapters and subchapters and by being navigable; second, it serves as an orientation tool by showing how far the reader has gone in the text; third, it helps readers keep track of the events in the story and see the interrelations among them by including shortcuts to events, characters, and places; and fourth, it contributes to making the reading activity exciting by introducing suspense. As shown in the example, the events and details on the timeline unfold gradually, and there are hints at major events or twists coming up in the narrative in the form of words or icons (a decision influencing a character, in the example).
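
A Plotline entry might then be sketched as follows, in TypeScript, with hypothetical fields covering its four roles: a navigation link, an orientation flag, a summary for tracking events, and a suspense-creating hint that is revealed before the event itself.

// Hypothetical model of a Plotline entry: it doubles as a table of contents
// (link), an orientation device (read flag), and a source of suspense (hint).
interface PlotlineEvent {
  chapterId: string;        // link target for navigation
  summary: string;          // the event as shown on the timeline
  read: boolean;            // how far the reader has gone
  hint?: string;            // a teaser for an upcoming twist, revealed gradually
}

// Only events the reader has reached are fully unfolded; later ones show hints.
function visibleLabel(event: PlotlineEvent): string {
  return event.read ? event.summary : (event.hint ?? "…");
}

const plotline: PlotlineEvent[] = [
  { chapterId: "book1/ch2", summary: "Sissy fails to define a horse.", read: true },
  { chapterId: "book1/ch3", summary: "Gradgrind finds Louisa and Tom peeping at the circus.",
    read: false, hint: "A decision that will influence a character" },
];
console.log(plotline.map(visibleLabel));
// ["Sissy fails to define a horse.", "A decision that will influence a character"]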

When students click on the iCritic icon, they open the window shown in Figure 6. There are four icons; the first one allows students to see the subtext, that is, the underlying themes behind a certain segment of the text (as shown by the example in Figure 7). Presenting the analysis in such a manner is a visualization of the idea of subtexts.

Figure 6

iCritic tools.

Figure 7

Subtext function (the analysis is adapted from SparkNotes).

The second icon, “find articles,” should allow students to find articles about the novel in general or about a specific part of it. They should be able to copy and paste, or post a link to, such an article in the Notes section of the interface. The “ask/answer q’s” icon opens a space where students post questions, browse existing questions and answers, and choose to answer some of the posted questions. In short, this is meant to be an interactive space for discussion.
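
The shared question-and-answer space could be sketched as a simple store of posts – a TypeScript sketch with hypothetical names throughout, intended only to make the idea of an interactive discussion space concrete.

// Hypothetical model of the "ask/answer q's" space: students post questions
// about a passage and answer each other's questions.
interface Question {
  id: number;
  segmentId: string;        // which part of the novel the question is about
  text: string;
  answers: string[];
}

class QASpace {
  private questions: Question[] = [];
  private nextId = 1;

  ask(segmentId: string, text: string): Question {
    const q: Question = { id: this.nextId++, segmentId, text, answers: [] };
    this.questions.push(q);
    return q;
  }

  answer(questionId: number, text: string): void {
    const question = this.questions.find(item => item.id === questionId);
    if (question) question.answers.push(text);
  }

  browse(): Question[] {
    return this.questions;
  }
}

// Usage.
const qa = new QASpace();
const q1 = qa.ask("book1/chapter2", "Why does Bitzer's definition of a horse satisfy Gradgrind?");
qa.answer(q1.id, "Because it consists entirely of facts.");
console.log(qa.browse()[0].answers.length); // 1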

The “analyze text” icon should allow students to pick a section and provide their interpretation of it. This part should work in the following way: the program stores the students’ different analyses along with the section of the text they address. Whenever there is a new attempt at analysis, the program, through a pop-up window, points to the previous analysis or analyses and asks for feedback on whether they agree or disagree, and so on; it should store this information as well. With the analysis of characters, for example, the program should ask for explanations, link them with stored information about other characters, and form new questions for the student. This is just a general framework. The premise here is the idea of partnership (a complementary division of tasks) and incremental intelligence (the machine becomes smarter with more interaction and feedback).
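
As a rough sketch of this partnership – hypothetical in every detail – the program could store each analysis keyed to a span of the text and, on a new submission, surface earlier analyses of the same span so that agreement or disagreement can be recorded.

// Hypothetical sketch of the "analyze text" workflow: store analyses per
// text span, surface earlier ones, and record agreement/disagreement so the
// system accumulates feedback over time ("incremental intelligence").
interface Analysis {
  student: string;
  segmentId: string;
  interpretation: string;
  agreesWith: number[];     // indices of earlier analyses the student endorsed
}

class AnalysisStore {
  private analyses: Analysis[] = [];

  // Returns earlier analyses of the same segment so the interface can ask
  // the student for feedback before saving the new one.
  submit(newAnalysis: Analysis): Analysis[] {
    const previous = this.analyses.filter(a => a.segmentId === newAnalysis.segmentId);
    this.analyses.push(newAnalysis);
    return previous;
  }
}

// Usage.
const store = new AnalysisStore();
store.submit({ student: "A", segmentId: "book1/ch5", interpretation: "Coketown embodies industrial uniformity.", agreesWith: [] });
const earlier = store.submit({ student: "B", segmentId: "book1/ch5", interpretation: "The description satirizes utilitarian sameness.", agreesWith: [0] });
console.log(earlier.length); // 1: student B is shown student A's reading first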

The search box is supposed to offer content-based search, i.e., searching for something beyond single words or phrases. This will be enabled by means of interpretive tagging: the HTML code of the text should include tagging of thematic and interpretive elements, which will make searching for them possible – for example, the following tags:

<motif></motif>
<symbolic-name></symbolic-name>
<satire></satire>
<social-comment></social-comment>
<caricaturing></caricaturing>
<description-of-character></description-of-character>
<narrator-interjection></narrator-interjection>
<socialist-overtone></socialist-overtone>

In addition to tagging, search should include the subtexts: the analyses of themes and motifs. In this way, the results will show both the analysis and the relevant text.
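
A minimal TypeScript sketch of such content-based search, assuming the interpretive tags above have already been attached to segments of the text (all names and data are hypothetical), might look like this:

// Hypothetical sketch of content-based search: segments of the novel carry
// interpretive tags, and a query matches tags and subtext analyses as well
// as the words of the text itself.
interface TaggedSegment {
  id: string;
  text: string;
  tags: string[];           // e.g. "satire", "social-comment", "motif"
  subtext?: string;         // attached thematic analysis
}

function search(segments: TaggedSegment[], query: string): TaggedSegment[] {
  const q = query.toLowerCase();
  return segments.filter(s =>
    s.text.toLowerCase().includes(q) ||
    s.tags.some(tag => tag.toLowerCase().includes(q)) ||
    (s.subtext ?? "").toLowerCase().includes(q)
  );
}

// Usage: searching for "satire" returns tagged passages even when the word
// itself never appears in the text.
const segments: TaggedSegment[] = [
  { id: "1.2.3", text: "Girl number twenty unable to define a horse!",
    tags: ["satire", "description-of-character"],
    subtext: "Satire of fact-based education." },
];
console.log(search(segments, "satire").map(s => s.id)); // ["1.2.3"]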

iNovel also allows the introduction of creative assignments. The different sections can remain editable. For example, students can be assigned characters and asked to complete their profiles, or to rewrite a certain section from a different POV and post it. The iFormat does not have to be complete; rather, a sample like the one cited here can be posted, and students can then work on the plain text and provide their own ideas about how it can be formatted. By doing so, they practically assume the role of reader/interpreter/writer.

Conclusion

In conclusion, the iNovel prototype serves as a tool that takes into account the complex factors at work when a text is migrated from print to digital, especially in terms of its multidimensionality, the interrelation between the materiality of the text and its interpretative possibilities, and the ritual of reading coded in the material form. The first part of this study has laid the ground for envisioning a better strategy than simple emulation of the print codex, as we came to understand the shift from print to digital in terms of a rearrangement of the reading space that creates a new performative space. The digital medium, moreover, provides a number of pedagogical possibilities, like interactivity, academic search, and note sharing (group work and study), that are helpful for educational purposes. Eventually, iNovel would make reading an entertaining activity for young readers and students in particular. However, further insight can be gained when the prototype is implemented and tested by users, which is something I hope to be able to do in the near future.

Notes

  1. The project started with my PhD dissertation entitled “iCriticism: Rethinking the Roles and Uses of the Computer in Literary Studies” 2013. [^]
  2. Reference to the individual works of these theorists is in the following part. [^]
  3. The prototype is being implemented by a grant from Philadelphia University – Jordan. [^]

Competing Interests

The author has no competing interests to declare.

References

Aljayyousi, Mohammad. (2013). iCriticism: Rethinking the Roles and Uses of the Computer in Literary Studies (Unpublished doctoral thesis). Indiana University of Pennsylvania.

Bolter, Jay David; Grusin, Richard. (2000). Remediation: Understanding New Media. Cambridge: MIT Press.

Drucker, Johanna. (1994). The Visible Word: Experimental Typography and Modern Art, 1909–1923. Chicago: The U of Chicago P.

Drucker, Johanna. (2009). SpecLab: Digital Aesthetics and Projects in Speculative Computing. Chicago: The U of Chicago P. DOI: http://dx.doi.org/10.7208/chicago/9780226165097.001.0001

Hayles, N. Katherine. (2002). Writing Machines. Cambridge: MIT Press.

Hayles, N. Katherine. (2005). My Mother Was a Computer: Digital Subjects and Literary Texts. Chicago: The U of Chicago P. DOI: http://dx.doi.org/10.7208/chicago/9780226321493.001.0001

Kirschenbaum, Matthew G. (2008). Mechanisms: New Media and the Forensic Imagination. Cambridge: MIT Press.

Landow, George P. (2006). Hypertext 3.0: Critical Theory and New Media in an Era of Globalization. Baltimore: Johns Hopkins UP.

Marshall, Catherine C. (2010). Reading and Writing the Electronic Book. San Francisco: Morgan & Claypool.

McGann, Jerome. (1991). The Textual Condition. Princeton: Princeton UP.

McGann, Jerome. (2004). “Marking Texts of Many Dimensions.” In: Schreibman, Susan; Siemens, Ray; Unsworth, John (eds.), A Companion to Digital Humanities. Oxford: Blackwell. DOI: http://dx.doi.org/10.1002/9780470999875.ch16

McGann, Jerome. (2004). Radiant Textuality: Literature after the World Wide Web. New York: Palgrave Macmillan.

McKenzie, D. F. (1984). Bibliography and the Sociology of Texts. Cambridge: Cambridge UP.

Ong, Walter J. (1982). Orality and Literacy: The Technologizing of the Word. London: Methuen. DOI: http://dx.doi.org/10.4324/9780203328064

Tanselle, G. Thomas. (1989). “The Nature of Texts.” In: A Rationale of Textual Criticism. Philadelphia: U of Pennsylvania P, pp. 11–38.

Thibodeau, Kenneth. (2001). “Digital Preservation Techniques: Evaluating the Options.” Archivi & Computer: Automazione e Beni Culturali 10 (2/01): 101.

Vandendorpe, Christian. (2008). “Reading on Screen: The New Media Sphere.” In: Schreibman, Susan; Siemens, Ray (eds.), A Companion to Digital Literary Studies. Oxford: Blackwell.

Watt, Ian. (2001). The Rise of the Novel. Berkeley: U of California P.