1. Democracy on the Internet, and the Growth of Control
Note: Like a reluctant second child forced to wear hand-me-downs that don't fit properly, the material in this essay has been adapted from its ideal medium -- the Internet -- to the print medium, which is less responsive to its referential and visual subject matter. The core of this paper was earlier presented in the ruthlessly linear format of a verbally delivered paper; here it is expanded to the only slightly less linear format of the printed academic word, which does at least add a kind of hypertext link in the footnote. What follows is an attempt to recreate some of the variety of reference available in a hypertext format: in addition to the normal links from text to footnote, and from footnote to extraneous material (printed or electronic), I have added a layer that I have called "hypernotes". These are the equivalent of a further page on the Internet, or a sidebar in a news story; they are short essays on related subjects that could be expanded into fuller discussions. It will become clear that the direction of my argument requires some rebellion against the restrictions placed on an author preparing a piece for publication in both print and electronic formats: part of my thesis is that the electronic format is continually being constrained by the limitations of print.
In the early, heady, days of hypertext, writers like George Landow argued that the electronic text would allow for a great democratizing of the process of communication, because it could be constructed by the reader both by contributing to it, and by creating a unique path through intricately linked hypertext nodes rather than a single linear path from page one to the end of the book. It was to be the medium that expressed the dynamic of poststructuralism, in all its difference, contingency, and self-reflexiveness. In the opening paragraph of his 1994 collection of essays, Hyper/Text/Theory, Landow claims for hypertext a central position in modern critical and literary theory: he manages to mention just about all the big names (Derrida, Barthes, Bakhtin, Foucault, and others) as he establishes his thesis: "The very idea of hypertext seems to have taken form at approximately the same time that poststructuralism developed, but their points of convergence have a closer relation than that of mere contingency, for both grow out of dissatisfaction with the related phenomena of the printed book and hierarchical thought" (Landow 1994).
The Internet seemed to be a triumphant demonstration of the accuracy of this insight: a medium where freely available information in various formats would be interlinked in new and powerful ways, extending the meaning of the "text" to include graphic, audio, and video materials. It was to be a medium of splendid democracy, where anyone with access to a computer was able to set up a site publishing his or her beliefs; in a vast cooperative venture, eventually whole libraries would be placed on line, providing unlimited access for all (Ess 1994). The machine would enhance human potential, providing a forum of communication and a resource for rapid access to and dissemination of knowledge. The Internet would become the ideal communications network which Jay Bolter described as: "a hypertext in which no one writer or reader has substantial control, and because no one has control, no one has substantial responsibility" (Bolter 1991: 29).
At least in the short term, some of this promise has been realized. The work of pioneering scholars is making resources available to us and our students; electronic mail has changed the nature of informal communication within all kinds of communities; groups that allow for "threaded" discussions have provided a forum for the interchange of all kinds of reputable and disreputable ideas; and the technology has made space available for the eccentric and the personal, with home pages populated by all kinds of oddities -- several students of mine looking for research resources on Shakespeare found a page of a family dog by that name. In some ways at least, the Internet can be seen as a medium that is flexible, varied, informative, infuriating: surely a reflection of humanity, if not yet of the humanities.
One key to this development has been the essential anarchy of the medium, a space where those contributing can say what they like, do what they like, even if most of us do not like what they like. But anarchy makes many people nervous, and some attempts to control this ideal space for communication are inevitable. The US Communications Decency Act of 1996, ruled unconstitutional by the Supreme Court, was perhaps the most wide-ranging attempt thus far to institute political control on the content of Internet sites. The desirability of this kind of control is a matter of legitimate debate, given the real and perceived relationships between communication, knowledge, and power; it is certainly the case that untrammeled communication has always been seen as subversive of authority, and we must expect authority to attempt to control it.
The language of the Communications Decency Act is interestingly revealing of the expectations of our current governmental structures when it comes to the relationship between adult responsibility, economics, and power: service providers were not to be prosecuted if they "restrict[ed] access to such communication by requiring use of a verified credit card, debit account, adult access code, or adult personal identification number". To be adult is to be in possession of credit. The kind of economic control implied here is becoming increasingly pervasive, in part because of the simple need for those publishing on the Internet to fund their activities. For many years it has seemed that the Internet was free, or almost free, especially for those of us privileged to have access to networks and equipment through our place of work. But even in the academic world, the need to answer to the bottom line has led to mechanisms by which those accessing the new medium are paying for the privilege, not only in the cost of connect time, but in the exposure to advertising, and in the number of publications that are asking for subscriptions or "pay per view" mechanisms. Familiar commercial pressures are shaping the Internet so that at times it appears to be turning into a kind of glossy magazine -- Chatelaine, PC World, Hustler, Travel West, depending on the content of the particular site.
An extension of the growing tendency toward economic control of various kinds is the closely related issue of copyright, and the legal challenges to what are seen as violations of intellectual property. In the UK, New Zealand, and various states in the US, there are legal challenges to the right of one site to link to another without permission; linking sites may eventually have to pay for the privilege. [Hypernote 1]
Quality content costs. Even for those who are trying to make academic materials freely available on the Internet, someone has to pay for the servers, the connections, the networking, the exhaustive business of data entry. And the means of financing these costs is increasingly moving from the generosity of our institutions to the traditional capitalist means of paying for printed materials, even if it means that some sites will soon look more like the Shoppers' Teleguide channel.
[H-1] There is an interesting paradox, worth exploring in a fuller essay: precisely at the moment when technology has made copying so easy as to be trivial, whether through the photocopy machine, or electronic cutting and pasting, the right to copy has become intensely vexed. It is a further twist that at the same time poststructuralist theory in particular is asking searching questions about the nature of authorship, and pointing out the degree to which all texts are contingent on other texts; techniques of sampling as a method of creating new forms, in both audio and graphic formats, are an illustration of the difficulty of determining both the nature and the rights of authorship. In the light of these challenges, it might not be difficult to construct a modest proposal to abolish copyright altogether.
A useful gateway to discussions about copyright is "Copyright and Fair Use", Stanford University, at <http://fairuse.stanford.edu/>. The National Initiative for a Networked Cultural Heritage (NINCH) fosters discussion and maintains an excellent page at <http://www-ninch.cni.org/>. On the legal challenges, see Kleiner 1997.
2. Control at the Level of the Pixel
Political and economic controls are not the only ones shaping the Internet. As the native language of the Internet -- HTML -- evolves, it is providing the means for a less obvious kind of control: the control over the way we as end users are being encouraged, or required, to view the contents of the sites.
A fundamental difference between the electronic text and the printed text is that the electronic text is dynamic, not only in the way the content can be changed at will by anyone with access to the digital version of the text, but in the way the electronic text is displayed visually. An obvious example is the way all word processors now provide the capability for changing font faces and sizes, as we are discovering from our students, who change the font to fit the number of pages required for an assignment. What is less obvious is the way that our word processors and other computer programs have indirectly imposed the limitations of the earlier medium of print on the new medium of electronic text: what we see on the screen is specifically geared to print output, and the more sophisticated the program, the more closely the display on the monitor approximates the way the document will appear when printed. This is the much vaunted feature we know as "WYSIWYG" (What You See Is What You Get). I have elsewhere pointed out that "what you get" is what you get on a printout, with the result that the image on the screen is every bit as limited to the medium of paper as a typewriter (Best 1995).
Since the purpose of most of our word processing is still to produce paper "hard copy," this is clearly an advantage, and those of us who have been using the computer as typewriter long enough to remember the arcane "dot" commands of early word processors are enormously relieved to be rid of them. But the material we view on the Internet is not usually designed for printing -- if users want to refer to the page again they will bookmark it or download it. The paradox is that the design of Web pages is increasingly being anchored to visual principles that are derived from print technology, and in the process they are making the Internet less flexible, less creative, and less responsive to individual needs or preferences.
Technology has always had the potential to reduce the personal and the varied texture of humanity. Nostalgically, we may regret the passing of the creations of the human hand as individually crafted artifacts stamped with the fingerprints of imperfection are replaced by machine-made, uniform objects; wormy organic food is replaced by the crisp product of a monoculture; handwriting is replaced first by the typewriter, then the wordprocessor, where we may suspect that our friends have sent us a form letter using mail merge to "personalize" it. It is hardly surprising that the technological has come to be associated with the impersonal, the less-than-human.
The electronic screen can offer a significant counterbalance to this trend in that it offers an opportunity for the recipient to become more actively involved in the process of reading than is possible from a fixed page. Not only can receivers control content through hypertext links, they can choose the visual format of display in their browsers, specifying default fonts, font sizes, colours and so on, suiting the display to their particular monitor, eyesight, or esthetic sense. Thus the potential is there for the expression of personality in a different way, as the receiver of the information rather than the sender/designer.
But does anyone actually customize the browser in this way? I have watched many of my students working on the Internet, and have seldom seen them even change the size of the browser window, let alone change the font. The designers of the major Internet browsers have made interestingly contrasting decisions here: Netscape Navigator, which is the most popular, requires that the user go deep into the menus to find "Preferences" and make changes to the default font and size in a complicated dialogue box, while Microsoft's Internet Explorer provides convenient buttons on the main toolbar for changing the font size of the display, if not the actual font face. This is not a trivial difference in emphasis. As soon as you change the font size or font face, you change the look of the page, and the designer of that page no longer knows exactly what it will look like. The difference between these two browsers is a reminder of the way that basic decisions of interface design can reflect ideology: Netscape provides several buttons on its basic toolbar that take the user directly to its own site, but none to change the look of the screen.
Designers, educated on the printed page, want certainty; they want us to look at a screen exactly as they have planned it. An Internet site at Microsoft discusses some of these issues in the process of introducing its range of standard fonts, using a new form of font description that will ensure the same typeface across different computer systems:
As a designer it's useful to know the typical size of sections of text you specify within a page. In printed pages this is relatively straightforward, but in Web pages there are various factors that will influence the physical size of an individual letter or word. These include the resolution of your monitor (and how it's set up under your operating system), [and] the Web browser's default font size chosen by the reader, as well the actual HTML size you specify in your pages using the FONT SIZE tag. 
The same discussion on the Microsoft site for OpenType touches briefly on two potential shortcomings of rigidly designed sites, though it is concerned more with the limitations of bitmapped images than with the specifying of font and size. It points out that the increasing use of images may rob the electronic text of much of its potential power:
Life is made more difficult for the visually impaired, especially when bitmap headings are used.
A bitmapped phrase cannot be increased in size; moreover, one of the many powerful applications of "machine readable" text -- that the computer can read it aloud for the visually impaired surfer -- is lost. Richard Bear comments, in a posting to the HUMANIST discussion group, that "pages produced by public institutions that do not provide descriptive text-only versions... are probably in violation of the Americans with Disabilities Act".
The Microsoft site continues, pointing out a further disadvantage of purely display-oriented sites:
It also becomes impossible for a site administrator to create an index based on subject headings.
The potentially powerful capacity of a conceptually structured text to be subjected to analysis by software is lost when the only information coded into the text is what it will look like, not why it is to look the way it does. The example here is the capacity of software to generate automatic tables of contents when headings are coded as generic headings, rather than presented as a graphic, or as bold text of a particular size and alignment; but the principle is a general and far-reaching one. In the world of the electronic text there has long been a battle between systems that mark up the text with logical tags and those whose tags give instructions on the visual representation of the text. So far as the practical use of HTML is concerned, the battle for logical tags has been all but lost. A test case concerns the pairs of tags that will cause most browsers to display the text in the same way: <em> </em> or <i> </i> for italic text; <strong> </strong> or <b> </b> for bold text. A quick check of the source code of a dozen major sites I visited while compiling information for this paper found none that used the logical tags.
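The contrast can be sketched in a few lines of HTML (the heading text here is invented for illustration):

```html
<!-- Presentational markup: records only how the text should look.
     Software scanning this source cannot tell a heading from any
     other large bold text. -->
<p align="center"><b><font size="5">The Growth of Control</font></b></p>
<p>A phrase marked for <i>italics</i> says nothing about why.</p>

<!-- Logical markup: records what the text is. A program can collect
     every <h2> into a table of contents, and a speaking browser can
     give <em> its proper stress. -->
<h2>The Growth of Control</h2>
<p>A phrase marked for <em>emphasis</em> carries its reason with it.</p>
```

Both versions look much the same in a browser; only the second remains useful to the software that might index, analyze, or read it aloud.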
The Microsoft site illustrates neatly another habit of designers determined to decide how their text will be viewed. The screen is divided vertically, by tables rather than frames on this occasion, and the width of the text is designed exactly to add up to the 640 pixels on the standard computer screen. If I choose to be cranky, and want to keep the window of my browser small so that I can work on more than one task at a time, I am forced into what must surely be the most inelegant and inefficient way of viewing text: the horizontal scroll, which turns the experience of reading into something close to watching a tennis match with frustration replacing excitement. The assumption that users will use their whole screen is so common now as to be the norm.
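The kind of layout described above can be sketched as follows; the pixel values are the conventional ones for a 640-by-480 screen, and the two-column division is illustrative:

```html
<!-- Fixed layout: the table is anchored to a 640-pixel screen.
     A narrower browser window forces the horizontal scroll. -->
<table width="640">
  <tr>
    <td width="160">navigation</td>
    <td width="480">body text</td>
  </tr>
</table>

<!-- The flexible alternative: relative widths reflow to fit
     whatever window the reader has chosen. -->
<table width="100%">
  <tr>
    <td width="25%">navigation</td>
    <td width="75%">body text</td>
  </tr>
</table>
```

Both forms were available to the designer; only the first guarantees that the page looks the same on every screen, and only the second respects the reader's window.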
So what? There is nothing obviously limiting in the choice of a representational tag over a logical one, but I would argue that there are two submerged choices at work, both of which are in a sense ideological: the imposition of patterns of thought from the older print medium, as I have already demonstrated, and, more disconcertingly, the choice by the providers of data to control rigorously the precise format of the display -- a familiar desire on the part of the owner to exercise power over the material provided. The newest versions of HTML, both official and unofficial, are providing codes for the definition of the width of the window and the graphics, tables, or frames within it in absolute terms by pixel. [Hypernote 2] And as soon as the code is provided the designers will use it. At our end of the data stream choice melts into air, and our dreams of expressing our own personality over the appearance of our screens evaporate. In a kind of paternalistic or corporate mentality in which the designer knows best, we end up not with WYSIWYG but with WYSIWWWYTS: What You See Is What We Want You To See. As users (or "consumers" as the rhetoric would have it) we send a small signal upstream -- a click of a mouse button over a link or part of a graphic -- and the designers send us a huge stream of data downstream as they provide us with a glossy image, complete with three-dimensional graphic effects, headings with obligatory drop shadows, animation, and a sound bite or two.
Even when site designers try to do something new, they seem destined to perpetuate some of the same rigidities of design. The home page for Coke tries to break out of the current clichés. It is a single image: on the left is an antique Coke urn, on the right a message scrawled (in the same bitmap image), wittily trumpeting the timeless universality of Coke compared with the postmodern contingency of Fanta, which changes flavour with culture.  If you realize that you are supposed to click on the urn, you will be taken to a page where the designer has once again decided that the only way to view is with a full screen; the margins are elegantly wide (forced by a triple nesting of the <blockquote> tag), and the effect is delightful -- if you choose to display it the way you are supposed to.
Visually, web pages like this make it seem almost as if we are sitting in our armchairs with a remote control, clicking at the screen to change the channel.
And that is no joke.
[Slide show, black screen]
Nothing on TV tonight? [pause; new page]
Tune in to what you're into [pause; new page]
WebTV (Internet for the rest of us)
WebTV is a revolutionary new way to access the Internet from your TV. You don't need a computer and there's no software to load. All you need is a television, a phone line, and a WebTV Internet terminal, and you're on the Internet. 
The mission statement for WebTV is laudable: "To make the internet as accessible and compelling for consumers as broadcast television is today."
Now with the WebTV Network Service, you can explore the Internet in your favorite room with your family and friends. Think about how much more comfortable this experience can be in your living room using a remote control. Don't believe it? Click on the remote to check out how. It's that easy. 
It is an irresistible image: the family cozily sitting on the couch in front of the TV set, surfing the Internet together. There is one nagging question that the publicity material does not answer: who holds the remote? The business of choosing which channel to watch when there is more than one potato on the couch is complicated enough, fraught as it is with overtones of domestic power relationships; it is hard to imagine the kinds of discussions that would be needed to decide which link to click on from an average Web page.
The designers of WebTV are astute, however, in analyzing the current trends in Web design. They comment: "Up to now, most content developers have been designing pages very similar to a hard copy book or magazine" -- precisely the point I have been making. They also provide some acute comments on the likely preferences of an audience accustomed to the TV rather than the computer screen:
Use sound to make your page interesting. WebTV provides tags to let you add background music or theme music to your contents. By incorporating music into your content, you can provide an experience more like television ... Put the most important information on the first visible screen. When was the last time you scrolled to see your favourite television show? Television audiences are not accustomed to scrolling, so they may not see information that you place below the first screenful...
Reduce the number of items on your page -- television audiences are used to looking at one focal point. Next time you watch any television show, notice that your eyes are always directed to one particular spot on the screen. Although your page won't have just one element that directs focus, you can design your page with fewer items and with the most important item so placed on the page as to draw the viewer's eyes to it.
WebTV, if it succeeds, will encourage further the development of sites that are designed to be looked at rather than read, in a further manifestation of a culture that puts a premium on visual rather than oral or verbal rhetoric.
I do not believe, however, that these developments are entirely negative. There is room on the Internet for the glossy and impersonal as well as more content-oriented and flexible sites. If WebTV does win an audience, there is no doubt that some of those who access the Internet via their TV will find more than sound and word bytes as they explore. Indeed, the advent of the cramped, low-resolution screen may cause some current designers to be less rigid in their use of font faces, fixed screen widths, and so on, since the TV browser will not be able to scroll horizontally at all. There is no reason why the Internet should not be home to as many different levels of visual presentation as it already is to other kinds of discourse.
One of the greatest strengths of the Internet as a medium is that, as bandwidth increases, it is increasingly capable of absorbing all the older media -- print, the visual arts, music, television. But instead of using the multimedia capabilities of the Internet to create forms that more fully represent the complex content of modern culture, the stress has so far been on eye-candy. The gloss-to-signal ratio is rising as the new browsers encourage more rigid structures, whether they are derived from the printed page or the TV screen. There is a strong argument that the electronic medium requires fundamentally different design principles from those of the more traditional media. Not only does the Internet offer all the advantages and challenges of the effective use of multimedia, but Net users read/look in a different way from the way they do when looking at a page or television screen: the revealing metaphor of "surfing" reminds us that the Internet user is a "hit and run" viewer, with an attention span often more transient than even the similarly labeled channel surfers of television. The challenge for the designer within the medium is to create new ways of orchestrating the dance between form and content such that the eye is offered more nutrition than candy.
There are some academic sites in the Humanities that are (in my view) falling into the trap of putting gloss before substance, and reducing the choice of the end users. Part of the problem of course is that humanists are not programmers, and HTML has become sufficiently complex that we must either rely on (that word again) WYSIWYG editing programs for HTML that decide for us the nature of the underlying code, or obtain funding for programmers -- who love to program, and who therefore create wonderful, technologically ornate sites where the technology, far from enriching meaning, actively works against it.
An earlier version of the US Shakespeare Globe site at the University of Illinois was a good example of form triumphing over content (it has since been elegantly streamlined). The opening screen was very charming: Shakespeare was shown initially in a portrait derived from the familiar Droeshout engraving from the First Folio; then his eyes moved, his eyebrows were lifted, and he started to have a thought balloon, which turned by stages into the Globe Theatre in its 1997 manifestation (minus the conglomeration of modern buildings that in fact crowd it on every side). This was surely a case where technology was not in service of content, since there is no evidence at all that the Globe or its stage was the result of Shakespeare's personal inspiration -- it was almost certainly the thought balloon of the Burbages.
The irony is that browsers are becoming more sophisticated in passing the innocently nicknamed "cookies" of information back and forth so that the designer of a site can learn precisely what kind of system a visitor is using to view data; there is thus no reason why advanced sites should not provide multiple ways of viewing the text -- more than the perfunctory offer of a "text only" alternative. In addition, the capability already exists for varying texts to be generated "on the fly" by the site, according to the expressed preferences of the user. In the context of my own work on the Internet Shakespeare Editions, it is my aim to make it possible for users to select varying levels of annotation when viewing a text: unannotated, lightly annotated, or full scholarly annotation. In a more complex example, a viewer looking for a text of Othello will be able to choose between a version based on the Quarto, a version based on the Folio, or a version with readings from each, highlighted in different colours to distinguish them. In each case the "base" text will be the same, so that any changes will need to be made in only one place.
If there is a moral to this tale, I would argue that if politics is too important to be left to the politicians, site design on the Internet is too important to be left to the designers and the programmers. If it is expressed with sufficient force, a counter-ideology can be sustained in at least a corner of the communication space that is the Internet. This counter-ideology might establish as a primary objective the creation of sites that allow for maximum flexibility in viewing the materials; we should not be so puritanical as to reject the attraction of visual rhetoric, the pleasure of ornament, or the power of multimedia to extend communication beyond the word, but, as in other civil forms of communication, the rhetoric should enrich the content, and should respect the viewer. We should exercise what influence we have as users by being cranky and sending notes -- firm but politely unincendiary -- to web designers of sites that restrict our choices.
Those of us developing materials for web sites should insist on intelligent tagging: relative font/table/frame descriptions rather than fixed, and logical rather than descriptive HTML wherever applicable; we should generate as many literary texts as possible in some form of the more fully conceptual meta-language SGML (Standard Generalized Markup Language), while at the same time being aware of the fact that SGML as used in the Humanities has been developed to allow for the intelligent tagging of printed books and written manuscripts, in its own way perpetuating some structures that are anachronistic in the electronic text. [Hypernote 3]
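In practical terms, the difference between fixed and relative font description is often a single character; a sketch in the HTML of the day (the heading text is invented for illustration):

```html
<!-- Fixed: size 5 regardless of what default the reader has set -->
<font size="5">A heading that ignores the reader's preferences</font>

<!-- Relative: two steps larger than whatever default the reader
     has chosen in the browser -->
<font size="+2">A heading that scales with them</font>

<!-- Better still, the logical tag leaves rendering entirely to
     the browser, and remains legible to indexing software -->
<h2>A heading that software can also index</h2>
```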
Much of our energy in the Humanities will of course be directed towards representing the printed page on the electronic screen, as we provide machine readable editions of standard works. And as academics working within a tradition that has made a virtue of preserving the past, this is as it should be. But if we wish fully to humanize the new technology of the Internet, we must make every attempt to discover how to use the new medium in ways that exploit its dynamic text creatively, recreating and transforming the past rather than merely preserving it. We need to learn how to write and design for the electronic world, taking advantage of its capacity to be both linear and lateral, fixed and contingent; we need to ensure that we do not require our sites to wear the hand-me-down technology of print; we need to set an alternative example to those who seek to chain the pixel to paper.
[H-2] In HTML, as extended by Netscape and Internet Explorer, various attributes of a table cell or frame can be expressed in either relative or absolute terms, as can font sizes. The available attributes of a frame are particularly inviting for those who wish to impose a single way of viewing material: not only can frames be defined in size by pixel, but the <frame> tag includes the accurately named attribute noresize.
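A minimal sketch of such a frameset (the file names are hypothetical):

```html
<!-- The left frame is fixed at exactly 200 pixels; noresize
     prevents the reader from dragging the border to adjust it,
     and scrolling="no" hides whatever does not fit. -->
<frameset cols="200,*">
  <frame src="menu.html" noresize scrolling="no">
  <frame src="content.html">
</frameset>
```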
The increasing number of sites that offer materials in Adobe Acrobat format is a further indication of the desire to transfer the page rather than to design the screen. In addition, the much-vaunted introduction of style sheets to HTML is likely to offer the designer even more power over the appearance and font faces of electronic pages. Inherited from the meta-language SGML (Standard Generalized Markup Language), style sheets are one of the many features of that language that are designed to provide consistency across platforms, including the most restrictive platform of all, the printed page. The debate between rigid and flexible display continues in the development of the specifications for style sheets: one goal promoted is that they "[support] presentation hints, not commands (make no guarantees)". However, the draft specifications include the capacity for defining "spatial" properties as well as "relative" properties in many elements, such as margins, indents, and font faces, thus once again providing programmers with the capacity to force control over the user. On SGML, see the next hypernote.
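The two tendencies are visible in the style sheet syntax itself; a sketch using CSS declarations (the particular values are illustrative):

```css
/* "Spatial" properties: the designer fixes the rendering in
   absolute units, overriding the reader's settings. */
body { font-family: Verdana; font-size: 10pt; margin-left: 48px; }

/* Relative properties: presentation hints that scale with the
   defaults the reader has chosen in the browser. */
body { font-family: sans-serif; font-size: 1em; margin-left: 10%; }
```

The language makes both available; which one the designer reaches for is precisely the ideological choice this paper has been describing.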
[H-3] The specific guidelines within SGML developed by the Text Encoding Initiative have become a standard in the encoding of literary texts, but there have recently been some cogent criticisms of them, even in the area where they are most effective, the tagging of printed works. Ian Lancashire has written of the difficulties of adapting SGML for the process of tagging Renaissance printed texts accurately; Lancashire remarks that "SGML and TEI make anachronistic assumptions about text that fly in the face of the cumulative scholarship of the humanities". Approaching SGML from the point of view of an encoder working on a modern printed work, Michael Neuman shows that the encoder continually encounters problems of interpretation as the codes become more detailed, yet it is precisely the detail of the tagging that makes the text useful for analysis.
To these shortcomings of SGML/TEI I would add a further one that becomes pressing in the realm of texts created specifically in electronic format. Paradoxically, while the TEI Guidelines in general avoid tags that describe the physical nature of the book, they assume a general model of a conceptual book that is shaped by the traditional printed medium, divided into <front>, <body>, <back>, and various divisions and subdivisions. This sequential and hierarchical ordering of text is ultimately inadequate in the electronic medium, where sequence becomes multi-linear rather than linear, and where there are often multiple and overlapping hierarchies. Jay Bolter writes perceptively of the inherent clash between hierarchy and association: "A hierarchy is always an attempt to impose rigid order upon verbal ideas that are always prone to subvert that order" (Bolter 1991: 22).
 This paper was originally presented as part of a session on "Technologising the Humanities/ Humanitising the Technologies", a joint session of ACCUTE and COCH/COSH at the 1997 Learned Societies meeting at Memorial University in St. John's, Newfoundland.
 In my own area of the Renaissance, general collections of high quality are available from Renaissance Electronic Texts (U of Toronto) at <http://library.utoronto.ca/www/utel/ret/ret.html> and the Perseus Project (Tufts U) at <http://www.perseus.tufts.edu/>. The Oxford Text Archive continues to grow; it now provides a number of fully tagged SGML texts (<http://sable.ox.ac.uk/ota/>). The University of Hull provides material on women's writing in the Renaissance and Reformation at <http://www.hull.ac.uk/Hull/EL_Web/renforum/v1no1/clare.htm>. Further sites provide texts of individual authors, notably Middleton's plays from the University of Virginia (<http://dayhoff.med.virginia.edu/~ecc4g/middhome.html>) and the work of Richard Bear on Sidney (<http://darkwing.uoregon.edu/~rbear/defence.html>) and Spenser (<http://darkwing.uoregon.edu/~rbear/>).
 Quoted as recorded in the electronic journal CYBERSPACE-LAW (#71, 12 March 1997); see <http://www.ssrn.com/CyberLaw/lawpaper.html>.
 A number of educational sites make the background image for their page a simulation of a notebook; see for example the pages for the Abbotsford School District (B.C.) at <http://www.sd34.abbotsford.bc.ca/>, and York University at <http://www.yorku.ca/>.
 From <http://www.microsoft.com/truetype/web/designer/face3.htm> (23 May 1997).
 Analysis can range from the algorithms employed by search engines on the Internet to the complex processes of textual analysis under development in Humanities computing, exemplified by such analytical programs as TACT, developed by Ian Lancashire, in collaboration with John Bradley, Willard McCarty, Michael Stairs, and T.R. Wooldridge (Lancashire et al. 1996).
 A particularly interesting example is the doublethink evident on the site for Intel that provides information to developers on the new technologies becoming available for networking. Intel prides itself on developing chips that are powerful enough to sustain operating systems capable of multitasking and "multithreading" -- but it designs its screen so that only one window will be visible. See <http://www.intel.com/iaweb/exptech/index.htm>.
 Located at <http://www.cocacola.com/> (9 September 1997). The graphic text reads "A Coke is a Coke no matter Where on the planet you drink it. but a Coke light can be a diet Coke. and a mello Yello can be a Lychee Mello. Fanta is a dozen different things -- Peach in Botswana, passion fruit (what else?) in France, and flower flavored in Japan (huh?) Other countries have their own flavors -- only Italy can pour a Beverly (and some travelers who've tried it are just fine with that)."
 The WebTV home page is at <http://www.webtv.net/> (22 May 1997).
 The US Globe site is at <http://www.shakespeare.uiuc.edu/>; (22 May 1997 [since revised]).
 There are, of course, many others arguing for sensitive and flexible Web design. Kathy E. Gill is exemplary; in a posting on HUMANIST (8 February 1997) she wrote: "There are many resources 'out there' for critiquing web sites -- my philosophy of design and ease-of-use can be found at http://www.enetdigest.com/design/design.html -- I publish a weekly guide/critique to web sites. I have links to other resources as well -- more at http://www.dotparagon.com/design.html."
 See "Cascading Style Sheets: a draft specification" at <http://www.pku.edu.cn/on_line/w3html/Style/css/draft.html>.
 In print media see Goldfarb 1990; also Sperberg-McQueen & Burnard 1994. In electronic media see "TEI Guidelines for Electronic Text Encoding and Interchange (P3)" at <http://www.hti.umich.edu/docs/TEI/>; "The official TEI Home Page" at <http://www-tei.uic.edu/orgs/tei/>; "A Gentle Introduction to SGML" at <http://ota.ahds.ac.uk/teip3sg/>.
 Lancashire 1995. See also Ian Lancashire's RET Encoding Guidelines <http://library.utoronto.ca/www/utel/ret/guidelines0.html>; the home page for RET is found at <http://library.utoronto.ca/www/utel/ret/ret.html>.
 See Neuman 1995. For an introductory discussion of the advantages and difficulties of the related process of "lemmatizing" -- marking texts specifically for searching and developing concordances -- see Siemens 1996.
- BEST, Michael (1995). "From Book to Screen: A Window on Renaissance Electronic Texts", Early Modern Literary Studies, 2.1: 4.1-27 <URL: http://purl.oclc.org/emls/01-2/bestbook.html>.
- BOLTER, Jay David (1991). Writing Space: the Computer, Hypertext, and the History of Writing, Hillsdale, NJ: Lawrence Erlbaum Associates.
- COOMBS, James H., Allen H. RENEAR, & Steven J. DEROSE (1993). "Markup Systems and the Future of Scholarly Text Processing", The Digital Word: Text-Based Computing in the Humanities (eds. Paul Delany & George Landow), Cambridge, MA: MIT P: 85-118.
- DEROSE, Steven J. (1993). "Markup Systems in the Present", The Digital Word: Text-Based Computing in the Humanities (eds. Paul Delany & George Landow), Cambridge, MA: MIT P: 119-38.
- ESS, Charles (1994). "The Political Computer: Hypertext, Democracy, and Habermas", Hyper / Text / Theory (ed. George Landow), Baltimore: Johns Hopkins UP: 225-67.
- GOLDFARB, Charles F. (1990). The SGML Handbook (ed. Yuri Rubinsky), Oxford: Clarendon P.
- JOYCE, Michael (1995). Of Two Minds: Hypertext Pedagogy and Poetics, Ann Arbor, MI: U of Michigan P.
- KLEINER, Kurt (1997). "Surfing Prohibited", New Scientist, 25 January: 28-31.
- LANCASHIRE, Ian (1995). "Early Books, RET Encoding Guidelines, and the Trouble with SGML", Paper delivered at the Electric Scriptorium: Approaches to the Electronic Imaging, Transcription, Editing and Analysis of Medieval Manuscript Texts, University of Calgary, Alberta, 11 November. <URL: http://www.ucalgary.ca/~scriptor/papers/lanc.html>.
- LANCASHIRE, Ian, in collaboration with John BRADLEY, Willard MCCARTY, Michael STAIRS & T.R. WOOLDRIDGE (1996). Using TACT with Electronic Texts: Text-Analysis Computing Tools 2.1 for MS-DOS and PC DOS, New York: Modern Language Association.
- LANDOW, George P. (1992). Hypertext: The Convergence of Contemporary Critical Theory and Technology, Baltimore: Johns Hopkins UP.
- LANDOW, George P., ed. (1994). Hyper / Text / Theory, Baltimore: Johns Hopkins UP.
- LANDOW, George P., & Paul DELANY, eds. (1991). Hypertext, Hypermedia and Literary Studies, Cambridge, MA: MIT P.
- LANDOW, George P., & Paul DELANY, eds. (1993). The Digital Word: Text-Based Computing in the Humanities, Cambridge, MA: MIT P.
- NEUMAN, Michael (1995). "You Can't Always Get What You Want", Paper delivered at the ACH/ALLC Joint International Conference, Santa Barbara, CA, July.
- SIEMENS, R.G. (1996). "Lemmatization and Parsing with TACT Preprocessing Programs", Computing in the Humanities Working Papers, A.1 <URL: http://www.epas.utoronto.ca:8080/epc/chwp/siemens2/>.
- SPERBERG-MCQUEEN, C. M., & Lou BURNARD, eds. (1994). Guidelines for Electronic Text Encoding and Interchange (TEI P3), Chicago, Oxford: Text Encoding Initiative.