CARCAT: Computer-Assisted Reading and Conceptual Analysis of Texts: An experiment applied to the concept of evolution in the work of Henri Bergson

Jean Danis, Laboratoire d'ANalyse Cognitive de l'Information (LANCI)

Jean-Guy Meunier, Université du Québec à Montréal (UQAM)

Abstract / Résumé

Abstract: When computer-assisted conceptual analysis of philosophical texts must go beyond their linguistic dimension, statistical macro-textual approaches have given appealing results but remain limited. In this article, we present a computer-assisted conceptual analysis methodology applied to a philosophical text. This methodology attempts to stay as close as possible to the criteria of the philosophical approach. The method allows a systematic exploration of the multifaceted contexts of a specific philosophical concept. Our methodology is based on the use of a) a concordancer, b) a clustering method, and c) an interpretative annotation strategy. This method is applied to the concept of evolution in Bergson's corpus.


Key words: conceptual analysis, clustering, concordance mining, annotation, philosophical corpus


For 40 years now, social science and humanities researchers have harnessed technology to assist them in reading and analysing texts. Researchers are becoming increasingly familiar with these technologies, but many are still unsatisfied with the main text-analysis tools currently available: an interpretation aid is required that is faithful to standard discipline-specific practices (Rockwell; Unsworth "What is Humanities"; Bradley "Thinking", "Pliny"). This is true for the conceptual analyses used in philosophy, which, carried out to test hypotheses or to better understand a philosophical system, involve examining the concepts that shape the various doctrines (e.g. Dasein in Being and Time; Ricoeur's "conscience"; "hypercomputation" in cognitive science; Wittgenstein's Sachverhalt (state of affairs); Husserl's "intentionality"; Hegel's "dialectic"; and so forth).

As some authors have pointed out, conceptual analysis in philosophy is not simply the decomposition of concepts into their ultimate constituents (Engel; Beaney). It involves much more than searching for basic definitions. From a wider philosophical perspective, the analysis of conceptual content requires, among other things, that the relationship between concepts within the doctrines be identified and understood (Engel). In many cases, these relationships arise not only through the concepts' linguistic or logical commonalities, but also through the meeting point between theoretical systems and either empirical experience or some form of pretheoretical intuition (Engel). For example, an analysis of the concept of knowledge in Plato's Theaetetus and in contemporary epistemological discourse leads the researcher to link knowledge to the concepts of truth, justification, intuition, method, etc. Understanding the concept of knowledge thus remains closely linked to capturing the meaning of different terms within their contexts of specific theorization. In addition, the analysis leads the researcher to place the meanings of the related terms in a broader context of a possibly epistemological, sociological, or historical nature.

Although textual statistics methods (Muller; Benzécri et al.) and Text Mining approaches (Unsworth "New Methods"; Hearst; Meunier et al.; Alexa and Zuell) are heuristic when dealing with extensive text corpora, they prove limited in their ability to help researchers interpret the multiple facets of philosophical concepts.

When approaching concepts from a philosophical point of view, analysis must go beyond linguistic and structural dimensions. Concepts present in philosophical discourse are often prone to wide semantic variation depending on the contexts in which they appear (Dixsaut). Philosophical concepts may thus have terminology that can be categorized under "multiple entries" (semantic, metaphorical, logical, rhetorical, etc.; Dixsaut). From a cognitivist perspective, or using a pragmatic language approach, conceptual content also results from various cognitive operations such as judgment, schematization, inferences, consciousness, propositional attitude, etc. (Rey; Brandom). In this light, conceptual content comes from sources and operations closely related to the context that, while sometimes related to linguistics (peritext and paratext), often concern scientific and cultural works related to the text being studied.

To capture these various operations and the variation of conceptual content, the researcher in philosophy must undertake an in-depth analysis of the text's micro-textual dimension. This involves the examination of text passages in which specific expressions appear. In many cases, these analyses lead to some form of text fragment extraction. This type of extraction is often carried out unsystematically during reading: certain fragments of text are extracted by identifying various key passages, by identifying associative links, or simply by relying on a well-developed memory.

The reading practices involved in this form of analysis can be assisted to some extent by technology, using a "search and find" function, or by the production of concordances. Although these operations are heuristic and relatively simple, they can generate highly complex, extensive reading paths. For example, searching for occurrences of the term "mind" in Peirce's corpus gives 1,798 contexts, and a concordance of the term "evolution" in Bergson's work shows 280 contexts. The large number of search results requires the researcher to read by "jumping," which can make it difficult to grasp the complexity of the hypertextual relations within the selected text segments. The general hypothesis that has guided us in this experiment is that computers can assist the analysis process underlying this form of text exploration.

In this paper, we will present the initial results of a computer-assisted analysis strategy that permits the researcher to explore the properties of a concept within a specific philosophical system. This strategy attempts to adhere as strictly as possible to the requirements of the conceptual analysis process in philosophy. It permits a more technological and detailed exploration of the different contexts of a concept (predicate) specific to a particular author's text corpus. This method, known as Computer-Assisted Reading and Conceptual Analysis of Texts (CARCAT), was tested by analyzing the concept of evolution in the corpus of the philosopher Henri Bergson. We will first outline the method and its underlying hypotheses, then describe the main steps of this methodology and the results obtained in the experiment on Bergson's corpus.

CARCAT: Computer-Assisted Reading and Conceptual Analysis of Texts

Conceptual analysis of texts can be more technically defined as "a systematic method of text search that an expert carries out in order to identify, in the text, explicit instances of types of cognitive operations (semantic and logical) expressed in a canonical linguistic form" (Danis et al.). CARCAT is a method that aims to aid the researcher in the analysis process.

CARCAT: General Hypothesis

Our research is based on the following general hypothesis:

The expression of a canonical concept (a natural language predicate) manifests linguistic regularities that can be traced by classification algorithms.

This hypothesis leads us to consider the interdependence between textual units of a discourse (text segments) and the processes—pragmatic, semantic, rhetorical, logical, etc.—of conceptualization. Three sub-hypotheses stem from this hypothesis.

Sub-hypothesis 1:

• Conceptual analysis can be achieved by the contextual exploration of canonical linguistic forms expressing a concept.

Concepts almost always appear in a "canonical" form that lexicalizes as different synonyms, expressions, paraphrases, etc. (Rastier). For example, in Platonic philosophy, the concept of "spirit" is not limited to the term "spirit": what is signified is expressed through many words (soul, intelligence, nous, noesia, and noun) and, additionally, by contextualized expressions (immaterial substance, to be spirited, to have spirit, mental operation, and so forth). Conceptual analysis lies in the gradual identification of these forms. Concordancing and its variants are a simple, standard way of using computers to explore these forms. Concordancing essentially consists in extracting fragments of text that contain terms likely to convey the concept being analyzed. This process is explained in detail in the section concerning the testing of the methodology.
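The lookup of multiple lexicalizations of a concept can be sketched in a few lines of code. The fragment below is a minimal illustration, not part of any CARCAT toolchain; the term list and the sample sentence are invented for the example.

```python
import re

# Hypothetical lexicalizations of the Platonic concept of "spirit";
# in practice the list is built up gradually during reading and analysis.
LEXICALIZATIONS = ["spirit", "soul", "intelligence", "nous"]

def find_canonical_forms(text, terms, window=40):
    """Return (term, context) pairs for each occurrence of any term,
    with `window` characters of context on either side."""
    hits = []
    for term in terms:
        for m in re.finditer(r"\b" + re.escape(term) + r"\b", text, re.IGNORECASE):
            start = max(0, m.start() - window)
            hits.append((term, text[start:m.end() + window]))
    return hits

sample = "The soul, for Plato, is not matter; intelligence grasps the forms."
hits = find_canonical_forms(sample, LEXICALIZATIONS)
```

Each new term discovered in the retrieved contexts can simply be appended to the list, mirroring the gradual identification of forms described above.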

Sub-hypothesis 2:

• Automatic classification algorithms allow pertinent exploration of the context of linguistic expressions that express a concept.

As mentioned, a concordance of a textual unit applied to a large corpus yields, in most cases, an extensive inventory of textual contexts that is difficult to analyze manually; for example, the concordance of the term "mind" in Peirce's corpus provides 1,798 contexts. The process can be made easier by clustering the different contexts containing a specific keyword. Clustering is "the grouping of documents which satisfy a set of common properties," with the intention of "assembl[ing] together documents which are related among themselves" (Baeza-Yates and Ribeiro-Neto 43). The text documents that result from this process can be considered as new sub-texts, grouped by their similarities (Lebart and Salem; Jain and Flynn; Manning and Schütze). The theory behind this strategy is that the resulting clusters correspond to various dimensions of a concept, i.e. various conceptual axes or semantic fields.

Sub-hypothesis 3:

• Categorical annotations can assist with conceptual analysis.

Annotation is the informal categorization of each segment in a cluster. The process of document categorization can be formalized, according to Fabrizio Sebastiani, as a matching function Φ: D × C → {T, F}, where C is a predefined list of categories {c1, c2, …, cn} and D is a collection of documents {d1, d2, …, dn} (Sebastiani 4). The categorization process can consist of attributing one or more categories to each segment. This process can be thought of as the addition of peritextual and interpretive layers to the text segments. It is essentially a description of the logical, semantic, illocutionary, or argumentative dimensions of the segments' significant content. Annotation can be used to identify the nature of the conceptual content in the clusters being analyzed. As will be shown further on, annotation is used more specifically to structure cluster content.
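Sebastiani's matching function can be sketched directly in code. In the sketch below, the segment identifiers and category assignments are purely illustrative (they do not reproduce the annotations of the experiment); Φ simply tests whether a given segment has been annotated with a given category.

```python
# Illustrative category list C and manual annotations; the mapping below
# is invented for the example, not taken from the Bergson experiment.
CATEGORIES = ["ANALOGY", "MODELLING", "EPISTEMIC STATEMENT"]

# Each segment id maps to the set of categories attributed to it;
# a segment may carry several categories at once.
annotations = {
    "segment_278": {"ANALOGY"},
    "segment_150": {"EPISTEMIC STATEMENT"},
}

def phi(segment_id, category):
    """Φ(d, c): True iff segment d has been annotated with category c."""
    return category in annotations.get(segment_id, set())

print(phi("segment_278", "ANALOGY"))    # True
print(phi("segment_278", "MODELLING"))  # False
```

Attributing several categories to one segment amounts to adding members to its set, which corresponds to the multi-category classification used in the interpretive steps.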


The five steps of the CARCAT process will be detailed, as tested in the analysis of the concept of evolution in Bergson's corpus. The first three steps relate to the philological aspect (textual forms, processing of textual data). Steps four and five regard the reading and interpretive analysis of the results from the first three steps.

Philological Steps

Step 1: Preparation of the corpus

This process consists in gathering into a closed corpus the set of texts that make up a particular discourse or body of philosophical work. As part of our experiment, all of Bergson's digitized works were grouped together into one corpus. These works—his writings from 1888 to 1932—represent Bergson's philosophical work almost in its entirety.

Step 2: Construction of a concordance

This operation produces an initial subcorpus from fragments of text containing expressions or terms likely to be lexicalizations of the concept being analyzed. These terms may be known prior to reading and analysis or be discovered during reading and analysis. As concepts may be linked with more than one lexicalization, producing a concordance for a specific term is only a first step. The extraction of concordances containing a selected word can in fact lead to the discovery of other terms likely to be expressions of the concept. The use of this algorithm in the context of conceptual analysis can thus give rise to an iterative process: the analysis of a concordance built with certain keywords can lead to the composition of further concordances containing terms identified among the text segments of the initial concordance.

The size of the extracted fragments of text may vary according to the concordancer's parameters or the type of analysis being done; it can be determined either by a number of sentences before and after the keyword or by a specific number of words. For this experiment, we set the fragment size at five sentences: the sentence containing the keyword, plus the two sentences preceding and the two sentences following it. This size proved large enough to gain an understanding of the main conceptual dimensions surrounding the concept being analyzed. In the analysis of the concept of evolution, a concordance was produced with the keywords "ÉVOLUTION" and "L'ÉVOLUTION." This process produced a sub-text of 280 segments. Figure 1 below shows an example.

Figure 1: Concordance with "L'ÉVOLUTION" as the keyword.

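The five-sentence window described above can be sketched as a simple routine. This is an illustrative reconstruction, not the concordancer actually used in the experiment; sentence splitting here is deliberately naive (splitting on terminal punctuation), and the miniature corpus is invented.

```python
import re

def concordance(text, keyword, window=2):
    """Extract fragments of `window` sentences on each side of every
    sentence containing `keyword` (naive splitting on ., !, ?)."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    fragments = []
    for i, sentence in enumerate(sentences):
        if re.search(r"\b" + re.escape(keyword) + r"\b", sentence, re.IGNORECASE):
            fragments.append(" ".join(sentences[max(0, i - window): i + window + 1]))
    return fragments

# Hypothetical miniature "corpus" of six sentences.
corpus = ("Phrase one. Phrase two. Here ÉVOLUTION appears in context. "
          "Phrase four. Phrase five. Phrase six.")
fragments = concordance(corpus, "ÉVOLUTION")
```

With `window=2`, each fragment contains at most five sentences: the keyword sentence plus two on each side, matching the parameters chosen for the experiment.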

Step 3: Classification

The text segments in the concordance then undergo classification using a clustering method, which allows different segments to be grouped into clusters according to their common lexical characteristics. The theory underlying the interpretation of these clusters is that the more common lexical characteristics segments contain, the stronger the probability that their conceptual content is similar. Unlike most other text classification strategies, this process permits the classification of fragments of text according to words rather than the classification of words according to fragments of text. More specifically, the process is applied to a concordance that is then converted into a matrix of vectors made up of domains of information (DOMIFS) and units of information (UNIFS) (Figure 2).

Figure 2: The matrix fragments of text–units of information.


In this experiment, the DOMIFS are the 280 fragments of text containing the keywords "L'ÉVOLUTION" and "ÉVOLUTION," whereas the UNIFS are the 812 tokens from the concordance—symbols (words) considered as individual, distinctive occurrences rather than as classes of identical units; they are the various lexical units that make up the segments of the concordance. Tokens are the lexical components of the domains of information used to compare segments during the classification process; they are the textual units that reveal statistical information about the domains of information. They are retained after a filtering process that removes units distributed evenly throughout the concordance segments, so that the classification is not biased by them. These non-pertinent units are often numbers or polysemic function words (un, de, est, il, a, se). The retained tokens are the terms that, from a statistical point of view, can be considered representative of the fragments' textual content. Here is a schematic representation of a concordance for the term "ÉVOLUTION," containing three fragments of text (DOMIFS 1 to 3):

_A_ _ B_ ÉVOLUTION _ C_ _ D_ (DOMIF 1)

_E_ _ F_ ÉVOLUTION _ G_ _ H_ (DOMIF 2)

_A_ _ B_ ÉVOLUTION _ C_ _ E_ (DOMIF 3)

The application of a classification algorithm will group the first and third fragments of text (DOMIF 1 and DOMIF 3) within the same cluster, as they contain similar units of information (terms A, B, and C). The second fragment (DOMIF 2) will be grouped into a different cluster. Several algorithms can be used for this classification;[1] some are more specialized than others, but given the general parameters of this research, the choice of algorithm will not notably affect the results.[2] In our research, concordance segments were classified using a k-means algorithm. Figure 3 shows a sample of Cluster 20, with the content retained during the experiment.

Figure 3: Sample of the content of a cluster with the keyword "ÉVOLUTION."

As can be seen, classification groups together textual contexts containing specific keywords (in this case, "ÉVOLUTION") according to their shared terms. The content of Cluster 20 (Figure 3) reveals that the concept of evolution is used within a network of propositions containing recurring terms such as form (forme), transformism (transformisme), relation (rapport), chronology (chronologie), order (ordre), etc.
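The toy example above (DOMIFS 1 to 3) can be reproduced with a short sketch. The use of scikit-learn here is an assumption of convenience—the article does not specify the experiment's own implementation—and the placeholder tokens A–H stand in for filtered units of information.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.cluster import KMeans

# The three toy fragments (DOMIFS 1 to 3), keyword excluded, with the
# placeholder tokens A-H standing in for units of information.
domifs = ["A B C D", "E F G H", "A B C E"]

# Build the DOMIF x UNIF matrix: one row per fragment, one column per
# token (the default token pattern drops single letters, hence the override).
vectorizer = CountVectorizer(token_pattern=r"\b\w+\b", lowercase=False)
matrix = vectorizer.fit_transform(domifs)

# k-means with two clusters; fixed seed for a repeatable toy run.
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(matrix)
```

Because DOMIF 1 and DOMIF 3 share the tokens A, B, and C, they receive the same cluster label, while DOMIF 2 falls into the other cluster—exactly the grouping predicted above.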

Interpretative steps

Step 4: Identification of conceptual axes

In this experiment, the 280 segments from the concordance were grouped into 30 clusters during the classification step. Interpretation of the classification results indicates that the concept of evolution in Bergson's work is expressed in connection with several specific semantic fields (conceptual axes) (Figure 4).

Figure 4: Various semantic fields associated with the concept of evolution in Bergson's work.


Interpretation of these various fields shows that the concept is deployed within themes such as animal/vegetable, form, society, epistemology, intelligence/instinct, variation/adaptation, duration, movement, etc. The link to these themes is pertinent and corresponds to what is put forward in the critical literature on Bergsonism (see Deleuze). Some of these fields provide the analyst-interpreter with textual environments that allow for the expansion of certain contemporary interpretations of Bergsonism, such as Bergson and morphogenesis (Sheldrake New Science, "Morphic Fields") and Bergson and organization theory (Linstead; Linstead and Mullarkey).

While the identification of these fields is interesting, it is only a first step for the analyst in philosophy. The fields may suggest certain paths for the expert interpretation of the concept being studied, but the identification of semantic fields must be complemented by an in-depth analysis of their content,[3] which is an integral step in the standard procedures of every expert reader. The technological framework of this research aims to assist the researcher in this step through an annotation strategy, facilitating the continual interaction between the textual forms and the researcher's interpretive goals.

Step 5: Annotation of the content of specific clusters

The categorization of cluster content is carried out in the CARCAT process through various annotation strategies. These strategies aid the researcher in the reading of the different segments of the clusters through the addition of a multi-category classification (rhetorical, logical, discursive, linguistic, editorial, etc.).

There is much research illustrating the option of automatically applying various types of annotations (Djioua et al.; Meyers; Loper and Bird). Here, annotation was done manually and relates to thematic and illocutionary dimensions as well as to the conceptualization strategies carried out by Bergson. To show how the annotation process plays a part in the interpretive step of the CARCAT methodology, a sample of the segments annotated during the analysis of Cluster 20 is provided below.

Annotation of Cluster 20: the "form" dimension of the concept of evolution

The concept of evolution is represented in Cluster 20 by a group of propositions that have the following terms as their most frequent lexical units:

 form (forme), engender (engendrer), transformism (transformisme), filiation (filiation), experience (expérience), relation (rapport), order (ordre), chronological (chronologique), organism (organisme)

When surveying the contents of Cluster 20, different segments were annotated according to the type of conceptualization strategy used and the illocutionary force expressed. In the next section, we will present a sample of segments annotated with the categories "ANALOGIES" and "MODELLINGS," followed by a sample of segments annotated with the category "EPISTEMIC STATEMENTS."

Three Examples of Conceptualization Strategies


Mais cette nouvelle comparaison, outre qu'elle attribue à l'histoire de la pensée plus de continuité qu'il ne s'en trouve réellement, a l'inconvénient de maintenir notre attention fixée sur la complication extérieure du système et sur ce qu'il peut avoir de prévisible dans sa forme superficielle, au lieu de nous inviter à toucher du doigt la nouveauté et la simplicité du fond. Cl. 20, segment 278, Bergson 1924.


Or, plus on fixe son attention sur cette continuité de la vie, plus on voit l'ÉVOLUTION organique se rapprocher de celle d'une conscience, où le passé presse contre le présent et en fait jaillir une forme nouvelle, incommensurable avec ses antécédents. Cl. 20, segment 22, Bergson 1907.


L'essentiel est la continuité de progrès qui se poursuit indéfiniment, progrès invisible sur lequel chaque organisme visible chevauche pendant le court intervalle de temps qu'il lui est donné de vivre. Cl. 20, segment 22, Bergson 1907.

Two Examples of Epistemic Statements


Tandis que la conception antique de la connaissance scientifique aboutissait à faire du temps une dégradation, du changement la diminution d'une forme donnée de toute éternité, au contraire (…) on fût arrivé à voir dans le temps un accroissement progressif de l'absolu et dans l'ÉVOLUTION des choses une invention continue de formes nouvelles. Cl. 20, segment 150, Bergson 1907.


Nous disions qu'il y a plus dans un mouvement que dans les positions successives attribuées au mobile, plus dans un devenir que dans les formes traversées tour à tour, plus dans l'ÉVOLUTION de la forme que les formes réalisées l'une après l'autre. (…) l'intelligence renverse l'ordre des deux termes, et, sur ce point, la philosophie antique procède comme fait l'intelligence. Cl. 20, segment 148, Bergson 1907.

Annotating cluster segments allows the conceptual properties within the segments to be identified and the discursive strategies used by Bergson in the conceptualization of evolution to be highlighted. This enables the researcher to interpret how Bergson theorizes the concept of evolution in relation to knowledge and the "form" of organisms. Cluster 20 shows Bergson conceptualizing evolution through the "form" of biological organisms, while paying particular attention to the dynamic interaction between the thinking subject—to a large extent, the researcher—and the concept. The concept of evolution is also viewed in relation to the phenomenological conditions associated with observing the processes of change of form.


Conceptual analysis is a key step in the research approach often used in the social sciences and humanities. Computer assistance in this type of analysis is appealing and works to encourage rather than inhibit a researcher's interpretive imagination.

The results of the various processes involved in conceptualization, strongly linked to context, must be analyzed through a detailed exploration of their micro-textual dimensions. In philosophical conceptual analysis, micro-textual exploration illuminates how concepts are structured within localized theoretical contexts. The exploration reaches beyond the purely linguistic, linking different interpretive horizons, such as the epistemological, philosophical or sociohistoric.

CARCAT, the methodology presented in this article, enables the in-depth exploration of the textual environments associated with a particular philosophical concept. Testing this methodology in the analysis of the concept of evolution in Bergson's work has made it possible to identify various contexts of theorization, which appear within particular semantic fields. The results obtained by the detailed analysis of one of these fields (the "form" dimension of the concept of evolution) show how Bergson approaches evolution by way of the phenomenological conditions linked to the process of form changes. These results are consistent with certain contemporary interpretations of key elements of Bergsonism as seen in contemporary organization theory (Linstead; Linstead and Mullarkey) and morphogenesis (Sheldrake New Science, "Morphic Fields"). The results obtained thus show how the CARCAT process can assist the researcher in a detailed interpretation of the multiple facets of a particular philosophical concept.

It should be noted that the results presented in this article address only the "form" dimension of Bergson's concept of evolution. CARCAT processing reveals that this concept is used in Bergson's work within other conceptual axes, which could be explored in order to better understand the way Bergson perceives, theorizes, and structures the multiple facets of evolution.  


[1] There are many methods and algorithms used in the classification of text documents. Some rely on vector spaces (latent semantic analysis (LSA); non-hierarchical classification methods such as the k-means algorithm, the ART1 algorithm, or Kohonen's SOM algorithm), while others stem from statistical analysis methods such as principal component analysis (PCA).

[2] Results reported for various classifiers on reference corpora (Reuters) show F-scores between 0.79 and 0.87. For an analysis such as ours, this can mean one or two incorrectly classified segments in about twenty. What matters are the segments that were correctly classified; in our case, the noise produced has little influence on the analysis. See Sebastiani and Hotho et al.

[3] The clustering process is not the final step for conscientious expert readers in philosophy, and stopping at this stage can lead researchers to be resistant to the use of computers in analysis. The analysis of the clusters' semantic fields must be done cautiously, as it brings to light fairly general thematic and conceptual dimensions; they may be unspecific and likely hold little significance for expert readers with a good knowledge of Bergsonism.

Works Cited

Alexa, M. and C. Zuell. "Commonalities, Difference and Limitations of Text Analysis Software: The Results of a Review." Quality & Quantity 34.1 (2000): 299-321.

Baeza-Yates, R. and B. Ribeiro-Neto. Modern Information Retrieval. New York: Addison Wesley, 1999.

Beaney, M. "The Analytic Turn, Analysis in Early Analytic Philosophy and Phenomenology." Routledge Studies in Twentieth Century Philosophy. New York: Routledge, 2007.

Benzécri, J.P. et al. Pratique de l'analyse des données, Linguistique et lexicologie. Paris: Dunod, 1981.

Bradley, J. "Thinking about Interpretation: Pliny and Scholarship in the Humanities." Literary and Linguistic Computing. 23.3 (2007): 263-279.

Bradley, J. "Pliny: A Model for Digital Support of Scholarship." Journal of Digital Information 9.1 (2008): n. pag.

Brandom, R. Making it Explicit. Cambridge: Harvard UP, 1994.

Danis, J., J.-G. Meunier, J.P. Desclés, M. Alrahabi, and J.-P. Chartier. "Classification automatique et stratégie d'annotation appliquées à un concept philosophique: la dimension psychologique du concept de LANGAGE dans l'œuvre de Bergson." JADT 2010 – Statistical analysis of textual data. Actes des 10ièmes Journées internationales d'Analyse statistique des Données Textuelles. Eds. S. Bolasco, I. Chiarim, and L. Giuliano. Rome: Edizioni universitarie di lettere economia diritto, 2010: 49-60.

Deleuze, G. Le Bergsonisme. Paris: Presses Universitaires de France, 1966.

Dixsaut, M. Le naturel philosophe: Essai sur les dialogues de Platon. Paris: Les belles lettres-Vrin, 1985.

Djioua, B., J.P. Desclés, and G. Mourad. "Annotation et indexation des flux RSS par des relations discursives de citation et de rencontre: le système FluxExcom." Analyse de texte par ordinateur, multilinguisme et applications. 75e congrès de l'ACFAS, Trois-Rivières, Canada, 10-11 May 2007.

Engel, P. La dispute, une introduction à la philosophie analytique. Paris: Paradoxe, 1997.

Hearst, M. A. "Automated Discovery of WordNet Relations." WordNet: An Electronic Lexical Database. Ed. Christiane Fellbaum. Cambridge: MIT Press, 1998. 132-52.

Hotho, A., A. Nürnberger, and G. Paass. "A Brief Survey of Text Mining." GLDV-Journal for Computational Linguistics and Language Technology 20.1 (2005): 19-62.

Jain, A. and P. J. Flynn. "Data Clustering: A Review." ACM Computing Surveys 31.3 (1999): 264–323.

Lebart, L. and A. Salem. Statistique textuelle. Paris: Dunod, 1994.

Linstead, S. "Organization as Reply: Henri Bergson and Casual Organization Theory." Organization 9.1 (2002): 95-111.

Linstead, S., and J. Mullarkey. "Time, Creativity and Culture: Introducing Bergson." Culture and Organization 9.1 (2003): 3–13.

Loper, E. and S. Bird. "Nltk: The Natural Language Toolkit." Proceedings of the ACL Workshop on Effective Tools and Methodologies for Teaching Natural Language Processing and Computational Linguistics. Philadelphia: Association for Computational Linguistics, 2002. 62-9.

Manning, C. and H. Schütze. Foundations of Statistical Natural Language Processing. Cambridge: MIT Press, 1999.

Meunier, J.G., D. Forest, and I. Biskri. "Classification and Categorization in Computer Assisted Reading and Analysis of Texts." Handbook of Categorization in Cognitive Science. Eds. Henri Cohen and Claire Lefebvre. Oxford: Elsevier, 2005.

Meyers, A. "Introduction to Frontiers in Corpus Annotation II: Pie in the Sky." Proceedings of the Workshop on Frontiers in Corpus Annotation II: Pie in the Sky. Ann Arbor: Association for Computational Linguistics, 2005. 1-4.

Muller, C. Principes et méthodes de statistique lexicale. Hachette: Larousse, 1977, reprinted Champion-Slatkine, 1992.

Rastier, F. "Pour une sémantique des textes théoriques." Revue de sémantique et de pragmatique 17.1 (2005): 151-80.

Rey, G. "Concepts and Stereotypes." Cognition 15.1 (1983): 237-62.

Rockwell, G. "What is Text Analysis, Really?" Literary and Linguistic Computing 18.2 (2003): 209–19.

Sebastiani, F. "Machine Learning in Automated Text Categorization." ACM Computing Surveys 34.1 (2002): 1-47.

Sheldrake, R. A New Science of Life. Los Angeles: J. P. Tarcher, 1981.

Sheldrake, R. "Morphic Fields and Morphic Resonance: An Introduction." Rupert Sheldrake Online. Rupert Sheldrake Online, February 2005. Web. 1 July 2011.

Unsworth, J. "What is Humanities Computing, and What is Not?" Jahrbuch für Computerphilologie 4. Eds. George Braungart, Karl Eibl & Fotis Jannidis. Paderborn: mentis Verlag, 2002.

Unsworth, J. "New Methods for Humanities Research." The 2005 Lyman Award Lecture. National Humanities Center. Research Triangle Park, NC. 11 Nov 2005.

This work is licensed under a Creative Commons Attribution 3.0 License.