After the Interview: The Interpretive Challenges of Oral History Video Indexing

Steve High and David Sworn

Concordia University

shigh@alcor.concordia.ca | http://storytelling.concordia.ca | www.lifestoriesmontreal.ca


KEYWORDS / MOTS-CLÉS

oral history, database, computing, orality, narrative, video-indexing, Interclipper, Sturgeon Falls, mill closing / tradition orale, base de données, informatique, oralité, narration, indexation vidéo, Interclipper, Sturgeon Falls, fermeture de moulin


  • 1.0 Why an Oral History Database?
  • 2.0 Building the Sturgeon Falls Database
  • 3.0 ‘Test Driving' the Database in the Classroom
  • 4.0 Learning from our Mistakes
  • 5.0 Where Do We Go From Here?
  • 6.0 Drawing Conclusions
  • Works Cited

One of the things that drew us to oral history, and draws us still, is its humanity.[1] Too often, professional historians view history in the abstract or the aggregate: through the structures and forces that shape our lives. This larger canvas is, of course, vitally important, but there is a danger that we contribute to the de-humanization and distance that surrounds us. By interviewing displaced industrial workers, for example, we are keenly interested in shifting the public discourse away from the financial “bottom line” – or the brief mention of the “body count” in the business pages – and instead looking to the profound connection that these men and women often make with their work and the places that they call home. Yes, it was a pay cheque, but it was so much more. By putting a face and a name to the past, we value ordinary people and their lives and memories. We see history from their vantage points. At its best, oral history offers a way to view history not only from the “bottom up,” but also from the “inside out.” It also provides us with an opportunity to re-think the research process, whereby communities become partners in research as well as the objects of study (Greenspan; Lambert). A more inclusive history-making must go beyond new subject-matter and wider audiences. The public engagement inherent in Michael Frisch’s notion of “shared authority” is, for us, critically important (Frisch; Corbett and Miller; High).

Digital recording is opening up new ways of working directly and easily with audio and video interviews (Christel and Frisch; Couldry; Lundby; Hartley and McWilliam). New digital tools have recently appeared that offer direct access to the audio and video “content” of oral history collections.[2] These changes have broad implications for the theory and practice of oral history. First and foremost, digital oral history promises a move away from transcription. Historically, recorded interviews were quickly transcribed and the original audio or video source was either set aside or (at one time) discarded altogether. With the loss of the orality of the source at such an early stage, the power of oral history to put a face and a name to history was muted. Analogue audio and video cassettes were ponderous to use and, as a result, underutilized. As Michael Frisch recently noted, the “Deep Dark Secret of oral history is that nobody spends much time listening to or watching recorded and collected interview documents” (Frisch 223).

As its name implies, the Centre for Oral History and Digital Storytelling (http://storytelling.concordia.ca) at Concordia University in Montreal is committed to the integration of oral historical practice and emergent digital technologies. For oral historians, the digital revolution provides new possibilities for the analysis, archiving, and public dissemination of recorded narrative sources. Video indexing, for example, makes interview data more accessible and more easily organized, and may complement or even substitute for traditional transcripts. Such indexing lends itself to the mapping of narrative and rhetorical patterns within and across interviews and allows researchers to account for non-linguistic modes of expression. But video indexing can also conflict with the basic ethos of oral historical research: far from giving voice to interviewees, indexing risks sundering and de-contextualizing their life stories, thereby obscuring the orality of their accounts and concealing the idiosyncrasies and digressions that give their narratives identity and meaning.

This article documents the creation of a fully-indexed digital video database of oral history interviews relating to the displacement of paper workers in the Northern Ontario town of Sturgeon Falls. It describes the challenges involved with the development of new interpretative technologies from a humanities standpoint and emphasizes the need to adapt nascent indexing software to oral historical practice, rather than the reverse. Such video-indexing may be an alternative to the time-consuming practice of transcription and, more importantly, it allows for the examination of data – body language, facial expression, and intonation – that traditional transcripts tend to ignore. Likewise, indexing makes specific portions of interviews – the discussion of a labour dispute from various perspectives, for example – more easily accessible.

In its current incarnation, however, video indexing revolves around the segmentation of recordings into “clips” that are small enough to be effectively indexed; this risks fragmenting and de-contextualizing the narratives of those interviewed, particularly when the database is accessed by those who are unfamiliar with the recordings (Gustman and Soergel). The practice of indexing itself, moreover, risks occluding the anomalous and specific in favour of the cross-referentiality afforded by topics and themes that are common to all interviews. All of this renders video indexing problematic. Yet we continue to believe in the value of oral history databases and, building on this first experience, we have undertaken to develop in-house an open-source tool called Stories Matter (http://storytelling.concordia.ca/storiesmatter/). This new oral history database software, still under development at the time of writing, will result in a downloadable version for individual researchers and a web-based version designed with larger projects and community access in mind. We have thus been able to tackle some of the problems that we identified while using Interclipper (www.interclipper.com) relating to the fragmentation of life stories and the lack of editorial transparency. However, we have come to realize that some of the problems encountered are simply the result of transforming individual audio or video recordings into searchable databases and cannot be solved by new or better versions of the software alone.


1.0 Why an Oral History Database?

In many ways, “database” is an impersonal and cold word; it is a tool usually associated with the abstraction of quantitative history rather than personal narrative analysis. Oral history interviewing reveals subjective meaning, and life stories have few pretences of being “objective” (Portelli; James). Our authority is based on shared inquiry rather than the distance of traditional academic research. Why then a database?

Neither of us is what you would call a “techie.” Far from it: we are sceptical of much of the missionary zeal and fashion-sense that has accompanied the digital revolution. The internet will set us free? We are drawn to oral history for its humanism and not for the recording devices that we use. For us, technology presents a way of opening up new possibilities of engagement and collaboration. We are particularly inspired by the creativity that digital technologies have unleashed. Though the interview space has changed over the years – Steve, for instance, has gone from using an audio cassette recorder in the 1980s to a big VHS video camera that seemed to weigh 30 pounds in the 1990s to a lightweight palmcorder video camera or an Edirol digital audio recorder today – it is “after the interview” where we see the most potential in the short term.

Too often, oral history projects collect stories only to box them. North American archives are filled to the rafters with reel-to-reels, audio cassettes, camcorder tapes, VHS and Beta tapes, DV tapes, mini-DVs, DVDs, and CD-ROMs. But how often are the boxes opened and the interviews re-played? We suspect not very often. What has frequently happened instead is that oral historians have transcribed their interviews and have relied on paper to interact with the interview. Michael Frisch has rightly flagged this as a problem, as the orality of the interview is lost at an early stage. According to him:

We all know, as well, that in most uses of oral history the shift from voice to text is extensive and controlling. Oral history source materials have generally been approached, used, and represented through expensive and cumbersome transcription into text. Even when the enormous flattening of meaning inherent in text reduction is recognized, transcription has seemed quite literally essential – not only inevitable but something close to ‘natural.’ The assumption in this near-universal practice is that only in text can the material be efficiently and effectively engaged – text is easier to read, scan, browse, search, publish, display, and distribute. Audio or video documents, in contrast, inevitably have to be experienced in ‘real time’ (Frisch 102-3).

This loss is particularly problematic as oral historians have shifted from mining the interview for information to listening to the life story more holistically. As a result, body language, emotions, silences, narrative structure, the rhythm of the language, and people’s relationship to their own words become central to our analysis of personal narrative (Maynes et al). How can a transcript capture all this?

There are two areas along the “software frontier,” as Frisch calls it, on which Concordia University’s Centre for Oral History and Digital Storytelling has focused its energies. The first involves engaging with the audio-visual recordings themselves in the context of digital storytelling, audio tours, memoryscapes, and so on. By working with audio-visual material, oral historians find new ways to interpret life histories and to engage with various publics. Two graduate students, Nancy Rebelo and Jasmine St-Laurent, for example, put together an audio tour of St. Laurent Boulevard – Montreal’s immigrant corridor – for those taking the city bus (#55) from the Old Port area northwards. Those interested in taking this journey, called “Project 55,” can download the audio track, leaflet, research paper, and lesson plan and – if in Montreal – get on board the bus at the Old Port (Rebelo and St. Laurent). By listening to people’s stories as the bus rolls northward, the listener is invited to look upon the city in a new way. The project resonated to such a degree that the Montreal Gazette ran a three-page story on it in May 2007.

Inspired by British geographer Toby Butler’s Memoryscape project (www.memoryscape.org.uk), installation artist Graeme Miller’s Linked public art walk (www.linkedm11.info), Canadian historian Joy Parr’s megaprojects project (http://megaprojects.fims.uwo.ca/old_iroquois/), and Toronto’s Murmur project (http://murmurtoronto.ca/), the Centre for Oral History and Digital Storytelling has embarked on “rebuilding” the demolished corrugated paper mill in Sturgeon Falls, Ontario (High and Lewis). We hope that the resulting “mill-scape” is something more than a memory site: a site for interpreting place identity and attachment as well (for more on the concept of place, see Massey). The website, at http://storytelling.concordia.ca/high/sturgeon_falls, uses the mill’s floor plans and aerial photos to create a three-dimensional model. “Visitors” are able to take a virtual tour of the mill, listen to dozens of embedded audio clips taken from seventy interviews conducted with former hourly and salaried workers, and see a selection of the thousand photos collected.[3]

The second software “frontier,” and the subject of this paper, relates to the transformation of individual interviews into searchable databases, allowing researchers and community members to follow the thematic “threads” across interviews. Oral history databases facilitate analysis both in their construction (which is a deep listening exercise akin to transcription) and in their usage, while keeping the recording front and centre. We therefore hear and see the person, always. This is important to us because we have found that our relationship to people’s words is largely dependent on the nature of the encounter. We feel very little connection with the people whose stories we encountered in transcripts, rather more with those encountered in audio-visual recordings, and most of all with those we interviewed ourselves. By then, it is intensely personal. One leaves an interview with a sense of obligation to the person who has entrusted you with their story (or at least what they chose to tell you).

There are several exciting oral history database projects underway in the United States and Great Britain. The most ambitious project, by far, has been initiated by the Survivors of the Shoah Visual History Foundation, formed in 1994 by Steven Spielberg, who was approached by many Holocaust survivors following the release of his film Schindler’s List. The foundation’s mandate is to collect, preserve, and catalogue the testimony of Holocaust survivors and to disseminate it for educational purposes. The resulting 116,000 hours of videotaped testimony in thirty-two languages present a unique cataloguing challenge. The one-hundred-and-eighty-terabyte digital library of MPEG-1 video is the largest collection of its kind on the planet. To deal with this volume, the VHF has developed a custom-made system of cataloguing (clip boundaries, summaries, and descriptors) and a database-wide search system. Stream Sage (www.streamsage.com) and the Informedia Digital Video Library of Carnegie Mellon University (www.informedia.cmu.edu) have also been active in developing artificial intelligence tools for video retrieval (Frisch 228). The most promising of Informedia’s many initiatives, done in partnership with the Chicago-based History Makers (www.thehistorymakers.com), is a searchable database of four hundred interviews with African-Americans, comprising 1,100 hours of videotaped interviews divided into 18,254 “retrieval units” (Richardson 14). However, most of these video search engines rely on filenames or accompanying text sources to search the video (Christel).

These database projects are essentially archival: they catalogue a vast collection for all potential users. For historians working with more discrete, project-focused collections, whether for research, teaching or public history purposes, other digital capacities and tools are needed. We are particularly interested in emerging approaches that combine broader content cataloguing with user-driven qualitative analysis and indexing capacities. More to the point, we are interested in new digital environments in which recorded life stories can be indexed and annotated directly, rather than relying on indexing and searching interview transcripts alone.

From its inception in 2005, the Centre has sought to find an alternative to the “transcription trap” without losing the opportunity to deeply listen to the life stories. We were urged onto this path by Michael Frisch, one of the world’s foremost oral historians. In an exchange of emails dating back to 2003, he spoke of the potential of databases to transform how we think about and do oral history. He also emphasized that databases break the rigid linearity of so much of what we do. Many of the ideas that Michael Frisch raised in this email exchange were later incorporated into his “Oral History and the Digital Revolution” essay that appeared in the second edition of The Oral History Reader.

In the process, Frisch introduced us to Interclipper – a “realtime video organizer” first developed by marketers to organize data from focus groups, which he then adapted to serve the needs of oral historians. Interclipper digitizes, annotates and indexes a videotaped oral history collection, making it searchable and therefore usable. It works something like a book index. It offers ways of working analytically with the material at every stage, and thus opens up a range of possibilities. Because Interclipper permits each passage to be tagged, coded and copied into an interactive database, it enables a deeper level of analysis of oral interviews. The interface (see screenshot 1) features a media player, clip grid (with label titles and fields), timeline, “quick sort” searchability, and the capacity to export clips at the touch of a button. Notes and transcripts can also be entered. However, Interclipper’s video software is not for sale; it is available only as a service for a per-hour fee from the license holder. Yet it had the advantage of being immediately available and was relatively easy to use. In sum, Interclipper enabled us to begin our journey.


main interface

Figure 1: Main Interface


These early conversations came at a formative moment for the Centre for Oral History and Digital Storytelling, and profoundly influenced the grant application that enabled us to build a digital research complex at Concordia University. From the outset, the Centre’s stated mission was four fold: (1) to provide access to high quality digital audio and video that turn formerly unwieldy text-based interviews into searchable, community-accessible databases; (2) to support the development of innovative research instrumentation based on digital technologies; (3) to transform Concordia University into an (inter)national leader in digital applications to oral history; and (4) to create a strong and vibrant research space where technological and methodological experimentation and collaboration are encouraged and where students are involved and mentored. Yet we initially saw ourselves as a “test bed,” since software development still seemed too big to contemplate. To some extent, it still is – but a great deal has changed in the interim.

In early 2006, Michael Frisch came to Montreal to “train” a group of us in the software. More than anything else, the two half-day workshops de-mystified databases and launched us on the road of software development. From there, David Sworn took the lead in developing a small database of interviews with displaced Sturgeon Falls workers, a prototype of the larger databases to come. We were all new to this and we fumbled forward. Eventually, a database of seventeen hours of interview material was created with six “completed” index fields (of the ten that are possible, thus leaving it open ended instead of “closed”). We developed a users’ manual and a training module and proceeded to integrate them into a graduate seminar in oral history in Fall 2007. After the workshop, students were each given a copy of the database on CD-ROM and were asked to write a short analysis. The results (discussed below) were striking, and convinced us that we needed to develop our own open source database tool.


2.0 Building the Sturgeon Falls Database

We began developing the Sturgeon Falls Mill Closing database in the winter and spring of 2007. The database consisted of a dozen interviews with displaced and retired workers, clerical staff, and management from a corrugated paper mill in Sturgeon Falls, Ontario. Located in Northern Ontario, between Sudbury and North Bay, the century-old Sturgeon Falls mill was closed in December 2002 by Weyerhaeuser, an American multinational corporation (High and Lewis). Former workers at the mill, both hourly-paid and salaried, invoked home and family metaphors to describe their profound connection to people, place, and product. This was much more than a job for most of those interviewed: it was like a family. In fact, it was not unusual to have sons follow their fathers through the main gate right out of high school. Although the database was intended as a prototype, the selection of interviews offered a broad cross section of narratives about the mill closure and, with over seventeen hours of footage, it represented both a large body of data to work with and a real challenge for developing a usable database.

At first, we struggled with the blank database and the imposing number of interviews. We began clipping and indexing but grew reluctant as we spent hours working over a short portion of a single interview, knowing that our method was haphazard and that the more time we invested on flawed clipping, the more work would have to be re-done once we had developed a systematic process. After learning the basic method for clipping video with InterClipper, we were able to use it as a foundation for a more comprehensive system that suited the purposes of our database, purposes which, we were quickly discovering, diverged substantially from the InterClipper databases we were looking at as models. One such database, which comprised a comparable number of hours of recorded interviews, had opted for a vast index of several hundred subject categories (based on the Library of Congress system) and overlapping clips of short duration.


clipping

Figure 2: Clipping


In retrospect, clipping is such a straightforward process that it seems hard to believe that we struggled with it for so long: the researcher simply watches the interview, and when the interviewee says something interesting, the researcher makes a clip and gives it a short title. The processes of coming up with indexing tags and applying them to the clips were left to a later stage.

This selective method of clipping produced a particular kind of database: one that made it easy to quickly find quotes and sound bites from a large body of interviews but left a large proportion of each interview un-clipped, and therefore absent from the index. In essence, this method produces a heavily abridged version of each interview, one that may or may not provide a general overview of its main themes. In our view, this presented a number of problems: first, the clipping process remained relatively unsystematic – clearly no two researchers would choose to clip and label the same interview in exactly the same way. Moreover, this selective clipping method could never be an alternative to transcription because it invariably produced a partial index of interviews. Even as a supplemental finding aid to a full transcript, the value of such an index would be limited; for portions of an interview that had not been clipped and indexed, the database would be no more useful than the original digital recording.

In our view, however, the most substantial limitation of this method was its failure to capture or draw attention to the layers of meaning embedded in the form and structure of the life story narratives themselves. A more holistic examination of the conversational narrative would prove virtually impossible using such a clipping strategy. Such an index would also miss the meanings that can be derived from non-verbal expression, speech patterns, turns of phrase, the use of colloquialisms, and so on. Here, we felt, was this fantastic device for bringing oral histories to life and for capturing and interpreting all those non-verbal modes of communication that had long been lost in transcription, but rather than bringing these things to light, this clipping method frequently relegated them to virtual invisibility. A frequently-repeated pregnant metaphor, for example, might appear in only one selected clip even if it had been used a half-dozen times over the course of the interview. Complex or unfamiliar expressions and colloquialisms might actually be deliberately excluded because they might seem incomprehensible outside of their larger context. Finally, and perhaps most importantly, the very conversation at the heart of the oral history interview is lost in selective clipping because the interviewer’s questions are almost invariably left out. Thus, the collaboration and communicative exchange that structures an interview is replaced by a series of brief monologues. This not only obscures the negotiated and participatory nature of the oral history interview, but it also creates the illusion that the speaker’s narrative is produced spontaneously, outside of the context of the conversation and without the interviewer’s eliciting questions and prompts. Indeed, it seems significant and, in our view, slightly problematic that when a user views these abridged dialogues in a completed database, the role of the interviewer – an active listener – is functionally replaced by that of the user – a passive listener. What’s more, the user, with the click of a mouse, can access a neatly-edited interview fragment on a wide variety of pre-determined topics (Parr et al).

Our conviction that the conversational and collaborative nature of any oral history interview is central to its interpretation led us to develop a clipping technique that was rooted in the question and answer couplets that structure an interview. This technique was no more complex than the selective method, but it differed in that all parts of every interview would be clipped and indexed. Using this method, a clip usually began when the interviewer asked a question and ended when the interviewee had finished answering that question. A clip also ended when the interviewer was ready to ask a new question or, to put it another way, to turn the conversation to a different topic. Follow-up questions would not be made into separate clips, so that each clip normally contained a short conversation and would last as long as necessary. As a result, some clips were ten minutes or longer, although most were relatively short. Fortunately, InterClipper allows for shorter, “embedded,” clips within longer clips so that long digressive responses could be divided by theme or anecdote. (By specifically tagging the shorter clip as embedded within a larger one, it could be easily viewed in the context of the longer response.) In our database this embedding was only necessary in a handful of cases, but some interviews included many long responses and thus contained a large number of embedded clips.
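To make the clip-and-embed scheme concrete, the following is a minimal sketch in Python of how such question-and-answer clips might be modelled. This is our own illustration, not InterClipper's internal format; all titles and timings are hypothetical.

```python
from dataclasses import dataclass, field


@dataclass
class Clip:
    """One question-and-answer couplet from an interview recording."""
    title: str
    start: float  # seconds from the beginning of the recording
    end: float
    tags: list = field(default_factory=list)
    notes: str = ""
    # Shorter clips nested inside a long, digressive response
    embedded: list = field(default_factory=list)

    @property
    def duration(self):
        return self.end - self.start


# A hypothetical twelve-minute response, divided by anecdote into embedded clips
long_answer = Clip("Life after the closure", start=1800.0, end=2520.0)
long_answer.embedded.append(Clip("Severance running out", 1800.0, 2050.0))
long_answer.embedded.append(Clip("Family support", 2050.0, 2520.0))
```

Because each embedded clip carries its parent's time span context, a user can always step back out from the anecdote to the full response, which is the point of the embedding convention described above.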

That our clipping reflected how interviewees responded to questions – even in a very general way – was an indication that we were on the right track. We began to notice patterns in clip length that were common to all interviews; clips were short at the start of each interview, for example, and became longer as the questions became more opinion-oriented and the interviewee became more comfortable. Differences in clipping patterns between interviews, however, were particularly striking. In the case of two interviews with workers who had been made redundant by the mill closure, one interview was made up of short clips that seldom lasted longer than a minute, while the other was composed of long, drawn-out clips, including one that was twelve minutes long. Both interviewees expressed varying degrees of the anger and despondency that one might expect from laid-off workers facing a fast-dwindling severance package and extremely limited job-prospects, but the first interview was also marked by a pronounced sense of betrayal and distrust – a distrust to which the interviewer was also subjected – coupled with a deeply-held belief that the mill was still profitable and that it was effectively “killed” by its American owners. The second interview, though punctuated with wry humour, vacillates between the threat of economic despair and the hope associated with familial support and solidarity. In retrospect, this interviewee explained, the closure was the inevitable conclusion of a long decline. Of course the patterning of long and short clips does not tell us about any of these things, but it does express certain formal elements of each narrative: abrupt, angry responses on the one hand, and long, reflective rumination on the other. For us, this clipping system suggested the promise of new methods of visualizing conversations and represented the contours of what we came to think of as “narrative maps.”
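The clip-length “contours” described above can be summarized very simply once the clips exist. The sketch below uses invented durations, not the actual Sturgeon Falls data, to show one way a rudimentary narrative map might be computed from clip lengths alone.

```python
# Hypothetical clip durations in seconds for two contrasting interviews
abrupt = [45, 30, 60, 25, 40, 35, 50]       # short, clipped answers
reflective = [120, 300, 720, 240, 400]      # long, ruminative answers


def narrative_map(durations):
    """Summarize the rhythm of an interview from its clip lengths."""
    return {
        "clips": len(durations),
        "mean_seconds": round(sum(durations) / len(durations), 1),
        "longest_seconds": max(durations),
    }
```

Even these three numbers distinguish the two narrative styles: many short clips with a low mean on one side, a few long clips dominated by a single extended response on the other. A fuller visualization would plot the durations in sequence.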

Once we had completed clipping the dozen interviews, we might have been tempted to see our database as a series of narrative maps, but at this stage our maps were mostly unlabeled. Our clipping technique involved note-taking and clip titling, methods of recording that were originally intended as indexing aids but were ultimately included in the completed database. These methods were somewhat haphazard, however: in some cases the notes were just a few words or a couple of sentences and, in others, almost verbatim transcriptions of the interview. The clip titles, like book titles, ranged from the specific to the vague, and a few were even slightly poetic. In retrospect, we could have developed a more efficient method of recording by devising a system for annotating and labelling clips during the clipping process (sparse notes are generally more useful and less time-consuming). Nevertheless, our casual attitude to creating notes and titles was not a major impediment to indexing. Rather, our own inexperience with developing InterClipper databases and the peculiarities of the software itself ultimately led us to produce a somewhat dysfunctional index.

InterClipper databases are inflexible in that it is not possible to take a fully-indexed interview from one database and add it to another one, nor is it possible to compile a brand new database from interviews that have already been indexed. Thus, it is very easy to think of an individual interview as part of a larger whole – the database – rather than an entity in its own right. For us, this was compounded by the fact that we had clipped all the interviews in one stage and would be indexing them all in a second stage, rather than clipping each one, indexing it, and then integrating it into the database. Finally, InterClipper allows a limited number of index tags per clip, even though each tag can contain any number of words. What all of this meant for our database was that we tended to create very general index tags and then qualify them with subordinate tags; ultimately we found ourselves applying fewer tags to clips because they reflected what the interviewee said or what the conversation had been about rather than because the tags made it possible to create groups of similarly tagged clips. In other words, we found that if we only made our tags general enough, they could accommodate dramatically divergent clips. In some cases general tagging meant that viewing clips with the same tags would provide a variety of views on one particular topic, such as the working conditions at the plant or the causes and effects of a job action. In other cases, however, clips with similar tags had only the most tenuous connection with one another.
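The general-tag-plus-qualifier approach described above can be illustrated with a small sketch. The index entries and helper function below are hypothetical, not drawn from the actual database; the point is that a general tag retrieves a broad group of clips, while a subordinate qualifier narrows the group.

```python
# Hypothetical index: clip titles mapped to (general tag, qualifier) pairs
index = {
    "Shift work on the paper machine": [("working conditions", "shift work")],
    "Dust and noise in the yard":      [("working conditions", "safety")],
    "The walkout":                     [("job action", "strike")],
}


def find(index, general, qualifier=None):
    """Return clip titles whose tags match a general tag (and optional qualifier)."""
    return [
        title for title, tags in index.items()
        if any(g == general and (qualifier is None or q == qualifier)
               for g, q in tags)
    ]
```

The trade-off the paragraph above describes shows up immediately in such a scheme: the broader the general tag, the more heterogeneous the clips a query returns, and some matches will share only the most tenuous connection.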

The challenges involved in developing index tags really came to a head with one field in particular, “Topic C,” which has become rather notorious among those of us who have used the database extensively. We decided early on that Topic C would be devoted to non-verbal and meta-narrative modes of expression. At a very early stage, David, who was working alone on the project at the time, began to worry about the challenges of tagging something as multifaceted, subtle and inexplicit as body language, for example. And when it came time to begin doing so, he was faced with the option of either confining these various and rather ineffable modes of expression to one index field or spreading them over three and essentially allowing them to dominate the database. Ultimately, it was decided that Topic C would cover three specific areas – intonation, body language, and figurative speech – but of course any of these could be expressed in an almost infinite number of ways; body language, for example, could include everything from an interviewee banging their fist on the table to simply crossing their arms or covering their mouth. Intonation might range from obvious sarcasm to a short hesitation. Therefore, we decided to simply tag “interesting” body language, intonation, and figurative speech – which frequently coincided with one another – rather than detail what, specifically, was interesting. Finally, making this already deeply compromised approach worse, indexing Topic C required that we re-watch all the interviews. This re-indexing coincided with bringing a new researcher onto the project; he was given the job of indexing Topic C even though his familiarity with the database and with the interviews was limited, and he was not instructed to make notes about what he found interesting with regard to the intonation, body language, or figurative speech of specific clips. With Topic C out of the way, we were ready to begin “test driving” the database.


3.0 ‘Test Driving' the Database in the Classroom

The eighteen Concordia graduate students enrolled in the Oral History seminar in the Fall term of 2007 were asked to write a three- or four-page critique of the Sturgeon Falls database. In preparation, the entire class received a workshop from David Sworn that explained the origins of the project and the choices made; additionally, it provided some helpful tips on how to explore the database. David also created a user's guide and was available to answer their questions. The manual’s introduction read as follows:

This database has been developed as part of the Sturgeon Falls Mill Closing Project; it contains twelve interviews that have been indexed with Video Interclipper Analyst. The database is complete in that the interviews have been ‘divided’ into clips and the clips have been tagged with keywords. But the database will never be completely finished, you can make whatever changes you wish to your copy: you can make new clips, change the length of the clips, add new tags to clips or even develop new indexing fields. And your ideas and feedback are absolutely crucial to the future of the project. This document is an introduction to the InterClipper software and the database itself (2008).

The instructions went on to say that the database was imperfect, because it had “more than a few bugs (as you’ll find out), but it’s important to concentrate more on what the software can do and less on what it’s not capable of.” Students were then given some historical background of the mill closing and a short introduction of each of the interviewees.

The results of the exercise are instructive and we believe that the issues raised are fundamental to our endeavour. The four most important concerns raised by the graduate students were (in no particular order): transparency, path and sequence, fragmentation, and indexing. We will discuss each of these potential problems in turn.

  1. Transparency: Several graduate students urged us to “frame” the database, thereby providing the user with a sense of the choices that were made in its creation, including the roads not taken. This transparency is something that we have always taught in relation to transcription, so it made sense that the students would make the connection. J. Penney Burton also suggested that the origins of the database needed to be provided: why was it created? What was the larger project? Several students noted that the instructional manual (which focused mainly on how to use the database) was helpful, but, since it was in paper form, it needed to be integrated into the software itself. Our inability to further adapt the proprietary software to our purposes prevents this integration from happening with Interclipper – but the point is very well taken and has been incorporated into the design of the new “Stories Matter” software.
  2. Path and Sequence: Students struggled to blaze their own paths through the material: Where to start? How should they approach the assignment? Freed from the linearity of documentary, as Frisch notes, they felt disoriented and alone. “I was lost. I didn’t feel grounded,” reported Joyce Pillarella, continuing, “[w]hat’s my route?” and “[h]ow many clicks do I need to get there?” According to Sharon Murray, a doctoral candidate in Art History:
Diving into the clips like this was an interesting experience in part because of my lack of contextual information on each worker. This meant that the clips themselves become context; the overlaps and gaps in each interview created a bigger picture of the social experience of the mill. As I went along, I found myself wanting to go back to other interviews to compare the way each worker remembered a similar situation.
  3. Fragmentation: The database is meant to connect but it can end up doing the opposite, fragmenting the life stories by removing individual clips, or stories, from their context. This is a major interpretative challenge. The Interclipper database allows us, through its “quick sort” option, to watch and listen to clips back-to-back across interviews on selected themes. This is of obvious importance, but if interviews are shorn of their life history or biographical context, meanings change. This fear was raised by many of the students in the class. “My engagement with the interviews was much more fragmented,” reported interdisciplinary Master’s student Shauna Janssen. Another student wrote that “in disrupting the life-story narrative, removing memories from their context, the worker’s life-stories lost agency, became fragmented – the clips became information.” Yet another student in the class warned that, “[t]he disadvantage of my method was that the worker’s individual stories about the social life of the mill lost the meaning they might have had when contextualized by their life-stories rather than other workers’ memories of the same environment.” An oral history database must provide ways to retain the life history context. Users’ analysis of the interviews was similarly fragmented since the “notes section” was tied to each interview clip and disappeared from the screen when the next clip began to play. “We need to be able to write notes in a single place,” observed art historian J. Penney Burton.
  4. Indexing: While many of the points of criticism raised by students centred on the software, or the dangers in building oral history databases more generally, most agreed that the three fields created in the Sturgeon Falls database were either indistinguishable from one another or had an uncertain identity. Joyce Pillarella wrote that we needed to clearly demarcate topics A, B, and C: “[i]t really needs to be simple.” For example,

Topic A: WORK (broad)
Topic B: STRIKE (specific/narrow)

Everybody agreed that Topic C (on body language) was pretty much a total failure. According to Anna Wilkinson, “[t]opic C imposed on clips rather than originally being observed and defined within the interviews themselves.” This top-down approach resulted in “somewhat arbitrary categories[;]” “the categories themselves are broad to the point of being meaningless.” Indeed, she continued, “I assume that ‘use of language’, ‘intonation’[,] and ‘body language’ were applied when there seemed to be particularly visible or noticeable instances of the use of figurative language, hesitation, pauses, and body movement. However, in the clips I viewed it was difficult to locate what exactly was meant by these labels.” That said, Wilkinson agreed that it was “worth thinking about what kinds of emotions are being conveyed and what kinds of non-verbal cues are specific to these feelings.” For Anne Holloway, “[a]ny interview, or clip of an interview, is going to contain some level of body language, emotion or intonation. Who is to decide how much emotion is noteworthy, or how body language should be interpreted, and how does one achieve consistency?” For at least one student the database appeared “scientific” (or objective) with a fixed or stable meaning. How do we reconcile this with the subjectivity of each storyteller? Continuing, this same student observed that the term “index” itself was misleading as it suggests a single writer/indexer (as in a book) and not a larger multi-person authorship.
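The thematic “quick sort” viewing that the students used, and worried about, can be made concrete with a small sketch. The clip records, interview names, and tags below are hypothetical; the function simply gathers clips sharing a tag across interviews while keeping a pointer back to each clip's interview of origin, which is precisely the life-story context that a purely thematic view risks discarding.

```python
# The clip records, interview names, and tags below are hypothetical; the
# function gathers clips sharing a tag across interviews while keeping a
# pointer back to each clip's interview of origin.

clips = [
    {"interview": "Worker 1", "tags": {"strike", "family"}, "note": "the 1990 walkout"},
    {"interview": "Worker 2", "tags": {"strike"}, "note": "life on the picket line"},
    {"interview": "Worker 3", "tags": {"childhood"}, "note": "growing up in town"},
]

def quick_sort_view(clips, tag):
    """Return (interview, note) pairs for every clip carrying the tag."""
    return [(c["interview"], c["note"]) for c in clips if tag in c["tags"]]

for interview, note in quick_sort_view(clips, "strike"):
    print(interview, "-", note)
```

Each thematic hit still names its source interview; a viewer who follows that pointer back to the full interview recovers the biographical context that back-to-back clip viewing strips away.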

Yet what was interesting about the exercise is that each student approached the database in his or her own way. The student feedback indicates that many of the graduate students in the class were overwhelmed by the sheer volume of information and found it hard to know where to start their journey; some found it difficult to keep track of where they were. No two people navigated the database in the same way. Some focused on one or two interviews. Others set out to explore a key theme such as childhood memories. Still others “surfed” the “quick sort” function, which allowed them to search thematically across the interviews until they found a storyline that appealed to them. Joyce Pillarella noted that an “oral history database has to connect me to the sources I’m looking for. It’s only a MEANS and not the end.” In fact, this is comparable to how oral historians feel when faced with hundreds of pages of transcribed interviews.


4.0 Learning from our Mistakes

The students’ responses were enormously useful to us. For David, who had been so submerged in the project, the students’ fresh perspective on the database allowed him to identify the problems with greater clarity. In a number of key areas, the responses drew renewed attention to fundamentals that we had lost sight of. In particular, they underscored the complex relationship between the independence and interdependence of the interviewees’ narratives. While each life story is unique, there are larger patterns and recurring themes across the interviews with displaced paper workers.

The problems that the students identified can be broadly broken down into two groups: those that might be at least partially addressed through better use of the existing software and those that simply cannot be solved using InterClipper. Moreover, there are several key challenges that seem to be inherent to video indexing and may never be fully overcome, such as the issue of narrative fragmentation and the subjectivity associated with tagging certain elements of an interview and not others.

Perhaps the biggest lesson that can be taken from all of this is to focus on the part (the individual interview) before dealing with the whole (the entire database). Rather than clipping and annotating all the interviews in one long stage and then indexing them in another, each interview can be clipped and indexed at once, in a process that would involve at least two full viewings of the interview. A short biography of the speaker and an outline of the interview would be drawn up at the end of the clipping and indexing stage. Finally, when the database is complete, a short abstract should be developed describing each interviewee, the basis of their selection, and the interpretive questions involved in indexing them. Ideally, the abstract should appear whenever the database is opened and the bio/outlines should be at least partially visible whenever a clip from the relevant interview is selected. While this is presently impossible using InterClipper software, the Stories Matter software now under development will have this feature. These abstracts and outlines should help to overcome some of the fractured quality that students found in the database and help users feel grounded in specific interviews, rather than simply adrift in a sea of clips.

The strengths of InterClipper lie almost entirely in creating clips, and virtually all of its shortcomings are evident in its indexing features. The limited number of index fields, unsearchable index tags, and artificial categorization of index tags were among the major stumbling blocks for our database, as some of the students’ comments make clear, and they put InterClipper at odds with most of the current databases with which humanities scholars may be familiar. In order to be fully functional as an index, InterClipper would have to allow an indefinite number of tags that could be flexibly categorized, searched, and filtered in any number of ways. In addition, end-users ought to have the ability to add new tags to their copies of the database and develop new ways of organizing the tags. This feature would solve the arguably counterproductive problem of having to think of the clips in terms of strict and necessarily mutually exclusive categories.
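A minimal sketch of the kind of flexible indexing this paragraph calls for – an unlimited number of tags per clip, searchable and extensible by end-users – might look like the following. This is our own illustration in Python, not a description of InterClipper or Stories Matter.

```python
from collections import defaultdict

# An inverted index mapping tags to clip identifiers: no cap on the number
# of tags, end-users can keep adding them, and tags are searchable by
# substring. The class and the sample data are our own invention.

class TagIndex:
    def __init__(self):
        self._clips_by_tag = defaultdict(set)

    def tag(self, clip_id, *tags):
        """Attach any number of tags to a clip."""
        for t in tags:
            self._clips_by_tag[t.lower()].add(clip_id)

    def search(self, query):
        """Return the ids of clips whose tags contain the query substring."""
        q = query.lower()
        hits = set()
        for t, clip_ids in self._clips_by_tag.items():
            if q in t:
                hits |= clip_ids
        return hits

idx = TagIndex()
idx.tag("w1-clip3", "strike 1990", "working conditions")
idx.tag("w2-clip1", "strike vote")
idx.tag("w2-clip1", "family")  # an end-user adds a tag later
print(sorted(idx.search("strike")))
```

Because tags need not fit a fixed number of fields or mutually exclusive categories, a clip can sit in as many overlapping groupings as its content warrants.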

The whole issue of indexing, however, raises one of the most basic problems with our database, one that we have yet to fully address and which likely hinders our present and future database development projects: as one student astutely and emphatically pointed out, a database is “a MEANS and not the end,” but the proper response on the part of the researchers should be “yes, but a means to what end?” What exactly is our database for? Is it simply an archival aid, a supplement or substitute for a transcript? Or does it have some larger pedagogical or public historical purpose? This is a question that needs to be asked at the initial stage of any database project. It also raises concerns about the software itself because InterClipper is probably too difficult for the public to use casually and presents a number of serious challenges in a classroom setting as well. But, more broadly, the question of purpose sheds light on the various responsibilities of the database developers themselves.

If, for example, the database is one outcome of a larger project, which was the case for our Sturgeon Falls database, and is intended as a means of exposing the interviewees’ narratives to a wider audience through non-textual presentation and non-linear viewing, then the researcher has a responsibility to the interviewees and the database-users to ensure that the research questions that informed the project are made absolutely clear and that users are able to make use of the database in such a way that the narratives do not become de-contextualized and fractured. Specifically, users should be strongly encouraged to become familiar with the interviews on an individual basis before exploring them thematically. A purely thematic approach, we have discovered, produces a fractured and de-contextualized impression of the interviews, rendering them almost unrecognizable as life stories.

And yet, if the database is to be used primarily as an archival tool, then presumably the onus of using the database responsibly falls more on the end-user. In this regard, David in particular has become convinced of the virtues of a “less is more” approach to indexing for the purposes of archiving. Extensive indexing – which, in any case, is highly interpretative – would thus be eschewed in favour of a few simple keywords associated with each clip and more or less extensive annotations. This way the researcher in the archive could go over the keywords and, to a lesser extent, the notes for the forty or fifty clips that make up the average ninety-minute interview. The researcher could then decide what elements, if any, would be useful for her project. This hypothetical researcher might watch some of the clips at this stage, particularly if she is interested more in narrative form than narrative content, but ultimately if she decides to incorporate the interview into her research project, she will want to view it in its entirety. In this scenario, the possible de-contextualization is overcome by the academic imperative to become as familiar as possible with one’s sources.
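The “less is more” archival scenario can be illustrated with a small sketch: each clip carries only a few keywords and a short annotation, and the researcher's first pass reads keywords alone. The data and function names are hypothetical.

```python
# Each clip carries only a few keywords and a short annotation; the
# researcher's first pass scans keywords alone, reading notes (or watching
# clips) only where a keyword looks promising. The data are hypothetical.

interview = [
    {"clip": 1, "keywords": ["hiring", "1968"], "note": "first day at the mill"},
    {"clip": 2, "keywords": ["strike"], "note": "recalls the picket line"},
    {"clip": 3, "keywords": ["closure", "severance"], "note": "the final shift"},
]

def scan(interview, topic):
    """First pass over an interview: keywords only."""
    return [c["clip"] for c in interview if topic in c["keywords"]]

print(scan(interview, "strike"))
```

The light index narrows the search without pretending to interpret; the researcher who finds a promising clip still has to watch the interview whole.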


5.0 Where Do We Go From Here?

While this early endeavour was a complete project in its own right, it became something of a pilot for much more extensive indexing and database development as part of Montreal Life Stories (www.lifestoriesmontreal.ca), a SSHRC-funded Community-University Research Alliance project based at the Centre for Oral History and Digital Storytelling. This five-year major collaborative research grant, involving forty researchers and eighteen community partners, has undertaken to conduct five hundred interviews, in multiple sessions, with Montréalais displaced by war, genocide, and other human rights violations. The project places an emphasis on retaining orality “after the interview,” with a wide range of collective storytelling initiatives now underway. These activities include digital storytelling, performance, art installations, pedagogical resources, radio programming, and documentary film.

Learning from the shortcomings of our first attempt to build an oral history database, we have modified our indexing program in preparation for this larger project. Our first response has been something of a retreat into text − albeit hypertext. Newly indexed interviews will now include interview summaries that, in addition to the speaker's biographical information, will include a thematic outline and researchers' observations regarding modes of narrative expression, rhetorical devices, and patterns of body language or intonation as relevant. In order to illustrate these points, the summaries will be hyperlinked to appropriate video clips. Our second response has been to engage more thoroughly in software development. Abandoning the restrictive, proprietary software used for our early databases, we are currently working with a software engineer to develop Stories Matter, free, open-source software tailored to the needs of oral historians. This new software will mitigate much of the narrative fragmentation associated with clip-based indexing and will allow greater flexibility in applying index tags, annotating interviews, and compiling databases. Finally, the new software will allow greater collaboration and user participation in database development. Users, for example, will be able to add, remove or modify index tags and notes, make new clips and adjust clip lengths, and even edit transcripts and interview summaries. These modifications can then be included in the master database or circulated as modified indexes.
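The collaborative workflow envisioned here – users modifying their own copies of an index, with changes later folded into a master database – can be sketched as a simple comparison over tag sets. This is our own illustration of the idea, not Stories Matter's actual implementation.

```python
# A user's modified copy of an index is compared against the master so that
# additions can be reviewed before being folded in. This is our own sketch
# of the idea, not Stories Matter's actual implementation.

master = {"w1-clip3": {"strike", "work"}}
user_copy = {
    "w1-clip3": {"strike", "work", "solidarity"},  # user added a tag
    "w1-clip4": {"family"},                        # user created a new clip
}

def additions(master, user_copy):
    """Tags (and whole clips) present in the user's copy but not the master."""
    changes = {}
    for clip_id, tags in user_copy.items():
        added = tags - master.get(clip_id, set())
        if added:
            changes[clip_id] = added
    return changes

print(additions(master, user_copy))
```

A project team could review such a change set and either merge it into the master database or circulate it as a modified index in its own right.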

It has become clear to us that InterClipper could only take us so far down the “road” that we are on. We found the software maddening at times and, because it was proprietary, we were unable to change any of its structures. For example, one of our ongoing fears with oral history databases is that their central purpose is to pull stories, or clips, out of each individual life story, allowing us to follow various threads across interviews. This is great – but it strips the clips of their life story context. What is lost in the process? Will we end up with disembodied stories? Will it only hinder deep listening? We became convinced that the database needed to retain a measure of this life history context whenever we listened to or viewed one of these clips. But there was no way to change the interface. Another problem was the high financial cost of using InterClipper. We not only had to pay a stiff licensing fee for each hour of video streamed into the software, but we also had to send the DV-tapes to Buffalo in order to have them streamed in. Clearly, this method of processing was not sustainable. Any software solution has to be affordable to graduate students and community projects if it is to have any future.

In the meantime, we met several times with Elena Razlogova – a digital historian out of George Mason University and co-director of our centre – and provided her with plenty of ideas about what we would like to see in her proposed media annotating plug-in for Zotero, an open source tool developed at George Mason University that gathers and organizes resources and lets you annotate and share results (www.zotero.org). Launched in beta in April 2008, VERTOV is a relatively easy-to-use Firefox extension that helps the user collect, manage, and cite video and audio (http://digitalhistory.concordia.ca/vertov/). In our opinion, the resulting VERTOV digital tool surpasses anything that InterClipper can do. Zotero and VERTOV have caused a great deal of excitement in digital history circles and have a lot of potential for oral historians wishing to index their interviews. However, VERTOV is a digital tool (similar to EndNote) that does many things well but maybe not “our thing” as well as it might. Like InterClipper, it was created for another purpose and adapted to suit our needs.

Now we arrive at the “here and now.” In August 2008, we embarked on the development of our own open-source database tool. The Canada Foundation for Innovation thankfully agreed to re-orient part of the grant from database software licensing fees to software development. Once we had the green light, we hired Jacques Langlois, a gifted software engineer with extensive experience in video game development at Ubisoft Entertainment, as project leader. In order to ensure that the development process for Stories Matter was tailor-made for oral historians, Steve also hired two experienced interviewers as “embedded oral historians.” They are in constant dialogue with Jacques about the choices that we must make at each step in development. In our view, the process is almost as important as the final product. You can follow the latest developments on our blog on the Centre for Oral History and Digital Storytelling website (http://storytelling.concordia.ca/storiesmatter/). What we learn in the coming months will be crucial to our long-term goal of building research capacity at the Centre in digital oral history. It is our hope that our database software will not only become an easy-to-use tool for research analysis, but also facilitate community access to these stories.


6.0 Drawing Conclusions

As this was our first “foray” into oral history databases, we would of course do much differently next time. But we learned a great deal from our mistakes – something that is not said often enough in academic scholarship. It is easier to tell of our unmitigated successes. This reconnaissance into the world of oral history databases has given us a far better understanding of what we would like to see and where the key methodological challenges lie. Today, we think a great deal more about process and the transparency of that process. The decision to develop a new open source database tool in-house was thus a direct result of our mixed experience using InterClipper and of our growing confidence that a new digital tool was needed. Yet, like our students, we are concerned that the context of the words spoken changes depending on how they are accessed once taken out of the interview/life history. What is lost in the process? If meaning “inheres in context and setting,” as Frisch notes, as well as “in gesture, in tone, in body language, in expression, in pauses, in performed skills and movements,” then video databases offer us the continued embodiment of the story and the context of the interview setting, but at the potential cost of the life story context itself (Frisch 103). “It was not the place to get to know people,” said one student. One would be hard-pressed to find a more prescient warning about the dangers inherent in oral history databases. Nonetheless, we will end with the words of Michael Frisch:

The basic point could not be simpler: There are worlds of meaning that lie beyond words, and nobody pretends for a moment that the transcript is in any real sense a better representation of an interview than the voice itself. Meaning is carried and expressed in context and setting, in gesture, in tone, in body language, in pauses, in performed skills and movements. To the extent to which we are restricted to text and transcription, we will never locate such moments and meaning, much less have the chance to study, reflect on, learn from, and share them. (Frisch 223)

Acknowledgements

Graduate students enrolled in the Oral History seminar in Fall 2007 were required to write short critiques of the Sturgeon Falls Mill Closing Database (InterClipper). The following students agreed to let us cite their untitled papers in this article: Joyce Pillarella, J. Penney Burton, Sharon Murray, Anne Holloway, Anna Wilkinson, William Hamilton, Maija Fenger, Shauna Jannsen, and Rylan Wadsworth.



Works Cited

Butler, Toby. "Memoryscape: How Audio Walks Can Deepen Our Sense of Place by Integrating Art, Oral History and Cultural Geography." Geography Compass 1.3 (2007): 360-72. Print.

Butler, Toby. "A Walk of Art: The Potential of the Sound Walk as Practice in Cultural Geography." Social and Cultural Geography 7.6 (2006): 889-908. Print.

--- and G. Miller. "Linked: A Landmark in Sound, a Public Walk of Art." Cultural Geographies 12.1 (2005): 77-88. Print.

Christel, Michael G. "Examining User Interactions with Video Retrieval Systems." Proceedings of SPIE, Volume 6506. 2007. Web. 29 Dec 2008. <www.informedia.cs.cmu.edu>.

--- and Michael H. Frisch. "Evaluating the Contributions of Video Representation for a Life Oral History Collection." 2007. Web. 29 Dec 2008. <www.informedia.cs.cmu.edu>.

Corbett, Katherine T. and Howard S. Miller. "A Shared Inquiry into Shared Inquiry." The Public Historian 28.1 (2006): 15-38. Print.

Couldry, Nick. "Mediatization or Mediation? Alternative Understandings of the Emergent Space of Digital Storytelling." New Media & Society 10.3 (2008): 373-91. Print.

Frisch, Michael. "Oral History and the Digital Revolution: Toward a Post-Documentary Sensibility." The Oral History Reader. Second Edition. Eds. Robert Perks and Alistair Thomson. London: Routledge, 2006. 102-114. Print.

---. A Shared Authority: Essays on the Craft and Meaning of Oral and Public History. Albany: State U of New York P, 1990. Print.

---. "Three Dimensions and More: Oral History Beyond the Paradoxes of Method." Handbook of Emergent Methods. Eds. Sharlene Nagy Hesse-Biber and Patricia Leavy. New York: Guilford Press, 2008. Print.

Greenspan, Henry. On Listening to Holocaust Survivors: Recounting and Life History. Westport, Connecticut: Praeger, 1998. Print.

Gustman, Samuel, Dagobert Soergel, et al. "Supporting Access to Large Digital Oral History Archives." 2005. Web. April 2005. <www.glue.umd.edu/~oard/papers/jed102.pdf>.

Hartley, J. and K. McWilliam, eds. Story Circle: Digital Storytelling Around the World. Oxford: Blackwell, 2008. Print.

High, Steven and David W. Lewis. Corporate Wasteland: The Landscape and Memory of Deindustrialization. Toronto: Between the Lines, 2007. Print.

High, Steven. "Sharing Authority: An Introduction and Sharing Authority: Building Community University Research Alliances using Oral History, Digital Storytelling and Engaged Scholarship." Journal of Canadian Studies 43.1 (2009). Print.

The History Makers Project. 2008. Web. 28 Dec 2008. <www.thehistorymakers.com>.

Informedia Video Library at Carnegie Mellon University. 2008. Web. 28 Dec 2008. <www.informedia.cmu.edu>.

InterClipper. 2008. Web. 28 Dec 2008. <www.interclipper.com>.

James, Daniel. Dona Maria’s Story: Life History, Memory and Political Identity. Durham: Duke UP, 2000. Print.

Lambert, J. The Digital Storytelling Cookbook. Berkeley: Center for Digital Storytelling, 2007. Print.

Life Stories CURA Project. 2008. Web. 28 Dec 2008. <www.lifestoriesmontreal.ca>.

Lundby, Knut, ed. Digital Storytelling: Mediatized Stories: Self-representations in New Media. New York: Peter Lang, 2008. Print.

Massey, Doreen. "Places and their Pasts." History Workshop Journal 39 (1995): 182-192. Print.

Maynes, Mary Jo, et al. Telling Stories. Minneapolis: U of Minnesota P, 2008. Print.

Murmur project. 2008. Web. <http://murmurtoronto.ca/>.

Parr, Joy, Jessica Van Horssen and Jon van der Veen. "The Practice of History Shared across Differences: Needs, Technologies and Ways of Knowing in the Megaprojects New Media Project." Journal of Canadian Studies 43.1 (2009). Print.

Pinder, David. "The Arts of Urban Exploration." Cultural Geographies 12.4 (2005): 383-411. Print.

---. "Ghostly Footsteps: Voices, Memories and Walks in the City." Cultural Geographies 8.1 (2001): 1-19. Print.

Portelli, Alessandro. The Death of Luigi Trastulli and Other Stories: Form and Meaning in Oral History. Albany: SUNY, 1991. Print.

Rebelo, Nancy and Jasmine St. Laurent. "Project 55: A Historical Audio Tour of Ethnic Communities along St-Laurent Boulevard Aboard Bus 55." 2008. Web. 28 Dec 2008. <http://storytelling.concordia.ca/workingclass/project55/index.html>.

Richardson, Julieanna. "The History Makers: A New Primary Source for Scholars." Great Cities Institute Publication #GCP-07-08, 2007. Print.

Survivors of the Shoah Visual History Foundation. 2008. Web. 21 Oct 2008. <www.vhf.org>.

Sworn, David. Sturgeon Falls Mill Closing Database Manual. 2008. Web. <http://storytelling.concordia.ca/storiesmatter/>.

Stories Matter. 2008. Web. 28 Dec 2008. <http://storytelling.concordia.ca/storiesmatter/>.

Stream Sage. 2008. Web. 28 Dec 2008. <www.streamsage.com>.

Vertov: A Media Annotating Plugin for Zotero. 2008. Web. 28 Dec 2008. <http://digitalhistory.concordia.ca/vertov/>.

Zotero. Web. <www.zotero.org>.



Endnotes

[1] We would like to thank Michael Frisch, Alan Wong, Rob Shields, Stacey Zembrzycki, Nancy Rebelo, Kristen O’Hare, and those who attended our presentation at the Oral History Association Meeting in Pittsburgh in October 2008 for all of their assistance with this project. We would especially like to thank the Sturgeon Falls workers who agreed to be interviewed for the larger project and the students enrolled in Steven High’s Oral History seminar at Concordia University in Fall 2007. Funding for database development came from a research infrastructure grant awarded by the Canada Foundation for Innovation and a standard research grant from the Social Sciences and Humanities Research Council.

[2] There are several interesting database projects accessible online, including that of the Kentucky Oral History Commission (http://history.ky.gov/Programs/KOHC/); the Virtual Oral/Aural History Archive at California State University, Long Beach (http://salticid.nmc.csulb.edu/cgi-bin/WebObjects/OralAural.woa/); the Survivors of the Shoah Visual History Foundation (www.vhf.org); the Alexander Street Press project (http://alexanderstreetpress.com); and the Chicago-based History Makers (www.thehistorymakers.com).

[3] The project leader for the Sturgeon Falls mill-scape initiative is Michael Klassen, a graduate of the landscape architecture program at the University of Manitoba, who has extensive experience with mental mapping and memoryscapes as a methodology.



Creative Commons License
This work is licensed under a Creative Commons Attribution 3.0 License.