Annotating Simultaneous Signed and Spoken Text
Presented by:           Brenda Farnell (University of Illinois) and Wally Hooper (Indiana University)
Project / Software Title:       Annotating Simultaneous Signed and Spoken Text  
Project / Software URL:  
Access / Availability:       This software is available free of charge by contacting the developer at xxx. Access to the project archives requires a password, which you can obtain by...
The availability of inexpensive and portable visual technologies has stimulated renewed interest in visual aspects of language-in-use, especially those movements of the arms and hands, somewhat loosely referred to as "gestures," that everywhere accompany speech in discursive practices. Such new technologies do not, in themselves, generate new theories, however, and it will probably be some time before a fully embodied conception of "language" transcends the many habits of thinking and analysis inherited from a linguistic science accustomed to dealing only with spoken languages and speech data. Renewed interest in studies of gesture and ongoing research on signed languages indicate that this process is currently underway, dissolving the traditional boundary between "verbal" and so-called "non-verbal" communication (e.g., Farnell 1995a, 1999, 2002; Goodwin 1986, 2000; Haviland 1993, 2000; Kendon 1988, 1997; Levinson 1997; McNeill 1992, 2000; Streeck 1996; LeBaron & Streeck 2000). Linguistic data collected in visual as well as audio form thus pose important new theoretical challenges, as well as challenges to best practices for transcription and translation, although only the latter can concern us here.

The discursive practices of indigenous people of the Plains region of North America offer an interesting challenge in this regard, since they occupy a unique niche among the languages of the world. Speakers of these endangered American languages not only use vocal signs (speech) and action signs (gestures) co-expressively, but their action signs are frequently drawn from a fully grammaticalized sign language, known as Plains Indian Sign Language or Plains Sign Talk (hereafter PST), that in other contexts can be used without speech across spoken language barriers. In certain contexts, such as storytelling and public oratory, talking with vocal signs and action signs simultaneously is the communicative norm. This oral/visual gestalt offers a special challenge for digitization, representation, and analysis: it requires full consideration of the visual-kinesthetic modality as well as sound, in ways that will reveal the syntactic and semantic integration of vocal signs with action signs. The challenge is how best to create oral/visual and textual materials that will document and facilitate linguistic analysis of both modalities. In this demonstration we present research-in-progress that aims to develop appropriate frameworks and methods to meet this challenge.

Stage 1: A Presentational Model on CD-ROM
WIYUTA: Assiniboine Storytelling with Signs (University of Texas Press, 1995) pioneered a multimedia approach to endangered language documentation. It was built at the University of Iowa with SuperCard software plus some additional programming, and combines three recording technologies in an interactive format: video, the written word (Nakota texts with English translations), and written body movement (texts of the sign language in the Laban script [Labanotation], using LabanWriter software developed at Ohio State University). Additional annotations provide further ethnographic and linguistic detail, including photographs, visual art, music, and comments by the storytellers and their relatives.

The user has three choices: 1) Play Entire Movie: view the entire videotaped narrative without transcription or translation. This fulfils the needs of Nakota speakers and PST users who only wish to see and/or hear the story. 2) Read Entire Story: read and study a transcription and translation of the spoken component using two scrolling text fields, one written in Nakota and the other providing a free English translation. This level of transcription fulfils the requirements of those learning to read and write Nakota. 3) Examine Story: study all the components (video, speech, written words, and written signs) in detail and on screen simultaneously. Users who are not literate in the Laban script but wish to learn can access an embedded Labanhelp section.

This program provides a rich environment for the end user, but it was designed to present linguistic and ethnographic material rather than to support the work of transcription, translation, and analysis itself. Its creation involved time-consuming labor on each modality, carried out separately and without any time coding. The obvious next step was to explore applications that would directly support that analytical work.
