SITCO 3: Investigating multimodal interaction in storytelling

Utilising a multimodal approach that combines conversation analysis and eye-tracking technology, this project aims to analyse interaction during storytelling.

Research background

Telling stories to one another is an important part of everyday conversation. Conversational storytelling is inherently multimodal, with participants using gaze and gesture to support their spoken interaction (see Rühlemann, Gee & Ptak 2019). This project is a collaboration between the University of Freiburg and BCU to investigate this phenomenon. It brings together conversation analysis, discourse analysis and corpus linguistics, as well as the latest video and eye-tracking technology, to enable a holistic analysis of multimodal interaction during storytelling.

Research aims 

The project aims to construct an innovative multimodal corpus, the Storytelling Interaction Corpus. The corpus will consist of transcribed conversations between two, three or four people. It will combine multiple levels of annotation in an XML format to capture gaze, gesture and conversational features. The conversations will be recorded at BCU and the XTranscript software (developed at BCU) will be used in corpus compilation and annotation.
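As a rough sketch of what such a multi-level annotation unit might look like, the fragment below builds one hypothetical storytelling turn with Python's standard xml.etree.ElementTree. The element and attribute names (utterance, gaze, gesture, role and so on) are illustrative assumptions, not the actual SITCO schema.

```python
# Hypothetical sketch of one multi-level annotated storytelling turn.
# Element and attribute names are illustrative assumptions, not the SITCO schema.
import xml.etree.ElementTree as ET

turn = ET.Element("utterance", speaker="S1", role="storyteller")
turn.text = "so I walked into the room"

# Gaze layer: who the speaker is looking at, with assumed timings in seconds
ET.SubElement(turn, "gaze", target="S2", start="0.20", end="1.35")

# Gesture layer: a co-speech gesture overlapping the same stretch of talk
ET.SubElement(turn, "gesture", type="iconic", start="0.50", end="1.10")

print(ET.tostring(turn, encoding="unicode"))
```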

Research methods 

The Storytelling Interaction Corpus comprises data from video recordings of spontaneous conversational interaction, collected in cooperation with BCU. Unlike existing multimodal corpora in corpus linguistics, which rely on orthographic transcription alone, the data will be transcribed in accordance with conversation-analytic (Jeffersonian) conventions to ensure rich multimodal detail in the transcripts. The conversation-analytic transcripts will then be converted into XML transcripts using XTranscript, a piece of software developed specifically for this project in collaboration with BCU.

In addition to conversation-analytic transcription and annotation, the data will receive corpus-linguistic annotation in the form of Part-of-Speech tagging, to capture the morpho-syntactic functions of verbal actions, as well as discourse-analytic annotation to capture discourse structure, discourse presentation and discourse roles.
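As a minimal sketch of the Part-of-Speech layer described here, the snippet below tags a single transcribed utterance with NLTK's off-the-shelf tagger (Penn Treebank tagset); the choice of tagger and tagset is an assumption, and the project may well use different tools.

```python
# Minimal POS-tagging sketch using NLTK's default tagger (Penn Treebank tagset).
# The tagger/tagset choice is an assumption; the project may use other tools.
import nltk

# One-off resource downloads (no-ops if already present; resource names differ
# slightly across NLTK versions, hence both variants are requested)
for resource in ("punkt", "punkt_tab",
                 "averaged_perceptron_tagger", "averaged_perceptron_tagger_eng"):
    nltk.download(resource, quiet=True)

utterance = "I walked into the room and everyone stopped talking"
tokens = nltk.word_tokenize(utterance)
tagged = nltk.pos_tag(tokens)
print(tagged)  # e.g. [('I', 'PRP'), ('walked', 'VBD'), ('into', 'IN'), ...]
```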

In collaboration with the Chair of German Linguistics (Professor Auer) at the University of Freiburg, quantitative data on gaze behaviour will be collected using eye-tracking technology and integrated into the XML. The XML structure of the corpus provides the foundation for the statistical examination of multimodal storytelling practices: using the XML query languages XPath and XQuery, relevant data of arbitrary size and complexity can be addressed and extracted (Rühlemann & Gee 2018).
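As a rough illustration of this kind of extraction, the sketch below runs an XPath-style query over a small hypothetical fragment using Python's standard xml.etree.ElementTree, which supports only a limited subset of XPath; the full XPath/XQuery machinery referred to above allows far richer queries, and the element and attribute names are again assumptions made for illustration.

```python
# Hypothetical extraction sketch: pull out all gaze events produced while a
# storyteller holds the floor. xml.etree.ElementTree implements a subset of
# XPath; element/attribute names are illustrative, not the SITCO schema.
import xml.etree.ElementTree as ET

fragment = """
<story>
  <utterance speaker="S1" role="storyteller">so I walked into the room
    <gaze target="S2" start="0.20" end="1.35"/>
    <gesture type="iconic" start="0.50" end="1.10"/>
  </utterance>
  <utterance speaker="S2" role="recipient">no way
    <gaze target="S1" start="1.40" end="2.05"/>
  </utterance>
</story>
"""

root = ET.fromstring(fragment)

# XPath-subset query: gaze elements nested inside storyteller utterances
for gaze in root.findall(".//utterance[@role='storyteller']/gaze"):
    print(gaze.get("target"), gaze.get("start"), gaze.get("end"))
```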

Research outcomes 

Building on this interdisciplinary approach, we intend to investigate multimodal storytelling interaction, addressing innovative research questions that corpus linguistics, discourse analysis and conversation analysis cannot adequately answer in isolation. These questions concern how gesture, gaze and spoken features of language combine in the course of conversational interaction and storytelling progression.