A Multimodal Augmented Reality System for Alignment Research

Dierker A, Bovermann T, Hanheide M, Hermann T, Sagerer G (2009)
In: Proceedings of the 13th International Conference on Human-Computer Interaction. New York, Heidelberg: Springer: 422-426.

Conference Paper | Published | English
Abstract
In this paper we present the Augmented Reality-based Interception Interface (ARbInI), a multimodal Augmented Reality (AR) system for investigating the effects and structures of human-human interaction in collaborative tasks. It is introduced as a novel methodology to monitor, record, and simultaneously manipulate multimodal perception and to measure so-called alignment signals. The linguistic term 'alignment' here refers to automatic and unconscious processes during the communication of interactants. As a consequence of these processes, the communicative behavior of the two interactants converges (for example, they use the same terms, gestures, etc.). Alignment is a debated model of communication in the community [1], and we strive to provide novel means to study it by instrumenting the interaction channels of human interactants [2]. AR allows for a very close coupling between the user and a technical system. ARbInI adds to this a mechanism that decouples two interacting users from the outside world and from each other via cameras and head-mounted displays (resp. microphones and headphones). This makes it possible to monitor and record the exact auditory and visual stimuli each subject perceives, and it gives the system full control over the subjects' audiovisual input: for instance, AR allows the manipulation of virtual objects shown on top of physical objects, e.g., by selectively changing the size, color, shape, or level of detail displayed to each participant. Using ARToolKit markers attached to physical objects, we make full use of augmented reality to realize physical interaction with virtual objects in our studies.
In this paper we also present VideoDB, a scenario that focuses on the task of collaboratively organizing and arranging multimodal video snippets, and we discuss its potential for recording and investigating alignment.
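The per-participant stimulus manipulation described above (showing each user a differently sized, colored, or shaped virtual object on top of the same physical, marker-tagged object) can be sketched in a few lines. This is a hypothetical illustration, not the authors' implementation; all names (`VirtualObject`, `MANIPULATIONS`, `render_view`) are invented for the sketch, and plain dictionaries stand in for the actual ARToolKit-based rendering pipeline.

```python
# Hypothetical sketch of ARbInI-style per-participant stimulus manipulation.
# Each physical object carries a fiducial marker; the system looks up, per
# participant, how the virtual object rendered on that marker is altered.
from dataclasses import dataclass, replace

@dataclass(frozen=True)
class VirtualObject:
    marker_id: int   # ID of the ARToolKit-style marker on the physical object
    shape: str       # e.g. "cube", "sphere"
    color: str
    scale: float     # rendered size relative to the physical object

# Per-participant manipulations: marker_id -> attribute overrides.
# This is where an alignment experiment can diverge the two users' views.
MANIPULATIONS = {
    "participant_a": {7: {"scale": 1.5}},    # A sees marker 7 enlarged
    "participant_b": {7: {"color": "red"}},  # B sees marker 7 recolored
}

def render_view(participant, detected):
    """Return the (possibly manipulated) virtual objects for one participant."""
    overrides = MANIPULATIONS.get(participant, {})
    return [replace(obj, **overrides.get(obj.marker_id, {})) for obj in detected]

# Both users look at the same physical scene, but perceive different stimuli:
scene = [VirtualObject(7, "cube", "blue", 1.0),
         VirtualObject(3, "sphere", "green", 1.0)]
view_a = render_view("participant_a", scene)
view_b = render_view("participant_b", scene)
```

Because every stimulus passes through such a lookup before reaching the head-mounted display, the system can both log exactly what each subject saw and vary it independently per user.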
Publishing Year
2009
Conference
International Conference on Human-Computer Interaction
Location
San Diego, USA
Conference Date
2009-07-18
PUB-ID

Cite this

Dierker A, Bovermann T, Hanheide M, Hermann T, Sagerer G. A Multimodal Augmented Reality System for Alignment Research. In: Proceedings of the 13th International Conference on Human-Computer Interaction. New York, Heidelberg: Springer; 2009: 422-426.
Main File(s)
Access Level
Restricted Closed Access
Last Uploaded
2011-06-16 01:06:50

