A Multimodal Augmented Reality System for Alignment Research

Dierker A, Bovermann T, Hanheide M, Hermann T, Sagerer G (2009)
In: Proceedings of the 13th International Conference on Human-Computer Interaction. New York, Heidelberg: Springer: 422-426.

Conference Paper | Published | English
 
Abstract / Note
In this paper we present the Augmented Reality-based Interception Interface (ARbInI), a multimodal Augmented Reality (AR) system for investigating the effects and structures of human-human interaction in collaborative tasks. It is introduced as a novel methodology to monitor, record, and simultaneously manipulate multimodal perception and to measure so-called alignment signals. The linguistic term 'alignment' here refers to automatic and unconscious processes during communication between interactants. As a consequence of these processes, the communicative behavior of the two interactants converges (for example, they use the same terms, gestures, etc.). Alignment is a debated model of communication in the community [1], and here we strive to provide novel means of studying it by instrumenting the interaction channels of human interactants [2]. AR allows for a very close coupling between the user and a technical system. ARbInI adds to this a mechanism that decouples two interacting users from the outside world and from each other via cameras and head-mounted displays (and, correspondingly, microphones and headphones). This makes it possible to monitor and record the exact auditory and visual stimuli each participant perceives, and, by means of this perceptual decoupling, gives ARbInI full control over the subjects' audiovisual input: for instance, AR allows the manipulation of virtual objects shown on top of physical objects by displaying a different size, shape, or level of detail to each participant. With the help of visual and auditory AR techniques, the stimuli can be manipulated, for example by selectively changing the size, color, or shape of virtual objects that are augmented into the views of cooperating users. Using ARToolKit markers attached to physical objects, we make full use of augmented reality to realize physical interaction with virtual objects in our studies.
In this paper, we also present VideoDB as a scenario. It focuses on the task of collaboratively organizing and arranging multimodal video snippets. We discuss its potential for recording and investigating alignment.
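The per-participant stimulus manipulation described in the abstract can be illustrated with a minimal sketch. This is not the authors' implementation: the `VirtualObject` type, the manipulation table, and `render_for_participant` are all hypothetical names assumed for illustration, standing in for the step where a detected ARToolKit marker is replaced by a (possibly different) virtual object in each participant's head-mounted display.

```python
from dataclasses import dataclass

@dataclass
class VirtualObject:
    """Hypothetical virtual object overlaid on a physical marker."""
    name: str
    size: float
    color: str

# Hypothetical per-participant manipulation table: the same physical
# marker is augmented differently in each participant's view, e.g. with
# a different size or color, as described in the abstract.
MANIPULATIONS = {
    ("marker_07", "participant_A"): {"size": 1.0, "color": "red"},
    ("marker_07", "participant_B"): {"size": 1.5, "color": "blue"},
}

def render_for_participant(marker_id: str, participant: str) -> VirtualObject:
    """Return the virtual object shown on a detected marker, applying
    the participant-specific manipulation (if one is configured)."""
    obj = VirtualObject(name=marker_id, size=1.0, color="gray")
    for attr, value in MANIPULATIONS.get((marker_id, participant), {}).items():
        setattr(obj, attr, value)
    return obj
```

Because each participant only sees the mediated camera image through the head-mounted display, such a lookup suffices to give the two users systematically different views of the same physical object.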
Keywords
alignment; interception interface; augmented reality
Year of Publication
2009
Title of the Conference Proceedings
Proceedings of the 13th International Conference on Human-Computer Interaction
Page(s)
422-426
Conference
International Conference on Human-Computer Interaction
Conference Location
San Diego, USA
Conference Date
2009-07-18
ISBN
978-3-642-02884-7
Page URI
https://pub.uni-bielefeld.de/record/1987346

Cite

Dierker A, Bovermann T, Hanheide M, Hermann T, Sagerer G. A Multimodal Augmented Reality System for Alignment Research. In: Proceedings of the 13th International Conference on Human-Computer Interaction. New York, Heidelberg: Springer; 2009: 422-426.
Full Text(s)
Name
2009-hcii-dierker_et_al-preprint.pdf
Access Level
Restricted Closed Access
Last Uploaded
2019-09-06T08:57:14Z
MD5 Checksum
8d4a6aba4babe54926c1855251565697


