Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance
Achenbach J (2019)
Bielefeld: Universität Bielefeld.
Bielefeld e-dissertation | English
Author
Reviewer / Supervisor
Abstract / Notes
Virtual humans are employed in various applications including computer games, special effects in movies, virtual try-ons, medical surgery planning, and virtual assistance. This thesis deals with virtual humans and their computer-aided generation for different purposes.
In a first step, we derive a technique to digitally clone the face of a scanned person. Fitting a facial template model to 3D-scanner data is a powerful technique for generating face avatars, in particular in the presence of noisy and incomplete measurements. Consequently, there are many approaches to the underlying non-rigid registration task, and these are typically composed of very similar algorithmic building blocks. By providing a thorough analysis of the different design choices, we derive a face-matching technique tailored to high-quality reconstructions from high-resolution scanner data. We then extend this approach in two ways: an anisotropic bending model allows us to reconstruct facial details more accurately, and a simultaneous constrained fitting of eyes and eyelids considerably improves the reconstruction of the eye region. Next, we extend this work to full bodies and present a complete pipeline for creating animatable virtual humans by fitting a holistic template character. Due to the careful selection of techniques and technology, our reconstructed humans are quite realistic in terms of both geometry and texture. Since we represent our models as single-layer triangle meshes and animate them through standard skeleton-based skinning and facial blendshapes, our characters can be used in standard VR engines out of the box. By optimizing computation time and minimizing manual intervention, our reconstruction pipeline can process an entire character in less than ten minutes.
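At its core, non-rigid template fitting of this kind alternates between finding closest-point correspondences on the scan and solving for a regularized deformation of the template. The following is a minimal illustrative sketch of one such step on a toy vertex chain, not the thesis's actual formulation (which couples many more terms, such as anisotropic bending and eye/eyelid constraints); the chain-neighbor Laplacian stands in for real mesh connectivity.

```python
import numpy as np

def nonrigid_fit_step(template, scan, lambda_reg=1.0):
    """One step of a simplified non-rigid registration: each template
    vertex moves toward its closest scan point, while a Laplacian-style
    smoothness term keeps neighboring displacements similar."""
    # Closest-point correspondences (brute force for clarity).
    d = np.linalg.norm(template[:, None, :] - scan[None, :, :], axis=2)
    targets = scan[d.argmin(axis=1)]

    n = len(template)
    # Chain-neighbor Laplacian as a stand-in for mesh connectivity.
    L = np.zeros((n, n))
    for i in range(n - 1):
        L[i, i] += 1.0; L[i + 1, i + 1] += 1.0
        L[i, i + 1] -= 1.0; L[i + 1, i] -= 1.0

    # Minimize ||X - targets||^2 + lambda * ||L X||^2 in closed form.
    A = np.eye(n) + lambda_reg * (L.T @ L)
    return np.linalg.solve(A, targets)
```

In practice such a step is iterated: correspondences are recomputed after each solve, and the regularization weight is typically relaxed over the iterations so that the template first deforms smoothly and then captures fine detail.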
In the following part of this thesis, we build on our template-fitting method and deal with the problem of inferring the skin surface of a head from a given skull and vice versa. Starting from a method for the automated estimation of a human face from given skull remains, we extend this approach to bidirectional facial reconstruction in order to also estimate the skull from a given scan of the skin surface. This is based on a multilinear model that describes the correlation between the skull and the facial soft-tissue thickness on the one hand and the head/face surface geometry on the other. We demonstrate the versatility of our novel multilinear model by estimating faces from given skulls as well as skulls from given faces within just a couple of seconds. To foster further research in this direction, we have made our multilinear model publicly available.
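The bidirectional estimation idea can be illustrated with a drastically simplified, purely linear stand-in for the multilinear model: fit a low-rank joint model over concatenated skull and face feature vectors, then, given only the skull block, solve for the model coefficients and synthesize the face block (swapping the blocks gives the reverse direction). All feature vectors and dimensions here are hypothetical; the thesis's actual model additionally factors out soft-tissue thickness as a separate mode.

```python
import numpy as np

def fit_joint_model(skulls, faces, rank=2):
    """Fit a low-rank joint linear model over concatenated skull and face
    features (one row per subject). Returns the joint mean and the top
    principal components of the joint space."""
    X = np.hstack([skulls, faces])
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:rank]

def estimate_face(skull, mean, comps, n_skull):
    """Estimate face features from skull features: solve for joint-model
    coefficients using only the skull block of the components, then
    synthesize the face block from those coefficients."""
    Bs = comps[:, :n_skull]                      # skull block
    coef, *_ = np.linalg.lstsq(Bs.T, skull - mean[:n_skull], rcond=None)
    return mean[n_skull:] + coef @ comps[:, n_skull:]
```

Estimating a skull from a face works the same way with the blocks exchanged, which is the sense in which the model is bidirectional.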
In the last part, we generate assistive virtual humans that are employed as stimuli in an interdisciplinary study. The study sheds light on user preferences regarding the visual attributes of virtual assistants in a variety of smart home contexts.
Year
2019
Copyright / Licenses
Page URI
https://pub.uni-bielefeld.de/record/2936169
Cite
Achenbach J. Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld; 2019.
Achenbach, J. (2019). Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld. https://doi.org/10.4119/unibi/2936169
Achenbach, Jascha. 2019. Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld.
Achenbach, J. (2019). Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld.
Achenbach, J., 2019. Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance, Bielefeld: Universität Bielefeld.
J. Achenbach, Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance, Bielefeld: Universität Bielefeld, 2019.
Achenbach, J.: Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Universität Bielefeld, Bielefeld (2019).
Achenbach, Jascha. Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance. Bielefeld: Universität Bielefeld, 2019.
All files are available under the following license(s):
Creative Commons Attribution-ShareAlike 4.0 International Public License (CC BY-SA 4.0):
Full Text(s)
Name
thesis.pdf
91.22 MB
Access Level
Open Access
Last Uploaded
2019-09-25T06:54:22Z
MD5 Checksum
4afa2292ed382386717059c40c16fc05
Material in PUB:
Dissertation containing this PUB entry
Supplemental Material for Thesis 'Generation of Virtual Humans for Virtual Reality, Medicine, and Domestic Assistance'
Achenbach J (2019)
Bielefeld University.