Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions"

Hermann T, Yang J, Nagai Y (2016): Bielefeld University. doi:10.4119/unibi/2905039.

Download
OA S1.4_vocal_disgusted.mp3
OA S3_vocal_cluster_centers.mp3
Data Publication
Creator
Hermann, Thomas; Yang, Jiajun; Nagai, Yukie
Abstract / Remark
This paper presents a novel approach for using sound to externalize emotional states so that they become an object for communication and reflection, both for the users themselves and for interaction with other users such as peers, parents, or therapists. We present an abstract, a vocal, and a physiology-based sound synthesis model, whose sound spaces each cover a range of emotional associations. The key idea of our approach is to use evolutionary optimization to enable users to find emotional prototypes, which are in turn fed into a kernel-regression-based mapping that lets users navigate the sound space via a low-dimensional interface, controlled in a playful way through tablet interactions. The method is intended to support people with autism spectrum disorder.
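As a minimal, illustrative sketch (not the authors' implementation) of the kernel-regression mapping described above: emotional prototypes, as would be obtained from the evolutionary optimization stage, are pairs of a 2D interface position and a synthesis-parameter vector, and a Nadaraya-Watson (Gaussian-kernel) regressor maps any tablet position to an interpolated parameter vector. All names, the prototype placement, and the bandwidth value below are assumptions chosen for illustration.

import numpy as np

def kernel_regression(query_xy, prototype_xy, prototype_params, bandwidth=0.15):
    """Map a 2D interface position to a synthesis-parameter vector.

    query_xy         : (2,) tablet position, e.g. normalized to [0, 1]^2
    prototype_xy     : (n, 2) interface positions assigned to the prototypes
    prototype_params : (n, d) synthesis parameters of the emotional prototypes
    bandwidth        : Gaussian kernel width controlling how smoothly the
                       sound morphs between neighbouring prototypes
    """
    # Squared distances from the query position to every prototype position
    d2 = np.sum((prototype_xy - query_xy) ** 2, axis=1)
    # Gaussian kernel weights, normalized to sum to 1 (Nadaraya-Watson)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))
    w /= w.sum()
    # Weighted average of the prototype parameter vectors
    return w @ prototype_params

# Hypothetical usage: four prototypes (e.g. happy, sad, angry, disgusted)
# placed in the corners of the interface, each with a made-up 3-parameter
# sound model (pitch, rate, roughness).
protos_xy = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
protos_params = np.array([[440.0, 2.0, 0.1],
                          [220.0, 0.5, 0.2],
                          [330.0, 3.0, 0.8],
                          [260.0, 1.0, 0.9]])
params = kernel_regression(np.array([0.4, 0.6]), protos_xy, protos_params)
print(params)  # parameter vector to feed into the chosen sound synthesis model

Moving the query position continuously across the interface morphs the resulting parameter vector, and hence the sound, smoothly between the emotional prototypes.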
Year of Publication
2016
Data Re-Use License
This Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions" is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
PUB-ID

Cite

Hermann T, Yang J, Nagai Y. (2016): Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions". Bielefeld University. doi:10.4119/unibi/2905039.
All files are available under the following license(s):
Full Text(s)
Description
Happy sound from each participant with the vocal model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
Disgusted sound from each participant with the vocal model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
The cluster centers of each emotion across all participants with the vocal model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
The global mean of all parameters collected with the vocal model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
Demonstration of the kernel regression method to morph between emotions.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
Disgusted sound from each participant with the abstract model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
Happy sound from each participant with the abstract model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
The cluster centers of each emotion across all participants with the abstract model.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Title
4.1 Abstract Sound Model
Description
The Abstract Sound Model interface demonstration.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Title
4.2 Vocal Sound Model
Description
The Vocal Sound Model interface demonstration.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Title
4.3.1 Synthesising Heartbeat Sounds
Description
The Heartbeat Sound Model interface demonstration.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Title
4.3.2 Synthesising Breathing Sounds
Description
The Breathing Sound Model interface demonstration.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
Description
Demonstration of the kernel regression method to interpolate between emotions.
Access Level
OA Open Access
Last Uploaded
2017-05-17T08:50:26Z
