Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions"
Hermann T, Yang J, Nagai Y (2016)
Bielefeld University.
Data publication
Abstract / Note
This paper presents a novel approach for using sound to externalize emotional states so that they become an object of communication and reflection, both for the users themselves and for interaction with others such as peers, parents, or therapists. We present abstract, vocal, and physiology-based sound synthesis models, each of whose sound spaces covers a range of emotional associations. The key idea in our approach is to use evolutionary optimization to enable users to find emotional prototypes, which are then fed into a kernel-regression-based mapping so that users can navigate the sound space via a low-dimensional interface, controlled in a playful way through tablet interactions. The method is intended to support people with autism spectrum disorder.
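As a rough illustration of the mapping step described in the abstract, the following is a minimal Python sketch of a Nadaraya-Watson kernel regression from a 2D tablet position to a vector of synthesis parameters. The anchor positions, prototype parameter values, and the Gaussian bandwidth are illustrative assumptions for this sketch, not the values or implementation used in the paper.

```python
import numpy as np

# Hypothetical 2D anchor positions on the tablet, one per emotional prototype.
anchors = np.array([
    [0.0, 0.0],   # e.g. "happy"
    [1.0, 0.0],   # e.g. "sad"
    [0.0, 1.0],   # e.g. "angry"
    [1.0, 1.0],   # e.g. "disgusted"
])

# Hypothetical synthesis parameters per prototype (pitch in Hz, amplitude,
# modulation rate); in the study these would come from the evolutionary search.
prototypes = np.array([
    [440.0, 0.8, 6.0],
    [220.0, 0.3, 1.5],
    [330.0, 0.9, 9.0],
    [180.0, 0.5, 0.5],
])

def map_position(xy, bandwidth=0.35):
    """Nadaraya-Watson kernel regression: average the prototype parameter
    vectors, weighted by a Gaussian kernel on the distance between the
    touch position and each anchor."""
    d2 = np.sum((anchors - np.asarray(xy, dtype=float)) ** 2, axis=1)
    weights = np.exp(-d2 / (2.0 * bandwidth ** 2))
    weights /= weights.sum()
    return weights @ prototypes

# Near the "happy" anchor the output is close to that prototype; positions
# in between blend the prototypes smoothly.
print(map_position([0.1, 0.1]))   # close to the "happy" parameters
print(map_position([0.5, 0.5]))   # a blend of all four prototypes
```

Because the kernel weights vary smoothly with the touch position, dragging a finger across the tablet morphs continuously between the emotional prototypes, which is the behavior such a low-dimensional interface relies on.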
Keywords
Emotions;
Sound;
Auditory Display;
Autism Spectrum Disorder (ASD)
Year of publication
2016
Copyright and licenses
Page URI
https://pub.uni-bielefeld.de/record/2905039
Cite
Hermann, T., Yang, J., & Nagai, Y. (2016). Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions". Bielefeld University. https://doi.org/10.4119/unibi/2905039
All files are available under the following license(s):
Open Database License (ODbL) v1.0
Full text(s)

All files are Open Access and were last uploaded on 2019-09-12T10:01:46Z.

- Happy sound from each participant with the vocal model.
  MD5 checksum: 7d210bf89b1d7426dc2ad458c52933a0
- Disgusted sound from each participant with the vocal model.
  MD5 checksum: bf0f2dcd75baf9b12cfed794fea95f2c
- The cluster centers of each emotion among all participants with the vocal model.
  MD5 checksum: 9f0f4e35dfadb73f3fb7d1559b87d376
- The global mean of all parameters collected with the vocal model.
  MD5 checksum: 05dc1504e24d4110b684ce04c5f194df
- Demonstration of the kernel regression method to morph between emotions.
  MD5 checksum: f16a4070b5b2614f5be473bcf5c050c4
- Disgusted sound from each participant with the abstract model.
  MD5 checksum: 21a502d792cfc0cd34e19ff30248d6d2
- Happy sound from each participant with the abstract model.
  MD5 checksum: d21f188b27e1d30a008111cce27091c0
- The cluster centers of each emotion among all participants with the abstract model.
  MD5 checksum: 099cc1fda0e39e5fb8d3c71a8ff90651
- 4.1 Abstract Sound Model: interface demonstration of the Abstract Sound Model.
  MD5 checksum: 2ce5a84037eccc5b0c49bcba5e04e919
- 4.2 Vocal Sound Model: interface demonstration of the Vocal Sound Model.
  MD5 checksum: 06e393be01f8b726dc97c9dcf9e26b6a
- 4.3.1 Synthesising Heartbeat Sounds: interface demonstration of the Heartbeat Sound Model.
  MD5 checksum: 9d8d74c874d66888df3039a4f5d117c5
- 4.3.2 Synthesising Breathing Sounds: interface demonstration of the Breathing Sound Model.
  MD5 checksum: fd959de3e54cda3e8033ccfe06913f59
- Demonstration of the kernel regression method to interpolate between emotions.
  MD5 checksum: 96ca2d4ae6bae4e0f8bfbfd8ccf91a31
Material in PUB:
Cited by
EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions
Hermann T, Yang J, Nagai Y (2016)
Presented at the Audio Mostly 2016, Norrköping, Sweden.