Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions"

Hermann T, Yang J, Nagai Y (2016)
Bielefeld University.

Download

S1.4_vocal_disgusted.mp3
S3_vocal_cluster_centers.mp3
Abstract
This paper presents a novel approach for using sound to externalize emotional states so that they become an object of communication and reflection, both for the users themselves and for interaction with others such as peers, parents, or therapists. We present three sound synthesis models (abstract, vocal, and physiology-based), each of whose sound spaces covers a range of emotional associations. The key idea of our approach is to use evolutionary optimization to let users find emotional prototypes, which are then fed into a kernel-regression-based mapping so that users can navigate the sound space via a low-dimensional interface, controlled in a playful way via tablet interactions. The method is intended to support people with autism spectrum disorder.
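The kernel-regression mapping described in the abstract can be illustrated as a Nadaraya-Watson weighted average: each user-found prototype is anchored at a position on the low-dimensional (e.g. 2-D tablet) interface, and a touch position blends the prototypes' synthesis parameter vectors with Gaussian weights. This is a minimal sketch, not the authors' implementation; the anchor positions, the 4-D parameter vectors, and the bandwidth value below are hypothetical.

```python
import numpy as np

def kernel_regression(query, anchors, prototypes, bandwidth=0.3):
    """Nadaraya-Watson estimate: blend prototype parameter vectors
    with Gaussian weights based on distance in the 2-D interface."""
    d2 = np.sum((anchors - query) ** 2, axis=1)   # squared distances to anchors
    w = np.exp(-d2 / (2 * bandwidth ** 2))        # Gaussian kernel weights
    return w @ prototypes / np.sum(w)             # normalized weighted average

# Hypothetical example: three emotional prototypes placed on the interface,
# each with a 4-D synthesis parameter vector.
anchors = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
prototypes = np.array([[0.2, 0.5, 0.1, 0.9],
                       [0.8, 0.3, 0.7, 0.2],
                       [0.5, 0.9, 0.4, 0.4]])
params = kernel_regression(np.array([0.5, 0.5]), anchors, prototypes)
```

Because the weights are positive and normalized, the output is a convex combination of the prototypes: touching near an anchor reproduces that prototype, while intermediate positions morph smoothly between emotions.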
Data Re-Use License
This Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions" is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/

Cite this

Hermann T, Yang J, Nagai Y. Supplementary Material for "EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions". Bielefeld University; 2016.
Main File(s)

All files are Open Access; last uploaded 2016-10-04T09:15:31Z.

- Happy sound from each participant with the vocal model.
- Disgusted sound from each participant with the vocal model.
- The cluster centers of each emotion among all participants with the vocal model.
- The global mean of all parameters collected with the vocal model.
- Demonstration of the kernel regression method to morph between emotions.
- Disgusted sound from each participant with the abstract model.
- Happy sound from each participant with the abstract model.
- The cluster centers of each emotion among all participants with the abstract model.
- 4.1 Abstract Sound Model: interface demonstration of the Abstract Sound Model.
- 4.2 Vocal Sound Model: interface demonstration of the Vocal Sound Model.
- 4.3.1 Synthesising Heartbeat Sounds: interface demonstration of the Heartbeat Sound Model.
- 4.3.2 Synthesising Breathing Sounds: interface demonstration of the Breathing Sound Model.
- Demonstration of the kernel regression method to interpolate between emotions.

This data publication is cited in the following publication:

EmoSonics – Interactive Sound Interfaces for the Externalization of Emotions
Hermann T, Yang J, Nagai Y (2016)
Presented at Audio Mostly 2016, Norrköping, Sweden.
