Supplementary Material for "Interactive Visualization and Sonification for Monitoring Complex Processes"

Hermann T, Niehus C, Ritter H (2003). Bielefeld University. doi:10.4119/unibi/2703597.

Abstract
This page provides the sound examples briefly described in Section 5 of the paper.

"blue cube" scenario

File/Track:
Simple Sonification: AVDisplay-BlueCube1-simple.mp3, complete scenario (3.00 MB)
Musical Sonification: AVDisplay-BlueCube1-musical.mp3, complete scenario (3.07 MB)
Description:
Elapsed Time / Current Activity
0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop.
0 min. 01 sec. The first module of the "Integration" group is initialized.
0 min. 03 sec. A new object (an aggregate of an unknown number of pieces) enters the view of the attention loop and is therefore added to the object memory.
0 min. 03 sec. The same object leaves the hand model, since an object can't be in the robot hand and on the table at the same time.
0 min. 19 sec. The first module of the "Robot Arm" group moves the robot arm to a predicted position.
0 min. 26 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
0 min. 51 sec. Because a "Robot Arm" module ("state batch") is absent, the user initializes this module of the group.
0 min. 51 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
2 min. 17 sec. A user instructs the system to grasp a blue cube via verbal and gesture interaction.
3 min. 48 sec. A new "Speech Understanding" module establishes a connection to the "Visual Attention" group. Its task is to combine a linguistic phrase with a gesture phrase.
4 min. 04 sec. A new "Visual Attention" module, "LookForHand", is initialized to detect the human hand that performs the gesture.
4 min. 04 sec. Together with the "LookForHand" module, the "Get3DPoint" module is initialized. Its task is to compute the 3D point of the fingertip.
4 min. 11 sec. The task, including the gesture, is now completely known.
5 min. 10 sec. A new verbal and gesture interaction gives a new instruction to the system.
5 min. 28 sec. The evaluation of the picture from the stereo camera yields an inconsistent 3D coordinate of the fingertip.
5 min. 33 sec. A new computation of the fingertip position yields a valid 3D point.
5 min. 40 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
6 min. 05 sec. The robot arm drives to the predicted object position.
6 min. 11 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
Duration: about 6 minutes and 20 seconds
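The table above pairs an elapsed-time label with an activity for each logged event. As an illustration only (the AVDisplay software itself is not part of this data set; `parse_elapsed` and the event tuples below are hypothetical), such a log can be turned into machine-readable events for replay or analysis:

```python
import re

def parse_elapsed(label: str) -> int:
    """Convert an elapsed-time label like '0 min. 03 sec.' to seconds."""
    m = re.match(r"(\d+)\s*min\.\s*(\d+)\s*sec", label)
    if not m:
        raise ValueError(f"unrecognized time label: {label!r}")
    return int(m.group(1)) * 60 + int(m.group(2))

# A few events from the "blue cube" scenario, as (seconds, group, activity):
events = [
    (parse_elapsed("0 min. 00 sec."), "Visual Attention", "attention loop started"),
    (parse_elapsed("2 min. 17 sec."), "Speech Understanding", "grasp instruction received"),
    (parse_elapsed("6 min. 11 sec."), "Robot Arm", "position correction requested"),
]
```

Sorting such tuples by their first element reproduces the chronological order used in the tables on this page.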

"red cube" scenario

File/Track:
Simple Sonification: AVDisplay-RedCube1-simple.mp3, complete scenario (985 KB)
Musical Sonification: AVDisplay-RedCube1-musical.mp3, complete scenario (920 KB)
Description:
Elapsed Time / Current Activity
0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop.
0 min. 04 sec. A blue cube leaves the view of the attention loop and is therefore deleted from the object memory.
0 min. 04 sec. The same object enters the memory of the hand model.
0 min. 07 sec. A user instructs the system to grasp a red cube via verbal and gesture interaction.
0 min. 07 sec. A new "Speech Understanding" module establishes a connection to the "Visual Attention" group.
0 min. 16 sec. A new "Visual Attention" module, "LookForHand", is initialized to detect the human hand that performs the gesture.
0 min. 19 sec. Together with the "LookForHand" module, the "Get3DPoint" module, which computes the 3D point of the fingertip, is initialized.
0 min. 24 sec. The task, including the gesture, is now completely known (the "Speech Understanding" module "M7-whypmem" reports that a gesture was found).
0 min. 25 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
0 min. 39 sec. The last "Visual Attention" module finishes its loop. In this and the following scenarios the visual attention loop stops when the robot arm begins moving; after the movement the modules resume their observation.
0 min. 52 sec. The modules of the "Robot Arm" group are initialized.
0 min. 53 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
0 min. 57 sec. The robot arm drives to the predicted object position.
1 min. 02 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
1 min. 05 sec. The "Robot Arm" module "ps2serv" performs the derived correction movement.
1 min. 18 sec. The "Robot Arm" module "mainpat" reports after 7 correction movements that the best correction has been reached.
1 min. 19 sec. The "Robot Arm" module "state batch" generates a grasping action.
1 min. 21 sec. The red cube is raised by the robot.
1 min. 24 sec. The robot drives to another position; after this movement it returns to the previous position.
1 min. 27 sec. The robot sets the red cube down at its original position.
1 min. 29 sec. The robot turns back to its home position.
1 min. 34 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
1 min. 46 sec. The modules of the "Visual Attention" group resume their observation.
Duration: about 2 minutes

"bad lighting" scenario

File/Track:
Simple Sonification: AVDisplay-BadLighting1-simple.mp3, complete scenario (833 KB)
Musical Sonification: AVDisplay-BadLighting1-musical.mp3, complete scenario (837 KB)
Description:
Elapsed Time / Current Activity
0 min. 00 sec. The first module of the "Visual Attention" group starts the attention loop. This is a different module than in the other scenarios, and the recording starts when the attention loop is already running.
0 min. 07 sec. The "Visual Attention" module "PicsFromEyes" reports that the lighting condition is deteriorating.
0 min. 27 sec. Another "Visual Attention" module reports a bad condition, following the bad condition of module "PicsFromEyes".
0 min. 39 sec. A user instructs the system to grasp a red cube via verbal and gesture interaction.
0 min. 39 sec. The "Integration" module "M7-ResourceCtrl" starts the search for the communicated gesture.
1 min. 25 sec. The "Integration" module "M7-ResourceCtrl" stops the search for the communicated gesture (no gesture could be found).
1 min. 27 sec. The task, including the gesture, can't be carried out (the "Speech Understanding" module "M7-whypmem" reports that no gesture was found).
Duration: about 1 minute and 45 seconds

"module absent" scenario

File/Track:
Simple Sonification: AVDisplay-ModuleAbsent1-simple.mp3, complete scenario (1.65 MB)
Musical Sonification: AVDisplay-ModuleAbsent1-musical.mp3, complete scenario (1.65 MB)
Description:
Elapsed Time / Current Activity
0 min. 00 sec. The first module ("handcam0") of the "Robot Arm" group is instantiated.
0 min. 05 sec. After three "info" messages the module "handcam0" transmits an "exit" message.
0 min. 07 sec. Three unknown NEO/NST modules start their "loop" activities. Unknown modules are always sonified by the simple sonification.
0 min. 20 sec. One of the unknown NEO/NST modules notifies a change of its state followed by an "action" message.
0 min. 48 sec. After three further state changes of one unknown NEO/NST module, another unknown NEO/NST module performs a similar activity.
1 min. 00 sec. After the unknown module activities have stopped, the "Visual Attention" module "PicsFromEyes" starts the attention loop.
1 min. 05 sec. The first "Integration" module "M7-ObjectMem" is initialized.
1 min. 07 sec. The "Integration" module "M7-Handmodelle" is initialized.
1 min. 09 sec. The "Integration" module "nbg_VIEW.3D.avdtime" is initialized.
1 min. 11 sec. The modules of the "Speech Understanding" group and another "Integration" module are initialized.
1 min. 39 sec. The modules of the "Robot Arm" group except the module "handcam0" are initialized.
1 min. 40 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
2 min. 19 sec. A user instructs the system to grasp a red cube via verbal and gesture interaction.
2 min. 28 sec. The "Visual Attention" module "LookForHand" reports the position of the hand used for the gesture.
2 min. 30 sec. Together with the "LookForHand" module, the "Get3DPoint" module computes the 3D point of the fingertip. Since its coordinates are inconsistent, the module sends an "error" message.
2 min. 34 sec. Another evaluation of the hand results in new coordinates.
2 min. 36 sec. The module "Get3DPoint" now derives a correct position of the finger-tip.
2 min. 40 sec. The task, including the gesture, is now completely known (the "Speech Understanding" module "M7-whypmem" reports that a gesture was found).
2 min. 41 sec. The "Integration" module "M7-ResourceCtrl" transmits the coordinates to the "Robot Arm" group.
2 min. 57 sec. The robot arm drives to the predicted object position.
3 min. 02 sec. The "mainpat" module of the "Robot Arm" group asks the hand camera for a position correction.
3 min. 04 sec. Because of the absent module "handcam0" the "Robot Arm" module "mainpat" notifies that the communicated object can't be found.
3 min. 04 sec. The "mainpat" module of the "Robot Arm" group asks once more the hand camera for a position correction.
3 min. 06 sec. Because of the absent module "handcam0" the "Robot Arm" module "mainpat" notifies again that the communicated object can't be found.
3 min. 07 sec. The robot turns back to its home position.
3 min. 12 sec. The "mainpat" module notifies the user that the "Robot Arm" group is ready to move to a predicted position.
3 min. 16 sec. The modules of the "Visual Attention" group resume their observation.
Duration: about 3 minutes and 30 seconds
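The scenario above notes that unknown NEO/NST modules are always handled by the simple sonification, while modules of the known groups can be given the musical treatment. A minimal sketch of that routing rule, assuming a hypothetical grouping table (`KNOWN_GROUPS` and `choose_sonifier` are illustrative names, not part of the AVDisplay software):

```python
# Module names as they appear in the scenario descriptions on this page.
# The grouping is an assumption reconstructed from those descriptions.
KNOWN_GROUPS = {
    "Visual Attention": {"PicsFromEyes", "LookForHand", "Get3DPoint"},
    "Integration": {"M7-ObjectMem", "M7-Handmodelle", "M7-ResourceCtrl"},
    "Speech Understanding": {"M7-whypmem"},
    "Robot Arm": {"mainpat", "handcam0", "ps2serv", "state batch"},
}

def choose_sonifier(module: str) -> str:
    """Route a module's messages: modules of a known group may use the
    musical sonification; unknown NEO/NST modules fall back to the
    simple sonification."""
    for members in KNOWN_GROUPS.values():
        if module in members:
            return "musical"
    return "simple"
```

Under this rule, a message from "mainpat" would be routed to the musical sonification, while a message from an unrecognized module name would fall back to the simple one, matching the behavior described in the "module absent" scenario.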
Data Re-Use License
This Supplementary Material for "Interactive Visualization and Sonification for Monitoring Complex Processes" is made available under the Open Database License: http://opendatacommons.org/licenses/odbl/1.0. Any rights in individual contents of the database are licensed under the Database Contents License: http://opendatacommons.org/licenses/dbcl/1.0/
All files are available as Open Access. Last uploaded: 2017-06-23T11:10:06Z.

This data publication is cited in the following publications:
Interactive Visualization and Sonification for Monitoring Complex Processes
Hermann T, Niehus C, Ritter H (2003)
In: Proceedings of the International Conference on Auditory Display. Brazil E, Shinn-Cunningham B (Eds); Boston, MA, USA: Boston University Publications Production Department: 247-250.
