Classification of multiscale spatiotemporal energy features for video segmentation and dynamic objects prioritisation

Belardinelli A, Carbone A, Schneider WX (2013)
Pattern Recognition Letters 34(7): 713-722.

Journal Article | Published | English


Abstract
High-level visual cognitive abilities such as scene understanding and behavioural analysis are modulated by attentive selective processes. These in turn rely on pre-attentive operations that deliver a perceptual organisation of the visual input and enable the extraction of meaningful "chunks" of information. Specifically, the extraction and prioritisation of moving objects is a crucial step in the processing of dynamic scenes. Motion is of course a powerful cue for grouping regions and segregating objects, but not all kinds of motion are equally meaningful, nor should they be equally attended. On a coarse level, most interesting moving objects are associated with coherent motion, reflecting our sensitivity to biological motion. On the other hand, attention operates on a higher level, prioritising what moves differently with respect to both its surroundings and the global scene. In this paper, we propose how a qualitative segmentation of multiscale spatiotemporal energy features according to their frequency spectrum distribution can be used to pre-attentively extract regions of interest. We also show that discrimination boundaries between classes in the segmentation phase can be learned automatically and efficiently by a Support Vector Machine classifier in a multi-class implementation. Motion-related features are shown to best predict human fixations on an extensive dataset. The model generalises well to datasets other than the one used for training, provided scale is taken into account in the feature extraction step. Regions labelled as coherently moving are clustered into moving object files, described by the magnitude and phase of the pooled motion energy. The method succeeds in extracting meaningful moving objects from the background and in identifying other, less interesting motion patterns. A saliency function is finally computed on an object basis, instead of on a pixel basis as in most current approaches. The same features are thus used for segmentation and selective attention and can be further used for recognition and scene interpretation. © 2012 Elsevier B.V. All rights reserved.
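
As a rough illustration of the pipeline sketched in the abstract, the snippet below pairs a toy spatiotemporal energy descriptor with a multi-class SVM. It is a minimal sketch, not the authors' implementation: the subsampled 3-D FFT power spectrum stands in for the paper's bank of multiscale spatiotemporal (Gabor-like) filters, the three motion classes (static, coherent, unstructured) are hypothetical labels, and scikit-learn's SVC supplies the multi-class SVM.

import numpy as np
from sklearn.svm import SVC

def spatiotemporal_energy(clip, scales=3):
    """Toy spatiotemporal energy descriptor for an x-y-t clip.

    `clip` is a (T, H, W) grayscale array. The 3-D FFT power spectrum,
    subsampled to mimic coarser scales, stands in for the quadrature
    filter bank a real motion-energy model would use.
    """
    spectrum = np.abs(np.fft.fftn(clip)) ** 2        # motion energy ~ squared magnitude
    feats = []
    for s in range(scales):
        band = spectrum[::2**s, ::2**s, ::2**s]      # coarser frequency sampling per scale
        feats.extend([band.mean(), band.std()])
    return np.array(feats)

# Hypothetical training set: one descriptor per region, with made-up labels
# such as 0 = static, 1 = coherent motion, 2 = flicker/unstructured motion.
rng = np.random.default_rng(0)
X_train = np.stack([spatiotemporal_energy(rng.random((16, 32, 32))) for _ in range(60)])
y_train = rng.integers(0, 3, size=60)

# Multi-class SVM (one-vs-one under the hood in scikit-learn) learns the
# discrimination boundaries between the motion classes.
clf = SVC(kernel="rbf", gamma="scale")
clf.fit(X_train, y_train)

# Regions predicted as coherently moving would then be clustered into object
# files (pooled motion-energy magnitude and phase) and scored for saliency.
new_region = spatiotemporal_energy(rng.random((16, 32, 32))).reshape(1, -1)
print(clf.predict(new_region))

With random inputs the predicted label is of course meaningless; the sketch only conveys the shape of the pipeline: energy feature extraction, multi-class SVM labelling, then object-level pooling and saliency.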

Cite this

Belardinelli, A., Carbone, A., & Schneider, W. X. (2013). Classification of multiscale spatiotemporal energy features for video segmentation and dynamic objects prioritisation. Pattern Recognition Letters, 34(7), 713-722.