Supervised learning in the presence of concept drift: a modelling framework

Straat M, Abadi F, Kan Z, Göpfert C, Hammer B, Biehl M (2021)
Neural Computing and Applications.

Journal article | E-publication ahead of print | English
 
Download
No files have been uploaded. Publication record only!
Author(s)
Straat, M.; Abadi, F.; Kan, Z.; Göpfert, Christina (Bielefeld University); Hammer, Barbara (Bielefeld University); Biehl, M.
Abstract / Notes
We present a modelling framework for the investigation of supervised learning in non-stationary environments. Specifically, we model two example types of learning systems: prototype-based learning vector quantization (LVQ) for classification and shallow, layered neural networks for regression tasks. We investigate so-called student-teacher scenarios in which the systems are trained from a stream of high-dimensional, labelled data. Properties of the target task are considered to be non-stationary due to drift processes while the training is performed. Different types of concept drift are studied, which affect the density of example inputs only, the target rule itself, or both. By applying methods from statistical physics, we develop a modelling framework for the mathematical analysis of the training dynamics in non-stationary environments. Our results show that standard LVQ algorithms are already suitable for training in non-stationary environments to a certain extent. However, the application of weight decay as an explicit mechanism of forgetting does not improve the performance under the considered drift processes. Furthermore, we investigate gradient-based training of layered neural networks with sigmoidal activation functions and compare it with the use of rectified linear units. Our findings show that the sensitivity to concept drift and the effectiveness of weight decay differ significantly between the two types of activation function.
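
As a concrete illustration of the setting described in the abstract, the following minimal sketch (not the authors' code) trains an online LVQ1 classifier on a stream whose target rule drifts while learning proceeds, with weight decay as an explicit forgetting mechanism. The drift schedule, parameter values, and all variable names are illustrative assumptions.

# Minimal sketch: online LVQ1 under real concept drift, with optional weight decay.
# All parameters and the drift model are illustrative assumptions, not the paper's setup.
import numpy as np

rng = np.random.default_rng(0)
N = 100          # input dimension (the paper considers the high-dimensional limit)
eta = 0.05       # learning rate
gamma = 1e-3     # weight-decay strength (explicit forgetting); set to 0.0 to disable
T = 5000         # number of stream examples

# Two teacher cluster centres defining the (drifting) target rule.
B_plus = rng.normal(size=N); B_plus /= np.linalg.norm(B_plus)
B_minus = rng.normal(size=N); B_minus /= np.linalg.norm(B_minus)

# Student prototypes, one per class.
w = {+1: rng.normal(scale=0.01, size=N), -1: rng.normal(scale=0.01, size=N)}

for t in range(T):
    # Real drift (assumed form): slowly perturb and re-normalise the teacher centres,
    # so the target rule changes while training is performed.
    B_plus = B_plus + 0.01 * rng.normal(size=N) / np.sqrt(N)
    B_plus /= np.linalg.norm(B_plus)
    B_minus = B_minus + 0.01 * rng.normal(size=N) / np.sqrt(N)
    B_minus /= np.linalg.norm(B_minus)

    # Draw one labelled example from the current (non-stationary) mixture.
    y = 1 if rng.random() < 0.5 else -1
    x = (B_plus if y == 1 else B_minus) + rng.normal(size=N)

    # LVQ1 update: move the nearest prototype towards the example if its class
    # matches, away from it otherwise; then apply multiplicative weight decay.
    winner = min(w, key=lambda c: np.linalg.norm(x - w[c]))
    sign = 1.0 if winner == y else -1.0
    w[winner] += eta * sign * (x - w[winner])
    for c in w:
        w[c] *= (1.0 - gamma)   # weight decay as an explicit forgetting mechanism

# Rough check: classification error of the learned prototypes on fresh examples
# drawn from the current (drifted) task.
errors = 0
for _ in range(1000):
    y = 1 if rng.random() < 0.5 else -1
    x = (B_plus if y == 1 else B_minus) + rng.normal(size=N)
    pred = min(w, key=lambda c: np.linalg.norm(x - w[c]))
    errors += (pred != y)
print("error on current task:", errors / 1000)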
Keywords
Classification; Regression; Supervised learning; Drifting concepts; Learning vector quantization; Layered neural networks
Year of publication
2021
Journal title
Neural Computing and Applications
ISSN
0941-0643
eISSN
1433-3058
Page URI
https://pub.uni-bielefeld.de/record/2955115

Cite

Straat, M., Abadi, F., Kan, Z., Göpfert, C., Hammer, B., & Biehl, M. (2021). Supervised learning in the presence of concept drift: a modelling framework. Neural Computing and Applications. https://doi.org/10.1007/s00521-021-06035-1