Learning interpretable kernelized prototype-based models
Hofmann D, Schleif F-M, Paaßen B, Hammer B (2014)
Neurocomputing 141: 84-96.
Journal Article
| Published | English
Download
No files have been uploaded. Publication record only!
Abstract / Notes
Since they represent a model in terms of a few typical representatives, prototype-based learning techniques such as learning vector quantization (LVQ) constitute directly interpretable machine learning methods. Recently, several LVQ schemes have been extended towards kernelized or dissimilarity-based versions, which can be applied if data are represented by pairwise similarities or dissimilarities only. This opens the way towards applications in domains where data are typically not represented in vectorial form. Although kernel LVQ still represents models by typical prototypes, interpretability is usually lost this way: since no vector-space model is available, prototypes are represented indirectly as combinations of data points. In this contribution, we extend a recent kernel LVQ scheme by sparse approximations to overcome this problem: instead of the full coefficient vectors, the few exemplars which represent a prototype can then be directly inspected by practitioners in the same way as data points. For this purpose, we investigate different possibilities to approximate a prototype by a sparse counterpart during or after training, relying on different heuristics or approximation algorithms, respectively: in particular, sparsity constraints during training, geometric approaches, orthogonal matching pursuit, and core-set techniques for the minimum enclosing ball problem. We discuss the behavior of these methods on several benchmark problems with respect to quality, sparsity, and interpretability, and we propose different measures for quantitatively evaluating the performance of the approaches.
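One of the approximation strategies named in the abstract, orthogonal matching pursuit, can be sketched for this setting as follows. A kernelized prototype is a combination w = Σ_i α_i φ(x_i) of mapped data points; the goal is a sparse coefficient vector β (few non-zero entries, i.e. few exemplars) whose induced prototype stays close to w in the feature space. This is a minimal illustrative sketch, not the authors' implementation: the function name `kernel_omp` and the least-squares refitting step on the current support are assumptions for illustration.

```python
import numpy as np

def kernel_omp(K, alpha, k):
    """Greedy sparse approximation (OMP in the kernel-induced feature
    space) of a prototype w = sum_i alpha[i] * phi(x_i).

    K     : (n, n) kernel matrix K[i, j] = <phi(x_i), phi(x_j)>
    alpha : dense coefficient vector of the prototype
    k     : desired number of exemplars (non-zero coefficients)

    Returns a coefficient vector beta with at most k non-zeros.
    """
    n = len(alpha)
    support = []           # indices of the selected exemplars
    beta = np.zeros(n)
    for _ in range(k):
        # Correlation of every candidate exemplar with the residual:
        # <phi(x_j), w - w_sparse> = (K @ (alpha - beta))[j]
        corr = K @ (alpha - beta)
        corr[support] = 0.0            # skip already-selected exemplars
        j = int(np.argmax(np.abs(corr)))
        support.append(j)
        # Refit the coefficients on the enlarged support by least squares
        # in feature space: solve K_SS b = K_S alpha.
        Kss = K[np.ix_(support, support)]
        beta = np.zeros(n)
        beta[support] = np.linalg.solve(Kss, K[support] @ alpha)
    return beta
```

The resulting support directly names the k data points that stand in for the prototype, which is exactly the kind of exemplar-level inspection the paper aims at; the feature-space approximation error can be checked as (α − β)ᵀ K (α − β).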
Keywords
Interpretable models
Publication Year
2014
Journal Title
Neurocomputing
Volume
141
Page(s)
84-96
ISSN
0925-2312
Page URI
https://pub.uni-bielefeld.de/record/2678214
Cite
Hofmann D, Schleif F-M, Paaßen B, Hammer B. Learning interpretable kernelized prototype-based models. Neurocomputing. 2014;141:84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., & Hammer, B. (2014). Learning interpretable kernelized prototype-based models. Neurocomputing, 141, 84-96. doi:10.1016/j.neucom.2014.03.003
Hofmann, Daniela, Schleif, Frank-Michael, Paaßen, Benjamin, and Hammer, Barbara. 2014. “Learning interpretable kernelized prototype-based models”. Neurocomputing 141: 84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., and Hammer, B. (2014). Learning interpretable kernelized prototype-based models. Neurocomputing 141, 84-96.
Hofmann, D., et al., 2014. Learning interpretable kernelized prototype-based models. Neurocomputing, 141, p 84-96.
D. Hofmann, et al., “Learning interpretable kernelized prototype-based models”, Neurocomputing, vol. 141, 2014, pp. 84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., Hammer, B.: Learning interpretable kernelized prototype-based models. Neurocomputing. 141, 84-96 (2014).
Hofmann, Daniela, Schleif, Frank-Michael, Paaßen, Benjamin, and Hammer, Barbara. “Learning interpretable kernelized prototype-based models”. Neurocomputing 141 (2014): 84-96.
Link(s) to Full Text(s)
Access Level
Closed Access