Learning interpretable kernelized prototype-based models

Hofmann D, Schleif F-M, Paaßen B, Hammer B (2014)
Neurocomputing 141: 84-96.

Journal Article | Published | English


Abstract
Since they represent a model in terms of a few typical representatives, prototype-based learning techniques such as learning vector quantization (LVQ) constitute directly interpretable machine learning methods. Recently, several LVQ schemes have been extended towards a kernelized or dissimilarity-based version, which can be applied if data are represented by pairwise similarities or dissimilarities only. This opens the way towards their application in domains where data are typically not represented in vectorial form. Although kernel LVQ still represents models by typical prototypes, interpretability is usually lost this way: since no vector space model is available, prototypes are represented indirectly as combinations of data points. In this contribution, we extend a recent kernel LVQ scheme by sparse approximations to overcome this problem: instead of the full coefficient vectors, a few exemplars which represent each prototype can be inspected directly by practitioners, in the same way as the data. For this purpose, we investigate different possibilities to approximate a prototype by a sparse counterpart during or after training, relying on different heuristics or approximation algorithms, respectively; in particular sparsity constraints while training, geometric approaches, orthogonal matching pursuit, and core techniques for the minimum enclosing ball problem. We discuss the behavior of these methods on several benchmark problems with respect to quality, sparsity, and interpretability, and we propose several measures to quantitatively evaluate the performance of the approaches.
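To illustrate the sparse-approximation idea outlined in the abstract, the following sketch greedily selects a few exemplars whose kernel-space combination approximates a dense prototype w = Σᵢ αᵢ φ(xᵢ). This is an OMP-style greedy heuristic written for illustration only; the function name and interface are assumptions, and the paper's actual algorithms (training-time sparsity constraints, geometric approaches, core-set techniques) differ in detail.

```python
import numpy as np

def sparse_prototype(K, alpha, k):
    """Greedily approximate a kernel-space prototype w = sum_i alpha_i phi(x_i)
    by a k-sparse coefficient vector beta (illustrative OMP-style heuristic,
    not the paper's exact method).

    K     : (n, n) kernel (Gram) matrix
    alpha : (n,) dense prototype coefficients
    k     : number of exemplars to retain
    """
    n = len(alpha)
    support = []
    b = K @ alpha  # b[i] = <phi(x_i), w>, the feature-space inner products
    for _ in range(k):
        best_j, best_err = None, np.inf
        for j in range(n):
            if j in support:
                continue
            S = support + [j]
            # Optimal coefficients on support S: K_SS beta_S = (K alpha)_S
            beta_S = np.linalg.solve(K[np.ix_(S, S)], b[S])
            # Squared feature-space error, up to the constant alpha^T K alpha
            err = -b[S] @ beta_S
            if err < best_err:
                best_err, best_j = err, j
        support.append(best_j)
    beta = np.zeros(n)
    beta[support] = np.linalg.solve(K[np.ix_(support, support)], b[support])
    return beta, support
```

With an identity Gram matrix (orthonormal points), the heuristic simply keeps the largest coefficients, so a prototype dominated by one exemplar is reduced to exactly that exemplar; the resulting support set is what a practitioner would inspect in place of the full coefficient vector.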

Cite this

Hofmann D, Schleif F-M, Paaßen B, Hammer B. Learning interpretable kernelized prototype-based models. Neurocomputing. 2014;141:84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., & Hammer, B. (2014). Learning interpretable kernelized prototype-based models. Neurocomputing, 141, 84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., and Hammer, B. (2014). Learning interpretable kernelized prototype-based models. Neurocomputing 141, 84-96.
Hofmann, D., et al., 2014. Learning interpretable kernelized prototype-based models. Neurocomputing, 141, pp. 84-96.
D. Hofmann, et al., “Learning interpretable kernelized prototype-based models”, Neurocomputing, vol. 141, 2014, pp. 84-96.
Hofmann, D., Schleif, F.-M., Paaßen, B., Hammer, B.: Learning interpretable kernelized prototype-based models. Neurocomputing. 141, 84-96 (2014).
Hofmann, Daniela, Schleif, Frank-Michael, Paaßen, Benjamin, and Hammer, Barbara. “Learning interpretable kernelized prototype-based models”. Neurocomputing 141 (2014): 84-96.
This publication cites the following data publications:
VBB Midi Dataset
Paaßen B (2013)
Bielefeld University.
