Explaining Neural Networks - Deep and Shallow
Hammer B (2024)
In: Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024). Villmann T, Kaden M, Geweniger T, Schleif F-M (Eds); Lecture Notes in Networks and Systems, 1087. Cham: Springer: 139-140.
Conference Paper
| Published | English
Download
No files have been uploaded. Publication record only.
Author
Hammer, Barbara
Editor(s)
Villmann, Thomas;
Kaden, Marika;
Geweniger, Tina;
Schleif, Frank-Michael
Abstract / Remark
Variable importance determination refers to the challenge of identifying the most relevant input dimensions or features for a given learning task and quantifying their relevance, either with respect to a local decision or a global model. Feature relevance determination constitutes a foundation for feature selection, and it enables an intuitive insight into the rationale of model decisions. Indeed, it constitutes one of the oldest and most prominent explanation technologies for machine learning models, with relevance for both deep and shallow networks. A huge number of measures have been proposed, such as mutual information, permutation feature importance, DeepLIFT, LIME, GMLVQ, or Shapley values, to name just a few.
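To make one of the measures listed above concrete, the following is a minimal sketch of permutation feature importance for a global model: the relevance of a feature is the average drop in model score when that feature's column is shuffled, which breaks its relation to the target. The interface (a fitted model with a predict method and a score function such as accuracy or R²) is an illustrative assumption, not code from the publication.

```python
import numpy as np

def permutation_importance(model, X, y, score, n_repeats=10, seed=0):
    """Relevance of each feature = mean drop in score when that
    feature's column is shuffled (hypothetical helper, for illustration)."""
    rng = np.random.default_rng(seed)
    baseline = score(y, model.predict(X))        # score on intact data
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        drops = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            rng.shuffle(X_perm[:, j])            # destroy feature j only
            drops.append(baseline - score(y, model.predict(X_perm)))
        importances[j] = float(np.mean(drops))   # average over repeats
    return importances
```

A feature whose permutation barely changes the score receives an importance near zero; strongly relevant features produce large drops.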
Within the talk, I will address recent extensions of feature relevance determination, which become necessary as machine learning models are increasingly used in everyday life. Here, models face an open environment, possibly changing dynamics, and the necessity of model adaptation to account for changes of the underlying distribution. At present, feature relevance determination focuses almost solely on static scenarios and batch training. In the talk, I will target the question of how to efficiently and effectively accompany a model that learns incrementally with feature relevance determination methods [1, 3]. As a second challenge, features are often not mutually independent, and the relevance of groups rather than single features should be judged. While mathematical models such as Shapley values take feature correlations into account for individual additive feature relevance terms, it is unclear how to efficiently and effectively extend those to groups of features. In the talk, I will discuss novel methods for the efficient computation of feature interaction indices [2, 4].
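The kind of quantity meant by a feature interaction index can be illustrated with the classical pairwise Shapley interaction index of Grabisch and Roubens, which averages a discrete second derivative of a coalition value function. The Monte Carlo estimator below is a generic sketch under the assumption of a user-supplied value function v mapping a feature subset to a model score; it is not one of the efficient estimators developed in [2, 4].

```python
import random

def shapley_interaction(v, n, i, j, n_samples=1000, seed=0):
    """Monte Carlo estimate of the pairwise Shapley interaction index
    I(i, j) for features i, j among n features (indices 0..n-1).

    v : callable mapping a frozenset of feature indices to a score
        (assumed ingredient; e.g. model performance on that subset)."""
    rng = random.Random(seed)
    rest = [k for k in range(n) if k not in (i, j)]
    total = 0.0
    for _ in range(n_samples):
        size = rng.randint(0, n - 2)              # uniform coalition size,
        S = frozenset(rng.sample(rest, size))     # then uniform coalition:
                                                  # matches the Shapley weights
        total += (v(S | {i, j}) - v(S | {i})      # discrete second derivative:
                  - v(S | {j}) + v(S))            # joint gain minus single gains
    return total / n_samples
```

A positive index indicates that i and j contribute more together than separately; the exact index sums over all 2^(n-2) coalitions, which motivates the search for efficient approximations addressed in the talk.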
Year of Publication
2024
Title of the Conference Proceedings
Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024)
Series or Journal Title
Lecture Notes in Networks and Systems
Volume
1087
Page(s)
139-140
Conference
15th International Workshop on Self-Organizing Maps, Learning Vector Quantization and Beyond (WSOM)
Conference Location
Mittweida, Germany
Conference Date
2024-07-10 – 2024-07-12
ISBN
978-3-031-67158-6,
978-3-031-67159-3
ISSN
2367-3370
eISSN
2367-3389
Page URI
https://pub.uni-bielefeld.de/record/2994158
Cite
Hammer B. Explaining Neural Networks - Deep and Shallow. In: Villmann T, Kaden M, Geweniger T, Schleif F-M, eds. Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024). Lecture Notes in Networks and Systems. Vol 1087. Cham: Springer; 2024: 139-140.
Hammer, B. (2024). Explaining Neural Networks - Deep and Shallow. In T. Villmann, M. Kaden, T. Geweniger, & F.-M. Schleif (Eds.), Lecture Notes in Networks and Systems: Vol. 1087. Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024) (pp. 139-140). Cham: Springer. https://doi.org/10.1007/978-3-031-67159-3_16
Hammer, Barbara. 2024. “Explaining Neural Networks - Deep and Shallow”. In Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024), ed. Thomas Villmann, Marika Kaden, Tina Geweniger, and Frank-Michael Schleif, 1087:139-140. Lecture Notes in Networks and Systems. Cham: Springer.
Hammer, B. (2024). “Explaining Neural Networks - Deep and Shallow” in Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024), Villmann, T., Kaden, M., Geweniger, T., and Schleif, F.-M. eds. Lecture Notes in Networks and Systems, vol. 1087, (Cham: Springer), 139-140.
Hammer, B., 2024. Explaining Neural Networks - Deep and Shallow. In T. Villmann, et al., eds. Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024). Lecture Notes in Networks and Systems. no. 1087. Cham: Springer, pp. 139-140.
B. Hammer, “Explaining Neural Networks - Deep and Shallow”, Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024), T. Villmann, et al., eds., Lecture Notes in Networks and Systems, vol. 1087, Cham: Springer, 2024, pp. 139-140.
Hammer, B.: Explaining Neural Networks - Deep and Shallow. In: Villmann, T., Kaden, M., Geweniger, T., and Schleif, F.-M. (eds.) Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024). Lecture Notes in Networks and Systems. 1087, pp. 139-140. Springer, Cham (2024).
Hammer, Barbara. “Explaining Neural Networks - Deep and Shallow”. Advances in Self-Organizing Maps, Learning Vector Quantization, Interpretable Machine Learning, and Beyond (WSOM+ 2024). Ed. Thomas Villmann, Marika Kaden, Tina Geweniger, and Frank-Michael Schleif. Cham: Springer, 2024. Vol. 1087. Lecture Notes in Networks and Systems. 139-140.