Agnostic Explanation of Model Change based on Feature Importance
Muschalik M, Fumagalli F, Hammer B, Hüllermeier E (2022)
KI - Künstliche Intelligenz.
Journal article
| E-publication ahead of print | English
Download
No files have been uploaded. Publication record only!
Author
Institution
Faculty of Technology > Machine Learning Group
Center of Excellence - Cognitive Interaction Technology CITEC > Machine Learning
SFB/Transregio 318 Constructing Explainability > Project Area C: Representing and Computing Explanations > Subproject C03: Interpretable Machine Learning: Explainability in Dynamic Environments
Project
Abstract / Notes
Explainable Artificial Intelligence (XAI) has mainly focused on static learning tasks so far. In this paper, we consider XAI in the context of online learning in dynamic environments, such as learning from real-time data streams, where models are learned incrementally and continuously adapted over the course of time. More specifically, we motivate the problem of explaining model change, i.e., explaining the difference between models before and after adaptation, instead of the models themselves. In this regard, we provide the first efficient model-agnostic approach to dynamically detecting, quantifying, and explaining significant model changes. Our approach is based on an adaptation of the well-known Permutation Feature Importance (PFI) measure. It includes two hyperparameters that control the sensitivity and directly influence explanation frequency, so that a human user can adjust the method to individual requirements and application needs. We assess and validate our method’s efficacy on illustrative synthetic data streams with three popular model classes.
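The abstract states that the method builds on the classic Permutation Feature Importance (PFI) measure. Below is a minimal sketch of static PFI only — not the paper's incremental, change-detecting adaptation — with an illustrative model, loss, and data chosen for this example:

```python
import numpy as np

def permutation_feature_importance(model, X, y, loss, n_repeats=5, seed=None):
    """Classic (static) PFI: the importance of feature j is the average
    increase in loss when column j is randomly permuted, which breaks
    the association between that feature and the target."""
    rng = np.random.default_rng(seed)
    base_loss = loss(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        deltas = []
        for _ in range(n_repeats):
            X_perm = X.copy()
            X_perm[:, j] = rng.permutation(X_perm[:, j])  # shuffle feature j
            deltas.append(loss(y, model(X_perm)) - base_loss)
        importances[j] = np.mean(deltas)
    return importances

# Toy check: a "model" that only uses feature 0 of three features.
X = np.random.default_rng(0).normal(size=(500, 3))
y = 2.0 * X[:, 0]
model = lambda Z: 2.0 * Z[:, 0]           # ignores features 1 and 2
mse = lambda a, b: float(np.mean((a - b) ** 2))
pfi = permutation_feature_importance(model, X, y, mse, seed=1)
# pfi[0] is large; pfi[1] and pfi[2] are zero, since permuting
# features the model never reads leaves its predictions unchanged.
```

Comparing such importance vectors for the model before and after an adaptation step gives a rough intuition for the kind of model-change signal the paper formalizes; the sensitivity thresholds it introduces are not reproduced here.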
Year of publication
2022
Journal title
KI - Künstliche Intelligenz
Copyright / Licenses
ISSN
0933-1875
eISSN
1610-1987
Page URI
https://pub.uni-bielefeld.de/record/2964421
Cite
Muschalik M, Fumagalli F, Hammer B, Hüllermeier E. Agnostic Explanation of Model Change based on Feature Importance. KI - Künstliche Intelligenz. 2022.
Muschalik, M., Fumagalli, F., Hammer, B., & Hüllermeier, E. (2022). Agnostic Explanation of Model Change based on Feature Importance. KI - Künstliche Intelligenz. https://doi.org/10.1007/s13218-022-00766-6
All files available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0):
Link(s) to full text(s)
Access Level
Open Access