Adversarial attacks hidden in plain sight
Göpfert JP, Wersing H, Hammer B (2019).
Preprint
| Published | English
Download
No files have been uploaded. Publication record only!
Abstract / Remarks
Convolutional neural networks have achieved a string of successes in
recent years, but their lack of interpretability remains a serious
issue. Adversarial examples are designed to deliberately fool neural networks
into making any desired incorrect classification, potentially with very high
certainty. We underline the severity of the issue by presenting a technique
that makes it possible to hide such adversarial attacks in regions of high
complexity, such that they are imperceptible even to an astute observer.
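The abstract only sketches the idea of confining perturbations to visually busy image regions. As a rough illustration (not the authors' method; the record does not state their exact complexity measure), the following Python sketch weights a precomputed adversarial perturbation by a local-standard-deviation mask, so the changes concentrate where the image is already complex. The names `local_complexity` and `hide_perturbation`, the window size, and the choice of local standard deviation as the complexity measure are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import uniform_filter


def local_complexity(image, window=8):
    # Per-pixel complexity estimate: local standard deviation over a
    # sliding window (an assumption; the paper's measure may differ).
    mean = uniform_filter(image, size=window)
    mean_sq = uniform_filter(image ** 2, size=window)
    var = np.clip(mean_sq - mean ** 2, 0.0, None)  # guard against float error
    return np.sqrt(var)


def hide_perturbation(image, delta, window=8, eps=1e-8):
    # Scale a perturbation `delta` (from any standard attack, e.g. FGSM)
    # by normalized local complexity, so it lands in busy regions where
    # it is hard to perceive.
    mask = local_complexity(image, window)
    mask = mask / (mask.max() + eps)
    return np.clip(image + mask * delta, 0.0, 1.0)


# Toy usage with random data standing in for an image and an attack:
rng = np.random.default_rng(0)
image = rng.random((64, 64))                        # grayscale in [0, 1]
delta = 0.05 * np.sign(rng.standard_normal(image.shape))
adversarial = hide_perturbation(image, delta)
```

Note that simply rescaling a finished perturbation generally weakens its adversarial effect; in practice the mask would be applied inside the attack's optimization loop, and this sketch only illustrates where the changes are placed.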
Year of publication
2019
Page URI
https://pub.uni-bielefeld.de/record/2934181
Cite
Göpfert, J. P., Wersing, H., & Hammer, B. (2019). Adversarial attacks hidden in plain sight. doi:10.4119/unibi/2934181