Adversarial attacks hidden in plain sight

Göpfert JP, Wersing H, Hammer B (2019).

No full text has been uploaded; this is a publication record only.
Preprint | Published | English
Abstract / Remark
Convolutional neural networks have achieved a string of successes in recent years, but their lack of interpretability remains a serious issue. Adversarial examples are designed to deliberately fool neural networks into making any desired incorrect classification, potentially with very high certainty. We underline the severity of the issue by presenting a technique that makes it possible to hide such adversarial attacks in regions of high complexity, such that they are imperceptible even to an astute observer.
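As a rough illustration of the idea, the sketch below weights an adversarial perturbation by a simple local-complexity estimate (the per-pixel standard deviation in a small neighbourhood), so that the modification concentrates in textured regions where it is hardest to notice. This is an assumption on our part, not the authors' exact procedure: the function names, the window-based complexity measure, and the FGSM-style sign step are all illustrative choices.

```python
import numpy as np

def local_complexity(image, window=5):
    """Per-pixel complexity estimate: standard deviation of intensities
    in a window x window neighbourhood (reflect-padded at the borders)."""
    pad = window // 2
    padded = np.pad(image, pad, mode="reflect")
    # Stack all shifted views of the neighbourhood, then take the std
    # over them to get one complexity value per pixel.
    shifted = np.stack([
        padded[i:i + image.shape[0], j:j + image.shape[1]]
        for i in range(window)
        for j in range(window)
    ])
    return shifted.std(axis=0)

def hide_perturbation(image, gradient, epsilon=0.05):
    """Scale an FGSM-style sign perturbation by the normalised local
    complexity, so the attack budget is spent in busy regions while
    smooth regions stay (almost) untouched."""
    weight = local_complexity(image)
    weight /= weight.max() + 1e-12          # normalise to [0, 1]
    adversarial = image + epsilon * weight * np.sign(gradient)
    return np.clip(adversarial, 0.0, 1.0)
```

Here `image` is assumed to be a 2D float array in [0, 1], and `gradient` would come from backpropagating the classification loss through the network, as in the fast gradient sign method; the paper's actual localisation strategy may differ.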
Year of publication
2019

Cite

Göpfert JP, Wersing H, Hammer B. Adversarial attacks hidden in plain sight. 2019.
Göpfert, J. P., Wersing, H., & Hammer, B. (2019). Adversarial attacks hidden in plain sight. doi:10.4119/unibi/2934181
Göpfert, J. P., Wersing, H., and Hammer, B. (2019). Adversarial attacks hidden in plain sight.
Göpfert, J.P., Wersing, H., & Hammer, B., 2019. Adversarial attacks hidden in plain sight.
J.P. Göpfert, H. Wersing, and B. Hammer, “Adversarial attacks hidden in plain sight”, 2019.
Göpfert, J.P., Wersing, H., Hammer, B.: Adversarial attacks hidden in plain sight. (2019).
Göpfert, Jan Philip, Wersing, Heiko, and Hammer, Barbara. “Adversarial attacks hidden in plain sight”. (2019).


Sources

arXiv: 1902.09286
