Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning
Niemann C, Leins D, Lach LM, Haschke R (2024)
In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Conference Paper | English
Abstract / Notes
Actively guiding attention is an important mechanism for using limited processing resources efficiently. The Recurrent Visual Attention Model (RAM) has been successfully applied to process large input images by sequentially attending to smaller image regions within a reinforcement learning (RL) framework. In tactile perception, sequential attention is a natural requirement due to the limited size of the tactile receptive field. The Haptic Attention Model (HAM) transferred the concept of RAM to the haptic domain, iteratively generating a fixed number of informative haptic glances for tactile object classification. We extend HAM to a system capable of actively determining when sufficient haptic data is available for reliable classification. To this end, we introduce a hybrid action space that augments the continuous glance location with the discrete decision of when to classify. This allows the cost of obtaining new samples to be balanced against the cost of misclassification, resulting in an optimized number of glances while maintaining reasonable accuracy. We evaluate the efficiency of our approach on a hand-crafted dataset that allows us to compute the most efficient glance locations.
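The abstract describes a recurrent policy with a hybrid action space: a continuous glance location plus a discrete decision to stop and classify. The following is a minimal, hypothetical PyTorch sketch of such a policy head, not the authors' implementation; all module names, dimensions, and the GRU core are assumptions for illustration.

```python
# Hypothetical sketch of a hybrid-action glance policy (not the paper's code).
# At every haptic glance it emits (1) a continuous next glance location and
# (2) a discrete "take another glance vs. stop and classify" decision.
import torch
import torch.nn as nn


class HybridGlancePolicy(nn.Module):
    def __init__(self, glance_dim=64, hidden_dim=128, num_classes=10, loc_dim=2):
        super().__init__()
        # Encode one tactile glance (flattened sensor patch) into a feature vector.
        self.glance_encoder = nn.Sequential(nn.Linear(glance_dim, hidden_dim), nn.ReLU())
        # Recurrent core accumulates evidence over the sequence of glances.
        self.core = nn.GRUCell(hidden_dim, hidden_dim)
        # Continuous action: mean of a Gaussian over the next glance location.
        self.loc_head = nn.Linear(hidden_dim, loc_dim)
        # Discrete action: logits for {another glance, stop and classify}.
        self.stop_head = nn.Linear(hidden_dim, 2)
        # Classifier applied once the agent decides to stop.
        self.class_head = nn.Linear(hidden_dim, num_classes)

    def forward(self, glance, hidden):
        feat = self.glance_encoder(glance)
        hidden = self.core(feat, hidden)
        loc_mean = torch.tanh(self.loc_head(hidden))    # location in [-1, 1]^loc_dim
        stop_logits = self.stop_head(hidden)            # discrete stop decision
        class_logits = self.class_head(hidden)          # object class prediction
        return loc_mean, stop_logits, class_logits, hidden


if __name__ == "__main__":
    policy = HybridGlancePolicy()
    hidden = torch.zeros(1, 128)
    glance = torch.randn(1, 64)  # one simulated tactile glance
    loc, stop_logits, class_logits, hidden = policy(glance, hidden)
    stop = torch.distributions.Categorical(logits=stop_logits).sample()
    print("next glance location:", loc, "stop?", bool(stop.item()))
```

Under these assumptions, the cost trade-off mentioned in the abstract could be realized with a small per-glance penalty and a terminal reward for correct classification (or a penalty for misclassification), training the discrete stop decision and the continuous location policy jointly with policy-gradient methods.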
Year of Publication
2024
Conference Proceedings Title
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference
2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)
Conference Date
2024-10-14 – 2024-10-18
Page URI
https://pub.uni-bielefeld.de/record/2994195
Cite
Niemann C, Leins D, Lach LM, Haschke R. Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning. In: 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2024.
Niemann, C., Leins, D., Lach, L. M., & Haschke, R. (2024). Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). https://doi.org/10.1109/IROS58592.2024.10801966
Niemann, Christopher, Leins, David, Lach, Luca Michael, and Haschke, Robert. 2024. “Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning”. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Niemann, C., Leins, D., Lach, L. M., and Haschke, R. (2024). “Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning” in 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Niemann, C., et al., 2024. Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning. In 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
C. Niemann, et al., “Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning”, 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2024.
Niemann, C., Leins, D., Lach, L.M., Haschke, R.: Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). (2024).
Niemann, Christopher, Leins, David, Lach, Luca Michael, and Haschke, Robert. “Learning When to Stop: Efficient Active Tactile Perception with Deep Reinforcement Learning”. 2024 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS). 2024.
All files are available under the following license(s):
Creative Commons Attribution 4.0 International Public License (CC-BY 4.0)
Full Text(s)
Name
IROS_Haptic_Exploration.pdf
1.27 MB
Last Uploaded
2025-01-15T13:17:03Z
MD5 Checksum
d3705b212b4f24e4ae2af3d0d53b420b