Experience Sharing Based Memetic Transfer Learning for Multiagent Reinforcement Learning

Wang T, Peng X, Jin Y, Xu D (2022)
Memetic Computing 14(1): 3-17.

Journal Article | Published | English
 
Download
No files have been uploaded. Publication record only.
Author(s)
Wang, Tonghao; Peng, Xingguang; Jin, Yaochu (UniBi); Xu, Demin
Abstract / Remarks
In transfer learning (TL) for multiagent reinforcement learning (MARL), the most popular methods are based on the action-advising scheme, in which skilled agents directly transfer actions, i.e., explicit knowledge, to other agents. However, this scheme requires an inquiry-answer process, whose computational load grows quadratically with the number of agents. To enhance the scalability of TL for MARL when all agents learn from scratch, we propose an experience-sharing-based memetic TL for MARL, called MeTL-ES. In MeTL-ES, the agents actively share implicit memetic knowledge (experience), which avoids the inquiry-answer process and yields highly scalable and effective acceleration of learning. In particular, we first design an experience-sharing scheme to share implicit, meme-based experience among the agents. Within this scheme, experience from peers is collected and used to speed up the learning process. More importantly, this scheme frees the agents from actively asking for the states and policies of other agents, which enhances scalability. Second, an event-triggered scheme is designed to enable the agents to share experiences at appropriate times. Simulation studies show that, compared with existing methods, the proposed MeTL-ES more effectively enhances the learning speed of learning-from-scratch MARL systems. At the same time, we show that the communication cost and computational load of MeTL-ES increase linearly with the number of agents, indicating better scalability than the popular action-advising-based methods.
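The scalability argument in the abstract (linear broadcast cost versus quadratic inquiry-answer cost) can be illustrated with a toy sketch. This is a minimal illustration of event-triggered experience broadcasting in general, not the authors' MeTL-ES algorithm: the class names, the trigger rule, and the experience format are all illustrative assumptions.

```python
class Agent:
    """Toy agent in a broadcast-style experience-sharing scheme.

    A minimal sketch of the idea described in the abstract, not the
    paper's method: the trigger rule and experience format here are
    illustrative assumptions.
    """

    def __init__(self, name, trigger_threshold=1.0):
        self.name = name
        self.trigger_threshold = trigger_threshold  # hypothetical event trigger
        self.buffer = []  # local experience (replay) buffer

    def step(self, experience, reward_gain):
        """Store an experience; return it for broadcast if the trigger fires."""
        self.buffer.append(experience)
        # Event-triggered sharing: broadcast only on significant progress,
        # so no pairwise inquiry-answer round is ever needed.
        if reward_gain > self.trigger_threshold:
            return [experience]
        return []


def share(agents, broadcasts):
    """Deliver each broadcast experience to every peer and count messages.

    One triggered agent sends to n - 1 peers, so the message count per
    event grows linearly with the number of agents, in contrast to the
    quadratic cost of pairwise inquiry-answer in action advising.
    """
    delivered = 0
    for sender, experiences in broadcasts.items():
        for exp in experiences:
            for agent in agents:
                if agent.name != sender:
                    agent.buffer.append(exp)
                    delivered += 1
    return delivered
```

With four agents, a single triggered broadcast costs three messages (n - 1), whereas a full inquiry-answer round among all pairs would cost on the order of n² messages.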
Publication Year
2022
Journal Title
Memetic Computing
Volume
14
Issue
1
Page(s)
3-17
ISSN
1865-9284
eISSN
1865-9292
Page URI
https://pub.uni-bielefeld.de/record/2978350

Cite

Wang, T., Peng, X., Jin, Y., & Xu, D. (2022). Experience Sharing Based Memetic Transfer Learning for Multiagent Reinforcement Learning. Memetic Computing, 14(1), 3-17. https://doi.org/10.1007/s12293-021-00339-4

Link(s) to full text(s)
Access Level
Restricted Closed Access
