Multi-Agent Reinforcement Learning: Maximum Entropy, Rationality and Equilibrium
Speaker
Ying Wen, University College London
Time
2019-10-11 14:00:00 ~ 2019-10-11 15:30:00
Location
Room 1319, Software Expert Building
Host
Weinan Zhang, Assistant Professor, John Hopcroft Center for Computer Science
Abstract
In multi-agent reinforcement learning (MARL), one common assumption is that all agents behave rationally during their interactions; for example, we assume agents' behaviors will converge to a Nash equilibrium. In practice, however, it is hard to guarantee that all agents have the same level of sophistication in their ability to understand and learn from each other. As a result, the effectiveness of MARL models degrades, especially when opponents act irrationally.
In this talk, Ying Wen will briefly review the background on reinforcement learning and game theory and summarize recent work. Then, combining the principle of maximum entropy with cognitive hierarchy theory, he will introduce a novel framework, Generalized Recursive Reasoning, that recognizes agents' bounded rationality and thus models their corresponding sub-optimal behaviors. Finally, he will outline future directions for applying this framework to real-world problems.
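To give a flavor of the two ingredients the talk combines, the sketch below is a minimal, hypothetical illustration (not the speaker's actual algorithm): level-k reasoning from cognitive hierarchy theory, where a level-0 agent mixes uniformly and each higher level soft-maximizes (a maximum-entropy response) against the level below, in an assumed symmetric 2x2 coordination game.

```python
import math

# Hypothetical symmetric 2x2 coordination game (illustrative only).
# PAYOFF[a][b] = row player's reward when row plays a and column plays b.
PAYOFF = [[3.0, 0.0],
          [0.0, 1.0]]

def softmax(values, beta):
    """Maximum-entropy (Boltzmann) response; beta controls rationality."""
    exps = [math.exp(beta * v) for v in values]
    z = sum(exps)
    return [e / z for e in exps]

def level_k_policy(k, beta=2.0):
    """Level-0 mixes uniformly; level-k soft-maximizes expected payoff
    against the level-(k-1) policy, as in level-k / cognitive hierarchy
    models of bounded rationality."""
    policy = [0.5, 0.5]  # level-0: uniform random play
    for _ in range(k):
        # expected payoff of each action against the current opponent policy
        values = [sum(PAYOFF[a][b] * policy[b] for b in range(2))
                  for a in range(2)]
        policy = softmax(values, beta)
    return policy

for k in range(4):
    print(k, [round(p, 3) for p in level_k_policy(k)])
```

As the reasoning depth k grows, the policy concentrates on the payoff-dominant action, while the softmax temperature keeps each level's response stochastic rather than perfectly rational.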
Bio
Ying Wen is a final-year Ph.D. student in the Department of Computer Science, University College London, under the supervision of Prof. Jun Wang. His research interests are in reinforcement learning, multi-agent learning, and their applications in real-world scenarios. More specifically, he is interested in modeling dynamics and rationality in multi-agent reinforcement learning. He has published several papers in top-tier international conferences such as ICLR, IJCAI, AAMAS, and ICDM. He has over four years of experience working with tech companies to ground intelligent machine learning solutions in real business problems.