4 May 2020 - Arrasy Rahman

Link

https://eu.bbcollab.com/guest/97e5b2666aa74b30870a4c83ad03760a  

Speaker

Arrasy Rahman

Title

Learning to Interact in Open Multi-agent Systems With Graph Neural Networks

Abstract

Many real-world decision-making problems require an agent to interact with other agents whose policies are unknown. In several of these problems, the number of agents in the environment also changes over time. Examples with both characteristics include autonomous vehicle control in real-world traffic with varying numbers of nearby vehicles, robotic teams whose membership evolves, and electronic markets where decision-makers may be active or inactive at different points in time. Methods that enable agents to learn to interact effectively with others have previously been explored under the topic of ad hoc coordination. However, this work has been limited to closed multi-agent systems, where the number of agents in the environment remains the same throughout the interaction process.
 
In this talk, I will describe a deep reinforcement learning-based approach for learning to interact in open multi-agent systems. Our proposed method uses graphical models, such as Coordination Graphs and Markov Random Fields, to represent the possible relations between agents in an open multi-agent system. We then learn the parameters of these graphical models using reinforcement learning and supervised learning, together with Graph Neural Networks. Finally, we compare the performance of our approach with several baselines on open ad hoc coordination problems.
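To give a flavour of the kind of model the abstract refers to, the following is a minimal PyTorch sketch of a joint action-value defined over a coordination graph, with node embeddings refined by one round of message passing (a simple graph neural network layer). The class name, layer sizes, single message-passing round, and the fully connected graph in the example are illustrative assumptions, not the speaker's actual implementation.

    import torch
    import torch.nn as nn

    class CoordinationGraphQ(nn.Module):
        """Joint Q-value that decomposes into per-agent utilities and
        pairwise payoffs along the edges of a coordination graph."""

        def __init__(self, obs_dim, n_actions, hidden=64):
            super().__init__()
            self.encode = nn.Linear(obs_dim, hidden)      # per-agent observation encoder
            self.message = nn.Linear(hidden, hidden)      # messages from graph neighbours
            self.utility = nn.Linear(hidden, n_actions)   # individual utility head
            self.payoff = nn.Linear(2 * hidden, n_actions * n_actions)  # pairwise payoff head

        def forward(self, obs, adj, actions):
            # obs: (n_agents, obs_dim); adj: (n_agents, n_agents) 0/1 coordination graph;
            # actions: (n_agents,) integer joint action.
            h = torch.relu(self.encode(obs))
            # One round of message passing: aggregate messages from graph neighbours.
            h = torch.relu(h + adj @ self.message(h))

            n, a = obs.shape[0], self.utility.out_features
            # Sum of each agent's utility for its chosen action.
            q = self.utility(h).gather(1, actions.view(-1, 1)).sum()

            # Add the pairwise payoff for every edge (i, j) in the coordination graph.
            for i in range(n):
                for j in range(i + 1, n):
                    if adj[i, j] > 0:
                        payoff = self.payoff(torch.cat([h[i], h[j]])).view(a, a)
                        q = q + payoff[actions[i], actions[j]]
            return q

    if __name__ == "__main__":
        n_agents, obs_dim, n_actions = 3, 8, 4
        net = CoordinationGraphQ(obs_dim, n_actions)
        obs = torch.randn(n_agents, obs_dim)
        adj = torch.ones(n_agents, n_agents) - torch.eye(n_agents)  # fully connected graph
        actions = torch.randint(n_actions, (n_agents,))
        print(net(obs, adj, actions))  # scalar joint Q-value

Because the graph and the message passing are defined per node and per edge rather than for a fixed roster of agents, a model of this shape can in principle be evaluated for whatever set of agents is currently present, which is what makes it a natural fit for open multi-agent systems.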