16 November 2020 - Francisco José Quesada Real, Adarsh Prabhakaran & Filippos Christianos


Chair: Arrasy Rahman


Speaker: Adarsh Prabhakaran

Title: Studying the spread of smoking

Abstract:

Mathematical models have been used extensively to study infections and spreading phenomena. From modelling the spread of smallpox in 1760 to the recent COVID-19 pandemic, these models have helped in understanding the spread of diseases. In this talk, I will introduce a new compartmental model, motivated by epidemiological models, to study the spread of smoking in a population. Empirical studies show that social ties have an effect on smoking cessation; we incorporate these interactions into the model and show their effect on cessation. Specifically, we model interactions between non-smokers and smokers, and between smokers and quitters, in the system.
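
The abstract does not give the model's equations, so as a rough sketch only, the following assumes a hypothetical three-compartment system (potential smokers P, smokers S, quitters Q) with invented rate parameters; it is not the talk's actual model:

```python
from scipy.integrate import solve_ivp

# Hypothetical compartmental model of smoking spread (illustrative only):
# P = potential smokers, S = smokers, Q = quitters.
# beta  : uptake rate from non-smoker-smoker interactions (P -> S)
# gamma : baseline cessation rate (S -> Q)
# delta : extra cessation from smoker-quitter interactions
# rho   : relapse rate (Q -> S)
def smoking_model(t, y, beta, gamma, delta, rho):
    P, S, Q = y
    N = P + S + Q
    dP = -beta * P * S / N
    dS = beta * P * S / N - gamma * S - delta * S * Q / N + rho * Q
    dQ = gamma * S + delta * S * Q / N - rho * Q
    return [dP, dS, dQ]

# Integrate from a population that is mostly non-smokers (made-up parameters).
sol = solve_ivp(smoking_model, (0, 100), [0.7, 0.25, 0.05],
                args=(0.3, 0.05, 0.2, 0.02))
print(sol.y[:, -1])  # final fraction in each compartment
```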

Bio: Adarsh is a second-year PhD student working on mathematical modelling of complex systems. 


Speaker: Francisco José Quesada Real

Title: Using domain lexicon and grammar for ontology matching

Abstract:

There are multiple ontology matching approaches that use domain-specific background knowledge to match labels in domain ontologies or classifications. However, they tend to rely on lexical knowledge and do not consider the specificities of domain grammar. In this talk, I demonstrate the usefulness of both lexical and grammatical linguistic domain knowledge for ontology matching through examples from multiple domains. I also evaluate the impact of such knowledge on a real-world problem: matching classifications of mental illnesses from the health domain. Our experimentation with two matcher tools that use very different matching mechanisms, LogMap and SMATCH, shows that both lexical and grammatical knowledge improve matching results.
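
As a toy illustration of the lexical side of this idea (this is not LogMap's or SMATCH's actual pipeline, and the lexicon entries below are invented), one can expand ontology labels with a domain synonym lexicon before computing string similarity:

```python
from difflib import SequenceMatcher

# Invented domain lexicon mapping abbreviations to expanded forms.
DOMAIN_LEXICON = {
    "mdd": "major depressive disorder",
    "gad": "generalised anxiety disorder",
}

def normalise(label: str) -> str:
    # Expand domain abbreviations token by token.
    tokens = [DOMAIN_LEXICON.get(t, t) for t in label.lower().split()]
    return " ".join(tokens)

def label_similarity(a: str, b: str) -> float:
    return SequenceMatcher(None, normalise(a), normalise(b)).ratio()

print(label_similarity("MDD", "Major depressive disorder"))  # 1.0 after expansion
```

Grammatical domain knowledge would go further, for instance by identifying head nouns and modifiers within labels, structure that plain string similarity cannot capture.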


Speaker: Filippos Christianos

Title: Shared Experience Actor-Critic for Multi-Agent Reinforcement Learning

Abstract:

Exploration in multi-agent reinforcement learning is a challenging problem, especially in environments with sparse rewards. In this talk, I will go through the difficulties of applying reinforcement learning in such sparse-reward multi-agent environments. Next, I will describe our method, recently published at NeurIPS, which achieves efficient exploration by sharing experience amongst agents. Our proposed algorithm, Shared Experience Actor-Critic (SEAC), applies experience sharing in an actor-critic framework by combining the gradients of different agents. We evaluate SEAC in a collection of sparse-reward multi-agent environments and find that it consistently outperforms several baselines and state-of-the-art algorithms, learning in fewer steps and converging to higher returns. In some harder environments, experience sharing makes the difference between learning to solve the task and not learning at all.
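
As a rough PyTorch-style sketch of this idea (simplified; the function name, tensor shapes, and advantage estimates here are assumptions rather than the paper's reference code), each agent's policy loss combines its own on-policy term with importance-weighted terms built from the other agents' experience:

```python
import torch

def seac_policy_loss(agent_id, log_probs, behaviour_log_probs, advantages, lam=1.0):
    """log_probs[k]: log-probs of agent `agent_id`'s policy on agent k's batch.
    behaviour_log_probs[k]: log-probs under agent k's own (behaviour) policy.
    advantages[k]: advantage estimates for agent k's transitions."""
    # On-policy term: the agent's own experience, standard actor-critic loss.
    loss = -(log_probs[agent_id] * advantages[agent_id]).mean()
    # Shared-experience terms: other agents' trajectories, importance-weighted.
    for k in range(len(log_probs)):
        if k == agent_id:
            continue
        ratio = torch.exp(log_probs[k] - behaviour_log_probs[k]).detach()
        loss = loss - lam * (ratio * log_probs[k] * advantages[k]).mean()
    return loss
```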

Bio: Filippos is currently a second-year PhD student working on Multi-Agent Deep Reinforcement Learning, with an emphasis on exploration in sparse-reward environments.