14 December 2020 - Mark Chevallier, Thomas Fletcher & Giannis Papantonis

 

Speaker: Mark Chevallier

 

Title: Representations of Rewards on Markov Decision Processes in Isabelle

 

Abstract: I will briefly discuss how Markov Decision Processes can be formalised in Isabelle/HOL, a formal proof assistant. I will survey the existing formalisations and introduce my own representation of rewards on a Markov Decision Process, concluding with several proofs I have formalised using this representation.
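
For orientation, here is a minimal, textbook-style sketch in Python of an MDP whose rewards are attached to transitions, together with the expected discounted return of a policy. This is only an informal illustration of the standard definitions; it is not the Isabelle/HOL representation presented in the talk, and all names in it are invented for the example.

```python
# Informal, textbook-style sketch of an MDP with rewards on transitions.
# NOT the Isabelle/HOL representation from the talk; names are illustrative.
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = int
Action = str

@dataclass
class MDP:
    transition: Dict[State, Dict[Action, List[Tuple[State, float]]]]  # (s, a) -> [(s', P(s'|s,a))]
    reward: Callable[[State, Action, State], float]                   # R(s, a, s')
    discount: float

def expected_return(mdp: MDP, policy: Dict[State, Action], s: State, horizon: int) -> float:
    """Expected discounted return of a deterministic policy over a finite horizon."""
    if horizon == 0:
        return 0.0
    a = policy[s]
    return sum(p * (mdp.reward(s, a, s2)
                    + mdp.discount * expected_return(mdp, policy, s2, horizon - 1))
               for s2, p in mdp.transition[s][a])

# Tiny two-state example: any transition that lands in state 1 yields reward 1.
mdp = MDP(
    transition={0: {"stay": [(0, 1.0)], "go": [(1, 1.0)]},
                1: {"stay": [(1, 1.0)], "go": [(0, 1.0)]}},
    reward=lambda s, a, s2: 1.0 if s2 == 1 else 0.0,
    discount=0.9,
)
print(expected_return(mdp, {0: "go", 1: "stay"}, 0, horizon=10))
```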

 

Bio: Mark Chevallier is a PhD student working under the supervision of Jacques Fleuriot on formalisations of reinforcement learning.

 

 

Speaker: Thomas Fletcher

 

Title: GPy-ABCD: A Configurable Automatic Bayesian Covariance Discovery Implementation

 

Abstract:

Gaussian Processes (GPs) are a very flexible class of nonparametric models which only require an assumption about the type of correlation (the kernel) that the data is expected to display.
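
As a concrete illustration of that up-front kernel assumption, the sketch below fits a GP regression with the GPy library (on which GPy-ABCD builds) using an explicitly chosen RBF kernel; picking that kernel by hand is precisely the step ABCD aims to automate. The data are made up for the example.

```python
# Minimal GP regression with GPy: note the explicit kernel choice (RBF),
# which is the up-front correlation assumption that ABCD seeks to remove.
import numpy as np
import GPy

# Toy 1-D data
X = np.linspace(0, 10, 50)[:, None]
Y = np.sin(X) + 0.1 * np.random.randn(50, 1)

kernel = GPy.kern.RBF(input_dim=1)            # the assumed correlation structure
model = GPy.models.GPRegression(X, Y, kernel)
model.optimize()                              # fit the kernel hyperparameters

mean, var = model.predict(np.array([[11.0]])) # predictive mean and variance at a new input
print(mean, var)
```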

Automatic Bayesian Covariance Discovery (ABCD) is an iterative Gaussian Process regression framework aimed at removing the need for even this initial assumption about the form of the correlation.

The original ABCD implementation is a complex system, now part of a larger unsupervised learning project, which can produce very detailed, multi-page text-based analyses of input data. The open-source Python package GPy-ABCD is a lighter, more usable and configurable re-implementation of an ABCD system; it improves the core algorithm but produces simpler outputs, and it currently has around 8.5k downloads.

PS: This project is not the focus of my research; it was created about a year ago and improved over the past month since a paper on it is in the works. It is just one of many statistical modelling components for an adaptive modelling framework to be used by the FRANK Query-Answering system.

 

Specific topic: Inferential Data Modelling in a Query-Answering System

General details: British-Italian dual national; grew up in Italy and moved to the UK for university seven years ago.

Currently in Italy since the summer; I had intended to come back but repeatedly pushed the date back as the Covid situation in the UK kept deteriorating; hopefully I will be back after the holidays.

 

Bio: Mathematics & Physics MSci, Advanced Statistics MRes, and now 1.25 years into a Data Science & AI PhD under Alan Bundy.

 

 

Speaker: Giannis Papantonis

 

Title: Tractable Probabilistic Models: A Causal Perspective

 

Abstract: In recent years, there has been increasing interest in studying causality-related properties of machine learning models in general, and of generative models in particular. While this is well motivated, it inherits the fundamental computational hardness of probabilistic inference, making exact reasoning intractable. Tractable probabilistic models have also emerged recently; these guarantee that conditional marginals can be computed in time linear in the size of the model, where the model is usually learned from data. In this work, we ask the following technical question: can we use the distributions represented or learned by these models to perform causal queries, such as reasoning about interventions and counterfactuals? By appealing to some existing ideas on transforming such models into Bayesian networks, we answer mostly in the negative: we show that, with one notable exception, causal reasoning in these models reduces to computing marginal distributions, i.e., only trivial causal queries can be answered.
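
As a toy illustration of why such causal queries are more than conditional marginals (this example is mine, not from the paper): if a hidden variable U confounds X and Y, then P(Y | do(X = x)) differs from P(Y | X = x), so a model that only supports fast conditional marginals cannot, on its own, answer the interventional query.

```python
# Toy example (not from the paper): a hidden confounder U makes the
# observational conditional P(Y=1 | X=1) differ from the interventional
# query P(Y=1 | do(X=1)), even though Y does not depend on X at all.
from itertools import product

p_u = {0: 0.5, 1: 0.5}           # P(U = u)
p_x1_given_u = {0: 0.1, 1: 0.9}  # P(X = 1 | U = u)
p_y1_given_u = {0: 0.2, 1: 0.8}  # P(Y = 1 | U = u)

def p_joint(u, x, y):
    """P(U=u, X=x, Y=y) under the model U -> X, U -> Y (no X -> Y edge)."""
    px = p_x1_given_u[u] if x == 1 else 1 - p_x1_given_u[u]
    py = p_y1_given_u[u] if y == 1 else 1 - p_y1_given_u[u]
    return p_u[u] * px * py

# Observational query by enumeration: P(Y=1 | X=1) = 0.74 here.
num = sum(p_joint(u, 1, 1) for u in (0, 1))
den = sum(p_joint(u, 1, y) for u, y in product((0, 1), repeat=2))
print("P(Y=1 | X=1)     =", num / den)

# Interventional query: do(X=1) cuts the U -> X edge, so here it equals
# the marginal P(Y=1) = 0.5, which differs from the conditional above.
print("P(Y=1 | do(X=1)) =", sum(p_u[u] * p_y1_given_u[u] for u in (0, 1)))
```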

 

Bio: I am a 2nd-year PhD student working on tractable models, with an emphasis on their connections to causality and explainability.