AIAI Seminar - 31 January 2022 - Talks by Jiawei Zheng, Miguel Mendez Lucero and Cillian Brewitt

 

Talk by Jiawei Zheng

Title:   

Predictive Behavioural Monitoring and Deviation Detection in Activities of Daily Living of Older Adults

Abstract:

Predictive behaviour monitoring of Activities of Daily Living (ADLs) can provide unique, personalised insights into an older person's physical and cognitive health, creating new opportunities to support self-management and proactive intervention and to promote independent living. In this paper, we analyse ADL data from ambient sensors to model behaviour markers on a daily basis. Using a number of machine learning and statistical methods, we model a predicted daily routine for each marker, detect deviations based on a set of relative thresholds, and calculate long-term drifts. We further analyse the causal factors of deviations by investigating relationships between different activities. We demonstrate our results using data from a sample of 11 participants from the CASAS dataset. Finally, we develop a dashboard to visualise the computed daily routines and quantified deviations, offering useful feedback to the monitored person and their caregivers.
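The per-marker pipeline the abstract describes (a predicted daily routine, relative thresholds, flagged deviations) can be sketched in a few lines. This is a minimal illustration, not the paper's method: the marker (nightly sleep duration), the seven-day rolling baseline, and the 30% relative threshold are all assumptions chosen for the example.

```python
from statistics import mean

def detect_deviations(daily_values, window=7, rel_threshold=0.3):
    """Flag days whose marker value deviates from a rolling baseline
    by more than a relative threshold.

    daily_values: one float per day (here, hours of sleep -- an
    illustrative marker, not one prescribed by the paper).
    Returns a list of (day_index, value, baseline) for flagged days.
    """
    deviations = []
    for day in range(window, len(daily_values)):
        # Predicted routine for this day: mean of the previous `window` days.
        baseline = mean(daily_values[day - window:day])
        value = daily_values[day]
        # Relative threshold: deviation is measured against the baseline itself.
        if abs(value - baseline) > rel_threshold * baseline:
            deviations.append((day, value, baseline))
    return deviations

# Toy data: a stable sleep routine with one unusually short night (day 8).
sleep_hours = [7.5, 7.0, 7.8, 7.2, 7.4, 7.6, 7.1, 7.3, 4.0, 7.2]
print(detect_deviations(sleep_hours))
```

A long-term drift could be computed in the same spirit by comparing the baseline itself across successive weeks rather than single days against the baseline.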

 

Talk by Miguel Mendez Lucero

Title:

Signal Perceptron: On the Identifiability of Boolean Function Spaces and Beyond

Abstract: 

In a seminal book, Minsky and Papert defined the perceptron as a limited implementation of what they called "parallel machines". They showed that some binary Boolean functions, including XOR, cannot be represented by a single-layer perceptron, since it can learn only linearly separable functions. In this work, we propose a new, more powerful implementation of such parallel machines. This new mathematical tool is defined using analytic sinusoids, instead of linear combinations, to form an analytic signal representation of the function that we want to learn. We show that this reformulated parallel mechanism is able to learn, with a single layer, any non-linear $k$-ary Boolean function. Finally, to illustrate its practical applications, we show that it outperforms a single-hidden-layer multilayer perceptron on both function learning and image classification tasks, while also being faster and requiring fewer parameters.
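The core idea, replacing the linear combination with sinusoids so that a single layer can represent any $k$-ary Boolean function, can be sketched as follows. This is an illustration of the representational claim, not the paper's implementation: over inputs in {0,1}^k the basis cos(pi * w.x) reduces to the orthogonal Walsh basis, so exact amplitudes for XOR (or any other Boolean function) can be read off with a discrete Fourier transform instead of gradient training.

```python
import math
from itertools import product

def fit_signal_perceptron(f, k):
    """Fit amplitudes a_w so that f(x) = sum_w a_w * cos(pi * w.x)
    holds exactly for all x in {0,1}^k. On Boolean inputs,
    cos(pi * w.x) = (-1)^(w.x), an orthogonal basis, so each
    amplitude is an inner product divided by the number of points."""
    points = list(product((0, 1), repeat=k))
    def dot(a, b):
        return sum(ai * bi for ai, bi in zip(a, b))
    return {w: sum(f(x) * math.cos(math.pi * dot(w, x)) for x in points)
               / len(points)
            for w in points}

def predict(amps, x):
    # Single "layer": a weighted sum of sinusoids of the input.
    return sum(a * math.cos(math.pi * sum(wi * xi for wi, xi in zip(w, x)))
               for w, a in amps.items())

# XOR, the classic function a single-layer perceptron cannot represent.
xor = lambda x: x[0] ^ x[1]
amps = fit_signal_perceptron(xor, 2)
for x in product((0, 1), repeat=2):
    print(x, round(predict(amps, x)))
```

For XOR this recovers f(x) = 1/2 - 1/2 * cos(pi * (x1 + x2)): a single sinusoidal unit captures the non-linearly-separable structure that defeats the classical perceptron.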

 

Talk by Cillian Brewitt 

Title: 

GRIT: Fast, Interpretable, and Verifiable Goal Recognition with Learned Decision Trees for Autonomous Driving

Abstract:

It is important for autonomous vehicles to have the ability to infer the goals of other vehicles (goal recognition), in order to safely interact with other vehicles and predict their future trajectories. This is a difficult problem, especially in urban environments with interactions between many vehicles. Goal recognition methods must be fast to run in real time and make accurate inferences. As autonomous driving is safety-critical, it is important to have methods which are human interpretable and for which safety can be formally verified. Existing goal recognition methods for autonomous vehicles fail to satisfy all four objectives of being fast, accurate, interpretable, and verifiable. We propose Goal Recognition with Interpretable Trees (GRIT), a goal recognition system which achieves these objectives. GRIT makes use of decision trees trained on vehicle trajectory data. We evaluate GRIT on two datasets, showing that it achieves fast inference and accuracy comparable to two deep learning baselines, a planning-based goal recognition method, and an ablation of GRIT. We show that the learned trees are human interpretable and demonstrate how properties of GRIT can be formally verified using a satisfiability modulo theories (SMT) solver.
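As a toy illustration of why decision trees lend themselves to interpretability here, the sketch below hand-writes two per-goal likelihood trees and combines them into a posterior under a uniform prior. The feature names, thresholds, and leaf probabilities are invented for the example and are not GRIT's learned trees; in the paper the trees are learned from trajectory data, one per candidate goal.

```python
def turn_left_likelihood(f):
    # Hypothetical tree: vehicles heading for a left turn tend to
    # slow down and position themselves in the left lane early.
    if f["in_left_lane"]:
        return 0.9 if f["speed"] < 10.0 else 0.6
    return 0.2 if f["speed"] < 10.0 else 0.1

def straight_likelihood(f):
    # Hypothetical tree for the "continue straight" goal.
    if f["in_left_lane"]:
        return 0.2
    return 0.5 if f["speed"] < 10.0 else 0.8

def recognise_goal(features, trees):
    """Combine per-goal tree likelihoods into a posterior over goals,
    assuming a uniform prior (a simplification for this sketch)."""
    likelihoods = {goal: tree(features) for goal, tree in trees.items()}
    total = sum(likelihoods.values())
    return {goal: l / total for goal, l in likelihoods.items()}

trees = {"turn-left": turn_left_likelihood, "straight": straight_likelihood}
obs = {"in_left_lane": True, "speed": 7.5}  # slow, positioned left
print(recognise_goal(obs, trees))
```

Because each tree is a finite set of explicit threshold comparisons, its behaviour can be read off by a human, and properties such as "the turn-left likelihood never exceeds the straight likelihood when the vehicle is not in the left lane" can be encoded as constraints for an SMT solver to check exhaustively.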