AIAI Seminar - 20 May - Talks by Zonglin Ji and Jessica Ciupa



Speaker: Zonglin Ji


Title: Medical History-based In-Hospital Mortality Prediction for ICU Patients with Liver Disease


Abstract: In the Intensive Care Unit (ICU), patients with liver disease face a high risk of in-hospital mortality. Supporting mortality prediction is therefore crucial so that physicians can prioritise patients and provide efficient treatment. Although existing methods, e.g. severity scores, offer mortality prediction based on the patient's current condition, they do not consider the patient's prior medical history. In this work, we employ a model that combines Process Mining (PM) and Deep Learning (DL) techniques. PM is used to extract and analyse patterns from the historical events of previous admissions of ICU patients with liver disease, transforming these events into a structured format. The structured historical data, along with other relevant clinical information, is then fed into a DL model to predict mortality. Evaluation on the Medical Information Mart for Intensive Care IV (MIMIC-IV) dataset shows improvements, suggesting that it is advantageous to incorporate events from past admissions of liver disease patients when predicting whether they will survive their ICU stay.
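
As a purely illustrative sketch of such a pipeline (not the speakers' implementation; the event vocabulary, the directly-follows features, the clinical covariates, and the network size are all assumptions made for the example), the Python snippet below turns the event log of a patient's previous admissions into simple process-mining-style features and feeds them, together with clinical variables, into a small neural network that outputs an in-hospital mortality probability.

# Illustrative sketch only: event types, features, and model are assumed for the example.
from collections import Counter
import torch
import torch.nn as nn

EVENT_TYPES = ["admission", "lab_abnormal", "procedure", "icu_transfer", "discharge"]  # assumed vocabulary
PAIRS = [(a, b) for a in EVENT_TYPES for b in EVENT_TYPES]

def history_features(event_log):
    """Map a chronologically ordered list of past-admission events to a feature vector."""
    counts = Counter(event_log)                                  # activity frequencies
    follows = Counter(zip(event_log, event_log[1:]))             # directly-follows relation
    return torch.tensor(
        [counts[e] for e in EVENT_TYPES] + [follows[p] for p in PAIRS],
        dtype=torch.float32,
    )

class MortalityNet(nn.Module):
    """Small feed-forward model over concatenated history and clinical features."""
    def __init__(self, n_history, n_clinical, hidden=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_history + n_clinical, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 1),                                # logit of in-hospital mortality
        )

    def forward(self, history, clinical):
        return self.net(torch.cat([history, clinical], dim=-1))

# Example: one patient with two prior admissions and three clinical covariates (values are made up).
log = ["admission", "lab_abnormal", "procedure", "discharge",
       "admission", "icu_transfer", "lab_abnormal", "discharge"]
clinical = torch.tensor([61.0, 2.3, 14.0])
model = MortalityNet(n_history=len(EVENT_TYPES) + len(PAIRS), n_clinical=3)
prob = torch.sigmoid(model(history_features(log), clinical))     # predicted mortality probability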


Speaker: Jessica Ciupa


Title: Ethical Reward Machines


Abstract: The Ethical Reward Machine investigates reward design with ethical constraints in reinforcement learning. Designed to promote good behaviour in specific domains, such as simulated driving and search-and-rescue scenarios, the Ethical Reward Machine explores ethical constraints based on Act Deontology and Utilitarianism. Our contribution to the literature is an algorithmic pipeline and a discussion of ethical constraints implemented within the Reward Machine structure introduced by Icarte et al. (2022). Our findings indicate that integrating ethical principles does not significantly increase runtime, suggesting that ethical considerations place little additional burden on computational resources while retaining the benefits of sample efficiency and support for partial observability found in Reward Machines. Ultimately, the overarching objective is to develop and validate a learning framework that ensures AI alignment with human learning, ethical preferences, and standards.
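
As background for the talk, the minimal sketch below is a highly simplified reading of the Reward Machine structure cited in the abstract, not the speaker's pipeline; the states, propositions, and penalty value are assumptions made for the example. It shows how a finite-state machine can map observed propositions to state transitions and rewards, with an act-deontological rule that overrides the task reward whenever a forbidden proposition is observed.

# Illustrative sketch only: states, propositions, and the penalty are assumed for the example.
from typing import Dict, FrozenSet, Tuple

class EthicalRewardMachine:
    def __init__(self, transitions: Dict[Tuple[str, str], Tuple[str, float]],
                 initial_state: str, forbidden: FrozenSet[str], penalty: float = -10.0):
        self.transitions = transitions    # {(state, proposition): (next_state, reward)}
        self.u = initial_state            # current machine state
        self.forbidden = forbidden        # propositions a deontological rule forbids
        self.penalty = penalty

    def step(self, props: FrozenSet[str]) -> float:
        """Advance the machine on the propositions observed in one environment step."""
        reward = 0.0
        for p in props:
            if (self.u, p) in self.transitions:
                self.u, r = self.transitions[(self.u, p)]
                reward += r
        if props & self.forbidden:        # ethical override: violating the rule is penalised
            reward = self.penalty
        return reward

# Example: a search-and-rescue-style task -- reach the victim (u0 -> u1), then the exit (u1 -> u2),
# while never triggering the forbidden "harm_bystander" proposition.
rm = EthicalRewardMachine(
    transitions={("u0", "victim"): ("u1", 0.0), ("u1", "exit"): ("u2", 1.0)},
    initial_state="u0",
    forbidden=frozenset({"harm_bystander"}),
)
print(rm.step(frozenset({"victim"})))                    # 0.0: task progresses
print(rm.step(frozenset({"exit", "harm_bystander"})))    # -10.0: the constraint overrides the task reward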


Topics: Knowledge Representation, Interpretable Reinforcement Learning, and Ethics.