AIAI Seminar - 13 May - Talk by Mark Chevallier and Chiara Manganini

Speaker: Mark Chevallier
Title: A Verified Neurosymbolic Pipeline for Learning Temporal Behaviour
Abstract: We introduce a pipeline integrating theorem proving, automatic code generation, and neural networks for neurosymbolic learning. In particular, we mechanise Signal Temporal Logic (STL) over tensors in Isabelle/HOL and use it to specify constraints that are then injected into the learning process. Our aim is to improve the safety of robotic movement by applying these temporal constraints in a rigorous, formally verified manner. The pipeline consists of two main parts: a formally verified logical system with code-generation mechanisms, and a learning agent that uses both the symbolic constraint and the training data. We demonstrate our approach with several experiments involving behaviour over time.
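For readers unfamiliar with constraint injection, a minimal sketch of the general idea follows. This is a generic soft-minimum robustness penalty written in PyTorch, not the speakers' Isabelle/HOL-verified, code-generated implementation; all names here (stl_always_robustness, trajectory) are illustrative assumptions.

```python
# Hypothetical sketch: differentiable robustness of the STL formula
# G_[0,T] (x_t > 0) ("the signal always stays above 0"), used as a
# training penalty. Not the talk's verified pipeline.
import torch

def stl_always_robustness(signal: torch.Tensor, temperature: float = 10.0) -> torch.Tensor:
    """Smooth approximation of min_t signal[t], so gradients reach every
    time step. Positive robustness means 'always' holds on the trace."""
    return -torch.logsumexp(-temperature * signal, dim=-1) / temperature

# Toy usage: penalise trajectories that violate the constraint.
trajectory = torch.randn(32, 100, requires_grad=True)  # batch of 1-D signals
robustness = stl_always_robustness(trajectory)
constraint_loss = torch.relu(-robustness).mean()       # zero when satisfied
constraint_loss.backward()
```

In a setup like this, constraint_loss would typically be added to the ordinary task loss, so the learner is pushed toward trajectories that satisfy the temporal property as well as fit the data.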
Speaker: Chiara Manganini
Title: Reasoning About Bias of ML Decision-Making
Abstract: In this talk, I present a logical approach to the fairness of ML decision-making, where decisions are automatically driven by data through a probabilistic ML model. In contrast to the standard statistical approach, I propose a logical understanding of bias that can be used in the context of logical reasoning, with possible applications in neuro-symbolic settings and formal verification. More specifically, a measure of the monotonicity of an ML prediction is used to quantitatively assess bias. Finally, I present a game-theoretical interpretation of the work, particularly suitable for gaining insight into some crucial data-quality aspects of ML unfairness.
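The abstract does not spell out the monotonicity measure itself; the snippet below is a speculative toy probe in a similar spirit (flip a binary protected attribute and average the signed change in the model's score), with every name and the scoring model invented purely for illustration.

```python
# Hypothetical monotonicity-style bias probe: flip only the protected
# attribute and measure how far, on average, the prediction moves in one
# direction. Illustrative only; not the measure defined in the talk.
import numpy as np

def monotonicity_bias(predict, X: np.ndarray, protected_col: int) -> float:
    """Mean signed change in predicted score when the (binary) protected
    attribute is flipped from 0 to 1, all other features held fixed."""
    X0, X1 = X.copy(), X.copy()
    X0[:, protected_col] = 0
    X1[:, protected_col] = 1
    return float(np.mean(predict(X1) - predict(X0)))

# Toy usage with a linear scorer whose weight on the protected column is 0.3.
rng = np.random.default_rng(0)
X = rng.random((500, 5))
w = np.array([0.1, 0.3, -0.2, 0.05, 0.4])
score = lambda M: M @ w
print(monotonicity_bias(score, X, protected_col=1))  # exactly 0.3 here
```

A score that is systematically monotone in the protected attribute, as in this toy example, is the kind of quantitative signal such a measure is meant to expose.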
May 13 2024, 14.00 - 15.00
AIAI Seminar hosted by Mark Chevallier and Chiara Manganini
Informatics Forum, G.03