LFCS Seminar Wednesday 9th April: Alessandro Abate

Please note the unusual day and time.

Title: Logic meets Learning - Formal Synthesis with Neural Templates
 
Abstract
 
I shall present recent and ongoing work on sound synthesis for
sequential decision problems that are relevant in safety-critical
applications, such as Safe AI.
 
 
Given a dynamical system, a reactive program, or a cyber-physical
combination of both, possibly under external control, consider a
complex task, such as a requirement or a temporally extended
specification. I am interested in soundly asserting whether the
system/program satisfies the task, or possibly in designing a
policy/strategy/controller that does so.
 
 
I shall frame this general problem as the synthesis of a certificate
(aka a proof rule), possibly in conjunction with that of a
policy/strategy/controller. I shall show that this equivalent synthesis
problem can be tackled very generally, for broad classes of systems and
of tasks/objectives/requirements/specifications.
 
 
I shall additionally elucidate an inductive synthesis framework to
practically compute such certificates (and corresponding
strategies). This framework comprises the interaction of two
components, a learner and a verifier. The learner trains a neural
template on finite samples. The verifier soundly validates the
candidates trained by the learner, by means of calls to a
satisfiability-modulo-theories (SMT) solver. Whenever a candidate is
not valid, SMT-generated counter-examples are passed back to the
learner for further training.
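
To give a flavour of the learner/verifier interaction, below is a minimal Python sketch of such a counter-example-guided loop. Everything in it is illustrative: the toy 2-D linear system, the quadratic template V(x) = x'Px standing in for a neural template, and the grid-based check standing in for the SMT call are assumptions made for the sketch, not the implementation presented in the talk.

    # Minimal CEGIS-style sketch of a learner/verifier loop (illustrative only).
    import numpy as np

    A = np.array([[0.5, 0.1],
                  [0.0, 0.7]])          # toy stable discrete-time system x+ = A x

    def V(P, x):                        # candidate certificate V(x) = x' P x
        return x @ P @ x

    def loss(P, samples, margin=1e-3):
        """Hinge losses enforcing V > 0 and V(Ax) - V(x) < 0 on finite samples."""
        total = 0.0
        for x in samples:
            total += max(0.0, margin - V(P, x))                  # positivity
            total += max(0.0, V(P, A @ x) - V(P, x) + margin)    # decrease
        return total

    def learner(samples, steps=2000, lr=0.05):
        """Fit the template parameters to the current samples (numerical gradient)."""
        P = np.eye(2)
        eps = 1e-4
        for _ in range(steps):
            g = np.zeros_like(P)
            for i in range(2):
                for j in range(2):
                    dP = np.zeros_like(P)
                    dP[i, j] = eps
                    g[i, j] = (loss(P + dP, samples) - loss(P - dP, samples)) / (2 * eps)
            P -= lr * g
            P = (P + P.T) / 2           # keep the candidate symmetric
        return P

    def verifier(P, margin=1e-3):
        """Stand-in for the SMT call: search a grid for a counter-example."""
        for x1 in np.linspace(-1, 1, 41):
            for x2 in np.linspace(-1, 1, 41):
                x = np.array([x1, x2])
                if np.linalg.norm(x) < 1e-6:
                    continue
                if V(P, x) <= 0 or V(P, A @ x) - V(P, x) >= -margin / 10:
                    return x            # counter-example found
        return None                     # no violation: candidate accepted

    samples = [np.array([1.0, 0.0]), np.array([0.0, 1.0])]
    for it in range(10):
        P = learner(samples)
        cex = verifier(P)
        if cex is None:
            print("certificate found:\n", P)
            break
        samples.append(cex)             # feed the counter-example back to the learner
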
 
 
I shall demonstrate this synthesis framework on numerous problems in
control engineering and programming languages, e.g.: synthesis of
Lyapunov functions or of barrier certificates for control models, and
of more complicated proof rules to analyse probabilistic programs;
hybridisation of nonlinear dynamics for safety verification; and
(time permitting) further applications in safety-critical autonomy.
 
 
Biosketch:
 
 
Alessandro Abate is Professor of Verification & Control at the
Department of Computer Science, University of Oxford, where he leads
the Oxford Control & Verification group (OXCAV). OXCAV is broadly
concerned with Safe AI, with research grounded in the disciplines of
Control Theory and Formal Verification, and blending in techniques
from Machine Learning.
Please visit the group’s webpage at oxcav.web.ox.ac.uk