Learning Symbolic Representations in Mixed Discrete-Continuous Domains
Stefanie Speichert
21 May 2019, 14.00 - 15.00
IF4.31/33

Abstract

The field of statistical relational learning aims at unifying logic and probability to reason and learn from data. While many systems offer inference capabilities, the greater challenge is that of learning meaningful and interpretable symbolic representations from data. Learning is especially difficult in hybrid domains, that is, domains that contain both discrete and continuous features, because inference over continuous features is almost always intractable. In practice, most systems approximate inference during the learning step as well as the querying step, often with weak guarantees, which is problematic for safety-critical applications. On the other hand, tractable learning has recently emerged as a powerful paradigm for learning distributions that support efficient probabilistic querying. A major limitation of current approaches, in tractable graphical models and probabilistic programming languages alike, is that learning and inference are often supported only for a very limited family of continuous distributions, such as Gaussians. This talk will show how to learn explainable representations from hybrid data without any prior assumptions about the density, and how to integrate them into both the probabilistic logic programming and the (deep) tractable graphical model frameworks.
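To make the "no prior assumptions about the density" idea concrete, here is a minimal, purely illustrative Python sketch, not the speaker's method: it fits a piecewise-polynomial density to continuous samples instead of assuming a parametric family such as a Gaussian. The function names and the histogram-plus-polynomial-fit approach are assumptions made for this example.

    import numpy as np

    def fit_piecewise_density(samples, n_bins=8, degree=2):
        # Histogram the samples (density=True makes the bar heights
        # integrate to 1), then fit one low-degree polynomial per bin
        # over a window of neighbouring bin centres so each fit has
        # at least degree + 1 points.
        counts, edges = np.histogram(samples, bins=n_bins, density=True)
        centers = 0.5 * (edges[:-1] + edges[1:])
        pieces = []
        for i in range(n_bins):
            lo, hi = max(0, i - degree), min(n_bins, i + degree + 1)
            deg = min(degree, hi - lo - 1)
            poly = np.poly1d(np.polyfit(centers[lo:hi], counts[lo:hi], deg))
            pieces.append((edges[i], edges[i + 1], poly))
        # Renormalise so the pieces integrate to exactly 1 over the range.
        total = sum(p.integ()(b) - p.integ()(a) for a, b, p in pieces)
        return [(a, b, p / total) for a, b, p in pieces]

    def density(pieces, x):
        # Evaluate the piece whose interval contains x; clip the small
        # negative values a polynomial fit can produce near bin edges.
        for a, b, p in pieces:
            if a <= x <= b:
                return max(float(p(x)), 0.0)
        return 0.0

    # Clearly non-Gaussian data: a Gaussian bump plus an exponential tail.
    rng = np.random.default_rng(0)
    data = np.concatenate([rng.normal(-2.0, 0.5, 500),
                           rng.exponential(1.0, 500)])
    pieces = fit_piecewise_density(data)
    print(density(pieces, 0.5))

The appeal of piecewise-polynomial representations in this setting is that polynomials can be integrated in closed form, which is what keeps downstream probabilistic queries tractable, in contrast to arbitrary black-box density estimators.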