Friday, 8th November - 11am
Paul Soulos: Seminar

Title: Compositional Generalization Across Distributional Shifts with Sparse Tree Operations

Abstract: Neural networks continue to struggle with compositional generalization, and this issue is exacerbated by a lack of massive pre-training. One successful approach for developing neural systems that exhibit human-like compositional generalization is hybrid neurosymbolic techniques. However, these techniques run into the core issues that plague symbolic approaches to AI: scalability and flexibility. The reason for this failure is that, at their core, hybrid neurosymbolic models perform symbolic computation and relegate the scalable and flexible neural computation to parameterizing a symbolic system. We investigate a unified neurosymbolic system where transformations in the network can be interpreted simultaneously as both symbolic and neural computation. We extend a unified neurosymbolic architecture called the Differentiable Tree Machine in two central ways. First, we significantly increase the model's efficiency through the use of sparse vector representations of symbolic structures. Second, we enable its application beyond the restricted set of tree2tree problems to the more general class of seq2seq problems. The improved model retains its prior generalization capabilities and, since there is a fully neural path through the network, avoids the pitfalls of other neurosymbolic techniques that elevate symbolic computation over neural computation. This work was carried out with Henry Conklin, Mattia Opper, Paul Smolensky, Jianfeng Gao, and Roland Fernandez.

Bio: Paul Soulos is a PhD candidate in Computational Cognitive Science at Johns Hopkins University, where he specializes in computational approaches to understanding human-like generalization in language.
His research focuses on integrating insights from Computational Linguistics and Psycholinguistics to advance Natural Language Processing systems, with a particular emphasis on fully differentiable neurosymbolic models (Vector Symbolic Architectures). Paul predominantly publishes at Machine Learning and Computational Linguistics conferences, where he has received awards including a NeurIPS Spotlight and an invited Spotlight talk at a NeurIPS workshop. He is advised by Paul Smolensky and is currently completing an internship at IBM Research in Zürich.

Date and time: Nov 08 2024, 11.00 - 12.00
Location: IF G.03 and on Teams

This event is co-organised by ILCC and by the UKRI Centre for Doctoral Training in Natural Language Processing, https://nlp-cdt.ac.uk.