
Talk Title
Integrating Symbolic Algorithms in Neural Models (and LLMs)
Talk Summary
Neural models, including LLMs, can exhibit remarkable abilities; paradoxically, they also struggle with algorithmic tasks where much simpler models excel. To address these issues, we propose Implicit Maximum Likelihood Estimation (IMLE), a framework for the end-to-end learning of models that combine combinatorial solvers and differentiable neural components, which allows us to incorporate planning and reasoning algorithms into neural architectures simply by adding a decorator [1, 2]. Finally, we’ll discuss some very recent extensions of these ideas for enabling LLMs to teach themselves how to use tools [3].
[1] Implicit MLE: Backpropagating Through Discrete Exponential Family Distributions. NeurIPS 2021. https://arxiv.org/abs/2106.01798
[2] Adaptive Perturbation-Based Gradient Estimation for Discrete Latent Variable Models. AAAI 2023. https://arxiv.org/abs/2209.04862
[3] Self-Training Large Language Models for Tool-Use Without Demonstrations. NAACL 2025. https://arxiv.org/abs/2502.05867
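To make the "solver behind a decorator" idea from the summary concrete, here is a minimal, self-contained PyTorch sketch of an IMLE-style gradient estimator in the spirit of [1]. The `imle` wrapper, the `top_k_solver` toy problem, and the hyperparameters `noise_scale` and `lam` are illustrative names and defaults chosen for this sketch, not the interface of the authors' released library.

import torch
from torch import Tensor

def imle(solver, noise_scale: float = 1.0, lam: float = 10.0):
    """Wrap a black-box MAP solver (scores -> 0/1 structures) so that
    gradients can flow through its discrete output (illustrative sketch)."""

    class IMLEFunction(torch.autograd.Function):
        @staticmethod
        def forward(ctx, theta: Tensor) -> Tensor:
            # Forward: perturb the scores and solve the combinatorial problem.
            eps = noise_scale * torch.randn_like(theta)
            z = solver(theta + eps)
            ctx.save_for_backward(theta, eps, z)
            return z

        @staticmethod
        def backward(ctx, grad_output: Tensor) -> Tensor:
            theta, eps, z = ctx.saved_tensors
            # Backward: nudge the scores against the downstream gradient,
            # re-solve, and use the difference of MAP states as the gradient.
            z_target = solver(theta - lam * grad_output + eps)
            return (z - z_target) / lam

    def wrapped(theta: Tensor) -> Tensor:
        return IMLEFunction.apply(theta)

    return wrapped

# Toy solver: top-k selection, a (trivial) combinatorial MAP problem.
def top_k_solver(scores: Tensor, k: int = 2) -> Tensor:
    z = torch.zeros_like(scores)
    return z.scatter(-1, scores.topk(k, dim=-1).indices, 1.0)

differentiable_top_k = imle(top_k_solver)

theta = torch.nn.Parameter(torch.randn(4, 8))
z = differentiable_top_k(theta)                # discrete 0/1 output
loss = ((z - torch.ones_like(z)) ** 2).mean()
loss.backward()                                # gradients reach theta through the solver

The same wrapper can, in principle, be applied to any solver with this scores-in, discrete-structure-out signature (e.g. shortest paths or spanning trees), which is the sense in which a planning or reasoning algorithm can be dropped into a neural architecture.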
Speaker’s Bio
Pasquale Minervini is a Lecturer in Natural Language Processing at the School of Informatics, University of Edinburgh; co-founder and CTO of the generative AI start-up Miniml.AI; and an ELLIS Scholar (Edinburgh Unit). His research interests include NLP and ML, with a focus on relational learning and learning from graph-structured data, solving knowledge-intensive tasks, hybrid neuro-symbolic models, compositional generalisation, and designing data-efficient and robust deep learning models.