Dr Michael Schlichtkrull - In Models we Trust: Reasoning about Truth with NLP

Event details

This event has taken place.

Ada Lovelace Room (108), 211 Portobello, Sheffield, S1 4DP

Description

Date & Time: 5th June, 13:30 
 
For UoS Staff: There will also be an opportunity to speak with Michael afterwards if you wish to discuss anything in particular. If you would like an appointment, please add your name to the sign-up sheet. If the time slots are all taken, continue to add your name underneath.
 

Abstract: From retrieval-augmented generation to Deep Research agents, models that retrieve, process, and reason about external documents have become a key research direction. Unfortunately, not everything on the internet is true. In this talk, I discuss how malicious actors can misinform models, making them unwilling accomplices against their users. Human experts overcome this challenge by gathering signals about the veracity, context, reliability, and tendency of documents - that is, they fact-check, and they perform source criticism. I argue that models should do the same. I present findings from our recent AVeriTeC shared task, showcasing the state of automated fact-checking. I also introduce the novel NLP task of finding and summarising indicators of reliability, rather than truthfulness: automated source criticism. I argue that these are critical defense mechanisms for knowledge-seeking AI agents.

Michael is a lecturer at Queen Mary University of London and a research associate at Fitzwilliam College, Cambridge. His research focuses on automated reasoning (by LLMs and other NLP models) over retrieved evidence, especially for fact-checking and other problems with similarly complex epistemology. He is also interested in modelling structured data sources, such as knowledge graphs, tables, and parse trees. Michael studies technologies that improve the way we interact with information, whether through question answering systems that allow us to interrogate data, or fact-checking systems that help us reason about the evidential support for data. Prior to starting at QMUL, Michael was a postdoctoral research associate and affiliated lecturer at the University of Cambridge, where he worked with Andreas Vlachos on automated fact verification. He received his PhD from the University of Amsterdam in 2021, where he worked with Ivan Titov on building NLP models that incorporate structured data.
