
WHO: Jonathan Shafer, MIT

WHEN: Monday, January 12th 2026 at 12:00

WHERE: BUILDING 503 (Computer Science), AUDITORIUM


Title: From Learning Theory to Cryptography: Provable Guarantees for AI

Speaker: Jonathan Shafer (MIT)


Abstract:

Ensuring that AI systems behave as intended is a central challenge in contemporary AI. This talk offers an exposition of provable mathematical guarantees for learning and security in AI systems.


Starting with a classic learning-theoretic perspective on generalization guarantees, we present two results quantifying the amount of training data that is provably necessary and sufficient for learning: (1) In online learning, we show that access to unlabeled data can reduce the number of prediction mistakes quadratically, but no more than quadratically [NeurIPS23, NeurIPS25 Best Paper Runner-Up]. (2) In statistical learning, we discuss how much labeled data is actually necessary for learning—resolving a long-standing gap left open by the celebrated VC theorem [COLT23].


Provable guarantees are especially valuable in settings that require security in the face of malicious adversaries. The main part of the talk adopts a cryptographic perspective, showing how to: (1) Utilize interactive proof systems to delegate data collection and AI training tasks to an untrusted party [ITCS21, COLT23, NeurIPS25]. (2) Leverage random self-reducibility to provably remove backdoors from AI models, even when those backdoors are themselves provably undetectable [STOC25].


Bio: Jonathan Shafer is a Postdoctoral Associate at MIT, working with Vinod Vaikuntanathan. He co-organizes the MIT ML+Crypto Seminar. Previously, he earned his PhD from UC Berkeley, advised by Shafi Goldwasser.