Reliable Learning for Adaptive Environments

Friday, May 1, 2026 - 10:00am to 11:00am
Location: 
32-G575; https://mit.zoom.us/j/5048003936?omn=95496228068
Speaker: 
Maxwell Fishelson
Biography: 
https://maxkfish.com/
As artificial intelligence becomes ubiquitous in complex, interactive systems, we need models that perform reliably under dynamic conditions. AI systems are moving beyond passive perception to active participation: controlling autonomous vehicles in mixed traffic, participating in markets, and representing users in complex decision-making environments. In these roles, an agent's decisions reshape the environment and invite strategic responses from others. This feedback loop invalidates a core assumption of classical machine learning: that data is independent and identically distributed. We require a foundation for learning that is inherently robust to these adaptive dynamics.
 
In this talk, I will demonstrate how we can approach this challenge using regret minimization and calibration. An algorithm with low regret effectively “learns to play” optimally, even against strategic counterparties. Calibration guarantees that we can distill trustworthy forecasts from dynamic data. I will discuss my work pushing the boundaries of what is possible in this domain, resolving long-standing open questions on regret minimization in games, the scalability of algorithms for strong notions of regret, and the fundamental limits of trustworthy forecasting. Together, these results provide an algorithmic pathway towards AI systems that remain robust and reliable even in complex, interactive environments.
 
Advisor: Constantinos Daskalakis
 
Committee: Yuval Dagan, Gabriele Farina