Lattices, Learning, and Lies: A Cryptographic Lens on Trustworthy Machine Learning

Thursday, April 30, 2026 - 2:00pm to 3:00pm
Location: 
32-G449 (Kiva/Patil); https://mit.zoom.us/j/98200205967
Speaker: 
Neekon Vafa
Biography: 
https://neekonvafa.com/
Abstract: 
This thesis defense establishes a cryptographic foundation for studying computational hardness in algorithmic statistics and its implications for trustworthiness in machine learning.
 
On the algorithmic statistics side, we show that well-studied cryptographic assumptions imply surprising computational lower bounds for several problems: Gaussian mixture models, symmetric binary perceptrons, number partitioning, and dimensionality reduction. These assumptions underlie NIST’s post-quantum standardization efforts and can ultimately be based on the worst-case hardness of standard computational problems on integer lattices, such as the shortest vector problem.
 
On the machine learning side, we focus on adversarially robust dimensionality reduction and backdoors in models. For dimensionality reduction, we show that random compression matrices already satisfy a form of adversarial robustness when viewed as hash functions. For backdoors, we study how cryptographic assumptions can be used to plant them and how random self-reducibility can be used to remove them.
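To give a flavor of the dimensionality-reduction setting, the sketch below shows a standard random Gaussian projection in the spirit of Johnson–Lindenstrauss: a random compressing matrix approximately preserves pairwise distances. This is a minimal illustration of the generic construction only; the specific robustness notions and guarantees studied in the thesis are not reproduced here, and all parameter choices are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical parameters: ambient dimension, target dimension, number of points
d, k, n = 1000, 200, 50
X = rng.standard_normal((n, d))  # sample points in R^d

# Random compressing matrix, scaled so squared norms are preserved in expectation
A = rng.standard_normal((k, d)) / np.sqrt(k)
Y = X @ A.T  # compressed points in R^k

# Pairwise distances are approximately preserved with high probability
orig_dist = np.linalg.norm(X[0] - X[1])
comp_dist = np.linalg.norm(Y[0] - Y[1])
ratio = comp_dist / orig_dist  # close to 1 for this choice of k
```

A fixed random matrix like `A` preserves distances for any *fixed* set of points; the adversarial-robustness question concerns inputs chosen after seeing the matrix.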
 
Committee: Vinod Vaikuntanathan (advisor), Shafi Goldwasser (committee chair), Michel Goemans, David Gamarnik.