Thesis Defenses

Toward Effective and Efficient Transformer-Based Multimodal Reasoning
Friday, March 20, 2026 - 3:00pm to 4:00pm
Transformer-based models have become the dominant architecture for modeling complex relationships across a wide range of data modalities, including text, graphs, and knowledge bases.
Simpler hardness proofs via gadget frameworks
Friday, April 3, 2026 - 10:00am to 11:00am
In this thesis, I consider the general notion of a "gadget framework," which is roughly a family of hard problems that are useful as sources of reductions, where such reductions consist of implementations of a set of "gadgets," and especially when there is a notion of
Modern SNARGitecture: New Constructions of Succinct Non-interactive Arguments
Wednesday, May 6, 2026 - 2:00pm to 3:00pm
Cryptographic proof systems enable a prover to convince a computationally weak verifier that a statement is true.
The Computational Landscape of Sequential Learning and Inference
Monday, May 4, 2026 - 3:00pm to 4:00pm
The frontier of modern machine learning lies in designing systems that can reliably make sequences of decisions. Such systems typically invoke a trained machine learning model in a loop, where the model's decision at one step affects its inputs at subsequent steps.
Explicit Lossless Expanders and Interactive Codes
Monday, May 4, 2026 - 1:00pm to 2:00pm
One of the main results in this thesis is the first explicit construction of two-sided lossless expanders. A lossless expander is a sparse graph where every small set of vertices has nearly as many neighbors as its sparsity permits.
Reliable Learning for Adaptive Environments
Friday, May 1, 2026 - 10:00am to 11:00am
As artificial intelligence becomes ubiquitous in complex, interactive systems, we need models that perform reliably under dynamic conditions.
Lattices, Learning, and Lies: A Cryptographic Lens on Trustworthy Machine Learning
Thursday, April 30, 2026 - 2:00pm to 3:00pm
This thesis establishes a cryptographic foundation for studying computational hardness in algorithmic statistics and its implications for trustworthiness in machine learning.
Probabilistically Checkable Proofs and Applications
Wednesday, April 29, 2026 - 10:00am to 11:00am
Probabilistically Checkable Proofs (PCPs) are proof systems that allow a verifier to check the correctness of a proof by reading only a few locations — sometimes as few as two.
Marrying Worst-Case Analysis and Machine Learning
Monday, April 27, 2026 - 3:00pm to 4:00pm
Worst-case analysis, the de facto standard in the analysis of algorithms, certifies that an algorithm is correct and efficient on any possible input.
Thesis Defense: Prashant Vasudevan: Fine-Grained Cryptography
Monday, July 16, 2018 - 2:45pm to 3:15pm