Near-Optimal Time-Sparsity Trade-Offs for Solving Noisy Linear Equations

Friday, February 21, 2025 - 10:30am to 12:00pm
Location: 32-G882, Hewlett Room
Speaker: Kiril Bangachev (MIT EECS)
We present a polynomial-time reduction from solving noisy linear equations over a finite field F with a uniformly random coefficient matrix to solving noisy linear equations over F in which each row of the coefficient matrix has uniformly random support of size k. This allows us to deduce the hardness of sparse problems from that of their dense counterparts. In particular, under popular cryptographic assumptions, we show that learning with errors (LWE) and learning parity with noise (LPN) with uniformly random k-sparse public matrices in dimension n cannot be solved in time n^o(k). These results are nearly tight, as both sparse problems can be solved in time n^k given sufficiently many samples.
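To make the sparse model concrete, here is a minimal sketch of sampling k-sparse noisy linear equations over F_2 (i.e., sparse LPN; the talk works over a general finite field F). The parameter names n, m, k and the Bernoulli noise rate eta are illustrative choices, not taken from the talk:

    import numpy as np

    def sparse_lpn_samples(n, m, k, eta, seed=None):
        # Illustrative sampler for k-sparse noisy linear equations over F_2:
        # each row of A has a uniformly random support of size k, and the
        # labels are y = A s + e (mod 2) with Bernoulli(eta) noise e.
        rng = np.random.default_rng(seed)
        s = rng.integers(0, 2, size=n)                      # hidden secret in F_2^n
        A = np.zeros((m, n), dtype=np.int64)
        for i in range(m):
            support = rng.choice(n, size=k, replace=False)  # uniform k-subset of [n]
            A[i, support] = 1                               # over F_2 the nonzero entries are 1
        e = (rng.random(m) < eta).astype(np.int64)          # Bernoulli(eta) errors
        y = (A @ s + e) % 2
        return A, y, s

In the dense problem, each row of A would instead be uniform over all of F^n; the reduction transforms instances of that dense problem into instances of this sparse one.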

Our reduction allows us to derive several consequences in cryptography and in the computational complexity of statistical problems. In addition, as a new application, we give a reduction from k-sparse LWE to noisy completion of a low-rank order-k tensor. As a result, we obtain a nearly optimal lower bound for noisy tensor completion under standard lattice-theoretic assumptions.
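For orientation, one standard formulation of the target problem (a generic setup; the precise observation and noise model in the talk may differ) takes the unknown to be an order-k tensor of low rank,

\[
T \;=\; \sum_{\ell=1}^{r} \lambda_\ell \, v_\ell^{\otimes k} \;\in\; (\mathbb{R}^n)^{\otimes k},
\]

and provides noisy observations y_j = T_{i_j} + epsilon_j of its entries at indices i_j in [n]^k for j = 1, ..., m; the goal is to recover T, or to predict its unobserved entries, from these m samples.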

This talk is based on joint work with Stefan Tiegel, Vinod Vaikuntanathan, and Guy Bresler.