Lower Bounds for Non-adaptive Local Computation Algorithms

Wednesday, March 11, 2026 - 4:00pm to 5:00pm
Location: 
32-G575
Speaker: 
Alma Ghafari
Biography: 
https://almaghafari.github.io/

We study non-adaptive local computation algorithms (LCAs). A classical reduction of Parnas and Ron converts any distributed algorithm into a non-adaptive LCA. Applied to known distributed algorithms, this yields non-adaptive LCAs for constant-factor approximations of Maximum Matching (MM) and Minimum Vertex Cover (MVC) with query complexity ∆^{O(log ∆ / log log ∆)}, where ∆ is the maximum degree. With adaptivity, this bound improves dramatically. Is such a gap necessary, or are there better non-adaptive LCAs?
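The Parnas–Ron reduction rests on a standard counting fact about the LOCAL model; the following is a sketch of that folklore argument, not a claim specific to this talk:

```latex
% An r-round distributed (LOCAL) algorithm determines its output at a vertex v
% from the r-hop ball B_r(v) alone, so a non-adaptive LCA can answer a query
% about v by reading all of B_r(v) up front. In a graph of maximum degree \Delta,
\[
  |B_r(v)| \;\le\; 1 + \sum_{i=1}^{r} \Delta (\Delta - 1)^{i-1} \;\le\; \Delta^{O(r)},
\]
% so an r-round algorithm yields a non-adaptive LCA making \Delta^{O(r)} queries;
% with r = O(\log \Delta / \log\log \Delta) rounds this gives query complexity
% \Delta^{O(\log \Delta / \log\log \Delta)}.
```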

We show that such a gap is indeed necessary. Our main result is a matching lower bound: any non-adaptive LCA computing a constant-factor approximation of MM or MVC requires ∆^{Ω(log ∆ / log log ∆)} queries. This is the first separation between adaptive and non-adaptive LCAs, and it shows that the Parnas–Ron reduction is essentially optimal for MM and MVC in the non-adaptive setting.

Our proof blends techniques from two distinct lines of work: lower bounds for sublinear-time algorithms and lower bounds in distributed computing. We build on the technique of coupling over acyclic subgraphs due to Behnezhad, Roghani, and Rubinstein, and apply it to a modified version of the construction of Kuhn, Moscibroda, and Wattenhofer (KMW). A key technical contribution is showing that this modified KMW instance has an important property: random walks of any length identify a matching edge with probability at most ∆^{-Ω(log ∆ / log log ∆)}, a substantial strengthening of the original KMW result, which only handled walks of bounded depth.