The \emph{Memory Reallocation problem} asks us to dynamically maintain an assignment of given objects of various sizes to non-overlapping contiguous chunks of memory, supporting insertions and deletions in an online fashion. The total size of live objects at any time is guaranteed to be at most a $1-\epsilon$ fraction of the total memory. To handle an online update, the allocator may rearrange objects in memory to make space; the \emph{overhead} of the update is the total size of moved objects divided by the size of the object being inserted or deleted.
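As a concrete illustration of the overhead definition (the specific sizes below are our own hypothetical choices, not taken from the results):
\[
  \text{suppose an object of size } s = 1 \text{ is inserted, and the allocator moves objects of sizes } 2,\, 3,\, \text{and } 5 \text{ to create a contiguous free chunk; then}
\]
\[
  \text{overhead} \;=\; \frac{2 + 3 + 5}{1} \;=\; 10.
\]
A worst-case expected overhead bound of $\mathrm{polylog}(\epsilon^{-1})$ means that the expectation of this ratio is bounded by $\mathrm{polylog}(\epsilon^{-1})$ for every update, regardless of the update sequence.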
Our main result is an allocator with worst-case expected overhead $\mathrm{polylog}(\epsilon^{-1})$. This exponentially improves upon the previous worst-case expected overhead of $\widetilde O(\epsilon^{-1/2})$ achieved by Farach-Colton, Kuszmaul, Sheffield, and Westover (2024), narrowing the gap to the $\Omega(\log\epsilon^{-1})$ lower bound. Our improvement applies the sunflower lemma, previously used by Erd\H{o}s and S\'{a}rk\"{o}zy (1992) in the context of subset sums.
Our allocator achieves polylogarithmic overhead only in expectation, as it occasionally performs expensive rebuilds. Our second technical result shows that this is necessary: no allocator can achieve subpolynomial overhead with high probability.