The Adaptive Complexity of Finding a Stationary Point


Speaker

Huanjian Zhou, Ph.D. student at The University of Tokyo

Time

2025-07-07 09:00:00 ~ 2025-07-07 10:30:00

Location

Tencent Meeting: 310 216 102

Host

Shuai Li (李帅)

Abstract

In large-scale applications such as machine learning, it is desirable to design non-convex optimization algorithms with a high degree of parallelization. In this work, we study the adaptive complexity of finding a stationary point, i.e., the minimal number of sequential rounds required to achieve stationarity when polynomially many queries can be executed in parallel in each round. For the high-dimensional case, we show that even with poly(d) queries per round, no algorithm achieves a better convergence rate than one-query-per-round algorithms. In other words, gradient descent, the cubic-regularized Newton's method, and the p-th order adaptive regularization method are adaptively optimal. Our proof relies on a novel analysis that characterizes the algorithm's output on hardness potentials built from a chain-like structure with a random partition. For the constant-dimensional case, we propose an algorithm that bridges grid search and gradient flow trapping, finding an approximate stationary point in a constant number of iterations. Its asymptotic tightness is verified by a new lower bound on the required queries per iteration. This lower bound is tight up to a logarithmic factor and implies that gradient flow trapping is adaptively optimal.
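To make the notion of stationarity in the abstract concrete: a point x is an ε-approximate stationary point of f when ‖∇f(x)‖ ≤ ε. The minimal sketch below (not the talk's construction; the function, step size, and helper name are illustrative assumptions) runs plain gradient descent, one gradient query per sequential round, until that criterion is met:

```python
import numpy as np

def gd_to_stationary(grad, x0, eps=1e-6, step=0.1, max_rounds=10000):
    """Plain gradient descent: one gradient query per sequential round,
    stopping once the gradient norm drops below eps.
    Returns the approximate stationary point and the rounds used."""
    x = np.asarray(x0, dtype=float)
    for t in range(max_rounds):
        g = grad(x)
        if np.linalg.norm(g) <= eps:
            return x, t
        x = x - step * g
    return x, max_rounds

# Illustrative quadratic: f(x) = ||x||^2 / 2, so grad f(x) = x,
# and the unique stationary point is the origin.
x_star, rounds = gd_to_stationary(lambda x: x, np.array([1.0, -2.0]))
```

The high-dimensional lower bound in the talk says that, up to the stated factors, issuing poly(d) such queries in parallel per round cannot reduce the number of sequential rounds below what this one-query-per-round loop needs.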
Bio

Huanjian Zhou (周焕健) is a final-year Ph.D. student at The University of Tokyo, supervised by Prof. Masashi Sugiyama (Director of RIKEN AIP). In 2025, he was a visiting researcher at Yale University, working with Prof. Sinho Chewi. His research interests lie in continuous sampling, continuous optimization, and submodular optimization. His work has been published in top-tier conferences such as ICML, NeurIPS, ICLR, and COLT.
© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University
Address: Expert Building, Software Building, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai
Email: jhc@sjtu.edu.cn  Tel: 021-54740299
Postal code: 200240