
Model-based RL as a Minimalist Approach to Horizon-Free and Second-Order Bounds


Speaker

Zhiyong Wang (王智勇), The Chinese University of Hong Kong (CUHK)

Time

2025-05-27 09:00:00 ~ 2025-05-27 10:00:00

Location

Tencent Meeting: 995-425-359; password: 250502 (attendees may also join in person in Room 1319, Expert Building, Software Building)

Host

Shuai Li (李帅)

Abstract

Learning a transition model via Maximum Likelihood Estimation (MLE) and then planning inside the learned model is perhaps the simplest and most standard model-based Reinforcement Learning (RL) framework. In this work, we show that such a simple model-based RL scheme, when equipped with optimistic and pessimistic planning procedures, achieves strong regret and sample-complexity bounds in online and offline RL settings. In particular, we demonstrate that when the trajectory-wise reward is normalized to lie between zero and one and the transition is time-homogeneous, the scheme achieves nearly horizon-free and second-order bounds. Nearly horizon-free means that our bounds have no polynomial dependence on the horizon of the Markov Decision Process. A second-order bound is an instance-dependent bound that scales with the variances of the returns of the policies, which can be small when the system is nearly deterministic and/or the optimal policy has small values. We highlight that our algorithms are simple, fairly standard, and indeed have been extensively studied in the RL literature: they learn a model via MLE, build a version space around the MLE solution, and perform optimistic or pessimistic planning depending on whether they operate in the online or offline mode. These algorithms do not rely on additional specialized algorithmic designs, such as learning variances and performing variance-weighted learning, and can therefore easily leverage non-linear function approximation. The simplicity of the algorithms also implies that our horizon-free and second-order regret analysis is actually standard, mainly following the general framework of optimism/pessimism in the face of uncertainty.
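To make the recipe concrete, here is a minimal, hypothetical sketch in a tabular MDP: estimate transitions by MLE from visit counts, then plan optimistically with a count-based bonus that shrinks as 1/sqrt(n(s, a)), standing in for planning over the version space around the MLE solution. All names (`mle_transitions`, `optimistic_planning`, `bonus_scale`) are illustrative rather than from the talk, and the tabular setting is a simplification of the general function-approximation setup the abstract describes.

```python
import numpy as np

def mle_transitions(counts):
    """MLE estimate P_hat(s' | s, a) from visit counts (uniform when (s, a) is unvisited)."""
    n = counts.sum(axis=2, keepdims=True)       # n(s, a), shape (S, A, 1)
    num_states = counts.shape[2]
    return np.where(n > 0, counts / np.maximum(n, 1), 1.0 / num_states)

def optimistic_planning(P_hat, counts, reward, horizon, bonus_scale=0.1):
    """Finite-horizon value iteration with a count-based optimism bonus.

    The 1/sqrt(n(s, a)) bonus is a simple stand-in for planning over the
    version space around the MLE model; values are clipped at 1 because the
    trajectory-wise reward is assumed normalized to [0, 1].
    """
    num_states, num_actions, _ = P_hat.shape
    V = np.zeros(num_states)
    policy = np.zeros((horizon, num_states), dtype=int)
    n = counts.sum(axis=2)                      # visit counts n(s, a)
    bonus = bonus_scale / np.sqrt(np.maximum(n, 1))
    for h in reversed(range(horizon)):
        Q = reward + P_hat @ V + bonus          # optimistic Q over all (s, a)
        Q = np.minimum(Q, 1.0)                  # clip: total return lies in [0, 1]
        policy[h] = Q.argmax(axis=1)
        V = Q.max(axis=1)
    return policy, V
```

For the offline (pessimistic) variant, one would subtract the bonus instead of adding it and clip values below at zero; the clipping at one reflects the trajectory-wise reward normalization assumed in the abstract.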

Bio

Zhiyong Wang is a final-year Ph.D. candidate in Computer Science at The Chinese University of Hong Kong (CUHK), supervised by Prof. John C.S. Lui (ACM/IEEE Fellow). In 2024, he was a visiting scholar at Cornell University, working with Prof. Wen Sun. His research interests lie in reinforcement learning, multi-armed bandits, and post-training of large language models (LLMs). His work has been published at top-tier conferences such as ICML, NeurIPS, ICLR, AISTATS, and AAAI.

© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University

Address: Expert Building, Software Building, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai
Email: jhc@sjtu.edu.cn  Tel: 021-54740299
Postal code: 200240