Pretraining, Instruction Tuning, Alignment, Specialization: On the Source of Large Language Model Abilities


Speaker

Yao Fu, University of Edinburgh

Time

2023-02-14 12:22:00

Location

TED Lecture Hall, E-Valley, SEIEE Building No. 4

Host

Zhouhan Lin

Abstract

Recently, the field has been greatly impressed and inspired by Large Language Models (LLMs) such as GPT-3.5. The multi-dimensional abilities of LLMs significantly exceed the expectations of many NLP researchers and practitioners, and are thus reshaping the research paradigm of NLP. A natural question is how LLMs got there, and where these remarkable abilities come from. In this talk, we try to dissect the strong abilities of LLMs and trace them to their sources, hoping to provide a comprehensive roadmap of the evolution of LLMs.


Bio

Yao Fu is a Ph.D. student at the University of Edinburgh and a student researcher at the Allen Institute for AI. Previously, he received his M.S. from Columbia University and his B.S. from Peking University. Yao studies large-scale probabilistic generative models for human language. His publications cover topics including large language models, emergent abilities, and complex reasoning.

© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University
Address: Expert Building, Software Building, Shanghai Jiao Tong University, No. 800 Dongchuan Road, Shanghai
Email: jhc@sjtu.edu.cn Tel: 021-54740299
Postcode: 200240