Unlocking Text Generation


Speaker

Junxian He, Carnegie Mellon University

Time

2021-09-10 10:00:00 ~ 2021-09-10 11:30:00

Location

Room 1-418, SEIEE Building (Tencent Meeting ID: 885682727, password: 820081)

Host

Zhouhan Lin

Abstract

Natural language generation (NLG) has seen remarkable success, benefiting from the development of deep learning techniques. However, the prevalent neural models function like a black box: they do not explicitly model the underlying factors behind the observed text, such as syntax or sentiment. As a result, such models are not conducive to interpretation and do not allow control over the generation process. On the other hand, the high economic and environmental cost of training and deploying huge models makes it questionable whether scaling up model parameters is the only way to enhance NLG models. In this talk, I will cover my research efforts towards addressing these challenges: (1) developing interpretable and controllable generative text models, and (2) strengthening parametric NLG models through the lens of external, non-parametric memories. At the end of the talk, I will briefly describe my vision for future work.

Bio

Junxian He is a fifth-year Ph.D. student in the School of Computer Science at Carnegie Mellon University, co-advised by Prof. Graham Neubig and Prof. Taylor Berg-Kirkpatrick. His research focuses on natural language processing, in particular generative modeling and text generation. Before joining CMU, he obtained his bachelor's degree from Shanghai Jiao Tong University. He is one of ten recipients worldwide of the 2020 Baidu PhD Fellowship.
