Hardware-Centric AutoML: Design Automation for Efficient Deep Learning Computing


Speaker

Song Han, Massachusetts Institute of Technology (MIT)

Time

2019-01-07 10:00:00 ~ 2019-01-07 11:30:00

Location

Room 3-412, SEIEE Building

Host

Weinan Zhang, Assistant Professor, John Hopcroft Center for Computer Science

Abstract
In the post-Moore's Law era, the amount of computation per unit cost and power is no longer increasing at its historic rate. In the post-ImageNet era, researchers are solving more complicated AI problems using larger datasets, which drives the demand for more computation. This mismatch between the supply of and demand for computation highlights the need to co-design efficient machine learning algorithms and domain-specific hardware architectures. We introduce our recent work on using machine learning to optimize the machine learning system (hardware-centric AutoML). I'll describe efficient deep learning accelerators that can take advantage of these efficient algorithms, as well as hardware-efficient video understanding algorithms. I'll conclude the talk with an outlook on design automation for efficient deep learning computing.
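As background for the talk, one well-known efficiency technique from the speaker's earlier "Deep Compression" work is magnitude-based weight pruning: removing the smallest-magnitude weights to shrink a network. The sketch below is a minimal illustration of that idea only, not the actual Deep Compression implementation; the function name and example data are invented here.

```python
def magnitude_prune(weights, sparsity):
    """Zero out the smallest-magnitude entries of a weight list.

    weights:  flat list of floats
    sparsity: fraction in [0, 1) of weights to remove
    """
    k = int(len(weights) * sparsity)
    if k == 0:
        return list(weights)
    # Threshold is the k-th smallest absolute value; everything at or
    # below it is pruned (set to zero).
    threshold = sorted(abs(w) for w in weights)[k - 1]
    return [0.0 if abs(w) <= threshold else w for w in weights]

pruned = magnitude_prune([0.9, -0.05, 0.4, 0.01, -0.7, 0.2], 0.5)
# → [0.9, 0.0, 0.4, 0.0, -0.7, 0.0]
```

In practice the pruned network is then retrained to recover accuracy, and the surviving weights are quantized and entropy-coded; those later stages are omitted here.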
Bio
Song Han is an assistant professor in the EECS Department at the Massachusetts Institute of Technology (MIT) and the PI of HAN Lab: Hardware, AI and Neural-nets. He is a member of the MIT Quest for Intelligence. Dr. Han's research focuses on energy-efficient deep learning and domain-specific architectures. He proposed "Deep Compression", which has had a wide impact on industry, and he received the best paper award at ICLR'16 and FPGA'17. Prior to joining MIT, Song Han received his PhD from Stanford University.

Homepage: https://songhan.mit.edu/
© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University
Address: John Hopcroft Center for Computer Science, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai
Postal code: 200240