TVM: An End-to-End IR Stack for Deep Learning Systems
Speaker
Tianqi Chen, University of Washington
Time
2017-10-30 14:00:00 ~ 2017-10-30 15:30:00
Location
SEIEE-3-412
Host
Weinan Zhang
Abstract
Deep learning has become ubiquitous and indispensable. Part of this revolution has been fueled by scalable deep learning systems. In this talk, I am going to talk about TVM: a unified intermediate representation (IR) stack that closes the gap between productivity-focused deep learning frameworks and performance- or efficiency-oriented hardware backends. TVM is a novel framework that can: represent and optimize the common deep learning computation workloads for CPUs, GPUs, and other specialized hardware; automatically transform the computation graph to minimize memory utilization, optimize data layout, and fuse computation patterns; and provide an end-to-end compilation from existing front-end frameworks down to bare-metal hardware. I will also discuss the problems and opportunities of learning-systems research around TVM.
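To make the declare-schedule-compile flow concrete, below is a minimal sketch using TVM's Python frontend. It is illustrative only: the identifiers (`te.placeholder`, `te.compute`, `te.create_schedule`, `tvm.build`, `tvm.nd.array`) follow the modern `tvm.te` API and may differ from the API at the time of this talk. The example declares a vector addition, builds a default schedule, and compiles it to native CPU code via LLVM.

```python
import numpy as np
import tvm
from tvm import te

# Declare the computation: element-wise vector addition C = A + B.
n = te.var("n")
A = te.placeholder((n,), name="A")
B = te.placeholder((n,), name="B")
C = te.compute(A.shape, lambda i: A[i] + B[i], name="C")

# Create a schedule. Here we keep the default; loop transformations such as
# splitting, vectorizing, or binding to GPU threads would be applied on `s`.
s = te.create_schedule(C.op)

# Compile down to machine code for a CPU target through LLVM.
fadd = tvm.build(s, [A, B, C], target="llvm", name="vector_add")

# Run the compiled kernel and check the result against NumPy.
dev = tvm.cpu(0)
a = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
b = tvm.nd.array(np.random.uniform(size=1024).astype("float32"), dev)
c = tvm.nd.empty((1024,), "float32", dev)
fadd(a, b, c)
np.testing.assert_allclose(c.numpy(), a.numpy() + b.numpy())
```

Targeting a GPU or other specialized hardware follows the same flow: the schedule is rewritten for the device (e.g., bound to thread blocks) and the `target` string is changed, while the computation declaration stays unchanged.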
Bio
Tianqi is a PhD student at the University of Washington, working on machine learning and systems. He received his bachelor's and master's degrees from Shanghai Jiao Tong University. He is a recipient of the Google PhD Fellowship in Machine Learning.