
[Registration] Workshop on Frontier Algorithms and Systems for Graph Neural Networks


  • 2019-07-03 13:30:00 ~ 2019-07-03 17:00:00

The Workshop on Frontier Algorithms and Systems for Graph Neural Networks, hosted by the John Hopcroft Center for Computer Science at Shanghai Jiao Tong University, will be held on Wednesday, July 3, 2019 in Lecture Hall 3-200 of the SEIEE Buildings.

Over the past two to three years, graph representation learning and graph neural networks have become among the hottest research directions in deep learning. Many researchers believe that graph neural networks may become a unified deep learning framework for both structured and perceptual data, and a bridge between logic-based AI and intuition-based AI.

For this workshop we have invited Jian Tang from the Montreal Institute for Learning Algorithms (Mila) to present frontier research on graph representation learning and reasoning, and Zheng Zhang from the Shanghai Amazon AWS AI team to present work ranging from the DGL system to frontier applications and putting research into practice. The workshop will be chaired by Weinan Zhang, Assistant Professor at the Center.

Time: Wednesday, July 3, 13:30 - 17:00

Venue: Lecture Hall 3-200, SEIEE Buildings, Shanghai Jiao Tong University

Registration: click this link to register!

Agenda:

 

13:30 - 13:40  Opening remarks (Xinbing Wang, Executive Director, John Hopcroft Center for Computer Science, Shanghai Jiao Tong University)

13:40 - 14:40  Graph Representation Learning and Reasoning (Jian Tang)

14:40 - 15:10  Coffee break

15:10 - 16:30  DGL: A Deep Graph Computation Platform, and Related Research (Shanghai Amazon AWS AI team: Zheng Zhang, Quan Gan, Mufei Li, Zihao Ye)

16:30 - 17:00  Open Q&A


Speakers and Abstracts:

1. Title: Graph Representation Learning and Reasoning

Abstract: Graphs, a general data structure for capturing interconnected objects, are ubiquitous across a variety of disciplines and domains. This talk is divided into two parts. In the first part, I will introduce our work on learning node representations (LINE, WWW’15), extremely low-dimensional node representation learning for graph and high-dimensional data visualization (LargeVis, WWW’16), knowledge graph embedding (RotatE, ICLR’19), and a general, high-performance graph embedding system (GraphVite, WWW’19). In the second part, I will introduce our recent work on combining statistical relational learning and graph neural networks for prediction and reasoning on graphs (GMNN, ICML’19).
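For readers less familiar with the models listed above, the key idea of RotatE can be stated compactly: each relation acts as an element-wise rotation in complex embedding space. The formulation below is a brief restatement of the published model (Sun et al., ICLR’19) for reference; it is not part of the original abstract.

```latex
% RotatE: entities h, t and relation r are embedded in C^k, with every
% relation coordinate constrained to the unit circle, |r_i| = 1.
\[
  \mathbf{t} \approx \mathbf{h} \circ \mathbf{r},
  \qquad
  d_r(\mathbf{h}, \mathbf{t}) = \lVert \mathbf{h} \circ \mathbf{r} - \mathbf{t} \rVert ,
\]
% where \circ denotes the element-wise (Hadamard) product; a smaller
% distance d_r means the triple (h, r, t) is judged more plausible.
```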

 

Bio: Dr. Jian Tang has been an assistant professor at Mila (the Quebec AI Institute) and HEC Montreal since December 2017. He was named to the first cohort of Canada CIFAR Artificial Intelligence Chairs (CIFAR AI Research Chair). His research focuses on deep graph representation learning with a variety of applications such as knowledge graphs, drug discovery, and recommender systems. He was a research fellow at the University of Michigan and Carnegie Mellon University. He received his Ph.D. degree from Peking University and was a visiting student at the University of Michigan for two years. He was a researcher at Microsoft Research Asia for two years. His work on graph representation learning (e.g., LINE, LargeVis, and RotatE) is widely recognized. He received the best paper award at ICML’14 and was nominated for the best paper award at WWW’16.


2. Title: Deep Graph Made Easy (and Faster); and a Number of Studies
 
Abstract: All real-world data has structure that is best described as a graph. If there were one universal data structure for deep learning algorithms, the graph would be the foremost candidate. The graph structure can be either explicit, as in social networks, knowledge graphs, and protein-interaction networks, or latent and implicit, as in the case of languages and images. Leveraging and discovering graph structures has many immediate applications and also serves as fertile ground for the next generation of algorithms.
 
This talk begins with an introduction to DGL, an open-source platform designed to accelerate research in this emerging field. Its design philosophy is to support the graph as the core abstraction while maintaining both forward compatibility (supporting new research ideas) and backward compatibility (integration with existing components). DGL has been tested on a variety of models, including but not limited to the popular Graph Neural Networks (GNNs) and their variants, with promising speed, memory footprint, and scalability.
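To make the "graph as the core abstraction" point concrete, here is a minimal sketch of building a toy graph and applying a single graph-convolution layer with DGL on a PyTorch backend. The exact API has changed across DGL releases, so the calls below should be read as indicative rather than authoritative.

```python
# Minimal DGL sketch (PyTorch backend); API details may vary across DGL versions.
import dgl
import torch
from dgl.nn import GraphConv

# A toy directed graph with 4 nodes on a cycle, given as (source, destination) tensors.
src = torch.tensor([0, 1, 2, 3])
dst = torch.tensor([1, 2, 3, 0])
g = dgl.graph((src, dst), num_nodes=4)

# Random 8-dimensional input features, one row per node.
feat = torch.randn(4, 8)

# One graph convolution layer: aggregates neighbor features and projects to 16 dims.
conv = GraphConv(in_feats=8, out_feats=16)
h = conv(g, feat)
print(h.shape)  # torch.Size([4, 16])
```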
 
We then describe several more recent pieces of work. The first, SegTran, takes a graph perspective on the popular Transformer model and applies sparsification to arrive at a lighter architecture. SegTran's core idea is to leverage a latent tree over spans of the input at different granularities so as to extract hierarchical features. By imposing a structural inductive bias in this way, we strike a balance between model capacity and training/computational efficiency, arriving at an O(n log n) architecture.
 
The second is an empirical study of learned attention in Graph Attention Networks (GATs). We find that, independent of the learning setting, task, and attention variant, the dataset has a much stronger influence on the learned attention. We further explore the possibility of transferring attention for graph sparsification, and show that, when applicable, attention-based sparsification retains enough information to achieve good performance while reducing computational and storage cost.
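As a purely illustrative sketch of the sparsification idea (not the speakers' actual implementation), the snippet below keeps only the highest-attention edges, assuming per-edge attention weights have already been extracted from a trained attention model such as a GAT.

```python
import numpy as np

def sparsify_by_attention(edges, attn, keep_ratio=0.2):
    """Keep only the globally highest-attention edges.

    edges: (E, 2) array of (src, dst) pairs.
    attn:  (E,) array with one attention weight per edge (assumed to come
           from a trained attention model such as a GAT).
    keep_ratio: fraction of edges to retain.
    """
    k = max(1, int(len(attn) * keep_ratio))
    keep_idx = np.argsort(attn)[-k:]  # indices of the k largest weights
    return edges[keep_idx]

# Toy usage: 5 edges, keep the top 40% by attention weight.
edges = np.array([[0, 1], [1, 2], [2, 3], [3, 0], [1, 3]])
attn = np.array([0.05, 0.40, 0.10, 0.35, 0.10])
print(sparsify_by_attention(edges, attn, keep_ratio=0.4))  # the two edges with weights 0.40 and 0.35
```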

 

Bio: Zheng Zhang is Professor of Computer Science at NYU Shanghai and Global Network Professor at NYU. As of fall 2018, Professor Zhang is on a leave of absence and has joined Amazon AWS as the founding Director of the AWS Shanghai AI Lab. He also holds affiliated appointments with the Department of Computer Science at the Courant Institute of Mathematical Sciences and with the Center for Data Science at NYU's campus in New York City. Prior to joining NYU Shanghai, he was the founder of the System Research Group at Microsoft Research Asia, where he served as Principal Researcher and research area manager. Before moving to Beijing, he was a project lead and member of technical staff at HP Labs. He holds a PhD from the University of Illinois at Urbana-Champaign, an MS from the University of Texas at Dallas, and a BS from Fudan University.


Click the registration link to sign up for the workshop!
© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University

Address: Expert Building, Software Building, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai
Email: jhc@sjtu.edu.cn   Tel: 021-54740299
Postal code: 200240