
Towards DNN Interpretability at the Right Granularity Level


Speaker

Zining ZHU, Stevens Institute of Technology

Time

2024-08-15 13:00:00 ~ 2024-08-15 14:30:00

Location

Conference Room 3-404, SEIEE Buildings, Shanghai Jiao Tong University

Host

Quanshi Zhang

Abstract

Recently, deep neural networks have demonstrated remarkable performance, prompting inquiries into their underlying mechanisms. In this presentation, I will briefly review some of the most promising mechanisms at three granularity levels: representation, module, and neuron. I will present some of our lab's work at each granularity level and discuss how I expect future interpretability research to develop.

Bio

Dr. Zining Zhu is an Assistant Professor at the Stevens Institute of Technology. He received his Ph.D. from the University of Toronto and the Vector Institute, advised by Frank Rudzicz. He directs the Explainable and Controllable AI lab and is also affiliated with the Stevens Institute for Artificial Intelligence. He is interested in understanding the mechanisms and abilities of neural network AI systems and in incorporating these findings into controlling such systems. In the long term, he looks forward to empowering real-world applications with safe and trustworthy AIs that can collaborate with humans.

