Adversarial Training for Deep Learning: A Framework for Improving Robustness, Generalization and Interpretability
Speaker
Zhanxing Zhu, Peking University
Time
2019-11-13 16:00:00 ~ 2019-11-13 17:30:00
Location
Room 310, Yi Fu Building
Host
Weinan Zhang, Assistant Professor, John Hopcroft Center for Computer Science
Abstract
Deep learning has achieved tremendous success in various application areas. Unfortunately, recent work shows that an adversary can fool deep learning models into producing incorrect predictions by maliciously manipulating their inputs. The resulting manipulated samples are called adversarial examples. This vulnerability dramatically hinders the deployment of deep learning, particularly in safety-critical applications.
In this talk, I will introduce various approaches for constructing adversarial examples. I will then present a framework, called adversarial training, for improving the robustness of deep networks against adversarial examples, and describe two approaches for accelerating adversarial training from the perspective of optimal control theory. We also find that adversarial training can enhance the interpretability of CNNs. Moreover, I will show that the adversarial learning framework can be extended into an effective regularization strategy that improves generalization in semi-supervised learning.
This talk will cover recent work by my group published at NeurIPS, ICML, and CVPR, as well as work under review at ICLR.
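For context on the framework discussed in the abstract, the sketch below shows the widely used min-max (PGD-style) formulation of adversarial training: an inner loop that constructs adversarial examples by projected gradient ascent on the loss, and an outer step that updates the network on those perturbed inputs. This is a minimal illustrative sketch of the standard formulation, not necessarily the speaker's specific method; the function names, step sizes, and perturbation budget below are assumptions for illustration.

```python
# Illustrative sketch of standard PGD-based adversarial training (for context only;
# not necessarily the method presented in the talk).
import torch
import torch.nn.functional as F

def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=10):
    """Inner maximization: construct adversarial examples within an eps-ball."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()              # gradient ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps)     # project back to eps-ball
        x_adv = x_adv.clamp(0, 1)                                 # keep valid pixel range
    return x_adv.detach()

def adversarial_training_step(model, optimizer, x, y):
    """Outer minimization: update the network on the perturbed inputs."""
    x_adv = pgd_attack(model, x, y)
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```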
Bio
Dr. Zhanxing Zhu is an assistant professor at the School of Mathematical Sciences, Peking University, and is also affiliated with the Center for Data Science, Peking University. He obtained his Ph.D. in machine learning from the University of Edinburgh in 2016. His research interests cover machine learning and its applications in various domains. He currently focuses on deep learning theory and optimization algorithms, reinforcement learning, and applications in traffic, computer security, computer graphics, and medicine and healthcare. He has published more than 40 papers in top AI journals and conferences, including NIPS, ICML, CVPR, ACL, IJCAI, AAAI, and ECML. He was named a 2019 Alibaba Damo Young Fellow and was a Best Paper Finalist at the top computer security conference CCS 2018.