Towards Interpretability and Robustness of Machine Learning


Speaker

Jianbo Chen, University of California, Berkeley

Time

2019-07-26, 14:30–16:00

Location

Room 1319, Software Expert Building

Host

Quanshi Zhang, Associate Professor, John Hopcroft Center for Computer Science

Abstract

Interpretability and robustness have become important criteria when machine learning models are applied in critical areas such as medicine, financial markets, and criminal justice. Many complex models, such as random forests and deep neural networks, have been developed and deployed to optimize prediction accuracy. However, their complex, black-box nature makes the decision-making process difficult to interpret and leaves them vulnerable to minimal adversarial perturbations.

This talk addresses the interpretability and robustness of machine learning models treated as black boxes. We discuss tools for scalable instancewise feature attribution and beyond. We also examine the robustness of state-of-the-art models in real-world applications using a family of scalable decision-based adversarial attacks. The talk concludes with a recently developed method for detecting adversarial examples based on feature attribution.
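The idea of instancewise feature attribution mentioned above can be illustrated with a minimal occlusion-style sketch: score each feature of one input by how much the model's confidence drops when that feature is replaced by a baseline. This is an illustrative toy only, not the speaker's method; the function names and the toy model below are assumptions.

```python
import numpy as np

def occlusion_attribution(predict_proba, x, baseline=0.0):
    """Attribute a black-box prediction to individual features of one
    instance: the score of feature i is the drop in the predicted
    probability of the original class when feature i is occluded."""
    p_orig = predict_proba(x)
    cls = int(np.argmax(p_orig))          # class predicted on the original input
    scores = np.zeros_like(x, dtype=float)
    for i in range(len(x)):
        x_occ = x.copy()
        x_occ[i] = baseline               # occlude feature i with a baseline value
        scores[i] = p_orig[cls] - predict_proba(x_occ)[cls]
    return scores

# Toy black-box classifier (hypothetical): a two-class logistic model
# in which feature 0 carries most of the signal.
def toy_model(x):
    logit = 2.0 * x[0] - 0.5 * x[1]
    p1 = 1.0 / (1.0 + np.exp(-logit))
    return np.array([1.0 - p1, p1])

attr = occlusion_attribution(toy_model, np.array([1.0, 1.0]))
# Feature 0 should receive the larger attribution score.
```

Methods discussed in the talk aim to make this kind of per-instance explanation scalable, rather than requiring one model query per feature as this naive sketch does.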

Bio

Jianbo Chen is a PhD candidate in Statistics at the University of California, Berkeley, advised by Michael I. Jordan and Martin J. Wainwright. His research interests lie in model interpretation and adversarial robustness. He completed a B.Sc. with honors in Mathematics at the University of Hong Kong, before which he spent a preparatory year in the Department of Mathematics at Shanghai Jiao Tong University.

© John Hopcroft Center for Computer Science, Shanghai Jiao Tong University
Address: Expert Building, Software Building, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai
Email: jhc@sjtu.edu.cn  Tel: 021-54740299
Postal code: 200240