Invited Speaker
Dr. Naveed Akhtar

Research Fellow, Office of National Intelligence, Australia
Lecturer (AI, Machine Learning & Data Science)
Department of Computer Science & Software Engineering, University of Western Australia, Australia
Speech Title: Explaining Deep Learning with Adversarial Attacks

Abstract: Deep visual models are susceptible to adversarial perturbations of their inputs. Although these signals are carefully crafted, they still appear as noise-like patterns to humans. This observation has led to the argument that deep visual representations are misaligned with human perception. In this talk, we will gently counter that argument by providing evidence of human-meaningful patterns in adversarial perturbations. We will introduce an attack that fools a network into confusing a whole category of objects (the source class) with a target label. Our attack also limits unintended fooling of samples from non-source classes, thereby circumscribing network fooling within human-defined semantic notions. We will demonstrate that our attack not only leads to the emergence of regular geometric patterns in the perturbations, but also reveals insightful information about the decision boundaries of deep models. Exploring this phenomenon further, we will alter the 'adversarial' objective of our attack to use it as a tool to 'explain' deep visual representations. We will show that, by careful channeling and projection of the perturbations computed by our method, we can visualize a model's understanding of human-defined semantic notions.

Biography: Dr. Naveed Akhtar is an Australian Office of National Intelligence Research Fellow and a Lecturer at the University of Western Australia (UWA). He previously served as a Research Fellow at the Australian National University (ANU). His research focuses on adversarial machine learning and explainable Artificial Intelligence (AI). He has also contributed extensively to various applications of computer vision, including multiple-object tracking, action recognition, point-cloud analysis, remote sensing, and language-and-vision. His research is regularly published in the leading venues of computer vision and machine learning research. He currently serves as an Associate Editor for IEEE Access and as a Guest Editor for the journals Remote Sensing and Neural Computing and Applications. He is also an Area Chair for the IEEE Conference on Computer Vision and Pattern Recognition (CVPR'22). In addition, he serves as a Chief Investigator on two US Defense Advanced Research Projects Agency (DARPA) projects in the domains of adversarial machine learning and explainable AI.