On Robustness of Deep Neural Networks: A Comprehensive Study on the Effect of Architecture and Weight Initialization to Susceptibility and Transferability of Adversarial Attacks

  • Ibrahim Ben Daya
  • Mohammad Javad Shafiee
  • Michelle Karg
  • Christian Scharfenberger
  • Alexander Wong

Abstract

Neural network models have shown state-of-the-art performance in
several applications. However, it has been observed that they are
susceptible to adversarial attacks: small perturbations to the input
that fool a network model into mislabelling the input data. These
attacks can also transfer from one network model to another, which
raises concerns over their applicability, particularly when privacy
and security risks are involved. In this work, we conduct a study to
analyze the effect of network architecture and weight initialization
on the robustness of individual network models as well as on the
transferability of adversarial attacks. Experimental results
demonstrate that while weight initialization has no effect on the
robustness of a network model, it does have an effect on attack
transferability to a network model. Results also show that the
complexity of a network model, as indicated by the total number of
parameters and the number of multiply-accumulate (MAC) operations, is
not indicative of a network's robustness to attack or of attack
transferability, but accuracy can be: within the same architecture,
higher accuracy usually indicates a more robust network, but across
architectures there is no strong link between accuracy and
robustness.
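To make the notion of a "small perturbation that fools a model" concrete, the sketch below applies one-step FGSM (the fast gradient sign method of Goodfellow et al.) to a toy logistic-regression classifier in NumPy. This is an illustrative example, not the attack or the models evaluated in the paper; the weights, input, and `eps` value are hypothetical choices made for the demonstration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, eps):
    """One-step FGSM: shift x by eps in the sign of the loss gradient w.r.t. x."""
    p = sigmoid(w @ x + b)   # predicted probability of class 1
    grad_x = (p - y) * w     # gradient of the cross-entropy loss w.r.t. x
    return x + eps * np.sign(grad_x)

# Toy linear "model" and a correctly classified input (all values illustrative).
w = np.array([2.0, -1.0])
b = 0.0
x = np.array([1.0, 0.5])     # w @ x + b = 1.5 > 0, so predicted class 1
y = 1.0                      # true label

x_adv = fgsm_perturb(x, y, w, b, eps=0.9)
print(w @ x + b > 0)         # True: original input classified correctly
print(w @ x_adv + b > 0)     # False: the perturbed input flips the prediction
```

Each coordinate of `x_adv` differs from `x` by at most `eps`, yet the prediction flips; the same perturbation can then be evaluated against a second model to probe transferability.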

Published
2018-12-24
How to Cite
Ben Daya, I., Shafiee, M., Karg, M., Scharfenberger, C., & Wong, A. (2018). On Robustness of Deep Neural Networks: A Comprehensive Study on the Effect of Architecture and Weight Initialization to Susceptibility and Transferability of Adversarial Attacks. Journal of Computational Vision and Imaging Systems, 4(1), 3. https://doi.org/10.15353/jcvis.v4i1.329