Guarding Against Adversarial Attacks using Biologically Inspired Contour Integration
Abstract
Artificial vision systems are susceptible to adversarial attacks: small, intentional changes to images can cause these systems to misclassify with high confidence. The brain has many mechanisms for strengthening weak or confusing inputs. One such mechanism, contour integration, can separate objects from irrelevant background. We show that incorporating contour integration within artificial vision systems can increase their robustness to adversarial attacks.