Deep neural network loses attention to adversarial images

Research output: Contribution to journal › Conference article › peer-review

Abstract

Adversarial algorithms have been shown to be effective against neural networks on a variety of tasks. In image classification, some adversarial algorithms perturb all the pixels in the image minimally, whereas others perturb only a few pixels strongly. However, very little is known about why adversarial samples so diverse from each other exist. Recently, [Vargas and Su, 2020] showed that the existence of these adversarial samples might be due to conflicting saliency within the neural network. We test this hypothesis of conflicting saliency by analysing the Saliency Maps (SM) and Gradient-weighted Class Activation Maps (Grad-CAM) of original samples and of a few different types of adversarial samples. We also analyse how different adversarial samples distort the attention of the neural network compared with the original samples. We show that, in the case of the Pixel Attack, perturbed pixels either call the network's attention to themselves or divert attention away from them. The Projected Gradient Descent Attack, in contrast, perturbs pixels so that intermediate layers inside the neural network lose attention for the correct class. We also show that the two attacks affect the saliency map and activation maps differently, shedding light on why some defences that succeed against one attack remain vulnerable to other attacks. We hope that this analysis will improve understanding of the existence and effect of adversarial samples and enable the community to develop more robust neural networks.
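
As a concrete illustration of the kind of analysis described in the abstract, the sketch below (not the authors' code) computes a vanilla saliency map and a Grad-CAM heatmap for an image before and after an L-infinity Projected Gradient Descent perturbation, so the clean and adversarial attention patterns can be compared. The model (resnet18), target layer, PGD hyperparameters, and placeholder input are assumptions made for illustration only; the few-pixel (Pixel Attack) case from the abstract is not reproduced here.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed setup: any ImageNet classifier works; resnet18 and its last
# convolutional stage are illustrative choices, not the paper's.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()
target_layer = model.layer4

def saliency_map(x, label):
    # Vanilla saliency: |d(class score) / d(input pixel)|, maxed over colour channels.
    x = x.clone().requires_grad_(True)
    score = model(x)[0, label]
    grad = torch.autograd.grad(score, x)[0]
    return grad.abs().max(dim=1)[0]

def grad_cam(x, label):
    # Grad-CAM: target-layer activations weighted by their spatially pooled gradients.
    acts = []
    hook = target_layer.register_forward_hook(lambda m, i, o: acts.append(o))
    score = model(x)[0, label]
    hook.remove()
    act = acts[0]
    grads = torch.autograd.grad(score, act)[0]
    weights = grads.mean(dim=(2, 3), keepdim=True)   # global-average-pooled gradients
    cam = F.relu((weights * act).sum(dim=1))         # weighted sum of channel maps
    return cam / (cam.max() + 1e-8)

def pgd_attack(x, label, eps=8 / 255, alpha=2 / 255, steps=10):
    # L-infinity PGD with common default hyperparameters (not the paper's settings).
    x_adv = x.clone()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), torch.tensor([label]))
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = (x_adv + alpha * grad.sign()).detach()
        x_adv = torch.max(torch.min(x_adv, x + eps), x - eps).clamp(0, 1)
    return x_adv

# Placeholder input in [0, 1]; a real experiment would use a preprocessed dataset image.
x = torch.rand(1, 3, 224, 224)
label = model(x).argmax(1).item()
x_adv = pgd_attack(x, label)
cam_clean, cam_adv = grad_cam(x, label), grad_cam(x_adv, label)
sal_clean, sal_adv = saliency_map(x, label), saliency_map(x_adv, label)
# The clean/adversarial heatmaps can then be compared, e.g. by correlation or by
# overlap of their most-activated regions, to quantify how the attack shifts attention.
```
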

Original language: English
Journal: CEUR Workshop Proceedings
Volume: 2916
Publication status: Published - 2021
Event: 2021 Workshop on Artificial Intelligence Safety, AISafety 2021 - Virtual, Online
Duration: Aug 19 2021 → Aug 20 2021

All Science Journal Classification (ASJC) codes

  • Computer Science (all)
