TY - JOUR
T1 - Watch out! Motion is blurring the vision of your deep neural networks
AU - Guo, Qing
AU - Juefei-Xu, Felix
AU - Xie, Xiaofei
AU - Ma, Lei
AU - Wang, Jian
AU - Yu, Bing
AU - Feng, Wei
AU - Liu, Yang
N1 - Funding Information:
We appreciate the anonymous reviewers for their valuable feedback. This research was supported in part by Singapore National Cybersecurity R&D Program No. NRF2018NCR-NCR005-0001, National Satellite of Excellence in Trustworthy Software System No. NRF2018NCR-NSOE003-0001, and NRF Investigatorship No. NRFI06-2020-0022. It was also supported by JSPS KAKENHI Grant Nos. 20H04168, 19K24348, and 19H04086, and JST-Mirai Program Grant No. JPMJMI18BB, Japan, and the National Natural Science Foundation of China (NSFC) under Grant 61671325 and Grant 62072334. We gratefully acknowledge the support of NVIDIA AI Tech Center (NVAITC) to our research.
Publisher Copyright:
© 2020 Neural Information Processing Systems Foundation. All rights reserved.
PY - 2020
Y1 - 2020
N2 - State-of-the-art deep neural networks (DNNs) are vulnerable to adversarial examples with additive random-noise-like perturbations. While such examples are rarely found in the physical world, the image blurring effect caused by object motion commonly occurs in practice, making its study highly important, especially for widely adopted real-time image processing tasks (e.g., object detection, tracking). In this paper, we take the first step toward comprehensively investigating the potential hazards that motion-induced blur poses to DNNs. We propose a novel adversarial attack method that generates visually natural motion-blurred adversarial examples, named the motion-based adversarial blur attack (ABBA). To this end, we first formulate a kernel-prediction-based attack in which an input image is convolved with kernels in a pixel-wise way, and misclassification is achieved by tuning the kernel weights. To generate visually more natural and plausible examples, we further propose saliency-regularized adversarial kernel prediction, where the salient region serves as a moving object and the predicted kernel is regularized to produce natural visual effects. The attack is further enhanced by adaptively tuning the translations of the object and the background. A comprehensive evaluation on the NeurIPS'17 adversarial competition dataset demonstrates the effectiveness of ABBA across various kernel sizes, translations, and regions. An in-depth study further confirms that our method penetrates state-of-the-art GAN-based deblurring mechanisms more effectively than other blurring methods. We release the code at https://github.com/tsingqguo/ABBA.
UR - http://www.scopus.com/inward/record.url?scp=85108426847&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85108426847&partnerID=8YFLogxK
M3 - Conference article
AN - SCOPUS:85108426847
SN - 1049-5258
VL - 2020-December
JO - Advances in Neural Information Processing Systems
JF - Advances in Neural Information Processing Systems
T2 - 34th Conference on Neural Information Processing Systems, NeurIPS 2020
Y2 - 6 December 2020 through 12 December 2020
ER -