TY - GEN
T1 - Empirical evaluation on robustness of deep convolutional neural networks activation functions against adversarial perturbation
AU - Su, Jiawei
AU - Vargas, Danilo Vasconcellos
AU - Sakurai, Kouichi
N1 - Funding Information:
This research was partially supported by the Collaboration Hubs for International Program (CHIRP) of SICORP, Japan Science and Technology Agency (JST), and a Kyushu University Education and Research Center for Mathematical and Data Science Grant. We also thank Dr. Sheng He of the University of Groningen for his suggestions and advice on this work, which was motivated by his comment on a presentation given by the third author of this paper at the ICT Open workshop 2018 in the Netherlands.
Publisher Copyright:
© 2018 IEEE.
PY - 2018/12/26
Y1 - 2018/12/26
N2 - Recent research has shown that deep convolutional neural networks (DCNNs) are vulnerable to several different types of attacks, while the reasons for such vulnerability are still under investigation. For instance, an adversarial perturbation can make a slight change to a natural image that causes the target DCNN to produce a wrong recognition, yet explanations of why DCNNs are sensitive to such small modifications differ from one study to another. In this paper, we evaluate the robustness of two commonly used DCNN activation functions, namely sigmoid and ReLU, against the recently proposed low-dimensional one-pixel attack. We show that the choice of activation function can be an important factor influencing the robustness of a DCNN. The results show that, compared with sigmoid, the ReLU non-linearity is more vulnerable, allowing the low-dimensional one-pixel attack to achieve a much higher success rate and attack confidence. These results give insights for designing new activation functions that enhance the security of DCNNs.
AB - Recent research has shown that deep convolutional neural networks (DCNNs) are vulnerable to several different types of attacks, while the reasons for such vulnerability are still under investigation. For instance, an adversarial perturbation can make a slight change to a natural image that causes the target DCNN to produce a wrong recognition, yet explanations of why DCNNs are sensitive to such small modifications differ from one study to another. In this paper, we evaluate the robustness of two commonly used DCNN activation functions, namely sigmoid and ReLU, against the recently proposed low-dimensional one-pixel attack. We show that the choice of activation function can be an important factor influencing the robustness of a DCNN. The results show that, compared with sigmoid, the ReLU non-linearity is more vulnerable, allowing the low-dimensional one-pixel attack to achieve a much higher success rate and attack confidence. These results give insights for designing new activation functions that enhance the security of DCNNs.
UR - http://www.scopus.com/inward/record.url?scp=85061449814&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85061449814&partnerID=8YFLogxK
U2 - 10.1109/CANDARW.2018.00049
DO - 10.1109/CANDARW.2018.00049
M3 - Conference contribution
AN - SCOPUS:85061449814
T3 - Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018
SP - 223
EP - 227
BT - Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 6th International Symposium on Computing and Networking Workshops, CANDARW 2018
Y2 - 27 November 2018 through 30 November 2018
ER -