Empirical evaluation on robustness of deep convolutional neural networks activation functions against adversarial perturbation

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recent research has shown that deep convolutional neural networks (DCNNs) are vulnerable to several different types of attacks, while the reasons for this vulnerability are still under investigation. For instance, an adversarial perturbation can make a slight change to a natural image that causes the target DCNN to misclassify it, yet explanations of why DCNNs are sensitive to such small modifications differ from one study to another. In this paper, we evaluate the robustness of two commonly used DCNN activation functions, the sigmoid and ReLU, against the recently proposed low-dimensional one-pixel attack. We show that the choice of activation function can be an important factor influencing the robustness of a DCNN. The results show that, compared with the sigmoid, the ReLU non-linearity is more vulnerable, allowing the low-dimensional one-pixel attack to achieve a much higher success rate and attack confidence. These results give insights into designing new activation functions to enhance the security of DCNNs.
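The paper's own code is not reproduced in this record. As a minimal sketch of the kind of experiment the abstract describes, the snippet below mounts a one-pixel attack, driven by differential evolution (the search method the one-pixel attack is built on), against two otherwise identical small CNNs whose only difference is the activation function. The architecture, training budget, SciPy-based DE search, and all hyperparameters are illustrative assumptions, not the authors' setup.

```python
# Sketch: compare one-pixel-attack success on ReLU vs sigmoid CNNs (CIFAR-10).
# Assumptions throughout: small Keras model, short training, SciPy DE search.
import numpy as np
from scipy.optimize import differential_evolution
import tensorflow as tf


def build_cnn(activation: str) -> tf.keras.Model:
    """Small CIFAR-10 classifier; only the activation function varies."""
    model = tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation=activation,
                               input_shape=(32, 32, 3)),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation=activation),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(10, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model


def apply_pixel(img: np.ndarray, z: np.ndarray) -> np.ndarray:
    """Overwrite one pixel; z = (x, y, r, g, b), colors normalized to [0, 1]."""
    out = img.copy()
    x, y = int(z[0]), int(z[1])
    out[y, x] = z[2:5]
    return out


def one_pixel_attack(model, img, true_label, maxiter=15, popsize=10):
    """Differential evolution searches for the single pixel change that most
    reduces the model's confidence in the true class (the one-pixel attack's
    core idea; DE settings here are kept tiny to stay fast)."""
    bounds = [(0, 31), (0, 31), (0, 1), (0, 1), (0, 1)]

    def fitness(z):  # lower = less confidence in the true class
        probs = model.predict(apply_pixel(img, z)[None], verbose=0)[0]
        return probs[true_label]

    result = differential_evolution(fitness, bounds, maxiter=maxiter,
                                    popsize=popsize, polish=False, seed=0)
    adv = apply_pixel(img, result.x)
    pred = int(np.argmax(model.predict(adv[None], verbose=0)[0]))
    return pred != true_label  # True if the attack flipped the label


if __name__ == "__main__":
    (x_tr, y_tr), (x_te, y_te) = tf.keras.datasets.cifar10.load_data()
    x_tr, x_te = x_tr / 255.0, x_te / 255.0
    for act in ("relu", "sigmoid"):
        model = build_cnn(act)
        model.fit(x_tr, y_tr, epochs=3, batch_size=128, verbose=0)
        # Tiny sample purely for illustration; a real evaluation would attack
        # only images the clean model classifies correctly.
        wins = sum(one_pixel_attack(model, x_te[i], int(y_te[i][0]))
                   for i in range(20))
        print(f"{act}: {wins}/20 one-pixel attacks succeeded")
```

The success count here simply tallies label flips on a handful of test images; the paper's reported metrics (attack success rate and confidence) are measured under its own protocol, which this sketch does not replicate.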

Original language: English
Title of host publication: Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 223-227
Number of pages: 5
ISBN (Electronic): 9781538691847
DOI: 10.1109/CANDARW.2018.00049
Publication status: Published - Dec 26, 2018
Event: 6th International Symposium on Computing and Networking Workshops, CANDARW 2018 - Takayama, Japan
Duration: Nov 27, 2018 – Nov 30, 2018

Publication series

Name: Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018

Conference

Conference: 6th International Symposium on Computing and Networking Workshops, CANDARW 2018
Country: Japan
City: Takayama
Period: 11/27/18 – 11/30/18

All Science Journal Classification (ASJC) codes

  • Computer Networks and Communications
  • Hardware and Architecture
  • Statistics, Probability and Uncertainty
  • Computer Science Applications

Cite this

Su, J., Vargas, D. V., & Sakurai, K. (2018). Empirical evaluation on robustness of deep convolutional neural networks activation functions against adversarial perturbation. In Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018 (pp. 223-227). [8590903] (Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CANDARW.2018.00049

@inproceedings{662b96205a784556b0f75f7286d6fc60,
title = "Empirical evaluation on robustness of deep convolutional neural networks activation functions against adversarial perturbation",
abstract = "Recent research has shown that deep convolutional neural networks (DCNNs) are vulnerable to several different types of attacks, while the reasons for this vulnerability are still under investigation. For instance, an adversarial perturbation can make a slight change to a natural image that causes the target DCNN to misclassify it, yet explanations of why DCNNs are sensitive to such small modifications differ from one study to another. In this paper, we evaluate the robustness of two commonly used DCNN activation functions, the sigmoid and ReLU, against the recently proposed low-dimensional one-pixel attack. We show that the choice of activation function can be an important factor influencing the robustness of a DCNN. The results show that, compared with the sigmoid, the ReLU non-linearity is more vulnerable, allowing the low-dimensional one-pixel attack to achieve a much higher success rate and attack confidence. These results give insights into designing new activation functions to enhance the security of DCNNs.",
author = "Jiawei Su and Vargas, {Danilo Vasconcellos} and Kouichi Sakurai",
year = "2018",
month = "12",
day = "26",
doi = "10.1109/CANDARW.2018.00049",
language = "English",
series = "Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018",
publisher = "Institute of Electrical and Electronics Engineers Inc.",
pages = "223--227",
booktitle = "Proceedings - 2018 6th International Symposium on Computing and Networking Workshops, CANDARW 2018",
address = "United States",
}
