Universal Rules for Fooling Deep Neural Networks based Text Classification

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Recently, deep learning based natural language processing techniques have been used extensively to deal with spam mail, censorship evaluation in social networks, and related tasks. However, only a few works have evaluated the vulnerabilities of such deep neural networks. Here, we go beyond per-sample attacks to investigate, for the first time, universal rules, i.e., rules that are sample agnostic and can therefore turn any text sample into an adversarial one. Moreover, the universal rules do not use any information from the attacked method (no gradient information or training dataset information is used), making them black-box universal attacks. In other words, the universal rules are both sample and method agnostic. By proposing a coevolutionary optimization algorithm, we show that it is possible to create universal rules that automatically craft imperceptible adversarial samples (fewer than five perturbations, each close to a misspelling, are inserted into the text sample). A comparison with a random search algorithm further demonstrates the strength of the method. Thus, universal rules for fooling networks are shown here to exist. We hope the results of this work will inform the development of further sample and model agnostic attacks, as well as their defenses.
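The paper proposes a coevolutionary search for such rules; that algorithm is not reproduced here. As a loose, hypothetical illustration of the underlying idea (a fixed, sample-agnostic set of misspelling-like edits, judged purely by querying a classifier's predicted label), the Python sketch below uses invented rule names such as swap_first_chars_of_longest_word and a toy keyword classifier; none of these functions come from the paper.

# Hypothetical sketch, not the paper's implementation: a "universal rule" is modelled
# as a sample-agnostic text edit applied in a fixed order regardless of the input, and
# success is judged only from the classifier's predicted label (black-box setting).
from typing import Callable, List

Rule = Callable[[str], str]

def swap_first_chars_of_longest_word(text: str) -> str:
    """Swap the first two characters of the longest word (a misspelling-like edit)."""
    words = text.split()
    if not words:
        return text
    i = max(range(len(words)), key=lambda k: len(words[k]))
    w = words[i]
    if len(w) >= 2:
        words[i] = w[1] + w[0] + w[2:]
    return " ".join(words)

def duplicate_first_vowel(text: str) -> str:
    """Duplicate the first vowel of the first word ('good' -> 'goood')."""
    words = text.split()
    if words:
        w = words[0]
        for j, c in enumerate(w):
            if c.lower() in "aeiou":
                words[0] = w[:j + 1] + c + w[j + 1:]
                break
    return " ".join(words)

def apply_universal_rules(text: str, rules: List[Rule], budget: int = 5) -> str:
    """Apply the same fixed rule sequence to any input, using at most `budget` edits."""
    for rule in rules[:budget]:
        text = rule(text)
    return text

def is_adversarial(predict: Callable[[str], int], text: str, rules: List[Rule]) -> bool:
    """Black-box success test: only predicted labels are compared, no gradients used."""
    return predict(apply_universal_rules(text, rules)) != predict(text)

if __name__ == "__main__":
    # Toy stand-in classifier whose label hinges on one keyword, so a small
    # misspelling of that word flips its decision.
    toy_predict = lambda s: 1 if "excellent" in s.lower() else 0
    rules = [swap_first_chars_of_longest_word, duplicate_first_vowel]
    sample = "The movie was excellent and the acting was great"
    print(apply_universal_rules(sample, rules))
    print(is_adversarial(toy_predict, sample, rules))  # True for this toy example

In the paper, the rule set itself is what the coevolutionary algorithm optimizes; the sketch above only fixes two hand-written rules to make the sample-agnostic, label-only evaluation concrete.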

Original language: English
Title of host publication: 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 2221-2228
Number of pages: 8
ISBN (Electronic): 9781728121536
DOIs: https://doi.org/10.1109/CEC.2019.8790213
Publication status: Published - Jun 2019
Event: 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Wellington, New Zealand
Duration: Jun 10, 2019 - Jun 13, 2019

Publication series

Name: 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings

Conference

Conference: 2019 IEEE Congress on Evolutionary Computation, CEC 2019
Country: New Zealand
City: Wellington
Period: 6/10/19 - 6/13/19

Fingerprint

Text Classification
Neural Networks
Gradient methods
Information use
Attack
Processing
Spam
Random Search
Gradient Method
Black Box
Vulnerability
Justify
Natural Language
Social Networks
Search Algorithm
Deep neural networks
Optimization Algorithm
Perturbation
Evaluation
Deep learning

All Science Journal Classification (ASJC) codes

  • Computational Mathematics
  • Modelling and Simulation

Cite this

Li, D., Vargas, D. V., & Kouichi, S. (2019). Universal Rules for Fooling Deep Neural Networks based Text Classification. In 2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings (pp. 2221-2228). [8790213] (2019 IEEE Congress on Evolutionary Computation, CEC 2019 - Proceedings). Institute of Electrical and Electronics Engineers Inc. https://doi.org/10.1109/CEC.2019.8790213
