Towards evolving robust neural architectures to defend from adversarial attacks

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

1 citation (Scopus)

Abstract

Neural networks are known to misclassify a class of subtly modified images known as adversarial samples. Numerous defences against these adversarial samples have been proposed recently; however, none has consistently improved the robustness of neural networks. Here, we propose using adversarial samples in the function evaluation to search for robust neural architectures that can resist such attacks. Experiments on existing neural architecture search algorithms from the literature reveal that, although accurate, they are not able to find robust architectures. An essential cause for this lies in their confined search space. By creating a novel neural architecture search, we were able to evolve an architecture that is intrinsically accurate on adversarial samples. Thus, the results here demonstrate that more robust architectures exist and open up a new range of possibilities for the development and exploration of neural networks using neural architecture search.
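As a rough illustration of the idea described in the abstract, the sketch below shows a hypothetical fitness function that scores a candidate architecture by its accuracy on adversarial samples, which an evolutionary search loop could then use to rank candidates. The attack choice (FGSM), the perturbation budget `eps`, and the function names are illustrative assumptions, not the authors' actual implementation.

```python
# Hypothetical sketch: score a candidate architecture by how often it still
# classifies adversarial samples correctly, so an evolutionary NAS loop can
# select for robustness rather than clean accuracy alone.
# (Assumed attack and names; not the paper's code.)

import torch
import torch.nn.functional as F


def fgsm_attack(model, x, y, eps=0.03):
    """Generate FGSM adversarial samples (one common attack, assumed here)."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Perturb each pixel in the direction that increases the loss.
    return (x_adv + eps * x_adv.grad.sign()).clamp(0.0, 1.0).detach()


def adversarial_fitness(model, loader, device="cpu", eps=0.03):
    """Fraction of adversarial samples still classified correctly."""
    model.eval().to(device)
    correct, total = 0, 0
    for x, y in loader:
        x, y = x.to(device), y.to(device)
        x_adv = fgsm_attack(model, x, y, eps)
        with torch.no_grad():
            correct += (model(x_adv).argmax(dim=1) == y).sum().item()
        total += y.numel()
    return correct / total
```

In an evolutionary NAS loop, this score would replace (or be combined with) clean validation accuracy when ranking candidate architectures, steering the search toward intrinsically robust networks.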

Original language: English
Title of host publication: GECCO 2020 Companion - Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion
Publisher: Association for Computing Machinery, Inc
Pages: 135-136
Number of pages: 2
ISBN (electronic): 9781450371278
DOI
Publication status: Published - Jul 8 2020
Event: 2020 Genetic and Evolutionary Computation Conference, GECCO 2020 - Cancun, Mexico
Duration: Jul 8 2020 → Jul 12 2020

Publication series

Name: GECCO 2020 Companion - Proceedings of the 2020 Genetic and Evolutionary Computation Conference Companion

Conference

Conference: 2020 Genetic and Evolutionary Computation Conference, GECCO 2020
Country/Territory: Mexico
City: Cancun
Period: 7/8/20 → 7/12/20

All Science Journal Classification (ASJC) codes

  • Computational Mathematics

