Towards Understanding the Space of Unrobust Features of Neural Networks

Liao Bingli, Takahiro Kanzaki, Danilo Vasconcellos Vargas

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Although convolutional neural networks have achieved tremendous success on a variety of computer vision tasks, it remains extremely challenging to build a neural network with unquestionable reliability. Previous work has demonstrated that deep neural networks can be efficiently fooled by human-imperceptible perturbations to the input, revealing their instability under interpolation. Like human beings, an ideally trained neural network should be constrained within the desired inference space and remain correct under both interpolation and extrapolation. In this paper, we develop a technique to verify correctness when convolutional neural networks extrapolate beyond the training data distribution by generating legitimate feature-broken images, and we show that the decision boundary of a convolutional neural network is not well formulated with respect to image features under extrapolation.
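The abstract does not specify how the feature-broken images are generated. As one plausible illustration (not the authors' actual method), a minimal sketch of a feature-breaking probe is random patch shuffling: it preserves an image's pixel statistics while destroying its object-level structure, so a classifier that still assigns the shuffled image to the original class with high confidence is relying on features outside the intended inference space. The function below, `shuffle_patches`, is a hypothetical helper for this purpose:

```python
import numpy as np

def shuffle_patches(image, patch_size, rng=None):
    """Destroy an image's legitimate spatial features by randomly
    permuting its non-overlapping square patches.

    The multiset of pixel values is preserved, but object-level
    structure is broken. Feeding the result to a CNN and checking
    whether its prediction stays confident is one way to probe how
    the decision boundary behaves under extrapolation.
    (Illustrative sketch only; not the paper's actual procedure.)
    """
    rng = np.random.default_rng(rng)
    h, w = image.shape[:2]
    ph, pw = h // patch_size, w // patch_size  # patches per axis
    # Extract non-overlapping patches in row-major order.
    patches = [
        image[i * patch_size:(i + 1) * patch_size,
              j * patch_size:(j + 1) * patch_size]
        for i in range(ph) for j in range(pw)
    ]
    # Reassemble the image with the patches in a random order.
    order = rng.permutation(len(patches))
    out = image.copy()
    for k, idx in enumerate(order):
        i, j = divmod(k, pw)
        out[i * patch_size:(i + 1) * patch_size,
            j * patch_size:(j + 1) * patch_size] = patches[idx]
    return out
```

A probe built on this would compare the model's softmax confidence on `image` versus `shuffle_patches(image, p)` for decreasing patch sizes `p`: if confidence barely drops as structure is destroyed, the decision boundary is not grounded in legitimate image features.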

Original language: English
Title of host publication: 2021 5th IEEE International Conference on Cybernetics, CYBCONF 2021
Publisher: Institute of Electrical and Electronics Engineers Inc.
Pages: 91-94
Number of pages: 4
ISBN (electronic): 9781665403207
DOI
Publication status: Published - 8 Jun 2021
Event: 5th IEEE International Conference on Cybernetics, CYBCONF 2021 - Virtual, Sendai, Japan
Duration: 8 Jun 2021 - 10 Jun 2021

Publication series

Name: 2021 5th IEEE International Conference on Cybernetics, CYBCONF 2021

Conference

Conference: 5th IEEE International Conference on Cybernetics, CYBCONF 2021
Country/Territory: Japan
City: Virtual, Sendai
Period: 6/8/21 - 6/10/21

All Science Journal Classification (ASJC) codes

  • Artificial Intelligence
  • Computer Science Applications
  • Computer Vision and Pattern Recognition

