TY - GEN
T1 - An empirical study on robustness of DNNs with out-of-distribution awareness
AU - Zhou, Lingjun
AU - Yu, Bing
AU - Berend, David
AU - Xie, Xiaofei
AU - Li, Xiaohong
AU - Zhao, Jianjun
AU - Liu, Xusheng
N1 - Funding Information:
This work is sponsored by Analytical Method Research of Loop and Recursion (Grant No. 61872262) and Key Technology Research of Mobile Security under the Condition of Ubiquitous Uncertain Access (Grant No. 61572349).
Publisher Copyright:
© 2020 IEEE.
PY - 2020/12
Y1 - 2020/12
N2 - State-of-the-art deep neural networks (DNNs) achieve impressive performance on inputs that are similar to the training data. However, they fail to make reasonable decisions on inputs that differ substantially from the training data, i.e., out-of-distribution (OOD) examples. Although many techniques have been proposed in recent years to detect OOD examples, there is still a lack of systematic study on the effectiveness and robustness of these techniques, as well as on the performance of OOD-aware DNN models. In this paper, we conduct a comprehensive study of current OOD detection techniques and investigate the differences between OOD-unaware and OOD-aware DNNs in terms of model performance, robustness, and uncertainty. We first compare the effectiveness of existing detection techniques and identify the best one. Then, we perform evasion attacks to evaluate the robustness of these techniques. Furthermore, we compare the accuracy and robustness of OOD-unaware and OOD-aware DNNs. Finally, we study the uncertainty of different models on various kinds of data. Empirical results show that OOD-aware detection modules perform better and are more robust against random noise and evasion attacks. OOD-awareness seldom degrades the accuracy of DNN models on training/test datasets; rather, it makes models more robust against adversarial attacks and noisy inputs. Our study calls attention to the development of OOD-aware DNN models and the necessity of taking data distribution into account when robust and reliable DNN models are desired.
AB - State-of-the-art deep neural networks (DNNs) achieve impressive performance on inputs that are similar to the training data. However, they fail to make reasonable decisions on inputs that differ substantially from the training data, i.e., out-of-distribution (OOD) examples. Although many techniques have been proposed in recent years to detect OOD examples, there is still a lack of systematic study on the effectiveness and robustness of these techniques, as well as on the performance of OOD-aware DNN models. In this paper, we conduct a comprehensive study of current OOD detection techniques and investigate the differences between OOD-unaware and OOD-aware DNNs in terms of model performance, robustness, and uncertainty. We first compare the effectiveness of existing detection techniques and identify the best one. Then, we perform evasion attacks to evaluate the robustness of these techniques. Furthermore, we compare the accuracy and robustness of OOD-unaware and OOD-aware DNNs. Finally, we study the uncertainty of different models on various kinds of data. Empirical results show that OOD-aware detection modules perform better and are more robust against random noise and evasion attacks. OOD-awareness seldom degrades the accuracy of DNN models on training/test datasets; rather, it makes models more robust against adversarial attacks and noisy inputs. Our study calls attention to the development of OOD-aware DNN models and the necessity of taking data distribution into account when robust and reliable DNN models are desired.
UR - http://www.scopus.com/inward/record.url?scp=85102370680&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85102370680&partnerID=8YFLogxK
U2 - 10.1109/APSEC51365.2020.00035
DO - 10.1109/APSEC51365.2020.00035
M3 - Conference contribution
AN - SCOPUS:85102370680
T3 - Proceedings - Asia-Pacific Software Engineering Conference, APSEC
SP - 266
EP - 275
BT - Proceedings - 2020 27th Asia-Pacific Software Engineering Conference, APSEC 2020
PB - IEEE Computer Society
T2 - 27th Asia-Pacific Software Engineering Conference, APSEC 2020
Y2 - 1 December 2020 through 4 December 2020
ER -