DeepGauge: Multi-granularity testing criteria for deep learning systems

Lei Ma, Felix Juefei-Xu, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Chunyang Chen, Ting Su, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

31 Citations (Scopus)

Abstract

Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network from a set of training data. DL has seen wide adoption in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities that can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by its accuracy on test data. Given the limited availability of high-quality test data, good accuracy on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems, which have clear and controllable logic and functionality, a DL system's lack of interpretability makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems that aims to render a multi-faceted portrayal of the testbed. We demonstrate an in-depth evaluation of the proposed testing criteria on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.
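
To make the idea of a coverage-style testing criterion concrete, the sketch below computes a simple threshold-based neuron coverage over a batch of test inputs. It is a minimal illustration rather than DeepGauge's own definitions, which are multi-granular (e.g., k-multisection neuron coverage, neuron boundary coverage, and layer-level top-k neuron coverage); the function and parameter names (neuron_coverage, activations, threshold) are assumed here for illustration.

import numpy as np

# Illustrative sketch of a coverage-style test adequacy metric for a DL model.
# A neuron counts as "covered" if at least one test input drives its scaled
# activation above a threshold. This mirrors the simple neuron coverage from
# prior work that DeepGauge generalizes; it is not DeepGauge's exact criteria.
def neuron_coverage(activations, threshold=0.5):
    # activations: array of shape (num_test_inputs, num_neurons) holding each
    # neuron's output for every test input, scaled to [0, 1].
    covered = (activations > threshold).any(axis=0)
    return covered.sum() / covered.size

# Usage example: 100 test inputs, 32 neurons with random activations in [0, 1].
acts = np.random.rand(100, 32)
print(f"Neuron coverage: {neuron_coverage(acts):.2%}")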

Original language: English
Title of host publication: ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering
Editors: Christian Kastner, Marianne Huchard, Gordon Fraser
Publisher: Association for Computing Machinery, Inc
Pages: 120-131
Number of pages: 12
ISBN (Electronic): 9781450359375
DOIs: https://doi.org/10.1145/3238147.3238202
Publication status: Published - Sep 3 2018
Event: 33rd IEEE/ACM International Conference on Automated Software Engineering, ASE 2018 - Montpellier, France
Duration: Sep 3 2018 - Sep 7 2018

Publication series

Name: ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering

Other

Other: 33rd IEEE/ACM International Conference on Automated Software Engineering, ASE 2018
Country: France
City: Montpellier
Period: 9/3/18 - 9/7/18

Fingerprint

Learning systems
Testing
Deep learning
Testbeds
Neurons
Systems analysis

All Science Journal Classification (ASJC) codes

  • Computational Theory and Mathematics
  • Human-Computer Interaction
  • Software

Cite this

Ma, L., Juefei-Xu, F., Zhang, F., Sun, J., Xue, M., Li, B., ... Wang, Y. (2018). DeepGauge: Multi-granularity testing criteria for deep learning systems. In C. Kastner, M. Huchard, & G. Fraser (Eds.), ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering (pp. 120-131). (ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering). Association for Computing Machinery, Inc. https://doi.org/10.1145/3238147.3238202

DeepGauge: Multi-granularity testing criteria for deep learning systems. / Ma, Lei; Juefei-Xu, Felix; Zhang, Fuyuan; Sun, Jiyuan; Xue, Minhui; Li, Bo; Chen, Chunyang; Su, Ting; Li, Li; Liu, Yang; Zhao, Jianjun; Wang, Yadong.

ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ed. / Christian Kastner; Marianne Huchard; Gordon Fraser. Association for Computing Machinery, Inc, 2018. p. 120-131 (ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering).

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Ma, L, Juefei-Xu, F, Zhang, F, Sun, J, Xue, M, Li, B, Chen, C, Su, T, Li, L, Liu, Y, Zhao, J & Wang, Y 2018, DeepGauge: Multi-granularity testing criteria for deep learning systems. in C Kastner, M Huchard & G Fraser (eds), ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering, Association for Computing Machinery, Inc, pp. 120-131, 33rd IEEE/ACM International Conference on Automated Software Engineering, ASE 2018, Montpellier, France, 9/3/18. https://doi.org/10.1145/3238147.3238202
Ma L, Juefei-Xu F, Zhang F, Sun J, Xue M, Li B et al. DeepGauge: Multi-granularity testing criteria for deep learning systems. In Kastner C, Huchard M, Fraser G, editors, ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. Association for Computing Machinery, Inc. 2018. p. 120-131. (ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering). https://doi.org/10.1145/3238147.3238202
Ma, Lei ; Juefei-Xu, Felix ; Zhang, Fuyuan ; Sun, Jiyuan ; Xue, Minhui ; Li, Bo ; Chen, Chunyang ; Su, Ting ; Li, Li ; Liu, Yang ; Zhao, Jianjun ; Wang, Yadong. / DeepGauge: Multi-granularity testing criteria for deep learning systems. ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering. editor / Christian Kastner ; Marianne Huchard ; Gordon Fraser. Association for Computing Machinery, Inc, 2018. pp. 120-131 (ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering).
@inproceedings{9e860088e3f4428ca082293ece6c8f6d,
title = "DeepGauge: Multi-granularity testing criteria for deep learning systems",
abstract = "Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neuron network through a set of training data. We have seen wide adoption of DL in many safety-critical scenarios. However, a plethora of studies have shown that the state-of-the-art DL systems suffer from various vulnerabilities which can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by the accuracy of test data. Considering the limitation of accessible high quality test data, good accuracy performance on test data can hardly provide confidence to the testing adequacy and generality of DL systems. Unlike traditional software systems that have clear and controllable logic and functionality, the lack of interpretability in a DL system makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems, which aims at rendering a multi-faceted portrayal of the testbed. The in-depth evaluation of our proposed testing criteria is demonstrated on two well-known datasets, five DL systems, and with four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.",
author = "Lei Ma and Felix Juefei-Xu and Fuyuan Zhang and Jiyuan Sun and Minhui Xue and Bo Li and Chunyang Chen and Ting Su and Li Li and Yang Liu and Jianjun Zhao and Yadong Wang",
year = "2018",
month = "9",
day = "3",
doi = "10.1145/3238147.3238202",
language = "English",
series = "ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering",
publisher = "Association for Computing Machinery, Inc",
pages = "120--131",
editor = "Christian Kastner and Marianne Huchard and Gordon Fraser",
booktitle = "ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering",

}

TY - GEN

T1 - DeepGauge

T2 - Multi-granularity testing criteria for deep learning systems

AU - Ma, Lei

AU - Juefei-Xu, Felix

AU - Zhang, Fuyuan

AU - Sun, Jiyuan

AU - Xue, Minhui

AU - Li, Bo

AU - Chen, Chunyang

AU - Su, Ting

AU - Li, Li

AU - Liu, Yang

AU - Zhao, Jianjun

AU - Wang, Yadong

PY - 2018/9/3

Y1 - 2018/9/3

N2 - Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network from a set of training data. DL has seen wide adoption in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities that can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by its accuracy on test data. Given the limited availability of high-quality test data, good accuracy on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems, which have clear and controllable logic and functionality, a DL system's lack of interpretability makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems that aims to render a multi-faceted portrayal of the testbed. We demonstrate an in-depth evaluation of the proposed testing criteria on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.

AB - Deep learning (DL) defines a new data-driven programming paradigm that constructs the internal system logic of a crafted neural network from a set of training data. DL has seen wide adoption in many safety-critical scenarios. However, a plethora of studies have shown that state-of-the-art DL systems suffer from various vulnerabilities that can lead to severe consequences when applied to real-world applications. Currently, the testing adequacy of a DL system is usually measured by its accuracy on test data. Given the limited availability of high-quality test data, good accuracy on test data can hardly provide confidence in the testing adequacy and generality of DL systems. Unlike traditional software systems, which have clear and controllable logic and functionality, a DL system's lack of interpretability makes system analysis and defect detection difficult, which could potentially hinder its real-world deployment. In this paper, we propose DeepGauge, a set of multi-granularity testing criteria for DL systems that aims to render a multi-faceted portrayal of the testbed. We demonstrate an in-depth evaluation of the proposed testing criteria on two well-known datasets, five DL systems, and four state-of-the-art adversarial attack techniques against DL. The potential usefulness of DeepGauge sheds light on the construction of more generic and robust DL systems.

UR - http://www.scopus.com/inward/record.url?scp=85056490436&partnerID=8YFLogxK

UR - http://www.scopus.com/inward/citedby.url?scp=85056490436&partnerID=8YFLogxK

U2 - 10.1145/3238147.3238202

DO - 10.1145/3238147.3238202

M3 - Conference contribution

AN - SCOPUS:85056490436

T3 - ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering

SP - 120

EP - 131

BT - ASE 2018 - Proceedings of the 33rd ACM/IEEE International Conference on Automated Software Engineering

A2 - Kastner, Christian

A2 - Huchard, Marianne

A2 - Fraser, Gordon

PB - Association for Computing Machinery, Inc

ER -