DeepMutation: Mutation Testing of Deep Learning Systems

Lei Ma, Fuyuan Zhang, Jiyuan Sun, Minhui Xue, Bo Li, Felix Juefei-Xu, Chao Xie, Li Li, Yang Liu, Jianjun Zhao, Yadong Wang

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

18 Citations (Scopus)

Abstract

Deep learning (DL) defines a new data-driven programming paradigm in which the internal system logic is largely shaped by the training data. The standard way of evaluating DL models is to examine their performance on a test dataset. The quality of the test dataset is of great importance for gaining confidence in the trained models: with an inadequate test dataset, DL models that achieve high test accuracy may still lack generality and robustness. In traditional software testing, mutation testing is a well-established technique for evaluating the quality of test suites; it analyzes to what extent a test suite detects injected faults. However, due to fundamental differences between traditional software and deep learning-based software, traditional mutation testing techniques cannot be directly applied to DL systems. In this paper, we propose a mutation testing framework specialized for DL systems to measure the quality of test data. Sharing the spirit of mutation testing in traditional software, we first define a set of source-level mutation operators that inject faults into the sources of DL (i.e., the training data and the training program). We then design a set of model-level mutation operators that inject faults directly into DL models without a training process. The quality of test data can then be evaluated by analyzing to what extent the injected faults are detected. The usefulness of the proposed mutation testing techniques is demonstrated on two public datasets, MNIST and CIFAR-10, with three DL models.
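To illustrate the model-level idea in the abstract, the following is a minimal sketch, not the paper's implementation: mutants of a toy linear classifier are produced by perturbing its weights with Gaussian noise (in the spirit of a weight-fuzzing operator), a mutant counts as killed when some test input is classified differently than by the original model, and the mutation score is the fraction of mutants killed. The classifier, noise scale, and kill criterion here are simplified assumptions for illustration only.

```python
import random

def predict(w, b, x):
    """Toy linear classifier over 2-D inputs: returns class 0 or 1."""
    s = w[0] * x[0] + w[1] * x[1] + b
    return 1 if s > 0 else 0

def gaussian_fuzz(w, b, sigma, rng):
    """Model-level mutation: perturb each parameter with Gaussian noise."""
    mw = [wi + rng.gauss(0.0, sigma) for wi in w]
    mb = b + rng.gauss(0.0, sigma)
    return mw, mb

def mutation_score(w, b, tests, n_mutants=100, sigma=0.5, seed=0):
    """Fraction of mutants killed: a mutant is killed if any test input
    is classified differently by the mutant than by the original model."""
    rng = random.Random(seed)
    killed = 0
    for _ in range(n_mutants):
        mw, mb = gaussian_fuzz(w, b, sigma, rng)
        if any(predict(mw, mb, x) != predict(w, b, x) for x in tests):
            killed += 1
    return killed / n_mutants

if __name__ == "__main__":
    w, b = [1.0, -1.0], 0.0
    # Test inputs near the decision boundary kill far more mutants than
    # inputs deep inside one class region, so they earn a higher score.
    near = [(0.1, 0.0), (0.0, 0.1), (-0.1, 0.05)]
    far = [(5.0, -5.0), (-5.0, 5.0)]
    print(mutation_score(w, b, near))
    print(mutation_score(w, b, far))
```

The comparison at the end mirrors the paper's use case: test data that exercises the model's sensitive regions detects more injected faults, and therefore receives a higher quality (mutation) score.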

Original language: English
Title of host publication: Proceedings - 29th IEEE International Symposium on Software Reliability Engineering, ISSRE 2018
Editors: Sudipto Ghosh, Bojan Cukic, Robin Poston, Roberto Natella, Nuno Laranjeiro
Publisher: IEEE Computer Society
Pages: 100-111
Number of pages: 12
ISBN (Electronic): 9781538683217
DOI: 10.1109/ISSRE.2018.00021
Publication status: Published - Nov 16 2018
Event: 29th IEEE International Symposium on Software Reliability Engineering, ISSRE 2018 - Memphis, United States
Duration: Oct 15 2018 - Oct 18 2018

Publication series

Name: Proceedings - International Symposium on Software Reliability Engineering, ISSRE
Volume: 2018-October
ISSN (Print): 1071-9458



All Science Journal Classification (ASJC) codes

  • Software
  • Safety, Risk, Reliability and Quality

Cite this

Ma, L., Zhang, F., Sun, J., Xue, M., Li, B., Juefei-Xu, F., ... Wang, Y. (2018). DeepMutation: Mutation Testing of Deep Learning Systems. In S. Ghosh, B. Cukic, R. Poston, R. Natella, & N. Laranjeiro (Eds.), Proceedings - 29th IEEE International Symposium on Software Reliability Engineering, ISSRE 2018 (pp. 100-111). [8539073] (Proceedings - International Symposium on Software Reliability Engineering, ISSRE; Vol. 2018-October). IEEE Computer Society. https://doi.org/10.1109/ISSRE.2018.00021

