How are deep learning models similar? An empirical study on clone analysis of deep learning software

Xiongfei Wu, Liangyu Qin, Bing Yu, Xiaofei Xie, Lei Ma, Yinxing Xue, Yang Liu, Jianjun Zhao

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Deep learning (DL) has been successfully applied to many cutting-edge applications, e.g., image processing, speech recognition, and natural language processing. As more and more DL software is made open source, publicly available, and organized in model repositories and stores (e.g., Model Zoo, ModelDepot), there is a growing need to understand the relationships among these DL models for maintenance and evolution tasks. Although clone analysis has been extensively studied for traditional software, it has not yet been investigated for DL software. Since DL software adopts a data-driven development paradigm, it is still unclear whether and to what extent the clone analysis techniques of traditional software can be adapted to DL software. In this paper, we take the first step toward clone analysis of DL software at three different levels, i.e., the source code level, the model structural level, and the input/output (I/O)-semantic level, which would be key to DL software management, maintenance, and evolution. We investigate the similarity between DL models from a clone analysis perspective, selecting several tools and metrics to conduct clone analysis at each of the three levels. Our study on two popular datasets (i.e., MNIST and CIFAR-10) and eight DL models from five architectural families (i.e., LeNet, ResNet, DenseNet, AlexNet, and VGG) shows that: 1) the three levels of similarity analysis are generally adequate to find clones between DL models, ranging from structural to semantic; 2) the different measures used for clone analysis at each level yield similar results; and 3) clone analysis at any single level may not render a complete picture of the similarity of DL models. Our findings open up several research opportunities worth further exploration toward better understanding and more effective clone analysis of DL software.
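To make the structural and I/O-semantic levels concrete, the sketch below is a minimal, illustrative example and not the paper's actual tooling or metrics: it compares two small Keras models using a Jaccard similarity over layer-type sets (a crude structural clone signal) and the fraction of inputs on which the two models agree in their predicted class (a simple I/O-semantic signal). The model definitions, metric choices, and random stand-in inputs are all assumptions made for illustration.

```python
# Illustrative sketch of structural and I/O-semantic similarity between two DL models.
# Assumptions: tiny MNIST-style CNNs, Jaccard over layer types, prediction agreement.
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models


def small_cnn(num_filters):
    """Build a tiny MNIST-style CNN; two width variants act as the models to compare."""
    return models.Sequential([
        layers.Input(shape=(28, 28, 1)),
        layers.Conv2D(num_filters, 3, activation="relu"),
        layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(10, activation="softmax"),
    ])


def structural_similarity(m1, m2):
    """Jaccard similarity over the sets of layer type names."""
    t1 = {type(l).__name__ for l in m1.layers}
    t2 = {type(l).__name__ for l in m2.layers}
    return len(t1 & t2) / len(t1 | t2)


def io_semantic_similarity(m1, m2, inputs):
    """Fraction of inputs on which the two models predict the same class."""
    p1 = np.argmax(m1.predict(inputs, verbose=0), axis=1)
    p2 = np.argmax(m2.predict(inputs, verbose=0), axis=1)
    return float(np.mean(p1 == p2))


if __name__ == "__main__":
    model_a = small_cnn(8)
    model_b = small_cnn(16)  # same architectural family, different width
    x = np.random.rand(64, 28, 28, 1).astype("float32")  # stand-in for MNIST inputs
    print("structural similarity:", structural_similarity(model_a, model_b))
    print("I/O-semantic similarity:", io_semantic_similarity(model_a, model_b, x))
```

In a realistic setting one would, as the paper's setup suggests, train the models on the same dataset (e.g., MNIST) and use real test inputs rather than random tensors, so that prediction agreement reflects learned behavior rather than untrained noise.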

Original language: English
Title of host publication: Proceedings - 2020 IEEE/ACM 28th International Conference on Program Comprehension, ICPC 2020
Publisher: IEEE Computer Society
Pages: 172-183
Number of pages: 12
ISBN (Electronic): 9781450379588
Publication status: Published - Jul 13 2020
Event: 28th IEEE/ACM International Conference on Program Comprehension, ICPC 2020, collocated with the 42nd International Conference on Software Engineering, ICSE 2020 - Seoul, Korea, Republic of
Duration: Jul 13 2020 - Jul 15 2020

Publication series

Name: IEEE International Conference on Program Comprehension

Conference

Conference: 28th IEEE/ACM International Conference on Program Comprehension, ICPC 2020, collocated with the 42nd International Conference on Software Engineering, ICSE 2020
Country/Territory: Korea, Republic of
City: Seoul
Period: 7/13/20 - 7/15/20

All Science Journal Classification (ASJC) codes

  • Hardware and Architecture
  • Software
