On the usage and development of deep learning compilers: an empirical study on TVM

Xiongfei Wu, Jinqiu Yang, Lei Ma, Yinxing Xue, Jianjun Zhao

Research output: Contribution to journal › Article › peer-review

Abstract

Recent advances in deploying deep learning (DL) models have inspired the development of DL compilers in both industry and academia, such as Facebook Glow and TVM. Given the importance of DL compilers, we seek to answer two questions to ease the adoption and development of TVM: What challenges do users face when using DL compilers, and what common challenges do developers face when developing them? This paper presents the first empirical study identifying the challenges in both the usage and the development of a DL compiler. We choose TVM as the representative DL compiler and manually inspect 347 sampled posts from its official discussion forum. We identify a taxonomy of 15 categories of challenges in using TVM and seven common topics in developing TVM. Furthermore, by manually inspecting 44 bug reports, we characterize TVM bugs into four types of impact to obtain an initial understanding of TVM defects, and we propose five implications for developers and researchers to improve development practices and build more robust DL compilers.

Original language: English
Article number: 172
Journal: Empirical Software Engineering
Volume: 27
Issue number: 7
DOIs
Publication status: Published - Dec 2022

All Science Journal Classification (ASJC) codes

  • Software
