An Empirical Study on Program Failures of Deep Learning Jobs
Deep learning has achieved significant success in many application areas. To train and test models more efficiently, enterprise developers submit and run their deep learning programs on a shared, multi-tenant platform. However, some programs fail after a long execution time due to code/script defects, which reduces development productivity and wastes expensive resources such as GPUs, storage, and network I/O.
This paper presents the first comprehensive empirical study on program failures of deep learning jobs. We collect 4,960 real failures from a deep learning platform at Microsoft, manually examine their failure messages, and classify them into 20 categories. In addition, we identify the common root causes and bug-fix solutions for a sample of 400 failures. To better understand current testing and debugging practices for deep learning, we also conduct developer interviews. Our major findings include: (1) 48.0% of the failures occur in the interaction with the platform rather than in the execution of code logic, mostly due to discrepancies between local and platform execution environments; (2) deep learning specific failures (13.5%) are mainly caused by inappropriate model parameters/structures and misunderstanding of framework APIs; (3) current debugging practices are often inefficient for fault localization, and developers need more deep learning specific tools. Based on our findings, we further suggest possible research topics and tooling support that could facilitate future deep learning development.
Tue 7 Jul (displayed time zone: UTC, Coordinated Universal Time)
08:05 - 09:05
DISSECTOR: Input Validation for Deep Learning Applications by Crossing-layer Dissection (Technical)
White-box Fairness Testing through Adversarial Sampling (Technical)
Peixin Zhang (Zhejiang University); Jingyi Wang (National University of Singapore, Singapore); Jun Sun (Singapore Management University); Guoliang Dong (Computer College of Zhejiang University); Xinyu Wang (Zhejiang University); Xingen Wang (Zhejiang University); Jin Song Dong (National University of Singapore); Dai Ting (Huawei Corporation)
FeatureNET: Diversity-driven Generation of Deep Learning Models (Demo)
EvalDNN: A Toolbox for Evaluating Deep Neural Network Models (Demo)
Yongqiang Tian (The Hong Kong University of Science and Technology); Zhihua Zeng (Zhejiang University); Ming Wen (Huazhong University of Science and Technology, China); Yepang Liu (Southern University of Science and Technology); Tzu-yang Kuo (The Hong Kong University of Science and Technology); Shing-Chi Cheung (Department of Computer Science and Engineering, The Hong Kong University of Science and Technology)
Taxonomy of Real Faults in Deep Learning Systems (Technical)
Nargiz Humbatova (Università della Svizzera italiana); Gunel Jahangirova (Università della Svizzera italiana); Gabriele Bavota (Università della Svizzera italiana); Vincenzo Riccio (Università della Svizzera italiana); Andrea Stocco (Università della Svizzera italiana); Paolo Tonella (Università della Svizzera italiana)
An Empirical Study on Program Failures of Deep Learning Jobs (Technical)
Ru Zhang (Microsoft Research); Wencong Xiao (Alibaba); Hongyu Zhang (University of Newcastle, Australia); Yu Liu (Microsoft Research); Haoxiang Lin (Microsoft Research); Mao Yang (Microsoft Research)