Detecting regression bugs in software evolution, analyzing side-channels in programs and evaluating robustness in deep neural networks (DNNs) can all be seen as instances of differential software analysis, where the goal is to generate diverging executions of program paths. Two executions are said to be diverging if the observable program behavior differs, e.g., in terms of program output, execution time, or (DNN) classification. The key challenge of differential software analysis is to simultaneously reason about multiple program paths, often across program variants.
This paper presents HyDiff, the first hybrid approach for differential software analysis. HyDiff integrates and extends two very successful testing techniques: feedback-directed greybox fuzzing for efficient program testing and shadow symbolic execution for systematic program exploration. HyDiff extends greybox fuzzing with divergence-driven feedback based on novel cost metrics that take into account the control-flow graph of the program. Furthermore, HyDiff extends shadow symbolic execution by applying four-way forking in a systematic exploration while retaining the ability to incorporate concrete inputs in the analysis. HyDiff applies divergence-revealing heuristics based on resource consumption and control-flow information to efficiently guide the symbolic exploration, which enables its use beyond regression testing applications. We introduce differential metrics such as output, decision, and cost difference, as well as patch distance, to assist the fuzzing and symbolic execution components in maximizing the execution divergence.
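To make the differential metrics concrete, the following is a minimal sketch of how output, decision, and cost differences between two executions (e.g., old and new program version) could be computed. All class and method names are illustrative assumptions for this sketch, not HyDiff's actual API; HyDiff derives such signals from instrumented runs.

```java
// Illustrative divergence metrics over two recorded executions.
// Names and representations are hypothetical, not taken from HyDiff.
public class DivergenceMetrics {

    // Output difference: do the two executions differ in observable output?
    static boolean outputDiff(String outOld, String outNew) {
        return !outOld.equals(outNew);
    }

    // Decision difference: number of branch decisions on which the
    // two recorded branch traces disagree (extra decisions count too).
    static int decisionDiff(boolean[] branchesOld, boolean[] branchesNew) {
        int common = Math.min(branchesOld.length, branchesNew.length);
        int diff = Math.abs(branchesOld.length - branchesNew.length);
        for (int i = 0; i < common; i++) {
            if (branchesOld[i] != branchesNew[i]) diff++;
        }
        return diff;
    }

    // Cost difference: resource consumption gap, e.g. executed
    // bytecode-instruction counts as a side-channel/cost proxy.
    static long costDiff(long costOld, long costNew) {
        return Math.abs(costOld - costNew);
    }

    public static void main(String[] args) {
        boolean[] oldTrace = {true, false, true};
        boolean[] newTrace = {true, true, true, false};
        System.out.println(outputDiff("42", "43"));            // true
        System.out.println(decisionDiff(oldTrace, newTrace));  // 2
        System.out.println(costDiff(100, 140));                // 40
    }
}
```

A fuzzer can use such metrics as feedback: an input is kept for further mutation when it increases any of these divergence scores between the two program versions.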
We implemented our approach on top of the fuzzer AFL and the symbolic execution framework Symbolic PathFinder. We illustrate HyDiff on regression and side-channel analysis for Java bytecode programs, and further show how to use HyDiff for robustness analysis of neural networks.
Tue 7 Jul (times shown in UTC)
|07:00 - 07:12|
|07:12 - 07:24|
Yannic Noller (Humboldt-Universität zu Berlin), Corina S. Pasareanu (Carnegie Mellon University Silicon Valley, NASA Ames Research Center), Marcel Böhme (Monash University), Youcheng Sun (Queen's University Belfast), Hoang Lam Nguyen (Humboldt-Universität zu Berlin), Lars Grunske (Humboldt-Universität zu Berlin). Pre-print
|07:24 - 07:36|
Towards Characterizing Adversarial Defects of Deep Learning Software from the Lens of Uncertainty (Technical)
Xiyue Zhang (Peking University), Xiaofei Xie (Nanyang Technological University), Lei Ma (Kyushu University), Xiaoning Du (Nanyang Technological University), Qiang Hu (Kyushu University, Japan), Yang Liu (Nanyang Technological University, Singapore), Jianjun Zhao (Kyushu University), Meng Sun (Peking University). Pre-print
|07:36 - 07:48|
One Size Does Not Fit All: A Grounded Theory and Online Survey Study of Developer Preferences for Security Warning Types (Technical)
|07:48 - 07:54|
New Ideas and Emerging Results
Gian Luca Scoccia (University of L'Aquila), Matteo Maria Fiore (University of L'Aquila), Patrizio Pelliccione (University of L'Aquila and Chalmers | University of Gothenburg), Marco Autili (University of L'Aquila, Italy), Paola Inverardi (University of L'Aquila), Alejandro Russo (Chalmers University of Technology, Sweden)
|07:54 - 08:00|
New Ideas and Emerging Results
Koen Yskout (imec-DistriNet, KU Leuven), Thomas Heyman (Toreon), Dimitri Van Landuyt (Katholieke Universiteit Leuven), Laurens Sion (imec-DistriNet, KU Leuven), Kim Wuyts (imec-DistriNet, KU Leuven), Wouter Joosen (Katholieke Universiteit Leuven). Pre-print