Evaluating Synthetic Bugs

Joshua Bundt, Andrew Fasano, Brendan Dolan-Gavitt, William Robertson, Tim Leek

    Research output: Chapter in Book/Report/Conference proceeding - Conference contribution

    Abstract

    Fuzz testing has been used to find bugs in programs since the 1990s, but despite decades of dedicated research, there is still no consensus on which fuzzing techniques work best. One reason for this is the paucity of ground truth: bugs in real programs with known root causes and triggering inputs are difficult to collect at a meaningful scale. Bug injection technologies that add synthetic bugs into real programs seem to offer a solution, but the differences in finding these synthetic bugs versus organic bugs have not previously been explored at a large scale. Using over 80 years of CPU time, we ran eight fuzzers across 20 targets from the Rode0day bug-finding competition and the LAVA-M corpus. Experiments were standardized with respect to compute resources and metrics gathered. These experiments show differences in fuzzer performance as well as the impact of various configuration options. For instance, it is clear that integrating symbolic execution with mutational fuzzing is very effective and that using dictionaries improves performance. Other conclusions are less clear-cut; for example, no one fuzzer beat all others on all tests. It is noteworthy that no fuzzer found any organic bugs (i.e., bugs previously reported in CVEs), despite 50 such bugs being available for discovery in the fuzzing corpus. A close analysis of results revealed a possible explanation: a dramatic difference between where synthetic and organic bugs live with respect to the "main path" discovered by fuzzers. We find that recent updates to bug injection systems have made synthetic bugs more difficult to discover, but they are still significantly easier to find than organic bugs in our target programs. Finally, this study identifies flaws in bug injection techniques and suggests a number of axes along which synthetic bugs should be improved.

    Original language: English (US)
    Title of host publication: ASIA CCS 2021 - Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security
    Publisher: Association for Computing Machinery, Inc
    Pages: 716-730
    Number of pages: 15
    ISBN (Electronic): 9781450382878
    DOIs
    State: Published - May 24 2021
    Event: 16th ACM Asia Conference on Computer and Communications Security, ASIA CCS 2021 - Virtual, Online, Hong Kong
    Duration: Jun 7 2021 - Jun 11 2021

    Publication series

    Name: ASIA CCS 2021 - Proceedings of the 2021 ACM Asia Conference on Computer and Communications Security

    Conference

    Conference: 16th ACM Asia Conference on Computer and Communications Security, ASIA CCS 2021
    Country/Territory: Hong Kong
    City: Virtual, Online
    Period: 6/7/21 - 6/11/21

    Keywords

    • evaluation
    • fuzzing
    • synthetic bugs

    ASJC Scopus subject areas

    • Computer Networks and Communications
    • Computer Science Applications
    • Information Systems
    • Software
