MUBench: A benchmark for API-misuse detectors

Sven Amann, Sarah Nadi, Hoan A. Nguyen, Tien N. Nguyen, Mira Mezini

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Over the last few years, researchers have proposed a multitude of automated bug-detection approaches that mine a class of bugs we call API misuses. Evaluations on a variety of software products show both the omnipresence of such misuses and the ability of these approaches to detect them. This work presents MUBench, a dataset of 89 API misuses that we collected from 33 real-world projects and a survey. Using this dataset, we empirically analyze the prevalence of API misuses compared to other types of bugs, finding that they are rare, but almost always cause crashes. Furthermore, we discuss how the dataset can be used to benchmark and compare API-misuse detectors.
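To make the bug class concrete: an API misuse is a usage of an API that violates its (often implicit) usage constraints, for example a missing state check before a call. The sketch below is a hypothetical illustration of the kind of misuse such detectors target, not an example taken from the MUBench dataset, together with the corresponding correct usage.

```java
import java.util.Iterator;
import java.util.List;

public class IteratorMisuseExample {

    // Misuse: calls next() without first checking hasNext().
    // If the list is empty, this crashes with a NoSuchElementException.
    static String firstElementMisuse(List<String> items) {
        Iterator<String> it = items.iterator();
        return it.next(); // violates the Iterator usage protocol
    }

    // Correct usage: guard next() with hasNext().
    static String firstElementCorrect(List<String> items) {
        Iterator<String> it = items.iterator();
        if (it.hasNext()) {
            return it.next();
        }
        return null; // empty list: no first element
    }
}
```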

Original language: English (US)
Title of host publication: Proceedings - 13th Working Conference on Mining Software Repositories, MSR 2016
Publisher: Association for Computing Machinery, Inc
Pages: 464-467
Number of pages: 4
ISBN (Electronic): 9781450341868
DOIs
State: Published - May 14 2016
Event: 13th Working Conference on Mining Software Repositories, MSR 2016 - Austin, United States
Duration: May 14 2016 - May 15 2016

Publication series

Name: Proceedings - 13th Working Conference on Mining Software Repositories, MSR 2016

Conference

Conference: 13th Working Conference on Mining Software Repositories, MSR 2016
Country/Territory: United States
City: Austin
Period: 5/14/16 - 5/15/16

Keywords

  • API-misuse detection
  • Benchmark
  • Bug detection

ASJC Scopus subject areas

  • Software
  • Information Systems and Management
