BlinkDB: Queries with bounded errors and bounded response times on very large data

Sameer Agarwal, Barzan Mozafari, Aurojit Panda, Henry Milner, Samuel Madden, Ion Stoica

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

In this paper, we present BlinkDB, a massively parallel, approximate query engine for running interactive SQL queries on large volumes of data. BlinkDB allows users to trade off query accuracy for response time, enabling interactive queries over massive data by running queries on data samples and presenting results annotated with meaningful error bars. To achieve this, BlinkDB uses two key ideas: (1) an adaptive optimization framework that builds and maintains a set of multi-dimensional stratified samples from original data over time, and (2) a dynamic sample selection strategy that selects an appropriately sized sample based on a query's accuracy or response time requirements. We evaluate BlinkDB against the well-known TPC-H benchmark and a real-world analytic workload derived from Conviva Inc., a company that manages video distribution over the Internet. Our experiments on a 100-node cluster show that BlinkDB can answer queries on up to 17 TB of data in less than 2 seconds (over 200× faster than Hive), within an error of 2-10%.
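The abstract's core idea, answering an aggregate query from a sample and attaching an error bar, can be sketched as follows. This is a minimal illustration of sampling-based approximation with a normal-approximation confidence interval, not BlinkDB's actual implementation (which builds stratified, multi-dimensional samples offline and picks among them at query time); the function name and parameters are illustrative assumptions.

```python
import random
import statistics

def approximate_avg(data, sample_fraction, z=1.96):
    """Estimate the average of `data` from a uniform random sample,
    returning (estimate, error_bar) where error_bar is a ~95%
    confidence half-width. Illustrative only; BlinkDB instead queries
    pre-built stratified samples sized to meet accuracy/time bounds."""
    n = max(2, int(len(data) * sample_fraction))
    sample = random.sample(data, n)
    mean = statistics.fmean(sample)
    # Standard error of the mean: the error bar shrinks as sqrt(n) grows,
    # which is why a larger sample buys accuracy at the cost of time.
    se = statistics.stdev(sample) / n ** 0.5
    return mean, z * se

random.seed(0)
data = [random.gauss(100, 15) for _ in range(100_000)]  # synthetic "table"
est, err = approximate_avg(data, 0.01)  # scan only 1% of the rows
true_mean = statistics.fmean(data)
```

Running the full aggregate over 1% of the rows yields an estimate whose reported error bar almost surely covers the true mean, mirroring the paper's accuracy-versus-response-time trade-off.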

Original language: English (US)
Title of host publication: Proceedings of the 8th ACM European Conference on Computer Systems, EuroSys 2013
Pages: 29-42
Number of pages: 14
DOIs
State: Published - 2013
Event: 8th ACM European Conference on Computer Systems, EuroSys 2013 - Prague, Czech Republic
Duration: Apr 15 2013 - Apr 17 2013

Publication series

Name: Proceedings of the 8th ACM European Conference on Computer Systems, EuroSys 2013

Other

Other: 8th ACM European Conference on Computer Systems, EuroSys 2013
Country/Territory: Czech Republic
City: Prague
Period: 4/15/13 - 4/17/13

ASJC Scopus subject areas

  • Hardware and Architecture
  • Electrical and Electronic Engineering
