Bounding cache miss costs of multithreaded computations under general schedulers

Richard Cole, Vijaya Ramachandran

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

We analyze the caching overhead incurred by a class of multithreaded algorithms when scheduled by an arbitrary scheduler. We obtain bounds that match or improve upon the well-known O(Q + S · (M/B)) caching cost for the randomized work-stealing (RWS) scheduler, where S is the number of steals, Q is the sequential caching cost, and M and B are the cache size and block (or cache line) size, respectively.
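As a reading aid, a minimal LaTeX rendering of the RWS bound quoted above, with one plugged-in example; the numeric cache parameters (8 MiB cache, 64-byte lines) are illustrative assumptions, not values taken from the paper.

% Sketch of the RWS caching bound cited in the abstract; notation as defined there.
% Q : sequential caching cost      S : number of steals
% M : cache size                   B : block (cache line) size
\[
  \text{cache misses under RWS} \;=\; O\!\left( Q + S \cdot \frac{M}{B} \right)
\]
% Illustrative (assumed) parameters: M = 8\,\mathrm{MiB}, B = 64\,\mathrm{B}, so
% M/B = 2^{23}/2^{6} = 131072 extra misses are charged per steal in the worst case.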

Original language: English (US)
Title of host publication: SPAA 2017 - Proceedings of the 29th ACM Symposium on Parallelism in Algorithms and Architectures
Publisher: Association for Computing Machinery
Pages: 351-362
Number of pages: 12
ISBN (Electronic): 9781450345934
DOIs
State: Published - Jul 24 2017
Event: 29th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2017 - Washington, United States
Duration: Jul 24 2017 - Jul 26 2017

Publication series

Name: Annual ACM Symposium on Parallelism in Algorithms and Architectures
Volume: Part F129316

Other

Other: 29th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2017
Country/Territory: United States
City: Washington
Period: 7/24/17 - 7/26/17

Keywords

  • Cache oblivious
  • Fork-Join algorithms
  • Schedulers

ASJC Scopus subject areas

  • Software
  • Theoretical Computer Science
  • Hardware and Architecture
