Optimal parallel dictionary matching and compression

Martin Farach, S. Muthukrishnan

Research output: Contribution to conference › Paper

Abstract

Emerging applications in multimedia and the Human Genome Project require storage and searching of large databases of strings - a task for which parallelism seems the only hope. In this paper, we consider the parallelism in some of the fundamental problems in compressing strings and in matching large dictionaries of patterns against texts. We present the first work-optimal algorithms for these well-studied problems, including the classical dictionary matching problem, optimal compression with a static dictionary, and universal data compression with the dynamic dictionary of Lempel and Ziv. All our algorithms are randomized and of the Las Vegas type. Furthermore, they are fast, working in time logarithmic in the input size. Additionally, our algorithms seem suitable for a distributed implementation.
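For readers unfamiliar with the dynamic-dictionary compression model of Lempel and Ziv mentioned in the abstract, the sketch below shows a minimal sequential LZ78-style parse in Python. It only illustrates the compression model the paper addresses; the paper's contribution is a work-optimal parallel Las Vegas algorithm, which this sequential sketch does not reproduce.

```python
# Minimal sequential sketch of LZ78-style dynamic-dictionary parsing.
# Illustration of the compression model only; NOT the paper's parallel algorithm.

def lz78_parse(text: str):
    """Parse `text` into (phrase_index, next_char) pairs, growing the
    dictionary with each newly seen phrase."""
    dictionary = {"": 0}          # phrase -> index; index 0 is the empty phrase
    output = []
    phrase = ""
    for ch in text:
        if phrase + ch in dictionary:
            phrase += ch          # extend the longest known phrase
        else:
            output.append((dictionary[phrase], ch))
            dictionary[phrase + ch] = len(dictionary)
            phrase = ""
    if phrase:                    # flush a trailing phrase that matched fully
        output.append((dictionary[phrase], ""))
    return output

if __name__ == "__main__":
    print(lz78_parse("abababababa"))
    # [(0, 'a'), (0, 'b'), (1, 'b'), (3, 'a'), (2, 'a'), (5, '')]
```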

Original language: English (US)
Pages: 244-253
Number of pages: 10
State: Published - 1995
Externally published: Yes
Event: Proceedings of the 7th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA'95 - Santa Barbara, CA, USA
Duration: Jul 17, 1995 – Jul 19, 1995

Other

Other: Proceedings of the 7th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA'95
City: Santa Barbara, CA, USA
Period: 7/17/95 – 7/19/95

ASJC Scopus subject areas

  • Software
  • Safety, Risk, Reliability and Quality


Cite this

Farach, M., & Muthukrishnan, S. (1995). Optimal parallel dictionary matching and compression. 244-253. Paper presented at Proceedings of the 7th Annual ACM Symposium on Parallel Algorithms and Architectures, SPAA'95, Santa Barbara, CA, USA.