One-Bit Sigma-Delta Quantization with Exponential Accuracy

Research output: Contribution to journal › Article › peer-review


One-bit quantization is a method of representing band-limited signals by ±1 sequences that are computed from regularly spaced samples of these signals; as the sampling density λ → ∞, convolving these one-bit sequences with appropriately chosen filters produces increasingly close approximations of the original signals. This method is widely used for analog-to-digital and digital-to-analog conversion, because it is less expensive and simpler to implement than the more familiar critical sampling followed by fine-resolution quantization. However, unlike fine-resolution quantization, the accuracy of one-bit quantization is not well understood. A natural error lower bound that decreases like 2^(-λ) can easily be given using information-theoretic arguments. Yet, no one-bit quantization algorithm was known with an error decay estimate even close to exponential decay. In this paper we construct an infinite family of one-bit sigma-delta quantization schemes that achieves this goal. In particular, using this family, we prove that the error signal for π-band-limited signals is at most O(2^(-0.07λ)).
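To make the setup concrete, here is a minimal sketch of the classical first-order sigma-delta scheme that the abstract's "one-bit sequences from regularly spaced samples" refers to. This is an illustration of the basic mechanism only, not the paper's exponential-accuracy family: each oversampled value is quantized to ±1 while a running error state is carried forward, and a crude moving-average filter (standing in for the "appropriately chosen filters") recovers the signal to accuracy of order 1/λ. The test signal, the density `lam`, and the window length are illustrative choices.

```python
import math

def sigma_delta_1bit(samples):
    """First-order sigma-delta: quantize each sample to +/-1,
    carrying the accumulated quantization error forward.
    State update: u_n = u_{n-1} + x_n - q_n, with q_n = sign(u_{n-1} + x_n)."""
    bits = []
    u = 0.0  # internal state; stays bounded by 1 when |x| <= 1
    for x in samples:
        q = 1.0 if u + x >= 0 else -1.0  # the one-bit output
        u = u + x - q
        bits.append(q)
    return bits

# Regularly spaced samples of a pi-band-limited test signal, x(t) = 0.5*cos(t),
# taken at density lam (samples per unit time).
lam = 64
samples = [0.5 * math.cos(n / lam) for n in range(8 * lam)]
bits = sigma_delta_1bit(samples)

# Convolving with a length-lam moving average (a crude low-pass filter)
# approximates the signal; the error decays only like 1/lam here, versus
# the exponential rate the paper's schemes achieve.
window = lam
approx = [sum(bits[n:n + window]) / window for n in range(len(bits) - window)]
```

The boundedness of the state `u` is what makes the scheme work: the average of the output bits over a window of length N differs from the average of the input samples by at most 2/N, so denser sampling and longer filters give better approximations.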

Original language: English (US)
Pages (from-to): 1608-1630
Number of pages: 23
Journal: Communications on Pure and Applied Mathematics
Issue number: 11
State: Published - Nov 2003

ASJC Scopus subject areas

  • General Mathematics
  • Applied Mathematics
