Coding techniques for handling failures in large disk arrays

L. Hellerstein, G. A. Gibson, R. M. Karp, R. H. Katz, D. A. Patterson

    Research output: Contribution to journal › Article › peer-review

    Abstract

    A crucial issue in the design of very large disk arrays is the protection of data against catastrophic disk failures. Although today single disks are highly reliable, when a disk array consists of 100 or 1000 disks, the probability that at least one disk will fail within a day or a week is high. In this paper we address the problem of designing erasure-correcting binary linear codes that protect against the loss of data caused by disk failures in large disk arrays. We describe how such codes can be used to encode data in disk arrays, and give a simple method for data reconstruction. We discuss important reliability and performance constraints of these codes, and show how these constraints relate to properties of the parity check matrices of the codes. In so doing, we transform code design problems into combinatorial problems. Using this combinatorial framework, we present codes and prove they are optimal with respect to various reliability and performance constraints.
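
    To make the parity-check view of these erasure codes concrete, the sketch below shows the simplest special case the paper's codes generalize: a single-parity (RAID-4-style) scheme over GF(2), where the parity block is the bytewise XOR of the data blocks and one erased block is rebuilt by XOR-ing the survivors. This is an illustration only, not the paper's construction; the codes studied in the paper tolerate multiple simultaneous failures, and all function names here are hypothetical.

        from functools import reduce

        def compute_parity(blocks: list[bytes]) -> bytes:
            """Bytewise XOR of equally sized blocks (the parity block of a single-parity code)."""
            return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

        def reconstruct_one(survivors: list[bytes]) -> bytes:
            """Rebuild a single erased block: XOR of all surviving blocks (data plus parity)."""
            return compute_parity(survivors)

        if __name__ == "__main__":
            data = [b"\x01\x02", b"\x10\x20", b"\xff\x00"]   # contents of three data disks
            parity = compute_parity(data)                    # stored on the parity disk
            lost = data[1]                                   # simulate failure of data disk 1
            survivors = [data[0], data[2], parity]
            assert reconstruct_one(survivors) == lost        # erased disk recovered

    In the paper's terms this roughly corresponds to a binary linear code whose parity check matrix is a single all-ones row; the codes analyzed in the paper use richer parity check matrices to survive several disk failures while constraining update and reconstruction costs.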

    Original language: English (US)
    Pages (from-to): 182-208
    Number of pages: 27
    Journal: Algorithmica
    Volume: 12
    Issue number: 2-3
    State: Published - September 1994

    Keywords

    • Availability
    • Error-correcting codes
    • Input/output architecture
    • RAID
    • Redundant disk arrays
    • Reliability

    ASJC Scopus subject areas

    • General Computer Science
    • Computer Science Applications
    • Applied Mathematics
