ESPN: Extremely sparse pruned networks

Minsu Cho, Ameya Joshi, Chinmay Hegde

    Research output: Chapter in Book/Report/Conference proceeding, Conference contribution


    Deep neural networks are often highly over-parameterized, prohibiting their use in compute-limited systems. However, a line of recent work has shown that the size of deep networks can be considerably reduced by identifying a subset of neuron indicators (or mask) that correspond to significant weights prior to training. We demonstrate that a simple iterative mask discovery method can achieve state-of-the-art compression of very deep networks. Our algorithm represents a hybrid approach between single-shot network pruning methods (such as SNIP) and Lottery-Ticket-type approaches. We validate our approach on several datasets and outperform existing pruning approaches in both test accuracy and compression ratio.
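    The iterative mask discovery described in the abstract can be illustrated with a minimal sketch. This is an assumption about the general technique (Lottery-Ticket-style iterative pruning of the smallest-magnitude weights toward a target sparsity), not the authors' actual algorithm; the function name and parameters are illustrative.

    ```python
    # Hedged sketch of iterative magnitude-based mask discovery.
    # Repeatedly prunes a fraction of the smallest-magnitude surviving
    # weights until the target sparsity is reached; the surviving mask
    # would then be applied before (re)training the network.

    def iterative_mask_discovery(weights, target_sparsity, prune_frac=0.2):
        """Return a 0/1 mask over `weights` with `target_sparsity` zeros."""
        mask = [1] * len(weights)
        while mask.count(0) / len(weights) < target_sparsity:
            # Indices of still-active weights, sorted by magnitude (ascending).
            active = sorted((i for i, m in enumerate(mask) if m),
                            key=lambda i: abs(weights[i]))
            # Prune a fraction of the survivors, without overshooting.
            n_prune = max(1, int(prune_frac * len(active)))
            n_prune = min(n_prune,
                          int(target_sparsity * len(weights)) - mask.count(0))
            if n_prune <= 0:
                break
            for i in active[:n_prune]:
                mask[i] = 0
        return mask

    # Toy example: prune half of eight weights over several rounds.
    weights = [0.9, -0.05, 0.3, -0.7, 0.01, 0.45, -0.2, 0.08]
    mask = iterative_mask_discovery(weights, target_sparsity=0.5)
    ```

    In this toy run the four smallest-magnitude weights are masked out over successive rounds, while the largest (e.g. 0.9 and -0.7) survive.
    
    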

    Original language: English (US)
    Title of host publication: 2021 IEEE Data Science and Learning Workshop, DSLW 2021
    Publisher: Institute of Electrical and Electronics Engineers Inc.
    ISBN (Electronic): 9781665428255
    State: Published - Jun 5 2021
    Event: 2021 IEEE Data Science and Learning Workshop, DSLW 2021 - Toronto, Canada
    Duration: Jun 5 2021 - Jun 6 2021

    Publication series

    Name: 2021 IEEE Data Science and Learning Workshop, DSLW 2021


    Conference: 2021 IEEE Data Science and Learning Workshop, DSLW 2021


    Keywords

    • Model compression
    • Neural network pruning
    • Sparsification

    ASJC Scopus subject areas

    • Artificial Intelligence
    • Information Systems
    • Education


