PAC-Bayes learning bounds for sample-dependent priors

Pranjal Awasthi, Satyen Kale, Stefani Karp, Mehryar Mohri

Research output: Contribution to journal › Conference article › peer-review

Abstract

We present a series of new PAC-Bayes learning guarantees for randomized algorithms with sample-dependent priors. Our most general bounds make no assumption on the priors and are given in terms of certain covering numbers under the infinite-Rényi divergence and the ℓ1 distance. We show how to use these general bounds to derive learning bounds in the setting where the sample-dependent priors obey an infinite-Rényi divergence or ℓ1-distance sensitivity condition. We also provide a flexible framework for computing PAC-Bayes bounds, under certain stability assumptions on the sample-dependent priors, and show how to use this framework to give more refined bounds when the priors satisfy an infinite-Rényi divergence sensitivity condition.
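For readers less familiar with the setting, the sketch below recalls the classical PAC-Bayes bound for a data-independent prior and the two quantities the abstract refers to. This is background only, not a reproduction of the paper's results: the specific constants in the bound are one standard (McAllester-style) form, and the notation m, δ, P_S, ε is illustrative.

```latex
\documentclass{article}
\usepackage{amsmath,amssymb}
\begin{document}

% Classical PAC-Bayes bound for a fixed (data-independent) prior $P$:
% with probability at least $1-\delta$ over an i.i.d. sample $S$ of
% size $m$, simultaneously for all posteriors $Q$,
\[
  \mathbb{E}_{h \sim Q}\big[L(h)\big]
  \;\le\;
  \mathbb{E}_{h \sim Q}\big[\widehat{L}_S(h)\big]
  + \sqrt{\frac{\mathrm{KL}(Q \,\|\, P) + \ln\frac{2\sqrt{m}}{\delta}}{2m}}.
\]

% The infinite-Rényi divergence used to compare distributions
% (the order-$\infty$ limit of the R\'enyi divergence family):
\[
  D_\infty(Q \,\|\, P)
  \;=\;
  \log \operatorname*{ess\,sup}_{h \sim P} \frac{dQ}{dP}(h).
\]

% Illustration of a $D_\infty$ sensitivity condition on a
% sample-dependent prior map $S \mapsto P_S$ (analogous to
% differential privacy): for samples $S, S'$ differing in one point,
\[
  D_\infty\big(P_S \,\|\, P_{S'}\big) \;\le\; \varepsilon .
\]

\end{document}
```

With a sample-dependent prior P_S, the standard proof of the fixed-prior bound above breaks down, which is what motivates the covering-number and sensitivity-based arguments described in the abstract.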

Original language: English (US)
Journal: Advances in Neural Information Processing Systems
Volume: 2020-December
State: Published - 2020
Event: 34th Conference on Neural Information Processing Systems, NeurIPS 2020 - Virtual, Online
Duration: Dec 6, 2020 - Dec 12, 2020

ASJC Scopus subject areas

  • Computer Networks and Communications
  • Information Systems
  • Signal Processing
