Generalization bounds for non-stationary mixing processes

Vitaly Kuznetsov, Mehryar Mohri

Research output: Contribution to journal › Article › peer-review

Abstract

This paper presents the first generalization bounds for time series prediction with a non-stationary mixing stochastic process. We prove Rademacher complexity learning bounds for both average-path generalization with non-stationary β-mixing processes and path-dependent generalization with non-stationary ϕ-mixing processes. Our guarantees are expressed in terms of β- or ϕ-mixing coefficients and a natural measure of discrepancy between training and target distributions. They admit as special cases previous Rademacher complexity bounds for non-i.i.d. stationary distributions, for independent but not identically distributed random variables, or for the i.i.d. case. We show that, using a new sub-sample selection technique we introduce, our bounds can be tightened under the natural assumption of asymptotically stationary stochastic processes. We also prove that fast learning rates can be achieved by extending existing local Rademacher complexity analyses to the non-i.i.d. setting. We conclude the paper by providing generalization bounds for learning with unbounded losses and non-i.i.d. data.
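
For readers unfamiliar with the assumptions in the abstract, the mixing coefficients admit standard definitions. The display below is a sketch in the common textbook form for a process (Z_t)_{t∈ℤ}, not necessarily the exact quantities used in the paper:

\[
\beta(a) \;=\; \sup_{t}\; \mathbb{E}\Big[\sup_{A \in \sigma(Z_{t+a}^{\infty})} \big|\Pr(A \mid Z_{-\infty}^{t}) - \Pr(A)\big|\Big],
\qquad
\phi(a) \;=\; \sup_{t}\; \sup_{\substack{A \in \sigma(Z_{t+a}^{\infty}) \\ B \in \sigma(Z_{-\infty}^{t}),\ \Pr(B)>0}} \big|\Pr(A \mid B) - \Pr(A)\big|.
\]

A process is β-mixing (resp. ϕ-mixing) when β(a) → 0 (resp. ϕ(a) → 0) as a → ∞; neither definition requires stationarity, which is the non-stationary regime the paper targets. Schematically, bounds of the kind described combine an empirical error term, a discrepancy term between training and target distributions, a Rademacher complexity term, and a confidence term degraded by the relevant mixing coefficient through a blocking argument; the exact statements are in the paper.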

Original language: English (US)
Pages (from-to): 93-117
Number of pages: 25
Journal: Machine Learning
Volume: 106
Issue number: 1
DOIs
State: Published - Jan 1 2017

Keywords

  • Asymptotic stationarity
  • Fast rates
  • Generalization bounds
  • Local Rademacher complexity
  • Markov processes
  • Mixing
  • Non-stationary processes
  • Time series
  • Unbounded loss

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
