Time series prediction and online learning

Vitaly Kuznetsov, Mehryar Mohri

Research output: Contribution to journal › Conference article › peer-review


We present a series of theoretical and algorithmic results for time series prediction, leveraging recent advances in the statistical learning analysis of this problem and in on-line learning. We prove the first generalization bounds for a hypothesis derived by online-to-batch conversion of the sequence of hypotheses output by an online algorithm, in the general setting of a non-stationary non-mixing stochastic process. Our learning guarantees hold for adapted sequences of hypotheses, for both convex and non-convex losses. We further give generalization bounds for sequences of hypotheses that may not be adapted but that admit a stability property. Our learning bounds are given in terms of a discrepancy measure, which we show can be accurately estimated from data under a mild assumption. Our theory enables us to devise a principled solution to the notoriously difficult problem of model selection in the time series scenario. It also helps us devise new ensemble methods with favorable theoretical guarantees for forecasting non-stationary time series.
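The online-to-batch conversion mentioned in the abstract can be illustrated with a minimal sketch: run an online algorithm over the series, keep the sequence of intermediate hypotheses it outputs, and combine them into a single batch hypothesis. The sketch below uses online gradient descent on a simple AR(p) model and a uniform average of the hypotheses; the function name, model, and learning rate are illustrative assumptions, not the authors' discrepancy-weighted algorithm.

```python
import numpy as np

def online_to_batch_ar(series, p=2, lr=0.01):
    """Generic online-to-batch sketch (illustrative, not the paper's method):
    run online gradient descent on an AR(p) model over `series`, collect the
    sequence of online hypotheses, and return their uniform average."""
    w = np.zeros(p)      # current online hypothesis (AR coefficients)
    hypotheses = []      # sequence of hypotheses output by the online algorithm
    for t in range(p, len(series)):
        x = series[t - p:t]                    # last p observations
        y_hat = w @ x                          # one-step-ahead prediction
        grad = 2.0 * (y_hat - series[t]) * x   # gradient of squared loss
        w = w - lr * grad                      # online update
        hypotheses.append(w.copy())
    return np.mean(hypotheses, axis=0)         # averaged (batch) hypothesis

# Usage: fit on a synthetic AR(1)-like series, then predict the next value.
rng = np.random.default_rng(0)
series = np.zeros(200)
for t in range(1, 200):
    series[t] = 0.8 * series[t - 1] + 0.1 * rng.standard_normal()
w_batch = online_to_batch_ar(series)
next_pred = w_batch @ series[-2:]
```

The uniform average is the simplest conversion; the paper's contribution is to analyze such conversions under non-stationary, non-mixing processes, where a discrepancy-based weighting of the hypotheses replaces the uniform one.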

Original language: English (US)
Pages (from-to): 1190-1213
Number of pages: 24
Journal: Journal of Machine Learning Research
Issue number: June
State: Published - Jun 6 2016
Event: 29th Conference on Learning Theory, COLT 2016 - New York, United States
Duration: Jun 23 2016 - Jun 26 2016


Keywords

  • Ensembles
  • Generalization bounds
  • Model selection
  • Non-mixing
  • Non-stationary
  • On-line learning
  • Regret minimization
  • Stability
  • Time series prediction
  • Validation

ASJC Scopus subject areas

  • Software
  • Artificial Intelligence
  • Control and Systems Engineering
  • Statistics and Probability


