An explicit analysis of the entropic penalty in linear programming

Jonathan Weed

Research output: Contribution to journal › Conference article › peer-review

Abstract

Solving linear programs by using entropic penalization has recently attracted new interest in the optimization community, since this strategy forms the basis for the fastest-known algorithms for the optimal transport problem, with many applications in modern large-scale machine learning. Crucial to these applications has been an analysis of how quickly solutions to the penalized program approach true optima of the original linear program. More than 20 years ago, Cominetti and San Martín showed that this convergence is exponentially fast; however, their proof is asymptotic and does not give any indication of how accurately the entropic program approximates the original program for any particular choice of the penalization parameter. We close this long-standing gap in the literature regarding entropic penalization by giving a new proof of the exponential convergence, valid for any linear program. Our proof is non-asymptotic, yields explicit constants, and has the virtue of being extremely simple. We provide matching lower bounds and show that the entropic approach does not lead to a near-linear time approximation scheme for the linear assignment problem.
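
As a concrete illustration of the penalization strategy described above (a minimal sketch, not code from the paper), the snippet below solves an entropically penalized optimal transport problem with Sinkhorn iterations. The function name `sinkhorn`, the penalization parameter `eta`, the iteration count, and the random instance are all illustrative assumptions; larger `eta` corresponds to a weaker entropy penalty and hence a closer approximation to the original linear program.

```python
import numpy as np

def sinkhorn(C, r, c, eta, num_iters=1000):
    """Approximately solve the entropically penalized transport LP
        min_P  <C, P> - (1/eta) * H(P)   s.t.  P @ 1 = r,  P.T @ 1 = c,
    where H is the Shannon entropy of the transport plan P."""
    K = np.exp(-eta * C)                # Gibbs kernel induced by the entropy penalty
    u = np.ones_like(r)
    v = np.ones_like(c)
    for _ in range(num_iters):
        u = r / (K @ v)                 # rescale rows to match the marginal r
        v = c / (K.T @ u)               # rescale columns to match the marginal c
    P = u[:, None] * K * v[None, :]     # optimal plan of the penalized program
    return P, np.sum(P * C)             # plan and its (unpenalized) transport cost

# Illustrative instance: a random 4x4 cost matrix with uniform marginals.
rng = np.random.default_rng(0)
C = rng.random((4, 4))
r = np.full(4, 0.25)
c = np.full(4, 0.25)
P, cost = sinkhorn(C, r, c, eta=50.0)
print(cost)  # close to the optimum of the original LP when eta is large
```

For large `eta` the reported cost approaches the optimum of the unpenalized linear program, while small `eta` yields a smoother but less accurate plan; quantifying how fast this gap closes as a function of the penalization parameter is exactly the question the paper addresses.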

Original language: English (US)
Pages (from-to): 1841-1855
Number of pages: 15
Journal: Proceedings of Machine Learning Research
Volume: 75
State: Published - 2018
Event: 31st Annual Conference on Learning Theory, COLT 2018 - Stockholm, Sweden
Duration: Jul 6, 2018 - Jul 9, 2018

Keywords

  • assignment problem
  • entropic penalization
  • optimal transport
  • Sinkhorn algorithm

ASJC Scopus subject areas

  • Artificial Intelligence
  • Software
  • Control and Systems Engineering
  • Statistics and Probability
