Abstract
Solving linear programs by using entropic penalization has recently attracted new interest in the optimization community, since this strategy forms the basis for the fastest-known algorithms for the optimal transport problem, with many applications in modern large-scale machine learning. Crucial to these applications has been an analysis of how quickly solutions to the penalized program approach true optima of the original linear program. More than 20 years ago, Cominetti and San Martín showed that this convergence is exponentially fast; however, their proof is asymptotic and does not give any indication of how accurately the entropic program approximates the original program for any particular choice of the penalization parameter. We close this long-standing gap in the literature regarding entropic penalization by giving a new proof of the exponential convergence, valid for any linear program. Our proof is non-asymptotic, yields explicit constants, and has the virtue of being extremely simple. We provide matching lower bounds and show that the entropic approach does not lead to a near-linear time approximation scheme for the linear assignment problem.
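As context for the abstract, here is a minimal sketch of the penalized program in question and of the exponential-convergence statement attributed to Cominetti and San Martín. The notation (polytope $\mathcal{P}$, penalty parameter $\eta$, constants $M$, $k$) is illustrative and need not match the paper's own.

```latex
% Entropic penalization of a linear program (illustrative notation).
% Original LP:   minimize <c, x> over the polytope P = { x >= 0 : Ax = b }.
% Penalized LP:  add a negative-entropy term weighted by eta > 0.
\[
  x_\eta \;=\; \operatorname*{arg\,min}_{x \in \mathcal{P}}
  \;\langle c, x \rangle \;+\; \eta \sum_i x_i \log x_i,
  \qquad \mathcal{P} = \{\, x \ge 0 : Ax = b \,\}.
\]
% Exponential convergence (the phenomenon discussed in the abstract):
% as eta -> 0, the penalized solution x_eta approaches an optimal
% solution x* of the original LP at an exponential rate, i.e.
\[
  \lVert x_\eta - x^\ast \rVert \;\le\; M\, e^{-k/\eta}
  \quad \text{for problem-dependent constants } M, k > 0.
\]
```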
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 1841-1855 |
| Number of pages | 15 |
| Journal | Proceedings of Machine Learning Research |
| Volume | 75 |
| State | Published - 2018 |
| Event | 31st Annual Conference on Learning Theory, COLT 2018 |
| Location | Stockholm, Sweden |
| Duration | Jul 6 2018 → Jul 9 2018 |
Keywords
- assignment problem
- entropic penalization
- optimal transport
- Sinkhorn algorithm
ASJC Scopus subject areas
- Artificial Intelligence
- Software
- Control and Systems Engineering
- Statistics and Probability