Scaling up linear programming with PDLP

Classical linear programming (LP) problems are among the most foundational problems in computer science and operations research. With extensive applications across numerous sectors of the global economy, such as manufacturing, networking, and other fields, LP has been the cornerstone of mathematical programming and has significantly influenced the development of today's sophisticated modeling and algorithmic frameworks for data-driven decision making. If there is something to optimize, there is a good chance LP is involved.
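For readers who want the canonical formulation, a generic LP can be written in the textbook standard form below; this is standard background for context, not a formulation taken from this announcement or specific to PDLP:

```latex
% Standard-form linear program: minimize a linear objective
% over a polyhedron defined by linear inequality constraints.
\min_{x \in \mathbb{R}^n} \; c^{\top} x
\quad \text{subject to} \quad
A x \le b, \quad x \ge 0,
\qquad A \in \mathbb{R}^{m \times n},\; b \in \mathbb{R}^{m},\; c \in \mathbb{R}^{n}.
```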

Since the late 1940s, LP solving has evolved significantly, with Dantzig's simplex method and various interior-point methods being the most prevalent techniques. Today's advanced commercial LP solvers rely on these methods, but they face challenges in scaling to very large instances because of their computational demands. In response to this limitation, first-order methods (FOMs) have gained traction for large-scale LP problems.

With the above in mind, we introduce our solver PDLP (Primal-dual hybrid gradient enhanced for LP), a new FOM-based LP solver that significantly scales up our LP solving capabilities. Relying on matrix-vector multiplication rather than matrix factorization, PDLP requires less memory and is more compatible with modern computational technologies like GPUs and distributed systems, offering a scalable alternative that mitigates the memory and computational inefficiencies of traditional LP methods. PDLP is open-sourced in Google's OR-Tools. This project has been in development since 2018 [1, 2, 3], and we are proud to announce that it was co-awarded the prestigious Beale–Orchard-Hays Prize at the International Symposium on Mathematical Programming in July 2024. This accolade is one of the highest honors in the field of computational optimization, awarded every three years by the Mathematical Optimization Society.
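To make the matrix-vector-multiplication point concrete, here is a minimal NumPy sketch of plain primal-dual hybrid gradient (PDHG), the base algorithm that PDLP enhances, applied to an equality-form LP. This is an illustration under textbook assumptions, not PDLP itself: PDLP layers presolve, diagonal preconditioning, adaptive step sizes, and restarts on top of these updates. Note that each iteration touches the constraint matrix only through products with A and its transpose, never through a factorization.

```python
import numpy as np

def pdhg_lp(c, A, b, iters=10000):
    """Minimal PDHG sketch for:  min c^T x  s.t.  Ax = b, x >= 0.

    Plain PDHG only; PDLP adds presolve, preconditioning, adaptive
    step sizes, and restarts on top of these basic updates.
    """
    m, n = A.shape
    x, y = np.zeros(n), np.zeros(m)
    # Step sizes must satisfy tau * sigma * ||A||_2^2 < 1 for convergence.
    # Dense 2-norm used here for brevity; a matrix-free implementation
    # would estimate ||A||_2 with power iteration instead.
    op_norm = np.linalg.norm(A, 2)
    tau = sigma = 0.9 / op_norm
    for _ in range(iters):
        # Primal step: move along -(c - A^T y), then project onto x >= 0.
        x_new = np.maximum(x - tau * (c - A.T @ y), 0.0)
        # Dual ascent step, evaluated at the extrapolated point 2*x_new - x.
        y = y + sigma * (b - A @ (2.0 * x_new - x))
        x = x_new
    return x, y

# Tiny example: min x1 + 2*x2  s.t.  x1 + x2 = 1, x >= 0  (optimum x = (1, 0)).
x, y = pdhg_lp(np.array([1.0, 2.0]), np.array([[1.0, 1.0]]), np.array([1.0]))
```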
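Since PDLP ships in OR-Tools, it can be selected like any other linear solver there. The snippet below is a usage sketch assuming a recent OR-Tools Python release in which PDLP is registered under the solver name "PDLP" in the pywraplp wrapper; the tiny model merely demonstrates the API, whereas PDLP is aimed at instances far too large for factorization-based solvers.

```python
from ortools.linear_solver import pywraplp

# Ask the OR-Tools wrapper for PDLP (assumes a build where PDLP is available).
solver = pywraplp.Solver.CreateSolver("PDLP")

x = solver.NumVar(0.0, solver.infinity(), "x")
y = solver.NumVar(0.0, solver.infinity(), "y")
solver.Add(x + 2 * y <= 14.0)
solver.Add(3 * x - y >= 0.0)
solver.Add(x - y <= 2.0)
solver.Maximize(3 * x + 4 * y)

status = solver.Solve()
if status == pywraplp.Solver.OPTIMAL:
    # As a first-order method, PDLP returns solutions accurate to its
    # convergence tolerances rather than exact vertex solutions.
    print("objective =", solver.Objective().Value())
    print("x =", x.solution_value(), "y =", y.solution_value())
```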