Entropy-regularized optimal transport augments the classical Monge-Kantorovich problem with an entropic penalty, yielding min_{π∈Π(μ,ν)} ⟨C, π⟩ − εH(π), where H(π) is the entropy of the coupling π and ε > 0 controls the regularization strength. This yields smoother, more tractable transport plans than classical OT, and the minimizer can be computed efficiently via the Sinkhorn algorithm.
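The Sinkhorn iteration mentioned above reduces to alternately rescaling the rows and columns of a Gibbs kernel. A minimal NumPy sketch (function and variable names are illustrative, not from any particular library):

```python
import numpy as np

def sinkhorn(mu, nu, C, eps=0.1, n_iters=200):
    """Entropic OT via Sinkhorn iterations (matrix scaling).

    Approximately minimizes <C, P> - eps*H(P) over couplings P
    with row marginal mu and column marginal nu.
    """
    K = np.exp(-C / eps)          # Gibbs kernel
    u = np.ones_like(mu)
    v = np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)          # enforce row marginal
        v = nu / (K.T @ u)        # enforce column marginal
    return u[:, None] * K * v[None, :]
```

The returned plan is diag(u) K diag(v); larger ε gives faster convergence but a blurrier coupling.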

The static version of the Schrödinger bridge problem is equivalent to entropic OT with quadratic cost when using Brownian reference processes. As ε → 0, the regularized solution converges to the classical OT plan and the regularized cost converges to the Wasserstein-2 distance. This connection (surveyed by Léonard 2014) places SB methods within the computational OT toolkit and provides a principled way to interpolate between stochastic (SB) and deterministic (OT) transport maps.
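The ε → 0 limit can be checked on a toy problem. In this sketch (a self-contained Sinkhorn helper; the example setup is illustrative), two identical uniform measures have exact OT cost 0, so the entropic transport cost ⟨C, P_ε⟩ should shrink toward 0 as ε decreases:

```python
import numpy as np

def entropic_cost(mu, nu, C, eps, n_iters=2000):
    # Transport cost <C, P_eps> of the Sinkhorn coupling.
    K = np.exp(-C / eps)
    u, v = np.ones_like(mu), np.ones_like(nu)
    for _ in range(n_iters):
        u = mu / (K @ v)
        v = nu / (K.T @ u)
    P = u[:, None] * K * v[None, :]
    return float((C * P).sum())

# Identical uniform marginals on two points: the identity coupling
# is optimal with cost 0, so the entropic cost decays with eps.
mu = nu = np.array([0.5, 0.5])
C = np.array([[0.0, 1.0], [1.0, 0.0]])
costs = [entropic_cost(mu, nu, C, eps) for eps in (1.0, 0.1, 0.01)]
```

For larger ε the plan spreads mass onto the off-diagonal, inflating the cost; shrinking ε recovers the deterministic plan.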

In machine learning, entropic OT is used for: minibatch OT approximations in flow matching (OT-CFM), computing Schrödinger bridges for generative modeling and domain transfer, and as a theoretically grounded loss function.

Key Details

  • Adds an entropy term −εH(π) to the OT objective
  • Solved via Sinkhorn/IPF
  • Equivalent to the static SB problem — the static Schrödinger problem min_π KL(π ‖ R) over couplings π with fixed marginals, where R is the reference coupling (Léonard 2014)
  • ε → 0 recovers classical OT via Γ-convergence of the Schrödinger problem to Monge-Kantorovich
  • The optimal coupling has product-shaped density π*(x, y) = f(x) e^(−c(x,y)/ε) g(y), where f, g are the Schrödinger potentials
  • Enables efficient computation via matrix scaling
  • Cuturi (2013) popularized entropic OT in ML
  • The fluid-dynamic perspective (Chen et al. 2014) shows entropic OT differs from standard OT by a Fisher information penalty, not just an entropy term — the SB functional (unit diffusion) is ∫∫[½||v||² + ⅛||∇ln ρ||²]ρ dtdx vs Benamou-Brenier’s ∫∫½||v||²ρ dtdx
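For small ε the Gibbs kernel e^(−C/ε) underflows, so in practice the Sinkhorn/IPF iteration is often run in the log domain, updating the Schrödinger potentials f, g directly instead of the scaling vectors u = e^(f/ε), v = e^(g/ε). A hedged sketch (names and the small `logsumexp` helper are illustrative, not a specific library API):

```python
import numpy as np

def logsumexp(A, axis):
    # Numerically stable log-sum-exp along one axis.
    m = A.max(axis=axis, keepdims=True)
    return np.squeeze(m, axis) + np.log(np.exp(A - m).sum(axis=axis))

def sinkhorn_log(mu, nu, C, eps=0.02, n_iters=1000):
    """Log-domain Sinkhorn/IPF: stable even for small eps.

    The plan has the product form P_ij = exp((f_i + g_j - C_ij)/eps),
    with f, g the Schrodinger potentials.
    """
    f = np.zeros_like(mu)
    g = np.zeros_like(nu)
    for _ in range(n_iters):
        g = eps * (np.log(nu) - logsumexp((f[:, None] - C) / eps, axis=0))
        f = eps * (np.log(mu) - logsumexp((g[None, :] - C) / eps, axis=1))
    P = np.exp((f[:, None] + g[None, :] - C) / eps)
    return P, f, g
```

This is the same matrix-scaling fixed point as plain Sinkhorn, rewritten so the exponentials never leave a stable range.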

concept