The method of moments for random metric measure spaces, developed by Foutel-Rodier (2024), is a technique for proving vague convergence of sequences of random marked metric measure spaces in the Gromov-weak topology. It extends the classical method of moments from probability theory to the much richer setting of metric measure spaces.

The moment measure of order k >= 1 of a measure M on the space of mmm-spaces (marked metric measure spaces, with marks in E) is the unique measure M_k on R_+^{k x k} x E^k such that

M_k[phi] = M[Phi]

for all monomials Phi of order k with test function phi. Here Phi evaluates a space by sampling k points according to its measure and applying phi to their k x k matrix of pairwise distances and their marks. Intuitively, M_k captures the joint distribution of the distance matrix and marks of k points sampled from a random mmm-space, weighted by M.
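As a concrete illustration, here is a minimal Python sketch (not from the paper) of the order-k moment of a finite mmm-space, where the monomial averages phi over all k-tuples of points weighted by the masses mu:

```python
import itertools
import math

def moment(dist, marks, mu, k, phi):
    """Order-k moment M_k[phi] of a finite mmm-space, by exact enumeration.

    dist  : N x N matrix of pairwise distances
    marks : list of N marks (elements of the mark space E)
    mu    : list of N masses (the sampling measure; need not be normalized)
    phi   : test function phi(D, v) of the k x k distance matrix D of a
            sampled k-tuple and the tuple v of its marks
    """
    n = len(mu)
    total = 0.0
    for idx in itertools.product(range(n), repeat=k):
        D = [[dist[i][j] for j in idx] for i in idx]
        v = [marks[i] for i in idx]
        weight = math.prod(mu[i] for i in idx)  # product measure mu^{(x)k}
        total += weight * phi(D, v)
    return total
```

For instance, with phi(D, v) = D[0][1] and k = 2, this returns the mu x mu-average pairwise distance; with phi = 1 it returns the k-th power of the total mass.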

Algorithm

  1. Compute the moment measures M_{k,n} of the laws M_n of the random mmm-spaces X_n
  2. Show that M_{k,n} → M_k weakly for each k >= 1
  3. Verify the Carleman condition: sum_{k>=1} M_k[1]^{-1/(2k)} = infinity
  4. Conclude: (M_n) converges vaguely in the Gromov-weak topology to the measure with moment measures (M_k)
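The Carleman step involves only the scalars M_k[1]. A numerical sketch (the two moment sequences below are illustrative assumptions, not from the paper) works with log-moments to avoid overflow for fast-growing masses:

```python
import math

def carleman_partial_sums(log_masses, K):
    """Partial sums of sum_{k>=1} m_k^(-1/(2k)), given log_masses[k-1] = log m_k.

    Divergence of the full series is the Carleman condition; numerically one can
    only observe whether the partial sums keep growing or stall.
    """
    s, sums = 0.0, []
    for k in range(1, K + 1):
        s += math.exp(-log_masses[k - 1] / (2 * k))
        sums.append(s)
    return sums

# Illustrative total-mass moment sequences (assumptions for the demo):
# m_k = k!          -> m_k^(1/(2k)) ~ sqrt(k/e), terms ~ 1/sqrt(k): series diverges
# m_k = exp(k^2/2)  -> terms exp(-k/4) decay geometrically: series converges
factorial_like = [math.lgamma(k + 1) for k in range(1, 101)]
lognormal_like = [k * k / 2 for k in range(1, 101)]
```

Running the partial sums to K = 100 shows the factorial-type sequence still growing (Carleman plausibly holds) while the exp(k^2/2)-type sequence has stalled (Carleman fails), matching the classical moment-problem dichotomy.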

If additionally M_{0,n} → M_0 (convergence of the total mass, i.e. the zeroth moment), then the convergence is weak rather than merely vague.

Key Properties

  • Only moments of order k >= 1 are needed for vague convergence — the k = 0 moment (total mass) is not required
  • In branching process applications, k >= 1 moments come from many-to-few formulas, while the k = 0 moment requires estimating survival probabilities — which is often much harder
  • The Carleman condition is a condition on the total mass moments M_k[1] only, not on the full moment measures
  • A perturbation theorem (Theorem 5.1) allows comparing truncated approximations to the original sequence, without knowing the limit in advance
  • For the Brownian continuum random tree under the excursion measure, this approach enables proving convergence without survival probability estimates
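To illustrate the asymmetry between the k >= 1 and k = 0 terms, here is a toy Galton-Watson simulation (an assumed example, not the CRT setting of the paper): the first moment E[Z_n] = m^n comes from a one-line many-to-one computation, whereas the survival probability P(Z_n > 0) has no comparably simple closed form and must be estimated separately.

```python
import random

def gw_generation_size(n, rng):
    """One Galton-Watson realization: each individual has 2 children with
    probability 0.6, else 0, so the mean offspring number is m = 1.2."""
    z = 1
    for _ in range(n):
        z = sum(2 for _ in range(z) if rng.random() < 0.6)
    return z

rng = random.Random(0)
trials, n, m = 20000, 5, 1.2
sizes = [gw_generation_size(n, rng) for _ in range(trials)]
mean_size = sum(sizes) / trials                  # many-to-one predicts m**n = 2.48832
survival = sum(z > 0 for z in sizes) / trials    # no simple closed form
```

This is the point of the bullet above: the moment side of the method is computable by spine techniques, while the total-mass side hides a survival-probability estimate.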
