The core idea is to use asymptotic expansions of forward-backward stochastic differential equations (FBSDEs) as control variates for deep BSDE solvers. Instead of learning the full solution (Y, Z) from scratch, the neural network learns only the residuals Y - Y^{AE,l} and Z - Z^{AE,l}, where (Y^{AE,l}, Z^{AE,l}) is the l-th order asymptotic expansion of the FBSDE solution.
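The variance-reduction effect behind the residual idea can be seen on a toy target: if a prior approximation Y^{AE} captures most of Y, the residual Y - Y^{AE} has far smaller variance, so the network only has to fit a small correction. The target and the expansion below are illustrative stand-ins, not the paper's FBSDE.

```python
import numpy as np

# Toy illustration (not the paper's model): samples of a "true" quantity Y
# and its first-order expansion Y_AE in a small parameter eps. Subtracting
# the expansion leaves a residual whose variance is orders of magnitude
# smaller than that of Y itself.
rng = np.random.default_rng(42)
eps = 0.1                          # small perturbation parameter
xi = rng.normal(size=100_000)      # driving Gaussian noise
Y = np.exp(eps * xi)               # hypothetical "true" solution samples
Y_AE = 1.0 + eps * xi              # first-order Taylor expansion in eps

var_full = Y.var()                 # variance the plain solver must handle
var_resid = (Y - Y_AE).var()       # variance left after the control variate
```

Here the residual variance is smaller than the full variance by roughly a factor eps^2, which is exactly why the network's remaining learning task is easier.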
The asymptotic expansion is constructed by introducing a perturbation parameter epsilon in the FBSDE coupling and Taylor-expanding the solution in powers of epsilon. The leading-order terms (X^0, Y^0) are deterministic (X^0 solves a forward ODE and Y^0 the associated backward ODE), and Z^0 = 0 identically. The first-order correction (X^1, Y^1, Z^1) is Gaussian with deterministic coefficients. Despite this simplicity, the expansions provide highly effective prior knowledge for the neural network.
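The zeroth-order structure can be sketched numerically. The snippet below assumes an illustrative scalar FBSDE dX = mu(X) dt + eps * sigma dW, -dY = f(X, Y) dt - Z dW, Y_T = g(X_T); the functions mu, f, g are hypothetical choices, not taken from the paper. At eps = 0 the noise vanishes, so X^0 solves a forward ODE, Y^0 a backward ODE along the deterministic path, and Z^0 is identically zero.

```python
import numpy as np

# Hypothetical coefficients for the toy FBSDE (illustrative only)
mu = lambda x: -0.5 * x           # forward drift
f = lambda x, y: 0.1 * x - y      # BSDE driver
g = lambda x: x ** 2              # terminal condition

def zeroth_order(x0, T=1.0, n=1000):
    """Leading-order expansion terms: X^0 (forward ODE), Y^0 (backward ODE)."""
    dt = T / n
    # Forward Euler for the ODE dX^0/dt = mu(X^0)
    xs = np.empty(n + 1)
    xs[0] = x0
    for i in range(n):
        xs[i + 1] = xs[i] + mu(xs[i]) * dt
    # Backward Euler for -dY^0/dt = f(X^0, Y^0), starting from Y^0_T = g(X^0_T)
    y = g(xs[-1])
    for i in reversed(range(n)):
        y = y + f(xs[i], y) * dt
    return xs, y  # Z^0 = 0 identically at this order

xs, y0 = zeroth_order(x0=1.0)
```

Because both equations are deterministic ODEs, evaluating these prior terms is essentially free compared with training the network, which is the source of the negligible overhead noted below.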
Key Details
- Variance reduction: the control variate removes the “known” part of the solution, leaving the network to learn only the small, unknown correction
- Stability: without the expansion, 90 of 100 random initializations with weights drawn from [-2, 2] fail to update the network within 100 learning steps; with the expansion, 0 of 100 fail
- Both Y and Z needed: using only the Y expansion or only the Z expansion as a control variate gives marginal improvement; using both simultaneously is essential, because the neural network controls u and z depend jointly on both prior components
- Implementation: replace network output phi^1 with chi * Y^{AE,l} + phi^1 and phi^2 with chi * Z^{AE,l} + phi^2, where chi in {0,1} toggles the expansion on/off
- Computational overhead: negligible, since the expansion terms are deterministic functions
- Connection to transaction costs: the asymptotic expansions in small transaction cost parameters (e.g., sqrt(lambda) scaling) used in equilibrium theory can serve directly as control variates for the deep BSDE solver of the equilibrium FBSDE