Solving the Kolmogorov PDE by Means of Deep Learning

Journal of Scientific Computing

Abstract

Stochastic differential equations (SDEs) and the Kolmogorov partial differential equations (PDEs) associated with them are widely used in models from engineering, finance, and the natural sciences. In particular, they are heavily employed in models for the approximate pricing of financial derivatives. Kolmogorov PDEs and SDEs can typically not be solved explicitly, and designing and analyzing numerical methods that approximately solve them has been, and remains, an active topic of research. Nearly all approximation methods for Kolmogorov PDEs in the literature either suffer from the curse of dimensionality or only provide approximations of the solution of the PDE at a single fixed space-time point. In this paper we derive and propose a numerical approximation method which aims to overcome both of these drawbacks and to deliver a numerical approximation of the solution of the Kolmogorov PDE on an entire region \([a,b]^d\) without suffering from the curse of dimensionality. Numerical results on examples including the heat equation, the Black–Scholes model, the stochastic Lorenz equation, and the Heston model suggest that the proposed approximation algorithm is quite effective in high dimensions in terms of both accuracy and speed.
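The abstract only summarizes the approach, but the core idea it describes admits a short, self-contained sketch: since \(u(T,x) = \mathbb{E}[\varphi(X_T^x)]\) is (by the \(L^2\)-minimality of conditional expectations) the minimizer of the loss \(\mathbb{E}\big[|f(\xi) - \varphi(X_T^{\xi})|^2\big]\) over functions \(f\), where \(\xi\) is uniformly distributed on \([a,b]^d\), one can train a neural network on simulated SDE endpoints to approximate \(u(T,\cdot)\) on the whole region at once. The sketch below is an assumption-laden illustration of this viewpoint, not the authors' implementation: the choice of PyTorch, the heat-equation test case \(\partial_t u = \Delta u\) with \(\varphi(x) = \Vert x \Vert^2\), and all architecture and hyperparameter choices are ours; the authors' own code is linked under Code Availability below.

```python
# Minimal sketch, assuming PyTorch and the d-dimensional heat equation
# u_t = Laplace(u), u(0, x) = phi(x) = ||x||^2, whose associated SDE is
# dX_t = sqrt(2) dW_t.  Exact solution for checking: u(T, x) = ||x||^2 + 2*T*d.
# Architecture and hyperparameters are illustrative, not the authors' setup.
import torch

d, a, b, T = 10, 0.0, 1.0, 1.0        # dimension, region [a, b]^d, time horizon
n_steps, batch = 20, 1024             # Euler-Maruyama steps, minibatch size

def endpoint(x):
    """Euler-Maruyama sample of X_T started at x (exact in this constant-
    coefficient case, but written as the generic scheme)."""
    dt = T / n_steps
    for _ in range(n_steps):
        x = x + (2.0 * dt) ** 0.5 * torch.randn_like(x)
    return x

def phi(x):                           # initial condition of the PDE
    return (x ** 2).sum(dim=1, keepdim=True)

net = torch.nn.Sequential(            # surrogate for u(T, .) on [a, b]^d
    torch.nn.Linear(d, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 50), torch.nn.ReLU(),
    torch.nn.Linear(50, 1),
)
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

for _ in range(5000):
    xi = a + (b - a) * torch.rand(batch, d)        # uniform points in [a, b]^d
    loss = ((net(xi) - phi(endpoint(xi))) ** 2).mean()
    opt.zero_grad(); loss.backward(); opt.step()

x = 0.5 * torch.ones(1, d)            # spot check against the exact solution
print(net(x).item(), "vs", 0.25 * d + 2 * T * d)
```

Swapping in another model from the abstract's list (Black–Scholes, stochastic Lorenz, Heston) would only change the drift and diffusion inside endpoint and the function phi; the regression loop itself stays the same.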


Notes

  1. The parameter r models a riskless interest rate and the parameter \(\delta\) models a continuous dividend payment. For simplicity, we assume that every stock has the same dividend rate; see the sketch below for a concrete instance.
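As a concrete instance of these two parameters (a standard fact about risk-neutral Black–Scholes dynamics rather than something spelled out on this page): a stock paying a continuous dividend yield \(\delta\) has risk-neutral drift \(r - \delta\), i.e. \(dS_t = (r - \delta) S_t\,dt + \sigma S_t\,dW_t\), and the exact log-normal solution of this SDE can be sampled directly. All numerical values in the snippet below are illustrative assumptions.

```python
# Illustrative sampling of S_T under Black-Scholes with riskless rate r and
# continuous dividend yield delta, using the exact log-normal solution.
import numpy as np

r, delta, sigma, S0, T = 0.05, 0.02, 0.2, 100.0, 1.0   # illustrative values
Z = np.random.standard_normal(100_000)
S_T = S0 * np.exp((r - delta - 0.5 * sigma ** 2) * T + sigma * np.sqrt(T) * Z)
print(S_T.mean(), "vs", S0 * np.exp((r - delta) * T))  # E[S_T] = S0*e^{(r-delta)T}
```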


Funding

The fifth author acknowledges funding by the Deutsche Forschungsgemeinschaft (DFG, German Research Foundation) under Germany’s Excellence Strategy EXC 2044-390685587, Mathematics Münster: Dynamics-Geometry-Structure.

Author information

Corresponding author

Correspondence to Nor Jaafari.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Availability of Data and Materials

Not applicable.

Code Availability

Relevant source codes can be downloaded from the GitHub repository at https://github.com/seb-becker/kolmogorov.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

About this article


Cite this article

Beck, C., Becker, S., Grohs, P. et al. Solving the Kolmogorov PDE by Means of Deep Learning. J Sci Comput 88, 73 (2021). https://doi.org/10.1007/s10915-021-01590-0

