Applied Mathematics and Mechanics (English Edition) ›› 2023, Vol. 44 ›› Issue (7): 1111-1124. DOI: https://doi.org/10.1007/s10483-023-2997-7

Variational inference in neural functional prior using normalizing flows: application to differential equation and operator learning problems

Xuhui MENG   

  1. Institute of Interdisciplinary Research for Mathematics and Applied Science, School of Mathematics and Statistics, Huazhong University of Science and Technology, Wuhan 430074, China
  • Received: 2023-02-22  Revised: 2023-05-28  Online: 2023-07-01  Published: 2023-07-05
  • Contact: Xuhui MENG, E-mail: xuhui_meng@hust.edu.cn
  • Supported by:
    the National Natural Science Foundation of China (No. 12201229)

Abstract: Physics-informed deep learning has recently emerged as an effective tool for leveraging both observational data and available physical laws. Physics-informed neural networks (PINNs) and deep operator networks (DeepONets) are two such models. The former encodes the physical laws via automatic differentiation, while the latter learns the hidden physics from data. Generally, the noisy and limited observational data as well as the over-parameterization of neural networks (NNs) result in uncertainty in the predictions of deep learning models. The framework proposed in "MENG, X., YANG, L., MAO, Z., FERRANDIS, J. D., and KARNIADAKIS, G. E. Learning functional priors and posteriors from data and physics. Journal of Computational Physics, 457, 111073 (2022)" has two stages: (i) prior learning, and (ii) posterior estimation. In the first stage, generative adversarial networks (GANs) are utilized to learn a functional prior either from a prescribed function distribution, e.g., a Gaussian process, or from historical data and available physics. In the second stage, the Hamiltonian Monte Carlo (HMC) method is utilized to estimate the posterior in the latent space of the GANs. However, the vanilla HMC does not support mini-batch training, which limits its applications to problems with big data. In the present work, we propose to use normalizing flow (NF) models in the context of variational inference (VI), which naturally enables mini-batch training, as an alternative to HMC for posterior estimation in the latent space of GANs. A series of numerical experiments, including a nonlinear differential equation problem and a 100-dimensional (100D) Darcy problem, are conducted to demonstrate that the NFs with full-/mini-batch training achieve accuracy comparable to that of the "gold standard" HMC. Moreover, the mini-batch training of the NF makes it a promising tool for quantifying uncertainty in solving high-dimensional partial differential equation (PDE) problems with big data.
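
To make the second stage concrete, below is a minimal sketch (in PyTorch) of NF-based variational inference in the latent space of a pre-trained GAN generator, assuming a planar-flow architecture, a standard-normal latent prior, and a Gaussian likelihood with mini-batch rescaling of the data term. The generator call, data shapes, and noise level are hypothetical placeholders for illustration, not the paper's actual implementation.

    import math
    import torch
    import torch.nn as nn

    class PlanarFlow(nn.Module):
        """One planar-flow layer f(z) = z + u * tanh(w^T z + b) (Rezende & Mohamed, 2015)."""
        def __init__(self, dim):
            super().__init__()
            self.u = nn.Parameter(0.01 * torch.randn(dim))
            self.w = nn.Parameter(0.01 * torch.randn(dim))
            self.b = nn.Parameter(torch.zeros(1))

        def forward(self, z):
            # Re-parameterize u so that w^T u_hat >= -1, which keeps the map invertible.
            wu = self.w @ self.u
            u_hat = self.u + (nn.functional.softplus(wu) - 1.0 - wu) * self.w / (self.w @ self.w)
            lin = z @ self.w + self.b                        # shape: (n_samples,)
            z_new = z + torch.tanh(lin).unsqueeze(-1) * u_hat
            # log|det df/dz| = log|1 + (1 - tanh^2(lin)) * w^T u_hat|
            psi = (1.0 - torch.tanh(lin) ** 2).unsqueeze(-1) * self.w
            log_det = torch.log(torch.abs(1.0 + psi @ u_hat) + 1e-8)
            return z_new, log_det

    def neg_elbo(flows, generator, x_batch, y_batch, n_total, latent_dim,
                 noise_std=0.1, n_samples=32):
        """Monte-Carlo estimate of the negative ELBO with mini-batch rescaling."""
        z = torch.randn(n_samples, latent_dim)               # base samples z_0 ~ N(0, I)
        log_q = -0.5 * (z ** 2).sum(-1) - 0.5 * latent_dim * math.log(2.0 * math.pi)
        for flow in flows:                                   # push z_0 through the flow
            z, log_det = flow(z)
            log_q = log_q - log_det                          # change-of-variables formula
        # Standard-normal prior on the GAN latent variable, as in the two-stage framework.
        log_prior = -0.5 * (z ** 2).sum(-1) - 0.5 * latent_dim * math.log(2.0 * math.pi)
        # Hypothetical call: the frozen generator maps latent samples to predictions
        # of shape (n_samples, batch) at the mini-batch sensor locations x_batch.
        y_pred = generator(z, x_batch)
        log_lik = -0.5 * ((y_pred - y_batch) ** 2).sum(-1) / noise_std ** 2
        log_lik = log_lik * (n_total / x_batch.shape[0])     # rescale mini-batch likelihood
        return -(log_lik + log_prior - log_q).mean()

A training loop would then draw mini-batches from the full data set, minimize neg_elbo over the flow parameters with a stochastic optimizer such as torch.optim.Adam while keeping the generator frozen, and afterwards sample the trained flow to propagate latent samples through the generator for predictive uncertainty estimates.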

Key words: uncertainty quantification (UQ), physics-informed neural network (PINN), deep operator network (DeepONet), generative adversarial network (GAN), normalizing flow (NF), differential equation
