Current Issue

    2023, Vol. 44, No. 7    Publication date: 2023-07-01
    Articles
    Preface: machine-learning approaches for computational mechanics
    Z. LI, Guohui HU, Zhiliang WANG, G. E. KARNIADAKIS
    2023, 44(7):  1035-1038.  doi:10.1007/s10483-023-2999-7
    Effective data sampling strategies and boundary condition constraints of physics-informed neural networks for identifying material properties in solid mechanics
    W. WU, M. DANEKER, M. A. JOLLEY, K. T. TURNER, L. LU
    2023, 44(7):  1039-1068.  doi:10.1007/s10483-023-2995-8
    Material identification is critical for understanding the relationship between mechanical properties and the associated mechanical functions. However, material identification is a challenging task, especially when the characteristic of the material is highly nonlinear in nature, as is common in biological tissue. In this work, we identify unknown material properties in continuum solid mechanics via physics-informed neural networks (PINNs). To improve the accuracy and efficiency of PINNs, we develop efficient strategies to nonuniformly sample observational data. We also investigate different approaches to enforce Dirichlet-type boundary conditions (BCs) as soft or hard constraints. Finally, we apply the proposed methods to a diverse set of time-dependent and time-independent solid mechanics examples that span linear elastic and hyperelastic material space. The estimated material parameters achieve relative errors of less than 1%. As such, this work is relevant to diverse applications, including optimizing structural integrity and developing novel materials.
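As a toy illustration of the hard-constraint option mentioned above (a generic 1D output-transform construction, assumed here for illustration rather than taken from the paper), a Dirichlet BC can be built into the solution ansatz so that it holds exactly for any network output:

```python
import numpy as np

def hard_bc_solution(x, net, g0=0.0, g1=1.0):
    """Impose Dirichlet BCs u(0)=g0, u(1)=g1 as a hard constraint:
    u(x) = g0*(1-x) + g1*x + x*(1-x)*N(x), so the BCs hold exactly
    no matter what the network N outputs."""
    return g0 * (1.0 - x) + g1 * x + x * (1.0 - x) * net(x)

# Stand-in "network": any smooth function of x works for the demo.
net = lambda x: np.sin(3.0 * x)

x = np.array([0.0, 0.5, 1.0])
u = hard_bc_solution(x, net)
# u[0] and u[-1] match the prescribed boundary values exactly.
```

With a soft constraint, by contrast, the BC enters the loss as a penalty term and is only satisfied approximately; the hard-constraint transform removes that competition between loss terms.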
    Physics-informed neural networks with residual/gradient-based adaptive sampling methods for solving partial differential equations with sharp solutions
    Zhiping MAO, Xuhui MENG
    2023, 44(7):  1069-1084.  doi:10.1007/s10483-023-2994-7
    We consider solving forward and inverse partial differential equations (PDEs) that have sharp solutions with physics-informed neural networks (PINNs) in this work. In particular, to better capture the sharpness of the solution, we propose adaptive sampling methods (ASMs) based on the residual and the gradient of the solution. We first present a residual-only-based ASM, denoted by ASM I. In this approach, we first train the neural network with a small number of residual points and divide the computational domain into a certain number of sub-domains; we then identify the sub-domain with the largest mean absolute value of the residual and add, as new residual points, the points with the largest absolute residuals in that sub-domain. We further develop a second type of ASM (denoted by ASM II) based on both the residual and the gradient of the solution, because the residual alone may not efficiently capture the sharpness of the solution. The procedure of ASM II is almost the same as that of ASM I, except that the new residual points must have not only large residuals but also large gradients. To demonstrate the effectiveness of the present methods, we use both ASM I and ASM II to solve a number of PDEs, including the Burgers' equation, the compressible Euler equations, the Poisson equation over an L-shaped domain, and a high-dimensional Poisson equation. The numerical results show that the sharp solutions can be well approximated with either ASM I or ASM II, and both methods deliver much more accurate solutions than the original PINNs with the same number of residual points. Moreover, ASM II outperforms ASM I in terms of accuracy, efficiency, and stability, which means that the gradient of the solution improves the stability and efficiency of the adaptive sampling procedure as well as the accuracy of the solution. Furthermore, we also employ a similar adaptive sampling technique for the data points of the boundary conditions (BCs) when the sharp region of the solution lies near the boundary. The results for the L-shaped Poisson problem indicate that the present method can significantly improve the efficiency, stability, and accuracy.
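The ASM I selection step described above can be sketched in a few lines of numpy (the sub-domain count, point budget, and synthetic residual below are illustrative choices, not values from the paper):

```python
import numpy as np

def asm1_new_points(points, residuals, n_sub=4, k=8):
    """ASM I sketch (1D): split [0,1] into n_sub sub-domains, pick the one
    with the largest mean |residual|, and return the k points with the
    largest |residual| inside it as locations for new residual points."""
    edges = np.linspace(0.0, 1.0, n_sub + 1)
    means = []
    for i in range(n_sub):
        mask = (points >= edges[i]) & (points < edges[i + 1])
        means.append(np.abs(residuals[mask]).mean() if mask.any() else -np.inf)
    worst = int(np.argmax(means))  # sub-domain with the largest mean residual
    mask = (points >= edges[worst]) & (points < edges[worst + 1])
    idx = np.argsort(-np.abs(residuals[mask]))[:k]
    return points[mask][idx]

rng = np.random.default_rng(0)
pts = rng.uniform(0.0, 1.0, 200)
res = np.exp(-200.0 * (pts - 0.8) ** 2)   # sharp residual layer near x = 0.8
new = asm1_new_points(pts, res, n_sub=4, k=8)
# All selected points fall in the sub-domain [0.75, 1.0) containing the layer.
```

ASM II would additionally weight the selection by the gradient magnitude of the current solution estimate.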
    Peri-Net-Pro: the neural processes with quantified uncertainty for crack patterns
    M. KIM, G. LIN
    2023, 44(7):  1085-1100.  doi:10.1007/s10483-023-2991-9
    This paper develops a deep learning tool based on neural processes (NPs), called the Peri-Net-Pro, to predict the crack patterns in a moving disk and classify them according to the classification modes with quantified uncertainties. In particular, image classification and regression studies are conducted by means of convolutional neural networks (CNNs) and NPs. First, the amount and quality of the data are enhanced by using peridynamics to theoretically compensate for the problems of the finite element method (FEM) in generating crack pattern images. Second, case studies are conducted with the prototype microelastic brittle (PMB), linear peridynamic solid (LPS), and viscoelastic solid (VES) models obtained by using the peridynamic theory. The case studies are performed to classify the images by using CNNs and determine the suitability of the PMB, LPS, and VES models. Finally, a regression analysis is performed on the crack pattern images with NPs to predict the crack patterns. The regression results confirm that the variance decreases as the number of epochs increases when the NPs are used. The training results gradually improve, and the variance ranges decrease to less than 0.035. The main finding of this study is that the NPs enable accurate predictions even with missing or insufficient training data. When the numbers of context points are set to 10, 100, 300, and 784, and the training information is deliberately omitted for the cases with 10, 100, and 300 context points, the predictions differ when the number of context points is significantly lower. However, a comparison of the results for 100 and 784 context points shows that the predicted results are similar because of the Gaussian processes in the NPs. Therefore, if the NPs are employed for training, the missing information in the training data can be supplemented to predict the results.
    An artificial viscosity augmented physics-informed neural network for incompressible flow
    Yichuan HE, Zhicheng WANG, Hui XIANG, Xiaomo JIANG, Dawei TANG
    2023, 44(7):  1101-1110.  doi:10.1007/s10483-023-2993-9
    Physics-informed neural networks (PINNs) have proven effective in solving some strongly nonlinear partial differential equations (PDEs), e.g., the Navier-Stokes equations, with a small amount of boundary or interior data. However, the feasibility of applying PINNs to flows at moderate or high Reynolds numbers has rarely been reported. The present paper proposes an artificial viscosity (AV)-based PINN for solving forward and inverse flow problems. Specifically, the AV used in the PINN is inspired by the entropy viscosity method developed in conventional computational fluid dynamics (CFD) to stabilize the simulation of flows at high Reynolds numbers. The newly developed PINN is used to solve the forward problem of the two-dimensional steady cavity flow at Re = 1 000 and the inverse problem derived from two-dimensional film boiling. The results show that the AV augmented PINN can solve both problems with good accuracy and substantially reduce the inference errors in the forward problem.
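A rough sketch of the residual-based artificial viscosity idea follows; the functional form and constants are generic entropy-viscosity-style choices assumed for illustration, not the paper's exact definition:

```python
import numpy as np

def artificial_viscosity(residual, h, c_e=1.0, c_max=0.5, vel_scale=1.0):
    """Entropy-viscosity-style sketch: a viscosity proportional to the
    (normalized) PDE residual, capped by a first-order upper bound, so
    dissipation is added only where the solution is under-resolved."""
    nu_e = c_e * h**2 * np.abs(residual) / max(np.abs(residual).max(), 1e-12)
    nu_max = c_max * h * vel_scale   # first-order (upwind-like) cap
    return np.minimum(nu_e, nu_max)

h = 0.01
res = np.array([0.0, 0.5, 2.0])      # residual magnitudes at three points
nu = artificial_viscosity(res, h)
# Viscosity vanishes where the residual is zero and grows with |residual|.
```

In an AV-augmented PINN the resulting viscosity would enter the momentum-equation residual as an additional diffusive term.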
    Variational inference in neural functional prior using normalizing flows: application to differential equation and operator learning problems
    Xuhui MENG
    2023, 44(7):  1111-1124.  doi:10.1007/s10483-023-2997-7
    Physics-informed deep learning has recently emerged as an effective tool for leveraging both observational data and available physical laws. Physics-informed neural networks (PINNs) and deep operator networks (DeepONets) are two such models. The former encodes the physical laws via automatic differentiation, while the latter learns the hidden physics from data. Generally, the noisy and limited observational data as well as the over-parameterization of neural networks (NNs) result in uncertainty in the predictions of deep learning models. The framework proposed in "MENG, X., YANG, L., MAO, Z., FERRANDIS, J. D., and KARNIADAKIS, G. E. Learning functional priors and posteriors from data and physics. Journal of Computational Physics, 457, 111073 (2022)" has two stages: (i) prior learning and (ii) posterior estimation. At the first stage, generative adversarial networks (GANs) are utilized to learn a functional prior either from a prescribed function distribution, e.g., the Gaussian process, or from historical data and available physics. At the second stage, the Hamiltonian Monte Carlo (HMC) method is utilized to estimate the posterior in the latent space of the GANs. However, the vanilla HMC does not support mini-batch training, which limits its applications to problems with big data. In the present work, we propose to use normalizing flow (NF) models in the context of variational inference (VI), which naturally enables mini-batch training, as an alternative to HMC for posterior estimation in the latent space of GANs. A series of numerical experiments, including a nonlinear differential equation problem and a 100-dimensional (100D) Darcy problem, are conducted to demonstrate that the NFs with full-/mini-batch training are able to achieve similar accuracy to the "gold standard" HMC. Moreover, the mini-batch training of the NF makes it a promising tool for quantifying uncertainty in solving high-dimensional partial differential equation (PDE) problems with big data.
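As a minimal sketch of an NF building block (a single planar flow layer, one of the simplest NF transforms; the architecture actually used in the paper may differ), the transform and the log-determinant that enters the VI objective look like:

```python
import numpy as np

def planar_flow(z, u, w, b):
    """One planar normalizing-flow layer: f(z) = z + u * tanh(w.z + b).
    Returns the transformed samples and log|det Jacobian|; the density
    update is log q(f(z)) = log q(z) - logdet."""
    a = np.tanh(z @ w + b)                  # (N,) nonlinearity per sample
    f = z + np.outer(a, u)                  # (N, d) transformed samples
    psi = np.outer(1.0 - a**2, w)           # (N, d): h'(w.z+b) * w
    logdet = np.log(np.abs(1.0 + psi @ u))  # (N,) log|det Jacobian|
    return f, logdet

rng = np.random.default_rng(1)
z = rng.standard_normal((5, 2))             # base (latent) samples
u, w, b = np.array([0.1, 0.0]), np.array([1.0, -1.0]), 0.0
f, logdet = planar_flow(z, u, w, b)
```

Stacking many such layers yields a flexible variational posterior whose density is tractable, which is what allows the ELBO (and hence mini-batch gradients) to be computed in closed form.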
    Towards a unified nonlocal, peridynamics framework for the coarse-graining of molecular dynamics data with fractures
    H. Q. YOU, X. XU, Y. YU, S. SILLING, M. D'ELIA, J. FOSTER
    2023, 44(7):  1125-1150.  doi:10.1007/s10483-023-2996-8
    Molecular dynamics (MD) has served as a powerful tool for designing materials with reduced reliance on laboratory testing. However, the use of MD directly to treat the deformation and failure of materials at the mesoscale is still largely beyond reach. In this work, we propose a learning framework to extract a peridynamics model as a mesoscale continuum surrogate from MD simulated material fracture data sets. First, we develop a novel coarse-graining method to automatically handle the material fracture and its corresponding discontinuities in the MD displacement data sets. Inspired by the weighted essentially non-oscillatory (WENO) scheme, the key idea lies in an adaptive procedure that automatically chooses the locally smoothest stencil and then reconstructs the coarse-grained material displacement field as piecewise smooth solutions containing discontinuities. Then, based on the coarse-grained MD data, a two-phase optimization-based learning approach is proposed to infer the optimal peridynamics model with a damage criterion. In the first phase, we identify the optimal nonlocal kernel function from the data sets without material damage to capture the material stiffness properties. Then, in the second phase, the material damage criterion is learned as a smoothed step function from the data with fractures. As a result, a peridynamics surrogate is obtained. As a continuum model, our peridynamics surrogate can be employed in further prediction tasks with grid resolutions different from those used in training, and hence allows for substantial reductions in computational cost compared with MD. We illustrate the efficacy of the proposed approach with several numerical tests for the dynamic crack propagation problem in a single-layer graphene. Our tests show that the proposed data-driven model is robust and generalizable, in the sense that it is capable of modeling the initialization and growth of fractures under discretization and loading settings that differ from those used during training.
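The WENO-inspired smoothest-stencil selection can be sketched as follows; the smoothness indicator below is a deliberately crude stand-in for the actual WENO indicators:

```python
import numpy as np

def smoothest_stencil(u, i, r=2):
    """WENO-inspired sketch: among the candidate stencils of width r+1 that
    contain node i, pick the one with the smallest smoothness indicator
    (here, sum of squared first differences), so the reconstruction around
    node i avoids crossing a discontinuity."""
    best, best_beta = None, np.inf
    for s in range(i - r, i + 1):            # candidate stencil start indices
        if s < 0 or s + r >= len(u):
            continue
        window = u[s : s + r + 1]
        beta = np.sum(np.diff(window) ** 2)  # crude smoothness indicator
        if beta < best_beta:
            best, best_beta = s, beta
    return best                              # start index of smoothest stencil

u = np.array([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])  # jump between indices 2 and 3
s = smoothest_stencil(u, 2)
# Node 2 sits just left of the jump, so the stencil {0, 1, 2} is chosen.
```

Reconstructing from the selected one-sided stencil is what lets the coarse-grained displacement field stay piecewise smooth across a crack.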
    Deep convolutional Ritz method: parametric PDE surrogates without labeled data
    J. N. FUHG, A. KARMARKAR, T. KADEETHUM, H. YOON, N. BOUKLAS
    2023, 44(7):  1151-1174.  doi:10.1007/s10483-023-2992-6
    Parametric surrogate models for partial differential equations (PDEs) are a necessary component of many applications in the computational sciences, and convolutional neural networks (CNNs) have proven to be an excellent tool for generating these surrogates when parametric fields are present. CNNs are commonly trained on labeled data based on one-to-one sets of parameter-input and PDE-output fields. Recently, residual-based deep convolutional physics-informed neural network (DCPINN) solvers for parametric PDEs have been proposed to build surrogates without the need for labeled data. These allow for the generation of surrogates without an expensive offline phase. In this work, we present an alternative formulation termed the deep convolutional Ritz method (DCRM) as a parametric PDE solver. The approach is based on the minimization of energy functionals, which lowers the order of the differential operators compared with residual-based methods. Based on studies involving the Poisson equation with a spatially parameterized source term and boundary conditions, we find that CNNs trained on labeled data outperform DCPINNs in convergence speed and generalization ability. The surrogates generated from the DCRM, however, converge significantly faster than their DCPINN counterparts and prove to generalize faster and better than the surrogates obtained from both CNNs trained on labeled data and DCPINNs. This hints that the DCRM could make possible PDE solution surrogates trained without labeled data.
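To make the energy-minimization idea concrete, here is a 1D finite-difference sketch of the Ritz energy for the Poisson problem (the paper works with convolutional discretizations in higher dimensions; this toy version only illustrates why the true solution minimizes the functional, and why the energy involves first rather than second derivatives):

```python
import numpy as np

def ritz_energy(u, f, h):
    """Discrete Ritz energy for -u'' = f on a 1D grid with spacing h:
    E[u] = sum( 0.5*|du/dx|^2 ) * h - sum( f*u ) * h.
    Only first derivatives of u appear, unlike the residual |u'' + f|^2."""
    du = np.diff(u) / h                      # first differences (midpoints)
    grad_term = 0.5 * np.sum(du**2) * h
    src_term = np.sum(f * u) * h
    return grad_term - src_term

n = 101
x = np.linspace(0.0, 1.0, n)
h = x[1] - x[0]
f = np.pi**2 * np.sin(np.pi * x)             # minimizer is u(x) = sin(pi*x)
exact = np.sin(np.pi * x)
perturbed = exact + 0.1 * np.sin(2 * np.pi * x)   # also satisfies the BCs
# The exact solution has lower energy than any admissible perturbation.
```

A DCRM-style network would output `u` on the grid and be trained by minimizing `ritz_energy` directly, with no labeled output fields.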
    Gaussian process hydrodynamics
    H. OWHADI
    2023, 44(7):  1175-1198.  doi:10.1007/s10483-023-2990-9
    We present a Gaussian process (GP) approach, called Gaussian process hydrodynamics (GPH), for approximating the solution to the Euler and Navier-Stokes (NS) equations. Similar to smoothed particle hydrodynamics (SPH), GPH is a Lagrangian particle-based approach that involves the tracking of a finite number of particles transported by a flow. However, these particles do not represent mollified particles of matter but carry discrete/partial information about the continuous flow. Closure is achieved by placing a divergence-free GP prior $\xi$ on the velocity field and conditioning it on the vorticity at the particle locations. Known physics (e.g., the Richardson cascade and velocity-increment power laws) is incorporated into the GP prior by using physics-informed additive kernels. This is equivalent to expressing $\xi$ as a sum of independent GPs $\xi^l$, which we call modes, acting at different scales (each mode $\xi^l$ self-activates to represent the formation of eddies at the corresponding scales). This approach enables a quantitative analysis of the Richardson cascade through the analysis of the activation of these modes, and enables us to analyze coarse-grained turbulence statistically rather than deterministically. Because GPH is formulated by using the vorticity equations, it does not require solving a pressure equation. By enforcing incompressibility and fluid-structure boundary conditions through the selection of a kernel, GPH requires significantly fewer particles than SPH. Because GPH has a natural probabilistic interpretation, the numerical results come with uncertainty estimates, which enables their incorporation into an uncertainty quantification (UQ) pipeline and the addition/removal of particles (quanta of information) in an adapted manner. The proposed approach is amenable to analysis because it inherits the complexity of state-of-the-art solvers for dense kernel matrices, and it results in a natural definition of turbulence as information loss. Numerical experiments support the importance of selecting physics-informed kernels and illustrate the major impact of such kernels on accuracy and stability. Because the proposed approach has a Bayesian interpretation, it naturally enables data assimilation, prediction, and estimation by mixing simulation data and experimental data.
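The conditioning step at the heart of any GP method can be sketched generically as follows; a scalar RBF kernel stands in here for the divergence-free, physics-informed kernels of GPH, and the variable names are illustrative:

```python
import numpy as np

def gp_condition(K, K_star, y, jitter=1e-8):
    """GP posterior mean at test points: m* = K* (K + jitter*I)^{-1} y,
    computed stably via a Cholesky factorization. In GPH the observations y
    would be vorticity values at the particle locations."""
    n = K.shape[0]
    L = np.linalg.cholesky(K + jitter * np.eye(n))
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return K_star @ alpha

# Squared-exponential (RBF) kernel as a generic stand-in.
rbf = lambda a, b: np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2)

xp = np.array([-1.0, 0.0, 1.0])          # "particle" locations
xs = np.array([0.0])                     # test location
y = np.sin(xp)                           # observed field values
m = gp_condition(rbf(xp, xp), rbf(xs, xp), y)
# By symmetry of the data, the posterior mean at x = 0 is (numerically) zero.
```

Swapping the kernel, e.g. for a divergence-free matrix-valued one, changes which physics the posterior respects without changing this conditioning machinery.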
    A dive into spectral inference networks: improved algorithms for self-supervised learning of continuous spectral representations
    J. WU, S. F. WANG, P. PERDIKARIS
    2023, 44(7):  1199-1224.  doi:10.1007/s10483-023-2998-7
    We propose a self-supervised learning framework for finding the dominant eigenfunction-eigenvalue pairs of linear and self-adjoint operators. We represent the target eigenfunctions with coordinate-based neural networks and employ Fourier positional encodings to enable the approximation of high-frequency modes. We formulate a self-supervised training objective for spectral learning and propose a novel regularization mechanism to ensure that the network finds the exact eigenfunctions instead of a space spanned by the eigenfunctions. Furthermore, we investigate the effect of weight normalization as a mechanism to alleviate the risk of recovering linearly dependent modes, allowing us to accurately recover a large number of eigenpairs. The effectiveness of our methods is demonstrated across a collection of representative benchmarks, including both local and non-local diffusion operators, as well as high-dimensional time-series data from a video sequence. Our results indicate that the present algorithm can outperform competing approaches in terms of both approximation accuracy and computational cost.
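For contrast with the neural approach, the underlying spectral problem can be illustrated with classical power iteration on a discretized self-adjoint operator (a stand-in matrix example, not the paper's method):

```python
import numpy as np

def dominant_eigenpair(A, iters=500, seed=0):
    """Power-iteration sketch of the spectral problem that spectral inference
    networks tackle with neural networks: find the dominant eigenvalue and
    eigenvector of a symmetric matrix (a discretized operator)."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[0])
    for _ in range(iters):
        v = A @ v                  # apply the operator
        v /= np.linalg.norm(v)     # renormalize to prevent blow-up
    lam = v @ A @ v                # Rayleigh quotient of the unit vector v
    return lam, v

# 1D discrete Laplacian with Dirichlet BCs; eigenvalues 2 - 2*cos(k*pi/(n+1)).
n = 10
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
lam, v = dominant_eigenpair(A)
```

The neural framework replaces the explicit eigenvector `v` with a coordinate-based network evaluated on samples, which is what lets it scale to continuous and high-dimensional operators where no matrix can be formed.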