Physics-Informed Neural Networks (PINN) for Solving the Allen-Cahn Phase Field Equation: An Intelligent PDE Solver Without Simulation Data

This project demonstrates how to use Physics-Informed Neural Networks (PINN) to solve the Allen-Cahn phase field equation from scratch, without any labeled simulation data, by training the network solely through minimizing a composite loss function that includes PDE residuals, initial conditions, and boundary conditions.

Tags: Physics-Informed Neural Networks (PINN) · Allen-Cahn equation · Partial differential equations (PDE) · Phase field model · Materials science · Automatic differentiation · Scientific machine learning · Mesh-free methods
Published 2026-05-11 19:54 · Recent activity 2026-05-11 20:06 · Estimated read 7 min

Section 01

Introduction: Core Value of Physics-Informed Neural Networks (PINN) for Solving the Allen-Cahn Equation

This project demonstrates how to use Physics-Informed Neural Networks (PINN) to solve the Allen-Cahn phase field equation from scratch, without labeled simulation data: the network is trained solely by minimizing a composite loss that combines the PDE residual with initial-condition and boundary-condition terms. As a key advance in scientific machine learning, PINNs sidestep limitations of traditional PDE solvers (such as finite difference and finite element methods), including mesh dependency and difficulty with complex geometries, providing a new tool for complex problems in fields such as materials science.
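The composite loss described above typically takes the following standard form; the sampling counts, weights, and notation here are our own, since the original does not spell them out:

```latex
\mathcal{L}(\theta) =
\underbrace{\frac{1}{N_r}\sum_{i=1}^{N_r}
  \Big|\partial_t\varphi_\theta - \varepsilon^2\Delta\varphi_\theta
  + \varphi_\theta^3 - \varphi_\theta\Big|^2_{(x_i,\,t_i)}}_{\text{PDE residual}}
+ \underbrace{\frac{1}{N_0}\sum_{j=1}^{N_0}
  \big|\varphi_\theta(x_j,0) - \varphi_0(x_j)\big|^2}_{\text{initial condition}}
+ \underbrace{\frac{1}{N_b}\sum_{k=1}^{N_b}
  \big|\mathcal{B}[\varphi_\theta](x_k,t_k)\big|^2}_{\text{boundary condition}}
```

Here φ_θ is the network's output, φ₀ the prescribed initial profile, and 𝓑 the boundary operator; weighting coefficients in front of each term are a common refinement.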


Section 02

Background: Limitations of Traditional PDE Methods and the Allen-Cahn Equation

Partial differential equations (PDEs) are fundamental tools for describing the physical world, but traditional solvers (the finite difference method, FDM, and the finite element method, FEM) have clear limitations: strong mesh dependency (fine meshes are expensive), difficulty with complex geometries, the curse of dimensionality in high-dimensional problems, and, for data-driven surrogate approaches, the need for large amounts of simulation data. The Allen-Cahn equation is a classic parabolic PDE modeling phase separation, of the form ∂φ/∂t = ε²Δφ − φ³ + φ. The phase field variable φ encodes the local phase state, so interface motion and topological changes are captured without explicitly tracking the interface.
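As a consistency check on the stated form (a standard derivation, not spelled out in the original), the Allen-Cahn equation is the L² gradient flow of the Ginzburg-Landau free energy with the double-well potential W(φ) = ¼(1 − φ²)²:

```latex
E[\varphi] = \int_\Omega \left( \frac{\varepsilon^2}{2}\,|\nabla\varphi|^2
  + \frac{1}{4}\left(1-\varphi^2\right)^2 \right) dx
\qquad
\frac{\delta E}{\delta \varphi} = -\varepsilon^2 \Delta\varphi + \varphi^3 - \varphi
```

```latex
\frac{\partial \varphi}{\partial t}
  = -\frac{\delta E}{\delta \varphi}
  = \varepsilon^2 \Delta\varphi - \varphi^3 + \varphi
```

This gradient-flow structure is also what justifies the "decreasing energy" check used later during validation.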


Section 03

Core Principles of PINN and Network Architecture Design

PINN embeds physical laws into the training objective: the network learns the PDE solution by minimizing a composite loss (PDE residual loss + initial condition loss + boundary condition loss). This project uses a fully connected MLP: input layer (x, t) → 4 hidden layers of 64 neurons each → output layer (φ). The tanh activation is chosen for being smooth, bounded, and nonlinear; with roughly 12,900 trainable parameters, the network balances expressive power and training efficiency.
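The architecture above can be sketched in a few lines of PyTorch. The layer sizes follow the text; the class name `PINN` and constructor arguments are our own choices, not taken from the original project:

```python
import torch
import torch.nn as nn

class PINN(nn.Module):
    """Fully connected MLP: (x, t) -> 4 tanh hidden layers of 64 -> phi."""
    def __init__(self, hidden=64, depth=4):
        super().__init__()
        layers, width = [], 2  # 2 inputs: x and t
        for _ in range(depth):
            layers += [nn.Linear(width, hidden), nn.Tanh()]
            width = hidden
        layers.append(nn.Linear(width, 1))  # 1 output: phi
        self.net = nn.Sequential(*layers)

    def forward(self, x, t):
        # x and t arrive as column tensors of shape (N, 1)
        return self.net(torch.cat([x, t], dim=1))

model = PINN()
n_params = sum(p.numel() for p in model.parameters())
# 2*64+64 + 3*(64*64+64) + 64+1 = 12737, i.e. the ~12,900 quoted above
```

Counting the weights and biases layer by layer gives 12,737 parameters, consistent with the "approximately 12,900" figure in the text.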


Section 04

Key Technologies: Automatic Differentiation and Training Strategies

Automatic differentiation is the technical core of PINN. High-order derivatives (such as the first-order time derivative and second-order spatial derivative of φ) are computed using PyTorch's reverse-mode automatic differentiation. Training strategies include: collocation point sampling (uniform/adaptive/boundary-enhanced), Adam optimizer, learning rate decay scheduling, and monitoring total loss, component losses, and residual distribution to optimize the training process.
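The derivative computation described above can be sketched with `torch.autograd.grad`. The helper name, the value of `EPS`, and the residual sign convention (matching the equation ∂φ/∂t = ε²Δφ − φ³ + φ given earlier) are our own illustration choices:

```python
import torch

EPS = 0.05  # interface-width parameter; an assumed value for illustration

def allen_cahn_residual(model, x, t):
    """PDE residual r = phi_t - eps^2 * phi_xx + phi^3 - phi, computed
    with reverse-mode autodiff; model maps (x, t) tensors -> phi."""
    x = x.clone().requires_grad_(True)
    t = t.clone().requires_grad_(True)
    phi = model(x, t)
    grad = lambda out, var: torch.autograd.grad(
        out, var, grad_outputs=torch.ones_like(out), create_graph=True)[0]
    phi_t = grad(phi, t)          # first-order time derivative
    phi_x = grad(phi, x)          # first-order spatial derivative
    phi_xx = grad(phi_x, x)       # second-order spatial derivative
    return phi_t - EPS**2 * phi_xx + phi**3 - phi
```

In training, this residual is evaluated at the sampled collocation points, squared, and averaged to form the PDE term of the composite loss; `create_graph=True` keeps the graph so the loss itself remains differentiable with respect to the network weights.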


Section 05

Result Validation and Physical Consistency Check

Validation of the PINN solution includes: quantitative accuracy comparison against a traditional numerical reference (e.g., finite differences); physical consistency checks (φ remaining within [-1, 1] and a monotonically decreasing free energy; note that the Allen-Cahn flow, unlike Cahn-Hilliard, does not conserve mass); and visualization (spatiotemporal evolution maps, interface position trajectories, energy curves) to show the phase field evolution intuitively.
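The consistency checks listed above can be sketched on a sampled 1-D solution history. The function name, the discrete Ginzburg-Landau energy, and the default `eps` are our own assumptions for illustration:

```python
import numpy as np

def check_solution(phi_history, dx, eps=0.05, tol=1e-3):
    """Physical consistency checks on phi_history[k, i]
    (time index k, uniform grid index i, spacing dx).
    Returns (bounded, energy_decreasing)."""
    phi = np.asarray(phi_history, dtype=float)
    # 1) boundedness: phi should stay in [-1, 1] up to a small tolerance
    bounded = bool(np.all(np.abs(phi) <= 1.0 + tol))
    # 2) discrete Ginzburg-Landau energy should be non-increasing in time
    grad = np.gradient(phi, dx, axis=1)
    energy = np.sum(0.5 * eps**2 * grad**2
                    + 0.25 * (1.0 - phi**2)**2, axis=1) * dx
    energy_decreasing = bool(np.all(np.diff(energy) <= tol))
    return bounded, energy_decreasing
```

A steady tanh interface profile passes both checks (its energy is constant, hence non-increasing), while any snapshot leaving [-1, 1] fails the boundedness check.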


Section 06

Limitations and Challenges of PINN

Challenges faced by PINN include: training difficulties (multi-task optimization, spectral bias, stiff problems), high computational cost (derivative calculation increases overhead, more significant for high-dimensional problems), and accuracy generally lower than specialized numerical methods, which may be insufficient for high-precision applications.


Section 07

Application Prospects and Expansion Directions of PINN

PINN has wide applications in materials science (crystal growth, fracture mechanics, porous media flow) and fluid mechanics (Navier-Stokes equations, multiphase flow); it can solve inverse problems (parameter identification, source term identification, shape optimization); and can be combined with traditional numerical methods, experimental data, and multi-fidelity data sources to expand its capabilities.


Section 08

Conclusion: Value of PINN in Scientific Machine Learning

PINN represents a significant advancement in scientific machine learning, proving that neural networks can learn from physical laws. With the development of technology and computing power, PINN is expected to play a greater role in scientific computing and engineering applications. This project provides a clear starting point for the principles and practice of PINN, reflecting the deep integration of physics and machine learning.