Zing Forum


RHEED-AI: A Deep Learning-Driven Real-Time Recognition System for Molecular Beam Epitaxy Growth Modes

The RHEED-AI project integrates the EfficientNet deep learning architecture into Molecular Beam Epitaxy (MBE) technology, enabling automatic classification and real-time monitoring of five epitaxial growth modes, providing AI-driven quality assurance for semiconductor material growth.

Tags: Deep Learning · Molecular Beam Epitaxy · RHEED · Materials Science · Computer Vision · EfficientNet · Semiconductors · Transfer Learning · Real-Time Monitoring
Published 2026-05-16 13:20 · Recent activity 2026-05-16 13:29 · Estimated read: 7 min

Section 01

Introduction: RHEED-AI—An AI-Driven Real-Time Recognition System for MBE Growth Modes

In the field of semiconductor and nanomaterial fabrication, Molecular Beam Epitaxy (MBE) is a key technology for atomic-scale control of thin-film growth, but judging the growth mode in real time has long been a core challenge. The RHEED-AI project introduces the EfficientNet deep learning architecture into Reflection High-Energy Electron Diffraction (RHEED) image analysis, achieving automatic classification and real-time monitoring of five epitaxial growth modes and providing AI-driven quality assurance for semiconductor material growth.


Section 02

Technical Background: Challenges of RHEED and Epitaxial Growth Modes

Reflection High-Energy Electron Diffraction (RHEED) is a real-time characterization tool for MBE systems; diffraction patterns reflect the periodicity of surface atomic arrangements. Different growth modes correspond to distinct diffraction features:

  • Layered growth mode (2D): clear Laue streaks (streaky)
  • Island growth mode (3D): reciprocal-lattice spots (spotty)
  • Layer-island mixed mode: modulated streaks
  • Amorphous/polycrystalline surface: diffuse background
  • Anomalous spots: spots at irregular positions

Traditional methods rely on empirical visual recognition, which is time-consuming, labor-intensive, and prone to subjective errors.
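The five diffraction signatures above map directly onto the class labels the classifier predicts (the names streaky, spotty, modulated_streaks, diffuse, and anomalous_spots appear in the evaluation section below). A minimal sketch of that mapping, with the dictionary ordering chosen here for illustration:

```python
# Mapping from classifier label to the RHEED diffraction signature it denotes.
# Class names follow those reported in the evaluation section; the ordering
# here is an illustrative assumption.
GROWTH_MODES = {
    "streaky": "Layered growth (2D): clear Laue streaks",
    "spotty": "Island growth (3D): reciprocal-lattice spots",
    "modulated_streaks": "Layer-island mixed mode: modulated streaks",
    "diffuse": "Amorphous/polycrystalline surface: diffuse background",
    "anomalous_spots": "Spots at irregular positions",
}

def describe(label: str) -> str:
    """Return a human-readable description for a predicted class label."""
    return GROWTH_MODES.get(label, "unknown growth mode")
```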


Section 03

Methodology: EfficientNet-Based Deep Learning Architecture and Training Strategy

System Architecture

Adopting transfer learning with EfficientNetB0 as the backbone network:

  • Input layer: 224×224×3 RGB images (normalized to match the preprocessing expected by the pre-trained weights)
  • Feature extractor: Frozen EfficientNetB0 (≈4 million parameters)
  • Global average pooling + batch normalization
  • Classification head: 2 fully connected layers (256/128 units, ReLU activation) + Dropout (0.4/0.3), outputting Softmax distribution for 5 classes
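The stated parameter budget can be checked with simple bookkeeping. The sketch below assumes EfficientNetB0's global-average-pooled feature vector has 1280 dimensions (true of the standard Keras implementation) and uses an approximate constant for the frozen backbone size; it is an illustration, not the project's code:

```python
# Rough parameter bookkeeping for the classification head described above.
FEATURES = 1280               # EfficientNetB0 pooled feature dimension
BACKBONE_PARAMS = 4_050_000   # approximate frozen EfficientNetB0 size (assumption)

def dense_params(n_in: int, n_out: int) -> int:
    """Weights plus biases of a fully connected layer."""
    return n_in * n_out + n_out

trainable = (
    2 * FEATURES                    # batch norm: gamma and beta
    + dense_params(FEATURES, 256)   # first FC layer (256 units)
    + dense_params(256, 128)        # second FC layer (128 units)
    + dense_params(128, 5)          # 5-class softmax output
)
total = trainable + BACKBONE_PARAMS + 2 * FEATURES  # plus frozen BN statistics

print(trainable)  # ~364,000 trainable parameters
print(total)      # ~4.4 million total
```

This reproduces the figures quoted below: roughly 360,000 trainable parameters out of about 4.4 million in total.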

Two-Stage Training

  1. Classification head training (epochs 1-20): freeze the backbone and train only the fully connected layers at a learning rate of 1×10⁻⁴, with early stopping
  2. Fine-tuning (epochs 21-50): unfreeze the last 30 layers of the backbone and train at a learning rate of 1×10⁻⁵, with learning-rate decay

Total parameters are approximately 4.4 million, with 360,000 trainable parameters.
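The two-stage schedule above can be expressed as a per-epoch learning-rate function; in Keras such a function could be passed to `tf.keras.callbacks.LearningRateScheduler` alongside `EarlyStopping`. The decay factor in stage two is an illustrative assumption, since the source does not specify it:

```python
def two_stage_lr(epoch: int, decay: float = 0.97) -> float:
    """Learning rate for the two-stage schedule described above.

    Epochs 0-19:  head-only training at 1e-4 (backbone frozen).
    Epochs 20-49: fine-tuning at 1e-5 with exponential decay
                  (the decay factor is an illustrative assumption).
    """
    if epoch < 20:
        return 1e-4
    return 1e-5 * decay ** (epoch - 20)
```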


Section 04

Experimental Evidence: Model Performance Evaluation Results

Performance on 60 synthetic validation images (15% of the dataset):

  • Overall accuracy: 95%
  • Macro-average F1 score: 0.95
  • Top-2 accuracy: 100%

The diffuse, modulated_streaks, and streaky classes each achieve an F1 score of 1.00; confusion occurs only between anomalous_spots and spotty, whose diffraction signatures differ in physically subtle ways. The evaluation is based on a physics-inspired synthetic data generator; the next step is validation on real laboratory images.
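The three reported metrics are standard and easy to compute from a matrix of predicted class probabilities. A self-contained NumPy sketch (function name and signature are illustrative, not the project's API):

```python
import numpy as np

def evaluate(probs: np.ndarray, labels: np.ndarray, n_classes: int = 5):
    """Compute accuracy, macro-averaged F1, and top-2 accuracy.

    probs:  (N, n_classes) predicted class probabilities
    labels: (N,) integer ground-truth labels
    """
    preds = probs.argmax(axis=1)
    accuracy = float((preds == labels).mean())

    # Macro F1: unweighted mean of per-class F1 scores.
    f1s = []
    for c in range(n_classes):
        tp = np.sum((preds == c) & (labels == c))
        fp = np.sum((preds == c) & (labels != c))
        fn = np.sum((preds != c) & (labels == c))
        f1s.append(2 * tp / (2 * tp + fp + fn) if tp else 0.0)
    macro_f1 = float(np.mean(f1s))

    # Top-2 accuracy: ground truth among the two highest-probability classes.
    top2 = np.argsort(probs, axis=1)[:, -2:]
    top2_acc = float(np.mean([labels[i] in top2[i] for i in range(len(labels))]))
    return accuracy, macro_f1, top2_acc
```

Top-2 accuracy of 100% means the correct mode is always among the model's two most confident guesses, even when the top-1 prediction is wrong.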


Section 05

Functionality and Implementation: Real-Time Monitoring and Technical Details

Real-Time Monitoring Features

  • Extracts the specular-beam intensity curve I(t) and analyzes its oscillation frequency to calculate the deposition rate
  • Provides a PyQt GUI supporting training, real-time inference, and full modes, with input from video files or a camera
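The frequency analysis above can be sketched with a plain FFT: one full RHEED intensity oscillation corresponds to the completion of one monolayer, so the dominant frequency of I(t) is the deposition rate in monolayers per second. The function below is a minimal illustration, not the project's implementation:

```python
import numpy as np

def oscillation_frequency(intensity: np.ndarray, dt: float) -> float:
    """Estimate the dominant oscillation frequency of I(t) in Hz.

    Since one intensity oscillation corresponds to one completed monolayer,
    this frequency equals the deposition rate in ML/s.
    intensity: specular-beam intensity samples; dt: sampling interval (s).
    """
    signal = intensity - intensity.mean()        # remove the DC component
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=dt)
    return float(freqs[np.argmax(spectrum[1:]) + 1])  # skip the zero-frequency bin
```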

Technical Implementation

  • Development language: Python 3.10/3.11
  • Framework: TensorFlow 2.x (Windows users are advised to use WSL2 or the DirectML plugin)
  • Data organization: Real data stored by category; synthetic samples are automatically generated when no real data is available
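The fallback to synthetic samples mentioned above could, in toy form, render the characteristic patterns as simple intensity maps: vertical Gaussian streaks for 2D growth, discrete spots for 3D growth, a broad diffuse background otherwise. This is purely an illustrative stand-in for the project's physics-inspired generator:

```python
import numpy as np

def synthetic_pattern(mode: str, size: int = 224, rng=None) -> np.ndarray:
    """Toy synthetic RHEED pattern (illustrative only, not the project's
    actual generator). Returns a (size, size) float image in [0, 1]."""
    if rng is None:
        rng = np.random.default_rng(0)
    img = 0.05 * rng.random((size, size))           # weak noise background
    xs = np.linspace(-1, 1, size)
    if mode == "streaky":                           # 2D growth: vertical streaks
        for cx in (-0.5, 0.0, 0.5):
            img += np.exp(-((xs - cx) ** 2) / 0.002)[None, :]
    elif mode == "spotty":                          # 3D growth: discrete spots
        yy, xx = np.meshgrid(xs, xs, indexing="ij")
        for cy, cx in [(-0.4, -0.5), (-0.4, 0.0), (-0.4, 0.5),
                       (0.2, -0.5), (0.2, 0.0), (0.2, 0.5)]:
            img += np.exp(-((xx - cx) ** 2 + (yy - cy) ** 2) / 0.005)
    else:                                           # diffuse background
        img += 0.3 * np.exp(-(xs ** 2)[None, :] / 0.5)
    return np.clip(img, 0.0, 1.0)
```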

The project cites open-source RHEED datasets and research results from the University of Notre Dame, the University of Delaware, and other institutions.


Section 06

Application Prospects and Future Plans

Application Value

  • Real-time monitoring of growth quality, timely detection of deviations
  • Reduce reliance on experience, shorten training cycles
  • Accumulate structured data to support process optimization

Future Plans

  • Validation with real experimental data
  • Improve oscillation frequency analysis algorithms
  • Support ONNX format export
  • Enhance synthetic data for camera geometric parameter variations

Section 07

Conclusion: An Interdisciplinary Example of AI Empowering Materials Science

RHEED-AI demonstrates the potential of deep learning in precision materials preparation, transforming expert experience into a quantifiable, automated analysis process and providing an example of interdisciplinary AI-for-Science research. With further functional improvements, it could become standard equipment in next-generation materials laboratories.