PReD: The First Multimodal Large Model in Electromagnetic Perception, Enabling a Closed Loop of Perception-Recognition-Decision Intelligence

Tags: electromagnetic perception · multimodal large model · signal processing · modulation recognition · RF fingerprinting · cognitive radio · foundation model · PReD
Published 2026-03-30 16:47 · Recent activity 2026-03-31 10:17 · Estimated read 6 min

Section 01

Introduction

PReD is the first foundation model in the electromagnetic domain to cover the closed loop of perception, recognition, and decision intelligence. Trained on the PReD-1.3M dataset of 1.3 million samples, it supports multiple tasks such as signal detection, modulation recognition, and radio frequency fingerprinting, and achieves SOTA performance on both open-source and self-collected datasets. Its core design integrates multi-dimensional representations of electromagnetic signals with the reasoning capabilities of large language models, opening a new path for intelligent electromagnetic signal processing.

Section 02

Background: Bottlenecks and Challenges in Intelligent Processing of Electromagnetic Signals

Multimodal large language models have performed excellently in general vision, text understanding, and other fields, but the electromagnetic domain has long faced challenges such as data scarcity and insufficient integration of domain knowledge. Electromagnetic signals are the core carrier of communication, radar, and similar systems, yet traditional single-task models struggle to form end-to-end understanding and decision-making capabilities over them. Existing models either lack sufficient domain data support or fail to achieve collaborative optimization through multi-task knowledge transfer.

Section 03

Core Design and Training Strategy of the PReD Model

PReD builds a multi-perspective signal representation covering three core views: raw time-domain waveforms (instantaneous amplitude/phase), frequency-domain spectrograms (frequency distribution over time), and constellation diagrams (modulation symbol distribution). It adopts a three-stage training strategy: basic alignment (vision-language pre-training to establish signal-semantic associations), task unification (multi-task learning with a flexible prompting mechanism for task switching), and closed-loop optimization (end-to-end perception-recognition-decision reasoning).
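To make the three views concrete, here is a minimal numpy sketch that derives them from a synthetic QPSK burst; the function and variable names are illustrative, not from the paper, and the paper's actual preprocessing pipeline may differ:

```python
import numpy as np

def signal_views(iq, n_fft=64):
    """Split a complex baseband signal into three PReD-style views."""
    # Time-domain view: instantaneous amplitude and phase.
    amplitude = np.abs(iq)
    phase = np.angle(iq)
    # Frequency-domain view: magnitude spectrogram from non-overlapping FFT frames.
    n_frames = len(iq) // n_fft
    frames = iq[: n_frames * n_fft].reshape(n_frames, n_fft)
    spectrogram = np.abs(np.fft.fftshift(np.fft.fft(frames, axis=1), axes=1))
    # Constellation view: samples scattered in the IQ plane (real vs. imaginary part).
    constellation = np.stack([iq.real, iq.imag], axis=1)
    return amplitude, phase, spectrogram, constellation

# Synthetic QPSK burst: 4096 unit-amplitude symbols at one of four phases.
rng = np.random.default_rng(0)
iq = np.exp(1j * (np.pi / 4 + (np.pi / 2) * rng.integers(0, 4, 4096)))
amp, phase, spec, const = signal_views(iq)
```

In a real pipeline each view would then be rendered or tokenized before entering the model; the point here is only that all three are cheap, lossless-to-compute transforms of the same IQ stream.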

Section 04

PReD-1.3M Dataset and PReD-Bench Evaluation Benchmark

The PReD-1.3M dataset contains 1.3 million high-quality samples, supporting six core tasks such as signal detection, modulation recognition, and parameter estimation. It integrates open-source and self-collected data to ensure scenario diversity. The PReD-Bench evaluation benchmark assesses model capabilities from multiple dimensions: task accuracy, cross-task transfer, generalization performance, and reasoning quality.
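To show how such a benchmark pairs prompts with tasks, here is a minimal sketch of what one record and a per-task accuracy score might look like; the field names and file names are hypothetical assumptions, not the released PReD-Bench format:

```python
# Hypothetical layout of a single benchmark record; every field name here is
# an illustrative assumption, not taken from the released PReD-Bench.
record = {
    "task": "modulation_recognition",  # one of the six core tasks
    "views": ["waveform.npy", "spectrogram.png", "constellation.png"],
    "prompt": "Identify the modulation scheme of this signal.",
    "answer": "QPSK",
}

def task_accuracy(predictions, references):
    """Exact-match accuracy, the simplest per-task score a benchmark can report."""
    hits = sum(p == r for p, r in zip(predictions, references))
    return hits / len(references)

acc = task_accuracy(["QPSK", "BPSK", "16QAM"], ["QPSK", "QPSK", "16QAM"])
```

Dimensions such as cross-task transfer and reasoning quality need richer scoring than exact match, but a per-task accuracy like this is the usual starting point.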

Section 05

Experimental Results: SOTA Performance Across the Board

On PReD-Bench, PReD achieves SOTA across multiple tasks: signal detection at low signal-to-noise ratios outperforms traditional methods; modulation recognition accuracy exceeds 95%; and radio frequency fingerprint extraction learns hardware features automatically, breaking through the limits of traditional hand-crafted feature design. Multi-task collaborative training improves each task's performance while preserving general multimodal understanding capabilities.

Section 06

Technical Significance and Application Prospects

PReD marks the entry of intelligent electromagnetic signal processing into the foundation-model era, verifying the feasibility of combining large language models with domain-specific signal processing. It applies to fields such as cognitive radio (intelligent spectrum sensing), electronic countermeasures (anti-jamming decision-making), IoT security (device authentication), and spectrum regulation (anomaly detection). The team plans to open-source the dataset and evaluation benchmark to drive community progress.

Section 07

Limitations and Future Directions

PReD still faces limitations in real-time performance (inference latency on edge devices), few-shot adaptation (rare signal types), and adversarial robustness (stability against malicious samples). Future directions include optimizing edge inference latency, developing active learning to reduce annotation dependence, and extending to radar signal processing and other fields.