Zing Forum


FHE-native Mamba-3: FHE-native Architecture Unleashes a New Era of Privacy-Preserving LLM Inference

Explore the deep integration of Fully Homomorphic Encryption (FHE) and Mamba's state space model, and learn how to directly perform large language model (LLM) inference on ciphertext, achieving dual breakthroughs in data privacy and model performance.

Tags: Fully Homomorphic Encryption (FHE) · Mamba · State Space Model · Privacy-Preserving LLM Inference · Encrypted Machine Learning · Homomorphic Computation · Data Privacy · Transformer Alternative
Published 2026-05-10 20:43 · Recent activity 2026-05-10 20:48 · Estimated read: 6 min

Section 01

Introduction: FHE-native Mamba-3, an FHE-native Architecture for Privacy-Preserving LLM Inference

This article introduces the FHE-native Mamba-3 project, which deeply integrates Fully Homomorphic Encryption (FHE) with Mamba's state space model to build a native architecture optimized for encrypted inference. It addresses the inefficiency of traditional Transformers in FHE environments, achieves dual breakthroughs in data privacy and model performance, and paves a new path for privacy-preserving large language model (LLM) inference.


Section 02

Background: Limitations of Existing Privacy-Preserving Solutions and Transformer's Dilemma in FHE

Existing privacy-preserving solutions have shortcomings: differential privacy reduces accuracy, SMPC has high communication overhead, and TEE faces side-channel risks. Meanwhile, the O(n²) attention mechanism of traditional Transformers leads to extremely high computational complexity in FHE environments, and non-linear operations require complex approximations, making porting impractical. Although FHE provides strict privacy protection, its huge computational overhead has spurred the demand for FHE-native architectures.
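
To see why non-linear operations are the sticking point: arithmetic FHE schemes such as CKKS evaluate only additions and multiplications, so a function like the exponential inside softmax must be replaced by a low-degree polynomial on a bounded interval. The sketch below uses plain NumPy (not an FHE library); the interval and degree are illustrative assumptions, not values from the project:

```python
import numpy as np

# FHE schemes evaluate only + and *, so exp(x) -- the core of softmax --
# is replaced by a low-degree polynomial on a bounded input interval.
# Degree 7 and the interval [-4, 0] are illustrative assumptions.
xs = np.linspace(-4.0, 0.0, 2001)
coeffs = np.polyfit(xs, np.exp(xs), deg=7)   # least-squares polynomial fit
approx = np.polyval(coeffs, xs)
max_err = np.max(np.abs(approx - np.exp(xs)))
print(f"degree-7 max error on [-4, 0]: {max_err:.2e}")
```

Every additional polynomial degree costs ciphertext multiplications and multiplicative depth, which is why each non-linearity a Transformer relies on makes an FHE port more expensive.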


Section 03

Methodology: Advantages of Mamba Model and FHE-native Architecture Design

The Mamba model is based on the State Space Model (SSM), achieving O(n) linear complexity, a selective mechanism, and hardware-aware optimization, making it suitable for FHE environments. FHE-native Mamba-3 adopts a native design: selecting homomorphic-friendly operations (linear transformations/state updates), optimizing quantization encoding, and using a hierarchical encryption strategy. Its core components include a selective SSM layer (performing selection and state updates on ciphertext), a convolutional projection layer (implemented via homomorphic matrix multiplication), and secure output decoding.
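
The FHE-friendliness of the SSM core can be shown with a plaintext sketch of the recurrence: the state update and readout use only additions and multiplications, exactly the operations homomorphic schemes support natively. The shapes, the diagonal state matrix, and all values below are illustrative assumptions, not parameters from the project:

```python
import numpy as np

# Plaintext sketch of the linear SSM recurrence that an FHE-native design
# would evaluate on ciphertexts:
#   h_t = A h_{t-1} + B u_t   (state update)
#   y_t = C h_t               (readout)
# Both steps are pure additions/multiplications, i.e. homomorphic-friendly.
rng = np.random.default_rng(0)
d_state, seq_len = 4, 6
A = np.diag(rng.uniform(0.5, 0.9, d_state))  # diagonal state matrix (stable)
B = rng.normal(size=(d_state, 1))
C = rng.normal(size=(1, d_state))
u = rng.normal(size=seq_len)                 # input sequence

h = np.zeros((d_state, 1))
ys = []
for t in range(seq_len):                     # O(n) scan: one update per token
    h = A @ h + B * u[t]                     # linear state update
    ys.append((C @ h).item())                # linear readout
print(ys)
```

Under encryption, each matrix product becomes a homomorphic matrix-vector multiplication; no step requires approximating a non-linearity, which is the property the architecture exploits.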


Section 04

Evidence: Performance Improvements and Security Guarantees

In terms of performance, compared to Transformers in FHE environments, FHE-native Mamba-3 reduces computational complexity from O(n²) to O(n), cuts circuit depth by over 60%, has more compact memory usage, and narrows the performance gap to an acceptable range. In terms of security, based on standard FHE assumptions, it provides semantic security, computational privacy, and anti-collusion protection, and its FHE-native design increases the difficulty of attacks.
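
A back-of-the-envelope count of multiplications (the dominant cost in FHE) illustrates the O(n²)-to-O(n) claim. The model dimension and per-token cost formulas below are illustrative assumptions, not measurements from the project:

```python
# Rough homomorphic-multiplication counts for one layer over n tokens.
# Constants are illustrative; only the asymptotic shapes come from the text.

def attention_mults(n: int, d: int) -> int:
    # QK^T and (scores)V: two n x n x d matrix products -> O(n^2 * d)
    return 2 * n * n * d

def ssm_scan_mults(n: int, d: int) -> int:
    # one state update (~d*d) plus readout (~d) per token -> O(n * d^2)
    return n * (d * d + d)

n, d = 2048, 64
ratio = attention_mults(n, d) // ssm_scan_mults(n, d)
print(ratio)  # the gap widens linearly as the sequence length n grows
```

For these toy constants the scan already needs tens of times fewer multiplications, and because one term is quadratic in n while the other is linear, the advantage grows with context length.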


Section 05

Application Scenarios: Practical Value in Privacy-Sensitive Fields

1. Healthcare: running diagnostic models on encrypted medical records, in compliance with HIPAA regulations.
2. Finance: detecting fraud on encrypted transaction data while meeting compliance requirements.
3. Cross-organizational collaboration: multiple parties contribute encrypted data to complete inference jointly.
4. Edge devices: using cloud models on locally encrypted data without trusting the service provider.

Section 06

Challenges and Future: Directions to Break Through Bottlenecks

Current limitations include high startup overhead, room for improvement in batch-processing efficiency, and limited supported model scale. Future directions include hardware acceleration (FPGA/ASIC integration), hybrid solutions (TEE + FHE), model compression (quantization and pruning), and standardized interfaces (compatibility with PyTorch/Hugging Face).


Section 07

Conclusion: A Milestone in the New Chapter of Privacy Computing

FHE-native Mamba-3 is an important milestone in privacy-preserving machine learning, proving that architectural innovation can enable practical privacy-preserving LLM inference and open up new possibilities for sensitive industries. With the advancement of FHE technology and hardware, the vision of "data available but not visible" will move toward production, and this project lays the foundation for the next generation of privacy models.