Empirical Study on the Parameter Efficiency Advantages of Hybrid Quantum Neural Networks in Financial Fraud Detection

This article presents a systematic benchmarking study of Hybrid Quantum Neural Networks (HQNNs) for financial fraud detection, comparing the parameter efficiency and predictive performance of quantum hybrid architectures against classical deep learning models.

Tags: Quantum Machine Learning · Hybrid Quantum Neural Networks · Financial Fraud Detection · Parameter Efficiency · NISQ · Variational Quantum Circuits · Class Imbalance · SMOTE · Explainable AI
Published 2026-05-01 04:42 · Recent activity 2026-05-01 04:54 · Estimated read 6 min

Section 01

Introduction

This article presents a systematic benchmark of the parameter efficiency and predictive performance of Hybrid Quantum Neural Networks (HQNNs) in financial fraud detection, comparing quantum hybrid architectures against classical deep learning models. Key finding: in the NISQ era, quantum hybrid models such as the Single-layer Hybrid Neural Network (SHNN) match the performance of classical models with far fewer parameters, a significant parameter-efficiency advantage that opens a new direction for resource-constrained scenarios.

Section 02

Research Background and Motivation

Quantum Machine Learning (QML) is constrained by NISQ-era hardware (qubit counts, coherence times), so quantum models cannot match the scale of classical deep networks. The core question: can quantum models achieve comparable performance with far fewer parameters? This matters for financial fraud detection, where the data is extremely imbalanced (fraud accounts for only 0.17% of transactions) and models must be interpretable and efficient to deploy.

Section 03

Dataset and Experimental Design

The Kaggle Credit Card Fraud dataset was used (284,807 transactions, 492 of them fraudulent). Features comprise 28 anonymized PCA components plus transaction amount and time. The experiment uses 5-fold stratified cross-validation, with SMOTE applied independently to the training split of each fold to handle class imbalance. Inputs to the quantum models are processed via RobustScaler normalization → PCA reduction to 8 dimensions → MinMax scaling to [0, π] to suit angle encoding; a sketch of this pipeline follows below.
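
Below is a minimal sketch of this pipeline, assuming scikit-learn and imbalanced-learn; the fold wiring, seeds, and function name are illustrative rather than taken from the paper.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold
from sklearn.preprocessing import RobustScaler, MinMaxScaler
from sklearn.decomposition import PCA
from imblearn.over_sampling import SMOTE

def make_fold_inputs(X, y, n_splits=5, seed=0):
    """Per-fold preprocessing: RobustScaler -> PCA(8) -> MinMax([0, pi]),
    all fitted on the training split only, then SMOTE on the training split."""
    skf = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=seed)
    for train_idx, test_idx in skf.split(X, y):
        X_tr, X_te = X[train_idx], X[test_idx]
        y_tr, y_te = y[train_idx], y[test_idx]

        scaler = RobustScaler().fit(X_tr)                      # robust to amount outliers
        pca = PCA(n_components=8).fit(scaler.transform(X_tr)) # 8 dims -> 8 qubits
        angle = MinMaxScaler(feature_range=(0, np.pi))         # angle-encoding range
        Z_tr = angle.fit_transform(pca.transform(scaler.transform(X_tr)))
        # Clip test values that fall outside the range seen during fitting
        Z_te = np.clip(angle.transform(pca.transform(scaler.transform(X_te))), 0, np.pi)

        # SMOTE only on the training fold -> no leakage into the test fold
        Z_tr, y_tr = SMOTE(random_state=seed).fit_resample(Z_tr, y_tr)
        yield Z_tr, y_tr, Z_te, y_te
```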

Section 04

Detailed Model Architecture

Seven models are compared:

  • Quantum hybrid models: SHNN (122 parameters: input → classical linear layer → variational quantum circuit → output layer; see the sketch after this list); Parallel Hybrid (489 parameters: a classical MLP branch concatenated with a quantum branch).
  • Classical baselines: SNN (3,201 parameters), TabNet (6,176), ResNet (8,897), FT-Transformer (14,869), SAINT (29,357).
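
A minimal sketch of an SHNN-style model, assuming PennyLane with the PyTorch interface. The 8 qubits, 2-layer circuit depth, and adjoint differentiation follow the paper's description; the specific embedding, entangling template, and layer widths are assumptions, so the exact parameter count here differs slightly from the reported 122.

```python
import pennylane as qml
import torch
from torch import nn

n_qubits, n_layers = 8, 2  # 8 PCA features -> 8 qubits, depth limited to 2
dev = qml.device("default.qubit", wires=n_qubits)

@qml.qnode(dev, interface="torch", diff_method="adjoint")
def vqc(inputs, weights):
    # Angle encoding: each feature in [0, pi] becomes a Y-rotation angle
    qml.AngleEmbedding(inputs, wires=range(n_qubits), rotation="Y")
    # Shallow entangling block; limited depth helps against barren plateaus
    qml.StronglyEntanglingLayers(weights, wires=range(n_qubits))
    return [qml.expval(qml.PauliZ(w)) for w in range(n_qubits)]

class SHNN(nn.Module):
    """Input -> classical linear layer -> VQC -> classical output layer."""
    def __init__(self):
        super().__init__()
        self.pre = nn.Linear(8, n_qubits)
        weight_shapes = {"weights": (n_layers, n_qubits, 3)}
        self.q = qml.qnn.TorchLayer(vqc, weight_shapes)
        self.post = nn.Linear(n_qubits, 1)

    def forward(self, x):
        # Re-bound the linear layer's output to [0, pi] before angle encoding
        return self.post(self.q(torch.pi * torch.sigmoid(self.pre(x))))
```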

Section 05

Key Experimental Results

Key findings:

  1. Parameter efficiency advantage: SHNN (122 parameters) achieves an MCC of 0.576, comparable to SNN (3,201 parameters) with roughly 1/26 the parameter count; its MCC per thousand parameters is 27 times that of SNN (a quick check of this metric follows the list).
  2. Performance trade-off: large classical models (e.g., ResNet) reach higher absolute MCC values (0.69-0.70), but at 73-240 times SHNN's parameter count, making them 60-197 times less parameter-efficient.
  3. Ablation validation: removing the Variational Quantum Circuit (VQC) drops performance to chance level, indicating the VQC carries essentially all of the predictive signal.
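
The efficiency metric is simply MCC divided by parameter count. A quick check with the reported figures, assuming SNN's MCC is close to SHNN's 0.576 (the article only states they are comparable):

```python
# MCC per 1,000 parameters, computed from the figures reported above
models = {"SHNN": (0.576, 122), "SNN": (0.576, 3201)}  # name: (MCC, #params)
for name, (mcc, n_params) in models.items():
    print(f"{name}: {1000 * mcc / n_params:.2f} MCC per 1k parameters")
# SHNN ~4.72 vs SNN ~0.18: a ~26x gap, consistent with the reported ~27x
```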

Section 06

Technical Challenges and Solutions

  • Class imbalance: SMOTE applied within each fold to avoid data leakage;
  • Outliers: RobustScaler used to normalize the transaction-amount field;
  • Qubit limitations: PCA reduction to 8 dimensions to match the available qubits;
  • Gradient computation: adjoint differentiation used to speed up training on the simulator;
  • Barren plateaus: mitigated by limiting circuit depth to 2 layers and by early stopping (see the sketch after this list).
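
A minimal sketch of the early-stopping strategy, assuming a PyTorch training loop that monitors validation MCC; the patience value, decision threshold, and helper name are illustrative.

```python
import copy
import torch
from sklearn.metrics import matthews_corrcoef

def train_with_early_stopping(model, loss_fn, opt, train_dl, X_val, y_val,
                              max_epochs=100, patience=10):
    """Stop when validation MCC has not improved for `patience` epochs."""
    best_mcc, best_state, stale = -1.0, None, 0
    for epoch in range(max_epochs):
        model.train()
        for xb, yb in train_dl:
            opt.zero_grad()
            loss = loss_fn(model(xb).squeeze(-1), yb)  # e.g. BCEWithLogitsLoss
            loss.backward()
            opt.step()

        model.eval()
        with torch.no_grad():
            preds = (torch.sigmoid(model(X_val).squeeze(-1)) > 0.5).long()
        mcc = matthews_corrcoef(y_val.numpy(), preds.numpy())

        if mcc > best_mcc:  # keep the best checkpoint seen so far
            best_mcc, best_state, stale = mcc, copy.deepcopy(model.state_dict()), 0
        else:
            stale += 1
            if stale >= patience:  # no improvement: stop to limit training cost
                break
    model.load_state_dict(best_state)
    return model, best_mcc
```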

Section 07

Research Significance and Implications

Core argument: in the NISQ era, QML's advantage lies in parameter efficiency (comparable performance with far fewer parameters), which is especially valuable in resource-constrained settings such as edge devices. Implications:

  1. Model selection should weigh parameter efficiency, inference speed, and interpretability alongside raw accuracy;
  2. QML has entered an empirically verifiable stage;
  3. Hybrid architectures are a key direction for near-term QML applications.

Section 08

Limitations and Future Directions

Limitations: experiments were run only on simulators, a single dataset was used, and hyperparameters were not fully tuned. Future directions: validation on real quantum hardware, extension to additional datasets (medical, industrial), exploration of deeper quantum-classical architectures, and research on applications in federated learning scenarios.