Zing Forum


One-for-All: A Lightweight, Stable, Parameter-Efficient Pre-trained Large Model for Time Series Forecasting

One-for-All introduces the Gaussian Rank-Stable Low-Rank Adapter (rsLoRA). By keeping self-attention weights frozen while only training positional embeddings and the output layer, it achieves a 168-1776x memory reduction and up to 21x improvement in parameter efficiency, supporting edge device deployment.

Tags: Time Series Forecasting · Parameter-Efficient Fine-Tuning · LoRA · Edge Deployment · Lightweight Models · Pre-trained Model Transfer · rsLoRA
Published 2026-03-31 21:54 · Recent activity 2026-04-01 09:24 · Estimated read 6 min

Section 01

Introduction: One-for-All, a Lightweight, Stable, Parameter-Efficient Large Model for Time Series Forecasting

One-for-All is a lightweight pre-trained large model for time series forecasting, with its core innovation being the Gaussian Rank-Stable Low-Rank Adapter (rsLoRA). The model keeps self-attention weights frozen and only trains positional embeddings and the output layer, achieving a 168-1776x memory reduction and up to 21x improvement in parameter efficiency, supporting edge device deployment.


Section 02

Background: Dilemmas in Combining Time Series Forecasting with LLMs

Time series forecasting is widely used in finance, energy, healthcare, meteorology, and other fields. LLMs have become a promising direction thanks to their rich knowledge and strong sequence-modeling capabilities, but mainstream LLMs are large (billions to hundreds of billions of parameters) and require powerful GPUs for inference and fine-tuning. These resource demands limit their deployment in scenarios such as edge devices.


Section 03

Core Methods: rsLoRA and One-for-All Architecture Design

Parameter-Efficient Fine-Tuning and rsLoRA

Parameter-Efficient Fine-Tuning (PEFT) learns task knowledge through a small number of adapter parameters. LoRA is the classic method, but its gradients become unstable at low ranks. rsLoRA improves the initialization strategy: Gaussian-distributed initialization combined with a rank-dependent scaling factor, which is shown mathematically to keep gradient convergence stable even at low ranks (e.g., rank = 16).
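The scaling idea can be sketched in a few lines. This is a hypothetical reconstruction from the description above (Gaussian initialization plus a rank-dependent scaling factor), not the paper's exact formulation; the function name and shapes are illustrative. Classic LoRA scales the low-rank update by alpha/r, whereas a rank-stabilized variant uses alpha/sqrt(r) so the update's magnitude (and its gradients) does not collapse as the rank changes:

```python
import numpy as np

def rslora_delta(d_in, d_out, rank, alpha=16.0, seed=0):
    """Return the low-rank weight update (alpha / sqrt(rank)) * B @ A.

    Sketch of an rsLoRA-style adapter: A is Gaussian-initialized, B starts
    at zero so the adapter initially leaves the frozen weights unchanged.
    """
    rng = np.random.default_rng(seed)
    A = rng.normal(0.0, 1.0, size=(rank, d_in))   # Gaussian-initialized factor
    B = np.zeros((d_out, rank))                   # zero init: no change at step 0
    # Rank-stabilized scaling: alpha / sqrt(rank) instead of LoRA's alpha / rank.
    return (alpha / np.sqrt(rank)) * (B @ A)

# At initialization the update is exactly zero, so the frozen base weights
# are untouched until training moves B away from zero.
delta = rslora_delta(d_in=64, d_out=64, rank=16)
```

Because the scale shrinks only with the square root of the rank, the per-entry magnitude of the update stays comparable across ranks, which is the intuition behind the stability claim.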

One-for-All Architecture

  • Frozen Components: Self-attention weights are fully frozen to retain the general sequence capabilities of the pre-trained LLM and avoid catastrophic forgetting.
  • Trainable Components: rsLoRA adapters are injected only into positional embeddings (to capture time features) and the output layer (for numerical prediction mapping), leading to an extremely small parameter scale.
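The split between frozen and trainable components can be made concrete as a bookkeeping sketch. The parameter names and sizes below are made up for illustration (they are not the paper's actual model): self-attention weights are marked frozen, while positional embeddings and the output head's low-rank adapter factors are the only trainable tensors.

```python
# Illustrative freezing scheme: attention frozen, positional embeddings
# and output-layer adapter trainable. All names/sizes are hypothetical.
params = {
    "attn.qkv.weight":    {"numel": 3 * 768 * 768, "trainable": False},  # frozen
    "attn.proj.weight":   {"numel": 768 * 768,     "trainable": False},  # frozen
    "pos_embedding":      {"numel": 512 * 768,     "trainable": True},
    "output_head.lora_A": {"numel": 16 * 768,      "trainable": True},
    "output_head.lora_B": {"numel": 1 * 16,        "trainable": True},
}

trainable = sum(p["numel"] for p in params.values() if p["trainable"])
total = sum(p["numel"] for p in params.values())
print(f"trainable: {trainable:,} / {total:,} ({trainable / total:.1%})")
```

In a real framework this corresponds to setting `requires_grad = False` on the attention weights and leaving it enabled only on the adapter and embedding tensors; the trainable fraction printed here is tiny by construction, which is the source of the parameter-efficiency gains described below.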

Section 04

Efficiency and Performance Verification: Order-of-Magnitude Breakthroughs and SOTA Performance

Efficiency Breakthroughs

  • Parameter Efficiency: 6.8x fewer trainable parameters than TimesNet, 21x fewer than GPT4TS, and 11.8x fewer than TIME-LLM.
  • Memory Efficiency: Memory usage is only 2.2 MiB, a 168-1776x reduction compared with existing models (340 MiB to 4.18 GiB).
  • Parameter Reduction: 98.3% fewer parameters than traditional Transformers.

Performance Verification

  • Accuracy: MSE reaches 0.33 across 6 tasks, matching SOTA models such as TimesNet and GPT4TS.
  • Parameter Efficiency: 5.5x higher than TimesNet and 21x higher than GPT4TS.
  • Stability: Consistent performance across datasets (e.g., ETT, Weather) and prediction horizons of 96-720 steps.

Section 05

Edge Deployment: Unlocking New Scenarios for Time Series Forecasting

One-for-All's lightweight design supports a range of edge scenarios:

  • Healthcare: Wearable devices analyze physiological signals (heart rate, blood glucose) in real time to enable local early warning.
  • Finance: Edge servers/terminals run the model for microsecond-level market signal recognition to support high-frequency trading.
  • Environmental Monitoring: Sensor nodes predict air quality and water quality locally, reducing cloud dependency.
  • Industrial Maintenance: Edge gateways analyze equipment vibration and temperature to predict failure risks.

Section 06

Technical Insights: Future Directions for High-Efficiency AI

One-for-All offers three insights:

  1. Pre-trained knowledge transfer can achieve extremely high parameter efficiency via sophisticated adapters.
  2. Algorithm design guided by mathematical theory (e.g., the rank-stable mechanism of rsLoRA) is more reliable and interpretable.
  3. Efficiency and performance are reconcilable: Reasonable architecture design can maintain top-tier performance while reducing resource consumption, promoting AI popularization and sustainable development.