Zing Forum

Reading

BrainStacks: Enabling Continuous Multi-Domain Fine-Tuning of Large Models via Frozen MoE-LoRA Adapter Stacks

BrainStacks proposes a modular architecture that enables continuous multi-domain fine-tuning of large language models using frozen MoE-LoRA adapter stacks, addressing the catastrophic forgetting problem in traditional methods.

Tags: BrainStacks, MoE-LoRA, continual learning, multi-domain fine-tuning, catastrophic forgetting, adapter stacks, large language models, Gemma 3, null-space projection, cognitive primitives
Published 2026-04-05 00:14 · Recent activity 2026-04-05 00:21 · Estimated read: 6 min

Section 01

Introduction: BrainStacks Architecture Solves Catastrophic Forgetting in Continuous Multi-Domain Fine-Tuning of Large Models

BrainStacks proposes a modular architecture that enables continuous multi-domain fine-tuning of large language models using frozen MoE-LoRA adapter stacks, with the core goal of addressing the catastrophic forgetting problem of traditional methods. Its key insight is that domain adapters learn transferable cognitive primitives (such as instruction following and numerical reasoning) rather than just domain-specific knowledge.


Section 02

Background: Dilemmas of Continuous Learning and Insights into Cognitive Primitives

Large-model fine-tuning faces catastrophic forgetting: fine-tuning on a new domain tends to overwrite existing knowledge, and traditional mitigations (regularization, memory replay, a single shared adapter) still struggle as domains accumulate. The core insight of BrainStacks is that domain adapters learn transferable cognitive primitives (such as instruction-following clarity and numerical reasoning ability) rather than purely domain-specific knowledge. For example, 97% of medical prompts are routed to the dialogue + math adapters, even though neither was trained on medical data.


Section 03

Methodology: The Five Core Components of the BrainStacks Architecture

BrainStacks architecture consists of five core components:

  1. MoE-LoRA Building Block: a Mixture-of-Experts LoRA with 4 experts (rank 16, top-2 routing), applied to 7 Transformer projection layers, combined with 4-bit NF4 quantization, Shazeer-style routing, and rsLoRA scaling.
  2. Inner-Loop Residual Enhancement: multi-stack iterative training within a domain; after the previous stack is frozen, the next stack is trained on its residual errors. Three rounds of enhancement yield a 2.4% relative improvement.
  3. Outer-Loop Continuous Domain Training: domains are trained in a curriculum order; already-trained stacks are frozen, and new domains are trained on top of them, accumulating knowledge without forgetting.
  4. Null-Space Projection: before training a new domain, the principal directions of the frozen stacks' activations are extracted; the new stack's output is projected into their orthogonal complement, achieving zero forgetting (cross-domain subspace cosine similarity: 0.034-0.047).
  5. Outcome-Based Sigmoid Meta-Router: a 2-million-parameter network trained on domain-combination objectives; sigmoid outputs allow several stacks to be active simultaneously.
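As a concrete illustration of component 1, here is a minimal numpy sketch of top-2 MoE-LoRA routing with rsLoRA scaling for a single token vector. All dimensions, the gating matrix, and the random initialisation are hypothetical stand-ins; the actual implementation operates on quantized Transformer projection layers, and in real training the B factors start at zero.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, n_experts, top_k = 64, 16, 4, 2
alpha = 32  # hypothetical LoRA alpha

# Per-expert LoRA factors A (d x r) and B (r x d), plus a linear router.
A = rng.normal(0, 0.02, (n_experts, d, r))
B = rng.normal(0, 0.02, (n_experts, r, d))
W_gate = rng.normal(0, 0.02, (d, n_experts))

def moe_lora_delta(x):
    """Top-2 MoE-LoRA update for one token vector x of shape (d,)."""
    logits = x @ W_gate
    top = np.argsort(logits)[-top_k:]          # indices of the top-2 experts
    probs = np.exp(logits[top] - logits[top].max())
    probs /= probs.sum()                       # gate weights renormalised over top-2
    scale = alpha / np.sqrt(r)                 # rsLoRA scaling (alpha / sqrt(rank))
    delta = np.zeros(d)
    for w, e in zip(probs, top):
        delta += w * scale * (x @ A[e] @ B[e])
    return delta

x = rng.normal(size=d)
print(moe_lora_delta(x).shape)  # (64,)
```

The delta is added to the frozen base projection's output; only A, B, and the router would be trainable.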
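Component 4 can be sketched in a few lines: take the principal directions of the frozen stacks' activations via SVD and project the new stack's output into their orthogonal complement, so the new stack cannot perturb what the frozen stacks rely on. The calibration data and subspace size here are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
d, n = 32, 200

# Activations collected from the frozen stacks on calibration data (hypothetical).
H = rng.normal(size=(n, d))
k = 8  # number of principal directions to protect

# Principal directions of the frozen-stack activation subspace.
_, _, Vt = np.linalg.svd(H, full_matrices=False)
U = Vt[:k].T                 # d x k orthonormal basis of the protected subspace
P = np.eye(d) - U @ U.T      # projector onto its orthogonal complement

y_new = rng.normal(size=d)   # raw output of the new stack
y_proj = P @ y_new           # constrained output

# The projected output has ~zero component along the protected directions.
print(np.abs(U.T @ y_proj).max())
```

Because `U` has orthonormal columns, `U.T @ P` vanishes, which is exactly the "zero forgetting" guarantee the architecture targets.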

Section 04

Evidence: Key Experimental Results of BrainStacks

Experiments validate its effectiveness:

  • MoE-LoRA vs single LoRA: on TinyLlama-1.1B, MoE-LoRA converges 2.5x faster, reaching in just 160 steps the validation loss that a single LoRA needs 400 steps to hit.
  • Gemma 3 12B benchmarks: remains competitive across 8 zero-shot benchmarks with no catastrophic drops (e.g., TruthfulQA +0.02, MedMCQA +0.03).
  • Cognitive primitive discovery: 97% of medical prompts are routed to the dialogue + math stacks, suggesting domain labels are not the optimal routing signal.

Section 05

Technical Implementation: Environmental Dependencies and Training/Inference Workflow

  • Environmental dependencies: Python 3.10+, a CUDA GPU (24GB+; 48GB+ for Gemma 3 12B), PyTorch 2.0+, with dependencies such as transformers and trl.
  • Training workflow: train domains in the order dialogue → code → math → medical → reasoning; each domain produces an independent stack file, and manifest.json tracks the trained domains.
  • Inference workflow: input prompt → meta-router outputs per-stack weights → base model + weighted stacks generate tokens (e.g., dialogue 0.85 + math 0.91).
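The inference-time routing step can be sketched as follows: a sigmoid meta-router assigns each stack an independent weight in [0, 1], so several stacks (e.g., dialogue and math) can be active at once, unlike softmax routing where weights compete. The embedding size and the router weights below are assumptions for illustration; the stack names follow the training order above.

```python
import numpy as np

rng = np.random.default_rng(2)
d = 16
stacks = ["dialogue", "code", "math", "medical", "reasoning"]
W = rng.normal(0, 0.1, (d, len(stacks)))  # hypothetical meta-router weights

def route(prompt_embedding):
    """Independent sigmoid gate per stack: multiple stacks may fire at once."""
    z = prompt_embedding @ W
    return 1.0 / (1.0 + np.exp(-z))

weights = route(rng.normal(size=d))
for name, w in zip(stacks, weights):
    print(f"{name}: {w:.2f}")
```

At generation time the base model's output would be combined with each stack's contribution scaled by its weight.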


Section 06

Conclusion and Outlook: Practical Significance and Future Directions of BrainStacks

Practical Significance: Redefines the understanding of fine-tuning (cognitive primitives are composable), providing a path to build multi-domain models under limited resources (deployable on consumer-grade hardware). Future Outlook: MIT license + HF release to promote widespread adoption; modular composable methods may become a new standard for continuous learning, facilitating the application of large models in vertical domains.