Zing Forum


RELISH: A Lightweight Text Regression Architecture for Large Language Models

RELISH predicts scalar values directly from frozen LLM representations by iteratively refining a latent state head. With only 3.4-3.7M additional trainable parameters (0.01-0.04% extra overhead), it significantly outperforms existing text regression baseline methods.

Tags: Text Regression · RELISH Architecture · Parameter-Efficient Fine-Tuning · Large Language Models · Iterative Refinement · Continuous Numerical Prediction · Cross-Attention
Published 2026-04-02 01:50 · Recent activity 2026-04-02 10:50 · Estimated read: 8 min

Section 01

Introduction to the RELISH Architecture: A Breakthrough in Lightweight LLM Text Regression

RELISH (REgression with a Latent Iterative State Head) is a lightweight text regression architecture for large language models. Its core mechanism iteratively refines a latent state head to predict scalar values directly from frozen LLM representations. With only 3.4-3.7M additional trainable parameters (0.01-0.04% extra overhead), it significantly outperforms existing text regression baselines, addressing the efficiency and accuracy limitations of current LLMs on continuous numerical prediction tasks.


Section 02

Challenges in Text Regression and Limitations of Existing Methods

Importance of Text Regression

Text regression is the task of predicting continuous numerical values from input text (e.g., article popularity, sentiment intensity, code complexity). Much valuable real-world information takes continuous numerical form, yet the task is often underestimated.

Three Limitations of Existing Methods

  1. Autoregressive Decoding Family: generates numbers as discrete tokens, which loses precision when a continuous space is mapped onto a finite vocabulary and makes output formats fragile;
  2. Regression-Aware Reasoning Family: relies on aggregating many sampled generations, which is computationally expensive and yields unstable results;
  3. Prediction Head Family: existing implementations require many trainable parameters, forfeiting the advantage of parameter-efficient fine-tuning (e.g., LoRA's parameter count grows linearly with model size).
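The precision-loss point for the autoregressive family can be seen with a toy example (not from the paper): if a score is emitted as a fixed-precision token string, nearby continuous targets collapse onto the same output, while a regression head keeps full resolution.

```python
import numpy as np

def decode_as_tokens(value, decimals=1):
    """Simulate autoregressive decoding that emits a fixed-precision string."""
    return float(f"{value:.{decimals}f}")

targets = np.array([0.42, 0.44, 0.38])
decoded = np.array([decode_as_tokens(v) for v in targets])  # all collapse to 0.4
direct = targets  # a regression head can output the full-precision value

print(decoded)                           # [0.4 0.4 0.4]
print(np.abs(decoded - targets).max())   # irreducible rounding error (~0.04)
```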

Section 03

Core Innovations of the RELISH Architecture and Iterative Refinement Mechanism

Core Components

RELISH includes three key parts:

  1. Latent State: A learnable vector serving as the 'working memory' for numerical prediction;
  2. Cross-Attention Mechanism: Interacts between the latent state and input token representations to selectively focus on relevant information;
  3. Linear Regressor: Maps the final latent state to a scalar value, ensuring stability and interpretability.

Intuition Behind Iterative Refinement

Numerical prediction often requires multi-step reasoning (e.g., judging an article's popularity involves weighing its topic, timeliness, and more). RELISH emulates this progressive understanding through an iterative mechanism with shared parameters, achieving complex reasoning capability at minimal parameter cost.
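The three components and the shared-parameter iteration can be sketched in a few lines of NumPy. The dimensions, single-head attention, and residual update rule below are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions -- chosen for illustration only.
d_model, d_latent, seq_len, n_iters = 64, 32, 10, 4

# Frozen LLM token representations for one input (stand-in for real activations).
H = rng.standard_normal((seq_len, d_model))

# Trainable RELISH parameters, shared across all refinement iterations.
z0 = rng.standard_normal(d_latent)                     # learnable initial latent state
W_q = rng.standard_normal((d_latent, d_latent)) * 0.1  # query projection
W_k = rng.standard_normal((d_model, d_latent)) * 0.1   # key projection
W_v = rng.standard_normal((d_model, d_latent)) * 0.1   # value projection
w_out = rng.standard_normal(d_latent) * 0.1            # linear regressor

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

z = z0
for _ in range(n_iters):
    # Cross-attention: the latent state queries the frozen token representations.
    q = z @ W_q                                 # (d_latent,)
    K = H @ W_k                                 # (seq_len, d_latent)
    V = H @ W_v                                 # (seq_len, d_latent)
    attn = softmax(K @ q / np.sqrt(d_latent))   # (seq_len,) attention weights
    z = z + attn @ V                            # residual update refines the latent state

prediction = w_out @ z                          # scalar output from the linear regressor
print(prediction)
```

Because the same `W_q`, `W_k`, `W_v` are reused every round, extra iterations add depth of reasoning without adding parameters.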


Section 04

Parameter Efficiency of RELISH and Experimental Validation Results

Parameter Efficiency

RELISH requires only 3.4-3.7M trainable parameters, an extra overhead of just 0.01-0.04% for mainstream LLMs, roughly 10-40 times lower than LoRA (0.26-0.42%). Moreover, the parameter count is fixed and does not grow with the size of the backbone model.
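The overhead figures are easy to sanity-check; the backbone sizes below are illustrative choices, not taken from the paper:

```python
head_params = 3.5e6  # midpoint of the 3.4-3.7M range quoted above
backbones = {"9B": 9e9, "32B": 32e9}  # illustrative backbone sizes
overheads = {name: head_params / n * 100 for name, n in backbones.items()}
for name, pct in overheads.items():
    print(f"{name} backbone: +{pct:.3f}% trainable parameters")
```

A fixed-size head means the relative overhead only shrinks as the backbone grows, the opposite of LoRA's linear scaling.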

Experimental Validation

On 5 datasets (covering code complexity prediction, text quality assessment, etc.) and 4 LLM backbones, RELISH outperforms all baselines (autoregressive decoding, regression-aware reasoning, and prediction head families) across the board. It is also stronger at fine-grained numerical differentiation, e.g., distinguishing continuous values in the 0-1 range more accurately.


Section 05

Synergy Between RELISH and Frozen LLMs & Potential Application Scenarios

Compatibility with Frozen Backbones

RELISH is fully compatible with frozen LLM backbones, requiring no modifications to the base model:

  • Computational efficiency: No need for gradient updates on large backbones;
  • Modularity: The same backbone can be paired with multiple RELISH heads to handle different tasks;
  • Stability: The basic language capabilities are stable, only the lightweight head needs adjustment.
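
The modularity point can be sketched as follows; the mean-pooling head and all names here are hypothetical stand-ins for illustration, not the actual RELISH head:

```python
import numpy as np

rng = np.random.default_rng(1)

def frozen_backbone(text_len, d_model=64):
    """Stand-in for frozen LLM hidden states (never updated during training)."""
    return rng.standard_normal((text_len, d_model))

class RegressionHead:
    """Tiny head: mean-pool the frozen states, then a linear map to a scalar."""
    def __init__(self, d_model=64):
        self.w = rng.standard_normal(d_model) * 0.1  # the only trainable part

    def __call__(self, hidden_states):
        return float(self.w @ hidden_states.mean(axis=0))

# The same cached backbone output serves multiple tasks via separate heads.
H = frozen_backbone(text_len=12)
popularity_head = RegressionHead()
complexity_head = RegressionHead()
print(popularity_head(H), complexity_head(H))
```

Since the backbone is frozen, its activations can be computed once and reused by every head, which is where the computational savings come from.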

Potential Applications

Covers scenarios such as content platforms (recommendation optimization), finance (market indicator prediction), healthcare (clinical indicator extraction), software development (code quality assessment), and education (automatic essay quality scoring).


Section 06

Limitations of RELISH and Future Research Directions

Current Limitations

  • Fixed number of iterations, lack of adaptive strategies;
  • Only supports univariate regression;
  • Insufficient interpretability of the iterative process.

Future Directions

  1. Explore adaptive iteration strategies (dynamically adjust the number of rounds based on input complexity);
  2. Extend to multivariate regression;
  3. Improve the interpretability of the iterative process (visualize intermediate states).
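One possible shape for the adaptive-iteration direction is a convergence-based stopping rule; this is purely speculative, since the published architecture uses a fixed iteration count:

```python
import numpy as np

def refine_adaptively(z, step_fn, tol=1e-3, max_iters=16):
    """Stop refining once the latent state changes less than a tolerance."""
    for i in range(1, max_iters + 1):
        z_next = step_fn(z)
        if np.linalg.norm(z_next - z) < tol:
            return z_next, i          # converged early on easy inputs
        z = z_next
    return z, max_iters               # hard inputs use the full budget

# Demo with a contractive update standing in for one refinement step.
target = np.ones(8)
z_final, used = refine_adaptively(np.zeros(8), lambda z: z + 0.5 * (target - z))
print(used)  # fewer than the max_iters budget
```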

Section 07

Significance and Insights of RELISH

RELISH is an important breakthrough in the field of LLM text regression, proving that strong regression capabilities can be achieved with extremely low parameter overhead through elegant architectural design. It finds an ideal balance between parameter efficiency and task performance, providing insights for LLM adaptation research: in the era of large models, small architectural innovations can still generate great value.