Zing Forum

Universality of Feature Spaces in Large Language Models: Sparse Autoencoders Reveal Cross-Model Representation Commonalities

A study based on Sparse Autoencoders (SAE) proposes the 'Analogical Feature Universality' hypothesis, finding that the feature spaces of different large language models have high geometric structural similarity, providing a theoretical foundation for cross-model transfer of interpretability techniques.

Sparse Autoencoders · Feature Space Universality · Large Language Models · Interpretability · Polysemanticity · Representational Similarity Analysis · Mechanistic Interpretability
Published 2026-05-14 07:42 · Recent activity 2026-05-14 07:47 · Estimated read 5 min

Section 01

[Introduction] Study on Universality of Feature Spaces in Large Language Models: SAE Reveals Cross-Model Representation Commonalities

A study based on Sparse Autoencoders (SAEs) proposes the 'Analogical Feature Universality' hypothesis, finding that the feature spaces of different large language models (LLMs) share a highly similar geometric structure, which provides a theoretical foundation for transferring interpretability techniques across models. The study uses SAEs to disentangle neuronal representations and verify the universality of feature spaces, a result of significant importance for LLM interpretability.

Section 02

Research Background: Black-Box Nature of LLMs and Challenges to the Universality Hypothesis

The 'black-box' nature of large language models (LLMs) is a core challenge in AI interpretability research. The academic community has proposed the 'Universality Hypothesis', which suggests that different models may converge to similar concept representations. Direct feature comparison, however, runs into the obstacle of polysemanticity: an individual neuron often responds to multiple unrelated concepts, which makes cross-model feature alignment difficult.
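The alignment obstacle can be made concrete with a toy example (hypothetical vectors, not taken from any real model): a single neuron whose weight vector mixes two unrelated concept directions fires for both, so its activation alone cannot identify either concept.

```python
import numpy as np

# Two unrelated "concept" directions in a toy 8-dimensional
# activation space (orthogonal basis vectors, purely illustrative).
dim = 8
concept_a = np.eye(dim)[0]
concept_b = np.eye(dim)[1]

# A polysemantic neuron: its weight vector mixes both concepts.
neuron = (concept_a + concept_b) / np.sqrt(2.0)

# The neuron responds equally to either concept, so reading this
# one neuron cannot tell the two concepts apart.
act_a = float(neuron @ concept_a)
act_b = float(neuron @ concept_b)
```

Matching such mixed neurons one-to-one across two independently trained models is ill-posed, which is why the study turns to SAEs next.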

Section 03

Methodology Tool: Sparse Autoencoders (SAE) Disentangle Neuronal Entanglement

To address polysemanticity, the researchers introduce Sparse Autoencoders (SAEs). An SAE decomposes a model's neuron activations into sparse, interpretable feature representations in which each feature corresponds to an independent concept. This 'disentanglement' of representations makes the model's internal concept organization directly observable.
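A minimal sketch of the SAE forward pass, with untrained random weights and hypothetical sizes (real SAEs are trained with a reconstruction-plus-L1-sparsity objective, which this sketch omits): an activation vector is encoded into an overcomplete, ReLU-gated feature vector and then linearly decoded back.

```python
import numpy as np

rng = np.random.default_rng(0)

d_model, d_feat = 16, 64   # hypothetical sizes; SAE features are overcomplete

# Untrained random SAE weights (illustration only).
W_enc = rng.normal(scale=0.1, size=(d_model, d_feat))
b_enc = np.zeros(d_feat)
W_dec = rng.normal(scale=0.1, size=(d_feat, d_model))

def sae_forward(x):
    """Encode an activation vector into sparse features, then reconstruct."""
    f = np.maximum(x @ W_enc + b_enc, 0.0)   # ReLU -> non-negative, sparse
    x_hat = f @ W_dec                        # linear decoder
    return f, x_hat

x = rng.normal(size=d_model)
features, recon = sae_forward(x)
sparsity = float((features == 0).mean())     # fraction of inactive features
```

Even untrained, the ReLU leaves a large fraction of features inactive on any given input; training pushes this much further, so that the few active features per input are individually interpretable.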

Section 04

Core Hypothesis: Analogical Feature Universality—Similar Geometric Structure of Feature Spaces

The authors propose the 'Analogical Feature Universality' hypothesis: even if the SAEs of different models learn different feature representations, the geometric structure of their feature spaces remains similar and can be aligned via rotation transformations. If the hypothesis holds, interpretability techniques (such as steering vectors) could be transferred across models through such transformations.
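The idea of aligning two feature spaces by rotation can be sketched with orthogonal Procrustes analysis, a standard technique chosen here for illustration (the source does not specify the alignment procedure). Under the idealized assumption that model B's paired feature vectors are an exact rotation of model A's, the rotation is recovered in closed form via SVD.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired feature vectors from two models: model B's
# features are model A's under an unknown rotation (assumption).
n_feat, dim = 100, 16
A = rng.normal(size=(n_feat, dim))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))   # unknown orthogonal map
B = A @ Q

# Orthogonal Procrustes: find the orthogonal R minimizing ||A @ R - B||_F.
U, _, Vt = np.linalg.svd(A.T @ B)
R = U @ Vt

# With noiseless rotated data, R recovers Q and the residual vanishes.
err = float(np.linalg.norm(A @ R - B))
```

A steering vector computed in model A's feature basis could then, under the hypothesis, be mapped into model B's basis by the same `R`.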

Section 05

Research Methods: Activation Correlation Pairing and Representational Similarity Analysis

The study verifies the hypothesis in two steps: 1. pair similar features across models via activation correlation analysis, comparing how each model's features activate on the same input text; 2. evaluate whether the paired features' weight vectors stand in similar spatial relationships, using Representational Similarity Analysis (RSA) and Singular Vector Canonical Correlation Analysis (SVCCA).
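Both steps can be sketched on synthetic data (all sizes and the rotated-copy construction are assumptions for illustration, not the paper's setup): correlation pairing recovers a hidden feature permutation, and RSA then compares the pairwise-similarity structure of the paired weight vectors.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SAE feature activations of two models on the same 200
# inputs; model B's features are a permuted, slightly noisy copy of
# model A's (a synthetic stand-in for two separately trained SAEs).
n_inputs, n_feat, dim = 200, 30, 16
acts_a = rng.normal(size=(n_inputs, n_feat))
perm = rng.permutation(n_feat)
acts_b = acts_a[:, perm] + 0.1 * rng.normal(size=(n_inputs, n_feat))

# Step 1: pair features across models by maximum activation correlation.
corr = np.corrcoef(acts_a.T, acts_b.T)[:n_feat, n_feat:]
pairing = corr.argmax(axis=1)        # A-feature i -> B-feature pairing[i]

# Step 2: RSA -- compare the pairwise-similarity structure of the
# features' weight vectors.  Model B's decoder is a rotated copy of
# A's here, so the two geometries agree and the RSA score is ~1.
W_a = rng.normal(size=(n_feat, dim))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))   # random orthogonal map
W_b = W_a @ Q
sim_a, sim_b = W_a @ W_a.T, W_b @ W_b.T
iu = np.triu_indices(n_feat, k=1)                  # off-diagonal pairs
rsa = float(np.corrcoef(sim_a[iu], sim_b[iu])[0, 1])
```

RSA deliberately compares similarity *matrices* rather than raw vectors, so it is invariant to exactly the kind of rotation the hypothesis allows between models.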

Section 06

Experimental Evidence: Cross-Model Feature Spaces Show High Similarity

Experiments comparing models such as the Pythia series (70M and 160M parameters) found that the geometric structures of the feature spaces are markedly consistent across model scales. The study also provides an interactive visualization tool that displays cross-model feature-space correspondences via dual-panel UMAP projections: selecting a region in one panel synchronously highlights the paired features in the other.
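The dual-panel idea can be sketched as follows. The paper's tool uses UMAP; here PCA (via SVD) serves as a simple linear stand-in, and model B is an exact rotated copy of model A; both are assumptions made purely to illustrate why paired features occupy corresponding positions in the two panels.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical paired feature vectors; model B = rotated copy of A.
n_feat, dim = 50, 16
W_a = rng.normal(size=(n_feat, dim))
Q, _ = np.linalg.qr(rng.normal(size=(dim, dim)))   # random orthogonal map
W_b = W_a @ Q

def project_2d(W):
    """Project feature vectors onto their top-2 principal components."""
    Wc = W - W.mean(axis=0)
    _, _, Vt = np.linalg.svd(Wc, full_matrices=False)
    return Wc @ Vt[:2].T

# Paired features land at matching 2-D positions (up to axis sign),
# which is what the synchronized panels make visible.
proj_a, proj_b = project_2d(W_a), project_2d(W_b)
```

Because the projection depends only on the features' internal geometry, a rotation of the whole space leaves the 2-D layout unchanged up to axis flips; similar layouts in the two panels are thus evidence of similar geometry.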

Section 07

Practical Significance and Outlook: Cross-Model Transfer and Unified Framework

The practical significance of this study includes: 1. interpretability tools may not need to be developed separately for each model, since cross-model reuse can be achieved via transformations; 2. it hints at a 'universal language', a fundamental representation that LLMs converge toward; 3. the open-source codebase and visualization tools provide infrastructure for follow-up research. In the longer term, this line of work could promote general-purpose tools for understanding AI models.