Zing Forum

SIGIR 2026 Highlight: Training-Free User Representation Initialization Method SG-URInit Boosts Multimodal Recommendation Systems

The University of Hong Kong team proposes SG-URInit, a training-free and model-agnostic method for initializing user representations. By fusing the modal features of the items a user has interacted with and global clustering features, it effectively narrows the semantic gap between user and item representations, significantly improving multimodal recommendation performance and accelerating model convergence.

Tags: multimodal recommendation, user representation initialization, SIGIR 2026, training-free, cold-start problem, recommender systems, representation learning
Published 2026-04-25 09:30 · Recent activity 2026-04-25 09:47 · Estimated read 4 min

Section 01

[Introduction] SIGIR 2026 Highlight: SG-URInit's Training-Free User Representation Initialization Boosts Multimodal Recommendation

SG-URInit, proposed by a team at the University of Hong Kong, is a training-free, model-agnostic user representation initialization method that fuses the modal features of a user's interacted items with global clustering features. This narrows the semantic gap between user and item representations, improving multimodal recommendation performance and accelerating model convergence. The work has been accepted by SIGIR 2026.

Section 02

Research Background: The Semantic Gap Problem in Multimodal Recommendation

Multimodal recommendation integrates modality information such as text and images to alleviate data sparsity. However, user representations are typically initialized at random while items carry rich modal features, creating a semantic gap between user and item representations that limits recommendation performance.

Section 03

SG-URInit Scheme: Fusing Item Modal and Global Clustering Features

SG-URInit constructs initial user representations in two steps: (1) cluster users based on their interaction behaviors; (2) fuse the modal features of the items a user has interacted with and the global features of the user's cluster. The resulting initial representation captures both local (item-level) and global (cluster-level) semantic information.
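The two steps above can be sketched in NumPy under toy assumptions (the paper's implementation is in PyTorch; the shapes, the fusion weight `alpha`, and the tiny k-means here are all illustrative, not the authors' code):

```python
import numpy as np

rng = np.random.default_rng(0)
n_users, n_items, dim, n_clusters = 8, 12, 4, 2

# Hypothetical binary user-item interaction matrix and item modal features.
interactions = (rng.random((n_users, n_items)) > 0.6).astype(float)
item_feats = rng.normal(size=(n_items, dim))

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means: cluster users by their interaction vectors."""
    r = np.random.default_rng(seed)
    centers = X[r.choice(len(X), k, replace=False)]
    for _ in range(iters):
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        for c in range(k):
            if (labels == c).any():
                centers[c] = X[labels == c].mean(0)
    return labels

# Step 1: cluster users based on interaction behaviour.
labels = kmeans(interactions, n_clusters)

# Local signal: mean modal feature of each user's interacted items.
counts = interactions.sum(1, keepdims=True)
local = interactions @ item_feats / np.maximum(counts, 1)

# Global signal: mean local feature over each user's cluster.
global_ = np.stack([local[labels == labels[u]].mean(0)
                    for u in range(n_users)])

# Step 2: fuse local (item-level) and global (cluster-level) semantics.
alpha = 0.7  # hypothetical fusion weight
user_init = alpha * local + (1 - alpha) * global_
```

`user_init` would then replace the random user embedding table of a downstream recommender.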

Section 04

Method Advantages: Dual Traits of Training Independence and Model Agnosticism

SG-URInit requires no additional training process and directly generates high-quality initial representations; it can be seamlessly integrated into various multimodal recommendation models (e.g., MMGCN, LGMRec) and generalizes well across them.
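Because the representations are precomputed, "integration" amounts to overwriting a base model's user embedding table before training starts. A minimal sketch with a hypothetical stand-in model (`TinyRecModel` is not from the paper; real targets like MMGCN or LGMRec expose an analogous embedding table):

```python
import numpy as np

class TinyRecModel:
    """Hypothetical stand-in for any base recommender (e.g. MMGCN, LGMRec)."""
    def __init__(self, n_users, n_items, dim, user_init=None):
        rng = np.random.default_rng(0)
        # Default: random user embeddings -- the case SG-URInit replaces.
        if user_init is not None:
            self.user_emb = user_init.copy()
        else:
            self.user_emb = rng.normal(scale=0.1, size=(n_users, dim))
        self.item_emb = rng.normal(scale=0.1, size=(n_items, dim))

    def score(self, u, i):
        # Standard inner-product preference score.
        return float(self.user_emb[u] @ self.item_emb[i])

# Precomputed SG-URInit output would go here (constant stand-in for the demo).
precomputed = np.ones((4, 3))
model = TinyRecModel(4, 5, 3, user_init=precomputed)
```

The base model's architecture and training loop are untouched, which is what makes the method model-agnostic.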

Section 05

Experimental Validation: Performance Improvement on Multiple Datasets and Additional Advantages

Validated on the Baby, Sports, Clothing, and TikTok datasets, integrating SG-URInit significantly improves recommendation performance. It also alleviates the item cold-start problem, strengthens the ability to recommend new items, accelerates model convergence, and reduces the number of training iterations.

Section 06

Technical Details and Open-Source Implementation

At its core is an attention mechanism that fuses item-level and cluster-level features. The computational complexity is low, and the overhead of clustering (e.g., K-Means) and fusion is negligible. The PyTorch implementation (Python 3.9 + PyTorch 2.1.0) has been open-sourced, including dataset preprocessing scripts and training code.
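The article does not spell out the attention parameterization, so the fusion step can only be sketched generically: score each of the two feature sources (item-level and cluster-level) with a shared query, softmax the scores, and take the weighted sum. All parameters below are random illustrative stand-ins, not the open-sourced weights:

```python
import numpy as np

rng = np.random.default_rng(1)
n_users, dim = 6, 4

# Hypothetical inputs: per-user item-level and cluster-level features.
local = rng.normal(size=(n_users, dim))
global_ = rng.normal(size=(n_users, dim))

# Illustrative attention parameters (random here; a training-free method
# would fix these rather than learn them).
W = rng.normal(size=(dim, dim))
q = rng.normal(size=dim)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# Additive attention over the two sources, per user.
feats = np.stack([local, global_], axis=1)   # (users, 2, dim)
scores = np.tanh(feats @ W) @ q              # (users, 2)
weights = softmax(scores, axis=1)[..., None]  # (users, 2, 1)
fused = (weights * feats).sum(axis=1)        # (users, dim)
```

Since both sources are precomputed once per user, this fusion is a single matrix pass, consistent with the negligible overhead claimed above.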

Section 07

Research Significance and Future Outlook

SG-URInit highlights the importance of user representation initialization and offers a new direction for multimodal recommendation. Future work includes exploring finer-grained clustering strategies, extending the method to sequential and session-based recommendation scenarios, and leveraging large language models to further improve representation quality.