Zing Forum


StyleMind: A Personalized Fashion Recommendation System Based on Knowledge Graph and RAG

StyleMind is an AI-driven fashion styling assistant that delivers personalized clothing recommendations based on user profiles, built on a Neo4j knowledge graph, vector similarity search, and a dual LLM pipeline. This article provides an in-depth analysis of its architectural design, technology selection, and innovative features.

Tags: StyleMind, Fashion Recommendation, Knowledge Graph, RAG, Neo4j, Personalized Recommendation, LLM Applications, Vector Search
Published 2026-04-30 13:09 · Recent activity 2026-04-30 13:22 · Estimated read 7 min

Section 01

StyleMind Introduction: Core Overview of the AI-Driven Personalized Fashion Recommendation System

StyleMind is an AI-driven fashion styling assistant whose core idea is to silently learn user taste through dialogue. It combines a Neo4j knowledge graph, vector similarity search, and a dual LLM pipeline to provide personalized clothing recommendations based on user profiles. It is not a simple product search tool but an intelligent styling companion that understands user style preferences, occasion needs, and personal characteristics: a typical case of combining large language models with domain-specific deep knowledge.


Section 02

Project Background: Needs and Challenges of Integrating AI with the Fashion Domain

As AI applications flourish, combining the capabilities of large language models with deep domain-specific knowledge remains a key challenge. StyleMind addresses this by building a fashion recommendation system that understands users' personalized needs, solving a pain point of traditional recommendation tools: they struggle to capture users' implicit style preferences and scenario-based needs.


Section 03

Technical Architecture and Methods: Core Design of Dual LLM + Neo4j

Technology Stack Selection

StyleMind uses Python 3.14, Neo4j 5 Community Edition (serving as both graph database and vector index), Groq-hosted Llama 3.3 70B (one model in two roles: dialogue and extraction), and the local all-MiniLM-L6-v2 embedding model. The highlight is Neo4j's dual role, which avoids the complexity of maintaining separate graph and vector systems.

System Flow

User dialogue → Get profile snapshot → Product retrieval → Profile reordering → Streaming response generation → Asynchronous profile update (fire-and-forget mode).
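The flow above can be sketched as a single async handler. This is a minimal illustration, not StyleMind's actual code: every step function here is a hypothetical stand-in with canned data, and the final yield exists only so the demo's background task runs before the loop closes.

```python
import asyncio

UPDATES = []  # records background profile updates (for demonstration only)

async def get_profile_snapshot(user_id):
    # Stand-in for reading the persisted profile from Neo4j.
    return {"user_id": user_id, "styles": {"minimal"}}

async def retrieve_products(query, profile):
    # Stand-in for vector + graph retrieval.
    return [{"name": "wool coat", "score": 0.82}, {"name": "parka", "score": 0.74}]

async def rerank_by_profile(products, profile):
    # Stand-in for profile reordering.
    return sorted(products, key=lambda p: p["score"], reverse=True)

async def update_profile(user_id, query):
    # Stand-in for the extraction-LLM profile update.
    UPDATES.append((user_id, query))

async def handle_turn(user_id: str, query: str) -> list[dict]:
    profile = await get_profile_snapshot(user_id)
    products = await retrieve_products(query, profile)
    ranked = await rerank_by_profile(products, profile)
    # Fire-and-forget: the profile update is scheduled but never awaited,
    # so it cannot delay the streamed response.
    asyncio.create_task(update_profile(user_id, query))
    await asyncio.sleep(0)  # demo only: yield once so the task runs before exit
    return ranked

ranked = asyncio.run(handle_turn("u1", "a coat for a winter wedding"))
```

In a long-lived server process the fire-and-forget task would simply run on the existing event loop between requests.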

Key Features

  • Streaming response: implemented with FastAPI server-sent events (SSE) for a natural conversational feel;
  • Dual LLM division of labor: the dialogue LLM generates natural responses while the extraction LLM outputs structured style signals;
  • Profile-driven: each round uses the profile to guide recommendations and update confidence levels.
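The dual-LLM split can be illustrated as two differently prompted calls to the same model. In this sketch both "LLM" calls are stubbed with canned outputs; in StyleMind they would be two Groq Llama 3.3 70B requests, and the function names and JSON schema here are illustrative assumptions.

```python
import json

def dialogue_llm(message: str, profile: dict) -> str:
    # Conversational model: free-form, user-facing text.
    style = profile.get("style", "own")
    return f"A tailored wool coat would suit your {style} style."

def extraction_llm(message: str) -> dict:
    # Extraction model: prompted to emit ONLY structured JSON style signals.
    raw = '{"colors": ["navy"], "occasion": "wedding", "budget_hint": "mid"}'
    return json.loads(raw)  # a parse failure here would be caught and retried

message = "I need a navy coat for a winter wedding, nothing too pricey."
reply = dialogue_llm(message, {"style": "minimal"})
signals = extraction_llm(message)
```

Keeping the extraction path JSON-only is what makes its output safe to write into the knowledge graph without polluting it with conversational filler.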

Section 04

Detailed Explanation of Core Functions: Profile Reasoning, Knowledge Graph, and RAG Pipeline

Profile Reasoning

Records users' explicit preferences (color, material, etc.) and infers implicit tendencies (style keywords, budget sensitivity), updating confidence scores each round to detect drift.
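One plausible shape for the per-signal confidence update is simple exponential smoothing: agreement nudges confidence toward 1.0, contradiction decays it, and a large drop flags drift. The update rule and thresholds below are assumptions for illustration, not StyleMind's documented formula.

```python
ALPHA = 0.3  # smoothing weight for each new observation (assumed)

def update_signal(profile: dict, key: str, value: str) -> dict:
    entry = profile.get(key)
    if entry is None:
        profile[key] = {"value": value, "confidence": ALPHA}
    elif entry["value"] == value:
        # Reinforcement: move confidence toward 1.0.
        entry["confidence"] += ALPHA * (1.0 - entry["confidence"])
    else:
        # Contradiction: decay confidence; a collapse below the
        # threshold is treated as preference drift and the value switches.
        entry["confidence"] -= ALPHA * entry["confidence"]
        if entry["confidence"] < 0.2:
            profile[key] = {"value": value, "confidence": ALPHA}
    return profile

p = {}
update_signal(p, "color", "navy")    # first mention  -> confidence 0.3
update_signal(p, "color", "navy")    # reinforced     -> confidence 0.51
update_signal(p, "color", "olive")   # contradiction  -> decays, no switch yet
```

Repeated contradictions would eventually push the old value below the drift threshold and replace it, which matches the "detect drift" behavior described above.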

Knowledge Graph Traversal

Stores product matching relationships, style hierarchies, occasion associations, etc., supporting semantic queries (e.g., "What occasions is this coat suitable for?").
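The occasion question from the example could map to a short Cypher traversal. The node labels and relationship type below (Product, SUITABLE_FOR, Occasion) are assumptions about StyleMind's schema; the query would be executed with the official neo4j Python driver against the running database.

```python
def occasions_for_product(product_name: str) -> tuple[str, dict]:
    """Build a parameterized Cypher query answering
    'What occasions is this product suitable for?'"""
    query = (
        "MATCH (p:Product {name: $name})-[:SUITABLE_FOR]->(o:Occasion) "
        "RETURN o.name AS occasion"
    )
    return query, {"name": product_name}

query, params = occasions_for_product("wool coat")
# With a live Neo4j instance (not run here):
# with driver.session() as session:
#     occasions = [r["occasion"] for r in session.run(query, params)]
```

Passing the product name as a `$name` parameter rather than interpolating it into the string is standard Cypher practice and lets Neo4j cache the query plan.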

RAG Pipeline

Combines vector-similarity retrieval with graph traversal, reorders the results against the user profile, and attaches source signals to each recommendation to ensure transparency.
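A hybrid ranker of this kind can be sketched as a weighted blend of a vector score and a graph score, with each result tagged by its sources. The weights, the binary graph score, and the field names are illustrative assumptions, not StyleMind's exact formula.

```python
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.hypot(*a) * math.hypot(*b))

def hybrid_rank(candidates, query_vec, profile_styles, w_vec=0.6, w_graph=0.4):
    ranked = []
    for c in candidates:
        vec_score = cosine(c["embedding"], query_vec)
        # Assumed graph signal: does the item's style sit in the user's
        # preferred-style neighborhood of the knowledge graph?
        graph_score = 1.0 if c["style"] in profile_styles else 0.0
        ranked.append({
            "name": c["name"],
            "score": w_vec * vec_score + w_graph * graph_score,
            "sources": ["vector"] + (["graph"] if graph_score else []),
        })
    return sorted(ranked, key=lambda r: r["score"], reverse=True)

items = [
    {"name": "blazer", "style": "minimal", "embedding": [1.0, 0.0]},
    {"name": "parka",  "style": "outdoor", "embedding": [0.9, 0.1]},
]
top = hybrid_rank(items, query_vec=[1.0, 0.0], profile_styles={"minimal"})
```

The `sources` list is what makes the transparency claim concrete: every recommendation can say whether it arrived via embedding similarity, graph structure, or both.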

Outfit Construction

Through the /outfit/{product_id} endpoint, it analyzes attributes around the anchor product, queries matching relationships, filters items mismatched with the profile, and generates complete styling suggestions.
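The outfit flow reduces to: look up items connected to the anchor by the graph's matching relationships, then drop candidates that clash with the profile. In this sketch the matching relationships are faked with a dict and the filter criteria (price cap, style set) are assumptions about what "mismatched with the profile" means.

```python
MATCHES = {  # stand-in for the graph's product-matching relationships
    "wool coat": [
        {"name": "slim chinos", "style": "minimal", "price": 60},
        {"name": "graphic hoodie", "style": "street", "price": 45},
    ],
}

def build_outfit(anchor: str, profile: dict) -> list[str]:
    budget = profile.get("max_price", float("inf"))
    styles = profile.get("styles", set())
    picks = [
        c["name"] for c in MATCHES.get(anchor, [])
        if c["price"] <= budget and (not styles or c["style"] in styles)
    ]
    return [anchor] + picks  # anchor first, then compatible items

outfit = build_outfit("wool coat", {"styles": {"minimal"}, "max_price": 100})
```

In the real system the dict lookup would be a Cypher traversal from the anchor product, and the surviving items would be handed to the dialogue LLM to phrase as styling advice.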


Section 05

Interactive Interfaces: Usage of Web API and CLI

Web API

  • POST /chat: SSE streaming chat (profile-aware RAG);
  • GET /persona/{user_id}: Get profile snapshot;
  • GET /outfit/{product_id}: Build outfit plan;
  • GET /health: Health check.

CLI Interface

Run uv run python -m stylemind to start. The CLI supports /help (command list), /persona (view profile), and /outfit <name> (build outfit); product names support Tab completion.


Section 06

Observability and Debugging: Tracking and Development Tools

  • Langfuse Cloud integration: Tracks dialogue spans, LLM token usage, profile confidence scores;
  • Local debugging: the /debug-dev command displays the session's profile signals in Rich tables, with no network access required.

Section 07

Innovations: Insights from Architectural Patterns

  1. Unified Knowledge Graph and Vector: Neo4j supports both graph traversal and vector search, avoiding data silos;
  2. Explicit Profile Management: Persists explicit profiles, improving recommendation interpretability and cross-session consistency;
  3. Dual LLM Architecture: Separates dialogue generation and structured extraction, balancing naturalness and reliability;
  4. Balance Between Streaming and Background Processing: SSE ensures a smooth experience, and profile updates are executed asynchronously without blocking responses.

Section 08

Conclusion and Quick Start: Project Value and Deployment Guide

StyleMind is a well-architected example of a vertical AI application, providing references for recommendation systems and personalized assistant development.

Quick Start

  1. Configure environment: Copy .env.example to .env, set Groq API key and Neo4j password;
  2. Start service: docker-compose up --build (automatically executes seed and embedding);
  3. Access: the app at http://localhost:8000 and the Neo4j Browser at http://localhost:7474.

StyleMind demonstrates the potential of combining LLMs with domain knowledge and is worth in-depth study by developers.