Zing Forum

Reading

Emotion Probes Visualiser: Real-time Visualization of Large Language Models' Emotional Trajectories

An open-source tool based on Anthropic's emotion-concept research that extracts and visualizes, in real time, the changes in emotion vectors during LLM text generation, providing an intuitive interface for understanding a model's internal emotional mechanisms.

Tags: LLM, emotion visualization, mechanistic interpretability, Anthropic, TinyLlama, hidden states, real-time
Published 2026-04-19 19:43 · Recent activity 2026-04-19 19:56 · Estimated read: 7 min

Section 01

[Introduction] Emotion Probes Visualiser: An Open-source Tool for Real-time Visualization of LLM Emotional Trajectories

This article introduces Emotion Probes Visualiser, an open-source tool based on Anthropic's emotion-concept research. It extracts and visualizes, in real time, the changes in emotion vectors during text generation by large language models (LLMs). The tool provides an intuitive interface for understanding a model's internal emotional mechanisms, with value for research, development, and education. It supports the TinyLlama model, uses a separated front-end/back-end architecture, and helps users intuitively 'see' the model's emotional tendencies during generation.


Section 02

Research Background: Quantifiable Exploration of LLM Emotional Mechanisms

Whether large language models can 'understand' or 'express' emotions is a hot topic in AI research. In the paper 'Emotion Concepts and their Function in a Large Language Model', the Anthropic team proposed a method: by comparing hidden-layer activation differences when the model processes emotion-evoking text versus neutral text, 'emotion probe' vectors representing specific emotion concepts can be extracted, providing a quantifiable tool for understanding the model's emotional mechanisms.


Section 03

Core Technical Principles: Emotional Vector Extraction and Real-time Visualization

1. Emotional Vector Extraction

Using the TinyLlama model (about 2 GB of VRAM), the tool compares hidden-layer activations for emotion-evoking text (e.g., 'I feel angry') against neutral text (e.g., 'I feel calm'), computing vectors that represent emotions such as anger and joy and capturing the model's internal neural representations of emotion.
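The contrastive step above can be sketched as follows. This is a minimal illustration of the vector arithmetic only, with random arrays standing in for the hidden states; in the actual tool the activations would come from a forward pass through TinyLlama, one row per prompt.

```python
import numpy as np

def extract_emotion_vector(emotional_acts, neutral_acts):
    """Contrastive probe: mean activation over emotion-evoking
    prompts minus mean activation over neutral prompts."""
    return np.mean(emotional_acts, axis=0) - np.mean(neutral_acts, axis=0)

# Toy stand-ins for hidden-layer activations (shape: prompts x hidden_dim).
rng = np.random.default_rng(0)
hidden_dim = 8
angry_acts = rng.normal(0.5, 1.0, size=(4, hidden_dim))
calm_acts = rng.normal(-0.5, 1.0, size=(4, hidden_dim))

anger_vector = extract_emotion_vector(angry_acts, calm_acts)
print(anger_vector.shape)  # (8,)
```

The resulting direction in hidden space is what the article calls an 'emotion probe' vector.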

2. Real-time Similarity Calculation

During text generation, the hidden state of each newly generated token is extracted in real time and matched against the predefined emotion vectors via cosine similarity, yielding continuous emotion scores that reflect the current emotional tendency.
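The per-token scoring step amounts to a cosine similarity between each hidden state and a probe vector. A minimal sketch (toy 3-dimensional vectors; real hidden states would be much wider):

```python
import numpy as np

def emotion_score(hidden_state, emotion_vector):
    """Cosine similarity between a token's hidden state and a probe vector."""
    num = float(np.dot(hidden_state, emotion_vector))
    denom = float(np.linalg.norm(hidden_state) * np.linalg.norm(emotion_vector))
    return num / denom if denom else 0.0

# One score per generated token gives the trajectory the UI plots.
anger = np.array([1.0, 0.0, 1.0])
token_states = [np.array([1.0, 0.0, 1.0]),   # aligned with the probe
                np.array([0.0, 1.0, 0.0])]   # orthogonal to it
trajectory = [emotion_score(h, anger) for h in token_states]
print(trajectory)  # approximately [1.0, 0.0]
```

Scores near 1 indicate strong alignment with the emotion direction, near 0 no relation, and negative values alignment with the opposite direction.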

3. Interactive Visualization

The front-end is built with React and Vite and receives back-end data via Server-Sent Events (SSE). It displays real-time line charts, per-token emotion scores, chart highlight interactions, and switching between emotion dimensions.


Section 04

System Architecture and Tech Stack: Implementation Details of Front-end and Back-end Separation

Backend (Python)

  • FastAPI provides high-performance asynchronous APIs
  • uv manages dependencies
  • Supports CUDA acceleration (optional, can fall back to CPU)
  • Preloads models at startup to reduce latency
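The SSE channel between backend and frontend can be sketched as below. The exact payload schema is an assumption (the article does not show the wire format); this only illustrates how one generation step might be serialized as a Server-Sent Events message.

```python
import json

def sse_event(token, scores, event="token"):
    """Format one SSE message carrying a generated token and its
    per-emotion scores (field names are illustrative, not the
    tool's actual schema)."""
    payload = json.dumps({"token": token, "scores": scores})
    return f"event: {event}\ndata: {payload}\n\n"

# A streaming endpoint (e.g. a FastAPI StreamingResponse with
# media_type="text/event-stream") would yield one such chunk per token.
chunk = sse_event("happy", {"joy": 0.62, "anger": -0.08})
print(chunk)
```

On the frontend, an `EventSource` listener would parse each `data:` line as JSON and append the scores to the live chart.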

Frontend (Node.js/React)

  • Vite build toolchain
  • SSE for real-time data streaming
  • Interactive charts to display emotional trajectories

Hardware Requirements

  • Python 3.11+
  • Node.js 18+
  • 8GB+ RAM (model loading takes about 4GB)
  • Optional GPU (CUDA-supported)

Section 05

Usage Scenarios and Value: Application Potential for Multiple Roles

Researchers

An experimental platform for verifying hypotheses about emotion manipulation and model interpretability: observe how prompts affect emotional trajectories and test the effectiveness of intervention strategies.

Developers

Understand emotional changes during generation, design more controllable AI applications (e.g., customer service robots maintaining neutral/positive tones, creative writing tools guiding specific emotional styles).

Educators

Transform abstract 'hidden layer activations' into visual emotional curves to help students understand the internal working mechanisms of LLMs.


Section 06

Future Development Directions: Function Expansion and Model Support

The developers plan to add the following features:

  • Support for larger models (Llama2, Mistral, etc.)
  • Emotion manipulation (actively guide generation to target emotions)
  • Trajectory data export
  • Dark mode

Note: larger models require re-extracting the emotion vectors, because hidden-space representations differ between models. The author maintains a sister repository, 'emotion-concepts', documenting the research reproduction process (vector extraction, manipulation, and scoring methods).


Section 07

Conclusion: An Open-source Tool from Research to Practical Use

Emotion Probes Visualiser transforms cutting-edge AI interpretability research into a practical open-source tool, allowing users to 'see' the emotional dimensions of LLMs. It opens up new possibilities for emotion-controllable generation, model debugging, and educational popularization, making it a project worth exploring in the fields of AI interpretability and emotion computing.