Zing Forum


Intelligent Customer Service Ticket Auto-Recommendation System Based on Large Language Models

An intelligent knowledge management system combining LLM and TF-IDF that can automatically analyze customer service tickets and recommend relevant solutions, while supporting local Ollama models and a fallback strategy.

Tags: LLM · Customer Service System · Knowledge Management · Ollama · Streamlit · FastAPI · TF-IDF · Intelligent Recommendation · Ticket Processing
Published 2026-04-06 12:13 · Recent activity 2026-04-06 12:21 · Estimated read: 8 min

Section 01

[Introduction] Core Overview of the Intelligent Customer Service Ticket Auto-Recommendation System Based on Large Language Models

This open-source project combines Large Language Models (LLM) and TF-IDF technology to build an intelligent customer service ticket auto-recommendation system. It supports local Ollama model deployment and a TF-IDF fallback strategy, aiming to solve problems such as time-consuming information retrieval and inconsistent responses in enterprise customer service scenarios, thereby improving customer service efficiency and business continuity.


Section 02

Project Background and Problem Definition

In enterprise customer service scenarios, technical support teams handle a large number of repetitive issues daily. Customer service staff often need to manually search for documents in a huge knowledge base, which is time-consuming and prone to inconsistent responses. According to industry statistics, customer service representatives spend an average of 30% of their time searching for information, and the learning curve for new employees can take several weeks. This project proposes an intelligent solution: using LLM to automatically analyze ticket content, match relevant solutions, and recommend them to customer service staff in real time.


Section 03

System Architecture and Technology Selection

The system adopts a front-end/back-end separation architecture. The tech stack includes:

  • Front-end: gradient-themed Web UI built with Streamlit
  • Back-end: high-performance API using FastAPI
  • AI inference engine: llama3.2:1b model, deployed locally via Ollama
  • Text matching: TF-IDF as a fallback when the LLM is unavailable
  • Knowledge base: support articles stored in CSV format

The core advantage of the dual-mode design is robustness: even if the LLM fails, the system continues to work via TF-IDF.
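The dual-mode design can be sketched as a small dispatcher that tries the LLM path first and degrades to TF-IDF on failure. This is an illustrative sketch, not the project's actual code; the function names are assumptions.

```python
def recommend(ticket_text, llm_search, tfidf_search):
    """Return (results, mode): LLM results when available, TF-IDF otherwise.

    llm_search / tfidf_search are any callables that take the ticket text
    and return a list of recommended articles.
    """
    try:
        return llm_search(ticket_text), "llm"
    except Exception:
        # LLM unreachable or errored: degrade gracefully to keyword matching
        return tfidf_search(ticket_text), "tfidf"
```

Because both search backends share the same call signature, the rest of the system never needs to know which mode produced the results.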


Section 04

Core Function Analysis

Intelligent Recommendation Engine

When a customer service agent enters a ticket description, the system performs four steps:

  1. Semantic understanding: the LLM extracts key intents and problem types
  2. Knowledge retrieval: searches the knowledge base for semantically relevant articles
  3. Relevance ranking: combines matching degree with timeliness
  4. Result presentation: displays recommended articles as cards, including title, summary, and confidence score
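The ranking and presentation steps can be sketched as follows. This is a simplified illustration, assuming relevance scores have already been computed upstream; the Article fields mirror the card contents described above, and the recency-bonus formula is an assumption, not the project's.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class Article:
    title: str
    summary: str
    created: date
    score: float = 0.0  # filled in by the ranking step

def rank(articles, relevance, recency_weight=0.1):
    """Step 3: combine matching degree with timeliness (newer = small bonus)."""
    today = date.today()
    for a in articles:
        age_days = (today - a.created).days
        a.score = relevance[a.title] + recency_weight / (1 + age_days / 365)
    return sorted(articles, key=lambda a: a.score, reverse=True)

def render_card(article):
    """Step 4: present one recommendation as a card-style string."""
    return f"{article.title}\n{article.summary}\nconfidence: {article.score:.2f}"
```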

Auto-start and Deployment

Users can run streamlit run app.py to start both FastAPI (port 8000) and the Streamlit interface (port 8501) simultaneously, lowering the deployment threshold.
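One common way to achieve this single-command start is for app.py to spawn the API server as a background process before rendering the UI. The sketch below assumes the FastAPI app lives at module path "api:app" and is served by uvicorn; both are assumptions about the project layout.

```python
import subprocess
import sys

def api_command(host="127.0.0.1", port=8000):
    """Build the command line that serves the FastAPI app via uvicorn.

    The module path "api:app" is a hypothetical example; adjust it to
    the project's actual module.
    """
    return [sys.executable, "-m", "uvicorn", "api:app",
            "--host", host, "--port", str(port)]

def start_api():
    """Spawn the API server in the background. Note: Streamlit re-runs
    the script on every interaction, so a real app should guard this
    (e.g. via st.session_state) to avoid launching duplicates."""
    return subprocess.Popen(api_command())
```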

Knowledge Base Management

Knowledge articles are stored in CSV files, containing fields such as title, content, tags, and creation time. They can be edited via Excel/Google Sheets without database experience.
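Loading such a CSV needs nothing beyond the standard library. The sketch below uses the field names listed above (title, content, tags, created); the exact header row and tag separator in the project's CSV may differ.

```python
import csv
import io

# A tiny inline sample standing in for the real knowledge-base file.
SAMPLE = """title,content,tags,created
VPN setup,Steps to configure the corporate VPN,vpn;network,2024-01-10
Password reset,How to reset a forgotten password,account,2024-02-05
"""

def load_articles(fileobj):
    """Read knowledge-base rows into plain dicts, splitting the tag field."""
    rows = []
    for row in csv.DictReader(fileobj):
        row["tags"] = row["tags"].split(";")
        rows.append(row)
    return rows
```

Because the file is plain CSV, non-technical staff can maintain it in Excel or Google Sheets, and the loader above only needs to re-read the file to pick up edits.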


Section 05

Practical Application Scenarios

This system is suitable for the following scenarios:

  • IT technical support: Helps desktop support teams quickly locate troubleshooting guides
  • E-commerce customer service: Automatically recommends common problem solutions such as return/refund policies and logistics inquiries
  • SaaS product support: Provides new users with function usage guidance and best practices
  • Internal IT service desk: Assists employees in solving issues like VPN, email, and software installation

Section 06

Technical Highlights and Best Practices

Local AI Priority Strategy

Running the LLM locally via Ollama ensures sensitive data never leaves the local server, protecting privacy; the llama3.2:1b model runs smoothly on consumer-grade hardware, keeping costs low.
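Talking to a local Ollama instance is a single HTTP call to its /api/generate endpoint on the default port 11434. The prompt wording below is a hypothetical example, not the project's actual prompt.

```python
import json
import urllib.request

OLLAMA_URL = "http://localhost:11434/api/generate"  # Ollama's default port

def build_payload(ticket_text, model="llama3.2:1b"):
    """Request body for Ollama's /api/generate endpoint."""
    prompt = ("Extract the key intent and problem type from this ticket:\n"
              + ticket_text)
    return {"model": model, "prompt": prompt, "stream": False}

def ask_ollama(ticket_text):
    """Send the ticket to the local model; data never leaves the machine."""
    req = urllib.request.Request(
        OLLAMA_URL,
        data=json.dumps(build_payload(ticket_text)).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["response"]
```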

Graceful Fallback Mechanism

Automatically switches to TF-IDF mode when the LLM service is unavailable, embodying the concept of defensive programming and ensuring business continuity.
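The fallback path needs no external service at all. A minimal TF-IDF matcher can be built with the standard library alone, as sketched below; the project may well use a library implementation instead, so treat this as an illustration of the technique rather than the actual code.

```python
import math
from collections import Counter

def tfidf_vectors(docs):
    """Compute simple TF-IDF vectors for pre-tokenized documents."""
    n = len(docs)
    df = Counter(t for doc in docs for t in set(doc))  # document frequency
    idf = {t: math.log(n / df[t]) + 1 for t in df}
    vecs = []
    for doc in docs:
        tf = Counter(doc)
        vecs.append({t: tf[t] / len(doc) * idf[t] for t in tf})
    return vecs

def cosine(a, b):
    """Cosine similarity between two sparse vectors (dicts)."""
    dot = sum(a[t] * b.get(t, 0.0) for t in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def best_match(query, corpus):
    """Index of the corpus document most similar to the query ticket."""
    docs = [query.lower().split()] + [d.lower().split() for d in corpus]
    vecs = tfidf_vectors(docs)
    scores = [cosine(vecs[0], v) for v in vecs[1:]]
    return max(range(len(scores)), key=scores.__getitem__)
```

Keyword matching of this kind misses paraphrases that the LLM would catch, but it is fast, dependency-free, and always available, which is exactly what a fallback path needs.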

Progressive Enhancement Architecture

Start with a TF-IDF-based basic version, introduce LLM capabilities gradually, and invest in further improvements only after the business value has been verified, avoiding a large upfront investment in complex AI engineering.


Section 07

Deployment and Usage Guide

System deployment requires Python 3.8+; installing Ollama and the llama3.2:1b model is optional. Steps:

  1. Clone the project and enter the directory
  2. Install dependencies: pip install -r requirements.txt
  3. Start the service: streamlit run app.py
  4. Access the Web interface: http://localhost:8501

The project also provides an online demo version for quickly trying out the features.

Section 08

Summary and Outlook

This open-source project demonstrates the method of combining LLM with traditional software engineering to build practical enterprise-level applications, with a design philosophy of simplicity, robustness, and scalability. Its core value lies in proving that AI technology can be implemented through reasonable design to build intelligent and reliable production systems. In the future, functions such as multi-language support, automatic ticket classification, and satisfaction feedback loops can be added according to needs, making it an ideal starting point for improving customer service efficiency.