Zing Forum


AI Meeting Summarizer: A Privacy-First Automated Meeting Note Tool Based on Local LLM

Introducing an open-source tool developed with C# .NET 8 that automatically generates structured meeting summaries using Ollama local large language models. It supports participant tracking, extraction of personal status updates, and to-do item generation, with full local processing to ensure data privacy.

Tags: Meeting Summary · Local LLM · Ollama · Privacy Protection · Automation Tool · .NET · Open Source Project
Published 2026-05-11 11:55 · Recent activity 2026-05-11 11:59 · Estimated read: 6 min

Section 01

Introduction: AI Meeting Summarizer — A Privacy-First Local LLM Meeting Note Tool

This article introduces the open-source tool ai-meeting-summarizer, designed to address the pain points of time-consuming meeting note-taking and privacy risks associated with cloud-based tools. Developed with C# .NET 8, this tool uses Ollama local large language models to achieve fully offline automatic summarization of meeting content. It supports participant tracking, extraction of personal status updates, and to-do item generation, with the core feature being privacy-first local processing.


Section 02

Project Background and Design Intent

In modern work environments, knowledge workers spend an average of more than 15 hours per week in meetings. Taking minutes by hand is time-consuming and prone to omissions, while existing cloud-based AI tools carry the compliance risk of uploading sensitive data. To fill this gap, the developers built on .NET 8 and the Ollama local inference engine, following the Clean Architecture pattern. They abstracted the workflow into a pipeline of reading, preprocessing, summarization, evaluation, and output to ensure maintainability and extensibility.
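The five-stage pipeline described above could be sketched in C# roughly as follows. This is an illustrative assumption of how the stages might compose, not the project's actual API; all interface and type names here are hypothetical.

```csharp
// Illustrative sketch of the read -> preprocess -> summarize ->
// evaluate -> output pipeline; names are hypothetical.
public sealed record Transcript;   // normalized dialogue
public sealed record Summary;      // structured meeting summary
public sealed record Evaluation;   // LLM-as-a-judge scores

public interface IPipelineStage<TIn, TOut>
{
    Task<TOut> RunAsync(TIn input, CancellationToken ct);
}

public sealed class SummaryPipeline(
    IPipelineStage<string, string> reader,            // file path -> raw text
    IPipelineStage<string, Transcript> preprocessor,  // detect + normalize format
    IPipelineStage<Transcript, Summary> summarizer,   // local LLM call
    IPipelineStage<Summary, Evaluation> evaluator,    // secondary judge model
    IPipelineStage<(Summary, Evaluation), string> writer) // -> output file path
{
    public async Task<string> RunAsync(string path, CancellationToken ct)
    {
        var raw        = await reader.RunAsync(path, ct);
        var transcript = await preprocessor.RunAsync(raw, ct);
        var summary    = await summarizer.RunAsync(transcript, ct);
        var evaluation = await evaluator.RunAsync(summary, ct);
        return await writer.RunAsync((summary, evaluation), ct);
    }
}
```

Each stage being a small, single-purpose interface is what makes the layered design testable with xUnit and Moq, as described later in the article.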


Section 03

Core Features and Technical Implementation

Multi-format Input Support

Supports plain-text dialogue (each line begins with the speaker's name) and a JSON array format compatible with Whisper output, with automatic format detection and conversion during preprocessing.
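For illustration, the two input shapes might look like the following. The JSON field names are an assumption based on typical Whisper-plus-diarization output, not necessarily the tool's exact schema.

```text
# Plain-text dialogue (speaker name before the colon):
Alice: Finished the auth refactor yesterday.
Bob: Still blocked on the CI pipeline.

# JSON array (Whisper-style; field names are illustrative):
[
  { "speaker": "Alice", "text": "Finished the auth refactor yesterday." },
  { "speaker": "Bob", "text": "Still blocked on the CI pipeline." }
]
```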

Structured Summary Generation

The output includes a participant list, a meeting overview, personal status updates (completed/in progress/blocked), and a to-do list, making the results both readable and actionable.
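A generated summary might be structured along these lines (the content and exact layout are illustrative, not actual tool output):

```text
Participants: Alice, Bob, Carol

Overview:
  Weekly sync covering the auth refactor and release planning.

Status updates:
  Alice - Completed: auth refactor | In progress: documentation
  Bob   - Blocked: CI pipeline flakiness

To-do:
  [ ] Bob to escalate the CI issue to the infra team
  [ ] Carol to draft release notes by Friday
```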

LLM-as-a-Judge Quality Evaluation

A secondary local model evaluates the generated summary, scoring it on five dimensions (0-2 points each), including completeness and accuracy, and providing qualitative feedback to guide optimization.
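The evaluation result could be modeled as a simple record like the one below. The dimension names beyond completeness and accuracy are assumptions for illustration; the project's actual rubric may differ.

```csharp
// Hypothetical model of an LLM-as-a-judge result, assuming five
// rubric dimensions scored 0-2 each (dimension names are illustrative).
public record SummaryEvaluation(
    int Completeness,
    int Accuracy,
    int Structure,
    int ActionItemQuality,
    int Conciseness,
    string Feedback)
{
    // Maximum possible score: 5 dimensions x 2 points = 10.
    public int TotalScore =>
        Completeness + Accuracy + Structure + ActionItemQuality + Conciseness;
}
```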


Section 04

Local Deployment and Privacy Assurance

All processing steps are completed locally; meeting data never leaves the user's device, making the tool suitable for content involving commercial secrets or NDA constraints. Users need a running Ollama service and can use open-source models such as Llama3, Mistral, or Qwen2.5. A 7B-parameter model such as Llama3 or Qwen2.5 is recommended, since it balances quality and performance on consumer-grade hardware; Mistral is an option for those with limited VRAM. The default model can be changed via configuration.
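A typical setup might look like the following. The model tags are illustrative; consult the Ollama model library for current names and sizes.

```text
# Start the Ollama service, then pull one or more models:
ollama serve
ollama pull llama3        # ~7B default; good quality/speed balance
ollama pull qwen2.5:7b    # alternative 7B model
ollama pull mistral       # lighter option for limited VRAM
```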


Section 05

Technical Architecture and Code Organization

The project follows a layered architecture: the Domain layer defines the core models (summary and evaluation results); the Application layer defines the pipeline interfaces and orchestrates the process; the Infrastructure layer contains the Ollama client, file I/O, and data processing. Unit test coverage exceeds 90%, using xUnit and Moq. Configuration lives in appsettings.json, which supports customizing the Ollama address, model, timeout, and other parameters.
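As a hypothetical illustration of the configuration surface, appsettings.json might contain keys along these lines (the actual key names and defaults may differ):

```json
{
  "Ollama": {
    "BaseUrl": "http://localhost:11434",
    "Model": "llama3",
    "TimeoutSeconds": 120
  }
}
```

The base URL shown is Ollama's default local port; keeping it configurable lets users point the tool at a different host or model without recompiling.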


Section 06

Use Cases and Limitations

Applicable Scenarios: Standups/sync meetings for technical teams, consulting firms handling sensitive client information, financial institutions with strict compliance requirements, etc.

Current Limitations: no integrated audio transcription (recordings must be converted to text first), no automatic separation of unlabeled speakers, constraints from the local model's context window (4K-8K tokens), and chunked processing still under development.


Section 07

Future Development Directions

Development plans include: Integrating Whisper to achieve end-to-end automation from audio to summary; adding speaker separation technology; chunk processing and multi-round summarization to break context limitations; developing a graphical interface; integrating collaboration tools like Jira, Confluence, and Slack.
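One possible shape for the planned chunked, multi-round summarization is a map-reduce pattern: summarize each context-window-sized chunk, then summarize the partial summaries. The sketch below is illustrative only and not part of the current codebase; the `llmSummarize` delegate stands in for whatever local-LLM call the project uses.

```csharp
// Illustrative map-reduce sketch for the planned chunked summarization.
public static class ChunkedSummarizer
{
    // Split transcript lines into pieces that fit the model's context window,
    // using a character budget as a rough proxy for tokens.
    public static IEnumerable<string> Chunk(IReadOnlyList<string> lines, int maxChars)
    {
        var buffer = new List<string>();
        var length = 0;
        foreach (var line in lines)
        {
            if (length + line.Length > maxChars && buffer.Count > 0)
            {
                yield return string.Join('\n', buffer);
                buffer.Clear();
                length = 0;
            }
            buffer.Add(line);
            length += line.Length + 1; // +1 for the newline
        }
        if (buffer.Count > 0) yield return string.Join('\n', buffer);
    }

    // Map: summarize each chunk. Reduce: summarize the partial summaries.
    public static async Task<string> SummarizeAsync(
        IReadOnlyList<string> lines, int maxChars,
        Func<string, Task<string>> llmSummarize)
    {
        var partials = new List<string>();
        foreach (var chunk in Chunk(lines, maxChars))
            partials.Add(await llmSummarize(chunk));
        return await llmSummarize(string.Join('\n', partials));
    }
}
```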


Section 08

Conclusion: A Pragmatic AI Application Balancing Privacy and Efficiency

ai-meeting-summarizer focuses on the core needs of specific scenarios, striking a balance between privacy and efficiency through a sound architecture and fully local processing. It is an open-source project worth watching for organizations that value data sovereignty while still wanting the convenience of AI.