RAGQA: A Professional Retrieval-Augmented Question Answering System for Cardiovascular Research

RAGQA is a retrieval-augmented generation (RAG) question answering system specifically designed for the field of cardiovascular research. It integrates MongoDB vector search, a multi-dimensional evaluation framework, and evaluator variability analysis, providing a reproducible research paradigm for medical AI applications.

Tags: RAG · Retrieval-Augmented Generation · Cardiovascular Medicine · MongoDB Vector Search · LLM Evaluation · Evaluator Variability · Medical AI
Published 2026-04-07 22:43 · Recent activity 2026-04-07 22:53 · Estimated read: 9 min

Section 01

Introduction: RAGQA, a Professional Retrieval-Augmented Question Answering System for Cardiovascular Research

RAGQA's core value lies in combining external knowledge bases with generative models: it balances answer accuracy with the flexibility of natural language generation, addressing key challenges of AI question answering in the medical field. To that end it integrates MongoDB vector search, a multi-dimensional evaluation framework, and evaluator variability analysis, providing a reproducible research paradigm for medical AI applications.


Section 02

Background: Unique Challenges of AI Question Answering in the Medical Field

  • Knowledge Accuracy: Medical information cannot tolerate errors; incorrect answers may lead to serious consequences
  • Domain Specialization: Involves a large number of professional terms and complex pathological mechanisms
  • Information Timeliness: Needs access to the latest research findings
  • Interpretability: Medical decisions require traceable evidence support

Traditional general-purpose question answering systems struggle to meet these needs, while RAG technology provides a new approach to solving the above problems by combining external knowledge bases with generative models.


Section 03

Overview of Core Features of the RAGQA Project

The RAGQA project implements a complete RAG pipeline and includes a comprehensive evaluation framework. Its core features are as follows:

  • Semantic retrieval based on MongoDB Atlas vector search
  • Support for multiple LLM backends (Gemma-2, Llama, DeepSeek, etc.)
  • Multi-dimensional answer quality evaluation system
  • Evaluator variability analysis framework
  • Complete statistical analysis and visualization tools

Section 04

Technical Architecture: Full Pipeline of Retrieval-Generation-Evaluation

Vector Retrieval Layer

The retrieval layer uses MongoDB Atlas vector search, whose advantages include mature vector indexing, flexible query interfaces, scalability, and transaction support. The thenlper/gte-large model generates 1024-dimensional text embeddings, well suited to professional medical literature.
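As a sketch, retrieval against Atlas can be expressed as a single `$vectorSearch` aggregation stage. The index name, field path, and candidate counts below are illustrative assumptions, not taken from the RAGQA source:

```python
# Build the aggregation stage a RAGQA-style retrieval step would run.
# "vector_index" and "embedding" are assumed names, not the project's.
def build_vector_search_stage(query_vector, limit=5,
                              index_name="vector_index",
                              path="embedding",
                              num_candidates=100):
    """Return a MongoDB Atlas $vectorSearch aggregation stage."""
    return {
        "$vectorSearch": {
            "index": index_name,
            "path": path,
            "queryVector": query_vector,
            "numCandidates": num_candidates,
            "limit": limit,
        }
    }

# In the real system the query vector would come from thenlper/gte-large
# (1024 dimensions), e.g. via sentence-transformers:
#   model = SentenceTransformer("thenlper/gte-large")
#   vec = model.encode("What causes atrial fibrillation?").tolist()
stage = build_vector_search_stage([0.0] * 1024)
```

The stage would then be passed to `collection.aggregate([stage])` on a collection whose documents carry precomputed embeddings.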

Generation Layer

Supports multiple LLM backends:

  • HuggingFace Transformers (e.g., google/gemma-2-2b-it)
  • Ollama local service (e.g., llama3.3, deepseek-r1)
  • vLLM batch processing (efficient large-scale inference)
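Whichever backend is chosen, the generation step consumes the same retrieved context. A minimal, hypothetical prompt-assembly helper (not the project's actual template) might look like:

```python
def build_rag_prompt(question, retrieved_docs):
    """Join retrieved passages into a grounded prompt for any LLM backend."""
    context = "\n\n".join(f"[{i + 1}] {doc}"
                          for i, doc in enumerate(retrieved_docs))
    return (
        "Answer the question using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

# The resulting string can be sent unchanged to a transformers pipeline,
# an Ollama HTTP request, or a vLLM batch job.
prompt = build_rag_prompt(
    "What is the role of LDL cholesterol in atherosclerosis?",
    ["LDL particles penetrate the arterial intima...",
     "Oxidized LDL is taken up by macrophages..."],
)
```

Keeping the prompt template backend-agnostic is what makes swapping between Gemma-2, Llama, and DeepSeek cheap.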

Evaluation Framework

The multi-dimensional LLM evaluation system scores answers along three dimensions, each on a 1-7 Likert scale:

  • Accuracy: Correctness of content
  • Clarity: Understandability of expression
  • Completeness: Comprehensiveness of information coverage

Pydantic is used to validate the structured outputs, ensuring scoring standardization.
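A minimal sketch of such structured scoring with Pydantic (v2 API; the field names are assumed, not copied from the project):

```python
from pydantic import BaseModel, Field

class AnswerEvaluation(BaseModel):
    """Likert-scale (1-7) scores the judge LLM must return as JSON."""
    accuracy: int = Field(ge=1, le=7)
    clarity: int = Field(ge=1, le=7)
    completeness: int = Field(ge=1, le=7)

# Parse the judge model's raw JSON; out-of-range or missing scores
# raise a ValidationError, which triggers a retry in a robust pipeline.
scores = AnswerEvaluation.model_validate_json(
    '{"accuracy": 6, "clarity": 7, "completeness": 5}'
)
```

Rejecting malformed judge output at parse time, rather than downstream, is what makes the scores comparable across evaluator models.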

Section 05

Evaluator Variability Analysis: Research on Differences in AI Judging AI

Experimental Design

To characterize how results differ when different LLMs act as evaluators, the following experiments were designed:

  1. Multi-model evaluation (Llama3.1, Llama3.3, etc.)
  2. Multi-replica runs (multiple evaluations of the same answer to observe internal consistency)
  3. Different inference settings (batch processing vs. sequential mode)
  4. Quantized model testing (impact of 4-bit AWQ quantization on judgment quality)
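The multi-replica analysis in step 2 reduces to descriptive statistics over repeated scores of the same answer. A stdlib-only sketch with hypothetical scores:

```python
from statistics import mean, stdev

def replica_consistency(scores_per_evaluator):
    """Summarize repeated scores of one answer, per judge model."""
    return {
        name: {"mean": mean(s), "stdev": stdev(s)}
        for name, s in scores_per_evaluator.items()
    }

# Hypothetical accuracy scores from four replica runs per judge model;
# a larger stdev means less internal consistency for that evaluator.
summary = replica_consistency({
    "llama3.1": [6, 6, 5, 6],
    "llama3.3": [7, 5, 6, 4],
})
```

The same summary, grouped by temperature or seed, exposes the randomness effects reported below.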

Key Findings

  • Inter-model differences: Systematic differences exist in the judgment criteria of models with different architectures/scales
  • Randomness impact: Temperature parameters and random seeds significantly affect evaluation results
  • Quantization effect: 4-bit quantization reduces computational costs but may change evaluation behavior
  • Task dependency: Some question-answer types are prone to evaluation disagreements

Section 06

Practical Application Scenarios and Value of RAGQA

Medical Researchers

  • Quickly retrieve literature and knowledge in the cardiovascular field
  • Obtain background information when verifying hypotheses
  • Assist in literature review and knowledge organization

AI System Developers

  • Learn to build domain-specific RAG systems
  • Understand the design ideas of multi-dimensional evaluation frameworks
  • Master the methodology of evaluator variability analysis

Evaluation Method Researchers

  • Gain an in-depth understanding of the limitations of LLM-as-a-Judge
  • Explore more reliable automatic evaluation schemes
  • Provide empirical evidence for the standardization of evaluation protocols

Section 07

Highlights of Technical Implementation

Modular Design

Clear code structure with core modules including:

  • RAG_Mongodb.py: Core RAG system implementation
  • RAG_poblate_db.py: Database population and index construction
  • LLM_answer_supervised_evaluation_strucutred_output.py: Supervised evaluation

Configuration Management

Configuration management via environment variables and .env files:

  • MongoDB connection settings
  • Model selection and parameter adjustment
  • Evaluation parameter configuration
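After a `.env` loader has populated the process environment, configuration lookup is a thin wrapper over `os.environ`. The variable names below are assumptions; the project's actual `.env` keys may differ:

```python
import os

def load_config():
    """Read settings from the environment, with development defaults."""
    return {
        "mongodb_uri": os.environ.get("MONGODB_URI",
                                      "mongodb://localhost:27017"),
        "llm_model": os.environ.get("LLM_MODEL", "google/gemma-2-2b-it"),
        "eval_temperature": float(os.environ.get("EVAL_TEMPERATURE", "0.0")),
    }

os.environ["LLM_MODEL"] = "llama3.3"   # e.g. set by a .env loader
config = load_config()
```

Centralizing defaults in one function keeps model switching and parameter sweeps to a single environment change.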

Robustness Design

  • Automatic retry mechanism: Auto-retry when evaluation fails
  • Batch processing support: Efficiently handle large-scale evaluation tasks
  • Quantization support: Run large models in resource-constrained environments
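The automatic retry mechanism can be sketched as a small wrapper around any flaky evaluation call (the helper below is illustrative, not the project's implementation):

```python
import time

def with_retries(fn, attempts=3, delay=0.1):
    """Re-run a failing evaluation call, re-raising after the last attempt."""
    for i in range(attempts):
        try:
            return fn()
        except Exception:
            if i == attempts - 1:
                raise
            time.sleep(delay)

# Simulate a judge call that fails twice (e.g. malformed JSON) then succeeds.
calls = []
def flaky_judge():
    calls.append(1)
    if len(calls) < 3:
        raise RuntimeError("malformed judge output")
    return "ok"

result = with_retries(flaky_judge)
```

Combined with Pydantic validation, this turns transient judge failures into bounded retries instead of dropped evaluations.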

Section 08

Open Source Ecosystem and Project Conclusion

Open Source Ecosystem

Following the principles of open science, a complete open-source implementation is provided:

  • Code and configuration files are public
  • Detailed README documentation and examples
  • Clear dependencies for easy reproducibility
  • Complete statistical analysis and visualization scripts

The project builds on community frameworks and services such as MongoDB Atlas and HuggingFace Transformers, reflecting the power of open collaboration.

Conclusion

RAGQA demonstrates the construction method of domain-specific question answering systems. By combining advanced RAG technology with a rigorous evaluation framework, it provides practical tools for cardiovascular researchers while promoting the development of AI evaluation methodologies. Its evaluator variability analysis provides empirical data for improving automatic evaluation systems, which is of great significance to medical AI applications.