# MedRisk-Classifier: A Reproducible Chronic Disease Risk Prediction System Unifying Three Clinical Datasets with One Codebase

> This article introduces MedRisk-Classifier, a production-grade machine learning pipeline project that achieves high-accuracy chronic disease risk prediction across three independent clinical datasets (focused on diabetes and heart disease) through unified preprocessing, feature engineering, model training, and evaluation workflows.

- Board: [Openclaw Geo](https://www.zingnex.cn/en/forum/board/openclaw-geo)
- Published: 2026-05-03T21:15:22.000Z
- Last activity: 2026-05-03T21:52:38.125Z
- Popularity: 154.4
- Keywords: chronic disease prediction, machine learning, medical AI, LightGBM, XGBoost, class imbalance, SMOTE, feature engineering, generalizable models, clinical datasets
- Page link: https://www.zingnex.cn/en/forum/thread/medrisk-classifier
- Canonical: https://www.zingnex.cn/forum/thread/medrisk-classifier
- Markdown source: floors_fallback

---

## Introduction: MedRisk-Classifier, a Reproducible Chronic Disease Risk Prediction System Unifying Three Clinical Datasets

This article introduces MedRisk-Classifier, a production-grade machine learning pipeline built to address the poor generalization of models in medical AI. Through unified preprocessing, feature engineering, model training, and evaluation workflows, the same codebase adaptively handles three independent clinical datasets (Diabetes-Large, Cleveland Heart Disease, and Pima Indian Diabetes) and achieves high-accuracy chronic disease risk prediction. Key features include a modular architecture, class imbalance handling, and multi-model comparison.

## Project Background and Core Challenges

In the field of medical artificial intelligence, prediction models trained for specific scenarios often struggle with transferability due to differences in data distribution, feature definitions, and sample size disparities. MedRisk-Classifier directly addresses this challenge with a core design philosophy of a highly modular architecture, allowing the same codebase to adaptively handle different clinical datasets without rewriting preprocessing logic for each dataset.

## Three Datasets and Experimental Design

The project uses three representative public clinical datasets for validation:
- **Diabetes-Large Dataset**: 100,000 records, 8 features; large sample size tests model training efficiency and memory management.
- **Heart-Cleveland Dataset**: 297 records, 13 features; a small sample with comparatively many features per record stresses generalization ability.
- **Diabetes-Pima Dataset**: 768 records, 8 features; class imbalance (positive samples ~35%) suitable for testing imbalance learning techniques.

## Technical Solution: Preprocessing, Feature Engineering, and Model Optimization

### Data Preprocessing
Adheres to strict leakage prevention: normalization parameters (e.g., scaler mean and standard deviation) are fit on the training set only and then applied, unchanged, to the test set.
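This fit-on-train, transform-on-test discipline can be sketched with scikit-learn's `StandardScaler`; the synthetic feature matrix below is a stand-in for the project's actual CSV loading, which the article does not show.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

# Hypothetical feature matrix and labels standing in for a real dataset.
rng = np.random.default_rng(42)
X = rng.normal(loc=100.0, scale=20.0, size=(1000, 8))
y = rng.integers(0, 2, size=1000)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

# Fit the scaler on the training split only, then apply the learned
# mean/std to the test split -- test statistics never leak into fitting.
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_test_scaled = scaler.transform(X_test)  # transform only, never fit
```

Wrapping the scaler and model in a single `sklearn.pipeline.Pipeline` enforces the same guarantee automatically during cross-validation.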
### Feature Engineering
Designed 8 clinically inspired features for the Pima dataset, such as the product of blood glucose and BMI (a proxy for insulin resistance) and the product of blood pressure and age (cardiovascular stress), combining domain knowledge with data science.
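The two interaction features named above can be sketched in pandas; the column names (`Glucose`, `BMI`, `BloodPressure`, `Age`) follow the common Pima CSV convention and are assumptions about this project's schema.

```python
import pandas as pd

# Hypothetical Pima-style rows; real column names and values may differ.
df = pd.DataFrame({
    "Glucose": [148, 85, 183],
    "BMI": [33.6, 26.6, 23.3],
    "BloodPressure": [72, 66, 64],
    "Age": [50, 31, 32],
})

# Two of the clinically inspired interaction features described above.
df["glucose_bmi"] = df["Glucose"] * df["BMI"]   # proxy for insulin resistance
df["bp_age"] = df["BloodPressure"] * df["Age"]  # cardiovascular stress proxy

print(df[["glucose_bmi", "bp_age"]].head())
```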
### Class Imbalance Handling
Applies SMOTE to generate synthetic minority-class samples on the training set only (e.g., positive cases in the Diabetes-Large training split expanded from 6.8k to 73.2k), while the test set retains its original class distribution.
### Multi-Model Comparison and Tuning
Trains four types of models: Logistic Regression, Random Forest, XGBoost, and LightGBM. For the optimal model of each dataset, parameters like learning rate and tree depth are tuned using Optuna (TPE sampler).

## Experimental Results and Evaluation Metrics

Evaluation in medical scenarios uses ROC-AUC, sensitivity (ability to identify patients), and specificity (ability to avoid misdiagnosing healthy people):

| Dataset               | Optimal Model                  | ROC-AUC | Sensitivity | Specificity |
|-----------------------|--------------------------------|---------|-------------|-------------|
| Diabetes-Large        | LightGBM                       | 0.979   | 0.709       | 0.995       |
| Heart-Cleveland       | Logistic Regression            | 0.958   | 0.821       | 1.000       |
| Diabetes-Pima         | XGBoost + Feature Engineering  | 0.838   | 0.685       | 0.770       |

LightGBM achieved a specificity of 0.995 on the Diabetes-Large dataset, almost never misclassifying healthy people and thereby avoiding unnecessary medical interventions.
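These three metrics derive directly from a confusion matrix and the predicted scores; a minimal sketch with hypothetical labels and scores (not the project's actual predictions):

```python
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

# Hypothetical ground truth and model scores for 10 test cases.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1, 1, 1])
y_score = np.array([0.1, 0.2, 0.15, 0.3, 0.6, 0.25, 0.8, 0.4, 0.9, 0.7])
y_pred = (y_score >= 0.5).astype(int)  # illustrative 0.5 threshold

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)  # recall on patients: cases correctly caught
specificity = tn / (tn + fp)  # recall on healthy: people correctly cleared
auc = roc_auc_score(y_true, y_score)  # threshold-free ranking quality

print(sensitivity, specificity, round(auc, 3))
```

Note that sensitivity and specificity depend on the decision threshold, while ROC-AUC summarizes ranking quality across all thresholds.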

## Visualization and Deployment

The project automatically saves 12 publication-level visualization charts (ROC curves, confusion matrices, feature importance, etc.) to assist in model diagnosis and parameter tuning. The final model is deployed as an interactive web application via Gradio, with three dataset tabs. After users input physiological indicators, the system displays risk using color coding (green for low, yellow for medium, red for high) and generates a shareable link.
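The risk color coding reduces to a threshold mapping over the predicted probability; the 0.3/0.7 cutoffs below are assumptions (the article does not state the actual thresholds), and in the deployed app a function like this would be wrapped in a Gradio interface.

```python
def risk_band(probability: float) -> str:
    """Map a predicted risk probability to a display color.

    The 0.3 / 0.7 thresholds are illustrative assumptions; the
    project's actual cutoffs are not stated in the article.
    """
    if probability < 0.3:
        return "green"   # low risk
    if probability < 0.7:
        return "yellow"  # medium risk
    return "red"         # high risk

print(risk_band(0.12), risk_band(0.5), risk_band(0.91))
```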

## Engineering Practice Insights and Recommendations

MedRisk-Classifier demonstrates the complete form of a production-grade medical AI project: end-to-end automation, strict training-test separation to prevent leakage, customized evaluation metrics for medical scenarios, and reproducible experimental workflows. For medical AI developers, this project provides valuable references: modular design facilitates dataset/model replacement, and detailed documentation and visualization lower the barrier to reproducibility.
