Zing Forum


Aether Oncology: When AI Meets Tumor Early Screening—How to Rebuild Doctor-Patient Trust with 'Explainable' and 'Auditable' AI

Aether Oncology is a breast cancer screening AI system built by a Brazilian developer. Drawing lessons from IBM Watson's 2017 failure, it proposes a new paradigm of 'AI-assisted decision-making rather than replacing doctors', leveraging MLOps for active monitoring, XAI with explainable radar charts, and HIPAA-level security compliance.

Tags: Medical AI · Tumor Screening · Machine Learning · MLOps · Explainable AI · XAI · Breast Cancer · HIPAA · EU AI Act · Data Drift
Published 2026-05-16 09:55 · Recent activity 2026-05-16 09:58 · Estimated read 4 min

Section 01

Aether Oncology: How 'Explainable' & 'Auditable' AI Rebuilds Trust in Tumor Early Screening

Aether Oncology, a breast cancer screening AI system developed by a Brazilian developer, responds to IBM Watson's 2017 failure by advocating an 'AI-assisted decision-making rather than replacing doctors' paradigm. It leverages MLOps for active monitoring, XAI with explainable radar charts, and HIPAA-level security compliance to rebuild doctor-patient trust in medical AI.


Section 02

Lessons from IBM Watson's Failure

IBM Watson for Oncology was discontinued in 2017 after issuing 'unsafe' treatment recommendations: its black-box system lacked transparency, clinical context, and governance, and had no awareness of data drift, so its performance decayed silently. Aether Oncology learned from this failure: it positions AI as a risk assessor, not an autonomous diagnoser, out of respect for medical ethics, because a false negative in cancer screening means a lost opportunity for early treatment.


Section 03

Technical Architecture: MLOps as Core Infrastructure

Aether uses a Remote-First, Local-Fallback decoupled inference architecture (Hugging Face API with a local PyTorch fallback). It employs Pydantic and Pandera for strict data contracts and records every prediction in tamper-proof audit logs that are end-to-end traceable via a Request ID. For XAI, it uses Integrated Gradients and radar charts to show each feature's contribution (e.g., tumor radius, texture) to a prediction. It also monitors data drift with the Kolmogorov-Smirnov test and alerts on significant distribution changes.
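The drift monitor described above can be sketched with SciPy's two-sample Kolmogorov-Smirnov test. The feature, sample sizes, and alpha threshold below are illustrative assumptions, not Aether's actual configuration:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_drift(reference: np.ndarray, live: np.ndarray, alpha: float = 0.05) -> bool:
    """Flag drift when the two-sample KS test rejects 'same distribution'."""
    statistic, p_value = ks_2samp(reference, live)
    return bool(p_value < alpha)

rng = np.random.default_rng(42)
reference = rng.normal(loc=14.0, scale=3.5, size=1000)  # training-time tumor radii
stable = rng.normal(loc=14.0, scale=3.5, size=500)      # live data, same distribution
shifted = rng.normal(loc=17.0, scale=3.5, size=500)     # live data, mean has drifted

print(detect_drift(reference, stable))   # usually no alert for in-distribution data
print(detect_drift(reference, shifted))  # True -> raise an alert
```

In production, a check like this would run per feature on a sliding window of recent requests, with the alert wired into the MLOps monitoring stack rather than printed.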


Section 04

Safety & Compliance: HIPAA & EU AI Act Alignment

Aether implements HIPAA-level security (strict CORS, payload cleaning, container vulnerability scans). It proactively aligns with EU AI Act, self-classifying as a 'high-risk system' (medical field) and implements risk management, data governance, technical documentation, and transparency requirements to reduce global deployment legal risks.
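As a sketch of the 'payload cleaning' idea, the minimal sanitizer below redacts fields that could carry protected health information before a request is written to an audit log. The field names and function are hypothetical illustrations, not Aether's actual implementation:

```python
# Fields that may carry protected health information (PHI) and must never
# reach audit logs; this denylist is a hypothetical example.
PHI_FIELDS = {"patient_name", "date_of_birth", "ssn", "address", "email"}

def sanitize_payload(payload: dict) -> dict:
    """Return a copy of the request payload with PHI fields redacted."""
    return {
        key: "[REDACTED]" if key in PHI_FIELDS else value
        for key, value in payload.items()
    }

request = {
    "request_id": "a1b2c3",
    "patient_name": "Jane Doe",
    "tumor_radius_mean": 14.2,
}
print(sanitize_payload(request))
# {'request_id': 'a1b2c3', 'patient_name': '[REDACTED]', 'tumor_radius_mean': 14.2}
```

Redacting rather than deleting keys keeps the log schema stable, so the Request ID remains traceable without the log ever storing PHI.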


Section 05

Performance: Prioritizing Recall Rate

In cancer screening, Aether prioritizes recall (97.2% in the current version) over other metrics to minimize false negatives; its F1 score is 96.5% and its ROC-AUC is 99.1%. The trade-off of accepting more false positives in exchange for fewer false negatives is ethically justified: a false positive may cause anxiety, but a false negative misses the window for life-saving early treatment.
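The recall-first trade-off can be made concrete with a toy confusion matrix: lowering the decision threshold converts false negatives into true positives at the cost of more false positives. The counts below are illustrative, not Aether's reported figures:

```python
def screening_metrics(tp: int, fp: int, fn: int) -> dict:
    """Recall, precision, and F1 from confusion-matrix counts."""
    recall = tp / (tp + fn)     # share of real cancers that were caught
    precision = tp / (tp + fp)  # share of alerts that were real cancers
    f1 = 2 * precision * recall / (precision + recall)
    return {"recall": recall, "precision": precision, "f1": f1}

# Conservative threshold: few false alarms, but 10 of 100 cancers missed.
high_threshold = screening_metrics(tp=90, fp=5, fn=10)
# Lower threshold: more false alarms, but only 2 of 100 cancers missed.
low_threshold = screening_metrics(tp=98, fp=20, fn=2)

print(f"{high_threshold['recall']:.3f}")  # 0.900
print(f"{low_threshold['recall']:.3f}")   # 0.980
```

Precision drops from about 0.947 to about 0.831 across the two settings, which is exactly the anxiety-for-lives trade the section argues is ethically justified in screening.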


Section 06

Developer Background & Medical AI Paradigm

Developed by Vitor Diogo Fonseca da Silva as a FIAP Pós-Tech ML engineering project, it shows industrial-grade maturity (CI/CD, Grype scans, MLflow tracking, C4 architecture docs) and is deployed (front end on Vercel, back end on Render: https://api.vitorsilva.engineer). It proposes a replicable medical AI paradigm: 1) human-AI collaboration, not replacement; 2) explainability as safety; 3) MLOps as the baseline; 4) compliance from day one. This balances technical innovation and clinical trust.