Zing Forum

Medical AI and End-of-Life Care: Potential Applications of Large Language Models in Pediatric Shared Decision-Making

This article explores how large language model (LLM) technology can assist in doctor-patient communication and shared decision-making in pediatric end-of-life care, analyzing the ethical boundaries and implementation paths for the technology's application.

Large language models · Pediatric care · End-of-life care · Shared decision-making · Doctor-patient communication · AI ethics · Medical AI
Published 2026-05-08 08:46 · Recent activity 2026-05-08 10:29 · Estimated read 10 min

Section 01

[Introduction] Medical AI and Pediatric End-of-Life Care: Potential Applications of LLMs in Shared Decision-Making

This article explores the potential applications of large language model (LLM) technology in assisting doctor-patient communication and shared decision-making in pediatric end-of-life care, analyzing the ethical boundaries and implementation paths for the technology's application. It focuses on the communication dilemmas in pediatric end-of-life care (limited expression by children, barriers to shared decision-making), proposes a model for dynamically assessing children's decision-making ability, discusses the value of LLMs as communication aids, and emphasizes that technology should serve human nature while retaining the final decision-making authority of the medical team.

Section 02

Background: Communication Dilemmas in Pediatric End-of-Life Care

In pediatric medical settings, end-of-life care has always been a challenging issue. Because of differences in age and cognitive development, minor patients often struggle to fully express their wishes regarding treatment plans. Under the traditional medical decision-making model, decision-making authority tends to be delegated entirely to parents or guardians, but in recent years the medical community has increasingly recognized that respecting the child's own right to participate in decisions is equally important.

This "shared decision-making" model requires in-depth communication among medical staff, parents, and the child. In practice, however, doctors face multiple barriers, including time pressure, uneven communication skills, and children's limited ability to express themselves. Ensuring medical quality while making the child's voice heard has become an urgent clinical problem.

Section 03

Core Research: Dynamic Assessment of Decision-Making Ability

This study, included in Seton Hall University's Scholarship Repository, proposes a participation model based on children's decision-making ability. The study points out that the decision-making ability of minors is not simply "present" or "absent" but a continuous spectrum that changes with age, illness, and context. Even young children can express meaningful views on their treatment preferences with appropriate support.

The study emphasizes that medical teams need to establish a systematic assessment framework to identify which children have the cognitive basis to participate in decision-making and create space for their expression. This assessment should not rely solely on age thresholds but should comprehensively consider the child's understanding, judgment, and stability of expression.
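The spectrum idea described above can be made concrete with a small data model. This is a minimal illustrative sketch, not the study's method: the three dimensions (understanding, judgment, stability of expression) come from the text, but the weights, thresholds, and tier names below are assumptions.

```python
# Hypothetical sketch: the study describes no concrete scoring scheme, so
# the weights, thresholds, and tier labels here are illustrative only.
from dataclasses import dataclass

@dataclass
class AssessmentSnapshot:
    """One point-in-time assessment of a child's decision-making capacity."""
    understanding: float   # comprehension of condition and options, 0.0-1.0
    judgment: float        # ability to weigh consequences, 0.0-1.0
    stability: float       # consistency of expressed preferences, 0.0-1.0

def participation_level(snapshot: AssessmentSnapshot) -> str:
    """Map a multi-dimensional assessment onto a participation tier,
    treating capacity as a spectrum rather than a present/absent flag."""
    score = (snapshot.understanding + snapshot.judgment + snapshot.stability) / 3
    if score >= 0.7:
        return "co-decision"        # child participates as a near-equal voice
    if score >= 0.4:
        return "assent-seeking"     # preferences actively elicited and weighed
    return "expression-supported"   # views heard and documented, adults decide
```

Because the assessment is dynamic, the same child would be reassessed over time and could move between tiers as illness and context change, rather than being assigned a fixed status by age.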

Section 04

Technical Intervention Points of Large Language Models

Large language model (LLM) technology offers a potential way out of these dilemmas. First, LLMs can help medical staff interpret children's non-standard expressions through natural language processing. When describing symptoms or feelings, children often use metaphorical, vague, or emotional language, and traditional consultation models easily miss this key information.

Second, LLMs can assist in generating age-appropriate explanations of medical information. For children of different ages, the system can automatically adjust the complexity of medical terms and explain the condition and treatment options in language that the child can understand, thereby enhancing the authenticity and effectiveness of their decision-making participation.
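One way such age-adjusted explanation could work is through prompt templating. The article names no specific model or prompt design, so everything below — the age bands, the style instructions, the function name — is a hedged illustration of the idea, not an actual implementation.

```python
# Illustrative sketch only: age bands and style instructions are assumptions.
READING_LEVELS = {
    # (min_age, max_age): instruction given to the language model
    (3, 6): "Use very short sentences, familiar everyday words, and gentle "
            "comparisons to play or daily routines.",
    (7, 11): "Use simple sentences and explain any medical word with a "
             "concrete everyday example.",
    (12, 17): "Use plain language, define medical terms once, and be honest "
              "about uncertainty.",
}

def build_explanation_prompt(age: int, medical_text: str) -> str:
    """Wrap clinician-approved medical content in an age-appropriate
    instruction for an LLM. Raises ValueError outside supported ages."""
    for (lo, hi), style in READING_LEVELS.items():
        if lo <= age <= hi:
            return (
                f"Rewrite the following explanation for a {age}-year-old "
                f"patient. {style} Do not add medical claims that are not "
                f"in the source.\n\nSource: {medical_text}"
            )
    raise ValueError(f"No reading-level profile for age {age}")
```

Note the constraint in the prompt against adding unsourced medical claims: keeping the clinical content clinician-approved and letting the model adjust only the register is one way to respect the auxiliary role the article assigns to AI.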

In addition, conversational AI can serve as a "communication bridge", giving children a low-pressure channel for expression outside formal doctor-patient communication. Children may be more willing to reveal their true thoughts to a "neutral" AI system, and this feedback, once integrated, can help the medical team formulate plans that better align with the child's wishes.

Section 05

Ethical Boundaries and Risk Considerations

Introducing AI into pediatric end-of-life decision-making is not without controversy. The primary concern is algorithmic bias—biases in training data may lead to systematic deviations in the system's recommendations for certain groups of children. Second, over-reliance on AI may weaken the humanistic connection between doctors and patients, and emotional support has irreplaceable value in end-of-life care.

Privacy protection is also crucial. Children's medical information and psychological state data are extremely sensitive, and any technical solution must be based on strict data security guarantees. In addition, the recommendations of AI systems should always be positioned as auxiliary references, and the final decision-making authority must remain in the hands of the medical team with professional judgment and humanistic care capabilities.

Section 06

Implementation Path: From Pilot to Standardization

The application of this technology requires gradual verification. A reasonable first step is to pilot AI-assisted pediatric doctor-patient communication in non-end-of-life chronic disease management scenarios to accumulate empirical data, while establishing an interdisciplinary ethics review mechanism so that technological evolution and medical ethics develop in sync.

Medical institutions should formulate clear AI usage guidelines, define the boundaries of system capabilities, and train medical staff on their role positioning in human-machine collaboration. Technology suppliers need to improve the transparency of the model's decision-making process so that the medical team can understand the logic behind AI recommendations.
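The transparency requirement above implies that each AI suggestion should be traceable: what inputs it saw, what it recommended, and why. The article specifies no record format, so this sketch of an auditable suggestion log is purely hypothetical; all field names are assumptions.

```python
# Hypothetical sketch of an auditable record for one AI-assisted suggestion.
# Field names are illustrative, not taken from the article.
import json
from datetime import datetime, timezone

def log_ai_suggestion(case_id: str, inputs_summary: str,
                      suggestion: str, rationale: str) -> str:
    """Serialize one AI suggestion as a reviewable JSON record. The
    'decision_authority' field makes explicit that the output is advisory
    and the medical team retains the final decision."""
    record = {
        "case_id": case_id,
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "inputs_summary": inputs_summary,   # what the model was shown
        "suggestion": suggestion,           # what it recommended
        "rationale": rationale,             # the logic shown to clinicians
        "decision_authority": "medical_team",  # AI output is advisory only
    }
    return json.dumps(record, ensure_ascii=False)
```

A log of this shape would let an ethics review board reconstruct, after the fact, what the system contributed to any given care decision.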

Section 07

Conclusion: Technology Serves Human Nature

The core of pediatric end-of-life care has always been the protection of the dignity of life. The value of AI technologies such as large language models lies in amplifying rather than replacing the care capabilities of human medical staff. When technology can help children express their wishes more fully, help parents understand options more comprehensively, and help doctors balance multiple demands more accurately, it truly achieves meaningful application in medical scenarios.

The future development direction is not to let AI make decisions, but to let AI become a bridge that connects different voices and promotes true shared decision-making.