Ethical Boundaries of Medical AI: Can Chatbots Serve as Medical Ethical Agents?

Tags: Medical AI, Medical Ethics, Surrogate Decision-Making, Chatbots, Large Language Models, Generative AI, Healthcare Governance, Patient Autonomy, Palliative Care, AI Ethics
Published 2026-04-05 07:57 · Recent activity 2026-04-05 07:58 · Estimated read: 13 min
Section 01

[Introduction] Ethical Boundaries of Medical AI: Can Chatbots Serve as Medical Ethical Agents?

This article explores the ethical boundaries of generative AI in healthcare, distinguishing four levels of delegation: information support, preference elicitation, moral reasoning, and surrogate decision-making. It argues that the latter two should not be delegated to chatbots and proposes a corresponding governance framework.

Section 02

Current Status and Issues of Generative AI Entering Core Healthcare Domains

With the rapid development of large language model (LLM) technology, generative AI is shifting from a peripheral interface issue in healthcare to a core governance problem. Healthcare systems are using or piloting LLM chatbots for patient messaging, triage, documentation, health guidance, and information support, and regulators and standard-setting bodies increasingly require transparency, lifecycle risk management, and human oversight for high-risk medical AI systems. The hardest question, however, is not whether chatbots can summarize information, but whether they may legitimately participate in, or even replace, human ethical judgment when patients cannot make decisions for themselves. Recent commentary has asked whether chatbots trained on a patient's records, communications, or digital traces could act as medical surrogates. This paper, published in the Resp AI journal, analyzes that question in depth.

Section 03

Four-Level Delegation Framework: What Can Be Authorized, What Cannot?

The paper's core argument is that, at least by the standards recognized in medical ethics, health law, and institutional accountability, chatbots cannot act as medical surrogates. The authors distinguish four types of delegation in clinical dialogue:

  1. Information Support

    • Provide medical information, explain terminology, and summarize research for patients
    • Conditionally authorized under strict governance
  2. Preference Elicitation

    • Help patients clarify values, preferences, and treatment goals
    • Conditionally authorized under strict governance
  3. Moral Reasoning

    • Participate in ethical decision-making and weigh the values of treatment options
    • Should not be delegated to chatbots in principle
  4. Surrogate Authority

    • Make medical decisions on behalf of incapacitated patients
    • Should not be delegated to chatbots in principle

The first two levels can be conditionally authorized under a strict governance framework, while the latter two should not be delegated to chatbots as a matter of principle.
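The allow/deny split above can be summarized as a simple policy gate. The following sketch is purely illustrative and not from the paper; the enum names, the `governance_in_place` flag, and the `may_delegate` helper are all hypothetical labels for the four levels described in the text.

```python
from enum import Enum, auto

class DelegationLevel(Enum):
    """The paper's four levels of delegation in clinical dialogue."""
    INFORMATION_SUPPORT = auto()
    PREFERENCE_ELICITATION = auto()
    MORAL_REASONING = auto()
    SURROGATE_AUTHORITY = auto()

# Levels that may be conditionally authorized, and only under
# a strict governance framework.
CONDITIONALLY_PERMITTED = {
    DelegationLevel.INFORMATION_SUPPORT,
    DelegationLevel.PREFERENCE_ELICITATION,
}

def may_delegate(level: DelegationLevel, governance_in_place: bool) -> bool:
    """Return True only for supportive levels, and only when strict
    governance is actually in place; moral reasoning and surrogate
    authority are never delegable to a chatbot."""
    return level in CONDITIONALLY_PERMITTED and governance_in_place
```

Note that the check is conjunctive: even the two supportive levels return `False` without governance in place, mirroring the paper's "conditionally authorized under strict governance" phrasing, while the latter two levels return `False` unconditionally.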

Section 04

Why Can't Chatbots Be Ethical Agents? Four Core Reasons

The paper points out that the reason is not just technical unreliability, but that surrogate medical decision-making is a fiduciary, relational practice requiring institutional accountability, which demands:

1. Answerability

Surrogate decision-makers must be able to provide reasons for their decisions, accept challenges, and defend them when necessary. Chatbots lack this ability—they cannot truly 'take responsibility' or 'answer' why a decision was made in an ethical or legal sense.

2. Interpretive Humility

Medical decisions often involve subtle understanding of patients' values, cultural backgrounds, and life contexts. Human decision-makers need interpretive humility—awareness of the limits of their understanding and willingness to seek more information or advice when uncertain. Chatbots lack this self-awareness and humility.

3. Legal Standing

Surrogate decision-makers have a clear status and defined responsibilities in the legal system; if a decision goes wrong, there is a clear accountability path. Chatbots, which lack legal personhood, cannot bear such legal responsibility.

4. Responsibility-Bearing Agency

Most importantly, surrogate decision-makers must be able to bear responsibility—not only legal responsibility but also moral and relational responsibility. This includes genuine care for the patient's well-being and willingness to feel guilt, regret, and take remedial actions when decisions are wrong. Chatbots lack this subjectivity and emotional capacity.

Section 05

Supportive Roles of Chatbots in Healthcare

The paper does not entirely dismiss chatbots' role in healthcare; rather, it argues that they can provide valuable support in the following areas:

  • Prepare: Help collect and organize information needed for decision-making
  • Clarify: Explain complex medical concepts and options
  • Document: Record the decision-making process and reasons
  • Structure: Provide frameworks and structures for human deliberation

In these supportive roles, chatbots can enhance the capabilities of human decision-makers but cannot replace their judgment and authority.

Section 06

Governance Framework Recommendations for High-Risk Healthcare Domains

The paper proposes a governance framework for designers, hospitals, and regulators, with the core being to separate acceptable supportive uses from prohibited decision-making delegation. This is particularly important in the following high-risk domains:

Intensive Care

Patients may be unconscious or unable to communicate, requiring surrogate decisions. Chatbots can assist in recording patients' advance directives and providing information on treatment options, but cannot participate in actual decision-making.

Oncology

Cancer treatment decisions involve complex trade-offs and value judgments. Chatbots can help explain treatment plans, side effects, and prognosis, but cannot replace ethical dialogues between patients (or surrogates) and doctors.

Palliative Care

End-of-life care decisions are extremely sensitive, involving deep ethical issues such as quality of life, pain management, and life extension. Chatbots can provide information support but should not participate in these profound value judgments.

Mental Health

Decision-making capacity of patients with mental illness may fluctuate or be impaired. Chatbots can help monitor symptoms and provide information on coping strategies, but cannot participate in judgments about treatment consent or hospitalization decisions.

Section 07

Three Key Insights for AI Healthcare Applications: Function vs. Ethics, Human-Machine Boundaries, Regulatory Directions

This paper has important guiding significance for the development of current AI healthcare applications:

1. Distinction Between Function and Ethics

Technical capability (what AI can do) and ethical permission (what it should do) are two different questions. Even if chatbots can generate seemingly reasonable medical advice, that does not mean they should be authorized to make ethical judgments.

2. Boundaries of Human-Machine Collaboration

The paper draws a clear boundary for human-machine collaboration: AI can process information, provide support, and enhance human capabilities, but the final authority for ethical decisions must remain in human hands. This is not a matter of technical limitations but an understanding of the essence of medical ethics.

3. Regulatory Directions

For regulators and standard-setters, the paper recommends focusing on how to distinguish between supportive AI applications and decision-making AI applications, and formulating governance requirements accordingly. Transparency, interpretability, and human oversight have different meanings at different levels.

Section 08

Conclusion: Chatbots' Positioning—Support, Not Decision-Making

This paper provides an important ethical analysis framework for the application of generative AI in healthcare. It reminds us that while pursuing technological innovation, we cannot ignore the fundamental ethical foundations of medical practice—trust, relationship, accountability, and human care.

Chatbots have a place in healthcare, but that place is supportive rather than decision-making. They can help humans make better decisions, but they cannot be the decision-making agent themselves. This distinction is crucial for protecting patient rights, upholding medical ethics, and ensuring the responsible use of AI technology.

As AI technology deepens its application in healthcare, the arguments of this paper will become increasingly important. It is not just an academic discussion but a practical guide, providing a framework for designers, clinicians, hospital managers, and policymakers to think about the ethical boundaries of AI.