The paper argues that the problem is not merely technical unreliability: surrogate medical decision-making is a fiduciary, relational practice embedded in structures of institutional accountability, which demands:
1. Answerability
Surrogate decision-makers must be able to give reasons for their decisions, accept challenges to them, and defend them when necessary. Chatbots lack this capacity: they cannot genuinely 'take responsibility' for a decision or 'answer' for why it was made in any ethical or legal sense.
2. Interpretive Humility
Medical decisions often turn on a subtle understanding of a patient's values, cultural background, and life circumstances. Human decision-makers therefore need interpretive humility: an awareness of the limits of their own understanding and a willingness to seek further information or advice when uncertain. Chatbots lack this self-awareness and humility.
3. Legal Standing
Surrogate decision-makers hold a clearly defined status and set of responsibilities within the legal system, so when a decision goes wrong there is a clear path of accountability. Because chatbots are not legal persons, they cannot bear such legal responsibility.
4. Responsibility-Bearing Agency
Most importantly, surrogate decision-makers must be able to bear responsibility, not only legally but also morally and relationally. This includes genuine care for the patient's well-being and a willingness to feel guilt and regret, and to take remedial action, when a decision turns out badly. Chatbots lack this subjectivity and emotional capacity.