Panel: PERCEPTION AND UNDERSTANDING OF ARTIFICIAL INTELLIGENCE - THEOLOGICAL, PHILOSOPHICAL, ANTHROPOLOGICAL, AND ETHICAL PERSPECTIVES



879.11 - PERCEPTIONS OF AI IN CLINICAL DECISION-MAKING: ANSWERABILITY AND DEFERENCE

AUTHORS:
Miklavcic J. (University of Ljubljana, Faculty of Theology ~ Ljubljana ~ Slovenia)
Text:
As artificial intelligence becomes routine in clinical decision support, the ethical challenge is not only what AI can do, but how it is perceived in practice: as tool, advisor, or authority. This paper argues that responsibility in AI-assisted medicine should be understood as answerability, the clinician's duty to stand behind decisions with reasons that can be owned in the patient encounter. When AI outputs are treated as decisive, responsibility can be "buffered" ("the system said so"), and a deference paradox emerges: higher accuracy increases pressure to defer, while full deference weakens moral agency and care. The paper further highlights that many clinical choices cannot be settled by probabilities alone, because they involve values, meaning, and human vulnerability (e.g., threshold decisions, borderline cases, end-of-life contexts). Rather than framing these tensions as a simple "responsibility gap," it proposes an ecology of responsibility that links clinicians, institutions, and developers while keeping accountability visible at the point of care. Practical implications include reason-giving checkpoints, constrained use conditions, and governance that supports human judgment instead of replacing it.