This paper examines the increasingly widespread tendency to treat AI systems in medicine as epistemically authoritative, often at the expense of physicians' clinical judgment. It argues that this shift rests on a problematic conflation of statistical reliability with normative authority in clinical decision-making. While algorithmic systems can be highly effective at identifying patterns and estimating probabilities, clinical judgment involves far more than statistical calculation: it requires contextual understanding, practical reasoning, and the assumption of professional responsibility. By distinguishing algorithmic forms of knowledge from clinical understanding, the paper highlights the risks of automation bias and the danger of a gradual deprofessionalization of medical practice. Particular attention is given to questions of ethical responsibility in situations where clinical decisions are strongly shaped, or even implicitly justified, by algorithmic recommendations. Rather than rejecting artificial intelligence in medicine, the paper advocates an approach in which AI functions as a valuable but supportive epistemic resource, clearly subordinate to physicians' clinical judgment and moral accountability.