Abstract The question of whether the time has come to hang up the stethoscope is bound up in the promises of artificial intelligence (AI), promises that have so far proven difficult to deliver, perhaps because of the mismatch between the technical capability of AI and its use in real‐world clinical settings. This perspective argues that it is time to move away from discussing the generalised promise of disembodied AI and to focus on specifics. We need to examine how the computational method underlying AI, i.e. machine learning (ML), is embedded into tools, how those tools contribute to clinical tasks and decisions, and to what extent they can be relied on. Accordingly, we pose four questions that must be asked to make the discussion concrete and to understand how ML tools contribute to health care: (1) What does the ML algorithm do? (2) How is the output of the ML algorithm used in clinical tools? (3) What does the ML tool contribute to clinical tasks or decisions? (4) Can clinicians act on or rely on the ML tool? Two exemplar ML tools are examined to show how these questions can be used to better understand the role of ML in supporting clinical tasks and decisions. Ultimately, ML is just a fancy method of automation. We show that it is useful in automating specific and narrowly defined clinical tasks but likely incapable of automating the full gamut of decisions and tasks performed by clinicians.
Keywords: artificial intelligence, machine learning, software, medical device, clinical decision support systems, human factors and ergonomics