The AI Revolution in Medicine Needs a Translator. Here’s Why.

Artificial intelligence is sweeping through healthcare, promising a new era of predictive accuracy and personalized medicine. The hype is palpable. Yet, a crucial, inconvenient truth is emerging from the front lines: simply giving these powerful AI tools to doctors doesn't automatically lead to better medicine. In fact, sometimes it doesn’t help at all.

This isn't a failure of the technology, but a failure of translation. A thought-provoking article in npj Digital Medicine argues that we've missed a critical step. We’re building brilliant tools but forgetting to train the specialized experts needed to wield them safely and effectively (Marwaha et al., 2025). The authors contend that to truly unlock the potential of clinical AI, we need to create a new kind of medical specialist: the algorithmic consultant.

The Doctor-AI Disconnect: A Troubling Reality

The core problem is simple: expecting every physician to also become an expert in data science is unrealistic. The evidence for this disconnect is mounting.

The article highlights a study where an algorithm consistently outperformed surgeons in predicting post-surgical outcomes. Yet, when the surgeons were shown the algorithm's predictions, their own accuracy didn't improve (Brennan et al., 2019, as cited in Marwaha et al., 2025). Similarly, another recent study found that while a large language model (LLM) had superior diagnostic capabilities, clinicians who used the LLM as an assistant showed no significant improvement in their diagnostic performance (Goh et al., 2024, as cited in Marwaha et al., 2025).

Expecting doctors to interpret raw AI output is, as the authors suggest, like asking a primary care physician to read and clinically translate the unprocessed data from an MRI scan without the help of a radiologist. While efforts to create "explainable AI" with tools like "Model Facts" labels are well-intentioned, they often fall short. They can place an unrealistic burden on the physician and have not been shown to consistently improve decision-making or mitigate the negative effects of incorrect AI predictions (Jabbour et al., 2023, as cited in Marwaha et al., 2025).

Expecting doctors to interpret raw AI output is, as the authors suggest, like asking a primary care physician to read and clinically translate the unprocessed data from an MRI scan without the help of a radiologist.
— Jayson S. Marwaha, Department of Biomedical Informatics, Harvard Medical School

The Solution: A Clinical Pharmacist for Algorithms

To bridge this chasm, Marwaha et al. (2025) propose the role of the algorithmic consultant, drawing a powerful analogy to the clinical pharmacist. Pharmacists are indispensable experts who act as intermediaries, guiding physicians on complex medication use and governing a hospital’s entire drug supply. The algorithmic consultant would serve the same function for AI.

This new specialty would have two core responsibilities:

1. Point-of-Care Guidance 🧑‍⚕️

Figure 1A: The point-of-care workflow of an algorithmic consultant, modeled after that of an inpatient clinical pharmacist (Author's own work).

At the bedside, the consultant would act as a trusted advisor. When a physician faces a complex clinical scenario, they could call on this specialist to help select the most appropriate AI model from the hospital's arsenal. More importantly, the consultant would translate the model's often-opaque output—its predictions, probabilities, and confidence intervals—into a clear, clinically actionable recommendation, taking into account the specific nuances of the patient's case.

2. Institutional Governance 🏛️

Figure 1B: A clinical pharmacist's organizational governance responsibilities (e.g., managing an institution's formulary) and the parallel role of an algorithmic consultant in governing an institution's AI models through their lifecycle (Author's own work).

On a system level, the consultant would be the steward of the hospital's entire AI ecosystem. Much like a pharmacist manages a hospital's drug formulary, this specialist would be responsible for:

  • Vetting and deployment: Evaluating new AI models from vendors or academia to ensure they are safe, effective, and fair before they are used on the hospital's patients.

  • Monitoring and quality control: Continuously monitoring the performance of deployed models, watching for performance degradation or the introduction of bias as patient populations or clinical practices shift.

  • Safety: Acting as a crucial safeguard to prevent catastrophes, such as the real-world example of an AI transcription tool that began "hallucinating" and fabricating information in patient notes.
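The monitoring duty above is concrete enough to sketch. The snippet below is a minimal, hypothetical illustration of performance-drift surveillance, not anything from the article: it computes AUROC on a recent window of patients and flags the model for review when discrimination falls below a baseline by more than a tolerance. The tolerance value and the rank-sum AUROC formulation are illustrative assumptions.

```python
# Hypothetical drift-monitoring sketch; thresholds and data are illustrative.

def auroc(labels, scores):
    """AUROC via the rank-sum (Mann-Whitney) formulation, with tie averaging."""
    pairs = sorted(zip(scores, labels))
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    if n_pos == 0 or n_neg == 0:
        raise ValueError("need both outcome classes to compute AUROC")
    rank_sum = 0.0
    i = 0
    while i < len(pairs):
        j = i
        while j < len(pairs) and pairs[j][0] == pairs[i][0]:
            j += 1  # extend over a group of tied scores
        avg_rank = (i + j + 1) / 2  # average 1-based rank of the tie group
        rank_sum += avg_rank * sum(label for _, label in pairs[i:j])
        i = j
    return (rank_sum - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def drift_alert(baseline_auc, recent_labels, recent_scores, tolerance=0.05):
    """Flag the model if recent AUROC drops below baseline minus tolerance."""
    recent_auc = auroc(recent_labels, recent_scores)
    return recent_auc < baseline_auc - tolerance, recent_auc
```

In practice this check would run on a schedule against each deployed model's recent predictions, with alerts routed to the consultant for clinical review rather than triggering automatic decommissioning.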

Why This is Non-Negotiable for Virtual Care

This role becomes even more critical in the context of digital health and virtual care. As healthcare moves into the home, we increasingly rely on algorithms to power remote patient monitoring, predict emergencies, and personalize telehealth interventions.

Without an expert algorithmic consultant, a health system could easily deploy a remote monitoring algorithm that, while accurate for one demographic, is biased against another, worsening the health disparities that digital tools sometimes perpetuate (Obermeyer et al., 2019). The consultant's role in vetting for fairness and validating models on local patient populations is a fundamental pillar of equitable virtual care. They would be the human checkpoint ensuring that the AI driving a virtual nursing platform is safe, unbiased, and genuinely effective for the community it serves.
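One way such a fairness vetting step could look in practice is a subgroup audit before deployment. This sketch is purely illustrative, not a method from the article: it compares a model's sensitivity (true positive rate) across demographic groups at a fixed decision threshold and flags the model when the gap exceeds a tolerance. The group labels, threshold, and `max_gap` value are all assumptions.

```python
# Hypothetical pre-deployment subgroup audit; all parameters are illustrative.
from collections import defaultdict

def tpr_by_group(records, threshold=0.5):
    """Per-group sensitivity at a fixed decision threshold.
    records: iterable of (group, true_label, model_score) tuples."""
    hits = defaultdict(int)
    positives = defaultdict(int)
    for group, label, score in records:
        if label == 1:
            positives[group] += 1
            if score >= threshold:
                hits[group] += 1
    return {g: hits[g] / positives[g] for g in positives}

def disparity_flag(records, threshold=0.5, max_gap=0.10):
    """Flag the model if sensitivity differs across groups by more than max_gap."""
    rates = tpr_by_group(records, threshold)
    gap = max(rates.values()) - min(rates.values())
    return gap > max_gap, rates
```

Sensitivity gaps are only one lens; a real audit would also examine calibration and false positive rates per group on the local patient population, which is exactly the kind of judgment an algorithmic consultant would bring.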

Building a Future of Safe, Effective, and Trustworthy AI

The creation of this specialty is about more than just convenience; it’s about establishing the foundational pillars required for the successful integration of AI into medicine.

  • Quality and Effectiveness: An expert guide ensures that these powerful tools are used correctly, maximizing their potential to improve diagnoses and patient outcomes.

  • Safety and Trust: This role creates an essential layer of human oversight. For clinicians to trust and adopt AI, they need to know an expert has validated the tools and is available to guide their use. This human-in-the-loop model is a cornerstone of building clinician trust, which remains a significant barrier to AI adoption (Asan & Bayrak, 2021).

  • Liability and Risk: The consultant model helps mitigate the immense liability risks for both physicians and hospitals. By having specialists who carefully curate a portfolio of safe models and guide their use, health systems de-risk their AI initiatives and empower physicians to use new technology with confidence.

The path forward, as proposed by the authors, is to build this specialty from within, creating a specialized training track for algorithmic consultants within existing clinical informatics fellowship programs.

The message is resounding. To truly realize the promise of AI in medicine, we must invest not only in technology but also in the human expertise to translate it. The algorithmic consultant isn't just a good idea—it's the essential human bridge to a future of safer, smarter, AI-enabled healthcare.

References

Asan, O., & Bayrak, C. (2021). The role of physician-AI collaboration in healthcare: A multiple case study of three smart hospitals. Digital Health, 7. https://doi.org/10.1177/20552076211020473

Marwaha, J. S., Yuan, W., Poddar, M., Elsamadisi, P., & Brat, G. A. (2025). The algorithmic consultant: a new era of clinical AI calls for a new workforce of physician-algorithm specialists. npj Digital Medicine, 8(1).

Obermeyer, Z., Powers, B., Vogeli, C., & Mullainathan, S. (2019). Dissecting racial bias in an algorithm used to manage the health of populations. Science, 366(6464), 447-453. https://doi.org/10.1126/science.aax2342
