Scaling medical AI across clinical contexts.

Nature Medicine
Authors
Abstract

Medical artificial intelligence (AI) tools, including clinical language models, vision-language models and multimodal health record models, are used to summarize clinical notes, answer questions and support decisions. Their adaptation to new populations, specialties or care settings often relies on fine-tuning, prompting or retrieval from external knowledge bases. These strategies can scale poorly and risk contextual errors: outputs that appear plausible but miss critical patient or situational information. We envision context switching as an emergent solution. Context switching adjusts model reasoning at inference time, without retraining. Generative models can tailor outputs to patient biology, care setting or disease. Multimodal models can switch between notes, laboratory results, imaging and genomics, even when some data are missing or delayed. Agent models can coordinate tools and roles based on task and user context. In each case, context switching enables medical AI to adapt across specialties, populations and geographies. This approach requires advances in data design, model architectures and evaluation frameworks, and establishes a foundation for medical AI that scales to an effectively unlimited number of contexts while remaining reliable and suited to real-world care.
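A minimal, hypothetical Python sketch of the inference-time idea described in the abstract, not an implementation from the paper: the model stays frozen, and only the context assembled at inference changes. The ClinicalContext fields, the build_context_block helper and the model.generate interface are illustrative assumptions.

    # Hypothetical illustration of inference-time context switching.
    # The underlying model is unchanged; only the context block differs.
    from dataclasses import dataclass, field

    @dataclass
    class ClinicalContext:
        care_setting: str                 # e.g. "adult ICU", "primary care"
        population: str                   # e.g. "pediatric", "geriatric"
        available_modalities: list = field(default_factory=list)  # e.g. ["notes", "labs"]

    def build_context_block(ctx: ClinicalContext) -> str:
        """Assemble an inference-time context block; no retraining involved."""
        lines = [
            f"Care setting: {ctx.care_setting}",
            f"Patient population: {ctx.population}",
            f"Available data: {', '.join(ctx.available_modalities) or 'none listed'}",
            "If a needed modality is missing or delayed, say so explicitly "
            "rather than inferring its contents.",
        ]
        return "\n".join(lines)

    def answer(model, question: str, ctx: ClinicalContext) -> str:
        """Same frozen model, different behavior per context (assumed generate() API)."""
        prompt = build_context_block(ctx) + "\n\nQuestion: " + question
        return model.generate(prompt)

    # Example: one model, two contexts (hypothetical values)
    # icu_ctx = ClinicalContext("adult ICU", "critically ill adults", ["notes", "labs", "imaging"])
    # peds_ctx = ClinicalContext("pediatric outpatient", "children", ["notes"])
    # print(build_context_block(icu_ctx))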

Year of Publication
2026
Journal
Nature Medicine
Date Published
02/2026
ISSN
1546-170X
DOI
10.1038/s41591-025-04184-7
PubMed ID
41634392