Google has released MedGemma 1.5, an open medical AI model that analyzes full 3D CT and MRI scans and sharpens anatomical localization, alongside MedASR, a speech-to-text model for medical dictation, giving developers a new foundation for advanced diagnostic healthcare applications.

Think of an AI that can drill down into the complex layers of a full CT scan, pinpointing abnormalities with accuracy that challenges experienced radiologists, all from an open-source model running on your laptop.
As a watchdog of AI’s advancements in medicine, I am absolutely giddy about Google’s newest tool, MedGemma 1.5.
This 4-billion-parameter behemoth is not just an iteration; it’s a game changer for developers creating apps that could democratize advanced diagnostics in remote corners of the world.
Announcing our latest open medical AI models for developers: MedGemma 1.5, which is small enough to run offline & improves performance on 3D imaging (CT & MRI), & MedASR, a speech-to-text model for medical dictation. Both available on Hugging Face + Vertex AI.… pic.twitter.com/w0OvuKQKiV
— Google Research (@GoogleResearch) January 13, 2026
Published on January 13, 2026, it is built to natively process high-dimensional imaging like CT scans and MRIs at a scale no other open medical AI has demonstrated.
Combine it with the new MedASR speech-to-text model for dictation, and you’ve got a toolkit with the potential to improve patient care everywhere from rural clinics to busy hospitals.
What’s New in MedGemma 1.5

MedGemma 1.5 extends the previous version’s 2D image analysis to complete 3D volumes, enabling deeper examination of complex scans. Key innovations include:
- High-Dimensional Imaging Support: Processes entire CT, MRI, and histopathology volumes for comprehensive insights.
- Anatomical Localization: Pinpoints features in chest X-rays with 35% better accuracy.
- Longitudinal Review: Analyzes time-series data, like evolving chest X-rays, with 5% improved performance.
- Medical Document Understanding: Extracts structured data from lab reports, boosting efficiency by 18%.
- Text-Based Reasoning: Handles medical Q&A and EHR queries with up to 22% gains.
MedASR adds to this by significantly reducing the word error rate on medical dictation: 58% fewer errors on chest X-ray reports than Whisper large-v3.
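To make the word-error-rate (WER) claim concrete, here is the standard metric itself: word-level edit distance divided by the reference length. This is a generic implementation of the metric, not MedASR code, and the example transcript is invented for illustration:

```python
def wer(reference: str, hypothesis: str) -> float:
    """Word error rate: word-level edit distance / reference word count."""
    ref, hyp = reference.split(), hypothesis.split()
    # Standard dynamic-programming edit distance over words.
    d = [[0] * (len(hyp) + 1) for _ in range(len(ref) + 1)]
    for i in range(len(ref) + 1):
        d[i][0] = i
    for j in range(len(hyp) + 1):
        d[0][j] = j
    for i in range(1, len(ref) + 1):
        for j in range(1, len(hyp) + 1):
            cost = 0 if ref[i - 1] == hyp[j - 1] else 1
            d[i][j] = min(d[i - 1][j] + 1,         # deletion
                          d[i][j - 1] + 1,         # insertion
                          d[i - 1][j - 1] + cost)  # substitution
    return d[-1][-1] / len(ref)

ref = "no acute cardiopulmonary abnormality"
hyp = "no acute cardio pulmonary abnormality"
print(wer(ref, hyp))  # -> 0.5 (one substitution plus one insertion, over 4 words)
```

A “58% fewer errors” result means MedASR’s WER on chest X-ray reports is 0.42× that of Whisper large-v3 on the same audio.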
Comparison with Previous Version
MedGemma 1 excelled in 2D tasks but struggled with volumetric data. Here’s a quick breakdown:
| Aspect | MedGemma 1 | MedGemma 1.5 | Improvement |
|---|---|---|---|
| Parameter Count | 27B (larger variant) | 4B (efficient core) | More accessible |
| 3D Scan Interpretation | Limited to slices | Full volumes (CT/MRI) | New capability |
| Chest X-ray Localization | 3% IoU | 38% IoU | +35% |
| Lab Report Extraction | 60% F1 score | 78% F1 score | +18% |
| MedQA Accuracy | 64% | 69% | +5% |
The 4B model is lighter, making it ideal for edge devices, while retaining high fidelity.
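The IoU (intersection-over-union) figures in the localization row measure bounding-box overlap between the model’s prediction and the ground truth. Here is the standard definition, a generic sketch rather than MedGemma’s evaluation code, with made-up box coordinates:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2)."""
    ix1 = max(box_a[0], box_b[0])
    iy1 = max(box_a[1], box_b[1])
    ix2 = min(box_a[2], box_b[2])
    iy2 = min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

# A predicted finding box vs. a radiologist's ground-truth box:
print(round(iou((10, 10, 50, 50), (30, 30, 70, 70)), 3))  # -> 0.143
```

An average IoU of 38% means the predicted boxes overlap well over a third of the combined area with expert annotations, versus near-random placement at 3%.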
Hardware Requirements and What Users Need to Know
To run MedGemma 1.5 efficiently, you’ll want a GPU with at least 16GB of VRAM (e.g., RTX 4090 or A100) for the 4B model, which makes offline inference feasible without running up huge cloud bills. For the 27B variant, plan on at least 32GB of VRAM.
CPU fallback is available but slower, and 128GB of system RAM helps with larger datasets. The models are freely available for research and commercial use on Hugging Face and the Google Cloud Vertex AI platform.
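The VRAM figures line up with back-of-the-envelope memory math. This is an estimate, not an official spec: weights alone at bfloat16 cost 2 bytes per parameter, and the rest of the budget goes to activations and the KV cache.

```python
def weight_gib(params_billions: float, bytes_per_param: float = 2) -> float:
    """Approximate weight memory in GiB for a given parameter count."""
    return params_billions * 1e9 * bytes_per_param / 2**30

print(round(weight_gib(4), 1))        # 4B in bf16: ~7.5 GiB of weights
print(round(weight_gib(27), 1))       # 27B in bf16: ~50.3 GiB, over 32GB,
print(round(weight_gib(27, 0.5), 1))  # so 4-bit quantization (~12.6 GiB) is implied
```

Note that the 27B model in full bf16 would not fit in 32GB, so the quoted requirement presumably assumes a quantized checkpoint.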
MedGemma 1.5 is a major upgrade to our open models for healthcare developers.
— Sundar Pichai (@sundarpichai) January 13, 2026
The new 4B model enables developers to build applications that natively interpret full 3D scans (CTs, MRIs) with high efficiency – a first, we believe, for an open medical generalist model. MedGemma…
Developers get tutorials on fine-tuning via LoRA or reinforcement learning, plus a $100,000 Kaggle hackathon to encourage innovations like conversational clinical tools.
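The appeal of LoRA for fine-tuning a 4B model is the parameter count: instead of training a full weight matrix, you train two small low-rank factors. A quick sketch of the arithmetic, with hypothetical layer dimensions chosen only for illustration:

```python
def lora_trainable_params(d_in: int, d_out: int, rank: int) -> int:
    """Parameters in a LoRA adapter: B (d_out x r) plus A (r x d_in)."""
    return d_out * rank + rank * d_in

full = 4096 * 4096  # full fine-tune of one hypothetical 4096x4096 projection
lora = lora_trainable_params(4096, 4096, rank=8)
print(full, lora, full // lora)  # -> 16777216 65536 256
```

At rank 8, each adapted layer trains roughly 256× fewer parameters than full fine-tuning, which is what makes tuning on a single 16GB GPU plausible.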
On the compliance side, it is HIPAA-eligible on Vertex AI, and it performs well in benchmarks, beating baselines on tasks such as CT classification (61% accuracy).
For me, the real win is accessibility, empowering global health apps without proprietary barriers.