Spring 2025

March
Dr. Wei Shao
Dr. Wei Shao and his team looked into the recently published work, "HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation," to discuss how HealthGPT is transforming medical AI! Developed as a Medical Large Vision-Language Model (Med-LVLM), HealthGPT integrates medical visual comprehension and generation within a unified autoregressive paradigm. Using a novel Heterogeneous Low-Rank Adaptation (H-LoRA) technique, hierarchical visual perception, and a three-stage learning strategy, it efficiently adapts comprehension and generation knowledge to pre-trained LLMs. HealthGPT is trained on VL-Health, a comprehensive medical domain-specific dataset, achieving exceptional performance and scalability across medical visual tasks.

February
Divya Vellanki
Divya Vellanki explored the recently published work, "An evaluation framework for clinical use of large language models in patient interaction tasks." She introduced CRAFT-MD, a new framework designed to bridge the gap between static benchmark performance and real-world clinical use. Instead of just measuring AI performance on static tasks, CRAFT-MD simulates real patient interactions to assess how well models like GPT-4 and Mistral respond in clinical settings.

January
Dr. Mackenzie Meni
Dr. Mackenzie Meni delved into the recently published work, “Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine” in our latest Journal Club. She explained how the study proposes a solution by unifying seven medical vocabularies and validating clinical vocabulary embeddings against real-world data from 4.57 million patients. This innovative approach bridges gaps in training datasets, reduces bias, and supports the development of AI models that better represent clinical relationships for population-level and patient-specific healthcare applications.