Fall 2025
August
Zhenhong Hu
Zhenhong Hu explored the recently published study, “Beyond the Modality: Unlocking Healthcare Insights with Multimodal Intelligence.” He discussed how the paper presents a flexible and efficient approach that learns a unified representation space for seven diverse modalities, in contrast to existing works that rely on an image-centered representation space, which is sub-optimal and yields an unbalanced space across modalities. The method instead learns modality-agnostic alignment centers to build a unified, balanced representation space, delivering remarkable performance boosts.
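The idea of modality-agnostic alignment centers can be illustrated with a minimal sketch: embeddings from any modality are pulled toward a shared set of learnable center vectors rather than toward image embeddings. The function below is hypothetical (the paper's exact objective may differ) and only shows the nearest-center assignment and an alignment loss.

```python
import numpy as np

def align_to_centers(embeddings, centers):
    """Assign each embedding to its nearest shared center and compute an
    alignment loss. A hypothetical sketch of modality-agnostic alignment
    centers, not the paper's actual objective.

    embeddings: (n, d) L2-normalized vectors from any modality
    centers:    (k, d) L2-normalized, learnable, modality-agnostic centers
    """
    sims = embeddings @ centers.T                     # cosine similarity (n, k)
    nearest = sims.argmax(axis=1)                     # center assignment per sample
    # Loss pulls each embedding toward its assigned center (1 - cosine).
    loss = float((1.0 - sims[np.arange(len(embeddings)), nearest]).mean())
    return nearest, loss
```

Because the centers carry no modality bias, text, image, and signal embeddings are all treated symmetrically, which is what balances the shared space.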
Spring 2025
May
Dr. Akshith Ullal
Dr. Akshith Ullal discussed the recently published work, “Fine-tuning language model embeddings to reveal domain knowledge: An explainable artificial intelligence perspective on medical decision making.” He examined how researchers fine-tune LLMs to identify whether tumors are present in reports, assess how aggressive and dangerous a tumor is, and explain how and why the model classified it, all while preserving patient data privacy, transparency, and interpretability.
April
Dr. Ruining Deng
Dr. Ruining Deng explored the published work “CASC-AI: Consensus-aware Self-corrective Learning for Cell Segmentation with Noisy Labels.” Within this paper, he discussed how the model tackles noisy annotations by using a Consensus Matrix to identify and prioritize high-confidence regions during training. For noisy, low-confidence areas, it uses contrastive learning to separate noisy features from reliable ones, allowing it to iteratively correct labels and improve segmentation accuracy.
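The consensus-weighting step can be sketched in a few lines: agreement across annotators gives a per-pixel confidence map, which then weights the segmentation loss so high-consensus pixels dominate training. This is a minimal, hypothetical illustration of the idea (the paper's actual matrix and contrastive objective are more involved).

```python
import numpy as np

def consensus_matrix(annotations):
    """Per-pixel agreement across annotators: 1.0 = full consensus.

    annotations: array of shape (num_annotators, H, W) with binary masks.
    """
    mean_label = annotations.mean(axis=0)
    # Agreement is highest when the mean is near 0 or 1, lowest near 0.5.
    return 1.0 - 2.0 * np.minimum(mean_label, 1.0 - mean_label)

def consensus_weighted_bce(pred, target, consensus):
    """Binary cross-entropy weighted by consensus confidence, so that
    high-agreement regions drive the gradient and ambiguous ones are
    down-weighted. A sketch, not the paper's exact loss."""
    eps = 1e-7
    pred = np.clip(pred, eps, 1.0 - eps)
    bce = -(target * np.log(pred) + (1.0 - target) * np.log(1.0 - pred))
    return float((consensus * bce).sum() / (consensus.sum() + eps))
```

In the full method, the low-confidence pixels excluded here are handled separately with contrastive learning, which is what lets the labels be corrected over iterations rather than simply ignored.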
March
Dr. Wei Shao
Dr. Wei Shao and his team looked into the recently published work, “HealthGPT: A Medical Large Vision-Language Model for Unifying Comprehension and Generation via Heterogeneous Knowledge Adaptation,” to discuss how HealthGPT is transforming medical AI! Developed as a Medical Large Vision-Language Model (Med-LVLM), HealthGPT integrates medical visual comprehension and generation within a unified autoregressive paradigm. Using a novel Heterogeneous Low-Rank Adaptation (H-LoRA) technique, hierarchical visual perception, and a three-stage learning strategy, it efficiently adapts comprehension and generation knowledge to pre-trained LLMs. HealthGPT is trained on VL-Health, a comprehensive medical domain-specific dataset, achieving exceptional performance and scalability in medical visual tasks.
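H-LoRA builds on the general low-rank adaptation idea, which can be sketched as a frozen linear layer plus a small trainable low-rank update. The class below shows plain LoRA only; it is a hypothetical sketch, and H-LoRA in the paper additionally organizes adapters to keep comprehension and generation knowledge separate.

```python
import numpy as np

class LoRALinear:
    """Minimal low-rank adaptation of a frozen linear layer:
    y = x @ (W + (alpha / r) * A @ B), with only A and B trainable.
    A generic LoRA sketch, not the paper's H-LoRA implementation."""

    def __init__(self, W, r=4, alpha=8, seed=0):
        rng = np.random.default_rng(seed)
        self.W = W                                   # frozen pretrained weight (d_in, d_out)
        d_in, d_out = W.shape
        self.A = rng.normal(0.0, 0.01, (d_in, r))    # trainable down-projection
        self.B = np.zeros((r, d_out))                # trainable up-projection, starts at zero
        self.scale = alpha / r

    def __call__(self, x):
        # Base path plus low-rank update; with B = 0 at init,
        # the output exactly matches the frozen base model.
        return x @ self.W + self.scale * (x @ self.A) @ self.B
```

The appeal for medical adaptation is that only the small A and B matrices are trained per task, so heterogeneous task knowledge can be added to one pre-trained LLM cheaply.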
February
Divya Vellanki
Divya Vellanki explored the recently published work, “An evaluation framework for clinical use of large language models in patient interaction tasks.” She introduced CRAFT-MD, a new framework designed to bridge the gap between static benchmark performance and real-world clinical use. Instead of measuring AI performance on static tasks alone, CRAFT-MD simulates real patient interactions to assess how well models such as GPT-4 and Mistral respond in clinical settings.
January
Dr. Mackenzie Meni
Dr. Mackenzie Meni delved into the recently published work, “Unified Clinical Vocabulary Embeddings for Advancing Precision Medicine,” in our latest Journal Club. She explained how the study unifies seven medical vocabularies and validates the resulting clinical vocabulary embeddings using real-world data from 4.57 million patients. This innovative approach bridges gaps in training datasets, reduces bias, and supports the development of AI models that better represent clinical relationships for population-level and patient-specific healthcare applications.