Please use this identifier to cite or link to this item: https://ruomoplus.lib.uom.gr/handle/8000/2104
Title: Performance of Publicly Available Large Language Models on Internal Medicine Board-style Questions
Authors: Tarabanis, Constantine 
Zahid, Sohail 
Mamalis, Marios Evangelos 
Zhang, Kevin 
Kalampokis, Evangelos 
Jankelson, Lior 
Author Department Affiliations: Department of Business Administration
Author School Affiliations: School of Business Administration 
Subjects: FRASCATI__Engineering and technology__Electrical engineering, Electronic engineering, Information engineering
FRASCATI__Medical and Health sciences__Health sciences (including: hospital administration, health care financing)
Issue Date: 1-Sep-2024
Journal: PLOS Digital Health 
ISSN: 2767-3170
Volume: 3
Issue: 9
Start page: e0000604
Abstract: 
Ongoing research attempts to benchmark large language models (LLMs) against physicians' fund of knowledge by assessing LLM performance on medical examinations. No prior study has assessed LLM performance on internal medicine (IM) board examination questions. Limited data exist on how knowledge supplied to the models, derived from medical texts, improves LLM performance. The performance of GPT-3.5, GPT-4.0, LaMDA, and Llama 2, with and without additional model input augmentation, was assessed on 240 randomly selected IM board-style questions. Questions were sourced from the Medical Knowledge Self-Assessment Program (MKSAP) released by the American College of Physicians, with each question serving as part of the LLM prompt. When available, LLMs were accessed both through their application programming interface (API) and their corresponding chatbot. Model inputs were augmented with Harrison's Principles of Internal Medicine using the method of Retrieval Augmented Generation. LLM-generated explanations to 25 correctly answered questions were presented in a blinded fashion, alongside the MKSAP explanation, to an IM board-certified physician tasked with selecting the human-generated response. GPT-4.0, accessed either through Bing Chat or its API, scored 77.5–80.7%, outperforming GPT-3.5, human respondents, LaMDA, and Llama 2, in that order. GPT-4.0 outperformed human MKSAP users on every tested IM subject, with its lowest and highest percentile scores in Infectious Disease (80th) and Rheumatology (99.7th), respectively. There is a 3.2–5.3% decrease in performance of both GPT-3.5 and GPT-4.0 when accessing the LLM through its API instead of its online chatbot. There is a 4.5–7.5% increase in performance of both GPT-3.5 and GPT-4.0 accessed through their APIs after additional input augmentation. The blinded reviewer correctly identified the human-generated MKSAP response in 72% of the 25-question sample set. GPT-4.0 performed best on IM board-style questions, outperforming human respondents. Augmenting model inputs with domain-specific information improved performance, rendering Retrieval Augmented Generation a possible technique for improving the accuracy of LLM responses on medical examinations.
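The Retrieval Augmented Generation approach described in the abstract can be sketched as follows: passages relevant to the exam question are retrieved from a reference text and prepended to the prompt before it is sent to the LLM. This is a minimal illustrative sketch, not the study's actual pipeline; the corpus, chunking scheme, and word-overlap scoring below are assumptions (the study used Harrison's Principles of Internal Medicine as its knowledge source, and production RAG systems typically score passages with embedding similarity rather than word overlap).

```python
# Minimal sketch of Retrieval Augmented Generation (RAG) prompt assembly.
# Assumptions: an in-memory corpus, fixed-size word chunks, and word-overlap
# scoring as a crude stand-in for embedding-based retrieval.
from collections import Counter


def chunk_text(text: str, size: int = 40) -> list[str]:
    """Split a reference text into fixed-size word chunks."""
    words = text.split()
    return [" ".join(words[i:i + size]) for i in range(0, len(words), size)]


def score(chunk: str, query: str) -> int:
    """Score a chunk by how often the query's words appear in it."""
    chunk_counts = Counter(chunk.lower().split())
    return sum(chunk_counts[w] for w in set(query.lower().split()))


def build_rag_prompt(question: str, corpus: str, k: int = 2) -> str:
    """Retrieve the top-k chunks for the question and prepend them
    as context, yielding the augmented prompt for the LLM."""
    chunks = sorted(chunk_text(corpus),
                    key=lambda c: score(c, question), reverse=True)
    context = "\n".join(chunks[:k])
    return f"Context:\n{context}\n\nQuestion:\n{question}\nAnswer:"


if __name__ == "__main__":
    corpus = ("Community-acquired pneumonia is commonly caused by "
              "Streptococcus pneumoniae. Empiric therapy in outpatients "
              "without comorbidities is amoxicillin or doxycycline.")
    prompt = build_rag_prompt(
        "What is first-line empiric therapy for community-acquired "
        "pneumonia?", corpus)
    print(prompt)
```

The augmented prompt, rather than the bare question, is then submitted through the model's API, which is the step at which the abstract reports a 4.5–7.5% performance gain.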
URI: https://ruomoplus.lib.uom.gr/handle/8000/2104
DOI: 10.1371/journal.pdig.0000604
Rights: Attribution-NonCommercial-ShareAlike 4.0 International
CC0 1.0 Universal
Corresponding Item Departments: Department of Business Administration
Appears in Collections:Articles

Files in This Item:
File: journal.pdig.0000604.pdf (472,79 kB, Adobe PDF)

SCOPUS Citations: 8 (checked on Dec 9, 2025)
Page view(s): 41 (checked on Dec 16, 2025)
Download(s): 12 (checked on Dec 16, 2025)
