Sebo Paul, Wang Ting
University Institute for Primary Care (IuMFE), University of Geneva, 1211 Geneva, Switzerland.
School of Library and Information Management, Emporia State University, Emporia, KS 66801, United States.
Fam Pract. 2025 Aug 14;42(5). doi: 10.1093/fampra/cmaf069.
BACKGROUND: Artificial intelligence tools, including large language models such as ChatGPT, are increasingly integrated into clinical and primary care research. However, their ability to assist with specialized statistical tasks, such as sample size estimation, remains largely unexplored.

METHODS: We evaluated the accuracy and reproducibility of ChatGPT-4.0 and ChatGPT-4o in estimating sample sizes across 24 standard statistical scenarios. Examples were selected from a statistical textbook and an educational website, covering basic methods such as estimating means, proportions, and correlations. Each example was tested twice per model. Models were accessed through the ChatGPT web interface, with a new independent chat session initiated for each round. Accuracy was assessed using the mean and median absolute percentage error compared with validated reference values. Reproducibility was assessed using the symmetric mean and median absolute percentage error between rounds. Comparisons were performed using Wilcoxon signed-rank tests.

RESULTS: For ChatGPT-4.0 and ChatGPT-4o, absolute percentage errors ranged from 0% to 15.2% (with one outlier at 26.3%) and from 0% to 14.3%, respectively, with most examples showing errors below 5%. ChatGPT-4o showed better accuracy than ChatGPT-4.0 (mean absolute percentage error: 3.1% vs. 4.1% in round #1, P-value = .01; 2.8% vs. 5.1% in round #2, P-value = .02) and a lower symmetric mean absolute percentage error (0.8% vs. 2.5%), although this difference was not statistically significant (P-value = .18).

CONCLUSIONS: ChatGPT-4.0 and ChatGPT-4o provided reasonably accurate sample size estimates across standard scenarios, with good reproducibility. However, inconsistencies were observed, underscoring the need for cautious interpretation and expert validation. Further research should assess performance in more complex contexts and across a broader range of AI models.
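The evaluation metrics named in the METHODS section (absolute percentage error against a reference value, symmetric absolute percentage error between rounds, and a paired Wilcoxon signed-rank test) can be illustrated with a minimal sketch. The sample sizes below are hypothetical placeholders, not the study's 24 textbook examples; only the metric formulas and the use of scipy.stats.wilcoxon follow the abstract's description.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical reference sample sizes and model estimates for a few scenarios;
# the study's actual 24 examples are not reproduced here.
reference = np.array([385, 139, 62, 1068])
round1 = np.array([384, 138, 64, 1067])   # e.g. model output, round #1
round2 = np.array([385, 141, 62, 1100])   # e.g. model output, round #2

# Accuracy: absolute percentage error of each estimate vs. the reference value.
ape_r1 = np.abs(round1 - reference) / reference * 100
ape_r2 = np.abs(round2 - reference) / reference * 100
print("Round 1 MAPE: %.1f%%, median APE: %.1f%%" % (ape_r1.mean(), np.median(ape_r1)))
print("Round 2 MAPE: %.1f%%, median APE: %.1f%%" % (ape_r2.mean(), np.median(ape_r2)))

# Reproducibility: symmetric absolute percentage error between the two rounds.
sape = np.abs(round1 - round2) / ((np.abs(round1) + np.abs(round2)) / 2) * 100
print("sMAPE between rounds: %.1f%%, median sAPE: %.1f%%" % (sape.mean(), np.median(sape)))

# Paired comparison of per-example errors (e.g. between two models or rounds)
# with the Wilcoxon signed-rank test.
stat, p = wilcoxon(ape_r1, ape_r2)
print("Wilcoxon signed-rank: W = %.1f, P = %.3f" % (stat, p))
```

With real data, the per-example errors for the two models (rather than the two rounds shown here) would be passed to the paired Wilcoxon test, matching the comparisons reported in the RESULTS section.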