Suppr 超能文献


Accuracy, satisfaction, and impact of custom GPT in acquiring clinical knowledge: Potential for AI-assisted medical education.

Authors

Pu Jiaxi, Hong Jie, Yu Qiao, Yu Pan, Tian Jiaqi, He Yuehua, Huang Hanwei, Yuan Qiongjing, Tao Lijian, Peng Zhangzhe

Affiliations

Department of Nephrology, Xiangya Hospital, Central South University, Changsha, China.

Department of Nephrology, The Third Hospital of Changsha, Changsha, China.

Publication

Med Teach. 2025 Sep;47(9):1502-1508. doi: 10.1080/0142159X.2025.2458808. Epub 2025 Feb 2.

DOI: 10.1080/0142159X.2025.2458808
PMID: 39893644
Abstract

BACKGROUND

Recent advancements in artificial intelligence (AI) have enabled the customization of large language models to address specific domains such as medical education. This study investigates the practical performance of a custom GPT model in enhancing clinical knowledge acquisition for medical students and physicians.

METHODS

A custom GPT was developed by incorporating the latest readily available teaching resources. Its accuracy in providing clinical knowledge was evaluated using a set of clinical questions, and responses were compared against established medical guidelines. Satisfaction was assessed through surveys involving medical students and physicians at different stages and from various types of hospitals. The impact of the custom GPT was further evaluated by comparing its role in facilitating clinical knowledge acquisition with traditional learning methods.

RESULTS

The custom GPT demonstrated higher accuracy (83.6%) than general AI models (65.5% and 69.1%) and was comparable to a professionally developed AI (Glass Health, 83.6%). Residents reported the highest satisfaction compared to clerks and physicians, citing improved learning independence, motivation, and confidence (p < 0.05). Physicians, especially those from teaching hospitals, showed greater eagerness to develop a custom GPT than clerks and residents (p < 0.05). The impact analysis revealed that residents using the custom GPT achieved better test scores than those using traditional resources (p < 0.05), though fewer perfect scores were obtained.

CONCLUSIONS

The custom GPT demonstrates significant promise as an innovative tool for advancing medical education, particularly for residents. Its capability to deliver accurate, tailored information complements traditional teaching methods, aiding educators in promoting personalized and consistent training. However, it is essential for both learners and educators to remain critical in evaluating AI-generated information. With continued development and thoughtful integration, AI tools like custom GPTs have the potential to significantly enhance the quality and accessibility of medical education.


Similar Articles

1. Prescription of Controlled Substances: Benefits and Risks.
2. The performance of ChatGPT on medical image-based assessments and implications for medical education.
   BMC Med Educ. 2025 Aug 23;25(1):1192. doi: 10.1186/s12909-025-07752-0.
3. Utility of Generative Artificial Intelligence for Japanese Medical Interview Training: Randomized Crossover Pilot Study.
   JMIR Med Educ. 2025 Aug 1;11:e77332. doi: 10.2196/77332.
4. Feasibility study of using GPT for history-taking training in medical education: a randomized clinical trial.
   BMC Med Educ. 2025 Jul 10;25(1):1030. doi: 10.1186/s12909-025-07614-9.
5. Large language models (LLMs) in radiology exams for medical students: Performance and consequences.
   Rofo. 2024 Nov 4. doi: 10.1055/a-2437-2067.
6. The diagnostic and triage accuracy of the GPT-3 artificial intelligence model: an observational study.
   Lancet Digit Health. 2024 Aug;6(8):e555-e561. doi: 10.1016/S2589-7500(24)00097-9.
7. Development of a GPT-4-Powered Virtual Simulated Patient and Communication Training Platform for Medical Students to Practice Discussing Abnormal Mammogram Results With Patients: Multiphase Study.
   JMIR Form Res. 2025 Apr 17;9:e65670. doi: 10.2196/65670.
8. Development of a Clinical Clerkship Mentor Using Generative AI and Evaluation of Its Effectiveness in a Medical Student Trial Compared to Student Mentors: 2-Part Comparative Study.
   JMIR Med Educ. 2025 Sep 4;11:e76702. doi: 10.2196/76702.
9. Performance of ChatGPT Across Different Versions in Medical Licensing Examinations Worldwide: Systematic Review and Meta-Analysis.
   J Med Internet Res. 2024 Jul 25;26:e60807. doi: 10.2196/60807.