

Development of a Clinical Clerkship Mentor Using Generative AI and Evaluation of Its Effectiveness in a Medical Student Trial Compared to Student Mentors: 2-Part Comparative Study.

Author Information

Ebihara Hayato, Kasai Hajime, Shimizu Ikuo, Shikino Kiyoshi, Tajima Hiroshi, Kimura Yasuhiko, Ito Shoichi

Affiliations

Department of Medicine, School of Medicine, Chiba University, Chiba, Japan.

Department of Medical Education, Graduate School of Medicine, Chiba University, Chiba, Japan.

Publication Information

JMIR Med Educ. 2025 Sep 4;11:e76702. doi: 10.2196/76702.

DOI: 10.2196/76702
PMID: 40907969
Abstract

BACKGROUND

At the beginning of their clinical clerkships (CCs), medical students face multiple challenges related to acquiring clinical and communication skills, building professional relationships, and managing psychological stress. While mentoring and structured feedback are known to provide critical support, existing systems may not offer sufficient and timely guidance owing to the faculty's limited availability. Generative artificial intelligence, particularly large language models, offers new opportunities to support medical education by providing context-sensitive responses.

OBJECTIVE

This study aimed to develop a generative artificial intelligence CC mentor (AI-CCM) based on ChatGPT and evaluate its effectiveness in supporting medical students' clinical learning, addressing their concerns, and supplementing human mentoring. The secondary objective was to compare AI-CCM's educational value with responses from senior student mentors.

METHODS

We conducted 2 studies. In study 1, we created 5 scenarios based on challenges that students commonly encountered during CCs. For each scenario, 5 senior student mentors and AI-CCM generated written advice. Five medical education experts evaluated these responses using a rubric to assess accuracy, practical utility, educational appropriateness (5-point Likert scale), and safety (binary scale). In study 2, a total of 17 fourth-year medical students used AI-CCM for 1 week during their CCs and completed a questionnaire evaluating its usefulness, clarity, emotional support, and impact on communication and learning (5-point Likert scale) informed by the technology acceptance model.
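The abstract does not give the implementation details of AI-CCM. As a rough illustration of how a scenario-based mentor prompt for a chat-completion model might be assembled, here is a minimal sketch; the system prompt, function name, and scenario text are hypothetical, not the study's actual materials.

```python
# Hypothetical sketch of assembling a chat payload for an AI clerkship mentor.
# The prompt wording and scenario are illustrative assumptions, not from the study.

def build_mentor_messages(scenario: str) -> list[dict]:
    """Build a chat-completion message list for one clerkship scenario."""
    system_prompt = (
        "You are a supportive clinical clerkship mentor for medical students. "
        "Give accurate, practical, and educationally appropriate advice, and "
        "avoid any guidance that could compromise patient safety."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": scenario},
    ]

messages = build_mentor_messages(
    "I feel overwhelmed by the reading workload during my internal medicine rotation."
)
```

A payload of this shape could then be sent to any chat-completion endpoint; the study itself built its mentor on ChatGPT.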

RESULTS

All results indicated that AI-CCM achieved higher mean scores than senior student mentors. AI-CCM responses were rated higher in educational appropriateness (4.2, SD 0.7 vs 3.8, SD 1.0; P=.001). No significant differences with senior student mentors were observed in accuracy (4.4, SD 0.7 vs 4.2, SD 0.9; P=.11) or practical utility (4.1, SD 0.7 vs 4.0, SD 0.9; P=.35). No safety concerns were identified in AI-CCM responses, whereas 2 concerns were noted in student mentors' responses. Scenario-specific analysis revealed that AI-CCM performed substantially better in emotional and psychological stress scenarios. In the student trial, AI-CCM was rated as moderately useful (mean usefulness score 3.9, SD 1.1), with positive evaluations for clarity (4.0, SD 0.9) and emotional support (3.8, SD 1.1). However, aspects related to feedback guidance (2.9, SD 0.9) and anxiety reduction (3.2, SD 1.0) received more neutral ratings. Students primarily consulted AI-CCM regarding learning workload and communication difficulties; few students used it to address emotional stress-related issues.
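The abstract does not name the statistical test used; as one plausible way to compare mean Likert ratings between two independent rater groups, here is a sketch of Welch's t statistic. The score lists are fabricated for illustration only and do not reproduce the study's data.

```python
# Hedged sketch: comparing mean Likert ratings between AI-CCM and student
# mentors with Welch's t statistic (unequal variances). Data are fabricated.
from statistics import mean, stdev

def welch_t(a: list[float], b: list[float]) -> float:
    """Welch's t statistic for two independent samples."""
    va, vb = stdev(a) ** 2, stdev(b) ** 2
    return (mean(a) - mean(b)) / ((va / len(a) + vb / len(b)) ** 0.5)

ai_scores = [5, 4, 4, 5, 4, 3, 5, 4]      # hypothetical AI-CCM ratings
mentor_scores = [4, 3, 4, 4, 3, 4, 3, 5]  # hypothetical student-mentor ratings
t = welch_t(ai_scores, mentor_scores)     # positive t favors AI-CCM
```

A p-value would then be read from the t distribution with Welch-Satterthwaite degrees of freedom, e.g. via `scipy.stats.ttest_ind(..., equal_var=False)`.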

CONCLUSIONS

AI-CCM has the potential to serve as a supplementary educational partner during CCs, offering support comparable to that of senior student mentors in structured scenarios. Despite challenges of response latency and limited depth in clinical content, AI-CCM was well received by, and accessible to, students using ChatGPT's free version. With further refinements, including specialty-specific content and improved responsiveness, AI-CCM may serve as a scalable, context-sensitive support system in clinical medical education.

Similar Articles

1. Development of a Clinical Clerkship Mentor Using Generative AI and Evaluation of Its Effectiveness in a Medical Student Trial Compared to Student Mentors: 2-Part Comparative Study.
JMIR Med Educ. 2025 Sep 4;11:e76702. doi: 10.2196/76702.
2. Prescription of Controlled Substances: Benefits and Risks.
3. The educational effects of portfolios on undergraduate student learning: a Best Evidence Medical Education (BEME) systematic review. BEME Guide No. 11.
Med Teach. 2009 Apr;31(4):282-98. doi: 10.1080/01421590902889897.
4. Pharmacy meets AI: Effect of a drug information activity on student perceptions of generative artificial intelligence.
Curr Pharm Teach Learn. 2025 Jul 7;17(10):102439. doi: 10.1016/j.cptl.2025.102439.
5. Utility of Generative Artificial Intelligence for Japanese Medical Interview Training: Randomized Crossover Pilot Study.
JMIR Med Educ. 2025 Aug 1;11:e77332. doi: 10.2196/77332.
6. Medical Student Perspectives on Professionalism in a Third-Year Surgery Clerkship - A Mixed Methods Study.
J Surg Educ. 2024 Nov;81(11):1720-1729. doi: 10.1016/j.jsurg.2024.08.018. Epub 2024 Sep 18.
7. Validation of Checklists and Evaluation of Clinical Skills in Cases of Abdominal Pain With Simulation in Formative, Objective, Structured Clinical Examination With Audiovisual Content in Third-Year Medical Students' Surgical Clerkship.
J Surg Educ. 2024 Nov;81(11):1756-1763. doi: 10.1016/j.jsurg.2024.08.016. Epub 2024 Sep 20.
8. Navigating the peer mentoring journey: Experiences of peer mentors in an undergraduate nursing programme.
Nurse Educ Today. 2025 Jul 16;154:106829. doi: 10.1016/j.nedt.2025.106829.
9. Clinical Performance and Communication Skills of ChatGPT Versus Physicians in Emergency Medicine: Simulated Patient Study.
JMIR Med Inform. 2025 Jul 17;13:e68409. doi: 10.2196/68409.
10. AI in Medical Questionnaires: Innovations, Diagnosis, and Implications.
J Med Internet Res. 2025 Jun 23;27:e72398. doi: 10.2196/72398.