Department of Radiation Oncology, Peking University Third Hospital, 49th North Garden Road, Haidian District, Beijing, 100191, P.R. China.
BMC Med Educ. 2024 Nov 19;24(1):1328. doi: 10.1186/s12909-024-06217-0.
Traditional puncture skills training for refresher doctors faces limitations in effectiveness and efficiency. This study explored the application of generative AI (ChatGPT), templates, and digital imaging to enhance puncture skills training.
Ninety refresher doctors were enrolled sequentially into three groups: traditional training; template and digital imaging training; and ChatGPT, template, and digital imaging training. Outcomes included theoretical knowledge, technical skills, and trainee satisfaction, measured at baseline, post-training, and 3-month follow-up.
The ChatGPT group increased theoretical knowledge scores by 17-21% over traditional training at post-training (81.6 ± 4.56 vs. 69.6 ± 4.58, p < 0.001) and follow-up (86.5 ± 4.08 vs. 71.3 ± 4.83, p < 0.001). It also outperformed template training by 4-5% at post-training (81.6 ± 4.56 vs. 78.5 ± 4.65, p = 0.032) and follow-up (86.5 ± 4.08 vs. 82.7 ± 4.68, p = 0.004). For technical skills, the ChatGPT (4.0 ± 0.32) and template (4.0 ± 0.18) groups showed similar scores at post-training, outperforming traditional training (3.6 ± 0.50) by 11% (p < 0.001). At follow-up, the ChatGPT (4.0 ± 0.18) and template (4.0 ± 0.32) groups still exceeded traditional training (3.8 ± 0.43) by 5% (p = 0.071 and p = 0.026, respectively). Learning curve analysis revealed the fastest knowledge (slope 13.02) and skill (slope 0.62) acquisition in the ChatGPT group, compared with the template (slopes 11.28 and 0.38) and traditional (slopes 5.17 and 0.53) groups. ChatGPT responses showed 100% relevance, 50% completeness, and 60% accuracy, with a 15.9 s response time. For training satisfaction, the ChatGPT group had the highest scores (4.2 ± 0.73), above the template (3.8 ± 0.68) and traditional (2.6 ± 0.94) groups (p < 0.01).
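The relative improvements quoted above (17-21%, 4-5%, 11%, 5%) can be reproduced directly from the reported mean scores. The short sketch below is an illustrative check, not part of the study's analysis; the helper name `pct_gain` is our own.

```python
def pct_gain(new: float, old: float) -> int:
    """Percentage improvement of `new` over `old`, rounded to the nearest whole percent."""
    return round((new - old) / old * 100)

# Theoretical knowledge: ChatGPT group vs. traditional training
print(pct_gain(81.6, 69.6))  # post-training -> 17
print(pct_gain(86.5, 71.3))  # follow-up     -> 21

# Theoretical knowledge: ChatGPT group vs. template training
print(pct_gain(81.6, 78.5))  # post-training -> 4
print(pct_gain(86.5, 82.7))  # follow-up     -> 5

# Technical skills vs. traditional training
print(pct_gain(4.0, 3.6))    # post-training -> 11
print(pct_gain(4.0, 3.8))    # follow-up     -> 5
```

Each value matches the percentage range stated in the results, confirming the gains were computed relative to the comparison group's mean score.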
Integrating AI, templates, and digital imaging significantly improved puncture knowledge and skills over traditional training. Combining technological innovation with AI shows promise for streamlining the mastery of complex medical competencies.