Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study.

Affiliations

Faculty of Science and Health, Charles Sturt University, Bathurst NSW, Australia.

UniSA Allied Health & Human Performance, University of South Australia, Adelaide, SA, Australia.

Publication

J Educ Eval Health Prof. 2024;21:34. doi: 10.3352/jeehp.2024.21.34. Epub 2024 Nov 18.

DOI: 10.3352/jeehp.2024.21.34
PMID: 39552083
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11637979/
Abstract

PURPOSE

This study evaluates the use of ChatGPT-4o in creating tailored continuing professional development (CPD) plans for radiography students, addressing the challenge of aligning CPD with Medical Radiation Practice Board of Australia (MRPBA) requirements. We hypothesized that ChatGPT-4o could support students in CPD planning while meeting regulatory standards.

METHODS

A descriptive, experimental design was used to generate 3 unique CPD plans using ChatGPT-4o, each tailored to hypothetical graduate radiographers in varied clinical settings. Each plan followed MRPBA guidelines, focusing on computed tomography specialization by the second year. Three MRPBA-registered academics assessed the plans using criteria of appropriateness, timeliness, relevance, reflection, and completeness from October 2024 to November 2024. Ratings underwent analysis using the Friedman test and intraclass correlation coefficient (ICC) to measure consistency among evaluators.

RESULTS

ChatGPT-4o-generated CPD plans generally adhered to regulatory standards across scenarios. The Friedman test indicated no significant differences among raters (P=0.420, 0.761, and 0.807 for each scenario), suggesting consistent scores within scenarios. However, ICC values were low (-0.96, 0.41, and 0.058 for scenarios 1, 2, and 3), revealing variability among raters, particularly on the timeliness and completeness criteria, and suggesting limitations in ChatGPT-4o's ability to address individualized and context-specific needs.
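The rating analysis described above (a Friedman test across raters plus an intraclass correlation coefficient) can be sketched in Python. This is only an illustration: the score matrix below is hypothetical, and ICC(2,1) (two-way random effects, absolute agreement, single measures) is an assumed variant, since the abstract does not state which ICC form the authors used.

```python
import numpy as np
from scipy.stats import friedmanchisquare

def icc2_1(ratings):
    """ICC(2,1): two-way random effects, absolute agreement, single measures.

    `ratings` is an (n_subjects x n_raters) matrix of scores.
    """
    ratings = np.asarray(ratings, dtype=float)
    n, k = ratings.shape
    grand = ratings.mean()
    # Partition the total sum of squares into subject, rater, and error terms
    ss_rows = k * np.sum((ratings.mean(axis=1) - grand) ** 2)
    ss_cols = n * np.sum((ratings.mean(axis=0) - grand) ** 2)
    ss_err = np.sum((ratings - grand) ** 2) - ss_rows - ss_cols
    msr = ss_rows / (n - 1)             # mean square, subjects
    msc = ss_cols / (k - 1)             # mean square, raters
    mse = ss_err / ((n - 1) * (k - 1))  # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

# Hypothetical scores: 5 criteria (appropriateness, timeliness, relevance,
# reflection, completeness) rated 1-5 by 3 raters for one scenario.
scores = np.array([
    [4, 5, 3],
    [3, 4, 4],
    [5, 5, 4],
    [2, 4, 3],
    [4, 3, 5],
])

# Friedman test: do the three raters' score distributions differ systematically?
stat, p = friedmanchisquare(scores[:, 0], scores[:, 1], scores[:, 2])
print(f"Friedman chi-square = {stat:.3f}, P = {p:.3f}")
print(f"ICC(2,1) = {icc2_1(scores):.3f}")
```

Read together, a non-significant Friedman P combined with a low ICC — the pattern the study reports — means the raters' overall score levels did not differ systematically even though their plan-by-plan agreement was weak.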

CONCLUSION

ChatGPT-4o demonstrates the potential to ease the cognitive demands of CPD planning, offering structured support in CPD development. However, human oversight remains essential to ensure plans are contextually relevant and deeply reflective. Future research should focus on enhancing artificial intelligence's personalization for CPD evaluation, highlighting ChatGPT-4o's potential and limitations as a tool in professional education.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f37/11637979/ec39cde97bbf/jeehp-21-34f1.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/6f37/11637979/bc4228ba39d5/jeehp-21-34f2.jpg

Similar articles

1
Effectiveness of ChatGPT-4o in developing continuing professional development plans for graduate radiographers: a descriptive study.
J Educ Eval Health Prof. 2024;21:34. doi: 10.3352/jeehp.2024.21.34. Epub 2024 Nov 18.
2
GPT-4o's competency in answering the simulated written European Board of Interventional Radiology exam compared to a medical student and experts in Germany and its ability to generate exam items on interventional radiology: a descriptive study.
J Educ Eval Health Prof. 2024;21:21. doi: 10.3352/jeehp.2024.21.21. Epub 2024 Aug 20.
3
AI-powered standardised patients: evaluating ChatGPT-4o's impact on clinical case management in intern physicians.
BMC Med Educ. 2025 Feb 20;25(1):278. doi: 10.1186/s12909-025-06877-6.
4
Comparing diagnostic skills in endodontic cases: dental students versus ChatGPT-4o.
BMC Oral Health. 2025 Mar 29;25(1):457. doi: 10.1186/s12903-025-05857-y.
5
ChatGPT-4 Omni Performance in USMLE Disciplines and Clinical Skills: Comparative Analysis.
JMIR Med Educ. 2024 Nov 6;10:e63430. doi: 10.2196/63430.
6
ChatGPT's Performance on Portuguese Medical Examination Questions: Comparative Analysis of ChatGPT-3.5 Turbo and ChatGPT-4o Mini.
JMIR Med Educ. 2025 Mar 5;11:e65108. doi: 10.2196/65108.
7
Integrating AI into clinical education: evaluating general practice trainees' proficiency in distinguishing AI-generated hallucinations and impacting factors.
BMC Med Educ. 2025 Mar 19;25(1):406. doi: 10.1186/s12909-025-06916-2.
8
Assessing ChatGPT for Clinical Decision-Making in Radiation Oncology, With Open-Ended Questions and Images.
Pract Radiat Oncol. 2025 Apr 29. doi: 10.1016/j.prro.2025.04.009.
9
Exploring the use of ChatGPT-4o in enhancing career development counseling for medical students: a study protocol.
BMJ Open. 2024 Nov 28;14(11):e083697. doi: 10.1136/bmjopen-2023-083697.
10
High identification and positive-negative discrimination but limited detailed grading accuracy of ChatGPT-4o in knee osteoarthritis radiographs.
Knee Surg Sports Traumatol Arthrosc. 2025 May;33(5):1911-1919. doi: 10.1002/ksa.12639. Epub 2025 Mar 7.

Cited by

1
Halted medical education and medical residents' training in Korea, journal metrics, and appreciation to reviewers and volunteers.
J Educ Eval Health Prof. 2025;22:1. doi: 10.3352/jeehp.2025.22.1. Epub 2025 Jan 13.

References

1
The performance of ChatGPT-4.0o in medical imaging evaluation: a cross-sectional study.
J Educ Eval Health Prof. 2024;21:29. doi: 10.3352/jeehp.2024.21.29. Epub 2024 Oct 31.
2
Factors influencing final year radiography students' intention to pursue postgraduate education in medical imaging.
Radiography (Lond). 2024 Jan;30(1):388-393. doi: 10.1016/j.radi.2023.12.006. Epub 2023 Dec 29.
3
Continuing professional development requirements for UK health professionals: a scoping review.
BMJ Open. 2020 Mar 10;10(3):e032781. doi: 10.1136/bmjopen-2019-032781.
4
Continuing professional development to foster behaviour change: From principles to practice in health professions education.
Med Teach. 2019 Sep;41(9):1045-1052. doi: 10.1080/0142159X.2019.1615608. Epub 2019 May 26.
5
A Guideline of Selecting and Reporting Intraclass Correlation Coefficients for Reliability Research.
J Chiropr Med. 2016 Jun;15(2):155-63. doi: 10.1016/j.jcm.2016.02.012. Epub 2016 Mar 31.