Charles Sturt University, Wagga Wagga, NSW, Australia.
Radiography (Lond). 2023 Jul;29(4):792-799. doi: 10.1016/j.radi.2023.05.011. Epub 2023 Jun 2.
Academic integrity among radiographers and nuclear medicine technologists/scientists in both higher education and scientific writing has been challenged by advances in artificial intelligence (AI). The recent release of ChatGPT, a chatbot powered by GPT-3.5 that is capable of producing accurate and human-like responses to questions in real time, has redefined the boundaries of academic and scientific writing. These boundaries require objective evaluation.
ChatGPT was tested against six subjects across the first three years of the medical radiation science undergraduate course, covering both examinations (n = 6) and written assignment tasks (n = 3). ChatGPT submissions were marked against standardised rubrics, and the results were compared with those of the student cohorts. Submissions were also evaluated by Turnitin for similarity and AI scores.
ChatGPT powered by GPT-3.5 performed below the average student in all written tasks, with the disparity increasing as subjects progressed. ChatGPT performed better than the average student in foundation or general subject examinations, where shallow responses satisfy the learning outcomes. For discipline-specific subjects, ChatGPT lacked the depth, breadth, and currency of insight to provide pass-level answers.
ChatGPT simultaneously poses a risk to academic integrity in writing and assessment while offering a tool for enhanced learning environments. Both the risks and the benefits are likely to be confined to learning outcomes of lower-order taxonomies and constrained by higher-order taxonomies.
ChatGPT powered by GPT-3.5 has limited capacity to support student cheating, introduces errors and fabricated information, and is readily identified by software as AI generated. A lack of depth of insight and of suitability for professional communication also limits its capacity as a learning enhancement tool.