Amano Izuki, Obi-Nagata Kisho, Ninomiya Ayane, Fujiwara Yuki, Koibuchi Noriyuki
Department of Integrative Physiology, Gunma University Graduate School of Medicine, Maebashi, Japan.
JMA J. 2025 Jul 15;8(3):730-735. doi: 10.31662/jmaj.2024-0375. Epub 2025 Jul 2.
Generative artificial intelligence (AI) has become more accessible due to technological advancements. While it can support more efficient learning, improper use may lead to legal issues or hinder self-directed learning. Medical education is no exception, as generative AI has the potential to become a powerful tool there. However, its practicality remains uncertain. We therefore investigated how medical students perceive generative AI and how they use it in medical education.
In January 2024, we conducted a study with 123 second-year medical students who had completed a physiology course and laboratory training at Gunma University, Japan. Students used ChatGPT (Chat Generative Pre-trained Transformer) 3.5 (OpenAI) for four tasks and evaluated its responses. A survey on the use of generative AI was also conducted. After excluding six students who did not take part, responses from the remaining 117 were analyzed.
Among the students, 41.9% had previously used ChatGPT. The average scores for tasks 1-4 were 6.5, 4.6, 7.4, and 6.2 out of 10, respectively. Although only 13% of students had a negative impression of ChatGPT, 54 found it challenging to apply to medical purposes. Nevertheless, 64.1% expressed a willingness to continue using generative AI, provided its use extends beyond medical contexts.
Nearly 60% of the students had never used generative AI before, which is consistent with general usage trends. Although they were impressed by the speed of generative AI responses, many students found that it lacked the precision needed for medical studies and that its output required additional verification. Limitations of generative AI, such as "hallucinations," were evident in medical education. It remains important to educate students in AI literacy and to ensure that they understand the potential issues generative AI could bring about.