Hanna Mytsyk, Yana Suchikova
Department of Applied Psychology and Speech Therapy, Berdyansk State Pedagogical University, Zaporizhzhia, Ukraine.
Int J Lang Commun Disord. 2025 Jul-Aug;60(4):e70088. doi: 10.1111/1460-6984.70088.
BACKGROUND: Integrating large language models (LLMs), such as ChatGPT, into speech-language pathology (SLP) presents promising opportunities and notable challenges. While these tools can support diagnostics, streamline documentation and assist in therapy planning, they also raise concerns related to misinformation, cultural insensitivity, overreliance and ethical ambiguity. Current discourse often centres on technological capabilities, overlooking how future speech-language pathologists (SLPs) are being prepared to use such tools responsibly.

AIMS: This paper examines the pedagogical, ethical and professional implications of integrating LLMs into SLP. It emphasizes the need to cultivate professional responsibility, ethical awareness and critical engagement amongst student SLPs, ensuring that such technologies are applied thoughtfully, appropriately and in accordance with evidence-based and contextually relevant therapeutic standards.

METHODS: The paper combines a review of recent interdisciplinary research with reflective insights from academic practice. It presents documented cases of student SLPs' overreliance on ChatGPT, analyzes common pitfalls through a structured table of examples and synthesizes perspectives from SLP, education, data ethics and linguistics.

MAIN CONTRIBUTION: Reflective examples presented in the article illustrate challenges that arise when LLMs are used without sufficient oversight or a clear understanding of their limitations. Rather than questioning the value of LLMs, these cases emphasize the importance of ensuring that student SLPs are guided towards thoughtful, ethical and clinically sound use. To support this, the paper offers a set of pedagogical recommendations, including ethics integration, reflective assignments, case-based learning, peer critique and interdisciplinary collaboration, aimed at embedding critical engagement with tools such as ChatGPT into professional training.

CONCLUSIONS: LLMs are becoming an integral part of SLP. Their impact, however, will depend on how effectively student SLPs are trained to balance technological innovation with professional responsibility. Higher education institutions (HEIs) must take an active role in embedding responsible engagement with LLMs into pre-service training and SLP curricula. Through intentional and early preparation, the field can move beyond the risks associated with automation and towards a future shaped by reflective, informed and ethically grounded use of generative tools.

WHAT THIS PAPER ADDS:

What is already known on this subject: Large language models (LLMs), including ChatGPT, are increasingly used in speech-language pathology (SLP) for tasks such as diagnostic support, therapy material generation and documentation. While prior research acknowledges both their utility and risks, limited attention has been paid to how student SLPs engage with these tools and how educational institutions prepare them for responsible use.

What this paper adds to existing knowledge: This paper identifies key challenges in how student SLPs interact with ChatGPT, including overreliance, lack of critical evaluation and ethical blind spots. It emphasizes the role of higher education in developing critical AI literacy aligned with clinical and ethical standards. The study offers specific, practice-oriented recommendations for embedding responsibility-focused engagement with LLMs into SLP curricula. These include ethics integration, reflective assignments, peer feedback and interdisciplinary dialogue.

What are the potential or actual clinical implications of this work? Without structured guidance, future SLPs may misuse LLMs in ways that compromise diagnostic accuracy, cultural appropriateness or therapeutic quality. Embedding reflective, ethics-focused training into SLP curricula can reduce these risks and ensure that generative tools like ChatGPT support rather than undermine clinical decision-making and patient care.