Michigan State University College of Human Medicine, Traverse City Campus, Traverse City, Michigan, USA.
ChronoRecord Association, Fullerton, California, USA.
Br J Psychiatry. 2024 Feb;224(2):33-35. doi: 10.1192/bjp.2023.136.
With recent advances in artificial intelligence (AI), patients are increasingly exposed to misleading medical information. Generative AI models, including large language models such as ChatGPT, create and modify text, images, audio and video based on their training data. Commercial use of generative AI is expanding rapidly, and the public will routinely encounter messages created by generative AI. However, generative AI models may be unreliable, routinely making errors and spreading misinformation widely. Misinformation about mental illness created by generative AI may include factual errors, nonsense, fabricated sources and dangerous advice. Psychiatrists need to recognise that patients may receive misinformation online, including misinformation about medicine and psychiatry.