Gummadi Ramakrishna, Dasari Nagasen, Kumar D Sathis, Pindiprolu Sai Kiran S S
Aditya Pharmacy College, Surampalem, Andhra Pradesh, 533 437, India.
Adv Pharm Bull. 2024 Oct;14(3):499-503. doi: 10.34172/apb.2024.060. Epub 2024 Jul 31.
Artificial intelligence (AI), particularly large language models such as ChatGPT developed by OpenAI, has demonstrated potential in various domains, including medicine. While ChatGPT has shown the capability to pass rigorous exams such as the United States Medical Licensing Examination (USMLE) Step 1, its proficiency in addressing inquiries about breast cancer, a complex and prevalent disease, remains underexplored. This study aims to assess the accuracy and comprehensiveness of ChatGPT's responses to common breast cancer questions, addressing a critical gap in the literature and evaluating its potential to enhance patient education and support in breast cancer management.
A curated list of 100 frequently asked breast cancer questions was compiled from Cancer.net, the National Breast Cancer Foundation, and clinical practice. These questions were input into ChatGPT, and the responses were evaluated for accuracy by two expert reviewers using a four-point scale. Scoring discrepancies were resolved through additional expert review.
Of the 100 responses, 5 were entirely inaccurate, 22 were partially accurate, 42 were accurate but lacking comprehensiveness, and 31 were highly accurate. The majority of responses (95 of 100) were at least partially accurate, demonstrating ChatGPT's potential to provide reliable information on breast cancer.
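The reported distribution can be tallied as follows; this is a minimal sketch that reconstructs the summary figures from the abstract, assuming the four-point scale maps 1 (entirely inaccurate) through 4 (highly accurate):

```python
# Reported score distribution on the four-point accuracy scale
# (labels are inferred from the abstract's wording).
counts = {
    1: 5,   # entirely inaccurate
    2: 22,  # partially accurate
    3: 42,  # accurate but lacking comprehensiveness
    4: 31,  # highly accurate
}

total = sum(counts.values())
# Responses scoring 2 or higher, i.e. at least partially accurate
at_least_partial = sum(v for k, v in counts.items() if k >= 2)

print(f"Total responses: {total}")
print(f"At least partially accurate: {at_least_partial} ({at_least_partial / total:.0%})")
```

This reproduces the abstract's headline figure that 95 of the 100 responses were at least partially accurate.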
ChatGPT shows promise as a supplementary tool for patient education on breast cancer. While generally accurate, the presence of inaccuracies underscores the need for professional oversight. The study advocates for integrating AI tools like ChatGPT in healthcare settings to support patient-provider interactions and health education, emphasizing the importance of regular updates to reflect the latest research and clinical guidelines.