Coskun Burhan, Ocakoglu Gokhan, Yetemen Melih, Kaygisiz Onur
Bursa Uludag University, Department of Urology, Nilüfer, Bursa, Turkey.
Bursa Uludag University, Department of Biostatistics, Nilüfer, Bursa, Turkey.
Urology. 2023 Oct;180:35-58. doi: 10.1016/j.urology.2023.05.040. Epub 2023 Jul 4.
To evaluate the performance of ChatGPT, an artificial intelligence (AI) language model, in providing patient information on prostate cancer, and to compare the accuracy, similarity, and quality of the information to a reference source.
Patient information material on prostate cancer from the website of the European Association of Urology Patient Information served as the reference source and was used to generate 59 queries. The accuracy of the model's content was measured with F1, precision, and recall scores; similarity was assessed with cosine similarity; and quality was evaluated on a 5-point Likert scale, the General Quality Score (GQS).
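The abstract does not specify how tokens were matched or which vectorization was used for cosine similarity, so the following is only a minimal sketch of how such scores are commonly computed: word-level overlap for precision/recall/F1, and bag-of-words count vectors for cosine similarity. Function names and the tokenization are illustrative assumptions, not the authors' implementation.

```python
from collections import Counter
import math


def token_f1(reference: str, candidate: str):
    """Token-level precision, recall, and F1 between two texts.

    Assumes simple lowercase whitespace tokenization; the paper's
    exact preprocessing is not specified.
    """
    ref_tokens = Counter(reference.lower().split())
    cand_tokens = Counter(candidate.lower().split())
    # Multiset intersection counts tokens shared by both texts.
    overlap = sum((ref_tokens & cand_tokens).values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision = overlap / sum(cand_tokens.values())
    recall = overlap / sum(ref_tokens.values())
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1


def cosine_similarity(reference: str, candidate: str) -> float:
    """Cosine similarity between bag-of-words count vectors."""
    a = Counter(reference.lower().split())
    b = Counter(candidate.lower().split())
    dot = sum(a[t] * b[t] for t in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    if norm_a == 0.0 or norm_b == 0.0:
        return 0.0
    return dot / (norm_a * norm_b)


if __name__ == "__main__":
    ref = "prostate cancer is common in older men"
    ans = "prostate cancer is common"
    p, r, f = token_f1(ref, ans)
    print(f"precision={p:.3f} recall={r:.3f} f1={f:.3f}")
    print(f"cosine={cosine_similarity(ref, ans):.3f}")
```

In this toy example, every answer token appears in the reference (precision 1.0) but the answer covers only part of the reference (recall below 1.0), mirroring how a verbose or partial ChatGPT answer would score against the EAU reference text.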
ChatGPT was able to respond to all prostate cancer-related queries. The average F1 score was 0.426 (range: 0-1), the precision score was 0.349 (range: 0-1), the recall score was 0.549 (range: 0-1), and the cosine similarity was 0.609 (range: 0-1). The average GQS was 3.62 ± 0.49 (range: 1-5), with no answer achieving the maximum GQS of 5. Although ChatGPT produced more information than the reference, the accuracy and quality of its content were suboptimal, with all scores indicating a need for improvement in the model's performance.
Caution should be exercised when using ChatGPT as a patient information source for prostate cancer due to limitations in its performance, which may lead to inaccuracies and potential misunderstandings. Further studies, using different topics and language models, are needed to fully understand the capabilities and limitations of AI-generated patient information.