Jeyaraman Madhan, Ramasubramanian Swaminathan, Balaji Sangeetha, Jeyaraman Naveen, Nallakumarasamy Arulkumar, Sharma Shilpa
Department of Orthopaedics, ACS Medical College and Hospital, Dr MGR Educational and Research Institute, Chennai 600077, Tamil Nadu, India.
Department of General Medicine, Government Medical College, Omandurar Government Estate, Chennai 600018, Tamil Nadu, India.
World J Methodol. 2023 Sep 20;13(4):170-178. doi: 10.5662/wjm.v13.i4.170.
Artificial intelligence (AI) tools, such as OpenAI's Chat Generative Pre-trained Transformer (ChatGPT), hold considerable potential in healthcare, academia, and diverse industries. Evidence demonstrates that ChatGPT performs at a medical student level on standardized tests, suggesting utility in medical education, radiology reporting, genetics research, data optimization, and drafting repetitive texts such as discharge summaries. Nevertheless, these tools should augment, not supplant, human expertise. Despite promising applications, ChatGPT faces limitations, including weakness in critical thinking tasks and a tendency to generate false references, necessitating stringent cross-verification. Attendant concerns, such as potential misuse, bias, blind trust, and privacy, underscore the need for transparency, accountability, and clear policies. Evaluation of AI-generated content and preservation of academic integrity are critical. With responsible use, AI can significantly improve healthcare, academia, and industry without compromising integrity and research quality. For effective and ethical AI deployment, collaboration amongst AI developers, researchers, educators, and policymakers is vital. The development of domain-specific tools, guidelines, and regulations, together with the facilitation of public dialogue, must underpin these endeavors to responsibly harness AI's potential.