Kumar Mukesh, Mani Utsav Anand, Tripathi Pranjal, Saalim Mohd, Roy Sneha
Emergency Medicine, King George's Medical University, Lucknow, IND.
Psychiatry, King George's Medical University, Lucknow, IND.
Cureus. 2023 Aug 10;15(8):e43313. doi: 10.7759/cureus.43313. eCollection 2023 Aug.
One of the critical challenges posed by artificial intelligence (AI) tools like Google Bard (Google LLC, Mountain View, California, United States) is the potential for "artificial hallucinations." These refer to instances where an AI chatbot generates fictional, erroneous, or unsubstantiated information in response to queries. In research, such inaccuracies can lead to the propagation of misinformation and undermine the credibility of scientific literature. The experience presented here highlights the importance of cross-checking the information provided by AI tools with reliable sources and maintaining a cautious approach when utilizing these tools in research writing.