Poudel Utsav, Jakhar Sachin, Mohan Prakash, Nepal Anuj
School of Computer Science and Engineering, Vellore Institute of Technology, Vellore, Tamil Nadu, India.
Deakin Cyber, School of Information Technology, Deakin University, Geelong, Australia.
Issues Ment Health Nurs. 2025 Jul;46(7):693-701. doi: 10.1080/01612840.2025.2502943. Epub 2025 May 16.
Artificial intelligence (AI) is transforming digital health; its influence is expanding across multiple sectors, with mental health and psychiatric care emerging as key areas of transformation. While significant advances have been made in medical AI, there remains a need to better understand how these technologies are integrated into clinical practice and what challenges they introduce. We examine the use of AI in identifying and treating mental health disorders, highlighting its impact on screening, diagnosis, and intervention strategies. Technologies such as natural language processing (NLP), machine learning (ML), and computer-delivered cognitive behavioral therapy (CBT) are discussed in the context of enhancing Clinical Decision Support Systems (CDSS). While these innovations promise increased efficiency and accessibility in psychiatric care, they also introduce ethical challenges, including concerns over privacy, bias, and reduced human interaction. Through a critical evaluation, we find that greater transparency and the development of unbiased AI systems that work hand in hand with human-led care should be encouraged. Our findings underscore the importance of continued research and regulation to ensure the responsible and effective deployment of medical AI services.