Joyce Dan W, Kormilitzin Andrey, Smith Katharine A, Cipriani Andrea
University of Oxford, Department of Psychiatry, Warneford Hospital, Oxford, OX3 7JX, UK.
Institute of Population Health, Department of Primary Care and Mental Health, University of Liverpool, Liverpool, L69 3GF, UK.
NPJ Digit Med. 2023 Jan 18;6(1):6. doi: 10.1038/s41746-023-00751-9.
The literature on artificial intelligence (AI) or machine learning (ML) in mental health and psychiatry lacks consensus on what "explainability" means. In the more general XAI (eXplainable AI) literature, there has been some convergence on explainability meaning model-agnostic techniques that augment a complex model (with internal mechanics intractable for human understanding) with a simpler model argued to deliver results that humans can comprehend. Given the differing usage and intended meaning of the term "explainability" in AI and ML, we propose instead to approximate model/algorithm explainability by understandability, defined as a function of transparency and interpretability. These concepts are easier to articulate, to "ground" in our understanding of how algorithms and models operate, and are used more consistently in the literature. We describe the TIFU (Transparency and Interpretability For Understandability) framework and examine how this applies to the landscape of AI/ML in mental health research. We argue that the need for understandability is heightened in psychiatry because data describing the syndromes, outcomes, disorders and signs/symptoms possess probabilistic relationships to each other, as do the tentative aetiologies and multifactorial social and psychological determinants of disorders. If we develop and deploy AI/ML models, ensuring human understandability of the inputs, processes and outputs of these models is essential to develop trustworthy systems fit for deployment.