Haas Stefan, Hegestweiler Konstantin, Rapp Michael, Muschalik Maximilian, Hüllermeier Eyke
Institute of Informatics, LMU Munich, Munich, Germany.
BMW Group, Munich, Germany.
Front Artif Intell. 2024 Oct 24;7:1471208. doi: 10.3389/frai.2024.1471208. eCollection 2024.
Machine learning has made tremendous progress in predictive performance in recent years. Despite these advances, employing machine learning models in high-stakes domains remains challenging due to the opaqueness of many high-performance models. If a model's behavior cannot be analyzed, trust in it is likely to decrease, which hinders its acceptance by human decision-makers. Motivated by these challenges, we propose a process model for developing and evaluating explainable decision support systems that are tailored to the needs of different stakeholders. To demonstrate its usefulness, we apply the process model to a real-world application in an enterprise context. The goal is to increase the acceptance of an existing black-box model developed at a car manufacturer for supporting manual goodwill assessments. Following the proposed process, we conduct two quantitative surveys targeted at the application's stakeholders. Our study reveals that textual explanations based on local feature importance best fit the needs of the stakeholders in the considered use case. Specifically, our results show that all stakeholders, including business specialists, goodwill assessors, and technical IT experts, agree that such explanations significantly increase their trust in the decision support system. Furthermore, our technical evaluation confirms the faithfulness and stability of the selected explanation method. These practical findings demonstrate the potential of our process model to facilitate the successful deployment of machine learning models in enterprise settings. The results emphasize the importance of developing explanations that are tailored to the specific needs and expectations of diverse stakeholders.
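To illustrate the kind of explanation the abstract refers to, the following is a minimal sketch of how local feature-importance scores (e.g., obtained from SHAP or LIME) could be rendered as a textual explanation for a single goodwill assessment. The function name, feature names, and importance values are hypothetical and not taken from the study.

```python
# Hypothetical sketch: turning local feature-importance scores into a short
# textual explanation, the format stakeholders preferred in this study.
# Feature names and scores are illustrative only, not from the paper.

def textual_explanation(importances: dict[str, float], top_k: int = 3) -> str:
    """Render the top_k most influential features as a plain-language sentence."""
    # Rank features by the magnitude of their local importance.
    ranked = sorted(importances.items(), key=lambda kv: abs(kv[1]), reverse=True)
    parts = []
    for feature, score in ranked[:top_k]:
        direction = "increased" if score > 0 else "decreased"
        parts.append(f"'{feature}' {direction} the recommended goodwill share")
    return "The model's suggestion is mainly driven by: " + "; ".join(parts) + "."

# Example with made-up local importance values for one case:
example = {"vehicle_age": -0.31, "mileage": -0.12, "service_history": 0.42}
print(textual_explanation(example))
```

Such a template-based rendering is one straightforward way to map numeric attributions to stakeholder-readable text; the paper's actual explanation pipeline may differ.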