IHSAN Lab, Information Technology University of the Punjab (ITU), Lahore, Pakistan.
Research Center for Islamic Legislation and Ethics (CILE), College of Islamic Studies, Hamad Bin Khalifa University (HBKU), Doha, Qatar.
Comput Biol Med. 2022 Oct;149:106043. doi: 10.1016/j.compbiomed.2022.106043. Epub 2022 Sep 7.
With the advent of machine learning (ML) and deep learning (DL) empowered applications in critical domains such as healthcare, questions about the liability, trust, and interpretability of their outputs are arising. The black-box nature of many DL models is a roadblock to clinical adoption. Therefore, to gain the trust of clinicians and patients, we need to provide explanations for the decisions of such models. With the promise of enhancing the trust and transparency of black-box models, researchers are maturing the field of eXplainable ML (XML). In this paper, we provide a comprehensive review of explainable and interpretable ML techniques for various healthcare applications. Along with highlighting the security, safety, and robustness challenges that hinder the trustworthiness of ML, we also discuss the ethical issues arising from the use of ML/DL in healthcare. We further describe how explainable and trustworthy ML can help address these ethical problems. Finally, we elaborate on the limitations of existing approaches and highlight various open research problems that require further investigation.