Faculty of Electrical Engineering and Computer Science, University of Maribor, Maribor, Slovenia.
Community Healthcare Center Dr Adolf Drolc Maribor, Maribor, Slovenia.
Sci Prog. 2024 Jul-Sep;107(3):368504241266573. doi: 10.1177/00368504241266573.
By resolving the trust issues surrounding machine learning algorithms whose reasoning cannot be understood, progress can be made toward integrating machine learning algorithms into mHealth applications. The aim of this paper is to provide a transparency layer for black-box machine learning algorithms and thereby enable mHealth applications to maximize their efficiency.
Using a machine learning testing framework, we present the process of transferring knowledge between a white-box model and a black-box model, together with the evaluation process used to validate that the knowledge transfer was successful.
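The paper does not reproduce its implementation here, so the following is a minimal illustrative sketch of one common form of such knowledge transfer: an interpretable white-box model (a decision tree) is fitted as a surrogate to the predictions of a black-box model (a random forest). The dataset, model choices, and hyperparameters are assumptions made for illustration, not the authors' actual framework.

```python
# Hypothetical sketch: transfer knowledge from a black-box model to a
# white-box surrogate by training the surrogate on the black-box outputs.
# Dataset, models, and hyperparameters are illustrative assumptions.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Black-box teacher: accurate but hard to interpret.
black_box = RandomForestClassifier(n_estimators=200, random_state=0)
black_box.fit(X_train, y_train)

# White-box student: trained to mimic the teacher's predictions
# (knowledge transfer) rather than the raw labels.
teacher_labels = black_box.predict(X_train)
white_box = DecisionTreeClassifier(max_depth=4, random_state=0)
white_box.fit(X_train, teacher_labels)
```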
The presentation layer of the final output shows clear differences in reasoning between the base white-box model and the knowledge-infused white-box model. The correlation between the base black-box model and the new knowledge-infused model is very high, indicating that the knowledge transfer was successful.
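One way to quantify such an evaluation, continuing the hypothetical sketch above (same assumed models and data split), is to correlate the two models' predicted scores on held-out data and to measure how often their predicted labels agree; high values on both would indicate a successful transfer.

```python
# Hypothetical continuation of the sketch above: measure how closely the
# knowledge-infused white-box model tracks the black-box model on test data.
import numpy as np
from scipy.stats import pearsonr

bb_scores = black_box.predict_proba(X_test)[:, 1]  # black-box predicted scores
wb_scores = white_box.predict_proba(X_test)[:, 1]  # white-box predicted scores

corr, _ = pearsonr(bb_scores, wb_scores)  # correlation of predicted scores
fidelity = np.mean(black_box.predict(X_test) == white_box.predict(X_test))
print(f"prediction correlation: {corr:.3f}, label agreement: {fidelity:.3f}")
```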
There is a clear need for transparency in digital health and in healthcare in general. Adding solutions to the toolbox of explainable artificial intelligence is one way to gradually reduce the opacity of black-box models.