Typal Academy, Richland, USA.
Department of Applied Mathematics and Statistics, Colorado School of Mines, Golden, USA.
Sci Rep. 2023 Jun 21;13(1):10103. doi: 10.1038/s41598-023-36249-3.
Indecipherable black boxes are common in machine learning (ML), but applications increasingly require explainable artificial intelligence (XAI). The core of XAI is to establish transparent and interpretable data-driven algorithms. This work provides concrete tools for XAI in situations where prior knowledge must be encoded and untrustworthy inferences flagged. We use the "learn to optimize" (L2O) methodology, wherein each inference solves a data-driven optimization problem. Our L2O models are straightforward to implement, directly encode prior knowledge, and yield theoretical guarantees (e.g., satisfaction of constraints). We also propose the use of interpretable certificates to verify whether model inferences are trustworthy. Numerical examples are provided for the applications of dictionary-based signal recovery, CT imaging, and arbitrage trading of cryptoassets. Code and additional documentation can be found at https://xai-l2o.research.typal.academy .
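The L2O idea described in the abstract, an inference obtained by solving a data-driven optimization problem, paired with a certificate that flags untrustworthy outputs, can be sketched for the dictionary-based signal-recovery application. The sketch below uses unrolled ISTA for the LASSO problem; the function names, the fixed (rather than learned) step size and threshold, and the residual-based certificate are all illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of the l1 norm (shrinkage).
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def l2o_ista_inference(D, b, step, lam, n_iters=500):
    """Each inference solves min_x 0.5*||Dx - b||^2 + lam*||x||_1
    by iterating proximal gradient steps. In a trained L2O model,
    `step` and `lam` would be learned from data; here they are
    hand-set placeholders (assumption)."""
    x = np.zeros(D.shape[1])
    for _ in range(n_iters):
        grad = D.T @ (D @ x - b)          # gradient of the smooth term
        x = soft_threshold(x - step * grad, step * lam)
    return x

def trust_certificate(D, b, x, tol=0.1):
    """Toy interpretable certificate (hypothetical criterion): accept the
    inference only if the relative reconstruction residual is small."""
    return np.linalg.norm(D @ x - b) / np.linalg.norm(b) <= tol

# Synthetic sparse-recovery instance.
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))         # overcomplete dictionary
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [1.0, -2.0, 1.5]    # 3-sparse ground truth
b = D @ x_true

step = 1.0 / np.linalg.norm(D, 2) ** 2    # safe step: 1 / ||D||_2^2
x_hat = l2o_ista_inference(D, b, step, lam=0.1)
print("certified trustworthy:", trust_certificate(D, b, x_hat))
```

The certificate here is deliberately simple: a reader can inspect the single residual number that justifies the accept/reject decision, which is the kind of transparency the abstract attributes to interpretable certificates.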