Michaud Eric J, Liao Isaac, Lad Vedang, Liu Ziming, Mudide Anish, Loughridge Chloe, Guo Zifan Carl, Kheirkhah Tara Rezaei, Vukelić Mateja, Tegmark Max
Department of Physics, Massachusetts Institute of Technology, Cambridge, MA 02139, USA.
Institute for Artificial Intelligence and Fundamental Interactions, Cambridge, MA 02139, USA.
Entropy (Basel). 2024 Dec 2;26(12):1046. doi: 10.3390/e26121046.
Can we turn AI black boxes into code? Although this mission sounds extremely challenging, we show that it is not entirely impossible by presenting a proof-of-concept method, MIPS, that can synthesize programs based on the automated mechanistic interpretability of neural networks trained to perform the desired task, auto-distilling the learned algorithm into Python code. We test MIPS on a benchmark of 62 algorithmic tasks that can be learned by an RNN and find it highly complementary to GPT-4: MIPS solves 32 of them, including 13 that are not solved by GPT-4 (which also solves 30). MIPS uses an integer autoencoder to convert the RNN into a finite state machine, then applies Boolean or integer symbolic regression to capture the learned algorithm. As opposed to large language models, this program synthesis technique makes no use of (and is therefore not limited by) human training data such as algorithms and code from GitHub. We discuss opportunities and challenges for scaling up this approach to make machine-learned models more interpretable and trustworthy.
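To make the described pipeline concrete, below is a minimal toy sketch, assuming a simple task (running parity of a bit stream) and standing in noisy vectors for an RNN's hidden states. It illustrates the three stages the abstract names: quantizing hidden states to integer labels (the integer-autoencoder step), tabulating a finite state machine from observed transitions, and brute-forcing a small integer formula for the transition rule. All function names and the search space are hypothetical illustrations, not the authors' MIPS implementation.

```python
# Conceptual sketch (not the authors' code) of: hidden states -> integer states
# -> finite state machine -> symbolic rule. Toy task: running parity of bits.

import itertools
import numpy as np

rng = np.random.default_rng(0)

# Toy data: pretend these are hidden states of an RNN trained on running parity.
bits = rng.integers(0, 2, size=200)                        # input bit stream
parity = np.bitwise_xor.accumulate(bits)                   # ground-truth running parity
hidden = parity[:, None] + 0.05 * rng.normal(size=(200, 4))  # noisy 4-d "hidden states"

# Step 1: quantize hidden states to integer labels (stand-in for an integer autoencoder).
def extract_states(h, n_states):
    centered = h - h.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    proj = centered @ vt[0]                                # leading principal direction
    edges = np.quantile(proj, np.linspace(0, 1, n_states + 1)[1:-1])
    return np.digitize(proj, edges)                        # integer state labels

states = extract_states(hidden, n_states=2)

# Step 2: tabulate the finite state machine (state, input) -> next state.
fsm = {}
for s, x, s_next in zip(states[:-1], bits[1:], states[1:]):
    fsm[(int(s), int(x))] = int(s_next)

# Step 3: tiny "symbolic regression": search next = (a*s + b*x + c) mod n_states.
def fit_symbolic_rule(table, n_states):
    for a, b, c in itertools.product(range(n_states), repeat=3):
        if all((a * s + b * x + c) % n_states == nxt for (s, x), nxt in table.items()):
            return a, b, c
    return None

print("recovered rule (a, b, c):", fit_symbolic_rule(fsm, 2))
# Expect (1, 1, 0), i.e. next_state = (state + input) mod 2: the parity algorithm.
```

In this toy setting the recovered formula is directly printable as Python, mirroring the abstract's claim of auto-distilling a learned algorithm into code; the real method handles far richer state spaces and expression classes than this sketch.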