Zenil Hector, Hernández-Orozco Santiago, Kiani Narsis A, Soler-Toscano Fernando, Rueda-Toicen Antonio, Tegnér Jesper
Algorithmic Dynamics Lab, Unit of Computational Medicine, Department of Medicine Solna, Center for Molecular Medicine, Karolinska Institute and SciLifeLab, SE-171 77 Stockholm, Sweden.
Algorithmic Nature Group, Laboratoire de Recherche Scientifique (LABORES) for the Natural and Digital Sciences, 75005 Paris, France.
Entropy (Basel). 2018 Aug 15;20(8):605. doi: 10.3390/e20080605.
We investigate the properties of a Block Decomposition Method (BDM), which extends the power of a Coding Theorem Method (CTM) that approximates local estimations of algorithmic complexity based on Solomonoff-Levin's theory of algorithmic probability, providing a closer connection to algorithmic complexity than previous attempts based on statistical regularities such as popular lossless compression schemes. The strategy behind BDM is to find small computer programs that produce the components of a larger, decomposed object. The set of short computer programs can then be artfully arranged in sequence so as to produce the original object. We show that the method provides efficient estimations of algorithmic complexity, but that it performs like Shannon entropy when it loses accuracy. We estimate errors and study the behaviour of BDM for different boundary conditions, all of which are compared and assessed in detail. The measure can be adapted for use with multi-dimensional objects other than strings, such as arrays and tensors. To test the measure, we demonstrate the power of CTM on low-algorithmic-randomness objects that are assigned maximal entropy (e.g., π) but whose numerical approximations are closer to the theoretical low algorithmic-randomness expectation. We also test the measure on larger objects, including dual, isomorphic and cospectral graphs, for which algorithmic randomness is known to be low. We also release implementations of the methods in most major programming languages (Mathematica, among others), along with an online algorithmic complexity calculator.
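To illustrate the decomposition strategy described in the abstract, the following is a minimal sketch (not the authors' released implementation) of how a BDM value can be computed for a binary string. It assumes a hypothetical precomputed lookup table `ctm_table` mapping short blocks to their CTM complexity estimates, a block size of 12, and a simplified boundary rule that drops any trailing remainder; the paper itself compares several such boundary conditions.

```python
import math
from collections import Counter

def bdm(string, ctm_table, block_size=12):
    """Sketch of the Block Decomposition Method for a binary string.

    BDM(X) = sum over distinct blocks b of [ CTM(b) + log2(n_b) ],
    where n_b is the number of occurrences of block b in the decomposition
    and CTM(b) is looked up in a precomputed table of CTM estimates.
    """
    # Decompose into non-overlapping blocks; trailing characters that do not
    # fill a whole block are dropped here (one possible boundary condition).
    blocks = [string[i:i + block_size]
              for i in range(0, len(string) - block_size + 1, block_size)]
    counts = Counter(blocks)
    # Each distinct block contributes its CTM value once, plus log2 of its
    # multiplicity, i.e. the cost of saying "repeat this block n times".
    return sum(ctm_table[b] + math.log2(n) for b, n in counts.items())


# Hypothetical usage: `ctm_table` would normally come from the authors'
# published CTM datasets of short-string complexity values.
ctm_table = {"010101010101": 26.99, "011010001101": 33.12}
print(bdm("010101010101011010001101010101010101", ctm_table))
```

The log2 multiplicity term is what makes BDM collapse toward a Shannon-entropy-like measure when the object is much larger than the blocks for which CTM values are available, which is the loss-of-accuracy regime discussed in the abstract.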