Gajowniczek Krzysztof, Orłowski Arkadiusz, Ząbkowski Tomasz
Department of Informatics, Faculty of Applied Informatics and Mathematics, Warsaw University of Life Sciences-SGGW, Nowoursynowska 159, 02-787 Warsaw, Poland.
Entropy (Basel). 2018 Apr 3;20(4):249. doi: 10.3390/e20040249.
Artificial neural networks are currently among the most commonly used classifiers, and in recent years they have been successfully applied in many practical domains, including banking and finance, health and medicine, and engineering and manufacturing. A large number of error functions have been proposed in the literature to achieve better predictive power. However, only a few works employ Tsallis statistics, although the method itself has been successfully applied in other machine learning techniques. This paper undertakes the effort to examine the q-generalized function based on Tsallis statistics as an alternative error measure in neural networks. In order to validate different performance aspects of the proposed function and to identify its strengths and weaknesses, an extensive simulation was prepared based on an artificial benchmarking dataset. The results indicate that the Tsallis entropy error function can be successfully introduced into neural networks, yielding satisfactory results and handling class imbalance, noise in the data, and the use of non-informative predictors.
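The paper does not reproduce its formulas here, but the q-generalized error measure it studies is commonly built from the Tsallis q-logarithm, ln_q(x) = (x^(1-q) - 1) / (1 - q), which recovers the natural logarithm as q → 1. A minimal sketch of a binary error function along these lines (the function names `q_log` and `tsallis_error` and the averaging over samples are illustrative assumptions, not the authors' exact formulation):

```python
import numpy as np

def q_log(x, q):
    """Tsallis q-logarithm: (x**(1-q) - 1) / (1 - q); tends to ln(x) as q -> 1."""
    if abs(q - 1.0) < 1e-12:
        return np.log(x)  # limiting case: ordinary logarithm
    return (np.power(x, 1.0 - q) - 1.0) / (1.0 - q)

def tsallis_error(y_true, y_pred, q, eps=1e-12):
    """q-generalized cross-entropy for binary targets (illustrative sketch).

    For q = 1 this reduces to the standard cross-entropy (log-loss)
    commonly used to train neural-network classifiers.
    """
    p = np.clip(np.asarray(y_pred, dtype=float), eps, 1.0 - eps)
    y = np.asarray(y_true, dtype=float)
    per_sample = -(y * q_log(p, q) + (1.0 - y) * q_log(1.0 - p, q))
    return float(np.mean(per_sample))
```

Varying q away from 1 changes how heavily confident mistakes are penalized, which is one plausible mechanism behind the robustness to class imbalance and noise reported in the abstract.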