Yu Zhuoting, Deng Hongzhong, Tang Shuaiwen
College of Systems Engineering, National University of Defense Technology, Changsha 410073, China.
Entropy (Basel). 2025 May 31;27(6):591. doi: 10.3390/e27060591.
Recently, interest in optimizing judging schemes for large-scale innovation competitions has grown as the complexity of evaluation processes continues to escalate. Although numerous methods have been developed to improve scoring fairness and precision, challenges such as evaluator subjectivity, workload imbalance, and the inherent uncertainty of scoring systems remain inadequately addressed. This study introduces a novel framework that integrates a genetic algorithm-based work cross-distribution model, advanced Z-score adjustment methods, and a BP neural network-enhanced score correction approach to tackle these issues. First, we propose a work cross-distribution model based on the concept of information entropy. The model employs a genetic algorithm to maximize the overlap between experts while ensuring a balanced distribution of evaluation tasks, thus reducing the entropy generated by imbalances in the process. By optimizing the distribution of submissions across experts, our model significantly mitigates inconsistencies arising from diverse scoring tendencies. Second, we develop modified Z-score and Z-score Pro scoring adjustment models aimed at eliminating scoring discrepancies between judges, thereby enhancing the overall reliability of the normalization process and evaluation results. Additionally, we propose evaluation metrics grounded in information theory. Finally, we incorporate a BP neural network-based score adjustment technique to further refine assessment accuracy by capturing latent biases and uncertainties inherent in large-scale evaluations. Experimental results on datasets from national-scale innovation competitions demonstrate that the proposed methods not only improve the fairness and robustness of the evaluation process but also contribute to a more scientific and objective assessment framework.
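To make the Z-score adjustment step concrete, the following is a minimal sketch of baseline per-judge Z-score normalization: each judge's scores are standardized against that judge's own mean and standard deviation, then rescaled to the pooled statistics so harsh and lenient judges become comparable. This is a plausible baseline only; the paper's "modified Z-score" and "Z-score Pro" variants add further corrections that the abstract does not specify, and the function name and data layout here are assumptions.

```python
import statistics

def z_score_adjust(scores_by_judge):
    """Baseline Z-score adjustment (hypothetical sketch).

    Standardize each judge's scores to zero mean / unit variance using
    that judge's own statistics, then rescale to the pooled mean and
    standard deviation of all scores, so every judge's adjusted scores
    share a common scale.
    """
    all_scores = [s for scores in scores_by_judge.values() for s in scores]
    g_mean = statistics.mean(all_scores)
    g_std = statistics.pstdev(all_scores)
    adjusted = {}
    for judge, scores in scores_by_judge.items():
        m = statistics.mean(scores)
        sd = statistics.pstdev(scores) or 1.0  # guard: judge gave identical scores
        adjusted[judge] = [g_mean + g_std * (s - m) / sd for s in scores]
    return adjusted
```

After adjustment, a judge who scores systematically low and one who scores systematically high both have their per-judge mean mapped onto the pooled mean, removing the constant leniency/harshness offset that the abstract identifies as a source of unfairness.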
This research advances the state of the art by providing a comprehensive and scalable solution to the unique challenges of large-scale innovation competition judging.
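The entropy-based view of workload balance mentioned in the abstract can be illustrated with one plausible metric: the Shannon entropy of the distribution of submissions over experts, which is maximal exactly when every expert reviews the same number of works. The paper's actual objective combines this with expert-overlap terms inside a genetic algorithm; this sketch shows only the balance component, and the function name is an assumption.

```python
import math

def workload_entropy(assignment_counts):
    """Shannon entropy (bits) of the expert workload distribution.

    One plausible instance of an entropy-based balance metric: with n
    experts, entropy reaches its maximum log2(n) when all counts are
    equal, and drops as the assignment becomes more lopsided.  A genetic
    algorithm could use this (together with overlap terms) as part of
    its fitness function.
    """
    total = sum(assignment_counts)
    probs = [c / total for c in assignment_counts if c > 0]
    return -sum(p * math.log2(p) for p in probs)
```

For four experts, a perfectly balanced assignment scores log2(4) = 2 bits, while an assignment that dumps nearly all submissions on one expert scores close to zero, so maximizing this quantity pushes the optimizer toward balanced workloads.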