Sterling G. Baird, Ramsey Issa, Taylor D. Sparks
Materials Science & Engineering, University of Utah, 122 S. Central Campus Drive, #304, Salt Lake City, UT 84112-0056, United States.
Chemistry Department, University of Liverpool, Liverpool, L7 3NY, United Kingdom.
Data Brief. 2023 Aug 10;50:109487. doi: 10.1016/j.dib.2023.109487. eCollection 2023 Oct.
In scientific disciplines, benchmarks play a vital role in driving progress. For a benchmark to be effective, it must closely resemble real-world tasks: if its difficulty or relevance is inadequate, it can impede progress in the field. Benchmarks should also have low computational overhead to ensure accessibility and repeatability. The objective is to achieve a kind of "Turing test" by creating a surrogate model that is practically indistinguishable from ground-truth observations, at least within the dataset's explored boundaries. This objective requires a large quantity of data exhibiting the features characteristic of industrially relevant chemistry and materials science optimization tasks: high levels of noise, multiple fidelities, multiple objectives, linear constraints, non-linear correlations, and failure regions. We performed 494,498 random hard-sphere packing simulations, representing 206 CPU days of computational overhead. Each simulation required nine input parameters subject to linear constraints and two discrete fidelities, each with continuous fidelity parameters. The data were logged in a free-tier shared MongoDB Atlas database, producing two core tabular datasets: a failure probability dataset and a regression dataset. The failure probability dataset maps unique input parameter sets to the estimated probability that the simulation will fail. The regression dataset maps input parameter sets (including repeats) to particle packing fractions and computational runtimes for each of the two simulation steps. These two datasets were used to create a surrogate model that mimics running the actual simulations as closely as possible by incorporating simulation failure and heteroskedastic noise. In the regression dataset, percentile ranks were calculated within each group of identical parameter sets to account for heteroskedastic noise, preserving the empirical noise distribution. This differs from the conventional approach of imposing a priori assumptions, such as Gaussian noise specified by a mean and standard deviation. This technique can be extended to other benchmark datasets to bridge the gap between low-overhead optimization benchmarks and the complex optimization scenarios encountered in the real world.
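As a concrete illustration of the logging setup described above, the sketch below shows how a single simulation result might be written to a free-tier shared MongoDB Atlas cluster with pymongo. The connection string, database and collection names, and the document schema are illustrative assumptions, not the authors' actual deployment.

```python
from pymongo import MongoClient

# Placeholder connection string for a free-tier shared MongoDB Atlas
# cluster; user, password, and cluster host must be supplied.
client = MongoClient("mongodb+srv://<user>:<password>@<cluster>.mongodb.net")
collection = client["particle_packing"]["simulation_logs"]

# One logged run: constrained inputs, fidelity settings, and outputs.
# Keys and values here are illustrative, not the dataset's actual schema.
doc = {
    "params": {"mu_1": 1.0, "mu_2": 2.5, "mu_3": 5.0},  # subset of the nine inputs
    "fidelity": {"step": "packing", "level": 1.0},
    "packing_fraction": 0.61,
    "runtime_s": 42.3,
    "failed": False,
}
collection.insert_one(doc)
```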
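The failure probability dataset admits a minimal pandas sketch: group the logged runs by their input parameter set and take the mean of a binary failure indicator, which estimates the probability that a simulation with those inputs fails. Column names and values are hypothetical placeholders, not the actual nine-parameter schema.

```python
import pandas as pd

# Toy log of repeated simulation runs; "failed" is 1 if the run failed.
runs = pd.DataFrame(
    {
        "mu": [1.0, 1.0, 2.0, 2.0, 2.0],
        "std": [0.1, 0.1, 0.3, 0.3, 0.3],
        "failed": [0, 1, 0, 0, 1],
    }
)
param_cols = ["mu", "std"]

# Failure probability per unique input parameter set: the fraction of
# repeated runs with that parameter set that failed.
fail_prob = (
    runs.groupby(param_cols)["failed"]
    .mean()
    .rename("failure_probability")
    .reset_index()
)
print(fail_prob)
```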
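The percentile-rank treatment of heteroskedastic noise can be sketched just as briefly: within each group of identical parameter sets, every observed packing fraction receives its empirical percentile rank, so a surrogate can reproduce the observed (possibly non-Gaussian, heteroskedastic) noise by inverting the group's empirical distribution at a sampled rank rather than assuming a mean and standard deviation. Again, the column names below are illustrative.

```python
import pandas as pd

# Repeated observations of two parameter sets; values are illustrative.
runs = pd.DataFrame(
    {
        "mu": [1.0] * 4 + [2.0] * 4,
        "packing_fraction": [0.58, 0.61, 0.60, 0.55, 0.63, 0.64, 0.62, 0.66],
    }
)

# Empirical percentile rank of each observation within its group of
# identical inputs; no distributional form is imposed on the noise.
runs["pct_rank"] = runs.groupby("mu")["packing_fraction"].rank(pct=True)
print(runs)
```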