Weighted least-squares in calibration: what difference does it make?

Author information

Tellinghuisen Joel

Affiliations

Department of Chemistry, Vanderbilt University, Nashville, TN 37235, USA.

Publication information

Analyst. 2007 Jun;132(6):536-43. doi: 10.1039/b701696d. Epub 2007 Apr 18.

Abstract

In univariate calibration, an unknown concentration or amount x0 is estimated from its measured response y0 by comparison with a calibration data set obtained in the same way for known x values. The calibration function y = f(x) contains parameters obtained from a least-squares (LS) fit of the calibration data. Since minimum-variance estimation requires that the data be weighted inversely as their true variances, any other weighting leads to predictable losses of precision in the calibration parameters and in the estimation of x0. Incorrect weighting also invalidates the apparent standard errors returned by the LS calibration fit. Both effects are studied using Monte Carlo calculations. For the strongest commonly encountered heteroscedasticity, proportional error (σi ∝ yi), neglect of weights yields as much as an order-of-magnitude precision loss for x0 in the small-x region, but only nominal loss in the calibration mid-range. Use of replicates gives great improvement at small x but can underperform unweighted regression in the mid-to-large x region. Variance function estimation approximates minimum variance, even though the true variance functions are not well reproduced. A relative error test applied to the calibration data themselves is predisposed to favor 1/y² (or 1/x²) weighting, even if the data are homoscedastic. This predisposition weakens when replicate measurements are taken and disappears when the test is applied to an independent set of data. The distinction between a priori and a posteriori parameter standard errors is emphasized. Where feasible, the a priori approach permits reliable assignment of weights, application of a χ² test, and use of the normal distribution for confidence limits.
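
The sketch below illustrates the setup the abstract describes, under simplifying assumptions: a straight-line calibration y = a + b·x fitted by weighted least squares with weights wi = 1/σi², data simulated with proportional error (σi ∝ yi), and a small Monte Carlo loop comparing the spread of the estimated unknown x0 when weights are applied correctly versus neglected. The model form, noise level, standards, and every numerical value are illustrative choices, not the paper's data or code.

```python
import numpy as np

# Illustrative sketch (not the paper's code): straight-line calibration
# y = a + b*x fitted by weighted least squares with w_i = 1/sigma_i^2,
# under proportional error (sigma_i proportional to y_i).  A Monte Carlo
# loop compares the precision of the estimated unknown x0 with correct
# weighting vs. unweighted (ordinary) LS.  All numbers are hypothetical.

rng = np.random.default_rng(0)

x_cal = np.array([1.0, 2.0, 5.0, 10.0, 20.0, 50.0])  # known standards
a_true, b_true = 0.5, 2.0                             # assumed true line
rel_sigma = 0.05                                      # 5% proportional error
x0_true = 1.0                                         # unknown near the small-x end

def fit_line(x, y, w):
    """Weighted LS fit of y = a + b*x; returns the array [a, b]."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

def simulate_once():
    """One synthetic calibration set plus one unknown; return both x0 estimates."""
    sigma_cal = rel_sigma * (a_true + b_true * x_cal)
    y_cal = a_true + b_true * x_cal + sigma_cal * rng.standard_normal(x_cal.size)

    y0_true = a_true + b_true * x0_true
    y0 = y0_true + rel_sigma * y0_true * rng.standard_normal()

    a_w, b_w = fit_line(x_cal, y_cal, 1.0 / sigma_cal**2)   # correct weights
    a_u, b_u = fit_line(x_cal, y_cal, np.ones_like(x_cal))  # weights neglected

    # Estimate x0 by inverting the fitted calibration line at the measured y0.
    return (y0 - a_w) / b_w, (y0 - a_u) / b_u

results = np.array([simulate_once() for _ in range(5000)])
sd_weighted, sd_unweighted = results.std(axis=0)
print(f"sd(x0), weighted LS:   {sd_weighted:.4f}")
print(f"sd(x0), unweighted LS: {sd_unweighted:.4f}")
```

With these assumed settings, the standard deviation of x0 from the unweighted fit exceeds that from the properly weighted fit, which is the qualitative behavior the abstract reports for the small-x region; the exact ratio depends on the chosen design and noise level.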
