Azeem Muhammad, Salam Abdul, Albalawi Olayan, Hussain Sundus
Department of Statistics, University of Malakand, Khyber Pakhtunkhwa, Pakistan.
Department of Statistics, Faculty of Science, University of Tabuk, Tabuk, Saudi Arabia.
Heliyon. 2024 Aug 8;10(16):e35852. doi: 10.1016/j.heliyon.2024.e35852. eCollection 2024 Aug 30.
Randomized response scrambling techniques have existed for over fifty years. These methods are widely used in sample surveys that involve sensitive variables. Given the many available scrambling techniques, survey researchers need sound evaluation tools to choose the best technique for real-world surveys. The current literature offers only a limited number of model-evaluation metrics for analyzing the performance of different scrambling methods, leaving a substantial research gap for new unified evaluation measures that can quantify all aspects of a scrambling technique. We develop a novel unified metric for evaluating randomized response models and compare it with the existing unified measure. The proposed measure quantifies both the efficiency and the level of respondent privacy of any scrambling technique. Because it is less sensitive to sample size than the existing unified measure, the proposed measure can be used to evaluate models even with small samples.
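To make the idea of a scrambling technique concrete, the following is a minimal sketch of one classic approach, additive scrambling, where each respondent reports their true value plus noise drawn from a distribution with a known mean. This is an illustrative example only; the paper's proposed unified metric and the specific models it evaluates are not reproduced here, and all function names and parameter values below are hypothetical.

```python
import random
import statistics

def scramble_responses(true_values, noise_mean=10.0, noise_sd=2.0, seed=0):
    """Additive scrambling: each respondent reports Y = X + S,
    where S is noise from a distribution with known mean (hypothetical parameters)."""
    rng = random.Random(seed)
    return [x + rng.gauss(noise_mean, noise_sd) for x in true_values]

def estimate_mean(scrambled, noise_mean=10.0):
    """Since E[Y] = E[X] + E[S], an unbiased estimator of the
    sensitive mean is mean(Y) - E[S]."""
    return statistics.mean(scrambled) - noise_mean

# Hypothetical sensitive variable (e.g., values 1..100, true mean 50.5).
true_values = list(range(1, 101))
scrambled = scramble_responses(true_values)
estimate = estimate_mean(scrambled)
```

The trade-off the abstract alludes to is visible even in this sketch: larger noise variance gives respondents more privacy but inflates the variance of the estimator, which is why a unified measure of efficiency and privacy is useful for comparing techniques.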