
Evaluating the Reliability of the Human Factors Analysis and Classification System.

Author Information

Cohen Tara N, Wiegmann Douglas A, Shappell Scott A

Affiliations

Embry-Riddle Aeronautical University, Daytona Beach, FL, USA.

Publication Information

Aerosp Med Hum Perform. 2015 Aug;86(8):728-35. doi: 10.3357/AMHP.4218.2015.

Abstract

INTRODUCTION

This paper examines the reliability of the Human Factors Analysis and Classification System (HFACS) as a tool for coding human error and contributing factors associated with accidents and incidents.

METHODS

A systematic review of articles published over the 13-yr period between 2001 and 2014 identified 14 peer-reviewed manuscripts that reported data concerning the reliability of HFACS.

RESULTS

The majority of these papers reported acceptable levels of interrater and intrarater reliability.

CONCLUSION

Reliability levels were higher with increased training and sample sizes. Likewise, when deviations from the original framework were minimized, reliability levels increased. Future applications of the framework should consider these factors to ensure the reliability and utility of HFACS as an accident analysis and classification tool.
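
Note: the abstract does not name the agreement statistic used in the reviewed studies, but interrater reliability for nominal coding schemes such as HFACS is conventionally quantified with Cohen's kappa, which corrects raw percent agreement for chance. A minimal sketch in Python (the rater data below are hypothetical, purely for illustration):

```python
# Minimal sketch: Cohen's kappa for two raters assigning one category per case.
# The HFACS codes below are hypothetical examples, not data from the paper.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters over the same set of cases."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)
    # Observed agreement: proportion of cases where the two raters match.
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Expected chance agreement, from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    categories = set(rater_a) | set(rater_b)
    p_e = sum(freq_a[c] * freq_b[c] for c in categories) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical HFACS unsafe-act codes assigned by two analysts to ten cases.
a = ["skill", "skill", "decision", "perceptual", "violation",
     "skill", "decision", "decision", "skill", "violation"]
b = ["skill", "decision", "decision", "perceptual", "violation",
     "skill", "decision", "skill", "skill", "violation"]
print(f"kappa = {cohens_kappa(a, b):.2f}")  # 0.71 for these example data
```

Benchmarks vary by author, but kappa values above roughly 0.6 are commonly treated as substantial agreement, which is the kind of threshold a review of HFACS reliability would weigh.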
