Cohen Tara N, Wiegmann Douglas A, Shappell Scott A
Embry-Riddle Aeronautical University, Daytona Beach, FL, USA.
Aerosp Med Hum Perform. 2015 Aug;86(8):728-35. doi: 10.3357/AMHP.4218.2015.
This paper examines the reliability of the Human Factors Analysis and Classification System (HFACS) as a tool for coding human error and the contributing factors associated with accidents and incidents.
A systematic review of articles published across a 13-yr period between 2001 and 2014 revealed a total of 14 peer-reviewed manuscripts that reported data concerning the reliability of HFACS.
Results revealed that the majority of these papers reported acceptable levels of interrater and intrarater reliability.
Reliability levels were higher with increased training and sample sizes. Likewise, when deviations from the original framework were minimized, reliability levels increased. Future applications of the framework should consider these factors to ensure the reliability and utility of HFACS as an accident analysis and classification tool.
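The interrater reliability the review assesses is commonly quantified with Cohen's kappa, which corrects raw percent agreement for agreement expected by chance. As an illustrative sketch only (the rater codes below are hypothetical and not drawn from the reviewed studies), kappa for two analysts assigning HFACS-style categories can be computed as:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' nominal codes (e.g., HFACS categories)."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    # Observed agreement: proportion of items both raters coded identically
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: expected overlap given each rater's marginal frequencies
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    expected = sum(counts_a[c] * counts_b.get(c, 0) for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Hypothetical codes assigned by two analysts to ten causal factors
a = ["SkillError", "DecisionError", "SkillError", "Violation", "SkillError",
     "DecisionError", "Violation", "SkillError", "DecisionError", "SkillError"]
b = ["SkillError", "DecisionError", "DecisionError", "Violation", "SkillError",
     "DecisionError", "Violation", "SkillError", "SkillError", "SkillError"]
print(round(cohens_kappa(a, b), 3))  # 0.677: "substantial" agreement by common benchmarks
```

Here observed agreement is 0.80, but kappa discounts the 0.38 agreement expected by chance, which is why the literature the review summarizes reports kappa (or similar chance-corrected indices) rather than raw percent agreement.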