%0 Journal Article
%T Coding ATC Incident Data Using HFACS: Intercoder Consensus
%A Liang Wang
%A Yaohua Wang
%A Xiaoqiang Yang
%A Kai Cheng
%A Haishan Yang
%A Baoguo Zhu
%A Chengfei Fan
%A Xinwei Ji
%J Journal of Quality and Reliability Engineering
%D 2011
%R 10.1155/2011/379129
%X Reliability studies for coding contributing factors of incident reports in high-hazard industries are rarely conducted and reported. Although the Human Factors Analysis and Classification System (HFACS) appears to have more such studies completed than most other systems, doubt exists as to the accuracy and comparability of results between studies due to aspects of methodology and reporting. This paper reports on a trial conducted on HFACS to determine its reliability in the context of military air traffic control (ATC). Two groups participated in the trial: one group comprised specialists in the field of human factors, and the other comprised air traffic controllers. All participants were given standardized training via a self-paced workbook and then read 14 incident reports and coded the associated findings. The results show similarly low consensus for both groups of participants. Several reasons for the results are proposed, associated with the HFACS model, the context within which incident reporting occurs in real organizations, and the conduct of the studies.

1. Introduction

There are numerous techniques available for the classification of incident and accident contributing factors into codes that are fundamental to trend analyses and the mitigation of human error (e.g., TRACEr [1]; SECAS [2]; HFACS [3]). Many of these techniques are in the form of taxonomies, containing separate categories and codes from which coders select and then apply to incident contributing factors.
However, few taxonomies have been subject to independent reliability studies providing evidence that the classification system can deliver consistent coding over time and consensus amongst different coders. Such evidence is important because contributions to trend analysis are made via incident reports investigated by different safety investigators or analysts, often from different departments within the one organisation. The accuracy of the resultant analyses, which are key to the development of accident prevention measures, therefore depends on the ability of those contributors to achieve consensus on their classification decisions across all contributing factors highlighted in the reports [4]. Ross et al. [5] drew attention to the fact that reliability studies of taxonomies can be overlooked, use inappropriate methods, and be reported ambiguously. These faults are indeed prevalent in the relatively few reliability studies conducted on incident classification systems. For example, there is often inadequate reporting of the methodology in cited studies.

%U http://www.hindawi.com/journals/jqre/2011/379129/
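Intercoder consensus of the kind discussed above is commonly quantified as raw percent agreement or as a chance-corrected statistic such as Cohen's kappa. The abstract does not state which statistic this trial used or give any data, so the following is an illustrative sketch only, with hypothetical HFACS code assignments by two coders:

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of findings to which both coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement corrected for chance matches,
    where chance is estimated from each coder's marginal code frequencies."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    p_expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical codes assigned by two coders to six incident findings
a = ["skill_error", "decision_error", "skill_error",
     "violation", "skill_error", "perceptual_error"]
b = ["skill_error", "skill_error", "skill_error",
     "violation", "decision_error", "perceptual_error"]

print(percent_agreement(a, b))  # 4 of 6 findings match: 0.666...
print(cohens_kappa(a, b))       # 0.5
```

Kappa is lower than raw agreement because some matches would occur by chance alone; reliability studies typically report a chance-corrected statistic for exactly this reason.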