THEA DUNMIRE, JD, CIH, CSP, is the president of ENLAR Compliance Services, Inc., where she specializes in helping organizations implement management systems. She can be reached through her blog about management system standards.
One of the primary reasons for establishing a management system is to improve organizational performance. A frequently asked question within organizations that have implemented a management system is "Has our performance improved?" If performance has improved, the management system is perceived as a success. If it has not, the management system is judged a failure. Answering this question requires three things:

1. a common understanding of what improved performance means
2. metrics (that is, performance indicators) appropriate for measuring performance
3. data of sufficient quality for determining whether the selected metrics have been achieved

This article focuses on the third requirement for evaluating performance: the collection and analysis of monitoring and measurement data, and, in particular, the issues that must be taken into account to avoid the phenomenon commonly referred to as GIGO: garbage in, garbage out.

DEFINING INTENDED OUTCOMES

Throughout the initial stages of developing the ISO 45001 standard, several discussions focused on answering the question, "What are the intended outcomes of an OHS management system?" Although this sounds like a simple question, the answer has important implications. One of the requirements of ISO 45001 is that top management of the organization "ensure that the OHS management system achieves its intended outcomes" (as set out in 5.1, the leadership and commitment section of the standard). Therefore, one needs to know what those intended outcomes are.

The intended outcomes identified in ISO 45001 for an OHS management system are twofold: the prevention of injury and ill health to workers, and the provision of a safe and healthy workplace. The first intended outcome is reactive: the prevention of harm. The second is proactive: providing a safe place of work.
The control measures put in place in an OHS management system should focus on achieving one or both of these intended outcomes.

SELECTING APPROPRIATE METRICS

Whenever metrics are selected, it is important to understand the linkages needed between those metrics and the outcomes to be achieved. A prospective metric is not necessarily appropriate for use simply because it seems right or is easy to measure. Some metrics are easy to measure but have no relevance to achieving improved outcomes. Others may seem right at first but have serious unintended consequences when put into use: they may actually degrade performance by encouraging counterproductive behavior or by diverting resources away from activities that matter more. For more about selecting appropriate management system metrics, see my previous article, "Selecting Management System Metrics," in the November 2014 Synergist.

PREVENTING GIGO

The selection of appropriate metrics (that is, performance indicators), in turn, defines what data is needed. Understanding how the data is intended to be used is also important for determining the quality of data required. Two criteria are important in defining data needs: relevance and accuracy.

For example, a common monitoring activity associated with an OHS management system is using instrumentation to measure noise levels. The instrumentation available for noise monitoring ranges from cell phone applications to noise dosimeters to sophisticated multi-range sound level meters. Each type of instrumentation has appropriate uses depending on the nature of the decision to be made with the data collected. A cell phone application may be helpful as a screening tool for identifying areas that need further analysis, but the data obtained is likely insufficient for developing a noise abatement solution.
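The screening step described above, in which rough readings simply flag areas for fuller analysis, can be sketched as follows. This is a minimal illustration, not a compliance method: the function and area names are mine, and the 5-dBA margin below OSHA's 85-dBA hearing-conservation action level is an assumed allowance for the imprecision of phone-app readings.

```python
ACTION_LEVEL_DBA = 85.0      # OSHA hearing-conservation action level (8-hour TWA)
SCREENING_MARGIN_DBA = 5.0   # assumed margin for imprecise phone-app readings

def areas_needing_followup(screening_readings):
    """Return area names whose spot readings warrant full dosimetry.

    screening_readings: dict mapping area name -> highest spot reading (dBA).
    A margin is subtracted from the action level because screening data
    is only a rough indicator, not compliance-grade measurement.
    """
    threshold = ACTION_LEVEL_DBA - SCREENING_MARGIN_DBA
    return sorted(area for area, dba in screening_readings.items() if dba >= threshold)

readings = {"press room": 91.2, "warehouse": 78.5, "packaging": 83.0}
followup = areas_needing_followup(readings)   # ['packaging', 'press room']
```

The point of the sketch is the division of labor: the screen only decides where the more expensive, more accurate instrumentation should be deployed next.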
Similarly, a multi-range sound level meter may provide data useful for designing a noise abatement solution but may be inappropriate for determining whether an employee's exposure exceeds regulatory permissible levels on a time-weighted-average basis.

Collecting and analyzing data as part of a monitoring activity is often expensive and time-consuming, so it is important to be clear about what data is needed and how it is intended to be used. Once those decisions are made, the focus shifts to the methods used to collect and analyze the data. When it comes to data collection and analysis, three concepts are important to consider: validation, verification, and calibration. Although these terms are related and often used together, they are not the same.

Validation relates to the design of a process. In the context of monitoring and measuring, validation focuses on making sure the methods used to collect the data are appropriate. In other words, will the method used generate the intended results? For example, in employee noise exposure monitoring in the United States, validation asks whether the instrumentation and methodology used will generate sampling results appropriate for comparison to the OSHA permissible exposure limits. The focus is on the relevance of the data obtained.

Verification focuses on whether a process is done right (that specified requirements have been fulfilled). In monitoring and measuring processes, verification focuses on whether the data collection (sampling) was completed as set out in the selected sampling method. The focus is on ensuring the accuracy of the data obtained.

Calibration focuses on the accuracy of a particular measuring device, usually by comparing a reading obtained from the device against a known result. An example is comparing the reading on a sound level meter with the sound generated by an acoustical calibrator.
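The time-weighted-average comparison mentioned above can be sketched using the noise-dose formulas from OSHA 29 CFR 1910.95 Appendix A (a 5-dB exchange rate and a 90-dBA criterion level); the function names and the example shift are illustrative, not part of the regulation.

```python
import math

def reference_duration(level_dba):
    """OSHA reference duration T (hours) for a sound level: T = 8 / 2^((L-90)/5)."""
    return 8.0 / (2 ** ((level_dba - 90.0) / 5.0))

def noise_dose(exposures):
    """Percent dose D = 100 * sum(C/T) over (level_dBA, hours) intervals."""
    return 100.0 * sum(hours / reference_duration(level) for level, hours in exposures)

def twa(dose_percent):
    """Equivalent 8-hour TWA: TWA = 16.61 * log10(D/100) + 90."""
    return 16.61 * math.log10(dose_percent / 100.0) + 90.0

# Example shift: 4 hours at 92 dBA plus 4 hours at 88 dBA
d = noise_dose([(92.0, 4.0), (88.0, 4.0)])
exceeds_pel = twa(d) > 90.0   # OSHA PEL is a 90 dBA 8-hour TWA
```

Note what validation means here: the formulas assume the inputs are representative full-shift measurements gathered with appropriate instrumentation, which is exactly what a phone-app screening reading cannot supply.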
Calibration of measuring devices is often an important component of verifying that sampling has been conducted correctly.

These concepts of validation, verification, and calibration need to be considered whenever measuring and monitoring activities are conducted. This applies both to traditional industrial hygiene monitoring (for noise, dust, or toxic gas exposure, for example) and to other assessment processes such as inspections, employee surveys, and internal audits. Data does not have to be perfect, but it does need to be "good enough." If data is going to be used to assess OHS performance, precautions must be taken to ensure it is appropriate for use in our decision making. This is particularly the case in a world inundated with data. To quote the author Nate Silver:
Information is no longer a scarce commodity; we have more of it than we know what to do with. But relatively little of it is useful. We perceive it selectively, subjectively, and without much self-regard for the distortions that this causes. We think we want information when we really want knowledge. (from The Signal and the Noise: Why So Many Predictions Fail – but Some Don't)
If we do not make sure the measuring and monitoring data we collect and use is relevant and accurate, we run the risk of falling victim to GIGO—garbage in, garbage out—in our decision-making.
Evaluating OHS Performance
The Need for Relevant and Reliable Data
BY THEA DUNMIRE