How Accurate Are Industrial Hygienists' Qualitative Exposure Assessments?
Judgment Day
From the Archives
Editor's note: "From the Archives" is a special section of the digital Synergist that brings previously published articles to our current audience. This article originally appeared in the January 2014 issue. The text below has not been updated.
We live in an age that offers instant access to a wide range of information. It’s tempting to believe that more information means less uncertainty, leading to more accurate decisions. However, some experts say that too much information—especially when the information is complex—overwhelms our ability to distinguish between relevant and irrelevant cues. The sheer number of inputs exceeds our capacity to process them, resulting in inconsistent and inaccurate decisions.
This observation has important consequences for industrial hygienists (IHs). For example, in situations where quantitative exposure monitoring data is unavailable, do IHs use the information available to them in the most efficient and effective manner to produce accurate qualitative exposure judgments?
Unfortunately, a growing body of evidence indicates that the answer is no. In fact, qualitative exposure assessments based on subjective professional judgments are inaccurate more often than not. Equally troubling, they are more likely to underestimate exposure—yes, underestimate. If we are to live up to our mantra of protecting worker health and safety, we need to significantly improve the accuracy of our qualitative judgments.
HOW EXPOSURE JUDGMENTS ARE MADE
AIHA’s publication A Strategy for Assessing and Managing Occupational Exposures describes a simple yet elegant framework for exposure assessments, which directs follow-up activities where they are most needed. According to the AIHA strategy, judgments are made by identifying the exposure control category (ECC) in which the 95th percentile of the exposure distribution—that is, the value below which 95 percent of exposures fall—is most likely located for a given job or task. The acceptability of the exposures is commonly evaluated by comparing the 95th percentile to the occupational exposure limit (OEL) and classifying the exposure in one of four ECCs: “highly controlled” (ECC 1), “well controlled” (ECC 2), “controlled” (ECC 3), and “poorly controlled” (ECC 4).
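As a minimal sketch, the four-band comparison of the 95th percentile to the OEL can be coded directly. The 10, 50, and 100 percent cut points are the conventional band edges implied by the ECC 2 example later in the article; they are an assumption here, not a quotation of the AIHA strategy:

```python
def exposure_control_category(x95: float, oel: float) -> int:
    """Classify an estimated 95th percentile exposure (x95) against the OEL.

    Band edges at 10%, 50%, and 100% of the OEL are assumed, consistent
    with the ECC 2 band (0.10 * OEL to 0.50 * OEL) used in this article.
    """
    ratio = x95 / oel
    if ratio <= 0.10:
        return 1  # highly controlled
    elif ratio <= 0.50:
        return 2  # well controlled
    elif ratio <= 1.00:
        return 3  # controlled
    else:
        return 4  # poorly controlled
```

For instance, a 95th percentile of 59 ppm against a 400 ppm OEL falls in ECC 2 ("well controlled").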
IHs can document a separate judgment for each similar exposure group (SEG). An SEG can represent either a single task that may be short in duration or a group of tasks that comprise a full-shift exposure. IHs perform qualitative and quantitative exposure assessments following review of worker interviews and any available information related to their jobs, including materials, exposure agents, exposure limits, work practices, engineering controls, and protective equipment.
In most cases, initial exposure assessments are qualitative—that is, they are performed without the benefit of personal sampling data. Follow-up activities, if any, are determined by the outcome of the preliminary assessment. When sampling data are limited, IHs use a combination of professional judgment, personal experience with a given operation, and review of exposures from similar operations to assess the acceptability of exposures for managing engineering controls, medical surveillance, hazard communication, and personal protective equipment programs. Aids to decision making, such as rules, guidelines, and models, are also sometimes used.
The AIHA strategy supports a comprehensive approach that calls for assessments of all exposures to all chemical and physical agents for all workers. In other words, the strategy implies that the thousands to tens of thousands of exposure scenarios that routinely occur each year in some workplaces will all be assessed, eventually. The strategy’s iterative design focuses quantitative resources, limited as they are, on the exposures most likely to need further investigation. But there is a caveat: the approach assumes that the initial, qualitative judgments are accurate.
And studies of exposure judgment accuracy indicate that these initial, qualitative judgments are not very accurate. According to the literature, initial judgments have a mean accuracy of approximately 30 percent; in some cases, they are no more accurate than judgments based on random chance.
The sheer volume of information IHs must process likely contributes to this poor result: confronted with too much information to process internally, IHs forget relevant details, get distracted by irrelevant information, and take mental shortcuts. These factors lead to mistakes in judgment and biased decisions that underestimate exposure. The industrial hygiene profession is not alone in this dilemma; doctors, for example, face the same problem. Everyone needs to find better methods for making decisions in the face of uncertainty.
THE QUALITATIVE JUDGMENT WORKSHOP
During a two-day workshop we offered at AIHce 2013 in Montreal, we investigated the accuracy of qualitative judgments and explored the impact of some alternate approaches. Workshop participants made initial exposure judgments for a broad range of scenarios involving inhalation, dermal, and noise exposures. Next, they received training on several objective approaches to qualitative exposure assessment, after which they updated their initial judgments. In addition, participants provided information about their education and professional experience, which allowed us to identify important factors contributing to accurate decision making.
On the first day of the workshop, participants were randomly assigned two scenarios. Two-page narratives distributed for each scenario contained details about the job or task, the workplace environment, the chemical agent, and the relevant OEL. In most cases, pictures or process diagrams were provided. Participants were asked to read through their scenarios and make an initial judgment using whatever approaches they would typically use in their own workplaces.
We used personal sampling data—not shared in the narratives—and commercially available IH software to determine the reference ECC (the category to which the 95th percentile of the exposure distribution most likely belongs, given the data); the software produced a decision chart showing the reference ECC. Participants recorded their judgments in a spreadsheet tool designed for the workshop, distributing probabilities across the four ECCs according to their confidence that the scenario’s 95th percentile belonged to each category. The four probabilities had to sum to 100 percent, and the category assigned the highest probability represented the participant’s judgment.
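To illustrate how a reference ECC can be derived from sampling data, the sketch below fits a lognormal distribution to a hypothetical set of measurements and estimates the 95th percentile. The simple plug-in estimate, exp(mean + 1.645 × sd) of the log-transformed data, stands in for the Bayesian decision-chart methods that commercial IH software typically uses; the sample values and OEL are invented:

```python
import math
import statistics

def lognormal_x95(samples):
    """Plug-in estimate of the 95th percentile, assuming lognormal exposures.

    x95 = exp(mean(ln x) + 1.645 * sd(ln x)), where 1.645 is the z-score
    for the 95th percentile of a normal distribution.
    """
    logs = [math.log(x) for x in samples]
    return math.exp(statistics.mean(logs) + 1.645 * statistics.stdev(logs))

# Hypothetical full-shift measurements (ppm) for one SEG, n = 6.
samples_ppm = [12, 20, 31, 18, 25, 40]
oel_ppm = 400

x95 = lognormal_x95(samples_ppm)   # roughly 46 ppm for these data
fraction = x95 / oel_ppm           # roughly 0.11, i.e. the ECC 2 band
```

The comparison of the estimated 95th percentile to the OEL then places the SEG in one of the four ECCs.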
We are currently analyzing the data and will present our results at AIHce 2014. Clearly, though, the data support the notion that subjective professional judgments are often inaccurate and underestimate exposures. In our opinion, IHs should strive, at least initially, to achieve 80 percent accuracy; this goal can be adjusted as qualitative exposure judgments improve. But given that the accuracy of judgments currently hovers around 30 percent, our profession has a long way to go.
IDEAS FOR IMPROVEMENT
Fortunately, several approaches are available to help IHs improve their judgments. One approach is to consider more objective inputs. Algorithms, for example, are objective tools that can help produce more accurate judgments.
Algorithms require minimal, relevant, and consistent inputs. They limit decision criteria to details that are germane to the decision, each and every time, leading to relevant, consistent outputs and more accurate decision making. Algorithms shift the focus from the subjective to the objective, from a problem-centric approach in which each scenario is considered a novel situation to a process-based approach that reduces the likelihood that IHs will consider irrelevant details or forget about critical inputs.
In The Checklist Manifesto, author Atul Gawande makes a compelling argument for using algorithms in the form of checklists. Gawande discusses case studies in which the implementation of checklists resulted in positive outcomes across a range of disciplines, including medicine, public health, and the airline industry. Algorithms, especially when formatted as checklists, can help us meet our quality goal.
Simple algorithms are parsimonious, wisdom-rich tools that capture fundamental scientific principles and make use of patterns. For example, the “Rule of 10” applies patterns in airborne concentration reduction associated with specific levels of engineering control. It accounts for the likely level of reduction in the airborne concentration of a chemical relative to the saturated vapor concentration, given a specific type of ventilation. With increasing capture efficiency, the maximum expected airborne concentration is reduced by orders of magnitude. To apply the Rule of 10, the user needs just four pieces of information: the vapor pressure of the pure chemical, the observed or reported level of control (ObsLC) used in the workplace, the predicted reduction in the airborne concentration associated with that level of control from the Rule of 10 matrix (ReqLC), and the OEL.
Consider a scenario involving isopropanol, in a workplace that has good general ventilation. With a vapor pressure of 45.4 mm Hg at 25° C, the predicted saturated vapor concentration is 59,736 ppm. From the Rule of 10 matrix provided in A Strategy for Assessing and Managing Occupational Exposures (see Table 1), the likely reduction in airborne concentration is determined, given the observed or reported level of control. In this example, the predicted reduction in airborne concentration is 1,000x, suggesting an upper bound concentration of 59 ppm. Comparing this level to the OSHA PEL of 400 ppm, the predicted 95th percentile of the exposure distribution belongs to ECC 2 (“well controlled”):
0.10 × OEL < C0.95 ≤ 0.50 × OEL (59 ppm / 400 ppm = 0.15, or 15 percent of the OEL)
Is this prediction correct? To answer this question, compare the predicted ECC to the “reference” ECC, which is derived from the personal exposure monitoring data (n ≥ 6). In this case, the reference ECC is 2, so the ECC based on the Rule of 10 is correct.
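The isopropanol worked example above can be checked with a few lines of arithmetic. Standard atmospheric pressure (760 mm Hg) is assumed for the saturated vapor concentration; the 1,000x reduction factor is the matrix value cited in the example for good general ventilation:

```python
# Rule of 10 check for the isopropanol example.
vapor_pressure_mmhg = 45.4
svc_ppm = vapor_pressure_mmhg / 760 * 1_000_000  # saturated vapor concentration
reduction_factor = 1_000                         # from the Rule of 10 matrix
upper_bound_ppm = svc_ppm / reduction_factor     # roughly 59.7 ppm
oel_ppm = 400                                    # OSHA PEL for isopropanol
fraction_of_oel = upper_bound_ppm / oel_ppm      # roughly 0.15 -> ECC 2 band
```

The upper-bound concentration sits between 10 and 50 percent of the OEL, which places the predicted 95th percentile in ECC 2.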
Table 1. Rule of 10 Matrix
INCORPORATING OBJECTIVITY
The Rule of 10 algorithm and other rules and guidelines are presented in the third edition of the Strategy. These tools are efficient and effective, and can be applied by the IH in the field in just a few minutes. Other tools discussed in the book help decision making across a range of situations and conditions, and consistently produce more accurate judgments than those based on subjective professional judgment.
Similarly, exposure models are algorithms based on fundamental physical principles. Their complexity scales with that of the situations they are designed to simulate. For example, the well-mixed room (WMR) model, when applied to simulate an environment under steady-state conditions, requires only two inputs: the generation rate (G) and the ventilation rate (Q). This algorithm is applicable to many scenarios for which there is no point source or directional airflow. To account for directional airflow, turbulent eddy diffusion models are available; they require additional inputs but accommodate more complex scenarios.
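A minimal sketch of the steady-state well-mixed room calculation, C = G/Q, follows. The mg/m^3-to-ppm conversion uses the standard 24.45 L/mol molar volume at 25 °C; the generation rate, ventilation rate, and isopropanol molecular weight in the usage example are hypothetical inputs chosen for illustration:

```python
def wmr_steady_state_ppm(g_mg_per_min, q_m3_per_min, mw_g_per_mol):
    """Steady-state well-mixed room concentration, C = G / Q, in ppm.

    Converts mg/m^3 to ppm using the 24.45 L/mol molar volume at 25 deg C.
    """
    c_mg_m3 = g_mg_per_min / q_m3_per_min
    return c_mg_m3 * 24.45 / mw_g_per_mol

# Hypothetical inputs: 100 mg/min generation rate, 10 m^3/min ventilation,
# isopropanol (molecular weight 60.1 g/mol).
c_ppm = wmr_steady_state_ppm(100, 10, 60.1)  # roughly 4.1 ppm
```

Even this two-input model imposes the kind of consistent, objective structure on the assessment that subjective judgment lacks.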
These models provide a framework for conducting qualitative exposure assessment that is process-centric, objective, and more accurate than subjective professional judgment. IHs should use these tools more, and use subjective professional judgment less.
DR. GURUMURTHY RAMACHANDRAN, CIH, PHD, is a professor in the Division of Environmental Health Sciences in the School of Public Health at the University of Minnesota, Minneapolis.
SUSAN ARNOLD, CIH, is Principal at EH&S, LLC in Roswell, Ga., and a past chair of AIHA’s Exposure Assessment Strategies Committee (2008) and Modeling Subcommittee (1999–2003).
Send feedback to The Synergist.
“Desktop Study of Occupational Exposure Judgments: Do Education and Experience Influence Accuracy?” JOEH, 746-758 (2011).
“Effect of Training, Education, Professional Experience and Need for Cognition on Exposure Assessment Decision Making.” Ann. Occup. Hyg., 56 No. 3, 292-394 (2012).
“Effect of Training on Exposure Judgment Accuracy of Industrial Hygienists.” JOEH, 9:4, 242-256 (2012).
“Occupational Exposure Decisions: Can Limited Data Interpretation Training Help Improve Accuracy?” Ann. Occup. Hyg., 1-14 (2009).