RISK COMMUNICATION
Warning, or False Alarm?
Why Safety Professionals See Near Misses Differently than Everybody Else
BY PETER M. SANDMAN
PETER M. SANDMAN is a risk communication consultant and speaker. Much of his work on risk communication can be found on his web site, www.psandman.com. Comments on this and future columns can be sent to peter@psandman.com.

Every industrial hygienist knows that near misses teach important safety lessons. An organization that harvests those lessons is more likely to avoid actual accidents. A near miss is a warning that deserves to be heeded.

But not everybody sees it that way. In fact, safety professionals tend to see near misses differently than everybody else. In this column I want to explore the differences, and their implications for safety communication.

HINDSIGHT BIAS AND THE GAMBLER’S FALLACY
Part of the problem is what psychologists call “hindsight bias.” After every serious accident, experts, officials, and journalists look for precursors—near misses whose lessons weren’t learned. They almost always find some.
But other near misses that weren’t followed by accidents don’t get catalogued. Precursor events that didn’t look bad beforehand look bad in hindsight. Did the tank that blew up yesterday have more near misses in prior years than the tanks that didn’t? If not, were those near misses really meaningful warnings that this particular tank might blow up? Did tanks in general have more near misses than other aspects of your operation? If not, why should you have known to focus on tanks?
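A rough simulation makes the point concrete. In the sketch below (a hypothetical illustration: the tank count, near-miss rate, and randomly assigned accident are all invented), near misses are common and the accident strikes a tank at random, independent of its history; yet whichever tank blows up will almost certainly have “precursors” on file, because nearly every tank does.

```python
import random

# Hypothetical illustration: near misses are common, and the accident is
# assigned at random, independent of any tank's near-miss record.
random.seed(7)
N_TANKS = 200
CHANCES, P_NEAR_MISS = 12, 0.5   # say, one near-miss opportunity per month
near_misses = [sum(random.random() < P_NEAR_MISS for _ in range(CHANCES))
               for _ in range(N_TANKS)]
failed = random.randrange(N_TANKS)            # the tank that blows up

share_with_precursors = sum(nm > 0 for nm in near_misses) / N_TANKS
print("Near misses logged by the failed tank:", near_misses[failed])
print(f"Share of tanks with at least one near miss: {share_with_precursors:.0%}")
# The failed tank's record comes from the same distribution as every other
# tank's, so its "warnings" only stand out after the explosion.
```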
After something goes wrong, most people, including safety professionals, interpret prior near misses as warnings that should have been heeded. Sometimes that judgment is wise; sometimes it is hindsight bias at work. But if nothing has gone wrong, the rest of us may very well see these near misses as false alarms, while safety professionals tend to see them as warnings.
Figuring out whether near misses are actually warnings or false alarms is surprisingly difficult, both before an accident and after one.
Hindsight bias isn’t the only fallacy affecting how people interpret near misses. Another is the so-called “gambler’s fallacy.” There are actually two gambler’s fallacies, which distort judgment in opposite directions. 
One—the one to which safety professionals are prone—says a series of near misses means you’re on borrowed time, “overdue” for an accident. At the roulette wheel, this takes the form of “Number 27 hasn’t come up all night. It’s due!”
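A quick simulation shows why the “overdue” intuition fails: the wheel has no memory, so even a long drought leaves the next spin’s odds untouched. (This sketch assumes a single-zero wheel; the trial counts are arbitrary.)

```python
import random

# Single-zero roulette: pockets 0-36, so P(27) = 1/37 on every spin,
# no matter how long 27 has failed to appear.
random.seed(1)
POCKETS, DROUGHT, TRIALS = 37, 37, 200_000

hits = droughts = 0
for _ in range(TRIALS):
    history = [random.randrange(POCKETS) for _ in range(DROUGHT)]
    if 27 not in history:                        # "27 hasn't come up all night"
        droughts += 1
        hits += random.randrange(POCKETS) == 27  # ...and the very next spin

print("P(27 next | long drought) ~", round(hits / droughts, 4))
print("Unconditional P(27)       =", round(1 / POCKETS, 4))
# Both come out near 0.027: a drought doesn't make 27 "due."
```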

The opposite gambler’s fallacy says a series of near misses means you’re bound to keep having near misses, rather than accidents. At the roulette table: “Number 27 hasn’t come up all night, so it’s not worth betting on. Number 19 is hot!”

LEARNED OVERCONFIDENCE
Absolutely no one in your current workforce has ever been the victim of a fatal accident. That may sound like a foolish point to make, but it has real psychological implications. Everybody’s life experience, until the day we die, tells us that we are immortal.
Similarly, if we’ve never had a serious accident—and most people haven’t—then our experience with safety precautions tells us that the precautions are unnecessary. We’re snapping our fingers to keep away the elephants, but there aren’t any elephants.
Until something bad happens, in short, near misses look to most people like false alarms. Safety professionals are nearly alone in routinely seeing near misses as warnings even beforehand, not just afterwards.

NEAR MISS PERCEPTION
The research on near miss perception is unequivocal: people are much more likely to find near misses reassuring than alarming.
The leading researchers in this area—Robin Dillon-Merrill, Catherine H. Tinsley, and Matthew A. Cronin—distinguish “resilient near misses” (you did a good job of preventing the accident) from “vulnerable near misses” (dumb luck prevented the accident). These aren’t so much kinds of near misses as they are ways of seeing the near miss: does your information about it stress resilience or vulnerability?
Resilient near misses, their research shows, substantially reduce people’s precaution-taking. Vulnerable near misses don’t reduce it, but they don’t increase it either: people told about a vulnerable near miss are about as likely to take precautions as people given no near-miss information at all.
If you’re trying to use a near miss as a lesson to convince people to take more precautions, the odds are against you. At the very least, you need to stress the vulnerability aspect of the near miss, and downplay its resilience aspect. 
NEAR MISS STATISTICS
Why do safety professionals see near misses differently than everybody else? I think I know the answer, though I haven’t found a study that addresses my hypothesis.

Consider two scenarios.

In Scenario One, you know (or at least you have an intuitive rough idea) what percentage of the time a particular dangerous behavior ends in a near miss and what percentage of the time it ends in an accident. The ratio of near misses to accidents resulting from that behavior is a constant you have in mind. And that ratio is a pretty small number.
It follows that an increase in the number of near misses is a warning: the more near misses, the higher the probability of an accident. “We almost exploded the tank!”

It also follows that reducing the number of near misses meaningfully reduces the probability of an accident (since the ratio between the two is a known, fairly low constant). So it’s worth striving to prevent near misses, even though near misses don’t do any actual harm.

That’s the way safety professionals think. They know the behavior is dangerous, so the near miss is a powerful reminder of the danger.
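The Scenario One arithmetic can be sketched with a made-up ratio. Suppose, purely for illustration, that roughly one near miss in fifty would have become an accident but for the last layer of defense; then the near-miss count maps directly onto accident risk.

```python
# Hypothetical Scenario One: the near-miss-to-accident ratio is treated as a
# known constant (50:1 here is an invented figure, not a real benchmark).
P_ACCIDENT_PER_NEAR_MISS = 1 / 50

def chance_of_at_least_one_accident(near_misses: int) -> float:
    """Probability of one or more accidents, treating near misses as independent."""
    return 1 - (1 - P_ACCIDENT_PER_NEAR_MISS) ** near_misses

for n in (5, 20, 60):
    print(f"{n:2d} near misses -> roughly "
          f"{chance_of_at_least_one_accident(n):.0%} chance of an accident")
```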
In Scenario Two, on the other hand, you have little or no prior knowledge about how often the behavior leads to a near miss and how often it leads to a real accident. The ratio is unknown. Every near miss you experience without an accident is data about that ratio.
If you’re trying to figure out how high the accident risk is, it’s natural and rational to see each near miss as another piece of evidence that the behavior in question rarely leads to an accident. “We’ve ‘almost exploded the tank’ hundreds of times and yet the tank has never exploded. Defense-in-depth must be working. Those ‘near misses’ aren’t so near after all!”
And if that’s true, then reducing the number of near misses by modifying the behavior is a low-priority task.
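That reasoning can be sketched as simple Bayesian updating (the framing and the uniform prior are my illustrative assumptions, not the column’s): start with no idea how often the behavior ends in an accident, then watch near miss after near miss go by without one.

```python
# Hypothetical Scenario Two: a uniform Beta(1, 1) prior on the per-event
# accident probability, updated after n accident-free near misses.
def estimated_accident_risk(accident_free_near_misses: int) -> float:
    """Posterior mean accident probability given n near misses and 0 accidents."""
    alpha, beta = 1, 1 + accident_free_near_misses
    return alpha / (alpha + beta)

for n in (0, 10, 100, 500):
    print(f"after {n:3d} accident-free near misses, estimated risk ~ "
          f"{estimated_accident_risk(n):.3f}")
```

Each uneventful near miss nudges the estimated risk downward, which is exactly why the precautions start to look unnecessary.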
That’s the way most people think. They don’t know whether the behavior is dangerous, so the near miss is evidence that it’s not.

THE BOTTOM LINE
When a safety professional tells employees about a near miss, the professional is probably talking about Scenario One. But the workforce may be hearing Scenario Two.
What is intended by the communicator as a warning can easily be experienced as reassuring by the audience.
The bottom line: if you want people to see near misses as warnings, you need to do at least two things:
  1. Convince them that the ratio of near misses to accidents resulting from the behavior in question is a known, low constant (assuming it’s true). In other words, prove that the behavior in question results in actual accidents often enough to justify seeing the near miss as a warning.
  2. Emphasize the ways the near miss demonstrates how vulnerable your audience is—how close we came, how lucky we were. And de-emphasize the ways it demonstrates resilience—how well we coped, how skillful we were.
I’m not trying to take anything away from the analysis of near misses as a crucial tool of safety management (though I do think we need to pay more attention to the problems of hindsight bias and the gambler’s fallacy). Learning from near misses is a lot less costly than learning from disasters.
But as a tool of safety communication, near misses are all too likely to backfire.
What Kind of Near-miss Was Ebola?
When I wrote this article in mid-October 2014, Americans were still getting used to the new and scary risk of Ebola. Ebola fears led to a number of airline passengers being yanked off planes because they exhibited flu-like symptoms and had some connection, however remote, to Africa. So far they’ve all tested negative for Ebola. If that remains true, the number of such disruptions will soon decline precipitously.
Are these events warnings that we should continue to take seriously, “casting a wide net” to reduce the odds of missing an actual Ebola case onboard? Or are they false alarms that we should learn to stop worrying about? Most experts, officials, and journalists say they’re false alarms. But that answer will change in hindsight if a traveler from West Africa ever infects some fellow passengers with Ebola.
Ebola also offers an object lesson in learned overconfidence. The discovery that two nurses were infected with the virus while treating an Ebola sufferer at a Dallas hospital raised many questions. Did the nurses breach PPE protocols? Were the protocols insufficiently protective in the first place? Is it realistic to expect healthcare workers to be 100 percent meticulous in following such protocols? 
One relevant fact: every nurse has considerable experience with breaches of infection control protocols that didn’t end in infection. And all too often the lesson learned isn’t that “We need to be more meticulous.” It is that “Infection control is pretty forgiving. Even when we mess up, it doesn’t usually do any harm.” Then along comes a much less forgiving pathogen, Ebola, and learned overconfidence becomes life-threatening.
Peter Sandman