I mentioned to my better half recently that I’d checked out a video lecture on allllaconference.com about risk in emergency medicine: ‘Risk – How to assess and measure it!’. Her response was along the lines of, “What’s new? That’s what you’re always going on about.” It’s true I am fascinated by the concept of risk, and decision-making in environments that are time-pressured and information-limited. Nevertheless, Dr. David Schriger raised more than a few points in his talk that even the most ‘risk averse’ person would find interesting, some of which I’ll discuss below.
Risk in emergency medicine is a slippery concept that can be difficult to grasp. So, let’s start with a definition. Croskerry et al (2009) define risk in emergency medicine as:
“the probability of danger, loss or injury within the health system.”
“Within the health system”? What does that mean? Isn’t there some kind of discrepancy here? The risk that we probably all agree we should be worrying about is the risk to the patient. However, a lot of emergency medicine, perhaps more so in the United States, seems preoccupied with the risk to the doctor or the hospital if something bad happens. Surely, this needs to change.
I agree with Schriger when he suggests that a core problem with modern emergency medicine is that it has evolved from its ‘bread-and-butter’, the emergent treatment of life-threatening conditions, to the assessment of basically well people who ‘could’ have a condition that ‘could’ be life-threatening at ‘some point’ in the future. It is the latter problem that devours most of our psychic energy and much of our time. For us to function as emergency doctors, our focus has to be on what can go wrong in the immediate short term. Interestingly, the recently retired Dr. Greg Henry has even suggested that to survive in the specialty, emergency doctors need to realize that their job is not the management of emergencies, but the provision of health care to anyone at any time.
As emergency doctors we should consider the probability of events when making decisions. Saying ‘likely’ or ‘possibly’ is not good enough. We need to commit to actual probabilities. Bryant and Norman (1980) have shown that doctors vary widely on what likelihood terms like ‘probable’, ‘likely’ and ‘unlikely’ actually mean. An actual probability gives you something concrete to work with when making a decision.
So, how good are emergency doctors at estimating the probability of events?
It is hard to say.
There have been studies, like that of Chandra et al (2009), suggesting that emergency doctors are remarkably good at predicting which chest pain patients will have adverse events related to acute coronary syndrome (ACS). However, this may not be the case for all doctors, in all settings, and for all diagnoses. In Schriger’s talk he uses statistical software to analyse the conference attendees’ answers about the likelihood of ACS, and of a bad outcome, for a chest pain scenario. The likelihoods of ACS given by the doctors ranged across 6 orders of magnitude! Clearly, there may be substantial biases in this ‘stage show demonstration’, but the result is still shocking.

Just as interesting is that doctors’ likelihood estimates vary depending on how they are expressed. Likelihoods tend to be lower when expressed as ratios like 1 in x (e.g. 1 in 10,000) than when expressed as percentages (e.g. 1%). Then, when asked to determine the likelihood of a bad outcome for the patient in the scenario, most doctors gave a figure similar to their estimated likelihood of ACS. The likelihood of a bad outcome should be much lower, as it is the likelihood of ACS being the diagnosis multiplied by the likelihood of a bad outcome from ACS. (Even this is simplistic: the real likelihood of a bad outcome requires the likelihood of each of the differential diagnoses, multiplied by the likelihood of a bad outcome from each, all added together… an impossible task in the real world.)

The simple conclusion is that doctors do not think probabilistically. Most of our decisions are based on normative behaviour, by doing what everyone else does in a given situation. How often do we honestly make decisions based on the answers to these questions (which apply to diagnosis and misdiagnosis, as well as to investigations and the consequences of over-investigation and ‘false positives’)?:
What can go wrong?
How often will it go wrong?
What can be done to prevent it from going wrong?
If it goes wrong, how bad will it be?
If it goes wrong, what can we do about it?
If it goes wrong, when will it go wrong?
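The probability arithmetic described above can be sketched in a few lines of Python. The numbers below are invented purely for illustration (they are not from Schriger’s talk or any study): the point is that the likelihood of a bad outcome is the sum, over every differential diagnosis, of P(diagnosis) × P(bad outcome given that diagnosis), and that this is typically far lower than the likelihood of ACS alone.

```python
# A sketch of the probability arithmetic, with made-up illustrative numbers.
# P(bad outcome) is NOT P(ACS): it is P(ACS) * P(bad outcome | ACS), summed
# with the same product for every other differential diagnosis.
# Note also that '1 in 10,000' is just 0.01% -- the conversion is trivial,
# even if our intuitions shift with the framing.

differentials = {
    # diagnosis: (P(diagnosis), P(bad outcome | diagnosis)) -- hypothetical
    "ACS":                (0.05, 0.10),
    "pulmonary embolism": (0.02, 0.05),
    "musculoskeletal":    (0.93, 0.001),
}

p_bad_outcome = sum(p_dx * p_bad for p_dx, p_bad in differentials.values())
p_acs = differentials["ACS"][0]

print(f"P(ACS)         = {p_acs:.3f}")          # 0.050
print(f"P(bad outcome) = {p_bad_outcome:.4f}")  # 0.0069, much lower than P(ACS)
```

In this toy example the chance of a bad outcome is roughly seven times lower than the chance of ACS, which is exactly the distinction the conference audience failed to make.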
Now, assume we have actually calculated an appropriate risk. Should it even be the doctor who makes the decisions based on this risk? Doctors and patients appear to have widely differing degrees of risk aversion, and it can vary from situation to situation. One thing is for sure: the decision to admit or to start a treatment should be based on the risk to the patient, not the risk of the doctor being sued if something bad happens.
So, let us allow the patient to decide. The risk aversion of patients depends on how the risk is framed, in other words, how it is put to them. They may be happy to go home with a 50% chance of MI, until they are shown a box that is half coloured in – now 50% looks way too high. They may say that if they have a stroke they wouldn’t want to have life-prolonging treatment, but when it happens they may want us to delay death as long as possible. This is the nature of the beast. The human beast.
What is the way forward?
Schriger has a number of suggestions. He calls on us to move beyond the Hippocratic fixation on the individual and adopt a societal perspective when considering risk and how to act on it. We should ask, “what does society expect at a population level?” While the individual may want 100% certainty that something will not happen to them, this is impossible, and society would agree. Because of this we need to define acceptable miss rates and accept that, no matter how good we are as doctors, we will always miss diagnoses – sometimes it is beyond our control – and needless over-investigation may not change this while creating other problems. Mistakes are inevitable, and we need to completely eradicate the culture of fear to improve patient safety. We have to remember that we don’t have to try to save the world in a single ED visit, and we need the public to understand this. In most cases the ED visit is not the patient’s last chance of diagnosis or treatment – with optimal follow up and discharge advice, they can come back or see another doctor if things get worse. Finally, doctors and the health system should not be rewarded for doing more than necessary – remember the Fat Man’s 13th law of the House of God?
I’d also add that as doctors we need to keep educating ourselves about probabilities, risk and how we make decisions. If you’re a doctor, reading this is a start. Checking out Schriger’s talk (it’s free to access), then Scott Weingart’s reaction, and the references below are good next steps.
- Bryant GD, Norman GR. Expressions of probability: words and numbers. N Engl J Med. 1980 Feb 14;302(7):411. PMID: 7351941
- Chandra A, et al. Emergency physician high pretest probability for acute coronary syndrome correlates with adverse cardiovascular outcomes. Acad Emerg Med. 2009;16(8):740-8. PMID: 19673712
- Croskerry P, Cosby KS, Schenkel SM, Wears R. Patient Safety in Emergency Medicine. (2009) Lippincott, Williams & Wilkins.
- Weingart S, Wyer P. Emergency Medicine Decision Making – Critical Choices in Chaotic Environments. (2006) McGraw-Hill.