
Dealing with uncertainty

(This is a lengthy post - a revised section of The Book, which attempts to explain a number of issues around clinical testing, and which became very relevant in the response to the Covid pandemic)


On a medical internet forum, a GP recently asked: ‘If you could impart one pearl of wisdom to a GP trainee or medical student, something that isn't in the medical books and you have discovered along the way, what would it be?’ One of the first replies, from an experienced GP, was ‘you have to be able to live with uncertainty’. That’s true for all doctors, not just GPs, and in this chapter I want to look at how uncertainty also influences the therapeutic efforts of doctors, and how we use diagnostic tests to reduce that uncertainty.


Of course, not every medical decision is dogged by uncertainty – a broken leg is a broken leg ‒ it needs to be immobilised in a good position and then allowed to heal; there are various means of achieving this, depending on the exact site and type of fracture, but the underlying principle is clear. However, most of medicine just isn’t like that; we deal in shades of grey rather than black and white. I realise this may be scary for those of you who want to believe in your doctor's omniscience, but as we'll see later, good medical practice is based on the reduction of that uncertainty to a point where effective treatment can be employed. Much of what follows deals with the way in which we reduce uncertainty and arrive at a diagnosis, but I need to start by looking at how patients view the transaction with their doctor.


the patient’s view of the diagnostic and treatment process

The first time you go to see your doctor with a problem, you go with a number of assumptions. The first of these, as we’ve seen above, is that he’ll immediately know what’s wrong with you. Sometimes he will. If you are a 55-year-old overweight bus driver who develops central chest pain radiating to your left arm every time you heave yourself off the sofa during the adverts to go and get another pizza out of the freezer, and you have a grossly abnormal ECG recording and a family history which includes the death from coronary artery disease of all your first-degree male relatives before the age of 60, then the doctor will be pretty confident in diagnosing coronary artery disease. He’ll get you straight off to the cardiologist, and although the specialist may do a few more sophisticated tests, there won’t be much doubt about the eventual diagnosis. Equally, if you have a sore throat, a high temperature and tonsils that look like over-ripe strawberries coated in pus, your doctor will be happy that your problem is tonsillitis. But most patients don’t present with textbook signs and symptoms, so the doctor has to draw on his knowledge and experience to produce a short-list of possibilities.


This is not as simple as it sounds. As an example, let's take headache. This is one of the signs of a brain tumour or haemorrhage, but headaches are very common, and the vast majority are not due to serious disease. So how seriously should the doctor take this particular symptom? Well, symptoms don’t occur in isolation, and the significance of headaches depends on a number of other issues. If the patient is presenting for the first time with a crippling headache that came on suddenly, their pupils are unequal in size and there is some weakness down one side of the body (signs of something that shouldn’t be there taking up space inside the head) the doctor will call an ambulance and get the patient to a neurosurgical centre as quickly as possible. If, on the other hand, the patient has a mild headache that comes and goes, and the doctor can find nothing wrong on examination, the chances of serious disease are small. In other words, the doctor is playing a percentage game. He knows that every so often a patient with ‘just’ a headache and no other sign of illness will turn out to have a brain tumour, but if he sent everyone with headache to the neurologist, not only would he worry lots of patients unnecessarily, he would also swamp the neurological services, delaying the investigation and treatment of patients with genuinely serious symptoms. The laudable target to ensure that all patients with symptoms that could be due to cancer are seen within two weeks can have the same unintended adverse effect.
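To put rough numbers on that percentage game, here is a back-of-envelope sketch in Python. The prevalence figure is invented purely for illustration (any real referral policy would rest on actual prevalence data), but the arithmetic shows why ‘refer everyone’ doesn’t work:

```python
# Back-of-envelope arithmetic for a 'refer everyone with a headache' policy.
# The prevalence figure below is invented, for illustration only.
headache_patients = 10_000   # patients seeing their GP with a mild headache
serious_rate = 1 / 10_000    # hypothetical: 1 in 10,000 has a serious cause

serious_cases = round(headache_patients * serious_rate)
well_but_referred = headache_patients - serious_cases

print(f"Referring all {headache_patients} patients finds {serious_cases} serious case,")
print(f"while {well_but_referred} well patients are worried and investigated.")
```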


This uncomfortable fact ‒ that doctors play a percentage game as they negotiate their way through a miasma of diagnostic uncertainty ‒ is the key to an understanding of how diagnosis and treatment work. What’s more, it is also central to the misunderstandings which often occur when things go wrong (or appear to). The following flowchart gives a simplified and slightly cynical description of the way in which many patients regard the diagnostic process.

[Flowchart: a simplified view of how many patients imagine the diagnostic process, beginning ‘see doctor ‒ doctor does tests’]

The first assumption in that flowchart is the immediate ‘see doctor‒doctor does tests’ step. Of course, that often won’t be the case. The only test he may need to do is to look at your throat and feel the enlarged, throbbing glands in your neck to diagnose tonsillitis. But often he will need to send you for tests, because clinical investigations, whether they be X-rays, scans, electrocardiograms, blood tests or examination of any other bodily fluid, are simply ways of reducing uncertainty about the cause of your symptoms, and homing in on the correct diagnosis.


The second assumption about tests is that the result will either be positive or negative ‒ you either have got the disease or you haven’t. And there are some tests where it’s almost like that ‒ for example, an X-ray of your ankle if you twist it badly and the doctor thinks it might be broken. Either there is a break in the bone on the X-ray image, or there isn’t. You could say the same about a blood test for HIV ‒ either you’re HIV-positive, or you’re HIV negative. But even with those apparently simple yes/no examples, it’s not as straightforward as that, and there are many other tests where it’s difficult even to decide what we mean by positive and negative. Why should that be? Before answering that question, we need to look at the possible outcomes of a clinical test.


  • True positive: the test is positive, and you have got the disease.

  • True negative: the test is negative, and you haven't got the disease.

  • False positive: the test is positive, but you haven’t got the disease.

  • False negative: the test is negative, but you have got the disease.


Now the first two of these are good, because they reflect the true state of affairs. The last two are bad outcomes, because they are misleading: your disease may go untreated if the result is a false negative, or you may undergo unnecessary further investigations and treatment for a disease you don’t actually have if it’s a false positive. The proportions of true and false negatives and positives generated by a particular test indicate how ‘good’ it is as a test. The terms used to do this are sensitivity and specificity:


  • sensitivity indicates how effectively it detects the disease. A sensitivity of 90% means that if you ran the test on 100 people who have the disease, the test would come up positive in 90 of them. In other words, there would be a 10% false negative rate.

  • specificity, on the other hand, tells you how good it is at excluding disease. A specificity of 90% means that if you test 100 people who don’t have the disease, it will be negative in 90 of them. In this case, then, there will be a 10% false positive rate.
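For the numerically minded, here is a minimal sketch of that arithmetic in Python. The counts are invented simply to match the 90%/10% figures above:

```python
# Sensitivity and specificity from the four possible test outcomes.
# The counts are invented to match the 90%/10% example above.
true_positives = 90    # test positive, disease present
false_negatives = 10   # test negative, disease present
true_negatives = 90    # test negative, disease absent
false_positives = 10   # test positive, disease absent

# Of everyone who HAS the disease, what fraction does the test catch?
sensitivity = true_positives / (true_positives + false_negatives)

# Of everyone who does NOT have the disease, what fraction is correctly cleared?
specificity = true_negatives / (true_negatives + false_positives)

print(f"sensitivity = {sensitivity:.0%}")   # 90% (10% false negatives)
print(f"specificity = {specificity:.0%}")   # 90% (10% false positives)
```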


Obviously, we would all like to be using tests that are 100% sensitive and 100% specific, but they don’t exist. What’s more, in clinical testing, as in life, you never get something for nothing, and as sensitivity increases, specificity decreases, and vice versa. So, the more sensitive a test is (i.e. the better it is at detecting the disease when it is present) the less specific it will be, and so it throws up more false positive results. One of the few good things about the Covid pandemic was that it introduced the concept of false positive and negative tests to the population at large, notably in relation to the performance of the lateral flow rapid tests that we all became accustomed to using. This becomes especially important when we look at the use of clinical tests for screening purposes (i.e. to detect disease in asymptomatic patients, enabling us to treat it early), and I’ll look at that in more detail elsewhere, because it’s another area where there is often misunderstanding of what clinical testing can and can’t (or should and shouldn’t) do.


But first we need to look at the assumption that there is a simple distinction to be made between positive and negative results; between fracture and no fracture, or between virus present and virus absent. There are lots of situations where that is not the case. Let’s think of a simple blood test for a particular substance; cholesterol, for example. We know that too much cholesterol in the blood is a bad thing, and if you plot a graph of cholesterol levels in the population you get something like this:

[Graph: a bell-curve distribution of cholesterol levels in the population, with a low threshold marked in orange and a high threshold in blue]

The cholesterol level is plotted along the horizontal axis, and the number of people with a given level up the vertical axis. This is known as a ‘normal distribution’, sometimes called a ‘bell curve’. You would find a similar result if you looked at a lot of other characteristics of the population – height, for example. There’s a median value in the middle, with a spread of lower and higher values, affecting progressively smaller proportions of the population, to the left and right of it. At some point, the cholesterol level becomes too high to be healthy ‒ but at what point? Where we decide to put that point will have a large influence on the performance of the test:


Set it low (the orange line) and we’ll be fairly sure of picking up all those people with a dangerously high level of cholesterol (i.e. the test will have a high sensitivity), but we’ll also label as abnormal a lot of people whose levels aren’t that high, and who would never have run into any trouble because of their cholesterol metabolism (i.e. there will be a lot of false positives, and the specificity of the test will be low).


Set it high (the blue line) and we’ll cut out all the false positives, and everyone we pick up will have a potentially dangerous level of cholesterol, but we’ll miss a lot of people at risk by setting the threshold so high (i.e. the sensitivity will be reduced, and there will be lots of false negative results). This is not the place to get into discussions of what level we should choose, and in any case, I’m not qualified to comment, but the important point is that the definition of ‘positive’ or ‘negative’ often involves an element of informed choice.
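If you want to see that trade-off in action, the little simulation below slides a threshold between two invented, overlapping bell curves (‘healthy’ and ‘at risk’). None of the numbers are real clinical values; they are made up to show the shape of the problem:

```python
# The threshold trade-off, simulated with two invented, overlapping
# normal distributions. None of these numbers are real clinical values.
import random

random.seed(42)
healthy = [random.gauss(5.0, 0.8) for _ in range(100_000)]   # hypothetical levels
at_risk = [random.gauss(7.0, 0.8) for _ in range(100_000)]

for threshold in (5.5, 6.0, 6.5, 7.0):   # where we draw the 'positive' line
    sensitivity = sum(x > threshold for x in at_risk) / len(at_risk)    # at-risk caught
    specificity = sum(x <= threshold for x in healthy) / len(healthy)   # healthy cleared
    print(f"threshold {threshold}: sensitivity {sensitivity:.0%}, "
          f"specificity {specificity:.0%}")
```

Raising the threshold (moving towards the blue line) trades false positives for false negatives; lowering it towards the orange line does the opposite. You cannot improve one without paying for it in the other.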


Another example of this difficulty comes from my own specialty – clinical imaging. Take the apparently straightforward example of a chest X-ray (CXR). If you are a 50-year-old smoker who suddenly starts coughing up blood, your GP will send you for a CXR, because he will be concerned you might have lung cancer. You will expect the result to be either positive (cancer present) or negative (all clear ‒ carry on smoking). But as with the cholesterol situation, it's not that simple, and the radiologist reporting the X-ray has to use his or her judgement. One of the most difficult things I had to learn when training as a radiologist was to recognise the range of appearances that are ‘normal’. Like faces, no two CXRs look just the same ‒ there is normal anatomical variation to cope with, and then on top of that, all those little blemishes that are markers of minor problems in the past and which are no longer relevant. Patients speak darkly of having a ‘shadow’ on their lung, but X-rays are pictures formed from the shadows cast by a beam of radiation passing through the body, and there are lots of normal shadows that we learn to ignore. If I see a 3 cm shadow with nasty irregular borders infiltrating out into the normal lung, I know it’s almost certainly a cancer and take the appropriate measures. If I see a 2 mm nodule with calcium (chalk) in it, and it hasn’t changed since the previous X-ray five years ago, I’m happy to accept it as a marker of an old infection, and ignore it. But cancers have to start somewhere, and that nasty infiltrating 3 cm tumour was, at some time in the past, a tiny, barely-perceptible shadow that might easily have been ignored. So, as with the cholesterol level, at what point do I start reporting ‘shadows’ as suspicious of cancer, in the process scaring the patient and setting off a chain of further investigations, and possibly even an invasive biopsy? The answer is that every radiologist has their own internal threshold for deciding that something is amiss. That threshold will be influenced by their training, experience and personal capacity for coping with uncertainty.


An understanding of the principles of clinical testing outlined above is especially important when we consider the use of tests in patients with no symptoms or signs of disease: this is what we mean by screening, an often emotive topic, and I say more about it here.
