Not so long ago, the following headline appeared on the front page of The Times: ‘Health experts say mothers should be paid to breastfeed’. It was reporting a study which found that a group of new mothers offered gift vouchers as a reward for continuing breastfeeding kept their babies on the breast for longer than a control group who were given no financial inducement. This, predictably, made it on to the national radio and TV news.
There were so many things wrong with the research, and with the reporting of it, that it’s hard to know where to begin, but it does give a nice illustration of the pitfalls which exist to trap unwary readers when the mainstream media (MSM) report on science (or, as in this case, on pseudoscience).
First, as a statement of the bleeding obvious, the finding that financial inducements can affect behaviour in this way is up there with the religious leanings of popes, and the defaecatory habits of bears. Then there’s that banner headline. We don’t know which ‘experts’ it was referring to, but the only ones quoted by name in the piece both stated that the research was not sufficiently robust to have any effect on practice, and certainly didn’t support the diversion of scarce NHS resources to provide bribes to new mums. Those dissenting opinions, as is usually the case, didn’t appear until the final paragraph, and so the many readers who never got past the first couple of sentences were left with the inaccurate take-home message provided by a lazy sub-editor’s headline.
That’s my gripe about the reporting out of the way, but much more interesting are the design and results of the research project itself. As in so much of this sort of ‘soft’ research, questionnaires were used to determine the breastfeeding status of the participants. I think that most of us know how reliable questionnaire-derived information is, because we fill in so many of them ourselves. We may not actually lie (well, not always), but we do tend to put the best complexion we can on our answers. For example, when we’re asked about our drinking habits, how many of us keep a diary for a couple of weeks and religiously calculate the number of units? No, neither do I, and when I do tot up my recent intake, I tend to choose a ‘good’ week, when it was a bit less than usual.
So – data from questionnaires is always going to be potentially flaky. But if you then offer cash (or vouchers) to the study group, any tendency to give the answers the researchers clearly want will be exaggerated. In this particular case, the respondents are likely to over-report the frequency of breastfeeding, and their answers will be about as reliable as my assertion that I drink 12 units a week.
And it’s not only the data that’s a bit iffy. The first paragraph stated that the voucher system ‘improved breastfeeding rates by about 20%’. I think that most readers, unless they were used to critically analysing research results, will have thought that a 20% increase was a pretty good result, because they will have assumed that it meant that breastfeeding rates had risen from (say) 40% to 60%. But of course, it doesn’t mean that. It means there was a rise of 20% compared with the breastfeeding rate in the control group. The control group figure was 32%, and it rose to 38% in the experimental group, that change of 6 percentage points representing roughly 20% of 32.
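To make the distinction concrete, here is a minimal sketch (in Python, using the figures quoted above) of the difference between the relative change that made the headline and the absolute change that actually happened:

```python
# Breastfeeding rates from the report, as percentages of each group
control_rate = 32.0   # % still breastfeeding without vouchers
voucher_rate = 38.0   # % still breastfeeding with vouchers

# Absolute change: 6 percentage points
absolute_change = voucher_rate - control_rate

# Relative change: the headline figure, ~19%, reported as "about 20%"
relative_change = absolute_change / control_rate * 100

print(f"Absolute change: {absolute_change:.0f} percentage points")
print(f"Relative change: {relative_change:.0f}% of the control-group rate")
```

Same data, two very different-sounding numbers.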
The media always use the percentage increase or decrease in their reports because it’s easy, and it also tends to exaggerate the effect being reported. They do this whether they are reporting a benefit of treatment, as in this case, or the supposedly harmful effects of some environmental factor. You need to see the underlying numbers in order to assess the importance, or otherwise, of the effect – always look for the absolute figures rather than a percentage change, and if they aren’t included in the report, be very suspicious.
Take the example of a new drug to protect against heart attacks. If you were told that taking the drug every day would reduce your chance of a heart attack by 30%, you might be tempted to say ‘OK, give me a prescription’. If you were told that if 70 people took the drug for five years, one heart attack would be prevented, but two other people would suffer attacks despite taking the tablets, you might have second thoughts about embarking on lifelong treatment and risking harmful side effects from the drug. It’s the same drug, just different ways of presenting the information. This example is for a low coronary risk population taking a statin, and the reason that the percentage reduction figure is so misleading is that the pre-existing chance of any of the drug-takers experiencing a heart attack is small, so that a 30% reduction actually makes little difference to the numbers of people who suffer a cardiac event.
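For anyone who likes to see the arithmetic, here is a rough sketch of how the two descriptions of the same drug relate. The 30% relative reduction comes from the example above; the baseline risk is my own illustrative assumption, chosen so that the ‘number needed to treat’ comes out at about 70:

```python
# Illustrative figures only: a low-coronary-risk population taking a statin for five years
baseline_risk = 0.0476          # assumed 5-year chance of a heart attack without the drug (~4.8%)
relative_risk_reduction = 0.30  # the headline "30% reduction"

treated_risk = baseline_risk * (1 - relative_risk_reduction)
absolute_risk_reduction = baseline_risk - treated_risk
number_needed_to_treat = 1 / absolute_risk_reduction

print(f"5-year risk without the drug: {baseline_risk:.1%}")
print(f"5-year risk with the drug:    {treated_risk:.1%}")
print(f"Absolute risk reduction:      {absolute_risk_reduction:.1%}")
print(f"People treated for 5 years to prevent one heart attack: {number_needed_to_treat:.0f}")
```

In a group of 70 people that works out at roughly three heart attacks without the drug and two with it, which is the picture painted by the second description.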
It’s the same with reports of the health effects of living near a mobile phone mast (or electricity pylons, or a nuclear power station etc etc). The alleged health issue is usually an increase in cancer, especially the more emotive childhood cancers. You will always be told that the adverse effect increases the risk of childhood leukaemia, for example, by, say, 25%. That sounds dreadful, but childhood leukaemia is rare, with an incidence of the order of 1 in 30,000. Detecting a 25% increase in such a small figure, and being certain that it is a genuine change and not just random variation, or due to some other environmental factor, is next to impossible. So, that scary 25% headline figure translates into an effect that is too small to measure accurately, even if it exists.
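Again as a rough sketch: the incidence figure is the one quoted above, and the population size is my own illustrative assumption, purely to show the scale of the effect:

```python
# Illustrative only: what a 25% relative increase looks like against a rare baseline
baseline_incidence = 1 / 30_000   # childhood leukaemia, roughly 1 in 30,000
relative_increase = 0.25          # the scary "25% increase" headline
population = 100_000              # assumed number of children living near the masts/pylons

expected_cases_baseline = baseline_incidence * population
expected_cases_exposed = baseline_incidence * (1 + relative_increase) * population

print(f"Expected cases without the alleged effect: {expected_cases_baseline:.1f}")
print(f"Expected cases with a 25% increase:        {expected_cases_exposed:.1f}")
print(f"Extra cases attributable to the exposure:  {expected_cases_exposed - expected_cases_baseline:.1f}")
```

Less than one extra case among 100,000 children is exactly the sort of signal that disappears into random variation.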
So the bottom line, as it relates to The Times report, is that if 100 new mothers were given a substantial bribe, at the end of the trial period 38 would still be breastfeeding and 62 would have resorted to bottle-feeding. Without the cash payment, 32 would still be breastfeeding and 68 would have given up. And that’s only if you trust the results of a self-reporting questionnaire survey.
Which to me doesn’t sound like an efficient way to use scarce NHS resources.