For more than three decades, the Wessely School has searched for empirical support for its psychosomatic approach to CFS. That search has been in vain. I show here, here and here that the theoretical assumptions of the Wessely approach lack support and have collapsed.
The drive to show that CBT and GET are effective treatments, most notably in the PACE trial, has been a key part of that failure. An independent review by NICE suggests that GET is unsafe and that the evidence for CBT is weak; quite likely, any benefit is a placebo effect.
Faced with this mountain of invalidation, the Wessely School has made the most basic error any scientist can make: converting an inconclusive association into a conclusion of causation.
Correlation does not equal causation
Everybody knows it. It is drummed into people’s heads from the very beginning.
Yet, as a journal editor I have discovered that it is the most basic and common error by authors in psychology and healthcare, no matter how experienced the investigator. This common mistake can throw a mantle of doubt over a publication or even an entire research programme.
The fundamental distinction between correlation and causation is legendary. It is taught in first-year medical and psychology classes all over the world. Yet the distinction can elude even the most seasoned researchers in Psychology, Psychiatry and kindred fields.
An introduction to the topic for 14-16 year old students studying Health is here.
A video for Biology GCSE students about the topic is available below.
An often cited example of the correlation=causation mistake concerns the polio epidemics in the US and Europe during the 1940s and 50s in the pre-vaccination period. Polio was crippling thousands of people, mostly children (and still is in some parts of the world). Polio epidemics occurred during summer and autumn. People eat more ice cream during summer and autumn. So for a while, children were warned not to eat ice cream or they would get polio.
Correlation is an association between two variables. In the late 1940s the polio rate (Y) and ice cream sales (X) could show a close correlation but eating ice creams did not cause polio:
X (ice cream eating) -/> Y (polio rate increase)
Causation is a cause-and-effect relationship between two variables. In 1949 in the US, hot weather (Z) led more people to use public swimming pools and more people to eat ice cream (X). Hotter weather caused both: swimming in non-chlorinated pools more frequently raised the polio rate (Y), and the heat also drove ice cream sales:
Z (hotter weather) -> X (ice cream eating)
and
Z (hotter weather) -> Y (polio rate increase)
The same kind of erroneous logic occurs every day in science, even among some of the most experienced researchers, who are often strongly influenced by confirmation bias.
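The confounding pattern above can be sketched in a small simulation. The numbers here are purely illustrative, not real epidemiological data: a hidden variable Z (temperature) drives both X (ice cream sales) and Y (polio rate), producing a strong raw correlation between X and Y even though neither causes the other. Removing Z's influence makes the correlation vanish.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

# Z: hypothetical daily temperature (the confounder)
z = rng.normal(25, 5, n)

# X: ice cream sales, driven by temperature plus noise (no link to polio)
x = 2.0 * z + rng.normal(0, 3, n)

# Y: polio rate, also driven by temperature plus noise (no link to ice cream)
y = 1.5 * z + rng.normal(0, 3, n)

# Raw correlation between X and Y is strong, despite no causal link
r_xy = np.corrcoef(x, y)[0, 1]

# Partial out the confounder: correlate the residuals of X and Y
# after regressing each on Z
x_res = x - np.polyval(np.polyfit(z, x, 1), z)
y_res = y - np.polyval(np.polyfit(z, y, 1), z)
r_partial = np.corrcoef(x_res, y_res)[0, 1]

print(f"raw correlation r(X, Y):        {r_xy:.2f}")      # large
print(f"partial correlation r(X, Y|Z):  {r_partial:.2f}")  # near zero
```

Once the common cause Z is controlled for, the apparent X-Y relationship disappears, which is exactly why a correlation on its own licenses no causal conclusion.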
Adamson, Ali, Santhouse, Wessely & Chalder (2020)
In October 2020, Adamson, Ali, Santhouse, Wessely and Chalder published a study in the Journal of the Royal Society of Medicine that purported to demonstrate that CBT ‘led to’ significant improvements in CFS patients. The authors had reached the conclusion that CBT caused improvements but the evidence warranted no such thing. What exactly did the authors do in their study?
Objectives
The authors’ aim was to examine the effectiveness of CBT for CFS in a naturalistic setting and examine what factors, if any, predicted outcome. Note that ME is not mentioned because patients with ME were not included in the study. Nor should they have been because CBT could not possibly have helped them.
Design
They analysed patients’ self-reported ‘symptomology’ over the course of treatment and at three-month follow-up. They also explored baseline factors associated with improvement at follow-up.
Setting and Participants
Data were available for 995 patients receiving CBT for CFS at an outpatient, specialist clinic in the UK.
Main outcome measures
Patients were assessed throughout their treatment using self-report measures including the Chalder Fatigue Scale, 36-item Short Form Health Survey, Hospital Anxiety and Depression Scale and Global Improvement and Satisfaction. Note, these are all self-reported, subjective outcome measures.
Results
“Patients’ fatigue, physical functioning and social adjustment scores significantly improved over the duration of treatment with medium to large effect sizes (|d| = 0.45–0.91). Furthermore, 85% of patients self-reported that they felt an improvement in their fatigue at follow-up and 90% were satisfied with their treatment. None of the regression models convincingly predicted improvement in outcomes with the best model being (R2 = 0.137).”
Conclusion
As stated in the Abstract, the Conclusion implies, but does not categorically state, a causal role for the CBT intervention. However, in the main body of the article the authors draw a conclusion that treats the CBT treatment as causal in a manner that is unwarranted. They make the fundamental correlation-equals-causation error.
Enter stage left:
Brian Hughes and David Tuller (2021)
In a well-argued paper, Brian Hughes and David Tuller (2021) demonstrate that Adamson et al.’s (2020) conclusions are “misplaced and unwarranted.” They had submitted their critique to the Journal of the Royal Society of Medicine but the Editor did not accept it. Hughes and Tuller made a preprint available online and submitted it to the Journal of Health Psychology where it was reviewed and accepted and will shortly appear online. Here I quote from the Abstract:
“[Adamson et al.] interpret their data as revealing significant improvements following cognitive behavioural therapy in a large sample of patients with chronic fatigue syndrome and chronic fatigue. Overall, the research is hampered by several fundamental methodological limitations that are not acknowledged sufficiently, or at all, by the authors. These include: (a) sampling ambiguity; (b) weak measurement; (c) survivor bias; (d) missing data; and (e) lack of a control group. In particular, the study is critically hampered by sample attrition, rendering the presentation of statements in the Abstract misleading with regard to points of fact, and, in our view, urgently requiring a formal published correction. In light of the fact that the paper was approved by multiple peer-reviewers and editors, we reflect on what its publication can teach us about the nature of contemporary scientific publication practices.”
A Few Details
In their paper, Tuller and Hughes point out that the Adamson et al. study and paper:
“are both problematic in several critical respects. For example, the Abstract – the section of the paper most likely to be read by clinicians – contains a crucial error in the way the data are described, and requires urgent correction.” They point out that a conspicuous controversy is overlooked. Adamson et al. write that the intervention is “based on a model which assumes that certain triggers such as a virus and/or stress trigger symptoms of fatigue. Subsequently symptoms are perpetuated inadvertently by unhelpful cognitive and behavioural responses” (p. 396). Treatment involves, among other elements, “addressing unhelpful beliefs which may be interfering with helpful changes” (p. 396).
The theory of unhelpful beliefs was laid out in a 1989 paper by the Wessely team that included two of the Adamson et al. paper’s authors (Wessely and Chalder). Recent posts here, here, and here show that the theory lacks any scientific support, leaving it broken.
This fact was brushed under the carpet and simply not mentioned in the Adamson et al. paper.
Tuller and Hughes report that Adamson et al. are similarly selective in their discussion of the literature on CBT. After scrutiny of 172 CBT outcomes, the redrafted NICE guidance makes it perfectly clear that all of the research is of either “low” or “very low” quality. According to NICE, not one claim for CBT efficacy was supported by any evidence exceeding the “low quality” threshold.
To quote Hughes and Tuller, the research reviewed by Adamson et al.:
“is hampered by several fundamental methodological limitations that are not acknowledged sufficiently, or at all, by the authors. These include: (a) sampling ambiguity; (b) weak measurement; (c) survivor bias; (d) missing data; and (e) lack of a control group. Given these issues, in our view, the findings reported by Adamson et al. are unreliable because they are very seriously inflated.”
I consider here the last point only for its relevance to cause and effect.
Lack of a control group
Causality can never be established without a control group or a control condition. Adamson et al. did not include a control group and so their data cannot possibly support an inference about causality.
Period.
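The point can be made concrete with a minimal simulation of regression to the mean, using made-up numbers. Patients tend to seek treatment when symptoms are at a peak; with no treatment effect at all, follow-up scores drift back towards each patient's typical level, and an uncontrolled pre-post comparison manufactures a medium-to-large effect size from nothing. A control group would show the same "improvement" and expose the artefact.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500

# Hypothetical long-run fatigue severity per patient (higher = worse);
# all numbers are illustrative only.
true_severity = rng.normal(25, 4, n)

# Patients enrol while symptoms are flaring above their usual level...
baseline = true_severity + rng.normal(3, 3, n)

# ...and are measured again at follow-up, with ZERO treatment effect:
# the flare has simply passed.
followup = true_severity + rng.normal(0, 3, n)

# An uncontrolled pre-post comparison still shows a sizeable "improvement"
diff = baseline - followup
d = diff.mean() / diff.std()
print(f"apparent within-group effect size d = {d:.2f}")  # medium-to-large
```

A within-group effect size in this range says nothing about whether the treatment did anything; only a comparison against an untreated (or otherwise-treated) control condition can separate the intervention from natural fluctuation.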
Yet Adamson et al. write:
“The cognitive behavioural therapy intervention led to significant improvements in patients’ self-reported fatigue, physical functioning and social adjustment” (p. 400).
This direct statement of causality is unjustifiable and, most likely, plain wrong.
The authors realise this – or were made to realise it by the editor or reviewers – because they state:
“the lack of a control condition limits us from drawing any causal inferences, as we cannot be certain that the improvements seen are due to cognitive behavioural therapy alone and not any other extraneous variables” (p. 401).
As Brian Hughes and David Tuller (2021) point out, this statement includes another assertion of causality which is also self-contradictory: “In one sentence, therefore, the authors draw a causal inference while denying the possibility of being able to do just that given their study design.”
Ironically, this kind of assertion is what some psychiatrists used to call ‘schizophrenogenic’. Not a bad descriptor in this case. It is also a little piece of ‘doublethink’, in which the reader is expected to accept two mutually contradictory beliefs as correct simultaneously.
Implications
- The Adamson et al. study does not and will never warrant the conclusion that CBT “led to” improvements in CFS symptoms.
- The draft NICE guidance establishes that the evidence in support of CBT for pwCFS is marginal. It is likely to be nothing more than a placebo effect.
- To quote Tuller and Hughes, “the authors have provided a partial dataset suggesting that some of their participants self-reported modest increases in subjective assessments of well-being …These changes in scores might well have happened whether or not CBT had been administered.”
- The flight of Adamson et al. into the illegitimate correlation-equals-causation error is possibly a sign of desperation. When nothing is working, there is little option but to make it up as you go along.
- The house of cards that is the Wessely School is fast tumbling down, and not before time.
Wessely et al. could just as well have attributed the self-reported gains to the nice drinking fountain in the hospital waiting room, or the great lighting, or the temperature in the treatment rooms, since it was all at one clinic. Without a proper control group you really can’t isolate what created the improvement. The Wessely school of misinformation has caused far too much damage to too many patients and has a lot to answer for. Very grateful for those shining a light into these dark places and exposing the lies for what they are.
Of course, I’m clearly alluding to the “Hawthorne Experiment” with my examples, but the point is that there could be some improvement attributable simply to the fact that people were taking an interest in their wellbeing, regardless of any other factors.