Final bit (probably) on the CTT Oxford paper

When I started the process of critiquing the Lancet paper I thought I was going to strip it down to its component parts. However, I have come to realise this would take far too long. Even longer than it has already taken – which is probably too long.

A truly detailed critique would mean that virtually every word would need to be deconstructed, expanded and explained. Instead, I am going to try and condense the key points, without getting dragged down into too many statistical rabbit holes.

I suppose that, in part, the difficulty in deconstruction represents a key defence of the paper. It has been so cleverly written, so well-guarded on all sides by impenetrable jargon, that it can take pages to explain just one word they have used. No journalist stood a chance; they just repeated the press release virtually word for word. As did everyone else. Ending up with something like:

Statins are even more effective, with fewer adverse effects than even we thought, say Oxford-based researchers in the Lancet. (Beep) message ends.

Of course, the paper itself wasn’t written in quite such a jaunty manner. Here is one turgid passage from the paper itself:

‘Biases can also be introduced by making non-randomised comparisons between rates of events across different trials, not only because the outcome definitions might differ but also because the types of patients studied and the duration of follow-up might differ. Such between-trial comparisons might be seriously misleading, which is the reason why meta-analysis of randomised trials involves statistical methods based on the within-trial differences in a particular outcome.

 As a consequence, health outcomes do not need to have been obtained in the same way in the different randomised trials contributing to a meta-analysis for comparisons of the rates between the randomly allocated groups within each separate trial to provide unbiased assessments of any real effects of the treatment.’

If you did manage to get to the end of that, what do you think it meant? And why did they write it? What are they saying here?

In truth, there is only one word in that passage that you need to take note of, for it carries the weight of all else on its shoulders. It is the word ‘might’. They used this word because they didn’t want to address the fact that the rates of adverse effects seen in different statin trials differed so wildly. From 3.2% to 94.4% for the statin. And from 2.7% to 80.4% for the placebo.

Figures that call into serious doubt the validity of the entire data set. Which should lead to questions such as: why is the measurement of adverse effects in clinical trials such a sprawling mess, when it is so critically important?

Researcher one: ‘I measured Mount Everest as being eight thousand metres high.’

Researcher two: ‘Funny that, I measured it at two hundred and forty thousand metres.’

Researcher one: ‘Measuring things can be tricky. We’ll just include both figures in our report.’

At best, each trial was recording adverse effects in such a different way that the data itself becomes almost completely meaningless. At worst, there was data manipulation going on to ensure that adverse effect rates stayed exactly the same for the statin and the placebo. No matter what the absolute rate. Which brings up the possibility of data fraud. And no-one wants to open that can of worms.

Still, the authors of the paper faced a tricky problem. How do you get around the fact that the ‘between trial’ rates of adverse effects varied thirty-fold for both statin and placebo arms? Particularly when placebos are supposed to be inert ‘sugar’ pills, and therefore identical. [More on that myth at some other time.]

The answer was to decree that this was completely irrelevant, and that the huge variations not only could, but should, be completely ignored. Which left them in the far more comfortable situation whereby they only had to explain, or compare, the rates of adverse effects within each trial.
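For anyone who wants to see that logic spelled out, here is a minimal sketch in Python. The two extreme trials use the rates quoted above; the mid-range trial is invented purely for illustration. It shows how within-trial differences can look modest while the absolute rates vary thirty-fold between trials.

```python
# A minimal sketch (mid-range trial invented, illustrative only) of the
# 'within-trial' logic: statin and placebo arms report broadly similar
# adverse-effect rates inside each trial, even though absolute rates
# vary wildly between trials.

trials = {
    # trial label: (statin-arm rate %, placebo-arm rate %)
    "Trial A (low extreme)":  (3.2, 2.7),    # rates quoted above
    "Trial B (hypothetical)": (45.0, 44.1),  # invented mid-range trial
    "Trial C (high extreme)": (94.4, 80.4),  # rates quoted above
}

for name, (statin, placebo) in trials.items():
    print(f"{name}: statin {statin}%, placebo {placebo}%, "
          f"within-trial difference {statin - placebo:+.1f}%")

# The between-trial spread the authors chose to ignore:
statin_rates = [s for s, _ in trials.values()]
placebo_rates = [p for _, p in trials.values()]
print(f"Statin rates span {max(statin_rates) / min(statin_rates):.0f}-fold, "
      f"placebo rates span {max(placebo_rates) / min(placebo_rates):.0f}-fold.")
```

Small-ish within-trial differences; a chaotic thirty-fold spread between trials. They kept the small bit and binned the rest.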

Their comfort, in large part, came from the fact that they already knew the rates for placebo and statins would be virtually the same within each trial. How so? Because they had previously responded to an open letter in the BMJ discussing this very same issue in 2014. A letter written by me.1

Yes, they dismissed this massive data anomaly in 2014, and dismissed it again in 2026. ‘Might’ is the word they used for this task, sitting within the sentence ‘between-trial comparisons might be seriously misleading.’

A lot of things might be misleading, but a lot of things also might not be. However, it is not really a scientific word, is it? In this passage it represents an evidence-free opinion. You would think that ignoring something as important as this would need to be supported by some facts, a bit of research, a touch of evidence even? Not so. The word ‘might’ will do quite nicely, thank you.

And, whilst the passage surrounding that word may sound scientific, if you take a little time to think about it, it crumbles into gibberish … dressed up to sound like science. Just look at the first sentence, and read it slowly.

‘Biases can also be introduced by making non-randomised comparisons between rates of events across different trials.’

What, exactly, is a non-randomised comparison in this case? How can you possibly randomise data that has already been collected? Especially if those data have been obtained from double-blind, placebo-controlled, randomised clinical trials in the first place. We need to randomise already-randomised data? Any thoughts on how that could be done? What would it even look like if we did so?

To further sustain their analysis, they brushed aside any trial that they didn’t like the look of, e.g. METEOR or IDEAL. They decreed that only trials with one thousand participants, or more, were worthy of consideration. METEOR – the trial where 80.4% had adverse effects on placebo – had a mere 984. Dismiss!

Removing the METEOR study was not an act of randomisation. It is what we humans call cherry-picking the data to suit our argument. They got rid of IDEAL because it compared two different statins, not statin vs. placebo. Dismiss!

Then, in a stunning contradiction, the CTT Oxford found themselves content to carry out a ‘meta-analysis’ of all these wildly different trials. And this caused them no problem at all. A meta-analysis is, by definition, a case of bringing together various clinical trials of interest in an area and aggregating the data.

There will always be differences between trials: different ages of participants, different underlying medical conditions, different end points, etc. However, if you state that you are unable to do ‘between trial’ analysis because the trials are too different, you cannot then go on to do a meta-analysis. Yet, they did.
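For what it is worth, the standard response to ‘these trials are too different’ is to measure how different they are, not to wave the problem away. Cochran’s Q and the I² statistic exist for precisely this purpose, and are expected furniture in any meta-analysis. Here is a rough sketch, with invented trial numbers, of how that is conventionally done:

```python
import math

# Hedged sketch of a fixed-effect meta-analysis with the conventional
# heterogeneity statistics. All trial numbers below are invented.
# Each tuple: (events_treatment, n_treatment, events_control, n_control)
trials = [
    (32, 1000, 27, 1000),
    (450, 1000, 441, 1000),
    (944, 1000, 804, 1000),
]

log_rrs, weights = [], []
for et, nt, ec, nc in trials:
    log_rr = math.log((et / nt) / (ec / nc))
    var = 1 / et - 1 / nt + 1 / ec - 1 / nc  # approx. variance of log RR
    log_rrs.append(log_rr)
    weights.append(1 / var)  # inverse-variance weighting

# Fixed-effect pooled estimate of the risk ratio
pooled = sum(w * lr for w, lr in zip(weights, log_rrs)) / sum(weights)

# Cochran's Q: how much the trials disagree with the pooled estimate.
# I^2: the share of that disagreement beyond what chance alone would give.
q = sum(w * (lr - pooled) ** 2 for w, lr in zip(weights, log_rrs))
df = len(trials) - 1
i_squared = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0

print(f"Pooled risk ratio: {math.exp(pooled):.3f}")
print(f"Cochran's Q = {q:.1f} on {df} df, I^2 = {i_squared:.0f}%")
```

A high I² is a flashing red light that the trials are not measuring the same thing. It does not give you licence to skip the question; it obliges you to answer it.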

Anyway, here we are… I looked at passages like the one above, and all sorts of flashing red lights went off. Most people, I suppose, just blipped over it. ‘Oh, long words, statistics…. Forget it, move on.’ Most people, in truth, will never read the paper … ever. Most doctors will not read it either. A few, a very, very few, may read the introduction and conclusions – and that’s about it.

Scientific papers in medical journals are not designed to be read nowadays; they are designed to intimidate. Dense medical jargon, unknown acronyms, all written in the dullest possible passive voice. Then, sprinkle in a few statistical tests no-one has heard of, and your castle of intimidation is complete. Defences bristling.

In this case: ‘Oh yes, we used a false discovery rate analysis…’ They used a what? I had never heard of it, and it took me a week to find someone who had. I looked it up on Google, but it still made very little sense, at first.

My current broad-brush interpretation is that the false discovery rate is a way of immunising against uncommon serious adverse events. These can turn up in early-stage exploratory studies, with small numbers of participants. Say, for example, you do a Phase One study on fifty people, and two cases of liver damage turn up.

Four per cent possibility of liver damage …OMG! That could end your drug. But of course, it could just be chance. Every week someone wins the lottery, at ten million to one. Thus, in order to prevent a chance finding from killing your drug stone dead, you set a false discovery rate (FDR). Obviously, you need to keep a sharp eye on the liver issue. It might not have been chance.
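Incidentally, the ‘could just be chance’ question is a simple binomial sum, not a mystery. A sketch, assuming a background rate of liver damage of 0.5% in untreated people – a figure I have invented purely for illustration:

```python
from math import comb

# Hedged sketch: could 2 cases of liver damage among 50 participants be
# pure chance? Assume a hypothetical background rate of 0.5% (invented).
n, k, p = 50, 2, 0.005

# P(at least k events in n people) under a binomial model
p_at_least_k = sum(comb(n, i) * p**i * (1 - p)**(n - i)
                   for i in range(k, n + 1))
print(f"P(>= {k} cases by chance alone) = {p_at_least_k:.3f}")  # ~0.026
```

About a one-in-forty chance. Unlikely, but not impossible – which is exactly the gap the FDR machinery is designed to drive a truck through.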

However, FDR is not an appropriate test for randomised confirmatory studies with thousands involved, i.e. the exact same studies they chose for their meta-analysis. Here, you are supposed to have removed chance, as far as possible, by looking at thousands of participants over several years of study. It is, kind of, the whole point of such studies. Remove chance, disprove the null hypothesis. Sell your drug, make billions.

Yet, despite this, they set an FDR of 5% to analyse studies long since completed. The 5% rate means that one in twenty of the identified safety signals is expected to be a false positive, i.e. a chance finding.
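For the morbidly curious, the standard way of applying an FDR is the Benjamini-Hochberg procedure. Here is a sketch of it at the 5% level, with invented p-values for invented safety signals; I make no claim that this is the precise variant the paper used:

```python
# Hedged sketch of the Benjamini-Hochberg false discovery rate procedure
# at the 5% level. All p-values below are invented for illustration.
p_values = {
    "muscle pain": 0.001,
    "new-onset diabetes": 0.008,
    "liver enzyme rise": 0.021,
    "rhabdomyolysis": 0.047,  # rare, serious, and sitting near the line
    "headache": 0.090,
}
fdr = 0.05

ranked = sorted(p_values.items(), key=lambda kv: kv[1])
m = len(ranked)

# Benjamini-Hochberg: find the largest rank i with p_(i) <= (i/m) * FDR.
# Everything up to that rank is a 'discovery'; everything after is noise.
cutoff = 0
for i, (_, p) in enumerate(ranked, start=1):
    if p <= (i / m) * fdr:
        cutoff = i

for i, (signal, p) in enumerate(ranked, start=1):
    verdict = "discovery" if i <= cutoff else "dismissed as chance"
    print(f"{i}. {signal}: p = {p:.3f} -> {verdict}")
```

Run that, and rhabdomyolysis – which would pass a conventional 5% significance test on its own – is quietly reclassified as noise. That is the sort of signal this machinery can make disappear.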

Which safety signals could be missed, or ignored, by doing this? How about rhabdomyolysis … severe muscle breakdown carrying with it a mortality rate of around 25%. It is a known adverse effect of all statins, though rare(ish).

Let me take you back in time to one of the most recently launched statins – cerivastatin. What, you mean you’ve never heard of it? It was the most potent statin ever made. Here is a paper from 1998: ‘Clinical efficacy and safety of cerivastatin: summary of pivotal Phase IIb/III studies.’2

‘…cerivastatin is an effective, well-tolerated, and safe treatment for the reduction of LDL cholesterol and other atherogenic lipids in primary hypercholesterolemia.’

‘There was no significant difference between the incidence of adverse effects with cerivastatin and comparator statins.’

Now let me take you forward a mere three years, to 2001: ‘Withdrawal of cerivastatin from the world market.’

‘Cerivastatin was recently withdrawn from the market because of 52 deaths attributed to drug-related rhabdomyolysis that lead to kidney failure.’ 3

Do you think CTT Oxford included data from the ‘double blind’ cerivastatin trial? The one where it was found to be safe and effective? How would the FDR analysis have done on that one? Who knows, because obviously the CTT Oxford ignored this ‘safe and effective’ drug, and the RCT that launched it.

Let me summarise the wonder drug cerivastatin for you. Statin found to be safe and effective in clinical trial, stop. No more adverse effects than any other statin in double-blind, placebo-controlled clinical trial, stop. Found to kill people shortly after launch. Stop.

Do you think serious adverse effects can be missed in clinical trials? You may want to look up Vioxx on Google. Actually, the serious adverse effects were not missed in this case, they were deliberately hidden. Costly little oversight.

Initial Litigation Settlement ($4.85bn): In 2007, Merck paid $4.85 billion to settle roughly 26,000 lawsuits from individuals alleging injury, a move that followed several high-profile jury trials.

Securities Class Action ($830m): In 2016, Merck agreed to pay $830 million to settle a lawsuit with investors who bought securities between 1999 and 2004, claiming the company misrepresented Vioxx’s safety.

Anyway, safety signals get missed all the time. The first time anyone noticed that statins increase the risk of diabetes – by a large amount – was with the JUPITER study on rosuvastatin. Which was one of the later studies, on one of the very last statins to launch. But all statins have this effect. So, how come no-one noticed it before? Your guess is as good as mine. FDR, anyone?

Of course, in what I consider to be a perfect irony, the greatly increased risk of rhabdomyolysis with cerivastatin was only picked up by observational post-marketing safety studies. The type of study that might be biased. The type of study that the CTT decreed to be worthless and can therefore be ignored.

Perhaps the CTT Oxford should demand that cerivastatin be brought back onto the market? After all, it was found to be perfectly safe in the randomised controlled double-blind study. The only studies we can rely on to provide objective data. Ho, ho.

Perhaps you can see my problem here. I read a paper, and see flashing red lights all over the place, but trying to explain why they are flashing … even a single word like ‘might’ can take pages to untangle. Why did they use it, what is the context, the background, the rationale … as for FDR …

For most people a clinical paper is an arcane and impenetrable document – virtually unreadable. However, I do know this world … all too damned well. I may not follow all the highly complex statistical methodology used, but I know enough – at least to ask the right questions. Most people are effectively frightened off.

‘FDR, what’s that? I don’t understand it. I may sound stupid if I ask questions. I shall keep quiet.’ Time, I think, for one of my favourite quotes, from Michel de Montaigne.

‘Difficulty is a coin which the learned conjure with so as not to reveal the vanity of their studies.’ 

I certainly do not read medical research papers the same way others may do. I start from a different place. Which is that I do not believe a single damned word they are saying, then work backwards towards the truth – if I can find it. It is not a world I particularly want to inhabit, but here I am. I like to think of my approach as constructive criticism. But then, I also like to think of myself as handsome and witty.

Am I alone in having such a cynical view? Am I a mad conspiracy theory nutter, railing against the deep state? I don’t think so. I am certainly not alone. Here is what Marcia Angell has to say about medical research. She edited the New England Journal of Medicine for many years. It was, and remains, the number one medical journal in the world. I imagine she could be considered a reliable witness by the mainstream.

‘It is simply no longer possible to believe much of the clinical research that is published, or to rely on the judgement of trusted physicians or authoritative medical guidelines.’

How about Richard Horton, long-term editor of the Lancet, the number two medical journal in the world?

‘The case against science is straightforward: much of the scientific literature, perhaps half, may simply be untrue. Afflicted by studies with small sample sizes, tiny effects, invalid exploratory analyses, and flagrant conflicts of interest, together with an obsession for pursuing fashionable trends of dubious importance, science has taken a turn towards darkness’.

You think peer review would pick up egregious errors and data manipulation? Here is what Horton has to say on that issue:

‘Editors and scientists alike portrayed peer review as a “quasi-sacred process” that helps to make science our most objective truth teller. But we know that the system of peer review is biased, unjust, unaccountable, incomplete, easily fixed, often insulting, usually ignorant, occasionally foolish, and frequently wrong.’

Another of my favourite quotes comes from Drummond Rennie.

The Trouble with Medical Journals carries on the insightful, acerbic criticism of biomedical publication started by Drummond Rennie (the founding father of peer review research and JAMA Deputy Editor), who wrote more than 20 years ago:

“There seems to be no study too fragmented, no hypothesis too trivial, no literature citation too biased or egotistical, no design too warped, no methodology too bungled, no presentation of results too inaccurate, too obscure, and too contradictory, no analysis too self-serving, no argument too circular, no conclusions too trifling or too unjustified, and no grammar and syntax too offensive for a paper to end up in print.”

Print that out, and stick it on your wall. I have, because it is my starting point when I read a clinical paper in a medical journal.

I knew straight off, from reading the title of the paper, that we had nonsense on our hands. The paper was called: ‘Assessment of adverse effects attributed to statin therapy in product labels: a meta-analysis of double-blind randomised controlled trials.’

Utterly misleading, right from the very start.

That title leads to the first obvious question. Or at least it should. Did they bother to find out how many people read product labels? [The product information leaflets found inside boxes of prescribed medications.] More specifically, did they try to find out how many people read statin product information leaflets?

The answer to both questions is a resounding no. They did not. Nor do they mention any other research they could use to support their central argument. Not a single scrap.

Next question. Did they carry out any original research to find out the effect, or otherwise, of reading a product label on reported adverse effects? No, they did not. There was nothing done in this area, at all.

Given this, how could they possibly make a connection between adverse effects reported in observational studies and the information found on product leaflets? Again, the answer is that they could not … they didn’t even bother. They simply assumed it ‘might’ happen. Yup, bang-on proper research. Assumption laid upon assumption.

The correct title for this paper should have been: ‘A meta-analysis of adverse effects of statins found in selected trials.’ Because that is what took place. They then tacked on an assumption, based on nothing, that the information on product labels leads to massive over-reporting of adverse effects in all observational studies. Which can therefore be ignored. Utter and complete rubbish.

As for myself, I assume that the reporting of adverse effects is based on the phases of the moon in the constellation of Sagittarius, and I have done just as much original research to support this hypothesis as they did, i.e. none.

Despite this, they ended up making the following, potentially dangerous, recommendation:

‘…there is a pressing need for regulatory authorities to require revision of statin labels and for other official sources of health information to be updated, so that clinicians, patients, and the public can make informed decisions regarding the balance of the benefits and risks of statin therapy.’

I say dangerous because muscle pain, for example, may herald severe muscle damage that can – in some cases – end up being fatal (see under cerivastatin). And if you don’t know statins can damage muscles, because all information on this has been removed, you may not take any action on symptoms until it is too late. ‘Lawyers, please start your engines.’

As for the ‘meta-analysis of double-blind randomised controlled trials’ part of the title. That refers to research on adverse effects found in randomised controlled trials. Which was only tangentially connected to product labels. If at all.

In short, there was only one piece of new research done – the meta-analysis of adverse effects in RCTs. Which was hardly new. Yet they claimed to have carried out an ‘Assessment of adverse effects attributed to statin therapy in product labels.’ Well, they made no such assessment, so no such claim could possibly be made.

A title as misleading as this heralds what is to come. Several pages of quasi-scientific nonsense. Yet, the world lapped it up.

But what of the adverse effects?

I may look at them next. Or I may have run out of energy.

1: https://www.bmj.com/content/348/bmj.g3306/rr/702257#:~:text=The%20following%20is%20a%20pr%C3%A9cis,until%20these%20concerns%20are%20addressed.&text=Sir%20Richard%20Thompson%2C%20Professor%20Clare,University%20of%20Liverpool 

2: https://www.sciencedirect.com/science/article/abs/pii/S0002914998004354

3: https://pmc.ncbi.nlm.nih.gov/articles/PMC59524/

6 thoughts on “Final bit (probably) on the CTT Oxford paper”

  1. nestorseven

    On one hand you can take drugs and take your chances, and on the other hand you can take no drugs and take your chances. There is no proof that taking drugs leads to a longer, healthier life. I choose option number two, as I prefer not to be poisoned by toxic drugs. Actually, it is my brain telling me this, speaking to my heart and soul.

  2. Paul Murphy

    Since offering my comment on your previous essay I have further reviewed the issues involved and concluded:

    1 – yes, Kaiser Permanente does have the data you need to settle this;

    2 – No, I do not think the Lancet authors are duplicitous lying SOBs, I think they’re caught between rocks and hard places and trying desperately to wiggle their way out by ultimately saying nothing they could be hung on by either side; and,

    3 – the underlying problem is that the “gold standard” methodologies, whether at the experimental or analytic stages, simply do not work for long term drugs like statins.

    So – if you want to determine whether, or to what extent, statins work, what you need to do is see whether people who were correctly (i.e. in response to a correct diagnosis) prescribed statins and took them over a long period of time lived longer, and/or better, than those who (correctly) got the same prescriptions but chose not to take them for the long term.

    This approach eliminates three of the major factors limiting the accuracy and value of long term clinical trials: no placebo effect; diagnosis confirmed by autopsy; and, no sorting (data contamination) effects arising from interactions between testers and testees.

    So, bottom line, if some large number (like 931,822) people for whom we have subsequent autopsy or morbidity conference data were prescribed statins in some year like 2000 and many (e.g. 241,455) of them did not renew their prescriptions, then simply looking at the number and scope of subsequent medical interventions in the lives of the members of both groups can tell you definitively whether, or to what extent, statins work – and you can then further refine that by considering available data on the factors differentiating those who continued to take the medication from those who did not: things like weight, activity, job stress, or addiction (food, medical attention, alcohol, …)

  3. barovsky

    Brilliant Dr Kendrick but ever so depressing. The entire edifice of modern medicine [sic] is based on a mountain of lies, so what to make of the treatment we get when (if we’re lucky, or perhaps not) we get to actually see and talk to a doctor, a rare event these days? I raised the issue of the information leaflet once and was told that, ‘Oh they have to say that for legal reasons, just ignore it’.

    BTW, I found out about FDRs within 30 seconds of reading about them in your essay:

    “The false discovery rate (FDR) is the expected proportion of type I errors. A type I error is where you incorrectly reject the null hypothesis; in other words, you get a false positive.”

    This is the condensed version. What is so infuriating about this is the use of language as a weapon to hide the truth. But wait, doesn’t the mainstream media do this as a matter of course, just without all the confusing words?

  4. Robert Dyson

    A good meta-analysis of the meta-analysis. It is not worth tormenting yourself more. Vioxx has been one of my favourite they-knew-but-hid drugs for many years. I read Marcia Angell’s book “The truth about the drug companies” 20 years ago and maybe that is where I picked up the Vioxx case. I have never been in a position where something I worked on could cause death but wonder how it feels when stuck with that problem. This would be a good research project though maybe impossible to get honest answers.

  5. GeeDee

    I stopped taking statins years ago after reading your original ‘blog’. Now nearly 80 I would like to thank you for my health and for saving the NHS some money. Unfortunately, I cannot say the same for the nurse who conducts my annual review – frustration and ‘well it’s up to you’. Today, you are not allowed to show independence. My online medical record shows repeated ‘refused COVID vaccination’ ‘refused statins’. When I show this to friends they warn me I may get struck off. How close is that?

