Finally, you may think, some figures.
In this blog I want to look at some of the figures on adverse effects seen in a number of statin trials – such figures as it is possible to unearth, anyway. Unlike the CTT in Oxford, I – along with almost every other researcher in the world – cannot gain access to most of the data. It is all highly confidential.
Official secrets are released in the UK after thirty years. But not, it seems, the pharmaceutical company research papers held in Oxford. Clearly, they are far more important to national security. We can only work with what we’ve got – so, here goes:
Now, accountancy heads on, eyelids taped open, strong coffee at the ready. Here come some figures on adverse effects from statins and placebos. These come from very large statin trials – all of which were funded by pharmaceutical giants such as Merck, Pfizer and AstraZeneca.
The main point I want to make here is that the figures themselves appear, frankly, unbelievable. So unbelievable that… I shall leave that next bit unsaid, on advice from lawyers:
- In the LIPID trial, the percentage of adverse effects recorded in those taking a statin (pravastatin) was 3.2% [1]
- In the IDEAL trial, the percentage of adverse effects recorded in those taking a statin (atorvastatin) was 94.7% [2]

94.7 ÷ 3.2 = 29.6 [Thirty times as many adverse effects seen with a different statin]

- Looking at the LIPID trial again, the percentage of adverse effects recorded in those taking the placebo was 2.7% [1]
- In the METEOR trial, the percentage of adverse effects recorded in those taking the placebo was 80.4% [3]

80.4 ÷ 2.7 = 29.8 [Thirty times as many adverse effects seen with placebo]

- In the 4S trial, where they used simvastatin 20mg, the percentage of adverse effects recorded in those taking simvastatin was 6.0% [4]
- In the IDEAL trial, where they used simvastatin 20mg, the percentage of adverse effects recorded in those taking simvastatin was 94.4% [2]

94.4 ÷ 6.0 = 15.7 [Sixteen times as many adverse effects recorded using the same statin, same dose]
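The arithmetic above is simple enough to check mechanically. A minimal sketch in Python, using only the percentages quoted from the trial reports:

```python
# Adverse-effect rates (%), as quoted from the trial reports above.
lipid_pravastatin = 3.2    # LIPID, statin arm
ideal_atorvastatin = 94.7  # IDEAL, atorvastatin arm
lipid_placebo = 2.7        # LIPID, placebo arm
meteor_placebo = 80.4      # METEOR, placebo arm
fourS_simvastatin = 6.0    # 4S, simvastatin 20mg
ideal_simvastatin = 94.4   # IDEAL, simvastatin 20mg

comparisons = [
    ("statin vs statin", ideal_atorvastatin, lipid_pravastatin),
    ("placebo vs placebo", meteor_placebo, lipid_placebo),
    ("same statin, same dose", ideal_simvastatin, fourS_simvastatin),
]
for label, high, low in comparisons:
    print(f"{label}: {high / low:.1f}-fold difference")
```

Roughly thirty-fold, thirty-fold, and sixteen-fold, respectively.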
If these figures are correct, we have a major problem on our hands. Either the collection of adverse effects data in randomised controlled trials is done in such wildly different and unregulated ways that the data are completely unusable – and thus worthless for research purposes.
Or…
The data themselves are heavily manipulated in some way, and are thus equally worthless for research purposes.
The possibility of data manipulation is strengthened by another strange phenomenon that emerges from the data – such as we are allowed to see. Namely, whatever the rate of adverse effects seen within each trial, it is the same (or very nearly the same) for both the statin and the placebo.
Here is a selection of some big statin studies.
AFCAPS/TEXCAPS: total adverse effects: lovastatin 13.6%, placebo 13.8% [5]
4S: total adverse effects: simvastatin 6%, placebo 6% [4]
CARDS: total adverse effects: atorvastatin 25%, placebo 24% (reference now missing/gone)
HPS: muscle pain: simvastatin 5%, placebo 6% [6]
METEOR: total adverse effects: rosuvastatin 83.3%, placebo 80.4% [3]
LIPID: total adverse effects: pravastatin 3.2%, placebo 2.7% [1]
JUPITER: discontinuation rate of drug: rosuvastatin 25%, placebo 25%. Serious adverse events: rosuvastatin 15.2%, placebo 15.5% [7]
WOSCOPS: total adverse effects: pravastatin 4.9%, placebo 4.5%. Discontinuation rate: pravastatin 29.6%, placebo 30.8% [8]
IDEAL: total adverse effects: simvastatin 94.4%, atorvastatin 94.7% [2]
The adverse effects, the serious adverse events, the drop out and trial discontinuation rates – they are always the same for the statin and the placebo (or for one statin vs another) – no matter what the absolute rate may be. Yet the figures range thirty-fold. For both drug and placebo.
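The pattern in the list above – near-identical arms within each trial, despite a roughly thirty-fold spread between trials – can be seen directly. A small sketch, restricted to the placebo-controlled trials quoted above with total adverse-effect figures:

```python
# Total adverse-effect rates (%) as quoted above, statin arm vs placebo arm,
# restricted to the placebo-controlled trials with total figures.
trials = {
    "LIPID":          (3.2, 2.7),
    "WOSCOPS":        (4.9, 4.5),
    "4S":             (6.0, 6.0),
    "AFCAPS/TEXCAPS": (13.6, 13.8),
    "CARDS":          (25.0, 24.0),
    "METEOR":         (83.3, 80.4),
}
for name, (statin, placebo) in trials.items():
    print(f"{name:>14}: statin {statin:5.1f}%, placebo {placebo:5.1f}%, "
          f"within-trial gap {abs(statin - placebo):.1f} points")

all_rates = [rate for pair in trials.values() for rate in pair]
print(f"Between-trial spread: {max(all_rates) / min(all_rates):.0f}-fold")
```

The largest within-trial gap is three percentage points; the between-trial spread is about thirty-fold.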
When I first dredged together such figures as I was able to dredge, I thought… well, I had best not say what I thought, or I would be immediately sued for libel.
As for the 94.4% and 94.7% figures: these came from the IDEAL study, where simvastatin was compared with a higher dose of atorvastatin in 8,888 participants. A big trial. However, the CTT dismissed IDEAL from its analysis, as it did not meet their self-appointed criteria.
I wonder if their summary dismissal of IDEAL might have had anything to do with the fact that this was a trial where almost every single participant suffered an adverse effect from taking a statin. They dismissed the METEOR trial because they decided that only trials with a thousand participants would be included in their meta-analysis, and METEOR only had 984. Wow, how … convenient.
The IDEAL trial is also very telling in a couple of other ways. As mentioned earlier, in the earlier 4S study, 20mg simvastatin was used as the active drug. The adverse event rate from this dose was 6%. In the IDEAL study, exactly the same dose of simvastatin was used. However, in this case, it led to an adverse event rate of 94.4%.
A fact made even more strange by the knowledge that about half the participants had been taking simvastatin before this trial began, and the participants were specially selected as being ‘simvastatin tolerant.’ Their words, not mine.
Therefore, one would also expect them to be generally ‘statin’ tolerant – as all statins have pretty much the same spectrum of adverse effects – if somewhat dose dependent.
You really could not make this stuff up. We have a group of people included in a trial of statins, fifty per cent of whom were already known to be ‘simvastatin tolerant’, yet 94.4% of them suffered an adverse drug reaction … to simvastatin. God knows what would have happened if they were not tolerant. All turn blue and explode perhaps.
Yes, IDEAL was not a double-blinded study, this is true. But is anyone seriously trying to suggest that the nocebo effect can lead to sixteen times as many people reporting adverse effects? If so, please provide some… a little… any evidence for any such massive difference. And good luck with that.
I have studied both placebo and nocebo research papers in some detail – which are often the same papers. They are, I should probably say here, far from perfect. As you can probably imagine, this is an area fraught with confounding variables and observer effects. But the differences found are consistently small, around five per cent or so.
Importantly, the placebo effect – where fewer adverse effects are reported when people think they are taking a medication – has almost exactly the same impact as the nocebo effect, in reverse. Therefore one, effectively, cancels the other out. Something you might intuitively expect to be the case [9].
There is certainly nothing anywhere in the literature to suggest that the nocebo effect can explain anything more than, perhaps, a doubling in adverse effects that are reported. Absolute maximum. Ergo, the figures from the RCTs make no sense. Unless, that is, your interpretation is that they may not be … entirely aligned with that little thing we call, the truth.
Serious adverse effects
Perhaps most importantly, and far more difficult to explain away with a rapid waving of the hands and a repeated muttering of ‘not-blinded, not-blinded, not-blinded…’, the rate of serious adverse events in IDEAL was:
Simvastatin: 47.4%
Atorvastatin: 46.5%
Here, we are not talking about diarrhoea, or mild leg pain, or blurry vision, or headache. The definition of a serious adverse event in a clinical trial is, as follows:
‘…any untoward medical occurrence that results in death, is life-threatening, requires inpatient hospitalization or prolongation of existing hospitalization, results in persistent or significant disability/incapacity.’
These are not the sort of things you can imagine happening to you, having first read about them on a patient leaflet. It is particularly difficult to simply imagine that, for example, you are dead. Yet nearly half of the participants in this trial suffered a serious adverse event.
This was one of the very few trials where they published any data on any serious adverse events. [How, just how, are they allowed to get away with this? Sorry to keep banging on about this, but I mean …what?]
Anyway, very nearly half the people in this clinical trial, where all participants took a statin, suffered a serious adverse event …. How did the authors deal with this elephant, nay mastodon, in the room? They dealt with it by using the following sentence:
‘The results indicate that patients with myocardial infarction may benefit from intensive lowering of LDL-C without increase in noncardiovascular mortality or other serious adverse reactions.’
[Note the word reaction here. This implies a ‘reaction’ to the drug, which is not what an SAE is. Right here, right under our noses, yet another critical semantic trick is being played out. Seemingly small, possibly unimportant? In truth, massively distorting. SAEs cannot be defined as ‘reactions’, as that is not what they are. Peer reviewers, where art thou? Polishing their pro-statin reputations, no doubt.]
In essence, the investigators simply brushed aside the fact that 94% of participants suffered an adverse effect with statins, and that very nearly half of the participants suffered a serious adverse event. Move along now, nothing to see here.
Adverse events/effects – serious adverse events (SAEs)?
At this point I feel the need to explain the difference between a drug-related adverse effect/event and a serious adverse event (SAE). They do sound like pretty much the same thing, and most people think they are the same thing. Are serious adverse events simply the serious subset of the overall adverse events figure?
No. And I have to say this particular topic is highly confusing … deliberately so? It took me a while to get my head round it.
A drug-related adverse effect, or event, is an unpleasant thing deemed to have been caused by the drug, e.g. a rash, muscle pain, a headache. Unpleasant, damaging to quality of life, but not life-threatening.
Are serious adverse events (SAEs) also caused by the drug?
The answer is both yes, and no. And no and yes – and maybe. SAEs are complicated, but important to understand. For, within a single figure, lies the potential for much statistical mischief. And much statistical mischief there, indeed, is.
There are three sub-sets of SAEs:
- An SAE can be caused by the drug – e.g. liver failure, muscle breakdown
- An SAE can be something that is supposed to have been prevented by the drug e.g. a heart attack.
- An SAE may have nothing whatsoever to do with the drug – someone develops bowel cancer (which, obviously, might be something to do with the drug that you didn’t expect to see – another issue for another time)
Given that there are three types of SAE, all wrapped up within the same overall number, does it mean anything at all? Can any sense be made of it?
Well, first you have to split the single figure into its component parts.
- Drug related SAEs – serious events caused by the drug
- Drug preventable SAEs – serious events that the drug is designed to protect against
- Coincidental SAEs – serious things that just happen by chance (maybe)
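To see why the single headline number hides so much, here is a sketch of that decomposition with purely illustrative numbers (invented for the example, not taken from any trial):

```python
# Illustrative decomposition of a headline SAE rate (%) per trial arm.
# All numbers here are made up to show the logic, not trial data.
statin_arm  = {"drug_related": 5.0, "drug_preventable": 10.0, "coincidental": 30.0}
placebo_arm = {"drug_related": 0.0, "drug_preventable": 14.0, "coincidental": 30.0}

# The headline figure is usually the only one published.
headline_statin = sum(statin_arm.values())    # 45.0
headline_placebo = sum(placebo_arm.values())  # 44.0

# Coincidental SAEs should roughly balance in a large trial, so the headline
# difference mixes together harm caused and harm prevented:
harm_caused = statin_arm["drug_related"] - placebo_arm["drug_related"]             # 5.0
harm_prevented = placebo_arm["drug_preventable"] - statin_arm["drug_preventable"]  # 4.0
print(headline_statin - headline_placebo)  # 1.0: the statin arm has more SAEs overall...
print(harm_caused - harm_prevented)        # 1.0: ...because harm caused exceeds harm prevented
```

With only the two headline figures published, the 5.0 caused and 4.0 prevented are invisible; all you see is the 1.0 difference.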
Disentangling can be tricky when you cannot see the vast majority of the data that make up the headline SAE figure. Although some trials did release more than others, e.g. METEOR, ALLHAT-LLT, and IDEAL. Strangely, none of these trials made it into the CTT meta-analysis. Yet another inexplicable coincidence, no doubt.
Anyway, let us start the great disentanglement by first looking at coincidental SAEs. That is, serious events that happen by chance. If the trial is big enough then you should see the same number of ‘coincidental’ SAEs in both the statin and placebo arms. Maybe one or two more on either side. For the sake of this argument, we can ignore these, assuming the drug didn’t actually cause unexpected SAEs.
Once you have got rid of ‘chance’ SAEs, you have two left. Drug related SAEs and drug preventable SAEs.
In a trial of statins, the investigators are hoping to see fewer cardiovascular SAEs. By which I mean there should be fewer cardiovascular (CV) deaths, e.g. from heart attacks and strokes. There should also be fewer non-fatal heart attacks and strokes, fewer angina attacks, fewer stents inserted, etc. In short, fewer serious CV events.
If there is a reduction in cardiovascular SAEs, there should also be a reduction in overall SAEs in the statin arm. If not, this means that the statin must be causing as many SAEs as it is preventing. Whatever those SAEs may be.
If we look at the IDEAL study in a little more detail, we find the following – with regard to serious adverse cardiovascular events.
Simvastatin = 30.8%
Atorvastatin = 26.5%
The first thing to say here is that this is a hell of a lot of CV events. More than a quarter of those taking high dose atorvastatin suffered a serious cardiovascular event, and nearly a third of those taking simvastatin. Over a time period of just under five years. So much for the super preventive power of statins.
Back on point. What we have here is a difference of 4.3% in cardiovascular SAEs between simvastatin and atorvastatin.
Ergo, there should also be a difference of 4.3% in overall SAEs. But there is not. There is a difference of 0.9% [47.4% vs. 46.5%]. Herein lies the great unexplained gap of 3.4% in SAEs. Not really a gap, more of a giant chasm. Both clinically and statistically, hugely significant.
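The gap calculation, using only the IDEAL figures quoted above:

```python
# Serious adverse events in IDEAL, as quoted above (% of participants).
cv_sae    = {"simvastatin": 30.8, "atorvastatin": 26.5}  # cardiovascular SAEs
total_sae = {"simvastatin": 47.4, "atorvastatin": 46.5}  # all SAEs

cv_diff = cv_sae["simvastatin"] - cv_sae["atorvastatin"]           # 4.3
total_diff = total_sae["simvastatin"] - total_sae["atorvastatin"]  # 0.9
gap = cv_diff - total_diff  # 3.4: non-CV SAEs unaccounted for

print(f"CV SAE difference:    {cv_diff:.1f} points")
print(f"Total SAE difference: {total_diff:.1f} points")
print(f"Unexplained gap:      {gap:.1f} points")
```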
What fills that gap? What indeed?
To help answer this question I will take you back to an earlier statin meta-analysis done several years ago by the University of British Columbia – a part of the Cochrane Collaboration. This was a paper looking at statins in primary prevention (people with no known CV disease). They highlighted the exact same issue that I am discussing here. In their words:
‘In the two trials where serious adverse events are reported, the 1.8% absolute reduction in myocardial infarction and stroke should be reflected by a similar absolute reduction in total serious adverse events. [MI and stroke are, by definition, serious adverse events].
However, this is not the case; serious adverse events are similar in the statin group, 44.2%, and the control group, 43.9%. This is consistent with the possibility that unrecognized serious events are increased by statin therapy, and that the magnitude of the increase is similar to the magnitude of the reduction in cardiovascular serious adverse events in these populations.
This hypothesis needs to be tested by analysis of total serious adverse event data in both past and future statin trials. Serious adverse event data is available to trial authors, drug companies and drug regulators. The other measure of overall impact, total mortality, is available in all 5 trials and is not reduced by statin therapy.’10
Their conclusion.
‘Conclusions: If cardiovascular serious adverse events are viewed in isolation, 71 primary prevention patients with cardiovascular risk factors have to be treated with a statin for 3 to 5 years to prevent one myocardial infarction or stroke.
This cardiovascular benefit is not reflected in 2 measures of overall health impact, total mortality and total serious adverse events. Therefore, statins have not been shown to provide an overall health benefit in primary prevention trials.’
Their conclusion: unrecognized serious events are increased by statin therapy
[This group changed their minds on the use of statins in primary prevention some years later. Based mainly on the results of the only primary prevention study to show benefits on overall mortality: the JUPITER study. I think they were wrong to do so, but they did. However, it had nothing to do with their SAE analysis, which still holds true – and remains ever unanswered. I have added the CV event data from JUPITER in an additional blog, as it became too complicated to explain here.]
To go back to my earlier question. ‘What fills that 3.4% gap?’
What fills that gap is, until proven otherwise, excess SAEs caused by the higher dose atorvastatin. Something that should be of real concern to everyone prescribing, or having high dose atorvastatin prescribed for them.
Instead, in IDEAL, here is what we got by means of an explanation.
‘The results indicate that patients with myocardial infarction may benefit from intensive lowering of LDL-C without increase in noncardiovascular mortality or other serious adverse reactions.’
This is, in reality, the exact opposite of what they actually found. Here we see, in black and white, an increase of 3.4% in ‘other’ un-named SAEs that were most likely caused by the high dose atorvastatin.
And, until independent researchers can see the raw trial data, which only the CTT can currently see, we cannot possibly accept the idea that more intensive statin lowering of LDL-C does not cause other serious adverse reactions.
Nor can we accept the headlines that followed the Lancet paper that triggered my response:
‘Cholesterol-lowering drugs called statins, used by millions, are far safer than previously thought, a major review has found.
Leaflets in packs should be changed to reflect this and avoid scaring people off using the life-saving pills, say the authors.’ 11
‘Safer?’… The CTT Oxford study had nothing to do with safety, or SAEs. Nothing at all. It did not look at life-threatening drug-related adverse events; it looked at the less damaging drug-related effects such as muscle pain. Safety, where art thou? Not here.
To be frank, I have never been that bothered by the relatively minor drug-related effects of statins. If you stop taking them, they go away… hopefully (though not always). Although they may be the harbinger of greater health problems. Muscle pain > myopathy > rhabdomyolysis > dead.
I have been far more concerned with SAEs and/or irreversible damage caused by statins. Something not even considered by the CTT in Oxford. But I have seen many people who I am absolutely certain were severely damaged by statins. Leading to lifelong disability and damage. Crippled, for life.
I have also witnessed patients who have, literally, risen from their beds, as if by a miracle, when they stopped taking the damned things. Hundreds, thousands, who have written to me telling me their very upsetting stories of pain and damage, and utter dismissal by their doctors. You think I am just making this up?
I will remind you of the Barney Calman e-mail to Professor Sir Rory Collins and his co-collaborators at the CTT as he put together his article in the Mail on Sunday, attacking me for my dangerous criticism of statins.
‘Dear all, thank you again for all your input into this article so far. I wanted to readdress the issue of finding a case study. One of the key factors in your collective argument is that criticism of statins discourages use amongst high-risk patients, and this is a public health threat. Since putting calls out we have been inundated by stories of people who have stopped taking statins and felt far healthier.
We’ve had two quite dramatic stories of patients who have been taken off statins by their doctors because of developing serious liver problems, and then died. The families themselves both naturally question whether statins caused the problems. What we haven’t had is a single story which backs your thesis, and obviously I’m concerned.’ [but not concerned enough to pay any attention to these tales, clearly]
Ah yes, but these are mere anecdotes, so easily dismissed when we have the utterly concrete and believable figures from the almighty randomised controlled (industry funded) clinical trials to disprove the evidence of our own eyes.
Digging the figures
I know that for a lot of people this analysis will have been really heavy going. Figures, statistics, unfamiliar acronyms. I would apologize, but in truth, I cannot really do so. To quote the great Michel de Montaigne:
‘Difficulty is often a tool used by scholars to hide the lack of substance in their studies.’
The CTT paper is difficult to read and understand. All the statin studies are equally difficult. They are filled with complex graphs and statistics, they use arcane terminology, and terms that have to be painstakingly unwrapped. Such as, what does a Serious Adverse Event actually mean? One simple acronym, a page and a half of explanation required to explain it.
In fact, all words they use are chosen with great care. Even the title of the Lancet paper is misleading and yet, still, just about, technically correct. You wouldn’t be able to pin them down in a court of law. The words would slip and slither away from your grasp.
In short, I have to go into the detail, for this is where the game is played. In the murky world of assumptions and acronyms. Assumptions that can drive complex statistical games in any direction you want. Clever, clever, games. I hope that by shedding light on a few of the games, you can see more clearly what is going on here.
I do wish all researchers were on the same page, the same side, searching for the truth, rather than making it almost impossible to see. I wish this type of forensic analysis was not required. But it is.
Next, and finally in this series, I will outline the assumptions the CTT in Oxford made, which allowed them to claim what they did. And finish here with a quote from the BMJ:
‘Analysis of AE (adverse event) data is frequently inappropriate and RCT reports published over a recent period in high impact general medical journals often provide insufficient and inconsistent information to allow a comprehensive summary of the safety profile to be established.’ 12
Addendum: next is a blog on the JUPITER trial.
2: https://jamanetwork.com/journals/jama/fullarticle/201883
6: Gurm
10: Therapeutics_Initiative_48_Statins_role_2003.pdf
11: https://www.bbc.co.uk/news/articles/c80142p2g00o
12: Analysis and reporting of adverse events in randomised controlled trials: a review | BMJ Open
