Tamiflu

Debating the Evidence

Franklynn Bartol

In our interviews with health experts about Tamiflu, it may seem like everyone has evidence to support their position, leaving us with a rather bewildering picture. How can we assess what “good evidence” looks like in medicine? While there’s no simple answer to this question, here we’ll get into some of the terminology and perspectives you’ll hear in our coverage, both for and against Tamiflu.

We tend to assume that medical professionals always base their decisions on clear scientific evidence of a treatment’s efficacy; after all, these are highly educated, scientifically minded people. However, the picture is often much more complicated. In fact, it wasn’t until 1991 that Canadian physician Gordon Guyatt coined the now-common term ‘Evidence-Based Medicine’.

EBM stresses that clinical decisions should be based on the best available scientific evidence—rigorous, systematic, peer-reviewed research that follows scientific standards for careful and unbiased testing. Since its inception, EBM has evolved to emphasize not only scientific evidence, but also factors relating to the specific case at hand, such as the patient’s values and preferences and the healthcare provider’s clinical judgement based on the patient’s unique situation and medical history. 


So how does it work?

In medicine, randomized controlled trials (RCTs) are often the “gold standard” for scientifically testing a treatment. In RCTs, one group of participants receives the intervention under investigation (e.g. the drug), while another, the control group, receives a comparison intervention (e.g. another established drug), an inactive intervention (e.g. a placebo), or standard care.

Comparing these groups lets researchers measure the intervention’s effects beyond what would normally be expected, or relative to accepted treatments. It is important that researchers follow specific procedures to ensure participants are assigned to groups randomly. If this condition isn’t met, there may be differences in the characteristics of participants assigned to the two groups, which could bias the trial’s outcomes. Trials are considered ‘clinical’ when they follow specific protocols for testing the efficacy and safety of medical interventions. An infographic created by the Cochrane Collaboration explains the four phases of typical clinical trials.
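As a minimal sketch (my own illustration, not from the article), random assignment can be done by simply shuffling the participant list and splitting it in half:

```python
import random

def randomize(participants, seed=42):
    """Randomly split participants into two equal-sized trial arms."""
    rng = random.Random(seed)      # fixed seed only for reproducibility
    shuffled = list(participants)
    rng.shuffle(shuffled)          # chance, not judgement, assigns the groups
    half = len(shuffled) // 2
    return shuffled[:half], shuffled[half:]

treatment, control = randomize(range(100))
print(len(treatment), len(control))  # 50 50
```

Because every participant has the same chance of landing in either arm, characteristics like age or underlying illness tend to balance out across the groups as the sample grows.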

However, a few RCTs are not enough to support a drug’s efficacy and safety. A core value in science is replicability: if a trial’s results can be replicated, that’s evidence that its outcomes didn’t arise from random chance, or from factors the study failed to control for. Many RCTs should therefore be conducted, ideally by multiple labs, to thoroughly test a product. Often, this results in a vast literature of studies that can be challenging, if not impossible, for an individual healthcare provider to assess.

After multiple rounds of RCTs, systematic reviews are conducted to amalgamate these studies’ findings, review their procedures and results, and either draw conclusions or note remaining questions or inconsistencies in the existing evidence. Reviews are ‘systematic’ when they follow strict guidelines to identify all relevant evidence and assess each study against set criteria. Some reviews, called meta-analyses, combine the numerical data from multiple studies of the same topic or treatment.

This large dataset is more ‘statistically powerful’ than individual studies, whose smaller number of participants may be insufficient to detect the full effects (including side effects) of a treatment.
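To make “statistical power” concrete, here is a rough, hypothetical illustration (not from the article) using the standard two-proportion sample-size approximation; the 1% and 2% side-effect rates below are invented for the example:

```python
from math import ceil

def per_arm_sample_size(p1, p2, z_alpha=1.96, z_beta=0.84):
    """Approximate patients needed per arm to compare two event rates
    (5% two-sided significance, 80% power)."""
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return ceil((z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2)

# Detecting a (hypothetical) side effect occurring in 2% of treated
# patients vs 1% of controls takes roughly 2,300 patients per arm:
print(per_arm_sample_size(0.01, 0.02))
```

A trial with a few hundred patients per arm would very likely miss such a difference, which is why pooling data from many trials matters so much for detecting rarer effects.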

Just like any research, RCTs have limitations, which in turn limit the conclusions that systematic reviews and meta-analyses can draw. RCTs are often designed to look for the benefits of a drug rather than its risks, so potentially harmful effects may go undetected. They also tend to recruit patients with minimal risk of complications or interactions with the treatment.

In these cases, RCTs may not detect how a treatment will interact with multiple other treatments or underlying health conditions. RCTs also generally take place over a relatively short period of time, reducing the likelihood that longer-term effects will be detected. Because of these limitations, even if a treatment is approved for use outside of trials, observational data continues to be important to monitor how a treatment is impacting patients in uncontrolled ‘real world’ contexts. 

In observational studies, researchers take note of the associations between a treatment, risk factor, or diagnostic test and health outcomes. They observe without interfering — so there’s no random assignment to an intervention or control group. Therefore, observational studies cannot draw conclusions about causality, since results may be due to other ‘confounding’ factors.

For example, maybe those who sought treatment were already healthier than those who didn’t. Sometimes an observational study is the only practical, affordable, or ethical option. For example, it would be unethical to ‘assign’ people to smoke every day or to withhold a probable life-saving treatment from those in a control group.
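A toy simulation (entirely hypothetical, not based on any Tamiflu data) makes the “healthier treatment-seekers” confound concrete: the simulated treatment does nothing at all, yet the treated group looks better.

```python
import random

rng = random.Random(0)  # fixed seed so the toy example is reproducible

treated, untreated = [], []
for _ in range(10_000):
    healthy = rng.random() < 0.5
    # Healthier people are more likely to seek the treatment...
    seeks_treatment = rng.random() < (0.8 if healthy else 0.2)
    # ...but recovery here depends on underlying health alone:
    # the 'treatment' has zero effect.
    recovered = rng.random() < (0.9 if healthy else 0.5)
    (treated if seeks_treatment else untreated).append(recovered)

def recovery_rate(outcomes):
    return sum(outcomes) / len(outcomes)

# The treated group recovers noticeably more often, even though the
# treatment does nothing: a pure confounding artefact.
print(recovery_rate(treated), recovery_rate(untreated))
```

Random assignment breaks exactly this link between who gets treated and how healthy they were to begin with, which is why RCTs can speak to causality where observational data cannot.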

Observational studies may be carefully planned and include many participants, but observational data from even one or a few patients can also prompt further scientific inquiry into a treatment. This was the case in our story, when doctors observed multiple instances of neuropsychiatric symptoms after children took Tamiflu.

Observational data is an important piece of scientific research, but insufficient on its own to draw conclusions about which health outcomes a treatment does or does not ‘cause’.

To learn more about EBM, RCTs, and systematic reviews, consumers can access the Cochrane Review’s free four-module educational series.

Is EBM a ‘Gold Standard’ or a Political Tool?

Evidence-Based Medicine (EBM) seems to provide medical professionals and regulators with an agreeable recipe for evidence-based decision-making—RCTs, systematic reviews, and observational studies are all essential ingredients for stringent testing.

However, as our Tamiflu story shows, there is still much debate about what constitutes sufficient evidence. Many scholars note that EBM does not remove the politics from health research and policy-making (e.g. Cairney & Oliver, 2017; Oliver & Pierce, 2017).

In fact, they argue, “promoting medical practice based on evidence will necessitate more, not less politics” (Rodwin, 2001). Political values and motivations still influence how ‘EBM’ is presented, interpreted, and used. 

This is exemplified throughout our story on Tamiflu. As the BMJ’s investigation revealed, the RCTs supporting Tamiflu, initially reported in 2003, suffered from publication bias: Roche didn’t publish 8 of their 10 clinical trials. They withheld trials with less favourable results until 2014, after years of requests for the data.

With the new trial data, the Cochrane Collaboration conducted a systematic review and meta-analysis of all existing Tamiflu trials. It was by far the largest dataset yet used to evaluate Tamiflu: the reviewers examined 46 trials involving 24,000 patients and 150,000 pages of regulatory documents.

Cochrane reviews use systematic risk-of-bias checks to externally review and re-analyze data. They assess whether statistical analyses and study protocols were appropriate and whether all data are transparently reported across internal company reports and external publications.

The importance of this can be seen in the findings. They found several issues with potential bias in the Roche-funded studies, including missing data and trial information, selective reporting, and lack of consistent protocols for diagnosis, measurement of complications, and randomization of participants. 

While there was evidence that Tamiflu reduces time to first symptom alleviation by about one day, it did not relieve symptoms in children with asthma, who are most at risk. In addition, the review reports that “the apparent duration of effect on symptom relief afforded by oseltamivir is open to question because data on relapse after the five day treatment period were not reported in the clinical study reports.”

Importantly, the review found no evidence that Tamiflu stops the spread of influenza between people or reduces serious complications and hospitalization: two key claims Roche had used to justify pandemic-preparedness stockpiling.

Side effects were also found: Tamiflu caused nausea and vomiting and increased the risk of headaches and of renal and psychiatric syndromes, including suicidal ideation, paranoia, aggression, and nervousness.

This seriously undermines pandemic plans to give Tamiflu to many otherwise healthy people. Given influenza’s relatively benign course in healthy individuals, Tamiflu’s potential harms outweigh its small benefits. The authors conclude that “these findings provide reason to question the stockpiling of oseltamivir [Tamiflu’s generic name], its inclusion on the WHO list of essential drugs, and its use in clinical practice as an anti-influenza drug.”

They further note that their findings “imply that numerous national and international bodies appear willing to accept biased or incomplete trial reports seemingly at face value”.

In this case, systematic analyses by Cochrane, in which they assessed the risk of bias in each trial and amalgamated data across trials, helped to reveal important inconsistencies in the existing evidence and refute claims based on smaller trials. 

But reviews can also be biased if they don’t follow appropriate methodology. Let’s take the ‘Dobson’ study, published in The Lancet in 2015, just one year after the Cochrane review.

This meta-analysis of nine trials, amounting to 4,328 patients, claimed to support the efficacy of Tamiflu: “Oseltamivir in adults with influenza accelerates time to clinical symptom alleviation, reduces risk of lower respiratory tract complications, and admittance to hospital, but increases the occurrence of nausea and vomiting”.

However, several scientists disagreed with the study’s interpretation of its findings and cast doubt on its reliability, noting that the meta-analysis’s protocols were not published and the methodological quality of the included studies had not been reviewed. It was later revealed that this meta-analysis was funded by Roche, the pharmaceutical company that produces Tamiflu.

On top of that, three of the four authors had financial conflicts of interest. One researcher, Dr. Richard Whitley, was a board member of Gilead Sciences, which holds patents for Tamiflu. Despite the findings of the Cochrane review one year earlier, the CDC cited the Dobson study as support for its promotion of oseltamivir in its “Take 3” campaign, in which step 3 is to take an antiviral.

In an interview with NPR, the CDC Director claimed that Tamiflu “will shorten how long you’re sick, might keep you out of the hospital, and could even save your life”. In fact, the CDC responded to the 2014 Cochrane Review that their recommendations for antivirals would remain unchanged, noting that observational studies not considered in the Cochrane review supported prescribing antivirals to high-risk patients with influenza. 

Debates within EBM still rage over the relative weight given to RCTs versus observational studies, and over which studies count as methodologically rigorous and unbiased. EBM is not a shortcut to ‘objective’ science, but a standard by which to critically assess the values and politics shaping science itself.

Industry funding of research compromises the scientific integrity of many RCTs, systematic reviews, meta-analyses, and observational studies. We’ll get into that in our next blog post. But it’s essential to critically assess whether evidence-based medicine is actually based on the best data out there, or whether it’s just marketed that way.