Evidence-based medicine is a great idea, except for the evidence part. In a powerful op-ed in the Boston Globe, Sylvia Pagán Westphal makes the seemingly obvious point that “evidence-based medicine is only as strong as the evidence used to support it. The stark reality is that evidence can be weak, biased, or even fraudulent.”
A key weakness in the chain is the use of guidelines in evidence-based medicine, since expert opinion instead of “solid clinical trial evidence” plays a large role in the guidelines. The problem is that “many physicians who write these recommendations have financial ties to drug companies.” Unfortunately, she observes, “there’s a systemic lack of transparency in the guideline-setting process, which means that low-quality data gets portrayed as much better evidence than it really is.”
She continues:
Perhaps no recommendations carry more weight — and the burden of more life-and-death decisions — than those in cardiology, issued jointly by the American College of Cardiology and the American Heart Association. Of the 2008 guidelines, involving over 2,700 treatment recommendations, only 11 percent were backed by strong evidence from multiple randomized clinical trials, according to a 2009 study by Duke University researchers published in the Journal of the American Medical Association. Meanwhile, 48 percent of the recommendations were based on the personal opinions of experts in the field.
Sometimes the experts disagree. In 2008, guidelines from the Endocrine Society recommended against using the apoB test to assess risk, while the American Diabetes Association and the ACC supported the test.
And sometimes the experts are plain wrong, causing real damage. Westphal cites a recommendation in the pneumonia guidelines from the Infectious Diseases Society of America that patients should receive antibiotics within four hours of hospital arrival, instead of eight hours as in the past, although there was no good data to support the recommendation. But because it was included in the guidelines it was adopted as a performance standard by CMS and the JCAHO. The result:
But it wasn’t long before reports began to surface of emergency rooms over-diagnosing patients with pneumonia and giving them antibiotics just to comply with the four-hour rule. Studies also came out showing no survival advantage from giving the antibiotic earlier. By 2007, the government announced that it would stop using the four-hour performance measure in its tracking of hospitals. In the interim, many thousands of people were affected, plus there was the wasted cost incurred for many unnecessary doses of antibiotics. Moreover, the Medicare study’s senior author was one of the six men in charge of writing the society’s pneumonia guidelines — and he, along with two more members of the very same panel, also sat on the national pneumonia panel. Allowing the same experts to sit on multiple treatment panels, rubber-stamping their own recommendations, violates any system of checks and balances.
Westphal notes that many organizations have “moved to shore up” their internal controls over their guidelines, including more transparency about the process and about the degree of agreement and disagreement among guideline committee members. “The bottom line is that expert opinion, as well intentioned as it might be, shouldn’t masquerade as evidence-based medicine,” she writes.
But what to do when “the experts themselves are clouded by financial ties to companies making the drugs under consideration”? She points out that over 50% of the panel members of the DSM had financial ties to industry, and “in areas where drugs are the treatment of choice, such as mood disorders and schizophrenia, all guideline panel members had financial links to industry.”
Disclosure itself isn’t enough, argues Westphal:
Doctors with ties to companies making the drugs under consideration shouldn’t be assessing them. It’s a basic ethical assumption, but it hasn’t been widely embraced by guideline-setting organizations, which think the problem is fixed by allowing panel members to disclose their financial ties or asking them to recuse themselves from voting. The truth is — and companies know this — that having a “thought leader’’ speak on behalf of a treatment is influential to his or her peers, even if disclosures have been made.
Westphal also discusses the limitations of data derived from clinical trials sponsored by industry, citing examples of companies hiding adverse events data or “publishing only studies that are positive for a drug, while burying the negative results.”
Here’s the real dilemma:
Evidence-based medicine functions on the assumption that strictly analyzing all evidence available for a treatment leads to an accurate judgment of the treatment’s effect. When negative studies are not published, or when they are published in a way that makes them seem positive, or when adverse outcomes are omitted from the medical literature, the “evidence” is no longer a reflection of a treatment’s real effect but a skewed, artificially positive illusion.
I highly recommend you read the original article in the Boston Globe.
“no longer a reflection of a treatment’s real effect but a skewed, artificially positive illusion”
This is the concept that confuses many intelligent but ill patients when reading about treatment options. All of the evidence needs to be reviewed carefully. This must include negative results and limitations of the methods used.
Independent confirmation surpasses expert opinion by leaps and bounds in terms of evidence quality. By helping patients with that review, one can truly advocate for them.
Evidence-based medicine is a wonderful concept. Sadly, it is increasingly being hijacked by self-interest, both individual and commercial. Even political and social agendas are involved. Comparative effectiveness studies are next in line.
I think the concept of evidence-based medicine has been taken to ridiculous proportions, to the extent of being impractical for day-to-day practice, in which a clinician encounters a myriad of situations and is expected to keep himself updated with the latest evidence emerging from trials and expert guidelines.
This evidence is at times mutually contradictory, so the poor treating doctor has to exercise his discretion and judgement in choosing what to follow and what not to follow, and in this the entire concept of so-called evidence-based medicine meets its antithesis.
In a country like India, a general practitioner has to see as many as 100 patients in a day, suffering from all kinds of ailments from head to toe. How do you expect him to keep himself informed about the latest clinical trials, expert opinions, and guidelines applicable individually to his patients? Where is he going to get the time for it, and do the great majority of his patients have the means to compensate the effort involved?
This calls for an entirely different approach to the practice of so-called evidence-based medicine.
Well, there are various ways in which evidence assessments in evidence-based medicine (EBM) can be biased. In 2010, we accidentally stumbled across some irregularities in the area of skin antisepsis, admittedly a relatively trivial area of medicine, but one that is important in preventing infections from invasive procedures. We investigated this further, and before our eyes unfolded a story of how perception and preconceived notions can cloud and skew assessments in EBM, even when strictly following EBM protocols. A “forgotten” component was not taken into account by between one third and one half of clinical trials and systematic reviews, and this led to unsubstantiated recommendations in prominent evidence-based guidelines. Our findings have broader implications for EBM, and also potential implications for patient safety. They are published here: http://dx.plos.org/10.1371/journal.pone.0044277
When process and administration become more important than intellectual content and common sense, beware. But when they turn around to control or manipulate the medical knowledge base to further their own existence, fear.