Bad guidelines don’t just give bad advice. They also harm science and impede research.
The new US Dietary Guidelines, which I've already called a recipe for disaster, are a perfect example of why we need to have fewer, shorter, and, crucially, better guidelines. Back in 2014, in response to the controversy over salt guidelines, I argued that guidelines can have hidden dangers and cause serious unintended consequences. Like war, I wrote, guidelines should be waged only when there is overwhelming evidence and near universal consensus.
Most readers here are probably already aware of the most obvious kind of harm inflicted by bad guidelines. Bad advice leads to bad actions. The best example of this is the likelihood that earlier guidelines demonizing fat and dietary cholesterol played a significant contributing role in the obesity and diabetes epidemic.
But there's another sort of unintended consequence that is less obvious and rarely discussed, though it may well be equally harmful. By their very nature, guidelines present the illusion of successful science, the appearance that clarity and understanding have been achieved by the experts. When this happens, science is the victim.
Mission Not Accomplished
The danger of guidelines is that we’ve declared premature victory. We’ve removed the incentives to perform new research. Any casual observer, looking at the hundreds of footnotes in a guideline, would be perfectly justified in asking why we should throw away more resources exploring a topic that is already so thoroughly well understood.
The key to making new progress is to first acknowledge how much we don't know. Rather than trumpeting false knowledge and certainty, health organizations should instead publicize how much they don't know. Such a move would instantly give them more durable and genuine credibility, and would also serve to educate the broader public about how science works and why we need to perform more research.
Back in 2014 Salim Yusuf (McMaster University) proposed that the experts should focus on only a few well-chosen public health policies: "In the end there are only a few public health policies on diet and lifestyle that we can recommend as a society in order to have credibility and so we have to choose them very carefully and focus on the ones for which we have the best evidence and those which are most feasible," he told me.
In addition, said Yusuf, it is important that when new evidence emerges, guidelines should be re-evaluated objectively “rather than people or organizations digging in, and refusing to consider new evidence even when it may challenge one’s own thinking. Policy must be based on reliable evidence and not on personal positions.” In 2014 Yusuf could not have known what the new dietary guidelines would contain. But his warning now appears prescient.
Update:
Here’s a great comment sent by James Stein (University of Wisconsin):
I agree with everything you said above. But another way that guidelines are bad for science is the forced oversimplification of complex data into simple, often binary treatment recommendations. It is understandable from a practice standpoint to desire simple rules, but scientifically one loses much information by using binary cut points. Oftentimes, journal and grant reviewers not only believe that an issue is "settled" when it is not (as you pointed out), but also that the thresholds used in guidelines represent biologically meaningful cut points, when they may have been used for expediency only.
There are strong rationales not to use binary categorizations of many biological variables unless empirically justified. Researchers seek to provide biological insights into the relationships between continuous risk factors or biological variables and clinically relevant outcomes. A priori, we would expect there to be a continuously graded, monotonic relationship between most. Often, relatively precise data on risk factors and biological variables are assessed at great expense and substantial burden to study participants. Rounding them off to a binary variable wastes that information. Rounding some variables, like body-mass index, which is a potential confounder in many cardiology studies of risk factors, to a binary variable dilutes our ability to properly adjust for it, yielding residual confounding and biasing associations away from the null, which would be statistically improper.
COMMON SENSE AND VALUE-BASED GUIDELINES!
Clinical guidelines should be considered guides to be customized for individual situations and patients, taking into account their values and preferences. Patients often have multiple comorbidities, and guidelines need to have enough flexibility to serve their interests and those of the public at large.
Academic research and the development of guidelines are not immune from conflicts of interest, as pharmaceutical and device manufacturers fund the research and the development of many guidelines, either directly or indirectly, by funding the clinical societies and academic institutions.
Guideline committees and organizations are obligated to minimize conflicts of interest among panel members so that their recommendations become credible and value-based, not just evidence-based.
Nonetheless, professional and governmental bodies and regulatory agencies should not automatically translate guidelines into performance measures.