One of the great scientific achievements of the past generation has been the identification and characterization of the genetic underpinnings of many diseases. By combining genetic information with other forms of research, doctors have reached a much deeper understanding of many diseases. In a few cases genetic information has proved useful in screening for and treating disease, but the broad application of genetic information has not yet been shown to be helpful, despite considerable hype suggesting imminent and overwhelming benefits.
Now a new study provides strong evidence that broad, general use of genetic information will have very little value in guiding health decisions, and may even cause far more harm than good. In particular, the study shows that incidental findings from genetic screening strategies have a very long way to go before living up to the hype of precision medicine advocates.
“To have genomics impact clinical medicine, we need to be able to accurately interpret the genome, to properly describe the phenotypic impact of any given variation,” said Sekar Kathiresan (Massachusetts General Hospital), a genomics researcher. “With respect to this capability, right now, we are like my second-grader who can pronounce the words but often doesn’t understand meaning. Hopefully, with new tools like the NIH’s Precision Medicine Initiative, we will develop into advanced genome readers who understand spelling, pronunciation, and clinical meaning.”
The new study, published in the Journal of the American Medical Association, looked for variations in two genes known to play a major role in two well-known, potentially lethal cardiac rhythm disorders, Brugada syndrome and long QT syndrome. The researchers sent the genetic sequences of 2,022 people without previously known arrhythmias to three separate laboratories for analysis. They then analyzed the participants’ electronic medical records, which included ECG tracings for most of the patients.
The labs found that 11% (223) of the participating subjects had rare gene variations; 63 of the subjects had variants that have been linked to disease. However, the people with these variants were no more likely than those without them to have any evidence of heart rhythm problems in their medical records or ECGs. In fact, a majority of the people with the genetic variations had no signs of a heart rhythm disorder. “We found a high frequency of potentially pathogenic mutations without a high burden of disease,” wrote the authors.
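To see how stark the mismatch is, it helps to put the paper’s headline numbers side by side. Here is a minimal back-of-the-envelope sketch in Python; the cohort and variant counts are from the study, while the 50% penetrance figure is purely hypothetical, used only to show what a meaningful disease burden would have looked like:

```python
# Cohort and variant counts are from the study; the penetrance figure
# below is purely hypothetical.
cohort = 2022        # subjects sequenced
rare_carriers = 223  # subjects with a rare variant in either gene
pathogenic = 63      # subjects with a variant previously linked to disease

print(f"rare-variant carriers: {rare_carriers / cohort:.1%}")  # ~11.0%
print(f"'pathogenic' carriers: {pathogenic / cohort:.1%}")     # ~3.1%

# If those 63 variants were even moderately penetrant -- say 50%,
# a hypothetical figure -- dozens of carriers should have shown disease:
hypothetical_penetrance = 0.50
print(f"expected affected carriers: {pathogenic * hypothetical_penetrance:.0f}")
# The medical records and ECGs showed no such excess.
```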
In an accompanying editorial, William Gregory Feero (Maine Dartmouth Family Medicine Residency) writes that “separating the genetic biomarker wheat from the chaff is an important and daunting task for the biomedical research community.” The study “calls into question how existing knowledge of genetic variants translates to predicting outcomes in unselected and ostensibly healthy individuals.”
An equally disturbing finding was the enormous discordance among the three laboratories in the study. In fact, there was very little overlap among the three labs, with each lab reporting genetic variants distinct from those of the others.
Responding to the finding of discordance, Kathiresan said that “there needs to be some serious work done by laboratories to standardize variant interpretation. It’s like the old days for cholesterol measurements prior to standardization by the CDC. But, this problem is 1000x worse.”
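The discordance itself is easy to state as a set problem: of the variants flagged by any lab, how many did all three labs agree on? The sketch below uses invented variant identifiers, not the study’s actual calls, simply to show how inter-lab concordance can be quantified.

```python
# Invented variant calls for three hypothetical labs; the identifiers
# are made up for illustration and are not the study's actual data.
lab_a = {"SCN5A:c.100C>T", "KCNH2:c.453G>A", "SCN5A:c.2066A>G"}
lab_b = {"SCN5A:c.100C>T", "KCNH2:c.1838C>T"}
lab_c = {"KCNH2:c.526C>T", "SCN5A:c.5350G>A"}

flagged_by_any = lab_a | lab_b | lab_c
flagged_by_all = lab_a & lab_b & lab_c

print(f"flagged by at least one lab: {len(flagged_by_any)}")  # 6
print(f"flagged by all three labs:   {len(flagged_by_all)}")  # 0
# The study found a similar pattern: each lab reported variants
# that the others did not.
```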
Ethical Implications
The study “provides de facto evidence of a high false-positive rate for variant interpretation,” writes Feero. This finding appears to conflict with current, well-established ethical standards, which require researchers to inform study participants of potentially harmful findings. In his editorial Feero writes that “returning results [to study subjects] seems problematic.” He recommends that “at a minimum, the language for describing variations’ predictive ability should be carefully calibrated to convey, when appropriate, a probabilistic, rather than deterministic, nature.” In other words, subjects should be told that troubling findings are, in fact, unlikely to cause any trouble.
Further, it is important to recognize that the kind of genetic findings involved in this study are exactly the sort of incidental findings certain to turn up in any broad genetic screening test, such as those offered by companies like 23andMe. Referring to the recent controversies over the regulation of these tests, Feero writes that the study “supports the FDA’s recent movements to more closely examine the accuracy and reproducibility of interpretation steps that occur after a variant is detected.”
Kathiresan expanded on the implications of the study: “This study offers one key insight: for even well-studied disease genes like the two ion channels in this report, we routinely overestimate penetrance, the proportion of genotype carriers who manifest the phenotype.
We overestimate penetrance because, for most of the last several decades, human genetics has taken a phenotype-first approach. We find people with an unusual or extreme phenotype and ask what genotype leads to this condition. However, such results do not imply that every time you see the given genotype, the phenotype will follow.
To get true estimates of penetrance, you need a genotype-first approach, i.e., find everyone with a given genotype and ask what are the phenotypic consequences. Thankfully, due to cheap genome sequencing now, we are in the midst of a revolution in medical genetics where we are moving from phenotype-first to genotype-first studies. The study by Roden and colleagues takes a genotype-first approach to two ion channel genes and finds that very few people with mutations labelled as ‘pathogenic’ by clinical diagnostic labs have any arrhythmia or ECG phenotype.
I see several research and clinical implications of this study:
- We need to develop a large matrix of genotype and phenotype in unselected individuals in order to accurately define penetrance. Investments such as the UK Biobank and the US Precision Medicine Initiative will help in this regard.
- We need to standardize criteria being used by clinical diagnostic labs to make ‘pathogenicity’ determinations.
- When we return genomic findings to patients, whether in the context of clinical or research sequencing, we need to inform patients/participants how little we truly know about the penetrance of mutations.”
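The ascertainment bias Kathiresan describes can be expressed with Bayes’ theorem. The following is a minimal sketch with all numbers invented for illustration (they are not estimates from the study): a variant carried by a large fraction of cases can still confer little absolute risk once you ask, genotype-first, how many carriers ever develop disease.

```python
# All numbers are invented for illustration; they are not estimates
# from the JAMA study.
p_disease = 1 / 2000             # population prevalence of the arrhythmia
p_variant = 0.01                 # population frequency of the variant
p_variant_given_disease = 0.40   # fraction of cases carrying the variant

# Bayes' theorem: penetrance = P(disease | variant)
penetrance = p_variant_given_disease * p_disease / p_variant
print(f"penetrance = {penetrance:.1%}")  # 2.0%

# Phenotype-first view: 40% of cases carry the variant, so it looks
# strongly "pathogenic". Genotype-first view: 98% of carriers never
# manifest the phenotype at all.
```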
For Kathiresan, the goal of precision medicine isn’t about providing immediate, actionable information to individual patients. “Precision medicine is less about using genetics to identify discrete patient subsets and more about identifying causal biological pathways that can be modified. For the latter goal, the magnitude of risk conferred by the variant is not important but rather that there is a robust relationship between genotype and phenotype.”
Comments
Why has it taken life scientists so long to understand that variation in observed apparent response has many possible causes (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC524113/) and that much of it may be signal rather than noise? I count it as a failure of my profession, statistics, that we have not succeeded in explaining the basic problem.
The recent FDA report, Paving the Way for Personalized Medicine (http://www.fda.gov/downloads/ScienceResearch/SpecialTopics/PersonalizedMedicine/UCM372421.pdf), illustrates the depth of the confusion. All the FDA can come up with is a single paper using an unclear methodology and based on the Physicians’ Desk Reference (of all things!) as a source of figures on variation in response. I discussed this parlous state of affairs in a recent blog post (http://errorstatistics.com/2014/07/26/s-senn-responder-despondency-myths-of-personalized-medicine-guest-post/) in which I attempted to show what would really be necessary to investigate this.
Yet the Agency is stuffed with able statisticians. Where are they? Asleep? Some years ago the FDA proposed that replicate bioequivalence studies (for example, four-period cross-over trials) were necessary to demonstrate individual bioequivalence. My view at the time was that this was a hurdle too far (http://www.thelancet.com/journals/lancet/article/PIIS0140-6736%2898%2985007-1/abstract), but at least the Agency had the technical theory right. The irony is that bioequivalence is the area, of all areas, in which you would least expect variation in individual response. Now it seems that the statisticians at the FDA have let the pharmacogeneticists turn their brains to mush.
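Senn’s point about replicate designs can be made concrete: only when the same subject receives the same treatment more than once can a genuine subject-by-treatment interaction (true individual response) be separated from within-subject noise. Below is a minimal simulation sketch of that logic; all variance components are invented for illustration and do not come from any trial.

```python
import random

random.seed(1)

# Minimal sketch of a four-period replicate cross-over: each subject
# yields two replicate (test - reference) differences. All variance
# components are invented for illustration.
n_subjects = 200
sd_interaction = 0.0  # true subject-by-formulation interaction (none here)
sd_within = 0.5       # within-subject measurement noise per period

diffs_1, diffs_2 = [], []
for _ in range(n_subjects):
    inter = random.gauss(0, sd_interaction)  # this subject's "individual response"
    # Each replicate difference = interaction + noise from two periods:
    diffs_1.append(inter + random.gauss(0, sd_within) - random.gauss(0, sd_within))
    diffs_2.append(inter + random.gauss(0, sd_within) - random.gauss(0, sd_within))

# If individual response were real, a subject's two replicate differences
# would be correlated; if apparent variation is just noise, they will not be.
m1 = sum(diffs_1) / n_subjects
m2 = sum(diffs_2) / n_subjects
cov = sum((a - m1) * (b - m2) for a, b in zip(diffs_1, diffs_2)) / n_subjects
var1 = sum((a - m1) ** 2 for a in diffs_1) / n_subjects
var2 = sum((b - m2) ** 2 for b in diffs_2) / n_subjects
print(f"replicate-difference correlation: {cov / (var1 * var2) ** 0.5:.2f}")
# Prints a value near 0; set sd_interaction > 0 to watch it rise.
```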
In the meantime one might like to ponder one of the great mysteries of prescribing medicines. Why does nobody ever think of using bathroom scales? I could weep.
Multimillion-rupee advertisements for genome testing have been appearing in Indian dailies over the past two years. If that trend is to be sustained, the companies must receive generous support from willing “worried well” patients. Venture capitalists, who are hard-nosed realists, are willing to park their funds in “genome testing” ventures. Maybe one should not worry much. Will the universal “genome testing” effort also suffer the fate of the “Fair and Lovely” campaign in India, as explained at: http://globalmarketingtoday.net/the-real-world/fair-lovely-advertising-in-india/
Are you disregarding epigenetics altogether? And, on the contrary…
http://individualizedmedicineblog.mayoclinic.org/discussion/delivering-on-the-promise-of-precision-medicine-2015-year-in-review/?linkId=20070391
I must have been having a senior moment. I meant to say
“and that much of it may be noise rather than signal”. As regards epigenetics, it seems clear that its study will produce lots of interesting science, but treatments? I sometimes put it this way: “You know the billions that you gave us to personalise medicine? We now realise that because of epigenetics it’s much more complicated. But if you give us trillions…”