He said, she said, the data said: Coming to consensus on genetic testing
As genetic tests become more widely available and widely adopted, clinical genetics is facing a crisis of consensus. But it doesn’t have to be this way.

A doctor logs in to her computer and pulls up the results of a genetic test she ordered for a patient. Scanning the report, she sees that the test turned up something: a variant in one gene associated with a rare form of dementia. Good news, though: a note in the report flags the variant as benign.
Across town, another doctor reads a report from another lab about a different patient. Same gene variant, same association. But this report calls the variant “likely pathogenic.”
Which one is right?
This is not just an academic question. The number of recognized DNA variants (e.g., single base changes or polymorphisms, deletions) is growing at an extraordinary rate, in part because DNA sequencing and analysis technologies keep getting cheaper, faster, and more accessible. Commercial and medical laboratories are rolling out new genetic tests based on those variants on what seems like a daily basis.
Lagging behind, however, have been consistent standards for interpreting what those variants and tests mean for a patient’s health.
“There are more than 80 million observed variants in the human genome so far,” said Heidi Rehm, medical director of the Broad Institute’s Clinical Research Sequencing Platform and director of the Laboratory for Molecular Medicine at Partners HealthCare. “For only a very few do we have an interpretation or an understanding. And by some measures, testing laboratories disagree in their interpretation of a variant 17 percent of the time.”
Clinical geneticists like Rehm are wrestling with questions about how to build consensus on how they, as a community, should read the tea leaves of DNA. Bolstering those efforts are a set of online resources where geneticists can share the data behind their interpretations.
Heidi Rehm
Clinical genetic tests generally fall into a few categories.
Every time a test reveals a variant, a geneticist then has to make the call: Is that variant a problem? Could it be causing this patient’s symptoms? Does it raise that patient’s risk of disease? Will that family likely have a child with a condition?
The issue (particularly for monogenic disease tests) is that every study reveals new variants: some widespread, some specific to certain groups or populations, some unique to the individual. Rehm notes that 67 percent of the variants her lab has reported as clinically significant or possibly clinically significant have only ever shown up in a single person. Plus, by some estimates, more than half a million of the variants in any one person’s genome will be rare or even unique.
“Every person has variants unique to them,” she said. “Just because a variant is rare doesn’t mean it’s pathogenic.”
Because most variants are rare, Rehm adds, often there is very little evidence on which to build an interpretation. Plus, the standards for evaluating that evidence keep evolving.
“What we thought were good standards 10 years ago aren’t necessarily good today, because we’ve learned a lot more,” she said. “It used to be that peer-reviewed literature was the gold standard. But now we recognize that just because an interpretation was published in a paper doesn’t make it correct.”
Rehm describes the consensus conundrum at a Broad Institute "Science for All Seasons" event in 2016.
The American College of Medical Genetics and Genomics (ACMG) and the Association for Molecular Pathology (AMP) have taken leading roles in bringing the clinical genetics community together to create standards for interpreting monogenic disease variants. One set of standards is semantic, establishing the nomenclature for describing a variant’s clinical significance. The other is hierarchical: what level of evidence is needed to assign a given level of impact to a variant?
In 2015, Rehm and eleven of her colleagues began addressing these questions, proposing a set of ACMG/AMP guidelines for variant nomenclature (pathogenic, likely pathogenic, benign, likely benign, uncertain significance) and tiers of evidence (supporting, moderate, strong, etc.) based on the kinds of data (e.g., population data, functional data, allelic data) used to make interpretations.
“There’s definitely different types and weights of evidence,” Rehm explained. “If a variant has been observed to segregate with disease in a large pedigree and has evidence to show disrupted function, for instance, that’s much stronger evidence for calling that variant pathogenic than a prediction made by a computer algorithm.”
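To make that structure concrete, here is a minimal sketch in Python of how tiered evidence might roll up into one of the five classification terms. The evidence items, strength assignments, and combining rules below are simplified illustrations built around Rehm's example; they are not the actual ACMG/AMP criteria.

```python
from enum import Enum

class Strength(Enum):
    SUPPORTING = 1
    MODERATE = 2
    STRONG = 3

# Hypothetical evidence for a single variant, echoing Rehm's example.
# Each tuple is (description, direction, strength); the strength
# assignments are assumptions made for illustration only.
evidence = [
    ("segregates with disease in a large pedigree", "pathogenic", Strength.STRONG),
    ("functional assay shows disrupted function",   "pathogenic", Strength.STRONG),
    ("computational algorithm predicts damage",     "pathogenic", Strength.SUPPORTING),
]

def classify(evidence):
    """Toy roll-up of tiered evidence -- a simplification, not the real ACMG/AMP combining rules."""
    def count(direction, *strengths):
        return sum(1 for _, d, s in evidence if d == direction and s in strengths)

    strong_path = count("pathogenic", Strength.STRONG)
    other_path = count("pathogenic", Strength.MODERATE, Strength.SUPPORTING)
    strong_benign = count("benign", Strength.STRONG)

    if strong_path >= 2:
        return "pathogenic"
    if strong_path == 1 and other_path >= 1:
        return "likely pathogenic"
    if strong_benign >= 2:
        return "benign"
    if strong_benign == 1:
        return "likely benign"
    return "uncertain significance"

print(classify(evidence))  # -> pathogenic
```

Under this toy rule set, the pedigree and functional evidence together push the call to "pathogenic," while a computational prediction on its own would leave the variant at "uncertain significance," mirroring the weighting Rehm describes.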
Rehm offers one overarching answer to whittling down that 17 percent disagreement rate in labs’ interpretations: more data sharing. A lot more data sharing.
“There are too many variants for every laboratory to see every variant in every gene,” she said. “If we just shared what we’re seeing and the evidence we’re analyzing, it would benefit everyone.”
Open sharing would also address one of the major barriers in consensus building: knowledge expands. “One lab might call a variant uncertain based on data available when it last saw a patient with that variant ten years ago, while another might come to a different conclusion because it has more recent data,” Rehm explained.
She advocates for sharing variant and gene interpretations, and the data behind them, through two resources: ClinVar, a variant database run by the National Center for Biotechnology Information; and ClinGen, a National Institutes of Health-funded resource for creating genomic knowledge bases to support research and precision medicine.
“ClinVar is about having each lab sharing its variant interpretations and, in a sense, crowdsourcing the workload,” she explained. “And by everyone submitting to the same place, we can identify differences and facilitate resolving those differences.”
ClinGen, on the other hand, is built around domain-specific expert working groups, building off the ACMG guidelines and applying them to individual disease areas using an expert consensus mechanism.
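As a rough illustration of the discrepancy-spotting role Rehm describes for ClinVar, the short Python sketch below groups hypothetical lab submissions by variant and flags any variant with more than one distinct interpretation. The records and identifiers are invented for this example and do not reflect ClinVar's actual data model or submission interface.

```python
from collections import defaultdict

# Invented submissions for illustration: (variant_id, lab, interpretation).
submissions = [
    ("GENE1 c.100A>G", "Lab A", "benign"),
    ("GENE1 c.100A>G", "Lab B", "likely pathogenic"),
    ("GENE1 c.350del", "Lab A", "pathogenic"),
    ("GENE1 c.350del", "Lab C", "pathogenic"),
]

def find_conflicts(submissions):
    """Group submissions by variant and return those with more than one distinct call."""
    calls = defaultdict(dict)
    for variant, lab, interpretation in submissions:
        calls[variant][lab] = interpretation
    return {v: labs for v, labs in calls.items() if len(set(labs.values())) > 1}

for variant, labs in find_conflicts(submissions).items():
    print(variant, labs)
# -> GENE1 c.100A>G {'Lab A': 'benign', 'Lab B': 'likely pathogenic'}
```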
Sharing helps, but resolving discrepancies can still be labor intensive. “It takes time and work,” Rehm said.
A screenshot of ClinVar
But it’s worth it. Rehm and colleagues from nine sites in the Clinical Sequencing Exploratory Research (CSER) consortium recently took the ACMG-AMP guidelines for a dry run, applying them to 99 variants spanning the pathogenic spectrum. Initially, only 34 percent of the labs’ interpretations agreed. Further discussion revealed that the labs were not all applying the guidelines’ criteria in the same way, and highlighted opportunities to make the guidelines clearer. In the end, the group raised its agreement rate to 79 percent.
Rehm also cites work involving her lab and three other clinical testing laboratories to resolve differing interpretations of more than 200 possibly disease-causing variants. Because the four labs openly shared the data and rationale behind their interpretations, they have so far been able to come to agreement on 86 percent of them. And there are many, many more variants to go.
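For readers curious how an inter-laboratory agreement rate like the 34, 79, or 86 percent figures above might be calculated, here is a minimal Python sketch. The classifications are invented stand-ins, not the studies' data; the rate is simply the fraction of variants on which every lab made the same call.

```python
# Invented classifications: variant -> one call per lab (not real study data).
classifications = {
    "variant_1": ["pathogenic", "pathogenic", "pathogenic"],
    "variant_2": ["likely pathogenic", "uncertain significance", "likely pathogenic"],
    "variant_3": ["benign", "benign", "benign"],
}

def agreement_rate(classifications):
    """Fraction of variants for which every lab made the same call."""
    concordant = sum(1 for calls in classifications.values() if len(set(calls)) == 1)
    return concordant / len(classifications)

print(f"{agreement_rate(classifications):.0%}")  # -> 67%
```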
“Improving consensus takes a lot of data sharing, and it’s the patients who benefit from that,” she said.
What is clear is that the field of clinical genetics as a whole is working hard to build standards of evidence and to resolve differences in interpretation across laboratories. Rehm thinks there are several lessons to be learned from the efforts she and her colleagues have seen and taken part in thus far.
Papers cited:
Kohane IS, Hsing M, Kong SW. Taxonomizing, sizing, and overcoming the incidentalome. Genetics in Medicine. DOI: 10.1038/gim.2011.68
Richards S, Aziz N, et al. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genetics in Medicine. DOI: 10.1038/gim.2015.30
Green RC, Goddard KAB, et al. Clinical Sequencing Exploratory Research Consortium: Accelerating evidence-based practice of genomic medicine. American Journal of Human Genetics. DOI: 10.1016/j.ajhg.2016.04.011
Amendola LM, Jarvik GP, et al. Performance of ACMG-AMP variant-interpretation guidelines among nine laboratories in the Clinical Sequencing Exploratory Research Consortium. American Journal of Human Genetics. DOI: 10.1016/j.ajhg.2016.03.024