New Questions Raised About SPRINT

More questions are being raised about SPRINT, the enormous NIH-funded blood-pressure-lowering trial. Two recent developments will likely add more obstacles to the already difficult task of applying the results of the trial in the real world.

Even before the full results of the trial were first made public, the NIH and the SPRINT investigators vigorously promoted the idea that the trial offered powerful evidence that systolic blood pressure targets should be lowered for many patients from 140 mm Hg to 120 mm Hg. But that idea has come under attack from critics of the trial, who have argued that weaknesses in the trial design and conduct limit its ability to generate practical information that can be applied to guidelines and recommendations.

Blood Pressure Measurements

One of the recurrent issues regarding SPRINT involves the techniques used to measure blood pressure in the trial. SPRINT employed a “research grade” measurement technique in which multiple readings from an expensive automated blood pressure monitor were obtained. This practice is almost never followed in routine clinical practice. (As I reported last fall, however, there are important lingering questions about the precise details of the actual BP measuring techniques used in the trial.)

Now a new study offers fresh evidence that there is a large difference between the BP measuring technique used in SPRINT and the techniques used in routine clinical practice. In a paper published in the Journal of the American Heart Association, a VA researcher, Rajiv Agarwal, compared unattended "research grade" measurements obtained after a 5-minute rest, similar to the method used in SPRINT, to a more typical attended measurement obtained without a prior rest period. He studied 275 patients with chronic kidney disease.

The most important finding was that the research-grade measurements were, on average, 12.7 mm Hg lower than the routine measurements. But, Agarwal cautioned, this does not mean that clinicians can simply adjust their readings accordingly, since individual patients' research-grade measurements ranged from 46.1 mm Hg lower to 20.7 mm Hg higher than their routine measurements.

“Taken together, these findings suggest that if the SPRINT findings were to be translated to clinic practice, the first step would be to measure the clinic BP as was done in the trial,” the paper concludes. “Without these research-grade measurements, the likelihood of harm (or lack of benefit) may be real.”

Hypertension experts I consulted agreed that the paper spotlights the problem of trying to apply the SPRINT results to clinical practice. The SPRINT method is “neither practical nor feasible, simply because the procedure is time and office space consuming and requires an automated monitor,” said Franz Messerli (Mt. Sinai). “Any attempt of an extrapolation from office BP to SPRINT BP by subtracting a fixed number of mm Hg is not acceptable in a given patient.”

Similarly, Sripal Bangalore (New York University) said that “if the results of SPRINT are to be replicated in real-world practice, the BP measurement should be similar to that done in the trial. Otherwise we will end up with over-treatment and significantly increasing the risk of serious adverse effects and less benefit on CV outcomes.” Bangalore also said that the persistent confusion about how exactly blood pressure was measured in SPRINT needs to be resolved. “The NHLBI and SPRINT investigators need to send a clear message on how BP was measured” in the trial, he said.

George Bakris (University of Chicago) said that we "need a unified way to measure BP in the community that mimics the trial."

Contest Participants Uncover Flaw in Published SPRINT Data

In a separate development, a potentially important flaw in the SPRINT dataset has been discovered by independent researchers participating in a contest designed to demonstrate the benefits of sharing clinical trial data. The contest, the SPRINT Data Analysis Challenge, is being run by the New England Journal of Medicine. Though still underway, it has already produced a novel and potentially important finding: participants discovered that the SPRINT researchers had miscalculated the cardiovascular risk score for patients in the trial.

Here is how the problem was explained to contest participants in an email message from the organizers of the contest.

“…two groups have reported — and it has been confirmed — that the Framingham Risk Scores in the dataset are incorrect. If you choose to use that variable in your analysis, please re-calculate the scores. An explanation from the investigators follows:

“The Framingham 10-year cardiovascular disease risk score % (variable RISK10YRS) was calculated based on D’Agostino et al, Circulation, 2008. The equation appears to have originally been calculated with the coefficients for treated systolic blood pressure and untreated systolic blood pressure reversed. Thus, participants with treated blood pressure may have been assigned the equation coefficient for untreated blood pressure and vice versa.

“It is important to note that eligibility for SPRINT was determined based on calculated risk at the screening visit and not estimated risk at the baseline/randomization visit. Therefore, calculated risk at baseline may appear lower for some of those whose eligibility was not based on age (75+ years), prevalent CKD, or prevalent clinical/subclinical CVD. In addition to numerical changes in blood pressure between the screening visit and baseline visit, the Framingham general CVD risk equation was developed in whites only, and the screening risk tool made further adjustments based on race by adding risk for African Americans and reducing risk for Asian Americans.”
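The kind of coefficient swap described above can be illustrated with a short sketch. The coefficients below are the men's values from the D'Agostino et al. 2008 general-CVD equation cited in the quote; the example patient is invented for illustration, and this is of course not the SPRINT investigators' actual code.

```python
import math

# Men's coefficients from the Framingham general CVD risk equation
# (D'Agostino et al., Circulation, 2008). Patient values below are illustrative.
COEF = {
    "ln_age": 3.06117,
    "ln_total_chol": 1.12370,
    "ln_hdl": -0.93263,
    "ln_sbp_untreated": 1.93303,
    "ln_sbp_treated": 1.99881,
    "smoker": 0.65451,
    "diabetes": 0.57367,
}
BASELINE_SURVIVAL = 0.88936   # 10-year baseline survival, men
MEAN_LINEAR_TERM = 23.9802    # mean of beta*X in the derivation cohort, men

def risk10(age, total_chol, hdl, sbp, treated, smoker, diabetes,
           swap_sbp_coefficients=False):
    """10-year CVD risk (men's equation). With swap_sbp_coefficients=True,
    the treated/untreated SBP coefficients are reversed, mimicking the
    error described in the SPRINT dataset."""
    treated_coef = COEF["ln_sbp_treated"]
    untreated_coef = COEF["ln_sbp_untreated"]
    if swap_sbp_coefficients:
        treated_coef, untreated_coef = untreated_coef, treated_coef
    sbp_coef = treated_coef if treated else untreated_coef
    linear = (COEF["ln_age"] * math.log(age)
              + COEF["ln_total_chol"] * math.log(total_chol)
              + COEF["ln_hdl"] * math.log(hdl)
              + sbp_coef * math.log(sbp)
              + COEF["smoker"] * smoker
              + COEF["diabetes"] * diabetes)
    return 1.0 - BASELINE_SURVIVAL ** math.exp(linear - MEAN_LINEAR_TERM)

# Hypothetical 60-year-old man on BP treatment (SBP 140, TC 200, HDL 45):
correct = risk10(60, 200, 45, 140, treated=True, smoker=0, diabetes=0)
swapped = risk10(60, 200, 45, 140, treated=True, smoker=0, diabetes=0,
                 swap_sbp_coefficients=True)
print(f"correct: {correct:.3f}, swapped: {swapped:.3f}")
```

Because the treated-SBP coefficient is slightly larger than the untreated one, reversing them understates the calculated risk for patients on therapy and overstates it for untreated patients, consistent with the investigators' note that calculated risk at baseline "may appear lower" for some participants.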

Harlan Krumholz (Yale University) led one of the groups that discovered the problem. “This discovery highlights the value of an open science approach,” said Krumholz. “Soon after receiving the dataset, our group identified this error. It seems it was a coding error and now can be corrected going forward. This is crowdsourcing the quality of the data. And this should not be understood as diminishing SPRINT’s investigators. No set of investigators can always do perfect work – the SPRINT investigators represent an exceptionally skilled and accomplished group of investigators. This is further validation of the utility of having fresh eyes looking at the patient-level data.”

According to the experts I have consulted, it is as yet unclear whether this mistake will have an important effect on the overall interpretation of SPRINT.
