Large, real-world study of pharmacogenetic testing to guide treatment shows it doesn't work
Pharmacogenetic testing means identifying which genetic variants a patient carries and then using that information to guide what drug treatment they should be prescribed. It is something one would think ought to be helpful. Variation in key genes can affect how a drug is metabolised or how well it acts on its molecular target. Within the field of cancer treatment there is clear evidence that genetic testing can be used to define different subtypes of cancer and that this can result in more effective treatment programmes being used. There is plenty of research to show that certain commonly occurring variants result in some drugs being metabolised more slowly, meaning that in somebody carrying one of these variants the drug may accumulate in higher concentrations, increasing the risk of side effects. Common sense would predict that testing patients and providing lower doses to such "slow metabolisers" should make them less likely to experience adverse effects, and a large trial called PREPARE, involving thousands of patients, was set up to investigate this.

The results of the study were published in the Lancet and were acclaimed as showing that pharmacogenetic testing in the real world did indeed confer clinical benefits. According to the lead investigator, the results of the trial were "fantastic" and it was time to implement routine pharmacogenomic testing in the NHS: "Carrying out pharmacogenomic testing before prescribing commonly prescribed drugs can slash adverse drug reactions by 30%." Unfortunately, a careful reading of the report draws one to the opposite conclusion. Pharmacogenetic testing as implemented seems to provide no benefit at all. In fact, it might even be harmful.
In order to understand the reasons for this gloomy assessment, it is helpful to understand a little about how the study was set up. It was carried out across a number of different treatment centres and involved a number of different combinations of disease, drug treatment and relevant genetic variants. It was an open-label, controlled, cluster-randomised crossover trial, which, broadly speaking, means that at any given time all patients attending a particular treatment centre either received pharmacogenetic testing prior to being prescribed medication or they did not. Importantly, the patients and the doctors treating them knew whether or not pharmacogenetic testing was being used to guide treatment. If a patient was found to carry a genetic variant implying that metabolism of the drug they were to be prescribed might be impaired, termed an "actionable result", then the doctor could use this information to prescribe a lower dose. In the control group, patients were tested for the same genetic variants later on, after the course of treatment. This allowed the researchers to identify everybody who had an actionable result, whether in the genotype-guided treatment group, where it could be acted upon, or in the control group.
The headline finding of the study, the basis for the claim that adverse drug reactions were slashed by 30%, was that among patients with an actionable result 231 of 833 (27.7%) in the control group reported a clinically relevant adverse drug reaction compared with only 152 of 725 (21.0%) in the genotype-guided treatment group. This is reported as a 30% reduction because the odds ratio comparing 0.210 with 0.277 is 0.7, although I think most people would say that the reduction is in fact 24%, which is (0.277-0.210)/0.277. The result is statistically significant, with p=0.0075, and at first sight seems to imply that by using pharmacogenetic testing to guide treatment one can adjust medication doses and reduce adverse effects. However, this simple conclusion does not fit with other findings.
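As a quick check on the arithmetic, both ways of expressing the reduction can be reproduced from the quoted counts (a minimal sketch; the variable names are mine):

```python
# Counts quoted in the paper for patients with an actionable result
# who reported a clinically relevant adverse drug reaction.
control_events, control_n = 231, 833   # 27.7%
guided_events, guided_n = 152, 725     # 21.0%

p_control = control_events / control_n
p_guided = guided_events / guided_n

# The "30% reduction" is an odds ratio: odds(guided) / odds(control).
odds_ratio = (p_guided / (1 - p_guided)) / (p_control / (1 - p_control))

# The relative risk reduction most readers would have in mind is smaller.
rrr = (p_control - p_guided) / p_control

print(f"odds ratio: {odds_ratio:.2f}")   # prints 0.69, reported as 0.7
print(f"relative reduction: {rrr:.1%}")  # prints 24.4%
```

The gap between the two figures is just the familiar fact that an odds ratio exaggerates a risk ratio when the outcome is common, as it is here.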
The key problem for the interpretation that pharmacogenetic testing reduced adverse effects is pointed out in an accompanying Comment piece: a reduction in adverse effects was reported in the tested group even for patients who did not carry a relevant variant and who therefore received normal doses of medication. Alongside the results reported for patients with an actionable result, the corresponding numbers for all patients in both groups are also presented: 934 of 3270 (28.6%) of all patients in the control group reported a clinically relevant adverse drug reaction compared with 628 of 2923 (21.5%) of all tested patients. From these results we can simply subtract the results for those with an actionable result to obtain the corresponding numbers for the patients who did not have an actionable result. Of patients without an actionable result, 703 of 2437 (28.8%) in the control group reported a clinically relevant adverse drug reaction compared with 476 of 2198 (21.7%) in the genotype-guided treatment group. This difference is highly statistically significant: chi-squared = 18.5, 1 df; p = 0.000017. What is going on here? Why is it that even when there is no actionable test result, and patients presumably receive a standard dose of medication, they nevertheless still report fewer adverse drug reactions?
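The subtraction is easy to reproduce from the counts reported in the paper (a sketch; variable names are mine):

```python
# All patients (events, total) as reported in the paper.
all_control, all_control_n = 934, 3270   # 28.6% with a relevant ADR
all_guided, all_guided_n = 628, 2923     # 21.5%

# Patients with an actionable result.
act_control, act_control_n = 231, 833
act_guided, act_guided_n = 152, 725

# Subtracting leaves the patients without an actionable result.
no_act_control = all_control - act_control        # 703
no_act_control_n = all_control_n - act_control_n  # 2437
no_act_guided = all_guided - act_guided           # 476
no_act_guided_n = all_guided_n - act_guided_n     # 2198

print(f"control: {no_act_control}/{no_act_control_n} "
      f"= {no_act_control / no_act_control_n:.1%}")  # 28.8%
print(f"guided:  {no_act_guided}/{no_act_guided_n} "
      f"= {no_act_guided / no_act_guided_n:.1%}")    # 21.7%
```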
It is not at all difficult to think of plausible explanations as to why there might be fewer reports of adverse events in the tested group. Recall that the study is open-label, so that patients and their doctors both know what group they have been assigned to. Patients have given informed consent, meaning that the study has been fully explained to them. Thus patients in the tested group know that the dose of medication they receive may have been adjusted to take account of their genetic vulnerability to side effects. Adverse effects were not systematically assessed but depended on patients reporting that they had side effects and then on their doctors judging whether these side effects were "clinically relevant". It is easy to see that a patient who thought that their dose of medication had potentially been adjusted to match their genotype might be less likely to report a physical symptom as a possible side effect, and likewise that their doctor might be less likely to ascribe any reported side effect to the prescribed treatment. An additional possibility is that doctors in the centres using genetic testing might have made generic changes to their prescribing practice, which we could loosely describe as "more careful prescribing", and that these could have resulted in a genuine reduction in adverse effects across the board, irrespective of any results of genetic testing. All these kinds of bias are very familiar in the context of open-label studies and could easily account for the fact that fewer adverse effects were reported by patients attending centres where genotype-informed treatments were prescribed.
However, if we consider the magnitude of this bias then things become even more concerning. As I point out in my letter in the Lancet in response to this study, it is not simply that there was a reduction in adverse effects for patients who were tested but who did not have an actionable result. The key issue is that the reduction associated with testing was just as great in these patients as in those who did have an actionable result. The biases described above would be equally applicable to those with an actionable result (arguably more so), and so the fact that the reduction in side effects due to testing is equal whether one has an actionable result or not suggests that the test result itself is completely irrelevant. Obtaining a genetic test result indicating that drug dosage should be reduced appears to have no effect at all on the risk of experiencing adverse effects.
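The point can be checked directly from the counts quoted earlier: the relative reduction associated with attending a tested centre comes out essentially the same in the two strata (a sketch; variable names are mine):

```python
# Relative reduction in reported ADRs associated with attending a
# tested centre, computed separately within each stratum.
with_result = 1 - (152 / 725) / (231 / 833)       # actionable result
without_result = 1 - (476 / 2198) / (703 / 2437)  # no actionable result

print(f"with an actionable result:    {with_result:.1%}")     # 24.4%
print(f"without an actionable result: {without_result:.1%}")  # 24.9%
```

If anything, the reduction is marginally larger in the patients whose test result could not have influenced their dose at all.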
Another line of evidence undermining the utility of pharmacogenetic testing comes from inspecting the rates of adverse effects among patients in the control group, who were tested after their course of treatment to see whether they would have had an actionable result. What we see from the figures above is that 27.7% of those with an actionable result reported a clinically relevant adverse drug reaction compared with 28.8% of those without an actionable result. But the whole rationale for pharmacogenetic testing is that those with an actionable test result are at higher risk of adverse drug reactions. Here we see that there is no association at all between test results and risk.
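Again using the figures above, the within-control comparison is a one-liner (a sketch):

```python
# Control group only: does carrying an "actionable" variant predict a
# higher rate of reported adverse drug reactions, as the rationale for
# testing would require?
rate_actionable = 231 / 833      # actionable result: 27.7%
rate_no_actionable = 703 / 2437  # no actionable result: 28.8%

print(f"actionable:    {rate_actionable:.1%}")
print(f"no actionable: {rate_no_actionable:.1%}")
# The supposedly at-risk group in fact reports marginally FEWER ADRs.
```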
I think an objective assessment of the results of this fairly large study is that it shows that pharmacogenetic testing does not identify patients at higher risk of adverse effects and that using it to guide prescribing does not result in a lower incidence of adverse effects.
Why do I say that such testing might even be harmful? Consider that in this study pharmacogenetic testing was only used in one way: if patients were judged to be at higher risk of adverse effects then they might be offered lower doses of medication. No test result was interpreted as meaning that a patient might need an increased dose of medication. Likewise, the only assessment of outcome was whether or not the patient reported adverse effects. But patients were prescribed medication for a reason. For example, a number of patients were prescribed statins, and statins are prescribed to reduce the risk of heart attacks. The standard dose of a statin has been chosen to have a clinical benefit in terms of reducing heart attacks while also being unlikely to lead to adverse effects. If there is an average reduction in the prescribed doses of statins then the expectation is that over time this would lead to an increased incidence of heart attacks. The effect might be small, or outweighed by other factors, but I maintain that one cannot judge the overall utility of pharmacogenetic testing without considering all outcomes, not just reported adverse effects.
So to conclude, I think this study does not demonstrate a benefit of using pharmacogenetic tests to guide treatment. In the light of the issues I have noted, I would suggest that any future study should have a number of features. It should focus on a single medication where genetic variation has been shown to have a substantial effect on outcome. Patients and assessing clinicians should be blinded as to whether pharmacogenetic testing has been used to guide treatment. Overall outcome should be assessed, including both adverse effects and also some measure of clinical efficacy.