Saturday, November 27, 2010

Comparative Effectiveness Research

A recent JAMA article offers a status discussion of CER, CCE, or whatever they call it now. As the authors state:

According to the act, the purpose of the institute is "to assist patients, clinicians, purchasers, and policy-makers in making informed health decisions." Institute staff are to apply research and evidence to improve methods for preventing, diagnosing, treating, monitoring, and managing health conditions. The institute is also charged with disseminating research findings on health outcomes, clinical effectiveness, and the appropriateness of medical treatments, services, and items.

To achieve these goals, the institute will create a standing methodology committee to develop and update scientifically based standards for research conducted through the institute. In addition, the institute will ensure peer review and make research findings publicly available within 90 days. The institute will also allow for public comment periods prior to such actions as the adoption of national priorities, the research project agenda, the methodological standards, and the peer review process, and after the release of draft findings of reviews of existing research and evidence.

This is a difficult if not impossible task. Recall that medicine is based in science but is determined clinically. Further, clinical progress is made by trials, and as we learn more we do more trials. Thus the classic example, which I have used again and again, is the use of PSA and subsequent biopsy procedures. We know that PSA has positives and negatives, and furthermore that the biopsy itself is prone to errors. For example, the figure below depicts the probability of not seeing a cancer in the prostate, where one exists, depending on the size of the prostate and the number of cores used to sample it.

[Figure: probability that a biopsy misses an existing prostate cancer, as a function of prostate size and number of biopsy cores]

This shows that despite using the gold-standard test of biopsy, we can all too often fail to conclude there is a cancer when indeed there is. If it is an aggressive type, then more than likely it will metastasize before a subsequent biopsy, and the result or outcome is death. How do we treat this?
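
The qualitative point behind the figure can be sketched with a toy model. The sketch below is my own crude independence model, not the analysis behind the figure: assume each of n cores independently hits an existing tumor with some per-core probability p (lower for larger glands), so the chance that the entire biopsy misses the cancer is (1 - p)^n. The per-core probabilities used here are hypothetical.

# A toy independence model, not the model behind the figure: each of n biopsy
# cores is assumed to detect an existing tumor with probability p, where p is
# lower for larger prostates (more tissue left unsampled). The whole biopsy
# misses the cancer only if every core misses it.

def miss_probability(per_core_hit: float, n_cores: int) -> float:
    """Probability that all n cores miss the tumor, given a per-core hit probability."""
    return (1.0 - per_core_hit) ** n_cores

if __name__ == "__main__":
    # Hypothetical per-core hit probabilities for small, medium, and large glands.
    for label, p in (("small gland", 0.20), ("medium gland", 0.10), ("large gland", 0.05)):
        for n in (6, 12, 18):
            print(f"{label:>12}, {n:>2} cores: P(miss) = {miss_probability(p, n):.2f}")

Even this simple calculation reproduces the direction of the figure: more cores drive the miss probability down, while a larger gland (a lower per-core hit probability) drives it up.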

They continue:

As clear as these expectations are, the institute must still navigate political waters roiled with charges that comparative effectiveness research is a tool for the continued takeover of health care by the federal government and a way to justify health care rationing. Such attitudes among certain members of Congress resulted in the passage of a law that severely restricts one of the perceived benefits of comparative effectiveness research: the possibility of cost savings in health care.

Savings would presumably follow from the identification of proven therapies that are less expensive than those in common use, although such research could also identify more effective treatments that are more expensive. The law stipulates that the institute "shall not develop or employ a dollars-per-quality adjusted life year (or similar measure that discounts the value of a life because of an individual's disability) as a threshold to establish what type of health care is cost effective or recommended."

With such restrictions, little wonder that researchers and economists are looking at other ways to bring cost into the comparative effectiveness research conversation. Some of those approaches are discussed in Health Affairs, which dedicated its October issue to a variety of articles exploring issues surrounding comparative effectiveness research.

Not only is cost a factor, but the Government entity will control how the physician practices. The issue is less rationing than whether each patient is treated the same or as an individual. Medicine requires the physician to treat each patient as an individual. The CER approach diverges from this, namely to develop and prescribe common treatments, independent of the patient.

They continue:

"According to the letter of the law, the institute could actually commission cost-effective analysis where you use quality-adjusted life-years," said Sox in an interview. "What it forbids is using that information to set a threshold below which something is cost-effective and above which it is not cost-effective."

Sox said the law's ambiguity regarding cost considerations reflects the debate seen across the nation.

"Comparative effectiveness research has gotten so much support because the cost of health care is affecting the country's economy, and yet there is a peculiar attitude from the federal government about considering cost-effectiveness," Sox said. "People are concerned about having the government say that it will pay for this and not pay for that. Patients and doctors do not want to have their hands tied by government rules about which health resources would be paid for under Medicare."

But Sox and Garber, after discussing the loophole allowing for cost analysis, do not call for the institute to commission such study. "Doing so might lead to the appearance, if not the reality, that the institute was attempting to define care standards for federal health insurance programs in the United States, which the Affordable Care Act discouraged," they wrote.

Instead, they recommend that the institute insist that studies it sponsors provide enough information to enable others to perform the analyses. That would allow analysts who are free of sanctions to develop cost-effectiveness information.

"It seems like a pretty obvious idea that research sponsors would require authors to gather data that can be useful to people, but not necessarily those doing the original research," Sox said. "The [National Institutes of Health has been requiring authors to get cost data on randomized clinical trials it sponsors, and the trialists may not be the ones who use that information."

In short, they are back-dooring the cost analysis via the data.