Looking at the Centers for Medicare and Medicaid Services Research Designs in a New Context

Published: May 30, 2017
Publisher: Health Affairs Blog
Authors

Thomas W. Grannemann

Randall S. Brown

It’s time to take a fresh look at how the Centers for Medicare and Medicaid Services (CMS) designs its initiatives to test new models of provider payment and care delivery. As Tim Gronniger and colleagues recently highlighted in the Health Affairs Blog, the new administration faces important choices about whether to impose requirements on providers that would support more rigorous and informative evaluations of new models.

With the recent widespread implementation of alternative payment models (APMs), strong designs are needed more than ever to provide evidence for policy decisions about expanding, modifying, or terminating Center for Medicare and Medicaid Innovation (Innovation Center) initiatives. However, decisions to pursue designs with mandatory participation or random assignment can be difficult when providers resist participating in studies whose requirements, financial incentives, and risks are not fully known in advance. The good news is that the tradeoff between accommodating provider interests and preserving CMS’ ability to identify worthy innovations through strong research designs may not be as stark as some have previously assumed.

In a Health Services Research article recently made available online, we argue that designs of CMS payment initiatives must effectively accommodate the changing payment and delivery system environment. Accordingly, we advocate for use of factorial experiments (randomized designs that test multiple versions of a model simultaneously) as the best prospect for producing definitive evidence on future APMs. This approach stands in marked contrast to that of William Shrank, Robert Saunders, and Mark McClellan, who have endorsed continuing to base policy decisions on a mix of quantitative analysis comparing similar populations, qualitative analysis of other populations, and other contextual evidence. They largely dismiss randomized designs for CMS, saying “momentum and timelines would be lost with too much focus on experimental design and … traditional rigorous evaluation methods.” Both papers share the objectives of improved evidence and accelerated learning, but we reach different conclusions about what methods will produce the best and quickest evidence to guide CMS policy in the years ahead.
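To make the idea of a factorial experiment more concrete, the sketch below (not drawn from the article) shows one way providers might be randomized across two model features at once, so that all combinations of those features are tested simultaneously. The factor names, levels, and function here are purely illustrative assumptions, not actual CMS model parameters.

```python
import random

# Illustrative 2x2 factorial assignment: each participating provider is
# randomized independently on two hypothetical model features, so all four
# feature combinations ("versions" of the model) are tested at the same time.
FACTORS = {
    "shared_savings_rate": ["standard", "enhanced"],   # hypothetical factor A
    "care_management_fee": ["none", "per_member"],     # hypothetical factor B
}

def assign_factorial(provider_ids, seed=42):
    """Return a dict mapping each provider to one cell of the 2x2 design."""
    rng = random.Random(seed)
    return {
        pid: {factor: rng.choice(levels) for factor, levels in FACTORS.items()}
        for pid in provider_ids
    }

if __name__ == "__main__":
    demo = assign_factorial([f"provider_{i}" for i in range(8)])
    for pid, cell in demo.items():
        print(pid, cell)
```

Because every combination of features is assigned at random, such a design can estimate the effect of each feature, and of their interaction, within a single initiative rather than across separate demonstrations.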
