After APPAM: Four Ideas About What Comes Next

Researchers from across the country gathered last week at one of the most important annual events in the research and policy communities—the fall conference of the Association for Public Policy Analysis and Management (APPAM). With a theme of “The Role of Research in Making Government More Effective,” this year’s conference featured Mathematica Policy Research experts in numerous panels and presentations covering key policy areas such as disability, education, employment, environment, family support, health, and international policy.

In the following four posts, Mathematica researchers reflect upon the issues and themes they encountered at APPAM 2016—and offer their thoughts on the evolving role of research in improving public well-being.

Continuing Conversations About What Works and Why

Jeanne Bellotti

I started at Mathematica in 1996 as a wide-eyed 22-year-old with an idealistic view about contributing to research that could make a true impact on public policy. There is no denying that, over the past 20 years, research evidence has played an increasing role in the policy discussion. As Ron Haskins, this year’s APPAM president, emphasized in his address, this is the golden age of evidence-based decision making. But there is still a long road ahead in understanding what works and why.

As Haskins noted, 80 to 90 percent of public and private research studies show null results. For both policymakers and researchers, the trick is to ensure that a lack of significant findings does not mean the end of the discussion.

A lack of significant findings does not mean the end of the discussion.

I organized a panel this year on employment and training programs for formerly incarcerated individuals. Two of the panelists presented results from studies of re-entry programs that showed little or no impact on employment and earnings outcomes. These programs are tackling complex challenges that can’t be solved with simple solutions. It’s incumbent upon both policymakers and researchers to ask more questions: Why didn’t the initiative work? Were services implemented as intended? Did the programs serve the right people? Was there a uniform model of services across sites? Did participants receive sufficient services for enough time? What other factors might have influenced the research results? How can the initiative be tweaked or adapted to improve the chances of influencing participants’ lives?

I was really pleased to see at this year’s conference a wide range of qualitative research (including work featured in this panel) aimed at answering just these types of questions. These qualitative studies not only document how public resources are being used and help policymakers and practitioners learn from the experiences of others, but also help set the stage for determining when an initiative is ready for impact evaluation and for identifying factors that may influence the impact results.

I was fortunate to present findings from our evaluation of Linking to Employment Activities Pre-Release (LEAP), a series of grants awarded by the U.S. Department of Labor to establish American Job Centers (AJCs) within local jails. These grants aim to fill a gap identified in prior research by providing workforce services before release, along with a direct “handoff” of participants to the community workforce system after release to ensure a continuity of care during the critical reentry period. The LEAP evaluation has documented early lessons in bringing together the corrections and workforce development systems to design and operationalize these jail-based AJCs. Now that this first generation of jail-based AJCs is off the ground, we are looking forward to learning more from AJC staff and participants as services unfold.

Hopefully, we’ll be able to talk more about this important project at the APPAM 2017 research conference—and it can serve as an example of how continuing conversations between researchers and policymakers can help strengthen programs that improve people’s lives.

Don’t Forget About the Voters

Hanley Chiang

I heard many excellent presentations at APPAM, but one comment by a presenter really stuck with me and, I believe, lays down a major challenge for policy researchers. At a panel session devoted to the effects of school choice policies, Sarah Cohodes of Columbia University presented new, rigorous evidence showing that operators of successful charter schools in Boston were able to start new schools, modeled on the original ones, that generated large gains in student achievement—in fact, gains just as large as those of the original schools. Yet, when Cohodes was asked how evidence like this was shaping debates over this November’s ballot question in Massachusetts to lift the state’s cap on the number of charter schools (this panel took place five days before the vote), she observed that evidence was largely tangential to the debate. Instead, arguments unrelated to evidence have been shaping public opinion on the ballot question.

Voters need to know the relevant evidence to make informed choices.

Over the past seven years, researchers have accumulated compelling evidence that urban charter schools in Massachusetts enable their students, who are predominantly from disadvantaged families, to make larger gains in achievement and be better prepared for college than similar students in other public schools. Beyond Massachusetts, Mathematica has produced a wealth of rigorous research on charter schools that use approaches similar to those of the urban Massachusetts schools—schools that set very high expectations for their students, provide frequent coaching and feedback to their teachers, and implement extended instructional time. Consistent with the evidence from Massachusetts, Mathematica’s research has shown that urban charter middle schools, many charter management organizations, KIPP charter schools, and a variety of specific, high-profile urban charter schools all have positive impacts on student achievement. Our researchers have also found that charter schools in Florida raise their students’ rates of high school graduation and college enrollment, as well as their earnings in the labor market. And, although less conclusive, the bulk of the evidence suggests that charter schools do not hurt, and occasionally even benefit, the achievement of students at nearby traditional public schools, perhaps via school-to-school competition.

Even if researchers and policymakers are familiar with this evidence, voters may be far less aware of it. When voters directly set a policy, as in state and local referenda, they need to know the relevant evidence to make informed choices.

This knowledge gap poses a major challenge for policy researchers. In education policy, the research community’s efforts to disseminate evidence have, to date, largely targeted other researchers, education agency staff, legislatures, and educators. But, as the case of Massachusetts’s ballot question shows, we cannot forget the voters. Researchers must find new ways to publicize their findings in venues such as newspapers, popular magazines, community meetings, blogs, and other social media that interface directly with the public, not just with policy wonks.

At Mathematica, we are constantly looking for ways to get evidence to the right people, at the right time, and in the right form. During the debate over the Massachusetts ballot question, one of our senior fellows, Brian Gill, published an op-ed on CommonWealth Magazine’s website to explain directly to voters the evidence about charter schools’ effects on nearby traditional schools. However, the entire research community will need to continue—and enhance—efforts like these to ensure that voters have the information they need to make good public policy choices.

The Importance of Partnerships

Jonathan McCay

The importance of relationships between researchers and practitioners emerged as a key theme of Mathematica’s APPAM panel on using rigorous evaluations to guide program design.

The topic of the panel related directly to the conference’s theme and highlighted ways that exciting new methods such as rapid-cycle evaluation (RCE) and predictive analytics can be used to test and improve the quality of program services. Panel discussant Anu Malipatil, education director of the Overdeck Family Foundation, rightly pointed out that effective uses of RCE benefit from strong partnerships between researchers and program administrators. Her perspectives dovetailed with comments from Ella Gifford-Hawkins—manager of the Larimer County Works Program in Colorado—that researchers and practitioners must be able to speak the same language for effective partnerships to form.

Jonathan McCay at APPAM

I had the chance to catch up with Gifford-Hawkins afterward (pictured) to dig into this point a bit more. She described the partnership between Larimer County and Mathematica as one that combines two perspectives to share expertise and test solutions to the program’s specific “pain points.”

Her comments contain two critical insights for researchers. First, we should approach partnerships with program administrators and staff as an opportunity for equal exchange—sharing our expertise on conducting rigorous and timely research, while learning from the expertise and practice wisdom of those on the front lines about their unique program and community contexts. This dialogue-based approach to research design builds a foundation of trust and respect and, in turn, can yield promising solutions. Second, using techniques such as RCE will be a “win-win” for researchers and practitioners if what we’re testing both addresses an interesting research question and offers a solution to a real program challenge.

Another encouraging takeaway from the session was that programs of varying capacity and size have succeeded in using newer research methods to inform their own decision making. Erica Mullen, a research scientist with the New York City Human Resources Administration, discussed how the agency recently developed a profile of families most at risk of homelessness and used data analytics to predict applications for homeless shelters. The agency then used an RCE approach to assess whether its proactive homelessness prevention outreach to families identified as “at risk” reduced the number of shelter applications. Likewise, Ella Gifford-Hawkins described how her mid-sized community in northern Colorado used insights from behavioral science to create low-cost interventions designed to reduce barriers to successfully engaging in the Temporary Assistance for Needy Families program. Larimer County is also conducting RCEs to test whether the interventions help families avoid unnecessary sanctions and reduce the administrative burden on staff.
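
To make the mechanics of a rapid-cycle comparison concrete, here is a minimal Python sketch of the core contrast: families randomly assigned to proactive outreach versus business-as-usual services, compared on shelter-application rates. The group sizes and rates are invented for illustration and are not the agencies’ actual data or analysis code.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Hypothetical administrative records: 1 = household applied for shelter
# within the follow-up window, 0 = did not. Sizes and rates are made up.
outreach = rng.binomial(1, 0.08, size=1200)   # received proactive outreach
control = rng.binomial(1, 0.11, size=1200)    # business-as-usual services

# Compare shelter-application rates between the two arms.
diff = outreach.mean() - control.mean()
t_stat, p_value = stats.ttest_ind(outreach, control)

print(f"Outreach arm rate: {outreach.mean():.3f}")
print(f"Control arm rate:  {control.mean():.3f}")
print(f"Difference: {diff:+.3f} (p = {p_value:.3f})")
```

In practice, a rapid-cycle evaluation would layer program knowledge, covariate adjustment, and repeated short test cycles on top of a simple comparison like this; the sketch shows only the basic contrast.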

In the spirit of making research more accessible, my Mathematica colleague Alex Resch rounded out the session by unveiling a new tool funded by the U.S. Department of Education to help educators easily evaluate the effectiveness of education technology applications. The Ed Tech RCE Coach is a free, publicly available web-based tool that guides users, step-by-step, through setting up and conducting their own evaluations.

Coming away from the session, I’m hopeful that researchers and practitioners can increasingly form successful partnerships that embed research into program decision making as a tool for continuous quality improvement.

Machine Learning in Policy Research

Laura Nolan

This was my first APPAM conference since joining Mathematica as a researcher in September. I organized a session and presented a paper based on work I did as a postdoc at Columbia University, and had an opportunity to get a taste of the diverse and exciting work being done by my colleagues here at Mathematica. I thoroughly enjoyed sessions on helping states use evidence-based research in health care, the application of behavioral insights to labor programs, and rapid-cycle evaluation.

Overall, it struck me that many of the papers I saw presented had a common theme: the use of administrative data to support evidence-based policymaking. I also noticed that, although many presenters used these administrative data to ask questions about program management or intervention efficacy, relatively few presentations used prediction techniques.

A predictive approach can help target resources efficiently and effectively.

As government agencies become more advanced in data management and quality assessment, policy researchers have started to expand work using administrative data beyond descriptive statistics, data dashboards, and questions about program efficacy. Although traditional policy research has often focused on estimating relationships between a predictor and an outcome, a machine learning approach focuses on optimizing predictive accuracy. A predictive approach can help policymakers who want to target resources efficiently and effectively; predicting which individuals are most in need of services allows policymakers or program managers to direct scarce resources to them. Machine learning can also be helpful for predicting policy-relevant outcomes such as job loss or low birthweight.
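
As an illustration of what that shift looks like in code, the sketch below trains a flexible classifier on synthetic data standing in for administrative records and ranks individuals by predicted risk so outreach can be targeted to the highest-risk group. The data, the choice of model, and the top-decile cutoff are assumptions made for the example, not a description of any particular agency’s system.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for administrative data: rows are individuals,
# columns are program and demographic features, y is a policy-relevant
# outcome (e.g., job loss within a year).
X, y = make_classification(n_samples=5000, n_features=20, n_informative=8,
                           weights=[0.85], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3,
                                                    random_state=0)

# Fit a model whose goal is out-of-sample predictive accuracy,
# not interpretable coefficients.
model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

risk = model.predict_proba(X_test)[:, 1]
print("Holdout AUC:", round(roc_auc_score(y_test, risk), 3))

# Target services to the highest-risk decile first.
top_decile = np.argsort(risk)[::-1][: len(risk) // 10]
print("Individuals flagged for outreach:", len(top_decile))
```

The point of the example is the workflow, not the particular algorithm: the model is judged by how well it predicts on held-out cases, and its predictions feed a targeting decision rather than a coefficient table.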

Mathematica is already making inroads in this area of policy research. We have new projects focused on predicting health care and child welfare super-utilization and detecting identity theft in the Supplemental Nutrition Assistance Program, among other topics. We’re also working on using machine learning models to improve propensity score matching, which is a key technique for estimating effects in rigorous policy evaluations. Although predictive analytics are only relevant for a subset of policy questions, and machine learning models can be somewhat opaque even to people with statistics training, I think the approach offers tremendous value to policy researchers and decision makers. I’m hoping to see more papers at APPAM 2017 that explore the use of machine learning models in research to support evidence-based governance.
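
For readers curious how machine learning can slot into propensity score matching, here is a minimal, self-contained sketch on simulated data. It swaps a gradient-boosted classifier in for logistic regression when estimating the propensity scores and then performs simple 1:1 nearest-neighbor matching; it illustrates the general idea only and is not Mathematica’s project code.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.neighbors import NearestNeighbors

rng = np.random.default_rng(1)

# Simulated data: X are covariates, t is a treatment indicator whose
# probability depends nonlinearly on X, y is an outcome with a true
# treatment effect of 2.
n = 4000
X = rng.normal(size=(n, 5))
p_treat = 1 / (1 + np.exp(-(X[:, 0] + 0.5 * X[:, 1] ** 2 - 1)))
t = rng.binomial(1, p_treat)
y = 2 * t + X[:, 0] + rng.normal(size=n)

# Step 1: estimate propensity scores with a flexible ML model instead of
# plain logistic regression, to capture nonlinear selection into treatment.
ps_model = GradientBoostingClassifier(random_state=0).fit(X, t)
ps = ps_model.predict_proba(X)[:, 1]

# Step 2: match each treated unit to the comparison unit with the closest
# propensity score (1:1 nearest neighbor, with replacement).
treated, control = np.where(t == 1)[0], np.where(t == 0)[0]
nn = NearestNeighbors(n_neighbors=1).fit(ps[control].reshape(-1, 1))
_, idx = nn.kneighbors(ps[treated].reshape(-1, 1))
matched_control = control[idx.ravel()]

# Step 3: estimate the effect on the treated as the mean outcome
# difference across matched pairs.
att = (y[treated] - y[matched_control]).mean()
print("Estimated effect on the treated:", round(att, 2))
```

A real application would also check covariate balance after matching and guard against overfitting the propensity model; those steps are omitted here to keep the sketch short.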

About the Authors