
American Evaluation Association Annual Meeting Abstracts

"Evaluation Practice in the Early 21st Century"

Washington, DC—October 17-19, 2013

Non-profit and Foundations Evaluation TIG Business Meeting and Presentation: Measurement as Misdirection
TIG Leaders: Beth Stevens, Aimee White (Custom Evaluation Services), Marcus Martin (2M Research Services LLC), and Claire Sterling (American Society for the Prevention of Cruelty to Animals)
Presenter: Patricia Patrizi (Patrizi Associates)

Report cards and dashboards have proliferated as a favored device for strategy tracking in foundations. Interviews with evaluation directors (for the Evaluation Roundtable) revealed the increasingly central role of highly synthesized metrics as the vehicle by which foundations track strategy progress and impact. These metrics are often pulled from existing or in-house data, often retrieved at lower cost than more in-depth evaluation would require. Some foundations report struggling with "too much data, not much processing and difficulty in getting the right set of 'metrics that fit everyone's needs,' which tends to fuel a proliferation of metrics" (Jaffe, 2013). Others say that they are drowning in data, leaving them "feeling punch drunk" (Coffman, 2013) or, as others note, feeling like "they are drinking from a fire hose" (Frank and Magnone, 2011). The presenter will discuss these problems and possible solutions.

Advantages and Limitations of an Efficient, Flexible Design: Evaluating Administrative Costs and Savings From "Express Lane" Medicaid Enrollments
Adam Swinburn and Maggie Colby

We developed an efficient, flexible evaluation framework to assess the impact of "Express Lane Eligibility" policies (ELE) on the administrative costs incurred by public health insurance agencies. By relying on eligibility findings from food assistance, income support, and other programs, ELE promises to reduce the burden on state staff of collecting and evaluating applicant data when enrolling children in Medicaid or the Children's Health Insurance Program, but new operational costs may offset anticipated efficiencies. Flexibility in the legislation authorizing ELE led to diverse programs across states, presenting a challenge for evaluators. This paper reviews evaluation design decisions made to accommodate this variation, while minimizing the evaluation participation burden for state staff. We also discuss practical advantages and limitations encountered during data collection and analysis (January 2012-April 2013). Issues considered include data source limitations, analysis of marginal (rather than average) costs, and our decision to report certain costs qualitatively.

Systems in Evaluation TIG Business Meeting and Charrette: Designing an Evaluation of the American Evaluation Association
TIG Leaders: Margaret Hargreaves, Mary McEathron (University of Minnesota), and Erin Watson (Michigan State University)
Presenter: Glenda Eoyang (Human Systems Dynamics Institute)

The Systems in Evaluation TIG is ready to move beyond systems-thinking and systems-talking to get to real systems-doing. This year’s business meeting will feature an open-space design charrette with the goal of designing an evaluation of the American Evaluation Association using systems-thinking approaches. Glenda Eoyang will set the stage with a discussion of complex systems and the rules of engagement. Groups will be formed around different methods and approaches. The Systems TIG leadership will join Glenda in continuing to work with the teams throughout the year. If you are ready to roll up your sleeves and delve into the systems-doing, come join us! No experience is required as we will pair novices with our more knowledgeable Systems colleagues. Spectators and cheering teams are also welcomed.

Evaluation Technical Assistance as a Means to Improving the Evidence Base and Identifying Effective Teen Pregnancy/STI Prevention Programs
Susan Zief

In 2010, the Office of Adolescent Health (OAH) within the U.S. Department of Health and Human Services launched its Teen Pregnancy Prevention (TPP) Initiative that addresses rising teen pregnancy rates by supporting grantees in replicating evidence-based programs. Nearly half (40) of these grants are being rigorously evaluated using an experimental or quasi-experimental design by an independent evaluator. All of the 40 rigorous evaluations must be designed to meet the standards for evidence from effectiveness evaluations established by the Department of Health and Human Services (HHS). This unprecedented experience of supporting the implementation of 40 evaluations has led to a collection of important considerations for designing and conducting program evaluations with different designs and in a variety of contexts. OAH anticipates that disseminating these lessons can improve the quality of future evaluations of similar interventions, thereby increasing the number of evaluations that meet federal HHS evidence standards and show positive impacts.

Dynamic Interactions and Field-Based Challenges: Lessons Learned From a TPP Evaluation Technical Assistance Project
Jacqueline Berman

This presentation will provide an overview of the application of this strategy to a grantee implementing a rigorous evaluation of a school-based TPP program for ninth-grade students. We will explore four aspects of the TA process: (1) assessment of proposed evaluation plans; (2) challenges faced prior to and during evaluation implementation; (3) points of contact and collaboration between the TA team and the grantee to address these challenges; and (4) initial lessons learned that will support design refinements in a second program evaluation. Building on the TA experience, the presentation will offer a set of emergent lessons related to (1) establishing collaborative relationships to increase TA relevance and uptake; (2) understanding and anticipating critical points at which to engage TA; and (3) developing TA strategies able to build capacity and support rigorous evaluation.

Sustainability and the Public Good: What Does it Mean for the 21st Century Evaluator?
Presenter: Beverly Parsons (InSites)
Discussants: Eleanor Chelimsky (Independent Consultant), Matt Keene (United States Environmental Protection Agency), Juha Uitto (United Nations Development Programme), Rakesh Mohan (Idaho State Legislature), Gene Thomas (Clarendon United Methodist Church), and Margaret Hargreaves

Transforming societies and the world's economy to a sustainable basis for future generations presents an unprecedented challenge in the 21st century. It involves the planet as a whole. Societies around the world are highly interconnected in regard to their economies, social justice, and natural environment. New visions and approaches to how we live in the 21st century are essential to support the public good. But what does sustainability mean for evaluators? How do evaluators, and the evaluation field as a whole, approach sustainability in their work? This session first engages the participants in a discussion of what a sustainable social and natural environment is. Next, the focus shifts to the implications for evaluators who seek to serve the public good. Group facilitators with a range of perspectives on the meaning of sustainability and the implications for evaluation will engage participants in the discussion.

Timing, Tensions, and Trade-offs: Findings From Evaluation Technical Assistance to Community Colleges Implementing TAACCCT Grants
Ann Person and Nan Maxwell

The U.S. Department of Labor (DOL) is awarding $2 billion in grants to community colleges around the country under the Trade Adjustment Assistance Community College and Career Training (TAACCCT) program. These grants support innovative initiatives to increase the attainment of postsecondary credentials that prepare workers for employment in high-skill, high-wage fields. The program encourages grantees to form partnerships with other higher education institutions as well as employers, to utilize new technologies, and to implement forward-thinking policies, often requiring colleges to make major changes to the ways they operate. Moreover, grantees must track DOL performance measures for participant and comparison cohorts, and they are strongly encouraged, and in some cases required, to conduct evaluations of funded programs. This paper presents findings from a technical assistance initiative to support TAACCCT grantees' measurement and evaluation efforts, highlighting the challenges and proposing some solutions to improve evaluation of these complex and comprehensive programs.

Using Tools From Implementation Science to Strengthen Large-Scale Program Evaluation
Chair: Diane Paulsell
Discussant: Caryn Blitz (U. S. Department of Health and Human Services)

In recent years, interest has grown among policymakers, practitioners, and funders in promoting the use of interventions with scientific evidence of effectiveness. Implementation science is the study of how evidence-based programs are translated, replicated, and scaled up in "real world" settings. This panel brings together lessons from large-scale evaluations of three federal initiatives about the application of implementation science to program evaluation. The first paper presents findings on states' plans for supporting replication of evidence-based programs with fidelity in a nationwide scale-up of teen pregnancy prevention programs. The second paper presents lessons learned from a mixed-methods approach to measuring program fidelity across five evidence-based home visiting programs. The third paper presents a conceptual framework for documenting implementation of healthy marriage and responsible fatherhood initiatives, exploring factors that contribute to higher-quality implementation and participant responsiveness, and facilitating selection of qualitative and quantitative data sources to inform each aspect of the framework.

Scaling Up Evidence-Based Teen Pregnancy Prevention Interventions: Findings From the Personal Responsibility Education Program (PREP) Evaluation
Debra Strong, Susan Zief, and Rachel Shapiro

The consequences of adolescent sexual activity remain a troubling issue in the United States. To help reduce teen pregnancies, sexually transmitted infections, and associated risk behaviors, Congress authorized the Personal Responsibility Education Program (PREP). PREP provides $55 million to states to implement evidence-based, comprehensive teen pregnancy prevention programs for youth at high risk. Replicating evidence-based models with fidelity is a necessary, if not sufficient, condition for ensuring that programs yield the range of outcomes observed in the original evaluations (Daro et al., 2012). Results of interviews with PREP administrators in 45 states, conducted as part of the PREP evaluation, showed that states are well aware of the importance of fidelity, plan to provide initial training and technical assistance, and require providers to record fidelity data. However, states had no concrete plans in place, and varying capacities, to independently assess fidelity, diagnose potential problems, and provide support through all stages of implementation.

Assessing the Fidelity of Evidence-Based Early Childhood Home Visiting Programs: A Mixed Methods Approach
Kimberly Boller

The national evaluation of the Children's Bureau's Supporting Evidence-Based Home Visiting to Prevent Child Maltreatment (EBHV) initiative focuses on two aspects of fidelity: (1) structural (e.g., adherence to program elements such as reaching the intended target population and providing participants with the recommended service dosage and duration) and (2) dynamic (e.g., quality of the provider-participant relationship and the consistency of service content). This presentation will describe the evaluation's multi-method approach to capturing fidelity across 5 evidence-based models and the challenges and successes in doing so across 17 subcontractors working with 45 implementing agencies. We will demonstrate the analytic power of bringing together quantitative data from almost 5,000 families, 90,000 home visits, and 390 staff with qualitative data from site visits to the grantees focused on understanding how systems change and community partnerships are associated with fulfilling goals in the initiative's logic model and the 17 subcontractor models.

Strengthening the Study of Responsible Fatherhood and Healthy Marriage Program Operations Through the Use of Tools From Implementation Science
Heather Zaveri and Robin Dion

Since 2006, Congress has appropriated funding for grants to provide healthy marriage or responsible fatherhood (HM/RF) services. The Administration for Children and Families, within the Department of Health and Human Services, funded the Parents and Children Together (PACT) evaluation to understand the effectiveness and implementation of services by selected HM/RF grantees from the 2011 cohort. PACT's implementation study is documenting program implementation and is exploring what contributes to higher-quality implementation and participant responsiveness. The implementation study's conceptual framework defines and draws connections between program inputs, outputs, and participant outcomes, and recognizes the influence of community context. Data collected from both qualitative and quantitative sources will inform this framework. The presentation will describe the conceptual framework, discuss how the tools and principles available in implementation science informed the framework, and highlight how the collected data informs each aspect of the framework.

Exploring Federal Use of Systems Science: Expanding the Tent of Public Policy Research and Evaluation
Chair: Margaret Hargreaves
Discussants: Matt Keene (U. S. Environmental Protection Agency) and Amanda Cash (U. S. Department of Health and Human Services)

The federal government has played a central part in program and policy research and evaluation. In particular, pioneering staff in numerous federal agencies have also played a crucial role in the experimentation with, adoption, and early use of complex systems theories and methods. Groundbreaking work has been done to commission systems change evaluations; host public conferences, grantee meetings, and federal working groups exploring the topic; publish topic-related studies, reports, and journal articles; and require systems expertise in research and evaluation contracts. This panel discussion will explore how five federal agencies are incorporating complex systems concepts and methods into their research and evaluation work. The panel discussion will begin with a short presentation by each federal agency representative, followed by 40 minutes of group conversation and audience questions on the development and use of complex systems concepts and methods in federal research and evaluation.

An Overview of Federal Efforts to Expand the Use of Systems Science in Public Policy Research and Evaluation
Margaret Hargreaves

Meg Hargreaves, the chair of the Systems in Evaluation TIG, will start the panel with a general introduction and facilitate the discussion with key staff from five federal agencies.

Measuring Multi-dimensional Change: Challenges and Opportunities in The MasterCard Foundation Scholars Program
Joseph Dickman (The MasterCard Foundation) and Matt Sloan

The MasterCard Foundation Scholars Program is a secondary and university scholarship and support program for economically disadvantaged but academically promising youth with a commitment to social change. The Program is a $500 million, 10-year initiative to educate an estimated 15,000 young people, primarily in Africa, and is being implemented by a network of education institutions and non-profits. The wide variety of activities and partners, coupled with the unique design of the Program, presents a challenge for designing an overarching measurement framework. In particular, key challenges include coordinating partners spread across multiple continents, developing a unified set of learning questions, measuring change at multiple levels and for difficult concepts such as "give back," and establishing a valid counterfactual. This presentation discusses the evaluation design and associated methodologies developed to overcome these challenges and arrive at a robust measurement strategy for the Program.

Weaving Together Evidence, Expert Opinion, and Feedback From the Field to Build a Conceptual Framework for Advancing the Wellbeing and Self-Sufficiency of At-Risk Youth
Robin Dion and Cay Bradley

For many youth, the path to economic self-sufficiency is challenging. Youth may lack education and work credentials. The lack of a safe and secure home or healthy relationships may further hinder their ability to achieve self-sufficiency. Public and private funds support a wide range of programs that help at-risk youth make the transition to economic self-sufficiency, but do existing programs reflect best practice? Using evidence of program effectiveness, expert opinion, and input from youth-serving organizations, we developed a conceptual framework to inform future programming and evaluation efforts. The conceptual framework is grounded in the theories of resilience and human capital. It identifies evidence-informed interventions and outcomes associated with wellbeing, including resilience, self-sufficiency, and human capital. In addition to presenting the final conceptual framework, we will discuss the process by which it was developed and how it may be used by practitioners, policymakers, evaluators, and funders.

Getting It Right: Using Systematic Reviews of Evaluations for Evidence-Based Decision Making
Chair: Jonathan Morell (Fulcrum Corporation)
Presenter: Jill Constantine
Discussant: Naomi Goldstein (U. S. Department of Health and Human Services)

Using evidence to inform policy and practice is central to the purpose of evaluation. Systematic reviews provide a comprehensive and consistent summary of the findings from evaluations, applying a pre-specified set of standards for assessing the quality of those evaluations. This session focuses on how systematic reviews of evaluations can inform decisions on (a) designing new interventions or social programs, based on knowledge of what works, (b) deciding whether to scale up or replicate existing programs, and (c) structuring evaluations so that the findings meet standards for rigor. The session will discuss the evolution of systematic reviews and how they are being used in a variety of fields including through the U.S. Department of Education's What Works Clearinghouse.

Roundtable Rotation II: Evaluating for Replication: Designing Evaluations that Go Beyond Measuring Success to Identifying Where Success Can be Repeated
Presenter: Beth Stevens

Foundations and nonprofits increasingly emphasize replicability when judging the success of their programs. Yet few evaluations are designed to provide evidence for where and how to replicate success. The question of where to replicate a program highlights the role of context, or environment, in the success of a program model. In evaluation, context is usually controlled or minimized in order to isolate the effects of the program. This roundtable will explore how evaluations can instead be designed to isolate the role played by context as differentiated from that of the program. How can evaluation designs maximize variation in context to reveal contextual factors that might affect replication in other settings? Which contextual elements are important for a foundation or nonprofit to consider when replicating a model somewhere else? This roundtable has the potential to expand the role of evaluation from measuring success to identifying where success can be repeated.

Learning Across Borders: The Collaborative Creation of a Monitoring, Evaluation, and Learning Framework for The MasterCard Scholars Program
Matt Sloan

The MasterCard Foundation Scholars Program is a $500 million, 10-year initiative to provide secondary and university scholarships and support services for academically talented but financially disadvantaged students with a commitment to social change. The Program is being implemented by a network of educational and non-profit institutions in several countries, and involves diverse activities ranging from comprehensive scholarships to career counseling and leadership development. This presentation will discuss lessons learned from developing a monitoring, evaluation, and learning (MEL) framework for the Program that allows the Foundation and its partners to decide what learning questions are most meaningful, and how to answer them. Key challenges to this process include the wide variety of partners spread across three continents, developing an overarching set of learning questions, measuring concepts like "giving back," and establishing a valid counterfactual. The presentation will highlight how a highly collaborative approach to MEL development attenuated these challenges.