There’s More to Life than Test Scores: Measuring Social and Emotional Skills to Support Student Success

Editor’s Note: This blog originally appeared on the U.S. Department of Education’s Office of Educational Technology (OET) Medium page and is being re-posted with their permission. You can follow the full series at https://medium.com/building-evaluation-capacity.

OET blog post #3


Success in both school and the workforce requires not just academic knowledge and skills, but also a broad range of social and behavioral competencies. Often referred to as social and emotional (SE) skills, these competencies have, until relatively recently, largely been ignored in evaluations of educational program offerings. SE skills include conscientiousness, persistence (or “grit”), self-management, integrity, creativity, self-efficacy, and self-control. Research shows that SE skills can be as important as academic achievement in predicting key life outcomes, such as educational attainment, health, earnings, and employment. Importantly, SE skills are malleable and can be developed through interventions.

Given this evidence on the importance of SE skills, education practitioners and policymakers are searching for viable ways to measure them. We have found that many districts want to use the Rapid Cycle Evaluation Coach (RCE Coach) to assess whether educational technology and other interventions are improving SE skills, but struggle to find adequate outcome measures. Progress on identifying SE measures has been slow, in part because existing SE measures were not developed specifically for evaluating educational programs or school performance. SE skills have historically been measured through three primary approaches:

  1. Self or observer reports. The most common approach is for respondents to rate themselves or someone they know based on how they tend to behave. For example, when completing the Grit Scale, respondents report the extent to which they agree with statements like “I am a hard worker,” where answers range from “Not like me at all” to “Very much like me.”
  2. Performance tasks or games. In this approach, respondents are asked to complete a task or play a game. Their performance on the task or game provides a measure of their SE skills. Perhaps the most famous example is the so-called marshmallow test of self-control, in which children are given one marshmallow and are told they will be given a second marshmallow if they refrain from eating the first one for a fixed period of time. Performance on the marshmallow test is highly correlated with later SAT scores.
  3. School administrative records. Administrative records — such as attendance and credits earned — can serve as a proxy for SE skills. The logic is that by consistently showing up to school or completing courses, students exhibit persistence and motivation. These types of records are correlated with traditional measures of SE skills and are highly predictive of later educational and employment outcomes.

Before selecting a measure for SE skills of interest, it is useful for districts and schools to consider the following challenges:

Challenge #1. Measures can actually reflect factors unrelated to underlying SE skills, such as incentives or other aspects of students’ situations.

For example, some educational programs include components that might temporarily change a student’s situation in a way that does not have a lasting impact on their underlying SE skills. Consider a program that gives students rewards for exhibiting particular behaviors (such as getting good grades). These students might work harder and therefore appear to be “grittier.” Once the program ends, the students may no longer exhibit those behaviors. The goal of most educational programs, however, is to improve the underlying SE skills in a lasting way.

Recommendations for addressing Challenge #1

  • If rapid cycle evaluation results look promising because the intervention improved short-term measures of SE skills or academic performance, consider following up after the intervention ends to determine whether it had a lasting impact on SE skills. Some social programs have shown short-term impacts on measures of SE skills or behaviors only because they temporarily changed the situation of program participants, not the participants’ underlying skills. Long-term follow-ups of two or more years after the end of the intervention can provide more compelling evidence that the program led to a lasting change in SE skills. Measuring aspects of a student’s situation can also rule out the possibility that the intervention appeared to change SE skills only because it affected their situation. For example, if an evaluation found that an intervention had no impact on a student’s access to transportation (part of their situation) but did improve attendance, then it is more likely that the intervention improved an underlying SE skill. The sketch below illustrates a simple version of this follow-up check.
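
To make the follow-up concrete, here is a minimal sketch comparing treatment and comparison groups at the end of an intervention and again at a long-term follow-up. All scores and group sizes are hypothetical, and a real evaluation would use the RCE Coach or a full analysis plan rather than this bare two-sample test.

```python
# Minimal sketch: does an apparent impact on an SE measure persist after
# the intervention ends? All numbers below are hypothetical.
import numpy as np
from scipy import stats

# Hypothetical grit-scale scores (1-5) for treatment and comparison groups.
treat_post   = np.array([3.9, 4.1, 3.8, 4.2, 4.0, 3.7])  # end of intervention
comp_post    = np.array([3.4, 3.6, 3.3, 3.5, 3.7, 3.2])
treat_follow = np.array([3.6, 3.8, 3.5, 3.9, 3.7, 3.4])  # two-year follow-up
comp_follow  = np.array([3.5, 3.6, 3.4, 3.5, 3.6, 3.3])

for label, t, c in [("end of intervention", treat_post, comp_post),
                    ("two-year follow-up", treat_follow, comp_follow)]:
    diff = t.mean() - c.mean()
    _, p_value = stats.ttest_ind(t, c)
    print(f"{label}: difference = {diff:.2f}, p = {p_value:.3f}")

# A sizable difference at post-test that fades by follow-up suggests the
# program changed students' situations rather than their underlying skills.
```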

Challenge #2. Self-reported measures can depend on an individual’s perception and point of reference, rather than SE skills.

In particular, recent evidence suggests that people respond to subjective surveys based on what it means to them to exhibit a particular skill to various degrees — their reference point. For example, people tend to rate their own skills relative to people they know, rather than to the population as a whole. This issue is most likely to occur with questions in which response categories are subjective, such as: “strongly disagree,” “disagree,” “agree,” or “strongly agree.” Therefore, it can be misleading to compare the reports of students from different backgrounds or from very different schools.

Recommendations for addressing Challenge #2

  • If you are designing a new survey or modifying an existing one, use “situational judgment tasks” that place respondents in a particular context. This helps convert subjective response options into more objective ones.
  • Consider complementing standard self-reported questions with anchoring vignettes, which allow researchers to measure reference points and adjust responses accordingly. An anchoring vignette is a short description of a hypothetical person’s behavior that the respondent is asked to rate. For example, an anchoring vignette for organizational skills may be: “When Taylor is given a school assignment, Taylor does not usually start the assignment until right before the deadline and frequently does not meet deadlines. Taylor does not arrange materials in the school locker and places all of the papers in a backpack without using a folder. How much do you agree or disagree that Taylor is someone who is organized?” where possible responses are: disagree strongly, disagree a little, neither agree nor disagree, agree a little, and agree strongly. If different students answer this question differently, that is evidence that they have different reference points, and that information can be factored into the analysis (see the sketch after this list).
  • Consider using objective measures, like absences tracked in administrative records, as a proxy for an SE skill.
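
One common way to use anchoring vignettes, drawn from the survey-methods literature, is to recode each student’s self-rating relative to how that same student rated the vignettes. The sketch below is a simplified, hypothetical version of that recoding; the function and variable names are illustrative, not part of any established instrument.

```python
# Minimal sketch of a nonparametric vignette adjustment. A self-rating is
# recoded relative to the respondent's own ratings of vignettes that are
# ordered from least to most skilled. All data are hypothetical.

def adjusted_score(self_rating, vignette_ratings):
    """Return the number of vignettes the respondent rated below their own
    self-rating: 0 means 'below every vignette', len(vignette_ratings)
    means 'above every vignette'."""
    return sum(self_rating > v for v in sorted(vignette_ratings))

# Two students give the same raw self-rating (4 on a 1-5 scale) but rate
# three organization vignettes differently, revealing different reference
# points and therefore different adjusted scores.
print(adjusted_score(4, [2, 3, 5]))  # -> 2: self falls between vignettes 2 and 3
print(adjusted_score(4, [1, 2, 3]))  # -> 3: self rated above all vignettes
```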

Challenge #3. Educational interventions might impact SE skills, but not the ones you are measuring.

Some educational interventions target aspects of a given skill or how that skill is applied in a particular context. For example, a program might aim to improve students’ persistence in completing school assignments, rather than persistence across all aspects of life. In that case, a broad measure of persistence might not capture the skill that the intervention aims to impact. Similarly, some more targeted types of performance tasks, such as the marshmallow test, might be too narrow to capture the skill you really want to measure.

Recommendations for addressing Challenge #3

  • Use a logic model to identify the set of skills that are most likely to be improved by the intervention. By considering what the intervention aims to change, evaluators can better focus the selection of measures on those that are most relevant.
  • Adapt self-reported measures to the context where the skill will be applied. For example, for an intervention that attempts to boost homework completion, a survey might ask students whether they tend to complete school assignments (a contextualized measure), rather than whether they tend to complete tasks in general (a common question on standard self-reports of SE skills).
  • Consider creating new measures that capture the behaviors that the intervention targets.

Challenge #4. Some measures of SE skills can be challenging to implement in the school context.

For example, measures like the marshmallow test require training to administer and must be given one-on-one. Similarly, schools already administer many forms of assessment (for example, achievement tests), so adding long surveys to measure SE skills might not be feasible. Additionally, many measures were developed for use among adults, so they might not be well understood by children.

Recommendations for addressing Challenge #4

  • Use administrative data as a proxy for SE skills. Most schools already collect data on absences, credits earned, disciplinary infractions, and grades, which can serve as useful proxies for skills like persistence and conscientiousness (see the sketch after this list).
  • Consider using performance tasks that can be easily administered through the use of technology. For example, the academic diligence task can be easily administered online and is free to the public. Other digital learning environments have the potential to both assess SE skills and promote them. Educational technology interventions can optimize performance tasks through adaptivity; motivate students through gamification of learning environments; and provide resources and interpersonal networking that can enable learners to persist toward goals that, for many, were previously unattainable.
  • Adapt long surveys of SE skills to focus on the SE skills of interest to reduce burden.
  • Use pretest surveys among the population of interest. A pretest involves administering the survey, in advance of the study, to a few students who are in or similar to the target population and interviewing them about their experience taking it. This approach helps ensure that the questions are appropriate for the target population.
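
As a concrete example of the administrative-data recommendation above, the sketch below builds attendance and credit-completion rates from the kinds of records most districts already keep. The column names are hypothetical placeholders and would need to match your student information system.

```python
# Minimal sketch: building SE-skill proxies from administrative records.
# Column names are hypothetical placeholders.
import pandas as pd

records = pd.DataFrame({
    "student_id":        [101, 102, 103],
    "days_enrolled":     [180, 180, 175],
    "days_absent":       [4, 22, 9],
    "credits_attempted": [7, 7, 6],
    "credits_earned":    [7, 5, 6],
})

# Attendance rate and credit-completion rate as rough proxies for
# persistence and conscientiousness, respectively.
records["attendance_rate"]   = 1 - records["days_absent"] / records["days_enrolled"]
records["credit_completion"] = records["credits_earned"] / records["credits_attempted"]

print(records[["student_id", "attendance_rate", "credit_completion"]])
```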

We know that student success means more than just high test scores, so measuring and assessing SE skills should play an essential role in evolving educational priorities to support success for all students. If students are to achieve their full potential, they must have opportunities to engage with and develop a much richer set of skills beyond content knowledge alone. As schools use educational technologies and other interventions to develop SE skills in their students, it’s important that they consider how SE skills will be both measured and tracked. The recommendations above will help schools choose a good measure, and schools can use the RCE Coach to assess whether the interventions they are using are having the desired effect.

About the Author

Bernadette Adams

Senior Policy Advisor, Office of Educational Technology, U.S. Department of Education