
At a Glance

Funder: U.S. Department of Education
Project Time Frame: 2003–2009


Educational Technology: Does It Improve Academic Achievement?

Educational technology has become increasingly commonplace in classrooms, and Congress has spent billions to give schools access to technology and online learning opportunities. But research on the effectiveness of using educational technology has lagged behind technology's growth. In 2001, Congress mandated that the U.S. Department of Education conduct a scientific study of the effectiveness of using educational technology to answer the following questions:

  • Is educational technology effective in improving student academic achievement?
  • Which conditions and practices are related to the effects of educational technology?

Mathematica’s National Study of the Effectiveness of Educational Technology Interventions, funded by the U.S. Department of Education's Institute of Education Sciences, was a scientific evaluation of the efficacy of technology applications designed to improve student learning in math and reading in grades K-12. The study, which found few impacts on achievement, assessed the effects of four reading products—Destination Reading, Headsprout, Plato Focus, and Waterford Early Reading Program—on reading achievement in first grade, and two products—Academy of Reading and LeapTrack—in fourth grade. It also looked at effects of two math products—Achieve Now and Larson Pre-Algebra—on math achievement in sixth grade, and two high school algebra products—Cognitive Tutor Algebra I and Larson Algebra I—used mostly in ninth grade.

A classroom-level random assignment design was used to estimate impacts. The study recruited a geographically diverse set of 36 districts and 132 schools, balanced between urban and rural settings. The study also focused on schools that served low-income students, were interested in implementing one of the interventions, had the technology infrastructure to support the intervention, and were able to implement random assignment at the appropriate grade level. To isolate the effects of technology relative to conventional instruction, the recruitment process avoided schools that already used a technology intervention at the target grade level.
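
The classroom-level assignment described above can be sketched as follows. This is a minimal illustration, not the study's actual procedure; the classroom labels and the fixed random seed are assumptions for the example.

```python
import random

def assign_classrooms(classrooms, seed=2004):
    """Randomly split one school's classrooms (at a single grade level)
    into a treatment arm (uses the software product) and a control arm
    (conventional instruction)."""
    rng = random.Random(seed)  # fixed seed so the assignment is reproducible
    shuffled = classrooms[:]
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"treatment": shuffled[:half], "control": shuffled[half:]}

# Hypothetical classroom labels for one school
arms = assign_classrooms(["Rm 101", "Rm 102", "Rm 103", "Rm 104"])
```

Randomizing within each school, rather than across schools, means every school contributes both treatment and control classrooms, which is what later allows an impact estimate to be computed for each school.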

The study assessed technology’s impacts using two types of student achievement measures. The first set assessed reading achievement among students in the reading cluster experiments and math achievement among students in the math cluster experiments. The second set drew on school records and included outcomes such as student attendance and promotion to the next grade. The study also examined the conditions and practices under which educational technology is effective: the design yielded an impact estimate for each school in the study, and the importance of school-level conditions and practices was explored by relating these school-level impact estimates to measures of school factors.

Data collection was based on five instruments: (1) a teacher survey, (2) classroom observations, (3) teacher interviews, (4) school records, and (5) achievement tests administered to students. The teacher survey was conducted soon after the start of the 2004–2005 school year. Classroom observations were conducted three times during the 2004–2005 school year, with the first occurring in fall 2004, the second in winter 2005, and the third in spring 2005. Teacher interviews were conducted during the first and third classroom observations. School records were obtained at the end of the 2004–2005 school year and contained information for that school year, as well as information for the previous school year. Achievement tests were administered to students at the beginning and end of the 2004–2005 school year.

The Year 1 sample included 526 teachers, with a teacher survey response rate of 94 percent. Slightly fewer than 12,000 students were sampled in Year 1; of these, 95 percent were tested at baseline, with response rates ranging from 85 percent in ninth-grade algebra to 98 percent in sixth-grade math classes. The spring follow-up testing achieved similar response rates.

Findings

Key findings from the 2009 report include the following:

  • Teacher experience was not systematically related to changes in effects between the first year and the second year.
  • Of the 10 products reviewed, one had statistically significant positive effects. The size of the effect was equivalent to moving a student from the 50th to the 54th percentile.
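
A move from the 50th to the 54th percentile corresponds to an effect size of roughly 0.10 standard deviations if test scores are approximately normally distributed. That conversion can be checked with a short sketch (the normality assumption and the rounding are ours, not the report's):

```python
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sd 1

# Effect size (in student-level standard deviations) implied by moving
# the average student from the 50th to the 54th percentile.
effect_size = std_normal.inv_cdf(0.54) - std_normal.inv_cdf(0.50)

# Going the other way: a student at the 50th percentile who gains
# `effect_size` standard deviations lands at the 54th percentile.
new_percentile = std_normal.cdf(std_normal.inv_cdf(0.50) + effect_size)
```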

Findings from the 2007 report to Congress showed that:

  • On average, after one year, products did not increase or decrease test scores by amounts that were statistically different from zero.
  • For reading products, effects on overall test scores were correlated with the student-teacher ratio in first-grade classrooms and with the amount of time that products were used in fourth-grade classrooms.
  • For math products, effects were uncorrelated with classroom and school characteristics.

Publications

“Effectiveness of Reading and Mathematics Software Products: Findings from Two Student Cohorts.” (February 2009)
“Effectiveness of Reading and Mathematics Software Products: Findings from the First Student Cohort.” Report to Congress (March 2007)