Administrators selecting educational technology programs for their schools or districts face high-stakes decisions. Their choices consume limited funding and time, so accurate, relevant information about how the programs affect student performance should be a critical part of the decision-making process.
During a recent webinar hosted by edWeb.net, Andrew Coulson, chief data science officer at the MIND Research Institute, and Brian LeTendre, the group’s director of content and communications, explained a process for finding data that will help administrators make the best choices for their student populations, and in doing so “raise expectations about the availability of information” during program evaluations.
Rather than rely solely on randomized controlled trials, generally considered the gold standard for testing medical treatments, the MIND approach emphasizes the use of publicly available schoolwide performance data that can be matched to specific schools and types of students.
Evaluating the evaluation process
The advantages of the process Coulson outlined include being able to look at student performance for a particular grade level over several years. This avoids “cherry picking” the data to present an exceptional performance in just one year, while ignoring other years that may be more typical. This approach also includes a wide range of students, so administrators can view data relating directly to the types of students in their schools.
While acknowledging that randomized controlled trials have their place in what should be an “edtech efficacy portfolio,” Coulson said the time and expense required for this type of study usually makes them feasible only once every five or 10 years, limiting their ability to show progress over time and include the latest data.
RCTs also require a relatively small sample of students in order to control all the variables, so they really provide a snapshot of specific types of students at a specific time.
In contrast, the MIND approach uses school-wide performance data published annually by the states for language arts and math, which provides a large enough pool of schools for administrators to find an effective match for their particular district or school. Administrators can consider factors such as whether the schools are urban, suburban or rural, and how many students participate in Title I programs.
States may also break out the data for specific subsets of learners, such as English-language learners and students with special needs, so the annual publishing of data can allow administrators to see how the same type of student performs in the same grade each year, as well as how much progress occurs as the same students advance.
Having access to schoolwide performance data over a number of years also allows administrators to look at how students performed before the purchase of the program, and then for years afterward. This helps to determine how much of an impact the program had and how much improvement can be expected as a result of its purchase.
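The before/after comparison described above can be illustrated with a minimal sketch. This is not MIND's actual methodology, just a simple way to see the idea: given several years of a school's published grade-level proficiency rates, average the years before the program's adoption against the years after. The school data, years, and scores below are all hypothetical.

```python
# Sketch of a before/after comparison on published grade-level data.
# All figures are hypothetical; real analyses would also match schools
# on demographics and use many more data points.

def before_after_change(scores_by_year, adoption_year):
    """Return (pre_avg, post_avg, change) for a {year: pct_proficient} dict."""
    pre = [s for y, s in scores_by_year.items() if y < adoption_year]
    post = [s for y, s in scores_by_year.items() if y >= adoption_year]
    if not pre or not post:
        raise ValueError("need data on both sides of the adoption year")
    pre_avg = sum(pre) / len(pre)
    post_avg = sum(post) / len(post)
    return pre_avg, post_avg, post_avg - pre_avg

# Hypothetical grade 4 math proficiency rates published by the state.
scores = {2018: 41.0, 2019: 43.0, 2020: 42.0, 2021: 49.0, 2022: 52.0}
pre_avg, post_avg, change = before_after_change(scores, adoption_year=2021)
print(f"pre: {pre_avg:.1f}%  post: {post_avg:.1f}%  change: {change:+.1f} pts")
```

Averaging several pre-adoption years is what guards against the "cherry picking" Coulson described: no single unusually good or bad year dominates the baseline.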
Putting good data to good use
For the MIND approach to be effective, there must be grade-wide enrollment in the edtech program being evaluated so that the test-score data published by the state applies to all students. Administrators can have confidence in this type of data because it is derived from rigorous standardized tests administered statewide, which provide a uniform basis for comparison.
This approach has now been used for well over a decade, and Coulson explained that the methodology has been replicated by an independent education research lab, using different sample sizes. The methodology and results have also met What Works Clearinghouse and Every Student Succeeds Act standards for these types of studies.
Coulson noted that “other outcomes should also be considered, rather than just what happens on one day in April.” Ideally, an edtech program’s efficacy portfolio would include a randomized controlled test, samples of student work, videos of what happens inside the classroom, and qualitative input from educators and administrators, as well as access to schoolwide performance data that allows administrators to find the right match for their schools.
Meanwhile, Coulson said he hopes the approach and results MIND Research Institute has developed will help administrators maximize the impact of an edtech purchase on student performance. He also believes it should create the expectation that every edtech provider will have a positive answer, backed by supporting evidence, when administrators ask, "Do you have a study on a district like mine?"
About the presenter
Andrew Coulson, chief data science officer at MIND Research Institute, leads a team of data analysts to conduct evaluations of student and teacher usage and outcomes of ST Math. He previously led MIND’s education division for 12 years, where he helped to execute strategies to scale the organization’s reach to over one million students. Prior to joining MIND, Coulson was the first education program officer for a major Orange County foundation and also worked for 17 years in upper management in high-tech manufacturing engineering. Coulson holds a master’s degree in physics from UCLA.
About the host
Brian LeTendre serves as director of content and communications at MIND Research Institute. In addition to orchestrating content for MIND, he is an author, podcaster and avid gamer.
Join the community
EdFocus – An EdTech Industry Network is a free professional learning community on edWeb.net that will help you connect with colleagues in the education industry to share information and resources, raise questions, and get advice.