June 20, 2018
The grant program gets high marks in a new study for relying on "tiered evidence" as a basis for awarding funding for education innovations.
Corinne Lestch is a staff reporter covering education for EdScoop and its affiliate public sector technology news websites, FedScoop and StateScoop.
The Trump administration is proposing $370 million to build on an Obama-era grant program that pilots innovative, evidence-based projects for education – and the investment will likely pay off, according to a new report.
Patrick Lester, who leads the Social Innovation Research Center, evaluated the impact of grants through the Education Innovation and Research (EIR) program, which is run by the Department of Education. He conducted the study for the IBM Center for the Business of Government to ascertain whether the "tiered evidence" grants, which fluctuate based on proven effectiveness, made a difference.
"We want to take existing dollars and spend them more on projects that show evidence that they actually work," Lester said in an interview with EdScoop. "Most innovations don’t work — when Silicon Valley is figuring out new things, they usually don’t work."
Still, he added, enough projects pan out to justify spending.
"You don’t need too many successes for innovation to be pretty productive, and the same idea applies to education," Lester said. "Once they do [work], take them to the next level, and as you get more and more solid evidence then you steer more and more dollars into it."
The projects funded by the grants operate at one of three different tiers of development: early-stage innovations, mid-level programs that show some evidence of working, and initiatives with substantial evidence that should be expanded nationally.
The Government Accountability Office, which surveyed these types of grants last year, reported that proponents credit them with creating strong incentives to use effective methods of learning.
“Proponents of tiered-evidence grants contend that they create incentives for grantees to use approaches backed by strong evidence of effectiveness, encourage learning and feedback loops to inform future investment decisions, and provide some funding to test innovative approaches," according to GAO, the investigative and auditing arm of Congress.
Launched in 2009 as the "Investing in Innovation," or i3, program, the grant program has provided more than $1.4 billion for education projects over the last eight years and is on track to help more low-performing schools make gains.
Lester tentatively identified ten factors that may explain why some individual projects succeeded while others failed.
After studying the impact of 44 individual grants through EIR, Lester found that “a higher percentage of the program’s scale-up and validation grants, which required more evidence, have produced positive impacts (50 percent). A smaller share of development grants, which required less evidence, did so (20 percent).” He concludes, with some cautions, that “these rates of success appear to exceed those in other areas of education research.”
Several of the grants that produced positive results focused on STEM initiatives. One grant went to the University of Missouri to test a professional development program aimed at improving student math and English skills. Another was for a STEM-focused career and college readiness project at Bellevue School District in Washington state.
Since its inception, the program has increased its focus on STEM grants.
"I do see STEM playing a very significant role going forward," Lester said. "Testing various technology-based solutions is absolutely something you can do with this evidence-based approach. You can try various learning plans on kids, and you could do two different versions of it. And then when you test the kids, then you have a good indication that one module works better than the other."
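The comparison Lester describes — running two versions of a learning module and testing which one works better — is essentially an A/B test. A minimal sketch of how such a comparison might be evaluated with a permutation test, using entirely hypothetical score data (none of these numbers come from the study):

```python
import random
from statistics import mean

# Hypothetical post-test scores for students taught with two versions
# of a learning module (illustrative numbers only, not from the study).
module_a = [72, 68, 75, 80, 66, 74, 71, 69, 77, 73]
module_b = [78, 82, 74, 85, 79, 81, 76, 80, 83, 77]

observed = mean(module_b) - mean(module_a)

# Permutation test: repeatedly shuffle the pooled scores and count how
# often a difference at least this large arises by chance alone.
pooled = module_a + module_b
random.seed(0)
trials = 10_000
n_extreme = 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = mean(pooled[len(module_a):]) - mean(pooled[:len(module_a)])
    if diff >= observed:
        n_extreme += 1

p_value = n_extreme / trials
print(f"observed difference: {observed:.1f} points, p ~ {p_value:.4f}")
```

A small p-value here would be the kind of "solid evidence" the tiered model rewards: the better-performing module could then be proposed for a higher-evidence grant tier.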
Grant recipients have included local school districts, universities and nonprofits that run charter schools.
Public school districts had distinct advantages compared to other recipients — they have "an easier time with buy-in, easier access to data, and budgets could sustain a program if it was working," according to the report.
Lester included recommendations that could strengthen the EIR program, such as producing results more quickly: final evaluation results for most of the first-year grants, which were awarded in 2010, were not available until 2016.