Every spring, the moment arrives: end-of-year benchmark data is reviewed, and while there are often many things to celebrate, some results may not match the promise of an intervention used in classrooms. The immediate instinct is often to question the program, but a recent report suggests a more important question may be: “What happened after we adopted it?” (EdReports & The Decision Lab, 2025).
Based on a nationally representative survey of more than 250 leaders and educators who had recently been through a curriculum adoption process, the report brings to light a finding that should be central to every end-of-year conversation about literacy gains and the program(s) used to achieve them: selecting a strong, right-fit program is only the beginning. Implementation determines whether students actually benefit.
Leaders reported feeling confident in their teams’ abilities to identify and adopt high-quality programs. Nearly three in four (72%) expressed confidence in selecting strong instructional materials. Unfortunately, that confidence didn’t carry through to program implementation.
When asked where they faced their greatest challenges, leaders pointed to two primary roadblocks: nearly half (49%) cited achieving stakeholder buy-in, and 48% cited implementing the program they had already selected. In contrast, only 13% struggled with determining their needs, and only 14% found it difficult to narrow their options.
Whether at the district or school level, leaders have become skilled at selecting programs. Bringing them to life in classrooms is where the real challenge lies.
The report also points to a blind spot. While most leaders expressed confidence in their ability to select quality materials, only 59% reported having processes in place to assess whether those materials are actually working once they are in classrooms.
That gap between selecting a program and actively supporting it is where strong, evidence-based programs quietly lose their impact. A program introduced at an August training and then left largely unmonitored will have a very different effect than one that is coached, observed, and adjusted month to month.
As the report notes, leaders who have seen sustained literacy improvement—such as those in Louisiana and Mississippi—treat curriculum adoption as a multi-year effort that combines evidence-based materials, aligned professional learning, and intentional implementation. Districts and schools are far more likely to achieve successful student outcomes when they plan for effective implementation from the very beginning and actively support it once a program is in teachers’ hands.
The implementation challenge has a human aspect that data alone doesn’t fully capture. According to research, only 22% of teachers say they had a meaningful role in selecting the instructional materials they are now expected to use.
It isn’t always realistic to include every teacher in a selection process. But when teachers have a voice, are given clear context and rationale for decisions, and receive professional learning and ongoing support rather than a single training at program launch, they are far more likely to use a program consistently and confidently.
When teachers feel disconnected from the process, they are less likely to engage deeply—and that directly reduces a program's impact on students. Initial buy-in and sustained engagement from teachers are necessary for implementation fidelity, and implementation fidelity is critical for strong student outcomes.
If your end-of-year literacy data fell short of expectations, it may be worth stepping back and asking: Did teachers receive ongoing professional learning and coaching, or only a single training at launch? Was anyone checking whether the program was being used as designed? Did teachers have a voice in the decision and a clear rationale for it?
As these questions suggest, what may look like a program problem in the spring often has its roots in an implementation planning or support problem from the fall.
The EdReports findings are a call to expand how we define “program adoption” at every level. Selecting a program is not a one-time decision; it's the start of a process. That process involves planning, professional learning and coaching, monitoring, and active, sustained leadership from the district to the school to the classroom.
If your spring data review is prompting hard questions, it’s natural to revisit program decisions. But before asking, “Was this the right program?” start with a more fundamental question:
“Did our students experience the program as intended?”
Ultimately, students cannot benefit from interventions they do not fully receive.