When Intervention Results Fall Short: Is it Fit, Implementation—or Both?

May 15, 2026

It is a familiar scenario in many schools and districts: reading achievement data is reviewed and something isn’t quite right. A student—or group of students—didn’t grow as expected. An evidence-based intervention was in place, and other schools have seen results with the program—so what happened?

Before drawing conclusions, it can help to pause and take a step back. The reason behind disappointing results is rarely obvious at first glance, and misidentifying the root cause can lead to the same challenges next year, especially for students in upper elementary and secondary grades.

Why Striving Readers in Grades 4 and Above Deserve a Closer Look

For students in grades 4 and above, state assessments and popular benchmark tools typically report comprehension scores and proficiency levels. What they don’t reveal is why a striving reader is struggling.

After third grade, decoding is not always assessed directly, yet many striving readers in upper elementary and secondary grades have not fully crossed the decoding threshold. In other words, they have not yet developed the accurate, automatic word recognition that frees them to focus on making meaning from text, rather than on figuring out each word.

As a result, striving readers who have not yet mastered decoding are far more likely to struggle with comprehension. In fact, research suggests that when phonics and word recognition are not yet secure, comprehension-focused intervention alone is unlikely to move striving readers forward (Wang et al., 2019). In these cases, strengthening decoding is a critical first step.

This is why accurately identifying a student’s most urgent literacy need is such an important part of the instructional process, especially when reviewing reading outcomes at the end of the year.

A Thought Process for When the Data Disappoints

When student outcome data falls short, the framework below can help teams think through the possibilities without jumping to conclusions or placing blame.

Step 1: Was the intervention implemented with fidelity?

If no – students did not fully experience the intervention.

  • Intervention fit is difficult to evaluate with confidence
  • It’s still worth working through Step 2 (below) to confirm the intervention was matched to student need
  • Key reminder: Students who don’t fully experience an intervention are less likely to show expected growth

If yes – students did fully experience the intervention.

  • Move to Step 2 (below)

Step 2: Was the intervention the right fit for the student(s)? Did it address the student’s most urgent literacy need?

  • If not, this points to a fit issue, not a program failure
  • If yes, and implementation was strong, consider other contributing factors, such as grouping, intensity, or access to additional supports

Bringing It Together

Asking these questions, and taking the time to thoughtfully work through the answers, is not always easy. But this kind of reflection can spark insight, learning, and planning for the fall, positioning teams to make strong, student-centered decisions for the school year ahead.