The Feedback Loop: How Bad Rubrics Create Their Own Crisis

November 16, 2025

Imagine you’re selecting students for an advanced physics class.

You test them on basic arithmetic. The top scorers get into advanced physics.

Some of them struggle. They weren’t ready for physics—they were just good at arithmetic.

Your response: Raise the bar on arithmetic testing.

Require 95th percentile instead of 90th. Add a speed component. Demand perfection on multiplication tables.

More students fail to qualify. Fewer get into advanced physics.

But here’s the problem: You’re still not testing physics readiness.

Some students who ace arithmetic still aren’t ready for physics. And some students ready for physics—who understand velocity, acceleration, force—don’t make the cut because they weren’t fast enough at multiplication.

You’ve made the bar higher. You haven’t made the selection better.

This is exactly what happens when school districts respond to acceleration challenges by tightening rubrics that measure the wrong things.

And Oak Park Elementary School District 97’s rubric shows clear evidence of this feedback loop.

(For the bigger picture of Oak Park’s grade-level disparities—where 276 seventh graders accelerate while only 26 first graders do—see: The Acceleration Gap: 276 to 26)

The Vicious Cycle: How It Works

Here’s how a well-intentioned district can get trapped in a feedback loop that makes the problem worse:

Step 1: Create a Rubric (Measuring the Wrong Thing)

District creates an acceleration rubric for first grade that measures:

  • Report card grades (first-grade content mastery)
  • AimsWeb scores (first-grade fluency screening)
  • Percentile ranks on grade-level assessments

What it should measure: Readiness for third-grade content (multiplication, fractions, area, multi-step problems)

What it actually measures: Exceptional performance at first-grade content (addition/subtraction within 100, place value)

(For why these are different questions, see: When Ready Isn’t Enough: How Rubrics Measure the Wrong Things)

Step 2: Accelerate Students (Some Ready, Some Not)

Because the rubric measures first-grade exceptionalism instead of third-grade readiness:

Students who qualify:

  • ✅ Some who excel at first grade AND are ready for third grade (correct accelerations)
  • ❌ Some who excel at first grade but AREN’T ready for third grade (incorrect accelerations)

Students who don’t qualify:

  • ❌ Some who “meet” first grade expectations because content is too easy, but ARE ready for third grade (missed opportunities)
  • ✅ Some who aren’t ready for third grade (correct denials)

The rubric creates both false positives and false negatives.

Step 3: District Sees Struggles

Some accelerated students struggle with third-grade content.

Why? Because the rubric measured first-grade performance, not third-grade readiness.

A student can ace first-grade addition and subtraction (earning “Excels” and high percentiles) while not understanding multiplication concepts, fractions, or area.

District observes: “Some of our accelerated students are struggling.”

What district concludes: “Our rubric isn’t rigorous enough. We need to raise the bar.”

Step 4: Tighten the Rubric (Still Measuring the Wrong Thing)

District’s response:

  • Raise the threshold from 70% to 80%
  • Increase percentile requirements (90th → 93rd → 95th)
  • Add “Excels” requirements on report cards
  • Create additional barriers

But: The rubric still measures first-grade exceptionalism, not third-grade readiness.

Result:

  • Fewer students qualify overall
  • Some who qualified before (both ready and unready) now don’t
  • Some students who ARE ready for third grade get blocked
  • But some students who AREN’T ready still get through (because they excel at first grade)

The fundamental problem—measuring the wrong thing—hasn’t been fixed.

Step 5: Repeat

Some accelerated students still struggle (because rubric still measures wrong thing).

District raises bar again.

Fewer students qualify.

Pattern continues.

The feedback loop is complete.
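The five steps above can be sketched as a toy simulation. Everything here is a hypothetical model, not district data: we assume a latent "third-grade readiness" and a first-grade proxy score that is only loosely correlated with it, then watch what raising the bar on the proxy actually does.

```python
import random

random.seed(0)

# Hypothetical model (illustrative only): each student has a latent
# third-grade readiness and a first-grade proxy score that is only
# loosely correlated with it. The rubric sees the proxy, never the readiness.
N = 10_000
students = []
for _ in range(N):
    readiness = random.gauss(0, 1)                 # what the rubric SHOULD measure
    proxy = 0.5 * readiness + random.gauss(0, 1)   # what it ACTUALLY measures
    students.append((readiness, proxy))

READY = 1.0  # latent cutoff for "actually ready for third grade"
ready_total = sum(1 for r, _ in students if r >= READY)

def outcomes(threshold):
    """Qualify students on the proxy; count both kinds of error."""
    qualified = [(r, p) for r, p in students if p >= threshold]
    false_pos = sum(1 for r, _ in qualified if r < READY)   # accelerated, not ready
    false_neg = ready_total - (len(qualified) - false_pos)  # ready, but blocked
    return len(qualified), false_pos, false_neg

for t in (1.0, 1.5, 2.0):  # "raising the bar" on the proxy
    q, fp, fn = outcomes(t)
    print(f"threshold {t}: {q:5d} qualify, {fp:4d} not ready "
          f"({fp/q:.0%} of qualifiers), {fn:4d} ready students blocked")
```

Under these assumptions, each raise of the threshold shrinks the qualifying pool and blocks more ready students, yet a large share of qualifiers still aren't ready, because the proxy never measured readiness in the first place.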

The Evidence: Oak Park’s Rubric Shows This Pattern

Let’s look at Oak Park District 97’s actual rubric requirements and see the fingerprints of this vicious cycle:

The 80% Threshold

Requirement: Students must score 36 out of 46 points (78.3%, though advertised as “80%”) to qualify.

Question: Why 80%? Why not 75%? Or 85%?

Asked three times. Response from Acting Superintendent Patrick Robinson:

“We do not have all of the detailed work readily available to provide.”

Translation: The threshold isn’t based on validation research. It’s a policy choice.

Likely made in response to concerns about acceleration outcomes. Likely raised over time as the district saw struggles.

But: No evidence it was raised based on measuring third-grade readiness more accurately.

The Percentile Hairsplitting

AimsWeb requirement: 93rd percentile = 1 point. 92nd percentile = 0 points.

One percentile point determines whether a student gets any credit.

This is the kind of arbitrary precision that suggests a district trying to control outcomes by raising bars incrementally.

92nd percentile not high enough? Require 93rd. Still seeing struggles? Require 95th.

But: AimsWeb is a screening tool designed to identify struggling students, not measure readiness for grade-skipping.

(For detailed analysis, see: The Wrong Tool: Why Screening Tests Don’t Belong on Acceleration Rubrics)

Making the percentile requirement higher doesn’t make AimsWeb appropriate for this purpose. It just excludes more students based on an inappropriate measure.
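The 93rd-versus-92nd cliff is worth dwelling on, because screening scores are noisy. A minimal sketch, assuming (purely for illustration, not from any AimsWeb reliability data) day-to-day measurement noise of about three percentile points:

```python
import random

random.seed(1)

def aimsweb_points(observed_pct):
    # The cliff as described in the rubric:
    # 93rd percentile = 1 point, 92nd percentile = 0 points.
    return 1 if observed_pct >= 93 else 0

# Assumption for illustration only: test-retest noise of ~3 percentile
# points around a student's true standing. Real reliability figures
# would have to come from the test publisher's technical manual.
NOISE_SD = 3
TRIALS = 10_000
true_pct = 93

passes = sum(
    aimsweb_points(true_pct + random.gauss(0, NOISE_SD)) for _ in range(TRIALS)
)
print(f"A student truly at the 93rd percentile earns the point on "
      f"{passes / TRIALS:.0%} of test days under this noise model.")
```

Under these assumptions the cliff flips a coin for a student sitting exactly at the bar. One percentile point of "precision" is noise, not signal.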

The “Excels” Requirement

Report card scoring: “Excels” = 2 points. “Meets” = 0 points.

First-grade report cards measure mastery of first-grade content:

  • Addition and subtraction within 100
  • Basic place value
  • Comparing lengths
  • Identifying shapes

A student can meet these expectations easily (even while bored) without getting “Excels”—because “Excels” is reserved for exceptional performance in the context of first-grade content.

Meanwhile, that same student might be ready for third grade:

  • Multiplication and division concepts
  • Fractions on number line
  • Area and perimeter
  • Multi-step word problems

Requiring “Excels” on first-grade report cards doesn’t measure third-grade readiness.

It just creates another hurdle based on first-grade performance.

And there’s another problem: The rubric only counts Trimesters 1 and 2. Trimester 3—the final trimester before entering the next grade—doesn’t count at all.

Why? Practical limitations. Teacher summer vacation starts the same week as student summer vacation—right when Trimester 3 report cards are published. Acceleration decisions must be made before teachers leave for summer, so only T1 and T2 grades can be considered.

But think about what this means:

The rubric uses report cards from October through March (Trimesters 1 and 2) to predict readiness for third grade starting in September.

It excludes report cards from April through June (Trimester 3)—the period closest to when the student would actually be entering third grade.

We’re measuring performance from the beginning and middle of first grade, not the end of first grade.

If the goal is to predict third-grade readiness, wouldn’t you want the most recent performance data—the trimester that ends right before summer, closest to when third grade begins?

Instead, the rubric uses the least recent data it can—omitting the trimester that would be most predictive of next-grade readiness.

This is another example of measuring the wrong thing due to practical constraints, not because it’s the right measure.

The Pattern

All of these requirements suggest a district that has:

  1. Seen acceleration struggles
  2. Responded by raising bars on existing measures
  3. Never validated whether those measures predict third-grade readiness

The rubric has gotten more restrictive. It hasn’t gotten more accurate.

The Missing Data

Illinois requires districts to publish acceleration data on the state Report Card. You can slice it by grade, year, and demographics. Oak Park District 97’s numbers are public: 276 seventh graders accelerated in 2025, but only 26 first graders.

But here’s what the Report Card doesn’t show: How many of those accelerations were successful?

How many students who were accelerated later had to de-accelerate because the material was too challenging?

If the district has been raising bars in response to acceleration struggles—tightening thresholds from 70% to 80%, requiring 93rd percentile instead of 92nd, demanding “Excels” on report cards—there should be data showing those struggles.

That data exists somewhere. The district knows which students de-accelerated. They track outcomes. They have the evidence that supposedly justified raising the bar.

But it’s not public.

Which raises an obvious question: If the data showed that most accelerated students succeeded, why tighten the rubric?

And if the data showed many students struggled, was it because they weren’t ready—or because the rubric was measuring the wrong things to begin with?

Without this data, we can’t know whether raising bars actually improved outcomes, or just reduced access.

The Perverse Outcomes

Here’s what happens when you raise bars on the wrong measures:

Outcome 1: Fewer Accelerations Overall

In a district of Oak Park’s size, serving a community known for high academic achievement, first-grade acceleration rates are remarkably low.

(For detailed grade-level data showing the disparity, see: The Acceleration Gap: 276 to 26)

The bar is so high that very few students clear it.

Outcome 2: Still Accelerating Some Wrong Students

Because the rubric still measures first-grade performance, not third-grade readiness:

  • Some students who excel at first grade (high percentiles, “Excels” grades) get accelerated
  • Some of them aren’t ready for third grade
  • District still sees struggles

Raising the bar from 70% to 80% doesn’t fix this.

A student can score 85% on first-grade measures and still not be ready for third-grade content.

Outcome 3: Blocking Ready Students

Meanwhile:

  • Students who ARE ready for third grade (successful in advanced coursework, confirmed by teachers, scoring 99th percentile on above-grade testing)
  • But who got “Meets” instead of “Excels” on first-grade report cards
  • Or who scored 92nd instead of 93rd percentile on screening tests
  • Get blocked

My daughter is one example. Three teachers confirmed third-grade readiness. Successfully completed third-grade sessions. 99th percentile on MAP.

Blocked because she got “Meets” on first-grade report cards and 92nd percentile on AimsWeb.

The high bar blocked a ready student while still letting through unprepared ones.

And research shows the cost of these false negatives is significant. Decades of meta-analyses and longitudinal studies demonstrate that appropriately selected acceleration yields strong academic gains with neutral-to-positive social-emotional outcomes, while withholding acceleration after mastery is demonstrated drives disengagement (Kulik & Kulik; Rogers; Steenbergen-Hu, Makel, & Olszewski-Kubilius; Colangelo/Assouline’s A Nation Deceived/Empowered; Gross; Lubinski & Benbow/SMPY).

Districts often focus on avoiding false positives (accelerating unprepared students). But when your rubric measures the wrong things, raising the bar doesn’t reduce false positives—it just increases false negatives.

The Fundamental Problem

You can’t fix a bad measure by raising the bar.

If you’re using a thermometer to measure distance, requiring 101°F instead of 100°F doesn’t make your measurement more accurate.

You’re still measuring the wrong thing.

Why Raising Bars Doesn’t Work

The district’s logic seems to be:

“If accelerated students struggle, we need to identify only the MOST exceptional students at current grade level.”

But this assumes:

  • Exceptional first-grade performance = third-grade readiness
  • More exceptional first-grade performance = more confident third-grade readiness

Neither assumption is valid.

The Two Questions (Again)

Question 1: Does this student perform exceptionally at first-grade math?

  • First-grade report cards measure this
  • First-grade screening tools measure this
  • High percentile requirements measure this

Question 2: Is this student ready for third-grade math?

  • Above-grade testing measures this
  • Performance in third-grade coursework measures this
  • Teacher observations from advanced work measure this

Raising the bar on Question 1 doesn’t answer Question 2 more accurately.

A student can score 90% on Question 1 measures and not be ready (false positive). A student can score 60% on Question 1 measures and be completely ready (false negative).

The problem is which question you’re asking, not how high you set the bar.

What Would Break the Cycle

The solution isn’t to lower the bar back to 70%.

The solution is to measure the right things.

1. Ask the Right Question

Don’t ask: “Does this student perform exceptionally at first grade?”

Ask: “Is this student ready for third grade?”

2. Use the Right Measures

For third-grade readiness:

Above-grade testing

  • MAP out-of-level (test at grade they’d enter)
  • Third-grade placement assessments
  • Measures: Can they demonstrate mastery of second-grade content and readiness for third?

Performance in advanced curriculum

  • Trial placements with third-grade classes
  • Advanced enrichment performance (SMART sessions, etc.)
  • Measures: How do they actually perform with third-grade content?

Teacher observations from advanced work

  • Input from teachers who’ve worked with student on above-grade content
  • Not based on grade-level report cards
  • Measures: Do educators who’ve seen advanced performance recommend acceleration?

Not appropriate for third-grade readiness:

  • ❌ First-grade report cards (measure first-grade mastery, not third-grade readiness)
  • ❌ First-grade screening tools (designed for intervention, not acceleration)
  • ❌ Percentile ranks on grade-level tests (ceiling effects, wrong purpose)

3. Validate the Rubric

Track outcomes:

  • How do accelerated students perform in advanced classes?
  • Is there a correlation between rubric scores and acceleration success?
  • Do students who score 78% succeed more often than those who score 75%?

If no validation data exists: The thresholds are arbitrary, not evidence-based.

If validation shows rubric doesn’t predict success: The rubric measures the wrong things.
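The validation step is not exotic statistics. With outcome data (which the district presumably has), a first-pass check is a simple correlation between rubric score and acceleration success. A sketch on hypothetical records, with made-up numbers purely to show the calculation:

```python
import statistics

# Hypothetical records (illustrative only): (rubric score out of 46,
# 1 if the accelerated student succeeded, 0 if they de-accelerated).
records = [(44, 1), (41, 1), (39, 0), (38, 1), (37, 0),
           (36, 1), (36, 0), (40, 1), (42, 0), (45, 1)]

scores = [s for s, _ in records]
p = statistics.mean(o for _, o in records)                    # overall success rate
mean_success = statistics.mean(s for s, o in records if o == 1)
mean_failure = statistics.mean(s for s, o in records if o == 0)
sd = statistics.pstdev(scores)

# Point-biserial correlation: does a higher rubric score go with success?
r_pb = (mean_success - mean_failure) / sd * (p * (1 - p)) ** 0.5
print(f"point-biserial r = {r_pb:.2f}")
```

A correlation near zero on real outcome data would mean the rubric score carries little information about acceleration success, which is exactly the validation question the district says it cannot answer.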

4. Focus on Predictive Validity, Not Arbitrary Precision

False precision: Requiring 93rd percentile instead of 92nd on a screening tool not designed for this purpose

True validity: Using measures that actually predict successful acceleration

Better to use the right measure with a reasonable threshold than the wrong measure with an impossibly high one.

The Irony

Districts think raising bars demonstrates rigor.

“We have high standards. Only the most exceptional students qualify.”

But if you’re measuring the wrong things:

  • High bars don’t equal rigor
  • They equal false precision
  • You block ready students
  • While still accelerating unprepared ones
  • And you never fix the fundamental problem

Rigor isn’t about how high the bar is. It’s about whether you’re measuring what actually matters.

The Way Forward

For Oak Park Elementary School District 97:

  1. Validate the rubric

    • Track accelerated student outcomes
    • Determine if rubric scores predict success
    • Identify which components actually matter
  2. Measure third-grade readiness, not first-grade exceptionalism

    • Give more weight to above-grade testing
    • Credit performance in advanced curriculum
    • Value teacher confirmations from advanced work
    • Reduce emphasis on grade-level report cards and screening tools
  3. Break the feedback loop

    • If accelerated students struggle, ask: “Are we measuring readiness accurately?”
    • Don’t ask: “Should we raise the bar on our current measures?”
    • Fix the measurement, not just the threshold

For other districts:

  • Before raising bars, validate that current measures predict success
  • Track outcomes over time—has tightening rubrics improved acceleration success rates?
  • Remember: Fewer accelerations ≠ better accelerations if you’re measuring the wrong things
  • Ask whether your elementary and middle school acceleration rates make sense relative to each other

The Bottom Line

Oak Park District 97’s rubric shows clear signs of a system trapped in its own feedback loop.

The cycle:

  1. Rubric measures first-grade performance, not third-grade readiness
  2. Some accelerated students struggle
  3. District raises bar on first-grade measures (80% threshold, 93rd percentile requirements, “Excels” on report cards)
  4. Still measuring wrong thing
  5. Still get mixed results
  6. Raise bar again
  7. Repeat

The outcomes:

  • Fewer students qualify overall
  • Some unprepared students still get through (rubric measures wrong thing)
  • Ready students get blocked (high bars on wrong measures)
  • District never validates whether higher bars improve outcomes

The solution:

  • Measure third-grade readiness directly
  • Not first-grade exceptionalism with increasingly high bars
  • Validate that rubric predicts successful acceleration
  • Break the feedback loop

The irony:

Districts think raising bars demonstrates rigor and high standards.

Actually, it demonstrates that they’re measuring the wrong things and don’t know how to fix it.

True rigor is measuring what actually matters.

Not measuring the wrong things with false precision.


This is part of an ongoing series documenting one family’s experience with gifted education acceleration in Oak Park Elementary School District 97. All facts are based on emails, rubric documents, and official communications obtained through public records requests and direct correspondence with district officials.

Names of district administrators and principals are used as they are public officials performing official duties. Teacher and staff names have been removed to protect privacy. Student names are withheld to protect privacy.