When Ready Isn't Enough: How Rubrics Measure the Wrong Things

November 16, 2025

My daughter didn’t qualify for math acceleration from first grade to third grade.

Not because she wasn’t ready for third grade. She was.

She didn’t qualify because the rubric measured how well she performed at first grade math—not whether she was ready for third grade math.

These are two entirely different questions.

The Evidence: Ready for Third Grade

By the end of first grade, here’s what my daughter demonstrated:

Advanced Coursework: My daughter successfully completed SMART advanced math sessions at both the 2nd grade and 3rd grade levels. Both programs placed her with older students working on material a grade ahead. Her teachers described her work as “appropriate for her level”—she was thriving with the advanced content, not struggling.

Teacher Confirmations: Three teachers at her school confirmed she was working comfortably at 3rd grade level: a 3rd grade teacher, a 4th grade teacher, and an enrichment teacher. These weren’t casual observations. All three teachers worked with her directly on 3rd grade content and confirmed her readiness.

Above-Grade Testing: On the NWEA MAP Math assessment, my daughter scored in the 99th percentile nationally (RIT 205). MAP is an adaptive test specifically designed to measure above-grade-level performance. She scored higher than 99% of first graders nationwide. The test adjusts difficulty based on responses, measuring performance several grade levels above the student’s actual grade.

What Third Grade Math Actually Requires:

First grade Eureka Math focuses on addition and subtraction within 100, basic place value, and foundational concepts. Third grade Eureka Math introduces entirely different mathematical thinking: multiplication and division concepts (not just memorization), fractions as numbers on the number line, area (understanding multiplication as area), multi-step word problems requiring strategic planning, and properties of operations (commutative, associative, distributive).

My daughter was doing this work. With 3rd and 4th grade teachers. Successfully. She wasn’t a borderline case. She had direct evidence of readiness for the grade she’d be entering.

What the Rubric Said

The district’s acceleration rubric scored my daughter at 29 out of 46 points (63%).

She needed 36 points (78%, advertised as “80%”) to qualify.

She fell short by 7 points.

Here’s where those points went—and what it reveals about what the rubric actually measures.

Where the Points Went

1. Report Card “Meets” = 0 Points

My daughter’s first grade report card showed “Meets” (M) for most math standards.

In most educational contexts, “Meets grade-level standards” means successful. It means the child has mastered the content expected for their grade level.

Rubric score: 0 points.

The rubric only awards points for “Excels” (E) grades. “Meets” is treated the same as “Below.”

But here’s what first grade report cards actually measure: mastery of first grade content.

First grade math (per Eureka Math/Common Core standards) covers addition and subtraction within 100, basic place value (tens and ones), comparing lengths, and identifying and partitioning shapes. A student can meet these standards easily—even while bored—without showing the exceptional performance for which teachers reserve “Excels” on grade-level work.

Meanwhile, that same student might be mastering multiplication and division, fractions on the number line, multi-digit operations, and area and perimeter. None of which appears on a first grade report card.

The rubric penalized my daughter for not getting “Excels” at content she’d already moved beyond—while ignoring her actual performance in the advanced curriculum she was ready for.

2. AimsWeb Math: 92nd Percentile = 0 Points

On the spring AimsWeb+ Math assessment, my daughter scored in the 92nd percentile nationally.

She performed better than 92% of first graders across the country.

Rubric score: 0 points.

To earn even 1 point, she needed the 93rd percentile.

Meanwhile, on MAP Math—the adaptive test designed to measure above-grade performance—she scored 99th percentile and received full credit (7 points).

Same child. Two different tests. One gave her 0 points for exceptional performance; the other recognized it.

(For a detailed analysis of why AimsWeb is the wrong tool for acceleration decisions, see: The Wrong Tool: Why Screening Tests Don’t Belong on Acceleration Rubrics)

3. The Missing Spring Assessment

The spring 2025 AimsWeb+ row on my daughter’s rubric was blank.

Not scored. Not administered. Just empty.

That row was worth up to 5 points.

When I asked about it in June, I was told parents were informed in the fall that students scoring above a certain threshold would skip the spring test. But there was no clarification about whether my daughter was exempt due to strong performance, or whether the test simply wasn’t administered.

A component worth 5 points was missing from the evaluation—with no explanation.

The Two Questions

Here’s the fundamental problem with this rubric:

It conflates two entirely different questions.

Question 1: Does this student excel at first grade math?

This question measures addition and subtraction within 100, basic place value to 100, grade-level fluency and performance, and “Excels” marks on report cards for exceptional first-grade work. You’d assess it using first grade report cards, grade-level assessments (AimsWeb, end-of-year tests), and teacher ratings of first-grade performance.

Question 2: Is this student ready for third grade math?

This question measures understanding of multiplication and division concepts, readiness for fractions on the number line, ability to solve multi-step word problems, conceptual readiness for area and properties of operations, and whether they can handle the actual third-grade curriculum. You’d assess it using above-grade testing (MAP, out-of-level assessment), performance in actual third-grade coursework, teacher observations from advanced work, and trial placement or advanced enrichment performance.

These are not the same question.

A student can excel at first grade (Question 1) and not be ready for third grade (Question 2).

A student can meet expectations at first grade—because the content is too easy—while being completely ready for third grade (Question 2).

The rubric assumes Question 1 = Question 2. It doesn’t.

What the Rubric Actually Measures vs. What It Should Measure

What the Rubric Actually Measured (Question 1):

The rubric asked:

Did she get “Excels” at first grade content? No, she got “Meets” (0 points).

Did she hit the 93rd percentile on grade-level fluency? No, 92nd percentile (0 points).

Did she complete all grade-level assessments? Unknown, test not administered (0 points).

Total focus: first-grade performance measures.

What the Rubric Should Have Measured (Question 2):

The rubric should have asked:

Did she succeed in 3rd grade math sessions? Yes, she completed SMART 2nd and 3rd grade sessions successfully—but the rubric didn’t assess this.

Did teachers working with her on 3rd grade content confirm readiness? Yes, three teachers confirmed 3rd-grade readiness—but the rubric gave this minimal weight, easily outweighed by report card “Meets.”

Did she score well on tests designed to measure advanced performance? Yes, 99th percentile on MAP Math—and the rubric counted this, but gave it equal weight to grade-level screening tools.

The rubric gave the most weight to Question 1 (first-grade performance) and minimal weight to Question 2 (third-grade readiness).

My Daughter’s Scores: Strong Evidence for the Wrong Question

Question 1: Does she excel at first grade math?

Report cards showed “Meets” (not Excels). AimsWeb showed 92nd percentile (not 93rd+). The rubric’s conclusion: Not exceptional at first grade.

Question 2: Is she ready for third grade math?

She successfully completed 2nd and 3rd grade sessions in advanced coursework. Three teachers confirmed 3rd grade readiness. She scored 99th percentile on MAP (adaptive, above-grade testing). The rubric’s conclusion: …measured primarily Question 1 instead.

She had strong evidence for readiness for third grade (the question that matters for acceleration). She had weak scores on exceptional performance at first grade (a different question entirely). The rubric blocked her acceleration based on the wrong question.

The Curriculum Gap: Why This Matters

Let me show you concretely why these are different questions.

First Grade Eureka Math, Module 4: Place Value, Comparison, Addition and Subtraction to 40. Students work with numbers up to 40, place value (tens and ones), addition and subtraction within this range, and comparison strategies.

Third Grade Eureka Math, Module 4: Multiplication and Area. Students work with multiplication as area (understanding rows × columns), the distributive property (8 × 6 = (5 × 6) + (3 × 6)), connecting multiplication to geometric concepts, and multi-step problems requiring nested operations.

These require fundamentally different mathematical thinking.
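For readers who want to see the third-grade example worked out, here is a minimal sketch of the distributive-property split quoted above (the numbers come from the module description; the area-model framing is the standard interpretation):

```python
# Third-grade thinking: compute 8 x 6 via the distributive property,
# splitting 8 into 5 + 3, as in 8 x 6 = (5 x 6) + (3 x 6).
# Geometrically, this cuts an 8-by-6 area model into a 5-by-6
# rectangle and a 3-by-6 rectangle covering the same area.
whole = 8 * 6
split = (5 * 6) + (3 * 6)
print(whole, split)  # 48 48
```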

A student can master Module 4 at first grade—addition and subtraction to 40, earning “Meets” on their report card—while simultaneously being ready for Module 4 at third grade—multiplication as area, distributive property.

The rubric conflates these: it assumes mastering first-grade addition means readiness for third-grade multiplication.

Or worse: It assumes that NOT getting “Excels” on first-grade addition means NOT ready for third-grade multiplication.

Neither assumption is valid.

The Mathematics of the Impossible

Let’s look at what my daughter would have needed to qualify under this rubric:

Here’s what she had:

MAP Fall: 7 points (95th–99th percentile)
MAP Winter: 7 points (99th percentile)
Report Cards: 0 points (“Meets,” not “Excels”)
AimsWeb: 0 points (92nd, not 93rd percentile)
Spring Assessment: 0 points (not administered)
Teacher surveys plus placement test: ~15 points (estimated)

Total: 29 points (63%). She needed 36 points (78%).

To reach 36 points, she would have needed some combination of “Excels” on report cards instead of “Meets” (+2–4 points), the 93rd percentile on AimsWeb instead of the 92nd (+1 point), and credit for the spring assessment (+3–5 points)—or else perfect teacher survey scores, higher placement test scores, and hope that the math worked out.
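For readers who want to check the arithmetic, here is a minimal sketch of the scoring as described above. The point values come from the rubric breakdown in this post; the ~15-point teacher/placement figure is the author’s estimate, not an official district number:

```python
# Reconstruction of the rubric arithmetic described above.
# "teacher_and_placement" is an estimate (~15 points), not an
# official district figure.
scores = {
    "map_fall": 7,                # 95th-99th percentile
    "map_winter": 7,              # 99th percentile
    "report_cards": 0,            # "Meets," not "Excels"
    "aimsweb": 0,                 # 92nd percentile; 93rd needed for 1 point
    "spring_assessment": 0,       # not administered
    "teacher_and_placement": 15,  # estimated
}

total_possible = 46
threshold = 36  # advertised as "80%," actually 36/46 = 78.3%

total = sum(scores.values())
print(f"Scored {total}/{total_possible} ({total / total_possible:.0%})")
print(f"Needed {threshold}/{total_possible} ({threshold / total_possible:.0%})")
print(f"Shortfall: {threshold - total} points")
```

Whatever the teacher/placement estimate, the structure is the same: the components that measured third-grade readiness were already maxed out, and the gap came entirely from first-grade measures.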

But here’s what’s missing from this calculation: none of these measures assess readiness for third grade math.

She could have gotten “Excels” on first-grade report cards and still not been ready for multiplication, division, and fractions.

She could have scored 99th percentile on AimsWeb first-grade fluency and still not understood area or properties of operations.

The rubric wasn’t measuring third-grade readiness. It was measuring first-grade exceptionalism.

What This Reveals About Rubric Design

When a rubric produces outcomes that contradict expert teacher judgment (three teachers confirmed 3rd grade readiness), direct observation (successful performance in 3rd grade coursework), and above-grade assessment (99th percentile on adaptive test), the problem isn’t the student. It’s the rubric.

Good assessment design asks: “What do we need to know to make this decision?” For acceleration decisions, we need to know: Can this student handle the advanced curriculum they’d be entering? Do they demonstrate conceptual readiness for that grade level? Will they thrive with the actual content they’d encounter?

My daughter answered all three questions affirmatively. She successfully completed 3rd grade coursework (SMART sessions). She was confirmed ready by teachers working with her on 3rd grade content. She scored 99th percentile on adaptive testing measuring above-grade performance. But the rubric wasn’t designed to answer those questions.

It was designed to measure exceptional performance at current grade level (report card “Excels”), high percentile ranks on grade-level screening tools (AimsWeb 93rd+), and hitting arbitrary thresholds across multiple measures. It measured Question 1 (first-grade exceptionalism) instead of Question 2 (third-grade readiness).

The 80% Threshold: Where Did It Come From?

The district requires 80% (actually 78%, or 36/46 points) to qualify.

Why 80%? Why not 75%? Or 85%?

I asked this question three times:

August 25, 2025: “Please share the validation the district used to set these thresholds.”

September 4, 2025: “This is the third request for the research supporting the rubric’s thresholds. You have described them as valid; please share the sources.”

Response from Acting Superintendent Patrick Robinson (September 11, 2025):

“We do not have all of the detailed work readily available to provide.”

No research. No validation. No explanation.

The threshold is a policy choice, not a research-based standard.

What Should Change

For Oak Park District 97:

First, measure the right question: assess readiness for the grade students would enter, not exceptional performance at the current grade.

Second, fix report card scoring: “Meets” at grade level doesn’t mean not ready for acceleration, and “Excels” at grade level doesn’t prove advanced readiness. A bored student might “Meet” expectations while being ready for acceleration.

Third, weight evidence appropriately: give high weight to above-grade testing, performance in advanced curriculum, and teacher confirmations from advanced work; give low weight to grade-level screening tools and report cards measuring current-grade mastery.

Fourth, validate the rubric: does the rubric score predict successful acceleration? Track outcomes—do students who score 78%+ actually succeed better than those at 75%? If no data exists, the thresholds are arbitrary, not research-based.

Fifth, ask the right question: not “Does this student excel at first grade?” but “Is this student ready for third grade?”

For all districts using acceleration rubrics:

Remember that measuring the wrong thing precisely is worse than measuring the right thing imperfectly.

Validate that your rubric actually predicts successful acceleration.

Separate the questions: current-grade exceptionalism doesn’t equal next-grade readiness.

Use the right tools: above-grade testing and advanced curriculum performance matter more than grade-level report cards.

The Bottom Line

My daughter was ready for third grade math. The evidence was overwhelming: she successfully completed 3rd grade coursework, was confirmed ready by three teachers who worked with her on 3rd grade content, and scored 99th percentile on the test designed to measure above-grade performance.

The rubric said she wasn’t qualified. Because she got “Meets” instead of “Excels” on her first-grade report card. Because she scored 92nd percentile instead of 93rd on a screening test. Because she didn’t demonstrate exceptional first-grade performance.

The rubric measured the wrong question. And my daughter paid the price.


Related Posts:

Coming next: The Feedback Loop: How measuring the wrong things creates a district-wide crisis that raising the bar can’t fix.


This is part of an ongoing series documenting one family’s experience with gifted education acceleration in Oak Park Elementary School District 97. All facts are based on emails, rubric documents, and official communications obtained through public records requests and direct correspondence with district officials.

Names of district administrators and principals are used as they are public officials performing official duties. Teacher and staff names have been removed to protect privacy. Student names are withheld to protect privacy.