The Wrong Tool: Why Screening Tests Don't Belong on Acceleration Rubrics
Imagine using a thermometer to measure distance. Or a bathroom scale to measure time. You’d get numbers, sure. But those numbers would be meaningless because you’re using the wrong tool for the job.
That’s essentially what happens when school districts put screening tools like AimsWebPlus on acceleration rubrics.
What AimsWebPlus Actually Is
AimsWebPlus is a universal screening and progress monitoring tool created by Pearson. Its purpose, according to Pearson’s own materials, is to serve as a “risk-screening/MTSS tool.”
Let me translate that:
- Risk-screening: Identifying students at risk of academic difficulty
- MTSS: Multi-Tiered System of Supports—the framework for providing intervention to struggling students
AimsWebPlus helps schools answer questions like:
- Which students need extra support in math?
- Is this intervention working for students who are struggling?
- Are students building foundational fluency skills?
It’s a tool for identifying students who need help, not students who need challenge.
What AimsWebPlus Is NOT Designed For
AimsWebPlus is not designed to:
- Identify students ready for grade-level acceleration
- Differentiate performance at the upper end of the ability spectrum
- Measure advanced conceptual understanding
- Assess readiness for above-grade-level curriculum
This isn’t my opinion. This is basic assessment design.
Screening tools are built for breadth, not depth. They’re quick, repeatable measures that can be administered to all students to flag who needs additional assessment or intervention. They’re not comprehensive placement measures.
And yet, Oak Park Elementary School District 97 uses AimsWebPlus Math scores as a requirement on its acceleration rubric—where scoring below the 93rd percentile earns you zero points, no matter how strong your other evidence.
The Parent Communication Barrier
When I tried to understand how Pearson intended their tool to be used for acceleration decisions, I did what any parent would do: I contacted the company that created the assessment.
August 21, 2025: Pearson acknowledged my inquiry about whether aimswebPlus is designed as a “risk-screening/MTSS tool” rather than an “advanced-placement measure.”
August 26, 2025: After consulting internally for five days, Pearson responded with their policy: they don’t communicate with parents. All inquiries must go through the school or district.
The same district I was disputing with.
So I couldn’t get independent expert guidance from the company that created the assessment being used to deny my daughter’s acceleration.
This creates a fundamental information asymmetry:
- Districts have access to Pearson’s detailed interpretive guidance, technical manuals, and professional development resources
- Parents are told to ask the district—the party making decisions we’re questioning
When the stakes are whether a child gets appropriate academic challenge, this barrier matters.
How It Was Used: My Daughter’s Case
Here’s what happened when Oak Park District 97 applied AimsWebPlus to my daughter’s first-grade acceleration application:
Her Performance:
- NWEA MAP Math: 99th percentile (RIT 205)
- Classroom work: Confirmed by three teachers as working comfortably at 3rd grade level
- Advanced sessions: Successfully completed SMART 2nd and 3rd grade math enrichment
- AimsWebPlus Math: 92nd percentile
Rubric Scoring:
- MAP Math (99th percentile): 7 points
- AimsWebPlus (92nd percentile): 0 points
Let that sink in: The same child, two different assessments, wildly different rubric treatment.
And it gets worse.
When I reviewed the testing records, I discovered that not all the subtests were administered. The first-grade AimsWebPlus battery includes:
- NCF-P (Number Comparison Fluency - Pairs): Administered
- MFF-T (Math Fact Fluency - Triads): NOT administered
No explanation was provided for why one subtest was omitted.
What NCF-P Actually Measures (And Why It’s Absurd)
Let me explain what NCF-P is, because this detail matters.
NCF-P (Number Comparison Fluency - Pairs) is a one-minute timed test where students look at pairs of numbers and circle which one is bigger.
That’s it.
“Which is bigger: 7 or 3?”
“Which is bigger: 15 or 12?”
“Which is bigger: 24 or 31?”
Students get 60 seconds to complete as many comparisons as possible. The score is how many correct comparisons they can make in one minute.
This measures number magnitude comparison — a foundational early numeracy skill typically mastered in kindergarten and early first grade.
Now let’s remember what we were actually trying to determine: Is my daughter ready to accelerate past 2nd grade math into 3rd grade math?
Third grade math includes:
- Multi-step word problems
- Multi-digit multiplication and division
- Understanding of fractions
- Area and perimeter
- Problem-solving strategies
My daughter was demonstrating proficiency in all of these areas. She was working comfortably with 3rd and 4th grade teachers on problems like:
“Sarah has 3 boxes. Each box contains 4 bags. Each bag contains 6 marbles. How many marbles does Sarah have in total?”
This requires:
- Reading comprehension
- Multi-step planning
- Multiplication concepts (not just memorization)
- Keeping track of nested quantities
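To make the nested arithmetic concrete, here is the marble problem worked through step by step (a worked check, not part of any district rubric):

```python
boxes = 3
bags_per_box = 4
marbles_per_bag = 6

# Step 1: total bags across all boxes
bags = boxes * bags_per_box        # 3 boxes x 4 bags = 12 bags
# Step 2: total marbles across all bags
marbles = bags * marbles_per_bag   # 12 bags x 6 marbles = 72 marbles
print(marbles)
```

Solving it requires holding the intermediate quantity (12 bags) in mind before the second multiplication, which is exactly the kind of multi-step planning a one-minute comparison test never touches.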
But the deciding factor was: How fast can she circle which number is bigger in a pair?
She scored in the 92nd percentile, meaning she performed better than 92% of first graders nationwide at circling the bigger number in 60 seconds.
The rubric gave her zero points.
To get even 1 point, she would have needed the 93rd percentile. To get the maximum 5 points, she would have needed the 98-99th percentile.
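Based on the cut scores described above (92nd percentile = 0 points, 93rd = 1 point, 98th-99th = 5 points), the rubric's cliff can be sketched as a simple function. The bands between the 93rd and 98th percentiles aren't documented in this post, so that middle region is a hypothetical placeholder:

```python
def aimsweb_rubric_points(percentile: int) -> int:
    """Sketch of the rubric's scoring cliff for the AimsWebPlus row.

    Only the 0-, 1-, and 5-point cut scores are documented here;
    intermediate bands are hypothetical placeholders.
    """
    if percentile >= 98:
        return 5
    if percentile >= 93:
        return 1   # documented minimum; the real rubric may award more in this band
    return 0       # anything below the 93rd percentile earns nothing

print(aimsweb_rubric_points(92))  # 0 -- one percentile point below the cliff
print(aimsweb_rubric_points(93))  # 1
```

The point of the sketch is the discontinuity: a single percentile point separates "zero evidence of readiness" from "some evidence of readiness."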
The Disconnect Is Staggering
Think about what this means:
A child who can solve multi-step word problems involving nested multiplication…
…who has successfully completed advanced math sessions with students two grade levels ahead…
…who has been confirmed by three expert teachers as working comfortably at 3rd grade level…
…who scored 99th percentile on MAP Math, an adaptive test designed to measure above-grade performance…
…was denied acceleration because she didn’t circle “which number is bigger” fast enough.
She was in the 92nd percentile. Not the 93rd.
One percentile point on a timed test of kindergarten-level number comparison determined she wasn’t ready for 3rd grade math.
Why Speed on Basic Tasks Doesn’t Predict Advanced Readiness
Here’s what the 92nd percentile NCF-P score tells us:
My daughter can accurately identify which number is larger. She understands number magnitude. She has the foundational skill.
What it doesn’t tell us:
- Can she handle complex multi-step problems?
- Does she understand mathematical concepts deeply?
- Can she apply strategies to novel problem types?
- Is she ready for the conceptual demands of 3rd grade curriculum?
For those questions, we have actual evidence:
- Her performance in SMART 3rd grade sessions: ✅ Yes
- Her MAP Math score (99th percentile on adaptive test): ✅ Yes
- Her teachers’ direct observations of 3rd grade work: ✅ Yes
But the rubric weighted a 1-minute test of “circle the bigger number” as heavily as all of that evidence combined.
And because she was in the 92nd percentile instead of the 93rd, the rubric said: 0 points.
The Missing Test Makes It Even Worse
Remember that MFF-T (Math Fact Fluency - Triads) was never administered?
That’s another timed fluency test, but it measures recall of basic addition and subtraction facts (like 7 + 5 = ?).
So at the moment the decision was made: the test that was given (NCF-P) may not have been properly administered, because the timing component was never made clear to the student. The test that should have been given (MFF-T) wasn't administered at all.

And from that incomplete, questionable data, the district concluded that 92nd percentile at circling bigger numbers = not ready for 3rd grade math. Despite everything else suggesting the opposite.
What I Requested
I asked the district to:
- Re-administer NCF-P with proper standardization (emphasizing speed, which wasn’t clear during the original administration)
- Administer the missing MFF-T subtest so we’d have complete data
The district’s response, from Acting Superintendent Patrick Robinson: “We did not feel that the data collected for Sonia warranted a retest.”
He also admitted: “We do not have all of the detailed work readily available to provide.”
So:
- A screening tool designed to identify struggling students was used to block acceleration for a high-achieving student
- Some subtests were omitted without explanation
- Testing conditions documentation wasn’t available
- The district refused to complete the assessment or provide documentation
- A 92nd percentile score = zero points on the rubric
Why This Is the Wrong Tool
1. Ceiling Effects
Screening tools are designed to have a floor (to identify students very far below grade level) but often don’t have an adequate ceiling (to differentiate among high-achieving students).
When my daughter scored in the 92nd percentile on AimsWebPlus but the 99th percentile on MAP Math, that's a signal: the adaptive test still has room to differentiate at the top of the distribution, while the fixed-form screener is running out of it.
MAP Math is adaptive—it adjusts difficulty based on student responses and can measure several grade levels above the student’s actual grade. AimsWebPlus is a fixed-form screener—it gives all first graders the same test regardless of their ability level.
Expecting a screening tool to differentiate between students at the 92nd and 99th percentile is like expecting a thermometer that only reads up to 100°F to tell you the difference between 100°F and 110°F. The tool doesn’t go that high.
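One way to picture a ceiling effect: if an instrument can only register ability up to some maximum, every student above that ceiling collapses to the same observed score. The numbers below are purely illustrative, not AimsWebPlus's actual scale:

```python
def observed_percentile(true_ability: float, ceiling: float = 95.0) -> float:
    """Illustrative ceiling effect: the instrument cannot report above its ceiling."""
    return min(true_ability, ceiling)

# Two students with very different true ability look identical above the ceiling:
print(observed_percentile(96.0))   # 95.0
print(observed_percentile(99.9))   # 95.0 -- the tool can't tell them apart
print(observed_percentile(80.0))   # 80.0 -- below the ceiling, scores still differentiate
```

This is why a tool built to find struggling students can be perfectly accurate in its intended range and still be uninformative at the top of the distribution.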
2. Speed vs. Conceptual Understanding
AimsWebPlus Math subtests are timed fluency measures. They measure how quickly students can perform basic operations within a short time window.
This is useful for screening—students who can’t perform basic operations quickly may need intervention on foundational skills.
But speed is not the same thing as advanced mathematical thinking.
A student who can solve multi-step word problems, understand multiplication concepts beyond rote memorization, and work comfortably with 3rd-grade curriculum might not be the fastest at number comparison tasks under time pressure.
Acceleration decisions should be based on conceptual readiness for advanced curriculum, not just fluency with grade-level skills.
3. Incomplete Administration
The fact that the MFF-T subtest wasn’t administered raises a critical question: If the tool is so important that 92nd percentile = zero points, why wasn’t it administered completely?
Either:
- The assessment is critical to measuring readiness (in which case it should have been administered completely), or
- The assessment isn’t actually measuring what’s important for acceleration decisions
You can’t have it both ways.
4. No Validation Research
Illinois law requires that acceleration practices be “research-based” (105 ILCS 5/14A-32(a)(4)).
When I asked—three times—for the research validating the use of AimsWebPlus for acceleration decisions and the specific percentile thresholds (92nd percentile = 0 points, 93rd = 1 point), Acting Superintendent Robinson admitted:
“We do not have all of the detailed work readily available to provide.”
So there’s no evidence that:
- AimsWebPlus is an appropriate tool for acceleration decisions
- The 93rd percentile threshold is validated for identifying students ready for advanced work
- A 1-percentile-point difference (92nd vs. 93rd) should be decisive in acceleration decisions
What Should Be Used Instead?
If you want to assess whether a student is ready for grade-level acceleration in math, use tools designed for that purpose:
1. Above-Level Testing
- Out-of-level MAP Math: Test student at the grade level they’d be entering
- End-of-year placement tests: From the grade they’d be skipping (which District 97 does use)
- Measures: Can this student demonstrate mastery of the content they’d be skipping?
2. Performance in Advanced Curriculum
- Trial placements: How does the student perform when actually doing the advanced work?
- Enrichment performance: How did they do in advanced math sessions?
- Measures: Does this student thrive when given grade-level-ahead material?
3. Teacher Recommendations Based on Advanced Work
- Not grade-level report cards (which measure meeting grade-level standards)
- Observations from advanced coursework: How did teachers see the student perform in accelerated settings?
- Measures: Do educators who’ve worked with this student in advanced contexts recommend acceleration?
4. Portfolio Evidence
- Student work samples: Demonstrating advanced mathematical thinking
- Problem-solving: Multi-step word problems, conceptual understanding
- Measures: Can this student demonstrate the kind of thinking required for advanced work?
Notice what’s not on this list: A timed screening tool designed to identify students who need intervention.
The Real Cost
Here’s what happens when you use the wrong tool:
Students who should accelerate don’t.
My daughter:
- Scored 99th percentile on MAP Math (the adaptive test designed to measure above-grade performance)
- Successfully completed 2nd and 3rd grade math enrichment sessions
- Was confirmed by multiple teachers as working comfortably at 3rd grade level
- Scored zero points for 92nd percentile on a screening tool
She was denied acceleration not because she wasn't ready, but because a screening tool designed for a completely different purpose said she scored at the 92nd percentile instead of the 93rd.
One percentile point. On the wrong tool. Used for the wrong purpose.
The Questions That Need Answers
To Oak Park Elementary School District 97:
- What is the research validating the use of AimsWebPlus for acceleration decisions?
- What is the research supporting the 93rd percentile threshold?
- Why was the MFF-T subtest not administered?
- What documentation exists showing that the screening was properly standardized?
- How does a 1-percentile-point difference on a screening tool justify blocking acceleration for a student demonstrating readiness through multiple other measures?
To other districts using screening tools on acceleration rubrics:
- Have you validated that your screening tools are appropriate for identifying students ready for acceleration?
- Are you using tools designed to identify struggling students to make decisions about advanced students?
- Do parents have access to independent expert interpretation of assessments, or must they rely solely on the district?
- Are you confusing “data-driven” with “using the right data for the right purpose”?
The Bigger Picture
This isn’t just about AimsWebPlus. This is about a broader pattern of using data without asking whether it’s the right data for the question being answered.
Screening tools have value. They’re excellent for what they’re designed to do: quickly identifying students who need additional support or intervention.
But using them to block acceleration for high-achieving students isn’t rigorous. It’s not research-based. It’s not validated.
It’s using a thermometer to measure distance and then claiming the numbers prove something meaningful.
When we use the wrong tools, we get the wrong answers. And kids pay the price.
Related Posts:
- The Acceleration Gap: 276 to 26
- The Glitch: Two Calculation Errors, Both Caught by Parent
- The Ghost Rubric: When Requirements Reference Tests That Don’t Exist
Next in series: the next chapter in this investigation, coming next week.
This is part of an ongoing series documenting one family’s experience with gifted education acceleration in Oak Park Elementary School District 97. All facts are based on emails, rubric documents, and official communications obtained through public records requests and direct correspondence with district officials.
Names of school officials (principals, district administrators) are used as they are public officials performing official duties. Student and parent names are withheld to protect privacy.