For what it’s worth, here are my responses to questions that were posed to testing committee members this week in preparation for our next session:
With a focus on designing an ideal system of state assessments, what are the key issues for you going forward? What areas need further exploration?
Is it possible to use existing diagnostic assessments for summative and accountability purposes as well? For example, ACT Aspire, which provides a range of results correlated to predicted ACT college- and career-ready outcomes, could be used in a summative system where spring-to-spring scores are compared and credit is given for growing students, similar to the construct of the K-3 literacy measure on the LRC. These data could also be disaggregated by subgroup for ESEA accountability purposes.
Reducing the overall amount of testing, while not tying districts' hands in administering diagnostic assessments that inform teaching and learning, is the gold-standard outcome for the testing committee. Finding a suite of assessments that can serve multiple purposes, allowing districts to choose their tool and administer a single assessment whose data can be used in a variety of ways, should be the focus moving forward.
The frustration is that the data from PARCC/OCBA will in no way be able to inform instructional decision-making for students at the point when teachers are equipped to make a difference with them. The key to ending over-testing is to admit that the argument of 'testing the full range of the standards' is a false choice that inappropriately limits the decisions that can be made to reduce the overall testing load for students. If current diagnostic platforms (Aspire, STAR, MAP) have college- and career-ready metrics and student growth data baked into their systems, and if their data can be validated against the actual outcomes of students who have taken the assessments and transitioned to college/career (and remained remedial-free), then layering on another assessment that ONLY measures the CCSS is an extreme example of redundancy.
The issue for exploration is why Ohio has insisted on continuing down the path with PARCC in the name of school accountability when a range of assessments already exists that can measure students' ongoing journey toward college and career readiness. What matters is whether students are college and career ready, not whether they took tests that measured the full range of the standards. If systems already exist that can answer the first question, and if longitudinal data studies validate the claims these assessments make regarding readiness at X, Y, or Z performance levels, then an additional system that measures standards first and then layers on a CCR score is unnecessary.
What are two issues that you would like ODE to speak to at our next meeting?
1. Is it possible to move to a system where ODE allows districts to choose among multiple assessments for accountability purposes, and can ODE crosswalk the scores to ensure continuity of performance levels across the measures (this is the model from the Third Grade Guarantee alternate assessment structure)?
2. What is ODE’s ability to re-submit the ESEA waiver to account for either recommendations from the testing committee or legislation signed into law by the Governor?
3. What constraints are placed on the committee by the current PARCC/AIR contracts? Is there room to make significant recommendations for Fall 2015, or are we boxed in by contracts that will not expire for several more years?
4. In light of the suboptimal testing conditions many students faced online (an inordinate number of distractions due to error codes and general tech hiccups), will ODE advocate for school districts and work with the legislature to treat the 2014-2015 data as a full-year field-test data set rather than using it for grading purposes on the local report card? Using the data for grades would be unfair to school districts and would paint a false picture of district performance in a year with too many variables in the data to rely on its statistical validity.
5. What is ODE willing to do regarding testing and the multiple graduation pathways at the high school level? Since ODE is all about 'choice', it seems logical that once a student has demonstrated that s/he is remedial-free and has qualified for a graduation pathway, s/he should be able to choose which additional assessments to take. For example, if a student scores a 27 composite on the ACT as a freshman and meets the remedial-free pathway on a college- and career-ready assessment, why should s/he have to take additional assessments that measure college and career readiness? The opt-out movement will remain alive and active in Ohio, especially in high-performing school districts, unless this issue is resolved.