Ohio Accountability System Thoughts

For what it’s worth, here are my responses to questions that were posed to testing committee members this week in preparation for our next session:

With a focus on designing an ideal system of state assessments, what are the key issues for you going forward? What areas need further exploration?

Is it possible to use existing diagnostic assessments for summative and accountability purposes as well? For example, ACT Aspire, which provides a range of results correlated to predicted ACT college and career ready outcomes, could be used in a summative system where spring-to-spring scores are compared and credit is given for student growth, similar to the construct of the K-3 literacy measure on the LRC. This data could also be disaggregated by subgroup for ESEA accountability purposes.
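
To make the spring-to-spring idea concrete, here is a rough sketch in Python of what crediting growth and disaggregating it by subgroup could look like. The field names, scale scores, and one-point growth rule below are invented placeholders, not ACT Aspire's actual reporting format.

```python
# Illustrative sketch only: spring-to-spring scores turned into a growth
# credit and disaggregated by subgroup. All field names, scores, and the
# one-point growth rule are hypothetical placeholders.

students = [
    {"id": 1, "subgroup": "econ_disadvantaged", "spring_2014": 415, "spring_2015": 424},
    {"id": 2, "subgroup": "econ_disadvantaged", "spring_2014": 430, "spring_2015": 428},
    {"id": 3, "subgroup": "students_with_disabilities", "spring_2014": 402, "spring_2015": 410},
    {"id": 4, "subgroup": "all_students", "spring_2014": 445, "spring_2015": 452},
]

def showed_growth(record, min_gain=1):
    """Credit a student as 'growing' if the spring-to-spring gain meets a minimum."""
    return (record["spring_2015"] - record["spring_2014"]) >= min_gain

# Disaggregate the growth rate by subgroup, as ESEA reporting would require.
by_subgroup = {}
for s in students:
    grew, total = by_subgroup.get(s["subgroup"], (0, 0))
    by_subgroup[s["subgroup"]] = (grew + showed_growth(s), total + 1)

for subgroup, (grew, total) in by_subgroup.items():
    print(f"{subgroup}: {grew}/{total} students credited with growth")
```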

Reducing the overall amount of testing, while not tying the hands of districts to administer diagnostic assessments that inform teaching and learning, is the gold-standard outcome for the testing committee. The focus moving forward should be finding a suite of assessments that can serve multiple purposes, allowing districts to choose their tool and administer a single assessment whose data can be used in a variety of ways.

The frustration is that the data from PARCC/OCBA will in no way be able to inform instructional decision making for students at the point when teachers are equipped to make a difference with them. The key to ending over-testing is to admit that the argument of ‘testing the full range of the standards’ is a false choice that inappropriately limits the decisions that could reduce students’ overall testing load. If current diagnostic platforms (Aspire, STAR, MAP) have college and career ready metrics and student growth data baked into their systems, and if their data can be validated against the actual outcomes of students who have taken the assessments and transitioned to college/career (and remained remedial free), then layering on another assessment that ONLY measures the CCSS is an extreme example of redundancy.

The issue for exploration is why Ohio has insisted on continuing down the path with PARCC in the name of school accountability when a range of assessments already exists that can measure students’ ongoing journey toward college and career readiness. What matters is whether students are college and career ready, not whether they took tests that measured the full range of the standards. If systems already exist that can answer the first question, and if longitudinal data studies validate the claims these assessments make about readiness at X, Y, or Z performance levels, then an additional system that measures standards first and then layers on a CCR score is unnecessary.

What are two issues that you would like ODE to speak to at our next meeting?

1. Is it possible to move to a system where ODE allows multiple assessments to be chosen by districts for accountability purposes, and can ODE crosswalk the scores to ensure continuity of performance levels across the measures (this is the model from the Third Grade Guarantee alternate assessment structure)? A rough sketch of what such a crosswalk might look like follows this list.

2. What is ODE’s ability to re-submit the ESEA waiver to account for either recommendations from the testing committee or legislation signed into law by the Governor?

3. What constraints are placed on the committee by the current PARCC/AIR contracts?  Is there room to make significant recommendations for Fall 2015, or are we boxed in by contracts that will not expire for several more years?

4. In light of the sub-optimal testing conditions many students faced electronically (an inordinate number of distractions due to error codes and general tech hiccups), will ODE advocate for school districts and work with the legislature to treat the 2014-2015 data as a full-year field test and not use it for grading purposes on the local report card? Doing otherwise is unfair to school districts and will paint a false picture of district performance in a year with too many variables in the data to rely on its statistical validity.

5. What is ODE willing to do regarding testing and the multiple graduation pathways at the high school level? As ODE is all about ‘choice’, it seems logical that once a student has demonstrated that s/he is remedial free and has qualified for a graduation pathway, s/he should be able to choose what additional assessments to take. For example, if a student scores a 27 composite on the ACT as a freshman and meets the remedial free pathway on a college and career ready assessment, why should s/he have to take additional assessments that measure college and career readiness? The opt-out movement will remain alive and active in Ohio unless this issue is resolved, especially in high performing school districts.
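
As a rough illustration of the crosswalk idea in question 1 above, here is a minimal sketch in Python. Every cut score and performance level name below is invented for illustration; a real crosswalk would come from a concordance study linking each vendor’s scale to a common set of performance levels.

```python
# Hypothetical sketch of a score crosswalk: map each vendor's scale score
# onto a shared set of performance levels so districts can choose their
# assessment without losing comparability. Cut scores are invented.

CUT_SCORES = {
    "ACT Aspire": [(426, "Accomplished"), (422, "Proficient"), (418, "Basic")],
    "NWEA MAP":   [(230, "Accomplished"), (220, "Proficient"), (210, "Basic")],
    "STAR":       [(900, "Accomplished"), (800, "Proficient"), (700, "Basic")],
}

def performance_level(vendor: str, scale_score: float) -> str:
    """Return the shared performance level for a vendor-specific scale score."""
    for cut, level in CUT_SCORES[vendor]:  # cuts listed highest first
        if scale_score >= cut:
            return level
    return "Limited"

print(performance_level("ACT Aspire", 423))  # -> Proficient
print(performance_level("NWEA MAP", 208))    # -> Limited
```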

What It Takes To Really Make A Difference

One of the things I love about summer is the chance to stretch out academically and get into professional books with the time and space to think deeply and reflect on my professional practice and the practice of the institution I am charged with providing academic leadership for. This summer I have spent time with Eric Jensen’s “Teaching with Poverty in Mind: What Being Poor Does to Kids’ Brains and What Schools Can Do About It.” I highly recommend this book for all of my colleagues who work in challenging urban environments. Jensen spends a great deal of time in the first part of the book examining the effects of poverty on the brain. This section includes extensive research quotes to support his thoughts, without being overly academic or stuffy. The second part of the book explores the mindsets necessary to work successfully with children from impoverished backgrounds. (I’ll take this chance to give a shout-out to Carol Dweck and “Mindset”…a must-read for all who work in education for a living.) Finally, the third and fourth parts of the book explore building-wide and classroom success factors that are critical for schools and teachers that seek to be successful with at-risk students. Specifically, Jensen asserts that support of the whole child, strategic use of data, a high degree of accountability, a focus on relationships, and constant enrichment are critical components in schools that are making a difference with hard-to-reach students. Below is a smattering of quotes that particularly stuck out to me. The book is available through ASCD, ISBN 978-1-4166-0884-4.

“Poverty calls for key information and smarter strategies, not resignation and despair.” p. 5

“Teachers don’t need to come from their students’ cultures to be able to teach them, but empathy and cultural knowledge are essential.” p. 11

“Children raised in poverty rarely choose to behave differently, but they are faced daily with overwhelming challenges that affluent children never have to confront, and their brains have adapted to suboptimal conditions in ways that undermine good school performance.” p. 14

“Some teachers may interpret students’ emotional and social deficits as a lack of respect or manners, but it is more accurate and helpful to understand that the students come to schools with a narrower range of appropriate emotional responses than we expect.” p. 18

“It is much easier to condemn a student’s behavior and demand that he or she change it than it is to help the student change it. Every proper response that you don’t see at your school is one that you need to be teaching.” p. 19

“If your school aims to improve student achievement, academic success must be culturally acceptable among your students.” p. 20

“Whenever you and your colleagues witness a behavior you consider inappropriate, ask yourselves whether the discipline process is positive and therefore increases the chances for better future behavior, or whether it’s punitive and therefore reduces the chances for better future behavior.” p. 30

“Instead of telling students to act differently, take the time to teach them how to act differently.” p. 30

“On every single day of school, your students’ brains will be changing…Whether they are changing for better or for worse depends heavily on the quality of the staff.” p. 48

“Most low SES kids’ brains have adapted to survive their circumstances, not to get As in school. Their brains may lack the attention, sequencing, and processing systems for successful learning. It’s up to us to upgrade their operating systems – or see a downgrade in their performance.” p. 57

“We can help kids rise above their predicted path of struggle if we see them as possibilities, not as problems.” p. 65

“If you surrender to the despair and deprivation of students’ lives outside school, you will make your classroom and school failure a self-fulfilling prophecy. To get the best from your students, you must expect and demand the best from yourself.” p. 83

“If a student is not doing well, [excellent teachers] immediately ask the question ‘How can I teach this differently, and what needs to change so that the student will achieve mastery?’” p. 110

“Avoid complaining about students’ deficits. If they don’t have it, teach it!” p. 116

The Test

OK, so I’m worried…

Ohio has just announced that PARCC will be the test vendor for the new Common Core assessments.

My concern is that their idea of 21st century assessment (how many times will they use the word ‘innovative’ in their literature?) will be to take 20th century selected-response items and computerize them. That would lead to more of the same in the classroom (a narrowing of the curriculum to ensure that students pick the one correct response), which is incompatible with the 21st century skills students desperately need to compete in the global economy.

The new assessments will drive instruction and learning, for better or worse. I hope (but doubt) that policy makers will think carefully about the damage they are inflicting by measuring accountability with single, constrained metrics.

Here is a slide from the November 2011 PARCC PowerPoint describing the efficiencies of their system:

PARCC’s assessment will be computer-based and leverage technology in a range of ways:

Item Development
– Develop innovative tasks that engage students in the assessment process
Administration
– Reduce paperwork, increase security, reduce shipping/receiving & storage
– Increase access to and provision of accommodations for SWDs and ELLs
Scoring
– Make scoring more efficient by combining human and automated approaches
Reporting
– Produce timely reports of student performance throughout the year to inform instruction, interventions, and professional development

I’ve taken the liberty of redlining the code words for multiple-choice items (in my opinion).

Here’s a portion of another slide about the types of items PARCC will be using:
Summative Assessment Components:
Performance-Based Assessment (PBA) administered as close to the end of the school year as possible. The ELA/literacy PBA will focus on writing effectively when analyzing text. The mathematics PBA will focus on applying skills, concepts, and understandings to solve multi-step problems requiring abstract reasoning, precision, perseverance, and strategic use of tools
End-of-Year Assessment (EOY) administered after approx. 90% of the school year. The ELA/literacy EOY will focus on reading comprehension. The math EOY will be comprised of innovative, machine-scorable items

Some lingering questions: How many PBA questions will there be? What percentage of a student’s summative score will the PBA questions count for? And will there be anything other than selected-response questions on the EOY assessment?

The Common Core espouses collaborative, authentic, project-based and problem-based learning. That will not occur if the assessments students are subject to, and teachers are held accountable for, value single pathways to correctness.

Is anyone else worried?