Sunday, December 13, 2015

"An Assessment Written By Ohio Teachers"

Much has been made of the new and improved testing system that will be rolled out this spring. With its shorter assessments and elimination of PARCC as a vendor, accountability advocates claim that the people have spoken and the state has responded with a more humane, more effective system.

Except that it isn't. Length and vendor issues completely miss the point. These tests tell us nothing about student learning that classroom teachers couldn't tell us in greater depth and in a more timely manner.

It has also been widely documented that the single thing that standardized tests measure best is economic standing. Generally speaking, we can accurately predict test scores from rates of free and reduced lunch in our schools.

As a classroom teacher, I know this is the point in my discussion where I'm expected to offer the obligatory disclaimer that I believe in accountability. Sure, OK. Come into my classroom, interview my students, survey their parents, but please stop insisting that this system of assessment is anything more than it is: a measure of relative poverty.

In a discussion of scores from last year's PARCC and AIR assessments, I heard it suggested that next year's tests will be more valid because they were "written by Ohio teachers."

This is a problematic statement for several reasons. First, they were NOT, in fact, written by Ohio teachers. As has been documented, Ohio is borrowing questions from Florida, Utah, and Nevada for this new round of tests. It would be more accurate to say that portions of these assessments were not even written in Ohio, let alone by Ohio teachers.

To be fair, what those in question were referring to was Ohio teacher participation in a group that also included administrators, members of the Ohio Department of Education, and representatives from the American Institutes for Research. These stakeholders met to select existing questions that they believed would accurately depict student mastery of Ohio's standards, thus making the tests valid. While this process is commendable in its inclusiveness, it is hardly "an assessment written by Ohio teachers."

These assessments will hardly be valid, either. First, questions regarding validity have become common in states administering tests electronically. Second, the new tests, some being given now as make-up tests in high school and for the 3rd grade reading guarantee, have never been administered before. There have been no field tests or live high-stakes precedents for these assessments, unless you count the use of individual questions on disparate assessments in multiple other states as validating mastery of Ohio's standards. That scenario is problematic and should raise significant questions. It is criminal, especially if you're a 3rd grader relying on this assessment for promotion, or a high school student acquiring points toward graduation.

Once again, I'm stuck quibbling over technicalities that completely miss the point. To say that the next round of tests will be more valid because they're written by Ohio teachers is terribly misleading, and I am not at all happy about the suggestion. However, the larger issue is that we're still suggesting that we can fix a system of punitive standardized tests. We cannot.

Shorten, lengthen, change vendors, include or exclude teachers in the process, and the assessments will continue to measure what they have always measured: the economic standing of the students assessed.


