Sunday, September 25, 2016

Perspectives on Realism

Having spent a week coming to grips with being labeled a failure by Ohio's Report Card, I've begun to take a more philosophical (and angry) approach to what the scores mean. According to State Superintendent Paolo DeMaria, these grades are a small piece of the multitude of evidence that proves the value of our schools. Let's ignore for a moment that it is also the only piece of evidence that the Superintendent and the ODE have made very public. 

What I find more problematic is the premise that teachers like me, as well as our students, are supposed to be inspired by the label of failure and redouble our efforts in order to find success. I cannot, however, ignore the reality that this has been made impossible by the ODE and the state school board, because they have arbitrarily set proficiency rates that automatically label 40-50% of students as failures. These rates have nothing to do with mastery of content, which is better measured on a daily basis by educators; they reflect a desire to have a certain number of kids fail. State School Board Member Sarah Fowler said as much in a letter to her constituents...

"The cut scores were set AFTER the kids took the tests and based upon how they performed. This is not an objective standard, rather it is extremely subjective (ie, "how many kids do we want to see pass and how many do we want to see fail?")."

Many of us have been critical of this situation for years. Throughout this time the ODE has proven their mastery of avoiding criticism and reality, steering conversations in the direction of the bullshit rhetoric of rigor and expectations while completely ignoring the facts. All in all, if the team at the Ohio Department of Education has proven anything, it is that they are unable to respond to criticism, constructive as it may be, and are in no way prepared to admit that they may be wrong in order to do what is right for Ohio's students. The very organization tasked with leading our state's education policy is utterly incapable of learning.

Take Jim Wright, Director of Testing for the ODE, for example. In an article published in the Plain Dealer, Mr. Wright explained Ohio's reasoning for its cut scores, which determine which students are proficient and which are not. Apparently, the logic was to have our scores look more like the NAEP scores, but not exactly like the NAEP scores. In case you're unfamiliar, the National Assessment of Educational Progress tests students at intervals, both for the purpose of testing mastery and to see progress over time, depending on the test. Using NAEP scores as a marker for proficiency on state assessments is problematic for a variety of reasons. According to the ODE's logic, it's OK if you only sort of use the NAEP scores and otherwise make some shit up from there.

Wright himself indicates that Ohio didn't actually use the NAEP. He suggests that, in a decision displaying their immeasurable benevolence, the state picked an arbitrary point midway between the NAEP and our previous scores. This way students, teachers, schools and districts can look awful, but not completely awful. From the article...

"Other states have gone directly to a NAEP-like cut, which was pretty drastic," Jim Wright, the director of testing for the Ohio Department of Education, told the state board in June.
Wright said the department instead recommended scores that would show 50-60 percent of students as "proficient," instead of the 80 percent in previous years. He noted that Ohio would still have more "proficient" kids than NAEP says, but it would be "more realistic."
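To make the "midway" arithmetic concrete, here is a minimal sketch. The 80% prior proficiency rate comes from the article above; the NAEP-like rate of roughly 35% is my assumption for illustration, not an official figure.

```python
# Illustrative arithmetic for the "midway" cut-score logic described above.
# The NAEP-like proficiency rate (35%) is a hypothetical value chosen for
# illustration; the 80% prior rate is the figure cited in the article.

previous_proficient = 80.0   # percent proficient under prior Ohio tests (per the article)
naep_like = 35.0             # hypothetical NAEP-like proficiency rate

midway = (previous_proficient + naep_like) / 2
print(midway)  # 57.5 -- inside the 50-60% band the ODE recommended
```

Under these assumed numbers, the midpoint lands squarely in the 50-60% "proficient" range Wright described, which is exactly the point: the band was reverse-engineered from desired pass rates, not from any measure of mastery.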
What we keep hearing is that these assessments are designed to measure mastery in a subject in order to assure that Ohio's kids are career and college ready. I'm confused. How does choosing a random percentage midway between the NAEP and our previous (OGT/OAA) scores indicate being on a path to college readiness? I'm only a teacher, not the director of testing for the ODE, but I'm thinking it doesn't. My instinct says that no series of standardized tests can measure college readiness.

In the meantime we're supposed to be grateful for the ODE's realism. I'm not. What Wright fails to recognize in this explanation is that their arbitrary "like NAEP, but not as drastic as NAEP" lowering of proficiency numbers actually impacts kids. On an introspective or motivational level, those 40-50% of students earning "Basic" or "Limited" scores have just been labeled as failures. Worse yet, in the world of high stakes outcomes, those 40-50% of students who happen to be in 3rd grade are now in danger of not being promoted. The high school kids in this situation are not on pace to graduate.

Talk to some high school kids about what is "realistic," Mr. Wright. Being prevented from graduating by bureaucrats despite years' worth of effort does not qualify.

What's worse is that Wright himself expressed concern about the impact of these new assessments on graduation, and now he seems to have forgotten all about it. According to the meeting minutes of the Ohio Technical Advisory Committee, January 26, 2016...

"Jim Wright noted that there are three pathways to high school graduation, but recognized that the new proficiency cuts for End of Course assessments will be challenging if used in defining high school graduation."

So, which is it, realistic or challenging? Realistically challenging, perhaps? For certain kids, anyway. Let's look at this from a different perspective.
A report released this week by the Ohio Education Policy Institute on the state report cards indicates (as it does every year) that Economically Disadvantaged Students perform far worse on standardized tests than their wealthier counterparts. From the report...

"This analysis is far from the first to demonstrate a strong negative correlation between student achievement and socioeconomic status. However, this data shows that in Ohio, the negative correlation between socioeconomic [status] and student achievement has proven all too persistent over time."

The report uses the Performance Index, among many other measures, to make its point. For those unfamiliar, the study defines the PI in this way: "the Performance Index is an aggregate statewide assessment measure which takes into account the performance of each district’s students at the different performance levels (Advanced Plus, Advanced, Accelerated, Proficient, Basic, and Limited) across all of the tests. The maximum PI score is 120 (all students at “Advanced Plus” level)." As you can see in Table 1 below, the higher the percentage of Economically Disadvantaged Students, the lower the Performance Index.
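For readers who want to see how an index like this turns level percentages into a single number, here is a minimal sketch. The level weights below are hypothetical, not Ohio's official values; they are chosen only so that 100% of students at "Advanced Plus" yields the stated maximum of 120.

```python
# Sketch of how a Performance Index aggregates performance-level
# percentages into one score. WEIGHTS are illustrative assumptions,
# NOT the official Ohio values; they are picked so the maximum is 120.

WEIGHTS = {
    "Advanced Plus": 1.2,
    "Advanced": 1.1,
    "Accelerated": 1.05,
    "Proficient": 1.0,
    "Basic": 0.6,
    "Limited": 0.3,
}

def performance_index(pct_by_level):
    """pct_by_level: percent of students at each level (sums to 100)."""
    return sum(pct_by_level.get(level, 0.0) * weight
               for level, weight in WEIGHTS.items())

# All students at the top level hits the stated ceiling:
print(performance_index({"Advanced Plus": 100}))  # 120.0

# A district with many "Basic"/"Limited" scores lands far lower:
print(performance_index({"Proficient": 40, "Basic": 35, "Limited": 25}))  # 68.5
```

The design point is simple: because every student below "Proficient" drags the weighted sum down, districts with high concentrations of low-scoring (and, per the report, economically disadvantaged) students will always show a lower PI.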
While I'd like to give Jim Wright and the decision-making team at the ODE the benefit of the doubt, if they're worth their salt as educational professionals, then they are very aware that impoverished students score poorly on standardized tests as a rule. So, assuming their knowledge of this data, they have either chosen to completely ignore it, or have purposefully chosen to label a far greater percentage of poor children as failures, hold back a disproportionate number of poor children in 3rd grade, and place a disproportionate number of impoverished students in danger of not graduating.

This decision-making process is not "more realistic," as Jim Wright says, but is completely at odds with the reality of what these tests measure best, which is socioeconomic status. The state has chosen to fail kids on this measure.

The ODE is at best "Limited" in its ability to make decisions regarding the betterment of Ohio's education policy. In this case I would categorize them as "Basic." They have failed. Perhaps the standards that I'm holding them to are too rigorous, the expectations too high. Based on their own logic, I hope they will work harder, seek remediation, and achieve success.

A good start would be to stop pretending that there is a connection between these assessments and college and career readiness. Your next move would be to convince the legislature to sever the link between high-stakes decisions and standardized tests.
