Friday, February 12, 2016

The Evidence-Based Set and Online Testing.

The simulation below contains a set of six sources and corresponding sample questions from the state of Ohio's American History End of Year Assessment. It is an excellent example of the American Institutes for Research carrying out the state's directive to create PARCC-like questions. The state describes this kind of set as follows...

"An Evidence-Based Set is a group of several questions associated to one or more common stimuli. Evidence-based sets allow students to work with primary source materials to show deep understanding of social studies topics. The questions in these sets will assess a range of skills and content in the content statements."

The premise is academically sound: it facilitates the analysis of multiple sources and asks the student to draw on relevant ideas and combine them in order to reach conclusions.

My students and I recently undertook a similar process in analyzing the decision to develop and drop the atomic bomb. The activity took place over two days within our study of WWII and involved an analysis of writing from Oppenheimer, Eisenhower, and the War Department, as well as President Truman. We watched interviews with Japanese survivors and members of the crew of the Enola Gay. Throughout the process we discussed the issues and the effect the bombings had on the world going forward. Students wrote reflective essays and defended their theses with facts from the sources we'd investigated.

Attempting to recreate this process as part of a standardized test, while understandable, is contrived at best and, at worst, another example of educational inequality. The problems begin with the fact that standardized testing strips information of any authentic context. As you'll see below, the assessment writers attempt to remedy this with a contextual introduction, which does something, though not much, to set up the simulation.

The real issues stem from the fact that 80% of Ohio's students will complete this process online, while the rest will complete it on paper. Online, the six sources included here must be accessed individually through a drop-down menu. In other words, students taking the test on computers have no opportunity to view the sources side by side. This creates a clear disadvantage, and it was one of the many criticisms of the PARCC assessments last year. Many people assumed PARCC was the issue, but these problems persist regardless of vendor.

As the Cleveland Plain Dealer recently reported, states making comparisons are finding that scores from computer test takers tend to be lower than those of their paper counterparts. The issue above might begin to explain this phenomenon. Research suggests that the physicality of print on paper, versus on a screen, promotes a tactile relationship that actually improves a reader's long-term understanding. Also at play is a student's familiarity with the technology in question. Just as many have argued that all standardized tests, regardless of subject, are reading and writing tests, they are now becoming assessments of technological skill as well.

As with most educational issues, economic inequality exacerbates the problem. Students in high-poverty areas obviously have less opportunity to develop computer skills at home, and so will suffer disproportionately as these tests move online. Standardized assessments have already been shown to measure economic standing better than any academic quality, and requiring computers for testing seems likely to entrench that problem further.

Even if we could ensure equal technological skill, which we cannot, the hardware will differ from district to district. A student's ability to navigate the assessment will therefore differ depending on whether they use a desktop, a laptop, a tablet, a Chromebook, or some other device, with a mouse or without one. Furthermore, anyone who has used multiple devices knows that they are not all created equal. Student computer skill (or the lack thereof), combined with a multitude of unequal devices running a variety of systems, is a recipe for further educational inequality.

School districts are currently buying cost-effective devices in order to comply with the coming mandate that all testing move online. Given the overall lack of funds, it is easy to assume that these purchases come at the expense of the arts, music, physical education, or other non-core areas. Naturally, time once devoted to those subjects can now be spent increasing students' screen time in an attempt to ensure improvement on state tests.

What is especially painful is that success on the simulation below, and on the tests overall, decides whether or not a student graduates from high school in the state of Ohio. Other online assessments determine a third grader's promotion, a school's reputation, and a teacher's rating, among other things. These stakes are far too high for an assessment system like this.

A scenario with this many issues deserves, at the very least, some discussion. We cannot simply continue to grind students through an assessment system that was constructed without thorough analysis, that shows little regard for its impact on students, and that clearly exacerbates an educational inequality that is already terribly problematic.

Unfortunately, as the testing window nears, this is what we're preparing to do. 

The state of Ohio has released some resources for students and teachers in all tested subjects, which are available on the ODE's website. I would encourage all stakeholders to analyze the materials. For our purposes here, I have provided screenshots of the sources, followed by the questions from the simulation I have described. Consider the issues I've presented, check out the simulation below, and see what you think.

[Screenshots of the simulation's six sources and sample questions appeared here.]