Last week, the U.S. Department of Education officially announced that $350 million designated under the Race to the Top program would be made available to several consortia to develop student assessments aligned with the common core standards states are expected to adopt later this year. The big question many of those watching the assessment discussions are now asking is how different these next-generation assessments will be from the state tests that have governed the NCLB/AYP era.
Even before taking office, President Obama often expressed frustration and dismay with “bubble tests.” A little over a year ago, as part of his education transformation agenda launch, the President stated: “I am calling on our nation’s governors and state education chiefs to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking, entrepreneurship and creativity.”
So now that that $350 million is about to hit the streets, what exactly does moving beyond the bubbles on the test look like? That was one of the questions that the National Academy of Education and the Stanford Center for Opportunity Policy in Education (SCOPE) asked at a policy forum yesterday titled, “What Do We Know About High Quality Performance Assessment?”
The forum spotlighted a series of new research papers released by SCOPE as part of a Ford Foundation-funded effort to focus on the performance assessment issue. The full documents can be found here.
The takeaways from the SCOPE papers centered on three key issues researchers recommended that next-generation performance assessments need to address:
* Careful task design based on a clear understanding of the specific knowledge and skills to be assessed and how they develop cognitively
* Reliable scoring systems based on standardization of tasks and well-designed scoring rubrics
* Methods for ensuring fairness based on the use of universal design principles
While NAEd and SCOPE were all about the research, perhaps the most provocative part of the forum was the policy discussion. Jack Jennings, the long-time President and CEO of the Center on Education Policy, suggested that accountability efforts should be put on hold for the next three to five years until we have a better understanding of what we know and what we need (particularly coming out of the AYP era). Jennings suggested piloting a range of assessment strategies (the inspectorates advocated by the Broader, Bolder Approach to Education, next-gen AYP tests, portfolios, performance assessments, etc.), letting states each try one, and then evaluating where we are after a few years.
Chris Cross, the President of Cross & Joftus and former Assistant Secretary at the U.S. Department of Education, quickly noted that the genie is out of the bottle when it comes to testing and assessment, and the only choice is to continue moving forward. And Cross is right: taking a step back or shuffling sideways provides no value at this stage of the school improvement game. The game is all about improving our testing and assessment efforts, not providing a cooling-off period for folks to ease up on accountability.
So where does this leave us? First, we need to recognize that all of these issues found in ESEA, RttT, and other federal programs are not islands unto themselves. If we are serious about building a better performance assessment system, that means finding stronger ways to integrate curriculum, teacher development and supports, standards, and tests.
Second, we need to pay more attention to how such assessments are scored. There is tremendous work currently being done on innovative, machine-scored items on both fixed and adaptive forms. Such work needs to be central to the assessment consortia and their plans for our coming common core world. At the same time, we need to spend greater effort getting actual classroom teachers in on the scoring, using it as a professional development tool.
Perhaps most importantly, though, we need to recognize that we don't have much time to wait here. The flaw in Jennings' model is that waiting three to five years means losing an entire generation of students to gaps, cracks, and failures. Instead, we should be accelerating our efforts to get better tests, aligned with common standards, into our states, districts, and schools as quickly as possible. Build off of what is working and those pockets of promising practice. Move from bubble sheets to computers (complete with open-answer questions). Figure out how we go from just knowing the right answers to knowing what to do with the right answers after test day is done. Act now, with a commitment to continuous improvement.
(Full disclosure, Eduflack has helped Stanford University School of Education with the launch of SCOPE.)