Today, the final shoe dropped on the Reading First era. The Institute of Education Sciences released the final version of the Reading First Impact Study. To the surprise of no one, the final impact study came to the same conclusions as the interim study. The summary of summaries: RF schools aren't doing a better job of making students proficient readers than non-RF schools are.
As President-Elect Obama and his Administration-in-waiting begin working through the transition, they have a terrific opportunity to shape the direction of future policy and future successes. With each new administration, particularly with a change in party leadership, there is the opportunity to reorganize Cabinet departments, the chance to emphasize new priorities and to turn back the efforts of previous administrations. While Stephen Hess of the Brookings Institution cautions against overhauls and reorganizations at the start of an Administration, now is definitely the time to look at a new organization for the U.S. Department of Education.
there is still a great deal of work that needs to be done to meet that goal. IES needs to broaden its mission beyond the WWC and become a true clearinghouse for quality research and a Good Housekeeping seal of approval for what works. More importantly, it needs to expand the dialogue beyond the researchers and effectively communicate the education sciences to practitioners, advocates, and others in the field.
Dear President-Elect Obama,
isions, spending decisions, and instructional decisions. More importantly, we just plain need to know that what we are doing works, and that it works in schools like mine, in classes like mine, with kids like mine. There is nothing wrong with accountability if it is a shared responsibility, shared by government, schools, teachers, parents, and the students themselves.
Last week, This Week in Education revealed that Russ Whitehurst was leaving the Institute of Education Sciences. That should come as no surprise, as Whitehurst's congressionally appointed term expires in November 2008, and he has made clear he was not seeking reappointment. TWIE's announcement was followed by Fordham Flypaper's news that Whitehurst was moving over to Brookings' Brown Center on Education Policy, presumably to fill the very capable shoes of the departing Tom Loveless.
Even the most zealous of Reading First advocates/agitators (yours truly included) recognize that the headstone for the federal program has been carved. At this point, we're all just waiting to see whether RF will officially be laid to rest on October 1, 2009, when a new fiscal year takes effect, or in March 2009 or so, when a new Congress decides to abandon a continuing resolution for the federal budget and actually passes a Labor/HHS/Education appropriations bill (and as former appropriations folk, Eduflack would be shocked if anything new happens with the budget this spring, regardless of who is president).
Not much more than a month ago, it seemed the entire education community had written Reading First off for dead. Congress had zero-funded the law. The U.S. Department of Education was doing little, if anything, about it. IES had released an interim study questioning the program's effectiveness. All seemed relatively lost.
Earlier this week, the What Works Clearinghouse released its analysis of the research base for the Open Court and Reading Mastery programs. To the surprise of many (or at least many of those who are paying attention to the WWC these days), both programs were found to lack the research oomph that WWC and the Institute of Education Sciences demand under the "scientifically based" definition.
EdWeek’s Kathleen Manzo has the full story here — http://www.edweek.org/ew/articles/2008/08/13/01whatworks.h28.html?tmp=1851512060.
The reports are particularly interesting because most believed Open Court and Reading Mastery were two of the leading programs Reading First and SBRR were intended to promote. Open Court is the program of choice in Los Angeles, for instance, and both programs have been credited with boosting student reading achievement in the classroom.
Critics of RF will use this as yet another "I told you so" moment, claiming that such golden-list programs lack the research merit to warrant inclusion. And while it might make good AERA chatter, there is a much larger issue we should be discussing.
What is the true impact of the What Works Clearinghouse? Based on these reports, does anyone expect LAUSD to drop its contract with Open Court? Of course not. LAUSD has long believed the program has helped students in LA, and they’ll point to their own student achievement numbers to prove it. Same goes for most of the schools using both Open Court and Reading Mastery. It is in those schools because administrators, teachers, or both have found it effective with their kids.
As with much of the federal education reforms of the past decade, WWC is in a time of transition. Now is the time for the Clearinghouse to figure out what it really wants to be, and what role it is to play in P-12 education. Is it an evaluator of commercial programs? Is it an arbiter of scientifically based research? Is it a Consumer Reports for education? Or is it a tool to help education decisionmakers make intelligent decisions about instructional practice?
We need to shift away from "all or nothing" thinking and start determining how WWC fits into the larger framework. Otherwise, it could be another story of unfulfilled potential.
Why is it so hard to find good, meaningful scientific data to prove the efficacy of an education reform? Do we know what good data is? Is it too expensive to capture? Is it deemed unnecessary in the current environment? Is it out-of-whack with the thinking of the status quoers?
EdWeek’s Kathleen Manzo has been raising some of these issues over on her blog — Curriculum Matters. (http://blogs.edweek.org/edweek/curriculum/) And no, Eduflack has no qualms whatsoever with her taking me to task on whether the proof points I use to demonstrate Reading First is working are truly scientifically based proof points. To the contrary, I appreciate the demand to “show me” and have greatly enjoyed the offline conversations with Manzo on what research is out there and whether that research — the good, the bad, and the ugly — meets the hard standards we expect.
For the record, I am not a methodologist, a neuropsychologist, or an academic to the nth degree. I learned about research methodology and standards and expected outcomes from NRPers like Tim Shanahan and Sally Shaywitz and from NICHDers such as Reid Lyon and Peggy McCardle. My knowledge was gained on the streets, so take it for what it is worth.
When NCLB and RF were passed into law, the education community took a collective gasp of concern over the new definition of education research. The era of squishy research was over. The time for passing off action research or customer satisfaction surveys as scientific proof of effectiveness had met its end. Folks started scratching their heads, wondering how they would implement (and fund) the longitudinal, double-blind, control-grouped studies defined as scientifically based education research.
The common line in 2002 and 2003 was that only two reading programs, for instance, met the research standards in SBRR. Those two? Direct Instruction and Success for All. Not Open Court. Not Reading Recovery. Not Voyager. Only DI and SFA.
So what has happened over the years? In 2002, the fear was that every educational publisher would have to adopt a medical model-style research network a la NICHD. Millions upon millions of dollars would need to be spent by the basals to prove efficacy. It was to be a new world order in educational research.
Where are we today? As Manzo correctly points out, five years later there is little (if any) research out there that really meets the standard. Even the large IES interim study of RF effectiveness, that $31 million study of our RF districts, fails to meet our standards for high-quality, scientific research (if you listen to the researchers who know best). Why? Why is it so difficult for us to gather research that is so important?
First, we have interpreted the law the way we want to interpret the law … and not the way it was written or intended. Those being asked to implement the research models simply didn't want to believe that Reid Lyon and Bob Sweet really wanted them to pursue such zealous and comprehensive research. So it was interpreted differently. Neither consumers (school districts, teachers, and parents) nor suppliers (basals, SES providers, etc.) saw the necessity of longitudinal, control-grouped, double-blind, peer-reviewed research. We settled for what we could get. We knew that documents such as the NRP report or the earlier National Research Council study met the requirements. So instead of doing our own research, in the early years of RF we simply attached the NRP study as our "research base" to demonstrate efficacy. Forget that the ink on the instructional program wasn't dry; it was "scientifically based." And there were no checks or review processes to prove otherwise.
Second, we are an impatient people, particularly in the education reform community. Take a look at the NICHD reading research network, and you'll see it takes a minimum of five years to see meaningful, long-term impact of a particular intervention. RF grants were first awarded in 2002, with most early grantees using the money to start with the 2003-04 school year. That means only now, with the 2008-09 school year, are we truly able to see the impact of RF interventions. But have we waited? Of course not. We declared victory (or defeat) within a year or two of funding. If test scores didn't increase after the first full academic year, the nattering nabobs of the status quo immediately declared RF a failure, simultaneously condemning the need for "good" research.
We need to see results. If our second grader isn't reading, we want her reading by third grade, tops. We don't have the patience or the attention span to wait five to seven years to see the true efficacy of the instruction. We want a research model that provides short-term rewards, even though it is the long-term effects we actually need to measure. A shame, yes, but a reality nonetheless.
The final side to our research problem triangle is the notion of control groups. In good science, we need control groups to properly measure the effects of an intervention. How else do we know whether the intervention, and not just a change in environment or a better pool of students, should be credited for student gains? That is one of the great problems with the IES interim study. It measured the impact of RF funding, but it was unable to establish control groups that did not benefit from RF materials, instruction, and PD (even if they didn't receive any hard RF dollars).
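The control-group point can be made concrete with a toy difference-in-differences calculation. All of the numbers below are hypothetical, invented purely for illustration; they are not drawn from any RF study.

```python
# Why a control group changes the estimated effect of an intervention.
# Hypothetical mean reading scores, before and after one school year.

treated_pre, treated_post = 48.0, 56.0   # schools using the intervention
control_pre, control_post = 47.0, 52.0   # comparable schools without it

# A naive pre/post comparison credits the intervention with the whole gain,
# including growth the students would have seen anyway.
naive_gain = treated_post - treated_pre

# Difference-in-differences subtracts the gain the control schools saw,
# leaving an estimate of the intervention's own contribution.
did_estimate = (treated_post - treated_pre) - (control_post - control_pre)

print(f"Naive pre/post gain: {naive_gain:.1f} points")
print(f"Effect after controlling: {did_estimate:.1f} points")
# → Naive pre/post gain: 8.0 points
# → Effect after controlling: 3.0 points
```

Without the control schools, the 8-point gain looks like pure intervention effect; with them, more than half of it turns out to be background growth, which is exactly the attribution problem the interim study could not escape.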
But in our real-life classroom environment, who wants their kid to be in that control group? We all want the best for our children; we don't want them to get the sugar pill while all the other students are getting scientifically based reading and a real leg up on life. How do you say to teachers, in our age of collective bargaining, that these teachers on my right will get scientifically based professional development, but these two on my left will get nothing? How do we say these students on this side of the district will get research-based instruction and materials, but this cluster here will get instruction we know to be ineffective? Politically, our schools and their leaders can't let real scientifically based research happen in their schools. Too much grief. Too many problems. Too little perceived impact.
So where does this all leave us? At the end of the day, we all seem to be making do with the research we can get, hoping it can be held to some standard when it comes to both methodology and outcomes. We expect it to have enough students in the study so we can disaggregate the data and make some assumptions. We expect to do the best we can with the info we can get.
Today, we see that most “scientifically based” research is cut from the same cloth. No, we aren’t following the medical model established by NICHD’s reading network, nor are we following the letter of the law as called for under NCLB and RF. Some come close, and I would again refer folks to the recent RF impact studies conducted in states such as Idaho and Ohio. The methodology is strong, the data is meaningful. And it shows RF is working.
What we are mostly seeing, though, is outcomes-based data. School X scored XX% on the state reading assessment last year. This year it introduced Y intervention, and scores increased XX%. Is it ideal? No. But it is a definite start. We are a better education community when we are collecting, analyzing, understanding, and applying data. Looking at year-on-year improvement helps us start that learning process and helps us improve our classrooms. It isn't the solution, but it is an important step to getting there (particularly if we are holding all schools and students to a strong, singular learning standard).
Yes, Kathleen, we do need better research. We know what we need, and we know how to get there. But until we demonstrate a need and a sense of urgency for the type of research NCLB and IES are hoping for, we need to take the incremental steps to get us there. Let's leave the squishy research of days of old dead and buried. We've made progress on education research over the past five years. We need to build on it, not destroy it.
For nearly a decade now, "research" has been the buzzword in education reform. It comes in many flavors, and it usually comes with a number of adjectives: scientifically based, high quality, effective, squishy, and such. And by now we all know that "scientifically based research" appears in the NCLB law more than 100 times.
With all of the talk about research, we know there is good research and there is not so good research. We have action research passed off as longitudinal. We have customer satisfaction studies passed off as randomized trials. We have people mis-using, mis-appropriating, and downright abusing the word “research.”
Through it all (at least for the past seven years or so), the U.S. Department of Education was supposed to be the arbiter between good and bad research. IES was founded to serve as the final, most official word on what constitutes good education research. Dollars have been realigned. Programs have been thoroughly examined. Priorities have been shaken up.
So where does it all leave us? In this morning's Washington Post, EdSec Margaret Spellings launches a passionate defense of the DC voucher program. http://www.washingtonpost.com/wp-dyn/content/article/2008/07/07/AR2008070702216.html (Personally, I'm still waiting for such a defense of Reading First, a program helping millions more students in schools beyond our nation's capital, but what can you do?)
It should come as no surprise that Spellings sought to use research to demonstrate the effectiveness of, and the need for, the DC voucher program. Without doubt, vouchers have had a real impact on the District of Columbia. The program has reinforced the importance of education with many families. It has opened doors of schools previously closed off to DC residents. It has forced DC public schools and charters to do a better job, as they seek to keep DC students (and the dollars associated with their enrollment) in the DCPS coffers. And, of course, we are starting to see the impact vouchers are having on achievement among students who previously attended the most struggling of struggling schools.
Spellings points out all of this in her detailing of the research validating the voucher program. But there is one “research” point Spellings uses that just has Eduflack scratching his head. From the EdSec’s piece — “The Institute of Education Sciences (IES) found that parents of scholarship children express confidence that they will be better educated and even safer in their new schools.”
Such a statement is downright funny, and more than a bit concerning. In all of the discussions about scientifically based research, high-quality research, the medical model, double-blind studies, control groups, and the like, I don't remember public opinion surveys meeting the IES standard for high-quality research. Parents feel better about their children because of vouchers? That's a reason to direct millions in federal funding to the program?
Don’t get me wrong. I’m all for public opinion polling and the value of such surveys (along with the focus groups and other qualitative research that helps educate them). But it is one of the last things that should be used to validate a program or drive government spending on educational priorities.
If DC is to keep vouchers, it should keep them because the program is driving improvement in student performance and giving a real chance to kids previously in hopeless situations. It should be saved with real data that bears a resemblance to the scientifically based research we demand of our programs and that we expect our SEAs and LEAs to use in decisionmaking. It should be actionable research, with a clear methodology that can be replicated.
Otherwise, we’re just wrapping up opinion in a research wrapper. That may be good enough for some for-profit education companies and others trying to turn a quick buck on available federal resources, but it shouldn’t make the cut for the government — particularly the branch of ED that is in charge of high-quality research. Ed reform should be more than a finger-in-the-wind experiment. And Spellings and IES should know that by now.
I know, I know, I promised my Quixotic quest over the IES Reading First implementation study was headed for the bench for a little bit. But after watching so many swing and miss at this RF pitch, Eduflack just has to offer plaudits when someone else makes solid contact and raises some great issues on this study.
Kudos go to Kathleen Kennedy Manzo over at Education Week. Manzo is one of the original RF reporters (along with Greg Toppo), having covered it from the early stages to today. It’s meant that she’s likely been flooded with information, data, research, opinion, and spin over these past six or so years. It’s meant a continuous learning process. And it’s meant having to sort through it all, avoiding the pitches in the dirt and waiting for the good pitch to hit.
Hit it she did. In this week’s Education Week, Manzo’s got a great piece on the IES study. http://www.edweek.org/ew/articles/2008/06/04/39read.h27.html?tmp=1914927477 She explores many of the quality issues that have been raised to date. More importantly, though, she gets Russ Whitehurst to state that no conclusions should be made based on the interim report. Instead, we need to wait for the final.
I, for one, am hoping that means there’s a whole lot of fixing coming in the final report. Of course, I’ve been disappointed before. Regardless, EdWeek and Manzo deserve credit for taking a complicated and growing issue, and reporting on it so that the average educator or the average policymaker understands the issues and knows the tough questions to ask.
Gold stars all around.