Reading First has been the federal law of the land for more than six and a half years now. To date, more than $5 billion has been provided to the states to implement scientifically based reading programs in their schools. This huge bucket of dollars was intended to provide evidence-based curricular materials, instructional programs, interventions, and professional development in those schools that needed the most help getting every child to read proficiently.
We’ve all heard about the problems with the implementation of the program. Eduflack is clearly on the record believing RF is a terrifically intentioned program, with the right priorities, the right goals, and the right research. But I’ve also been critical of the implementation of the program. Oversight was sloppy. Programs weren’t adopted with fidelity. And we’ve done a poor job collecting and promoting the data that demonstrates overall effectiveness.
In recent months, the field has debated what the research really tells us. We’ve had dueling studies: one from the U.S. Department of Education’s OPEPD showing real results, and an interim study from IES questioning impact. OPEPD addressed the issue of non-RF schools making real gains because of changes in instructional approaches and materials (what the researchers call contamination); IES did not.
Through it all, we’ve assumed that that $5 billion has been spent as intended. Sure, we know there are some companies that got rich off of RF, selling snake oil to anyone with an open wallet. There are profiteers that saw an opening in the law, and squeezed every last nickel they could out of RF to line their pockets and enrich their companies. There are those who claimed to be research-based, who clearly had no understanding of what good research was, nor any intention of achieving it. And, yes, there were some really good programs that got into the schools and further proved their worth, programs that could serve as lighthouses or meaningful examples of promising or best practice.
But how has the money actually been spent? Over at EdWeek’s Curriculum Matters blog, Kathleen Manzo raises a disturbing point. In more than six years, it seems no one at the U.S. Department of Education has bothered to collect data on how the LEAs actually spent the billions in RF dollars that made their way down to the localities: blogs.edweek.org/edweek/curriculum/2008/10/where_has_all_the_money_gone.html
Manzo points to the just-released Notice of Proposed Information Collection issued by ED, asking questions about whether data collection on RF is necessary. Yes, such notices are required by the Office of Management and Budget of any government agency looking to collect data from more than nine or so folks. So Eduflack isn’t so worried about the release of the notice. I’m just heartbroken and frustrated by its timing.
Shouldn’t ED have been collecting this data from the start, gathering information after year one about how RF dollars are spent? Shouldn’t knowing how dollars are spent be part of the determination of whether the program is effective? Shouldn’t it be a given that when you’re issuing billions of dollars in checks, you expect to get detailed spending reports in return?
I’d like to believe this is just standard operating procedure, a necessary notice that is sent out at the close of any federal program. I’d like to believe that such data has been collected annually since 2002, with each SEA handing over old data as it gets a new check. I’d like to believe such data was collected, in part, as part of the research done by OPEPD and IES. I’d like to believe, yes, but I also know better. Through all of the attacks, all of the IG investigations, and all of the defunding threats, no one in an official position has talked in any detail about how RF money has been effectively spent. And I know it is a question Manzo and EdWeek have been asking for years, without getting any answers of substance.
Any education group that has received philanthropic support knows they need to account for dollars to their donor. Just ask any organization in town that’s received money from the Gates Foundation. They document how the money is spent, making sure it aligns with the goals and promises of the original application. And then they detail how the spending has led to real, measurable results that demonstrate effectiveness.
If this Notice of Proposed Information Collection is what it seems — the first attempt to gather information on RF spending — someone needs to step up and accept responsibility for a monumental failure. NCLB was the largest federal investment in public education in the history of the republic. With such an investment should come the largest measure of accountability as well.
If accountability is to be the legacy of NCLB, and if we are to expect all of our schools to ratchet up their levels of personal accountability, we owe it to every teacher, every publisher, every legislator, and every parent to demonstrate similar accountability. Ultimately, we can’t declare RF a success or a failure until we’ve accounted for how the money has been spent. At the end of the day, fidelity is more than just a buzzword to measure teachers by. It is a measure of our action and our spending. Unfortunately, ED seems to have missed that lesson in Accountability 101.