Under the Hood of the IES Reading Study

I know, I know, Eduflack is like a dog with an unbelievably potent bone on this whole IES interim study on Reading First.  I can’t help it.  Maybe it’s because I’m a contrarian.  Maybe I hate to see folks pile on to something that deserves a good defense.  And maybe I’m just practicing insanity, believing that if I keep focusing on the benefits again and again, someone may hear it and change their thinking and their practice.

I come here today not to proselytize on RF.  Instead, I want to serve as a conduit for needed information.  If we’ve learned anything from the back-and-forth on the IES study, it is that there are some real questions with regard to the methodology and the project design.  Rather than just trust the salesman that the engine under the study hood is legit, I’ve brought in an expert mechanic of my own.

Today, we hear from the University of Illinois-Chicago’s Tim Shanahan.  If you’ve heard of the IES study, you know Tim.  A leader on the National Reading Panel, Dr. Shanahan has served on a number of similarly influential groups on reading instruction.  He is also the former reading czar of Chicago Public Schools and recently completed his tenure as president of the International Reading Association.

I met Tim a decade ago, when I began my service to the NRP.  Immediately, I found that he was one of those rare breeds who knew the research cold, but could explain it to anyone’s grandma so she understood it … thoroughly and completely.  Even more, he had the patience and the perseverance to teach this old dog about research methodology and scientific approaches, giving me the foundational understandings I have put to use virtually every day since. 

Put simply, there are few researchers I trust more than Dr. Tim Shanahan.  He is as straight a shooter as they come.  And for our purposes today, Tim was an advisor to the IES study, so he knows of what he speaks.  We asked some questions; he provided far better answers.



EDUFLACK: What does the IES study really say?  How strong are the findings?

SHANAHAN: THE IMPLEMENTATION STUDIES INDICATE THAT THE DIFFERENCES BETWEEN RF AND NON-RF SCHOOLS WERE PRETTY MODEST (ABOUT 50 MINUTES OF ADDITIONAL INSTRUCTION PER WEEK), MEANING THAT RF KIDS PROBABLY RECEIVED FEWER THAN 30 HOURS OF ADDITIONAL READING INSTRUCTION EACH YEAR DUE TO THE INTERVENTION. CLEARLY A MODEST INTERVENTION, ESPECIALLY GIVEN THE SIMILARITIES IN CURRICULUM, INSTRUCTIONAL MATERIALS, PROFESSIONAL DEVELOPMENT, AND ASSESSMENTS.

Q: How valid are the findings, knowing there may be contamination across groups (that both the RF and non-RF groups may have been doing the same things in the classroom)?

A: MOST SCHOOLS EMPLOY SOME KIND OF COMMERCIAL CORE PROGRAM. WHEN READING FIRST EMPHASIZED THE ADOPTION OF PROGRAMS WITH CERTAIN DESIGNS, ALL MAJOR PUBLISHERS CHANGED THEIR DESIGNS TO MATCH THE REQUIREMENTS.

READING FIRST SCHOOLS ALL BOUGHT NEW PROGRAMS IN YEAR 1; ALMOST ALL OTHER TITLE I SCHOOLS ADOPT NEW CORE PROGRAMS EVERY FOUR OR FIVE YEARS. THAT MEANS IN YEAR 1, 100% OF THE RF SCHOOLS GOT A NEW PROGRAM AND 25% OF THE OTHER SCHOOLS DID. IN YEAR 2, THAT NUMBER ROSE TO 50%; IN YEAR 3, TO 75%. ALL RF SCHOOLS HIRED COACHES IN YEAR 1, AS DID MORE THAN 80% OF THE OTHER SCHOOLS. AND SO ON.

THIS ISN’T A CASE OF SPOT CONTAMINATION; IT WAS INTENTIONAL AND PERVASIVE. IN FACT, IT WAS PART OF THE RF LAW ITSELF: 20% OF THE STATE MONEY ($1 BILLION TOTAL) WAS DEVOTED TO GETTING NON-READING FIRST SCHOOLS TO ADOPT THESE REFORMS.


Q: Given that contamination, are there contamination rates that can be tolerated in the design?  For example, let’s say 15 percent of the RF and comparison groups received identical programs/PD.  Is this level of contamination tolerable?  What if there is a 30 percent overlap – is this level tolerable?  Are there ways to estimate the degree to which percent contamination will indicate a need to increase sample size? 

A: THE PERCENTAGES OF OVERLAP WERE 75-100% DEPENDING ON THE VARIABLE. THE ONLY ONE WHERE WE HAVE ANY KIND OF IDEA ABOUT WHAT IS TOLERABLE IS WITH TIME.

FROM PAST RESEARCH, ONE SUSPECTS THAT 100 HOURS OF ADDITIONAL INSTRUCTION WOULD HAVE A HIGH LIKELIHOOD OF GENERATING A LEARNING DIFFERENCE; A 50-60 HOUR DIFFERENCE WOULD STILL HAVE A REASONABLE CHANCE OF RESULTING IN A DIFFERENCE. AT 25-30 HOURS, A SMALL DIFFERENCE IN LEARNING MIGHT BE OBTAINED, BUT IT IS MUCH LESS LIKELY (ESPECIALLY IF THE CURRICULA WERE THE SAME).



Q: Did the evaluation design include procedures/strategies to  avoid contamination between RF and the comparison group?

A: IT [THE IES STUDY] NOT ONLY DID NOT TRY TO AVOID CONTAMINATION, IT COULDN’T POSSIBLY DO IT SINCE THE SOURCES OF THE CONTAMINATION WERE SO PERVASIVE. FIRST, THE FEDERAL POLICY EXPLICITLY CALLED FOR SUCH CONTAMINATION TO BE PUSHED. SECOND, STATES AND LOCAL DISTRICTS MADE THEIR OWN CHOICES (AND THEY FELT ENTICED OR PRESSURED TO MATCH RF).

FOR EXAMPLE, SYRACUSE, NY RECEIVED READING FIRST MONEY FOR SOME SCHOOLS BUT MANDATED THAT ALL OF ITS SCHOOLS ADOPT THE SAME POLICIES AND PROGRAMS. THERE SHOULD HAVE BEEN NO DIFFERENCES BETWEEN RF AND NON-RF SCHOOLS IN SYRACUSE; THE ONLY DIFFERENCE WAS THE FUNDING STREAM (HOW THE CHANGES WERE PAID FOR), AS THE NON-RF SCHOOLS ATTENDED THE SAME MEETINGS AND TRAININGS, ADOPTED THE SAME BOOKS AND ASSESSMENTS, RECEIVED THE SAME COACHING, AND PUT IN PLACE THE SAME POLICIES.


Q: Did the evaluation design describe practices in the comparison groups?

A: YES, THE IMPLEMENTATION STUDIES SHOW THE SIMILARITIES IN PRACTICES AND HOW, OVER TIME, THE PRACTICES THAT WERE SIMILAR AT THE BEGINNING BECAME INCREASINGLY SIMILAR EACH YEAR. THAT WILL BE CLEARER IN THE NEXT STUDY OUT.

Q: Did the evaluation design account in any way for contamination, crossover, compensatory rivalry, etc.?

A: NO. THE FEDERAL LAW CALLED FOR THE EVALUATION OF READING FIRST IN TERMS OF THE EFFECTIVENESS OF THE INSTRUCTIONAL MODEL, BUT DID NOT CALL FOR A STUDY OF THE IMPACT OF READING FIRST UPON THE ENTIRE EDUCATIONAL SYSTEM.

EVEN THOUGH I HAD PERSONALLY MADE A BIG DEAL OUT OF THE PROBLEM FROM THE VERY FIRST STUDY DESIGN MEETING, THE METHODOLOGISTS THOUGHT THEY COULD HANDLE MY PROBLEM SIMPLY BY ACCOUNTING FOR THE RF ROLLOUT EACH YEAR. THEIR ASSUMPTION WAS THAT RF WOULD IMPLEMENT SOME CHANGES IN YEAR 1, OTHERS IN YEAR 2, AND STILL OTHERS IN YEAR 3 AND THAT THIS PATTERN OF IMPLEMENTATION WOULD ALLOW THEM TO EXAMINE A CONTINUING LAG BETWEEN THE RF AND NON-RF SCHOOLS.

I DIDN’T REALIZE THEY WERE THINKING THAT, AND THEY NEVER ASKED ME ABOUT IT DIRECTLY. LAST YEAR, I FIGURED OUT WHAT THEY WERE ASSUMING AND HAD TO EXPLAIN SEVERAL TIMES THAT RF PUT ALL OF ITS REFORMS IN PLACE DURING YEAR 1, WITH NOTHING NEW IN YEARS 2 AND 3, SO IT WOULD BE IMPOSSIBLE TO TEST THE EFFECTS OF DIFFERENT PARTS OF THE IMPLEMENTATION USING THEIR APPROACH. I MIGHT HAVE BEEN ABLE TO GET THIS FIXED IF I HAD UNDERSTOOD THAT THEY WERE ASSUMING THAT KIND OF DESIGN (OR IF THEY HAD ASKED ME ABOUT IT SPECIFICALLY).


Q: Can we assume that the RF group is just like the comparison group except for exposure to RF funding?

A: READ THE IMPLEMENTATION PART OF THE REPORT (AND THERE IS ANOTHER STUDY COMING LATER THAT WILL MAKE THIS CLEARER) AND YOU’LL SEE THE DEGREE OF SIMILARITY IN THE KEY FACTORS BETWEEN THE TWO SETS OF SCHOOLS. I RAISED THIS AS A THEORETICAL PROBLEM ORIGINALLY, BUT THE IMPLEMENTATION STUDY CLEARLY SHOWS THAT CONTAMINATION WAS A BIG PROBLEM (IT CANNOT TELL US WHETHER THE CONTAMINATION CAME FROM THE $1 BILLION FEDERAL EXPENDITURE ON THIS, BECAUSE THE STATES AND LOCAL DISTRICTS OFTEN SIMPLY ADOPTED THE SAME IDEAS).

AS ONE ILLINOIS DISTRICT TOLD ME, “IF THIS IS THE RIGHT STUFF TO DO, THEN WE ARE GOING TO DO IT WITH EVERYONE.” 


That’s a lot to chew on, but it is a worthwhile meal.  Even for the most simple-minded of laypeople (like Eduflack), it is clear that the IES study had no real control group.  We had RF schools and non-RF schools, both of which were doing similar things with similar materials.  How can we compare the two groups as haves and have-nots when the only measure of separation is the bucket of money that was paying for the approach?
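For the statistically minded, here is a back-of-the-envelope sketch of why the overlap Shanahan describes matters so much.  This is Eduflack’s own illustration, not anything from the IES design; the effect size, significance level, and contamination rates below are all hypothetical, and the formula is just the textbook two-sample power approximation.

```python
# Hypothetical illustration: how treatment/control "contamination" dilutes
# an observable effect and inflates the sample size needed to detect it.
# Uses the standard two-sample power approximation:
#   n per group ~ 2 * (z_alpha + z_beta)^2 / d^2
from statistics import NormalDist

def required_n_per_group(effect_size, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sided, two-sample comparison."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_beta = NormalDist().inv_cdf(power)
    return 2 * (z_alpha + z_beta) ** 2 / effect_size ** 2

TRUE_EFFECT = 0.30  # hypothetical effect size if there were zero overlap

for overlap in (0.0, 0.15, 0.30, 0.75, 1.0):
    # If the comparison group gets the same treatment at this rate,
    # the measurable difference between groups shrinks proportionally.
    diluted = TRUE_EFFECT * (1 - overlap)
    if diluted > 0:
        print(f"{overlap:.0%} overlap -> effect {diluted:.2f}, "
              f"~{required_n_per_group(diluted):.0f} schools per group")
    else:
        print(f"{overlap:.0%} overlap -> no detectable difference at any n")
```

The arithmetic is the point: at the 75 to 100 percent overlap Shanahan reports, even a genuinely effective program would show a measurable difference near zero, and no affordable sample could reliably detect it.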

Dr. Whitehurst, I’ll yield the pulpit to you if you’d like to respond.
 

 

“What Happened?”

When Eduflack first started off on Capitol Hill, I was fortunate enough to have a mentor who invested the time in teaching me the finer points of being an “on-the-record” spokesman.  I was working for Sen. Robert C. Byrd (WV) at the time, 22 years old and incredibly wet behind the ears.  Byrd’s spokesperson on the Senate Appropriations Committee, Marsha Berry, took me under her wing.  She walked me through the Senate Press Gallery, introducing me to the gaggle of reporters.  She gave me a great deal of advice and coaching.

One piece of advice she left me was a simple one that I have followed ever since.  “Never, ever lie,” Marsha said.  Lie to a reporter once, and you’ve lost his trust.  Lose his trust, and you can’t do the job.

She was absolutely right, and I have done my best to ensure that I always told reporters the truth.  I went on to serve as spokesman for other senators and congressmen.  I did it for government panels and government agencies.  For non-profits and corporations.  I even did it on the campaign trail.  And while I’d sometimes joke about plausible deniability (usually around questions of campaign fundraising), my goal was always to provide needed information to reporters.  Sure, I’d spin it in a favorable way.  But the information was always accurate (or as accurate as it could be), and I trusted what I said.

I have always known I was fortunate when it came to who I worked for.  Be it Byrd, Senator Bill Bradley (NJ), or Congressman John Olver (MA), I worked for honorable men who I trusted and who I was proud to work for.  Yes, I regularly jousted with them on particular policy issues, asking if voting against X policy was good for the upcoming campaign, but I knew I worked for good men who were ultimately doing what they knew was best.  And I thought that’s what most spokespeople did.  Particularly if you worked for the President of the United States.

By now, most of us have heard of Scott McClellan’s new memoir, “What Happened.”  The former Bush press secretary takes a very aggressive stance against his former boss.  And, essentially, McClellan says he regularly stood up behind the podium and lied to reporters on a host of issues.  Of course, it was his higher-ups’ fault that he lied.  He just followed orders.

Eduflack just can’t buy that.  Sure, I have never walked in McClellan’s shoes.  I’ve only done the job on Capitol Hill.  But I’ve done it long enough to know that a good press secretary (or communications director, whatever your preferred title may be) takes the time to look under the hood and understand the issues.  He moves beyond the talking points to learn.  He asks questions.  He anticipates even more questions.  And he is prepared to deal with any issue that is thrown his way.  He becomes an expert on all issues, and rarely takes any one person’s word on a controversial topic.

Saying you lied and just followed orders is a cop out.  It’s lazy work, and it is one of the reasons folks think PR is so easy.  A good spokesman knows all the facts.  He relays those facts as effectively as possible.  He speaks truth, even under tough circumstances.  He truly sees himself as an extension of his boss, sharing information to as broad an audience as possible.

I know, I know, what does all of this have to do with education reform?  A great deal, actually.  When educators are selling their education reforms, be it to the media or the community, they need to be trustworthy.  They can’t stretch the data or make guesses about impact.  They need to know the facts, and stick to them.  And they can never, ever lie.  If you do, your reform is history.  No educator, no policymaker, no reporter will take you seriously if you are caught telling an untruth about efficacy or impact.

Is 100% Proficiency Possible? You Betcha

Since gaining its moniker, No Child Left Behind has faced growing scrutiny about its goal — ensuring that every student is achieving at grade level.  On the reading side of the coin, when NCLB was passed into law, only 60 percent of fourth graders were proficient or better at reading.  Two of every five students were struggling readers.  The goal was to get all five reading, offering scientifically based interventions to fill the gaps.

Such promises became a punchline for folks.  It seemed like some would have felt better if we had said “Only 10 Percent Left Behind” or “Just a Few Left Behind.”

Today’s Washington Post, though, shows that 100 percent proficiency is not just a campaign slogan, it can be a way of life for some schools.  Over at the Core Knowledge Blog, they’ve done a good job discussing this very topic, and the fact that a school in Ocean City has already completely fulfilled its AYP obligations.  Check it out at http://www.coreknowledge.org/blog/2008/05/28/no-child-no-problem/.

Such gains are not limited to our beachside communities.  We are starting to see more and more examples of schools that have cracked the code and figured out how to get every child reading and every child performing.  Case in point: Pennsylvania’s Souderton Collaborative Charter School.

Full disclosure, I recently came across Souderton as part of my day job.  Based in Montgomery County, PA, this K-8 school has clear academic goals.  For language arts, that goal is to “read with comprehension, to write with skill, and to communicate effectively and responsibly in a variety of ways/settings.”

To achieve this goal, the school leadership adopted a scientifically based approach to independent reading.  The school provides books on topics of interest to the student, at reading levels and content appropriate to the student’s age.  In turn, students develop an interest in and passion for reading, building the skills they need to succeed in ELA and other classroom subjects.

The result?  Success.  Don’t believe Eduflack?  Take a look at Souderton’s results on the PSSA for 2005-06 — Pennsylvania’s state assessment.  Third grade PSSA reading scores — 100% proficient or better.  Fourth grade PSSA reading scores — 100% proficient or better.  Even seventh grade reading scores — 100% proficient or better.  That’s every child reading at grade level.

Souderton achieved this, in part, because they are using approaches that are proven effective.  Their reading instruction models the best practices called for by the National Reading Panel and Reading First.  They are empowering both students and teachers, inspiring both to achieve.  And the results show.

Ocean City and Souderton can’t be the only schools with these sorts of results.  While schools don’t have to be 100 percent proficient until 2014, I have a feeling that these two schools are but the tip of the iceberg when it comes to unsung heroes that are achieving despite the white noise of failure and impossibility.  We should be modeling behaviors after schools like OC and Souderton.  And we, including Eduflack, should be doing a better job uncovering those schools that are doing it right.  Finding those schools that are achieving.  Throwing the spotlight on those communities where SBRR works, and where student reading proficiency is the norm, not the exception.
 

RF Works, Just Ask Idaho

If we believe the initial buzz from this month (along with the interim study from IES), the Reading First program just doesn’t seem to do the job it was intended to take on.  By now, those who care have heard all about the IES study, as well as the growing criticism about its shortcomings, most notably its methodology.

Throughout this debate, we’ve heard little from the practitioners who have put RF to work in their states or communities.  From those who have seen the positive effects of scientifically based reading research.  From those who have determined what works for their schools and their kids.  Until now.

Over at www.ednews.org, we’re seeing continued comment on this RF debate.  Of particular note is a comment recently posted by Steven Underwood, the Reading First School Improvement Coordinator for Boise State University’s Center for School Improvement & Policy Studies.  The headline — Reading First is working in Idaho.  Not just working, but really working.  Almost as if RF was designed to help struggling schools boost student reading proficiency.

Rather than summarize Underwood’s contribution to the debate, let’s hear directly from the horse’s mouth, with thanks to Underwood for letting Eduflack use the words originally posted at www.ednews.org.

“I applaud the efforts to help the nation’s most at-risk children by consulting a large body of research and theory, sifting out opinion from facts, and making policies and practices that benefit children. It is unfortunate, but many of the critics of Reading First both here and elsewhere seem to speak foremost of theory and secondarily of students. I am saddened by the number of critics who neither have worked in Reading First schools nor fully understand their practices. To continue the analogy of the car from previous posts, many critics, who undoubtedly mean well in their criticisms, seem to misunderstand the repair work that is being done and seem to be completely unaware of the data that demonstrate that Reading First is having a positive impact on student outcomes. In the criticisms, it seems like people are criticizing the mechanic who is working on the complex engine (of literacy among disadvantaged students) without themselves having ever been truly successful at fixing engines which demonstrate the same types of problems. Literacy among our nation’s needy children has been a nationwide concern for years, and Reading First is the first systemic approach to find success in addressing that concern. Had the [IES] study been conducted more in line with the mandate given to IES, we would be able to better understand the impact of Reading First at the national level. However, since the study was not well designed and did not meet its mandate, being people of reason, we are obliged to evaluate all of the other data that has been provided through systems such as the annual performance reports over the course of the years. As one studies these data, Reading First is arguably the most powerful federal education program to date. As part of No Child Left Behind, Reading First has demonstrated powerful results among those children in our nation who have traditionally been “left behind” in literacy skills.

In support of this, allow me to briefly summarize results from the state of Idaho. To qualify to become a Reading First school in Idaho, a district has to have the highest level of needs (e.g. the largest percentages of free and reduced lunch in the state) and the lowest available financial resources to meet those needs. The reason for this qualification is that student performance has so often been correlated with socio-economic status. Even though Idaho Reading First schools have such high needs, they have not only grown in their data more quickly on state reading measures, but have closed or nearly closed the gap in all grade levels. Idaho has a universal K-3 reading screener, the IRI, which measures fluency and basic comprehension. From 2003 to 2007, Reading First schools in Idaho improved on this measure at a rate that exceeded the state’s growth during the same timeframe and currently have an overall average that is within 4 percentage points of the state average.


More importantly, Idaho’s economically disadvantaged students grew at a rate in Reading First schools that far surpassed their economically disadvantaged peers in state averages. Among this subpopulation, which is a focus in the NCLB legislation, Reading First schools performed at a rate of improvement between 2003 and 2007 that was 12% better than the state average in Grade 1, 10% better in Grade 2, and 7% better in Grade 3. These results are also mirrored in the comprehensive outcome measure for Idaho Reading First schools. Idaho Reading First schools have consistently performed more than 10 percentile points above the national cut-score on the Normal Curve Equivalence for ITBS Reading Comprehension. This average far surpasses the last year in Idaho in which the ITBS was given to all students (2001), which again demonstrates that Reading First is closing the gap among the neediest children in our state. Furthermore, among economically disadvantaged students, Reading First schools have improved ITBS scores at rates between 20% and 24% in Grades 1-3 from 2004 to 2007, which again demonstrates alignment of reading comprehension results with one of the primary missions of Reading First. Lastly, and very importantly, Idaho Reading First schools are demonstrating greater overall gains and closing the achievement gap on the Grade 3 AYP measure for reading, the ISAT.


Whereas in 2003, the participating schools were significantly behind the state average, Idaho Reading First Schools are now within 2 percentage points of the state average. While the IES interim report may show no statistical significance in its study sample, the reality of Reading First in Idaho shows a vastly different picture. As mentioned before, it is unfortunate that some well-meaning educators criticize Reading First based upon political preference, theory alone, opinion, or incomplete and misleading information. The interim study published by IES did not do an adequate job in meeting its mandate, nor was it representative of the nationwide set of Reading First schools, nor did it triangulate multiple sets of reading data, nor did it identify all of the pertinent variables, nor did it operate on the basis of a true pre-Reading First baseline. With these and other criticisms of the impact study in mind, I respectfully ask our critical colleagues who believe Reading First to be ineffective to review the broader set of data that exist. Reading First has set a high standard for our nation’s public elementary schools who serve its neediest children. According to multiple sets of data in multiple states, this high standard is paying off for thousands upon thousands of children.”


There you go.  Reading First is working in Idaho.  In a state where the motto is “Let it be perpetual,” they are making reading instruction improvements that will empower a generation of new readers.  And I’m betting there are a lot more states like it that are showing similar gains and similar benefits from RF and the implementation of SBRR in the classroom.  We should be out there cultivating these positive stories, spotlighting those schools, LEAs, and SEAs that are making a difference and boosting student achievement.  I know that is harder than promoting our failures and explaining why AYP can never be achieved, but we can learn a lot more examining what works rather than volleying around excuses for what doesn’t.

Lookin’ for Edu-R&D Sugardaddies

For years now, we have heard IES Director Russ Whitehurst lament the dearth of funding for education research and development.  Compare the U.S. Department of Education’s research budget with that of the U.S. Department of Health and Human Services, and the gap is embarrassing (even as a percentage of each agency’s total budget).

The good folks over at Knowledge Alliance (formerly NEKIA) have waved a similar banner.  If we expect a scientifically based educational experience, we need to invest in scientifically based research.  If we are going to do what works, we need to investigate it.  And if we are going to drive the squishy research from the K-12 kingdom, we need to make meaningful investments in the strong, scientific, longitudinal research we are seeking.

Yet education R&D still seems to be feeding from the scraps of practice.  We have few industry leaders that are funding R&D the way we see it in the health industry.  And that view becomes even more acute today, when the Howard Hughes Medical Institute announces a $600 million grant to fund the research of 56 top medical researchers.  The Washington Post has the full story here — http://www.washingtonpost.com/wp-dyn/content/article/2008/05/27/AR2008052701014.html?hpid=topnews.

It has all got Eduflack thinking of the impact such an investment could have on education. Just imagine if a philanthropy offered up $200 or $100 or even $50 million to education’s top researchers to develop major findings in how to improve public education.  Science and math instruction.  ELL.  Teacher training.  Effects of technology.  Charters.  The list of possible topics is limitless.  In reading alone, you can take a look at the list of potential research subjects offered by the National Reading Panel in 2000.  Today, most of those still haven’t been pursued.

But we all recognize that such sugardaddies are few and far between in education reform.  We put our money on educational practice.  We fund practitioners.  R&D is an add-on, often used just to test the ROI for funders, be they philanthropic or corporate.

Yes, we have significant education investment from groups like the Bill & Melinda Gates Foundation. They have made a significant contribution to funding education reforms, particularly in our urban areas.  But the focus is not on R&D, it is on classroom practice.  Valuable indeed, but it doesn’t mean we don’t need a similar investment on the research side.  In fact, such R&D investment can ensure Gates’ money is being wisely spent.

Without question, the money available in the education industry is at levels never imagined in generations past.  Somewhere among those growing pots, there must be a potential sugardaddy (or a collection of sugarbabies) who can do for education what the Hughes Institute is doing for medicine.  

As we struggle with the definitions of SBRR and the findings of the WWC, just imagine the impact we can have with a nine-figure investment in education R&D, particularly if it is led through a public-private partnership.  

Today, education reform is kinda like filling a lake with a teaspoon.  We’re adding some drops here or there, but we can’t necessarily see the impact.  With stronger R&D, we have the option of at least adding water by the barrelful, if not more.  And that’s the only way to raise the opportunity boats of the kids who need it most.
 

The Saga of RF Profiteers Continues

Last week, Eduflack opined on where all of the Reading First profiteers have gone.  (http://blog.eduflack.com/2008/05/21/calling-all-rf-profiteers.aspx)  As the program is under siege and the funding has dried up, those who personally profited the most are nowhere to be found.  A word of thanks to the Core Knowledge blog for throwing some additional spotlight on the important issue.

Over the weekend, we received an interesting comment from Richard Allington, the former president of the International Reading Association.  Sure, Allington has long been tagged as a RF opponent, but no one can question that he understands the concept of scientifically based reading research.

His posting no doubt got me thinking.  But more importantly, it got Reid Lyon thinking.  As a godfather of RF, Reid definitely knows what he is talking about, and the volume of his RF conversation has increased dramatically in recent weeks.  And it is important that we listen. 

So without further ado, Reid Lyon’s response to Allington’s thoughts on RF profiteers …

“I believe that these interchanges among individuals with different perspectives on Reading First are helpful, as improvements are impossible without productive debate. In my mind, the debates are more productive when sufficient details are presented to support a particular point of view. Riccards brings up the detail that publishers and vendors were selling to districts and schools before the Technical Assistance Centers were ever established. He is correct. Many did not need a “list” to garner a substantial amount of Reading First funding. Bob Sweet and I predicted that when the legislative language for Reading First was softened to its use of the “based on” criterion, a feeding frenzy would ensue with everybody and their brother hawking a program based on SBRR.

Like Allington, we felt in drafting the initial language requiring program-specific language that publishers and vendors would be highly motivated to test their products. That still has not happened. I need more details on which programs were “banned.” I know that Chris Doherty was compelled by the law not to fund programs with no basis in SBRR, and he followed that law. The Wright program was not funded because it was not comprehensive and did not meet additional criteria in the law. The Wright program, to its credit, attended to the reviews of its product and made substantial changes so that it now meets all criteria.
 

Allington may be talking about Reading Recovery as a “banned” program, but Reading Recovery was funded by some states using Reading First funds. The allegations made by Success for All are baseless, as indicated by no findings by the OIG of that product being placed at a disadvantage in either its first major auditing report or its audit of New York State. There has been absolutely no evidence of any state or district being pressured by the Reading First office to either drop SFA or not implement SFA. In fact, emails between different states’ Reading First officials, SFA, and a Technical Assistance Center reveal substantial positive interactions in trying to ensure that SFA could participate fully in Reading First.

There are two points that Allington makes where more detail would be very helpful. First, Allington makes the point that the WWC found that Reading Recovery (RR) has strong evidence that it improves general reading achievement. This is a very general statement. My colleagues and I have published a number of papers over the past several years addressing the effectiveness of Reading Recovery, and in each review concluded it was effective – for some. Concerns about the efficacy of RR have been based, in part, on whether the program is successful with the lowest-performing students – students typically served in Reading First programs. Reading Recovery has typically targeted students who perform in the lowest 20% of their classes. The actual performance level of participants varies from school to school. Although the research from the developers of RR continues to indicate efficacy for about 70% of the students in the program (a very strong degree of effectiveness), its reported effects are much weaker when students who do not meet the program’s exit criteria are included in the analyses of outcomes (see Fletcher, Lyon, Fuchs, & Barnes, 2007, for a review).

In addition, a review by Elbaum et al. (2000) found that gains for the poorest readers were often minimal, which Elbaum et al. suggested may be related to the need for more explicit instruction in decoding. A recent meta-analysis also found that RR was effective for many grade 1 students (D’Agostino & Murphy, 2004). This study disaggregated RR outcomes by whether the outcomes involved standardized achievement tests or the Observation Survey, which parallels the RR curriculum. It also separated results for students who successfully completed RR (i.e., met program criteria and were discontinued) versus those who were unsuccessful or left the program before receiving 20 lessons (i.e., were not discontinued), and according to the methodological rigor of the studies. When the comparison group was low-achieving students, the average effect size on standardized achievement tests for all discontinued and not-discontinued students was in the small range (.32), and higher for discontinued (.48) than not-discontinued (-.34) students. This finding was consistent with Elbaum, Vaughn, Hughes, and Moody (2000), who reported that RR was less effective for students with more severe reading problems. D’Agostino and Murphy (2004) found that analyses based on just the more rigorous studies included in their meta-analysis, in which evaluation groups were more comparable on pretests, showed smaller but significant effect sizes on standardized measures. Disaggregation according to whether the student was discontinued or not was not possible. Effect sizes were much larger for the Observation Survey measures, but these assessments are tailored to the curriculum and also have severely skewed distributions at the beginning and end of grade 1, which suggests the Observation Survey should not be analyzed as a continuous variable in program evaluation studies (Denton, Ciancio, & Fletcher, 2006).

By assessing in greater detail the degree to which well-defined groups of students respond positively to well-defined interventions, we increase the likelihood that particular programs will be implemented in a more thoughtful manner rather than as a magic bullet – and this is the case for all programs.

Allington also concluded that the IES Interim Report on the Reading First Impact Study should be the final word on the effectiveness of the program.  Details are critical in drawing this conclusion, and they are missing in both Allington's statement and in the media coverage of the report.  Two details are noteworthy – the sample is not representative of the universe of all Reading First schools nationally, and the ability to draw meaningful conclusions about the null results is very limited due to the contamination between Reading First and non-Reading First schools with respect to shared professional development and common instructional programs.  Allington has jumped to faulty conclusions in the past.  Recently he asked the field to read two invited papers in an issue of the Elementary School Journal that he edited, papers that ostensibly overturned the results obtained by the Phonics Subgroup of the NRP.  However, a formal replication of both studies, published in a top-ranked, peer-reviewed archival journal (Journal of Educational Psychology), did not support the conclusions of either paper regarding the impact of systematic phonics instruction on reading outcomes.  This is science at its best: replication adjudicates claims arising from published data, particularly when the process is characterized by mature scientific dialogue.

I predict that the jury is still out on the effectiveness of Reading First.  Who knows – if the evaluation carried out by IES had actually aligned with the evaluation required in the law, more detail might have helped us interpret the results with greater confidence.  But I bet that even if these flawed comparisons had shown Reading First schools to be superior to non-Reading First schools, many would have argued that Reading First had not been in place long enough to make these claims."
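For readers less familiar with the effect sizes Shanahan cites (.32, .48, -.34), these are standardized mean differences: the gap between two groups' average scores expressed in standard deviation units.  Here is a minimal sketch in Python of how such a figure is computed (Cohen's d, with a pooled standard deviation); the scores below are entirely made up for illustration, not data from any of the studies discussed above.

```python
# Illustrative only: computing a standardized effect size (Cohen's d).
# The scores are hypothetical, not from the Reading Recovery literature.
from math import sqrt
from statistics import mean, stdev


def cohens_d(group_a, group_b):
    """Difference in group means divided by the pooled standard deviation."""
    na, nb = len(group_a), len(group_b)
    pooled_sd = sqrt(((na - 1) * stdev(group_a) ** 2 +
                      (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2))
    return (mean(group_a) - mean(group_b)) / pooled_sd


# Hypothetical post-test scores for tutored vs. comparison students
tutored = [52, 55, 58, 60, 61, 63]
comparison = [48, 50, 53, 55, 56, 58]
print(round(cohens_d(tutored, comparison), 2))  # d of about 1.23 here
```

By a common rule of thumb, values near .2 are "small," .5 "medium," and .8 "large" – which is why a .32 on standardized achievement tests is characterized as being in the small range.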

The saga continues.  Dr. Allington, I’ll offer you a chance to respond, if you are so inclined.
 

“Fortune and Glory …”

Over the years, we have heard of the effects of pop culture on higher education pursuits.  In the 1980s, the data showed a spike in law school enrollments, credited to the "L.A. Law" effect – young legal minds seeking to be the next Arnie Becker or Victor Sifuentes.  In the 1990s, it was the "ER" effect, with increases in medical school applications as young doctors-to-be sought to gain a residency slot at County General.  And in recent years, it has been the "CSI" effect, as aspiring criminologists sought to collect prints in Vegas or Miami.

This weekend, Eduflack had one of those rare instances where he was able to slip out to a movie.  (Having a two-year-old in the house means this was the first newly released movie Eduwife and I have been able to see in six months.)  Without giving it a second thought, we jumped in the Edumobile and headed out to an early morning show of "Indiana Jones and the Kingdom of the Crystal Skull."

Two hours later, I was certain I needed to quit all of this ed reform stuff, go back to school, and become an archaeologist.  If it weren't for my inability to gain competency in any foreign language (Indy seems to speak dozens, including the dead-for-a-thousand-years ones), I'd be fitting myself for a fedora, mastering the bullwhip, and heading out to the jungles, deserts, and mountains where antiquities, fortune, and glory can be found.  I wouldn't even mind teaching those quaint little undergraduate classes on the civilizations and legends of the past.

Of course, I know this isn't what archaeology is really like.  But it is enough to get the juices and the mind flowing, while inspiring us to pursue new ideas.  We also knew that going to law school didn't mean a high-powered barrister's life in the City of Angels, nor did the forensic sciences afford us a life of glamour, power, and intrigue.  But these pop culture moments inspire others to pursue education.  They see something on TV or at the movies, and have an "aha" moment.  A career possibility to be explored.  An academic pursuit newly discovered.  Doors of knowledge opening for the first time.

Areas like archaeology and ancient history are in need of such "aha" moments.  College majors where many don't see true fortune and glory are passed over for business or pre-law or economics.  But much value can be found in these subjects and others like them.  Sure, none of us is going to become the next Indiana Jones, but that doesn't mean we can't use these moments to educate and to inspire.  To teach and to learn.  It is a similar philosophy that has us putting a lens of relevance, interest, and passion around the STEM subjects.

But sometimes we have cold water thrown on our dreams of leather jackets, arks, and temples.  Just check out the piece in today's Washington Post from Neil Asher Silberman.  http://www.washingtonpost.com/wp-dyn/content/article/2008/05/23/AR2008052302453.html  He paints a much different picture of the job: excavating by centimeters and analyzing plant remains.  With the stroke of a pen, he took all of the excitement and passion out of a career path that needs passionate and committed scholars.  Unintentionally, Silberman took away a great teaching moment to inspire students to study history, science, and the humanities all rolled into one.

Oh well, I guess that archaeologist-adventurer job will have to be left to my dreams.  Back to ed reform.