Injecting Some Moneyball into Student Testing

I’ve always been one to find love in both the art and science of a given subject. As a lifelong baseball fan – and a pretty poor baseball player through high school – I quickly embraced Ted Williams’ The Science of Hitting, believing that the charts and graphs explaining strike zones and such would somehow transform me from a doubles hitter into a home run machine. Sadly, it never did.

I’m also an unabashed fan of the New York Mets, and have been since the early 1980s. For more than three decades, I have endured the highs and lows (mostly lows) of rooting for the Metropolitans and believing this might just be the year.

Sadly, the 2018 season wasn’t that year for the Mets. But it was such a year for Mets ace Jacob deGrom. Last week, the All-Star received the Cy Young Award as the best pitcher in the National League. It was a well-deserved honor, recognizing one of the best seasons a starting pitcher has ever had, including an earned run average of just 1.70, a WHIP of 0.912, and 269 strikeouts in 217 innings pitched. DeGrom took first place on all but one of the ballots cast this year, offering a rare highlight in another tough Mets season.

Leading up to the award, some analysts wondered whether deGrom would win the Cy Young, despite those impressive numbers. The major ding against him was that he was pitching for the Mets, and as a result posted only a 10-9 record, getting almost no run support from his team all season. DeGrom’s top competition in the NL had 18 wins. The Cy Young winner in the American League posted 21 victories. So when a 10-9 record won the Cy Young, some critics pounced, accusing sabermetrics and “moneyball” of taking over the awards. The thinking was that one of the chief attributes of a top starting pitcher is how many wins he has. If you aren’t winning, how can you possibly be the best?

All the discussions about how sabermetrics has ruined baseball – or at least baseball awards – soon had me thinking about education and education testing. For well over a decade, we have insisted that student achievement, and the quality of our schools, be judged by a single metric. Student performance on the state test is king. It was the single determinant during the NCLB era, and it remains so during the PARCC/Smarter Balanced reign.

Sure, some have led Quixotic fights against “high-stakes testing” in general, but we all know that testing isn’t going anywhere. PARCC may ultimately be replaced by a new state test (as my state of New Jersey is looking to do), and the consortium may one day give way to the latest and greatest, but testing is here to stay. The calls for accountability are so great, and the dollars spent on K-12 education so high, that not placing some sort of testing metric on schools, and kids, is a fairy tale. The only question we should be asking is whether we are administering and analyzing the right tests.

I’ve long been a believer in education data and the importance of quantifiable research, particularly when it comes to demonstrating excellence or improvement. But I still remember the moment when I realized that data was fallible. While serving on a local school board in Virginia, overseeing one of the top school districts in the nation, we were told that our nationally ranked high school had failed to make AYP. At first I couldn’t understand how this was possible. Then I realized we were the victims of a small N size. A handful of students in special education and ELL had dinged us in the AYP evaluation – the same handful of students counted in both groups. It didn’t make our high school any less than it was. It didn’t reduce our desire to address the learning needs of those specific students. But the state test declared we weren’t making adequate progress. The data had failed us.

The same can be said about the use of value-added measures (VAM scores) in evaluating teachers and schools. VAM may indeed remain the best method for evaluating teachers based on student academic performance. But it is a badly flawed method, at best – one that doesn’t take into account the limited range of subjects assessed on state tests, small class sizes (particularly in rural communities or in subjects like STEM), or the transience of the teaching profession, even within a given school year. Despite these flaws, we still use VAM scores because we just don’t have any better alternatives.

Which gets me back to Jake deGrom and moneyball. Maybe it is time that we look at school and student success through a sabermetric lens. Sure, some years success can be measured based on performance on the PARCC, just like many years the best pitcher in baseball has the most victories that season. But maybe, just maybe, there are other outcomes metrics we can and should be using to determine achievement and progress.

This means more than just injecting the MAP test or other interim assessments into the process. It means finding other quantifiable metrics that can be used to determine student progress. It means identifying the shortcomings of a school – or even a student – and then measuring teaching and learning based on that. It means understanding that outcomes can be measured in multiple ways, through multiple tools, including but not limited to an online adaptive test. And it means applying all we know about cognitive learning to establish evaluative tools that live and breathe and adapt based on everything we know about teaching, learning, and the human brain.

DeGrom won the Cy Young because teams feared him every time his turn in the rotation came up. We knew he had a history-making season because of traditional metrics like strikeouts and innings pitched, but also because of moneyball metrics like “wins above replacement,” or WAR, and “walks and hits per inning pitched,” or WHIP. Had he not won that 10th game in the last week of the season, thus giving him a winning record, deGrom’s season would have been no less stellar. In fact, a losing record would only have underscored his personal success and impact, despite what those around him were able to do.

Maybe it is finally time for a little moneyball thinking to work its way into student assessment. Hopefully, this discussion will come before the Mets reach their next World Series.

 

 

NAEP Response: More than Words?

Now that the dust has finally settled on the most recent dump of NAEP scores, we must admit that the results just aren’t good. For a decade now, student performance on our national reading and math tests has remained stagnant. And that stagnation holds only because a few select demographic groups managed gains that kept everyone afloat.

At a time when we all seem to agree that today’s students need stronger and greater skills to succeed in tomorrow’s world, how can we be satisfied with stagnation? And how can we respond simply with words, with the rhetoric of how our students can and should do better?

Over on the BAM! Radio Network, we explore the topic, reflecting on both how we cannot be satisfied with our students treading water and how we need to take real action to improve teaching and learning in the classroom. Give it a listen. Then show your work.

 

Crediting DeVos Where Credit Is Due

Earlier this month, the US Department of Education announced a new effort to pilot flexibility when it comes to student assessment. After years of gripes from the education community about the problems with tests, and about wonks in DC telling educators in the localities what to do and how to measure it, the Feds are finally offering a little flexibility and local control when it comes to testing.

What was the response? Largely crickets. Almost no one tipped the cap or offered kudos to EdSec Betsy DeVos for following through on this effort to pilot assessment flexibility. And that’s a cryin’ shame.

On the most recent episode of #TrumpEd on the BAM! Radio Network, we explore the most-recent testing actions from ED and how this should be seen as a good thing in the evolution of federal/locality education policy. Give it a listen!

Really!?! You’re Going to Make Me Defend PARCC Again?

I really didn’t want to spend this week defending PARCC tests, but the universe is working against dear ol’ Eduflack. Yet again, I’m forced to take up rhetorical arms against those who either fail to understand, or choose to prey on, concerns regarding the Common Core and the assessments used to measure student progress against those standards.

This week, an Eduflack reader shared a screenshot of a recent web page. The page below was created for parents in a highly resourced, high-performing school district. It was shared as one would share promotional materials for the latest summer camp or children’s social activity. And it preys on the helicopter parents’ worst fears.

[Screenshot: advertisement for a “PARCC Preparation Camp”]

Yep, it’s time to send your little ones to “PARCC Preparation Camp.” Over the course of a month and a half, your child can spend their summer days in test prep, preparing for an assessment that one is not supposed to do test prep for. You can drill and be told those areas where you need to purchase additional tutoring, because the schools clearly aren’t cutting it. And I’m not even sure what you are getting when your 12-year-old receives “all guidance regarding writing PARCC tests,” but clearly that is important (it is the second selling point in a list of just four!).

And the camp enhances its offerings by highlighting to a STEM-obsessed parent community that additional tutoring in robotics and coding is also available. That makes it a downright party!

This is why we just can’t have nice things in the education community.

One would be hard pressed to find a parent who wouldn’t seek to give his or her child every possible bit of help when it comes to school. We are constantly inundated with television ads for the latest tutoring services, as for-profit companies pledge to turn the most struggling of learners into a future Nobel laureate. We purchase the latest technology, buy the latest software and apps, all in the name of giving our kids a leg up. As parents, one of our jobs is to ensure our kids are getting the best education possible. We use the resources we have to do the best we can at that job.

But when companies are taking advantage of that parental concern — and playing up community concerns around a specific test or particular instructional content — it just makes the blood boil.

And it should come as no surprise that such ads are populating parents’ social media at a time when the local community has started to learn that the PARCC test is being used to determine whether middle schoolers get into the gifted math classes sought by so many parents. Now, if your kid doesn’t get into the math class necessary to create the next Google or Bitcoin, it is your fault as a parent for not sending them to PARCC camp when you could. (And don’t even get me started on the PARCC test prep books that are now available. I can even find ones specific to the “New Jersey PARCC.”)

As parents, we need to do a far better job of educating ourselves on teaching and learning. Assessments like PARCC are not tests that one should be doing test prep for. They are tests meant to serve as a milestone for how the student is doing. Is my kid at a proficient level, compared with other fifth graders across the country? If not, I need to be talking to the teachers and the schools to understand where the deficiencies may be and address them appropriately and in partnership with the teacher. It isn’t a time to enroll my kid in PARCC boot camp or have them take the walk of PARCC shame.

Sadly, a great number of parents will likely sign up for this camp, and for others like it across the country. They will believe these strip-mall tutors hold the cryptex necessary to crack the PARCC code, win the game, get into the Ivy League, and become the smartest, most successful person in the history of persons. Sadder still, parents will credit PARCC gains to the test prep and their own foresight, not to the hard work of the teacher throughout the academic year.

Or they could just have their kids do some independent reading over the summer. And play outside. And identify, develop, and pursue some of their passions during the summer months.

P.T. Barnum allegedly claimed there was a sucker born every minute. Imagine what he would have said seeing test prep outfits take advantage of parent concerns over testing and the school achievement of their kids.

What We Have Here Is a Failure in Parent Communication

Last week, when announcing his incoming secretary of education, New Jersey Gov.-elect Phil Murphy noted his intention to “stop using PARCC tests.” The statement was hardly controversial. Across the Garden State, parents have spent the past three years voicing frustrations with the student assessment, reading from the talking points of Common Core and testing opponents.

So when the then governor-elect joined with parent advocates and the teachers unions in calling for the state to “create new, more effective and less class time-intrusive means for measuring student assessment,” it was no surprise that social media lit up in celebration.

Outside of Princeton, in my little Mayberry RFD, parents rejoiced. For days, Facebook lit up with messages of parents bidding the state test adieu. They celebrated the end of PARCC. They applauded that their kids wouldn’t have to take the weeks-long tests this winter. They cheered going back to the good ol’ days. They thanked the incoming governor for finally taking action. But their premature jubilation revealed our failure to adequately engage parents in the policy process and to communicate with them on important issues.

So dear ol’ Eduflack spent the weekend being the proverbial skunk at the garden party. Pointing out that the governor’s words have to be translated into legislative action by the New Jersey state legislature. Noting that New Jersey must still administer annual assessments to almost all of its K-12 students, and that PARCC has to be replaced with something else. Highlighting that if the state doesn’t use PARCC or Smarter Balanced, then it would need to pay to develop a similar test that would have to be approved by the federal government. And making clear that, even if such actions were taken this spring, it would be years before our kids would be free from PARCC assessments in the classroom.

Yes, parents across the state and throughout the country are well intentioned. Yes, they are paying enough attention to the issues that they can share anti-testing talking points about the length of tests, the use of technology, and the absence of early childhood experts in test development. But we are doing a great disservice when we share only part of the process – and part of the solution – with families.

One can’t throw a rock in education policy discussions without hitting someone speaking of the importance of family involvement and parental voice. Just as we like to declare the Simpsons-esque “what about the children?” in such discussions, so too do we ask where the parents are in the debate.

But too many are selective in how they want that parent voice present. We don’t want them involved in curricular discussions because that is the purview of the educators. We don’t want them to have too much power with regard to school choice, for that should be a decision of policymakers. We don’t want them involved in teacher evaluation, for they are unaware of the challenges and nuances of what happens in a school and classroom.

So we largely welcome parents twice a year to short parent-teacher conferences, we applaud when they show up for PTA meetings and school concerts, and we hope we won’t need to see them otherwise for disciplinary actions. We certainly don’t want them showing up on the school doorstep with their concerns regarding what is happening behind those doors.

Years ago, I was fortunate to collaborate with a group of tremendous researchers, scientists, educators, and parents on the book, Why Kids Can’t Read: Continuing to Challenge the Status Quo in Education. The book was designed to serve as a primer for parents to get involved in improving reading instruction in their kids’ classrooms. By focusing on what the research tells us, what is working in schools, what other parents have dealt with, and what tools can make a successful parent advocate, Why Kids Can’t Read was written to empower parents in their quest for a world-class education for their kids, for all kids.

In writing it, and since then in dealing with my own struggles as a special education parent, it has become clear that we largely don’t want empowered parents in the schools. If we look back through history, there are only a handful of moments where education policy truly changed because of the power of parents. Instead, we prefer to keep parents at arm’s length, giving only the illusion of involvement.

If we are serious about parents as partners in the learning process, we need to figure out how to truly educate them on it. It is insufficient to equip them solely with the talking points found on social media, and then expect them to be active partners in improvement. Better, stronger educational opportunities for our children can only come when parents are better educated on the processes and policies themselves.

Otherwise, parents are simply the proverbial dog chasing the squirrel, reacting to the latest buzzwords and urban legends shared on social media with the same buzzwords and urban legends they heard the week or month before. And that’s no way to improve teaching and learning for our children.

From Proficiency to Mastery

Earlier this year, EdSec Betsy DeVos caught a great deal of flak for not acknowledging the difference between proficiency and progress when it comes to student learning. But with her remarks earlier this month, she may have changed the discussion by shifting the debate to one on mastery. 

Over on BAM! Radio Network, we examine this development on the latest edition of #TrumpED. Give it a listen!

A Few Future-Looking Qs for DeVos

As Washington and the education community gear up for Betsy DeVos’ confirmation hearings to become the next EdSec, over at the BAM! Radio Network I explore a few areas we really should look into, but likely won’t.

Sure, we could spend the entire hearing discussing past actions on charter schools, vouchers, reform advocacy, and reform dollars. But rather than just talking about the past, what if we actually explored the future and how the U.S. Department of Education can impact the entire education community?

The nation needs a clear vision of accountability, teacher preparation, modes of learning and expectations for all. Now seems like as good a time as any to start asking. Give it a listen here. You won’t be disappointed.