What Parents Want from Student Assessments

It is quite clear that student assessments are quickly becoming the driving force in public education.  In state after state, we are now using student assessment to drive funding, teacher evaluation, and institutional direction.  While many may squabble over which types of assessments to use and how to apply them, there is no denying that student assessment is now ruling the day.

So what is it that parents (and teachers) actually want from the learning assessments administered in our classrooms?  That is the question that the Northwest Evaluation Association (NWEA) and Grunwald Associates asked earlier this month, and some of the responses were surprising.  All told, Grunwald Associates surveyed more than 1,000 K-12 teachers, more than 1,000 K-12 parents, and 200 district administrators.  The findings included:
* 90 percent of parents said monitoring their kids’ progress in school, knowing when to be concerned about progress, and determining preparedness for the next stage of learning was “extremely” or “very” important;
* More than eight in 10 parents (84 percent) said formative assessments are useful for instructional purposes, while only 44 percent said summative assessments were; 
* More than six in 10 teachers cited monitoring individual student performance and monitoring growth in learning over time as most important to them;
* With both parents and educators, 90 percent said it is important to measure student performance in math and English/language arts, as well as in other subjects like science, history, government and civics, economics, and technology and media literacy; and
* Only half of parents believe that summative assessment results are delivered in a timely manner.
And the big takeaways?  Teachers value formative and interim assessments far more than they do summative assessments (and that opinion is trickling down to parents).  The vast majority of teachers and parents want more testing (at least in more subjects) and want results delivered in a timely manner.  And a surprising number of K-12 parents seem to understand the subtleties among formative, interim, and summative assessments (or at least pretended to when distinguishing among them in responding to this survey).
It is encouraging to see that we continue to find value in student assessments, regardless of the form they come in.  But we also have a few key lessons learned from the NWEA/Grunwald data:
* We still aren’t seeing data being used effectively in classroom instruction.  Neither parents nor educators seem to believe that current data is being used to tailor and improve instruction in the classroom.  Why not?  With all the data we capture, we should be putting it into practice.  If not, this is all a fool’s errand.
* Testing turnaround time is taking too long.  Teachers and parents alike seem to believe the turnaround time from taking the test to getting the scores is just too lengthy.  It seems like the perfect opportunity to call for online, adaptive testing (whether it be formative or summative) where scores can be turned around and applied in real time.
* Parents follow the lead of their children’s educators.  On the whole, parents’ responses aligned with those of the teachers leading their kids’ classrooms.  Both the frustrations and the benefits of teaching, as seen through educators’ eyes, are making it back to the parents at home.  This relationship can serve as a valuable tool.
* There seems to be a call for adding testing to the school calendar.  While some bemoaned those “horrible” “high-stakes” summative assessments, there was a strong call for more tests on the front end.  This seems to run contrary to the drumbeat that there is too much testing in the classroom, but, if used properly, additional testing can be powerful in further shaping data-driven classrooms.
While such surveys will likely have little impact on the in-development Common Core standards assessments or on current state exams, they do provide some interesting context as we look at how to use tests in educator evaluation and other such measures.  Some food for thought.

“Teachers Matter”

Last evening, President Barack Obama delivered his State of the Union Address to Congress and the nation.  The speech focused on the four pillars the President and his team see as necessary for turning around the United States and strengthening our community and our economy.  No surprise for those following the pre-game shows: education stood as one of those four pillars.

Five paragraphs committed to education.  One pointing out our states and districts are cutting education budgets when we should be strengthening them.  One on the importance of teachers.  One on high school dropouts.  Two on higher education and how we fund a college education.  (We have a sixth if you include the President’s call to do something to help hard-working students who are not yet citizens.)
So let’s go ahead and dissect what the President offered up last evening.
“Teachers matter.”
Absolutely.  No question about it.  We cannot and should not reform our K-12 educational systems without educators.  Teachers (and I would add, principals) are the single greatest factor in education improvement.  They need to be at the table as we work toward the improved educational offerings the President and so many others dream of.
“So instead of bashing them, or defending the status quo, let’s offer schools a deal.  Give them the resources to keep good teachers on the job, and reward the best ones.”
Sign me up.  As the son of two educators, the last thing I want to do is bash a teacher (I’ll get in trouble with my mom if I do).  As I’ve said many times on this blog, teaching — particularly in this day and age — is one of the most difficult professions out there.  Most people aren’t cut out to do it, or at least do it well.  We need to make sure our precious tax dollars are being directed at recruiting, retaining, and supporting great teachers.  We should reward classroom excellence with merit pay and other acknowledgements.  But the President is also right in noting we cannot defend the status quo.  We can no longer debate whether reform is necessary.  Reform is necessary.  The discussion must now shift to how we change how we teach, not whether we change.
“In return, grant schools flexibility: To teach with creativity and passion; to stop teaching to the test; and to replace teachers who just aren’t helping kids learn.”
Yes, yes, yes.  Great educators know how to help virtually all kids learn.  They know how to tailor their instruction based on data and other research points.  We should be encouraging that and empowering teachers to do so each and every day.  But we can’t lose sight of that last clause (and many may have missed it last night over the cheap applause line of not teaching to the test).  We must “replace teachers who just aren’t helping kids learn.”  In our quest for a great educator in every classroom, we must also realize not everyone is cut out to teach.  We need serious educator evaluation systems that ensure everyone is evaluated, everyone is evaluated every year, and those evaluations are based primarily on student learning.  And, like it or not, student performance tests still remain the greatest measure we have for student learning.  So if we can’t get struggling educators the professional development and support necessary to excel in the classroom, we need to be prepared to transition them out of the school.
And lastly, President Obama’s “bold” call to action to ensure every student is college and career ready.
“I call on every State to require that all students stay in high school until they graduate or turn eighteen.” 
And here we have the President’s big educational swing and a miss.  This is a process goal, not an outcomes goal.  Based on AYP figures and recent data on school improvement and turnaround, we know that far too many kids — particularly those from historically disadvantaged populations — are attending failing schools.  This is particularly true of secondary school students.
Why force a student to stay in a school that has long been branded a “drop-out factory?”  Why keep a kid in school until he is 18 when he is only reading at the level of an eight-year-old?  Why stick around for a high school diploma when massive remediation will still be required to attend a postsecondary institution?
No, the call should not be to require students to stick around a bad situation, giving us nothing more than a process win.  Instead, we should be focused on improving the outcomes of high school.  How do we demonstrate the relevance of a high school curriculum?  How do we engage kids?  How do we provide choices for a meaningful high school education?  How do we show the college and career paths that come from earning that diploma?  How do we make kids see they want to stick around, and don’t have to be mandated to do so?
At this point in time, we all realize that a high school diploma is the bare minimum to participate in our economy and our society.  For most, some form of postsecondary education is also necessary.  Until we improve the quality and direction of our high schools — and help kids see that dropping out is never a viable option — that mandatory diploma will be nothing more than a certificate of attendance.  We need to make a diploma something all kids covet … not a mandatory experience like going to the dentist.
 

Educator Eval … With a British Accent

Over at Education Sector, there is a new report out focused on accountability efforts in England.  The report, On Her Majesty’s School Inspection Service, offers an interesting look at how expert “inspection teams” can evaluate the success of local schools and local teachers.

Riffing off EdSector’s new report, dear ol’ Eduflack has a guest blog post on Quick and the Ed, examining what we might be able to learn from the British inspectorate and how those lessons could be applied to current U.S. efforts to key in on educator evaluation.  The most important point?  British evaluations are all about the kids, with the vast majority of their multiple measures focused on students and student learning.
By now, we all realize that effective educator evaluation requires multiple measures.  While many want to focus on just the inputs that go into teaching – what our educators are bringing to the classroom – it is equally, if not more, important for us to focus on student achievement.  And England makes clear that student learning is the most important element to its evaluation system.
Definitely some food for thought as SEAs look for the best ways to build effective evaluation systems and determine how to measure educator effectiveness.
  

Saving American Education

So how do we “save American education?”  As a nation, we obviously spend a great deal of time diagnosing the problems, while offering a few targeted solutions.  But what does comprehensive treatment of the problem really look like?

That’s actually the question that Jay Mathews of The Washington Post recently posed to Marc Tucker, the head of the National Center on Education and the Economy.  And Tucker’s answers may surprise some.  His top five solutions?
1) Make admissions to teacher training programs more competitive
2) Raise teacher compensation significantly
3) Allow larger class sizes
4) End annual standardized testing
5) Spend more money on students who need more help getting to high standards
It is an interesting collection of recommendations, which Tucker and NCEE offer based on observing what other countries have done to improve their educational offerings.  But it raises an important question — are these reforms that the federal government should be leading, or reforms that need to be driven by the states?  Can the United States of America really follow the lead of Singapore, a nation no larger than Kentucky?
Yes, it is important we focus on educator effectiveness.  That starts with getting the best individuals into our teacher training programs and continues with ensuring schools are able to recruit, retain, and support those truly excellent educators.  And yes, we should pay those teachers better, but only after we have developed teacher evaluation systems focused on student achievement measures.
And you will get no disagreement from Eduflack on the need to spend more money on the students who need the most help.  The time has clearly come to overhaul our school finance systems to ensure that scarce tax dollars are going where they are needed the most.  We shouldn’t be funding schools based simply on an historical perspective, doing what we do because it worked a few decades ago.  We need to fund our schools in real time, ensuring that all schools — be they traditional public, magnet, technical, or charter — are treated fairly and equitably when it comes to funding formulas and per-pupil expenditures.
But eliminate testing?  While I like Tucker’s idea of three national exams that identify student performance at the end of elementary school, 10th grade, and 12th grade, do we really believe that is enough?  Is one test between kindergarten and high school really sufficient, particularly when we know a third of our elementary school students are reading below grade level and the real trouble spot for our schools is the middle school years?  
Instead of cutting back on the number of tests, we should first look to use our testing data more effectively.  Empower teachers with formative and summative assessment data to tailor their instructional approaches to meet student needs.  Let the data guide what happens in the classroom.  We need to change the mindset that the test is the end product.  It needs to be the starting line, providing educators with a strong diagnosis for how to proceed with the work at hand for a given school year.
That’s how we can save American education.  Data-driven decision making.  Evidence-based instruction.  By better understanding and applying the research, we have the power to focus on effective teachers, get resources where they are most needed, and actually improve student achievement.  Without it, we will just continue to feel our way in the dark.

Applauding Public School Successes and Progress

In education reform, it is often easy to focus on the negative.  A third of all kids are not reading proficiently in third grade.  It is no coincidence that the high school dropout rate is also about a third.  We have stagnant test scores, even as state standards have been lowered.  We are slipping in international comparisons.  And even the U.S. Secretary of Education says four in five public schools in our nation are likely not making adequate yearly progress.

But today I am here to praise some of our public schools, not bury them.  In schools across the nation, educators are recognizing there are serious problems and there are real, productive solutions for addressing those problems.  And in those schools and those communities that are fortunate enough to have superintendents, principals, teachers, and other educators enacting those solutions, the kids are reaping the benefits.
Today’s case in point is up in the Nutmeg State.  Yes, Connecticut has the largest achievement gaps in the nation.  But we are seeing pockets of success and progress in elementary, middle, and even a few high schools across the state. 
Today, ConnCAN (or the Connecticut Coalition for Achievement Now) released its annual report cards on the state’s public schools.  For the last six years, ConnCAN has provided a simple, yet effective, report card for grading every school and every school district in the state.  Using state test scores, ConnCAN ranks all public schools on how they are doing with regard to four measures — 1) overall performance, 2) student subgroup performance (low-income, African-American, and Hispanic), 3) performance gains, and 4) achievement gap.  Each school receives both a ranking (relative performance) and a letter grade (absolute performance).  The complete set of 2011 ConnCAN report cards can be found here.

In addition to scoring more than 1,000 schools this way, ConnCAN also provides a list of Top 10 schools (elementary, middle, and high school) based on many of the above measures.  And to top it off, the not-for-profit offers up a list of 2011 Success Story Schools.  Each of these Success Story Schools serves a student population that is at least 75 percent low-income and/or minority.  And in each of these schools, at least one subgroup (low-income, African-American, or Hispanic) outperforms the overall average for the state at that school level (elementary, middle, or high school).
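To make that screen concrete, here is a minimal sketch of how the Success Story test described above could be applied to school-level data.  The field names, statewide averages, and sample figures are hypothetical placeholders for illustration only; they are not ConnCAN's actual data, methodology, or code.

```python
# Hypothetical illustration of the Success Story Schools screen described above.
# All field names and numbers are invented; this is not ConnCAN's methodology or data.

STATE_AVERAGES = {  # assumed statewide proficiency averages by school level
    "elementary": 68.0,
    "middle": 64.0,
    "high": 61.0,
}

SUBGROUPS = ("low_income", "african_american", "hispanic")

def is_success_story(school: dict) -> bool:
    """A school qualifies if (1) at least 75 percent of its students are
    low-income and/or minority, and (2) at least one subgroup outperforms
    the statewide average for that school level."""
    if school["pct_low_income_or_minority"] < 75.0:
        return False
    state_avg = STATE_AVERAGES[school["level"]]
    return any(
        school["subgroup_proficiency"].get(group, 0.0) > state_avg
        for group in SUBGROUPS
    )

# Example with invented numbers, in the spirit of the schools cited below.
example = {
    "name": "Hypothetical Elementary",
    "level": "elementary",
    "pct_low_income_or_minority": 82.0,
    "subgroup_proficiency": {"low_income": 71.5, "hispanic": 66.0},
}
print(is_success_story(example))  # True: low-income proficiency (71.5) tops the assumed state average (68.0)
```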
While the staff of ConnCAN deserves real credit for undertaking this effort each year, the intent of this missive is not a self-congratulatory pat on the back.  No, the purpose is to put the spotlight and the plaudits where they belong — on those schools that are making real progress, particularly when it comes to addressing the achievement gaps.
So here’s to the Worthington Hooker School in New Haven, where 86 percent of low-income students are at or above goal.  To Jefferson Elementary in Norwalk, where 67.5 percent of African-American students are at or above goal.  To the Mead School in Ansonia and the Ralph M.T. Johnson School in Bethel, both of which have more than 80 percent of their Hispanic students at or above goal.  And to the Annie Fisher STEM Magnet School and Breakthrough 2, both in Hartford, and Fair Haven School in New Haven, all three of which posted improvement in excess of 20 percentage points from last year.
These — and all of the others on ConnCAN’s 2011 Top 10 and Success Story Schools Lists — are examples of what is possible.  They signal that change, while difficult, can happen.  They show that all students — regardless of race, family income, or zip code — can have access to great schools.  And they demonstrate the power and impact truly great educators can have on the achievement of our young people.
These schools also teach us there is no one solution, no one magic bullet, and no one enchanted elixir for improving our schools.  It takes hard work.  It demands commitment.  It requires a true student focus.  And it calls for learning from and modeling after schools like those recognized by ConnCAN on this year’s lists.
So congratulations to those public schools on ConnCAN’s Top 10 and Success Story Schools Lists and to other public schools posting similar progress in other states across the country.  Kudos to those administrators, teachers, and staff who are making it happen.  And applause to those students and their families who are making clear that terms like dropout factories and achievement gaps can become nothing more than urban legend.
(Full disclosure, Eduflack not only works with ConnCAN, but he also runs the organization.)


Some Nutmeg on the NAEP

Last week, the U.S. Department of Education released the latest round of NAEP scores, offering the most recent snapshot on how our nation’s students are doing when it comes to reading and math.  The results were downright depressing, with the majority of kids still failing to post proficient scores and the achievement gaps growing in far too many areas.

National Journal is running its weekly blog on those very same NAEP results.  You can check out Eduflack’s post on the scores and their impact in Connecticut in particular; if these latest scores don’t signify an urgent call to action, I don’t know what will.
We often think of Connecticut, the Nutmeg State, as the land of plentiful budgets and bountiful student success.  But the numbers paint a vastly different picture.  While Connecticut is indeed in the top 10 when it comes to per-pupil expenditure, it is tops when it comes to achievement gaps.  From my National Journal post:
For those looking to strap on the pom-poms for number one rankings, Connecticut did score first in seven of the 16 disaggregated categories. Of course, that’s a first place for largest gaps. And we’re in the top 10 for every single one of those 16.

As always, this week’s debate is worth checking out, as are the actual reports, breakdowns and official government statements on the 2011 Nation’s Report Cards on reading and math, as released by the National Assessment Governing Board.

Saving Our Schools?

Most of those who read the education blogosphere or follow the myriad of edu-tweeters know that this weekend is the “Save Our Schools” rally in Washington, DC.  On Saturday, teachers, parents, and concerned citizens will gather on the Ellipse.  They are encouraged to “arrive early to enjoy performances, art, and more!” and they are slated to hear from Diane Ravitch, Jonathan Kozol, Jose Vilson, Deborah Meier, Monty Neill, and “other speakers, musicians, performance poets, and more.”  This collection “will encourage, educate, and support this movement.”

For weeks now, we’ve seen the media-savvy folks in the Save Our Schools clique use their blogs and Twitter feeds to promote the rally.  Ravitch has been touting it since its inception.  Teacher Ken has written about it on multiple blog platforms.  And Nancy Flanagan has used her perch at Education Week to tout the event, its justification, and its potential significance.
As I’ve written about many times before, successful public engagement is about far more than simply “informing” people on an issue.  Sharing information, as the slated speakers intend to do, is the easiest component of public engagement.  The hard work is affecting outcomes.  How do you move from informing at a rally to building measurable commitment to a specific solution?  How do you mobilize around that specific solution?  And ultimately, how do you successfully change both thinking and action related to the issue?
To that end, rather than rehash the points and counterpoints that have been going back and forth, Eduflack simply has a few questions to ask:
* What is the expected turnout for the event?  Noting the “RSVP” function, how many actual attendees will be considered a success?  And how many physical bodies would be considered a failure?
* Will Save Our Schools disclose its funders?
* What are the tangible outcomes coming from the principles?  Does equitable funding mean moving more dollars into failing schools, or can it mean a new formula where funding follows the student?  Where do the dollars for all of the “full funding” come from?  What specific “multiple and varied assessments” are “demanded?”  What exactly do you propose for curriculum development (recognizing the bullets under the principle of curriculum seem to have little to do with actual curriculum development)?
* How does a weekend of speeches, music, and art “draw sustained attention to the critical issues?”
* And why are you following the kiss of death for many recent education movements, opening a “Save Our Schools” store?
I’m all for people having a good, fun time during these hot and humid summer days in our nation’s capital.  But if you are serious about school improvement (setting aside whether SOS’ agenda can be considered “improvement”), you need to offer a little more than arts and crafts.  Set an agenda.  Publicly disclose intentions.  Establish clear, measurable goals and report back on progress.  Allow the same public you are appealing to now to hold you accountable a year from now.  Without that, it is just another fun day in the sun, with chants of “go schools!” between games of ultimate frisbee.  And that gets us no closer to improving student achievement and potential for success.
 

Injecting Tech Into Assessment

As we all well know, last year the U.S. Department of Education awarded $350 million to develop new assessments to go with our Common Core State Standards.  Those assessment consortia — the Partnership for Assessment of Readiness for College and Careers (PARCC) and the SMARTER Balanced Assessment Consortium (SBAC) — have been working to develop the tests that will measure student performance against the new common standards.

Since the beginning of the consortia effort, questions have been raised.  Recently, many have asked about the progress of the consortia, wondering if they will be able to deliver tests to states for implementation in 2014.  But queries about technology have existed since before the feds even cut the checks, with initial hypotheses (since proven incorrect) saying that PARCC wasn’t even interested in the adoption of new technologies in its assessment model.
To help focus on the issues of technology and CCSS assessment, the State Educational Technology Directors Association (SETDA) recently released Technology Requirements for Large-Scale Computer-Based and Online Assessment: Current Status and Issues, a discussion draft report currently available on www.assess4ed.net, a new online community supported by the U.S. Department of Education to explore RttT assessment issues.  
Among the issues posed by SETDA in the discussion draft:
* Striking the right balance in specifying technology requirements, while recognizing the heterogeneity of the technology in use in schools today and tomorrow;
* The specifications for test administration – including especially the length of the testing window – may have the single greatest impact on school technology readiness for computer-based and online assessment;
* Coordinating technology requirements, management, and related costs for assessment with other educational technology investments;
* Employing IT industry best practices to extract cost-savings via the shift to computer-based and online assessment;
* Creating processes and plans to both take advantage of future technology innovations and to take out of service obsolete technology;
* Architecting a system that can accommodate the trend away from seat time requirements and toward increasing online and blended (part-online, part face-to-face settings) enrollments;
* Striking and maintaining the right balance between comparability and validity in implementing next generation assessment systems;
* Providing meaningful opportunities for students and teachers to become comfortable with the assessment technology prior to implementation; and
* Coordinating work with state and district technology leadership.
Without question, Eduflack applauds SETDA for asking the right questions and pointing to the right issues when it comes to technology and the next generation of student assessments.  And the report is particularly useful in providing a series of charts and graphs on both CCSS and the states themselves.
As this Technology Requirements report was issued as a draft for review and comment, I just can’t miss the opportunity to provide two comments (additions, really) for the authors to consider:
* In addition to providing meaningful opportunities for students and teachers to become comfortable with the assessment technology, there is a real opportunity to position the ed tech standards (NETS) established by the International Society for Technology in Education (ISTE) as a key component for linking technology, assessment, instruction, and learning. 
* While online assessments are important, they really only get us half of the way to our destination.  If we are serious about deploying meaningful tests that will serve our states and districts for decades to come, we must look at exams that are both online and adaptive.  Adaptive testing technologies are advancing rapidly.  Some states, particularly those in SBAC, are already using online adaptive technologies to build a better testing mousetrap.  We need to learn from those states, constructing for the future of testing, not for its past.
Now is the time to speak up, folks.  SETDA has put a valuable and intriguing marker down on the discussion of technology and assessment.  Contribute to the discussion, both through the draft report and through www.assess4ed.net.  These are important discussions.  Speak now or forever hold your peace.

Cheatin’ on Peach Tree Street

The big edu-news of the week has to be the ever-evolving cheating scandal down in Atlanta.  The allegations had already brought down a superintendent of the year, one who was once rumored to be on the short list for U.S. Secretary of Education.  The report released by the Georgia governor notes cheating in 80 percent of the schools reviewed, with 178 teachers and 38 principals named in the scheme.  The Atlanta Journal-Constitution has the full story here.

Critics are quick to use this scandal to condemn testing and accountability in general, stating that our high-stakes, AYP era made these educators act the way they did.  They had no choice.  With high expectations, they had to use any means necessary to demonstrate student proficiency.  If that meant erasing a number of bubbles in the name of APS’ reputation, then so be it.
And it isn’t like this is the only incident of district-wide cheating we’ve heard of in recent years.  There is the current investigation in Baltimore.  And who can forget the huge exposé that USA Today did on potential cheating in Washington, DC.
There is a difference, though, beyond the scale of the allegations.  In DC and Baltimore, folks were quick to condemn the leadership for taking shortcuts.  And we were quick to remind people that those districts were headed by upstart “reformers” looking to change the way we teach.  So in their quest to demonstrate their model works, of course they would do whatever it took to post student gains, right?
But Atlanta paints a very different picture.  Superintendent Hall is the very model of a status quo superintendent.  Her tenure in Atlanta surpasses that of just about any current urban superintendent.  She’s part of the old guard, and was regularly put forward as an example that one doesn’t have to blow up the central office and preach reform to generate the sort of student achievement numbers most urban districts only dream about.  So if there is some malfeasance, it must be the devil’s work.  It must be the doing of that dear ol’ Mephistopheles known as NCLB/AYP.
There is never a good reason why a school or district should engage in systematic cheating on assessments.  Even with the best of intentions, such actions only serve to destroy the lives of educators and embarrass the students.  Such actions only undo the good changes and improvements that may be happening in a district.  And such actions only throw more fuel on the fire regarding public perceptions of failing schools and incapable educators.  Instead of everyone winning by some short-term student gains, everyone — particularly the students — loses when details and stories such as these go public.
Yes, we feel better when it is one isolated teacher or school that engages in such behavior, versus an entire district that used rubber gloves to eliminate fingerprints and allegedly handed out cheat-sheet transparencies to make changing answers that much easier.  We don’t want to believe that such actions can be systemic.  Now Atlanta has shown us otherwise.
What comes next?  We are already hearing of potential criminal charges and calls for the denial of pensions and benefits down in Atlanta.  But such measures do little to help those students who were positioned as part of the Atlanta “miracle” only to find they aren’t quite as proficient as they once believed.  The students are the real victims here, and punishing individual teachers does little to make them whole or to fix the underlying issue.  In what will clearly be an “I was just following orders” defense, a few administrators will take the fall, with the rest left to pick up the pieces.
But it raises an important question — what if all of that time and effort had been put into actually teaching the students?  What if, instead of the “changing parties,” educators had used that time for additional tutoring or instruction for the students?
Then again, Atlanta could have always done what so many other states and districts did during the NCLB era — just lower its standards.  It is much easier to just lower the bar, year after year, than to look for ways to enhance performance through answer-changing methods.  I guess lowering the bar is just so 2005.
 

Pencils, Bubble Sheets, and Erasures

After yet another investigation into alleged cheating on DC Public Schools’ student achievement tests, DCPS officials yesterday announced that they were tossing out the standardized test scores for three classrooms.  If one reads between the lines, it appears that the current action was based on allegations that someone altered the beloved bubble tests after the students took the exam.

This follows on the heels of similar allegations in Atlanta last year, which forced the resignation of long-time Atlanta Public Schools Superintendent Beverly Hall.  And, of course, this isn’t the first time that DCPS has investigated alleged altering of the bubble sheets on its exams.  The same charges were levied just a few years ago.
For the past few years, we have heard EdSec Arne Duncan rail against the dreaded “bubble test.”  And while the good EdSec may be taking issue with such exams for a very different reason, he is correct.  The days of No. 2 pencils and scanned bubble sheets should be over.
With a growing chorus of opposition to bubble tests, with allegations of cheating on said tests on the rise, and with those pencil-and-scan-sheet exams viewed as a general enemy of the educational process, some essential questions arise.  Why aren’t we testing through other means?  In our 21st century learning environment, why do we still use 19th century testing approaches?  Can we build a better testing mousetrap?
Those first two questions are typically answered with the usual responses.  Change is more difficult than the status quo.  We fear the new.  If it isn’t truly broken, why try to fix it?  It costs too much, either in dollars or in stakeholder chits.  We don’t know enough yet (maybe we can form a committee to explore).  It just isn’t a high enough priority.
As for the last question, though, we have already built a better mousetrap.  A few states have begun using online adaptive testing, demonstrating promising practice (on its way to best practice).  The gold standard, at this point, is Oregon’s OAKS Online, or the Oregon Assessment of Knowledge and Skills.  Following on its heels are similar online adaptive assessment systems in Hawaii and Delaware.  And with a $176 million grant from the U.S. Department of Education, the SMARTER Balanced Assessment Consortium (led by the State of Washington) is looking to develop a similar assessment framework to measure the K-12 Common Core State Standards.
Why these new systems?  To the point, they seem to assess student achievement and learning faster and better than ye olde bubble sheets, at a lower cost to the states.  From a practical point of view, they hopefully bring testing up to speed with instruction and learning.  If we are serious about a 21st century education for all, it only makes sense that we would couple that with 21st century assessment.  And that just isn’t done with a stick of wood and some graphite.
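For readers wondering what separates an adaptive exam from a fixed bubble-sheet form, here is a heavily simplified sketch of the core idea: the test adjusts the difficulty of each successive question based on how the student has answered so far.  This is an illustrative toy with invented item difficulties and a crude scoring rule; it is not the engine behind OAKS Online, the Hawaii or Delaware systems, or anything SBAC is building.

```python
# Toy sketch of computer-adaptive item selection. Real systems rely on far more
# sophisticated psychometric models; this only illustrates the basic loop.

# Hypothetical item bank: entries are (item_id, difficulty on an arbitrary 1-10 scale).
ITEM_BANK = list(enumerate([2, 3, 3, 4, 5, 5, 6, 6, 7, 8, 9], start=1))

def administer_adaptive_test(answer_item, num_items=5, start_ability=5.0):
    """Run a short adaptive test. `answer_item` is a callback that returns
    True if the student answers the given item correctly."""
    ability = start_ability
    used = set()
    for _ in range(num_items):
        # Pick the unused item whose difficulty is closest to the current estimate.
        item_id, difficulty = min(
            (item for item in ITEM_BANK if item[0] not in used),
            key=lambda item: abs(item[1] - ability),
        )
        used.add(item_id)
        if answer_item(item_id, difficulty):
            ability += 0.7  # correct: raise the estimate, serve harder items next
        else:
            ability -= 0.7  # incorrect: lower the estimate, serve easier items next
    return ability

# Simulate a student who reliably answers items at or below difficulty 6.
simulated_student = lambda item_id, difficulty: difficulty <= 6
print(administer_adaptive_test(simulated_student))  # prints an ability estimate near the simulated student's level
```

The practical appeal for schools is that each response immediately informs both the next question and the running score, so results can be reported as soon as the student finishes rather than weeks after the bubble sheets are scanned.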
So in looking at alleged issues in DC, Atlanta, and elsewhere, the last questions we should be asking are how to avoid erasures on tests or how best to detect systematic changes on bubble sheets.  Instead, we should be asking why we aren’t using a more effective testing system in the first place, a system that better aligns with both where we are headed on instruction and how today’s — and tomorrow’s — students actually learn.
* Full disclosure — Eduflack does work related to the assessment efforts in Oregon, Hawaii, and Delaware.