
Sunday, November 12, 2017

Building a Better Teacher Through VAMs? Not So Fast According to Mark Paige's Book

As a part of my research explorations, I stumbled across a relatively new book, published in 2016, about the problems with using value-added measures in teacher evaluations. The book, entitled Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation, is short and concise, and any administrator who currently encounters value-added data in teacher evaluations should read it.

Paige's argument is rather straightforward: value-added models have statistical flaws, are highly problematic, and should not be used to make high-stakes decisions about educators. Scholars across the board have made clear that there are problems with VAMs, enough problems that they should be used only in research and only to draw cautious conclusions about teaching. Later, Paige also provides advice to opponents of using value-added models in teacher evaluation. Attempting to challenge the use of value-added models in teacher evaluations through the federal courts may be fruitless. According to Paige:
"At least at the federal level, courts will tolerate an unfair law, so long as it may be constitutional." (p. 24)
In other words, our courts will allow the use of VAMs in teacher evaluations, even if used unfairly. Instead, Paige encourages action on the legislative side. Educator opponents of VAMs should inform legislators of the many issues with the statistical measures and push for laws that restrict their use. In states with teacher unions, he encourages teachers to use the collective bargaining process to ensure that VAMs are not used unwisely.

Throughout Paige's short read, there are reviews of legal cases that have developed around the use of VAMs to determine teacher effectiveness and lots of information about the negative consequences of this practice.

Here are some key points from chapter 1 of Mark Paige's book Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation.

  • VAMs are statistical models that attempt to estimate a teacher's contribution to student achievement.
  • There are at least six different VAMs, each with relative strengths and weaknesses.
  • VAMs rely heavily on standardized tests to assess student achievement.
  • VAMs have been criticized on a number of grounds as offending various statistical principles that ensure accuracy. Scholars have noted that VAMs are biased and unstable, for example.
  • VAMs originated in the field of economics as a means to improve efficiency and productivity.
  • The American Statistical Association has cautioned against using VAMs in making causal conclusions between a teacher's instruction and a student's achievement as measured on standardized tests.
  • VAMs raise numerous nontechnical issues that are potentially problematic to the health of a school or its learning climate. These include the narrowing of curriculum offerings and a negative impact on workforce morale.
Throughout his book, Paige offers numerous key points that should allow one to pause and interrogate the practice of using VAMs to determine teacher effectiveness.


Using VAMs to Determine Teacher Effectiveness: Turning Schools into Test Result Production Factories

"But VAMs have fatal shortcomings. The chief complaint: they are statistically flawed. VAMs are unreliable, producing a wide range of ratings for the same teacher. VAMs do not provide any information about what instructional practices lead to particular results. This complicates efforts to improve teacher quality; many teachers and administrators are left wondering how and why their performance shifted so drastically, yet their teaching methods remained the same." Mark Paige, Building a Better Teacher: Understanding Value-Added Models in the Law of Teacher Evaluation
Mark Paige's book is a quick, accessible overview of the problems with using value-added models as a part of teacher evaluations. As he points out, the statistical flaws are a fatal shortcoming when it comes to using them to definitively settle whether a teacher is effective. In his book, he points to two examples of teachers whose ratings fluctuated widely. When a teacher swings from "most effective" to "not effective" within a single year, especially when that teacher used the same methods with similar students, that should give us pause and invite interrogation.
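Paige's point about unreliability can be illustrated with a toy simulation (my own hypothetical illustration, not a model from the book): even when a teacher's true effect never changes, noise from small class samples, test error, and student mix can swing the year-to-year estimate, and with it the rating.

```python
import random

random.seed(1)  # fixed seed so the illustration is reproducible

def yearly_vam_estimate(true_effect, noise_sd=1.0):
    """One year's VAM estimate: the teacher's true effect plus noise
    representing everything the teacher does not control."""
    return true_effect + random.gauss(0, noise_sd)

true_effect = 0.0  # an exactly average teacher, year after year
estimates = [yearly_vam_estimate(true_effect) for _ in range(5)]

for year, est in enumerate(estimates, start=1):
    # arbitrary cutoffs, just to show how a label can flip with the noise
    label = ("effective" if est > 0.5
             else "ineffective" if est < -0.5
             else "average")
    print(f"Year {year}: estimate {est:+.2f} -> rated {label}")
```

Run this with different seeds and the very same teacher can land in any category; that is Paige's "wide range of ratings for the same teacher" complaint in miniature.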

Now, VAM proponents would immediately diagnose the situation thus: "It is rather obvious that the teacher did not meet the needs of students where they are." What is wrong with the logic of this argument? On the surface, arguing that the teacher failed to "differentiate" makes sense. But if there exist "universal teaching methods and strategies" that foster student learning no matter the context, then what would explain the difference? The real danger of using VAMs under the logic of "differentiation" is that it invalidates the idea that there are universal, research-based practices to which teachers can turn to improve student outcomes. What's worse, teaching becomes a game of pursuit every single year, where the teacher seeks out, not necessarily the best methods for producing learning of value, but becomes, in effect, a chaser of test results. Ultimately, the school becomes a place where teachers are simply production workers whose job is to produce acceptable test results, in this case, acceptable VAM results.

The American Statistical Association has made it clear: VAMs do not establish causation. They measure correlation. To conclude that "what the teacher did" is the sole cause of test results is to ignore a whole world of other possibilities and factors that have a hand in causing those results. Administrators should be open to the possibility that VAMs do not definitively determine a teacher's effectiveness.

If we continue down the path of using test score results to determine the validity and effectiveness of every practice, every policy, and everything we do in our buildings, we will turn our schools into factories whose sole purpose is to produce test scores. I certainly hope we are prepared to accept the lifelong consequences of such decisions.


NOTE: This post is part of a continuing series of posts about the practice of using value-added measures to determine teacher effectiveness, based on my recently completed dissertation research. I make no effort to hide the fact that I think using VAMs to determine the effectiveness of schools, teachers, and educators is poor, misinformed practice. There is enough research out there to indicate that VAMs are flawed, and that their application in evaluation systems has serious consequences.

Saturday, June 14, 2014

Value-Added Measures and 'Consulting Chicken Entrails' for High-Stakes Decision-Making

“Like the magician who consults a chicken's entrails, many organizational decision makers insist that the facts and figures be examined before a policy decision is made, even though the statistics provide unreliable guides as to what is likely to happen in the future.” Gareth Morgan, Images of Organization: The Executive Edition

Could it be that using value-added data is the equivalent of consulting "chicken entrails" before making certain high-stakes decisions? With all the voodoo, wizardry, and hidden computations that educators are just supposed to accept on faith from companies crunching the data, value-added data might as well be "chicken entrails," and the "Wizards of VAM" might as well be high priests or magicians reading those innards, declaring effectiveness, and telling fortunes. The problem, though, is that value-added measures are prone to mistakes, despite those who say "it's the best we have." Such reasoning itself smells of simply accepting the imperfections. One need only hold one's nose and take the medicine.

What President Obama, Arne Duncan, and our own North Carolina state education leaders do not get is that value-added measures simply are not transparent. If you read any of the current literature on these statistical models, you immediately see many, many imperfections. There are certainly enough errors of concern to argue that VAMs have zero place in making high-stakes decisions.

As the “Wizards of VAM” prepare to do their number crunching and “entrails reading” in North Carolina, we await their prognostications and declarations of “are we effective or ineffective?” Let’s hope it doesn’t smell too bad.

Wednesday, May 28, 2014

What Should Teacher Evaluations Look Like? Ideas to Consider

Recently, I received an anonymous comment on my blog post entitled "Merit Pay in Education: An Exercise in Manipulation and Futility." I have purposefully chosen not to publish anonymous comments on this blog because I believe firmly that if you have something to say, then you should be willing to divulge your identity. Part of your message is who you are, and hiding your identity is actually hiding part of your message. At any rate, the commenter, who calls him or herself "Engaged Parent," seemed to practically dare me to publish his or her comment. I won't publish it as a comment, but I will share it here with my own commentary because there are some fundamental misconceptions apparent in it. Here's "Engaged Parent's" comment, then I will respond.

Even though you're going to NOT publish this comment I think it's worth sending it to you anyway. A teacher you are yes? I'm going to assume that.

Over and over again I see the same comments on how we can measure the success of education. It's great that we keep hearing how a merit program doesn't work but what's the alternative? It would be nice (for once) to hear a teacher tell us how a "bad" teacher can get filtered out of the system. I'm sure you will agree that not all teachers are good, there are always some bad apples in the bushel, no way around it. 
I agree, test scores are not always a good judge of how a teacher is doing but it does or it at least should give key indicators to a teacher that maybe what they're doing isn't the best for that particular mix of kids and that another approach might work better. But do they do that? I don't know.
Why are teachers the only ones who don't have to be put under the scrutiny of evaluation? The rest of the world has to go through it.

What I really want to say is I think that a happy medium could be met if careful thought was put in to it.
For instance:
10 % what did your students think of you?
15% what did your students parents think of you?
25% test scores
10% peer scores
15% self score
25% principal scores
Is that not a fair assessment? If not then come up with SOMETHING because personally I'm sick of hearing how teachers don't want to be assessed.

First of all, let me say that yes, I am a teacher, and I don't think I've hidden that fact anywhere on this blog. You can find that information on the blog sidebar. I am currently a principal. As a teacher, though, I am more concerned with the logic and misconceptions in your comment than anything else. First, you seem to suggest that merit pay should be implemented because we have no alternatives. Following this argument means that the rationale for implementing merit pay lies not in whether it will work, but in the fact that we have nothing else. That in itself is faulty reasoning. I certainly hope my physician doesn't implement a treatment, or my mechanic initiate a car repair, simply because he or she says, "What's the alternative?" without considering the evidence of symptoms and lab results that identify the problem. You are hearing criticisms of performance pay because it has been researched and has failed to bring about the improvement sought, which is increased student learning. It has been tried in public education, even in North Carolina, with no appreciable effect on the quality of education.

Your next assertion seems to be thinly veiled when you state, "It would be nice (for once) to hear a teacher tell us how a 'bad' teacher can get filtered out of the system." It appears you believe that "bad" teachers never get dismissed. I would agree with you that not all teachers are good, by which I gather you really mean effective at helping students learn, among the many other things teachers do. But contrary to your belief, they do "get filtered" out of the system, at least in my experience. So your belief that teachers who don't do their jobs very well are somehow not subject to dismissal is another misconception. Teachers can be dismissed, but with "due process," which means that I as an administrator must thoroughly document my rationale for doing so. Many administrators see "due process," or tenure as many call it, as an obstacle, but due process rights were put in place because historically, our education system was notorious for political firings and reprimands. Teaching historically has been quite political, with school board members or even administrators firing teachers for noxious reasons. Teachers have been fired for being pregnant, or even so that a school board member could then hire a son or nephew. At any rate, it is a misconception that teachers who aren't doing their jobs can't be fired. It simply takes leadership and a willingness to first try to help that teacher improve, then take the steps necessary to counsel them into another profession.

Still another misconception from your comment is that teachers aren't under the "scrutiny of evaluations." I have been an educator in North Carolina for 25 years, and I have always been subject to evaluations, so the idea that teachers aren't evaluated just isn't true. Teachers in my state have been evaluated for years, and some are dismissed as a result of those evaluations. The criticism you hear of current evaluation systems does not come from a desire to avoid evaluations; it comes from a desire to have those evaluations be fair. You yourself acknowledge that test scores alone are not always a "good judge" of a teacher, but there are much deeper issues with using test scores as a part of evaluation, such as the fact that more than 70 percent of courses in a high school do not have these state tests. Teachers I know aren't trying to avoid being evaluated, as you suggest; they simply want those evaluations to be fair and just, as any employee in any line of work would.

Now, let's take a look at the evaluation system you suggest. You provide several interesting sources of evidence for teacher evaluations. First, you would base 10% of the evaluation on "what students thought of the teacher." I hope you aren't suggesting that students rate the teacher as a person. I suspect you are really suggesting that students be surveyed on what kind of job they think their teachers are doing. This is reasonable in some ways, and I'll agree with you there. But careful attention obviously has to be paid to the survey instrument so that its questions get at the heart of instructional prowess and not opinions about the teacher personally. The question then becomes: what do students know about a teacher's instructional ability? The answer to that question could be used as a basis for survey questions. Our school has used student surveys for over five years, and one quite common problem is that the responses can sometimes be about anything other than the teaching. The data is useful, however, because it can help us make changes where there are genuine complaints and issues.

The second source of evidence for teacher evaluation you suggest is "what the parents think of their teachers." Again, that seems reasonable, if what you are suggesting is a parent survey that focuses on a teacher's ability to teach. For obvious reasons, the survey would need to move beyond simply asking the parent what they thought of the teacher. But there are some issues that would need to be ironed out. For example, few parents witness the actual teaching that goes on in the classroom. They get most of their information about what happens there secondhand, and I can tell you as an administrator that quite often that secondhand information isn't entirely accurate. A student often goes home and tells a parent one story about something that happened in the classroom, or to them, or that a teacher did. That parent then calls the principal, clearly angry, until they hear what really happened. Using parent opinions about teacher practice would be especially difficult since parents do not witness teaching in action; they have to rely on hearsay, which as we know is unacceptable in courtrooms. Parent surveys could focus, however, on the parts of teaching that parents do directly witness, such as teacher-to-home communication. At any rate, if parent surveys were to be used in teacher evaluations, they would need to be much more than simply asking a parent what they thought of his or her child's teacher.

The third source of evidence you suggest for teacher evaluations is test scores. That also seems reasonable until you try to implement the practice, which we've done in North Carolina. The issues are many. Some of the tests are inferior and of questionable quality. Then there's the fact that not all subjects are tested, which leads to questions like: Do you evaluate only those in tested areas, or do you develop and administer tests in every single subject area, which is quite costly in terms of time and money? In addition, some tests are designed in ways that make them poor choices for use in evaluating teachers. Tests like the ACT and SAT were created for an entirely different purpose than measuring teacher quality. Additionally, tests designed for assessing student achievement have never been proved valid for assessing teacher quality. Other issues with test scores used as part of teacher evaluations? How do you separate the effects of previous years' teachers from this year's teacher's scores? The list of issues with using tests as evidence of teacher effectiveness is lengthy; that's why teachers and I question the practice. "A test is a test is a test" is how much of the world sees it, but most of us who've been in education know tests aren't as simple as we wish they were.

The fourth source of evidence for teacher evaluations that you suggest is "peer scores," by which I suspect you actually mean peer ratings. Peer ratings seem to make sense too; after all, who would know better than a teacher's peers whether or not they are teaching effectively? But the problem with this one is similar to the parent one: few teachers actually know how effectively a colleague is teaching, unless of course you set up a system of peer observation. That would work, but it poses logistical problems, such as finding the time and means for teachers to observe each other in action. Currently, we use peer observations as a part of our evaluation process, so your suggestion isn't far from current reality. There is one additional issue with peer ratings, though: teachers are often very reluctant to rate their peers honestly because quite often they have to work with these individuals, and they are often their friends. You have to be pretty naive about human nature to see peer ratings working very well.

The fifth source of evidence for teacher evaluations you suggest is "self scores," by which I suspect you mean self ratings. Believe it or not, we already ask our teachers to do "self-assessments." While these aren't directly connected to a teacher's final rating, they are used as a part of the evaluation process early in the year. Teachers use them to look honestly at where they stand on our teacher evaluation instrument, and many use them as a basis for their professional development plans. Of course, like any self-assessment, it's only as useful as the amount of honesty employed in its completion. The main problem with self-ratings is that if high stakes are tied to them in any way, how noble do you think someone would be in giving themselves a poor rating if it could impact their job status?

Your final source of evidence for teacher evaluations is "principal scores," which I take to mean principal ratings. In North Carolina, this is current practice. Principals complete a summative evaluation of ratings on teachers at the conclusion of every year, which is in turn used to suggest professional development for the next year. Principal ratings are certainly not without issues as well. Most of us have worked for bosses who were tyrants and completely incapable of making fair and impartial judgments. Because of this danger alone, principal ratings have issues too.
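Taken together, the commenter's proposed percentages amount to a simple weighted average. A minimal sketch (the weights are the commenter's; the component names and example scores are invented for illustration):

```python
# Hypothetical illustration of "Engaged Parent's" proposed weights as a
# weighted average. Only the weights come from the comment.
WEIGHTS = {
    "student_survey": 0.10,
    "parent_survey": 0.15,
    "test_scores": 0.25,
    "peer_rating": 0.10,
    "self_rating": 0.15,
    "principal_rating": 0.25,
}
assert abs(sum(WEIGHTS.values()) - 1.0) < 1e-9  # weights must total 100%

def composite_rating(scores):
    """Combine component scores (each on a 0-100 scale) into one rating."""
    return sum(WEIGHTS[name] * scores[name] for name in WEIGHTS)

# Invented example: a teacher rated well by people, less well by tests.
example = {
    "student_survey": 80, "parent_survey": 90, "test_scores": 60,
    "peer_rating": 85, "self_rating": 95, "principal_rating": 75,
}
print(composite_rating(example))  # prints 78.0
```

Notice how the single composite number hides which source drove it: in this invented example, a weak test-score component quietly drags down a teacher who scored well with students, parents, peers, and themselves.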

I would say, "Engaged Parent," that you do get one principle right in your evaluation suggestion: we do need multiple sources of evidence for teacher evaluations. The issue is simply deciding which ones will effectively improve instruction for our students. If we choose the incorrect evidence, then what we set out to do, which was improve student learning, never happens. We can't afford to get teacher evaluation wrong, because in the end our children will suffer.

So, "Engaged Parent," we do have evaluations of teachers. We also use much of the evidence you suggest above, and there's talk of adding other sources as well. Teachers in North Carolina, as a rule, do not complain about being evaluated. Like any employee, they do complain about being evaluated unfairly, and that is a great deal of the criticism you're hearing. I would suggest that since you have a great interest in teacher evaluations, you might want to read a book called Evaluating America's Teachers: Mission Possible? by W. James Popham. Popham goes to great lengths to argue what a good, fair, and valid teacher evaluation would look like. He even goes through nearly every source of evidence you suggest, points out the problems with each, and suggests how those problems might be alleviated. It's a good read for anyone, educator or non-educator, looking into the evaluation of teachers.

Monday, May 26, 2014

W. James Popham's Sarcastic Shot at Teacher Evaluation Reform

I don't usually post a video on its own on my blog, but W. James Popham's irreverent look at the teacher evaluation reforms going on in the United States simply speaks for itself. His mock infomercial here is almost believable in the climate of Race to the Top and the NCLB waivers.


Saturday, May 24, 2014

Can Non-Teaching Administrators Effectively Evaluate Teachers?

The argument about whether school administrators who haven't been classroom teachers can effectively evaluate teachers is never-ending. At the heart of that argument is actually whether teaching is a professional skill requiring great expertise, or whether, as some business-minded people seem to think, it's a skill anyone can do with as little training as possible. But can an administrator who has never faced the day-to-day management of a classroom full of students adequately pass judgment on the quality of a teacher? I would say probably not. Before the barrage of emails starts, let me clarify my reasoning for using the word "probably."

To teachers who think I should have used the words "definitely not" instead of "probably not," let me say that I don't think there are absolutes here. There are certainly administrators out there who have enough intuitive understanding of what "good teaching" looks like that they might be able to do an adequate job of evaluating teachers. But, and this is important, I think that is by far the exception. 

In the United States, there is a prevalent belief that a successful businessman is somehow capable of "super-heroic" feats and should be respected as capable of doing anything, from being a great leader to being a politician. This is evident in how often political advertisements proudly display the title "Successful Businessman," as if that somehow automatically qualifies the candidate for office. An offshoot of this belief is the idea that a person who has managed a thousand-plus-employee corporation can somehow step into a school and manage it just as well. The same thinking seems to apply to military leaders. They too are revered by some and seen as fully capable of managing schools and districts simply due to their established leadership abilities, and this includes the evaluation of educators. But I submit to you that neither business leaders nor military leaders are always capable of managing educational organizations. They are definitely not always capable of judging the effectiveness of classroom teachers, simply because when they do judge teaching, they automatically see it as a simple task of "imparting knowledge" to students, not as a complex set of experiences and activities designed to make the deeper learning of critical thinking, problem-solving, and creativity possible. Sure, these business leaders are quite often creative and experienced people when it comes to solving problems. But their experience lies within organizations where employees are easily dispensed with, and where defective products can simply be discarded. In the educational enterprise, we can't dispense with students when they somehow don't measure up, like so many defective parts; we have to teach them from where they are. Also, while we could easily fire teachers not quite measuring up, we would then face the problem of finding another teacher in a climate of fewer teachers and fewer young people entering the profession.
In fact, most teachers, and I would add principals, don't fall into the black-and-white categories of "good" and "bad." Most fall in the middle, where evaluation is incapable of making those extreme categorizations. Instead, good administrators are often in the position of working with all teachers to help them move and improve toward the "good" category.

Classrooms are very complex places. Having spent 16 of my 25 years in them, I remember that every single time I enter one to evaluate a teacher. That does not translate into a sympathetic lack of will to complete the evaluation honestly; it simply means I view classroom teaching through a lens of complexity that allows me to see many more of the subtle things that happen as a teacher engages students. It took time for me to develop that lens. I think it fair to say that this lens allows me to better understand teaching and learning, and it makes me a better evaluator.

An administrator who has never been in the classroom lacks this lens and must resort to the only lens he has, one often tinted with a superficial understanding of teaching and learning. That means he grasps for simple evidence of good teaching, like test scores, attendance, and graduation rates, which are numerical in nature and easily placed on a yardstick. He doesn't see teaching through a lens of complexity and understanding. It is for that reason I would say that administrators who have themselves taught for a period of time understand "good teaching" better. Teachers at the extremes of good and bad are certainly easier to recognize, so the non-teaching administrator could easily spot their expertise or lack of skill. It's all those in the middle, who need coaching and support and escape the black-and-white categorization of a superficial understanding of teaching and learning, that pose the problem. A non-teaching administrator is often simply incapable of seeing the complexity of teaching.

Friday, May 2, 2014

Value-Added Measures and Harmful Consequences of Measure & Punish

"The M & P (Measure and Punish) Theory of Change suggests that by holding districts, schools, teachers and students accountable for meeting higher standards, as measured by student performance on high-stakes tests, administrators will supervise America's public schools better, teachers will teach better, and as a result students will learn more, particularly in America's lowest performing schools." Audrey Amrein-Beardsley, Rethinking Value-Added Models in Education


As states and school districts wade deeper into using value-added measures, or VAMs, in high-stakes employment decisions, lawsuits are inevitable. On Wednesday, seven teachers and their union filed a lawsuit against the Houston Independent School District (HISD). (See "Seven Teachers and Their Union Are Suing HISD to End Evaluations Tied to Students' Test Scores.") In this case, the teachers and their union are focusing on the fact that teacher value-added ratings fluctuated immensely from year to year. For example, one of the plaintiffs, Andy Dewey, a social studies teacher, received high ratings in 2012, enough for him to receive a bonus. His results the next year dropped significantly. The lawsuit, which you can read for yourself here (HISD Lawsuit), states, "Mr. Dewey went from being deemed one of the highest performing teachers in HISD to one making 'no detectable difference' for his students." If, as VAM supporters hold to be true, teachers have a substantial effect on student scores, how can a teacher get it perfectly correct one year and get it all wrong the next?

HISD defends the use of value-added measures in its high-stakes practices, even as organizations such as the American Statistical Association caution strongly against such use. Contrary to what supporters of value-added measures say, even if you set aside the technical and methodological concerns, there is absolutely no evidence that using value-added measures as a part of teacher evaluations has any effect on student learning. There is, however, a great deal of research pointing out potentially harmful, unintended consequences of using standardized tests in any high-stakes manner. Those consequences include:

  • Increased amounts of time devoted to teaching to the test and test prep activities.
  • Administrative decisions made to drop non-tested subjects like art and social studies.
  • Decreases in morale among teachers and administrators.
  • Administrative decisions to cut time spent in untested subjects to focus on tested subjects.
  • Narrowing of the curriculum to only what gets tested.
  • Teaching becomes more didactic and teacher-centered rather than student-centered or 21st century oriented.
  • Increased levels of frustration for students as they are subjected to more and more standardized tests.
  • Teaching shifts to focusing more on "bubble" students or "money" students as I have heard them called. These are the students that have been identified to have the most potential for the greatest amount of growth. The other students receive less instruction and teacher attention as a result.
  • Increased student apathy and boredom as a result of the disconnect between content relevancy and what's tested.
  • Teachers and administrators shop for students and classes in order to teach students who are more likely to provide them with desired academic growth and test scores.
  • Teachers are leaving a profession that once was about teaching students worthwhile content and is rapidly becoming focused on raising test scores.
  • Potential teachers are choosing to not become teachers because it is no longer about teaching content they care about; it has become more about playing the game to get high test scores.
  • In some schools and districts, teaching has become programmed and scripted and not creative, engaging and self-fulfilling any more.
  • Administrators and teachers are held accountable for test scores in an environment where there are so many things not under their control, such as budgets, which violates the Cardinal Rule of Accountability, which states "Hold people accountable for what they control."
The use of high-stakes testing and VAMs is impacting schools and classrooms, and the costs and negative consequences are high. This lawsuit, while indicative of serious methodological concerns about value-added measures, is also a symptom of a greater issue. Those who still support high-stakes accountability and the use of VAMs ignore or minimize any objections to their use. The massive increase in testing and its use for high-stakes personnel decisions under federal and state policy is negatively impacting our schools, classrooms, students, teachers, and parents. The question becomes: at what point are policymakers going to realize the damage being done to public education?

All this focus on standardized testing is making public education a bizarre world where schools serve soft drinks to students as a test preparation strategy (see "Florida School Stops Giving Students Caffeinated Soda Before Standardized Tests"), and where entire schools hold pep rallies in their gymnasiums to get students "pumped up" for the latest tests. Where time-honored subjects have become worthless and what's most trivial and "testable" gets emphasized. Where teachers are forced to focus on "money" students at the expense of other students who have needs too. Does no one else see anything morally wrong with this entire picture? To me, it is certainly understandable that when "the test results" are what determines job effectiveness, any educator is going to do what is necessary to increase the measure by which their effectiveness is judged. Still, there are moral boundaries we should be unwilling to cross and ethical principles we just can't violate. Raising test scores is not our highest calling as educators, despite what the Measure and Punish crowd thinks, and "raising them at any cost" is morally repugnant and gives these tests more dignity and importance than they deserve.


Wednesday, December 25, 2013

D.C. Teachers Suffer Faulty Evaluations at Hand of Value-Added Measures: Is NC on Same Path?

I have made it known that I am no fan of using value-added measures in teacher evaluations. There's just too much room for error, and there are too many things that can go wrong, from the test itself to the calculations. Value-added calculations are done in a mysterious black box, with too little oversight and too few safeguards in place to ensure that the data are error-free. As the Washington Post reports in "Errors Found in D.C. Teacher Evaluations," more than 40 teachers received incorrect evaluations for the 2012-2013 school year. One teacher was even fired due to miscalculations. That is totally unacceptable and should never happen.

Many states, including my own, have adopted the "value-added measure fad" without piloting or studying it at all, beyond listening to the sales pitches and lobbying of companies peddling this methodology. In North Carolina, there is currently no recourse for challenging the scores either. If teachers suspect their ratings are incorrect, there is no way to independently validate them. But if your goal is to implement corporate reform measures, any miscalculations and faulty teacher ratings are apparently acceptable, as long as the reform measure gets implemented. According to an additional post on the Washington Post Web site, "D.C. Schools Gave 44 Teachers Mistaken Job Evaluations," faulty calculations "of the value that D.C. teachers added to student achievement in the last school year resulted in erroneous performance evaluations for 44 teachers, including one who was fired because of a low rating."

This incident illustrates clearly that value-added measures used in teacher evaluations are too error-prone and should be discarded. When education policy gets too caught up in numbers and statistics, people, whether teachers or students, don't matter as much to the number-crunchers. The Obama administration should be ashamed of mandating this mistaken education policy to states in the first place. States that have implemented these measures need to discard this statistical fad immediately, because it will ultimately do more to harm education than help it. North Carolina needs to drop this fad too and begin moving its educational system into the 21st century. Sadly, our state leaders are so blinded by the numbers they just can't let go.

Wednesday, November 27, 2013

Misplaced Faith in Value-Added Measures for Teacher Evaluations

Due to Race to the Top and the No Child Left Behind waivers, 41 states have now elected to use value-added measures, or VAMs, as a part of teacher evaluations. This has been done without regard to the limitations these statistical models have and without any supporting research showing that doing so will increase student achievement. What are those limitations? The authors of the Vamboozled blog recently provided a post entitled "Top Ten Bits of VAMmunition" that educators can use to defend themselves with research-based data against this massive, non-research-based shift toward a model of teacher evaluation that will most likely do more to damage education than No Child Left Behind or any other education "reform" of modern times.

I recently uncovered a journal article entitled "Sentinels Guarding the Grail: Value-Added Measurement and the Quest for Education Reform," which describes a rhetorical study by Rachel Gabriel and Jessica Nina Lester examining the discourse during meetings of the Tennessee Teacher Evaluation Advisory Committee, or TEAC, from March 2010 through April 2011. TEAC was a 15-member panel appointed by the governor of Tennessee to develop a new teacher evaluation policy. The authors of this study examined the language used by those on this panel as they deliberated over the various components of a teacher evaluation policy.

What is interesting about this study is that the language employed by those in this meeting betrays some important assumptions and beliefs about teaching, learning, testing, and value-added measures that aren't entirely supported by research or common sense.

According to Gabriel and Lester, value-added measurement became a sort of "Sentinel of Trust" and a sort of "Holy Grail" for measuring teacher effectiveness during these meetings, in spite of all the research and literature that points to its limitations. According to the authors of this study, here are some of the assumptions those in the TEAC meetings demonstrated through the language they used:

1) Value-added measures alone define effectiveness.
2) Value-added measures are the only "objective" option.
3) Concerns about value-added measures are minimal and not worthy of consideration.

As far as I can see, there is enormous danger when those making education policy buy into these three mistaken assumptions about value-added measures.

First of all, VAMs alone do not define effectiveness. They are based on imperfect tests and often a single score collected at one point in time. Tests can't possibly carry out the role of defining teacher effectiveness because no test is capable of capturing all that students learn. Of course, if you believe by faith that test scores alone equal student achievement, then sure, VAMs are the "objective salvation" you've been waiting for. However, those of us who have spent a great deal of time in schools and classrooms know tests hardly deserve such an exalted position.

Secondly, value-added measures are not as objective as those who push them would like them to be. For example, the selection of which value-added model to use is riddled with subjective judgments. Which factors to include in and exclude from the model is a subjective judgment too. Choices of how to rate teachers using these measures require subjective judgment as well, not to mention that VAMs are not entirely based on "objective tests" either. All the decisions surrounding their development, implementation, and use require subjective judgment based on values and beliefs. There is nothing totally objective about VAMs. About the only objective number that results from value-added measures is the amount of money states pay consulting and data firms to generate them.
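To make that subjectivity concrete, here is a deliberately oversimplified, hypothetical value-added sketch. It is not any state's actual model; the student scores and teacher labels are invented. Expected scores come from a simple one-predictor regression on the prior year's score, and a teacher's "value added" is the mean residual of his or her students. Every element, which predictor to use, whether to add classroom covariates, how to aggregate residuals, is a modeling choice someone had to make.

```python
# Hypothetical, oversimplified value-added sketch (invented data).
# Expected current-year score = OLS regression on prior-year score;
# a teacher's "value added" = mean residual of that teacher's students.

def fit_line(xs, ys):
    """Ordinary least-squares slope and intercept for one predictor."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# (prior-year score, current-year score, teacher) for a toy population
students = [
    (60, 65, "A"), (70, 72, "A"), (80, 78, "A"),
    (55, 62, "B"), (75, 80, "B"), (90, 88, "B"),
]

slope, intercept = fit_line([s[0] for s in students],
                            [s[1] for s in students])

def value_added(teacher):
    """Mean residual (actual minus predicted) for one teacher's students."""
    residuals = [y - (slope * x + intercept)
                 for x, y, t in students if t == teacher]
    return sum(residuals) / len(residuals)

# With an OLS fit, residuals sum to zero across the whole population, so
# with two equal-size classes the two ratings are mirror images of each other.
print(value_added("A"), value_added("B"))
```

Swap in a different predictor set, drop the intercept, or weight recent tests more heavily, and the same teachers get different numbers; that is the subjectivity described above.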

Finally, those who support value-added measures often simply dismiss concerns about them as not a real problem. They use the argument that VAMs, as flawed as they are, are the "best measures" we currently have. Now that's some kind of argument! Suppose I were your surgeon and used "tapping on your head" to decide whether to operate on a brain tumor because "tapping" was the best tool I had. The whole it's-the-best-we-have argument does not negate the many flaws and issues of value-added measures, nor the potential harm of using them. Instead of dismissing the issues and concerns about VAMs, those who advocate for their use in teacher evaluations need to address every concern. They need to be willing to acknowledge the limitations, not simply discard them.

I offer one major, final caution to my fellow teachers and school leaders: it is time to begin asking the tough questions about the use of VAMs in evaluations. I strongly suggest that we learn all we can about the methodology. If anyone uses the phrase "Well, it's too difficult to explain," we need to demand that they explain anyway. Just because something looks complicated does not mean it's effective. Sometimes we as educators are too easily dazzled by the "complicated" anyway. The burden is on those who support these measures to adequately explain them and to support their use with peer-reviewed research, not company white papers and studies by those who developed the measures in the first place.

Tuesday, June 25, 2013

Teacher Evaluations That Effectively Impact Learning and the Classroom

“States have implemented teacher evaluations in their race to avoid the impossible demands of NCLB.” W. James Popham, Evaluating America’s Teachers Mission Possible?
Getting teacher evaluation right should be a priority, and in my opinion, a higher priority than any other educational initiative on the table right now. How we evaluate teachers will impact what happens in our classrooms more than any other reform measure we implement. For example, if we make test scores a large part of this evaluation, then we can expect much of what happens instructionally to be directed toward improving test scores. If we make technology a priority, we can expect to see more teachers engaging in its use in the classroom. And if getting students working collaboratively is emphasized, then expect to see teaming as a big part of classroom practice. What gets evaluated is what gets done, period! As Popham points out, "A teacher-appraisal system that inclines teachers to make good instructional decisions is likely to do just that. Conversely, a state teacher-appraisal that points teachers in unsound instructional directions, will, unfortunately, also just do that." What we evaluate is going to be what we get instructionally.

With the importance of what to include in our teacher evaluations in mind, getting teacher evaluation right should not be a hurried process to satisfy federal education policy, though that is what has happened in many states. In order to get waivers from the impossible and ludicrous demands of the federal No Child Left Behind legislation, states have hurriedly put together teacher evaluations that use test scores in some manner. This express route to using test scores to determine teacher quality should frighten any educator and any parent, because of the likelihood that how students do on tests will become the center of what we do in our classrooms. In short, our schools, every single one of them, become massive "test-prep centers." Let's just hope those evaluations encourage sound instructional decisions and not unsound ones. Otherwise, we could end up with an education system much worse off than what we have.

What, then, are some major mistakes states could make in this rush to implement federally mandated teacher evaluations? In his book Evaluating America's Teachers: Mission Possible? W. James Popham describes what he calls "Four Teacher Evaluation Implementation Mistakes," which are, perhaps, a good starting point for critiquing these state teacher evaluation schemes.

Mistake 1: Using Inappropriate Evidence of a Teacher's Quality

According to Popham, implementation mistake one is simply using "poorly chosen evidence" to determine teacher effectiveness. Evidence is poorly chosen when it does not accurately or validly tell us anything about a teacher's quality. For example, an observation by a poorly trained or untrained classroom observer may not provide us with appropriate evidence of a teacher's quality. Using state achievement tests may be poorly chosen evidence as well. As Popham clearly points out, "There is almost zero evidence that these state accountability tests yield data permitting valid inferences about a teacher's instructional quality." So even standardized test scores can be inappropriate evidence if one can't make a valid inference about teacher quality from those tests. With this mistake in mind, it is vital that administrators and teachers carefully scrutinize what states choose as evidence in their teacher evaluations.

Mistake 2: Improperly Weighting Evidence of a Teacher's Quality

Improperly weighting evidence involves assigning improper weights to the various forms of evidence used in teacher evaluations. The sources of evidence most often used include: 1) student test performances, 2) administrator ratings of teachers' skills, 3) classroom observations, and 4) parent or student ratings of teachers. As Popham points out, "a weighting mistake occurs when a given source of evidence is given either far greater, or far lesser, evaluative importance than it should be given." In other words, this error occurs when states place too much or too little emphasis on one source of evidence. For example, some states weight test scores at 50 percent in their evaluation schemes. Such heavy weighting of test scores is improper if those tests do not really allow one to make valid inferences about teacher quality. Properly weighting the evidence means doing so in a reasonable and fair manner. It is important for teachers and administrators to scrutinize the weighting scheme states apply to teacher evaluations as well.
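As a quick, hypothetical illustration of Popham's point (the teachers, scores, and weights here are all invented, not any state's actual scheme), the same two teachers can trade places depending solely on how the evidence is weighted:

```python
# Invented example: two teachers rated 0-100 on three evidence sources.
# The composite rating is just a weighted average, so the state's choice
# of weights alone can decide who comes out "more effective."

def composite(scores, weights):
    """Weighted average of evidence scores; weights must sum to 1."""
    assert abs(sum(weights.values()) - 1.0) < 1e-9
    return sum(scores[k] * weights[k] for k in weights)

teacher_x = {"test_growth": 55, "observations": 90, "surveys": 85}
teacher_y = {"test_growth": 80, "observations": 65, "surveys": 60}

balanced   = {"test_growth": 0.2, "observations": 0.5, "surveys": 0.3}
test_heavy = {"test_growth": 0.6, "observations": 0.3, "surveys": 0.1}

# Under balanced weights Teacher X outscores Teacher Y;
# under test-heavy weights the ranking flips.
print(composite(teacher_x, balanced), composite(teacher_y, balanced))
print(composite(teacher_x, test_heavy), composite(teacher_y, test_heavy))
```

If the test component cannot support valid inferences about teaching, a test-heavy scheme is simply amplifying noise, which is exactly the weighting mistake described above.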

Mistake 3: Failing to Adjust Evaluative Weights of Evidence for a Particular Teacher's Instructional Setting

As Popham points out, "To evaluate teachers as though they were operating in identical instructional settings is naive." Using a "cookie-cutter" evaluation system that fails to take into account the unique qualities of a teacher's instructional setting forces standardization on classrooms that are far from standardized, and it will not make them standardized either. When a teacher evaluation system weights all evidence without taking into account a teacher's instructional setting, it forces that teacher to do the "hoop-jumping" dance just to satisfy the evaluation. Classrooms are as diverse as the teachers teaching them. Even in the factory-designed world of the 20th century, instructional settings were not standardized, though some nostalgically think so. To evaluate teachers as if they were all teaching under the same conditions shows pure ignorance of what goes on in classrooms. They are highly complex and diverse environments, and to think one can evaluate them all the same fails to take this diversity into account. Teachers and administrators would do well to scrutinize the evaluative weights states place on various forms of evidence and demand that those weighting systems accommodate the diversity of instructional settings that exist.

Mistake 4: Confusing the Roles of Formative and Summative Teacher Evaluation

As Popham points out, "Formatively, we want to improve teachers' prowess so they can do their most effective job in helping students learn. Summatively, we want to identify the exceptional teachers who should be rewarded as well as those teachers who, if they cannot be helped, should be relieved of their teaching responsibilities." Mistake four involves combining these two functions of teacher evaluation. As Popham points out, the two functions can conflict with each other, making neither effective. In addition, isn't it unrealistic to expect teachers to open up and be candid and reflective about their performance with the person who holds their future employment in his or her hands? Perhaps it is time for teachers and administrators to advocate for separating the formative and summative functions of teacher evaluation so that both can do their jobs of improving teaching more effectively.

Getting teacher evaluation right should be a high priority, because how teachers are evaluated is going to directly impact how instruction is carried out in the classroom. Unfortunately, states, including my own state of North Carolina, have, in my opinion, rushed to satisfy federal mandates for that evaluation system, and I fear the negative impact on education and the teaching profession in North Carolina, and around the country, is going to be felt for years.

Monday, June 17, 2013

Call for Skepticism and Caution When Using Test Scores in Teacher Evaluations

“We need to be careful that the tests we use in a properly designed teacher-appraisal system do, in fact, contribute to a valid (that is, accurate) inference about a teacher’s quality.” W. James Popham, Evaluating America’s Teachers: Mission Possible?
North Carolina took the plunge this year and started using test scores as part of teacher and principal evaluations. The state has even invented a "new" kind of test, called a "Measure of Student Learning," in order to make sure there is plenty of test data to go around. What is particularly telling is how "carefully" the state crafted the term "Measures of Student Learning." It's as if not calling it a test somehow makes it not a test. State-level educational logic never ceases to amaze me. Of course, the state then started calling these "Measures of Student Learning" something else: "Common Exams." Notice again the careful use of the word "exam" rather than "test." It's almost as if, when you don't call it a test, it isn't a test; but apparently state-level policymakers haven't heard the old saw about a rose by any other name still being a rose.

Besides North Carolina's struggle with what to call its newly implemented tests, there's still the question of what the unintended consequences of having thousands of teachers "teaching to the test" will do to students in our state. Ultimately, being able to brag that your students "have the best scores in the world" is most likely what politicians and state-level education officials are after. That's why they see salvation through test scores as the means to the "Educational Promised Land." Ultimately, there's a flawed logic driving this whole accountability and testing movement: the idea that learning can be entirely reduced to bubble answer sheets and measured in a single sitting, and that teachers can't be trusted to tell whether a student has learned something or not.

In my years as an educator, I have been amazed at how trusting and accepting educators in North Carolina are when it comes to the latest policy flowing down from on high. It's as if they accept that those at the state level know more than they do, or somehow have access to magical information they do not have. So when the state implements something like the use of test scores in evaluations, many educators accept that the powers that be know what they are doing, and so they trust them. Given the history of reform ideas and educational policy handed down from on high, this "trust" is highly misplaced. I like to think that state-level education officials mean well, but throughout my career these ideas, when implemented locally, have sometimes been a disaster and sometimes downright bad for kids. Instead of being so trusting, I submit that all educators in schools and districts need to become skeptics and ask tough questions of our state-level and federal-level policymakers. We should never accept the "trust me, this will work" answer.

It is in this spirit of skepticism that I turn to Popham's book, Evaluating America's Teachers: Mission Possible? and our state's venture into making high-stakes testing even more high-stakes. In spite of what our state-level policymakers say, I am not fully satisfied that North Carolina's tests are adequate measures of educator effectiveness, and a healthy skepticism is still in order. This whole push to add test scores to teacher and principal evaluations has been a rush from the start. Depending on when you asked, how the tests were to be implemented has changed multiple times over the last two years. Never mind the fact that not a single teacher in North Carolina even saw the tests before they were implemented. In their rush to have "test data," it's as if our state-level policymakers think "any old data will do." They have failed to take the time to establish whether any of these tests really tell us anything about teaching quality.

In light of our state's push into "higher-stakes testing," I think Popham reminds us of some key issues and ideas about tests and teacher evaluations that state politicians and policymakers seem to forget.
  • “Tests are not valid or invalid. Instead, it is a test-based inference whose validity is at issue.” In other words, it isn't the test that’s valid or invalid; it is the inferences drawn from the test that have these qualities. It boils down to whether you can actually make a valid inference based on the test or not. The question is whether North Carolina's tests, which have been implemented haphazardly and in a thrown-together manner, actually tell us anything at all about the quality of teaching in our classrooms. Can I honestly say Teacher A is a good teacher because she added "this much" value to her students' Measures of Student Learning? It seems to me that this puts a great deal of faith in a single test.
  • “Tests allow us to make inferences about a test taker. This inference, depending on the appropriateness of the test as a support for the inference being made, may be valid or invalid.” As Popham points out, the inference we make about the learner may be valid or invalid depending on the “appropriateness of the test” in its role of supporting the inference being made. Validity is the extent to which that inference, or conclusion, is well-founded and corresponds to the real world. This boils down to whether the inference we draw about a student is valid or not. For example, should we infer, based on a student’s test scores, that he is not proficient in the subject, we must be satisfied that the test we are using is the “appropriate measure,” and we must also make sure the conclusion we draw considers all real-world facts. Ignoring a student’s socio-economic status, or even whether he experienced a death in the family, can make our inference about the student’s proficiency invalid. Then there's the whole issue of making an inference about a teacher's or principal's effectiveness using this same test. Has North Carolina sufficiently established the appropriateness of its Measures of Student Learning, End-of-Grade Tests, and End-of-Course Tests as instruments that allow for making inferences about teacher and principal quality? I'm not sure it has. Another question: do these Measures of Student Learning allow us to make valid inferences about teacher quality? I'm not convinced they do.
As North Carolina moves forward with a teacher and principal appraisal instrument that uses test scores to determine effectiveness, all educators need to educate themselves and scrupulously ask questions of policymakers at all levels.

As Popham suggests, “If heavy importance is being given to students’ performances on state tests for which there is no evidence supporting such an evaluative usage, then teachers (I would add principals too) might wish to engage in further study of this issue so that, armed with pertinent arguments, they can attempt to persuade educational decision makers that more appropriate evidence should be sought.” In other words, all educators, administrators, and teachers need to study how North Carolina or any state is using test scores to determine educator effectiveness.

Administrators owe it to their teachers, and themselves, to understand that some of these tests were never designed to determine educator effectiveness, so that data needs to be viewed with skepticism. And I would add that the manner in which these Measures of Student Learning were developed and are administered may not allow one to draw valid inferences about teacher quality. Test scores in North Carolina currently make up only one-sixth of the teacher evaluation, and effective administrators are going to keep this in mind and not let the allure of numbers numb them to the other five standards.

Saturday, August 18, 2012

Resource for 21st Century School Leaders Who Are Instructional Leaders

No one argues anymore that principals must take on the role of instructional leader in their schools. It is widely accepted, but having credibility in that role is often difficult when principals do not have teaching experience or don't really understand what being an instructional leader means. Sally Zepeda, author of the book The Principal as Instructional Leader: A Practical Handbook, points out that, “Principals who are instructional leaders ‘link’ the work of leadership and learning to everyone in the school.” Furthermore, these school leaders are charged with building an instructional program that “links the mission and vision of their schools to:
  • supervising instruction
  • evaluating teachers
  • providing professional development and other learning opportunities for teachers
  • modeling proactive uses of data to make informed decisions that positively affect student learning
  • promoting a climate of instructional excellence
  • establishing collegial relationships with teachers.”
With this list of charges to principals as instructional leaders, it is easy to see why leading instruction in a school is a daunting task, and that does not even consider all the other roles principals assume, from facilities management and budgeting to public relations and customer service. But for 21st century school leaders, being an instructional leader is no longer an add-on role; it is at the core of transforming schools into 21st century institutions with learning at the center. Zepeda’s book The Principal as Instructional Leader is a hands-on guidebook for the school leader taking on this role.


The Principal as Instructional Leader: A Practical Handbook is just as its title implies: a practical handbook to instructional leadership that avoids becoming entangled in all the theories of learning, curriculum, and instruction that other books on instructional leadership often do. It provides principals, potential principals, and teacher leaders with comprehensive but concise information needed to tackle those things instructional leaders must tackle to improve student learning.

Often, books on instructional leadership get enmeshed in theory and rationale and never recover enough to rise above “textbookese” to give school leaders the tools to take on this most important role. This book avoids that trap. It relentlessly focuses on the practical side of supervising instruction. Readers are provided with an overview of what instructional leadership is and what the process looks like, and are then given specific tools to carry out that role in their schools or educational institutions.

After Zepeda briefly describes what instructional leadership is, she then ties that role to the vision and culture of the school. She also includes a complete overview of the instructional supervision process and provides an extensive list of observational tools as supplemental downloads. These downloadable tools give principals the means to walk into classrooms and observe specific instructional elements such as “Beginning of Class Routines” or “Using Bloom’s Taxonomy and Levels of Questions.” Each of the downloads is an observation instrument for gathering data on specific aspects of classroom teaching and student learning.

The Principal as Instructional Leader: A Practical Handbook is a definite reference book that every school leader, from teacher leader to district superintendent needs to have in their school administration library. I have read other books on this aspect of school leadership, but Zepeda provides the most no-nonsense approach to instructional leadership yet. Definitely an excellent addition to your reading list.

Tuesday, November 8, 2011

NC to Test Every Subject K-12 and Tie Teacher & Principal Evaluations to Test Scores

In a meeting I attended this past Monday, representatives from the North Carolina Department of Public Instruction provided educators with a presentation describing how our state is adding a sixth standard to our teacher evaluation and an eighth standard to our principal evaluation, directly tying those evaluations to test scores. What I discovered at that meeting was that the proposed standards are worded innocuously and can hardly be questioned. For example, the teacher standard reads:

"Teachers contribute to the academic success of students. The work of the teacher results in acceptable, measurable progress for students based on established performance expectations using appropriate data to demonstrate growth."

I would think just about all teachers hope that what they're doing is contributing to the academic success of their students. The big difference of opinion among educators, however, is perhaps over what this "academic success" is and whether growth measured by a test score is accurate. The principal standard is also written in this hard-to-argue-against language:

"ACADEMIC ACHIEVEMENT LEADERSHIP: School executives will contribute to the academic success of students. The work of the school executive will result in acceptable, measurable progress for students based on established performance expectations using appropriate data to demonstrate growth."

Both of these standards make sense on the surface. There's no educator alive who would argue that teachers and principals are not responsible for the achievement of their students. In my years as an educator, there's not a day that passes where concern about whether our students are learning what we're asking them to do isn't on my mind. The problem is not with these standards, but it is with the North Carolina Department of Public Instruction's interpretation of what "student achievement" is. Under their plan, student achievement = growth on standardized test scores. While that is a nice, neat simplification of what achievement is, it ignores all learning that can't be tested with standardized testing. By interpreting student achievement as test score growth, the state is simply making testing in North Carolina even higher-stakes than before. With this test-emphasis, we will be well on our way to becoming test-prep factories that don't churn out educated students, but excellent test-takers. North Carolina is going to subject students to tests early and often, not to measure student progress, but to measure how well the teacher is doing. This betrays an underlying, but mistaken belief that tests can be used to tell you how teachers and principals are doing. North Carolina has defined "effective teaching" and "effective school administration" as simply growth demonstrated (by whatever model they can create) by test score performance.

Ultimately, the meeting I attended was billed as an opportunity for educators like myself to provide "feedback" to North Carolina Department of Public Instruction personnel on this proposed teacher evaluation change. In practice, however, it seemed more like a here's-what-we're-going-to-do-to-you session dressed up to give the "appearance that we're listening to educators." (That's a tactic perfected by Arne Duncan.) During the session, when educators expressed concerns about what the state is planning to do, DPI personnel often cut them off mid-sentence to defend the state's plans rather than sincerely letting educators voice their concerns and listening. Instead of being an opportunity to express concerns, it was an opportunity for the state of North Carolina to summarily dismiss them.

To this educator, there are two equally frightening things about this whole test-centered approach to teacher and principal evaluations, both of which I and other educators at this meeting tried to bring up, only to have the North Carolina Department of Public Instruction representatives quickly let us know that we were wrong.

  • First of all, North Carolina is planning to evaluate teachers and administrators using test scores this spring even though it has not decided which growth model it will use, nor does it know exactly how this evaluation will take place. What's worse, it will not know until at least February, when the State Board has an opportunity to hear the first reading. In typical bureaucratic fashion, the state is rolling out and implementing a program before it is finalized and clearly defined. I can recall two other North Carolina Department of Public Instruction initiatives that were rolled out before they were well defined: the ABC Accountability Model in the 1990s, and the NC WISE student data system in the early 2000s. Both were rolled out, bugs and all, without any deep thought about their practical application in the districts, and classroom teachers and educators had to suffer while the state got its act together. Now it appears our State Department of Public Instruction is doing it yet again, except this time with much higher stakes attached. North Carolina will be making decisions about educators' careers based on something that isn't even clearly defined, and won't be until three-fourths of the school year is over.
  • Secondly, North Carolina is also planning to create tests for every single subject taught, administer them, and use the results not to see how students are doing, but to see whether teachers and principals can raise test scores. Recently, Charlotte-Mecklenburg proposed this "test-everything-that-moves" approach, and it blew up in the administration's face so badly that it had to take money from Bill Gates and the Broad Foundation to hire public relations personnel to try to sell it. If something smells so bad that you have to repackage it to sell it, then perhaps there's something fundamentally wrong with it. Even though educators at this meeting expressed concern over how much more time we will spend testing under this proposal, and how problematic it will be to implement, the Department of Public Instruction personnel repeatedly dismissed these concerns outright and told us how wrong we were.

As an administrator and educator, I understand the need for accountability. I understand the need for testing to see how our students are doing. I even cynically understand why our state is doing this. It is not doing it because it's what's best for our kids; how could testing students' every move be beneficial? No, the state of North Carolina is doing this so that it can keep its Race to the Top money, plain and simple. As a 22-year veteran educator, I've seen education measures come and go. Most have been benign and simply discontinued, their passing unnoticed. This time, however, I'm afraid it's going to be quite different. The state of North Carolina is using a program that is not yet fully defined, and a test-everything-with-a-pulse strategy, that together are going to destroy public education in this state. It is going to turn our schools into the test-prep factories that Diane Ravitch has spoken about so eloquently so many times.

Note: Here's the link to the presentation that was used, if you would like to see it for yourself.