Breaking the Spell

The dire wolf is not back. The New Yorker tried to convince us otherwise this week, with an article that surely makes one wonder what happened to its famed fact-checking department. Extinct, I guess, much like Canis dirus, which lived in the Americas until about 10,000 years ago.
The photos of the admittedly very cute little pups – photos supplied by the genetics startup Colossal Biosciences and reprinted with credulity by multiple publications – are not photos of any resurrected species. These are not dire wolves – not phylogenetically, and maybe not even morphologically, particularly if what you're looking for in a dire wolf is the creature you saw in Game of Thrones. (One of the pups is named Khaleesi, so you can see the PR game being played here.) These are just plain old wolves – Canis lupus – with 20 gene edits, although exactly which genes were edited has not been disclosed. This information is – surprise, surprise – the "company's I.P."
According to The New Yorker article, the grey wolf is the closest living relative of the dire wolf: "they share 99.5 per cent of their DNA," it says, linking to a Nature article that says nothing of the sort and instead posits that the dire wolf lineage diverged from other canids – including not just grey wolves but also coyotes and dholes, which are equally close relatives – some 5.7 million years ago.
As Tom Scocca's Indignity newsletter puts it,
What Colossal is selling, by all appearances, is furry vaporware. The company's rationalizations for its bogus dire wolves and imaginary mammoths are as bogus and imaginary as the animals themselves. There is no available ecological role for an ersatz Ice Age mammoth in the overheating, human-ravaged 21st century; the idea of rewilding any part of Earth back to a Pleistocene ecosystem—whether with fully reincarnated ancient megafauna or with synthetic functional approximations of those megafauna—is shameless nonsense. The planet is barely wild enough right now for the regular wolves and regular elephants to get by.
Questions about canine genes might seem far afield from the topic at hand here at Second Breakfast: education and artificial intelligence and, of late, apparently, the end of democracy. But I'd argue that there are important connections worth making, and not simply, as Ben Williamson's work reminds us, because educational genomics remains an area of research for those pushing for predictive measurements in schooling and for new technological infrastructure (and old racist practices) to rank and sort students.
Edward Ongweso Jr. also published a story this week on "DNA's real value" – about 23andMe's bankruptcy and the links between genetic testing, advertising, and authoritarianism. Charles Murray – yes, that Charles Murray – is apparently among those interested in buying 23andMe's genetic data, which should tell you everything you need to know about the politics of the genetic testing industry. The value of this data – whether extracted by 23andMe or utilized by Colossal Biosciences – lies not in "personalized medicine" or new drug treatments, Ongweso Jr. argues, but in reactionary efforts to bolster the police state and to undermine our empathy and collective responsibility to one another.
Eugenics is at the core of this project, just as it is foundational for artificial intelligence. And the politics of AI is much the same too: AI is, at its heart, a technology of discrimination.
See also: "Inside a Powerful Database ICE Uses to Identify and Deport People" by 404 Media's Jason Koebler. "The Shocking Far-Right Agenda Behind the Facial Recognition Tech Used by ICE and the FBI," by Mother Jones's Luke O'Brien. That's Clearview AI, which from the outset planned to use its tools to identify immigrants and leftists. Peter Thiel and Palantir have a hand in all of this. (Palantir stock, FWIW, is up 340% since Trump's inauguration.)
"A good definition of AI is the branch of computer science dedicated to making computers work the way they do in movies" – Alan Blackwell, Moral Codes: Designing Alternatives to AI
I spoke last night to a class of education/sociology grad students, laying out my very long list of reasons why using AI in education is a very bad idea – environmentally, politically, pedagogically, morally. One student came up to me after class and asked, with a mix of panic and exasperation, "What the hell can we do?"
I rarely have a satisfactory answer to this because the right answer – or at least, the full answer – is the most difficult path forward: we have to change everything. We have to radically reimagine education at both the micro and macro levels – how schools are funded, how schools are staffed, which practices matter, how we develop our relationship to knowledge and, even more importantly, to one another. We must expand human capacity, not outsource and privatize and turn education over to (and turn teachers and students into) machines.
Oh, and also: eat the rich.
But there is a smaller step, one that requires a lot less of us, but that is nonetheless incredibly powerful: ask questions. Push back on the technology, shatter the illusion that AI is all-powerful, inevitable, necessary, or even good – as Neil Postman argues in the closing pages of Amusing Ourselves to Death, his book about the dangers of television to civic discourse:
What is information? Or more precisely, what are information? What are its various forms? What conceptions of intelligence, wisdom and learning does each form insist upon? What conceptions does each form neglect or mock? What are the main psychic effects of each form? What is the relation between information and reason? What is the kind of information that best facilitates thinking? Is there a moral bias to each information form? What does it mean to say that there is too much information? How would one know? What redefinitions of important cultural meanings do new sources, speeds, contexts and forms of information require? Does television, for example, give a new meaning to “piety,” to “patriotism,” to “privacy”? Does television give a new meaning to “judgment” or to “understanding”? How do different forms of information persuade? Is a newspaper’s “public” different from television’s “public”? How do different information forms dictate the type of content that is expressed?
To ask questions, Postman argues, is to break the spell.
So much of the talk about artificial intelligence (and, no doubt, this whole "de-extinct" dire wolf as well) relies on our uncritical awe, on promises of the good that the technology will someday be able to do. Often Arthur C. Clarke's famous adage – that "any sufficiently advanced technology is indistinguishable from magic" – is wielded to dismiss those without sufficiently advanced scientific knowledge rather than to showcase how those who are peddling technology rely on a fair amount of hand-waving to get us all to play along with the future they're invested (literally) in building.
In a recent op-ed, Tressie McMillan Cottom argued that artificial intelligence is pretty "mid" – a goofy and even benign kind of magic that still sometimes adds a sixth finger to human hands in the images it generates. But "mid" isn't the whole of the magic trick that AI's promoters are trying to pull off. The other half is the promise, as Ethan Mollick often chuckles, that this is the worst AI you'll ever use.
The technology's advocates would like us to ignore that AI is being wielded right now, today, to round up immigrants, to identify "subversives," to revoke student visas, to deny social services, to extract value from the commons, to crash the economy, to expand government austerity, to facilitate genocide, to bolster the fossil fuel industry, to dismantle democratic accountability.
Ask questions. Break the spell.
The ASU+GSV Summit was held this past week – the annual gathering where ed-tech's most powerful investors and policy-makers have long plotted and schemed over how to profit from a privatized education sector. There was a bit of pushback. Math teacher Dan Meyer took folks to church. 🙌, for sure. In his presentation, Ben Riley made crystal clear the links between AI – the theme and big marketing push of the conference – and the rise of techno-fascism. The Chronicle of Higher Education's Goldie Blumenstyk challenged the audience at her talk about their complicity in the face of the current administration's policies: after all the money those present have made on the backs of educational institutions, how could they be silent?
Mostly, I gather, they managed to be silent. Frankly, I'm not even sure it was awkward silence.
Honestly, I don't know why anyone in and adjacent to ed-tech would continue to believe that the trajectory the tech industry is on will take us anywhere other than the subversion of democracy, although I know there are good folks who do. I saw someone say that Colin Kaepernick was at the event, hawking some AI thing, and I reckon in a different life I could have made bank helping celebrities avoid these "oops, I did a fascism" kinds of moments with their ed-tech philanthropy. But hey. I digress.
I recognize that people want to believe that, in their little corner of educational software, everything is kind and fun and empowering for teachers and students alike. But as David Golumbia argued in his posthumously published book Cyberlibertarianism: The Right Wing Politics of Digital Technology, the tech industry has long sought to explicitly "disrupt" two of democracy's core institutions: journalism and education. And phew. Look at us/US now.
Sure, I guess it is funny that Education Secretary Linda McMahon misread from her teleprompter at the event and said "A1" instead of "AI." Twice. Hahaha. But also goddamn. Let's not have that blunder be the takeaway.
The takeaway is that the ed-tech industry, so busy hustling AI products and services to schools, continues business as usual as the Trump Administration actively dismantles civil rights and public education. And maybe we should recognize that that's been the goal all along.
So here's a question for you (and a question for you to ask others – to break the spell): how can you, in good conscience, compel any student to use any piece of education technology right now, to upload any personal data to any educational provider – institutional or third-party – knowing that there are no assurances that this information will not be shared with the US government and used against them?

Thanks for subscribing to Second Breakfast. Please consider becoming a paid subscriber. Your financial support enables me to do this work. It's a better investment than the stock market, because I am actually committed to a better future for everyone.