
Wednesday, April 9, 2025

When nothing else works: Blaming the scientific method instead of the pseudoscience

The world of facilitated communication and variants (RPM, S2C, Spellers Method) is heavily fortified against researchers and skeptics. Those fortifications, along with the support it provides to desperate parents in impossible situations, are among the reasons why, despite all the evidence against it, FC/RPM/S2C has persisted for so many decades.

Some fortifications derive from social pressure. Using FC typically involves joining a tight-knit community of other FC users that becomes one of your main support networks. Should you ever grow disillusioned with FC, you lose all those friends; if you agree to undergo authorship testing, you incur their hostility; if you speak out publicly, you become their enemy. Most defectors keep quiet (our Janyce is an extraordinarily rare exception).

Other fortifications derive from professional pressure. If you’re a “certified” S2C practitioner and agree to authorship testing, you risk losing your “certification.”

Other fortifications derive from legal pressure. If you live or work in proximity to a facilitator and their clients—whether you are the spouse of a person who facilitates your child, or a teacher/clinician who works with a child who is facilitated by others—and if you express skepticism about, or otherwise try to resist, the use of FC, you risk serious harm to your reputation, your livelihood, and even your basic rights. To be more specific, you risk being accused, via FCed messages controlled (however unwittingly) by those who now see you as their adversary, of abusing the FCed individual. You may lose your job or custody of your child; you may get locked up for months in solitary confinement. (See Stuart Vyse’s highly disturbing piece on one such account).

Fortress of the Îlette of Kermovan, in the municipality of Le Conquet (Finistère, Brittany, western France). Wikimedia Commons.

But FC/RPM/S2C has also fortified itself in more theoretical ways. That is, the theories that proponents have advanced in support of FC include claims that attempt to make FC impossible to explore scientifically. One is a claim about minimally and non-speaking autism that, if true, would invalidate any observations or tests of the communicative and cognitive capabilities of minimal and non-speaking autistics—except for those observations and tests based solely on FCed output. Other claims seek to invalidate tests that are based on FCed output—specifically, message-passing tests, or tests that blind the facilitator in order to determine who is authoring the messages. Collectively, these claims, if true, would mean it’s impossible to assess the validity of FC, whether by testing, via a standard language assessment, whether the facilitated person understands the words they spell, or by testing, via a facilitator-blinded condition, whether the facilitated person can spell correct answers that their facilitator doesn’t know and therefore can’t cue.

In the rest of this post, I’ll take a closer look at these claims.

The claim about minimally and non-speaking autism is that the underlying disability isn’t the socio-cognitive challenges that eight decades of clinical observation, diagnostic screening tools, and empirical research have shown—and continue to show—autism to be. Rather, the underlying disability, according to FC proponents, is a mind-body disconnect. Individuals with minimal and non-speaking autism are, allegedly, unable to control their bodies, especially their fine motor movements. This effectively invalidates any attempt to assess the cognitive or linguistic skills of minimal and non-speakers with autism. That’s because all responses to test questions involve bodily responses, particularly fine motor responses: speaking, pointing to pictures, assembling shapes, and using a pencil. Indeed, all inferences based on any behaviors are purportedly invalid. Included among the invalidated inferences are these:

  • That if someone is always looking away from the people speaking in their presence, they’re probably not paying attention to what’s being said;

  • That if they don’t respond appropriately to requests, directions, or other communications, it’s probably because they don’t understand you;

  • That if they don’t speak or type independently, it’s probably because they don’t have expressive language skills;

  • That if they walk away or try to escape while being facilitated, or say “No more! No more!”, they probably don’t want to be facilitated.

As Elizabeth Vosseller puts it, “Insides don’t match outsides.” But where does that leave us?

Even eye movements, powerful indicators of what someone is attending to or thinking about, purportedly reveal nothing in minimally and non-speaking autism. Some FCed individuals are said to have “ocular apraxia,” such that they can’t control their eye movements. Some are said to use peripheral vision instead of direct gaze, such that where they’re looking isn’t necessarily what they’re looking at, and such that they can supposedly see the letters they’re pointing to during FC sessions even when they appear to be looking away. Some routinely wear sunglasses, allegedly because of visual sensitivities, making it hard to see what they’re looking at.

But, in fact, the only basis we have for making any judgments about anyone (unless, of course, we’re telepathic and can read their minds directly) are their body movements: their speech, gestures, facial expressions, and actions. The mind-body disconnect theory of autism, therefore, makes autistic individuals essentially impossible to evaluate—and makes all claims about autistic individuals both unverifiable and unfalsifiable.

This, in turn, makes the mind-body disconnect theory not only profoundly wrong (i.e., in conflict with eight decades of clinical observation, diagnostic screening tools, and empirical research, and without any empirical support of its own), but profoundly unscientific. That’s because falsifiability—being susceptible to being disproven—isn’t just the foundation of science, but the largely agreed-upon demarcation between science and pseudoscience that dates back to philosopher of science Karl Popper.

Which brings us to a quick detour into Karl Popper’s insights.

Popper, recognizing that some claims are hard to prove, also recognized that some such claims are nonetheless more scientific than others. In particular, he recognized a distinction between claims like All swans are white and claims like We’re living in a computer simulation (my example; not his). While it’s impossible to prove that all swans are white (because that would involve somehow inspecting every single swan, past, present, and future), All swans are white, at least, has the virtue of being falsifiable. That is, it can be falsified if a single non-white swan is found. And thus, All swans are white counts as a scientific claim (one that may turn out to be false).

We’re living in a computer simulation, however, cannot be falsified. That’s because anything that looks like evidence that we’re not living in a computer simulation could be part of the simulation. Any experiment we try to do to test whether we’re in a computer simulation, including the results of that experiment, could be part of the simulation. Thus, this claim is not a scientific one—it will never turn out to be false, even if it is false.

Curiously, there’s at least one outlier in the philosophy universe who’s proposed a totally different way to demarcate science from pseudoscience. This wasn’t someone I’d ever heard of (he appears to be unaffiliated), but he was recently cited on X, seemingly in support of the paranormal claims made on the Telepathy Tapes podcast about non-speaking individuals with autism who are subjected to S2C.

This self-styled philosopher—I’m not sure what else to call him—states that it isn’t the various empirical claims out there that we should be judging as pseudoscientific, but, rather, the methods we use to judge those claims (Westcombe, 2019). For him, a method is scientific “if it is well suited to establishing the truth or falsehood of a particular empirical claim” and pseudoscientific “if it is not well suited to establishing the truth or falsehood of a particular empirical claim.” Since no method is well suited to establishing the truth or falsehood of an empirical claim that turns out to be unfalsifiable, any method that takes on such a claim is, according to Westcombe’s approach, a pseudoscientific method.

By shifting the stigma of “pseudoscience” from claim to method, this approach effectively shields all claims from all charges of being pseudoscientific. Instead, the only entities that are potentially pseudoscientific are the methods that investigate these claims. Any method that investigates a claim that turns out to be unfalsifiable takes the blame for that unfalsifiability: the method is now pseudoscientific, while the claim itself stays above reproach. Since Westcombe fails to show why this seemingly upside-down philosophy is superior to Popper’s—he doesn’t even mention Popper—it isn’t worth spending more time on it. Except that, as we’ll see below, Westcombe isn’t the only one to suggest that methods, rather than claims, are the ones at fault when methods investigate problematic claims.

Which takes us to the other set of claims that attempt to fortify FC against scientific exploration. These are claims that attempt to discredit message-passing tests—tests that rigorously assess who is authoring the facilitated messages by blinding the facilitator to the answers to questions directed at the person they’re facilitating. Such tests, mostly dating to the 1990s, have consistently shown that it is the facilitator, not the facilitated person, who controls the typing.
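To make the logic of these designs concrete, here’s a minimal, purely illustrative simulation of a facilitator-blinded message-passing trial. It is not any published protocol; the items, trial counts, and behavioral assumptions are all invented. If the facilitated person is the author, accuracy should stay high whether or not the facilitator can see the pictures; if the facilitator controls the output, accuracy should collapse to chance as soon as the facilitator is blinded.

```python
import random

# Purely illustrative sketch of a facilitator-blinded message-passing trial.
# The items, trial counts, and behavioral assumptions are all hypothetical.

ITEMS = ["hat", "flower", "dog", "cup", "ball"]

def run_trial(author, facilitator_blinded):
    """One trial: show a picture, return (what was shown, what gets typed)."""
    target = random.choice(ITEMS)                 # picture shown to the facilitated person
    facilitator_sees = None if facilitator_blinded else target
    if author == "facilitated person":
        return target, target                     # an independent author types what they saw
    # Under facilitator control, the typed label tracks what the facilitator saw;
    # when the facilitator is blinded, the output is no better than a guess.
    return target, facilitator_sees or random.choice(ITEMS)

def accuracy(author, facilitator_blinded, n=1000):
    hits = 0
    for _ in range(n):
        target, typed = run_trial(author, facilitator_blinded)
        hits += (typed == target)
    return hits / n

for author in ("facilitated person", "facilitator"):
    for blinded in (False, True):
        print(f"{author:>18} | facilitator {'blinded' if blinded else 'unblinded'}: "
              f"{accuracy(author, blinded):.2f}")
# Expected pattern: ~1.00 in three of the four cells, and ~0.20 (chance) only
# when the facilitator both controls the typing and is blinded to the picture.
```

The real tests of the 1990s were run with people, not simulations; the point of the sketch is only to show why the pattern those tests produced (correct labels for what the facilitator saw, chance-level performance otherwise) is exactly what facilitator control predicts.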

Some arguments against authorship tests invoke the Observer Effect: the disturbance of what’s being observed by the act of observation. For example, according to Sheehan and Matuozzi’s pro-FC paper (Sheehan & Matuozzi, 1996), Nobel Prize-winning physicist Arthur Schawlow, himself the father of an FC user,

described certain experimental efforts to investigate facilitated communication validity [i.e., message-passing tests] as analogous to looking for a ping pong ball on the floor of a dark room by shuffling your feet around. If you touch it even slightly it is not there anymore.

It’s not clear how message-passing tests could cause the equivalent of a dislocation of a ping pong ball, but this objection would apparently rule out such tests as hopelessly unreliable.

Some arguments against authorship tests focus on psychological effects that purportedly invalidate their results. Rigorous testing, allegedly, is inherently hostile; facilitated individuals, allegedly, sense the researchers’ skepticism about their linguistic capabilities. Their performance is further undermined, allegedly, by “stereotype threat”: negative stereotypes that can undermine test performance in marginalized groups, in this case negative stereotypes about the capabilities of minimal and non-speakers with autism. All this conspires, allegedly, to create memory retrieval difficulties so prohibitive that the facilitated person is unable to come up with the words for the pictures they’re shown during testing—even words as simple as “hat” and “flower.”

One problem with these arguments—besides the lack of evidence for them—is that they don’t explain how it is that, as in some of the 1990s message-passing tests, the facilitated person is able to type the words that label what their facilitator saw. Why would a facilitated person, allegedly underperforming due to a hostile environment, be able to label these words, but not the words that label what they saw and their facilitator didn’t see?

I’m aware of only two proponents of FC/RPM/S2C who attempt to address this question: Rosemary Crossley, credited with introducing FC to the English-speaking world in the late 1970s, and Cathie Davies, a once-frequent commenter on this blog whose comments here ceased three years ago, right around the time that Crossley passed away. In those comments (here, here, and here), Davies acknowledges that in the rigorous message-passing tests of previous decades, facilitated individuals often typed what the facilitator saw, not what they saw and their facilitator didn’t.

The results of message passing studies should, by now, be very predictable. There is little point in replicating such studies, as I am not aware that the results are widely contested.

But for her, the question is “how are those results to be interpreted?” Critics, she claims, have produced only one alternative interpretation to facilitator control:  “the straw man – ESP.” Davies, if she’s still around, is apparently not an enthusiast of the Telepathy Tapes.

For Davies, neither facilitator control nor telepathy explains why the facilitated person types “flower” when only the facilitator saw the picture of the flower. Furthermore, like Westcombe, she faults the method (asking the autistic person to label an external stimulus like a picture) rather than the claim it’s investigating (the validity of FC). For Davies, the method is at fault for being unreliable, and the researchers are at fault for drawing faulty conclusions.

Your preferred experimental design may be adequate to demonstrate that many FC users do not pass messages under controlled conditions. However, such studies have no power to explain why this may be so. We do not currently know enough about the relevant population to design experiments able to distinguish between competing explanatory hypotheses.

Thus, she concludes, all conclusions based on this method (prompting the autistic person to label a picture that the facilitator didn’t see) are “speculative.”

One reason they’re speculative, Davies says, is:

[T]he principle of the underdetermination of theory by evidence (Quine, 1951). Experimental data is typically consistent with a broad array of competing theoretical explanations.

But how is data generated by tests that prompt the autistic person to label a picture that the facilitator didn’t see ambiguous in any significant way? The ambiguity, Davies claims, comes from the fact that the tests are “closed-item” tests—that is, they solicit a particular word or phrase rather than an open-ended response. This, she claims, is not representative of the type of communication that FC is all about—namely, open-ended communication:

[S]ubjects’ performance in message passing or responding to closed questions says little about their capacity for self-motivated communication on topics important to themselves: the type of communication most commonly reported in non-experimental studies and, arguably, the type of communication most valuable to the communicators.

She adds:

[T]he task demands for self-motivated communication are different from those for message passing under controlled conditions, particularly in relation to processing exteroceptive (“outside-in”) sensory information [presumably by this Davies means tasks like labeling a picture].

For this reason, Davies claims, closed-item tests are actually less rigorous than tests that elicit self-motivated communication.

She adds that closed-item testing is more susceptible to facilitator influence than open-ended communication:

Research has demonstrated that facilitator influence is more likely in the context of closed questions with simple answers (Wegner, Fuller, & Sparrow, 2003).

The facilitated individuals in Wegner et al.’s studies were verbally fluent adults (at least inasmuch as they were Harvard undergraduates); this limits the applicability of Wegner et al.’s findings to minimally and non-speaking FCed autistics. Nevertheless, Davies claims that these findings further undermine the assumption “that tests involving closed questions with simple answers are the most rigorous tests.”

But the more open-ended the question, the harder it is to blind the facilitator to its answer, and the harder it is to verify that the answer produced by the facilitated person is the correct answer. Thus, closed questions are necessary conditions for rigorous tests.

As for Davies’ claim that “responding to closed questions says little about [the] capacity for self-motivated communication,” the linguistic skills involved in picture-labeling (basic expressive vocabulary skills) are a prerequisite for open-ended communication. If a person isn’t able to produce the word “flower” when asked to label a picture of a flower, how are they able to produce more sophisticated words and string them together into the sentences that regularly appear as FCed output? And how is the cognitive process of producing the word “flower” in response to a question about a picture more challenging than producing the word “flower” in an open-ended, “self-motivated” description of a walk you took through the park? Davies might claim that flowers and walks in the park might not qualify as “topics important to themselves.” Message-passing tests, however, can be, and have been, adjusted to include objects related to such topics, and the results remain the same.

But Davies also claims that in “closed-item testing” the facilitated person may be less sure of him or herself and so may seek cues from the facilitator:

[W]hen unsure what is required of them or anxious about presenting a wrong answer - FC users may actively and intentionally seek cues from their facilitators.

She adds:

It would, however, be difficult to characterise this behaviour as “control” by facilitators!

For Davies, in other words, if a facilitated person types “flower” when only the facilitator saw a picture of a flower, this may be evidence, not of control by the facilitator, but of “cue seeking” by the facilitated person.

As an example of cue seeking, Davies cites a subject with “complex communication needs” in a study by Kezuka (1997):

“It appeared that J had learned to scan the keys with her pointing finger until slight changes in force from the assistant signalled her to drop her finger to the letter below” (p. 576).

What Kezuka means by “learned” here is unclear: some learning is non-conscious and includes conditioned responses to facilitator cues, which, in turn, are the basis for facilitator control. But arguably, even conscious learning about facilitator cues, and adherence to those cues, count as facilitator control. If a teacher shushes a student, and the student, as a conscious response to being shushed, stops talking, the teacher is arguably the one in control. There is, in other words, little reason to believe that the scenario presented by Kezuka doesn’t involve facilitator control.

Besides claiming that they “cannot distinguish between ‘control’ by facilitators or ‘cue seeking’ by communicators” and that they do not measure the “self-motivated communication skills” that are “representative of FC practice,” Davies cites one more problem with message-passing tests: their consistently negative results:

Regardless of the reason, testing FC users’ ability to communicate using a task that is so clearly problematic for many must be questioned.

In other words, what most people would consider to be evidence of facilitator control—the consistently negative results of message-passing tests—Davies considers, instead, to be evidence against the validity of the tests.

This line of reasoning, of course, would call into question the validity of any method that consistently returns negative results—for example, a method that consistently returns negative results for claims that the earth is flat.

Davies adds a quote from Shadish et al (2002):

[I]f a study produces negative results, it is often the case that program developers and other advocates then bring up methodological and substantive contingencies that might have changed the result. For instance, they might contend that a different outcome measure or population would have led to a different conclusion. Subsequent studies then probe these alternatives and, if they again prove negative, lead to yet another round of probes of whatever new explanatory possibilities have emerged.

Shadish et al, as quoted by Davies, allows that eventually there may be a consensus that the negative results are real, but “that this process is as much or more social than logical.” In other words, non-objective and non-rigorous.

Returning to the question of authorship, Davies states: “I would like to say ‘get a new test.’”  And what would be a better test than asking the facilitated person to label a picture that the facilitator didn’t see?

Instead of concentrating on the comparatively unremarkable cue-seeking behaviour, surely it would be better to engage with the population in coproduced, participatory research to explore what else may be going on?

Elaborating, Davies claims that:

[M]odels of evidence-based practice...demand consideration, not only of academic research, but also of stakeholder perspectives and clinical experience. The weighting given to evidence from these sources is decided through open and transparent deliberation. No one of these three types of evidence is automatically given precedence over the others.

In other words, non-objective and non-rigorous. Any research on FC that includes “evidence” from stakeholder perspectives and clinical experience will naturally include facilitators (whose vested interests and resultant biases are arguably stronger than anyone else’s) and FCed messages attributed to FCed individuals (where any a priori assumptions about authenticity will lead to circular reasoning and warped outcomes).

In the end, though, Davies seems to rule out any kind of authorship testing—no matter how rigorous or “rigorous.” She claims not only that “no valid outcome measure [i.e., authorship test] is currently available,” but that, even if there were a “valid” authorship test, deploying it may no longer be feasible. That’s because:

[A] group of individuals as hostile and entrenched in their dogma as critics of FC have proven to be... people such as yourself and your colleagues... may have effectively poisoned the waters so that no such research can be conducted.

In short, Davies tries to argue

  • that the most rigorous tests are the least rigorous

  • that the consistently negative results they’ve produced show that there must be something wrong with the tests, not with what they’re testing, and

  • that the skepticism that people such as myself and my colleagues have acquired as a result of those consistently negative results has made any future authorship testing of any sort impossible.

In the end, this shifting of blame from the problematic claims that experimental methods have consistently invalidated over to the experimental methods that have invalidated the problematic claims recalls Westcombe’s upside-down reasoning about pseudoscience.

Cathie Davies, assuming she’s still alive and that this is her actual name, appears, like Westcombe, to be unaffiliated. But some like-minded individuals are considerably more powerful—particularly if they happen to be editors at major journals, or people assigned by those editors to review papers. A paper I contributed to, which included a description of various ways to conduct message-passing tests, was assigned a reviewer who claimed that the message-passing tests “have not undergone any empirical scrutiny” and that we should “state more clearly that it is currently unknown how often these authorship check strategies will conclude that messages are not genuine when in fact they are.”

This reviewer, as we wrote to the journal editor, seemed to want us to expand our paper to include an empirical defense of basic principles of experimental design. This left us wondering what it would be like if such a demand were made of studies in general; not just studies of FC. Studies examining the validity of telepathy might use designs similar to those we describe for FC: blinding the targeted recipient of the telepathic message to the message contents and to any possible cueing from the “sender”. Are the results of such studies unreliable until their empirical designs have somehow been empirically validated and their probabilities of drawing erroneous conclusions somehow calculated? What would such a meta-validation even look like, and how would we avoid infinite regress?
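As it happens, the specific worry the reviewer raised (how often an authorship check would conclude that messages are not genuine when in fact they are) can be bounded with nothing more exotic than a binomial calculation. The sketch below is mine, not the paper’s, and its numbers (per-item accuracy, item count, pass threshold) are invented purely for illustration.

```python
from math import comb

# Illustrative only: a simple binomial model of a blinded message-passing test.
# The accuracy, item count, and pass threshold are made-up numbers, not values
# taken from any study or from the paper discussed above.

def false_negative_rate(p, n_items, pass_threshold):
    """P(fewer than pass_threshold correct) for a genuine author with per-item accuracy p."""
    return sum(comb(n_items, k) * p**k * (1 - p)**(n_items - k)
               for k in range(pass_threshold))

# A genuine author who labels simple pictures correctly 80% of the time fails
# a "get at least 5 of 10 right" criterion less than 1% of the time.
print(f"{false_negative_rate(p=0.8, n_items=10, pass_threshold=5):.4f}")  # ~0.0064
```

Whatever numbers one prefers, the basic point stands: a design like this gives a genuine author many independent chances to succeed, so consistent failure across items and studies is hard to chalk up to bad luck.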

But of course, all this is beside the point. The point, at least for some people, is not to explore claims and test outcomes, but to erect fortifications—especially, apparently, when they pertain to FC.


REFERENCES

Kezuka, E. (1997). The role of touch in facilitated communication. Journal of Autism and Developmental Disorders, 27(5), 571–593. https://doi.org/10.1023/a:1025882127478

Popper, K. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge (2002 ed.). London: Routledge. ISBN 978-0-415-28594-0. Excerpt: Science as Falsification.

Quine, W. V. O. (1951). Two dogmas of empiricism. The Philosophical Review, 60(1), 20–43.

Sheehan, C. M., & Matuozzi, R. T. (1996). Investigation of the validity of facilitated communication through disclosure of unknown information. Mental Retardation, 34, 94-107.

Shadish, W. R., Cook, T. D., & Campbell, D. T. (2002). Experimental and Quasi-Experimental Designs for Generalized Causal Inference. (2 ed.) Cengage Learning.

Wegner, D. M., Fuller, V. A., & Sparrow, B. (2003). Clever hands: Uncontrolled intelligence in facilitated communication. Journal of Personality and Social Psychology, 85(1), 5–19. https://doi.org/10.1037/0022-3514.85.1.5

Westcombe, A. (2019). I do not think that word means what you think it means: A response to Reber and Alcock’s “Searching for the Impossible: Parapsychology’s Elusive Quest.” Journal of Scientific Exploration, 33(4), 617–622.

Wednesday, March 26, 2025

How FC myths coincide with edu-myths—and why even those who don’t believe in telepathy are primed to believe in FC

In my last post, I proposed one reason for the popularity of the Telepathy Tapes: how predisposed people are to believe in paranormal phenomena. Here I examine another reason: how predisposed people are to believe that autistic non-speakers can be unlocked via held-up letterboards—that is, via variants of facilitated communication known alternatively as Rapid Prompting Method (RPM) and Spelling to Communicate (S2C).

First, the Telepathy Folks

As far as Telepathy Tapes listeners go, part of this inclination comes from the fact that the podcast provides no explicit indications that the individuals on the Telepathy Tapes are being cued by their facilitators. The podcast is audio only, and so in scenes where nonspeaking autistic individuals type out messages on letterboards, all we have are the verbal descriptions provided by Ky Dickens, who is not only the show’s host, but a fervent believer in FC. And Dickens’ verbal descriptions omit the fact that the letterboards are held up and inevitably shift around while the autistic person’s extended index finger roams around in front of the letter arrays. The show does provide a few videos behind a paywall, but the facilitator cueing in these, as with many other videos of RPM and S2C, has proven too subtle for most naïve viewers.

But that doesn’t fully explain why so many people with no vested interest in FC are apparently ready to believe—judging, at least, from what we’ve heard from the many Telepathy Tapes enthusiasts. Presented with verbal descriptions of scenarios in which an autistic person points to a number that only the facilitator saw, or to a sequence of letters that labels a picture that only the facilitator saw, a surprisingly large number of Telepathy Tapes listeners have concluded that this is both:

A.      a reliable description of what happened, and

B.      evidence, not that the facilitator might be influencing the number/letter selection via normal, if subtle, physical mechanisms, but that the facilitator is instead sending a telepathic message that is picked up and acted upon by the autistic non-speaker.

Beyond telepathy believers

As we’ve discussed elsewhere on this blog, you don’t have to believe in telepathy to ignore or dismiss facilitator cueing. But dismissing facilitator cueing entails at least one extraordinary belief: namely, that non-speaking autistic individuals, who typically show few signs of attending to other people, or of comprehending more than a few basic words and phrases, and who typically aren’t included in general education classrooms, have somehow acquired sophisticated vocabularies and literacy skills, worldly knowledge, and academic skills across the entire K12 curriculum. For telepathy believers, the explanation is straightforward: this acquisition happens through telepathy. For everyone else, there are instead a host of FC-friendly education myths that have long dominated the world of K12 education and, in turn, through the salience of K12 education in many people’s lives, also dominate our popular beliefs.

Myth #1: Kids can learn academic skills by osmosis.

Within and beyond the education world, there’s a widespread belief that, just as many non-academic skills can be learned through immersion and incidental learning in the natural environment, the same holds for academic skills. That is, just as typically developing children learn to walk, talk, and build block towers without any explicit instruction, the same, purportedly, goes for reading, writing, and arithmetic. Indeed, there’s an entire pedagogy based on this notion: “child-centered discovery learning.”

For reading, this means immersing children in a “print-rich environment.” For math and science, it means manipulatives (blocks, rods, chips) and child-centered exploration. Teachers are “guides on the side” rather than “sages on the stage,” providing minimal instruction or error correction. (See for example Hirsch, 1996; Ravitch, 2000). While few schools take such notions to extremes, and while learning to read through osmosis has been widely discredited, the general notion that discovery learning is more effective than direct instruction continues to resonate broadly and deeply throughout K12 education and on into the general public.

And it extends, naturally, to the world of FC. Indeed, Douglas Biklen, the person credited with bringing FC to the U.S. from Australia, more or less echoed the proponents of literacy through print-rich environments when he said, by way of explanation for the literacy skills in FC, that:

I think it's rather obvious that the way in which these children learned to read was the way that most of us learned to read-- that is, by being immersed in a language-rich environment. You go into good pre-school classrooms and you'll see words everywhere, labeling objects, labeling pictures. You look at Sesame Street. We're introducing words. We're giving people whole words. We're also introducing them to the alphabet. (Palfreman, 1993).

As in K12 education, this line of thinking extends beyond literacy to other skills and knowledge. FC proponents have claimed that FCed children have learned about current events by listening to NPR (Iversen, 2006); Spanish by listening to their siblings practice at home (Handley, 2021); and physics by overhearing a physics class through a cafeteria wall (personal communication).

In K12, the illusion that students can master material without explicit instruction is sustained by powerful prompts and cues from teachers, often in the form of leading questions. I discussed this phenomenon in an earlier post; we can see it play out in detail, for example, in Emily Hanford’s Sold A Story. This podcast is an exposé of an approach to reading known as “Balanced Literacy” and/or “Three Cueing” that eschews phonics instruction and encourages kids to guess words from context. In Episode 1, a teacher reads a story about two children, Zelda and Ivy, who have run away from home because they didn’t want to eat the cucumber sandwiches their father had made for them. The teacher turns to a page where a word is covered up by a sticky note and prompts the students to use context to guess what it is. The word occurs at a point where Zelda and Ivy are wondering how their parents will react when they realize they’re gone. Here is the excerpt:

Teacher: Do you think that covered word could be the word “miss”?...

Teacher: Could it be the word miss? Because now that they’re gone maybe their parents will miss them?

The teacher asks the kids to think about whether “miss” could be the word using the strategies they’ve been taught.

Teacher: Let’s do our triple check and see. Does it make sense? Does it sound right? How about the last part of our triple check? Does it look right? Let’s uncover the word and see if it looks right?

The teacher lifts up the sticky note and indeed, the word is “miss.”

Teacher: It looks right too. Good job. Very good job.

(Sold a Story, Episode 1 transcript).

The teacher doesn’t seem to recognize what a big clue this is—that is, how many other possibilities there might be: “find,” “scold,” “resent,” etc.—and therefore to what degree she’s essentially told the students the answer and, quite likely, overestimated their word-identification skills.

Precisely this sort of oral prompting pervades—and sustains the illusion of—the more recent variants of FC—Rapid Prompting Method (RPM), Spelling to Communicate (S2C), and Spellers Method—where facilitators frequently direct letter selection with phrases like “up-up-up,” “right next door,” and “get it!”

Myth #2: All students are equally capable: they just need the right environment for learning and the right outlet for demonstrating understanding.

The world of K12 education has become increasingly resistant to the reality that different children have different levels of academic readiness and academic achievement. Instead, large proportions of education professionals, along with large proportions of the general public, embrace pseudoscientific theories (“multiple intelligences”; “learning styles”) that recast differences in skills as differences in styles. These beliefs have continued to spread despite the growing evidence against them (Willingham et al., 2015; Newton & Salvi, 2020). Individuals once viewed as low achievers are now often labeled as “bodily-kinesthetic learners.” This type of learner, purportedly, doesn’t do well in traditional classrooms but will prove quite competent with instruction and activities that incorporate lots of movement (skits, dances, building things, marching around the classroom). Individuals who struggle to read or do math might also be labeled as “visual learners”—purportedly performing perfectly well so long as teachers replace letters and numbers with pictures.

Consistent with these assumptions, assessments are now less about testing specific skills and more about giving students multiple options for “demonstrating understanding.” Specific suggestions include allowing kids to make presentations, posters, or “concept maps” instead of pen-and-paper tests, and providing supports like text-to-speech and speech-to-text (see for example, here). A “visual” student might, for example, retell a story in pictures rather than in words.

The presumption that all students are equally capable, given the appropriate adjustments, echoes a mantra of FC proponents that dates back to Douglas Biklen: Always presume competence. But the similarities don’t end there. In FC, as in education, the apparent (but purportedly not actual) challenges of the population in question are explained by invoking the person’s body. The education world, regarding students who struggle in traditional classrooms, invokes a bodily-kinesthetic learning style; the FC world, regarding minimal speakers with autism, invokes a mind-body disconnect. Finally, just like it’s the teacher’s job to figure out the best way for individual students to demonstrate the understanding that they’re presumed to have somehow acquired, it’s the facilitator’s job, via the letterboard or keyboard, to figure out the motor or regulatory support needed for individual clients to demonstrate the literacy skills and academic knowledge that they, too, are presumed to have somehow acquired.

One final commonality here between the FC world and the edu-world is the notion that the hard work that people used to think was necessary—whether the direct, systematic instruction and “drill and kill” of traditional classrooms or the intensive, one-on-one “discrete trials” of ABA—can be bypassed by methods that simply (1) presume that children are capable of learning on their own and (2) provide appropriate supports and outlets for children to put that learning to use.

Myth #3: Traditional, controlled tests are unreliable and don’t measure what really matters.

In the education world, there has long been a resistance to high-stakes standardized tests that measure student achievement. Particularly vociferous are those most invested in the teaching business: teachers unions and education schools (Phelps, 2023). These individuals make a host of arguments against using such tests—arguments that resonate across the general public.

While standardized statewide tests are still routinely administered across the country, the interest groups most resistant to high-stakes testing have effectively eliminated the most informative of such tests: those tests that most fully, comprehensively, and objectively assess students’ skills across a variety of key academic sub-skills and provide the most information about which educational pedagogies are working and which students have been most ill-served by which pedagogies. The canonical example of such tests is the Iowa Test of Basic Skills (ITBS). The ITBS was once used by schools across the country (I took it multiple times as a kid in Illinois); it has become decreasingly popular since then and was recently replaced by new “Common Core-aligned” tests. Used until recently by many in the homeschooling community, the ITBS reported sub-scores in various aspects of reading and math and placed no ceiling on skills being measured, such that a 4th grader could score at a 6th-grade level in a particular reading sub-skill.

Most of the new Common Core-inspired state tests, in contrast, only report general scores for reading and math, not sub-scores. They also only measure students up to what the state considers to be grade-level standards: standards which many testing experts consider, for most grades, to be set too low. Also reducing the tests’ informational value is the fact that some of the math questions require literacy skills (explaining your answer) and that students can receive partial credit for incorrect answers for which they provided verbal explanations. (Both of these factors artificially lower the scores, relative to other students, of English learners and students with language delays—including students with autism). The tests are further compromised by low security: teachers rather than outside proctors administer the tests, and some large-scale cheating episodes have come to light (see Phelps, 2023).

Also decreasingly informative are the SATs, which many colleges have made optional, and which have been redesigned to measure fewer skills with less precision. Many of the math problems allow calculators, and few require complex algebraic operations. The analogies and vocabulary sections are gone, as are questions that ask students to synthesize long passages. Passages now consist of 1-2 short paragraphs, often accompanied by charts and graphs, followed by a single question that often is more about the chart than the paragraph(s). The passages (and graphics) are no longer drawn from the works of professional writers but instead are written by test-makers; as a result they’re often hard to make sense of, not because the writing is sophisticated, but because they’re poorly written (or designed). Test-takers no longer lose points for guessing, so guessing rates have gone up, adding even more noise to the signal.

Meanwhile, one of the most popular early reading assessments in use in K12 schools, the Fountas and Pinnell Benchmark Assessment, is so poor at distinguishing skilled from struggling readers as to be equivalent to a coin toss.

As for those who want to promote a particular pedagogical approach as “evidence-based,” in lieu of standardized testing that might indicate objective effects on learning outcomes, we have anecdotal reports from classrooms: subjective accounts of high levels of student and teacher engagement, interviews with teachers, annotations of student work, and/or researchers’ field notes. “Lived experience” substitutes for objective testing; anecdotes for evidence.

And if the education world needs one more reason to dismiss objective tests, Telepathy Tapes host Ky Dickens obliges. On her “resources” page she claims, falsely, that “nothing in education can truly be empirically validated because every student is inherently unique.”

Which takes us back to FC. FC proponents, just like their counterparts in the education world, have successfully suppressed informative testing. While the Don’t test mantra dates back to Douglas Biklen and the 1990s, there were, in that decade, a number of FC practitioners who nonetheless willingly participated in objective tests. But those tests consistently established that the facilitators were the ones controlling the FCed messages. What came next was a host of arguments against authorship tests that parallel the education world’s arguments against academic tests:

  • Test anxiety purportedly impedes the FCed person’s ability to type messages, particularly in the hostile environment that purportedly results from skeptical examiners.

  • Test performance is further undermined by stereotype threat: that is, by negative stereotypes about the abilities of minimally speaking individuals with autism (Jaswal et al., 2020). (Unlike in educational testing, there is no evidence that either anxiety or stereotype threat affects authorship testing.)

  • Authorship tests are insulting and violate the dictum to Always presume competence. (Apparently standardized education tests aren’t as insulting or unethical. Many FCed individuals take such tests—with the help of their facilitators).

  • There are alternative ways to assess authorship that are purportedly more reliable, like comparing the writing styles of the FCed individual and their facilitator(s), or looking at whether their pointing rhythms suggest an awareness of English spelling patterns, or mounting an eye-tracking computer on their heads and recording whether they look to letters before they point to them (Jaswal et al., 2020; Jaswal et al., 2024; Nicoli et al., 2023). (See here, here, and here for critiques).

  • Or better yet: there’s lived experience. FC-generated accounts attributed to FCed individuals recount their experiences with FC and explain how it’s really them producing the FCed messages. Videos or live observations of FCed individuals typing purportedly establish beyond a reasonable doubt that they aren’t being cued by the assistant who is always within auditory or visual cueing range.

  • As for other types of standardized tests—cognitive tests, academic tests— none of these should be conducted on any minimally speaking autistic individual except through FC. That’s because all such tests require some sort of physical response (pointing to pictures; arranging shapes; filling in bubbles), and so the purported mind-body disconnect makes these tests hopelessly unreliable.

The hostility in both the worlds of FC and the worlds of education towards objective, well-controlled, informative testing underscores what’s so powerful about such tests: they are the brass tacks that everything comes down to. They are, in all areas of life, what separates the science from the pseudoscience and exposes the clinical quacks and methodological cracks for who and what they are—whether in K12 education, in minimally-speaking autism, or on podcasts about telepathy.


REFERENCES

Handley, J. B., & Handley, J. (2021). Underestimated: An autism miracle. Skyhorse.

Hanford, E. (2022). Sold a Story: How Teaching Kids to Read Went so Wrong. [Podcast]. APM reports.

Hirsch, E.D. (1996). The Schools We Need and Why We Don't Have Them. New York: Doubleday.

Iversen, P. (2006). Strange son. Riverhead.

Jaswal, V. K., Wayne, A., & Golino, H. (2020). Eye-tracking reveals agency in assisted autistic communication. Scientific Reports, 10(1), 7882. https://doi.org/10.1038/s41598-020-64553-9 PMID: 32398782

Jaswal, V. K., Lampi, A. J., & Stockwell, K. M. (2024). Literacy in nonspeaking autistic people. Autism, 0(0). https://doi.org/10.1177/13623613241230709

Newton, P., & Salvi, A. (2020). How Common Is Belief in the Learning Styles Neuromyth, and Does It Matter? A Pragmatic Systematic Review, Frontiers in Education. https://doi.org/10.3389/feduc.2020.602451

Palfreman, J. (Director). (1993). Prisoners of Silence [Documentary]. PBS.

Phelps, R. (2023). The Malfunction of US Education Policy. Lanham, MD: Rowman & Littlefield.

Ravitch, D. (2000). Left Back: A Century of Failed School Reforms. New York: Simon and Schuster.

Willingham, D. T., Hughes, E. M., & Dobolyi, D. G. (2015). The Scientific Status of Learning Styles Theories. Teaching of Psychology, 42(3), 266-271. https://doi.org/10.1177/0098628315589505


https://www.nea.org/nea-today/all-news-articles/racist-beginnings-standardized-testing

Wednesday, March 12, 2025

Fighting FC pseudoscience requires a broader critique of paranormal beliefs—here’s mine

Ever since last September when it first came out, the Telepathy Tapes Podcast has been recasting as a paranormal phenomenon something that, in a normal world, would invalidate facilitated communication (FC) and help liberate non-speaking autistics from it. That something is evidence of facilitator control over messages that are generated primarily (in most of the instances discussed on the Telepathy Tapes) by Rapid Prompting Method (RPM) and Spelling to Communicate (S2C). In the world of the Telepathy Tapes and its many enthusiasts, the paranormal phenomenon in question is telepathy. That is, the non-speaking autistics who point to letters on letterboards while their facilitators are within auditory, visual, or tactile cueing range (typically holding up the letterboards and moving them in the air) purportedly aren’t being controlled by their facilitators, but instead are reading their facilitators’ minds.

The Telepathy Tapes Podcast has been highly influential beyond the world of non-speaking autism. Not only has it ranked in recent months as one of the top podcasts; it has also inspired other podcasts to devote episodes to it, including podcasts in which people with no particular stake in either FC/RPM/S2C or in autistic telepathy seem ready to accept its conclusions—along with a host of other paranormal claims. (The Telepathy Tapes has also garnered a number of critical podcasts—see here.)

This means that a full defense of the invalidation of FC by scientific experiments must now include a defense of science itself. And this, in turn, means (1) examining why some people go for paranormal explanations over scientific ones and (2) offering reasons for people to be more skeptical about the former.

This post is an attempt to do both.

I draw my admittedly non-representative sample of paranormal-belief-prone individuals from two recent telepathy-friendly episodes of popular podcasts: Joe Rogan’s Joe Rogan Experience and Mayim Bialik’s Breakdown. Rogan is a top-ranked podcaster whose episodes draw hundreds of thousands to millions of views (his telepathy episode, Joe Rogan Experience #2279, currently has over 800K views). Bialik has a PhD in Neuroscience from UCLA and is famous for her role on The Big Bang Theory (which featured a main character with apparent Asperger’s Syndrome and savant skills, but no telepathic abilities—at least not in any of the episodes I watched). Bialik’s telepathy episode, which is called “Real or A Hoax? The Secrets Behind the Telepathy Tapes” but which doesn’t actually entertain the hoax possibility, currently has almost 200K views.

On both episodes, hosts and guests alike appear sympathetic, not just to telepathy, but to paranormal phenomena in general. Besides Rogan and Bialik, there’s Telepathy Tapes creator Ky Dickens, religious studies scholar Jeffrey Kripal, and Bialik’s co-host Jonathan Cohen.

While Kripal has long dabbled in the paranormal and while Dickens’ paranormal sensitivities are evident from the very beginning of the Telepathy Tapes, Rogan, Bialik, and Cohen all display at least a partial preference for paranormal explanations over scientific ones.

Rogan, for example, isn’t smitten with skepticism, but wowed (as in literally saying “Wow”) when Dickens tells him about The Hill, a mystical realm where non-speaking autistic individuals purportedly communicate telepathically with one another while sleeping. He asks Dickens whether this has been validated, and when she assures him that it has, he doesn’t probe further.

Bialik, for her part, is wowed by the Telepathy Tapes’ account of parents around the country reporting that their non-speaking autistic children simultaneously fell asleep in the middle of the day and learned, via telepathy on The Hill, that one of their non-speaking peers had just died. The only other explanation for this, Bialik states, is that “all the parents were lying.” Bialik also accepts a mother’s account of how her non-speaking son would telepathically communicate to her his musical compositions while she slept, which she would transcribe upon awakening. For Bialik, what counts as proof here is the mother’s report about taking her son to a music studio. The son, Bialik tells us, didn’t ask “What the f*** am I doing here?”, but instead communicated (it’s unclear how, since he’s nonspeaking and apparently doesn’t even point to letters) that he wanted to tweak this and that part of his mother’s transcription, and indicated in various (unspecified) ways that he was “the agent of the music and the lyrics.”

Of course, the elephant in the room for all of this—the non-paranormal explanation—is a combination of (1) facilitator control (the accounts of the happenings on the Hill are generated through Spelling to Communicate or S2C), (2) what these parents would like to believe about their non-speaking children (where it’s more palatable to view facilitator control as telepathic powers—see discussion here), (3) parent-to-parent(s) communication and contagious memes within the community of parents who believe their non-speaking kids are telepathic, and (4) a mother’s highly motivated interpretation of her son’s non-verbal behavior in a sound studio. The delusions are many and (where the parents are concerned) quite understandable; no one has to actually be lying about any of this.

Elephant in the room (illustration: ChatGPT-4 / DALL-E 3).

So what I want to examine here is what other paranormal beliefs are held by Rogan, Bialik, et al., why they might seem compelling, and whether they actually outcompete alternative explanations that are more reasonable and scientific. If we can nudge these sorts of people back toward science, the damning evidence against FC that’s being recast as telepathic powers might stand a better chance of liberating FC/RPM/S2C’s victims.

Assuming that Rogan, Bialik et al. are representative of the general paranormal-prone public, we see several reasons—espoused by all of these people—for paranormal beliefs. They are:

  • Seemingly amazing coincidences

  • Seemingly amazingly accurate predictions (“precognition”)

  • Mysteries surrounding human consciousness

  • Mysteries surrounding savant skills

  • Apparent telepathy in animals

  • Converging beliefs across world history and cultures—with the exception, of course, of the “western” scientific community.

Let’s start with coincidences. One of the first things discussed on the Rogan episode is phone telepathy: when you think of someone and immediately they text you. We’ve all had that happen, but somehow many of us recognize it for what it is—a coincidence in the context of an event-filled world full of people who are primed to notice coincidences more than non-coincidences. We recognize this even if we don’t fully grasp just how many non-coincidences occur and how much we filter them out in favor of the coincidences. It’s easy not to notice how much wandering our minds do in the course of the day, with fleeting and often forgettable thoughts about one friend or another. It’s easy to forget how often the person doesn’t text when we think of them and does text when we don’t. It’s easy to underestimate how often our minds forget the usual stuff, retain the more remarkable stuff, and misremember so much of the more remarkable stuff as having been more remarkable than it really was.
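
To put a rough number on that filtering, here is a minimal back-of-the-envelope simulation. Every parameter in it (number of friends, thoughts per day, texting rates, what counts as an “uncanny” window) is an invented illustration, not an empirical estimate; the only point is that, even under modest assumptions, think-of-them-and-they-text moments accumulate by chance alone.

```python
import random

random.seed(0)

# All parameters below are invented for illustration, not measured values.
FRIENDS = 30              # people you might idly think about
THOUGHTS_PER_DAY = 40     # fleeting daily thoughts, each about a random friend
TEXT_PROB = 0.5           # chance a given friend texts you on a given day
WINDOW = 10               # minutes; a text this soon after a thought feels uncanny
WAKING_MINUTES = 16 * 60
DAYS = 365

uncanny_days = 0
for _ in range(DAYS):
    # one random text time per friend who happens to text today
    texts = {f: random.uniform(0, WAKING_MINUTES)
             for f in range(FRIENDS) if random.random() < TEXT_PROB}
    # fleeting thoughts, each about a random friend at a random time
    thoughts = [(random.randrange(FRIENDS), random.uniform(0, WAKING_MINUTES))
                for _ in range(THOUGHTS_PER_DAY)]
    if any(f in texts and 0 <= texts[f] - t <= WINDOW for f, t in thoughts):
        uncanny_days += 1

print(f"Days with at least one 'phone telepathy' moment: {uncanny_days}/{DAYS}")
```

With these made-up numbers, the simulated year produces a chance “uncanny” hit on the order of once a week. Tweak the assumptions and the rate shifts, but it never drops to the near-zero that our selective memories imply.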

As with coincidences, so, too with precognition—instances of amazingly accurate predictions. It’s easy to forget how often our minds make predictions, how often our—and other people’s—predictions don’t pan out; how selectively we remember the more successful predictions and forget the failures, and how much we misremember our—and other people’s—successful predictions as more accurate than they really were. In addition, some predictions and coincidences may be influenced by subtle cues, priming, or subconscious learning of which we aren’t consciously aware: a news headline causes two friends to simultaneously free-associate to thoughts of each other; a subconsciously learned “hunch” about someone may cause us to predict, correctly, which email messages they will respond to.

Far more mysterious than these phenomena is consciousness. No one—scientists included—has anywhere near an adequate account of how conscious experience arises from physical brains. Nor is it clear that anyone will—and this includes paranormal belief-holders. For Ky Dickens and Jeffrey Kripal, conscious experience provides evidence of a non-physical world. Both cite a former NASA scientist by the name of Thomas Campbell, who believes that consciousness is what creates the universe—or, put another way, that “our reality is virtual and that consciousness is the computer.” According to Campbell’s author page on Amazon:

a set of quantum physics experiments designed to provide evidence for or against the hypothesis... are being performed at the California Polytechnical university. First Results are expected in late 2023 or early 2024.

So far, no news. And as far as I can make out, his claim is essentially unfalsifiable. If we’re living in any kind of computer simulation, any experimental results we think we’ve produced would be part of the simulation. So what could possibly distinguish simulation-based results from non-simulation-based results?

The question of life as a simulation, like the mystery of consciousness, strikes me as unanswerable by anyone—scientists, normal people, and paranormal people alike. But that doesn’t stop Rogan, Dickens & Co from assuming that Campbell’s hypothesis is true. Kripal argues that “physical reality deeper down is more like a surrealist painting, more produced by the imagination.” Dickens argues for a paradigm shift that places human consciousness, not physics, at the foundation of the universe.

What’s sort of compelling about all this is that there’s a grain of truthiness here. Each person’s reality and identity is based on their conscious experiences. Our conscious experiences derive from the specific circumstances of our lives, which derive from the worlds we inhabit, which are, in turn, a function of the universe we happen to live in. And that universe is, in a sense, a function of us, and therefore of our consciousness. That’s because the universe we inhabit—along with the planet we happen to live on, down to the type of star it orbits, its position in the solar system, its orbital path and speed, its tilt, its spin, the way it wobbles on its axis, its moon, its size, its chemical composition, its geological history, and its current geological age—is the only kind of universe (solar system, planetary environment, etc.), of all the millions (billions? trillions?) of possibilities, in which we, as humans, could possibly have evolved and could currently exist. So, if we limit ourselves to an extremely egocentric, human-centric perspective, human consciousness is essentially connected to our particular universe as we know it to be.

But Kripal and Dickens are positing something more literal than this: namely, that human consciousness is literally the foundation of The Universe. Furthermore, they propose, this solves the riddle of consciousness. But they don’t explain how, and if they tried to, they’d fail. That’s because, even with all this paradigm shifting and literal embedding of consciousness within the broader universe, we’re still left with the mind-body problem. That is, we’re still left with the nagging question of how conscious experiences connect up with physical brains: how, in particular, conscious experiences connect up, as we know they do, with the brain’s very physical electromagnetic activities and synaptic connections.

But while placing consciousness at the foundation of the universe doesn’t solve the mind-body problem, it does help Bialik et al. “solve” telepathy, precognition, and near-death experiences—assuming, of course, these phenomena actually exist. Telepathic thoughts, presumably, propagate through the fabric of the universe from one mind to another. Since that fabric, per Dickens and Kripal, is timeless—including all past, present, and future conscious phenomena—precognition presumably happens whenever phenomena from the future reach those minds that are open to them. Near-death experiences—and other altered states, including dreaming, psychedelic states, and what Kripal calls “trauma”—are simply the results of certain minds, or states of mind, that have openings into which some of the cosmic consciousness’s more magical elements can creep. Kripal cites Kevin Cann, a verbally fluent autistic man who says that all autistic people have mystical experiences—which would presumably include Sheldon Cooper from Big Bang Theory, were he a real person, and my moderately autistic son, who is. Kripal also cites Elizabeth Krohn’s first-hand accounts of her near-death experience, which, according to a previous Bialik Breakdown episode featuring Krohn, involved two weeks in heaven and newfound knowledge and abilities. Surely Krohn isn’t fabricating anything, nor do Kripal or Bialik entertain the possibility of hallucinations, false memories, or some combination thereof.

Instead of discussing the unreliabilities of human minds/brains, Rogan, Bialik et al. discuss the supposed reliabilities of human culture. Outside of what they call “western” science, they point out, most cultures throughout history agree that the mind is more than the physical brain, that the soul survives death, and that dreams contain prophecies (Bialik cites the Bible). Most cultures also once believed that the earth is flat, that the sun literally rises and sets, and that the universe and all creatures great and small arose through forces other than the Big Bang and evolution—three mostly outdated beliefs that Dickens and Kripal, elsewhere in their interviews, disparage as ridiculous.

Besides believing in the Big Bang, evolution, and a round earth that spins with respect to the sun, Bialik et al. are into quantum physics. Kripal, for example, implicitly alludes to “quantum entanglement” when he gives his take on autistic telepathy. In what is one of the most dismissive statements about autistic people I’ve ever encountered, he states:

These individual autistic children are simply nodes. You know they’re picking up the signal from this broader consciousness or mind, and when one dies, they know right away that-that, and because they’re emotionally entangled with that node.

This makes the autistic community sound like the undifferentiated Borg from Star Trek. (Rivaling Kripal in his dismissiveness, Dickens tells Rogan that the most loving thing you can say to a non-speaking autistic person regarding their outward behavior is “I know that’s not you.”)


Kripal goes on:

We all live in a Newtonian world. In other words, we think we’re material objects in some kind of neutral space. But our bodies are also quantum. We know that. That’s a fact. We know that matter is quantum deep down and it’s all connected and it’s all one and it does things that make absolutely no sense to this Newtonian up here frame of reference... And so we have physics to tell us that there are these two levels of reality, but we haven’t integrated that into our worldview.

One reason most of us haven’t integrated quantum mechanics into our worldview is that, for the most part, quantum phenomena have very little effect on the world as we experience it. A quick look at Wikipedia will reveal that the discernible effects of quantum mechanics on everyday human life are limited to a handful of modern technologies: e.g., quantum computing, LEDs, MRIs, and electron microscopy. There are, thus far, no scientific reports of discernible quantum effects on our minds or bodies, or of how quantum physics contributes to the connectedness or oneness of the universe. For the latter, someone would have to empirically validate a sort of Unified Field Theory, and so far, no one—no physicist, no normal person, and no paranormal person—has done that.

But Kripal, like other paranormal belief holders (Deepak Chopra comes to mind), has recast quantum physics as being about something far beyond subatomic particles, black-body radiation, and the Planck Constant: something downright mystical. How else to explain all that quantum entanglement, all that spooky action at a distance, all that Uncertainty, and all those mysterious effects of observers on measurements? “The early quantum physicists,” Kripal claims, “they all turned to one place for the best place to understand what quantum physics is about, and one place was mystical literature. They all turned there.” That claim is easily fact-checked, and it isn’t true. As Wikipedia’s Quantum Mysticism article tells us, just two physicists, Heisenberg and Schrödinger, were interested in Eastern mysticism “but are not known to have directly associated one with the other” (quantum physics with Eastern mysticism). Only a few (lesser-known) physicists thought that the consciousness of the observer played a causal role in measurement outcomes. As Einstein said about quantum mysticism (or what Murray Gell-Mann called "quantum flapdoodle"): "No physicist believes that. Otherwise he wouldn't be a physicist."

Besides “quantum flapdoodle,” there’s flapdoodle about fields: “energy fields”; “mind fields.” The mostly invisible power of electromagnetic fields, and the fact that the brain, as an electromagnetic object, must generate such fields, comes up on the Telepathy Tapes, though not on Rogan’s and Bialik’s podcasts—at least not on the episodes I’m discussing here.

But let’s turn back to Dickens’ and Kripal’s claims about the supposed mysticism of autistic people and to yet another basis for paranormal beliefs—namely, savant skills. These aren’t exclusive to autism, but are more prevalent in autistic people; they include extraordinary skills in math, music, and language. This much everyone agrees on. As I discussed in my last post, however, Dickens, along with Diane Hennacy Powell, Dickens’ self-proclaimed expert on savant skills, goes further. They claim that savant skills are disproportionately present in non-speaking autistics and that they can only be explained by telepathy. The definition of savantism that Dickens gives Rogan is telling: “Being able to really excel at something that you haven’t been taught.” The actual definition of savantism is slightly different: excelling at something, not that you haven’t been taught, but that is inconsistent with your overall level of functioning.

Nor are those savant skills that have actually been empirically confirmed (as opposed to those heard about through anecdotal reports) so mysterious as to cry out for paranormal explanations. Scientists have modeled how some of them develop; in general, they appear to correlate, not with being non-speaking but with “Heightened sensory sensitivity, obsessional behaviours, technical/spatial abilities, and systemizing.”

The only specific instances of savant skills that Dickens cites on Rogan’s show—with more wows from Rogan—involve language skills. She cites an autistic boy who could speak Russian and Japanese at the age of 5 or 6, and a non-speaking autistic girl who purportedly typed out messages in Portuguese and Spanish and also turned out to understand hieroglyphs. Both reports are second- or third-hand, and all the words attributed to the girl were generated by S2C and so most likely not her own. As for the boy, as I wrote in my last post, videos show hyperlexia vis-a-vis Chinese characters and Japanese hiragana (a recognition of the symbols and an ability to read them out loud) and a precocious ability to recite specific sentences in Chinese and a couple of other languages. There is no need to invoke a telepathic connection to a collective consciousness at the base of the universe; a precocious obsession with languages and writing systems, an ear for languages, and some systematic self-teaching will do the trick.

As far as apparent savant skills in non-speaking autism go, both Dickens and Kripal appear to find telepathy (as in a telepathic connection to the collective consciousness at the base of the universe) more likely than facilitator cueing (as in the adult facilitators being the ones whose skills we’re actually witnessing). For Dickens and Kripal, the mind-body disconnect that purportedly defines autism (according to FC proponents but not according to autism specialists) makes autistic people more tuned in to the purported cosmic consciousness. In addition, while the regular use of language causes the telepathic skills that all of us were purportedly born with to wither away, non-speakers with autism, who communicate less readily, retain these skills. Telepathy, ultimately, appears to be the only real savant skill—or at least the ur-savant skill. All other savant skills, per Dickens and Kripal, derive from telepathy—at least in the case of non-speaking autistics who depend on facilitators to demonstrate those skills.

But what about apparent animal telepathy? Dickens and Rogan discuss a bunch of examples, none of which depend on facilitated communication. They are:

  • The elephants that supposedly traveled for miles to the home of “elephant whisperer” Laurence Anthony after he died and again, for years thereafter, on the precise date of his death—a claim that Snopes classifies as “undetermined.”

  • Synchronized movements and cooperation within groups of animals: fish swimming in schools, birds flying in formation, wolves collaborating on hunts. Scientists have explained all of this synchrony (and successfully simulated some of it) in terms of individual group members following simple rules of thumb, either innate or learned through basic associative mechanisms, based on the position and movement of their nearby group-mates (and, in the case of wolves, the position of their prey), as sensed through visual, tactile, auditory, or olfactory cues; a bare-bones sketch of such local rules appears after this list. For fish, see here; for birds, see here; for wolves, see here.

  • The mystery of how prey animals know when they’re being watched. Here, too, the mechanism is subtle cues, in many cases way too subtle for humans: e.g., predator-specific vibrations and changes in the concentration or age of particular scents. Animal precognition, namely of coming storms, goes unmentioned here, but this, too, can be explained by cues—e.g., by sensitivity to changes in atmospheric pressure.

  • Rupert Sheldrake’s research on dogs who know when their owners are coming home—because they anticipatorily go to the window. A quick look at Wikipedia shows that Sheldrake’s experimental design has been critiqued and that more plausible alternative explanations for his results have been offered.
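
For readers who want to see how far “simple local rules” can go, here is a bare-bones, boids-style sketch of the schooling case. The rule weights, neighbor radius, and agent count are arbitrary illustration values, not parameters from the studies linked above; the only point is that each simulated “fish” reacts solely to its nearby neighbors, with no leader and no shared mind, yet the group’s headings tend to line up.

```python
import math
import random

random.seed(1)

N, RADIUS, STEPS, MAX_SPEED = 50, 10.0, 200, 2.0
W_ALIGN, W_COHERE, W_SEPARATE = 0.05, 0.01, 0.1   # arbitrary rule weights

# Each agent: [x, y, vx, vy], scattered positions, random headings
agents = [[random.uniform(0, 50), random.uniform(0, 50),
           random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(N)]

def polarization(group):
    """1.0 = everyone heading the same way; near 0 = headings are scattered."""
    ux = uy = 0.0
    for _, _, vx, vy in group:
        speed = math.hypot(vx, vy) or 1e-9
        ux, uy = ux + vx / speed, uy + vy / speed
    return math.hypot(ux, uy) / len(group)

print(f"polarization before: {polarization(agents):.2f}")
for _ in range(STEPS):
    updated = []
    for i, (x, y, vx, vy) in enumerate(agents):
        nbrs = [a for j, a in enumerate(agents)
                if j != i and math.hypot(a[0] - x, a[1] - y) < RADIUS]
        if nbrs:
            avx = sum(a[2] for a in nbrs) / len(nbrs)   # neighbors' mean velocity
            avy = sum(a[3] for a in nbrs) / len(nbrs)
            cx = sum(a[0] for a in nbrs) / len(nbrs)    # neighbors' mean position
            cy = sum(a[1] for a in nbrs) / len(nbrs)
            sx = sum(x - a[0] for a in nbrs if math.hypot(a[0] - x, a[1] - y) < 2)
            sy = sum(y - a[1] for a in nbrs if math.hypot(a[0] - x, a[1] - y) < 2)
            # alignment + cohesion + separation, each a purely local rule
            vx += W_ALIGN * (avx - vx) + W_COHERE * (cx - x) + W_SEPARATE * sx
            vy += W_ALIGN * (avy - vy) + W_COHERE * (cy - y) + W_SEPARATE * sy
            speed = math.hypot(vx, vy)
            if speed > MAX_SPEED:                       # keep the update stable
                vx, vy = vx * MAX_SPEED / speed, vy * MAX_SPEED / speed
        updated.append([x + vx, y + vy, vx, vy])
    agents = updated
print(f"polarization after:  {polarization(agents):.2f}")
```

This is not the actual model behind any of the linked papers; it’s just the textbook alignment/cohesion/separation recipe, which belongs to the same general family of local-rule explanations that those studies formalize more carefully.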


Beyond all the apparent appeal of the paranormal as an explanation for seemingly amazing coincidences, seemingly amazingly accurate predictions, the mysteries surrounding human consciousness, the mysteries surrounding savant skills, and the apparent telepathy of animals, there are constant claims by paranormalists that they’re espousing the braver, more open minded, more interesting way of looking at the world. Kripal, for example, states that he finds things he can’t explain “delicious.” According to Rogan, Bialik, & Co., regular scientists, by contrast, are afraid of the paranormal (or “psiphobic”—my word; not theirs). They’re afraid of looking foolish, afraid of things they can’t explain, and afraid of people who experience things that don’t fit into their box. They’re also rigid and controlling. They need stability and limit themselves to what can be measured and observed, to what they can explain and control. They take everything else off the table. Except, presumably, for those mystical quantum physicists, they limit themselves to matter and linear time and are hopelessly stuck in a materialist paradigm. They dismiss the lived experience of other people. Like yesterday’s flat earthers, they’re holding back scientific progress. And destroying what’s beautiful to boot: by putting the flower under the microscope, they magnify the sun’s rays and burn it up.

One way in which these people might appear more open-minded than scientists is in seeming to embrace both science and skepticism in addition to the paranormal. But this embrace is quite limited. Kripal likes quantum physics, but not controlled experiments; Dickens doesn’t believe there’s scientific support for aliens, but she does believe in telepathy; Rogan interrupts the interview to google “Voynich manuscript debunked” and “HECS suit debunked,” but not to google “facilitated communication debunked” or “autism as mind-body disconnect debunked.” Rogan knows that there are quacks and charlatans out there, but thinks he can tell by gut instinct whether someone’s lying.

So who’s actually more open-minded? Scientists are. When people make claims that strike them as implausible, they ask for evidence—which might sound dismissive, but is actually more of a provisional openness. Scientists are inherently curious. They’re constantly looking for new things to observe, measure, and test. Observation, measurement, and controls, of course, are at the heart of empirical science—as everyone once learned in elementary school. Without these, there is no science, and anyone can claim anything they like: that the earth was created in seven days, that there are zebras at the North Pole, or that vaccines cause autism. Scientists may take all sorts of things off the scientific table—from mystical poetry, to unfalsifiable claims like we’re living in a simulation, to claims that have already been solidly rebutted (like those about facilitated communication and non-speaking autism)—while still enjoying poetry and speculations about simulations during their off-hours.

As for who’s fearful, one thing that all espousers of paranormal beliefs appear to be afraid of is well-controlled empirical testing—despite the existence of a half-million-dollar reward for anyone who can prove the existence of paranormal phenomena via such tests.
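
For concreteness, here is what the arithmetic of one such well-controlled test could look like. This is a hypothetical sketch with invented numbers, not a description of any actual protocol or of the prize’s rules: suppose a blinded, forced-choice design in which, on each trial, the participant (but not the facilitator) is shown one of four target pictures and asked to identify it. An exact binomial test then says how surprising any given score would be if the responses were pure guesses.

```python
from math import comb

def p_at_least(successes: int, trials: int, chance: float) -> float:
    """Exact one-sided p-value: probability of scoring `successes` or better
    by guessing alone, with per-trial success probability `chance`."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(successes, trials + 1))

# Hypothetical protocol: 20 trials, 4 choices per trial (chance = 0.25)
TRIALS, CHANCE = 20, 0.25
for correct in (5, 10, 15, 20):
    p = p_at_least(correct, TRIALS, CHANCE)
    print(f"{correct:2d}/{TRIALS} correct -> p = {p:.4g} under pure guessing")
```

The blinding is what separates the participant’s knowledge from the facilitator’s; the statistics are the easy part. A couple dozen trials and a few lines of arithmetic are enough to distinguish strong genuine performance from chance, which makes the reluctance to sit for such tests all the more telling.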

And as for destroying beauty and wonder by looking under the metaphorical microscope, the microscope in particular, and scientific findings in general, have revealed beautiful and wondrous phenomena: the microstructures of a flower petal, the ways that a few simple rules create phenomenal emergent properties like the elaborately coordinated movements of fish shoals, or the ways in which one person’s involuntary, nonconscious muscle movements can condition another person’s behavior in increasingly subtle and powerful ways over time.

So let’s conclude with a few things that those who lean towards the paranormal should open up their minds to:

  • How unreliable our first-hand experiences and memories are

  • How people can look like they’re telling the truth—because they believe what they’re saying—without actually telling the truth

  • How cues can simultaneously be too subtle for conscious minds to detect and yet powerful enough to influence animals into elaborate patterns of group behavior that aren’t deliberately planned out, or to influence human beings into the production of elaborately spelled messages that they may or may not understand

  • How the things that science can’t explain aren’t satisfactorily explained by paranormal phenomena

  • How paranormal phenomena don’t explain anything that science can’t explain better.

Addendum: A Telepathy Tapes Update

Through these podcasts and other developments, Ky Dickens is getting ever-growing attention for her various claims, and ever-growing funding for her upcoming projects, which, as we learn at the end of the Rogan interview, include:

  • releasing a documentary film based on the Telepathy Tapes within the year, with five non-speaking members on the production team

  • raising awareness of “spelling” (as in Spelling to Communicate) and getting it into all schools

  • opening up large centers that provide opportunities for “spelling” and for training more people to be “communication partners” so they can interact with and hire non-speakers.

Telepathy, Dickens acknowledges, is serving as a sort of “Trojan horse” to raise awareness and support for Spelling to Communicate.

But along with this dark revelation comes another one: The International Association for Spelling as Communication (I-ASC), Dickens complains, is sending letters telling people “Don’t talk about telepathy or you’ll have your accreditation taken away.”

The Telepathy Tapes has drastically raised awareness of “Spelling to Communicate,” but not all publicity is good publicity. Just possibly, a combination of infighting among quacks and true believers, on one hand, and the promulgation of credibility-straining claims about autistic non-speakers, on the other, could ultimately cause the whole abnormal-but-not-actually-paranormal phenomenon to collapse under its own weight.