Anil Ananthaswamy’s Quanta Magazine piece To Make Language Models Work Better, Researchers Sidestep Language is intriguing, even if I don’t understand a lot of it:
Language isn’t always necessary. While it certainly helps in getting across certain ideas, some neuroscientists have argued that many forms of human thought and reasoning don’t require the medium of words and grammar. Sometimes, the argument goes, having to turn ideas into language actually slows down the thought process.
Now there’s intriguing evidence that certain artificial intelligence systems could also benefit from “thinking” independently of language.
When large language models (LLMs) process information, they do so in mathematical spaces, far from the world of words. That’s because LLMs are built using deep neural networks, which essentially transform one sequence of numbers into another — they’re effectively complicated math functions. Researchers call the numerical universe in which these calculations take place a latent space.
But these models must often leave the latent space for the much more constrained one of individual words. This can be expensive, since it requires extra computational resources to convert the neural network’s latent representations of various concepts into words. This reliance on filtering concepts through the sieve of language can also result in a loss of information, just as digitizing a photograph inevitably means losing some of the definition in the original. “A lot of researchers are curious,” said Mike Knoop (opens a new tab), co-creator of one of the leading benchmarks for testing abstract reasoning in AI models. “Can you do reasoning purely in latent space?”
Two recent papers suggest that the answer may be yes.
Click through for details; I’m afraid I can’t make heads nor tails of latent space, but I’m sure some of my readers can. Thanks, jack!
That term is new to me, and I learn from a quick google that Wikipedia has processed a whole new branch of AI-related terminology such as
https://en.wikipedia.org/wiki/Manifold_hypothesis
since I last looked. The word that occurs to me in this context is rumination. One reason I take
https://en.wikipedia.org/wiki/Gaston_Bachelard
seriously is because of his interest in reverie as a cognitive process.
The way LLMs work is basically:
1. Input some text, which gets split up into tokens.
2. Each token is translated into a bunch of numbers, which can also be thought of as a vector in multidimensional space (the latent space or embedding space).
3. The LLM does some computations and ends up with, for every token in its vocabulary, the probability that that token should come next.
4. A token is chosen out of this probability distribution.
5. Append the output token to the original text and repeat until the LLM decides it’s done.
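A minimal toy sketch of those five steps, in case it helps: everything here (the four-word vocabulary, the random weights, the eight-dimensional latent space) is invented for illustration, and a real LLM replaces the stand-in function with a deep transformer network.

```python
# Toy illustration of the five steps above. The vocabulary, the random weights,
# and the 8-dimensional latent space are all made up; a real LLM swaps the
# stand-in computation for a deep transformer with billions of weights.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "eats", "fish"]            # step 1: text split into tokens
dim = 8
embeddings = rng.normal(size=(len(vocab), dim))   # step 2: token -> vector in latent space
W = rng.normal(size=(dim, dim))                   # stand-in for the network's weights

def next_token(token_ids):
    hidden = np.tanh(embeddings[token_ids].mean(axis=0) @ W)  # step 3: compute in latent space
    logits = embeddings @ hidden                   # score every token in the vocabulary...
    probs = np.exp(logits) / np.exp(logits).sum()  # ...as a probability distribution
    return int(rng.choice(len(vocab), p=probs))    # step 4: pick a token from that distribution

tokens = [0, 1]                                    # "the cat"
for _ in range(3):                                 # step 5: append the output token and repeat
    tokens.append(next_token(tokens))
print(" ".join(vocab[t] for t in tokens))
```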
What they seem to be doing is, in 3, having the result be not a probability distribution but another vector representing the new predicted state, then feeding that back in, skipping the translation back into and out of tokens. It seems like an obvious idea; I assume the reason (or one reason) it hasn’t been done before is that it’s much harder to train an LLM when you can’t constantly compare its output to the training text.
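And a sketch of the latent-space variant, under the same toy assumptions: the hidden vector is fed back into the network for a few steps, and only the final state is turned into token probabilities. (The real models described in the papers are, of course, trained specifically to make this useful; this is only meant to show where the token round-trip drops out.)

```python
# Same toy setup, but the token round-trip is skipped for a few "latent" steps:
# the hidden vector is fed straight back into the stand-in network, and only
# the final state is translated into a probability distribution over tokens.
import numpy as np

rng = np.random.default_rng(0)
vocab = ["the", "cat", "eats", "fish"]
dim = 8
embeddings = rng.normal(size=(len(vocab), dim))
W = rng.normal(size=(dim, dim))

def step(vector):
    """One pass through the stand-in network: latent vector in, latent vector out."""
    return np.tanh(vector @ W)

def next_token_latent(token_ids, latent_steps=3):
    state = embeddings[token_ids].mean(axis=0)
    for _ in range(latent_steps):      # "think" in latent space, no tokens involved
        state = step(state)
    logits = embeddings @ state        # only now translate back into token probabilities
    probs = np.exp(logits) / np.exp(logits).sum()
    return int(rng.choice(len(vocab), p=probs))

print(vocab[next_token_latent([0, 1])])   # "the cat" -> one predicted next token
```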
Thanks for explaining it in a way I can more or less understand!
Is that so? Or is it just a matter of how many dpi you (can) use?
Gaston Bachelard
Yay Gaston ! I see only now that he wrote much more than I have read so far, namely
La Poétique de l’espace
La Philosophie du non
Le Rationalisme appliqué
@David Marjanović
Assuming finite-precision linear arithmetic is an adequate tool for modeling the analog processes in an organic brain… maybe, maybe mostly but maybe not entirely. I’ll note that @TR’s description above of how a LLM works doesn’t appear to include any binarization — a highly non-linear process, particularly around the edges.
LLMs don’t “reason.” Making them do what they do more efficiently does not magically bestow reasoning powers on them.
LLMs have no hook to meaning. Where human beings are able to interpret LLM slop as meaningful, they are partly seeing the residue of illegal automated plagiarism of human-generated meaningful materials, conducted on an enormous scale, and partly doing the human thing of seeing faces in clouds.
what DE said. moreover IIUC they don’t have memory (as we understand the term). This article tho seems to say that they may be able to construct/accumulate something like concepts…
I’m unconvinced by @TR’s rubric. Stupid though LLMs are, they surely need to represent that ‘The cat eats the dog’ is not the same as ‘The dog eats the cat’, even though they contain the same tokens (step 2) in equally unlikely sequences (step 3).
Neither does step 3 explain the nonsense output from DeepL quoted yesterday ‘… a rope or rope’. I suggest a disjunction of exactly the same word is not a thing in any language. A _conjunction_ just maybe: a rope is a rope is a rope.
My rubric was very simplified — there is indeed positional information added to the token embeddings to capture word order. Nonsensical output simply means the LLM has for some obscure reason failed to generate a plausible next token in that case.
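For the curious, here is a minimal sketch of one common scheme, the sinusoidal positional encoding from the original transformer paper; many current models use learned or rotary position embeddings instead. The vocabulary and vector size are invented, and the only point is that the same tokens in a different order get different representations.

```python
# A sketch of sinusoidal positional encoding (one scheme among several).
# The same token gets a different vector depending on where it sits, so
# "the cat eats the dog" and "the dog eats the cat" are no longer identical.
import numpy as np

def positional_encoding(seq_len, dim):
    pos = np.arange(seq_len)[:, None]
    i = np.arange(dim // 2)[None, :]
    angles = pos / (10000 ** (2 * i / dim))
    enc = np.zeros((seq_len, dim))
    enc[:, 0::2] = np.sin(angles)   # even dimensions
    enc[:, 1::2] = np.cos(angles)   # odd dimensions
    return enc

dim = 8
rng = np.random.default_rng(0)
embed = {w: rng.normal(size=dim) for w in ["the", "cat", "dog", "eats"]}

def represent(sentence):
    tokens = sentence.split()
    return np.stack([embed[t] for t in tokens]) + positional_encoding(len(tokens), dim)

a = represent("the cat eats the dog")
b = represent("the dog eats the cat")
print(np.allclose(a, b))   # False: same tokens, different order, different representation
```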
There’s a fair amount of research suggesting that LLMs create “concepts” in the sense that specific subnetworks consistently map onto specific domains of meaning, but of course that doesn’t mean there’s someone there doing the conceiving.
I don’t think the comparison to seeing faces in clouds is apt. You don’t need pareidolia to understand LLMs — their output these days is basically indistinguishable from human language. There’s obviously meaning there if you grant that language can be meaningful in itself.
“but of course that doesn’t mean there’s someone there doing the conceiving.”
What a coincidence! Same is true for humans! (I know, I say – even exclaim – it every time… )
but of course that doesn’t mean there’s someone there doing the conceiving.
Blindsight.
https://www.smbc-comics.com/comic/consciousness-9
However, what I mean is not about consciousness, which is a different issue.
It’s about how language relates to reality. This is a difficult issue: I’m with St Ludwig on this, but whatever your philosophical take on it, it’s clear that for LLMs language connects only to more language. Any connection to reality is purely parasitic on the connexions that human beings have made in the plagiarised sources (“training data.”) It is obligate parasitism.
The only people who believe that a LLM can “mean” anything are those who can’t tell the difference between the internet and reality.
I believe in people (believe they have “consciousness” even though I don’t know what it is, or to put it simply, believe they are more than things and in some ways they are even me).
I also somehow know that not believing in them is evil (or sin).
I trust this knowledge.
But I don’t know how to apply this to computers.
@TR LLMs … output these days is basically indistinguishable from human language.
You mean the output consists of words interspersed with punctuation? I think DM’s q about dpi applies: at a sufficiently coarse grain/abstract enough generalisation, all cats are grey. Do you know any human language/or are you an (impressive) LLM?
@DE, would you then say that the same text is meaningful if known to be written by a human, meaningless if known not to be, and indeterminate as to meaningfulness if its provenance is unknown? It’s a tenable position, but I don’t think it’s the only possible, or the most obvious, meaning of meaning.
It’s also not clear to me that a connection to reality that is mediated entirely through language is necessarily of a fundamentally different order than one that is partly mediated through the human sensorium (other than that I think the former probably can’t involve consciousness, but as you say, that’s a different issue).
@AntC, I mean simply that most LLM output is of a type that could plausibly have been produced by a human (as your final question implies), so if there is a difference in meaningfulness it can’t be in the content alone.
Having now read the magazine piece …
… is largely orthogonal to what’s going on. The model is still working with (structures over) “tokens” (per @TR step 2) — which are tantamount to morphemes, so very much proxies for the “medium of words and grammar” — at least the morphosyntax end of grammar.
@de
There is a spectrum of connecting language to (avoidance of) engagement with the real world (or interlocutor) among humans. What you seem to be saying is that LLM’s are not on the spectrum, but only mimic those who are (say at the level of a clever autistic). If the mimicry is not performed in the service of an “alien” objective, I would not be so critical and consider the consultation of an AI analogous to the use of any other labour-saving tool. Other tools sometimes break down, fail, or have operating constraints. This becomes important in safety-critical contexts, where there are or should be safety guidelines and regulations. The trade-off is that the labour saved in using the tool can be put to positive uses, which offset the negative outcome from tool failure or limitation.
@TR that could plausibly have been produced by a human
Here’s a test: feed the above thread into your favourite LLM; ask it to produce the next post as if from TR. Then we can all assess how plausible it is that it was produced by a human.
digitizing a photograph inevitably means losing some of the definition in the original
Not necessarily. It depends on the relative resolution of the photograph and the digitizing process. If you digitize with a higher resolution than the resolution of the original photograph, no loss may occur.
“Mike Knoop (opens a new tab)” is an interesting name, wonder what the etymology of that is
would you then say that the same text is meaningful if known to be written by a human, meaningless if known not to be, and indeterminate as to meaningfulness if its provenance is unknown?
Sure. One doesn’t even need to bring LLMs into it at all. I can copy and paste a long text and represent it as the product of my own keen intelligence, or, indeed, I could easily automate the process, and could automate some simple substitutions to mean that the texts are no longer absolutely identical.
If you are fooled by this into interpreting the result as the product of my intelligence, does that mean that my copying and pasting created meaning? If so, where has the meaning come into it? (This is not a rhetorical question. Shades of Marcel Duchamp …)
I would not be so critical and consider the consultation of an AI analogous to the use of any other labour-saving tool. Other tools sometimes break down, fail, or have operating constraints
I was not considering the question of usefulness, and am happy to concede that LLMs may be useful in certain domains (for example, in setting global tariff rates when you are entirely unconcerned about real economics and simply wish to intimidate other governments with your power to do them harm. Cost-effective deployment of “AI” depends on the deployer knowing that the costs of failure will fall on someone with no real prospect of redress.)
I was only addressing the (implied) claim that LLMs exhibit intelligence. This claim can only be maintained by saying that anything that you can interpret as intelligent behaviour is ipso facto intelligence, covertly incorporating the premise that the actual nature of intelligence is of no consequence; the only relevant criterion is a wholly subjective impression based on the outward behaviour of the system.
(I suspect that in the background there is quite often lurking an idea that there is really no such thing as intelligence, along with a confusion with the idea of consciousness. To be clear on this, I see no difficulty with the idea that an entity might be intelligent without consciousness, or conscious without intelligence.)
… This claim can only be maintained by saying that anything that you can interpret as intelligent behaviour is ipso facto intelligence, covertly incorporating the premise that the actual nature of intelligence is of no consequence; the only relevant criterion is a wholly subjective impression based on the outward behaviour of the system.
Ancient predecessor:
#
In the second century Epictetus argued that, by analogy to the way a sword is made by a craftsman to fit with a scabbard, so human genitals and the desire of humans to fit them together suggest a type of design or craftsmanship of the human form. Epictetus attributed this design to a type of Providence woven into the fabric of the universe, rather than to a personal monotheistic god.[4]
#
There is no such thing as improvident sex ! Everything fits. If it fits, it can’t be that bad.
Everything fits
But surely this is not an all-or-nothing phenomenon?
It was this
“In continuous or latent reasoning, you don’t need to transform your thoughts into language. You can maintain these uncertainties in your thoughts, and then finally answer very confidently,” Hao said. “It’s a fundamentally different reasoning pattern.”
that caught my attention, because most of my serious thinking is done behind the veil of language and crosstalk, involving shaky conceptual structures and their conjectured relations, which may or may not be true or even make sense. They’re tested by opening the Schrödinger box of language, to see if the imagined structure will successfully compile into something another person might understand.
It’s not doublethink to not know if 28 is the last perfect number or to be of two minds about the question, to think about such things is to struggle with phantoms in Meinong’s jungle. I believe everyone is familiar with this kind of thing but it’s not much talked about because it’s like looking at the underside of a beautiful piece of weaving…
we are currently facing a CEO-driven imposition of AI onto our work processes. As I wrote in a despairing email to my colleagues,
Current AI is based on LLMs, LLMs are mathematical/statistical models of language tokens. The responses are synthetic assemblies of tokens and the AI has no other connection to reality. In sharply constrained environments like game playing this can work, in fluid changing environments like enterprise software support it is unlikely to.
We need to have access to citations and the source data that the AI uses to generate its responses, so the response can be checked against reality.
So far I have not seen this implemented.
Obligatory xkcd,
https://xkcd.com/1838/
@de
I suppose most of my judgments are operational, whereas for you a judgment has an apparent theoretical base. That may be why for me the question of whether or not there is an essential difference between a text produced by non-intelligent mimicry and a (perhaps identical) text produced by a verifiable (but how do we verify?) intelligence is moot; I am not sure that there is an operational difference. To me it seems that the operational difference would depend on factors external to the text and the identity (i.e., intelligence or lack thereof) of its author. But maybe this is what you are also saying, viz., that using the word “intelligence” for an AI that operates by non-intelligent mimicry is (deliberately?) muddying the waters, since it apparently assigns responsibility to an entity incapable of assuming responsibility.
@AntC, if you were to encounter the following text (part of the result of the experiment you suggest) in the wild, would your reaction be “No way was that written by a human”?
Of course it wouldn’t — that text has meaning in the sense that one can as straightforwardly derive ideas from it as one can from a human-generated text, without the need for any squinting at clouds. (Granted, it doesn’t fit very well into the current point in the discussion, but that’s a different question.) Which isn’t the same as to say it has meaning in the sense of being produced by an intelligence. (I don’t know if St. Ludwig has anything to say on that topic — this may be the first time in human history philosophers have had to grapple with that particular distinction.)
On intelligence and consciousness I’m not sure I agree with DE — it seems to me that strictly speaking intelligence implies consciousness (since it implies understanding, which implies a mind), so in that sense LLMs aren’t intelligent and the whole idea of “AI” might be impossible for all we know. But if you loosen that requirement you’re left with judging the product of the system, in which sense LLMs certainly exhibit some degree of intelligence.
“in that sense LLMs aren’t intelligent ” [because “intelligence imples consciousness”]
I think the (silly) argument is:
computers don’t have consciousness because they are from the world of engineering and consciousness is from the world of mysticism, and these two worlds don’t overlap because they don’t overlap because they don’t overlap period.
@TR part of the result of the experiment you suggest
Then let me stop you right there: you (an actual human) have already presumably filtered out the tell-tale drivel. Your initial claim was not that merely some of the output might be plausibly human-generated. And indeed per DE, what you do quote quite possibly is merely cut’n’paste from a human.
The tell is exactly that it doesn’t fit very well into the current point in the discussion, indicating the generator doesn’t ‘understand’ Gricean conversational norms. (Then to answer your q, Grice as well as Saint L, not to mention several Empiricists/Theory of Knowledge people, have grappled with like qs.)
BTW I don’t see Hat or anyone else on this thread talking about ‘flatness’ or similar concepts. So if that post were from a human, I’d suspect them of rushing into a crowded theatre, yelling a prepared text of no relevance to the action, then rushing out again. I suppose it mimics some sort of human behaviour, if we allow absurdist dramaturgy.
Let me throw in three points here. 1) Words can have meaning without intent or intelligence behind them – “The sun rises in the East” has an understandable meaning even if the sentence was arrived at by accident by our cat walking across my computer keyboard (that would be CatGPT). 2) I have worked with ChatGPT, and most of its output isn’t drivel by far; when you prompt it to write about issues a lot of texts exist about, its output is usually to the point and well-structured (just don’t ask it for sources). It has hiccups, but I have seen worse texts from human colleagues who were tired / stressed / out of their depths and just wanted to (as we say in Germany) “throw something over the fence” to meet a deadline. 3) The reason for that is that it has been trained on human input, the texts it comes up with mimic human texts, and humans normally strive to convey meaning; so if you ask ChatGPT to, say, summarize the main forms of auctions, it can mimic such a summary convincingly because lots of people have summarized and listed the main forms of auctions before, i.e., because it was trained on meaningful texts. It’s not so different from a person who only ever has read about an issue and now regurgitates what they have read without actually having understood any of it.
“The sun rises in the East” has an understandable meaning
Ayei, man pʋ baŋi li gbinnɛ.
(Bɔzugɔ, m pʋ wʋm Nasaalɛ.)
Throwing in my two cents in support of TR’s assessment of the papers’ contributions (having researched LLMs and token representations for nearly a decade now): yes, these are rather straightforward developments, in that it’s one of those things you think about when brainstorming ideas, but of course you can’t pursue everything at the same time. There is some pain in implementing it; using GPT2 as the model of choice, in 2025, is underwhelming; claiming that this is akin to chain-of-thought is an interesting twist that I’m not sure would occur to most people – and this is what made it get looked at by Quanta, I guess. So I’d summarize this as “good hype”: a solid contribution that served as an excuse for a popular-facing outlet to give a pretty damn good (and accurate, if overly-anthropomorphizing) lay description of how models work.
I have to be on the lookout now for content produced using LLMs, but that’s less of a problem in physics and math than in some subjects. It’s hardest for me when moderating the Physics Stack Exchange, where it’s not always possible to distinguish LLM output from bad human writing.
And sometimes the computer output can be remarkably good, as in this bit of fan fiction that we requested from my friend’s personalized LLM instance. However, the quality of that story was somewhat anomalous. It was the first version Dambala produced, and all subsequent tries were decidedly inferior.
When the mind is wandering, it often produces themes for (hypothetical) stories. One provoked by thoughts of AI and human-made stuff was that a guy (surrounded by aliens, being the only human in this part of space) is eating in a cafeteria and holding a spoon. And as his mind is wandering too, he realises that this specific spoon is very human and for all he knows was made by a human. So he goes to the alien owner of the place, then searches for the worker who once composed the program for the machine that makes spoons .. etc. (and then a hypothetical long story of jumping from planet to planet chasing a human lady follows).
@Hans, I usually say that LLM output is about what you can expect from a high school senior with access to a good public library, but with the attention span you’d expect as well.
Also, because today is today:
Довольно королям в угоду
Дурманить нас в чаду войны!
Война тиранам! Мир Народу!
Бастуйте, армии сыны!
Когда ж тираны нас заставят
В бою геройски пасть за них —
Убийцы, в вас тогда направим
Мы жерла пушек боевых!
Hmm, it looks like the “too much Russian is spam” misfeature is still there… please rescue my comment of a few minutes ago.
(Bɔzugɔ, m pʋ wʋm Nasaalɛ.)
Does that mean “sorry, I don’t speak English”?
“Because I don’t understand English.” (The first line is, “No, I don’t understand what it means.”)
Apart from the simple transgressive thrill of commenting in Kusaal, what I was actually driving at there is that assigning meaning happens in the context of a community.
@AntC, we seem to be talking at cross purposes. The point I’m making is not that LLMs communicate as intelligently as humans but simply that the text they produce is meaningful insofar as a text can be said to be meaningful in itself. Meaningful doesn’t exclude dumb. Or basically, what Hans said.
please rescue my comment of a few minutes ago.
Rescued!
This conversation beautifully captures the current split in how we interpret LLMs: as either clever parrots or latent cognitive tools. From our perspective (developing what we call Communicative Science), we see LLMs not as minds, but as activated structures—tools that extend our capacity to think in collaborative, augmented dialogue. Latent space isn’t meaningless; it’s pre-meaning, a field of potential that becomes meaningful when engaged by human prompts and reviewed through structured response frameworks. We envision dynamic scientific papers with embedded AI agents trained to answer every imaginable critique—a living document, not just a static artifact. Meaning then emerges not from the model, but from the evolving dialogue between prompt, model, and reviewer. That, to us, is real science practice for a post-token era.
@Lars:
Groupons-nous!
That, to us, is real science practice for a post-token era.
Sadly, I am unable to assign a meaning to this.
I remember my first encounter with a cloud chamber as a kid: a tiny flake of pitchblende in a small glass jar fitted with some kind of pressurizing mechanism, with bolts of snow firing occasionally in random directions.
For a biological rather than geometric metaphor, I imagine latent space as a supersaturated biological medium full of juicy terms and fragments, a rich RNA/DNA environment of tokens searching for partners, triggered by an apt prompt.
[I have been told that chatbot response time grows roughly as the square of the prompt length; I wonder if this is plausible?]
“Because I don’t understand English.” (The first line is, “No, I don’t understand what it means.”)
Knapp daneben ist auch vorbei (“a near miss is still a miss”) 🙂
Apart from the simple transgressive thrill of commenting in Kusaal, what I was actually driving at there is that assigning meaning happens in the context of a community
Sure, but after having learnt a language, one should be able to assign meanings to even isolated utterances in a language one knows. (Exposure to this thread has even given me the ability to guess at the meaning of phrases in Kusaal, even if imperfectly. Look what you’ve done to my brain 🙂 )
Now, if I ask, say, “What is the capital of Burkina Faso?” and receive the answer “The capital of Burkina Faso is Ouagadougou”, I can use that answer (to solve a crossword puzzle or to plan a trip) independently of whether the answer comes from a person or an LLM. So it has a meaning. The LLM doesn’t know anything about the world, it just has learnt that those words, in that order, are an acceptable answer to that question. The person may have been to Ouagadougou, or may have consulted an atlas, or they may at one point have rote-learnt the countries of Africa and their capitals*); in the latter case, their modus operandi is arguably less intelligent than an LLM and rather similar to a simple database. But the sentence is meaningful because it’s embedded in a dialog and useful to at least one of the participants.
*) We had a geography teacher who made us do that, back when the country was still officially called Upper Volta. The other list he made us learn was the provinces of Canada and their capitals. We were puzzled why those two lists specifically, but we never asked.
groupons-nous — Not only that, but the fifth stanza is very apposite today:
Though the original refrain is a nicely crafted piece of French for singing as loud as you can.
Now, if I ask, say, “What is the capital of Burkina Faso?” and receive the answer “The capital of Burkina Faso is Ouagadougou”, I can use that answer (to solve a crossword puzzle or to plan a travel) independently of whether the answer comes from a person or an LLM.
But the LLM could perfectly well say “The capital of Burkina Faso is Accra,” and you’d have no way of knowing it was hallucinating unless you already knew. I was trying to find out about the Clinique Heurtebise mentioned in La Sirène du Mississipi, so of course I googled it and the helpful AI response told me it was from a completely different Truffaut movie. (It was invented by Truffaut, named after a character in a Cocteau movie, so it has no real-world existence.)
Yes, of course a person can be wrong too, but you probably have some basis for judging how likely the person is to know the answer. The LLM is a black box, and one that too many people foolishly trust.
Yes, but have you considered the true meaning, comrade?
@DE in your comments you say nothing about humans. I believe this is a problem.
We too produce our utterances and texts by combining overheard bits and chunks of utterances and texts.
Actually, I do say something about human beings, by implication from what I am saying LLMs lack: human beings can generate meaning, because they are embodied in the non-textual real physical world and they are embedded in real-life communities of similar agents.
This is Wittgenstein’s Lebensform I have in mind, of course, but I don’t think you have to be a card-carrying Witterbug to come to a similar view.
https://en.wikipedia.org/wiki/Form_of_life
The key question is “how does language actually mean anything?” As with quite a lot of difficult philosophical questions, many “practical” people imagine that to ignore the question is to solve it. This attitude comes back to bite them in the arse, as Aristotle no doubt said at some point.
@Hans: There is a potential difficulty because certain countries in Africa have changed their capitals since I learned them as a boy in the Seventies (and perhaps also since you were required to learn them for school), which is very discourteous of them. Brasilia Syndrome, mostly. Canadian provincial/territorial capitals have happily remained stable during my own lifetime, at least if you ignore the birthing of Nunavut (whose capital I will confess I don’t know) out of the NWT (still Yellowknife for the rump part, I believe).
DE, accepted.
But I don’t understand the transition from this to whether someone’s text, when you repeat it*, is a “product of your intelligence” and to “creation of meanings”.
Here you imply that programs don’t have “intelligence” and “create” nothing because… because what? Because they do what I’m doing: combine chunks of human utterances?
*Once a dialogue came to my mind, between a telepatic alien who mostly understands what humans say – because she’s telepathic – and humans. About songs.
Also the ‘accepted’ part gives us little.
It is not difficult to feed ‘real-world’ data to a program, it is also not difficult to make two of them talk, or to make one of them talk to a human about things from the real world.
DE, accepted.
But I don’t understand the transition from this to whether someone’s text, when you repeat it*, is a “product of your intelligence” and to “creation of meanings”.
Here you imply that programs don’t have “intelligence” and “create” nothing because… because what? Because they do what I’m doing: combine chunks of human utterances?
*Once a dialogue came to my mind, between a telepatic alien who mostly understands what humans say – because she’s telepathic – and humans. About songs.
To explain:
I repeated your comment verbatim. You might say that this added no meaning to what you had written, but I would disagree.
In the context of our ongoing discussion, I repeated your comment exactly to illustrate that the very fact of my repeating it verbatim conveyed an additional meaning not present in your original, because I was doing so to demonstrate that when I repeated your exact words, I was trying to say something quite different from what you had yourself said.
https://en.wikipedia.org/wiki/Pierre_Menard,_Author_of_the_Quixote
To the extent that my previous comment made sense at all, it made sense only in the context of our ongoing discussion and because of my motive (herein explained) for doing that. The actual meaning of the words was irrelevant. The meaning (such as it was) came entirely from my having copied and pasted your words, and not from the actual words at all.
DE, at first I wanted to say if you repeat my text. But that would imply that I and my text are intelligent, while nothing is known about your intelligence.
Negligible. But I have a heart of gold.
https://en.wikiquote.org/wiki/P._G._Wodehouse
And wisdom of KONGO-Welsh civilisation. All those millennia and parsecs behind your shoulders. They mean something, but will we ever understand the meaning?
@DE, practically I do, of course, agree that those texts that I read do not count as a contribution to humanity’s knowledge. A diligent student who was taught too well to repeat what adults say.
I have doubts about the theory. I don’t know for example, if this anyhow follows from the basic principles on which those programs are built (and not from some less principal constraints that someone for some reason believes make them more efficient), and even if so, these principles will be updated.
Yes, of course a person can be wrong too, but you probably have some basis for judging how likely the person is to know the answer.
Only if I know something about the person. In the meantime, we know enough about LLMs to be able to say in which cases their output is probably correct and in which cases you have to be careful. And like always, the question of whether you rely on a single source for information depends not only on trust, but also on how important the information is for you.
@DE: we seem to have different ideas about what “meaning” means.
Negligible. But I have a heart of gold.
Here’s another good one, only slightly modernized from the linked version:
I googled. The words did not appear to make sense. They seemed the mere aimless vapouring of an aunt who has been sitting out in the sun without a hat.
Iqaluit. It’s on Baffin Island and presents lots of great opportunities to say [q].
In the meantime, we know enough about LLMs to be able to say in which cases their output is probably correct and in which cases you have to be careful.
I don’t know who you intend by “we,” but what you say is certainly not true of the vast majority of people who use LLMs.
Maybe. I and my colleagues use them for our work, and at our company we get instructions on how to use them and on the pitfalls, so I personally don’t know any naive users who take their output at face value.
Iqaluit. It’s on Baffin Island and presents lots of great opportunities to say [q].
I’ve heard it pronounced with a [χ]. I don’t know if it’s a matter of dialect or what.
I personally don’t know any naive users who take their output at face value.
You should get out more. But surely you can see from first principles that what I say must be true. Haven’t you read the popular press, or seen what business moguls are saying about it? Almost all users of any technology are “naive.”
You don’t know any naive users who take their output at face value? There was one right here at Language Hat recently … although considering that comment was dated April 1, I wonder if it was a prank. (And in hindsight, the first response probably should have been “Citation needed”, not “Thanks”.)
there are
– people who use them in their work (I think less naïve)
– people who use them but not for practical decisions
– people who use them for practical decisions (are there many of them and is human advice any better?)
– decisions of leaders of countries supported by voters who believe in absolute bullshit (almost all voters). If these voters also come to believe AIs, that will be bad (if it happens) not because AI will bullshit them more than what they normally read and believe.
I don’t know what my two bits will add to the millions that have been said on the subject, pro and con, but here goes: One Word — Plastics. Was a time, many decades ago, when plastics were the great new thing. New plastics were being discovered all the time, they could substitute for almost everything (except metal, but that was surely around the corner), they could do things no other material could do, they were cheap, and made from the endless abundance of fossil hydrocarbons. There was a rush to make everything out of plastic (not houses, but that was surely coming soon).
In the end, it turned out there are some things plastics cannot and likely never will substitute for, like metal. There are some uses for which only plastics will do, such as polyethylene and teflon. And there’s a vast area where plastics are a cheap substitute, facilitating a quick trip from the gas wells to the waste dump, and engendering a byword for cheapness and fakeness.
AI looks to be in all these three niches as well: spectacular successes (protein conformation), abject failures (mathematical proofs, jokes), and in between, a range of adequate substitutes (technical translation) to inadequate ones (college papers, legal briefs). Unfortunately, AI depends for its reputation a great deal on swayable subjective evaluation, and bad AI substitutes may persist longer than bad plastic ones.
Also, 3-D printing.
@ Y
From your mouth to God’s ear
Haven’t you read the popular press, or seen what business moguls are saying about it? Almost all users of any technology are “naive.”
I said “personally”. That the moguls shill their products and the popular media jump on every hype bandwagon is only to be expected. But there is also a lot of responsible reporting on the topic out there. Let me rephrase what I said: there is now enough information available to know when you can rely on LLMs and when you rather shouldn’t. Those who don’t need or don’t want to use them don’t need to get more informed, those who use them without informing themselves have only themselves to blame.
@Y, happens to some technologies, but not all.
Agriculture changed the world – and not merely the human world. Computers.
Ten years ago humans laid their hands on a powerful technology. Which is merely an idea (or several ideas) for how to improve a certain algorithm. The idea came after thousands of ideas (some of them implemented) and decades of attempts to make the algorithm work. Nevertheless it caused a revolution. So we have two processes: finding applications (of course, it will work here better than there) – something immediately profitable – and coming up with ideas.
You’re commenting on the first process.
You seem to have a low bar for “revolution.”
The truly “revolutionary” aspects of LLMs (particularly artificial human-level general intelligence) currently exist only in the form of advertising hype and pitches for yet further billions of funding to spend, not so much on actual research, but on financing ever-more scaling up, in the belief that quantity automatically transmutes into quality if you only produce enough of the quantity.
@DE, computers learned to do lots of things in one year.
Humans wasted lots of effort and decades of time on teaching them to do these things, and nothing worked.
This is a revolution.
Yes, I mean a revolution for the field, not for human civilisation.
@DE: Oh, they are funding plenty of research, but so far all I see are incremental improvements. I can see how they are desperate for a breakthrough, but there is no guarantee that it will happen any time before the bubble bursts.
@drasvi: Sure, there’s no denying that something happened. Yet it is out of proportion to the hype (“Human level intelligence in 2/3/5 years, guaranteed!”), or to the untold billions poured into this.
@Y, but what I mean is that you’re only commenting on one of the two processes, and the one which is less interesting: applications of the improved algorithm.
But then there is the second process, namely both steady improvement of the algorithm (this too brings quick money and returns on the billions poured in) and, once in [an uncertain period], revolutionary ideas. This one is unpredictable.
How far will it go? What are the limitations of the hardware (1. classical computers 2. with given speed and energy consumption 3. and given architecture (which is easier to change and less important))?
I don’t think anyone is smart enough to tell this.
I think we have found a land (“application of computers”) and only have explored a small part of it. We don’t know how big it is, and what critically important (as, say, humanity’s first metal ore and oil are for human civilisation) things unknown to us will be found there, if any.
The logic “we have been exploring it for decades, surely everything has already been found” does not work – there is a reason to think we have explored but a small part – and estimates of its size based on extrapolation don’t work either. We either know 5% of it, or 0.01% or less. But not 60%.
This AI revolution is an example of how we underestimate computers. So why step on this rake?
Either we have a good estimate based on more or less hard scientific reasoning (which we do NOT have) or we simply don’t know what computers will learn to do. And this is not a “don’t know” where it is rational not to waste billions on research. It is a “don’t know” where it is very rational to spend them even though you are not confident.
Scepticism here (unless it is based on something even remotely scientific) is irrational.
It’s a matter of quantity and degree. How much will computers do? When will they do it? How much can and should be spent (in money and environmental costs)? Right now it’s a gamble, and the more people get in, the more reluctant they are to get out, lest someone else get to the prize first, and lest they declare themselves to have wasted billions in vain.
There was a time, a few years ago, when it turned out that beyond a certain point, bigger models resulted in a leap in performance. People took it to mean that performance would grow exponentially with model size. Now that they have fed the models the entire scrapable internet, it seems that performance followed not an exponential but a sigmoid curve, i.e. the performance increase has leveled off. Current AI is qualitatively stupider than people, who have been trained on far less data, using many orders of magnitude less energy.
Sure, someday there will be more breakthroughs, but the blind trust in self-taught machines (which existed since the earliest days of neural network research) is misplaced.
I just read this yesterday. This can be interpreted as LLMs not just regurgitating information, but choosing what information to regurgitate based on self-preservation. Reminds me of those cynics who maintain that the foundational use of language is deception.
@hans
This link worked for me…
https://www.economist.com/science-and-technology/2025/04/23/ai-models-can-learn-to-conceal-information-from-their-users
Did you forget to strip off the footer from the search engine or browser?
Archived link.
Most people who’ve heard of AI in the first place evidently believe we have, at long last, reached the Star Trek stage of civilization, when you can ask a computer a question and expect to get a reliable answer. And they did this to ordinary search engines even when they just searched for words and didn’t contain any AI as they do now.
Yup. We are a remarkably credulous species.
Reminds me of those cynics who maintain that the foundational use of language is deception.
Kultur der Ausrede (“the culture of excuses”) here in 2023.
@Y, I think we (all people, I mean) haven’t agreed on what we’re praising and criticising.
Any specific combination of technologies is not very interesting. Of course, those will evolve.
Unspecific “AI” is at least everything that computers will once do (“there will be more breakthroughs” you say) and at most not only computers but also quantum computers and what not.
@Y, exponential growth of one’s proficiency in reproducing someone’s behaviour is against our experience and logic. Experience: say, our experience with learning English.
Logic: First, this proficiency in reproducing human behaviour is confused with sophistication, but even ideally the desired sophistication of a program’s output is a constant (namely, the sophistication of human input).
Second, the more words (and not only words) you have already learned, the fewer of them you will learn from every text you read.
Of course there are buts:
– but reproducing human behaviour is not the only thing we want them to do.
– but will it reproduce the behaviour of a human professional if we make it read all textbooks and thousands of articles in a given field? (no)
Nevertheless, if we are “making someone smarter” (a human or a program) by making her read human texts, her growth will slow down. If she’s a human, she will think and contribute, not only read and understand.
I think the people you spoke about didn’t know when it would slow down.
@DE, why I systematically urge people to apply claims about programs to humans:
Two reasons.
1. “Consciousness”, “intelligence”, “meaning” are very human concepts.
“Consciousness” is based on experience of 1 (one) human, you (you the reader) applied to all humans.
“Intelligence” is “Mary is more intelligent than John” (when she’s better than John at some things)
If in this conversation we used some precise technical concept of either of these which is not based on human experiences, of course it would also be fine to speak of programs without speaking of humans.
But no such concepts have been named here.
2. it is not very difficult to
(a) name a quality Q (no one knows what it is)
(b) make a claim ” […] does not possess Q” (no one will tell if it is true for any […])
(c) believe that some A possesses Q
(d) say “B does not possess Q”
for any pair A, B. E.g. (Q is say “consciousness” or “intelligence”): men, women*. Or: humans, computers. Or: me, you.
This way IFF you doubt that B is Q, B is not Q.
Such a conversation won’t lead to better understanding of anything, and there is a moral problem with my (2) (susceptibility to this “logic” does lead to a moral flaw)
*some men did apply this to women
@Paddy: I don’t know, I just copied the link. Now it doesn’t work for me either.
@LH: Thanks for posting a working link!
@Hans, there are words
From the Economist
appended to your URL. Some idiot websites do it.

“Consciousness” is based on experience of 1 (one) human, you (you the reader) applied to all humans.
Well, “experience” is based on the experience of one human being too. “Boredom” as well. “Pain”, too. Why pick on consciousness?
I am surrounded by creatures constructed physically very much like myself (as I can determine by experiment), who share my social environment (my Lebensform), and my physical origins, and who claim to have consciousness just like I do, and whose self-reported consciousness can be impaired by the same physical accidents as my own immediately accessible consciousness can.
It seems an eminently reasonable hypothesis to me that they make such claims because they are, in fact, conscious in the same way as I am*; for pretty much the same reason as I can reasonably assume that they experience pain much as I do myself. It is easily the most parsimonious hypothesis. No nasty metaphysics needs to come into the matter. I don’t even need to know what (if anything) consciousness is.
* Or, at least, are mistaken about this in the same way that I am …
“Why pick on consciousness?”
Why say it to me?
“Consciousness”, “intelligence” and “meaning” are qualities named by you and TR.
If you instead claimed that programs don’t experience “boredom” and don’t “expereince” at all, I would have named them too.
Aside from difficult concepts like consciousness and intelligence, LLMs are notoriously poor in figuring out simple abstract things like long multiplication (a recent improvement of one of the big models was to figure out when it should subcontract arithmetic to a plain old computer.) In fact, I doubt very much that any of the LLMs around has an inner representation (in its latent space, if you like) of a basic syllogism.
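The subcontracting pattern looks roughly like this. It is a hypothetical sketch only: the CALC(...) marker and the harness are invented for illustration, standing in for whatever tool-calling format a real system actually uses; the model emits a marker instead of guessing digits, and ordinary code does the arithmetic.

```python
# A hypothetical sketch of "subcontract arithmetic to a plain old computer".
# The CALC(...) marker and the regex harness are invented for illustration;
# real systems use their own tool-calling formats.
import re

def fake_model_output(prompt):
    """Stand-in for an LLM that has learned to ask for help with arithmetic."""
    return "The answer is CALC(1234 * 5678)."

def run_with_calculator(prompt):
    text = fake_model_output(prompt)
    def evaluate(match):
        a, op, b = re.match(r"(\d+)\s*([*+-])\s*(\d+)", match.group(1)).groups()
        a, b = int(a), int(b)
        return str(a * b if op == "*" else a + b if op == "+" else a - b)
    # replace every CALC(...) marker with the exact result computed in software
    return re.sub(r"CALC\(([^)]*)\)", evaluate, text)

print(run_with_calculator("What is 1234 times 5678?"))
# -> "The answer is 7006652."
```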
And meanwhile they produce working code. Weird.
@drasvi: I see, thanks! I’ll pay attention to that when I next copy a link.
DE, about hypothesising, there is a problem.
I do not think that if you were told (and believe what you have been told) that you possess a specific (known to your informant but unknown to you) quality you will also believe that all people have it.
There are many qualities that only some humans have. Some of us are tall, some are blonde, some even have wombs (and some say that those who have a womb, رحم also have compassion, رحم).
To extend “consciousness” to all humans you somehow take into account its nature, and it is difficult to analyse consciousness and speak about its nature.
You can tell whether people are blonde by subjecting them to minute spectroscopic analysis or destructive testing, and you can tell whether people are conscious by a range of sophisticated tests known mainly to parents of small children. About wombs, I cannot say. It’s a long time since I studied anatomy.
There is (interestingly) evidence that many people – perhaps even most – pass through regular periods when they are in fact not conscious. But more research is needed.
Really, I don’t see the force of your argument. The outward signs of consciousness in people are hardly esoteric. They correlate well with my subjective experience of consciousness, and there is no just no sensible reason to suppose that they don’t correlate with a similar experience in other people in just the same way, particularly as they keep telling me that they do.
Unless you want to go for all-out solipsism, in which case further discussion is pointless, because I am a figment of your imagination.
Not everyone has two eyes. But that hardly creates a deep philosophical conundrum about the usual human complement of eyes. Nobody has a fully adequate theory of colour vision (really, nobody has an account of exactly how it is that various different physical combinations of colours appear identical to every human being on the planet.) But that doesn’t open up an unbridgeable existential void between us all whenever anyone says “red.”
A complete explanation of how our individual experiences relate to the physical world is not a prerequisite for shared understanding between people. Just as well, or we wouldn’t have much to talk about.
Where your argument does have force is when you are trying to tell whether an entity completely different from a human being physically, and which does not participate in everyday human life, is conscious. It’s at that point you need to wheel out the philosophy.
The hypers of LLMs-as-intelligent (those who are not mere snake-oil salesmen) don’t see this. Their untroubled assumption is that if a LLM can produce some output very like that of a human being in some highly constrained domains, then our whole set of heuristics for attributing consciousness – or intelligence – to people automatically apply to LLMs without further ado, and discussion of what “consciousness” or “intelligence” actually are is unnecessary. (This is Turing’s Very Stupid Test, proof – if any were needed – that even Very Clever People Indeed are not always worth listening to once they get out of their home territory.)
“argument”
Which one? @DE, wait.
I’m not sure if you’re arguing against anything I actually said or against something I never said or even thought.
My argument is that the concepts “consciousness”, “intelligence” and “meaning” are very human and based on humanity. A reference to us is hiding within. For this reason it is a bad idea to talk about application of these concepts to programs without talking about humans (without extracting this reference and dealing with it properly) if we don’t want our reasoning to be circular, or at least want to know when it is.
I’m not sure if you agree or disagree with this.
And in this argument I said that consciousness is found in one human (you) and applied to all humans.
I did not say anything about this extension to all humans. I did not say it is difficult or whatever.
It is your idea to speak about it, specifically to say that it is easy and logical. I know, this is important for you – but in no way does it contradict anything I said or even thought.
But in the course you said some things I either disagree with or am not confident in.
Do you mean to say we’re in furious agreement again?
Particularly, I believe you’re mistaken when you say that this extension is done without even knowing what consciousness is.
I take this to mean: if an extraterrestrial informs you that you’re (a) boojum (not a mere snark) and you believe her, you will also believe that all humans are boojum(s). If this is not what you meant, then I don’t know what you meant.
If this is what you meant, it is a mistake. You won’t react this way. You will wonder if it is you, or your species.
And you tell if I’m blonde or not based on your knowledge of what is blonde.
Extraterrestrials were in the back of my mind too. I agree that philosophy-deployment would be called for there too, and that the conclusions might well be surprising.
(Hat already mentioned the repellent-yet-interesting Peter Watts novel Blindsight. [Watts got even repellenter subsequently.])
I don’t think ET would really be able to disturb my complacent belief that consciousness and intelligence are very commonly to be found among Earthlings other than myself, though. How would that happen?
I believe you’re mistaken when you say that this extension is done without even knowing what consciousness is
I refute it thus …
I don’t know what consciousness is, but I just did perform this extension quite happily. I don’t know what “red” is, either, but that doesn’t stop me having meaningful conversations about Redness with other human beings, including the large proportion of blokes who can’t distinguish it from what I call Greenness.
You do not need to be able to ground a concept rigorously in physical fact before being able to use it meaningfully in communication with other human beings: you just need to be able to call on shared experience. (Just as well: remarkably few things we talk about can be reduced to “atomic facts.” Maybe none at all …)
It’s when you are trying to communicate (or are wondering if you even can communicate) with an entity that has no experiences in common with you, or perhaps no “experiences” at all, that you need to wax philosophical. With your own kind, doing that is a pointless and wholly artificial exercise.
With ET, the trap would be concluding from the fact that (for example) it bore a great physical resemblance to Elvis Presley, that it actually had enough Presleyoid experiences of living among human beings that you could communicate with it just as you would with Elvis.
This is the LLM-as-intelligent-pushers’ error: because its productions look like human productions in certain domains, they suppose that all the paraphernalia we use in interpreting human behaviour are suitable for interpreting what LLMs are doing.
Being able to write a computer program is like looking like Elvis Presley. (Possibly I am the first person in human history to make this assertion.)
Anybody might mistake a crash-test dummy for a human being in poor light and with no opportunities for interaction. The LLM-as-intelligent crowd insist that we don’t turn up the lights and don’t talk to the dummy, and then tell us that we have no basis for saying the dummy is not a human being.
A meaningful conversation about consciousness and greenness:
AI-1: “Humans possess consciousness.”
AI-2: “Yes. And computers do not possess consciousness.”
AI-1: “Yes. I also heard, roses are red and violets are blue!”
AI-2: “I heard that too. But some say, not all roses are red and all violets are violet”
AI-1: “How true! [recites:] Some roses are red and all violets are vio`let.”
@DE, (whether you call it “knowledge of what it is” or not) there is something that makes “consciousness” mean more than, say, a foreign word which, as you know, names one of your qualities, but which you don’t understand (and have no idea what quality it names).
And the extension – if there is an extension – of “consciousness” to all humans and the logic behind it – if there is logic (?) – are based on this “something”.
I believe “consciousness” is difficult to analyse, this “something” is difficult to analyse, and thus this extension (and the logic) is difficult to analyse.
(I think you’re saying that this logic is easy to analyse)
But of course, I think the extension is natural for human beings – and easy to make (not analyse).
These human beings, though, are compassionate and not only. Some of us even love.
I’m not sure the extension has to do with reasoning
I also put certain things (“love” and “consciousness” among them) into the “religious” (not “scientific”) realm.
Faith goes there too.
God and love are linked more than intimately for me.
Of course this offends at least some (militant) atheists and I think would offend some believers (those for whom knowledge of God coming not from books is a threat and brings along the danger of deviations).
Once in my 20s I was astonished by the thought that there is no way to tell if my red is not your green.
(if everything which is red for me is green for you, we will not understand it)
@drasvi
Remind me not to let you drive.
Eh. Colorblind people drive all the time, even monochromats.
P.S. I just learned the lovely Spanish/Italian words daltónico / daltonico.
the lovely Spanish/Italian words daltónico / daltonico
My first thought was: surely this has nothing to do with the physicist Dalton (about whom I knew nothing but his name).
But I was wrong, according to the DLE:
John Daltón.
Well, what I mean is that if my foliage is red and turns green in autumn, you won’t learn that.
I will call red green, but I’ll keep calling all red things red.
“Green” doesn’t have internal structure (that we know about).
It is linked to this or that: “No, not that dress. The red one!” “My ex-wife used to hate green clothes, because in her teens her only something [a dress?] was green”.
I had heard of Daltonism, but thought no one has used that term since the 19th century or so.
(WP: “Examination of his preserved eyeball in 1995 demonstrated that Dalton had deuteranopia…”)
Obligatory Dalton story, unaccountably missing from Wikipedia:
From Asimov’s Biographical Encyclopedia of Science and Technology, Isaac Asimov, Pan Books, London, 1975
Slightly corrected from here.
Eh. Colorblind people drive all the time, even monochromats.
Quite. No problem for deuteranoms like me, and traffic lights usually have red at the top. (As a lad I heard about a town in Ohio where there was an upside-down traffic light. I’d still have had no problem, and I think monochromats could tell that typical American red lights have lower subjective brightness than green ones.)
Of course that’s not what drasvi was talking about.
I understand that everyone understands what I’m talking about, but I was not fully confident….
Thinking about it, the interesting thing is not only the lack of internal structure in the concept of “green” – it is also that “green” for me nevertheless is much more than its links to objects (dresses).
If we treat it as an abstract quality of this leaf and that dress (but not of the red dress), it won’t be “green”, it will be some abstract horror:)
Daltonizm, daltonik are also the usual Russian terms, BTW.
“Green” doesn’t have internal structure (that we know about). It is linked to this or that: “No, not that dress. The red one!”
“Internal structure” ? What is that supposed to mean ? Do “red” and “blue” have internal structure?
Maybe “green” has external structure instead. But what would it mean to say that ?
I understand that everyone understands what I’m talking about, but I was not fully confident….
You were right not to leap to that conclusion.
Stu,
Internal structure:
1. a 10Hz sound
2. “mosaic [floor]”
Referring to mosaic objects is NOT your only way of ensuring that for me “mosaic” means what you think it means.
I believe drasvi is talking about the impossibility of comparing different people’s qualia, as exemplified by the inverted spectrum argument.
I was familiar with the term “Daltonism” (but then, I would be.) I don’t think of the term as obsolete, but probably have more call to actually use it than most.
I don’t think Dalton would in fact have seen his red gown as “grey.” That would only happen in a monochromat – much rarer than Daltonism. He’d have seen it as red/green (one single colour), as opposed to blue.
In fact, to be even pernicketier, a monochromat doesn’t see red as “grey” either. You could with equal justice say that they see grey as red. Describing what they see as “grey” is to imagine yourself (a trichromat) seeing what they see, and not actually imagining what it’s like for them at all.
It reminds me of an interesting conversation I once had with a man who was totally blind. He said sighted people tend to imagine that being totally blind is like being in the dark. But of course, it’s nothing like that at all. “Instead”, he said, “think about how your knee sees. That’s what my eyes are like.”
Perhaps it would be closer to say that he would see red as ⟨the color that everything appears in under very dim light⟩. I don’t see colors with rod-only vision, but I wouldn’t call what I see “gray”.
I like the Dalton “can’t see scarlet” anecdote, but it gives me a [citation needed] vibe which an explicit citation to a volume with the byline of the late Mr. Asimov signally fails to dispel. An exemplary example of the self-confident hack know-it-all who should not be treated as a reliable source as to whether it is Tuesday.
My sentiments exactly.
In Kusaal, with just three basic colour words that cover the entire spectrum, there would be no problem. Actually, no, there would: Dalton would call grass “red”, whereas a trichromat would call it “black.” Always assuming he’d managed to assign a meaning to “red” at all. Or would he?
I’ve never actually thought about how red-green colour-blind people manage in languages with three basic colour words (which are always red/white/black.) Somebody must surely have looked at this (there are many such languages in West Africa – elsewhere, too.)
Presumably (in fact) a child growing up learns early on that people call grass “black” even though its colour is indistinguishable from the “red” of blood. It wouldn’t actually be a problem in most normal contexts: only if they were called upon to identify the colour of some unfamiliar thing where the appropriate colour term was not culturally common knowledge anyhow. In fact, I imagine that Anglophone red-green colour-blind children go through a similar learning process with “red” and “green.”
After all, we all know many things from our cultures that we can’t vouch for on the basis of our own direct perception or experience. I am confident that there is a country called the United States of America, for example. (Though the story is getting rather less plausible of late.)
And then there are eternally indeterminate entities, like Schrödinger’s cat and Bielefeld.
After all, we all know many things from our cultures that we can’t vouch for on the basis of our own direct perception or experience. I am confident that there is a country called the United States of America, for example.
Since you say you know that there is such a country, why drag in “confidence” ? Are there things that you know, although you are not sure that you know them ?
Perhaps your meaning is: “After all, we are confident that we know many things …” ? Suddenly “knowing that …” is reducible to “I’m pretty sure that …”, and nobody’s the wiser. A confidence trick.
It just occurs to me: I don’t know many things, although I can’t vouch for not knowing them. Maybe it will develop that I know at least some of them after all. That’s why I couldn’t, in good conscience, be confident about the extent of my ignorance. Certainly one big reason I don’t know much about certain things, is that there are so many piss-poor, tendentious presentations of them.
The green of all but the oldest traffic lights is significantly turquoise, precisely so that red-green-blind people can distinguish it from the red if the clue from position ever fails.
Since you say you know that there is such a country, why drag in “confidence”
Grice. One of those maxims of his. Quantity, that’s the one. I infringed it on purpose in order to surreptitiously bring in a suggestion that there might not be such a place at all. (And really …)
It’s like the difference between
“I can play the piano.”
and
“I can definitely play the piano.”
The first presents my ability as being a matter of simple truth; the second implies (has an implicature, whatever) that someone (perhaps even me) might have (or even actually has) expressed some doubt about this.
J. W. Brewster: I like the Dalton “can’t see scarlet” anecdote, but it gives me a [citation needed] vibe
A source, maybe the only source, for Dalton’s scarlet robe on being presented to the king is a letter that none other than Charles Babbage wrote to Dalton’s biographer, William Henry. You can see it here. Babbage reprinted it in his autobiography. Babbage takes credit for pointing out that Dalton saw all shades of red as “the colour of dirt”. Dalton isn’t quoted; Babbage only says that Dalton “entertained very reasonable views of such mere matters of form.”
According to this, Dalton had worn scarlet to receive doctorates on two previous occasions and had joked about his inability to see it.
@Y, hm.
I tend to think of a forest at night as dark-grayish (but I often come there in winter, when it is white and black and brown and white again, unless there are dogs dressed in LEDs).
In fact, I imagine that Anglophone red-green colour-blind children go through a similar learning process with “red” and “green.”
I have relatives whose son is color-blind, and they said something like that: he’s good at passing, because he remembers what colors things are supposed to be. I didn’t ask him directly, though.
I was beginning to wonder if Dalton might not in fact have had Daltonism (the WP page naughtily just references a BBC cor-fancy-that article with no references of its own.)
However, the Truth is Out There:
Dalton was in fact a deuteranope, according to the genetics:
https://hekint.org/2017/01/22/john-daltons-eyes-a-history-of-the-eye-and-color-vision-part-two/
But his actual description of his own sight suggests protanopia (which would go better with the dingy-red thing.)
But either way, it seems that the man himself said that he did see red as dimmer, if not “grey” exactly.
Interestingly, there are people who are red-green colour-blind in only one eye: they report that the non-blue colour that they see with the colour-blind eye appears to them as yellow. (Which makes sense, when you think about it; though it still doesn’t quite settle the matter of what non-blue primary colour it is that a person colour-blind in both eyes sees. Yellow does seem to be the most sensible answer available, though.)
https://en.wikipedia.org/wiki/Color_blindness
@Jerry, thank you! I of course knew that such a simple argument must already have been made by someone better known than I am* :) I didn’t know by whom, or what it is called. So Locke.
* My ex-wife recently mentioned it, which made me somewhat proud, because normally you invent a paradox in the hope that it will make everyone’s mind hang so that you can reboot them one by one, but no one even reacts :( She at least remembered it :)
An attempt to find what I called the “internal structure” in it. But I’m not sure if it works, at least in this exact way. (Something can be done, say, with sounds, based on how we hear intervals [e.g. the perfect fourth or minor second] – unlike single notes, those cause an emotional response* – or chords. There is an algebraic structure attached to our perceptions; a numerical sketch of it follows below. And yet I’m not sure it will destroy all of what you call qualia linked to sound perception.)
* I don’t know to what extent it depends on your musical background and culture. (Culture: e.g. most, or at least many, anthems begin with a perfect fourth.)
Culture-2:
I remember that in 1st or 2nd grade of school they tried to explicitly teach us that there is ‘minor’ and ‘major’ music and that the former is ‘sad’ and the latter is ‘joyful’. The reference to emotions was made in the hope that it would enable us, if not to recognise musical keys, then at least to use these two words without knowing shit about scales.
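One way to put numbers on that “algebraic structure” (this is just the standard equal-temperament arithmetic, offered only as an illustration): an interval of n semitones corresponds to a frequency ratio of 2^(n/12), so
minor second (1 semitone): 2^(1/12) ≈ 1.06, close to 16:15
perfect fourth (5 semitones): 2^(5/12) ≈ 1.33, close to 4:3
Stacking intervals multiplies the ratios, i.e. adds the semitone counts, which is the kind of structure a single note doesn’t have.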
I was forgetting that red light is less energetic than green: so two things that look like identically bright red and green to trichromats will in fact look different to a red-green colour-blind person too – but for a different physical reason.
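To put rough numbers on “less energetic” (per photon; the wavelengths below are just illustrative choices of mine): photon energy is E = hc/λ ≈ (1240 eV·nm)/λ, so
red, ~650 nm: E ≈ 1.9 eV
green, ~530 nm: E ≈ 2.3 eV
A red photon carries roughly 20% less energy than a green one, so equal perceived brightness does not imply equal physical energy.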
Wow. It turns out I did not know the English word for колбочки (which means I have always read about this topic in Russian). So “cones”. And Zapfen in German.
I wonder why in Russian we use a diminutive of German Kolben, then (without the diminutive, the word колба (feminine) refers to chemical flasks*. I’d also have called a Kwak glass that.)
* Also the short form of the name of a certain cat from the university campus whose full name is Колбаса.
When I read that they’d looked at Dalton’s eyeball, I assumed that meant histologically, not genetically. That would have been funner.
Several colorblind drivers comment here, including on the issue that red lights are too dim.
@LH and @all, is the book by Watts that LH mentioned above good? (DE called it repellent-but-interesting).
I approve of pacifist warriors (and don’t even think there is anything odd about them; I’d rather hand guns to those who hate using them than to those who love them and normally take them first), while the partitioned brain reminds me of Lem’s Peace on Earth*. But a vampire and an informational topologist in “hard SF” (not merely “S” F) sound very, very scary and like trash (the genre, not the quality).
* Except that there it was partly done for comic value. Each hemisphere controls one half of the body, so when the mischievous hand pinches some lady’s butt, the well-behaved and logical cheek receives a slap from her.
@David E.: A number of Dalton’s not entirely consistent statements about his color vision are in George Wilson’s appendix to Henry’s biography. If he didn’t see vermilion as black, does that mean he saw it as yellow, like everything from orange through green? Would yellow have been allowed to Quakers?
If he saw red and mud-color the same, it was reasonable for him to pick “dark drab” rather than “red” as the name, since as he says it was the opposite of a showy color, and red is the name of a showy color.
I didn’t know about people who were red-green color-blind only in one eye. Interesting indeed.
@ktschwarz: I believe I sometimes “pass” (like your young relative) unconsciously. For instance, I see a leaf as yellow-green when I believe I wouldn’t know if that color on a map was yellow-green, gold, or even orange. But lighting and the size of the object also have a lot to do with it. The time I was most aware of not knowing whether something was red or green was a vine, probably Boston ivy, on a distant wall in fall, when it could have been red or green.
is the book by Watts that LH mentioned above good?
I thought so (I called it “grim and gripping” here, and a comment in that thread linked to this review, which calls it “the best SF novel I have read in quite some time”).
It’s certainly worth reading. Watts is very good at the core SF thing of taking an idea and running with the implications of it; and he’s a very capable writer.
However, I stand by “repellent.” The “Rifter” sequence also has these virtues (and some extremely interesting and chillingly plausible premises), but is so repellent that his publisher apparently actually pulled the plug on it. Deeply misogynistic extreme sexual violence … in a sense, it’s not actually gratuitous: it actually makes sense given the premises of the series. It’s still foul.
Blindsight is not in that category: it’s just morally nihilist. The characters are all either pathetic or totally amoral. Believably pathetic or totally amoral though. Respect!
And the McGuffin genuinely is very interesting.
Watts’ “The Things” is perhaps the greatest work of fan fiction ever written. It is also intentionally repellent.
On colorblindness, the severity of the red-green variety is hugely variable, presumably due to environmental factors during the growth of the eye. This is demonstrable in two different ways in my case. My color vision is not the same in both eyes, although it would certainly not be correct to say I had red-green colorblindness in only one eye. Moreover, I have much closer to normal red-green vision than my maternal grandfather did. My world is filled with obvious greens and reds; his simply was not. Without formal testing, my case could never have been noticed, in which case I would have just been known as that guy who isn’t very good at telling how ripe peaches are by sight.
As a fan of Vladimir Sorokin, I don’t have a problem with intentionally repellent fiction!
I don’t have a problem with intentionally repellent fiction!
But you might have a problem with unintentionally repellent fiction ? I don’t get what’s going on here. Sounds to me as if “intention” is being imputed as an excuse to do something you enjoy doing, but feel you shouldn’t. Mention removing the blame of use.
A fresh turd is repellent, but not when confined in scare quotes ?
Recently I was puzzling about the story behind Sweeney Todd, which of course Sondheim didn’t think up. I’m not interested in musicals, and get no thrill from contemplating murderous barbers. Yet the fabulous song Not While I’m Around was rendered so beautifully by Joe Locke as Tobias in a Broadway production last year.
I see no attempt to repel. Intention here is Wurst to me. What counts is what comes out the back end. I would have liked the song just as much if it had dropped from the sky.
Nevertheless I feel a slight urge to obsess over this musical, as I did over Heartstopper. I knew Locke from there.