I can confirm. I was driving for a while, oblivious to any sign of having a flat tyre, before someone on the street signalled me to stop. And that was on a Citroen Saxo, which I don’t think had the suspension of the C5 and the like. Amazing car; it lasted 15 years with me, with only battery changes and frankly inconsistent oil changes. The lady who bought it when we emigrated still runs it 15 years later. Build quality is hit or miss per model, I think.
Empathy does not lie in its perception on receipt but in its inception as a feeling. It is fundamentally a manifestation of the modalities enabled in shared experience. As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context, trying instead to substitute for it with offline batch learning. Understanding is possible in this relationship, but it should not be confused with empathy or compassion.
I happen to agree with what you said. (Paraphrasing: A machine cannot have "real empathy" because a machine cannot "feel" in general.) But I think you're arguing a different point from the grandparent's. rurp said:
> Someone who sees a person stranded on the side of a road might feel for them and stop to lend a hand. ChatGPT will never do that [...]
Now, on the one hand that’s because ChatGPT can neither "see a person" nor "stop [the car]"; it communicates only by text-in, text-out. (Although it’s easy to input text describing that situation and see what text ChatGPT outputs!) GP says it’s also because "the purpose of ChatGPT is to make immense amounts of money and power for its owners [, not to help others]." I took that to mean that even if an LLM were controlling a car and able to see a person in trouble (or a tortoise on its back baking in the sun, or whatever), it still would not stop to help. (Why? Because it wouldn’t empathize. Why? Because it wasn’t created to empathize.)
I take GP to be arguing that the LLM would not help; whereas I take you to be arguing that even if the LLM helped, it would by definition not be doing so out of empathy. Rather, it would be "helping"[1] because the numbers forced it to. I happen to agree with that position, but I think it's significantly different from GP's.
Btw, I highly recommend Geoffrey Jefferson's essay "The Mind of Mechanical Man" (1949) as a very clear exposition of the conservative position here.
[1] — One could certainly argue that the notions of "help" and "harm" likewise don't apply to non-intentional mechanistic forces. But here I'm just using the word "helping" as a kind of shorthand for "executing actions that caused better-than-previously-predicted outcomes for the stranded person," regardless of intentionality. That shorthand requires only that the reader is willing to believe in cause-and-effect for the purposes of this thread. :)
Yes, I am not in fact expanding on GP’s argument but etymologically attacking the premise. Pathos is not learnt. When I clutch my legs at the sight of someone getting kicked in the balls, that’s empathy. When, as now, I write about it, it’s not, even in my case where I have lived experience of it. More sophisticated kinds of empathy build on the foundations of these gut-driven ones. Thank you for the reading recommendation; I will look for it.
> As such, it is impossible to the extent that our experiences are not compatible with those of an intelligence that does not put emphasis on lived context, trying instead to substitute for it with offline batch learning.
Conversely, that means empathy is possible to the extent that our experiences are compatible with those of an AI. That is precisely what’s under consideration here, and you have not shown that it is zero.
> an intelligence that does not put emphasis on lived context, trying instead to substitute for it with offline batch learning.
Will you change your tune when online learning comes along?
Lived context is, to me, more than online learning. I admit I am not versed enough in the space to anticipate the nature of context in the case of online learning, so yes, I may indeed change my tune if it somehow makes learning more of an experience rather than an education. My understanding is that it won’t. I have not proven, but argued, that experience compatibility is zero, to the extent that an LLM does not experience anything. I am happy to accept alternative viewpoints, and accordingly that someone may perceive something as a sign of empathy whether it is or not.
It looks like you put atheism and agnosticism in the same boat. There’s a certain belief, trust and conviction in the lack of a deity in atheism. Not in agnosticism. It is mostly true that that state of mind is orthogonal to belief. On the other hand, there is also a motivation to explain away religious experience as a physiological process, which explains the overlap in group membership.
> There’s a certain belief, trust and conviction in the lack of a deity in atheism
Atheism means a lack of belief in a god. Just because many atheists go a step further or the word agnostic exists, that doesn't change the meaning.
I have yet to see an "official" source that says it’s definitely a belief in absence and not an absence of belief. Agnosticism works just as well if you view it as a subcategory of atheism.
My understanding is atheism is not believing in a deity (a - without, theism - belief in deity) and agnosticism is about not knowing (sometimes not knowing whether it’s possible to know). It gets messy because plenty of folks play games with the words, like "clearly I don’t believe the major, self-contradicting religions, but maybe there is some deity out there somehow that had an influence beyond ‘natural processes’", or that Einstein quote calling the beauty of physics and the universe "god".
Similar to what you said about me, I think you’re perhaps putting faith/religious dogma and spirituality/holding things sacred in the same boat. I think having awe or wonder or a feeling of being part of something bigger is again orthogonal to belief in a particular god. I don’t even think religion is the dividing line here - religion and ritual exist all over without an overarching deity.
I liked the underlying idea in what you said about the motivation to explain away religious experience as a physiological process; I think there is something interesting there. I expect this is a result of what people already believe, not a cause, but I like the concept of how people take in new information and default to directing it to "knowable, let’s figure it out" or some version of "unknowable".
tl;dr - not believing in a god seems separate from spirituality and religious experience. Theist and atheist are extremely high level (and one dimensional) labels and there is a LOT of diverse (and overlapping) belief and experience under each.
I was a Notes user almost 20 years ago, and I liked it because it felt retro, even back then. It was OK, but quite weak on audit trails and timestamps for some reason (if I remember correctly, changing the date on your PC and sending an email would make the email appear as if it was sent on that changed date...)
I would argue that China is just playing the game, and the EU should too while it lasts. No time for sophistication in a street fight; this is not chess.
China has latitude in inflicting pain on its own people that the EU does not, and simply mirroring the US’s tariffs would impose significant pain and will not be the EU’s first choice. It may happen anyway, but it’s a harder decision for the EU than for China.
To me, really, the interesting thing is that _the US_ thinks it can do it. Given how annoyed the American public got about expensive eggs, I really question whether that is true; I don’t think people will sit back and go “cool, I can’t afford consumer goods now, and I’ve just lost my job, but it is all part of Dear Leader’s plan.” The idea, which Trump has actually vocalised, that the American public will tolerate pain seems totally at odds with, well, history.
Just trying to explain it to you made me think of a very good reason why an MCP is preferable to just telling it to fetch a page. When you tell ChatGPT or Sonnet or even cursor/windsurf/whatever to fetch a website, do you know exactly what it is fetching? Does it load the raw HTML into the context? Does it parse the page and return just the text? What about the navigation elements, footer, and other “noise”? Or does it have the LLM itself waste precious context window trying to figure the page out? Is it loading the entire page into context or truncating it? And if it is truncated, how is the truncation being done?
With an MCP there is no question about what gets fed to the model. It’s exactly what you programmed to feed into it.
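To make that concrete, here’s a minimal sketch of such a server using the official TypeScript MCP SDK (the tool name, the regex cleanup, and the 8,000-character cap are my illustrative choices, not anything prescribed by MCP):

    // A tiny MCP server exposing one tool over stdio. Whatever this handler
    // returns is exactly what the model receives -- no hidden preprocessing.
    import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
    import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
    import { z } from "zod";

    const server = new McpServer({ name: "fetch-page", version: "1.0.0" });

    server.tool("fetch_page", { url: z.string().url() }, async ({ url }) => {
      const res = await fetch(url);
      const html = await res.text();
      // Deterministic cleanup: drop scripts and tags, collapse whitespace,
      // then truncate at a fixed length you chose, not one the client guessed.
      const text = html
        .replace(/<script[\s\S]*?<\/script>/gi, " ")
        .replace(/<[^>]+>/g, " ")
        .replace(/\s+/g, " ")
        .slice(0, 8000);
      return { content: [{ type: "text", text }] };
    });

    await server.connect(new StdioServerTransport());

However the page gets trimmed, the model sees exactly that string and nothing else.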
I’d argue that right there is one of the key reasons you’d want to use MCP over prompting it to fetch a page.
There are many others too, though, like exposing your database via MCP rather than having it run random “psql” commands and then parsing whatever the command returns. Another example is letting it paw through Splunk logs using an MCP, which provides a structured way for the LLM to both write queries and handle the results… note that even calling out to your shell is done via an MCP.
It’s also a stateful protocol, though I haven’t really explored that aspect.
It’s one of those things where, once you play with it, you’ll go “oh yeah, I see how this fits into the puzzle”. Once you see it, it becomes pretty cool.
MCP is written for the AI we’ve got, not the one that those doing all the hyping want us to believe exists.
With a long enough context window the difference wouldn’t matter. But “long enough” in this context means, to me, a length you view as big enough that size no longer matters. Kind of like modern hard drives that are “big enough that I don’t care about a 1 GB file” (I was first thinking of megabyte files, but that might be off by too large an order of magnitude).
The question would be whether, with AI assistants, you need to go through the effort of enforcing MECE (mutually exclusive, collectively exhaustive) or just strive for exhaustiveness and let them sort out the repetition. I am also wondering whether there is a signal in repetition, or even in conflicts, in documentation.