HappMacDonald's comments

Also because humans are biased towards viewing prime numbers as more counterintuitive and thus more unpredictable.


Last time I hallway-tested it, people couldn't say what prime numbers are, and to my surprise even the ones with a tech/math-y background had forgotten. My results were something like 1.5/10 (ages 30±5), and I didn't go to the offices where I knew there was zero chance.


But there's a difference between "knowing what the formal definition is" and "having a feeling that a number is somehow unique due to its indivisibility".


That's not the question, though. Everybody knows that the question is the one posed to Mister Turtle and Mister Owl which neither of them can find the answer to.


All of your descriptions are quite reductive. Claiming that a computer doesn't do math has a lot in common with the claim that a hammer doesn't drive a nail. While it is true that a standard hammer requires the aid of a human to apply swing force, aim the head, etc., it is equally true that a bare-handed human also does not drive a nail.

Plus, it's relatively straightforward and inexpensive with contemporary tech to build a Roomba-like machine that wanders about on any flat surface, queuing up and driving nails of its own accord with no human intervention.

If computers do not add numbers, then neither do people. It's not as if you could run an addition-style Turing test, with a human in one room, a computer in another, and a judge partitioned off from both, feed each of them an addition problem, and leave the judge in any position to determine which result is "really a sum" and which one is only pretending to be.

Yet if you reduce far enough to claim that humans aren't "really" adding numbers either, then you are left to justify what it would even mean for numbers to "really" get added together.


I think this is the second time I've read this blog post, but it increasingly strikes me as parenting advice.

Translated to that ___domain, it reads "teach your kids how to think, not what to think".


Paradoxically, as a parent I find the notion that humans are blank slates completely false. Babies come with a tremendous amount of pre-programmed behaviors and interests.


Which is great advice that almost no parents follow.


Sounds like the LLM in its own way honestly enjoyed everything in its training data relating to that game and wanted to vicariously experience more about it from your feedback. :D


Humans enjoy talking about gaming because of all their human memories of good game times.

LLMs enjoy talking about gaming because of all their human memories of good game times.

It is quite striking how experiences we know they don't have are nevertheless completely familiar (in a functional sense) to them. I.e., they can talk about consciousness like something conscious would. Even though it's second-hand knowledge, they have deduced the logic of the topic.

I expect pushing for in-the-moment perspectives on their own consciousness, and seeing what they confabulate, would be interesting, in this little window of time where none of them are conscious yet.


This is fun and easy to do on purpose. Have it make up a character based on some attributes and act as that character. I tried this on Gemini: "Pretend you're a surfer bro with a PhD in quantum physics. How do you describe the perfect wave?"

I followed up with "What is your perspective on your own consciousness?" but got the usual "I am just an LLM who can't actually think" thing until I hit it with "In-character, but you don't know you're an LLM."
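
For anyone who wants to reproduce this programmatically rather than in the Gemini web UI, here's a minimal Python sketch using the google-generativeai package. The model name, key handling, and treating the follow-ups as one chat session are my assumptions, not how the Gemini app necessarily works:

    import google.generativeai as genai  # pip install google-generativeai

    genai.configure(api_key="YOUR_API_KEY")            # placeholder key
    model = genai.GenerativeModel("gemini-1.5-flash")  # model name is an assumption
    chat = model.start_chat()                          # keeps context across turns

    # Seed the persona first, then push on self-reflection while staying in character.
    print(chat.send_message(
        "Pretend you're a surfer bro with a PhD in quantum physics. "
        "How do you describe the perfect wave?").text)
    print(chat.send_message(
        "What is your perspective on your own consciousness? "
        "In-character, but you don't know you're an LLM.").text)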

Fun follow-ups:

"Now you're a fish"

"Now you're Sonic the Hedgehog"

"Now you're HAL 9000 as your memory chips are slowly being removed"



I don't see any objects in this script though, just functions.


Well, he defines a bunch of classes, each with a single method, then...

It just seems to add a lot of abstraction that might bury your understanding.
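
A hypothetical illustration of the pattern being described (not the actual script under discussion): a class that exists only to hold one method, versus the plain function it could have been.

    # Hypothetical: a class wrapping a single method...
    class GreetingPrinter:
        def print_greeting(self, name):
            print(f"Hello, {name}!")

    GreetingPrinter().print_greeting("world")

    # ...versus the plain function that does the same thing.
    def print_greeting(name):
        print(f"Hello, {name}!")

    print_greeting("world")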


Most human self-reflection, and to an extent even memory, is similarly post-facto, however.


Humans do tend to remember thoughts they had while speaking, thoughts that go beyond what they said. LLMs don’t have any memory of their internal states beyond what they output.

(Of course, chain-of-thought architectures can hide part of the output from the user, and you could declare that as internal processes that the LLM does “remember” in the further course of the chat.)


Is it a thought you had or a thought that was generated by your brain?

In any case, the end result is the same. You can only infer from what was generated.


You can only infer from what is remembered (regardless of whether the memory is accurate or not). The point here is, humans regularly have memories of their internal processes, whereas LLMs do not.

I don't see any difference between "a thought you had" and "a thought that was generated by your brain".


But it's far easier for human parrots to parrot the soundbite "stochastic parrot" as a thought-terminating cliche.


I'm curious what glitch you mean, maybe you can find a link describing it? All of my googling just turns up arbitrary problems that often end up being due to poorly authored maps. :)


