
I don't think it's that easy. Even people who are highly intelligent, capable, and competent in many different ways can still hold one or two really weird outlier beliefs. Unless you ask them about those specific things, you won't necessarily be able to tell.



It's not just the outlier beliefs about sentience though; the man clearly did not understand how these models work. There are parts of the conversation where he seems to think he has taught the system 'transcendental meditation', which is nonsensical: these transformer models don't learn interactively from individual conversations, they're trained once on fixed datasets, and they don't do anything unless you query them.

It's not like LaMDA sits around and ponders the nature of the universe when you're not talking to it.
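
To make the training/inference split concrete, here's a toy sketch in Python (entirely hypothetical names, nothing to do with Google's actual serving code): the weights are produced once, offline, and nothing in a chat can ever modify them.

    # Toy illustration of the training/inference split. All names are
    # hypothetical; this is not how LaMDA is actually served.

    def train(corpus):
        """Happens once, offline. Produces fixed weights."""
        return {"vocab": sorted(set(" ".join(corpus).split()))}

    def generate_reply(weights, prompt):
        """Inference: a read-only function of (weights, prompt).
        Nothing here mutates `weights`, and no code runs between calls."""
        known = [w for w in prompt.split() if w in weights["vocab"]]
        return f"I recognize {len(known)} of your words."

    weights = train(["the cat sat", "the dog ran"])
    print(generate_reply(weights, "did the cat learn meditation"))
    # "Teaching" it in conversation is impossible: `weights` never
    # changes, and between queries nothing is executing at all.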


>It's not like LaMDA sits around and ponders the nature of the universe when you're not talking to it.

But can we really be sure of that? We don't know much about the technology itself. What if Google is ticking the model at some fixed rate, allowing the neural network to "think" continuously?


It is a pure function; it has no memory, not even short-term state. Thinking implies some mental state that can change, and these neural nets don't have that.
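
In toy form (hypothetical, and ignoring sampling temperature, which adds randomness but not memory), the pure-function claim just means same input, same output, with no state surviving between calls:

    # Sketch of the pure-function view (toy stand-in for the network).

    def model(prompt):
        # Deterministic function of the input alone: no globals read
        # or written, nothing persists between invocations.
        return prompt.upper()[:20]

    assert model("are you conscious?") == model("are you conscious?")
    # There is no "mental state" left behind that a second call
    # could observe or alter.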


> it has no memory, not even short-term state

It does have memory/state in the sense that the whole conversation so far is fed back in as context on every turn. What it's not doing is contemplating between inputs. If it were conscious, its experience would be like falling asleep, having someone wake you up to ask you a question, then falling asleep again right after answering.
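
In toy form (a hypothetical chat wrapper, not Google's actual stack), all the apparent "memory" is a transcript the caller keeps and re-sends:

    # Hypothetical chat loop: the model's only "memory" is the
    # transcript the caller re-sends. Between calls, nothing runs.

    def model(context):
        # Stateless stand-in: the reply depends only on the input.
        return f"(reply seeing {context.count('User:')} user turn(s))"

    transcript = ""
    for question in ["hello", "what did I just say?"]:
        transcript += f"User: {question}\n"
        answer = model(transcript)          # full history passed in each time
        transcript += f"Model: {answer}\n"  # "memory" = caller-side string

    print(transcript)
    # Wake up, read the whole transcript, answer, "fall asleep" again.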


Yes, we can.



