Hacker News

I honestly can’t believe this is the hyped-up “Strawberry” everyone was claiming is practically AGI, with senior employees supposedly leaving because its capabilities were so extreme.

I’m in the “probabilistic token generators aren’t intelligence” camp, so I don’t actually believe in AGI, but I’ll be honest: the never-ending rumors and chatter almost got to me.

Remember, this is the model a media outlet recently reported was so powerful that OAI is considering charging $2k/month for it.




The whole safety aspect of AI has the convenient property of doubling as a marketing tool: it makes the technology seem “so powerful it’s dangerous”, and if it’s that dangerous, it must be good.


> probabilistic token generators aren’t intelligence

Maybe this has been extensively discussed before, but since I've lived under a rock: which parts of intelligence do you think are not representable as conditional probability distributions?


> which parts of intelligence do you think are not representable as conditional probability distributions

Maybe I'm wrong here, but a lot of our brilliance comes from acting against the statistical consensus. What I mean is: Nicolaus Copernicus probably consumed a lot of knowledge about how the Earth is the center of the universe, and probably nothing contradicting that notion. Can an LLM do that?


It could be “probability of the token being useful” rather than “probability of the token coming next in the training data”!
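A toy sketch of that distinction, with made-up token names, logits, and “usefulness” scores (this is my own illustration, not how any actual model is implemented): both objectives produce a conditional distribution over next tokens; only the scores shaping it differ.

```python
import math

def softmax(scores):
    """Convert raw scores into a probability distribution."""
    exps = [math.exp(s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical candidate next tokens and scores.
tokens = ["the", "Earth", "Sun"]
likelihood_logits = [2.0, 1.0, 0.5]   # "what usually comes next in the data"
usefulness_scores = [0.1, 0.3, 2.5]   # "what a reward signal prefers"

p_likelihood = softmax(likelihood_logits)
p_useful = softmax([l + u for l, u in zip(likelihood_logits, usefulness_scores)])

# Both are conditional distributions; adding the usefulness term
# shifts which token is favored.
print(max(zip(p_likelihood, tokens)))  # favored by raw next-token frequency
print(max(zip(p_useful, tokens)))      # favored once usefulness is added
```

The point is just that “probabilistic” doesn’t pin down what the probabilities are optimized for.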


Copernicus was an exception, not the rule. Would you say everyone else who lived at the time was not 'really' intelligent?


That's an illogical counterargument: the absence of published research output does not imply the absence of intelligence. What if someone was intelligent but simply wasn't interested in astronomy?


Yes, but that was just an obvious example. The question still stands: if you feed an LLM a certain kind of data, is it possible it strays from it completely, like we sometimes do, in cases big and small, when we figure out how to do something a bit better by not following convention?


And how many people actively do that? It's very rare that we experience brilliance, and often we stumble upon it by accident: irrational behavior, coincidence, or perhaps being dropped on their heads when young.


"Senior employees leaving due to its powers being so extreme"

This never happened. No one said it happened.

"the model some media outlet reported recently that is so powerful OAI is considering charging $2k/month for"

The Information reported that someone at a meeting suggested this price for future models, not specifically Strawberry, and that it would probably not actually end up that high.


“Elon Musk and Ilya Sutskever Have Warned About OpenAI’s ‘Strawberry’” (Jul 15, 2024):

> Sutskever himself had reportedly begun to worry about the project's technology, as did OpenAI employees working on A.I. safety at the time.

https://observer.com/2024/07/openai-employees-concerns-straw...

And I’m ignoring the hundreds of Reddit threads speculating every time someone at OAI leaves.

And of course that $2000 article was spread by every other media outlet like wildfire

I know I’m partially to blame for believing the hype. This is pretty obviously no better at stating facts or writing good code than what we’ve had for the past year.


My hypothesis about these people who are afraid of AI is that they have tricked themselves into believing they are in their current positions of influence due to their own intelligence (as opposed to luck, connections, etc.).

Then they drink the marketing Kool-Aid, and it follows naturally that they worry an AI system could obtain similar positions of influence.


I mean, considering how many tokens their example prompt consumed, I wouldn't be surprised if it costs ~$2k/month per user to run.



