Hacker News

I think the question of whether LLMs are AGI (or "actually" intelligent) is interesting, but also somewhat beside the point.

We have LLMs that can read and respond, we have systems that can interpret images and sound/speech, and we have plugins that can connect generated output to API calls whose results feed back in.

Essentially this means that we could already go from "You are an automated home security system. From the front door camera you see someone trying to break in. What do you do?" - to actually building such a system.
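That "chaining" is mechanically simple, which is part of the point. A minimal sketch of the loop, with a stubbed-out model in place of a real LLM API (all function and tool names here are hypothetical, for illustration only):

```python
import json

# Hypothetical "plugins": each tool is just a function the model's
# output can be routed to. In a real system these would hit real APIs.
def call_911(reason):
    return f"dispatched police: {reason}"

def send_alert(message):
    return f"alert sent: {message}"

TOOLS = {"call_911": call_911, "send_alert": send_alert}

def fake_llm(messages):
    """Stand-in for a real model call. A real system would send
    `messages` to an LLM that can emit tool-call JSON."""
    last = messages[-1]["content"]
    if "breaking in" in last:
        return json.dumps({"tool": "call_911",
                           "args": {"reason": "intruder at front door"}})
    return json.dumps({"tool": None, "answer": "no action needed"})

def agent_step(messages):
    """One turn of the loop: model output -> tool dispatch ->
    result appended back into the conversation."""
    reply = json.loads(fake_llm(messages))
    if reply.get("tool") in TOOLS:
        result = TOOLS[reply["tool"]](**reply["args"])
        messages.append({"role": "tool", "content": result})
        return result
    return reply.get("answer")

msgs = [{"role": "system",
         "content": "You are an automated home security system."},
        {"role": "user",
         "content": "Camera: someone is breaking in at the front door."}]
print(agent_step(msgs))
```

Nothing in the loop itself knows whether the "intruder" is real, which is exactly where the gaping holes come from.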

Maybe it will just place a 911 call, maybe it will deploy a Taser. Maybe the burglar is just a kid in a Halloween costume.

The point is that just because you can chain together a series of AI/autonomous systems today - known, gaping holes and all - doesn't mean you should.

Ed: Crucially, the technology is here (in "Lego parts") to construct systems with, for all intents and purposes, real "agency" - systems that interact both with the real world and with our data (think: purchasing a flight based on an email sent to your inbox).

I don't think it really matters whether these simulacra embody AGI, as long as they already demonstrate agency. Ed2: Or demonstrate behavior so complex that it is indistinguishable from agency to us.




This is also the understanding I came to a few weeks ago. LLMs themselves won’t be confused with AGI, but LLMs with tools have the potential to be more powerful than we can anticipate. No leap to “proper” AGI is required to live in a future where AGI functionally exists, and as a result the timeline is much shorter than anyone thought five years ago.





