> But if I adopt Elon's caution toward the technology, then I'm not sure if I agree with his reasoning. If he believes in the potential harm of AI, then supporting its widespread use doesn't seem logical. If you take the quote above, and substitute the word "guns" for "AI", you basically have the NRA, and the NRA is not making the world a safer place.

They aren't interchangeable concepts, though: guns can only be used to harm or threaten harm. Artificial general intelligence could invent ways to harm but could also invent ways to anticipate, defend against, prevent, mitigate, and repair harm.

> AI appears to be impossible to regulate.

It could be regulated if there were extremely authoritarian restrictions on all computing. But such a state would be (1) impractical on a global scale, (2) probably undesirable to most people, and (3) fuel for extremist responses and secretive AI development.

> If an AGI is possible, then it is inevitable.

The only thing that could preclude the possibility of creating AGI would be if there were something magical required for human-level reasoning and consciousness. If there's no magic, then everything "human" emerges from physical phenomena, and whatever emerges from physical phenomena can in principle be engineered. In other words, short of a sudden catastrophe that wipes humanity out or makes further technological development impossible, we are going to create AGI.

Personally, I think that Musk and the OpenAI group may already have a vision for how to make it happen. Figuring out how to make neural networks work at human-comparable levels on tasks like machine vision was the hardest part, IMO. Once you have that, break down how the brain would have to work (or could work) to perform its various functions, limit yourself to neural networks as building blocks, and it's not that difficult to come up with a synthetic architecture that performs all of the same functions, provided you steer clear of magical thinking about things like free will.
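
To make "neural networks as building blocks" concrete, here's a toy sketch. It's purely illustrative: the module split, every name, and every dimension are invented for the example, and PyTorch is just a convenient choice.

    # Illustrative only: an "agent" assembled from ordinary neural-network
    # modules, one per brain-like function. All names/sizes are invented.
    import torch
    import torch.nn as nn

    class ToyAgent(nn.Module):
        def __init__(self, obs_dim=64, hidden_dim=128, action_dim=8):
            super().__init__()
            # "Perception": map raw observations to features.
            self.perception = nn.Sequential(
                nn.Linear(obs_dim, hidden_dim), nn.ReLU())
            # "Working memory": a recurrent cell carrying state across steps.
            self.memory = nn.GRUCell(hidden_dim, hidden_dim)
            # "Action selection": score possible actions from the memory state.
            self.policy = nn.Linear(hidden_dim, action_dim)

        def forward(self, obs, state):
            features = self.perception(obs)
            state = self.memory(features, state)
            return self.policy(state), state

    agent = ToyAgent()
    state = torch.zeros(1, 128)                      # initial memory state
    logits, state = agent(torch.randn(1, 64), state) # one perception-act step

Obviously this does nothing intelligent by itself; the point is only that "compose networks per function" is an architectural claim one can actually write down, not a solved research program.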




> Figuring out how to make neural networks work at human-comparable levels on tasks like machine vision was the hardest part, IMO. Once you have that, break down how the brain would have to work (or could work) to perform its various functions, limit yourself to neural networks as building blocks, and it's not that difficult to come up with a synthetic architecture that performs all of the same functions, provided you steer clear of magical thinking

Actually, you need a number of things other than neural networks, but... never mind, everyone here is clearly fixated on pro-Musk-Bostrom-bloc vs. anti rather than on the science.



