It’s almost as if human intelligence doesn’t involve performing repeated matrix multiplications over a mathematically transformed copy of the internet. ;-)
It’s interesting that even if raw computing power had advanced decades earlier, this type of AI would still not be possible without that vast trove of data that is the internet.
Maybe the problem isn't the algorithm but the hardware. Numerically simulating the thermal flow in a lightbulb or the CFD of a stone flying through the air is pretty hard, but the physical thing itself isn't complex at all. We're trying to simulate the function of a brain, which is basically an analog thing, on a digital computer. Of course that can be harder than running the brain itself.
If you think of human neurons, they seem to basically take inputs from a bunch of other neurons, possibly modified by chemical levels, and send out a signal when they get enough. It seems like something that could be functionally simulated in software by some fairly basic adding-up-of-inputs stuff, rather than needing the details of all the chemistry.
Isn’t that exactly what we’re currently doing? The problem is that doing this a few billion times for every token seems to be harder than just powering some actual neurons with sugar.
The algorithm of a neural network simulates connections between nodes with specific weights and an activation function. The idea was derived from the way neurons are thought to work.
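A minimal sketch of that weighted-sum-plus-activation idea in Python (the numbers, names, and the choice of a sigmoid are purely illustrative, not taken from any real model):

    import numpy as np

    def neuron(inputs, weights, bias):
        # weighted sum of incoming signals, like charge building up in a cell
        z = np.dot(inputs, weights) + bias
        # activation function: output rises smoothly once the sum is large enough
        return 1.0 / (1.0 + np.exp(-z))

    x = np.array([0.2, 0.9, 0.4])    # signals from upstream neurons
    w = np.array([0.5, -1.2, 0.8])   # connection strengths (weights)
    print(neuron(x, w, bias=0.1))    # output passed on to the next layer

A real network is just millions of these stacked in layers, with the weights tuned by training rather than set by hand.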
20 W [1] for 20 years to answer questions slowly, and with errors, at the level of a 30B model.
An additional 10 years with highly trained supervision and the brain might start contributing original work.
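Back-of-the-envelope on that 20 W figure (the comparison at the end is a rough, commonly cited estimate, not a measured number):

    power_w = 20                        # approximate power draw of a human brain
    hours = 20 * 365.25 * 24            # ~175,000 hours in 20 years
    energy_kwh = power_w * hours / 1000
    print(round(energy_kwh))            # ~3,500 kWh, i.e. about 3.5 MWh
    # Published estimates for training a single large language model are
    # usually in the GWh range -- hundreds of times this 20-year budget.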
> Until we get major advances in robotics and models designed to control them, true AGI will be nowhere near.
AGI has nothing to do with robotics. If AGI is achieved, it will push robotics and every other scientific field forward at a pace never seen before; imagine a million AGIs running in parallel, all focused on a single field.
[1] https://hypertextbook.com/facts/2001/JacquelineLing.shtml