We will never get rid of the "von Neumann bottleneck", except for a relatively small number of niche applications.

The bottleneck arises because, instead of a huge number of specialized automata that together perform everything a useful application needs done, you have just a single universal automaton together with a big memory; the universal automaton can perform anything when given an appropriate program.
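To make that concrete, here is a minimal C sketch (my own illustration, not anything from the article): a streaming sum does one addition per 8 bytes fetched, so the universal automaton spends most of its time waiting on the shared path to memory rather than computing.

  /* Memory-bound streaming sum: one floating-point add per 8 bytes
     fetched, so throughput is set by the memory path, not the ALU. */
  #include <stdio.h>
  #include <stdlib.h>

  static double sum(const double *a, size_t n) {
      double s = 0.0;
      for (size_t i = 0; i < n; i++)
          s += a[i];              /* 1 flop per 8-byte load */
      return s;
  }

  int main(void) {
      size_t n = 1u << 24;        /* ~128 MB, far larger than any cache */
      double *a = malloc(n * sizeof *a);
      if (!a) return 1;
      for (size_t i = 0; i < n; i++) a[i] = 1.0;
      printf("%f\n", sum(a, n));
      free(a);
      return 0;
  }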

The use of a shared automaton prevents many actions from being done concurrently, but it also provides an enormous saving in logic circuits.

The "von Neumann bottleneck" is alleviated by implementing in a computer as many processor cores as possible at a given technological level, each with its own non-shared cache memory.

However, completely removing the concept of a programmable processor with separate memory would multiply the amount of logic circuitry far beyond what any imaginable technology could supply.

The idea of mixing computational circuits with the memory cells is feasible only for restricted, well-defined applications (perhaps something like ML inference), but not for general-purpose applications.
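For what it's worth, the reason inference is the usual candidate shows up in a plain matrix-vector product (again just my own sketch): every weight is fetched once, used for a single multiply-add, and discarded, so weight traffic dominates, and a fixed, regular dataflow like this is exactly what putting arithmetic next to or inside the memory arrays targets.

  /* Dense matrix-vector product, the inner kernel of ML inference.
     Each weight W[i*cols + j] is loaded exactly once and used for a
     single multiply-add, so memory traffic for the weights dominates;
     this fixed, regular dataflow is what compute-in-memory designs aim at. */
  #include <stddef.h>

  void matvec(size_t rows, size_t cols,
              const float *W,   /* rows*cols weights, row-major */
              const float *x,   /* cols inputs  */
              float *y)         /* rows outputs */
  {
      for (size_t i = 0; i < rows; i++) {
          float acc = 0.0f;
          for (size_t j = 0; j < cols; j++)
              acc += W[i * cols + j] * x[j];
          y[i] = acc;
      }
  }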
