
I would expect it to result in a large and slow Turing machine instead of a small neural network.



So far, betting on RISC over CISC in terms of ultimate hardware performance has been a good bet.


Both RISC and CISC are usually used in the context of describing Turing-complete instruction sets. I'm not sure that's relevant here?

If you want to make a comparison in this flavour: Turing machines are a bit like CPUs in that they can execute arbitrary things in sequence. All the flavours of machine learning are more like GPUs: they do well with oodles of big, parallelisable matrix multiplications interspersed with some simple non-linear transformations.
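To make that concrete, here's a minimal sketch in plain Python/numpy (the layer sizes and the tanh nonlinearity are just illustrative assumptions, not any particular model): the heavy work in a forward pass is a couple of big, parallelisable matrix multiplications, with only a cheap elementwise nonlinearity in between. That shape of workload is exactly what GPUs are built for.

    import numpy as np

    rng = np.random.default_rng(0)

    # Two illustrative dense layers: the expensive part is the matmuls,
    # which parallelise well; the nonlinearity is a cheap elementwise op.
    W1 = rng.standard_normal((512, 1024))
    W2 = rng.standard_normal((1024, 10))

    def forward(x):
        h = np.tanh(x @ W1)   # big parallelisable matmul + elementwise nonlinearity
        return h @ W2         # another big matmul

    batch = rng.standard_normal((64, 512))
    out = forward(batch)      # shape (64, 10)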


These words, I do not think that they mean what you hope they will mean.


Well, we're talking about a "native" implementation of both for comparison, right? Neural nets as they're being used are just emulated on our Turing-machine-like processors, which is why they run like ass in practice. Something like an analog circuit that sums voltages would be a native NN implementation, and it would surely vastly outperform any Turing machine on wide, highly parallel, memory-heavy tasks that suit it. Either one emulating the other is slow and bloated.
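A rough way to picture the emulation overhead (a toy sketch, not a benchmark; the sizes are made up): a neuron in an analog circuit sums all its weighted input voltages in one physical step, while a sequential machine has to grind through the multiply-accumulates one at a time. The vectorised call below stands in for the single parallel "native" step.

    import numpy as np

    rng = np.random.default_rng(1)
    weights = rng.standard_normal(100_000)
    voltages = rng.standard_normal(100_000)

    # "Native" view: an analog adder sums all weighted inputs at once.
    # The vectorised dot product stands in for that one parallel step.
    native = np.dot(weights, voltages)

    # Emulated view: a sequential machine performs one
    # multiply-accumulate per step, like a Turing machine would.
    acc = 0.0
    for w, v in zip(weights, voltages):
        acc += w * v

    assert np.isclose(native, acc)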



