
Recurrent neural networks can be used for arbitrary computations; their equivalence to Turing machines has been proven. However, they are utterly impractical for the task.

This seems to be a state machine that is somehow learned. The article could benefit from a longer synopsis, and "Python" does not appear to be relevant at all. Learning real Python semantics would prove quite difficult due to the nature of the language (there is no standard; the semantics are "do as CPython does").
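The "learned state machine" framing can be made concrete with a toy recurrent cell. Here the weights are hand-chosen rather than learned (a sketch of the idea, not anything from the article): a single recurrent unit with threshold activations tracks the parity of a bit stream, i.e. a two-state machine.

```python
def step(z):
    # hard-threshold activation
    return 1.0 if z > 0 else 0.0

def rnn_cell(h, x):
    # two hidden units with hand-chosen weights, combined so that the
    # new state is h XOR x -- the transition function of a parity FSM
    a = step(h + x - 0.5)      # fires if h OR x
    b = step(h + x - 1.5)      # fires if h AND x
    return step(a - b - 0.5)   # (h OR x) AND NOT (h AND x) == h XOR x

def parity(bits):
    h = 0.0                    # hidden state = parity seen so far
    for x in bits:
        h = rnn_cell(h, x)
    return h
```

For example, `parity([1, 0, 1, 1])` returns `1.0` (an odd number of ones). Training would mean finding such weights from data instead of writing them down.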

> Recurrent neural networks can be used for arbitrary computations; their equivalence to Turing machines has been proven. However, they are utterly impractical for the task.

Karpathy's 2015 RNN article [1] demonstrated that RNNs trained character-wise on Shakespeare's works could produce Shakespeare-esque text (albeit without the narrative coherence of LLMs). Given that, why wouldn't they be able to handle natural language as formulaic as code review comments?

In that case, inference was run with randomized inputs to generate random "Shakespeare", but the RNN had still learned the structure and style of the language. Perhaps it could be used for classification as well.

1. https://karpathy.github.io/2015/05/21/rnn-effectiveness/
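The "randomized inputs" part is just sampling: at each step the network outputs a distribution over characters, one character is drawn from it (usually with a temperature knob), and that character is fed back in. A minimal sketch of that loop, with the trained network's forward pass stubbed out as a hypothetical `next_logits` function:

```python
import math
import random

def sample(logits, temperature=1.0, rng=random):
    # softmax with temperature, then draw one index from the distribution
    scaled = [l / temperature for l in logits]
    m = max(scaled)                       # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    r = rng.random()
    acc = 0.0
    for i, e in enumerate(exps):
        acc += e / total
        if r < acc:
            return i
    return len(logits) - 1

def generate(next_logits, vocab, n, seed_idx=0, temperature=1.0):
    # next_logits(last_idx, state) -> (logits, state) stands in for the
    # trained RNN's forward pass (hypothetical, not from the article)
    out, h = [seed_idx], None
    for _ in range(n):
        logits, h = next_logits(out[-1], h)
        out.append(sample(logits, temperature))
    return "".join(vocab[i] for i in out)
```

Lower temperatures sharpen the distribution toward the most likely character; higher ones make the output more random.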


For a sense of what modern RNNs can do, RWKV is worth a look [1].

It's billed as "an RNN with GPT-level LLM performance".

[1] https://www.rwkv.com/
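Very roughly, what lets RWKV run as an RNN is that its attention-like mixing can be written as a recurrence: an exponentially decayed weighted average over past values, updated in O(1) per token rather than attending over the whole history. A heavily simplified scalar sketch of that idea (the real model has per-channel decays, time-mixing, channel-mixing, and extra terms this omits):

```python
import math

def wkv_step(num, den, k, v, w=0.9):
    # one step of a decayed weighted average: larger keys (k) weight their
    # values (v) more, and older contributions decay by w each step
    num = w * num + math.exp(k) * v
    den = w * den + math.exp(k)
    return num, den, num / den

# feed a stream of (key, value) pairs; the carried state is just two scalars
num = den = 0.0
outputs = []
for k, v in [(0.0, 1.0), (0.0, 1.0), (0.0, 1.0)]:
    num, den, out = wkv_step(num, den, k, v)
    outputs.append(out)
```

The point of the sketch is the state size: unlike a transformer, nothing here grows with the sequence length.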
