Recurrent neural networks can be used for arbitrary computations; their equivalence to Turing machines has been proven. However, they are utterly impractical for the task.
> Recurrent neural networks can be used for arbitrary computations; their equivalence to Turing machines has been proven. However, they are utterly impractical for the task.
Karpathy's 2015 RNN article [1] demonstrated that RNNs trained character-wise on Shakespeare's works could produce Shakespeare-esque text (albeit without the narrative coherence of LLMs). Given that, why wouldn't they be able to handle natural language as formulaic as code review comments?
In that case, inference was run with randomized inputs to generate random "Shakespeare", but the RNN still learned the structure and style of the language. Perhaps it could be used for classification as well.
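For concreteness, here is a minimal sketch of that classification idea, assuming PyTorch: instead of sampling the next character from the output distribution as in the Shakespeare demo, the final hidden state of a character-level RNN summarizes the whole input and feeds a small classifier head. The class name, hyperparameters, and byte-level vocabulary below are illustrative assumptions, not anything from the article.

```python
import torch
import torch.nn as nn

class CharRNNClassifier(nn.Module):
    """Character-level RNN repurposed for classification (hypothetical sketch)."""

    def __init__(self, vocab_size, embed_dim=64, hidden_dim=128, num_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.rnn = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, char_ids):          # char_ids: (batch, seq_len) int64
        x = self.embed(char_ids)          # (batch, seq_len, embed_dim)
        _, h_n = self.rnn(x)              # h_n: (1, batch, hidden_dim)
        return self.head(h_n.squeeze(0))  # logits: (batch, num_classes)

# Usage: map each character of a comment to an id, then classify.
model = CharRNNClassifier(vocab_size=256)   # e.g. raw byte values
batch = torch.randint(0, 256, (4, 80))      # 4 comments, 80 chars each
logits = model(batch)                       # (4, 2)
```

The only structural change from the generative setup is the output: a single logit vector from the final hidden state rather than a per-step distribution over next characters.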
This seems to be a state machine that is somehow learned. The article could benefit from a longer synopsis, and "Python" does not appear to be relevant at all. Learning real Python semantics would prove quite difficult given the nature of the language (there is no standard; the rule is just "do as CPython does").