When I think about LLMs reaching their peak at writing code, I can't help but imagine them writing hyper-optimized code that squeezes every last bit of processing power available to them.
I use these tools to get help here and there with tiny code snippets. So far they have never suggested anything finely optimised. I guess it's because most of the code they were trained on isn't optimised for performance.
Does anyone know if any current LLMs can generate super optimised code (even assembly)? I don't think so. It doesn't feel like we are going to have machines more intelligent than us in the future if they're full of slop.
Nope, I've tried various models (the common stuff: Claude 3.7, o1, R1) to write SIMD code, both as C++ intrinsics and as .NET/C# vector intrinsics, and the results have been really subpar.
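For anyone unfamiliar, here's a minimal sketch of the kind of hand-written SIMD the comment above is describing: summing a float array with AVX intrinsics in C++. This is an illustrative example of the style of code being asked for, not output from any of the models mentioned; the function name and scalar tail handling are my own choices.

```cpp
// Sum a float array using 256-bit AVX intrinsics (illustrative sketch only).
#include <immintrin.h>
#include <cstddef>

float sum_avx(const float* data, std::size_t n) {
    __m256 acc = _mm256_setzero_ps();            // eight float lanes, all zero
    std::size_t i = 0;
    for (; i + 8 <= n; i += 8) {
        // Unaligned load of eight floats, accumulated lane-wise.
        acc = _mm256_add_ps(acc, _mm256_loadu_ps(data + i));
    }
    // Horizontal reduction of the eight lanes.
    alignas(32) float lanes[8];
    _mm256_store_ps(lanes, acc);
    float total = lanes[0] + lanes[1] + lanes[2] + lanes[3]
                + lanes[4] + lanes[5] + lanes[6] + lanes[7];
    for (; i < n; ++i) total += data[i];         // scalar tail for leftovers
    return total;
}
```

Getting this sort of thing right by hand means reasoning about lane widths, alignment, and tail handling, which is exactly where the models tend to fall over.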