Humans will always have a hand in the design because they need to explain the real-world constraints to the AI. Sure, the code it produces may be complex, but if the AI is really as smart as you're claiming it will eventually be, then it will also have the ability to explain how the code works in plain English (or your human language of choice). Even today, LLMs are remarkably good at summarizing what code does.
Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?
> Philosophical question: how is LLM-produced code that nobody has ever understood any different from human-written legacy code that nobody alive today understands?
- There is no option of paying an obscene amount of money to track down the original author and make the problem 'go away'.
- There is a non-zero possibility that the code is not understandable by any developer you can afford. By this I mean that the system exhibits the desired behavior, but is written in such a way that only someone like Mike Pall* could understand it.