Right now, DALL-E works like this: you enter a prompt, it gives you a few images, you refine your prompt, and after a few rounds you pick an image and perhaps polish it before you can use it.
What would the co-pilot equivalent be? You give it a prompt, it generates a few repos, but you still have to understand the whole project to judge whether it respects the system's needs and constraints. After a few rounds, you may still have to change the code a little, but 90% of the code will have been written by the machine. I, for one, welcome this future -- because once again, this "prompt refinement" iteration is where our human creativity is used most effectively; I think it's a significant improvement over writing boilerplate code every time.