Yep, it's funny how one of the key factors that limits LLM usage is just the typing speed of users.
I agree that the amount of bespoke UI that needs to exist probably won't stagnate. Humans need about the same amount of visual information to verify a task was done correctly as they need to do the task.
LLM-generated UI is an interesting field. Sure, you can get ChatGPT to generate a schema that lays out some buttons. But it seems much harder to identify the context and relevant information that must be displayed for the human to be a valuable/necessary part of the process.
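To make the distinction concrete, here's a minimal sketch of the "easy part": a toy schema an LLM might plausibly emit, plus a renderer. The schema shape (`type`/`label`/`action` fields) and all names are illustrative assumptions, not any real product's format. Note how the schema says nothing about what surrounding context the human needs to judge the decision.

```typescript
// Hypothetical schema an LLM might emit for a simple approval UI.
// The field names here are assumptions for illustration only.
type UIButton = { type: "button"; label: string; action: string };
type UISchema = { title: string; controls: UIButton[] };

const schema: UISchema = {
  title: "Confirm refund of $42.00",
  controls: [
    { type: "button", label: "Approve", action: "approve_refund" },
    { type: "button", label: "Reject", action: "reject_refund" },
  ],
};

// Rendering buttons from a schema is straightforward; deciding which
// context (order history, fraud signals, policy) must appear alongside
// them -- so the human adds real value -- is the hard, unsolved part.
function render(ui: UISchema): string {
  const buttons = ui.controls
    .map((c) => `[${c.label} -> ${c.action}]`)
    .join(" ");
  return `${ui.title}\n${buttons}`;
}

console.log(render(schema));
```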