The one case where this doesn't work is if the prompt was, say, 3 ideas, which the LLM expanded to 20, and the colleague then trimmed back down to 10.
Ideally there's some selection done, and the fact that you're receiving it means it's better than an average answer. But sometimes they haven't even read the LLM output themselves :-(