If you look in the discussion section, you'll find that wasn't exactly what the study ended up with. I'm looking at the paragraph starting:
> An unexpected secondary result was that the LLM alone performed significantly better than both groups of humans, similar to a recent study with different LLM technology.
They suspected that the clinicians were not prompting it right, since the LLM without humans was observed to outperform the LLM with skilled operators.
Exactly - if even the doctors/clinicians are not "prompting it right," then what are the odds that the layperson is going to get it to behave and give accurate diagnoses, rather than just confirm their pre-existing biases?