No, but it increases the speed and ease with which you can check any of those, making a lot of those steps practical when they were a slog before. If people aren't double-checking LLM claims against sources, then they were never on guard against bad information without an LLM either.
Besides, those are incredibly short-term concerns. Recent models are a whole lot more trustworthy and can search for and cite sources accurately.
Does it? You google a query, get results, and compare a few alternative results against each other. You ask a prompt and then what? Compare outputs to each other? Or just fall back to googling for alternative sources anyway?
Firstly, repeated prompts tend to produce shockingly similar output, so comparing them to each other doesn't buy you much. Secondly, Google tends to rank reputable or self-curated sites, which carry some accountability. They can be wrong, but you know the big news sites at least tend to rely on interviews to back up facts, and Wikipedia has a strict (arguably overly strict) process to keep out blatant, unsourced claims.
There's room for error, but there's at least more accountability there than in whatever process an LLM goes through.
> Recent models are a whole lot more trustworthy and can search for and cite sources accurately.
Lastly, prompt responses are still black boxes, which is a whole other issue. For the above reasons I would still simply defer to human-curated resources. That's what LLMs are drawing on anyway, just without the transparency.
People want to give up transparency for speed? That seems completely counter to hacker culture.