Just curious: wouldn't this entire enterprise be fraught with danger, though? Given the proclivity of LLMs to hallucinate, how would you (not you per se, but the person engaging with the LLM to learn) avoid being misled by hallucinations?
Being a neophyte in a subject and relying solely on the 'wisdom' of LLMs seems like a surefire recipe for disaster.
If you trust symbols blindly, sure, it's a hazard. But if you treat the output as a plausible answer rather than ground truth, it's all good. It's still your job to do the heavy lifting: understand the ___domain of the latent search space, curate the answers, and verify the information generated.
There is no free lunch. LLMs aren't made to make your life easier; they're made to let you focus on what matters, which is the creation of meaning.
I really don't understand your response. A better way to ask the same question would probably be: would you rather learn numerical methods from (a video of) Mr. Hamming, or from an LLM?
> Sontag argues that the proliferation of photographic images had begun to establish within people a "chronic voyeuristic relation to the world."[1] Among the consequences of this practice of photography is that the meaning of all events is leveled and made equal.
This is the same with photography as with LLMs, and indeed with anything symbolic: it's just a representation of reality. If you trust a photograph fully, it can give you a picture of the world that isn't grounded in reality. That's semiotics. Same with an LLM: trust it fully and you are bound to get screwed by hallucination.
There are gaps in the logical jumps here, I know. I'd recommend listening to Philosophize This' episodes about Sontag's work to fill them in, at least superficially.