This goes both ways: your understanding of what you read is constructed from the worldview you bring to it as context.
This is how we are able to use ambiguous language, also known as "Context-Sensitive Grammar": we supply our own backstory (context), which provides a Just-In-Time unambiguous definition of each ambiguous word or phrase.
There is a lot of excitement around LLMs because they are capable of doing something with Context-Sensitive Grammar. Unfortunately, that something is not equivalent to the arbitrary disambiguation we perform using context. The core feature of an LLM (with respect to natural language processing) is that it never defines anything. Instead, an LLM blindly emits whatever text it deems "most likely to happen next", without ever associating any text with any meaning. We wrap this process in "training", which hopefully nudges common interactions in the direction we want, but that is never a guarantee.
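To make the "most likely to happen next" point concrete, here is a deliberately tiny sketch (my own illustration, not how a real LLM is built): a bigram model that picks the next word purely from co-occurrence counts. The corpus and function names are invented for the example. Note that it never consults a definition of any word; it only counts which word tends to follow which.

```python
from collections import Counter, defaultdict

# Toy corpus: "bank" appears in two unrelated senses, but the model
# never knows that -- it only sees sequences of tokens.
corpus = "the bank raised the rate the bank of the river the rate rose".split()

# Count how often each word follows each other word.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def most_likely_next(word):
    # Emit the statistically most frequent successor.
    # No meaning is attached to either word at any point.
    return following[word].most_common(1)[0][0]

print(most_likely_next("the"))  # chosen by raw frequency alone
```

A real LLM replaces the bigram table with a neural network conditioned on a long context window, but the character of the operation is the same: it produces a likely continuation, not a defined one.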