While I agree with them, I've found that a lot of the other responses don't really help you understand where you misunderstood the situation.
AI performance often improves at a logarithmic rate: each new increment of data or compute buys a smaller gain than the last. Simply put, it will likely hit a ceiling, and hit it hard. For a frame of reference, think of all the places AI/ML already facilitates parts of your life (autocomplete, facial recognition, etc.). Each of those eventually hit a plateau that made them unexciting. LLMs are destined for the same. Some will disagree, because their novelty is so enthralling, but at the end of the day, LLMs learned to engage with language in a rather superficial way compared to how we do. As such, they will never capture the magic of denotation, the way words actually point to things in the world. Their ceiling is coming, and quickly, though I expect a few more emergent properties to appear before that point.
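If the shape of that claim isn't obvious, here's a toy sketch of what a logarithmic relationship between scale and performance looks like. The numbers are purely made up for illustration (no real benchmark or scaling law behind them); the point is just that every 10x increase in input buys the same small bump in output.

```python
import math

def toy_score(scale: float) -> float:
    # Hypothetical scaling curve: score = 10 * log10(scale), capped at 100.
    # Not a real benchmark, just an illustration of diminishing returns.
    return min(100.0, 10.0 * math.log10(scale))

# Each row multiplies "scale" (compute/data/params) by 10x, yet the score
# only climbs by a fixed increment, so the gain per unit of scale keeps shrinking.
for scale in (1e3, 1e4, 1e5, 1e6, 1e7):
    print(f"scale {scale:10.0e} -> toy score {toy_score(scale):5.1f}")
```

That flattening is all I mean by a ceiling: the curve never technically stops rising, but the returns get thin enough that it stops feeling like progress.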