"LLMs can train on the official documentation of tools l/libraries but they can't experiment and figure out solutions to weird problems"
LLMs train on way more than just the official documentation: they train on the code itself, the unit tests for that code (which, for well-written projects, cover all sorts of undocumented edge cases) and - for popular projects - thousands of examples of that library being used (and unit tested) "in the wild".
This is why LLMs are so effective at helping figure out edge cases in widely used libraries.
The best coding LLMs are also trained on additional custom examples written by humans who were paid to build proprietary training data for those LLMs.
I suspect they are increasingly trained on synthetically generated examples which have been validated (to a certain extent) by executing the code before adding it to the training data. That's a unique advantage for code - it's a lot harder to "verify" generated prose, since you can't execute prose and see if you get an error.
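A minimal sketch of that kind of execution filter (the candidate snippets here are made up, and real pipelines go much further - sandboxing the execution, running test suites and checking outputs rather than just exit codes):

    import os
    import subprocess
    import sys
    import tempfile

    def passes_execution_check(code: str, timeout: float = 5.0) -> bool:
        # Write the candidate snippet to a temp file and run it in a
        # subprocess; keep it only if it exits cleanly within the timeout.
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(code)
            path = f.name
        try:
            result = subprocess.run(
                [sys.executable, path],
                capture_output=True,
                timeout=timeout,
            )
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(path)

    # Hypothetical model-generated candidates: only the first survives.
    candidates = [
        "print(sum(range(10)))",      # runs cleanly -> keep
        "print(undefined_variable)",  # NameError -> discard
    ]
    validated = [c for c in candidates if passes_execution_check(c)]
    print(f"kept {len(validated)} of {len(candidates)} candidates")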