We've had that for years in the DAW: autotune and snapping to a grid.
The result's pretty boring and interchangeable, and that's largely what AI music is trained on. Accuracy is not novel here. Ever since the 80s it's been increasingly possible to augment musical skill, or the lack of it, with technology.
I don't think we're very close to correcting for sloppy intentionality. Only to correcting 'mistakes', or alternatively adding them in, in the belief that doing stuff wrong is where the magic is.
I know I've seen a video where somebody took part of a Led Zeppelin song and snapped all the notes to a grid. What started out wonderful became sterile and boring.
You can snap while still maintaining arbitrary levels and styles of swing. I suspect the video was intentionally framing the correction software as soulless (of course it is, but the limit of its expression is the human technician using it).
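To make "snap while keeping swing" concrete, here's a rough Python sketch of how soft quantization with a swing setting typically works. The function and parameter names are my own invention for illustration, not any real DAW's API:

```python
# Minimal sketch of grid quantization that preserves a swing feel,
# assuming note onsets are given in beats. Hypothetical names throughout.

def quantize(onsets, grid=0.25, swing=0.0, strength=1.0):
    """Snap note onsets to a grid while applying swing.

    onsets   -- list of note start times, in beats
    grid     -- grid resolution in beats (0.25 = 16th notes in 4/4)
    swing    -- fraction of a grid step by which every off-beat grid
                line is pushed late (0.0 = straight, ~0.33 = triplet feel)
    strength -- 0.0 leaves notes untouched, 1.0 snaps them fully
    """
    quantized = []
    for t in onsets:
        step = round(t / grid)      # index of the nearest grid line
        target = step * grid
        if step % 2 == 1:           # off-beat positions get the swing delay
            target += swing * grid
        # Partial strength interpolates between the played time and the
        # target, which is roughly how "soft quantize" settings behave.
        quantized.append(t + strength * (target - t))
    return quantized

# Example: sloppily played 16ths, snapped fully but with swing kept in.
played = [0.02, 0.27, 0.49, 0.77, 1.03, 1.26]
print(quantize(played, grid=0.25, swing=0.2, strength=1.0))
```

The point is that the grid itself can be swung, and the strength dial can stop well short of full correction, so "quantized" doesn't have to mean "metronomic".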
Is there any big difference between using that and just lip-syncing to playback while fake-playing the guitar, as already happens sometimes at concerts?
Until someone makes an AI guitar pedal that corrects sloppy playing.