Thinking about this, I'd also be interested to hear what the author learned and didn't find useful over his PhD. Is this a list of most of what he ended up learning (which could then have a lot of confirmation bias in it), or is it curated from the maze of blind alleys he went down?
So one thing I spent a lot of time on in my PhD (literally a whole year) that I never needed afterwards was functional-analytic methods for PDEs. Stuff like Moser iteration, the De Giorgi-Nash-Moser theorem, etc.
The finer details of Turing machines have never really helped me either, although that's probably just me; I imagine they're still pretty important for plenty of people.
On a more ML note, I have never once needed SVMs. (And I hope I never get asked about them; I've forgotten everything about them, haha.)
I think there's a lot of other stuff I could add to the "just-don't-know-stuff" list!
(And to answer your last question: this list is curated, based on two criteria: (a) is it useful, and (b) is it widely applicable.)