That's not right. If DeepMind's agents could really transfer what they learned from one game to games they've never seen before, then their "specialist" agents, trained on only a single game, would also perform well on unseen games. Instead, to get an agent that performs well on one unseen game, they had to train it on every game except that one.
That's typical of the poor generalisation displayed by neural nets and clearly not how humans do transfer learning.
But humans have already trained on an incredible number of games (including reality) when they play No Man's Sky for the first time. What they say here is that training on N-1 games makes you better at the Nth game. So you just continue to scale this up.
"An incredible number of games"? You're saying a kid can't pick up and play No Man's Sky if it's the first time they ever played a video game? Or that they can't get good at it if it's the first game they play?