The dogma generally becomes accepted because it outperforms other known strategies. In a game like Go, verifying that could previously take a while, because with so many possible follow-ups it takes a long time to accumulate enough data on whether a new strategy is actually decisively better, or just worse but over-performing because it's less well known.
There's a big difference between those two, and "the hacker ethos" will lead to a lot of the latter. Now, however, computers can simulate enough games to give a relatively high degree of confidence that a variation in strategy is genuinely better.
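As a rough illustration of that kind of confidence check, here's a minimal sketch: simulate many games between a baseline and a candidate variation, then ask whether the variant's observed win rate is distinguishable from noise. The `play_game` stand-in and its true win rate are hypothetical placeholders, not a real engine matchup:

```python
import math
import random

def play_game(variant_true_winrate: float) -> bool:
    """Stand-in for a full game engine: returns True if the variant wins.
    A real setup would pit two engines against each other."""
    return random.random() < variant_true_winrate

def evaluate_variant(true_winrate: float, n_games: int = 10_000) -> tuple[float, float]:
    """Return the observed win rate and a 95% normal-approximation
    margin of error after n_games simulated games."""
    wins = sum(play_game(true_winrate) for _ in range(n_games))
    p_hat = wins / n_games
    margin = 1.96 * math.sqrt(p_hat * (1 - p_hat) / n_games)
    return p_hat, margin

if __name__ == "__main__":
    # Made-up "true" edge of 52% for the new variation.
    p_hat, margin = evaluate_variant(true_winrate=0.52)
    verdict = "likely better" if p_hat - margin > 0.5 else "not clearly better"
    print(f"win rate {p_hat:.3f} +/- {margin:.3f}: {verdict}")
```

At 10,000 games the margin of error is around one percentage point, so a 2% edge shows up clearly; a human community playing out variations by hand could take decades to gather the same evidence.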
I don't know how it has developed since, but from what I remember that was how it started: the AIs weren't following the standard moves (joseki) that we'd built up over centuries, and human players were thrown off by the nonstandard responses, which worked better than expected.
I wonder if an AI could be built to continually adapt, so that instead of playing one optimal strategy, it chooses between various suboptimal strategies. If humans train against the optimal strategy, then maybe the AI could do better by playing in suboptimal but less expected ways.
This is already happening. Even huge deviations sometimes cost only a minuscule number of points, so they're worth having in one's repertoire. The same is mostly true for purely human games, too: these are trick moves.
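One way to implement that kind of deliberate unpredictability, sketched minimally below: rather than always playing the engine's top-scoring move, sample among all moves whose evaluated point loss versus the best is within a small margin. The `pick_unpredictable_move` helper and the move scores are hypothetical, made up for illustration:

```python
import random

def pick_unpredictable_move(scores: dict[str, float], max_point_loss: float = 0.5) -> str:
    """Choose randomly among moves within max_point_loss of the best,
    sacrificing a tiny expected margin for variety."""
    best = max(scores.values())
    candidates = [m for m, s in scores.items() if best - s <= max_point_loss]
    return random.choice(candidates)

if __name__ == "__main__":
    # Hypothetical engine evaluations (expected points) for three lines.
    scores = {"standard_joseki": 10.0, "rare_variation": 9.8, "trick_move": 9.6}
    print(pick_unpredictable_move(scores))
```

The trade-off is explicit: each choice gives up at most half a point in expectation, but an opponent who prepared only against the single optimal line faces positions they haven't studied.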