The core technique of AlphaGo is using tree search as a "policy improvement operator": the search spends more visits on moves that look good under lookahead, so the resulting visit distribution is a better policy than the raw network output, and the network is then trained to match it. Tree search doesn't work on most real-world tasks: the "game state" is too complex, there are too many choices, it's hard to predict the full effect of any choice you might make, and there often isn't even a "win" or "lose" state that would let you stop your self-play.
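Roughly, the policy-improvement step looks something like the sketch below. It's not the paper's code; the function name, `visit_counts`, and the `temperature` parameter are my own stand-ins for illustration.

```python
import numpy as np

def improved_policy(visit_counts, temperature=1.0):
    """Turn MCTS root visit counts into a sharper policy target.

    Search piles visits onto moves that look good under lookahead, so this
    distribution is better than the raw network policy; AlphaGo-Zero-style
    training then uses it as the cross-entropy target for the policy head.
    (Names here are illustrative, not the paper's API.)
    """
    counts = np.asarray(visit_counts, dtype=np.float64)
    if temperature == 0:
        # Greedy: all probability mass on the most-visited move.
        pi = np.zeros_like(counts)
        pi[np.argmax(counts)] = 1.0
        return pi
    counts = counts ** (1.0 / temperature)
    return counts / counts.sum()

# The raw network may have preferred move 0, but after search move 2 got
# most of the visits, so the training target favours it instead.
print(improved_policy([10, 5, 85, 0]))  # [0.1  0.05 0.85 0.  ]
```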
MCTS means "Monte Carlo Tree Search". It's the core of the algorithm. The big difference from standard MCTS is that it doesn't use rollouts or random play: it chooses where to expand the tree, and how to evaluate positions, based only on the neural network.
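Concretely, move selection inside the search looks something like the PUCT rule below. This is a sketch under my own assumptions: the `Edge` class and the `c_puct` value are stand-ins, not DeepMind's code.

```python
import math
from dataclasses import dataclass

@dataclass
class Edge:
    p: float        # network prior for this move
    n: int = 0      # visit count
    q: float = 0.0  # mean value backed up from network evaluations

def select_move(edges, c_puct=1.5):
    """PUCT-style selection: exploration comes from the prior-weighted
    bonus, exploitation from backed-up value estimates. No random rollout
    appears anywhere in the loop.

    (The sqrt(total + 1) is a small tweak so the bonus isn't zero at an
    unvisited node; the paper uses the sqrt of the parent's visit count.)
    """
    total = sum(e.n for e in edges)
    def score(e):
        return e.q + c_puct * e.p * math.sqrt(total + 1) / (1 + e.n)
    return max(edges, key=score)

# With no visits yet, the move with the highest prior is explored first;
# once values flow back up the tree, q starts to dominate.
edges = [Edge(p=0.6), Edge(p=0.3), Edge(p=0.1)]
print(select_move(edges) is edges[0])  # True
```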
That's not what Monte Carlo tree search is. The new version is still one neural network + MCTS. A neural network can't store enough information to judge every possible move on its own, so a second algorithm that simulates outcomes is still necessary.
If you read the paper, they do in fact still use Monte Carlo tree search. They just simplify how they use it, in conjunction with reducing the number of neural networks to one.
Tree search is also used during play. In the paper, they pit the pure neural net against other versions of the algorithm -- it ends up slightly worse than the version that played Fan Hui, at about 3,000 Elo.
> AlphaGo Zero does not use “rollouts” - fast, random games used by other Go programs to predict which player will win from the current board position. Instead, it relies on its high quality neural networks to evaluate positions.
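To make the quoted distinction concrete: at a leaf of the search tree, older programs played a fast game to the end, while AlphaGo Zero just asks the network. A rough sketch, where every function signature is hypothetical rather than any real API:

```python
def rollout_value(state, random_move, is_terminal, winner):
    """Old-style leaf evaluation: play fast, (semi-)random moves to the
    end of the game and report who won. Noisy, but cheap."""
    while not is_terminal(state):
        state = random_move(state)
    return winner(state)  # e.g. +1 / -1 from the current player's view

def network_value(state, network):
    """AlphaGo-Zero-style leaf evaluation: one forward pass returns move
    priors for expanding the node plus a value estimate, and that value
    is backed up the tree in place of a rollout result."""
    priors, value = network(state)
    return priors, value
```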