
Go is still a well-defined game within a limited space that doesn't change, with rules that don't change. It's just harder than chess, but that doesn't make it similar to tons of real-world tasks humans are better at.



That's probably true, but that's very much not what people were saying about Go a couple years ago. There were a lot of people talking about how there isn't a straightforward evaluation function of the quality of a given state of the board, how things need to be planned in advance, how there's much more combinatorial explosion than in chess, etc., to the point where it's a qualitatively different game.

For me, as someone who accepted and believed these claims about Go being qualitatively different, realizing that no, it's not qualitatively different (or that maybe it is, but not in a way that impedes state-of-the-art AI research) is increasing my skepticism about other claims that board games in general are qualitatively different from other tasks that AIs might get good at.

(If you didn't buy into these claims, then I commend you on your reasoning skills, carry on.)


About those claims: this is from Russell and Norvig, 3rd ed. (from 2003, so a while back):

Go is a deterministic game, but the large branching factor makes it challenging. The key issues and early literature in computer Go are summarized by Bouzy and Cazenave (2001) and Müller (2002). Up to 1997 there were no competent Go programs. Now the best programs play most of their moves at the master level; the only problem is that over the course of a game they usually make at least one serious blunder that allows a strong opponent to win. Whereas alpha-beta search reigns in most games, many recent Go programs have adopted Monte Carlo methods based on the UCT (upper confidence bounds on trees) scheme (Kocsis and Szepesvári, 2006). The strongest Go program as of 2009 is Gelly and Silver's MoGo (Wang and Gelly, 2007; Gelly and Silver, 2008). In August 2008, MoGo scored a surprising win against top professional Myungwan Kim, albeit with MoGo receiving a handicap of nine stones (about the equivalent of a queen handicap in chess). Kim estimated MoGo's strength at 2-3 dan, the low end of advanced amateur. For this match, MoGo was run on an 800-processor, 15 teraflop supercomputer (1000 times Deep Blue). A few weeks later, MoGo, with only a five-stone handicap, won against a 6-dan professional. In the 9 x 9 form of Go, MoGo is at approximately the 1-dan professional level. Rapid advances are likely as experimentation continues with new forms of Monte Carlo search. The Computer Go Newsletter, published by the Computer Go Association, describes current developments.
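
As an aside on the UCT scheme mentioned there: at its core it is a bandit-style selection rule applied at every node of the search tree. A minimal Python sketch of that rule (the Node structure and the exploration constant are my own assumptions for illustration, not anything from the book):

    import math

    C_EXPLORE = 1.4  # exploration constant; sqrt(2) is a common default

    class Node:
        def __init__(self, move=None, parent=None):
            self.move = move
            self.parent = parent
            self.children = []
            self.visits = 0
            self.wins = 0.0

        def ucb1(self):
            # Average reward plus an exploration bonus that shrinks as this
            # child is visited more often relative to its parent.
            if self.visits == 0:
                return float("inf")
            exploit = self.wins / self.visits
            explore = C_EXPLORE * math.sqrt(math.log(self.parent.visits) / self.visits)
            return exploit + explore

    def select_child(node):
        # Descend the tree by always picking the child with the best UCB1 score.
        return max(node.children, key=lambda c: c.ucb1())

Real programs like MoGo wrap this selection step in the usual MCTS loop (select, expand, simulate with random playouts, backpropagate); the formula above is just the part the "upper confidence bounds" name refers to.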

There's no word about how Go is qualitatively different from other games, but maybe the referenced sources say something along those lines. Personally, I took a Master's course in AI two years ago, before AlphaGo, and I remember one professor saying that the last holdout where humans could still beat computers in board games was Go, but I don't quite remember him saying anything about a qualitative difference. Still, I can recall hearing the idea that Go needs intuition or something like that, except I've no idea where I heard it. I guess it might have come from the popular press.

I guess this will sound a bit like the perennial excuse that "if it works, it's not AI", but my opinion about Go is that humans just weren't that good at it, after all. We may have thought that we have something special that makes us particularly good at Go, better than machines, but AlphaGo[Zero] has shown that, in the end, we just have no idea what it means to be really good at it (which, btw, is a damn good explanation of why it took us so long to make an AI that beats us at it).

That, to my mind, is a much bigger and much more useful achievement than making a good AI game player. We can learn something from an insight into what we are capable of.


s/2003/2009/, I think, but the point stands. (Also I think I have the second edition at home and now I want to check what it says about Go.)

> my opinion about Go is that humans just weren't that good at it, after all. We may have thought that we have something special that makes us particularly good at Go, better than machines, but AlphaGo[Zero] has shown that, in the end, we just have no idea what it means to be really good at it (which, btw, is a damn good explanation of why it took us so long to make an AI that beats us at it).

I really like that interpretation!


> the last holdout where humans could still beat computers in board games was Go

False, because nobody ever bothered to study modern boardgames rigorously.

Modern boardgames have small decision trees but very difficult evaluation functions. (Exactly the opposite of classical games like Go.)

Modern boardgames can probably be solved by pure brute force calculation of all branches of the tree, but nobody knows if things like neural networks are any good for playing them.


In AI, "board games" generally means classical board games (nim, chess, backgammon, Go, etc.) and "card games" means classical card games (bridge, poker, etc.). Russell & Norvig also discuss some less well-known games, like kriegspiel (a wargame) if memory serves, but those are all classical at least in the sense that they are, well, quite old.

I've seen some AI research on more modern board games, actually. I've read a couple of papers discussing the use of Monte Carlo Tree Search to solve creature combat in Magic: the Gathering, and both my undergraduate and Master's dissertations were about M:tG (the Master's was in AI, and the undergraduate dissertation was an AI system as well).

I don't know that much about modern board games besides collectible card games, but for CCGs in particular the game trees are not small. I once calculated the time complexity of traversing a full M:tG game tree as O(b^m * n^m) = 2.272461391808129337799800881135e+5564 (where b is the branching factor, m the average number of moves in a game, and n the number of possible deck permutations for a 60-card deck, taking into account cards included multiple times). And mine was probably a very conservative estimate.
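
Just to give a sense of how fast an estimate like that blows up, here's a back-of-the-envelope version in Python. The parameter values below are made up purely for illustration; they are not the ones behind the figure above:

    from math import factorial

    # Hypothetical parameters, for illustration only.
    b = 30   # assumed average branching factor per decision point
    m = 50   # assumed average number of moves in a game

    # Crude multiset count of 60-card decks drawn from a pool of 200
    # candidate cards (again an assumption; real deck construction is
    # more restrictive, e.g. at most 4 copies of a non-basic card).
    pool, deck_size = 200, 60
    n = factorial(pool + deck_size - 1) // (factorial(deck_size) * factorial(pool - 1))

    tree_size = b ** m * n ** m
    print(f"roughly 10^{len(str(tree_size)) - 1} nodes to traverse")

Even with those deliberately tame numbers you end up thousands of orders of magnitude beyond anything enumerable, which is the point: "not small" is an understatement for CCG game trees.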

Also, to my knowledge, neural nets have not been used for Magic-playing AI (or any other CCG-playing AI). What has been used is MCTS, on its own, without great success. The best AI I've seen incorporates some ___domain knowledge, in the form of card-specific strategies (how to play a given card).

There are some difficulties in using ANNs to make an M:tG AI, primarily the fact that a truly competent player should be able to pick up a card it's never seen before and play it correctly (or decide whether to include it in a deck, if the goal is to also address deck-building). For this, the AI player will need to have at least some understanding of M:tG's language (ability text). It is my understanding that other modern games have similar requirements to understand game context outside of the main rules, which complicates the traditional tactic of generating all possible moves, pruning some, and choosing the best.

In any case, what I meant to say is that people in AI have indeed considered other games besides the classical ones, but when we talk about "games" in AI we do mean the classics.


> but when we talk about "games" in AI we do mean the classics

Only because of inertia. There's nothing inherently special about the "classics". Eventually somebody will branch out, once Go and poker are mined out of paper and article opportunities.

Once we do then maybe some new, interesting algorithms will be found.

In principle, every game can be solved by storing all possible game states in a database. Where brute-force storing is impractical due to size concerns, compression tricks have to be used.
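
As a toy illustration of what "storing all possible game states" looks like in code, here's a memoised exhaustive solver for a deliberately tiny game (single-pile Nim, my own choice of example); the cache is the "database" of solved states:

    from functools import lru_cache

    # Single-pile Nim: players alternately remove 1-3 stones and whoever
    # takes the last stone wins. Small enough to enumerate every state.

    @lru_cache(maxsize=None)
    def wins(stones):
        # True if the player to move can force a win from this state.
        if stones == 0:
            return False  # the previous player took the last stone and won
        return any(not wins(stones - take) for take in (1, 2, 3) if take <= stones)

    print(wins(20))  # False: 20 stones is a lost position for the player to move

The same idea scales up to endgame tablebases in chess; the trouble is that for most interesting games the state space makes the "database" part hopeless without exactly the kind of compression tricks you mention.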

E.g., Go is a simple game because at the end, every one of the fixed number of board spaces is either +1, -1 or 0. Add them up and you know if you won. This means that every move is either "correct" or "incorrect"; classifying multidimensional objects into two classes is a problem we're pretty good at now, and things like neural networks get the job done.
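
The "add them up" step really is that mechanical once every intersection has been assigned to one side. A toy sketch (the board encoding is an assumption on my part; real scoring also has to settle dead stones and komi first):

    # Final-position scoring in the spirit of the above: +1 for a point
    # counted for Black, -1 for White, 0 for neutral.
    def black_wins(board):
        # board is a flat list of +1 / -1 / 0, one entry per intersection
        return sum(board) > 0

    tiny_board = [1, 1, 1, -1, -1, 0, 1, 1, -1]  # made-up 3x3 final position
    print(black_wins(tiny_board))  # True: Black controls 5 points to White's 3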

A slightly more complex game like Agricola has no "correct" and "incorrect" moves because it's not zero-sum; you can make an "incorrect" move and still win as long as your opponent is forced to make a relatively more "incorrect" move.

Not sure how much of a difference that makes, but what's certain is that by (effectively) solving Go we've only scratched the surface. It's not the end of research, only the beginning.


Sure. Research in game playing AI doesn't end with Go, or any other game. We may see more research in modern board games, now that we're slowly running out of the classics.

I think you're underestimating the amount of work and determination it took to get to where we are today, though (I mean your comment about "inertia"). Classic board games have the advantage of a long history and of being well understood (the uncertainty about optimal strategies in Go notwithstanding). Additionally, for at least some of them like chess, there are rich databases of entire games that can be used outright, without the AI player having to generate-and-test them in the process of training or playing.

The same is not true for modern games. On the one hand, modern board games like Agricola (or, dunno, Settlers or Carcassonne, etc.) don't have as extensive and multinational a following as the classics, so it's much harder to find a lot of data to train on (which is obviously important for machine-learning AI players). I had that problem when considering an M:tG AI trained with machine learning: I would have liked to find play-by-play data on professional games, but there just isn't any (or where there is, it's not enough, or it's not in any standardised format).

Finally, classic board games have a cultural significance that modern board games don't quite match, despite the huge popularity of CCGs like M:tG or Pokemon, or Eurogame hits like Settlers. Go, chess and backgammon in particular have tremendous historical significance in their respective areas of the world: chess in Eastern Europe, backgammon in the Middle East, Go in East Asia. People go to special academies to learn them, master players are widely recognised, etc. You don't get that level of interest with modern board games, so there's less research interest in them, too.

People in game-playing AI have been trying for a very long time to crack games like Go and, recently, poker (not quite cracked yet). They didn't sit around twiddling their thumbs all those years, nor did they choose classical board games over modern ones just because they lacked the imagination to think of the latter. In AI research, as in all research, you have to make progress before you can make more progress.



