
We use KataGo and sometimes LeelaZero (a replication of the AlphaZero paper). KataGo was trained with more knowledge of the game built in (feature engineering and loss engineering), so it trained faster. It was also trained on different board sizes, and to keep playing for a good result when it's already well behind or ahead.
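To make "loss engineering" a bit more concrete, here's a minimal sketch (my own illustration, not KataGo's actual code) of the general idea: on top of the usual AlphaZero-style policy and win/loss losses, you add auxiliary targets such as final ownership and final score, so each self-play game gives the network a richer training signal. Names and the aux_weight value are placeholders.

    import torch.nn.functional as F

    def combined_loss(policy_logits, value_logits, ownership_pred, score_pred,
                      policy_target, value_target, ownership_target, score_target,
                      aux_weight=0.1):
        # Standard AlphaZero-style terms: match the MCTS visit distribution
        # and predict the game outcome (win/loss).
        policy_loss = -(policy_target * F.log_softmax(policy_logits, dim=-1)).sum(-1).mean()
        value_loss = F.cross_entropy(value_logits, value_target)
        # Auxiliary terms (the "loss engineering"): also predict final per-point
        # ownership and the final score difference, not just who won.
        ownership_loss = F.mse_loss(ownership_pred, ownership_target)
        score_loss = F.mse_loss(score_pred, score_target)
        return policy_loss + value_loss + aux_weight * (ownership_loss + score_loss)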

KaTrain is a good frontend.




> KataGo was trained with more knowledge of the game (feature engineering and loss engineering), so it trained faster.

Not really important to your point, but it's not just that it uses more game knowledge. Mostly it's that a small but dedicated community (especially lightvector) worked hard to build on what AlphaGo and LeelaZero did.

Lightvector is a genius and put a lot of effort into KataGo; it wasn't just a matter of adding some game knowledge and calling it done. https://github.com/lightvector/KataGo?tab=readme-ov-file#tra... has a lot of detail if you're interested.


I wasn't at all trying to say his work was simple. I was trying to say "DeepMind were trying to build an AI that gets good at games without anything in its structure being specialized for the game; lightvector asked what if we did specialize the model for Go." And he did some wonderfully clever things.



