08 January 2018

The Lineage of AlphaZero

Getting back to business after the year-end holidays, let's continue looking at the underpinnings of AlphaZero. In the previous post, The Constellation of AlphaZero (December 2017), I enumerated the underlying technologies. AlphaZero wasn't created by a bolt from the blue; it was the latest evolution in a line of game-playing algorithms -- Giraffe, AlphaGo, AlphaGo Zero, AlphaZero -- stretching back a few years. Each of those evolutions was introduced in a separate paper, and I give their abstracts below.

2015-09: [Giraffe] 'Using Deep Reinforcement Learning to Play Chess'

This report presents Giraffe, a chess engine that uses self-play to discover all its domain-specific knowledge, with minimal hand-crafted knowledge given by the programmer. Unlike previous attempts using machine learning only to perform parameter tuning on hand-crafted evaluation functions, Giraffe's learning system also performs automatic feature extraction and pattern recognition. The trained evaluation function performs comparably to the evaluation functions of state-of-the-art chess engines -- all of which contain thousands of lines of carefully hand-crafted pattern recognizers, tuned over many years by both computer chess experts and human chess masters. Giraffe is the most successful attempt thus far at using end-to-end machine learning to play chess.
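
The core idea -- learn the evaluation function instead of hand-crafting it -- is easy to sketch. The toy code below is my own illustration, not Giraffe's actual architecture: the feature count, network shape, and learning rate are all assumptions. It nudges a small neural evaluator toward its own later estimates, in the spirit of the temporal-difference learning the paper builds on.

    import numpy as np

    # Toy evaluation network: feature vector -> hidden layer -> scalar score.
    # Giraffe's real network is deeper and uses hand-designed input features;
    # the sizes here (128 features, 64 hidden units) are assumptions.
    rng = np.random.default_rng(0)
    W1 = rng.normal(scale=0.1, size=(64, 128))
    W2 = rng.normal(scale=0.1, size=(1, 64))

    def evaluate(features):
        """Score a position from the side to move's point of view."""
        h = np.tanh(W1 @ features)
        return np.tanh(W2 @ h)[0]

    def td_update(f_t, f_t1, lr=0.001):
        """Temporal-difference step: nudge the score of the earlier
        position toward the score of the position reached later."""
        global W1, W2
        target = evaluate(f_t1)          # treated as a fixed target
        h = np.tanh(W1 @ f_t)
        pred = np.tanh(W2 @ h)[0]
        dpred = (target - pred) * (1 - pred ** 2)
        dh = (W2.flatten() * dpred) * (1 - h ** 2)
        W2 += lr * dpred * h.reshape(1, -1)   # gradient step, layer 2
        W1 += lr * np.outer(dh, f_t)          # gradient step, layer 1

    # One illustrative update, using random vectors in place of real
    # chess features.
    f_t, f_t1 = rng.normal(size=128), rng.normal(size=128)
    td_update(f_t, f_t1)

Repeat that update over millions of self-play positions and the scores drift toward self-consistent evaluations, with no human labelling at any step.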

2016-01: [AlphaGo] 'Mastering the game of Go with deep neural networks and tree search'

The game of Go has long been viewed as the most challenging of classic games for artificial intelligence owing to its enormous search space and the difficulty of evaluating board positions and moves. Here we introduce a new approach to computer Go that uses 'value networks' to evaluate board positions and 'policy networks' to select moves. These deep neural networks are trained by a novel combination of supervised learning from human expert games, and reinforcement learning from games of self-play. Without any lookahead search, the neural networks play Go at the level of state-of-the-art Monte Carlo tree search programs that simulate thousands of random games of self-play. We also introduce a new search algorithm that combines Monte Carlo simulation with value and policy networks. Using this search algorithm, our program AlphaGo achieved a 99.8% winning rate against other Go programs, and defeated the human European Go champion by 5 games to 0. This is the first time that a computer program has defeated a human professional player in the full-sized game of Go, a feat previously thought to be at least a decade away.
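
The phrase 'combines Monte Carlo simulation with value and policy networks' comes down to how the search decides which move to explore next. Here is a minimal sketch of that selection rule; the Node fields and the exploration constant are my own simplified assumptions, not DeepMind's code.

    import math

    C_PUCT = 1.5  # exploration constant; the actual value is an assumption

    class Node:
        def __init__(self, prior):
            self.prior = prior      # P(s,a) from the policy network
            self.visits = 0         # N(s,a), times this move was explored
            self.value_sum = 0.0    # accumulated value-network evaluations

        def q(self):
            """Mean action value Q(s,a) observed so far."""
            return self.value_sum / self.visits if self.visits else 0.0

    def select_move(children):
        """Pick the move maximising Q(s,a) + U(s,a), where U favours
        moves the policy network likes but the search has not yet
        visited often. `children` maps each legal move to its Node."""
        total = sum(c.visits for c in children.values())
        def score(c):
            u = C_PUCT * c.prior * math.sqrt(total) / (1 + c.visits)
            return c.q() + u
        return max(children, key=lambda move: score(children[move]))

Early in the search the prior (policy network) dominates; as visit counts grow, the measured values take over. That single formula is the bridge between the neural networks and the tree search.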

2017-10: [AlphaGo Zero] 'Mastering the Game of Go without Human Knowledge'

A long-standing goal of artificial intelligence is an algorithm that learns, tabula rasa, superhuman proficiency in challenging domains. Recently, AlphaGo became the first program to defeat a world champion in the game of Go. The tree search in AlphaGo evaluated positions and selected moves using deep neural networks. These neural networks were trained by supervised learning from human expert moves, and by reinforcement learning from self-play. Here, we introduce an algorithm based solely on reinforcement learning, without human data, guidance, or domain knowledge beyond game rules. AlphaGo becomes its own teacher: a neural network is trained to predict AlphaGo’s own move selections and also the winner of AlphaGo’s games. This neural network improves the strength of tree search, resulting in higher quality move selection and stronger self-play in the next iteration. Starting tabula rasa, our new program AlphaGo Zero achieved superhuman performance, winning 100-0 against the previously published, champion-defeating AlphaGo.
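
The training objective behind 'predict AlphaGo's own move selections and also the winner' is compact enough to write out. The sketch below shows the paper's loss in miniature -- a value term, a policy term, and L2 regularization -- with toy numbers of my own invention.

    import numpy as np

    def loss(p, v, pi, z, weights, c=1e-4):
        """(z - v)^2 - pi . log(p) + c * ||theta||^2, per the paper.
        p, v: the network's move probabilities and value estimate;
        pi, z: the search's move probabilities and the game's outcome."""
        value_term = (z - v) ** 2
        policy_term = -np.sum(pi * np.log(p + 1e-12))
        l2_term = c * sum(np.sum(w ** 2) for w in weights)
        return value_term + policy_term + l2_term

    # Toy example: the game was won (z = +1) and the network slightly
    # undervalued the position; the weights array is a placeholder.
    p = np.array([0.6, 0.3, 0.1])   # network's move probabilities
    pi = np.array([0.7, 0.2, 0.1])  # MCTS visit-count probabilities
    print(loss(p, v=0.8, pi=pi, z=1.0, weights=[np.ones((2, 2))]))

Minimising this pushes the network's move probabilities toward the search's (which are stronger, because the search looks ahead) and its value estimate toward the actual results -- the sense in which 'AlphaGo becomes its own teacher'.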

2017-12: [AlphaZero] 'Mastering Chess and Shogi by Self-Play with a General Reinforcement Learning Algorithm'

The game of chess is the most widely-studied domain in the history of artificial intelligence. The strongest programs are based on a combination of sophisticated search techniques, domain-specific adaptations, and handcrafted evaluation functions that have been refined by human experts over several decades. In contrast, the AlphaGo Zero program recently achieved superhuman performance in the game of Go, by tabula rasa reinforcement learning from games of self-play. In this paper, we generalise this approach into a single AlphaZero algorithm that can achieve, tabula rasa, superhuman performance in many challenging domains. Starting from random play, and given no domain knowledge except the game rules, AlphaZero achieved within 24 hours a superhuman level of play in the games of chess and shogi (Japanese chess) as well as Go, and convincingly defeated a world-champion program in each case.
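
What 'no domain knowledge except the game rules' means in practice is that the algorithm only ever touches a game through a thin rules interface. The sketch below is my own illustration, not anything from the paper; the point is that the same self-play loop runs unchanged whether the adapter implements chess, shogi, or Go.

    from abc import ABC, abstractmethod

    class Game(ABC):
        """Everything AlphaZero-style training needs from a game."""

        @abstractmethod
        def legal_moves(self, state):
            """All moves allowed in state -- the rules, nothing more."""

        @abstractmethod
        def next_state(self, state, move):
            """The position reached after playing move in state."""

        @abstractmethod
        def terminal_value(self, state):
            """None if the game is unfinished, else the result in [-1, 1]."""

        @abstractmethod
        def encode(self, state):
            """The input planes fed to the neural network."""

Swap in a different subclass and the rest of the system -- search, network, self-play, training -- is untouched, which is what lets one algorithm cover three games.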

To understand AlphaZero, it helps to understand AlphaGo Zero. Here is a useful video explaining some of the fundamental concepts.


How Does DeepMind's AlphaGo Zero Work? (10:52) • 'Published on Oct 27, 2017'

The description of the video says,

There's been way too many fear-mongering news articles around the latest version of DeepMind's AlphaGo. Let's set the record straight, AlphaGo is an incredible technology and it's not terrifying at all. I'll go over the technical details of how AlphaGo really works; a mixture of deep learning and reinforcement learning.

At one point, the presenter, Siraj Raval, shows an overview of the different AlphaGo evolutions.

Now that I have some understanding of the concepts behind AlphaZero, I can look a little deeper into those technologies.
