Chess To Enjoy: You must agree computers changed everything. AlphaZero was not programmed with your principles, the way engines used to. It just played gazillions of games with itself until it discovered the revolutionary truths of chess.
General Principles: Revolutionary? Here's what the book Game Changer has to say: There are 14 detectable features of AlphaZero's playing style. Number one is "AlphaZero likes to target the opponent's king." Number two is "AlphaZero likes to keep its own king out of danger."
Chess To Enjoy: Seems like I learned that the first week I played chess.
General Principles: Other discoveries are that AlphaZero likes to trade material when it has a winning advantage, that it tries to control the center, and that it seeks great outposts for its knights.
Chess To Enjoy: I see...
General Principles: And AlphaZero likes to sacrifice material to open attacking lines and to exchange off its opponent's most active pieces.
Chess To Enjoy: ...Well, OK...
General Principles: Then on page 129 it says, "AlphaZero may give the opponent the chance to go wrong."
Chess To Enjoy: Sounds like AlphaZero is being credited for inventing what every experienced player already knows.
It's been a while since I introduced the book by Matthew Sadler & Natasha Regan in AlphaZero Stars in 'Game Changer' (January 2019). Although I haven't read it cover-to-cover, I have read significant portions, and the topic that Soltis is gently mocking is one of the main themes of the book. In AI/NN research, the topic is called 'interpretability': what can we learn from black-box models?
Unfortunately, interpretability is not easy to understand or to achieve. Wikipedia's page Explainable artificial intelligence starts,
Explainable AI (XAI), Interpretable AI, or Transparent AI refer to techniques in artificial intelligence (AI) which can be trusted and easily understood by humans. It contrasts with the concept of the "black box" in machine learning where even their designers cannot explain why the AI arrived at a specific decision.
I found a four-part PDF presentation 'ICIP 2018 Tutorial' on Interpretable Deep Learning: Towards Understanding & Explaining Deep Neural Networks (interpretable-ml.org). It answers the question 'Why Interpretability?' with five reasons:-
1) Verify that classifier works as expected
2) Understand weaknesses & improve classifier
3) Learn new things from the learning machine
4) Interpretability in the sciences
5) Compliance to legislation
The 'classifier' of a neural network (NN) is the portion of the software that makes the final decision about what the NN is seeing. Is it a cat (yes/no)? -or- What animal is in the photo (cat/dog/other)? A game-playing NN has a more complicated classifier: the decision about which move to play next. The third item in the list above ('Learn new things') is illustrated in the ICIP tutorial by the following slide:-
The referenced game is AlphaGo, not AlphaZero, but the sentiment is familiar: 'I've never seen a human play this move.' What can we learn from AlphaZero? Probably not much beyond what is in the Sadler/Regan book, because AlphaZero has never been seen outside of DeepMind's laboratory. We might have better luck with Leela or with one of the other chess NNs that are rapidly emerging.
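To make the 'classifier' distinction above a little more concrete, here is a minimal Python sketch of my own. It is not anything from AlphaZero, Leela, or the ICIP tutorial; the logits and move names are invented for illustration. It contrasts a yes/no classifier, a multi-class classifier, and a policy-style 'classifier' that assigns a probability to every legal move:-

```python
# Minimal sketch (hypothetical, not AlphaZero's or Leela's actual code)
# contrasting the three kinds of 'classifier' described above.
import numpy as np

def softmax(logits):
    """Convert raw network outputs (logits) into a probability distribution."""
    exps = np.exp(logits - np.max(logits))
    return exps / exps.sum()

# 1) Binary classifier: is it a cat? One logit, squashed to a probability.
cat_logit = 2.3
p_cat = 1.0 / (1.0 + np.exp(-cat_logit))  # sigmoid
print(f"P(cat) = {p_cat:.2f}")

# 2) Multi-class classifier: cat / dog / other. One logit per class.
animal_logits = np.array([1.8, 0.4, -0.7])
p_animal = softmax(animal_logits)
for name, p in zip(["cat", "dog", "other"], p_animal):
    print(f"P({name}) = {p:.2f}")

# 3) Game-playing 'classifier' (policy head): one logit per legal move.
#    The moves and numbers here are invented for illustration only.
legal_moves = ["e2e4", "d2d4", "g1f3", "c2c4"]
move_logits = np.array([0.9, 0.7, 0.2, -0.3])
p_move = softmax(move_logits)
best = legal_moves[int(np.argmax(p_move))]
print(f"Suggested move: {best} (P = {p_move.max():.2f})")
```

Interpretability research asks why the network produced those particular numbers in the first place, which is exactly the part the sketch leaves out.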