Interpretable Go AI Based on Statistical Learning: Quantifying Go Principles and Developing Lightweight Models

Authors

  • Hanyang Liu

DOI:

https://doi.org/10.61173/c52xvf31

Keywords:

Interpretable Artificial Intelligence (AI), Statistical Learning, Monte Carlo Tree Search (MCTS), Go (Weiqi), Cultural Quantification, Lightweight Modeling

Abstract

Go, a game steeped in Eastern strategic wisdom, poses a hard problem for artificial intelligence because of its vast decision space and its reliance on fine-grained human common sense. Current deep-learning Go AIs (e.g., AlphaGo) beat human players, but their decision-making is a “black box”, data-hungry and computationally expensive. This paper develops an interpretable Go AI framework based on statistical learning that quantifies traditional Go wisdom (“Go theory”) and constructs lightweight Go AI models for resource-limited environments. Combining probabilistic modeling, Bayesian updating, and the quantification of cultural rules, this work attempts an interpretable, computationally efficient mathematical expression of Go culture. Two contributions are expected: 1) a probability-pruned Monte Carlo Tree Search (MCTS) guided by confidence intervals; 2) a Go-Theory Index (GTI), defined as a statistical measure of consistency with human heuristics. The resulting model is expected to provide interpretable predictions, transparent explanations, and pedagogical value in Go education.
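Below is a minimal Python sketch of what confidence-interval-guided pruning in MCTS could look like. The abstract does not give the exact procedure, so the Node class, the prune_children function, and the use of a Wilson score interval are illustrative assumptions, not the paper's method: candidate moves whose upper confidence bound on the win rate falls below the best move's lower bound are discarded from further search.

import math

class Node:
    def __init__(self):
        self.wins = 0       # accumulated wins from simulations through this node
        self.visits = 0     # number of simulations through this node
        self.children = []  # child nodes (candidate moves)

def wilson_interval(wins, visits, z=1.96):
    """Wilson score interval for a Bernoulli win rate at ~95% confidence."""
    if visits == 0:
        return 0.0, 1.0  # no data yet: keep the full [0, 1] interval
    p = wins / visits
    denom = 1 + z * z / visits
    center = (p + z * z / (2 * visits)) / denom
    half = (z / denom) * math.sqrt(p * (1 - p) / visits + z * z / (4 * visits ** 2))
    return max(0.0, center - half), min(1.0, center + half)

def prune_children(node, z=1.96):
    """Drop moves whose upper bound cannot reach the best move's lower bound."""
    if not node.children:
        return
    bounds = [wilson_interval(c.wins, c.visits, z) for c in node.children]
    best_lower = max(lo for lo, _ in bounds)
    node.children = [c for c, (_, hi) in zip(node.children, bounds)
                     if hi >= best_lower]

A Go-Theory Index could analogously be estimated as the fraction of searched positions in which the model's preferred move agrees with a codified Go heuristic, though the abstract does not specify its exact form.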


Published

2025-12-19

Section

Articles