Best-First Minimax Search: First Results

  • Richard Korf,
  • Max Chickering

In Proceedings of the AAAI Fall Symposium on Games: Planning and Learning

We present a very simple selective minimax search algorithm for two-player games. It always expands next the frontier node at the end of the principal variation, or current best line of play, which is the node that determines the minimax value of the root. The algorithm requires no information other than a static evaluation function, and its time overhead per node is similar to that of alpha-beta minimax. On random game trees, our algorithm outperforms an efficient implementation of alpha-beta when both are given the same amount of computation. In the game of Othello, using the evaluation function from Bill, the world's best program, best-first minimax also outplays alpha-beta. We present an implementation of the algorithm that reduces its space complexity from exponential to linear in the search depth, at the cost of increased time complexity. Finally, we present a hybrid best-first extension algorithm that combines alpha-beta and best-first minimax, and performs significantly better than either pure algorithm in both domains.
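The expansion rule described in the abstract is concrete enough to sketch: repeatedly follow the principal variation from the root to its frontier node, expand that node, statically evaluate its children, and back the minimax values up toward the root. The sketch below is illustrative only, not the paper's implementation; the `Node` fields and the `evaluate` and `expand` callables are assumed names, and details such as terminal-position handling and the linear-space variant are omitted.

```python
# Minimal sketch of best-first minimax search (assumed interfaces:
# evaluate(state) -> float, expand(state) -> iterable of successor states).

class Node:
    def __init__(self, state, is_max, parent=None):
        self.state = state
        self.is_max = is_max      # True if the player to move maximizes
        self.parent = parent
        self.children = []
        self.value = None         # static or backed-up minimax value

def principal_leaf(root):
    """Follow the child whose value equals its parent's value (the
    principal variation) until a frontier node is reached."""
    node = root
    while node.children:
        node = next(c for c in node.children if c.value == node.value)
    return node

def backup(node):
    """Recompute minimax values along the path from node back to the root."""
    while node is not None:
        if node.children:
            vals = [c.value for c in node.children]
            node.value = max(vals) if node.is_max else min(vals)
        node = node.parent

def best_first_minimax(root, expansions, evaluate, expand):
    """Repeatedly expand the frontier node at the end of the principal
    variation, evaluating its children and backing up their values."""
    root.value = evaluate(root.state)
    for _ in range(expansions):
        leaf = principal_leaf(root)
        for child_state in expand(leaf.state):
            child = Node(child_state, not leaf.is_max, parent=leaf)
            child.value = evaluate(child.state)
            leaf.children.append(child)
        if leaf.children:
            backup(leaf)
    # The recommended move is the root child on the principal variation.
    key = lambda c: c.value
    return max(root.children, key=key) if root.is_max else min(root.children, key=key)
```

Note that this plain version stores the whole expanded tree in memory, which is the exponential-space behavior the abstract's linear-space implementation and the hybrid best-first extension are designed to mitigate.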