Adversarial search algorithms are used to find solutions to problems in which two players compete against each other. The two players are known as adversaries. The goal of the algorithm is to find the best move for one player under the assumption that the opponent will respond with the best moves available to them.
There are a variety of adversarial search algorithms, each with its own strengths and weaknesses. Some of the more popular algorithms include Minimax, Alpha-beta pruning, and Monte Carlo tree search.
Minimax
Minimax is a classic adversarial search algorithm whose use in game-playing programs dates back to the early 1950s. It is based on the idea that both players play optimally: the algorithm explores the tree of possible moves and counter-moves, and the player to move picks the move that maximizes their worst-case outcome, while the opponent is assumed to pick replies that minimize it. The value of each position is backed up from the end of the game toward the current position.
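To make this concrete, here is a minimal minimax sketch in Python on a toy Nim-style game (an illustrative example of ours, not code from any particular system): players alternately remove one to three sticks, and whoever takes the last stick wins. Scores are from the maximizing player's point of view.

```python
def minimax(sticks, maximizing):
    """Value of the position for the maximizing player, assuming optimal play."""
    if sticks == 0:
        # The player who just moved took the last stick and won.
        return -1 if maximizing else +1
    moves = range(1, min(3, sticks) + 1)
    scores = [minimax(sticks - m, not maximizing) for m in moves]
    return max(scores) if maximizing else min(scores)

def best_move(sticks):
    """Pick the move that maximizes our guaranteed outcome."""
    return max(range(1, min(3, sticks) + 1),
               key=lambda m: minimax(sticks - m, maximizing=False))

print(best_move(7))  # -> 3: taking three sticks leaves the opponent in a lost position
```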
Alpha-beta pruning
Alpha-beta pruning is a more efficient version of minimax. It returns the same move as minimax, but it skips (prunes) branches of the game tree that provably cannot affect the final decision. It does this by tracking two bounds while searching: alpha, the best value the maximizing player can already guarantee, and beta, the best value the minimizing player can guarantee. As soon as a branch's value falls outside these bounds, searching it further is pointless, because an optimal opponent would never allow play to reach it.
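The sketch below adds alpha and beta bounds to the same toy game from the minimax example (again purely illustrative). It computes the same value as plain minimax but cuts off the search as soon as the bounds cross.

```python
def alphabeta(sticks, maximizing, alpha=float("-inf"), beta=float("inf")):
    if sticks == 0:
        return -1 if maximizing else +1
    if maximizing:
        value = float("-inf")
        for m in range(1, min(3, sticks) + 1):
            value = max(value, alphabeta(sticks - m, False, alpha, beta))
            alpha = max(alpha, value)
            if alpha >= beta:   # the opponent already has a better option elsewhere,
                break           # so the remaining moves can be pruned
        return value
    else:
        value = float("inf")
        for m in range(1, min(3, sticks) + 1):
            value = min(value, alphabeta(sticks - m, True, alpha, beta))
            beta = min(beta, value)
            if alpha >= beta:
                break
        return value

print(alphabeta(7, maximizing=True))  # -> +1: the player to move can force a win
```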
Monte Carlo Tree Search
Monte Carlo tree search is a newer algorithm that has gained popularity in recent years, most famously in strong Go programs. It is based on the idea of using random sampling to estimate how good a move is: the algorithm repeatedly selects a promising line of play (balancing exploration of untried moves against exploitation of moves that have done well so far), plays a random game to the end from that position, and uses the result to update its estimates of which move is most likely to lead to a win.
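The following sketch shows the random-playout idea on the same toy game as above. Note that this is a simplified, "flat" Monte Carlo estimate of our own, not a full tree search; a complete MCTS would also grow a search tree and use a selection rule such as UCT. Each candidate move is scored by the average result of many random games played from the resulting position.

```python
import random

def random_playout(sticks, maximizing):
    """Play random moves until the game ends; +1 means the maximizing player won."""
    while sticks > 0:
        sticks -= random.randint(1, min(3, sticks))
        maximizing = not maximizing
    return -1 if maximizing else +1

def monte_carlo_move(sticks, playouts=2000):
    """Pick the move whose random playouts score best for the player to move."""
    def estimate(m):
        results = (random_playout(sticks - m, maximizing=False)
                   for _ in range(playouts))
        return sum(results) / playouts
    return max(range(1, min(3, sticks) + 1), key=estimate)

print(monte_carlo_move(7))  # typically 3, matching the exact minimax answer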
Adversarial search algorithms are used in a variety of games, including chess, checkers, and Go. They can also be used in other areas such as business and politics.
The history of adversarial search
The history of adversarial search can be traced back to the early days of AI research. In 1950, Alan Turing published a paper that proposed a test for determining whether a machine could be said to think. The test, now known as the Turing test, involves a human judge interacting with two hidden players, one of whom is a human and the other a machine. If the judge cannot tell the difference between the two players, then the machine is said to have passed the test.
In the same paper, Turing also pointed to game playing, and chess in particular, as a natural testbed for machine intelligence. Around the same period he designed a chess-playing procedure of his own, later known as Turochamp, and simulated it by hand because no machine of the day was powerful enough to run it.
From the 1950s onward, AI researchers developed computer programs that could play games such as checkers and chess, and by the 1970s these programs relied on a core set of techniques: tree search, alpha-beta pruning, and heuristic evaluation functions.
In 1976, a program called CHESS 4.5, developed by David Slate and Larry Atkin at Northwestern University, became the first program to win a tournament against human players. It relied heavily on alpha-beta pruning, a technique still used in many chess programs today.
In 1997, a program called Deep Blue, developed by IBM, beat the world chess champion, Garry Kasparov. Deep Blue combined deep tree search with alpha-beta pruning, handcrafted evaluation functions, and purpose-built hardware.
Today, adversarial search algorithms are used in a variety of applications, including computer Go programs, poker-playing programs, and programs that play other two-player games.