Libratus is an artificial intelligence that beat four of the world's top professional poker players at no-limit Texas Hold'em earlier this year. According to Carnegie Mellon University researchers, the AI bot uses a three-pronged approach to master a game with more decision points than atoms in the universe.
Tuomas Sandholm, professor of computer science, and Noam Brown, a Ph.D. student in the Computer Science Department, say that their AI achieved superhuman performance by breaking the game into smaller, manageable parts and, based on the opponents' gameplay, fixing potential weaknesses in its strategy during the competition.
AI programs have defeated top humans in checkers, chess and Go. These are all challenging games where both players know the exact state of the game at all times. In contrast, poker players have hidden information. They don’t know what cards their opponents have and don’t know if their opponents are bluffing.
In a 20-day competition involving 120,000 hands at Rivers Casino in Pittsburgh in January 2017, Libratus became the first AI to defeat top human players at heads-up no-limit Texas Hold'em. This game has been the primary benchmark and challenge problem for imperfect-information game solving by AIs.
Libratus beat every player individually in the two-player game and collectively amassed more than $1.8 million in chips. Measured in milli-big blinds per hand (mbb/hand), a standard used by imperfect-information game AI researchers, Libratus defeated the human players by 147 mbb/hand. In poker lingo, this is 14.7 big blinds per 100 hands.
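The win-rate arithmetic is easy to check. The sketch below converts 147 mbb/hand into big blinds per 100 hands and into a chip total over the 120,000 hands played; the $100 big blind is an assumption, not a figure stated here, but it makes the numbers line up with the reported $1.8 million.

```python
# Convert Libratus's reported 147 mbb/hand win rate into other units.
# BIG_BLIND = $100 is an assumption for illustration, not from the article.
MBB_PER_HAND = 147    # milli-big blinds won per hand, as reported
BIG_BLIND = 100       # assumed big blind size in dollars
HANDS = 120_000       # hands played in the 20-day competition

bb_per_100 = MBB_PER_HAND / 10                          # big blinds per 100 hands
total_chips = MBB_PER_HAND * BIG_BLIND * HANDS / 1000   # dollars in chips

print(bb_per_100)    # 14.7
print(total_chips)   # 1764000.0 -- consistent with "more than $1.8 million"
```

Under that assumed blind size, the metric and the chip total tell the same story.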
"The techniques in Libratus do not use expert domain knowledge or human data and are not specific to poker," Sandholm and Brown, "Thus they apply to a host of imperfect-information games."
This hidden information is universal in real-world strategic interactions, according to the researchers. Examples include business negotiation, cybersecurity, finance and military applications.
Libratus has three main modules. The first computes an abstraction of the game that is smaller and easier to solve than considering all 10^161 possible decision points in the game. The AI then creates its own detailed strategy for the early rounds of Texas Hold'em and a coarse strategy for the later rounds. This is called the blueprint strategy.
An example of these abstractions in poker is grouping similar hands together and treating them identically.
"Intuitively, there is little difference between a King-high flush and a Queen-high flush," Brown said. "Treating those hands as identical reduces the complexity of the game and thus makes it computationally easier." In the same vein, similar bet sizes also can be grouped together.
In the final rounds of the game, a second module constructs a new, finer-grained abstraction based on the state of play. The AI computes a strategy for that subgame in real time, using the blueprint strategy for guidance to keep its play balanced across different subgames, which is required to achieve safe subgame solving. In the January competition, Libratus performed this computation on the Pittsburgh Supercomputing Center's Bridges computer.
When an opponent makes a move that is not in the abstraction, the AI computes a solution to a subgame that includes the opponent's actual move. Sandholm and Brown named this technique nested subgame solving.
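The contrast with simply rounding an off-tree bet can be sketched as below. This is a heavily simplified toy, not Libratus's algorithm: the blueprint is a three-entry table, and the "solver" for the new subgame is a pot-odds rule standing in for real equilibrium computation.

```python
# Toy contrast: action translation (round the bet, reuse the blueprint)
# versus nested subgame solving (re-solve a subgame containing the exact
# bet). Blueprint entries and the pot-odds rule are illustrative only.
BLUEPRINT = {100: "call", 200: "call", 400: "fold"}  # toy precomputed strategy

def action_translation(bet: int) -> str:
    # Older approach: map the off-tree bet to the nearest abstract size.
    # Odd in-between bet sizes can exploit this rounding.
    nearest = min(BLUEPRINT, key=lambda b: abs(b - bet))
    return BLUEPRINT[nearest]

def nested_subgame_solve(pot: int, bet: int) -> str:
    # Stand-in for solving a fresh subgame that includes the exact bet:
    # here, just a pot-odds threshold instead of a real solver.
    pot_odds = bet / (pot + 2 * bet)
    return "call" if pot_odds < 0.4 else "fold"

def respond(pot: int, bet: int) -> str:
    if bet in BLUEPRINT:
        return BLUEPRINT[bet]              # on-tree: blueprint covers it
    return nested_subgame_solve(pot, bet)  # off-tree: solve a new subgame

print(respond(pot=300, bet=150))  # off-tree bet handled by re-solving
```

The point of the structure, not the toy rule, is that the off-tree branch gets a freshly computed response rather than a rounded lookup.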
DeepStack, an AI created by the University of Alberta to play heads-up, no-limit Texas Hold’em, includes a similar algorithm, called continual re-solving. But DeepStack has not been tested against top professional players.
The third module is designed to improve the blueprint strategy as the AI moves through the competition. According to Sandholm, AIs usually use machine learning to find mistakes in the opponent's strategy and exploit them. But this opens the AI itself to exploitation if the opponent shifts strategy.
Instead, Libratus’ self-improver module analyzes opponents’ bet sizes to detect potential holes in the blueprint strategy. Libratus adds the missing branches, computes strategies and adds them to the blueprint.
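In outline, the self-improver works like the sketch below: tally the off-tree bet sizes opponents actually use, and add branches for the frequent ones to the blueprint. The threshold, bet sizes, and data are invented for illustration; in the real system, full strategies for the new branches were computed between sessions.

```python
from collections import Counter

# Sketch of the self-improver idea: find bet sizes opponents use that the
# blueprint lacks, then add branches for the most frequent ones. All
# numbers here are made up; Libratus computed real strategies for the
# added branches, which this sketch does not attempt.
blueprint_bets = {100, 200, 400}                 # sizes the blueprint covers
observed = [150, 150, 300, 150, 700, 300, 150]   # hypothetical opponent bets

off_tree = Counter(b for b in observed if b not in blueprint_bets)
for bet, count in off_tree.most_common():
    if count >= 3:                # illustrative "frequent enough" threshold
        blueprint_bets.add(bet)   # a strategy for this branch would then
                                  # be computed and merged into the blueprint

print(sorted(blueprint_bets))  # [100, 150, 200, 400]
```

Prioritizing the holes opponents probe most often is what lets the fixes land where the blueprint is actually being attacked.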
Along with beating the human pros, Libratus was evaluated against the best prior poker AIs. These AIs included Baby Tartanian8, a bot developed by Sandholm and Brown that won the 2016 Annual Computer Poker Competition held with the Association for the Advancement of Artificial Intelligence Annual Conference.
Baby Tartanian8 beat the next two strongest AIs in that competition by 12 mbb/hand and 24 mbb/hand. Libratus bested Baby Tartanian8 by 63 mbb/hand. DeepStack has not been tested against other AIs.
"The techniques that we developed are largely domain independent and can thus be applied to other strategic imperfect-information interactions, including non-recreational applications," Sandholm and Brown concluded. "Due to the ubiquity of hidden information in real-world strategic interactions, we believe the paradigm introduced in Libratus will be critical to the future growth and widespread application of AI."
The paper on this research was published in the journal Science.