Players of the science-fiction video game StarCraft II faced an unusual opponent this summer. An artificial intelligence (AI) known as AlphaStar — which was built by Google’s AI firm DeepMind — achieved a grandmaster rating after it was unleashed on the game’s European servers, placing within the top 0.15% of the region’s 90,000 players.
The result, published on 30 October in Nature1, shows that an AI can compete at the highest levels of StarCraft II, a massively popular online strategy game in which players compete in real time as one of three factions — the human Terrans or the alien Protoss and Zerg — battling in a futuristic warzone.
DeepMind, which previously built world-leading AIs that play chess and Go, targeted StarCraft II as its next benchmark in the quest for a general AI — a machine capable of learning or understanding any task that humans can — because of the game’s strategic complexity and rapid pace.
“I did not expect AI to essentially be superhuman in this domain so quickly, maybe not for another couple of years,” says Jon Dodge, an AI researcher at Oregon State University in Corvallis.
In StarCraft II, experienced players multitask by managing resources, executing complex combat manoeuvres and ultimately out-strategizing their opponents. Professionals play the game at a breakneck pace, making more than 300 actions per minute. The machine-learning techniques underlying DeepMind’s AI rely on artificial neural networks, which learn to recognize patterns from large data sets, rather than being given specific instructions.
Not a fair fight
DeepMind first pitted AlphaStar against high-level players in December 2018, in a series of laboratory-based test games. The AI played — and beat — two professional human players. But critics asserted that these demonstration matches weren’t a fair fight, because AlphaStar had superhuman speed and precision.
Before the team let AlphaStar out of the lab and onto the European StarCraft II servers, they restricted the AI's reflexes to make it a fairer contest. In July, players received notice that they could opt in to be matched against the AI. To keep the trial blind, DeepMind masked AlphaStar's identity.
“We wanted this to be like a blind experiment,” says David Silver, who co-leads the AlphaStar project. “We really wanted to play under those conditions and really get a sense of: how well does this pool of humans perform against us?”
AlphaStar’s training paid off: it crushed low-ranking opponents and ultimately amassed 61 wins out of 90 games against high-ranking players.