Google's DeepMind A.I. takes on something even more complicated than Go: StarCraft II

Researchers can now use Google's A.I. to test theories about how machines can learn to make sense of complicated systems, in this case a real-time strategy game. In StarCraft II, players fight one another by gathering resources to pay for defensive and offensive units.
 
It has a healthy competitive community that is known for its ludicrously high skill level. But considering that A.I. has already conquered complicated turn-based games like chess and Go, with DeepMind itself defeating the world's best Go players, a real-time strategy game makes sense as the next frontier.
 
Google and StarCraft developer Blizzard announced the collaboration today at the BlizzCon fan event in Anaheim, California, and Google's DeepMind A.I. division posted a blog entry about the partnership and why StarCraft II is such an ideal environment for machine-learning research.
 
“StarCraft is an interesting testing environment for current AI research because it provides a useful bridge to the messiness of the real-world,” reads Google’s blog. “The skills required for an agent to progress through the environment and play StarCraft well could ultimately transfer to real-world tasks.”
 
Most notably, StarCraft requires players to send out scouts to gather information. To succeed, the player then needs to retain and act on that information over a long period of time, even as the situation on the map keeps changing.
 
“This makes for an even more complex challenge as the environment becomes partially observable,” Google’s blog explains. “[That’s] an interesting contrast to perfect information games such as chess or go. And this is a real-time strategy game where both players are playing simultaneously, so every decision needs to be computed quickly and efficiently.”
 
If you're wondering how much humans will have to teach the A.I. about how to play and win at StarCraft, the answer is very little. DeepMind learned to beat the best Go players in the world by teaching itself through trial and error. All the researchers had to do was define what counts as success; the A.I. could then play games against itself in a loop, reinforcing whatever strategies led to more of it.
 
For StarCraft, that will likely mean asking the A.I. to prioritize how long it survives and/or how much damage it does to the enemy's primary base. Or maybe researchers will find that defining success in a more abstract way leads to better results. Discovering the answers to questions like these is the entire point of Google and Blizzard teaming up.
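To make that loop concrete, here is a minimal, hypothetical Python sketch. None of it is DeepMind's actual code: the reward weights, the lone "aggression" parameter, and the faked match simulation are all invented for illustration, and the naive keep-the-winner hill climb stands in for the far more sophisticated reinforcement learning DeepMind actually uses.

```python
# A minimal, hypothetical sketch of the self-play loop described above.
# Nothing here is DeepMind's actual code: the environment, the reward
# weights, and the single "aggression" parameter are invented stand-ins.
import random


def reward(survival_time: float, damage_to_enemy_base: float) -> float:
    """One possible definition of success: weight survival and damage.

    These weights are arbitrary assumptions; choosing them (or replacing
    this function with something more abstract) is exactly the open
    question the article describes.
    """
    return 0.3 * survival_time + 0.7 * damage_to_enemy_base


def play_match(policy_a: dict, policy_b: dict) -> tuple[float, float]:
    """Stand-in for a full game, returning each player's reward.

    A real system would simulate StarCraft II; here we fake plausible
    outcomes so the loop is runnable end to end.
    """
    score_a = reward(random.uniform(0, 600),
                     policy_a["aggression"] * random.uniform(0, 1000))
    score_b = reward(random.uniform(0, 600),
                     policy_b["aggression"] * random.uniform(0, 1000))
    return score_a, score_b


def mutate(policy: dict) -> dict:
    """Perturb the policy slightly: the 'trial' in trial and error."""
    tweaked = policy["aggression"] + random.gauss(0, 0.05)
    return {"aggression": min(1.0, max(0.0, tweaked))}


# Self-play: the agent repeatedly plays a tweaked copy of itself and
# keeps whichever variant earned more reward, reinforcing the
# strategies that led to more success.
best = {"aggression": 0.5}
for generation in range(1_000):
    challenger = mutate(best)
    best_score, challenger_score = play_match(best, challenger)
    if challenger_score > best_score:
        best = challenger

print(f"Policy after 1,000 self-play matches: {best}")
```

Swapping in a different reward function, say one that scores map control or economic growth instead of raw damage, is the kind of "more abstract" definition of success the researchers may end up exploring.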