
DeepMind Takes a Crack at StarCraft II

DeepMind has partnered with Blizzard to train its deep-learning engine in a real-time environment.

Thinking of watching the greatest StarCraft II match of the decade? Then forget about the South Korean tournament because Google is going to show you what playing StarCraft II is all about.

On November 4th, at the BlizzCon convention in Anaheim, Google’s DeepMind division announced what may be one of the most notable partnerships in the company’s history: it will team up with Blizzard, the developer and publisher of the StarCraft franchise, to improve DeepMind’s deep-learning engine.

DeepMind’s neural network-based engine has long been praised for its human-like way of learning: trial and error. When it comes to games, the engine has already pulled off some outstanding feats.
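The “trial and error” framing refers to reinforcement learning. As a rough illustration only, and not anything from DeepMind’s actual code, here is a minimal sketch of the idea: an agent tries actions, observes rewards, and gradually favors the ones that work. The action names and reward numbers below are invented for the example.

```python
import random

# Toy "trial and error" loop (illustrative only, not DeepMind's code):
# the agent tries actions, observes noisy rewards, and nudges its estimates.

actions = ["scout", "build", "attack"]                      # hypothetical choices
true_reward = {"scout": 0.2, "build": 1.0, "attack": 0.5}   # hidden payoffs
value = {a: 0.0 for a in actions}                           # the agent's running estimates

learning_rate = 0.1
exploration = 0.2   # chance of trying something random instead of the current favorite

for step in range(10_000):
    # Explore occasionally, otherwise exploit the current best estimate.
    if random.random() < exploration:
        action = random.choice(actions)
    else:
        action = max(actions, key=value.get)

    # Observe a noisy reward and move the estimate toward it.
    reward = true_reward[action] + random.gauss(0, 0.1)
    value[action] += learning_rate * (reward - value[action])

print(value)  # after many trials, "build" ends up with the highest estimate
```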

Probably the most notable of these is AlphaGo’s victory over Lee Sedol, one of the world’s best Go players, earlier this year. Now the company wants to take things to the next level and study how the deep-learning engine behaves when exposed to a real-time environment.

And what better way to teach a robot baby everything about making quick decisions and managing resources than a friendly (or not!) match of StarCraft II?

DeepMind’s choice of StarCraft II as a learning environment is neither arbitrary nor based on popularity alone. As the software engineers explained, games like chess and Go were easier for the engine to master because DeepMind had all the time it needed to compute its moves and work out a strategy.

In a real-time, ever-changing environment, however, where cunning, wit, and sleight of hand are all that matter, the engine might have a hard time adjusting.

By using StarCraft II as a learning environment, the developers want to teach the engine to coordinate highly dynamic variables in real time: resource management, production costs, defensive versus offensive postures, intuition, strategic thinking, and the ability to make quick decisions in the heat of battle.
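To give a sense of why real time is harder than a turn-based board, here is a small illustrative sketch, unrelated to DeepMind’s or Blizzard’s actual software, of an agent that must commit to an action before each game tick ends, however far along its planning is. The function names, tick length, and mineral figures are all made up for the example.

```python
import time
import random

# Illustrative only: the game will not wait for the agent, so a decision
# must be returned before each tick expires.

TICK_SECONDS = 0.05

def quick_policy(state):
    """Fast fallback decision: spend minerals if available, otherwise wait."""
    return "build_worker" if state["minerals"] >= 50 else "wait"

def plan(state, deadline):
    """Refine the choice until the time budget runs out, then commit."""
    best = quick_policy(state)
    while time.monotonic() < deadline:
        # A real planner would search or evaluate candidate plans here;
        # this stand-in just samples options within the time budget.
        candidate = random.choice(["build_worker", "expand", "attack", "wait"])
        if state["minerals"] >= 400 and candidate == "expand":
            best = candidate
    return best

state = {"minerals": 425}
deadline = time.monotonic() + TICK_SECONDS
print(plan(state, deadline))  # a decision comes back before the tick ends
```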

For now, to prepare DeepMind for its first match against a human opponent, the software engineers working on the project have set the engine to play as many matches as it can against itself, to get the hang of things.
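As a rough picture of what “playing against itself” can mean, here is a minimal self-play sketch over an invented rock-paper-scissors-style game. It is purely illustrative and has no connection to the real StarCraft II setup: one shared policy plays both sides of every game and reinforces whichever choice wins.

```python
import random

# Bare-bones self-play (illustrative only): one policy plays both sides of a
# toy symmetric game and learns from the games it generates against itself.

STRATEGIES = ["rush", "economy", "tech"]
# rush beats economy, economy beats tech, tech beats rush
BEATS = {"rush": "economy", "economy": "tech", "tech": "rush"}

weights = {s: 1.0 for s in STRATEGIES}   # shared policy used by both players

def pick(weights):
    """Sample a strategy in proportion to its current weight."""
    total = sum(weights.values())
    r = random.uniform(0, total)
    for s, w in weights.items():
        r -= w
        if r <= 0:
            return s
    return STRATEGIES[-1]

for game in range(50_000):
    a, b = pick(weights), pick(weights)   # the engine plays itself
    if BEATS[a] == b:
        weights[a] += 0.01                # reinforce the winning choice
    elif BEATS[b] == a:
        weights[b] += 0.01

total = sum(weights.values())
print({s: round(w / total, 2) for s, w in weights.items()})
# Whenever one strategy becomes too popular, its counter starts winning more,
# so self-play keeps pushing the mix back toward a balanced one.
```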

Image source: Flickr
