DeepMind has an AI bot that maneuvers through mazes and grabs objects on its own

Kevin Parrish
Digital Trends


Google’s DeepMind released a paper this week called Reinforcement Learning with Unsupervised Auxiliary Tasks, which describes a method for increasing both the learning speed and the final performance of artificial intelligence agents — or bots. The method adds two main auxiliary tasks for the AI to perform while it trains, and builds on the standard deep reinforcement learning foundation, which is basically a trial-and-error reward/punishment scheme in which the AI learns from its mistakes.
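To make that trial-and-error foundation concrete, here is a minimal sketch of tabular Q-learning — a classic reinforcement learning algorithm, not DeepMind's actual code — on a made-up five-state corridor where only the rightmost state pays a reward. The environment, constants, and reward of +1 are all illustrative assumptions:

```python
import random

random.seed(0)
N_STATES, ACTIONS = 5, (-1, +1)          # toy corridor; move left / move right
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.2    # learning rate, discount, exploration

q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

def step(state, action):
    # Hypothetical environment: reaching the rightmost state pays +1 and ends
    # the episode; every other move pays nothing (the "punishment" is simply
    # the absence of reward).
    nxt = max(0, min(N_STATES - 1, state + action))
    return nxt, (1.0 if nxt == N_STATES - 1 else 0.0), nxt == N_STATES - 1

for episode in range(300):
    state, done = 0, False
    while not done:
        if random.random() < EPSILON:          # explore occasionally
            action = random.choice(ACTIONS)
        else:                                  # otherwise exploit, breaking ties randomly
            best = max(q[(state, a)] for a in ACTIONS)
            action = random.choice([a for a in ACTIONS if q[(state, a)] == best])
        nxt, reward, done = step(state, action)
        # Trial-and-error update: nudge the estimate toward observed outcome.
        target = reward + GAMMA * max(q[(nxt, a)] for a in ACTIONS)
        q[(state, action)] += ALPHA * (target - q[(state, action)])
        state = nxt

# The learned policy should move right from every non-terminal state.
policy = [max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)]
```

The deep variants DeepMind uses (such as A3C) replace the lookup table with a neural network, but the learn-from-mistakes loop is the same shape.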

The first auxiliary task for speeding up AI learning is learning how to control the pixels on the screen. According to DeepMind, this is similar to how a baby learns to control its hands by moving them and watching those movements. In the case of AI, the bot comes to understand its visual input by learning to control the pixels, which in turn leads to better scores.
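One way to picture this pixel-control signal is as an auxiliary reward measuring how much the agent's last action changed each region of the screen. The sketch below is an illustration of that idea only — the grid size, frame shape, and per-cell averaging are assumptions, not the paper's exact formulation:

```python
def pixel_change_rewards(frame_prev, frame_next, grid=4):
    """Auxiliary reward per cell of a coarse grid over the screen:
    the mean absolute pixel change the last action produced there.
    Frames are nested lists of grayscale intensities (illustrative)."""
    h, w = len(frame_prev), len(frame_prev[0])
    ch, cw = h // grid, w // grid
    rewards = [[0.0] * grid for _ in range(grid)]
    for i in range(grid):
        for j in range(grid):
            total = 0
            for y in range(i * ch, (i + 1) * ch):
                for x in range(j * cw, (j + 1) * cw):
                    total += abs(frame_next[y][x] - frame_prev[y][x])
            rewards[i][j] = total / (ch * cw)
    return rewards

# Example: an 8x8 frame where only the top-left quadrant changes, as if
# the agent's movement redrew that part of the view.
prev = [[0] * 8 for _ in range(8)]
nxt = [row[:] for row in prev]
for y in range(4):
    for x in range(4):
        nxt[y][x] = 255
r = pixel_change_rewards(prev, nxt, grid=4)
```

An agent trained to maximize such a signal is pushed to discover which of its actions visibly affect the world — the "moving your hands and watching" intuition from the baby analogy.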


“Consider a baby that learns to maximize the cumulative amount of red that it observes. To correctly predict the optimal value, the baby must understand how to increase ‘redness’ by various means, including manipulation (bringing a red object closer to the eyes); locomotion (moving in front of a red object); and communication (crying until the parents bring a red object),” DeepMind’s paper states. “These behaviors are likely to recur for many other goals that the baby may subsequently encounter.”

The second auxiliary task trains the AI to predict what the immediate rewards will be based on a brief history of prior actions. To enable this, the team fed the agent equal amounts of previously rewarding and non-rewarding histories. The end result is that the AI can discover visual features that are likely to lead to rewards faster than before.
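The "equal amounts" detail matters because rewards are rare in most games, so an unbiased sample would be almost entirely non-rewarding. A minimal sketch of that balanced sampling might look like the following — the replay-buffer layout and batch size are assumptions for illustration:

```python
import random

def sample_reward_prediction_batch(replay, batch_size=8):
    """Draw half the batch from histories that ended in a reward and half
    from histories that did not, so rare rewards are not drowned out.
    replay: list of (history, reward) tuples (layout is an assumption).
    Sampling is with replacement, so a single rewarding event can fill
    its half of the batch."""
    rewarding = [t for t in replay if t[1] != 0]
    non_rewarding = [t for t in replay if t[1] == 0]
    half = batch_size // 2
    batch = random.choices(rewarding, k=half) + random.choices(non_rewarding, k=half)
    random.shuffle(batch)
    return batch

# Example: a buffer where only 1 of 100 stored histories was rewarding
# still yields a 50/50 training batch.
replay = [(["frames..."], 1.0)] + [(["frames..."], 0.0)] * 99
batch = sample_reward_prediction_batch(replay)
```

This is the same skew the paper describes as preferentially replaying sequences containing rewarding events.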

“To learn more efficiently, our agents use an experience replay mechanism to provide additional updates to the critics. Just as animals dream about positively or negatively rewarding events more frequently, our agents preferentially replay sequences containing rewarding events,” the paper adds.

With these two auxiliary tasks added to the previous A3C agent, the resulting agent is what the team calls Unreal (UNsupervised REinforcement and Auxiliary Learning). The team virtually sat this bot in front of 57 Atari games and a separate Wolfenstein-like labyrinth game consisting of 13 levels. In all scenarios, the bot was given only the raw RGB output image, giving it direct access to the on-screen pixels. The Unreal bot was rewarded across the board for tasks ranging from shooting down aliens in Space Invaders to grabbing apples in a 3D maze.

Because the Unreal bot learns to control the pixels and to predict whether actions will produce rewards, it is capable of learning 10 times faster than DeepMind’s previous best agent (A3C), and it achieves better final performance as well.

“We can now achieve 87 percent of expert human performance averaged across the Labyrinth levels we considered, with super-human performance on a number of them,” the company said. “On Atari, the agent now achieves on average 9x human performance.”

DeepMind is hopeful that the work that went into the Unreal bot will enable the team to scale up its agents to handle even more complex environments in the near future. Until then, check out the video embedded above showing the AI moving through labyrinths and grabbing apples on its own, without any human intervention.