Last time we saw DeepMind, they were teaching an AI to gain human-style memory and recall. This time around, they’ve developed a sophisticated AI that can play 1980s Atari games, learn from its successes and mistakes, and eventually beat you in a one-on-one contest. Yep, Google’s AI company just built a retro robot gamer.
In a study published in the journal Nature, the company detailed a new system that uses frames from Atari games as data input. It processes the input from a variety of levels, both simple and complex, in order to acquaint itself with the intricacies of the game.
Called “Human-level control through deep reinforcement learning,” the study tasked the AI with deriving representations of the environment from those inputs and using that information to generalize from past experience to new situations. In this case, the AI applied that to Atari games, weighing what happened in a variety of previous in-game situations to make each of its subsequent decisions. DeepMind’s new AI draws only on short-term experience (it has to relearn everything each time out), although its ability to learn is quite remarkable.
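To give a feel for the “learn from successes and mistakes” loop, here is a minimal sketch of tabular Q-learning, the classic update rule that DeepMind’s system approximates with a deep network reading raw screen frames. The corridor environment, state count, and all parameter values below are hypothetical stand-ins for illustration, not anything from the paper.

```python
import random

random.seed(0)  # reproducible toy run

# Toy stand-in for a game: a 5-state corridor where the agent
# must walk right to reach a reward at the far end.
N_STATES = 5
ACTIONS = [-1, +1]  # step left, step right

def step(state, action):
    """Apply an action; reward 1.0 only for reaching the final state."""
    nxt = min(max(state + action, 0), N_STATES - 1)
    done = nxt == N_STATES - 1
    return nxt, (1.0 if done else 0.0), done

def train(episodes=500, alpha=0.1, gamma=0.9, epsilon=0.1):
    """Q-learning: nudge Q(s,a) toward reward + discounted best future value."""
    q = [[0.0, 0.0] for _ in range(N_STATES)]
    for _ in range(episodes):
        state, done = 0, False
        for _ in range(100):  # cap episode length
            # Epsilon-greedy: mostly exploit current estimates, sometimes explore.
            if random.random() < epsilon:
                a = random.randrange(2)
            else:
                best = max(q[state])
                a = random.choice([i for i in range(2) if q[state][i] == best])
            nxt, reward, done = step(state, ACTIONS[a])
            # Q(s,a) <- Q(s,a) + alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))
            target = reward + (0.0 if done else gamma * max(q[nxt]))
            q[state][a] += alpha * (target - q[state][a])
            state = nxt
            if done:
                break
    return q

q = train()
# Greedy policy per non-terminal state: 1 means "step right" toward the reward.
policy = [max(range(2), key=lambda i: q[s][i]) for s in range(N_STATES - 1)]
print(policy)
```

The full system in the paper replaces this lookup table with a convolutional network so the same update rule scales to pixel inputs, which is what lets one architecture handle 49 different games.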
In the 49 games the team had it play, it bested every previous AI system in 43 of the titles, asserting its superiority over other robot brains. It also beat its human opponent in 29 of the same games, so it definitely improves enough in a short time to be sufficiently competitive.
You can check out the study from the link below.