Artificial intelligence and machine learning have lately been the biggest buzzwords in tech media, and for good reason. Every time you use a search engine, talk to a chatbot, or scroll through a social media app, some application of artificial intelligence is most likely at work. What I want to focus on here, though, is AI in games. From simple games such as tic-tac-toe to harder ones on the level of chess and Go, all have been cracked by machine learning algorithms. Recently, however, this was pushed to an even higher level with the introduction of OpenAI Five, which took on Dota 2.
Now what is Dota 2?
Dota 2 (Defense of the Ancients 2) is a video game in which two teams of five players battle it out against each other. The goal of the game is to destroy the enemy team's Ancient, a structure that lies at the center of their base, which can be seen at the top right of the map below.
This is achieved by each player piloting their own character, as a team, across this map from a top-down view.
Fig. 2: A team fight in Dota 2
Above we can see an example of such a team fight.
What you might immediately notice is this game's inherent complexity compared to something like chess, and it gets worse. Not only does a player need to consider every possible action, such as abilities, movement mechanics, and targeting; they must also decide when to do what and why, and on top of it all think about overarching macro strategy. The game also relies on incomplete information. As can be seen in Fig. 1, the map has shaded areas beyond the diagonal: if the player pans the camera to these regions, they will not be able to find out what heroes or actions are present there, and must instead deduce this from the information they do have. This makes you wonder: how could an AI ever tackle such a complex task?
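To make the incomplete-information point concrete, here is a deliberately tiny sketch of the fog-of-war idea. Everything in it (the function name, the flat 2D positions, the single vision radius) is a made-up simplification for illustration; the real game's visibility rules are far richer.

```python
import math

def visible_enemies(enemies, allies, vision_radius):
    """Toy fog of war: an enemy unit is observable only if it lies
    within vision_radius of at least one allied hero.
    Positions are (x, y) tuples; this is a hypothetical simplification."""
    return [e for e in enemies
            if any(math.dist(e, a) <= vision_radius for a in allies)]

# Two allies; the enemy at (50, 50) is hidden in the fog.
print(visible_enemies([(3, 4), (50, 50), (12, 11)],
                      [(0, 0), (10, 10)], vision_radius=6))
# → [(3, 4), (12, 11)]
```

The AI only ever sees the filtered list, so anything about the hidden enemy has to be inferred from past observations rather than read off the screen.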
A freakish amount of smart computing
Well, as the OpenAI Five team did it, the neural network being trained received around 16,000 inputs every 4 frames of the game and chose from up to a whopping 80,000 possible outputs to control its heroes in the same window. These raw numbers are impressive on their own, but the way the research team weighted rewards for different actions, e.g. based on when their effects land, was also state of the art. The team used 256 GPUs and 128,000 CPU cores to play roughly 180 years of in-game time per day while training the AI to play Dota 2.
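One reward-weighting idea the OpenAI Five paper describes is "team spirit": each hero's reward is blended with the team average, so heroes gradually learn to value the team's success over their own. Below is a minimal sketch of that blending; the function name and the example reward values are my own illustration, not the paper's actual constants.

```python
def blend_team_rewards(individual_rewards, team_spirit):
    """Blend each hero's reward with the team-average reward.

    With team_spirit = 0 each hero optimizes purely its own reward;
    with team_spirit = 1 all five heroes share the team average.
    A sketch of the "team spirit" idea, not OpenAI's exact code.
    """
    mean_reward = sum(individual_rewards) / len(individual_rewards)
    return [(1 - team_spirit) * r + team_spirit * mean_reward
            for r in individual_rewards]

# One hero scored a kill (reward 1.0); at team_spirit 0.5 part of
# that credit is shared with the four teammates.
print(blend_team_rewards([1.0, 0.0, 0.0, 0.0, 0.0], team_spirit=0.5))
```

During training, team spirit was annealed upward over time, which is one example of the careful reward design mentioned above.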
What were the results?
Believe it or not, OpenAI Five was able to beat the world champions of Dota 2 in a show match after less than a year of training, and it then flexed an insane 99.4% win rate across thousands of games against public teams. Below you can see the graph of OpenAI Five's TrueSkill over the course of training.
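TrueSkill, the metric on that graph's y-axis, is a rating system: beating stronger opponents moves your estimated skill up. Computing TrueSkill itself needs a dedicated library, so as a stand-in here is the simpler Elo update, a related rating system, just to give intuition for how such a curve is produced. The function and the k=32 constant are standard Elo, not anything from the OpenAI paper.

```python
def elo_update(rating_a, rating_b, score_a, k=32):
    """One Elo rating update after a match.

    score_a is 1.0 if player A won, 0.0 if A lost. Elo is a simpler
    relative of TrueSkill, used here purely for illustration.
    """
    expected_a = 1 / (1 + 10 ** ((rating_b - rating_a) / 400))
    new_a = rating_a + k * (score_a - expected_a)
    new_b = rating_b + k * ((1 - score_a) - (1 - expected_a))
    return new_a, new_b

# Two equally rated players: the winner gains k/2 = 16 points.
print(elo_update(1500, 1500, 1))  # → (1516.0, 1484.0)
```

Repeatedly applying updates like this over millions of self-play and evaluation games is what traces out a rising skill curve like the one in the graph.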
As you can see, the last team it played against was the world champions, and it is also striking how shockingly quickly the AI became better than semi-pro teams. So, finally, to answer our question: even a task this complex can be cracked, given large-scale deep reinforcement learning and a freakish amount of compute.
Reference: Berner, Christopher, et al. "Dota 2 with Large Scale Deep Reinforcement Learning." arXiv preprint arXiv:1912.06680 (2019).