AlphaStar implementation series - Replay file

Dohyeong Kim
2 min read · Jun 7, 2020


I recently made a simple Terran agent as a rule-based system using DeepMind's PySC2. With conditional statements, I could get it to the point of producing Marauders and rushing the enemy base. However, I realized that such a program becomes too complicated to write and control once higher-tech units are involved.

For that reason, I decided to use a Deep Learning approach instead of a rule-based one, and started reading the AlphaStar paper, which reports the best performance to date in StarCraft II.

I had also tried to replicate the AlphaGo paper published a few years earlier, but failed because there were few resources available, such as an open-source environment or sample code from other people.

AlphaStar, on the other hand, has abundant resources available. Therefore, this time I can start replicating the paper.

  1. API for downloading replay files: https://github.com/Blizzard/s2client-proto/tree/master/samples/replay-api
  2. API for parsing replay files: https://github.com/narhen/pysc2-replay

Parsing replay file

The first information we should check in a replay file is each player's MMR, APM, and game result (win or loss).
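The original code embed is not reproduced in this copy of the post, so the following is a minimal sketch of a filtering step over that metadata. With PySC2, the MMR, APM, and result fields come from `controller.replay_info(replay_data)`; the dict layout, helper name, and thresholds below are my own illustrative assumptions, not from the post.

```python
# Sketch: filter replays by per-player MMR/APM/result, as the post describes.
# With PySC2 the same fields come from controller.replay_info(replay_data):
#   info.player_info[i].player_mmr / .player_apm / .player_result.result

def is_good_replay(player_stats, min_mmr=1000, min_apm=10):
    """Accept a replay only if every player clears the MMR/APM thresholds
    and the game produced a decided result (someone actually won)."""
    if not player_stats:
        return False
    has_winner = any(p["result"] == "Victory" for p in player_stats)
    skilled = all(p["mmr"] >= min_mmr and p["apm"] >= min_apm
                  for p in player_stats)
    return has_winner and skilled

# Example metadata for a two-player game (made-up numbers).
players = [
    {"mmr": 3500, "apm": 120, "result": "Victory"},
    {"mmr": 3400, "apm": 110, "result": "Defeat"},
]
print(is_good_replay(players))  # True for this example
```

Filtering on these fields before parsing avoids spending disk space and time on low-quality or undecided games.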

Players information of replay file

With code like the above, we can select only the replay files that meet certain player conditions.

Once a replay file is selected, we can prepare to extract the states and actions that occurred during the game, via the following code.
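Since the original embed is missing from this copy, below is a sketch of the usual PySC2 replay-loading setup, following the pattern of the pysc2-replay repository linked above. The replay path, feature-layer resolutions, and observed player id are my assumptions; this configures a replay session against a local StarCraft II installation, so it is not runnable without the game.

```python
from pysc2 import run_configs
from pysc2.lib import features, point
from s2clientprotocol import sc2api_pb2 as sc_pb

REPLAY_PATH = "my_replay.SC2Replay"  # hypothetical path
SCREEN_SIZE = MINIMAP_SIZE = 64      # assumed feature-layer resolutions

run_config = run_configs.get()
replay_data = run_config.replay_data(REPLAY_PATH)

# Interface options: request feature layers plus raw and score data.
interface = sc_pb.InterfaceOptions(raw=True, score=True)
interface.feature_layer.width = 24
point.Point(SCREEN_SIZE, SCREEN_SIZE).assign_to(
    interface.feature_layer.resolution)
point.Point(MINIMAP_SIZE, MINIMAP_SIZE).assign_to(
    interface.feature_layer.minimap_resolution)

with run_config.start(want_rgb=False) as controller:
    info = controller.replay_info(replay_data)
    map_data = None
    if info.local_map_path:
        map_data = run_config.map_data(info.local_map_path)
    controller.start_replay(sc_pb.RequestStartReplay(
        replay_data=replay_data,
        map_data=map_data,
        options=interface,
        observed_player_id=1))  # replay the game from player 1's side
    # Feature object that converts raw protos into agent observations.
    feat = features.features_from_game_info(controller.game_info())
```

After this setup, stepping the controller and calling `feat.transform_obs(controller.observe())` in a loop yields the per-frame observations and actions to be recorded.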

Data parsing preparation from replay file

Finally, we need to save the parsed data in the hkl file format. For actions, the type and arguments are separated from the original action data. For observations, the feature_screen, feature_minimap, player, feature_units, game_loop, available_actions, build_queue, production_queue, single_select, multi_select, and score_cumulative fields are stored separately.
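Again the original embed is missing here, so the following sketch shows how each frame might be split into those fields and accumulated before being dumped with hickle. The observation keys come from the post; the helper name, action-dict layout, and example values are my assumptions.

```python
# Sketch: split each frame's action/observation into the fields the post
# lists, accumulating them into one trajectory dict.

OBS_KEYS = ["feature_screen", "feature_minimap", "player", "feature_units",
            "game_loop", "available_actions", "build_queue",
            "production_queue", "single_select", "multi_select",
            "score_cumulative"]

def append_frame(trajectory, action, obs):
    """Separate an action into type/arguments and an observation into the
    named sub-fields, appending each to the trajectory lists."""
    trajectory.setdefault("action_type", []).append(action["function"])
    trajectory.setdefault("action_arguments", []).append(action["arguments"])
    for key in OBS_KEYS:
        trajectory.setdefault(key, []).append(obs[key])
    return trajectory

# Tiny fake frame to show the layout (real values come from PySC2).
obs = {k: 0 for k in OBS_KEYS}
traj = append_frame({}, {"function": 331, "arguments": [[0], [23, 42]]}, obs)
# The finished dict can then be written in the hkl format:
#   import hickle as hkl
#   hkl.dump(traj, "replay.hkl", mode="w")
```

Storing the fields as parallel lists keyed by name makes it straightforward to load only the parts a particular training run needs.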

Recording data of replay file

Note that while an original StarCraft II replay file is only around 13 KB, the hkl file generated from it is about 600 MB.

You can find the full source code at https://github.com/kimbring2/AlphaStar_Implementation/blob/master/trajectory_generator.py.

Conclusion

In this first post of the series, we looked at how to extract human expert data from StarCraft II replay files, which is essential for training a Deep Learning agent for the game. In the next post, I will explain how to build the network for the agent.
