AlphaStar implementation series - Replay file

I recently built a simple rule-based Terran agent using DeepMind's PySC2. Using only conditional statements, I was able to get it to produce Marauders and rush the enemy base. However, I realized that the program would become too complicated if I tried to produce and control higher-tech units.

For that reason, I decided to use a Deep Learning approach instead of a rule-based one. I therefore started to read the AlphaStar paper, which shows the best performance in StarCraft II to date.

I had also tried to replicate the AlphaGo paper published a few years earlier. However, I failed because there were few resources available, such as an open-source environment or sample code from other people.

AlphaStar, on the other hand, has abundant resources, so this time I can actually start replicating the paper. Two resources in particular are essential:

  1. API for downloading replay files: https://github.com/Blizzard/s2client-proto/tree/master/samples/replay-api
  2. API for parsing replay files: https://github.com/narhen/pysc2-replay

Parsing a replay file

The first information we should check in a replay file is each player's MMR, APM, and win/loss result.
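The original post shows this as a code screenshot; as a rough stand-in, here is a minimal sketch of how the player information can be read with PySC2's run_configs, in the style of the pysc2-replay repository. The replay path and the MMR/APM thresholds below are hypothetical.

    from pysc2 import run_configs

    REPLAY_PATH = "replays/example.SC2Replay"  # hypothetical replay path

    run_config = run_configs.get()
    replay_data = run_config.replay_data(REPLAY_PATH)

    # Launch a headless SC2 process and ask it to parse the replay header.
    sc2_proc = run_config.start()
    controller = sc2_proc.controller
    info = controller.replay_info(replay_data)

    # Each entry holds the MMR, APM, and match result of one player.
    for player in info.player_info:
        print("player_id:", player.player_info.player_id,
              "mmr:", player.player_mmr,
              "apm:", player.player_apm,
              "result:", player.player_result.result)  # 1 = Victory, 2 = Defeat

    # Keep the replay only if both players are reasonably skilled
    # (hypothetical thresholds).
    accepted = all(p.player_mmr >= 3000 and p.player_apm >= 100
                   for p in info.player_info)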

Players information of replay file

Using the code above, we can find replay files that meet certain player conditions.

Once a replay file is selected, we can prepare to extract the states and actions that occurred during the game, via the following code.
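Again as a sketch rather than the original screenshot, and continuing from the run_config, controller, info, and replay_data above, the preparation step configures feature-layer rendering and starts the replay from one player's point of view. The 84x84 screen and 64x64 minimap resolutions are hypothetical choices.

    from pysc2.lib import features
    from s2clientprotocol import sc2api_pb2 as sc_pb

    # Ask SC2 to render feature layers instead of RGB pixels.
    interface = sc_pb.InterfaceOptions(raw=True, score=True)
    interface.feature_layer.width = 24
    interface.feature_layer.resolution.x = 84          # hypothetical screen size
    interface.feature_layer.resolution.y = 84
    interface.feature_layer.minimap_resolution.x = 64  # hypothetical minimap size
    interface.feature_layer.minimap_resolution.y = 64

    # A replay stores only a reference to the map, so supply the map data too.
    map_data = None
    if info.local_map_path:
        map_data = run_config.map_data(info.local_map_path)

    controller.start_replay(sc_pb.RequestStartReplay(
        replay_data=replay_data,
        map_data=map_data,
        options=interface,
        observed_player_id=1))  # watch the game from player 1's perspective

    # Build the transformer that converts raw protos to PySC2 observations.
    feat = features.features_from_game_info(controller.game_info())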

Data parsing preparation from replay file

Finally, we need to save the parsed data in the hkl file format. In the case of actions, the type and the arguments are separated from the original action data. In the case of observations, feature_screen, feature_minimap, player, feature_units, game_loop, available_actions, build_queue, production_queue, single_select, multi_select, and score_cumulative are saved separately.
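A rough sketch of that recording loop, continuing the sketches above, might look as follows; the step multiplier, the subset of saved observation fields, and the output filename are all hypothetical.

    import hickle as hkl

    step_mul = 8  # hypothetical: sample an observation every 8 game loops
    observations, actions = [], []

    while True:
        controller.step(step_mul)
        obs = controller.observe()

        # Newer PySC2 versions take the full ResponseObservation here.
        agent_obs = feat.transform_obs(obs)

        # Only a subset of fields is shown; the full script also saves
        # feature_units, build_queue, production_queue, single_select,
        # multi_select, and so on.
        observations.append({
            "feature_screen": agent_obs["feature_screen"],
            "feature_minimap": agent_obs["feature_minimap"],
            "player": agent_obs["player"],
            "game_loop": agent_obs["game_loop"],
            "available_actions": agent_obs["available_actions"],
            "score_cumulative": agent_obs["score_cumulative"],
        })

        # Map each raw action back to a PySC2 FunctionCall, then split it
        # into its type (function id) and arguments.
        for raw_action in obs.actions:
            func_call = feat.reverse_action(raw_action)
            actions.append((func_call.function, func_call.arguments))

        if obs.player_result:  # non-empty once the game has ended
            break

    # Dump the whole trajectory into a single hkl file.
    hkl.dump({"observations": observations, "actions": actions},
             "trajectory_0.hkl", mode="w")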

Recording data of replay file

Note that while the original StarCraft II replay file is only around 13 KB, the hkl file generated from it is about 600 MB.

You can see the full source code at https://github.com/kimbring2/AlphaStar_Implementation/blob/master/trajectory_generator.py.

Conclusion

In this first post of the series, we looked at how to extract human expert data from StarCraft II replay files, which is essential for training a Deep Learning agent. In the next post, I will explain how to build a network for the agent.
