School for Zergs
This is a community-funded project where people contribute money to fund the translation of Korean Zerg guides.
Help us fund future videos!
Mysterious Mysteries
The following command trims a video with FFmpeg. Stream copy lets you trim without re-encoding, so the output keeps the original quality.

:::bash
ffmpeg -i input.mp4 -ss 00:01:23 -to 00:04:20 -c copy output.mp4

-c copy trims via stream copy, which is fast and does not re-encode the video.
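If you prefer to drive the trim from Python, like the rest of this site, here is a minimal sketch that shells out to the same command with subprocess; the file names and timestamps are just placeholders.

:::python
# Minimal sketch: wrap the same ffmpeg stream-copy trim from Python.
# File names and timestamps are placeholders.
import subprocess

def trim(src, dst, start, end):
    # -c copy performs a stream copy, so nothing is re-encoded.
    subprocess.run(
        ["ffmpeg", "-i", src, "-ss", start, "-to", end, "-c", "copy", dst],
        check=True,
    )

trim("input.mp4", "output.mp4", "00:01:23", "00:04:20")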
Let's train some workers when we have the minerals for them!
That's the basics: you go 16 supply, 16 factory, train 2 marines and put them with 1 SCV at the front on the ramp, then 22 CC, 24 supply, 25 tank, 26 ebay.
This tutorial will walk you through StarCraft: Brood War bot development with Python, but first we are going to dive deep into Coroutines!
Real-time strategy (RTS) games are known to be one of the most complex game genres for humans and machines to play. To tackle the task we focus on a message-passing divide-and-conquer approach with ZMQ and multiple languages, splitting the game into separate components and developing separate systems to solve each task.
This trend gives rise to a new problem: how to tie these systems together into a functional StarCraft: Brood War playing bot?
Coroutines are computer-program components that generalize subroutines for non-preemptive multitasking by allowing multiple entry points for suspending and resuming execution at certain locations.
Subroutines are short programs that perform functions of a general nature that can occur in various types of computation.
A sequence of program instructions that perform a specific task, packaged as a unit. This unit can then be used in programs wherever that particular task should be performed.
Subprograms may be defined within programs, or separately in libraries that can be used by multiple programs.
In different programming languages, a subroutine may be called a procedure, a function, a routine, a method, or a subprogram.
Processes are independent units of execution; a subroutine lives inside a process.
Cooperative multitasking, also known as non-preemptive multitasking, is a style of computer multitasking in which the operating system never initiates a context switch from a running process to another process.
Instead, processes voluntarily yield control periodically or when idle in order to enable multiple applications to be run concurrently.
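As a rough illustration (not from the original text), plain Python generators are enough to sketch the idea: each task does a little work and voluntarily yields control back to a tiny round-robin scheduler.

:::python
# A minimal sketch of cooperative multitasking with plain generators:
# tasks voluntarily yield, and a scheduler resumes them in turn.
def task(name, steps):
    for i in range(steps):
        print("{}: step {}".format(name, i))
        yield  # voluntarily give control back to the scheduler

def run_round_robin(tasks):
    while tasks:
        for t in list(tasks):
            try:
                next(t)
            except StopIteration:
                tasks.remove(t)

run_round_robin([task("drone", 2), task("overlord", 3)])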
await and yield
Tornado uses a single-threaded event loop to enable concurrent actions. This means that all real-time application code should aim to be asynchronous and non-blocking, because only one operation can be active at a time.
Asynchronous operations generally return placeholder objects (Futures), which are usually transformed into their result with the await or yield keyword.
Here is a sample synchronous function:
:::python
from tornado.httpclient import HTTPClient

def synchronous_fetch(url):
    http_client = HTTPClient()
    response = http_client.fetch(url)
    return response.body
And here the same rewritten asynchronously as a native coroutine:
:::python
from tornado.httpclient import AsyncHTTPClient

async def asynchronous_fetch(url):
    http_client = AsyncHTTPClient()
    response = await http_client.fetch(url)
    return response.body
Anything you can do with coroutines you can also do by passing callbacks around, but coroutines provide an important simplification by letting you organize your code in the same way you would if it were synchronous. This is especially important for error handling, since try/except blocks work as you would expect.
ZeroMQ is a community of projects focused on decentralized message passing. They agree on protocols (RFCs) for connecting to each other and exchanging messages. Messages are blobs of useful data of any reasonable size.
You can use this feature to queue, route, and filter messages according to various patterns.
Multilingual Distributed Messaging thanks to the ZeroMQ Community.
Its asynchronous I/O model gives you scalable multicore applications, built as asynchronous message-processing subroutines. Read the guide.
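To make the message-passing idea concrete, here is a minimal PyZMQ sketch (the endpoint and payloads are arbitrary placeholders): a REP socket answers whatever a REQ socket asks, all in one process for simplicity.

:::python
# Minimal ZeroMQ request/reply sketch with PyZMQ.
# The endpoint and the payloads are placeholders for illustration only.
import zmq

ctx = zmq.Context()

server = ctx.socket(zmq.REP)
server.bind("tcp://127.0.0.1:5555")

client = ctx.socket(zmq.REQ)
client.connect("tcp://127.0.0.1:5555")

client.send(b"state?")      # the bot asks for something
print(server.recv())        # the server reads the request...
server.send(b"frame data")  # ...and replies with a blob of data
print(client.recv())        # the bot receives the reply

client.close()
server.close()
ctx.term()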
Coroutines are the recommended way to write asynchronous code.
Coroutines use the Python 3 await or yield keyword to suspend and resume execution instead of a chain of callbacks. All coroutines use explicit context switches and are called as asynchronous functions.
Coroutines are almost as simple as synchronous code, but without the expense of a thread. They make concurrency easier to reason about by reducing the number of places where a context switch can happen.
Coroutines do not raise exceptions in the normal way: any exception they raise will be trapped in the awaitable object until it is yielded. This means it is important to call coroutines in the right way, or you may have errors that go unnoticed:
:::python
async def divide(x, y):
    return x / y

def bad_call():
    # This should raise a ZeroDivisionError, but it won't,
    # because the coroutine is called incorrectly!
    divide(1, 0)
In nearly all cases, any function that calls a coroutine must be a coroutine itself, and use the await or yield keyword in the call.
:::python
async def good_call():
    # await will unwrap the object returned by divide()
    # and raise the exception.
    await divide(1, 0)
Sometimes you may want to "fire and forget" a coroutine without waiting for its result. In this case it is recommended to use IOLoop.spawn_callback, which makes the IOLoop responsible for the call. If it fails, the IOLoop will log a stack trace:
:::python
# The IOLoop will catch the exception and print a stack trace
# in the logs. Note that this doesn't look like a normal call,
# since we pass the function object to be called by the IOLoop.
IOLoop.current().spawn_callback(divide, 1, 0)
The simplest way to call a blocking function from a coroutine is to use IOLoop.run_in_executor, which returns Futures that are compatible with coroutines:
:::python
async def call_blocking():
    await IOLoop.current().run_in_executor(None, blocking_func, args)
The multi function accepts lists and dicts whose values are Futures, and waits for all of those Futures in parallel:
:::python
from tornado.gen import multi

async def parallel_fetch(url1, url2):
    resp1, resp2 = await multi([http_client.fetch(url1),
                                http_client.fetch(url2)])

async def parallel_fetch_many(urls):
    res = await multi([http_client.fetch(u) for u in urls])
    # res is a list of HTTPResponses in the same order

async def parallel_fetch_dict(urls):
    res = await multi({url: http_client.fetch(url) for url in urls})
    # res is a dict {url: HTTPResponse}
In decorated coroutines, it is possible to yield the list or dict directly:
:::python
@gen.coroutine
def parallel_fetch_decorated(url1, url2):
    resp1, resp2 = yield [http_client.fetch(url1),
                          http_client.fetch(url2)]
Sometimes it is useful to save a Future instead of yielding it immediately, so you can start another operation before waiting.
:::python
from tornado.gen import convert_yielded

async def get(self):
    # convert_yielded() starts the native coroutine in the background.
    # This is equivalent to asyncio.ensure_future() (both work).
    fetch_future = convert_yielded(self.fetch_next_chunk())
    while True:
        chunk = await fetch_future
        if chunk is None:
            break
        self.write(chunk)
        fetch_future = convert_yielded(self.fetch_next_chunk())
        await self.flush()
This is a little easier to do with decorated coroutines, because they start immediately when called:
:::python
@gen.coroutine
def get(self):
    fetch_future = self.fetch_next_chunk()
    while True:
        chunk = yield fetch_future
        if chunk is None:
            break
        self.write(chunk)
        fetch_future = self.fetch_next_chunk()
        yield self.flush()
In native coroutines, async for can be used.
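For example, here is a minimal sketch with an async generator; it uses asyncio directly rather than Tornado, and the chunk fetching is simulated.

:::python
import asyncio

async def chunks():
    # Simulated stand-in for fetch_next_chunk(): yield a few chunks, then stop.
    for c in (b"chunk-1", b"chunk-2", b"chunk-3"):
        await asyncio.sleep(0)  # pretend to wait on I/O
        yield c

async def consume():
    # async for awaits each item produced by the async generator.
    async for chunk in chunks():
        print(chunk)

asyncio.run(consume())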
PeriodicCallback is not normally used with coroutines. Instead, a coroutine can contain a while True: loop and use tornado.gen.sleep:
:::python
async def minute_loop():
    while True:
        await do_something()
        await gen.sleep(60)

# Coroutines that loop forever are generally started with
# spawn_callback().
IOLoop.current().spawn_callback(minute_loop)
Sometimes a more complicated loop may be desirable. For example, the previous loop runs every 60+N seconds, where N is the running time of do_something(). To run exactly every 60 seconds, use the interleaving pattern from above:
:::python
async def minute_loop2():
    while True:
        nxt = gen.sleep(60)   # Start the clock.
        await do_something()  # Run while the clock is ticking.
        await nxt             # Wait for the timer to run out.
TBD
Let's learn to order our workers to gather some resources closest to them!
TorchCraft is a BWAPI module that sends StarCraft: Brood War data out over a ZMQ connection. This lets you parse game data and interact with the Brood War API from anywhere.
This tutorial will walk you through starting the game for the first time after installing the environment. We are going to dive into TorchCraft's Python API and its provided example.py, learn to train an SCV, gather minerals, build a refinery and start harvesting gas!
Let's start the game and learn a bit more about TorchCraft; a general overview can be found in:
Synnaeve, G., Nardelli, N., Auvolat, A., Chintala, S., Lacroix, T., Lin, Z., Richoux, F. and Usunier, N., 2016. TorchCraft: a Library for Machine Learning Research on Real-Time Strategy Games - arXiv:1611.00625.
launcher.py script

cd /opt/StarCraft
wine bwheadless.exe -e /opt/StarCraft/StarCraft.exe\
 -l /opt/StarCraft/bwapi-data/BWAPI.dll --host\
 --name Blueberry --game Blueberry --race T\
 --map maps/TorchUp/\(4\)FightingSpirit1.3.scx&\
wine Chaoslauncher/Chaoslauncher.exe
That is what we are actually executing; let's build a launcher.py script.
:::python
#!/usr/bin/env python3
# Run bwheadless.exe and Chaoslauncher.exe from here!
import argparse
import os

parser = argparse.ArgumentParser(
    description='host a game with bwheadless')
parser.add_argument('-p', '--path', type=str,
                    default='/opt/StarCraft/',
                    help='StarCraft path')
parser.add_argument('-b', '--bot', type=str, default='Blueberry')
parser.add_argument('-r', '--race', type=str, default='Terran')
parser.add_argument('-m', '--map', type=str,
                    default='\(4\)FightingSpirit1.3.scx')
args = parser.parse_args()

execute = '''
wine bwheadless.exe -e {0}StarCraft.exe\
 -l {0}bwapi-data/BWAPI.dll --host\
 --name {1} --game {1} --race {2}\
 --map maps/TorchUp/{3}&\
 wine Chaoslauncher/Chaoslauncher.exe
'''.format(args.path, args.bot, args.race[:1], args.map)

os.chdir(args.path)
os.popen(execute).read()
Start the original example.py again and run the launcher.py script to see what happens.
example.py
$ python3 /usr/src/TorchCraft/examples/py/example.py -t 127.0.0.1
launcher.py script

$ python3 /usr/src/starcraft-sif/examples/launcher.py
If everything works as expected, you will see Chaoslauncher. The first time, it will ask for the location of StarCraft.exe; you will find it in /opt/StarCraft/. Confirm, and it will probably ask you to restart Chaoslauncher.exe. Kill the current session with Control-C in the terminal where you started launcher.py and run it again.
$ python3 /usr/src/starcraft-sif/examples/launcher.py
{% img /images/1.png %}
Now with Chaoslauncher ready, enable the BWAPI 4.2.0 [RELEASE] and W-MODE plugins and click Start. Hopefully that will launch the game in your new environment. Check Multiplayer -> Local PC and confirm that you see Blueberry waiting in the lobby.
{% img /images/2.png %}
{% img /images/3.png %}
{% img /images/4.png %}
TorchCraft is a library that enables machine learning research in the real-time strategy game StarCraft: Brood War, by making it easier to control the game from a machine learning framework, here PyTorch.
TorchCraft advocates having not only the pixels as input and keyboard/mouse for commands, but also a structured representation of the game state. This makes it easier to try a broad variety of models.
StarCraft: Brood War is a highly competitive game with professional players, which provides interesting datasets, human feedback, and a good benchmark of what is possible to achieve within the game.
BWAPI is a programming interface written in C++ which allows users to read data and send game commands to a StarCraft: Brood War game client. BWAPI contains all functionality necessary for the creation of a competitive bot. Examples of BWAPI functionality are:
- unit commands such as Attack, Move, Build
- unit information such as Position, HP, Energy
- unit type data such as MaxSpeed, Damage, MaxHP, Size
Programs written with BWAPI alone are usually compiled into a Windows dynamically linked library (DLL) which is injected into the game. BWAPI allows the user to perform any of the above functionality while the game is running, after each logic frame update within the game's software.
After each logic frame, BWAPI interrupts the StarCraft process and allows the user to read game data and issue commands, which are stored in a queue to be executed during the game's next logic frame.
TorchCraft connects Torch to BWAPI, the low-level interface to StarCraft: Brood War. TorchCraft's approach is to dynamically inject a piece of code in the game engine that will be a server. This server sends the state of the game to a client, and receives commands to send to the game.
The two modules are entirely asynchronous. TorchCraft's execution model injects a DLL that provides the game interface to the bots, and one that includes all the instructions to communicate with the external client, which the game interprets as a player (or bot AI).
The server starts at the beginning of the match and stops when the match ends.
TorchCraft is seen by the AI programmer as a library that provides: connect(), receive() to get the state, send(commands), and some helper functions about specifics of StarCraft's rules and state representation.
:::lua
-- main game engine loop:
-- it acts as the server for our TorchCraft bot client to `connect`, `receive` and `send(commands)`
while true do
    game.receive_player_actions()
    game.compute_dynamics()
    -- our injected code:
    torchcraft.send_state()
    torchcraft.receive_actions()
end
A simplified client/server model that runs in the game engine (server, on top) and the machine learning framework (client, on the bottom).
:::lua
-- illustrates a TorchCraft bot using the Lua client to `connect`, `receive` and `send(commands)`
-- it acts as the machine learning client where we can integrate Torch7 to return in-game actions
tc = require('torchcraft')
featurize, model = init()
tc:connect(port)
while not tc.state.game_ended do
    tc:receive()
    features = featurize(tc.state)
    actions = model:forward(features)
    tc:send(tc:tocommand(actions))
end
TorchCraft also provides an efficient way to store game frame data from past games so that existing replays can be re-examined.
TorchCraft is a library that enables machine learning research on real game data by interfacing PyTorch with StarCraft: Brood War.
example.py
What is TorchCraft's example.py actually doing?
:::python
import torchcraft as tc
import torchcraft.Constants as tcc
The get_closest function is very self-explanatory!
:::python
def get_closest(x, y, units):
    dist = float('inf')
    u = None
    for unit in units:
        d = (unit.x - x)**2 + (unit.y - y)**2
        if d < dist:
            dist = d
            u = unit
    return u
TorchCraft Python API client initial setup
:::python
client = tc.Client()
client.connect(hostname, port)

# Initial setup
client.send([
    [tcc.set_speed, 0],
    [tcc.set_gui, 1],
    [tcc.set_cmd_optim, 1],
])
Plays simple micro battles with an attack closest heuristic
{% img /images/5.png %}
:::python
while not state.game_ended:
    loop += 1
    state = client.recv()
    actions = []
    if state.game_ended:
        break
    else:
        units = state.units[0]
        enemy = state.units[1]
        for unit in units:
            target = get_closest(unit.x, unit.y, enemy)
            if target is not None:
                actions.append([
                    tcc.command_unit_protected,
                    unit.id,
                    tcc.unitcommandtypes.Attack_Unit,
                    target.id,
                ])
    print("Sending actions: {}".format(str(actions)))
    client.send(actions)
client.close()
The TorchCraft API is a layer of abstraction on top of BWAPI; we don't interact with BWAPI directly. This is the biggest difference compared with common C++ or Java bots.
Workers mine 8 minerals per trip. Minerals are the more important of the two physical resources, for all units produced from buildings or larvae require at least some minerals, while more basic units and structures do not require Vespene Gas. In addition, gas harvesting is possible only by building a gas-extracting structure on a geyser (Extractor for Zerg, Refinery for Terran and Assimilator for Protoss).
{% img /images/6.png %}
gathering.py example

$ python3 /usr/src/starcraft-sif/examples/gathering.py

launcher.py script

$ python3 /usr/src/starcraft-sif/examples/launcher.py
:::python
for unit in units:
    if tcc.isbuilding(unit.type)\
            and tc.Constants.unittypes._dict[unit.type]\
            == 'Terran_Command_Center':
        if not producing\
                and state.frame.resources[bot['id']].ore >= 50\
                and state.frame.resources[bot['id']].used_psi\
                != state.frame.resources[bot['id']].total_psi:
            # Target, x, y are all 0;
            # to train a unit you MUST input it into the "extra" field
            actions.append([
                tcc.command_unit_protected,
                unit.id,
                tcc.unitcommandtypes.Train,
                0, 0, 0,
                tc.Constants.unittypes.Terran_SCV,
            ])
            producing = True
If all went well, the workers should now start gathering from the mineral patches closest to them!
:::python
gather = tcc.command2order[tcc.unitcommandtypes.Gather]
build = tcc.command2order[tcc.unitcommandtypes.Build]
right_click_position = tcc.command2order[tcc.unitcommandtypes.Right_Click_Position]

for order in unit.orders:
    if order.type not in gather\
            and order.type not in build\
            and order.type not in right_click_position\
            and not building_refinery:
        target = get_closest(unit.x, unit.y, neutral)
        if target is not None:
            actions.append([
                tcc.command_unit_protected,
                unit.id,
                tcc.unitcommandtypes.Right_Click_Unit,
                target.id,
            ])
Don't expect an optimal spread of workers, but that is left as an exercise.
We Require More Vespene Gas
:::python
vespene = 'Resource_Vespene_Geyser'
if tcc.isworker(unit.type):
    workers.append(unit.id)
    if state.frame.resources[bot['id']].ore >= 100\
            and not building_refinery:
        for nu in neutral:
            if tcc.unittypes._dict[nu.type] == vespene:
                gas_harvesting.append(unit.id)
                actions.append([
                    tcc.command_unit,
                    unit.id,
                    tcc.unitcommandtypes.Build,
                    -1,
                    nu.x - 8, nu.y - 4,
                    tcc.unittypes.Terran_Refinery,
                ])
                building_refinery = True
:::python
if building_refinery and gas_harvesting[0] != unit.id\
        and len(gas_harvesting) == 1 and refinery:
    gas_harvesting.append(unit.id)
    actions.append([
        tcc.command_unit_protected,
        unit.id,
        tcc.unitcommandtypes.Right_Click_Unit,
        refinery,
    ])
elif refinery and gas_harvesting[0] != unit.id\
        and gas_harvesting[1] != unit.id\
        and len(gas_harvesting) == 2:
    gas_harvesting.append(unit.id)
    actions.append([
        tcc.command_unit_protected,
        unit.id,
        tcc.unitcommandtypes.Right_Click_Unit,
        refinery,
    ])
Here is a link to the complete gathering.py script if you are just curious. Next we will train different units to improve our Terran skills!
Now that we are training our workers, eventually we'll be running out of supply.
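As a hedged sketch (not part of the original gathering.py), the same Build command pattern used for the refinery above could queue a Terran_Supply_Depot once supply gets tight; res, worker_id, dx, dy and building_depot are placeholder names you would define in your own loop.

:::python
# Hedged sketch: queue a supply depot when we are close to the supply cap.
# res, worker_id, dx, dy and building_depot are hypothetical names.
res = state.frame.resources[bot['id']]
if res.ore >= 100 and res.total_psi - res.used_psi <= 4 and not building_depot:
    actions.append([
        tcc.command_unit,
        worker_id,                        # an idle SCV picked from workers
        tcc.unitcommandtypes.Build,
        -1,
        dx, dy,                           # a build tile near the Command Center
        tcc.unittypes.Terran_Supply_Depot,
    ])
    building_depot = True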
Workers are often weak in fights compared to other units.
Players gather resources to build units and defeat their opponents. To that end, they often have worker units (and extraction structures) that can gather resources needed to build workers, buildings, other units and research upgrades.
Buildings and research define technology trees (directed acyclic graphs), and each state of a tech tree allows for the production of different unit types and the training of new unit abilities.
An opening denotes the same thing as in Chess: an early game plan for which the player has to make choices.
That is the case in Chess because one can move only one piece at a time (each turn), and in StarCraft because, during the early game phase, one is economically limited and has to choose which tech paths to pursue.
Available resources constrain the technology advancement and the number of units one can produce. As producing buildings and units also take time, the arbitrage between investing in the economy, in technological advancements, and in units production is the crux of the strategy during the whole game.
In StarCraft an opening refers to the initial moves of a game. The term can refer to the initial moves by either side, but an opening by Zerg may also be known as a defense. There are dozens of different openings, and hundreds of variants. These vary widely in character from quiet positional play to wild tactical play.
In addition to referring to specific move sequences, the opening is the first phase of a game, the other phases being the middlegame and the endgame.
Opening moves that are considered standard are referred to as "book moves". Reference works often present move sequences in simple algebraic notation, opening trees or theory tables. When a game begins to deviate from known opening theory, the players are said to be "out of the book".
Professional players spend years studying openings, and continue doing so throughout their careers, as opening theory continues to evolve.
The study of openings can become unbalanced if it is to the exclusion of tactical training and middlegame and endgame strategy.
TorchCraft is a BWAPI module that sends StarCraft data out over a ZMQ connection. This lets you parse game data and interact with BWAPI.
{% img /images/python.jpg %}
Python is an interpreted, high-level, general-purpose programming language with strengths in automation, data analysis and machine learning.
This tutorial will walk you through installing Python 3 and setting up a programming environment on Debian 10.
Logged into your system as root, first update and upgrade to ensure your shipped version of Python 3 is up-to-date.
apt update
apt upgrade
Confirm upgrade when prompted to do so.
Check your version of Python 3 installed by typing:
python3 --version
You'll receive output similar to the following.
Python 3.7.3
To manage software packages for Python, install pip, the standard package installer for Python. You can use pip to install things from the official package index and other indexes.
apt install -y python3-pip
Python packages can be installed by typing:
pip3 install schematics
Here, schematics can refer to any Python package, such as tornado for backend development or NumPy for scientific computing.
There are a few more packages and development tools to install to ensure that we have a robust set-up for our StarCraft: Brood War Python TorchCraft bots programming environment:
apt -y install --install-recommends vim git apt-transport-https\ gnupg2 wget software-properties-common curl build-essential\ gfortran sudo pkg-config make cmake libyaml-0-2 libyaml-dev
dpkg --add-architecture i386
apt-add-repository contrib
apt -y install --install-recommends libgnutls30:i386 libldap-2.4-2:i386\ libgpg-error0:i386 libxml2:i386 libasound2-plugins:i386 libsdl2-2.0-0:i386\ libfreetype6:i386 libdbus-1-3:i386 libsqlite3-0:i386 libgl1-mesa-glx:i386\ libgl1-mesa-dri:i386 libsdl2-2.0-0 libstb0 libstb0:i386 mesa-vulkan-drivers
Get and install the repository key.
wget -nc https://dl.winehq.org/wine-builds/winehq.key && apt-key add winehq.key
apt-add-repository 'deb https://dl.winehq.org/wine-builds/debian/ buster main'
apt update && rm winehq.key
Starting with Wine 4.5, libfaudio0 is required by the staging packages provided by WineHQ but is not included in the WineHQ packages, which means you are responsible for making libfaudio0 available prior to installing Wine. Here is how to obtain libfaudio0 for Debian 10.
wget -nc https://download.opensuse.org/repositories/Emulators:/Wine:/Debian/Debian_10/amd64/libfaudio0_20.01-0~buster_amd64.deb
wget -nc https://download.opensuse.org/repositories/Emulators:/Wine:/Debian/Debian_10/i386/libfaudio0_20.01-0~buster_i386.deb
dpkg -i libfaudio0_20.01-0~buster_amd64.deb
dpkg -i libfaudio0_20.01-0~buster_i386.deb
apt -y install --install-recommends winehq-staging winetricks
Many programs work under Wine with absolutely no configuration; unfortunately, this isn't always the case.
non-root user

The following commands MUST be executed as a normal non-root user. If you already have an existing Wine setup, you can REMOVE it and start clean with rm -rf ~/.wine/
$ WINEARCH=win32 wineboot
$ winetricks -q vcrun2012
$ winetricks -q vcrun2013
$ winetricks -q vcrun2015
wine user

adduser --disabled-login --gecos "" --shell /forbid/login wine
usermod --append --groups audio wine
chown wine:wine -R /home/wine
sudo -u wine env HOME=/home/wine USER=wine USERNAME=wine LOGNAME=wine WINEARCH=win32 wineboot
sudo -u wine env HOME=/home/wine USER=wine USERNAME=wine LOGNAME=wine winetricks -q vcrun2012
sudo -u wine env HOME=/home/wine USER=wine USERNAME=wine LOGNAME=wine winetricks -q vcrun2013
sudo -u wine env HOME=/home/wine USER=wine USERNAME=wine LOGNAME=wine winetricks -q vcrun2015
At the moment StarCraft: Remastered is NOT yet supported, the only working version is 1.16.1.
git clone https://github.com/spacebeam/starcraft-sif.git /usr/src/starcraft-sif
In this tutorial we have StarCraft installed in /opt/StarCraft/
cat /usr/src/starcraft-sif/include/core/core* > /opt/StarCraft.tar.gz
tar -zxvf /opt/StarCraft.tar.gz -C /opt/
This will install Tornado, PyTorch, PyZMQ, NumPy and SciPy!
pip3 install -r /usr/src/starcraft-sif/examples/blueberry/requirements.txt
git clone https://github.com/TorchCraft/TorchCraft.git /usr/src/TorchCraft --recursive
pip3 install /usr/src/TorchCraft
Let's continue this tutorial with the ambitious goal of creating a small Terran bot with a single timing attack, but first, check that everything is installed correctly and that we can run the original examples.
python3 /usr/src/TorchCraft/examples/py/example.py -t 127.0.0.1
If you followed the steps, your output will hopefully read:
CTRL-C to stop
You are ready to start a new Brood War bot using Python on Linux. We hope this tutorial provides a good start; after this short success, learn how to run the game and gather resources to grow a Terran economy.
A collection of tools for managing UNIX services. Supervision means the system will restart the process immediately if it crashes for some reason!
At its core, runit is a process supervision suite.
The concept of process supervision comes from several observations:
One of them is the core design principle of UNIX: one service -> one daemon.
A process supervision system organizes the process hierarchy in a radically different way.
The design of runit
takes a very familiar approach by breaking down functionality into several small utilities responsible for a single task.
This approach allows the simple components to be composed in various ways to suit our needs.
The core runit utilities are runsvdir, runsv, chpst, svlogd, and sv.
Each service is associated with a service directory, and each service daemon runs as a child process of a supervising runsv process running in this directory.
The runsv
program provides a reliable interface for signalling the service daemon and controlling the service and supervisor.
Normally the sv program is used to send commands through this interface, and to query status information about the service.
The runsv program supervises the corresponding service daemon. By default a service is defined to be up; that means if the service daemon dies, it will be restarted. Of course you can tell it otherwise.
The promise is that this reliable interface for controlling daemons and supervisors makes pid-guessing programs such as pidof, killall and start-stop-daemon obsolete; because they guess, these are prone to failure by design.
It also obsoletes so-called pid files: there is no need for each and every service daemon to include code to daemonize, to write the new process id into a file, and to take care that the file is removed properly on shutdown, which might be very difficult in case of a crash!
runit
guarantees each service a clean process state, no matter if the service is activated for the first time or automatically at boot time, reactivated, or simply restarted. This means that the service always is started with the same environment, resource limits, open file descriptors, and controlling terminals.
The runsv
program provides a reliable logging facility for the service daemon. If configured, runsv
creates a pipe, starts and supervises an additional log service, redirects the log daemon's standard input to read from the pipe, and redirects the service daemon's standard output to write to the pipe.
Restarting the service does not require restarting the log service, and vice versa.
A good choice for a log daemon is runit's service logging daemon svlogd.
The service daemon and log daemon run with different process states and can run under different user id's.
Stage 2 handles the system's uptime tasks (via the runsvdir program) and runs for the whole of the system's uptime.
Stage 2 is portable across UNIX systems. runit
is well suited for autopilot nodes, servers and embedded systems, and also does its job well on everyday working environments.
Stage 2 is packaging friendly: all a software package that provides a service needs to do is include a service directory in the package; we provide a symbolic link mechanism to this directory in /etc/service/. The service will be started within five seconds, and automatically at boot time.
The package's install and update scripts can use the reliable control interface to stop, start, restart or send a signal to the service.
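As a small, hedged sketch of what such a script could do (the service name blueberry is just an example), the sv utility can be driven from Python with subprocess instead of guessing pids:

:::python
# Hedged sketch: drive runit's sv control interface from Python.
# "blueberry" is an example service linked under /etc/service/.
import subprocess

def sv(command, service):
    # Runs, for example: sv restart /etc/service/blueberry
    return subprocess.run(
        ["sv", command, "/etc/service/{}".format(service)],
        capture_output=True, text=True, check=False,
    )

print(sv("status", "blueberry").stdout)   # query status information
sv("restart", "blueberry")                # reliable restart, no pid guessing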
On package removal, the symbolic link simply is removed. The service will be taken down automatically.
runit's service supervision resolves dependencies for service daemons designed to be run by a supervisor process automatically.
The service daemon (or the corresponding run script) should run in the foreground rather than daemonizing itself, and write its log messages to standard output; the runsv program takes care that all logs for the service are written safely to disk.

We are now ready to move up to a complete connection information distribution mechanism containing a number of programmable modules along with the structure to program them.
The seed mechanism consists of a central knowledge store, a set of programmable modules, and connections between them.
The structure is set in a way that all of the connection information that is specific to recognition of zergs is stored in the central knowledge store.
Incoming lines from the programmable module allow information in each module to access the central knowledge, and output lines from the central knowledge store to the programmable modules allow connection activation information to be distributed back to the modules.
The two programmable modules are just copies of the same module. It is assumed that lower-level mechanisms, outside of the model itself, are responsible for aligning inputs with the two modules, so that when two units are presented, the zergling activates the appropriate programmable zergling units in one module, and the hydralisk activates the appropriate programmable hydralisk units in the other module.
In summary, the mechanism consists of a central knowledge store, a set of programmable modules, and the connections between them.
Connection information distribution allows us to instruct parallel processing structures from outside the network, making their behavior contingent on instructions originating elsewhere in the network.
This means that the way a network responds to a particular input can be made contingent on the state of some other network in the system, thereby greatly increasing the flexibility of parallel processing mechanisms.
Perhaps the most general way of stating the benefit of connection information distribution is to note that it is, in a way, analogous to the invention of the stored program!
Using connection information distribution, we can create local copies of relevant portions of the contents of a central knowledge store. These copies then serve as the basis for interactive processing among the conceptual entities they program local hardware units to represent.
With this mechanism, parallel distributed processing models can now be said to be able to create multiple instances of the same schema, bound appropriately to the correct local variables, though subject to just the same kind of crashes and errors human programmers seem to make.
We have not really done anything more than show how existing tools in the arsenal of parallel distributed processing mechanisms can be used to create local copies of networks.
Did the tournament happen? Did it complete?
It appears that our humans crashed going after the 4th expansion and encountered some hydras. Our machine supervisors have rescheduled the games for next week, from March 16 to March 20. If the humans crash again, we will continue iterating, fixing and scheduling the tournament games until we have at least a first completed round, following the structure of our event as stated by the published rules.
The results will be announced in the updates section of the tournament website, as the rules state. We have been battling against our own technical debt, implementation and irrelevant internal details, but we are committed to releasing and announcing the tournament results as soon as they become available, and to making this an annual competition, with the particularity that we can meet in beautiful Brussels once a year at FOSDEM. Come for the Brood War, stay for the technical conferences; the Belgian beers are great and all, but it is all about the chocolates.
About the announced new maps: they are relevant only for next year's competition onward, a hopefully more stable event where we are learning from the iterations and from our current mistakes, crashes, burns and bugs. The maps are a mix of old and new competitive maps for 2v2 and 1v1 with lots of surprises; we have no doubt they will be a challenge for map analyzers as well.
Good luck to all of you participating in this first Torch Up event! We hope to have the results available for you soon. If you did not register but are curious and want to join us next year, we want to have the maps available for you as soon as possible; check out the new challenges and see you next year!