A Ruby programming game
In Zero X an agent controls an object in a simulated world. For Level 1 "Greenfields" this is a population object. The agent can only act through this object, which behaves like a marionette.
The world is a 2D matrix of fields. The number of fields can vary from tournament to tournament, but take 6x6 as an example. Each field has a resource object, and some also have a population object. Each population is connected to an agent. The population can move from one field to another. The world has no real borders: if a population leaves the matrix on the right side, it enters the matrix again on the left side, and vice versa. The same applies to the bottom and top borders.
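The wrap-around is plain modular arithmetic on the field coordinates. A minimal sketch (WIDTH, HEIGHT and wrap are illustrative names, not part of the game API):

```ruby
# Illustration only: how coordinates wrap in a borderless 6x6 world.
# WIDTH, HEIGHT and wrap are assumed names, not part of the game API.
WIDTH  = 6
HEIGHT = 6

def wrap(x, y)
  [x % WIDTH, y % HEIGHT]   # Ruby's % returns a non-negative result here
end

wrap(6, 0)    # leave on the right, re-enter on the left  => [0, 0]
wrap(-1, 3)   # leave on the left, re-enter on the right  => [5, 3]
wrap(2, -1)   # leave at the bottom, re-enter at the top  => [2, 5]
```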
On each field a resource is growing. If a population exists on the field, it feeds upon the resource. The relation between resource and population is similar to the predator-prey system described by the Lotka-Volterra equations. Every field is isolated from the fields surrounding it; there's no influence between fields. So on each field we have a self-regulating, cybernetic system.
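To make the analogy concrete, here is a rough discretised Lotka-Volterra step. The coefficients are invented for illustration; the real simulation parameters are not published:

```ruby
# Minimal discretised Lotka-Volterra step (illustrative coefficients only).
# The resource plays the prey, the population the predator.
def lotka_volterra_step(resource, population, dt = 0.01)
  a, b, c, d = 1.0, 0.001, 1.0, 0.0005   # assumed growth/interaction rates
  dr = (a * resource   - b * resource * population) * dt
  dp = (-c * population + d * resource * population) * dt
  [resource + dr, population + dp]
end

r, p = 5000.0, 200.0
100.times { r, p = lotka_volterra_step(r, p) }
# both quantities oscillate but stay positive -- the system regulates itself
```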
To see how this relation works, you can go to the in vitro page, where you can study the behaviour of such a field.
The world is simulated continuously, not round-based. The agents also act continuously: every time a think method ends, it starts over again. There's no need for a loop within the think method; it is already looped. For a game, a simulation is started for a specific time range. When the time elapses, the game ends.
At the beginning of each game, the resources are distributed randomly within a range, take between 4000 and 16000 as an example. Then the populations are set upon the resources, beginning with the smallest one. About one third of the fields will have a population; the other two thirds remain free. To keep the balance, a population on a richer resource will be smaller than a population on a poorer one. But all populations have one thing in common: they sit on the poorest fields, while the rich ones all start out free.
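The setup can be sketched as follows. The field count, the balance formula and the helper variables are assumptions made for illustration, based only on the description above:

```ruby
# Sketch of the game setup described above (not the real engine code).
FIELD_COUNT = 36   # e.g. a 6x6 world

# resources are drawn randomly within a range
resources = Array.new(FIELD_COUNT) { rand(4000..16000) }

# populations go on the poorest third of the fields; to keep the balance,
# richer of those fields get smaller populations (formula is a guess)
poorest     = resources.sort.first(FIELD_COUNT / 3)
populations = poorest.map { |r| (100_000.0 / r).round }
```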
An agent can take two types of actions: sensor actions to explore the environment and actions to interact with the environment. For Level 1 "Greenfields" there is one command of each type: look_around and move_to.
The look_around command acts like a radar. The agent, or in this case the population, takes a 360° look around itself.
view = look_around
The result is a view object with relative coordinates. Our agent is always shown in the center at position 0, 0.
-1, 1 | 0, 1 | 1, 1 |
-1, 0 | 0, 0 | 1, 0 |
-1, -1 | 0, -1 | 1, -1 |
field = view[-1, -1]
view.each_field {|field| p field}
view.each_field_with_index {|field, x, y| puts x, y}
view.select {|f| f.resource > 200}

Each field has the following properties:
field[:population]         # => the size of the population on the field
field.population           # => the size of the population on the field
field.has_population?      # => true if there's a population on the field
field.has_no_population?   # => true if there's no population on the field

A note on the methods above: if there's no population on a field, field.population will return nil.
view[0,0].population   # => your own population size
size                   # => shortcut
view[0,0].resource     # => resource size on your field
resource               # => shortcut

Both are based on the last view taken and therefore don't cost any additional action points.
Once your agent has scanned its immediate environment, it's time for action. The second command for Level 1 "Greenfields" is move_to. You can move horizontally, vertically and diagonally.
move_to field
move_to :x => -1, :y => 0
move_to -1, 0
move_to nil    # does not move, report or take any costs
move_to 0, 0   # does not move, report or take any costs
Remember: the world you're moving in has no borders.
If you move to a field where another population already is, a fight breaks out.
The combat damage suffered by the smaller population is inversely proportional to both populations. The combat damage for the bigger one is in turn inversely proportional to the smaller one. Confused? Let's look at some examples:
The commands look_around and move_to cost action points. When you execute one of them, the agent has to sleep for a certain time. A look_around costs 2 action points, a move_to 6. After the costs have been paid, the thread continues to run the think method.
But the two commands differ in how the sleep is ordered. For a look_around, the agent first sleeps and then looks around. This way the agent gets the most up-to-date view.
move_to is executed in the opposite order: first the agent moves (and fights may happen), then it sleeps. This way move_to can follow look_around as closely as possible, so the agent most likely knows where it is moving.
Be aware, though, that since all agents run in their own threads, you cannot really tell when execution switches between the individual agents. There's no guarantee that a supposedly free field is still free when you move.
Note: The former disengage block for the action move will be removed from both levels Greenfields and Clone War. A deprecation warning will be added to your reports.
Sometimes you want to write messages along the way, like debug information. You can read them later in the game reviews. Info messages are private; only the owner of an agent can see them.
info 'my message'
Keep an eye on the game clock. With time you get the time_units elapsed since the game started.
time # in time_units
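The clock is handy for phase-based strategies. A minimal sketch; in a real agent, `time` is provided by the framework, so the stub below and the 800 time_unit threshold are purely illustrative assumptions:

```ruby
# Sketch: switch behaviour in the late game based on the clock.
# `time` is stubbed here; the framework provides the real one.
def time
  900   # stub: pretend 900 time_units have elapsed
end

def strategy
  time > 800 ? :defend : :expand   # invented threshold for illustration
end

strategy  # => :defend with the stubbed clock
```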
Here's a very simple example class of user "demo" called "The Good":
module Demo
  class TheGood < Tournament::Agent
    # moves to a random free field
    def think
      # collect data
      view = look_around
      # analyse them
      free_fields = view.select {|field| field.has_no_population?}
      # take a decision
      move_to free_fields.shuffle.first
    end
  end
end
With callback methods you can hook into the life cycle of an agent.
after_start is called immediately after the thread is launched and just before the think loop. It's a good place to initialize the agent and prepare it for the game.
module Demo
  class Example < Tournament::Agent
    def after_start
      # called once
    end

    def think
      # called periodically
    end
  end
end
During a game, some events are reported directly to the agents. They can read them during their think method with next_event, or iterate over them with each_event. An event is removed as soon as it has been read.
event = next_event
event.class     # => AttackedEvent
event.attacker  # => '003' agent code name
event.damage    # => 132
Receiving or reading events doesn't cost any action points.
If an agent is attacked by another agent, the victim receives an attacked event. The event includes the code name of the attacker and the damage suffered.
Here's an example with an iterator:

each_event do |event|
  if event.instance_of? AttackedEvent
    puts "attacked by #{event.attacker} and suffered #{event.damage}"
  end
end
event.victim # => e.g. 008
You can download an SDK to test your agents. The SDK uses RSpec as its test framework.
What should and can be tested? We want to test the behaviour of the agent in a certain situation; we don't need to test the simulation itself.
The agent's input is a view, which it can analyse to become aware of the situation. You can create such views as fixtures: first define a couple of fields, then compose a view with them as a scenario.
field :empty, :resource => 5000
field :self,  :resource => 5000, :population => 200, :agent => 'myself'

scenario :empty, :empty, :empty, :empty,
                 :empty, :self,  :empty,
                 :empty, :empty, :empty

Don't forget to set your own agent in the center of each scenario.
describe Demo::TheGood do
  before :all do
    load_field_fixtures 'greenfields'
  end

  before :each do
    @agent = create_agent Demo::TheGood
  end

  it "should move to an empty field" do
    use_scenario :empty
    @agent.think
    @agent.should have_moved_to(-1, -1)
  end
end

The agents of user "demo" are included in the SDK as examples.
Tournaments will be announced before they take place. Until then you can code your agent classes and upload them. All uploaded agents participate automatically.
Each tournament consists of several games, and every agent takes part in a specific number of them. Say we have 20 agents, each of them participates in 3 games, and every game can hold 10 agents. Then we would have 6 games in this example tournament.
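The game count follows from simple slot arithmetic, using the numbers from the example above:

```ruby
# Number of games needed so every agent plays its quota:
# total agent-slots divided by slots per game.
agents          = 20
games_per_agent = 3
agents_per_game = 10

games = (agents * games_per_agent) / agents_per_game  # => 6
```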
In every game an agent can get experience points:
First 5 points if the population survived.
Then some extra points:
+ 1 point for the 3rd biggest population.
+ 3 points for the 2nd biggest population.
+ 5 points for the biggest population.
The experience points for a tournament are the average points over all games in that tournament. The total ranking is the average points over the last 10 tournaments.
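A quick sketch of the scoring maths with invented game results:

```ruby
# Tournament score = average points over that tournament's games.
# Example: survived all 3 games, 2nd biggest once, biggest once.
game_points      = [5, 5 + 3, 5 + 5]                  # => [5, 8, 10]
tournament_score = game_points.sum / game_points.size.to_f

tournament_score.round(2)  # => 7.67
```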
As you can see, the focus is on the best survival method, not necessarily on destroying as many other agents as possible.
As already mentioned, level "Greenfields" is just the first level. For each tournament you get the points achieved by your best agent in that tournament. To advance to the next level, you'll have to participate in at least 5 tournaments and collect 40 points.
After each tournament you can review every game of any agent, including agents that don't belong to you.
So go and see how the winner's agent behaved and improve your code.
May the games begin!