ICCMA
International Competition on Computational Models of Argumentation

In cooperation with

The 2015 International Workshop on Theory and Applications of Formal Argument (TAFA'15)



Rules

The source code of all submitted solvers must be attached to submissions and must be freely available under some open source software license. In addition to the software itself (binary and source code), it is mandatory to provide a system description (2 pages).

Submitted solvers must provide a command line interface following the requirements of probo (see the interface description; an update will be provided soon), and the provided binaries must run under Linux. A reference implementation of the solver interface in Java is given by the TweetySolver, which is based on Tweety and is also used for checking solutions from participating solvers in the competition itself.

Solvers are evaluated with respect to their performance using the IPC score, which is also used in the planning competition. More precisely, for each computational problem and each semantics (called a track in the following) we consider a set of argument graphs (the exact nature of each graph, i.e. whether they will be artificially generated or taken from real-world graphs, will be determined at a later point in time). For each graph, each solver has a fixed amount of time (currently estimated to be 10 minutes) to solve the given computational problem. If it correctly solves the problem within the given amount of time, it receives 1 point; if it cannot solve the problem within the given amount of time, it receives zero points. (If a solver classifies at least one instance incorrectly, it is disqualified for the track. ** Note: this policy has not been implemented in the actual competition; an incorrect result also yielded zero points. **) For each track, a ranking of the solvers is determined by their number of correctly and timely solved instances. Additionally, we will determine a global ranking of the solvers across all tracks by taking the Borda count of all solvers in all tracks.
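The scoring scheme described above can be sketched in a few lines. All solver names, track names, and instance counts below are hypothetical, and the tie-handling in the Borda count (one point per competitor with a strictly lower score) is one common convention, not necessarily the one used by the organizers:

```python
# Hypothetical per-track results: the number of instances each solver
# solved correctly within the time limit.
track_results = {
    "DC-CO": {"solverA": 42, "solverB": 37, "solverC": 42},
    "SE-GR": {"solverA": 50, "solverB": 48, "solverC": 30},
}

def borda_points(scores):
    # One Borda point per competitor with a strictly lower score in the
    # track; this is one common way of handling ties.
    return {s: sum(1 for t in scores.values() if t < v)
            for s, v in scores.items()}

# Per-track ranking: solvers ordered by correctly and timely solved instances.
per_track = {t: sorted(s, key=s.get, reverse=True)
             for t, s in track_results.items()}

# Global ranking: sum each solver's Borda points over all tracks.
global_scores = {}
for scores in track_results.values():
    for solver, pts in borda_points(scores).items():
        global_scores[solver] = global_scores.get(solver, 0) + pts
global_ranking = sorted(global_scores, key=global_scores.get, reverse=True)
```

With these made-up numbers, solverA wins both tracks and therefore also tops the global Borda ranking.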

In order to make your solver ready for the competition, please take the following hints into account and provide necessary explanation in your submission mail (see Participation).

Developing your solver

Please have a close look at the following two documents which describe the interface requirements of your solver:

Compiling your solver
  • Each solver directory should contain a shell script, named build (note the letter case), which completely builds your solver. Be sure that your build script is executable. You may assume that build is run from the directory in which it resides. In the common case that you want to use the make tool to build your solver, your build script should look like this: make
  • Your build script will be run with limited user rights, but still please make sure that it doesn't contain any operations that can wreak havoc on the computer. In particular, it must not write to any directories outside the directory it is run in (creating and using subdirectories is fine), and it must not use the network.
  • If there are reasons to expect that your solver won't build on a standard Linux machine (e.g. because it uses unusual libraries), please explain any potential issues in your submission email.
Running your solver
  • Each solver directory must contain an executable (either a script or a binary file) that supports the probo command line interface.
  • You can test your solver with the following sets of example graphs in TGF and APX format; while these example graphs are, in terms of structure, representative of the graphs that will be used in the actual competition, the number of nodes may change.
  • You may assume that the solver is run from the directory in which it resides. We will run the solver in an environment that limits memory usage to 4 GB (this value may be changed depending on requirements) and overall runtime to 10 minutes, so you don't need to enforce such limits manually.
  • Your solver may write whatever it wants to the stderr stream. Diagnostic output to this stream will be logged, and we encourage you to produce any output that may help in troubleshooting the solver. The stdout stream is used for providing solutions to the tasks.
  • Your solver will be run with limited user rights, but still please make sure that it doesn't contain any operations that can wreak havoc on the computer. In particular, it must not write to any directories outside the directory it is run in (creating and using subdirectories is fine), and it must not use the network.
  • If your solver generates any temporary files, please check for and remove them at the start of execution; we will not remove them automatically.
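As a concrete illustration of the probo-style interface referenced above, here is a minimal sketch of a solver front end. The option names (-p, -f, -fo, -a, --problems, --formats) follow our reading of the interface description, while the solver name, the supported problem list, and the hard-coded answer are hypothetical placeholders; consult the official probo documents for the authoritative specification.

```python
import sys

SUPPORTED_FORMATS = ["tgf", "apx"]       # graph file formats we can read
SUPPORTED_PROBLEMS = ["DC-CO", "DS-PR"]  # hypothetical subset of tasks

def parse_args(argv):
    """Collect '-p problem -f file -fo format [-a argument]' style options."""
    opts, i = {}, 0
    while i < len(argv):
        if argv[i].startswith("-") and i + 1 < len(argv):
            opts[argv[i]] = argv[i + 1]
            i += 2
        else:
            opts[argv[i]] = None
            i += 1
    return opts

def main(argv):
    if not argv:                          # no arguments: identify yourself
        return "mysolver v1.0\nJane Doe"
    if "--formats" in argv:
        return "[" + ",".join(SUPPORTED_FORMATS) + "]"
    if "--problems" in argv:
        return "[" + ",".join(SUPPORTED_PROBLEMS) + "]"
    opts = parse_args(argv)
    problem, path, fmt = opts.get("-p"), opts.get("-f"), opts.get("-fo")
    if problem not in SUPPORTED_PROBLEMS or fmt not in SUPPORTED_FORMATS:
        return "ERROR: unsupported problem or format"
    # Stub: a real solver would read the graph from `path`, solve `problem`,
    # and write the answer (e.g. YES/NO for decision problems) to stdout.
    return "NO"

if __name__ == "__main__":
    print(main(sys.argv[1:]))
```

Note that the answer itself goes to stdout; any diagnostics belong on stderr, as explained above.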
Special solver aspects
  • Randomized algorithms: If your solver uses randomised algorithms, please initialise the random seed to a fixed constant such as 2015. If there are any reasons to expect that your solver won't generate reproducible results, please tell us clearly in the submission email.
  • Concurrency: it is not expected that any solver spawns concurrent threads or processes. Nevertheless, if yours does so, e.g., running sequentially, first to preprocess/encode the AF and then to start the search process itself, please let us know.
  • Various executable files: If your solver consists of a number of executable files (most likely because it uses an ensemble of approaches which are invoked upon some criteria), make sure that only one of them is executed at a time.
  • Disk usage: If your solver uses external (disk-based) search algorithms, please tell us, as we will need to run it in an isolated environment. In that case, time spent doing I/O will of course count towards the time limit. Please also tell us the maximum amount of hard disk space the solver should be expected to require during execution. The solver is only allowed to write to the directory in which it is invoked, or subdirectories thereof (for example, don't write to /tmp).
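The randomised-algorithms hint above can be illustrated in a few lines: re-seeding with the same fixed constant makes a solver's random choices reproducible across runs, shown here with Python's random module (the constant 2015 follows the suggestion in the hint).

```python
import random

# Fix the seed to a constant so every run makes the same random choices.
random.seed(2015)
first_run = [random.randint(0, 99) for _ in range(5)]

# Re-seeding with the same constant replays the exact same sequence.
random.seed(2015)
second_run = [random.randint(0, 99) for _ in range(5)]

assert first_run == second_run
```

The same idea applies in any language: seed once, with a constant, before the first random decision the solver makes.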
Bug fix policy

In some cases, we will offer the opportunity to fix bugs that arise during the evaluation period, but any changes after the submission deadline will be strictly limited to bug fixes. We will use a diff tool to check that patches contain neither new features nor parameter tuning, and will reject patches that do not look like minimal changes to fix bugs. It is your responsibility to provide patches that are easy to verify with a diff tool. We reserve the right to reject changes for which the bugfixes-only rule is unnecessarily hard to check (e.g. because you reformatted the whole code).




Last updated 11.08.2015, Matthias Thimm