The following computational tasks have been addressed in the competition:
- SE: Given an argumentation framework, determine some extension.
- EE: Given an argumentation framework, enumerate all extensions.
- DC: Given an argumentation framework and an argument, decide whether the argument is credulously accepted, i.e., contained in at least one extension.
- DS: Given an argumentation framework and an argument, decide whether the argument is skeptically accepted, i.e., contained in all extensions.
The above computational tasks are to be solved with respect to the following standard semantics:
- CO: complete semantics
- PR: preferred semantics
- GR: grounded semantics
- ST: stable semantics
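As an illustration of the four tasks, the following is a minimal brute-force sketch under stable semantics on a toy framework; the framework, argument names, and helper functions are invented for this example and are not part of the competition setup.

```python
# A minimal sketch (not competition code) of SE, EE, DC, and DS under stable
# semantics on a toy argumentation framework; all names are hypothetical.
from itertools import chain, combinations

ARGS = {"a", "b", "c"}
ATTACKS = {("a", "b"), ("b", "a"), ("b", "c")}  # toy attack relation

def is_stable(ext):
    """A set is stable iff it is conflict-free and attacks every outside argument."""
    conflict_free = not any((x, y) in ATTACKS for x in ext for y in ext)
    attacks_rest = all(any((x, y) in ATTACKS for x in ext) for y in ARGS - ext)
    return conflict_free and attacks_rest

def all_subsets(s):
    return chain.from_iterable(combinations(s, r) for r in range(len(s) + 1))

extensions = [set(e) for e in all_subsets(ARGS) if is_stable(set(e))]

print("EE-ST:", extensions)                             # enumerate all extensions
print("SE-ST:", extensions[0] if extensions else None)  # some extension
print("DC-ST(c):", any("c" in e for e in extensions))   # in at least one extension
print("DS-ST(c):", all("c" in e for e in extensions))   # in all extensions
```

Real competition solvers of course use dedicated algorithms rather than subset enumeration; the sketch only illustrates what each task asks for.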
For each combination of computational task and semantics, the final ranking of all solvers supporting this combination is given as follows:
SE-CO | SE-PR | SE-GR | SE-ST
EE-CO | EE-PR | EE-GR | EE-ST
DC-CO | DC-PR | DC-GR | DC-ST
DS-CO | DS-PR | DS-GR | DS-ST
Each ranking above has been determined by querying each solver with N instances of the corresponding computational task, with a timeout of 10 minutes per instance (N=192 for SE and EE, N=576 for DC and DS). The solvers are ranked with respect to the number of timeouts on these instances; ties are broken by also taking the actual runtimes on the instances into account.
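For illustration, this ranking rule amounts to a lexicographic sort, sketched here with made-up solver names and numbers rather than the actual competition data:

```python
# A minimal sketch of the ranking rule: order by number of timeouts,
# break ties by total runtime. Values below are illustrative only.
results = {
    # solver: (timeouts, total_runtime_seconds)
    "SolverA": (3, 1250.4),
    "SolverB": (3, 980.7),
    "SolverC": (1, 2100.0),
}

ranking = sorted(results, key=lambda s: results[s])  # tuples compare lexicographically
print(ranking)  # ['SolverC', 'SolverB', 'SolverA']
```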
Considering only those solvers that participated in all of the above tracks and computing their Borda count across all tracks, we obtain the following ranking:
The top three solvers of the above ranking receive the awards "First Place", "Second Place", and "Third Place" of ICCMA 2015, respectively. Furthermore, the solver Cegartix receives the award "Honorable Mention", as it achieved the best rankings in the three tracks it participated in (SE-PR, EE-PR, DS-PR).
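For background, the Borda-count aggregation can be sketched as follows, assuming the common scheme in which the solver ranked at position i (0-based) among n solvers in a track earns n-1-i points; the solver names and per-track orderings below are invented and are not the actual competition results.

```python
# A minimal sketch of Borda-count aggregation across tracks.
# Scheme assumed: rank i (0-based) among n solvers earns n - 1 - i points.
tracks = {
    "SE-CO": ["SolverA", "SolverB", "SolverC"],  # hypothetical track orderings
    "EE-CO": ["SolverB", "SolverC", "SolverA"],
}

scores = {}
for order in tracks.values():
    n = len(order)
    for i, solver in enumerate(order):  # i = 0 for the track winner
        scores[solver] = scores.get(solver, 0) + (n - 1 - i)

overall = sorted(scores, key=scores.get, reverse=True)
print(overall, scores)  # ['SolverB', 'SolverA', 'SolverC'] {'SolverA': 2, 'SolverB': 3, 'SolverC': 1}
```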
The raw data, a complete list of the executed queries, and the individual runtimes can be found in the following Excel sheet: xls (Update 1, 30.09.2015)
All benchmark graphs used in the competition can be downloaded here: zip
All benchmark graphs have been randomly generated based on three different graph models:
For each of the three models, graphs of three different size classes were generated, yielding a total of nine test sets. (Note that the test set corresponding to the largest graphs of the second model was removed from the competition, as the majority of the solvers could not solve any of its instances.)
The source code of the three graph generators is available in the code repository of probo. More precisely, the source code files are