ICCMA
International Competition on Computational Models of Argumentation

In cooperation with

The 2015 International Workshop on Theory and Applications of Formal Argumentation (TAFA'15)



Results

The following computational tasks have been addressed in the competition:

  1. Given an abstract argumentation framework, determine some extension (SE)
  2. Given an abstract argumentation framework, determine all extensions (EE)
  3. Given an abstract argumentation framework and some argument, decide whether the given argument is credulously inferred (DC)
  4. Given an abstract argumentation framework and some argument, decide whether the given argument is skeptically inferred (DS)
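For illustration, the four tasks can be related by the following minimal Python sketch; it assumes the extensions of some framework have already been computed, and the extensions and argument names are invented example data, not taken from the competition.

# Minimal sketch with hypothetical data: once the extensions of a framework
# are known, the four reasoning tasks reduce to simple operations on them.
extensions = [{"a", "c"}, {"a", "d"}]       # invented example extensions

def some_extension():                       # SE: report any one extension
    return extensions[0]

def all_extensions():                       # EE: report all extensions
    return extensions

def credulously_accepted(arg):              # DC: contained in at least one extension
    return any(arg in e for e in extensions)

def skeptically_accepted(arg):              # DS: contained in every extension
    return all(arg in e for e in extensions)

print(sorted(some_extension()))                              # ['a', 'c']
print(credulously_accepted("c"), skeptically_accepted("c"))  # True False
print(skeptically_accepted("a"))                             # True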

The above computational tasks are to be solved with respect to the following standard semantics:

  1. Complete Semantics (CO)
  2. Preferred Semantics (PR)
  3. Grounded Semantics (GR)
  4. Stable Semantics (ST)
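As a concrete illustration of one task/semantics pair (SE-GR), the following Python sketch computes the grounded extension of a tiny, invented framework as the least fixed point of the characteristic function; it is an illustration only, not one of the competition solvers.

# Illustration only (not a competition solver): compute the grounded extension
# of a small, invented framework. F(S) = set of arguments defended by S; the
# grounded extension is the least fixed point of F.
ARGS = {"a", "b", "c", "d"}
ATTACKS = {("a", "b"), ("b", "c"), ("c", "d")}   # a attacks b, b attacks c, c attacks d

def defended_by(s, x):
    """x is defended by s if every attacker of x is attacked by some member of s."""
    attackers = {y for (y, z) in ATTACKS if z == x}
    return all(any((z, y) in ATTACKS for z in s) for y in attackers)

def grounded_extension():
    s = set()
    while True:
        nxt = {x for x in ARGS if defended_by(s, x)}
        if nxt == s:
            return s
        s = nxt

print(sorted(grounded_extension()))   # ['a', 'c']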

For each combination of computational task and semantics, the final ranking of all solvers supporting that combination is given as follows:

SE-CO
  1. CoQuiAAS
  2. ASGL
  3. ASPARTIX-D
  4. ConArg
  5. ArgSemSAT
  6. ArgTools
  7. LabSATSolver
  8. DIAMOND
  9. Carneades
  10. Dungell
SE-PR
  1. Cegartix
  2. ArgSemSAT
  3. LabSATSolver
  4. ASPARTIX-V
  5. CoQuiAAS
  6. ASGL
  7. ConArg
  8. ASPARTIX-D
  9. ArgTools
  10. GRIS
  11. DIAMOND
  12. Dungell
    Carneades
SE-GR
  1. CoQuiAAS
  2. Carneades
  3. LabSATSolver
  4. ArgSemSAT
  5. ArgTools
  6. GRIS
  7. ASGL
  8. ASPARTIX-D
  9. ConArg
  10. Dungell
  11. DIAMOND
SE-ST
  1. ASPARTIX-D
  2. ArgSemSAT
  3. LabSATSolver
  4. CoQuiAAS
  5. ConArg
  6. ASGL
  7. DIAMOND
  8. ArgTools
  9. ProGraph
  10. Dungell
    Carneades
    ASSA
EE-CO
  1. ASPARTIX-D
  2. ArgSemSAT
  3. CoQuiAAS
  4. LabSATSolver
  5. ASGL
  6. ConArg
  7. DIAMOND
  8. ArgTools
  9. Dungell
    Carneades
EE-PR
  1. Cegartix
  2. ArgSemSAT
  3. CoQuiAAS
  4. ASPARTIX-V
  5. LabSATSolver
  6. prefMaxSAT
  7. ASGL
  8. ASPARTIX-D
  9. ConArg
  10. ArgTools
  11. ZJU-ARG
  12. GRIS
  13. DIAMOND
  14. Dungell
    Carneades
EE-GR
  1. CoQuiAAS
  2. Carneades
  3. LamatzSolver
  4. LabSATSolver
  5. ArgSemSAT
  6. ZJU-ARG
  7. ArgTools
  8. GRIS
  9. ASGL
  10. ASPARTIX-D
  11. ConArg
  12. Dungell
  13. DIAMOND
EE-ST
  1. ASPARTIX-D
  2. ArgSemSAT
  3. CoQuiAAS
  4. ASGL
  5. ConArg
  6. ArgTools
  7. LabSATSolver
  8. DIAMOND
  9. Dungell
    Carneades
    ASSA
DC-CO
  1. ArgSemSAT
  2. ASPARTIX-D
  3. LabSATSolver
  4. CoQuiAAS
  5. ASGL
  6. ConArg
  7. DIAMOND
  8. ArgTools
  9. Carneades
DC-PR
  1. ArgSemSAT
  2. LabSATSolver
  3. CoQuiAAS
  4. ASGL
  5. DIAMOND
  6. GRIS
  7. ArgTools
  8. ASPARTIX-D
  9. Carneades
DC-GR
  1. CoQuiAAS
  2. Carneades
  3. LabSATSolver
  4. ASGL
  5. ArgSemSAT
  6. ArgTools
  7. GRIS
  8. DIAMOND
  9. ASPARTIX-D
DC-ST
  1. ASPARTIX-D
  2. ArgSemSAT
  3. LabSATSolver
  4. CoQuiAAS
  5. ConArg
  6. ASGL
  7. DIAMOND
  8. ASSA
  9. ArgTools
  10. ProGraph
  11. Carneades
DS-CO
  1. ASGL
  2. LabSATSolver
  3. ConArg
  4. ArgSemSAT
  5. ASPARTIX-D
  6. ArgTools
  7. CoQuiAAS
  8. DIAMOND
  9. Carneades
DS-PR
  1. ArgSemSAT
  2. Cegartix
  3. LabSATSolver
  4. ASPARTIX-V
  5. CoQuiAAS
  6. DIAMOND
  7. GRIS
  8. ASGL
  9. ArgTools
  10. ASPARTIX-D
  11. Carneades
DS-GR
  1. CoQuiAAS
  2. Carneades
  3. ASGL
  4. LabSATSolver
  5. ArgSemSAT
  6. ArgTools
  7. GRIS
  8. ASPARTIX-D
  9. DIAMOND
DS-ST
  1. ASPARTIX-D
  2. LabSATSolver
  3. CoQuiAAS
  4. ConArg
  5. ASGL
  6. DIAMOND
  7. ArgSemSAT
  8. ASSA
  9. ArgTools
  10. Carneades

Each ranking above was determined by querying each solver with N instances of the corresponding computational task, with a timeout of 10 minutes per instance (N=192 for SE and EE, N=576 for DC and DS). The solvers are ranked by the number of timeouts on these instances; ties are broken by also taking the actual runtime on the instances into account.
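This ranking rule can be sketched in Python as follows; the solver names and numbers are invented, and the tie-break is assumed here to use the total runtime over the instances, which may differ in detail from the organizers' evaluation script.

# Hedged sketch of the ranking rule described above, with invented data:
# order solvers by number of timeouts, breaking ties by (total) runtime.
results = {
    # solver name: (timeouts, runtime in seconds over all instances)
    "SolverA": (3, 1520.4),
    "SolverB": (3, 1210.7),
    "SolverC": (7,  980.2),
}

ranking = sorted(results, key=lambda solver: results[solver])
for rank, solver in enumerate(ranking, start=1):
    timeouts, runtime = results[solver]
    print(f"{rank}. {solver}: {timeouts} timeouts, {runtime:.1f}s")
# SolverB is ranked above SolverA: equal timeouts, but lower runtime.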

Considering only those solvers that participated in all of the above tracks and computing their Borda count across all tracks, we obtain the following overall ranking (a sketch of the Borda aggregation follows the list):

  1. CoQuiAAS
  2. ArgSemSAT
  3. LabSATSolver
  4. ASGL
  5. ASPARTIX-D
  6. ArgTools
  7. Carneades
  8. DIAMOND
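The aggregation can be sketched as follows, assuming the common Borda convention that rank i out of n solvers in a track earns n - i points; the track names and per-track rankings below are invented fragments and do not reproduce the results above.

# Hedged sketch of a Borda-count aggregation over tracks (invented data).
from collections import defaultdict

tracks = {
    "Track-1": ["SolverA", "SolverB", "SolverC"],
    "Track-2": ["SolverB", "SolverC", "SolverA"],
}

scores = defaultdict(int)
for ranking in tracks.values():
    n = len(ranking)
    for position, solver in enumerate(ranking, start=1):
        scores[solver] += n - position   # rank i out of n earns n - i points

overall = sorted(scores, key=scores.get, reverse=True)
print([(solver, scores[solver]) for solver in overall])
# [('SolverB', 3), ('SolverA', 2), ('SolverC', 1)]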

The top three solvers of the above ranking receive the awards of "First Place", "Second Place", and "Third Place" of ICCMA 2015, respectively. Furthermore, the solver Cegartix receives an additional "Honorable Mention" award, as it achieved the best rankings in the three tracks it participated in (SE-PR, EE-PR, DS-PR).

The raw data, a complete list of the executed queries, and the individual runtimes can be found in the following Excel sheet: xls (Update 1, 30.09.2015)

All benchmark graphs used in the competition can be downloaded here: zip

All benchmark graphs have been randomly generated based on three different graph models:

  • the first group consists of graphs with a very large grounded extension and a large number of nodes overall
  • the second group consists of graphs which feature many complete/preferred/stable extensions (this was the hardest group for most solvers)
  • the third group consists of graphs which feature a rich structure of strongly connected components

For each of the three models, graphs of three different size classes were generated, yielding a total of 9 test sets (note that the test set corresponding to the largest graphs of the second group was removed from the competition, as the majority of the solvers could not solve any of its instances).
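Purely for illustration, a toy generator in the spirit of the third group could look as follows; this is a hypothetical sketch and not the probo generator that actually produced the benchmarks. Arguments are grouped into clusters, with dense attacks inside a cluster and sparse attacks between clusters.

# Hypothetical toy generator (NOT the actual competition generator): clustered
# attacks tend to yield many non-trivial strongly connected components.
import random

def toy_clustered_af(num_clusters=5, cluster_size=4, p_in=0.5, p_out=0.05, seed=0):
    rng = random.Random(seed)
    args = [f"a{c}_{i}" for c in range(num_clusters) for i in range(cluster_size)]
    attacks = []
    for x in args:
        for y in args:
            if x == y:
                continue
            same_cluster = x.split("_")[0] == y.split("_")[0]
            if rng.random() < (p_in if same_cluster else p_out):
                attacks.append((x, y))
    return args, attacks

args, attacks = toy_clustered_af()
print(len(args), "arguments,", len(attacks), "attacks")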

The source code of the three graph generators is available in the code repository of probo. More precisely, the source code files are:




Last updated 18.07.2018, Matthias Thimm