ICCMA
International Competition on Computational Models of Argumentation

Rules

General

Withdrawal

A solver can be withdrawn from ICCMA 2025 only before the deadline for the submission of the final versions. After this deadline no further changes or withdrawals of the solvers are possible.

Number of solver submissions

The evaluation organizers reserve the right to restrict the number of solver submissions originating from the same author(s) based on computation resource limitations.

Disqualification

A solver will not be immediately disqualified if it produces a wrong solution during the evaluation. The organizers will aim to give all participants a fair chance to submit bug fixes to their solvers in a timely manner, based on feedback on incorrect results. However, the organizers reserve the right to limit the number of resubmissions of a buggy solver based on resource limitations.

If the solver's bug cannot be resolved, the submitters will still have the option of having the solver's correct results tabulated and reported in the table of results. However, the results will be marked to indicate that the solver also generated some incorrect results.

Correctness

A solver is correct on a task of a subtrack if it fulfills the following requirements for every instance executed during the evaluation. For every subtrack, the output of the solver must conform to the I/O requirements, see Solver Requirements. No additional output is allowed. If the solver exits without running out of time or memory, it must exit without any errors.
  • Main Track and No-Limits Track:
    • DC-σ: If the query argument is credulously accepted, the solver outputs YES along with a certificate, i.e., a σ-extension containing the query. Otherwise, the solver outputs NO.
    • DS-σ: If the query argument is not skeptically accepted, the solver outputs NO along with a certificate, i.e., a σ-extension not containing the query. Otherwise, the solver outputs YES.
    • SE-σ: The solver outputs a σ-extension.
  • Note: Solvers wishing to participate in the Main Track that implement an algorithm for DC-σ or DS-σ producing certificates which can be argued to be equivalent to σ-extensions (both in terms of the computational complexity of checking the certificates and in terms of practical solutions for checking them) may propose such alternative certificates. Any participant wishing to do so must contact the organizers no later than December 9, 2024 to discuss the details and possibilities.

  • Heuristics Track: same requirements as in Main Track, but without the need to produce a certificate.
  • Dynamic Track: every function specified in the API is correctly implemented. If the benchmark application issues a sequence of API calls which are not supported by the solver, the solver must enter state ERROR. If this state is encountered, the solver will be excluded from the benchmark application, but will not be disqualified.
  • NEW 2025 Fixed-SAT-Solver Comparison: A participating solver may only use a SAT solver (for delegation of computationally intensive tasks), and only via the IPASIR interface. No other search engines, libraries, or modules tailored towards solving NP-hard problems are allowed (such as answer set programming systems, quantified Boolean formula solvers, or solvers for integer linear programming). We encourage all participants using a SAT solver to take part in this comparison.
  • Assumption-Based Argumentation Track: same requirements as in Main Track, but without the need to produce a certificate.

Use of processor cores and portfolio solvers

In all tracks except for the No-Limits Track, solvers must use only a single core of the processor on the computing node it is run on, and solvers making use of multiple cores will be disqualified. This limitation does not concern the No-Limits Track, where parallel computations are allowed.

Portfolio solvers, which combine several existing argumentation solvers, are not allowed to participate in the competition. This limitation does not concern the No-Limits Track, where portfolios are allowed.

The organizers may move a solver from the Main Track to the No-Limits Track if the solver is deemed to violate these conditions.

Fixed-SAT-Solver Comparison

If a solver takes part in the Fixed-SAT-Solver comparison, the solver must interface with incremental SAT solvers via the IPASIR interface. The compilation process must be described in sufficient detail for the organizers to compile the solver with a specific SAT solver implementing the IPASIR interface. For instance, when using makefiles, a clear and dedicated target must be present for exchanging the IPASIR SAT solver. We provide a simple example; see also the IPASIR repository for further examples. The solver will then interact with a SAT solver through the IPASIR interface and will be compiled with a specific SAT solver for ICCMA 2025. Taking part in the Fixed-SAT-Solver comparison excludes the use of any other search engine similar to SAT solvers (such as answer set programming). Usage of a SAT solver is limited to the IPASIR interface (i.e., the solver may only use the interface to delegate computationally intensive computations). The organizers reserve the right to remove submissions to the Fixed-SAT-Solver comparison based on a case-by-case analysis.
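For concreteness, the following minimal C sketch shows how a solver might delegate a subproblem to its SAT back-end through the standard IPASIR entry points (ipasir_init, ipasir_add, ipasir_assume, ipasir_solve, ipasir_val, ipasir_release). The clauses are placeholders and do not correspond to any particular argumentation encoding; the header name "ipasir.h" is the one commonly used with the interface.

#include <stdio.h>
#include "ipasir.h"

int main(void) {
    void *sat = ipasir_init();
    /* clause (x1 OR x2); clauses are terminated with literal 0, as in DIMACS */
    ipasir_add(sat, 1); ipasir_add(sat, 2); ipasir_add(sat, 0);
    /* clause (NOT x1 OR x2) */
    ipasir_add(sat, -1); ipasir_add(sat, 2); ipasir_add(sat, 0);
    /* incremental query under the temporary assumption that x1 is true */
    ipasir_assume(sat, 1);
    if (ipasir_solve(sat) == 10)   /* 10 = satisfiable, 20 = unsatisfiable */
        printf("x2 is %s\n", ipasir_val(sat, 2) > 0 ? "true" : "false");
    ipasir_release(sat);
    return 0;
}

In a makefile-based build, the setting that links the solver against a given IPASIR library should be clearly isolated, so that the organizers can rebuild the solver against the fixed ICCMA 2025 SAT solver by changing a single variable or target.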

Open source requirement

For all tracks apart from No-Limits, solver source code originating from the authors (including modifications to third-party source code such as SAT solvers as part of a solver) must be submitted together with a corresponding solver binary. The source code package of each participating solver will be made available at the time when the results of ICCMA 2025 are presented. We plan to present the results at the KR 2025 conference. In case bug fixes are submitted during the evaluation (i.e., if requested by the evaluation organizers), the bug-fixed source package must also be submitted together with a bug-fixed solver binary. A solver is allowed to use external binaries or unmodified third-party libraries. However, if the library is non-standard (i.e., not STL, Boost, etc.) its identity and usage in the solver should be clearly specified in the solver description.

Solver description

Each entrant to ICCMA 2025 must include a short (at least 1, at most 2 pages) description of the system. This should include a list of all authors of the system and their present institutional affiliations. It should also provide details of any non-standard algorithmic techniques (e.g., heuristics, simplification/learning techniques, etc.) and data structures, as well as references to relevant literature (be they by the authors themselves or by others). The system description must be formatted in a style that will be announced at a later stage, and submitted as a PDF. The system descriptions will be posted on the ICCMA 2025 website. Furthermore, provided that the quality and number of the system descriptions are high enough, the organizers are considering publishing the collection of system descriptions as a report.

Participation of the organizers

The organizers are allowed to enter their own solvers into the evaluation. To avoid giving them an advantage over other participants, the details of the benchmark selection process will be made public after the submission deadline, and it will be ensured that no individual organizer nor any subset of the organizers can impose that a specific seed is used for the randomized process applied for benchmark selection. Furthermore, any such solver will be published online as open source at the submission deadline.

Solver requirements

Building. The submitted source code of the solver must include a README with detailed instructions on how to compile and invoke the solver. For the Dynamic Track, the README must include instructions on building the solver in order to be called via the generic Python API (e.g., how to build a dynamic library beforehand). All builds must be successful on a standard Linux distribution with x86-64 processors. Note that if the solver makes use of non-standard libraries or external binaries, they must be included within the submission.

Input format. In the Main Track, the No-Limits Track, the Dynamic Track, and the Heuristics Track, the following AF input file format is used. The arguments of an AF with n arguments are indexed consecutively with positive integers from 1 to n. The first line of the input file is the unique "p-line" of the form

p af <n>
where <n> is the number of arguments n, ending with the newline character. An attack a → b, where a has index i and b has index j, is expressed as the line
i j
ending with the newline character. In addition to the p-line and lines expressing attacks, lines starting with the '#' character are allowed. These lines are interpreted as comment lines unrelated to the syntax of the input AF, and end with the newline character. No other lines are allowed.

Example. The AF with five arguments a,b,c,d,e and attacks a → b, b → d, d → e, e → d, e → e is specified as follows, assuming the indexing a=1, b=2, c=3, d=4, e=5.

p af 5
# this is a comment
1 2
2 4
4 5
5 4
5 5
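To illustrate how this format can be consumed, here is a small C sketch (the function and variable names are illustrative only, not prescribed) that reads such a file, skips comment lines, and echoes the number of arguments and the attacks.

#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *in = fopen(argv[1], "r");
    if (!in) { perror("fopen"); return 1; }
    char line[256];
    int n = 0, i, j;
    while (fgets(line, sizeof line, in)) {
        if (line[0] == '#') continue;              /* comment line */
        if (line[0] == 'p') {                      /* p-line: "p af <n>" */
            sscanf(line, "p af %d", &n);
        } else if (sscanf(line, "%d %d", &i, &j) == 2) {
            printf("attack %d -> %d\n", i, j);     /* attack line: "i j" */
        }
    }
    fclose(in);
    printf("arguments: %d\n", n);
    return 0;
}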

In the Assumption-Based Argumentation track, the following ABA input file format is used. The atoms of an ABA framework with n atoms are indexed consecutively with positive integers from 1 to n. The first line of the input file is the unique "p-line" of the form

p aba <n>
where <n> is the number of atoms n, ending with the newline character. Assumptions, rules and contraries are specified on individual lines. A line starting with 'a', followed by an index between 1 and n, specifies that the atom with that index is an assumption. A line starting with 'c', followed by two space-separated indices between 1 and n, specifies that the atom corresponding to the second index is a contrary of the atom corresponding to the first index. A line starting with 'r', followed by a space-separated list of indices between 1 and n, specifies a rule whose head is the atom corresponding to the first index in the list and whose body consists of the atoms corresponding to the subsequent indices in the list. Each line starting with 'a', 'c' or 'r' ends with the newline character. In addition to the p-line and lines starting with 'a', 'c' or 'r', lines starting with the '#' character are allowed. These lines are interpreted as comment lines unrelated to the syntax of the input ABA framework, and end with the newline character. No other lines are allowed.

Example. The ABA framework with rules (p ← q,a), (q ←), (r ← b,c), assumptions a,b,c, and contraries a̅ = r, b̅ = s, c̅ = t is specified as follows, assuming the atom-indexing a=1, b=2, c=3, p=4, q=5, r=6, s=7, t=8.

p aba 8
# this is a comment
a 1
a 2
a 3
c 1 6
c 2 7
c 3 8
r 4 5 1
r 5
r 6 2 3
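The following C sketch (again with purely illustrative names) shows one way to read this format, distinguishing assumption, contrary, and rule lines.

#include <stdio.h>

int main(int argc, char **argv) {
    if (argc < 2) return 1;
    FILE *in = fopen(argv[1], "r");
    if (!in) { perror("fopen"); return 1; }
    char line[1024];
    int n = 0;
    while (fgets(line, sizeof line, in)) {
        if (line[0] == '#') continue;                  /* comment line */
        if (line[0] == 'p') {                          /* p-line: "p aba <n>" */
            sscanf(line, "p aba %d", &n);
        } else if (line[0] == 'a') {                   /* assumption: "a x" */
            int x; sscanf(line + 1, "%d", &x);
            printf("assumption %d\n", x);
        } else if (line[0] == 'c') {                   /* contrary: "c x y" */
            int x, y; sscanf(line + 1, "%d %d", &x, &y);
            printf("contrary of %d is %d\n", x, y);
        } else if (line[0] == 'r') {                   /* rule: head atom, then body atoms */
            char *p = line + 1;
            int atom, read, first = 1;
            while (sscanf(p, "%d%n", &atom, &read) == 1) {
                printf(first ? "rule head %d, body:" : " %d", atom);
                first = 0;
                p += read;
            }
            printf("\n");
        }
    }
    fclose(in);
    return 0;
}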

Output format. In all tracks except for the Dynamic Track, the solvers must print the result into standard output exactly in the format specified below.

  • DC-σ:
    • Main and No-Limits Tracks: If the query argument is credulously accepted, the solver outputs YES along with a σ-extension containing the query on the following line. For example, if the solver finds the σ-extension 1,2,5 containing the query argument 1, the solver output must be the following:
      YES
      w 1 2 5
      Otherwise, the solver output must be:
      NO
    • Heuristics and ABA Tracks: If the query argument is credulously accepted, the solver output must be:
      YES
      Otherwise, the solver output must be:
      NO
  • DS-σ:
    • Main and No-Limits Tracks: If the query argument is not skeptically accepted, the solver outputs NO along with a σ-extension not containing the query on the following line. For example, if the solver finds the σ-extension 1,4 not containing the query argument 2, the solver output must be the following:
      NO
      w 1 4
      Otherwise, the solver output must be:
      YES
    • Heuristics and ABA Tracks: If the query argument is skeptically accepted, the solver output must be:
      YES
      Otherwise, the solver output must be:
      NO
  • SE-σ: If a σ-extension exists, the solver outputs it. For example, if the solver finds the σ-extension 3,7, the solver output must be the following:
    w 3 7
    
    Otherwise, the solver output must be:
    NO
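As a sketch of the expected printing behavior, the helper below (print_answer is a hypothetical name, not a required interface) emits the verdict line and, when a witness extension is supplied, the "w"-line listing its arguments.

#include <stdio.h>

/* Prints the verdict ("YES" or "NO") and, when a witness extension is given,
   the line "w a1 a2 ..." on the following line. */
static void print_answer(const char *verdict, const int *extension, int len) {
    printf("%s\n", verdict);
    if (extension != NULL) {
        printf("w");
        for (int i = 0; i < len; ++i) printf(" %d", extension[i]);
        printf("\n");
    }
}

int main(void) {
    int ext[] = {1, 2, 5};           /* example σ-extension containing query argument 1 */
    print_answer("YES", ext, 3);     /* DC-σ, Main/No-Limits: "YES" then "w 1 2 5" */
    print_answer("YES", NULL, 0);    /* DC-σ, Heuristics/ABA Tracks: only "YES" */
    return 0;
}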

Note that in the Dynamic Track, where the solver is invoked via an API, the solver may not print anything to standard output. The benchmark application calling the solver will perform the necessary printing.

Usage. In all tracks except for the Dynamic Track, the solvers must be executable from a command line with the following behavior.

  • If no command-line arguments are given, prints author(s) and version information of the solver and exits. Example:
    user$ solver
    MySolver v1.0
    John Doe, john.doe@example.com
  • If the option --problems is included, prints the supported computational problems and exits. Example:
    user$ solver --problems
    [DC-CO,DS-CO,SE-CO,DS-PR,SE-PR]
  • Otherwise, the usage of the solver is as follows.
    solver -p <task> -f <file> [-a <query>]
    
    The arguments -p <task> and -f <file> are mandatory for all tasks. Here <task> is a computational problem supported by the solver, and <file> is a valid file as described above. The -a <query> argument is mandatory only for the DC and DS tasks. Here <query> is a positive integer less than or equal to <n> (i.e., the number of arguments in the AF tracks and the number of atoms in the ABA tracks) as specified in the p-line. Example (Main or No-Limits Track):
    user$ solver -p DC-CO -f instance_file -a 2
    YES
    w 2 3
    
    Example (Heuristics and ABA Tracks):
    user$ solver -p DC-CO -f instance_file -a 2
    YES
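The following C skeleton sketches this command-line behavior; the argument handling is deliberately simplified, the identification strings mirror the examples above, and the solver logic itself is stubbed out.

#include <stdio.h>
#include <string.h>

int main(int argc, char **argv) {
    if (argc == 1) {                                    /* no arguments: print identification */
        printf("MySolver v1.0\nJohn Doe, john.doe@example.com\n");
        return 0;
    }
    if (strcmp(argv[1], "--problems") == 0) {           /* list supported problems */
        printf("[DC-CO,DS-CO,SE-CO,DS-PR,SE-PR]\n");
        return 0;
    }
    const char *task = NULL, *file = NULL, *query = NULL;
    for (int i = 1; i + 1 < argc; i += 2) {             /* -p <task> -f <file> [-a <query>] */
        if      (strcmp(argv[i], "-p") == 0) task  = argv[i + 1];
        else if (strcmp(argv[i], "-f") == 0) file  = argv[i + 1];
        else if (strcmp(argv[i], "-a") == 0) query = argv[i + 1];
    }
    if (!task || !file) { fprintf(stderr, "missing -p or -f\n"); return 1; }
    /* ...parse 'file', solve 'task' (using 'query' for DC/DS tasks), and print the
       result to standard output in the format described above... */
    return 0;
}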
    
    

Note that in the Dynamic Track, the solver is invoked via IPAFAIR, an incremental API for AF solvers. The API contains the specifications for the input arguments and return values of all functions.

Benchmarks

  • The benchmark sets used in the evaluation will consist of both instances appearing in earlier ICCMA benchmark sets and new, previously unseen benchmarks.
  • The set of benchmarks used in the evaluation will be revealed only after the results from the evaluation are finalized.
  • The number of benchmarks used in the evaluation is not predetermined.
  • The benchmark selection procedure consists of two steps. 1. The organizers will divide the accessible benchmarks into buckets consisting of "similar" benchmark instances (e.g., originating from the same or similar problem domain). 2. Benchmarks will be drawn randomly one-by-one from each of the buckets, so that a balanced number of benchmarks over the buckets is obtained. A rough sketch of this procedure is given below.
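The sketch below illustrates the two-step procedure in C; the bucket names, instance counts, and the fixed seed are made up for illustration, and the actual selection uses a seed fixed jointly as described under "Participation of the organizers".

#include <stdio.h>
#include <stdlib.h>

#define BUCKETS 3

int main(void) {
    /* step 1: instances grouped into buckets by (hypothetical) problem domain */
    const char *bucket[BUCKETS][2] = {
        {"domainA_01.af", "domainA_02.af"},
        {"domainB_01.af", "domainB_02.af"},
        {"domainC_01.af", "domainC_02.af"},
    };
    int remaining[BUCKETS] = {2, 2, 2};
    int wanted = 4, picked = 0;

    srand(12345);                                          /* illustrative fixed seed */
    /* step 2: draw one instance per bucket per round, keeping the selection balanced */
    while (picked < wanted) {
        int progress = 0;
        for (int b = 0; b < BUCKETS && picked < wanted; ++b) {
            if (remaining[b] == 0) continue;
            int k = rand() % remaining[b];
            printf("selected: %s\n", bucket[b][k]);
            bucket[b][k] = bucket[b][--remaining[b]];      /* remove the drawn instance */
            ++picked;
            progress = 1;
        }
        if (!progress) break;                              /* all buckets exhausted */
    }
    return 0;
}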



