======================================================================================
Fourth International Competition on Computational Models of Argumentation (ICCMA'21)
Call for Benchmarks
http://argumentationcompetition.org/2021/index.html
======================================================================================

Argumentation is a major topic in the study of Artificial Intelligence. In particular, solving certain reasoning tasks on Dung's abstract argumentation frameworks or on (logic-based) structured argumentation frameworks is central to many advanced argumentation systems. Since many of these problems are computationally intractable, efficient algorithms and solvers are required.

The Fourth International Competition on Computational Models of Argumentation (ICCMA'21) will be held in the first half of 2021; submitted solvers will compete on a selected collection of benchmark instances. The main goals of the competition are to provide a forum for the empirical comparison of solvers, to highlight challenges to the community, to propose new directions for research, and to provide a core of common benchmark instances and a representation formalism that can aid in the comparison and evaluation of solvers. The ICCMA'21 reasoning tasks are detailed in the Call for Solvers (see http://argumentationcompetition.org/2021/calls.html).

Challenging and representative benchmarks are essential for meaningful comparisons of solvers. We invite submissions of both real-world benchmarks and benchmark generators, to ensure a diverse benchmark set for the competition. For (randomly) generated benchmarks, we invite the submission of the generator rather than the instances. Submissions of real-world benchmarks are welcome whether they come directly from an application or have been obtained via a translation from another (argumentation) formalism.

The 2021 edition of ICCMA also includes a track dedicated to solvers working on dynamic frameworks. For this track, we will also collect generators and benchmarks where each instance consists of an abstract framework (a tgf- or apx-formatted file) together with a separate file listing at least 15 modifications to the initial instance. Each modification is the addition or removal of a single attack, e.g., "+att(a,b)." or "-att(d,e)." (apx), or "+1 3" or "-4 2" (tgf). These modification files have the extensions .apxm and .tgfm, respectively. This year, additions and deletions of arguments are also allowed as modifications. See http://argumentationcompetition.org/2021/SolverRequirements.pdf for more details on this matter; an illustrative generator sketch is given below.

Finally, for the newly introduced structured argumentation track, we invite generators or benchmarks for flat Assumption-Based Argumentation (ABA) frameworks. The expected format, inspired by the apx format for abstract AFs, is described in the Solver Requirements.

A selection of the submitted benchmark instances will be used to evaluate the solvers at ICCMA'21 and will be made available to the community after the event. The organizers reserve the right to make this selection, which will also rely on a pre-selection step whose goal is to retain only "meaningful" benchmark instances. We refer interested benchmark authors to the Solver Requirements, where more details and examples can be found (http://argumentationcompetition.org/2021/SolverRequirements.pdf).
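For illustration only, the following sketch shows what a minimal benchmark generator for the dynamic track could look like. It is not an official reference implementation: the arg(.)/att(.) facts follow the apx conventions used in previous ICCMA editions, the modification lines follow the "+att(a,b)." / "-att(d,e)." syntax quoted above, and all parameter names, file names, and the random attack model are illustrative assumptions. The authoritative format definitions are given in the Solver Requirements.

#!/usr/bin/env python3
# Illustrative sketch only: a minimal random generator producing an apx instance
# together with an .apxm modification list. The Erdos-Renyi-style attack
# probability and all names below are our own illustrative choices, not
# ICCMA requirements.
import random

def generate_af(n_args, p_att, seed=0):
    """Return argument names and attacks drawn independently with probability p_att."""
    rng = random.Random(seed)
    args = ["a%d" % i for i in range(n_args)]
    atts = {(x, y) for x in args for y in args if rng.random() < p_att}
    return args, atts

def write_apx(path, args, atts):
    """Write the initial framework as arg(.)/att(.) facts (apx format)."""
    with open(path, "w") as f:
        for a in args:
            f.write("arg(%s).\n" % a)
        for (x, y) in atts:
            f.write("att(%s,%s).\n" % (x, y))

def write_apxm(path, args, atts, n_mods=15, seed=0):
    """Write n_mods attack changes using the '+att(x,y).' / '-att(x,y).' syntax."""
    rng = random.Random(seed)
    current = set(atts)
    with open(path, "w") as f:
        for _ in range(n_mods):
            x, y = rng.choice(args), rng.choice(args)
            if (x, y) in current:
                f.write("-att(%s,%s).\n" % (x, y))
                current.remove((x, y))
            else:
                f.write("+att(%s,%s).\n" % (x, y))
                current.add((x, y))

if __name__ == "__main__":
    args, atts = generate_af(n_args=50, p_att=0.1, seed=42)
    write_apx("instance.apx", args, atts)
    write_apxm("instance.apxm", args, atts, n_mods=15, seed=42)

A real submission would of course document which generator parameters are expected to yield "hard" or "easy" instances, as requested in the submission instructions below.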
Before *** February 15, 2021 *** authors of benchmarks are expected to:

1) produce a 2-page paper describing their generator/benchmark. The LaTeX template is available at http://argumentationcompetition.org/2021/iccma2021-latex-template.tar.gz. Note that we plan to include the benchmark descriptions in the ICCMA 2021 proceedings on arXiv.

2) provide an instance set (composed of a significant number of instances, e.g. 30) and/or an instance generator (producing instances in apx, tgf, or both). An indication of which instances (either included in the instance set or obtained with given generator parameters) are expected to be "hard" or "easy" is welcome.

Both the paper and the instance set/generator must be submitted at http://iccma2021.cril.fr/.

After the announcement of the results, we expect benchmark authors to submit their paper to arXiv. This will allow us to gather these papers (as well as the solver descriptions) into the ICCMA 2021 proceedings. More instructions will be provided at that time.

Contact
-------
Main contact: iccma2021@cril.fr
Submission website: http://iccma2021.cril.fr/
For up-to-date information: https://twitter.com/argcompetition

Anyone interested is welcome to subscribe to argumentationcompetition@inria.fr, by sending an email with subject "subscribe argumentationcompetition" to sympa_inria@inria.fr, in order to receive information concerning future editions of ICCMA.

Organizers
----------
Jean-Guy Mailly, LIPADE, University of Paris, France
Emmanuel Lonca, CRIL, University of Artois, France
Jean-Marie Lagniez, CRIL, University of Artois, France
Julien Rossit, LIPADE, University of Paris, France

The ICCMA steering committee:
-----------------------------
Sarah A. Gaggl, International Center for Computational Logic, TU Dresden, Germany
Nir Oren, Department of Computing Science, University of Aberdeen, UK
Jean-Guy Mailly, LIPADE, University of Paris, France
Federico Cerutti, Department of Information Engineering, University of Brescia, Italy
Matthias Thimm, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany
Mauro Vallati, School of Computing and Engineering, University of Huddersfield, UK
Serena Villata, WIMMICS Research Team, INRIA Sophia Antipolis, France