[loginf] [ICCMA'17] Call for Benchmarks

Stefan Woltran woltran at dbai.tuwien.ac.at
Fri Jul 15 12:55:40 CEST 2016

  Second International Competition on Computational Models of Argumentation (ICCMA'17)

                                  Call for Benchmarks


Argumentation is a major topic in the study of artificial intelligence. In 
particular, the problem of solving certain reasoning tasks on Dung's abstract 
argumentation frameworks is central to many advanced argumentation systems. Since 
the problems to be solved are in general computationally intractable, efficient 
algorithms and solvers are required.
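To make the setting concrete (this sketch is not part of the call, and assumes only the standard textbook definitions): a Dung abstract argumentation framework is a pair (A, R) of arguments and attacks, and one classic reasoning task is computing the grounded extension, the least fixed point of the characteristic function.

```python
def grounded_extension(args, attacks):
    """Grounded extension of the framework (args, attacks):
    least fixed point of F(S) = {a : every attacker of a is attacked by S}."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def defended(s):
        # a is defended by s if each attacker of a is attacked by some member of s
        return {a for a in args
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# a attacks b, b attacks c: a is unattacked and defends c
print(sorted(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")})))
# → ['a', 'c']
```

Other semantics (preferred, stable, complete) give rise to harder decision problems, which is what makes efficient solvers the focus of the competition.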

The Second International Competition on Computational Models of Argumentation 
(ICCMA'17) will be held in the first half of 2017; submitted solvers will 
compete on a selected collection of benchmark instances.

The objectives of the competition are to provide a forum for the empirical 
comparison of solvers, to highlight challenges to the community, to propose new 
directions for research, and to provide a core of common benchmark instances and 
a representation formalism that can aid in the comparison and evaluation of 
solvers.

The ICCMA'17 reasoning tasks are detailed in the Call for Solvers.

Challenging and representative benchmarks are essential for meaningful 
comparisons of solvers. We invite submissions of both real-world benchmarks and 
benchmark generators to ensure a diverse benchmark set for the competition. In 
the case of (randomly) generated benchmarks, we invite authors to submit the 
generator rather than the instances. Real-world benchmarks are most welcome 
whether they come directly from an application or have been obtained via a 
translation from another (argumentation) formalism.

Thus, it is now time to collect benchmarks!

All submitted benchmark instances, together with those of ICCMA'15, will become 
part of the suite of benchmark instances made available to the community after 
the event, and a selection will be used to evaluate solvers at ICCMA'17.

The organizers reserve the right to make this selection, which will also be based 
on a pre-selection of benchmark instances whose goal is to ensure that only 
"meaningful" benchmark instances are evaluated.

The input format is adapted from the last edition and is detailed on the 
competition web site.
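As an illustration only, and assuming the ICCMA'15 conventions carry over, the ASPARTIX-style "apx" encoding used at ICCMA'15 represents a framework as a list of argument and attack facts:

```
arg(a).
arg(b).
arg(c).
att(a,b).
att(b,c).
```

Submitters should consult the official ICCMA'17 format description for the authoritative specification.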

Benchmark authors are invited to send an email to iccma17 at dbai.tuwien.ac.at by

Feb 1, 2017

where they are expected to

* indicate names and affiliations of the contributors,

* produce a 1-page paper describing their benchmark (these papers will be 
published on the competition web site), and

* provide an instance set (composed of a significant number of instances, e.g. 
15) and/or an instance generator.

Regarding the instances, an indication of which instances, whether provided in 
an instance set or generated with given parameters of the generator, are 
expected to be "hard" or "easy" is welcome.

We are looking forward to an exciting competition!

iccma17 at dbai.tuwien.ac.at

Sarah A. Gaggl, Computational Logic Group, TU Dresden, Germany
Thomas Linsbichler, Institute of Information Systems, TU Wien, Austria
Marco Maratea, DIBRIS, University of Genoa, Italy
Stefan Woltran, Institute of Information Systems, TU Wien, Austria

The ICCMA steering committee
Federico Cerutti, School of Computer Science & Informatics, Cardiff University, UK
Sarah A. Gaggl, Computational Logic Group, TU Dresden, Germany
Nir Oren, Department of Computing Science, University of Aberdeen, UK
Hannes Strass, Computer Science Institute, Leipzig University, Germany
Matthias Thimm, Institute for Web Science and Technologies, University of Koblenz-Landau, Germany
Mauro Vallati, School of Computing and Engineering, University of Huddersfield, UK
Serena Villata, WIMMICS Research Team, INRIA Sophia Antipolis, France
