Touché Task 1: Conversational Argument Retrieval

Synopsis

  • Task: Given a query on a controversial topic, retrieve relevant arguments from a focused crawl of online debate portals.
  • Input: [data]
  • Submission: [submit]

Task

The goal of Task 1 is to support users who search for arguments to be used in conversations (e.g., getting an overview of pros and cons or just looking for arguments in line with a user's stance). Given a query on a controversial topic, the task is to retrieve relevant arguments from a focused crawl of online debate portals.


Data

The topics for Task 1 will be sent to each team via email upon completed registration. The topics will be provided as XML files.

Example topic for Task 1:

   <topic>
      <number>1</number>
      <title>Is climate change real?</title>
      <description>You read an opinion piece on how climate change is a hoax and disagree. Now you are looking for arguments supporting the claim that climate change is in fact real.</description>
      <narrative>Relevant arguments will support the given stance that climate change is real or attack a hoax side's argument.</narrative>
   </topic>
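
For illustration, topics in this format can be read with a standard XML parser. The sketch below assumes that the topic file wraps several <topic> elements in a single root element (e.g., <topics>) and uses a hypothetical file name; adjust it to the actual files you receive.

   # Sketch: load Touché Task 1 topics from an XML file.
   # Assumption: a root element wraps multiple <topic> entries; check the
   # actual file layout you receive and adjust accordingly.
   import xml.etree.ElementTree as ET

   def load_topics(path):
       topics = []
       for topic in ET.parse(path).getroot().iter("topic"):
           topics.append({
               "number": topic.findtext("number", "").strip(),
               "title": topic.findtext("title", "").strip(),
               "description": topic.findtext("description", "").strip(),
               "narrative": topic.findtext("narrative", "").strip(),
           })
       return topics

   topics = load_topics("topics-task-1.xml")  # hypothetical file name
   print(topics[0]["title"])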

Document collections

Task 1 will use the args.me corpus; you may index the corpus with your favorite retrieval system. To ease participation, you may also directly use the args.me search engine's API for a baseline retrieval.
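
As a rough illustration of a baseline run against the args.me search API, the sketch below issues a keyword query over HTTP. The endpoint URL, parameter names, and response fields used here are assumptions; consult the official args.me API documentation for the actual interface.

   # Sketch: baseline retrieval via the args.me search API.
   # NOTE: the URL, parameters, and response layout below are assumptions;
   # verify them against the official args.me API documentation.
   import requests

   API_URL = "https://www.args.me/apii/search"  # hypothetical endpoint

   def search_args(query, size=1000):
       response = requests.get(API_URL, params={"query": query, "format": "json"})
       response.raise_for_status()
       # The field holding the result list (e.g., "arguments") may differ.
       return response.json().get("arguments", [])[:size]

   results = search_args("climate change real")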

Evaluation

For Task 1, be sure to retrieve "strong" arguments. Our human assessors will manually label the retrieved documents, both for their general topical relevance and for argument quality dimensions such as: (1) whether an argumentative text is logically cogent, (2) whether it is rhetorically well-written, and (3) whether it helps a user in their stance-building process, i.e., a notion somewhat similar to the concept of "utility" (more information on argument quality: paper).

Submission

We encourage participants to use TIRA for their submissions to allow for better reproducibility. Please also have a look at the dedicated TIRA tutorial for Touché; in case of problems, we will be able to assist you. Even though TIRA is the preferred way of submitting runs, you may also submit runs via email if problems arise. We will try to quickly review your TIRA or email submissions and provide feedback.

Runs may be either automatic or manual. An automatic run does not use the topic descriptions or narratives and must not "manipulate" the topic titles via manual intervention. A manual run is anything that is not an automatic run. Upon submission, please let us know which of your runs are manual. For each topic, include up to 1,000 retrieved documents.

The submission format for the task will follow the standard TREC format:

qid Q0 doc rank score tag

With:

  • qid: The topic number.
  • Q0: Unused, should always be Q0.
  • doc: The document ID returned by your system for the topic qid:
    • For Task 1: Use the official args.me ID.
  • rank: The rank the document is retrieved at.
  • score: The score (integer or floating point) that generated the ranking. Scores must be in descending (non-increasing) order within each topic. It is important to handle tied scores (trec_eval sorts documents by the score values, not by your rank values).
  • tag: A tag that identifies your group and the method you used to produce the run.

The fields should be separated by whitespace. The individual columns' widths are not restricted (e.g., the score can have arbitrary precision, which helps to avoid ties), but it is important to include all columns and to separate them with whitespace.
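
As a sketch, a run file in this format can be written from per-topic result lists as follows; the variable names are illustrative, and the results are assumed to be already sorted by descending score.

   # Sketch: write a TREC-style run file for Task 1.
   # 'ranked_results' maps a topic number to a list of (doc_id, score) pairs,
   # already sorted by descending score; all names here are illustrative.
   def write_run(ranked_results, tag, path="run.txt"):
       with open(path, "w") as out:
           for qid, docs in ranked_results.items():
               for rank, (doc_id, score) in enumerate(docs, start=1):
                   out.write(f"{qid} Q0 {doc_id} {rank} {score} {tag}\n")

   write_run({1: [("Sf9294c83-Af186e851", 17.89)]}, "myGroupMyMethod")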

An example run for Task 1 is:

1 Q0 Sf9294c83-Af186e851 1 17.89 myGroupMyMethod
1 Q0 Sf9294c83-A9a4e056e 2 16.43 myGroupMyMethod
1 Q0 S96f2396e-Aaf079b43 3 16.42 myGroupMyMethod
...

Task Committee