Dear all,
We are excited to announce our next shared task of the FEVER Workshop at EMNLP 2024, which aims to evaluate the ability of systems to verify real-world claims with evidence from the Web. More specifically:
* Given a claim and its metadata, systems must retrieve evidence that supports and/or refutes the claim, either from the Web or from the document collection provided by the organisers.
* Using this evidence, systems must label the claim as Supported, Refuted, Not Enough Evidence (if there isn't sufficient evidence to either support or refute it), or Conflicting Evidence/Cherry-picking (if the claim has both supporting and refuting evidence); a minimal sketch of this label set follows below.
* A response will be considered correct only if both the label is correct and the evidence is adequate. As evidence retrieval is non-trivial to evaluate automatically, participants will be asked to help evaluate it manually so that systems can be assessed fairly.
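To give a concrete picture of the label set, here is a minimal Python sketch of the four verdicts and a possible prediction record. The field names and the question-answer evidence structure are illustrative assumptions, not the official submission format; see the task page for the authoritative specification.

    # Minimal sketch of the four-way labelling scheme described above.
    # The record layout is a hypothetical illustration, not the official
    # submission format (see https://fever.ai/task.html for details).
    from enum import Enum

    class Verdict(Enum):
        SUPPORTED = "Supported"
        REFUTED = "Refuted"
        NOT_ENOUGH_EVIDENCE = "Not Enough Evidence"
        CONFLICTING = "Conflicting Evidence/Cherry-picking"

    # Hypothetical prediction: a claim, retrieved evidence, and a verdict.
    prediction = {
        "claim": "Example claim text.",
        "evidence": [
            {"question": "What does the source report about the claim?",
             "answer": "An answer retrieved from the Web.",
             "url": "https://example.com/article"},
        ],
        "label": Verdict.REFUTED.value,
    }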
To learn more about the task and our baseline implementation, read our paper "AVeriTeC: A Dataset for Real-world Claim Verification with Evidence from the Web": https://proceedings.neurips.cc/paper_files/paper/2023/hash/cd86a30526cd1aff61d6f89f107634e4-Abstract-Datasets_and_Benchmarks.html

Key dates:
* Challenge Launch: April 2024
* Training/Dev Data Release: April 2024
* Testing Begins: June 30, 2024
* Submission Closes: July 15, 2024
* Results Announced: July 18, 2024
* System Descriptions Due for Workshop: August 15, 2024
* Winners Announced: November 15 or 16, 2024 (7th FEVER Workshop)
For more information on the shared task, and for data and code to get started, visit https://fever.ai/task.html.
Feel free to contact us with any questions on our Slack channel (https://join.slack.com/t/feverworkshop/shared_invite/zt-4v1hjl8w-Uf4yg~diftuGvj2fvw7PDA) or via email at fever-organisers@googlegroups.com.
Looking forward to your participation!

--
The FEVER workshop organisers