Dear all,
We invite you to participate in this year's WMT shared tasks on Quality Estimation, where the goal is to predict translation quality using only the source text and the translation, without access to reference translations.
This year we introduce the following new elements:
- Updated quality annotation scheme: the majority of the tasks now use Multidimensional Quality Metrics (MQM) instead of direct assessments (DA);
- QE as a Metric: we share part of the data used in this year's Metrics task, to promote research that bridges the two areas;
- New language pairs: English-Marathi, with sentence-level and word-level annotations (direct assessments), and a _surprise_ language pair in the testing phase;
- An explainability subtask, following up on the 1st edition of the Eval4NLP shared task;
- Updated data and task definition for the critical error detection task.
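For those more familiar with DA: in MQM, segment scores are derived from explicit error annotations weighted by severity, rather than from a single holistic judgement. The snippet below is a minimal sketch of that idea; the severity weights (minor = 1, major = 5, critical = 10) are illustrative assumptions, not the official weighting used for the shared-task data.

```python
# Sketch of deriving an MQM-style segment score from error annotations.
# Severity weights are illustrative; the official weighting may differ.
SEVERITY_WEIGHTS = {"minor": 1.0, "major": 5.0, "critical": 10.0}

def mqm_score(error_severities, weights=SEVERITY_WEIGHTS):
    """Return a penalty-based score: larger (less negative) = better quality."""
    penalty = sum(weights[severity] for severity in error_severities)
    return -penalty  # negated so that higher scores mean better translations

# Toy example: a segment annotated with one minor and one major error.
print(mqm_score(["minor", "major"]))  # -6.0
```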
## Task descriptions
Task 1 -- Quality prediction
- Word-level: predict translation errors, assigning an OK/BAD tag to each word of the target.
- Sentence-level: predict a quality score for each source-target sentence pair.
Task 2 -- Explainable QE
Infer translation errors as an explanation for sentence-level quality scores.
Task 3 -- Critical Error Detection
Predict sentence-level binary scores indicating whether or not a translation contains a critical error.
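To make the task definitions above concrete, the snippet below sketches the kind of predictions a system would produce for each task, on a hypothetical toy example. All names and values are illustrative; the actual data and submission formats are specified on the shared task website.

```python
# Hypothetical toy predictions for the three tasks; the real submission
# formats are defined on the shared task website.
translation = "the cat sit on the mat".split()

# Task 1, word level: one OK/BAD tag per target word
# (here "sit" is flagged as a grammatical error).
word_tags = ["OK", "OK", "BAD", "OK", "OK", "OK"]
assert len(word_tags) == len(translation)

# Task 1, sentence level: a single quality score for the sentence pair.
sentence_score = 0.72  # e.g. a normalised DA/MQM-style score

# Task 2, explainability: per-word relevance scores explaining the
# sentence-level score (higher = more responsible for quality issues).
word_relevance = [0.05, 0.02, 0.81, 0.04, 0.03, 0.05]

# Task 3: binary flag, 1 if the translation contains a critical error.
has_critical_error = 0
```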
Links to the data are available on the QE shared task website: https://wmt-qe-task.github.io/
## Important dates
- Release of training data: 30 May
- Release of dev data: TBA
- Release of test data: 21 July
- Test predictions deadline: 16 August
- System description paper: 18 August
- Conference: 7-8 December
To ask questions and receive the latest announcements, join our Google Group: https://groups.google.com/g/wmt-qe-shared-task/
Best,
The organisers