Hi all,
The 9th Conference on Machine Translation (WMT24), co-located with EMNLP 2024, will feature the Shared Task on the evaluation of automatic metrics this year. We are looking for both reference-based and reference-free metrics to evaluate the quality of MT systems. We will use expert-based MQM annotations on English⇾German, English⇾Spanish, and Japanese⇾Chinese as the primary gold standard for evaluating metrics. Details are at http://www2.statmt.org/wmt24/metrics-task.html.
We will also continue the challenge sets subtask (http://www2.statmt.org/wmt24/metrics-task.html#_challenge_set_subtask) this year: we invite anyone to submit a new test suite and/or an analysis paper on metric behaviour for specific perturbations/phenomena (you are welcome to resubmit last year’s challenge set!).
New this year:
* New language pairs: English⇾Spanish and Japanese⇾Chinese
* An additional challenge set covering 13 typologically diverse African languages
* We will be using the Codabench platform to improve the metric submission experience
Important dates:
* Challenge sets submission deadline: 11th July
* Metrics inputs ready to download: 23rd July
* Metric submission deadline: 30th July
* Metric scores for challenge sets distributed: 6th August
* Paper submission deadline to WMT: 20th August
Please register your metric submissions here: https://docs.google.com/forms/d/e/1FAIpQLSenoN9svmqkfJshM8bHd8p3Ofyepdvcg9Mtns0VdEbsFunvEA/viewform?usp=pp_url and your challenge set submissions here: https://forms.office.com/e/uhA74RnjMb so we can keep track of participants.
Looking forward to your submissions, WMT24 Metrics team