Workshop on Automatic Translation for Signed and Spoken Languages

***** The submission deadline for AT4SSL has been extended to 24 April 2023 *****

SCOPE

According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via Sign Language (SL). Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals are often a limiting factor in communication between signers and non-signers. Machine Translation (MT) is a core technology for reducing language barriers between spoken languages. Although MT has come a long way since its inception in the 1950s, it still has a long way to go to successfully cater to all communication needs and users. When it comes to the deaf and hard-of-hearing communities, MT is in its infancy. The complexity of the task of automatically translating between SLs, or between sign and spoken languages, requires a multidisciplinary approach.

The rapid technological and methodological advances in deep learning, and in AI in general, that we have seen over the last decade have not only improved MT, the recognition of image, video and audio signals, language understanding, the synthesis of life-like 3D avatars, etc., but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.

This one-day workshop aims to be a venue for presenting and discussing (complete, ongoing or future) research on automatic translation between sign and spoken languages, and to bring together researchers, practitioners, interpreters and innovators working in related fields. We are delighted to confirm that two interpreters for English<>International Sign (IS) will be present during the event, to make it as inclusive as possible for anyone who wishes to participate.

Theme of the workshop: Data is one of the key factors for the success of today’s AI, including language and translation models for sign and spoken languages. However, when it comes to SL MT and Natural Language Processing, we face problems related to small volumes of (parallel) data, low veracity in terms of the origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality or recording settings, and others. The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.


The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to automatic translation between sign and spoken languages. To this end, AT4SSL will host a moderated round table on the following three topics: (i) quality of recognition and synthesis models and user expectations; (ii) co-creation -- deaf, hearing and hard-of-hearing people joining forces towards a common goal; and (iii) sign-to-spoken and spoken-to-sign translation technology in the media.


TOPICS

This workshop focuses on the topics listed below. However, submissions related to the general topic of automatic translation between signed and spoken languages that deviate from these topics are also welcome:

  • Data: resources, collection and curation, challenges, processing, data life cycle

  • Use-cases, applications 

  • Ethics, privacy and policies

  • Sign language linguistics

  • Machine translation (with a focus on signed-to-signed, signed-to-spoken or spoken-to-signed language translation)

  • Natural language processing

  • Interpreting of sign and spoken languages

  • Image and video recognition (for the purpose of sign language recognition)

  • 3D avatar and virtual signer synthesis

  • Usability and challenges of current methods and methodologies

  • Sign language in the media


SUBMISSION FORMAT

Two types of submissions will be accepted for the AT4SSL workshop:

  • Research, review, position and application papers
    Unpublished papers that present original, completed work. The length of each paper should be at least four (4) and at most eight (8) pages, with unlimited pages for references.

  • Extended abstracts
    Extended abstracts should present original, ongoing work or innovative ideas. The length of each extended abstract is four (4) pages, with unlimited pages for references.

Both types of submissions should be formatted according to the official EAMT 2023 style templates (LaTeX, Overleaf, MS Word, Libre/Open Office, PDF).

Accepted papers and extended abstracts will be published in the EAMT 2023 proceedings and will be presented at the conference.

SUBMISSION POLICY

  • Submissions must be anonymized.

  • Papers and extended abstracts should be submitted via EasyChair.

  • Work that has been, or is planned to be, submitted to other venues must be declared as such. Upon acceptance at AT4SSL, it must be withdrawn from the other venues.

  • The review will be double-blind.

IMPORTANT DATES:

  • First call for papers: 13-March-2023

  • Second call for papers: 31-March-2023

  • Submission deadline: 24-April-2023 (extended from 14-April-2023)

  • Review process: between 25-April-2023 and 05-May-2023

  • Acceptance notification: 12-May-2023

  • Camera ready submission: 01-June-2023

  • Submission of material for interpreters: 06-June-2023

  • Programme will be finalised by: 01-June-2023

  • Workshop date: 15-June-2023

ORGANISATION COMMITTEE:

Dimitar Shterionov (TiU)

Mirella De Sisto (TiU)

Mathias Müller (UZH)

Davy Van Landuyt (EUD)

Rehana Omardeen (EUD)

Shaun O’Boyle (DCU)

Annelies Braffort (Paris-Saclay University)

Floris Roelofsen (UvA)

Frédéric Blain (TiU)

Bram Vanroy (KU Leuven; UGent)

Eleftherios Avramidis (DFKI)

INTERPRETING:

We will provide English to International Sign (IS) interpreting during the workshop. 

CONTACT:

Dimitar Shterionov, workshop chair: d.shterionov@tilburguniversity.edu

Registration will be handled by the EAMT 2023 conference (details to be announced).