According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via Sign Language (SL). Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals is often a limiting factor in communication between signers and non-signers. Machine Translation (MT) is a core technique for reducing language barriers for spoken languages. Although MT has come a long way since its inception in the 1950s, it still falls short of catering to all communication needs and users. When it comes to the deaf and hard-of-hearing communities, MT is in its infancy. The complexity of automatically translating between SLs, or between sign and spoken languages, requires a multidisciplinary approach (Bragg et al., 2019).
The rapid technological and methodological advances in deep learning, and in AI in general, over the last decade have not only improved MT, image, video and audio recognition, language understanding and the synthesis of life-like 3D avatars, but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.
This one-day workshop aims to be a venue for presenting and discussing (completed, ongoing or future) research on automatic translation between sign and spoken languages, and to bring together researchers, practitioners, interpreters and innovators working in related fields.
Theme of the workshop: Data is one of the key factors for the success of today’s AI, including language and translation models for sign and spoken languages. However, when it comes to MT and Natural Language Processing for SLs, we face problems related to small volumes of (parallel) data, high variability in the origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality and recording settings, among others. The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.
The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to the automatic translation between sign and spoken languages. To this end, AT4SSL will host a moderated round table on the following three topics: (i) quality of recognition and synthesis models and user expectations; (ii) co-creation: deaf, hearing and hard-of-hearing people joining forces towards a common goal; and (iii) sign-to-spoken and spoken-to-sign translation technology in media.
The workshop focuses on the topics listed below. However, submissions related to the general topic of automatic translation between signed and spoken languages that deviate from these topics are also welcome:
Data: resources, collection and curation, challenges, processing, data life cycle
Use cases and applications
Ethics, privacy and policies
Sign language linguistics
Machine translation (with a focus on signed-to-signed, signed-to-spoken or spoken-to-signed language translation)
Natural language processing
Sign and spoken language interpreting
Image and video recognition (for the purpose of sign language recognition)
Synthesis of 3D avatars and virtual signers
Usability and challenges of current methods and methodologies
Sign language in the media
Two types of submissions will be accepted for the AT4SSL workshop:
Research, review, position and application papers
Unpublished papers that present original, completed work. Each paper should be between four (4) and eight (8) pages long, with an unlimited number of pages for references.
Extended abstracts
Extended abstracts should present original, ongoing work or innovative ideas. Each extended abstract should be four (4) pages long, with an unlimited number of pages for references.
Both types of submissions should be formatted according to the official EAMT 2023 style templates (LaTeX, Overleaf, MS Word, Libre/Open Office, PDF).
Accepted papers and extended abstracts will be published in the EAMT 2023 proceedings and will be presented at the conference.
First call for papers: 13-March-2023
Second call for papers: 31-March-2023
Submission deadline: 14-April-2023
Review process: between 17-April-2023 and 05-May-2023
Acceptance notification: 12-May-2023
Camera ready submission: 01-June-2023
Programme will be finalised by: 01-June-2023
Submission of material for interpreters: 06-June-2023
Workshop date: 15-June-2023
Dimitar Shterionov (TiU)
Mirella De Sisto (TiU)
Mathias Müller (UZH)
Davy Van Landuyt (EUD)
Rehana Omardeen (EUD)
Shaun O’Boyle (DCU)
Annelies Braffort (Paris-Saclay University)
Floris Roelofsen (UvA)
Frédéric Blain (TiU)
Bram Vanroy (KU Leuven; UGent)
Eleftherios Avramidis (DFKI)
Dimitar Shterionov, workshop chair: d.shterionov@tilburguniversity.edu
Registration will be handled by the EAMT 2023 conference (details to be announced).