Workshop on Automatic Translation for Signed and Spoken Languages
***** The submission deadline for AT4SSL has been extended to April 24th, 2023 *****
SCOPE
According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via Sign Language (SL). Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals are often a limiting factor in communication between signers and non-signers. Machine Translation (MT) is a core technique for reducing language barriers for spoken languages. Although MT has come a long way since its inception in the 1950s, it still has a long way to go to successfully cater to all communication needs and users. When it comes to the deaf and hard-of-hearing communities, MT is in its infancy. The complexity of automatically translating between SLs, or between sign and spoken languages, requires a multidisciplinary approach.
The rapid technological and methodological advances in deep learning, and in AI in general, seen over the last decade have not only improved MT, the recognition of image, video and audio signals, language understanding, the synthesis of life-like 3D avatars, and more, but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.
This one-day workshop aims to be a venue for presenting and discussing (complete, ongoing or future) research on automatic translation between sign and spoken languages, and to bring together researchers, practitioners, interpreters and innovators working in related fields. We are delighted to confirm that two interpreters for English<>International Sign (IS) will be present during the event, to make it as inclusive as possible for anyone who wishes to participate.
Theme of the workshop: Data is one of the key factors in the success of today’s AI, including language and translation models for sign and spoken languages. However, when it comes to SLs, MT and Natural Language Processing face problems related to small volumes of (parallel) data, low veracity with respect to the origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality or recording settings, and others. The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.
The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to the automatic translation between sign and spoken languages. To this end, AT4SSL will host a moderated round table on the following three topics: (i) quality of recognition and synthesis models and user expectations; (ii) co-creation -- deaf, hearing and hard-of-hearing people joining forces towards a common goal; and (iii) sign-to-spoken and spoken-to-sign translation technology in the media.
TOPICS
This workshop focuses on the following topics; however, submissions on the general theme of automatic translation between signed and spoken languages that deviate from these topics are also welcome:
* Data: resources, collection and curation, challenges, processing, data life cycle
* Use-cases, applications
* Ethics, privacy and policies
* Sign language linguistics
* Machine translation (with a focus on signed-to-signed, signed-to-spoken or spoken-to-signed language translation)
* Natural language processing
* Interpreting of sign and spoken languages
* Image and video recognition (for the purpose of sign language recognition)
* 3D avatar and virtual signers synthesis
* Usability and challenges of current methods and methodologies
* Sign language in the media
SUBMISSION FORMAT
Two types of submissions will be accepted for the AT4SSL workshop:
* Research, review, position and application papers
Unpublished papers that present original, completed work. The length of each paper should be at least four (4) and at most eight (8) pages, with unlimited pages for references.
* Extended abstracts
Extended abstracts should present original, ongoing work or innovative ideas. The length of each extended abstract is four (4) pages, with unlimited pages for references.
Both types of submissions should be formatted according to the official EAMT 2023 style templates (LaTeX<https://events.tuni.fi/uploads/2022/12/ee35fd56-latex_template.zip>, Overleaf<https://www.overleaf.com/read/mkjbkppndvxw>, MS Word<https://events.tuni.fi/uploads/2022/12/edd598d2-eamt23.docx>, Libre/Open Office<https://events.tuni.fi/uploads/2022/12/ece98f81-eamt23.odt>, PDF<https://events.tuni.fi/uploads/2022/12/6e89772e-eamt23.pdf>).
Accepted papers and extended abstracts will be published in the EAMT 2023 proceedings and will be presented at the conference.
SUBMISSION POLICY
* Submissions must be anonymized.
* Papers and extended abstracts should be submitted via EasyChair<https://easychair.org/my/conference?conf=at4ssl2023>.
* Work that has been or is planned to be submitted to other venues must be declared as such. Upon acceptance at AT4SSL, it must be withdrawn from the other venues.
* The review process will be double-blind.
IMPORTANT DATES:
* First call for papers: 13-March-2023
* Second call for papers: 31-March-2023
* Submission deadline: 24-April-2023 (extended from 14-April-2023)
* Review process: between 25-April-2023 and 05-May-2023
* Acceptance notification: 12-May-2023
* Camera ready submission: 01-June-2023
* Submission of material for interpreters: 06-June-2023
* Programme will be finalised by: 01-June-2023
* Workshop date: 15-June-2023
ORGANISATION COMMITTEE:
Dimitar Shterionov (TiU)
Mirella De Sisto (TiU)
Mathias Muller (UZH)
Davy Van Landuyt (EUD)
Rehana Omardeen (EUD)
Shaun O’Boyle (DCU)
Annelies Braffort (Paris-Saclay University)
Floris Roelofsen (UvA)
Frédéric Blain (TiU)
Bram Vanroy (KU Leuven; UGent)
Eleftherios Avramidis (DFKI)
INTERPRETING:
We will provide English to International Sign (IS) interpreting during the workshop.
CONTACT:
Dimitar Shterionov, workshop chair: d.shterionov(a)tilburguniversity.edu
Registration will be handled by the EAMT2023 conference. (To be announced)
Call for Papers for 2023
The Journal of Open Humanities Data (JOHD)<https://openhumanitiesdata.metajnl.com/> features peer-reviewed publications describing humanities research objects with high potential for reuse. These might include curated resources like (annotated) linguistic corpora, ontologies, and lexicons, as well as databases, maps, atlases, linked data objects, and other data sets created with qualitative, quantitative, or computational methods.
We are currently inviting submissions of two varieties:
1. Short data papers contain a concise description of a humanities research object with high reuse potential. These are short (1,000 words), highly structured narratives. A data paper does not replace a traditional research article, but rather complements it.
2. Full length research papers discuss and illustrate methods, challenges, and limitations in humanities research data creation, collection, management, access, processing, or analysis. These are intended to be longer narratives (3,000 - 5,000 words), which give authors the ability to contribute to a broader discussion regarding the creation of research objects or methods.
Humanities subjects of interest to JOHD include, but are not limited to, Art History, Classics, History, Linguistics, Literature, Modern Languages, Music and Musicology, Philosophy, and Religious Studies. Research that crosses one or more of these traditional disciplinary boundaries is highly encouraged. Authors are encouraged to publish their data in recommended repositories<https://openhumanitiesdata.metajnl.com/about/#repo>. More information about the submission process<https://openhumanitiesdata.metajnl.com/about/submissions>, editorial policies<https://openhumanitiesdata.metajnl.com/about/editorialpolicies/> and archiving<https://openhumanitiesdata.metajnl.com/about/> is available on the journal’s web pages.
JOHD provides immediate open access to its content on the principle that making research freely available to the public supports a greater global exchange of knowledge.
We accept online submissions via our journal website. See Author Guidelines <https://openhumanitiesdata.metajnl.com/about/submissions/> for further information. Alternatively, please contact the editor<https://openhumanitiesdata.metajnl.com/contact/> if you are unsure as to whether your research is suitable for submission to the journal.
Authors remain the copyright holders and grant third parties the right to use, reproduce, and share the article according to the Creative Commons<http://creativecommons.org/licenses/by/4.0/> licence agreement.
--
Prof Menno van Zaanen menno.vanzaanen(a)nwu.ac.za
Professor in Digital Humanities
South African Centre for Digital Language Resources
https://www.sadilar.org
***** Apologies for cross-postings *****
***** Please help disseminate *****
1st Call for Papers
SIGUL 2023 Workshop <https://sigul-2023.ilc.cnr.it>
Co-located with Interspeech 2023 <https://www.interspeech2023.org/>
Dublin, Ireland, 18-20 August 2023
The 2nd Annual Meeting of the ELRA<http://www.elra.info/>/ISCA<https://www.isca-speech.org/iscaweb/index.php> Special Interest Group on Under-Resourced Languages<http://www.elra.info/en/sig/sigul/> (SIGUL 2023) provides a forum for the presentation and discussion of cutting-edge research in text and speech processing for under-resourced languages by academic and industry researchers. SIGUL 2023 carries on the tradition of the SIGUL and CCURL-SLTU (Collaboration and Computing for Under-Resourced Languages – Spoken Language Technologies for Under-resourced Languages) workshop series, which has been organized since 2008 and, as LREC workshops, since 2014. As usual, this workshop will span the research interest areas of less-resourced, under-resourced, endangered, minority, and minoritized languages.
*Workshop website*: https://sigul-2023.ilc.cnr.it
Special Features
This year, the workshop will be marked with three special events:
(1) Special Session in Celtic Language Technology (August 18)
SIGUL 2023 will provide a special session or forum for researchers interested in developing language technologies for Celtic languages.
(2) Joint Session with SLaTE 2023 (August 19)
SIGUL 2023 will hold a joint session with the 9th Workshop on Speech and Language Technology in Education (SLaTE 2023<https://sites.google.com/view/slate2023>). The goal is to accelerate the development of spoken language technology for under-resourced languages through education.
(3) Social outing and dinner near Dublin (optional on August 20)
Workshop Topics
Following the long-standing series of previous meetings, the SIGUL venue will provide a forum for the presentation of cutting-edge research in natural language processing and spoken language processing for under-resourced languages to both academic and industry researchers, and will also offer a venue where researchers from different disciplines and varied backgrounds can fruitfully explore new areas of intellectual and practical development while honoring their common interest in sustaining less-resourced languages.
Topics include but are not limited to:
* Processing any under-resourced languages (covering less-resourced, under-resourced, endangered, minority, and minoritized languages)
* Cognitive and linguistic studies of under-resourced languages
* Fast resources acquisition: text and speech corpora, parallel texts, dictionaries, grammars, and language models
* Zero-resource speech technologies and self-supervised learning
* Cross-lingual and multilingual acoustic and lexical modeling
* Speech recognition and synthesis for under-resourced languages and dialects
* Machine translation and spoken dialogue systems
* Applications of spoken language technologies for under-resourced languages
* Special topics:
  o Celtic language technology
  o Spoken language technologies for under-resourced languages via education
We also welcome various types of papers:
* research papers;
* position papers for reflective considerations of methodological, best-practice, and institutional issues (e.g., ethics, data ownership, speakers’ community involvement, de-colonizing approaches);
* research posters for work-in-progress projects in the early stages of development or descriptions of new resources;
* demo papers and early-career/student papers, to be submitted as extended abstracts and presented as posters.
Instructions for submission
Prospective authors are invited to submit their contributions according to the following guidelines.
* Research and position papers and posters: a maximum of 5 pages, with the 5th page reserved exclusively for references.
* Demo papers and early-career/student papers: a maximum of three pages, with the 3rd page reserved for references.
Both types of submissions must conform to the Interspeech format<https://www.interspeech2023.org/author-resources/> defined in the paper preparation guidelines, as instructed in the author’s kit<https://www.interspeech2023.org/wp-content/uploads/2023/01/INTERSPEECH_2023…> on the Interspeech webpage. Papers do not need to be anonymous. Authors must declare that their contributions are original and that they have not submitted their papers elsewhere for publication.
Important Dates
- Paper submission deadline: 28 May 2023
- Notification of acceptance: 2 July 2023
- Camera-ready paper: 21 July 2023
- Workshop date: 18-20 August 2023
Outline of the Program
SIGUL 2023 will continue the tradition of the previous SIGUL event<https://sigul-2022.ilc.cnr.it>, featuring a number of distinguished keynote speakers, technical oral and poster sessions, and panel discussions on a better future for under-resourced languages and under-resourced communities.
Full list of organizers
SIGUL Board
Sakriani Sakti (JAIST, Japan)
Claudia Soria (CNR-ILC, Italy)
Maite Melero (Barcelona Supercomputing Center, Spain)
SIGUL 2023 Organizers
Kolawole Adebayo (ADAPT, Ireland)
Ailbhe Ní Chasaide (Trinity College Dublin, Ireland)
Brian Davis (ADAPT, Ireland)
John Judge (ADAPT, Ireland)
Maite Melero (Barcelona Supercomputing Center, Spain)
Sakriani Sakti (JAIST, Japan)
Claudia Soria (CNR-ILC, Italy)
To contact the organizers, please mail sigul2023(a)ml.jaist.ac.jp <mailto:sigul2023@ml.jaist.ac.jp> (Subject: [SIGUL2023]).
--
Claudia Soria
Researcher
Cnr-Istituto di Linguistica Computazionale “Antonio Zampolli”
Via Moruzzi 1
56124 Pisa
Italy
Tel. +39 050 3153166
Skype clausor
Second International Workshop on Automatic Translation
for Signed and Spoken Languages (AT4SSL2023 @EAMT2023)
Second Call For Papers
https://sites.google.com/tilburguniversity.edu/at4ssl2023/
****** Apologies for cross-posting ******
SCOPE
According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via Sign Language (SL). Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals are often a limiting factor in communication between signers and non-signers. Machine Translation (MT) is a core technique for reducing language barriers for spoken languages. Although MT has come a long way since its inception in the 1950s, it still has a long way to go to successfully cater to all communication needs and users. When it comes to the deaf and hard-of-hearing communities, MT is in its infancy. The complexity of automatically translating between SLs, or between sign and spoken languages, requires a multidisciplinary approach (Bragg et al., 2019)<https://dl.acm.org/doi/10.1145/3308561.3353774>.
The rapid technological and methodological advances in deep learning, and in AI in general, seen over the last decade have not only improved MT, the recognition of image, video and audio signals, language understanding, the synthesis of life-like 3D avatars, and more, but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.
This one-day workshop aims to be a venue for presenting and discussing (complete, ongoing or future) research on automatic translation between sign and spoken languages, and to bring together researchers, practitioners, interpreters and innovators working in related fields. We are delighted to confirm that two interpreters for English<>International Sign (IS) will be present during the event, to make it as inclusive as possible for anyone who wishes to participate.
Theme of the workshop: Data is one of the key factors in the success of today’s AI, including language and translation models for sign and spoken languages. However, when it comes to SLs, MT and Natural Language Processing face problems related to small volumes of (parallel) data, low veracity with respect to the origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality or recording settings, and others. The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.
The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to the automatic translation between sign and spoken languages. To this end, AT4SSL will host a moderated round table on the following three topics: (i) quality of recognition and synthesis models and user expectations; (ii) co-creation -- deaf, hearing and hard-of-hearing people joining forces towards a common goal; and (iii) sign-to-spoken and spoken-to-sign translation technology in the media.
TOPICS
This workshop focuses on the following topics; however, submissions on the general theme of automatic translation between signed and spoken languages that deviate from these topics are also welcome:
* Data: resources, collection and curation, challenges, processing, data life cycle
* Use-cases, applications
* Ethics, privacy and policies
* Sign language linguistics
* Machine translation (with a focus on signed-to-signed, signed-to-spoken or spoken-to-signed language translation)
* Natural language processing
* Interpreting of sign and spoken languages
* Image and video recognition (for the purpose of sign language recognition)
* 3D avatar and virtual signers synthesis
* Usability and challenges of current methods and methodologies
* Sign language in the media
SUBMISSION FORMAT
Two types of submissions will be accepted for the AT4SSL workshop:
* Research, review, position and application papers
Unpublished papers that present original, completed work. The length of each paper should be at least four (4) and at most eight (8) pages, with unlimited pages for references.
* Extended abstracts
Extended abstracts should present original, ongoing work or innovative ideas. The length of each extended abstract is four (4) pages, with unlimited pages for references.
Both types of submissions should be formatted according to the official EAMT 2023 style templates (LaTeX<https://events.tuni.fi/uploads/2022/12/ee35fd56-latex_template.zip>, Overleaf<https://www.overleaf.com/read/mkjbkppndvxw>, MS Word<https://events.tuni.fi/uploads/2022/12/edd598d2-eamt23.docx>, Libre/Open Office<https://events.tuni.fi/uploads/2022/12/ece98f81-eamt23.odt>, PDF<https://events.tuni.fi/uploads/2022/12/6e89772e-eamt23.pdf>).
Accepted papers and extended abstracts will be published in the EAMT 2023 proceedings and will be presented at the conference.
SUBMISSION POLICY
* Submissions must be anonymized.
* Papers and extended abstracts should be submitted via EasyChair<https://easychair.org/conferences/?conf=eamt2023>.
* Work that has been or is planned to be submitted to other venues must be declared as such. Upon acceptance at AT4SSL, it must be withdrawn from the other venues.
* The review process will be double-blind.
IMPORTANT DATES:
* First call for papers: 13-March-2023
* Second call for papers: 3-April-2023
* Submission deadline: 14-April-2023
* Review process: between 17-April-2023 and 05-May-2023
* Acceptance notification: 12-May-2023
* Camera ready submission: 01-June-2023
* Submission of material for interpreters: 06-June-2023
* Programme will be finalised by: 01-June-2023
* Workshop date: 15-June-2023
ORGANISATION COMMITTEE:
Dimitar Shterionov (TiU)
Mirella De Sisto (TiU)
Mathias Muller (UZH)
Davy Van Landuyt (EUD)
Rehana Omardeen (EUD)
Shaun O’Boyle (DCU)
Annelies Braffort (Paris-Saclay University)
Floris Roelofsen (UvA)
Frédéric Blain (TiU)
Bram Vanroy (KU Leuven; UGent)
Eleftherios Avramidis (DFKI)
CONTACT:
Dimitar Shterionov, workshop chair: d.shterionov(a)tilburguniversity.edu
Registration will be handled by the EAMT2023 conference. (To be announced)