Apologies for cross-posting
Third Workshop on Language Technology for Equality, Diversity and Inclusion (LT-EDI-2023) at RANLP 2023
Link: https://sites.google.com/view/lt-edi-2023/
Equality, Diversity and Inclusion (EDI) is an important agenda across every field [1] throughout the world. Language, as a major part of communication, should be inclusive and treat everyone with equality. Today's large internet community uses language technology (LT), which has a direct impact on people across the globe. EDI is crucial to ensure everyone is valued and included, so it is necessary to build LT that serves this purpose. Recent results have shown that big data and deep learning are entrenching existing biases and that some algorithms are even inherently biased due to problems such as ‘regression to the mode’. Our focus is on creating LT that is more inclusive with respect to gender [2], race [3], sexual orientation [4], and persons with disabilities [5,6]. The workshop will focus on creating speech and language technology that addresses EDI not only in English, but also in less-resourced languages.
The broader objectives of LT-EDI-2023 are:
- To investigate challenges related to speech and language resource creation for EDI.
- To promote research in inclusive LT.
- To adopt and adapt appropriate LT models to suit EDI.
- To provide opportunities for researchers from the LT community around the world to collaborate with other researchers to identify and propose possible solutions for the challenges of EDI.
Our workshop theme focuses on being more inclusive and providing a platform for researchers to create LT of a more inclusive nature. We hope that through these engagements we can develop LT tools to be more inclusive of everyone, including marginalized people.
Call for Papers:
Our main theme in this workshop is equality, diversity, and inclusivity in LT. We invite researchers and practitioners to submit papers reporting on these issues, as well as datasets designed to avoid them. We also encourage qualitative studies related to these issues and how to avoid them. LT-EDI-2023 welcomes theoretical and practical paper submissions on any language that contribute to research in Equality, Diversity and Inclusion. We particularly encourage studies that address either practical applications or the improvement of resources.
Topics of interest include, but are not limited to:
- Dataset development to include EDI
- Gender inclusivity in LT
- LGBTQ+ inclusivity in LT
- Racial inclusivity in LT
- Inclusivity of persons with disabilities in LT
- Speech and language recognition for minority groups
- Unconscious bias and how to avoid it in natural language processing, machine learning, and other LT
- Tackling rumours and fake news about gender, racial, and LGBTQ+ minorities
- Tackling discrimination against gender, racial, and LGBTQ+ minorities
Important dates (subject to change according to guidelines from RANLP):
- First call for workshop papers: 15 February 2023
- Second call for workshop papers: 15 March 2023
- Workshop papers due: 10 July 2023
- Notification of acceptance: 5 August 2023
- Camera-ready papers due: 20 August 2023
- Workshop date: 7 September 2023
Submission:
Papers must describe original, unpublished work, either completed or in progress. Each submission will be reviewed by three program committee members. Accepted papers will be allocated up to 9 pages (full papers) or 5 pages (short papers and posters) in the workshop proceedings and will be presented as an oral paper or a poster. Papers should be formatted according to the RANLP 2023 style sheet, which is provided on the website. Please submit papers in PDF format.
We are seeking submissions in the following categories:
- Full papers (8 pages)
- Short papers (work in progress, innovative ideas/proposals, or student research proposals: 4 pages)
- Demos (of working online/standalone systems: 4 pages)
Both long and short papers must follow the RANLP 2023 two-column format, using the supplied official style files. The templates can be downloaded from the Style Files and Formatting page. Please do not modify these style files, nor use templates designed for other conferences. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review.
Verification: To guarantee conformance to publication standards, we will be using the ACL Pubcheck tool (https://github.com/acl-org/aclpubcheck). The PDFs of camera-ready papers must be run through this tool prior to their final submission, and we recommend its use at submission time as well.
Organisers
- Bharathi Raja Chakravarthi, Assistant Professor, School of Computer Science, University of Galway, Ireland
- B. Bharathi, Associate Professor, Department of CSE, SSN College of Engineering, Chennai, India
- Josephine Griffith, Assistant Professor, School of Computer Science, University of Galway, Ireland
- Kalika Bali, Researcher, Microsoft Research India
- Paul Buitelaar, Professor in Computer Science and Deputy Director of the Data Science Institute at the University of Galway, Ireland; co-PI of the Insight SFI Research Centre for Data Analytics; and Co-Director of the SFI Centre for Research Training in AI
References
[1] https://aim.gov.ie/wp-content/uploads/2016/06/Diversity-Equality-and-Inclusi...
[2] Kiritchenko, S. and Mohammad, S., 2018. Examining Gender and Race Bias in Two Hundred Sentiment Analysis Systems. In Proceedings of the Seventh Joint Conference on Lexical and Computational Semantics (pp. 43-53).
[3] Sap, M., Card, D., Gabriel, S., Choi, Y. and Smith, N.A., 2019. The Risk of Racial Bias in Hate Speech Detection. In Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics (pp. 1668-1678).
[4] Wu, H.H. and Hsieh, S.K., 2017. Exploring Lavender Tongue from Social Media Texts [In Chinese]. In Proceedings of the 29th Conference on Computational Linguistics and Speech Processing (ROCLING 2017) (pp. 68-80).
[5] Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y. and Denuyl, S., 2020. Unintended Machine Learning Biases as Social Barriers for Persons with Disabilities. ACM SIGACCESS Accessibility and Computing, 125, pp. 1-1.
[6] Hutchinson, B., Prabhakaran, V., Denton, E., Webster, K., Zhong, Y. and Denuyl, S., 2020. Social Biases in NLP Models as Barriers for Persons with Disabilities. In Proceedings of ACL 2020.
With regards,
Dr. Bharathi Raja Chakravarthi
Assistant Professor / Lecturer-above-the-bar
School of Computer Science, University of Galway, Ireland
Insight SFI Research Centre for Data Analytics, Data Science Institute, University of Galway, Ireland
E-mail: bharathiraja.akr@gmail.com, bharathiraja.asokachakravarthi@universityofgalway.ie