Call for Papers: Language Understanding in the Human-Machine Era (LUHME)
The LUHME 2024 workshop "Language Understanding in the Human-Machine Era" is part of the 27th European Conference on Artificial Intelligence, ECAI 2024 (https://www.ecai2024.eu/).
Workshop Description

Large language models (LLMs) have revolutionized the development of interactional artificial intelligence (AI) systems by democratizing their use. These models have shown remarkable advancements in applications such as conversational AI and machine translation, marking the undeniable advent of the human-machine era. However, despite their significant achievements, state-of-the-art systems still exhibit shortcomings in language understanding, raising questions about their true comprehension of human languages.

The concept of language understanding has always been contentious, as meaning-making depends not only on form and immediate meaning but also on context. Therefore, understanding natural language involves more than just parsing form and meaning; it requires access to grounding for true comprehension. Equipping language models with linguistics-grounded capabilities remains a complex task, given the importance of discourse, pragmatics, and social context in language understanding.

Understanding language is a doubly challenging task, as it necessitates not only grasping the intrinsic capabilities of LLMs but also examining their impact and requirements in real-world applications. While LLMs have shown effectiveness in various applications, the lack of supporting theories raises concerns about ethical implications, particularly in applications involving human interaction.

The "Language Understanding in the Human-Machine Era" (LUHME) workshop aims to reignite the debate on the role of understanding in natural language use and its applications. It seeks to explore the necessity of language understanding in computational tasks such as machine translation and natural language generation, as well as the contributions of language professionals to enhancing computational language understanding.
Topics of Interest

Topics of interest include, but are not limited to:
• Language understanding in LLMs
• Language grounding
• Psycholinguistic approaches to language understanding
• Discourse, pragmatics and language understanding
• Evaluation of language understanding
• Multi-modality and language understanding
• Socio-cultural aspects in understanding language
• Effects of language misunderstanding by computational models
• Manifestations of language understanding
• Distributional semantics and language understanding
• Linguistic theory and language understanding by machines
• Linguistic, world, and common sense knowledge in language understanding
• Machine translation and/or interpreting and language understanding
• Human vs. machine language understanding
• Role of language professionals in the LLMs era
• Understanding language and explainable AI
Ethics Statement

Research reported at ECAI and the LUHME workshop should avoid harm, be honest and trustworthy, fair and non-discriminatory, and respect privacy and intellectual property. Where relevant, authors can include in the main body of their paper, or on the reference page, a short ethics statement that addresses ethical issues regarding the research being reported and the broader ethical impact of the work. Reviewers will be asked to flag possible violations of relevant ethical principles. Such flagged submissions will be reviewed by a senior member of the programme committee. Authors may be required to revise their paper to include a discussion of possible ethical concerns and their mitigation.
Submission Instructions

Papers must be written in English, be prepared for double-blind review using the ECAI LaTeX template, and not exceed 7 pages (not including references). The ECAI LaTeX template can be found at https://ecai2024.eu/download/ecai-template.zip. Papers should be submitted via OpenReview: https://openreview.net/group?id=eurai.org/ECAI/2024/Workshop/LUHME
Excessive use of typesetting tricks to make content fit is not permitted; please do not modify the style files or layout parameters. You may resubmit any number of times until the submission deadline. The workshop papers will be published in the proceedings (further information will be provided soon).
Important Dates

• Paper submission: 31 May 2024
• Notification of acceptance: 15 July 2024
• Camera-ready papers: 31 July 2024
• LUHME workshop: 19 or 20 October 2024
Invited Speakers

• Alexander Koller, Saarland University
• Anders Søgaard, University of Copenhagen
• Melanie Mitchell, Santa Fe Institute
Organization

This workshop is jointly organized by the chairs of Working Group 1 (Computational Linguistics) and Working Group 7 (Language Work, Language Professionals) of the COST Action LITHME – Language in the Human-Machine Era.
Workshop Organizers

• Rui Sousa-Silva (University of Porto, Portugal)
• Henrique Lopes Cardoso (University of Porto, Portugal)
• Maarit Koponen (University of Eastern Finland, Finland)
• Antonio Pareja-Lora (Universidad de Alcalá, Spain)
• Márta Seresi (Eötvös Loránd University, Hungary)
Program Committee

• Aida Kostikova (Bielefeld University)
• Alex Lascarides (University of Edinburgh)
• Alípio Jorge (University of Porto)
• António Branco (University of Lisbon)
• Belinda Maia (University of Porto)
• Caroline Lehr (ZHAW School of Applied Linguistics)
• Diana Santos (Universitetet i Oslo)
• Efstathios Stamatatos (University of the Aegean)
• Ekaterina Lapshinova-Koltunski (University of Hildesheim)
• Eliot Bytyçi (Universiteti i Prishtinës “Hasan Prishtina”)
• Hanna Risku (University of Vienna)
• Jörg Tiedemann (University of Helsinki)
• Lynne Bowker (University of Ottawa)
• Nataša Pavlović (University of Zagreb)
• Paolo Rosso (Universitat Politècnica de València)
• Ran Zhang (Bielefeld University)
• Ruslan Mitkov (Lancaster University)
• Sule Yildirim Yayilgan (Norwegian University of Science and Technology)
• Tharindu Ranasinghe (Lancaster University)
For further information, please visit https://luhme.web.uah.es/ or contact rssilva@letras.up.pt