International Conference
'LAnguage TEchnologies for Low-resource Languages' (LaTeLL '2026)
Fes, Morocco
30 September, 1 and 2 October 2026
www.latell.org/2026/ [1]
Second Call for Papers
The conference
Natural Language Processing (NLP) has witnessed remarkable progress in
recent years, largely driven by the emergence of deep learning
architectures and, more recently, large language models (LLMs).
Nevertheless, these advances have disproportionately benefited
high-resource languages that possess abundant data for model training.
By contrast, low-resource languages, which account for at least 85% of
the world's linguistic diversity and are often spoken by smaller or
marginalised communities, have not yet reaped the full benefits of
contemporary NLP technologies.
This imbalance can be attributed to several interrelated factors,
including the scarcity of high-quality training data, limited
computational and financial resources, and insufficient community
engagement in data collection and model development. Developing NLP
applications for low-resource languages poses major challenges,
particularly the need for large, well-annotated datasets, standardised
tools, and robust linguistic resources.
Although several workshops have previously addressed NLP for
low-resource languages, _LaTeLL_ represents the first international
conference dedicated specifically to the automatic processing of such
languages. The event aims to provide a forum for researchers to present
and discuss their latest work in NLP in general, and in the development
and evaluation of language models for low-resource languages in
particular.
Conference topics
We invite submissions on a broad range of themes concerning linguistic
and computational studies focusing on low-resource languages, including
but not limited to the following topics:
Language resources for low-resource languages
* Dataset creation and annotation
* Evaluation methodologies and benchmarks for low-resource settings
* Lexical resources, corpora, and linguistic databases
* Crowdsourcing and community-driven data collection
* Tools and frameworks for low-resource language processing
Core language technologies for low-resource languages
* Language modelling and pre-training for low-resource languages
* Speech recognition, text-to-speech, and spoken language
understanding
* Phonology, morphology, word segmentation, and tokenisation
* Syntax: tagging, chunking, and parsing
* Semantics: lexical and sentence-level representation
NLP Applications for low-resource languages
* Information extraction and named entity recognition
* Question answering systems
* Dialogue and interactive systems
* Summarisation
* Machine translation
* Sentiment analysis, stylistic analysis, and argument mining
* Content moderation
* Information retrieval and text mining
Multimodality and Grounding for low-resource languages
* Vision and language for low-resource contexts
* Speech and text multimodal systems
* Low-resource sign language processing
Ethics, Equity, and Social Impact for low-resource languages
* Bias and fairness in low-resource language technologies
* Sociolinguistic considerations in technology development
* Cultural appropriateness and sensitivity
Human-Centred Approaches in low-resource languages
* Usability and accessibility of low-resource language technologies
* Educational applications and language learning
* Community needs assessment and technology adoption
* User experience research in low-resource contexts
Multilinguality and Cross-Lingual Methods for low-resource languages
* Multilingual language models and their adaptation
* Code-switching and code-mixing
* Cross-lingual transfer learning in low-resource languages.
Special Theme Track 1 -- Building Applications Based on Large Language
Models for Low-Resource Languages
_LaTeLL'2026_ will feature a Special Theme Track dedicated to the
development of applications based on Large Language Models (LLMs) for
low-resource languages.
This track aims to explore innovative methodologies, architectures, and
tools that leverage the power of LLMs to enhance linguistic processing,
accessibility, and inclusivity for underrepresented languages.
Contributions are encouraged on topics such as model adaptation and
fine-tuning, multilingual and cross-lingual transfer, ethical and
fairness considerations, and the creation of datasets and benchmarks
that facilitate the integration of LLM-based solutions in low-resource
settings.
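To make the track's scope concrete for readers new to the area, the following is a
minimal, purely illustrative sketch of one direction mentioned above, namely adapting
a multilingual encoder to an invented low-resource-language classification task by
fine-tuning. The model name, toy sentences and labels are assumptions for
illustration, not conference-provided resources; the Hugging Face transformers and
datasets libraries are assumed to be installed.

# Illustrative sketch only (not an official LaTeLL resource or baseline):
# fine-tuning a multilingual encoder on a tiny, invented classification set
# standing in for a low-resource-language corpus.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"  # assumption: any multilingual encoder could be used
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Placeholder examples; a real system would use a curated, annotated corpus.
train = Dataset.from_dict({
    "text": ["toy positive sentence", "toy negative sentence"],
    "label": [1, 0],
})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=32)

train = train.map(tokenize, batched=True)

args = TrainingArguments(output_dir="lrl-demo", num_train_epochs=1,
                         per_device_train_batch_size=2, logging_steps=1)
Trainer(model=model, args=args, train_dataset=train).train()

Submissions to the track would of course work with real curated corpora and proper
evaluation; the sketch only shows the general shape of such a pipeline.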
Special Theme Track 2 -- Modern Standard Arabic (MSA) and Arabic
Dialects
This special track addresses the unique challenges and opportunities in
processing Modern Standard Arabic (MSA) and the rich landscape of Arabic
dialects. The diglossic nature of Arabic, where the formal MSA coexists
with numerous, widely used spoken dialects, presents a significant
hurdle for NLP. While MSA is relatively well-resourced, Arabic dialects
are quintessential examples of low-resource languages, often lacking
standardised orthographies, annotated corpora, and dedicated processing
tools. This track invites submissions on novel research and resources
aimed at bridging this gap and advancing the state of the art in Arabic
language technology. Topics of interest include, but are not limited to:
* Dialect identification and classification
* Creation of corpora and lexical resources for Arabic dialects
* Machine translation between MSA and dialects, and across different
dialects
* Speech recognition and synthesis for dialectal Arabic
* Computational modelling of morphology, syntax, and semantics for
dialects
* NLP applications (e.g., sentiment analysis, NER) for dialectal
user-generated content
* Code-switching between Arabic dialects, MSA, and other languages
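As a purely illustrative sketch of the first topic above, dialect identification can
be framed as character n-gram classification; the snippet below uses scikit-learn,
and the sentences and labels are invented placeholders rather than a track dataset
or baseline.

# Illustrative only: dialect identification as character n-gram classification.
# The training sentences and labels are invented placeholders; a real system
# would be trained and evaluated on a dialect-annotated Arabic corpus.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

train_texts = ["toy msa sentence one", "toy msa sentence two",
               "toy dialect sentence one", "toy dialect sentence two"]
train_labels = ["MSA", "MSA", "EGY", "EGY"]  # placeholder dialect tags

clf = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),  # sub-word cues
    LogisticRegression(max_iter=1000),
)
clf.fit(train_texts, train_labels)
print(clf.predict(["toy dialect sentence three"]))

Character-level features are a common choice here because they capture sub-word
orthographic cues that can distinguish dialects even without standardised spelling.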
Submissions and Publication
_LaTeLL'2026_ welcomes high-quality submissions in English, which may
take one of the following two forms:
* Regular (long) papers: Up to eight (8) pages in length, presenting
substantial, original, completed, and unpublished research.
* Short (poster) papers: Up to four (4) pages in length, suitable for
concise or focused contributions, ongoing research, negative results,
system demonstrations, and similar work. Short papers will be presented
during a dedicated poster session.
The conference will not consider submissions consisting of abstracts
only.
All accepted papers (both long and short) will be published as
electronic proceedings (with ISBN) and made available on the conference
website at the time of the event. The organisers intend to submit the
proceedings for inclusion in the ACL Anthology.
Authors of papers receiving exceptionally positive reviews will be
invited to prepare extended and substantially revised versions for
submission to a leading journal in the field of Natural Language
Processing (NLP).
Further details regarding the submission process will be provided in the
follow-up Calls for Papers.
The conference will also feature a Student Workshop, and awards will be
presented to the authors of outstanding papers.
Important dates
* Submissions due: 1 May 2026
* Reviewing process: 20 May - 20 June 2026
* Notification of acceptance: 25 June 2026
* Camera-ready due: 10 July 2026
* Conference camera-ready proceedings ready: 10 July 2026
* Conference: 30 September, 1 October and 2 October 2026
Organisation
Conference Chair
Ruslan Mitkov (Lancaster University and University of Alicante)
Programme Committee Chairs
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Salima Lamsiyah (University of Luxembourg)
Tharindu Ranasinghe (Lancaster University)
Organising Committee
Maram Alharbi (Lancaster University)
Salmane Chafik (Mohammed VI Polytechnic University)
Ernesto Estevanell (University of Alicante)
Further information and contact details
The follow-up calls will provide more details on the conference venue
and list keynote speakers and members of the programme committee once
confirmed.
The conference website is www.latell.org/2026/ [1] and will be updated
on a regular basis. For further information, please email
2026(a)latell.org
Registration will open in March 2026.
--
Amal Haddad Haddad (She/her)
Facultad de Traducción e Interpretación
Universidad de Granada |https://www.ugr.es/personal/amal-haddad-haddad
Lexicon Research Group |http://lexicon.ugr.es/haddad
Co-Convenor, BAAL SIG 'Humans, Machines,
Language'|https://r.jyu.fi/humala
Event Coordinator, BAAL SIG 'Language, Learning and Teaching'
Links:
------
[1] http://www.latell.org/2026/
There are two open PhD positions in Natural Language Processing available at the Institute for Computer Science at Leipzig University, in the group of Leonie Weissweiler.
Potential research topics include but aren’t limited to:
- Linguistic Interpretability
- Multilingual Evaluation
- Computational Typology
Positions are fully funded for at least three years and will be affiliated with the ScaDS.AI graduate school. Ideal PhD candidates have a master's degree in computational linguistics, computer science or a related discipline.
Positions: Full-time (TV-L E13) for 3 years
Preferred start date: 1st of April 2026
More information: https://leonieweissweiler.github.io/phd_leipzig.pdf
Application deadline: 15th of January 2026
-----------------------------------------------------------------------------
Call for submissions
1st International Workshop on Quality in Large Language Models and
Knowledge Graphs
In conjunction with EDBT/ICDT 2026
QuaLLM-KG @ EDBT/ICDT 2026
24 March 2026, Tampere, Finland
Website: https://quallmkg2026.github.io/
-----------------------------------------------------------------------------
**** Goal ****
QuaLLM-KG aims to bring together researchers and practitioners working
on quality issues at the intersection of large language models and
knowledge graphs. The workshop focuses on theories, methods, and
applications for assessing, improving, and monitoring the quality of
LLMs and KGs.
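As a small, hypothetical illustration of what assessing KG quality can mean in
practice, the sketch below checks a toy triple set for completeness against a
required-property profile; all entities, properties and requirements are invented
for illustration, and real settings would typically rely on constraint languages
such as SHACL or ShEx.

# Hypothetical completeness check over a toy knowledge graph: every entity
# typed as Person is expected to carry birthDate and nationality. All names
# below are invented for illustration.
triples = [
    ("alice", "type", "Person"),
    ("alice", "birthDate", "1990-01-01"),
    ("bob", "type", "Person"),
    ("bob", "nationality", "FI"),
]
required = {"Person": {"birthDate", "nationality"}}

props = {}
for s, p, o in triples:
    props.setdefault(s, {}).setdefault(p, set()).add(o)

for entity, ps in props.items():
    for cls in ps.get("type", set()):
        missing = required.get(cls, set()) - set(ps)
        if missing:
            print(f"{entity} ({cls}) is missing: {sorted(missing)}")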
**** Important Dates ****
- Submission deadline: January 15th, 2026
- Notification: February 8th, 2026
- Camera-ready: February 20th, 2026
**** Topics ****
* Quality in Knowledge Graphs
- Accuracy, consistency, completeness, freshness
- Schema validation, constraint checking, error detection
- Entity resolution, link prediction, ontology alignment
- Provenance, explainability, trust in KG data
- KG quality in dynamic and large-scale settings
* Quality in Large Language Models
- Hallucination reduction & factual grounding
- Bias detection and mitigation
- Metrics & benchmarks for quality assessment
- Uncertainty estimation, calibration, interpretability
* Synergies Between KGs and LLMs
- KG-based grounding and fact-checking for LLMs
- LLM-based KG enrichment, extraction, entity linking
- Quality-driven prompting and fine-tuning
- Hybrid KG–LLM architectures for quality assurance
- Evaluation frameworks for integration and consistency
* Benchmarks and Evaluation Frameworks
- Datasets and metrics for KG & LLM quality
- Tools for monitoring, validation, maintenance
- Reproducibility, transparency, responsible AI
* Applications and Case Studies
- Scientific, industrial, enterprise use cases
- Quality at scale
- Human-in-the-loop quality control
**** Submissions ****
We invite submissions of full papers (up to 8 pages, excluding
references) and short papers (up to 4 pages, excluding references)
describing work in progress, systems, demos, applications, or
vision/innovative ideas.
Submissions should be in the CEUR-WS proceedings template.
Accepted papers will be published in the CEUR Workshop proceedings
(CEUR-WS.org).
**** Workshop Organizers ****
- Soror Sahri, Université Paris Cité, France
- Sven Groppe, University of Lübeck, Germany
- Farah Benamara, University of Toulouse, France & IPAL-CNRS, Singapore
--
========================
Farah Benamara Zitoune
Professor in Computer Science, Université de Toulouse
IRIT and IPAL-CNRS Singapore
118 Route de Narbonne, 31062, Toulouse.
Tel : +33 5 61 55 77 06
http://www.irit.fr/~Farah.Benamara
==================================
The next meeting of the Edge Hill Corpus Research Group will take place online (MS Teams) on Friday 19 December 2025, 2:00-3:30 pm (GMT<https://time.is/United_Kingdom>).
Topic: Philosophies of Language and Corpus Linguistics
Speaker: Alan Partington (SiBol Group / CoLiTec)
Title: Language Distrusted, Language Ignored, Language Recovered: From Plato to Corpus Linguistics and Beyond
The abstract and registration link are here: https://sites.edgehill.ac.uk/crg/next
Attendance is free. Registration closes on Wednesday 17 December.
If you have problems registering, or have any questions, please email the organiser, Costas Gabrielatos (gabrielc(a)edgehill.ac.uk<mailto:gabrielc@edgehill.ac.uk>).
CFP: LT4HALA 2026 - The Fourth Workshop on Language Technologies for Historical and Ancient Languages
Website: https://circse.github.io/LT4HALA/2026/
Date: Monday, May 11 2026
Place: co-located with LREC 2026, May 11-16, Palma, Mallorca (Spain)
Submission page: TBA
DESCRIPTION
LT4HALA 2026 is a one-day workshop that seeks to bring together scholars who are developing and/or using Language Technologies (LTs) for historically attested languages, so as to foster cross-fertilization between the Computational Linguistics community and the areas of the Humanities dealing with historical linguistic data, e.g. historians, philologists, linguists, archaeologists and literary scholars. LT4HALA 2026 follows LT4HALA 2020, 2022 and 2024, which were organized in the context of LREC 2020, LREC 2022 and LREC-COLING 2024, respectively. Despite the current availability of large collections of digitized texts written in historical languages, such interdisciplinary collaboration is still hampered by the limited availability of annotated linguistic resources for most historical languages. Creating such resources is a challenge and an obligation for LTs, both to support historical linguistic research with the most up-to-date technologies and to preserve the precious linguistic data that survive from past times.
Relevant topics for the workshop include, but are not limited to:
* creation and annotation of linguistic resources (both lexical and textual);
* role of digital infrastructures, such as CLARIN, in supporting research based on language resources for historical and ancient languages;
* handling spelling variation;
* detection and correction of OCR errors;
* deciphering;
* morphological/syntactic/semantic analysis of textual data;
* adaptation of tools to address diachronic/diatopic/diastratic variation in texts;
* teaching ancient languages with LTs;
* NLP-driven theoretical studies in historical linguistics;
* NLP-driven analysis of literary ancient texts;
* evaluation of LTs designed for historical and ancient languages;
* LLMs for the automatic analysis of ancient texts.
SHARED TASKS
LT4HALA 2026 will also host:
* the 4th edition of EvaLatin<https://circse.github.io/LT4HALA/2026/EvaLatin>, a campaign entirely devoted to the evaluation of NLP tools for Latin. This new edition will focus on two tasks: dependency parsing and Named Entity Recognition. Dependency parsing will be based on the Universal Dependencies framework.
* the 5th edition of EvaHan<https://circse.github.io/LT4HALA/2026/EvaHan>, the campaign for the evaluation of NLP tools for Ancient Chinese. EvaHan 2026 will focus on Ancient Chinese OCR (Optical Character Recognition) Evaluation.
* the 2nd edition of EvaCun<https://circse.github.io/LT4HALA/2026/EvaCun>, the campaign for the evaluation of Ancient Cuneiform Languages, with shared tasks on transliteration normalization, morphological analysis and lemmatization, Named Entity Recognition of Akkadian and/or Sumerian.
SUBMISSIONS
Submissions should be 4 to 8 pages in length and follow the LREC 2026 stylesheet (see below). The maximum number of pages excludes potential Ethics Statements and discussion on Limitations, acknowledgements and references, as well as data and code availability statements. Appendices or supplementary material are not permitted during the initial submission phase, as papers should be self-contained and reviewable on their own.
Papers must present original, previously unpublished work. Papers must be anonymized to support double-blind reviewing; submissions must therefore not include authors' names and affiliations. The submissions should also avoid links to non-anonymized repositories: the code should either be submitted as supplementary material in the final version of the paper, or as a link to an anonymized repository (e.g., Anonymous GitHub or Anonym Share). Papers that do not conform to these requirements will be rejected without review.
Submissions should follow the LREC stylesheet, which is available on the LREC 2026 website on the Author’s kit page<https://lrec2026.info/authors-kit/>.
Each paper will be reviewed by three independent reviewers.
Accepted papers will appear in the workshop proceedings, which include both oral and poster papers in the same format. Determination of the presentation format (oral vs. poster) is based solely on an assessment of the optimal method of communication (more or less interactive), given the paper content.
As for the shared tasks, participants will be required to submit a technical report for each task (with all the related sub-tasks) they took part in. Technical reports will be included in the proceedings as short papers: the maximum length is 4 pages (excluding references) and they should follow the LREC 2026 official format. Reports will receive a light review (we will check for the correctness of the format, the exactness of results and ranking, and overall exposition). All participants will have the possibility to present their results at the workshop. Reports of the shared tasks are not anonymous.
WORKSHOP IMPORTANT DATES
17 February 2026: submissions due
13 March 2026: reviews due
16 March 2026: notifications to authors
27 March 2026: camera-ready due
Shared tasks deadlines are available in the specific web pages: EvaLatin<https://circse.github.io/LT4HALA/2026/EvaLatin>, EvaHan<https://circse.github.io/LT4HALA/2026/EvaHan>, EvaCun<https://circse.github.io/LT4HALA/2026/EvaCun>.
Identify, Describe and Share your LRs!
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of your research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable their reuse and replicability of experiments (including evaluation ones).
Dear colleagues,
We are writing to invite your collaboration in a community-driven initiative
to develop annotation schemas for scientific process descriptions in
research articles. The effort is inspired by the spirit of schema.org
(https://schema.org/), but focuses specifically on capturing experimental
and simulation workflows across scientific domains. The resulting schemas
will be openly published as templates in the Open Research Knowledge Graph
(ORKG, https://orkg.org/) and will form the basis of a paper planned for
Nature Scientific Data (https://www.nature.com/sdata/).
Motivation
Scientific papers describe complex processes (e.g., ALD and CVD in materials
science, PCR and CRISPR in molecular biology, tensile and fatigue testing in
engineering, leaching experiments in environmental science, RCTs and
cognitive tasks in psychology) using highly variable narrative text. This
variability makes it difficult to:
* design consistent, interoperable annotation guidelines,
* build cross-domain corpora of scientific methods,
* compare and align experimental setups across papers, and
* create FAIR, reusable metadata about how studies are actually
carried out.
Our goal is to define annotation schemas for these processes (inputs,
conditions, outputs, roles, and relations) and to populate them from
full-text articles. These schemas and resulting corpora are intended as
shared resources for corpus linguistics, NLP, scientific text mining, and
downstream applications.
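To make the intended structure concrete, the sketch below shows one hypothetical way
a process description with inputs, conditions and outputs could be represented; all
field names and values are illustrative assumptions only, and the actual schemas
will be defined collaboratively and published as ORKG templates.

# Hypothetical, simplified representation of a process annotation (the real
# schemas will be developed by the initiative; names and values are invented).
from dataclasses import dataclass, field

@dataclass
class ProcessStep:
    action: str                                   # e.g. "deposit", "amplify"
    inputs: list[str] = field(default_factory=list)
    conditions: dict[str, str] = field(default_factory=dict)
    outputs: list[str] = field(default_factory=list)

@dataclass
class ProcessDescription:
    process_type: str                             # e.g. "atomic layer deposition"
    steps: list[ProcessStep] = field(default_factory=list)

example = ProcessDescription(
    process_type="atomic layer deposition",
    steps=[ProcessStep(action="deposit",
                       inputs=["trimethylaluminium", "water"],
                       conditions={"temperature": "250 C"},
                       outputs=["Al2O3 thin film"])],
)
print(example)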
Why Collaborate
We are seeking contributors who can:
* provide collections of full-text articles (~50+) describing a
specific experimental or simulation process in their field,
* offer expert feedback on automatically mined process schemas, or
* run the schema-miner workflow themselves (with our support) and help
refine the resulting schema.
Individual or small-team participation is welcome, and co-authorship
opportunities are available depending on involvement.
A wide variety of processes can be included: thin-film deposition, synthetic
chemistry reactions, gene editing workflows, fatigue testing, soil leaching
experiments, drug dissolution assays, fMRI tasks, cognitive experiments, and
many more.
A broader (non-exhaustive) list is here:
https://docs.google.com/document/d/1iyL1l9vCXhnQ0To7j79vlr-pW4JvPlQC95svygqRDfg/edit
How to Participate
Please register your interest using this short form:
https://forms.gle/9WEdouw4yMyNHcn19
We will notify selected contributors by January 31, 2026. Data collection
and schema mining will conclude by April 30, 2026, followed by manuscript
preparation.
We hope members of this community will consider contributing to this effort
to develop shared annotation schemas and corpora of scientific process
descriptions, a step toward more comparable, analyzable, and reusable
scientific text resources. Also, please help us spread the word!
Best regards,
Jennifer D'Souza
TIB - Leibniz Information Centre for Science and Technology
(on behalf of the schema-miner coordination team)
[apologies for cross posting]
DeTermIt! Workshop @ LREC 2026
Second Workshop on Evaluating Text Difficulty in a Multilingual Context
Location: Palau de Congressos de Palma, Palma de Mallorca (Spain)
#####################
First Call for Papers
Schedule
- Paper submissions: 23 February 2026
- Notification of acceptance: 13 March 2026
- Camera-ready due: 30 March 2026
- Workshop: one of 11, 12, or 16 May 2026 (half-day)
All deadlines are 11:59PM UTC-12:00 AoE (“Anywhere on Earth”)
For more information, please visit:
Website: https://determit2026.dei.unipd.it/
#####################
In today’s interconnected world, where information dissemination knows no linguistic bounds, it is crucial to ensure that knowledge is accessible to diverse audiences, regardless of language proficiency and domain expertise. Automatic Text Simplification (ATS) and text difficulty assessment are central to this goal, especially in the age of Large Language Models (LLMs) and Generative AI (GenAI), which increasingly mediate access to information.
The second edition of the DeTermIt! workshop focuses on the evaluation and modeling of text difficulty in multilingual, terminology-rich contexts, with a particular emphasis on the interaction between:
- text simplification,
- terminology and conceptual complexity, and
- LLM/GenAI-based generation and rewriting.
The 2026 edition builds on the first DeTermIt! workshop held at LREC-COLING 2024 (https://determit2024.dei.unipd.it/), as well as related initiatives such as the CLEF SimpleText track (https://simpletext-project.com/), which provides reusable data and benchmarks for scientific text summarization and simplification. DeTermIt! 2026 aims to bring together researchers and practitioners interested in terminology-aware simplification, lexical and conceptual difficulty, and evaluation protocols for GenAI systems.
We welcome contributions that address theoretical, methodological, and applied aspects of text difficulty, including resource creation and evaluation (e.g., corpora, datasets, and benchmarks), with a focus on how linguistic complexity, specialized terminology, and domain knowledge interact with human understanding. In particular, we encourage work that explores how LLMs and GenAI can be evaluated, constrained, or guided to produce readable, faithful, and accessible texts.
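For readers unfamiliar with text difficulty measures, the sketch below computes one
classical surface-level metric, the Flesch Reading Ease score; the syllable counter
is a crude heuristic and the example sentence is invented. The workshop naturally
targets far richer notions of difficulty (terminology, conceptual complexity, GenAI
evaluation) than such surface formulas.

# Minimal sketch of a classical surface readability measure (Flesch Reading Ease).
# The syllable counter is a crude vowel-group heuristic, for illustration only.
import re

def count_syllables(word: str) -> int:
    # Count groups of consecutive vowels as syllables (rough approximation).
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 206.835 - 1.015 * (len(words) / len(sentences)) - 84.6 * (syllables / len(words))

print(flesch_reading_ease("This is a short sentence. It is easy to read."))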
#####################
Topics of Interest
#####################
We invite submissions on (but not limited to) the following themes:
1. Theoretical and Modeling Perspectives
- Cognitive and linguistic models of text and lexical complexity.
- Multilingual readability and text difficulty prediction.
- Modeling conceptual difficulty and domain-specific terminology.
- Theoretical connections between lexicography, terminology, and text simplification.
2. Terminology and Conceptual Complexity
- Identification and classification of specialized terms and concepts.
- Estimation of term difficulty for lay readers and second language learners.
- Use of terminological databases, ontologies, and knowledge graphs in simplification pipelines.
- Methods for adapting domain-specific terminology for accessible communication (e.g., in medicine, law, technology).
3. Generative and Explainable AI for Text Simplification
- LLM- and GenAI-based approaches to text simplification and paraphrasing.
- Terminology-Augmented Generation (TAG) and term-preserving simplification.
- Evaluation of GenAI outputs: readability, factuality, terminology fidelity, and hallucination analysis.
- Readability-controlled or difficulty-controlled generation; controllable simplification.
- Human-centered and explainable approaches to text accessibility in GenAI systems.
4. Resources, Benchmarks, and Evaluation Frameworks
- Corpora, annotation schemes, and benchmarks for text difficulty and simplification.
- Datasets and methods for evaluating terminology-aware simplification and explanation.
- FAIR and reusable resources for multilingual text accessibility.
- Evaluation protocols and metrics for cross-lingual and cross-domain simplification and GenAI-based rewriting.
5. Applications and Case Studies
- Domain-specific simplification (e.g., healthcare, legal, scientific communication).
- Tools and systems for educational settings, language learning, or accessible communication.
- User studies, human evaluation setups, and mixed-method approaches to assessing text difficulty and GenAI-assisted simplification.
- Industrial and real-world experiences with integrating ATS and terminology into LLM-driven workflows.
#####################
Submission Guidelines
#####################
We invite original contributions, including research papers, case studies, negative results, and system demonstrations.
When submitting a paper through the START system of LREC 2026, authors will be asked to provide essential information about language resources (in a broad sense: data, tools, services, standards, evaluation packages, etc.) that have been used for the work described in the paper or are a new result of the research. ELRA strongly encourages all authors to share the resources described in their papers to support reproducibility and reusability.
Papers must be compliant with the stylesheet adopted for the LREC 2026 Proceedings (see https://lrec2026.info/authors-kit/).
Accepted papers will be published in the LREC 2026 Workshop Proceedings.
PAPER TYPES
We accept three types of submissions:
- Regular long papers – up to eight (8) pages of content, presenting substantial, original, completed, and unpublished work.
- Short papers – up to four (4) pages of content, describing smaller focused contributions, work in progress, negative results, or system demonstrations.
- Position papers – up to eight (8) pages of content, discussing key open challenges, methodological issues, and cross-disciplinary perspectives on text difficulty, terminology, and GenAI.
References do not count toward the page limits.
#####################
Organizers
#####################
Chairs
Giorgio Maria Di Nunzio, University of Padua, Italy
Federica Vezzani, University of Padua, Italy
Liana Ermakova, Université de Bretagne Occidentale, France
Hosein Azarbonyad, Elsevier, The Netherlands
Jaap Kamps, University of Amsterdam, The Netherlands
Scientific Committee
Florian Boudin - Nantes University, France
Lynne Bowker - University of Ottawa, Canada
Sara Carvalho - Universidade NOVA de Lisboa / Universidade de Aveiro, Portugal
Rute Costa - Universidade NOVA de Lisboa, Portugal
Eric Gaussier - University Grenoble Alpes, France
Natalia Grabar - CNRS, France
Ana Ostroški Anić - Institute of Croatian Language and Linguistics, Croatia
Tatiana Passali - Aristotle University of Thessaloniki, Greece
Grigorios Tsoumakas - Aristotle University of Thessaloniki, Greece
Sara Vecchiato - University of Udine, Italy
Cornelia Wermuth - KU Leuven, Belgium
#####################
Contact
#####################
For inquiries, please contact:
giorgiomaria.dinunzio(a)unipd.it <mailto:giorgiomaria.dinunzio@unipd.it>
Uppsala University is hiring a Substitute Senior Lecturer in Computational Linguistics, half-time for one year starting in January 2026:
https://www.uu.se/om-uu/jobba-hos-oss/lediga-jobb/jobbannons?query=883187
Duties include supervision of master’s theses and teaching within the international master’s program in Language Technology, as well as research. Application deadline: December 23, 2025.
Joakim Nivre
Professor of Computational Linguistics
Uppsala University
New Book Series: Corpus Linguistics and Technology-Mediated Language Education in the AI Era, Applied Linguistics Press
Corpus linguistics and technology-mediated language education in the AI era invites proposals for authored or edited volumes that advance
trustworthy, reproducible work at the intersection of corpora, digital learning environments and AI-supported language pedagogy. The series
encourages submissions that combine corpus design, annotation, analytics and AI-based learning ecosystems to improve educational
decision-making with traceable, verifiable data.
Topics of interest include corpus pedagogy and AI interfaces in education, multilingual and multimodal learning, accessibility, Data-driven
learning (DDL), corpora and technology-mediated language education, Corpus-Based Language Pedagogy (CBLP), corpus-literacy,
AI literacy, language data and AI, low-resource languages and corpus education, open datasets in corpus-based pedagogy, shareable code,
AI risk evaluation, learner metacognition in corpus education and teacher/learner agency in CBLP and DDL.
Corpus linguistics and technology-mediated language education in the AI era is open access and encourages global authorship. Proposals
submitted to the series will undergo initial evaluation by the ALP General Editor and the series Co-Editors and will then be sent out for external peer review.
Please send your proposals outlining aim(s), topics addressed, and how the volume fulfils the ALP mission to contribute to open science.
To submit a proposal for this book series, download the proposal template HERE<https://docs.google.com/document/d/1KK57vC0-hwqHfd7ilaJC7zngLigXTFGO/edit?r…> and return it, when completed, to the series Editors via email (pascualf(a)um.es and maqing(a)eduhk.hk).
We welcome proposals for monographs and edited volumes.
Series Editors: Pascual Pérez-Paredes (Universidad de Murcia) & Qing (Angel) Ma (The Education University of Hong Kong)
Applied Linguistics Press<https://www.appliedlinguisticspress.org/home>, founded in 2023 by Prof Luke Plonsky and run by volunteers, is a scholar-led digital publisher promoting open science, fair practice, and wider access, offering monographs and collections with multimedia features.
Feel free to contact us if you’d like to discuss your idea.
Pascual Pérez-Paredes
https://webs.um.es/pascualf
Ninth Workshop on Universal Dependencies (UDW 2026)
May 2026, Palma de Mallorca, Spain (co-located with LREC 2026)
https://universaldependencies.org/udw26/
Universal Dependencies (UD, https://universaldependencies.org) is a
framework for cross-linguistically consistent treebank annotation that
has so far been applied to over 180 languages. The framework aims to
capture similarities as well as idiosyncrasies among typologically
different languages (e.g., morphologically rich languages, pro-drop
languages, and languages featuring clitic doubling). The goal in
developing UD was not only to support comparative evaluation and
cross-lingual learning but also to facilitate multilingual natural
language processing, enable comparative linguistic studies, and
provide resources for language model understanding and evaluation.
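For readers new to UD, the sketch below reads a tiny, invented sentence in the
underlying CoNLL-U format, in which each token is one line of ten tab-separated
fields (ID, FORM, LEMMA, UPOS, XPOS, FEATS, HEAD, DEPREL, DEPS, MISC); only a few of
the fields are printed.

# Minimal sketch of reading UD's CoNLL-U format. The three-token example
# sentence is invented for illustration.
conllu = """\
# text = Dogs bark.
1\tDogs\tdog\tNOUN\t_\tNumber=Plur\t2\tnsubj\t_\t_
2\tbark\tbark\tVERB\t_\t_\t0\troot\t_\t_
3\t.\t.\tPUNCT\t_\t_\t2\tpunct\t_\t_
"""

for line in conllu.splitlines():
    if not line or line.startswith("#"):
        continue
    cols = line.split("\t")
    token_id, form, lemma, upos = cols[0], cols[1], cols[2], cols[3]
    head, deprel = cols[6], cols[7]
    print(f"{token_id}: {form} ({lemma}, {upos}) --{deprel}--> head {head}")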
The Universal Dependencies Workshop series was started to create a
forum for discussion of the theory and practice of UD, its use in
research and development, and its future goals and challenges. Some of
the previous workshops have been co-located with COLING, EMNLP, and
SyntaxFest. We invite papers on all topics relevant to UD, including
but not limited to:
- Theoretical foundations and universal guidelines
- Linguistic analysis of specific languages and/or constructions
- Language typology and linguistic universals
- Treebank annotation, conversion, and validation
- Word segmentation, morphological tagging and syntactic parsing
- Use of UD data for evaluating or understanding language models
- Linguistic studies based on the UD data
Priority will be given to papers that adopt a cross-lingual perspective.
## Important Dates
- Paper submission deadline: February 16, 2026
- Notification of acceptance: March 16, 2026
- Camera-ready version due: March 30, 2026
- Conference dates: May 11-16, 2026
We invite submissions in two formats:
- Regular (long) papers up to 8 pages of content
(excluding references and appendices). Regular papers should present
substantial, original, and unpublished research, including empirical
evaluation results where appropriate.
- Short papers up to 4 pages of content (excluding references and
appendices). Short papers may offer smaller, focused contributions,
such as work in progress, negative results, surveys, or opinion
pieces.
We also welcome non-archival papers, defined as work that has already
been published or accepted for publication at another computational
linguistics venue. These papers may be presented at the workshop but
will not appear in the LREC 2026 Workshop Proceedings.
Accepted papers will be given one additional page to address reviewer
comments.
## Paper Submission, Review Process and Selection Criteria
Submissions will be handled via the START Conference Manager. The
submission link will be provided on the workshop website as soon as it
becomes available. Papers should describe original work; they should
emphasise completed work rather than intended work, and should
indicate clearly the state of completion of the reported results.
Submissions will be judged on correctness, originality, technical
strength, significance and relevance to the conference, and interest
to the attendees.
All submissions should follow the two-column LREC style guidelines. We
strongly recommend the use of the LaTeX style files, OpenDocument, or
Microsoft Word templates created for LREC:
<https://lrec2026.info/authors-kit/>. Unlike LREC main conference
submissions, UDW submissions are allowed to include appendices, and
the UDW makes a distinction between short (up to four pages) and long
papers (up to eight pages). All papers must be anonymous, i.e., not
reveal author(s) on the title page or through self-references. So,
e.g., “We previously showed (Smith, 2020) …”, should be avoided.
Instead, use citations such as “Smith (2020) previously showed …”.
All papers will undergo a double-blind peer review process, with final
acceptance decisions made by the workshop chairs. Submissions
that violate the requirements above will be rejected without review.
## LRE-Map and Sharing Language Resources
When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense, i.e.
also technologies, standards, evaluation kits, etc.) that have been
used for the work described in the paper or are a new result of your
research. Moreover, ELRA encourages all LREC authors to share the
described LRs (data, tools, services, etc.) to enable their reuse and
replicability of experiments (including evaluation ones).
## Presentation Format
Accepted papers will be presented as oral or poster presentations. The
mode of presentation will be determined by the workshop chairs and
does not reflect the quality of the submission.
Accepted papers will be published in the LREC 2026 Workshop Proceedings.
## Organizing committee
Çağrı Çöltekin, Tübingen University
Kaja Dobrovoljc, University of Ljubljana & Jozef Stefan Institute
Joakim Nivre, Uppsala University