๐—ฆ๐—ฒ๐—ฐ๐—ผ๐—ป๐—ฑ ๐—–๐—ฎ๐—น๐—น ๐—ณ๐—ผ๐—ฟ ๐—ฃ๐—ฎ๐—ฝ๐—ฒ๐—ฟ๐˜€ - ๐—ง๐—ต๐—ฒ ๐—ฆ๐—ฒ๐—ฐ๐—ผ๐—ป๐—ฑ ๐—ช๐—ผ๐—ฟ๐—ธ๐˜€๐—ต๐—ผ๐—ฝ ๐—ผ๐—ป ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ ๐— ๐—ผ๐—ฑ๐—ฒ๐—น๐˜€ ๐—ณ๐—ผ๐—ฟ ๐—Ÿ๐—ผ๐˜„-๐—ฅ๐—ฒ๐˜€๐—ผ๐˜‚๐—ฟ๐—ฐ๐—ฒ ๐—Ÿ๐—ฎ๐—ป๐—ด๐˜‚๐—ฎ๐—ด๐—ฒ๐˜€

[Workshop website - https://loreslm.github.io/home]
[CFP - https://loreslm.github.io/cfp]
[Submissions - https://openreview.net/group?id=eacl.org/EACL/2026/Workshop/LoResLM]

Neural language models have revolutionised natural language processing (NLP), providing state-of-the-art results for many tasks. However, their effectiveness depends heavily on the resources available for pre-training, so language models (LMs) often struggle with low-resource languages in both training and evaluation. Recently, there has been a growing trend towards developing and adopting LMs for low-resource languages. Supporting this important shift, LoResLM aims to provide a forum for researchers to share and discuss their ongoing work on LMs for low-resource languages.

๐—ง๐—ผ๐—ฝ๐—ถ๐—ฐ๐˜€
LoResLM 2026 invites submissions on a broad range of topics related to the development and evaluation of neural language models for low-resource languages. We welcome research that explores modalities beyond text and encourage work on low-resource dialects in addition to major language varieties. Topics of interest include, but are not limited to:
• Building language models for low-resource languages.
• Adapting/extending existing language models/large language models for low-resource languages.
• Corpora creation and curation technologies for training language models/large language models for low-resource languages.
• Benchmarks to evaluate language models/large language models in low-resource languages.
• Prompting/in-context learning strategies for low-resource languages with large language models.
• Review of available corpora to train/fine-tune language models/large language models for low-resource languages.
• Multilingual/cross-lingual language models/large language models for low-resource languages.
• Multimodal language models/large language models for low-resource languages.
• Applications of language models/large language models for low-resource languages (e.g., machine translation, chatbots, content moderation).

๐—ฆ๐˜‚๐—ฏ๐—บ๐—ถ๐˜€๐˜€๐—ถ๐—ผ๐—ป ๐—š๐˜‚๐—ถ๐—ฑ๐—ฒ๐—น๐—ถ๐—ป๐—ฒ๐˜€
We follow the EACL 2026 standards for submission format and guidelines. LoResLM 2026 invites submissions of long papers up to 8 pages and short papers up to 4 pages. These page limits only apply to the main body of the paper. At the end of the paper (after the conclusions but before the references), papers need to include a mandatory section discussing the limitations of the work and, optionally, a section discussing ethical considerations. Papers can include unlimited pages of references and an appendix.
To prepare your submission, please make sure to use the EACL 2026 style files available here:
• LaTeX - https://github.com/acl-org/acl-style-files
• Overleaf - https://www.overleaf.com/latex/templates/association-for-computational-linguistics-acl-conference/jvxskxpnznfj
Papers should be submitted through OpenReview using the following link: https://openreview.net/group?id=eacl.org/EACL/2026/Workshop/LoResLM
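For convenience, the sketch below shows a minimal LaTeX skeleton reflecting the structure described above (main body, a mandatory Limitations section after the conclusions, an optional Ethical Considerations section, then references and an appendix). It assumes the current acl.sty and acl_natbib.bst from the acl-org/acl-style-files repository and a hypothetical custom.bib bibliography file; please follow the official EACL 2026 template for the exact package options and author block.

\documentclass[11pt]{article}
% Assumes acl.sty from https://github.com/acl-org/acl-style-files;
% [review] adds submission line numbers, [final] is for the camera-ready version.
\usepackage[review]{acl}
\usepackage{times}
\usepackage{latexsym}

\title{Paper Title}
\author{First Author \\ Affiliation}

\begin{document}
\maketitle

\begin{abstract}
Abstract text.
\end{abstract}

\section{Introduction}
% Main body: up to 8 pages for long papers, 4 pages for short papers.

\section{Conclusion}

\section*{Limitations}
% Mandatory: placed after the conclusions and before the references.

\section*{Ethical Considerations}
% Optional.

\bibliographystyle{acl_natbib}  % shipped with the style files
\bibliography{custom}           % hypothetical custom.bib; reference pages are unlimited

\appendix
\section{Appendix}
% Appendices do not count towards the page limit.

\end{document}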

๐—œ๐—บ๐—ฝ๐—ผ๐—ฟ๐˜๐—ฎ๐—ป๐˜ ๐——๐—ฎ๐˜๐—ฒ๐˜€
• Paper submission: 6th January 2026
• Notification of acceptance: 28th January 2026
• Camera-ready submission: 3rd February 2026
• Workshop: 28th-29th March 2026 (TBD) @ EACL 2026

๐—ฉ๐—ฒ๐—ป๐˜‚๐—ฒ
LoResLM 2026 will be held in conjunction with EACL 2026 in Rabat, Morocco.

๐—ฃ๐—ฟ๐—ผ๐—ฐ๐—ฒ๐—ฒ๐—ฑ๐—ถ๐—ป๐—ด๐˜€
Proceedings of the workshop will appear in the ACL Anthology. For past proceedings, please refer to https://scholar.google.co.uk/citations?user=rvm3HOgAAAAJ&hl=en

๐—ž๐—ฒ๐˜†๐—ป๐—ผ๐˜๐—ฒ ๐—ฆ๐—ฝ๐—ฒ๐—ฎ๐—ธ๐—ฒ๐—ฟ
Prof Barbara Plank - Full Professor and Chair for AI and Computational Linguistics at Ludwig-Maximilians-Universität München, Head of the Munich AI and NLP (MaiNLP) lab, and Co-Director of the Centre for Information and Language Processing (CIS)

๐—ฃ๐—ฟ๐—ผ๐—ด๐—ฟ๐—ฎ๐—บ๐—บ๐—ถ๐—ป๐—ด ๐—–๐—ผ๐—บ๐—บ๐—ถ๐˜๐˜๐—ฒ๐—ฒ
David Ifeoluwa Adelani - McGill School of Computer Science, Canada
Idris Abdulmumin - University of Pretoria, South Africa
Godfred Agyapong - University of Florida, USA
Isuri Anuradha - Lancaster University, UK
Laura Bernardy - University of Luxembourg, Luxembourg
Ana-Maria Bucur - University of Lugano, Switzerland
Eleftheria Briakou - Google
Tommaso Caselli - University of Groningen, Netherlands
Çağrı Çöltekin - University of Tübingen, Germany
Charibeth Ko Cheng - De La Salle University, Philippines
Claudiu Creanga - University of Bucharest, Romania
Sourabh Deoghare - Indian Institute of Technology, Bombay, India
Bosheng Ding - Nanyang Technological University, Singapore
Alphaeus Dmonte - George Mason University, USA
Daan van Esch - Google
Ignatius Ezeani - Lancaster University, UK
Anna Furtado - University of Galway, Ireland
Ona de Gibert - University of Helsinki, Finland
Amal Htait - Aston University, UK
Diptesh Kanojia - University of Surrey, UK
Jaroslav Kopčan - Kempelen Institute of Intelligent Technologies, Slovakia
Constantine Lignos - Brandeis University, USA
Cedric Lothritz - Luxembourg Institute of Science and Technology, Luxembourg
Anne-Marie Lutgen - University of Luxembourg, Luxembourg
Sheng Li - Institute of Science Tokyo, Japan
Veronika Lipp - Hungarian Research Centre for Linguistics, Hungary
Vukosi Marivate - University of Pretoria, South Africa
Muhidin Mohamed - Aston University, UK
Simon Münker - Trier University, Germany
Abiodun Modupe - University of Pretoria, South Africa
Fred Philippy - University of Luxembourg, Luxembourg
Md Nishat Raihan - George Mason University, USA
Mariana Romanyshyn - Grammarly
Guokan Shang - Mohamed bin Zayed University of Artificial Intelligence, France
Ravi Shekhar - University of Essex, UK
Archchana Sindhujan - University of Surrey, UK
Hristo Tanev - Joint Research Centre, European Commission
Uthayasanker Thayasivam - University of Moratuwa, Sri Lanka
Raúl Vázquez - University of Helsinki, Finland
Taro Watanabe - Nara Institute of Science and Technology, Japan
Zheng Xin Yong - Brown University, USA
Alexandra Zbaganu - University of Bucharest, Romania

๐—ข๐—ฟ๐—ด๐—ฎ๐—ป๐—ถ๐˜€๐—ถ๐—ป๐—ด ๐—–๐—ผ๐—บ๐—บ๐—ถ๐˜๐˜๐—ฒ๐—ฒ
Hansi Hettiarachchi - Lancaster University, UK
Tharindu Ranasinghe - Lancaster University, UK
Alistair Plum - University of Luxembourg, Luxembourg
Damith Premasiri - Lancaster University, UK
Fiona Anting Tan - National University of Singapore, Singapore
Lasitha Uyangodage - University of Münster, Germany

๐—”๐—ฑ๐˜ƒ๐—ถ๐˜€๐—ผ๐—ฟ๐˜€
Paul Rayson - Lancaster University, UK
Ruslan Mitkov - Lancaster University, UK
Mohamed Gaber - Queensland University of Technology, Australia

๐—ฆ๐˜‚๐—ฝ๐—ฝ๐—ผ๐—ฟ๐˜๐—ฒ๐—ฑ ๐—ฏ๐˜†
The workshop is supported in part by the Artificial Intelligence Journal, which promotes and disseminates AI research.

๐—–๐—ผ๐—ป๐˜๐—ฎ๐—ฐ๐˜ ๐˜‚๐˜€
Contact us at loreslm.contact@gmail.com.
Follow us on social media:
• LinkedIn - https://www.linkedin.com/company/loreslm/
• X - https://x.com/LoResLM2026
• Bluesky - https://bsky.app/profile/loreslm.bsky.social

Best regards,
Tharindu Ranasinghe, on behalf of the organising committee, LoResLM 2026

Dr Tharindu Ranasinghe | Lecturer in Security and Protection Science
School of Computing and Communications | Lancaster University