Neural language models have revolutionised natural language processing (NLP), providing state-of-the-art results for many tasks. However, their effectiveness depends largely on the resources available for pre-training, so language models (LMs) often struggle with low-resource languages in both training and evaluation. Recently, there has been a growing trend towards developing and adopting LMs for low-resource languages.
LoResLM aims to provide a forum for researchers to share and discuss their ongoing work on LMs for low-resource languages. Topics of interest include, but are not limited to, the following:
- Building language models for low-resource languages.
- Adapting/extending existing language models and large language models (LLMs) for low-resource languages.
- Corpora creation and curation technologies for training LMs/LLMs for low-resource languages.
- Benchmarks to evaluate LMs/LLMs in low-resource languages.
- Prompting/in-context learning strategies for low-resource languages with LLMs.
- Reviews of available corpora to train/fine-tune LMs/LLMs for low-resource languages.
- Multilingual/cross-lingual LMs/LLMs for low-resource languages.
- Applications of LMs/LLMs for low-resource languages (e.g. machine translation, chatbots, content moderation).
We follow the COLING 2025 standards for submission format and guidelines. LoResLM 2025 invites the submission of long papers of up to eight pages and short papers of up to four pages. These page limits only apply to the main body of the paper. At the end of
the paper (after the conclusions but before the references), papers need to include a mandatory section discussing the limitations of the work and, optionally, a section discussing ethical considerations. Papers can include unlimited pages of references and
an unlimited appendix.
To prepare your submission, please make sure to use the COLING 2025 style files.
Organisers
- Hansi Hettiarachchi, Lancaster University, UK
- Tharindu Ranasinghe, Lancaster University, UK
- Paul Rayson, Lancaster University, UK
- Ruslan Mitkov, Lancaster University, UK
- Mohamed Gaber, Birmingham City University, UK
- Damith Premasiri, Lancaster University, UK
- Fiona Anting Tan, National University of Singapore, Singapore
- Lasitha Uyangodage, University of Münster, Germany
Program Committee
- Burcu Can, University of Stirling, UK
- Çağrı Çöltekin, University of Tübingen, Germany
- Debashish Das, Birmingham City University, UK
- Alphaeus Dmonte, George Mason University, USA
- Daan van Esch, Google
- Ignatius Ezeani, Lancaster University, UK
- Anna Furtado, University of Galway, Ireland
- Amal Htait, Aston University, UK
- Ali Hürriyetoğlu, Wageningen University & Research, Netherlands
- Diptesh Kanojia, University of Surrey, UK
- Jean Maillard, Meta
- Maite Melero, Barcelona Supercomputing Centre, Spain
- Muhidin Mohamed, Aston University, UK
- Nadeesha Pathirana, Aston University, UK
- Alistair Plum, University of Luxembourg, Luxembourg
- Sandaru Seneviratne, Australian National University, Australia
- Ravi Shekhar, University of Essex, UK
- Taro Watanabe, Nara Institute of Science and Technology, Japan
- Phil Weber, Aston University, UK