Dear all,
Over the last month, the server for OGA and OLA [1] has experienced occasional downtimes, including the current one, due to connectivity issues within the hosting department of Leipzig University. This is an infrastructure issue outside my control, and I am monitoring the situation closely. I will keep you updated if there are significant further developments. I apologise for any inconvenience this may cause.
Best regards,
Giuseppe Celano
-----
[1] https://annis.varro.informatik.uni-leipzig.de
We are pleased to invite participation in the First Workshop on
Optimal Reliance and Accountability in Interaction with Generative
Language Models (ORIGen) to be held in conjunction with the Conference
on Language Modeling (COLM) in Montreal, Canada, on October 10, 2025!
With the rapid integration of generative AI, exemplified by large
language models (LLMs), into personal, educational, business, and even
governmental workflows, such systems are increasingly being treated as
“collaborators” with humans. In such scenarios, underreliance on or
avoidance of AI assistance may forfeit the potential speed,
efficiency, or scalability advantages of a human-LLM team;
simultaneously, there is a risk that subject matter non-experts may
overrely on LLMs and trust their outputs uncritically, with
consequences ranging from the inconvenient to the catastrophic.
Therefore, establishing optimal levels of reliance within an
interactive framework is a critical open challenge as language models
and related AI technologies rapidly advance. Questions of interest include:
- What factors influence overreliance on LLMs?
- How can the consequences of overreliance be predicted and guarded against?
- What verifiable methods can be used to apportion accountability for
the outcomes of human-LLM interactions?
- What methods can be used to imbue such interactions with
appropriate levels of “friction” to ensure that humans think through
the decisions they make with LLMs in the loop?
ORIGen will examine questions of reliance, trust, confidence, and
accountability in interactions with modern generative systems from an
interdisciplinary perspective, and we seek engagement from the NLP,
AI, HCI, robotics, education, and cognitive science communities and
beyond. The workshop will feature paper presentations as well as 4
invited talks from leading AI, NLP, and HCI researchers, and a panel
discussion on the Future of Reliable and Accountable AI. More
information about the workshop can be found at:
https://origen-workshop.github.io
9:00-9:15 - Opening remarks
9:15-9:50 - Invited talk I: Andreas Vlachos is a Professor of Natural
Language Processing and Machine Learning at the Department of Computer
Science and Technology at the University of Cambridge and a Dinesh
Dhamija fellow of Fitzwilliam College. His expertise includes dialogue
modeling, automated fact-checking, imitation and active learning,
semantic parsing, and natural language generation and summarization.
9:50-11:00 - Accepted paper lightning talks: 4 minutes each + 1 minute
transition
11:00-11:15 - Coffee break
11:15-12:00 - Keynote talk: Malihe Alikhani is an Assistant Professor
at Northeastern University’s Khoury College of Engineering and
Visiting Fellow at The Center on Regulation and Markets at Brookings.
She works towards developing safe and fair AI systems that enhance
communication, decision-making, and knowledge-sharing across
disciplines and populations.
12:00-12:35 - Invited talk II: Bertram F. Malle is a Professor of
Cognitive and Psychological Sciences at Brown University. He received
the Society of Experimental Social Psychology (SESP) Outstanding
Dissertation award, an NSF CAREER award, the Decision Analysis Society
2018 best publication award, several HRI best-paper awards, and the
2019 SESP Scientific Impact Award. Malle’s research focuses on moral
psychology and human-machine interaction.
12:35-2:05 - Lunch
2:05-2:40 - Invited talk III: Q. Vera Liao is an Associate Professor
of Computer Science and Engineering at the University of Michigan, and
previously a researcher at Microsoft Research and IBM Research. Her
current interests are in human-AI interaction, responsible AI and AI
transparency, with a goal of bridging emerging AI technologies and
human-centered perspectives.
2:40-3:40 - Poster Session
3:40-4:00 - Coffee break
4:00-4:45 - Panel discussion: Future of Reliable and Accountable AI
Matthias Scheutz, Tufts University
Jesse Thomason, University of Southern California
Diyi Yang, Stanford University
Matthew Marge, DARPA
4:45-5:00 - Conclusion
The list of accepted papers can be found at
https://origen-workshop.github.io/programme/
Nikhil Krishnaswamy
Assistant Professor of Computer Science
*Colorado State University*
The Centre for Computational Linguistics (KU Leuven) is seeking a
research-oriented full-stack developer to contribute to cutting-edge
linguistic infrastructure by reimagining the next generation of GrETEL,
a treebank query system developed at KU Leuven. This position is part of
CLARIAH-VL+ [1], a strategic research infrastructure project funded by
the Flemish Research Foundation (FWO). It provides the opportunity to
work at the intersection of computational linguistics, language
technology, and digital humanities. More information and a link to the
application are available here [2].
Links:
------
[1] https://clariahvl.hypotheses.org/2582
[2] https://www.kuleuven.be/personeel/jobsite/jobs/60510459
-------- Call for participation in a research survey on collecting data
in the era of LLMs --------
Dear Colleagues,
We (researchers originally from the Universities of Stuttgart, Copenhagen,
and Ghent) are conducting a survey on the challenges of collecting data in
the era of large language models (LLMs). In particular, we are
interested in issues such as crowdworkers relying on LLMs to generate
free-text responses that they are expected to write themselves.
Our goal is to better understand the contexts in which researchers
encounter these problems and to find possible solutions.
The survey is completely anonymous and should take about 5–10 minutes to
complete.
You can access it here:
https://ugent.qualtrics.com/jfe/form/SV_6KUJzBhQSgemzpY
We would be very grateful for your input—your perspective will help
support this research.
Thank you for your time and support!
Best regards
Aswathy Velutharambath, Amelie Wührl, Sofie Labat, Tarun Tater and Neele
Falk
First CFP: CHOMPS – Confabulation, Hallucinations, & Overgeneration in Multilingual & Precision-critical Settings
(with our apologies for cross-posting)
Venue: IJCNLP-AACL 2025 (https://2025.aaclnet.org/), Mumbai, India
Date: 23–24 December 2025 (TBC)
Workshop website: https://chomps2025.github.io/
* Description *
Despite rapid advances, LLMs continue to "make things up": a phenomenon that manifests as hallucination, confabulation, and overgeneration, i.e., the production of unsupported and unverifiable text that sounds deceptively plausible. These outputs pose real risks in settings where accuracy and accountability are non-negotiable, including healthcare, legal systems, and education. The aim of the CHOMPS workshop is to find ways to mitigate one of the major hurdles that currently prevent the adoption of large language models in real-world scenarios: namely, their tendency to hallucinate.
The workshop will explore hallucination mitigation in practical situations where it is crucial: in particular, precision-critical applications (such as those in the medical, legal, and biotech domains), as well as multilingual settings (given the lack of resources available to reproduce in other linguistic contexts what can be done for English). In practice, we invite work on the following (non-exclusive) list of topics:
* Workshop topics *
- Metrics, benchmarks and tools for hallucination detection
- Factuality challenges in mission-critical & domain-specific settings (e.g., medical, legal, biotech) and their consequences
- Mitigation strategies during inference or model training
- Studies of hallucinatory and confabulatory behaviors of LLMs in cross-lingual and multilingual scenarios
- Confabulations in language & multimodal (vision, text, speech) models
- Perspectives and case studies from other disciplines
- …
* Invited speakers *
- Anna ROGERS, IT University of Copenhagen
- Danish PRUTHI, IISc Bangalore
- Abhilasha RAVICHANDER, University of Washington
* Submission details *
The workshop is designed with a widely inclusive submission policy so as to foster as vibrant a discussion as possible.
Archival or non-archival submissions may consist of up to 8 pages (long) or 4 pages (short) of content. Dissemination submissions may consist of up to 1 page of content. On acceptance, authors may add one additional page to accommodate changes suggested by the reviewers.
Please use the ACL style templates available here: https://github.com/acl-org/acl-style-files
Submissions must be in PDF format, either (a) via direct submission (https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/CHOMPS) or (b) via ARR commitment (https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/CHOMPS…)
* Important dates *
Paper submission deadline: September 29, 2025
Direct ARR commitment: October 27, 2025
Author notification: November 3, 2025
Camera-Ready due: November 11, 2025
Workshop date: December 23-24, 2025 (TBC)
* Contact *
For questions, please send an email to chomps-aacl2025(a)googlegroups.com or contact one of the workshop chairs:
- Aman Sinha, Université de Lorraine, aman.sinha(a)univ-lorraine.fr
- Raúl Vázquez, University of Helsinki, raul.vazquez(a)helsinki.fi
- Timothee Mickus, University of Helsinki, timothee.mickus(a)helsinki.fi
(Apologies for cross-posting)
Dear colleagues,
This is a reminder that the eLex 2025 early-bird registration deadline is September 5th.
The conference website https://elex.link/elex2025/ has been updated with a list of presentations, workshop programmes, and more.
Looking forward to seeing you in Bled!
Iztok Kosem
Head of the organising committee
***Call for Papers WASP @ IJCNLP-AACL 2025***
https://ui.adsabs.harvard.edu/WIESP/2025/
Building on the success of the First Workshop on Information Extraction from Scientific Publications (WIESP) at AACL-IJCNLP 2022 and the Second WIESP at IJCNLP-AACL 2023, the Third Workshop on Artificial Intelligence for Scientific Publications (WASP) at IJCNLP-AACL 2025 aims to establish itself as a pivotal platform for promoting discussions and research in the field of Natural Language Processing (NLP) and Artificial Intelligence (AI).
This gathering will bring together esteemed experts and renowned organizations with students and early-career researchers who are interested and invested in efforts to extract and mine the world’s scientific knowledge from research papers. Their collaboration will focus on developing advanced algorithms, models, and tools that will lay the foundation for future machine comprehension of scientific literature.
The third iteration of WASP will specifically concentrate on various topics related to Artificial Intelligence research for scientific publications:
***Topics (not limited to)***
Scientific document parsing and structured information extraction
Scientific named-entity recognition and concept identification
Citation context/span extraction and citation-based knowledge mining
Argument extraction and scientific discourse analysis
Scientific article summarization and headline generation
Question-answering and fact retrieval from scientific literature
Prompt engineering and retrieval-augmented generation (RAG) for science Q&A
Chain-of-thought reasoning and scientific problem-solving with LLMs
LLM-powered information extraction from scientific texts
Pretraining and fine-tuning LLMs on scientific corpora
Evaluation and alignment of LLMs for scientific understanding
AI-assisted scientific discovery and hypothesis generation
Ethical and responsible use of LLMs in scientific publishing
Large Language Reasoning Models for Scientific Discovery
LLM hallucinations and impact on scientific knowledge, publications
Challenges, Future of AI in Scientific Publishing
AI, Peer Review, and Scientific Publishing
Impact of Generative AI on Scientific Publishing
In addition to papers, WASP will also host a shared task.
***Telescope Reference and Astronomy Categorization Shared Task (TRACS)***
https://ui.adsabs.harvard.edu/WIESP/2025/shared_task
We will publish a separate CfP on the shared task. Shared task authors will be invited to write their system descriptions, which will then undergo light peer review.
All accepted papers and shared task system papers will be published in the WASP proceedings as part of IJCNLP-AACL 2025 and indexed in the ACL Anthology.
***Important Dates***
Paper submission deadline (WASP+TRACS): September 29, 2025
ARR commitment deadline: October 27, 2025
Notification of paper acceptance (WASP+TRACS): November 3, 2025
Camera-ready submission deadline (WASP+TRACS): November 11, 2025
Workshop: December 23, 2025 (hybrid)
All submission deadlines are 11:59 pm UTC-12 (“Anywhere on Earth”).
***Paper Submission Site***
https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2025/Workshop/WASP
Submission will be via OpenReview. Submissions should follow the ACLPUB formatting guidelines and use the provided template files.
Submissions (Long and Short Papers) will be subject to a double-blind peer-review process. We follow the same policies as IJCNLP-AACL 2025 regarding anonymity, preprints, and double submissions.
Please reach out to the organizers (cc'ed) for any queries.
Thank you!
--
+++++++++++++++++++++++++++++++++++
Dr. Tirthankar Ghosal
Scientist (NLP/AI and HPC)
National Center for Computational Sciences (NCCS)
Oak Ridge National Laboratory, United States
&
Affiliate Faculty (NLP/AI)
University of Tennessee Knoxville
United States
https://www.tirthankarghosal.com
++++++++++++++++++++++++++++++++++++
We have a number of permanent academic Assistant/Associate Prof posts open in all areas of Computer Science including Artificial Intelligence and Natural Language Processing in the School of Computer Science. Applications due September 30th. Please see the following link for more details:
https://www.jobs.ac.uk/job/DOI907/assistant-or-associate-professor-in-compu…
With best regards,
Mark
Mark Lee
DPVC (India)
Professor of Artificial Intelligence
http://www.cs.bham.ac.uk/~mgl
University of Birmingham
ConsILR-2025 - deadline extension
the 20th edition of the International Conference on Linguistic Resources
and Tools for Natural Language Processing
(https://conferences.info.uaic.ro/consilr/2025)
Important Dates:
August 23, 2025 → September 10, 2025 – abstract submission (max 300 words)
August 31, 2025 → September 13, 2025 – paper submission
September 21, 2025 → September 20, 2025 – authors’ notification
September 31, 2025 → October 17, 2025 – final form submission
October 8–10, 2025 – ConsILR Conference
Venue: Casa Academiei Române (House of the Romanian Academy), 13, Calea 13
Septembrie, Bucharest, Romania and ONLINE
We invite papers presenting original and unpublished research, as well as
descriptions of accomplished or in-progress work, in all areas of natural
language processing. We welcome contributions covering a range of topics,
including but not limited to:
- Natural Language Processing (NLP) Techniques and Applications
- Large Language Models (LLMs) and Applications
- Digital Humanities in Language Technology
- (Mono- or multimodal) Language Resources and Tools for text, speech, images and videos
- Computational Models and Algorithms in Language Processing
- Applied Linguistics and NLP Integration
- Morphosyntactic Structures in Language Processing
- Semantic and Pragmatic Analysis in NLP
- Multi-word Expressions and Idiomatic Language in NLP
- Cultural and Contextual Factors in Language Technology
- Romanian Language Processing and Contrastive Linguistics
Authors are encouraged to submit, in addition to the papers per se,
open-source linguistic resources, such as corpora (or corpus examples),
demo code, video and sound files.
Confirmed invited speakers:
Agata Savary <https://perso.limsi.fr/savary/>
Amalia Todirașcu <https://fr.linkedin.com/in/amalia-todirascu>
Marius Ursache <https://www.linkedin.com/in/mariusursache/>
Paula Gradu <https://www.linkedin.com/in/paula-gradu-7505591b0>
Organisers:
- “Mihai Drăgănescu” Research Institute for Artificial Intelligence of the Romanian Academy
- Institute of Computer Science of the Romanian Academy – Iași Branch
- Faculty of Computer Science of the “Alexandru Ioan Cuza” University of Iași
- “Alexandru Philippide” Institute of Philology of the Romanian Academy – Iași Branch
- Romanian Association of Computational Linguistics
- Academy of Technical Sciences of Romania
The abstracts (max. 300 words) and papers (an even number of pages, between
6 and 12, including references) must be written in British English.
Details about the paper format are available on the conference website.
The Proceedings of the Conference will be sent for indexing to Clarivate
Analytics.
Further information can be found on the conference web site:
https://conferences.info.uaic.ro/consilr/2025
On behalf of the ConsILR-2025 Organising Committee,
Dr. Elena Irimia