The First Workshop on Optimal Reliance and Accountability in Interactions with Generative Language Models (*ORIGen*) will be held in conjunction with the Second Conference on Language Modeling (COLM) at the Palais des Congrès in Montreal, Quebec, Canada, on October 10, 2025!
*The deadline for submission has been extended to June 27, 2025, Anywhere on Earth.*
With the rapid integration of generative AI, exemplified by large language models (LLMs), into personal, educational, business, and even governmental workflows, such systems are increasingly treated as “collaborators” with humans. In these scenarios, underreliance on or avoidance of AI assistance can forfeit the potential speed, efficiency, and scalability advantages of a human-LLM team; at the same time, there is a risk that subject matter non-experts will overrely on LLMs and trust their outputs uncritically, with consequences ranging from the inconvenient to the catastrophic. Establishing optimal levels of reliance within an interactive framework is therefore a critical open challenge as language models and related AI technologies rapidly advance. Key questions include:
* What factors influence overreliance on LLMs?
* How can the consequences of overreliance be predicted and guarded against?
* What verifiable methods can be used to apportion accountability for the outcomes of human-LLM interactions?
* What methods can be used to imbue such interactions with appropriate levels of “friction” to ensure that humans think through the decisions they make with LLMs in the loop?
The ORIGen workshop provides a new venue to address these questions and more through a multidisciplinary lens. We seek to bring together broad perspectives from AI, NLP, HCI, cognitive science, psychology, and education to highlight the importance of mediating human-LLM interactions to mitigate overreliance and promote accountability in collaborative human-AI decision-making.
Submissions are due *June 27, 2025*. Please see our call for papers [1] for more details!
[1] https://origen-workshop.github.io/submissions/
Organizers:
- Nikhil Krishnaswamy, Colorado State University
- James Pustejovsky, Brandeis University
- Dilek Hakkani-Tür, University of Illinois Urbana-Champaign
- Vasanth Sarathy, Tufts University
- Tejas Srinivasan, University of Southern California
- Mariah Bradford, Colorado State University
- Timothy Obiso, Brandeis University
- Mert Inan, Northeastern University