We are pleased to invite participation in the First Workshop on Optimal Reliance and Accountability in Interaction with Generative Language Models (ORIGen), to be held in conjunction with the Conference on Language Modeling (COLM) in Montreal, Canada, on October 10, 2025!
With the rapid integration of generative AI, exemplified by large language models (LLMs), into personal, educational, business, and even governmental workflows, such systems are increasingly treated as "collaborators" with humans. In such scenarios, underreliance on or avoidance of AI assistance may forfeit the potential speed, efficiency, or scalability advantages of a human-LLM team; at the same time, there is a risk that subject matter non-experts may overrely on LLMs and trust their outputs uncritically, with consequences ranging from the inconvenient to the catastrophic. Establishing optimal levels of reliance within an interactive framework is therefore a critical open challenge as language models and related AI technologies rapidly advance.
- What factors influence overreliance on LLMs?
- How can the consequences of overreliance be predicted and guarded against?
- What verifiable methods can be used to apportion accountability for the outcomes of human-LLM interactions?
- What methods can be used to imbue such interactions with appropriate levels of "friction" to ensure that humans think through the decisions they make with LLMs in the loop?
ORIGen will examine questions of reliance, trust, confidence, and accountability in interactions with modern generative systems from an interdisciplinary perspective, and we seek engagement from the NLP, AI, HCI, robotics, education, and cognitive science communities and beyond. The workshop will feature paper presentations as well as 4 invited talks from leading AI, NLP, and HCI researchers, and a panel discussion on the Future of Reliable and Accountable AI. More information about the workshop can be found at: https://origen-workshop.github.io
9:00-9:15 - Opening remarks

9:15-9:50 - Invited talk I: Andreas Vlachos
Andreas Vlachos is a Professor of Natural Language Processing and Machine Learning at the Department of Computer Science and Technology at the University of Cambridge and a Dinesh Dhamija fellow of Fitzwilliam College. His expertise includes dialogue modeling, automated fact-checking, imitation and active learning, semantic parsing, and natural language generation and summarization.

9:50-11:00 - Accepted paper lightning talks: 4 minutes each + 1 minute transition

11:00-11:15 - Coffee break

11:15-12:00 - Keynote talk: Malihe Alikhani
Malihe Alikhani is an Assistant Professor at Northeastern University's Khoury College of Engineering and a Visiting Fellow at the Center on Regulation and Markets at Brookings. She works towards developing safe and fair AI systems that enhance communication, decision-making, and knowledge-sharing across disciplines and populations.

12:00-12:35 - Invited talk II: Bertram F. Malle
Bertram F. Malle is a Professor of Cognitive and Psychological Sciences at Brown University. He received the Society of Experimental Social Psychology (SESP) Outstanding Dissertation award, an NSF CAREER award, the Decision Analysis Society 2018 best publication award, several HRI best-paper awards, and the 2019 SESP Scientific Impact Award. Malle's research focuses on moral psychology and human-machine interaction.

12:35-2:05 - Lunch

2:05-2:40 - Invited talk III: Q. Vera Liao
Q. Vera Liao is an Associate Professor of Computer Science and Engineering at the University of Michigan, and previously a researcher at Microsoft Research and IBM Research. Her current interests are in human-AI interaction, responsible AI, and AI transparency, with a goal of bridging emerging AI technologies and human-centered perspectives.
2:40-3:40 - Poster session

3:40-4:00 - Coffee break

4:00-4:45 - Panel discussion: Future of Reliable and Accountable AI
- Matthias Scheutz, Tufts University
- Jesse Thomason, University of Southern California
- Diyi Yang, Stanford University
- Matthew Marge, DARPA

4:45-5:00 - Conclusion
The list of accepted papers can be found at https://origen-workshop.github.io/programme/
Nikhil Krishnaswamy Assistant Professor of Computer Science *Colorado State University*