*** First Combo Call for Workshop Papers ***
The Annual ACM Conference on Intelligent User Interfaces (IUI 2026)
March 23-26, 2026, 5* Coral Beach Hotel & Resort, Paphos, Cyprus
https://iui.hosting.acm.org/2026/
The ACM Conference on Intelligent User Interfaces (ACM IUI) is the leading annual venue for researchers and practitioners to explore advancements at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI).
IUI 2026 attracted a record number of submissions for the main conference (561 full paper submissions, following an initial 697 abstract submissions). Although the main-conference submission deadline has now passed, we welcome paper submissions to a number of workshops that will be held as part of IUI 2026.
A list of these workshops follows, each with a short description and a website for further information.
AgentCraft: Workshop on Agentic AI Systems Development (full-day workshop)
Organizers: Karthik Dinakar (Pienso), Justin D. Weisz (IBM Research), Henry Lieberman (MIT CSAIL), Werner Geyer (IBM Research)
URL: https://agentcraft-iui.github.io/2026/
Ambitious efforts are underway to build AI agents powered by large language models across many domains. Despite emerging frameworks, key challenges remain: autonomy, reasoning, unpredictable behavior, and consequential actions. Developers struggle to comprehend and debug agent behaviors, as well as determine when human oversight is needed. Intelligent interfaces that enable meaningful oversight of agentic plans, decisions, and actions are needed to foster transparency, build trust, and manage complexity. We will explore interfaces for mixed-initiative collaboration during agent development and deployment, design patterns for debugging agent behaviors, strategies for determining developer control and oversight, and evaluation methods grounding agent performance in real-world impact.
AI CHAOS! 1st Workshop on the Challenges for Human Oversight of AI Systems (full-day workshop)
Organizers: Tim Schrills (University of Lübeck), Patricia Kahr (University of Zurich), Markus Langer (University of Freiburg), Harmanpreet Kaur (University of Minnesota), Ujwal Gadiraju (Delft University of Technology)
URL: https://sites.google.com/view/aichaos/iui-2026
As AI permeates high-stakes domains such as healthcare, autonomous driving, and criminal justice, failures can endanger safety and rights. Human oversight is vital to mitigate harm, yet methods and concepts remain unclear despite regulatory mandates. Poorly designed oversight risks false safety and blurred accountability. This interdisciplinary workshop unites AI, HCI, psychology, and regulation research to close this gap. Central questions are: How can systems enable meaningful oversight? Which methods convey system states and risks? How can interventions scale? Through papers, talks, and interactive discussions, participants will map challenges, define stakeholder roles, survey tools, methods, and regulations, and set a collaborative research agenda.
CURE 2026: Communicating Uncertainty to foster Realistic Expectations via Human-Centered Design (half-day workshop)
Organizers: Jasmina Gajcin (IBM Research), Jovan Jeromela (Trinity College Dublin), Joel Wester (Aalborg University), Sarah Schömbs (University of Melbourne), Styliani Kleanthous (Open University of Cyprus), Karthikeyan Natesan Ramamurthy (IBM Research), Hanna Hauptmann (Utrecht University), Rifat Mehreen Amin (LMU Munich)
URL: https://cureworkshop.github.io/cure-2026/
Communicating system uncertainty is essential for achieving transparency and can help users calibrate their trust in, reliance on, and expectations of an AI system. However, uncertainty communication faces challenges such as cognitive biases, limited numeracy, miscalibrated risk perception, and increased cognitive load, with research finding that lay users can struggle to interpret probabilities and uncertainty visualizations.
HealthIUI 2026: Workshop on Intelligent and Interactive Health User Interfaces (half-day workshop)
Organizers: Peter Brusilovsky (University of Pittsburgh), Behnam Rahdari (Stanford University), Shriti Raj (Stanford University), Helma Torkamaan (TU Delft)
URL: https://healthiui.github.io/2026/
As AI transforms health and care, integrating Intelligent User Interfaces (IUIs) into wellness applications offers substantial opportunities and challenges. This workshop brings together experts from HCI, AI, healthcare, and related fields to explore how IUIs can enhance long-term engagement, personalization, and trust in health systems. The emphasis is on interdisciplinary approaches to creating systems that are advanced, responsive to user needs, and mindful of context, ethics, and privacy. Through presentations, discussions, and collaborative sessions, participants will address key challenges and propose solutions to drive health IUI innovation.
MIRAGE: Misleading Impacts Resulting from AI-Generated Explanations (full-day workshop)
Organizers: Simone Stumpf (University of Glasgow), Upol Ehsan (Northeastern University), Elizabeth M. Daly (IBM Research), Daniele Quercia (Nokia Bell Labs)
URL: https://mirage-workshop.github.io
Explanations from AI systems can illuminate, yet they can misguide. MIRAGE at IUI tackles pitfalls and dark patterns in AI explanations. Evidence now shows that explanations may inflate unwarranted trust, warp mental models, and obscure power asymmetries—even when designers intend no harm. We classify XAI harms as Dark Patterns (intentional, e.g., trust-boosting placebos) and Explainability Pitfalls (unintended effects without manipulative intent). These harms include error propagation (model risks), over-reliance (interaction risks), and false security (systemic risks). We convene an interdisciplinary group to define, detect, and mitigate these risks. MIRAGE shifts focus to safe explanations, advancing accountable, human-centered AI.
PARTICIPATE-AI: Exploring the Participatory Turn in Citizen-Centred AI (half-day workshop)
Organizers: Pam Briggs (Northumbria University), Cristina Conati (University of British Columbia), Shaun Lawson (Northumbria University), Kyle Montague (Northumbria University), Hugo Nicolau (University of Lisbon), Ana Cristina Pires (University of Lisbon), Sebastien Stein (University of Southampton), John Vines (University of Edinburgh)
URL: https://sites.google.com/view/participate-ai/workshop
This workshop explores value alignment for participatory AI, focusing on interfaces and tools that bridge citizen participation and technical development. As AI systems increasingly impact society, meaningful and actionable citizen input in their development becomes critical. However, current participatory approaches often fail to influence actual AI systems, with citizen values becoming trivialized. This workshop will address challenges such as risk articulation, value evolution, democratic legitimacy, and the translation gap between community input and system implementation. Topics include value elicitation within different communities, critical analysis of failed participatory attempts, and methods for making citizen concerns actionable for developers.
SHAPEXR: Shaping Human-AI-Powered Experiences in XR (full-day workshop)
Organizers: Giuseppe Caggianese (National Research Council of Italy, Institute for High-Performance Computing and Networking Napoli), Marta Mondellini (National Research Council of Italy, Institute of Intelligent Industrial Systems and Technologies for Advanced Manufacturing, Lecco), Nicola Capece (University of Basilicata), Mario Covarrubias (Politecnico di Milano), Gilda Manfredi (University of Basilicata)
URL: https://shapexr.icar.cnr.it
This workshop explores how eXtended Reality (XR) can serve as a multimodal interface for AI systems, including LLMs and conversational agents. It focuses on designing adaptive, human-centered XR environments that incorporate speech, gesture, gaze, and haptics for seamless interaction. Main topics include personalization, accessibility, cognitive load, trust, and ethics in AI-driven XR experiences. Through presentations, discussions, and collaborative sessions, the workshop aims to establish a subcommunity within IUI to develop a roadmap that includes design principles and methodologies for inclusive and adaptive intelligent interfaces, enhancing human capabilities across various domains, such as healthcare, education, and collaborative environments.
TRUST-CUA: Trustworthy Computer-Using Generalist Agents for Intelligent User Interfaces (full-day workshop)
Organizers: Toby Jia-Jun Li (University of Notre Dame), Segev Shlomov (IBM Research), Xiang Deng (Scale AI), Ronen Brafman (Ben-Gurion University of the Negev), Avi Yaeli (IBM Research), Zora (Zhiruo) Wang (Carnegie Mellon University)
URL: https://sites.google.com/view/trust-cuaiui26/home
Computer-Using Agents (CUAs) are moving from point automations to generalist agents acting across GUIs, browsers, APIs, and CLIs—raising core IUI questions of trust, predictability, and control. This workshop advances trustworthy-by-design CUAs through human-centered methods: mixed-initiative interaction, explanation and sensemaking, risk/uncertainty communication, and recovery/rollback UX. Outcomes include (1) a practical TRUST-CUA checklist for oversight, consent, and auditing, (2) a user-centered evaluation profile (“CUBench-IUI,” e.g., predictability, oversight effort, time-to-recovery, policy-aligned success), and (3) curated design patterns and open challenges for deployable, accountable agentic interfaces.
Important Dates
• Paper Submission: December 19, 2025 • Notification: February 2, 2026
All deadlines are at 23:59 AoE (Anywhere on Earth).
Organisation
General Chairs • Tsvi Kuflik, University of Haifa, Israel • Styliani Kleanthous, Open University of Cyprus, Cyprus
Local Organising Chair • George A. Papadopoulos, University of Cyprus, Cyprus
Workshop and Tutorial Chairs • Karthik Dinakar, Pienso Inc, USA • Werner Geyer, IBM Research, USA • Patricia Kahr, University of Zurich, Switzerland • Antonela Tommasel, ISISTAN, CONICET-UNCPBA, Argentina, and JKU, Austria