The ICML 2026 Workshop on Efficient Multimodal Question Answering (EMM-QA) invites submissions on methods, resources, evaluations, and systems for question answering over multimodal inputs under realistic resource constraints. The workshop focuses on balancing answer quality with efficiency in modern QA systems, especially in the age of large language models.
*Website and Contact*
- Contact email: emm-qa-organizers@googlegroups.com
- Workshop website: https://qanta-org.github.io/competition/2026/icml/
Please check the workshop website for updates (current keynotes: Sewon Min and Mrinmaya Sachan), submission instructions, FAQs, and final shared-challenge details.
*Important Dates*
All deadlines are *Anywhere on Earth (AoE)*.
Paper Track
- *Submission deadline: May 29, 2026*
- Review deadline: June 7, 2026
- Notification date: June 10, 2026
- Camera-ready deadline: June 22, 2026
Shared Challenge
- Warmup data release: May 22, 2026
- Test data release: June 1, 2026
- Challenge metric / scoring feedback deadline: June 1, 2026
- *Last system submission: June 15, 2026*
- System description paper submission: June 22, 2026
- In-person human competition: June 27, 2026
- Online human competition: June 28, 2026
Workshop Date and Venue
- Workshop date: July 10 or July 11, 2026 *(final date to be confirmed by ICML)*
- Venue: COEX Convention and Exhibition Center, Seoul, South Korea
*Topics of Interest*
We welcome submissions on topics including, but not limited to:
- large language models for question answering
- open-domain QA under resource constraints
- efficient retrieval and answer lookup
- multimodal question answering
- synthetic data generation for efficient QA
- balancing accuracy against model and system size
- token-efficient and compute-efficient QA systems
- multimodal retrieval and reasoning for QA
- benchmarking and evaluation for efficient multimodal systems
- human verification and evaluation of QA outputs
- efficiency in human-computer interaction
- efficient inference for knowledge-intensive multimodal tasks
- system descriptions, negative results, and position papers relevant to efficient QA
The scope is intentionally *medium-broad*: the workshop is centered on efficient multimodal QA, while also welcoming closely related work on multimodal retrieval, reasoning, evaluation, and benchmarking when clearly connected to QA or other knowledge-intensive multimodal tasks.
*Guidelines for Submission*
Submission Categories
We invite the following types of submissions:
- full papers
- short papers
- position papers
- system description papers
- previously published or recently accepted work as non-archival submissions
Papers accepted at ICML 2026 may be submitted as *fast-track submissions*. Fast-track submissions must include the acceptance decision and reviews from ICML. They will still be evaluated by the Area Chairs (ACs) and Program Chairs (PCs) for thematic fit.
We do not plan to accept extended abstracts or poster-only submissions without a paper.
Formatting
- Submission platform: OpenReview https://openreview.net/group?id=ICML.cc/2026/Workshop/EMM-QA#tab-recent-activity
- Review model: double-blind
- Paper format: ICML style
- Full paper length: up to 8 pages of main content
- References: excluded from the page limit
- Appendices: excluded from the page limit
- Supplementary material: allowed and may be considered during review
Review Process
- Each submission will receive 3 reviews
- Reviews will consider novelty, relevance to the workshop, clarity, technical quality, and discussion value
- Preliminary but promising work is welcome
- Desk rejection is possible for submissions that are out of scope, non-anonymized, or otherwise non-compliant
Archival Policy
Submissions may be designated *archival* or *non-archival*, according to the authors' preference.
All archival papers will be published in the workshop proceedings.
Non-archival papers may later be submitted to any other venue; we will link to a preprint only if the authors provide one.
All accepted submissions are expected to appear publicly on:
- OpenReview - the workshop website
Camera-ready versions will be required for accepted archival submissions.
Archival submissions
The following submissions are eligible to be archival:
- newly submitted workshop papers - newly submitted system description papers
Non-archival submissions
The following submissions are non-archival:
- previously published papers - recently accepted conference papers - papers accepted to the main conference and presented again at the workshop - work currently under review elsewhere - any submission explicitly designated as non-archival
Non-archival submissions are welcome for presentation and discussion at the workshop, but they will not be published in the workshop proceedings.
Conflict of Interest Policy
Workshop organizers will not submit papers.
Program committee members and reviewers may submit papers, but they may not review, handle, or make decisions on submissions for which they have a conflict of interest. Conflicts include recent collaboration, shared institutional affiliation, advisor-advisee relationships, family or close personal relationships, or any other circumstance that could impair objective judgment. Conflicted submissions will be handled by non-conflicted organizers and reviewers.
*About the Shared Challenge*
EMM-QA 2026 will host the *QANTA 2026 shared challenge*, a multimodal quiz bowl competition on efficient question answering with incrementally revealed clues.
In QANTA 2026, systems answer pyramid-style questions from streaming text and, for some questions, accompanying images. For computer teams, the core task is to decide when to buzz, produce an answer, and express confidence under realistic efficiency constraints. The challenge is closely aligned with the workshop’s focus on efficient multimodal question answering, retrieval, reasoning, and evaluation.
The shared challenge includes both *computer-team submissions* and related *live human competition* events. The key challenge deadlines listed above are the *warmup data release*, *test data release*, *challenge metric / scoring feedback deadline*, *last system submission*, and *system description paper submission*. The live human competition will take place on *June 27, 2026* (in person) and *June 28, 2026* (online).
Detailed task definitions, rules, scoring, technical requirements, and participation instructions are available on the following pages:
- Computer Teams https://qanta-org.github.io/competition/2026/computer-teams/ — how to build and submit systems
- Human Teams https://qanta-org.github.io/competition/2026/human-teams/ — how to register for the live human competition
- Rules overview https://qanta-org.github.io/competition/2026/rules/ — high-level competition structure
- Computer rules https://qanta-org.github.io/competition/2026/rules/computer/ — official rules for submitted systems and leaderboard scoring
- Human rules https://qanta-org.github.io/competition/2026/rules/human/ — official rules for live human–AI gameplay
- Prizes and awards https://qanta-org.github.io/competition/2026/prizes/ — prizes across competition tracks
Thanks, and we hope to see you participate!
-Jordan Boyd-Graber (on behalf of Ikuya, Martin, Chen, and George)