The Second Workshop on Evaluation for Multimodal Generation
Multimodal generation and retrieval systems are increasingly central to modern information retrieval, powering retrieval-augmented generation (RAG), multimodal search, recommendation, and knowledge-intensive applications. Despite rapid progress in multimodal large language models (MLLMs), robust and principled evaluation of multimodal generation and retrieval remains a major open challenge for the IR community. This workshop brings together researchers and practitioners from information retrieval, natural language processing, computer vision, and multimodal AI to foster discussion and collaboration, with the goal of establishing rigorous evaluation methods for multimodal generation and retrieval and advancing research in this direction.
We welcome both long papers (up to 9 pages) and short papers (up to 4 pages), with unlimited references and appendices.
Relevant topics include, but are not limited to:
Multimodal retrieval for RAG, agentic AI, and recommendation systems
Evaluation of retrieved cross-modal samples without relying on augmented generation
Multi-aspect evaluation methods capturing inter- and intra-modal coherence, relevance, grounding, and contextual consistency
Benchmark retrieval datasets, evaluation protocols and annotations for text–image–audio–video–3D generation
Automatic and human-centric metrics for informativeness, factuality, fluency, faithfulness, calibration, and usability for multimodal generation
Methodology for detecting, analysing, and mitigating multimodal bias, stereotypes, toxicity, and hallucinations
Evaluation in multimodal low-resource and multilingual settings, including culturally aware and cross-lingual metrics
Agent-based evaluation of multimodal generation in multi-turn, tool-use, or iterative editing scenarios
Game-theoretic or optimization-based formulations of evaluation objectives and protocols
Evaluation of the generation quality of synthetic multimodal data, provenance/attribution, and downstream impact on training and deployment
Ethical considerations in the evaluation of multimodal text generation, including bias detection and mitigation strategies
Evaluation of security and privacy dimensions in multimodal applications
You are invited to submit papers directly via our OpenReview portal. Submissions must strictly follow the SIGIR submission guidelines. We invite both long paper (up to 9 pages) and short paper (up to 4 pages) submissions. All submissions must be anonymized for double-blind review. All accepted papers must be presented at the workshop.
March 25, 2026: Submission Open
May 2, 2026: Workshop Paper Direct Submission Deadline
May 27, 2026: ARR Commitment Date
June 2, 2026: Workshop Paper Notification
July 24, 2026: Workshop Day
Note: All deadlines are 11:59 PM UTC-12:00 (“Anywhere on Earth”)
More Information: https://evalmg.