We would like to draw your attention to the 3rd edition of the International Multimodal Communication Symposium (MMSYM 2026), which will be held in Leuven (Belgium) from 9 to 11 September 2026.
http://www.mmsym.org/
The third edition of MMSYM continues the symposium series on multimodal communication previously held in Frankfurt am Main (2024) and Barcelona (2023). The symposium aims to gain insights into the interaction and co-dependence of semiotic resources in spoken and signed language. To advance our understanding of communication, it also seeks to establish multimodality as an integral part of linguistics and cognitive science. This overarching goal is rooted in the conviction that investigating multimodality is key to understanding how language and communication work.
For MMSYM 2026, we welcome contributions approaching multimodal communication from different methodological and disciplinary angles. The main topic of this edition is “Embodied communication in (inter)action”. We particularly encourage submissions that approach this theme from one of the following angles: (1) multimodal pragmatics and interaction analysis, (2) (generative) multimodal behavior modelling, (3) creating and sharing sustainable multimodal data, and (4) exploring the interface of gesture and signed language. While we particularly encourage contributions relating to these themes, we are open to all submissions related to multimodal communication.
TOPICS INCLUDE BUT ARE NOT LIMITED TO
- Gesture-speech interaction & integration
- Prosody-gesture coordination
- Multimodal language processing
- Gesture & sign language interaction
- Sequential organization of multimodal behavior in (face-to-face) interaction
- Semantics-pragmatics interface & multimodal communication
- Multimodal communication & spatial configurations
- Kinematics of bodily movements
- Multimodal corpus development
- Annotation schemes and tools for multimodal data processing
- Machine and deep learning techniques applied to multimodal data
- Multimodal human-computer interaction and conversational agents
- Intercultural aspects of embodied behavior
- Multimodal aspects of language acquisition and learning (both L1 and L2)
- Multimodal communication disorders and communication support
- Multimodal health communication
IMPORTANT DATES
- Abstract submissions open: 15 November 2025
- Abstract submission deadline: 1 March 2026
- Notification of acceptance: 3 April 2026
- Conference dates: 9-11 September 2026
KEYNOTE SPEAKERS
We are delighted to announce four keynote speakers whose work relates to the main themes of the conference:
- Pamela Perniss (Universität zu Köln)
- Elisabeth Zima (Universität Zürich)
- Stefan Kopp (Bielefeld University)
- Wim Pouw (Tilburg University)
ABSTRACT SUBMISSION
We invite abstract submissions for 20-minute oral presentations (15 minutes for the presentation and 5 minutes for discussion) or posters presenting original, unpublished work on any aspect of multimodal communication. Abstracts should be written in English and submitted via OpenReview (http://openreview.net/); submissions open on 15 November 2025. For the full call for papers, important deadlines and keynote speaker details, please check the website (http://www.mmsym.org/).
We hope to welcome you in Leuven for an exciting MMSYM conference!
On behalf of the organizers, Geert Brône, Bert Oben and Julie Janssens
****
Patrizia Paggio
Associate Professor, University of Copenhagen, Centre for Language Technology, paggio@hum.ku.dk
Professor (retired), University of Malta, Institute of Linguistics and Language Technology, patrizia.paggio@um.edu.mt
Selected recent publications and upcoming projects:
Paggio, P., Mitterer, H., Attard, G., & Vella, A. (2025). Do hand gestures increase perceived prominence in naturally produced utterances? Language and Cognition, 17, e54. doi:10.1017/langcog.2025.20
MultiplEYE DK - Enabling multilingual eye-tracking data collection - Funded by the Carlsberg Foundation https://www.carlsbergfondet.dk/det-har-vi-stoettet/cf24-2005/