Call for Presentations and Papers
47th Translating and the Computer Conference (TC47)
Luxembourg, 8 to 10 December 2026
https://asling.org/tc47/ [1]
AI-assisted or AI-eclipsed? Language services between promise and
pressure
AsLing invites submissions for the 47th edition of the Translating and
the Computer Conference (TC47), to be held from 8 to 10 December 2026 in
Luxembourg.
The TC conference series brings together professionals, researchers,
developers and decision-makers from the language industry, academia and
public institutions. TC47 will explore how technological innovation -
particularly AI - is reshaping multilingual communication, raising new
questions about human agency, professional ethics, and sustainable
practices in the language services sector.
Conference theme
_AI-assisted or AI-eclipsed? Language Services between Promise and
Pressure_
From machine translation to LLMs applied to translation, language
professionals face unprecedented change. TC47 invites reflection on how
to navigate this evolving landscape - to ensure that technology empowers
rather than eclipses, and that multilingual communication remains
inclusive, trusted and professionally grounded.
We especially welcome contributions exploring:
* Synergy between human expertise and AI-powered tools
* The role of AI in promoting or undermining inclusion and equity
* Strategies for sustainable and ethical language services
* Cross-sector collaboration between academia, industry, and
institutions
Submissions not focused on AI are equally welcome, particularly those
addressing broader trends in multilingual communication, training,
translation workflows, and evolving professional practices.
We also welcome critical reviews and discussions on:
* The broader impact of AI and automation on the language industry
* Implications for training, education and career development of
language professionals
* Coexistence of AI and traditional practices
* Impact of AI on language professionals
* Adoption barriers and risks for LSPs new to AI
* Future trends in translation, interpreting, and localisation - with
or without AI
* Responsible and sustainable development in language technologies
(environmental, social, professional)
Key areas of interest
These include, but are not limited to:
* Multilingual NLP and large language models
* Human-in-control systems vs. human-in-the-loop AI
* Terminology management and controlled language
* AI readiness and digital transformation in LSPs
* NLP, semantic technologies and linked data
* Collaborative translation tools and environments
* Quality assurance, benchmarking and evaluation
* Training, professional development and digital upskilling
* Inclusive and culturally aware AI systems
* Sustainable practices across the language lifecycle
* Language policy and digital language equality
* FAIR data, corpora and infrastructure
* Ethical implications and human oversight
* Empowering language professionals to shape - not just use - AI tools
* Non-AI innovations and evolutions in translation, interpreting,
localisation or terminology work
We invite:
* Innovative research: studies that expand the boundaries of language
technologies, multilingual NLP, or AI ethics.
* Practical applications: case studies from public or private sector
stakeholders showcasing language technology use and development.
* Workshops and panels: interactive formats encouraging dialogue on
timely, challenging or divisive issues in AI and language work.
* Critical reflections: well-argued contributions questioning current
uses of AI and proposing alternative, human-centred approaches.
* Posters and short talks: snapshots of emerging projects, tools, or
preliminary research.
Submission tracks
All submissions are for talks, in the following categories:
* Research track (Academic)
* 20-minute talk
* Followed by a paper (max. 5,000 words) presenting original,
unpublished research
* User experience track (Non-academic)
* 20-minute talk
* Optional post-facto paper (max. 5,000 words) detailing workflows,
tools or implementation cases
* Posters / Short talks
* 7-8-minute talk
* Followed by a paper (max. 2,000 words) outlining a project,
experiment, or tool
* Workshops and panels
* Interactive sessions with multiple speakers
* Moderators may submit an optional post-facto paper summarising key
takeaways
Submission instructions
Submissions must be made via the START conference submission system:
https://www.softconf.com/p/tc2026 [2]
Important dates
* Deadline for research/user experience talks: 30 June 2026
➤ Notification of acceptance: 31 August 2026
* Deadline for workshops and panels: 31 July 2026
➤ Notification of acceptance: 15 September 2026
* Deadline for posters and short talks: 15 September 2026
➤ Notification of acceptance: 30 September 2026
* Final paper submission (except post-facto workshop and panel papers): 31 October 2026
* Conference dates: 8-10 December 2026
Submission guidelines
Detailed submission guidelines, including templates and formatting
instructions, will be available on the TC47 conference website.
We look forward to your contributions that will help shape the future of
language services through innovation, collaboration, and inclusivity.
Why submit to TC47?
TC47 offers a unique opportunity to engage in a multi-stakeholder
dialogue that bridges research, practice and policy. It is a space for
shared reflection on what language professionals need, what tools
actually deliver and how we co-create a future where humans and AI work
better together.
For any questions, or to report problems concerning submissions or the
conference, please email tc47-info(a)asling.org. Let's explore,
challenge and shape the future of multilingual communication together!
--
Amal Haddad Haddad (She/her)
Facultad de Traducción e Interpretación
Universidad de Granada |https://www.ugr.es/personal/amal-haddad-haddad
Lexicon Research Group |http://lexicon.ugr.es/haddad
Co-Convenor, BAAL SIG 'Humans, Machines,
Language'|https://r.jyu.fi/humala
Event Coordinator, BAAL SIG 'Language, Learning and Teaching'
Links:
------
[1] https://asling.org/tc47/
[2] https://www.softconf.com/p/tc2026/
IEEE Conference on Games (CoG) 2026
Madrid, September 1–4, 2026
Special session: Evaluating and Advancing Spatial Intelligence through Games
CFP for auxiliary papers: https://cog2026.fdi.ucm.es/cfp-spatial
Submission deadline: 14 May 2026
LinkedIn announcement: https://www.linkedin.com/posts/prashantjayannavar_cog2026-ieee-spatialintel…
X announcement: https://x.com/p_jayannavar/status/2045141231517192584?s=20
Organizing committee
* Prashant Jayannavar — University of Illinois Urbana-Champaign, US (paj3(a)illinois.edu)
* Alessandro Suglia — University of Edinburgh, United Kingdom (asuglia(a)ed.ac.uk)
* Sina Zarrieß — University of Bielefeld, Germany
* Massimo Poesio — Queen Mary University of London, United Kingdom
Scope
Spatial intelligence, the ability to perceive, reason about, and manipulate spatial relationships, is fundamental to human cognition and essential for artificial intelligence systems operating in both physical and virtual environments. Games provide rich, controlled, and interactive testbeds for evaluating and advancing spatial intelligence in AI, offering diverse scenarios that require understanding spatial configurations, navigation, object manipulation, and communication about spatial concepts.
This special session seeks contributions that use games as diagnostic tools or environments for developing and evaluating spatial intelligence in AI systems. We welcome research spanning both embodied and unembodied games, as well as multimodal and unimodal settings. Of particular interest are collaborative and interactive scenarios where spatial intelligence enables AI agents to serve as instruction followers or instruction givers, playing alongside or assisting humans in open-ended gameplay or task-specific applications such as navigation, construction/assembly, object manipulation, etc. Further, reward modeling in such domains also inherently requires spatial understanding, and we encourage work that studies this.
The session is in part motivated by prior workshops such as the 4th Workshop on Spatial Language Understanding and Grounded Communication for Robotics (SpLU-RoboNLP 2024) and the When Language meets Games Workshop (Wordplay 2025), with particular inspiration drawn from the Dagstuhl Perspectives Workshop Human in the Loop Learning through Grounded Interaction in Games (https://www.dagstuhl.de/seminars/seminar-calendar/seminar-details/24492).
Topics of Interest
We seek papers that present methodological contributions to the relevant tasks listed below.
Relevant Tasks
Including, but not limited to:
* Core instruction following and giving tasks (dialogue-based or single-turn)
* Related sub-problems, e.g., referring expression comprehension/generation, clarification question generation, planning
* Reward modeling
* Novel task formulations to evaluate spatial intelligence capabilities
Methodological Contributions
For the above tasks, we invite contributions including, but not limited to:
Data and Resources
* Data collection, synthetic data generation, and simulation frameworks
* Data scarcity in embodied or interactive settings
* Inverse Dynamics Models and related approaches for pseudo-labeling, enabling scalable dataset creation from abundant unlabeled sources
* Resources and environments for interactive/online learning
Modeling Approaches
* LLMs, VLMs, VLAs, agentic frameworks
* Reinforcement Learning
* Model design, fine-tuning strategies, in-context learning, parameter-efficient methods, and specific techniques for spatial reasoning or spatio-temporal memory representations
Evaluation and Analysis
* Automated metrics, human evaluation studies, and benchmark design
* Games as diagnostic environments for behavioral analysis (manual or automatic) of models to uncover their strengths and limitations
* Evaluating reward modeling ability itself as an effective proxy for spatial reasoning ability
Program Committee members
* Julia Hockenmaier — University of Illinois Urbana-Champaign
* Marc-Alexandre Cote — Microsoft Research – Montreal
* Raffaella Bernardi — Free University of Bozen-Bolzano
* David Schlangen — University of Potsdam
* Manling Li — Northwestern University
* Parisa Kordjamshidi — Michigan State University
* Simon Dobnik — University of Gothenburg
* Nikolai Ilinykh — University of Gothenburg
* Vardhan Dongre — University of Illinois Urbana-Champaign
* Ruiyi Wang — University of California San Diego
* Sandro Pezzelle - University of Amsterdam
Submission Instructions
This is a call for auxiliary papers. We invite the submission of short, competition, and vision papers:
* Short papers (4-page limit) describe work in progress, smaller projects that are not yet ready to be published as a full paper, or new progress on projects that have been reported elsewhere.
* Competition papers (8-page limit) describe research related to one of the competitions in the Games community, including the design of new competitions and, in particular, submissions to existing competitions.
* Vision papers (8-page limit) describe a vision for the future of the Games field or some part of it; they must be based on extensive research and include a comprehensive bibliography. Please note that the standards for vision papers are high: literature reviews and opinion papers with speculation not grounded in research will be rejected immediately.
All page limits include references and appendices.
All accepted auxiliary papers will be included in the proceedings of the conference.
NONE OF THE SUBMISSION DEADLINES WILL BE EXTENDED.
All deadlines are Anywhere on Earth (AoE).
Relevant dates for this call are as follows:
* Submission of auxiliary papers: 14th May 2026
* Notification of acceptance of auxiliary papers: 10th June 2026
* Submission of the camera-ready version of auxiliary papers: 24th June 2026
* Conference dates: 1st – 4th September 2026
Papers must be submitted through the conference submission system available at the following link: https://easychair.org/conferences?conf=ieeecog2026
All paper submissions should follow the recommended IEEE conference author guidelines. MS Word and LaTeX templates can be found at https://www.ieee.org/conferences/publishing/templates
All submitted papers will be fully peer-reviewed, and accepted papers will be published in the conference proceedings and on IEEE Xplore. CoG will use a *double-anonymous review process*. Authors must omit their names and affiliations from their submissions, avoiding obvious identifying statements. Submissions not abiding by anonymity requirements will be desk rejected.
Papers may be allocated to either poster or oral presentations.
The Paradigm Shift: From Rules to Models in Natural Language Processing
International Summer School
Alicante, Spain, 15, 16 and 17 June 2026
https://summer-school.gplsi.es
Third Call for Participation
Natural Language Processing (NLP) has witnessed a clear paradigm shift:
the transition from rule-based approaches to data-driven language
models. While rule-based approaches dominated NLP for many years, during
the 1990s and early 2000s they gradually gave way to statistical and
machine-learning methods. It would be fair to say that data-driven
models--and, most prominently, Deep Learning (DL), including more
recently Large Language Models (LLMs)--have taken the world by storm.
Deep Learning models are now used almost everywhere, across nearly every
discipline, and Natural Language Processing is no exception. DL has
proved highly promising so far, delivering improvements for almost every
NLP task and application. However, as observed on numerous occasions,
the outputs of DL models are not always ideal, with some studies
reporting cases in which machine-learning approaches do not necessarily
outperform the 'old-fashioned' rule-based ones.
The overarching theme of the summer school will be this paradigm shift,
with lectures and practical sessions reflecting the latest trends at
both theoretical and practical levels. More specifically, the programme
will combine lectures focusing on theoretical foundations with hands-on
practical sessions. See the confirmed lectures below.
The summer school will be ideal for both newcomers and experienced
professionals in NLP, computer science, data science, cybersecurity,
corpus linguistics, language technologies, and related disciplines,
offering a unique opportunity to deepen expertise and engage with the
rapidly evolving world of LLMs.
Keynote speech: Roberto Navigli, 'Is Lexical Semantics Dead in the LLM
Era?'
We are delighted to announce that Roberto Navigli (Sapienza University
of Rome) will deliver the keynote speech of the summer school, 'Is
Lexical Semantics Dead in the LLM Era?'
Summer school programme
The summer school programme will feature the following lectures:
Invited lecture
'Quantum Natural Language Processing: Foundations, Challenges, and
Insights'
Elena Lloret (University of Alicante)
'Explainable AI in Natural Language Processing'
Salima Lamsiyah (University of Luxembourg)
'Quality Estimation for Machine Translation'
Tharindu Ranasinghe (Lancaster University)
'Understanding Language Models'
Hansi Hettiarachchi (Lancaster University)
'LLMs for low-resource languages'
Robiert Sepúlveda Torres and Iván Martínez (University of Alicante)
'Fairness in Machine Learning: Evaluating Gender Bias in LLMs'
Juan Pablo Consuegra-Ayala (University of Alicante)
'Gaze data for NLP research: recording methods and analysis'
Cengiz Acarturk (Jagiellonian University)
'Beyond the Single Text: NLP Reading in Digital Humanities'
Isuri Anuradha (Lancaster University)
'Automatic hyperparameter optimisation and model selection for NLP
pipelines'
Ernesto Luis Estevanell (University of Alicante)
'Legal NLP in the LLM era'
Damith Premasiri (Lancaster University)
'Machine Translation for Low-Resource Languages'
Alicia Picazo-Izquierdo (University of Alicante)
'Sentiment analysis: from rule-based methods to Large Language Models'
Maram Alharbi (Lancaster University)
Panel discussion
A panel discussion 'The future of NLP methods and language models' is
scheduled as part of the summer school
(https://summer-school.gplsi.es/panel/). The panel will be
hosted/moderated by Ruslan Mitkov (Lancaster University and University
of Alicante) and will include contributions from
Roberto Navigli (Sapienza University of Rome)
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Salima Lamsiyah (University of Luxembourg)
Nasredine Semar (CEA)
Yoan Gutiérrez Vázquez (University of Alicante)
Gražina Korvel (Vilnius University)
Venue, dates and accommodation
The summer school will be held at the Research Institute of
Informatics of the University of Alicante on 15, 16 and 17 June 2026.
See the summer school website for recommended accommodation options
(prospective participants are advised to book accommodation at their
earliest convenience, as availability is limited) and for further
details.
Summer School Directors
Tharindu Ranasinghe (Lancaster University)
Salima Lamsiyah (University of Luxembourg)
Summer School Chair
Ruslan Mitkov (University of Alicante)
Advisory Committee
Manuel Palomar Sanz (University of Alicante)
Rafael Muñoz Guillena (University of Alicante)
Andrés Montoyo Guijarro (University of Alicante)
Organising Committee
Raúl García Cerdá (University of Alicante)
Alicia Picazo Izquierdo (University of Alicante)
Ernesto Luis Estevanell (University of Alicante)
Maram Alharbi (Lancaster University)
Registration
Registration can be completed at
https://summer-school.gplsi.es/registration/. Kindly note that
early-bird registration closes on 25 May 2026.
Related events
The summer school will follow the second international conference
_Natural Language Processing and Artificial Intelligence_ (NLPAICS'2026)
which will take place in Alicante on 11 and 12 June 2026
(https://nlpaics2026.gplsi.es). Those who register for both events will
benefit from a discounted registration fee.
Further information
The summer school website is updated on a regular basis. Alternatively,
interested parties can email summer-school(a)dlsi.ua.es for more
information.
2nd Call for Papers
2nd International Workshop on Language and Language Models (WoLaLa)
Dubrovnik, Croatia | 12-13 October 2026
The ELTE Research Centre for Linguistics, the University of Zagreb, Faculty of Humanities and Social Sciences, and the Croatian Language Technologies Society invite submissions to the 2nd International Workshop on Language and Language Models. This workshop is designed as a dedicated forum for scholars and practitioners in the social sciences and humanities (SSH) to discuss and evaluate large language models from an SSH perspective, and to share best practices that can advance research and applications within these fields.
Relevant topics include, but are not limited to, the following areas:
General language models: Critical and comparative analyses of state-of-the-art language models, including their linguistic competence, performance, and limitations.
Cultural and linguistic perspectives: Investigations into the cultural, cognitive, and scientific aspects of language processing, including the unexplored territories of model behavior and linguistic capability.
Applications and best practices: Case studies and best practices in applying AI to language research, highlighting the potential for cross-disciplinary innovation within SSH.
Bridging disciplines: Contributions that examine the role of language models in reshaping traditional SSH methodologies, and proposals on integrating AI insights into linguistic inquiry.
IMPORTANT DATES
20 May 2026: Submission deadline
08 August 2026: Notification of acceptance
12 October – 13 October 2026: Workshop in Dubrovnik
15 December 2026: Full paper submission deadline
Submissions
We expect submissions in the form of extended abstracts (length: 3 to 4 pages including references) in PDF format, in accordance with the template (https://www.overleaf.com/read/sbmczvkpxpzz#4a94e3). Please ensure your submission clearly outlines your research question, methodology, and preliminary findings.
Extended abstracts must be submitted through the EasyChair submission system <https://easychair.org/conferences/?conf=wolala2026> and will be reviewed by the Programme Committee. All proposals will be reviewed on the basis of the following criteria:
Appropriateness: The contribution must pertain to the topics listed above.
Soundness and correctness: The content must be technically and factually correct; methods must be scientifically sound, according to best practice, and preferably evaluated.
Meaningful comparison: The abstract must indicate that the author is aware of alternative approaches, if any, and highlight relevant differences.
Substance: Concrete work and experiences will be given preference over ideas and plans.
Impact: Contributions with a higher impact on the research community and society more broadly will be given preference over papers with lower impact.
Clarity: The abstract should be clearly written and well structured.
Timeliness and novelty: The work must convey relevant new knowledge to the audience at this event.
Programme Committee
The Programme Committee for the conference consists of the following members:
Marko Tadić, University of Zagreb, Croatia (chair)
António Branco, University of Lisbon, Portugal
Eva Hajičová, Charles University Prague, Czech Republic
Erhard Hinrichs, University of Tübingen, Germany
András Kornai, HUN-REN Institute for Computer Science and Control, Hungary
Alessandro Lenci, University of Pisa, Italy
Csaba Pléh, Central European University, Austria
Gábor Prószéky, ELTE Research Centre for Linguistics & Pázmány Péter Catholic University, Hungary
Paul Rayson, Lancaster University, United Kingdom
Frédérique Segond, National Institute for Research in Digital Science and Technology, France
Dan Tufiș, Romanian Academy, Romania
Hans Uszkoreit, German Research Center for Artificial Intelligence, Germany
Tamás Váradi, HUN-REN Hungarian Research Centre for Linguistics, Hungary
Martin Wynne, University of Oxford, United Kingdom
LINKS
2nd International Workshop on Language and Language Models website: https://wolala.nytud.hu
EasyChair submission: https://easychair.org/conferences/?conf=wolala2026
Template for submissions:
ZIP archive: https://wolala.nytud.hu/templates/WoLaLa2026.zip
Overleaf template: https://www.overleaf.com/read/prvhqbxdgmxq#374f7b
Contact for any questions regarding the conference: info(a)wolala.nytud.hu
The eighth talk of the Data in Historical Linguistics Seminar Series will take place remotely on Monday 11th May 2026 at 5pm BST. Federico Viglino (Guglielmo Marconi University, Italy) will be presenting on "Middle voice in the diachrony of Ancient Greek: a quantitative (and qualitative!) approach".
Registration for this talk will close at midnight on Friday 8th May and the link for this can be accessed here: https://forms.gle/ioQ7qbspf9ebc19J7
Participants will receive a Microsoft Teams link via email on the morning of the talk.
The abstract for this talk can be found at this page<https://datainhistoricallinguistics.wordpress.com/2025/12/19/monday-11-may-…>.
The programme and registration links for all talks in the series can be found on our website:
https://datainhistoricallinguistics.wordpress.com/2026-programme/
This seminar series is run by Andrea Farina (King’s College London) and Dr Mathilde Bru and is aimed at PhD students and early career researchers. The purpose of this seminar series is to bring together researchers working on historical linguistics with a quantitative approach, and to discuss current avenues of research in this topic. We hope that these seminars will nurture international collaboration and establish academic ties among researchers working on similar topics in this field.
Join our mailing list: https://datainhistoricallinguistics.wordpress.com/join-us/
Registration open!!
########################################################
GRACE@IberLEF2026: https://www.codabench.org/competitions/13280/
########################################################
GRACE@IberLEF2026 announces the first edition of a novel shared task on Argument Mining in Spanish connecting Explainable AI and Evidence-Based Medicine across clinical trials and medical licensing examinations.
⚗️ Argument Mining
Argument Mining automatically extracts claims and evidence from clinical text and reveals how they support or challenge each other, enabling transparent, traceable clinical reasoning.
🌍 Spanish, First
GRACE is the first Argument Mining shared task in Spanish for the clinical domain, filling a key gap in shared tasks for multilingual biomedical NLP with fine-grained, entity-level annotations.
Track 01
🔬 Clinical Trial Evidence & Argumentation
This track focuses on abstracts of Randomized Controlled Trials (RCTs). Their standardized design, contrasting an intervention with a control group, provides a transparent path from data to conclusions, making argumentative components more accessible to automated systems.
Goal: Identify argumentative components (claims and premises) and detect support/attack relations at the sentence level.
Track 02
🩺 Clinical Case Reasoning (MIR)
This track uses cases from the MIR (Médico Interno Residente) exam, Spain's national medical specialization test. Each instance pairs a dense clinical narrative with five competing diagnostic or treatment options, only one of which is correct.
Goal: Extract fine-grained evidence spans that justify the correct option while refuting the incorrect alternatives.
📅 Important Dates
📂 Release of training & dev sets: March 18
🚀 Official test set release: April 22
⏰ Deadline for result submission: May 15
📊 Publication of results: May 20
📄 System paper submission: June 6
✅ Notification of acceptance: June 17
🎤 IberLEF Workshop (at SEPLN): September 22
We are excited to announce the Call for Participation for NTCIR-19 Tip-of-the-Tongue (ToT) Shared Task. ToT known-item retrieval is defined as “an item identification task in which the searcher has previously experienced an item but cannot recall a reliable identifier”—i.e., “It’s on the tip of my tongue…”. After 3 successful years as a TREC Track, the ToT shared task is expanding to NTCIR for 2026. The NTCIR-19 ToT Shared Task will focus on open-domain ToT information needs in multiple languages (English, Chinese, Japanese, and Korean). You can participate in the shared task in any subset of these languages, and you are also welcome to present your work remotely at the NTCIR conference in Tokyo in December 2026.
Please visit the following websites for further information.
Task guidelines: https://ntcir-tot.github.io/guidelines
Registration: https://research.nii.ac.jp/ntcir/ntcir-19/howto.html (Deadline: June 1)
Important dates
March 27: Release corpus and training queries
May: Release test queries
June 1st: Deadline for registration
July (tentative): Deadline for submitting runs
Please consider participating and help us spread the word!
Best regards,
Fernando Diaz
On behalf of the NTCIR-19 ToT Shared Task organizers
10th Workshop on Online Abuse and Harms (WOAH) @EMNLP: 2nd CFP
*** Second Call for Papers ***
We invite paper submissions to the 10th Workshop on Online Abuse and Harms (WOAH), which will take place on 24-29 October at EMNLP 2026.
Website: https://www.workshopononlineabuse.com/cfp.html
Important Dates
* Registration deadline for mentorship programme: April 10, 2026
* Notification of mentor/mentee match: April 25, 2026
* Submission due: June 26, 2026
* ARR reviewed submission due: August 3, 2026
* Notification of acceptance: August 15, 2026
* Camera-ready papers due: September 10, 2026
* Workshop: 24-29 October 2026
Overview
Digital technologies have brought significant benefits to society, transforming how people connect, communicate, and interact. However, these same technologies have also enabled the widespread dissemination and amplification of abusive and harmful content, such as hate speech, harassment, and misinformation. Given the sheer volume of content shared online, addressing abuse and harm at scale requires the use of computational tools. Yet, detecting and moderating online abuse remains a complex task, fraught with technical, social, legal, and ethical challenges.
The 10th Workshop on Online Abuse and Harms (WOAH) invites paper submissions from a diverse range of fields, including but not limited to natural language processing, machine learning, computational social science, law, political science, psychology, sociology, and cultural studies. We explicitly encourage interdisciplinary research, technical and non-technical contributions, and submissions that focus on under-resourced languages. Non-archival papers and civil society reports are also welcome.
Topics covered by WOAH include, but are not limited to:
* New models or methods for detecting abusive and harmful online content, including misinformation;
* Biases and limitations in existing detection models or datasets for abusive and harmful content, especially those in commercial use;
* Development of new datasets and taxonomies for online abuse and harms;
* Novel evaluation metrics and procedures for detecting harmful content;
* Analyses of the dynamics of online abuse, its propagation, and its impact on different communities;
* Social, legal, and ethical considerations in detecting, monitoring, and moderating online abuse.
Special Theme: “Ten Years of WOAH: Reflecting on Progress and New Frontiers”
In its 10th edition, WOAH highlights the theme “Ten Years of WOAH: Reflecting on Progress and New Frontiers”. Over the past decade, WOAH has become a central interdisciplinary venue for online harms research. As harms and enabling technologies have evolved, the field has moved beyond an early focus on textual hate speech and harassment to address more complex phenomena. Advances in AI and online ecosystems have expanded the scale and diversity of harms. Transformer models, multimodal platforms, and recommendation systems have contributed to the escalation of issues like misinformation, radicalisation, child sexual exploitation, identity-based abuse, algorithmic bias, privacy violations, and AI-mediated harms. Methods for tackling these harms have evolved from monolingual, lexicon-based approaches to deep learning, multilinguality, multimodality, interpretability, and interdisciplinarity.
Despite this progress, fundamental challenges remain. There is limited consensus on what constitutes “harm”, how context and thresholds should be defined, or how harms vary across cultures and modalities. These ambiguities affect datasets and models, constrain comparability, and often marginalise affected communities. The past decade also calls for critical self-reflection. Research has frequently prioritised detection, high-resource languages, and narrowly defined phenomena over intervention, global perspectives, and systemic or structural harms, with insufficient attention to user agency, platform incentives, lived experience, and participatory approaches. Finally, ten years of work have underscored that interdisciplinarity is essential for addressing the sociotechnical nature of the phenomenon. Addressing future online harms will require deeper integration across NLP, ML, social sciences, law, policy, and HCI. WOAH 10 seeks to consolidate lessons from the past decade, identify enduring gaps, and connect research, practice, and policy to guide the next generation of work on online harms.
Submission
Submission is electronic, using the Softconf START conference management system.
Submission link: TBA
The workshop will accept three types of papers.
1) Academic Papers (long and short): Long papers of up to 8 pages, excluding references, and short papers of up to 4 pages, excluding references. Unlimited pages for references and appendices. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted.
2) Non-Archival Submissions: Up to 2 pages, excluding references, to summarise and showcase in-progress work and work published elsewhere.
3) Civil Society Reports: Non-archival submissions, with a minimum of 2 pages and no upper limit. Can include work published elsewhere.
All submissions must use the official ACL style files<https://github.com/acl-org/acl-style-files>. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review. All submissions should adhere to the workshop policies https://www.workshopononlineabuse.com/policies.html.
WOAH Community
We are excited to share the WOAH community Slack channel — a workspace for researchers interested in or working on understanding and addressing online abuse and harms!
Join us here: https://join.slack.com/t/hatespeechdet-47d7560/shared_invite/zt-2a8d96j4z-g…
Contact Info
Please send any questions about the workshop to organizers@workshopononlineabuse.com
Organisers
Agostina Calabrese, Cohere
Thomas Davidson, Rutgers University-New Brunswick
Christine de Kock, University of Melbourne
Urja Khurana, Delft University of Technology
Marta Marchiori Manerba, University of Turin
Paloma Piot, Universidade da Coruña
Zeerak Talat, University of Edinburgh
The University of Edinburgh is a charitable body, registered in Scotland, with registration number SC005336.
*** Second Call for Participation for HAHA at IberLEF 2026
<https://sites.google.com/view/iberlef-2026> ***
Humor Analysis based on Human Annotation and Automatic Humor Generation
https://www.fing.edu.uy/inco/grupos/pln/haha/
Codabench page: https://www.codabench.org/competitions/14700/
NEWS:
The trial and development data have been released. You can now submit your
systems for the development phase!
Can computers be funny? Can humans identify computer-generated humor?
While humor has historically been studied from psychological, cognitive,
and linguistic perspectives, its computational study is an active area of
research in Machine Learning and Computational Linguistics that has gained
traction in recent years. There has been significant progress, mainly in
automatic humor detection and classification, but a characterization of
humor that enables its automatic recognition and generation is far from
solved.
This task aims to gain better insight into what is humorous and what
causes laughter, and to take some steps forward by assessing the
capabilities of current LLMs to generate genuinely humorous content in
Spanish and by testing whether computer-generated humor can be
automatically distinguished from humor written by humans. The target
audience is NLP researchers interested in advancing the understanding of
highly subjective and creative tasks, though anyone is welcome to
participate.
Task description
This year, the HAHA evaluation campaign proposes three different subtasks
related to automatic humor detection and generation, with the aim of
deepening our understanding of computational humor.
Subtask 1 - Humor Detection: determining if a news headline is satirical or
real. The main performance metric for this subtask will be the F1 score of
the 'humorous' class. This subtask is similar to the first subtask proposed
in previous editions of the HAHA shared task, but this time it's applied to
a particular domain where humorous and non-humorous content might sometimes
be difficult to tell apart.
Subtask 2 - LLM-generated humor detection: determining if a joke inspired
by a news headline was generated by an LLM or written by a human. The main
performance metric for this subtask will be the F1 score of the 'automatic'
class.
Subtask 3 - Humor Generation: generating jokes from a news headline using
computational methods. This subtask will be evaluated through human
preference judgments, employing LLM arena-style battles between pairs of
generated jokes, and ranking the systems using an Elo-based leaderboard.
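To illustrate the arena-style ranking for subtask 3, here is a minimal sketch of an Elo update over pairwise battle outcomes. The K-factor, initial rating, and update rule are standard Elo assumptions, not the organisers' exact method:

```python
# Elo-style leaderboard over pairwise joke "battles".
# K-factor (32) and initial rating (1000) are illustrative assumptions.

def expected_score(ra, rb):
    """Probability that the system rated ra beats the system rated rb."""
    return 1.0 / (1.0 + 10 ** ((rb - ra) / 400.0))

def update(ratings, winner, loser, k=32.0):
    """Apply one pairwise battle result to the ratings dict in place."""
    ea = expected_score(ratings[winner], ratings[loser])
    ratings[winner] += k * (1.0 - ea)
    ratings[loser] -= k * (1.0 - ea)

ratings = {"sys_a": 1000.0, "sys_b": 1000.0, "sys_c": 1000.0}
# (winner, loser) pairs from human preference judgments
battles = [("sys_a", "sys_b"), ("sys_a", "sys_c"),
           ("sys_b", "sys_c"), ("sys_a", "sys_b")]
for w, l in battles:
    update(ratings, w, l)

leaderboard = sorted(ratings, key=ratings.get, reverse=True)
```

A rating-based leaderboard like this only needs relative preference judgments, which suits a subjective task such as humor where absolute quality scores are hard to elicit consistently.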
How to Participate
The CodaBench page for the competition is available:
https://www.codabench.org/competitions/14700/
Important Dates
March 18th, 2026: team registration page available.
April 8th, 2026: development sets released and open for dev submissions.
May 27th, 2026: test sets released and open for test submissions.
June 3rd, 2026: end of test submissions, publication of results of subtasks
1 and 2.
June 10th, 2026: publication of results of subtask 3.
June 12th, 2026: paper submission.
June 23rd, 2026: notification of acceptance.
July 1st, 2026: camera-ready paper submission.
September 2026: IberLEF 2026 Workshop.
<Apologies for cross-postings>
------------------------------------------------
Test data released, and registration still open!
PROFE 2026: Language Proficiency Evaluation
IberLEF 2026 Shared Task
Website URL: https://sites.google.com/view/profe2026
Codabench site: https://www.codabench.org/competitions/15902/
PROFE 2026 reuses the Spanish proficiency exams developed by Instituto Cervantes over many years to evaluate human students. Automatic systems will therefore be evaluated under the same conditions as humans were. Systems will receive a set of exercises with their corresponding instructions, without specific training material. We thus expect transfer learning approaches or the use of generative large language models.
The previous edition proposed exams based only on text. In this new edition, we will include exams with images, which sometimes require interpretation to answer the exercise correctly. We propose evaluating systems on their ability to perform multimodal reasoning, moving beyond text-only comprehension.
We will provide a limited set of new image-based exercises while retaining the dataset from the previous edition. This setup encourages participants to develop strategies for handling the scarcity of specific training data.
Subtasks
PROFE 2026 has three subtasks, one per exercise type. Teams can participate in any combination of them. Each subtask contains several exercises of the same type. The subtasks are:
1. Multiple choice subtask: each exercise includes a text and a set of multiple-choice questions about the text, where only one answer is correct. Given a multiple-choice question, systems must select the correct answer among the candidates.
2. Matching subtask: each exercise contains two sets of texts. For each text in the first set, systems must find the best-matching text in the second set. There is only one possible match per text, but the first set can contain extra, unmatched texts.
3. Filling the gap subtask: each exercise contains a text with several gaps, corresponding to textual fragments that have been removed and presented in random order as options. Systems must determine the correct position for each fragment. There is only one correct fragment per gap, but there may be more candidates than gaps.
The different exercise types open research questions about how best to approach each of them, for example by adapting prompts when using generative models.
As the main novelty in this edition, some exercises will contain images. While some of these images will be the candidate answers (rather than text excerpts), others might provide visual information needed to answer the exercise correctly. Conversely, some images will not provide essential information. Consequently, systems participating in this edition must adopt a multimodal approach, capable of discerning when to integrate visual cues and when to disregard them. This necessity to filter visual relevance introduces significant new challenges compared to the previous edition.
Dataset
We will use the IC-UNED-RC-ES dataset created from real examinations at Instituto Cervantes. These exams were created by human experts to assess language proficiency in Spanish. We have already collected the exams and converted them to a digital format, which is ready to be used in the task. The dataset contains exams at different levels (from A1 to C2). The description of the full dataset was published in the following paper:
* Anselmo Peñas, Álvaro Rodrigo, Javier Fruns-Jiménez, Inés Soria-Pastor, Sergio Moreno-Álvarez, Alberto Pérez García-Plaza, and Julio Reyes-Montesinos. A Spanish Language Proficiency Dataset for AI Evaluation<https://www.mdpi.com/2078-2489/17/2/159>. Information 17, no. 2: 159. DOI: 10.3390/info17020159<https://doi.org/10.3390/info17020159>. 2026.
The complete dataset contains 282 exams with 855 exercises. The total number of evaluation points is 6146 (among 16570 options), distributed by exercise type as follows:
multiple-choice: 3544 responses
matching: 2309 responses
fill-the-gap: 293 responses
In PROFE 2026, we plan to use around 50% of the exams; the other 50% was already used for the PROFE 2025 edition.
We intend not to distribute the gold standard to prevent overfitting in post-campaign experiments and data contamination in LLMs.
Evaluation measures and baseline
We will use traditional accuracy (proportion of correct answers) as the main evaluation measure. Systems will receive evaluation scores from two different perspectives:
* At the question level, where correct answers are counted individually without grouping them.
* At the exam level, where scores for each exam are considered. Each exam contains several exercises of different types. An exam is considered passed if its accuracy (computed as the proportion of correct answers) is above 0.5. The proportion of passed exams is then reported as a global score. This perspective will only apply to teams participating in all three subtasks.
In more detail, the exact evaluation per subtask is as follows:
* Multiple choice subtask: we will measure accuracy as the proportion of questions correctly answered.
* Matching subtask: we will measure accuracy as the proportion of texts correctly matched.
* Fill in the gap subtask: we will measure accuracy as the proportion of correctly filled gaps.
We will use accuracy as the evaluation measure because there is only one correct option among the candidates, and because it is the measure applied to humans taking the same exams. Thus, we can compare the performance of automatic systems and humans under the same conditions.
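The two evaluation perspectives described above can be sketched as follows; the data shapes (per-question booleans grouped by exam) are illustrative assumptions, not the organisers' actual scorer:

```python
# Question-level accuracy vs exam-level pass rate, as described above.
# An exam is considered passed when its accuracy exceeds 0.5.

def question_accuracy(results):
    """results: flat list of booleans, one per question across all exams."""
    return sum(results) / len(results)

def exam_pass_rate(exams):
    """exams: list of per-exam boolean lists; passed if >50% correct."""
    passed = [sum(ex) / len(ex) > 0.5 for ex in exams]
    return sum(passed) / len(passed)

exams = [
    [True, True, False, True],    # accuracy 0.75 -> passed
    [False, True, False, False],  # accuracy 0.25 -> failed
    [True, True, True, False],    # accuracy 0.75 -> passed
]
flat = [q for ex in exams for q in ex]
q_acc = question_accuracy(flat)    # 7/12
pass_rate = exam_pass_rate(exams)  # 2/3
```

Note that the two scores can diverge: a system with many near-miss exams can have decent question-level accuracy but a low pass rate.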
A preliminary baseline using ChatGPT obtains the following results for each exercise type (note that different prompting can produce slightly different results):
* Multiple choice accuracy: 0.64
* Filling the gap accuracy: 0.43
* Matching accuracy: 0.51
Schedule
April 10, 2026: Development data released
April 27, 2026: Test set released
May 11, 2026: Deadline for submitting runs
May 18, 2026: Release of evaluation results
June 3, 2026: Paper submission deadline
Organizers
Alvaro Rodrigo<https://www.uned.es/universidad/docentes/informatica/alvaro-rodrigo-yuste.h…>, UNED NLP & IR Group (Universidad Nacional de Educación a Distancia)
Anselmo Peñas<https://www.uned.es/universidad/docentes/informatica/anselmo-penas-padilla.…>, UNED NLP & IR Group (Universidad Nacional de Educación a Distancia)
Alberto Pérez<https://www.uned.es/universidad/docentes/informatica/alberto-perez-garcia-p…>, UNED NLP & IR Group (Universidad Nacional de Educación a Distancia)
Sergio Moreno<https://www.uned.es/universidad/docentes/en/informatica/sergio-moreno-alvar…>, UNED NLP & IR Group (Universidad Nacional de Educación a Distancia)
Javier Fruns, Instituto Cervantes
Inés Soria, Instituto Cervantes
Rodrigo Agerri<https://ragerri.github.io/>, HiTz (Universidad del País Vasco, UPV/EHU)