*** First Call for Replication and Negative Results ***
37th IEEE International Symposium on Software Reliability Engineering
(ISSRE 2026)
October 20-23, 2026, 5* St. Raphael Resort and Marina
Limassol, Cyprus
https://cyprusconferences.org/issre2026/
The Replications and Negative Results (RENE) Track has been established in the software
engineering community for some time and has received overwhelmingly positive feedback. This
year, we introduce this track at ISSRE and invite researchers to (1) replicate results from
previous papers and (2) publish studies with important and relevant negative or null
results (results that fail to show an effect, yet demonstrate the research paths that did not
pay off).
We also encourage the publication of the negative results or replicable aspects of
previously published work. For example, authors of a published paper reporting a working
solution for a given problem can document in a “negative results paper” other (failed)
attempts they made before defining the working solution they published.
• Replication studies. The papers in this category must go beyond simply re-
implementing an algorithm and/or re-running the artifacts provided by the original paper.
Such submissions should at least apply the approach to new data sets (open-source or
proprietary). A replication study should clearly report on results that the authors were
able to replicate, as well as on the aspects of the work that were not replicable.
• Negative results papers. We seek papers that report on negative results from any type
of empirical research (qualitative, quantitative, case study, experiment, etc.). For example,
did your controlled experiment not show an improvement over the baseline? Negative results
are still valuable when they are either not obvious or disprove widely accepted wisdom.
Evaluation Criteria
Both Replication Studies and Negative Results submissions will be evaluated according to
the following standards:
• Depth and breadth of the empirical studies
• Clarity of writing
• Appropriateness of conclusions
• Amount of useful, actionable insights
• Availability of artifacts
• Underlying methodological rigor. A negative result due primarily to misaligned
expectations or to a lack of statistical power (small samples) does not make a good
submission. The negative result should stem from a lack of effect, not from a lack of
methodological rigor.
Most importantly, we expect replication studies to clearly point out the artifacts upon
which the study is built, and to provide the links to all the artifacts in the submission (the
only exception will be given to those papers that replicate the results on proprietary
datasets that cannot be publicly released).
Submission Instructions
Submissions must be original, in the sense that the findings and writing have not been
previously published and are not under consideration elsewhere. However, as replication
studies or negative results papers, some overlap with previous work is expected. Please
make clear in the paper the overlap with, and differences from, previous work.
All submissions must be in PDF format and conform, at time of submission, to the IEEE
Computer Society Format Guidelines:
(https://www.ieee.org/conferences/publishing/templates).
Authors are strongly encouraged to print the PDF and review it for integrity (fonts,
symbols, equations, etc.) before submission, as defective printing can undermine a
paper’s chance of success. By submitting to the ISSRE RENE Track, authors acknowledge
that they are aware of and agree to be bound by the IEEE Plagiarism FAQ. In particular,
papers submitted to the RENE track must not have been published elsewhere and must not
be under review or submitted for review elsewhere whilst under consideration for ISSRE
2026. Contravention of this concurrent submission policy will be deemed a serious breach
of scientific ethics, and appropriate action will be taken in all such cases. To check for
double submission and plagiarism issues, the chairs reserve the right to (1) share the list
of submissions with the PC Chairs of other conferences with overlapping review periods
and (2) use external plagiarism detection software, under contract to the IEEE, to detect
violations of these policies.
Submissions to the RENE Track can be made via the ISSRE RENE track submission site:
https://easychair.org/conferences?conf=issre2026 .
Submission Length: The ISSRE RENE Track accepts submissions of two lengths:
(1) New replication studies and new descriptions of negative results should have a length
of up to 10 pages, plus 2 pages which may only contain references.
(2) Negative results documented during the preparation of previously published work by
the authors should be described in up to 5 pages, plus 1 page, which may only contain
references (e.g., as previously mentioned, authors of a published paper can document
negative results they obtained while working on it, such as methodologically sound
solutions that did not work).
Important note 1: Both types of papers (replication and negative results) will be included
as part of the main conference proceedings.
Important note 2: The RENE track does not follow a double-anonymous review process.
Publication and Presentation
Upon notification of acceptance, all authors of accepted papers will receive further
instructions for preparing the camera-ready versions of their submissions. If a submission
is accepted, at least one author of the paper is required to have a full registration for ISSRE
2026, attend the conference, and present the paper in person. All accepted papers will be
published in the conference electronic proceedings. The presentation is expected to be
delivered in person, unless this is impossible due to travel limitations (e.g., related to
health or visa). Details about the presentations will follow the notifications.
The official publication date is the date the proceedings are made available in the IEEE
Digital Libraries. The official publication date affects the deadline for any patent filings
related to published work.
Purchases of additional pages in the proceedings are not allowed.
Important Dates (AoE)
• Submission deadline: July 5, 2026
• Notification of acceptance: August 12, 2026
• Camera-ready copy submission: August 19, 2026
• Author registration deadline: August 19, 2026
Organisation
General Chairs
• Leonardo Mariani, University of Milano - Bicocca, Italy
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Coordinator
• Roberto Natella, GSSI, Italy
Research Program Committee Chairs
• Domenico Cotroneo, UNC Charlotte, USA
• Jie M. Zhang, King's College London, UK
Industry Program Chairs
• Jinyang Liu, Bytedance, USA
• Sigrid Eldh, Ericsson AB, Sweden
Workshop Chairs
• Georgia Kapitsaki, University of Cyprus, Cyprus
• August Shi, The University of Texas at Austin, USA
Doctoral Symposium Chairs
• Stefan Winter, LMU Munich, Germany
• Lili Wei, McGill University, Canada
Fast Abstract Chairs
• Luigi Lavazza, University of Insubria, Italy
• Yintong Huo, SMU, Singapore
JIC2 Chair
• Helene Waeselynck, LAAS-CNRS, France
Publicity Chairs
• Allison K. Sullivan, The University of Texas at Arlington, USA
• Jose D'Abruzzo Pereira, University of Coimbra, Portugal
Publication Chairs
• Sherlock Licorish, Otago Business School, New Zealand
• Maria Teresa Rossi, GSSI, Italy
Artifact Evaluation Chairs
• Naghmeh Ivaki, University of Coimbra, Portugal
• Fumio Machida, University of Tsukuba, Japan
Diversity and Inclusion Chair
• Eleni Constantinou, University of Cyprus, Cyprus
Financial Chair
• Costas Pattichis, University of Cyprus, Cyprus
Web Chairs
• Michalis Ioannides, Easy Conferences LTD
• Elena Masserini, University of Milano - Bicocca, Italy
Registration Chair
• Easy Conferences LTD
We invite submissions to PoliticalNLP 2026, the 3rd Workshop on Natural Language Processing for Political Sciences, co-located with LREC 2026. The workshop will take place in Palma de Mallorca, Spain, at the Palau de Congressos de Palma.
Theme for 2026
Trust, Transparency and Generative AI in Political Discourse Analysis
Large language models and generative AI are increasingly shaping political communication, public opinion, and democratic processes. PoliticalNLP 2026 provides an interdisciplinary forum to examine these developments critically and responsibly, at the intersection of NLP, political science, law, and the social sciences.
Topics of interest include, but are not limited to
• Trustworthy, explainable, and fair NLP for political data
• Bias, misinformation, and ethical risks of LLMs
• Multilingual and cross-cultural political NLP
• Generative AI for policy analysis and deliberative democracy
• Reproducibility, transparency, and responsible AI practices
• Datasets, tools, and resources for political and civic technologies
Important dates
• Paper submission (long and short): 16 February 2026
• Notification: 11 March 2026
• Camera ready: 30 March 2026
• Workshop: 11 to 12 May 2026, or 16 May 2026 (final date to be confirmed by LREC)
Proceedings
Accepted papers will appear in the LREC 2026 Workshop Proceedings.
Submission and CFP
Full Call for Papers and details: https://sites.google.com/view/politicalnlp2026
Submission is electronic via the Softconf START system: https://softconf.com/lrec2026/PoliticalNLP2026/
Best regards,
PoliticalNLP 2026 Organizer
--
Wajdi Zaghouani, Ph.D.
Associate Professor in Residence,
Communication Program
Northwestern Qatar | Education City
T +974 4454 5232 | M +974 3345 4992
Second International Conference on Natural Language Processing
and Artificial Intelligence for Cyber Security
(NLPAICS'2026)
University of Alicante, Alicante, Spain
11 and 12 June 2026
https://nlpaics2026.gplsi.es/
Third Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning and
Large Language Models (LLMs) have resulted in improved performance of
applications. In particular, there has been a growing interest in
employing AI methods in different Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened
priority for both individual users and organisations. As the volume of
online information grows exponentially, traditional security approaches
often struggle to identify and prevent evolving security threats. The
inadequacy of conventional security frameworks highlights the need for
innovative solutions that can effectively navigate the complex digital
landscape to ensure robust security. NLP and AI in Cyber Security have
vast potential to significantly enhance threat detection and mitigation
by fostering the development of advanced security systems for autonomous
identification, assessment, and response to security threats in real
time. Recognising this challenge and the capabilities of NLP and AI
approaches to fortify Cyber Security systems, the Second International
Conference on Natural Language Processing (NLP) and Artificial
Intelligence (AI) for Cyber Security (NLPAICS'2026) continues the
tradition from NLPAICS'2024 to be a gathering place for researchers in
NLP and AI methods for Cyber Security. We invite contributions that
present the latest NLP and AI solutions for mitigating risks in
processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to
the employment of NLP and AI (and in general, language studies and
models) for Cyber Security, including but not limited to:
_Societal and Human Security and Safety_
* Content Legitimacy and Quality
  * Detection and mitigation of hate speech and offensive language
  * Fake news, deepfakes, misinformation and disinformation
  * Detection of machine-generated language in multimodal context (text, speech and gesture)
  * Trust and credibility of online information
* User Security and Safety
  * Cyberbullying and identification of internet offenders
  * Monitoring extremist fora
  * Suicide prevention
  * Clickbait and scam detection
  * Fake profile detection in online social networks
* Technical Measures and Solutions
  * Social engineering identification, phishing detection
  * NLP for risk assessment
  * Controlled languages for safe messages
  * Prevention of malicious use of AI models
  * Forensic linguistics
* Human Factors in Cyber Security
_Speech Technology and Multimodal Investigations for Cyber Security_
* Voice-based security: Analysis of voice recordings or transcripts
for security threats
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* NLP and biometrics in multimodal context
_Data and Software Security_
* Cryptography
* Digital forensics
* Malware detection, obfuscation
* Models for documentation
* NLP for data privacy and leakage prevention (DLP)
* Addressing dataset "poisoning" attacks
_Human-Centric Security and Support_
* Natural language understanding for chatbots: NLP-powered chatbots
for user support and security incident reporting
* User behaviour analysis: analysing user-generated text data (e.g.,
chat logs and emails) to detect insider threats or unusual behaviour
* Human supervision of technology for Cyber Security
_Anomaly Detection and Threat Intelligence_
* Text-Based Anomaly Detection
  * Identification of unusual or suspicious patterns in logs, incident reports or other textual data
  * Detecting deviations from normal behaviour in system logs or network traffic
* Threat Intelligence Analysis
  * Processing and analysing threat intelligence reports, news, articles and blogs on the latest Cyber Security threats
  * Extracting key information and indicators of compromise (IoCs) from unstructured text
_Systems and Infrastructure Security_
* Systems Security
  * Anti-reverse engineering for protecting privacy and anonymity
  * Identification and mitigation of side-channel attacks
  * Authentication and access control
  * Enterprise-level mitigation
  * NLP for software vulnerability detection
* Malware Detection through Code Analysis
  * Analysing code and scripts for malware
  * Detection using NLP to identify patterns indicative of malicious code
_Financial Cyber Security_
* Financial fraud detection
* Financial risk detection
* Algorithmic trading security
* Secure online banking
* Risk management in finance
* Financial text analytics
_Ethics, Bias, and Legislation in Cyber Security_
* Ethical and Legal Issues
  * Digital privacy and identity management
  * The ethics of NLP and speech technology
  * Explainability of NLP and speech technology tools
  * Legislation against malicious use of AI
  * Regulatory issues
* Bias and Security
  * Bias in Large Language Models (LLMs)
  * Bias in security-related datasets and annotations
_Datasets and resources for Cyber Security Applications_
_Specialised Security Applications and Open Topics_
* Intelligence applications
* Emerging and innovative applications in Cyber Security
_Special Theme Track - Future of Cyber Security in the Era of LLMs and
Generative AI_
NLPAICS 2026 will feature a special theme track with the goal of
stimulating discussion around Large Language Models (LLMs), Generative
AI and ensuring their safety. The latest generation of LLMs, such as
ChatGPT, Gemini, DeepSeek, LLAMA and open-source alternatives, has
showcased remarkable advancements in text and image understanding and
generation. However, as we navigate through uncharted territory, it
becomes imperative to address the challenges associated with employing
these models in everyday tasks, focusing on aspects such as fairness,
ethics, and responsibility. The theme track invites studies on how to
ensure the safety of LLMs in various tasks and applications and what
this means for the future of the field. The possible topics of
discussion include (but are not limited to) the following:
* Detection of LLM-generated language in multimodal context (text,
speech and gesture)
* LLMs for forensic linguistics
* Bias in LLMs
* Safety benchmarks for LLMs
* Legislation against malicious use of LLMs
* Tools to evaluate safety in LLMs
* Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two
forms:
* Regular long papers: These can be up to eight (8) pages long,
presenting substantial, original, completed, and unpublished work.
* Short (poster) papers: These can be up to four (4) pages long and
are suitable for describing small, focused contributions, ongoing
research, negative results, system demonstrations, etc. Short papers
will be presented as part of a poster session.
The conference will not consider abstract-only submissions.
Accepted papers, including both long and short papers, will be published as
e-proceedings with an ISBN, will be available online on the conference website
at the time of the conference, and are expected to be uploaded to the ACL
Anthology.
To prepare your submission, please make sure to use the NLPAICS 2026
style files available here:
LaTeX: NLPAICS_2026_LaTeX.zip [1]
Overleaf: https://www.overleaf.com/read/sgwmrzbmjfhc#aeea77
Word:
https://nlpaics2026.gplsi.es/wp-content/uploads/2025/11/NLPAICS2026_Proceed…
Papers should be submitted through Softconf/START using the following
link: https://softconf.com/p/nlpaics2026/user/
The conference will feature a student workshop, and awards will be
offered to the authors of the best papers.
Important dates
* Submissions due: 16 March 2026
* Reviewing process: 1 April - 30 April 2026
* Notification of acceptance: 5 May 2026
* Camera-ready due: 19 May 2026
* Conference camera-ready proceedings ready: 1 June 2026
* Conference: 11-12 June 2026
Organisation
Conference Chairs
Ruslan Mitkov (University of Alicante)
Rafael Muñoz (University of Alicante)
Programme Committee Chairs
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Publication Chair
Ernesto Estevanell (University of Alicante)
Sponsorship Chair
Andres Montoyo (University of Alicante)
Student Workshop Chair
Salima Lamsiyah (University of Luxembourg)
Best Paper Award Chair
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Publicity Chair
Beatriz Botella (University of Alicante)
Social Programme Chair
Alba Bonet (University of Alicante)
Venue
The Second International Conference on Natural Language Processing and
Artificial Intelligence for Cyber Security (NLPAICS'2026) will take
place at the University of Alicante and is organised by the University
of Alicante GPLSI research group.
Further information and contact details
The follow-up calls will list keynote speakers and members of the
programme committee once confirmed. The conference website is
https://nlpaics2026.gplsi.es/ and will be updated on a regular basis.
For further information, please email nlpaics2026(a)dlsi.ua.es
Registration will open in March 2026.
Links:
------
[1] http://summer-school.gplsi.es/NLPAICS_2026_LaTeX.zip
The deadline for submitting abstracts to the Learner Corpus Research conference, to be held 16–19 September 2026 in Prague, Czech Republic (https://lcr2026.ff.cuni.cz), has been extended until 7 February 2026.
*** Call for Participation ***
The Annual ACM Conference on Intelligent User Interfaces (IUI 2026)
March 23-26, 2026, 5* Coral Beach Hotel & Resort, Paphos, Cyprus
https://iui.hosting.acm.org/2026/
(*** Early Registration Deadline: February 13, 2026 ***)
The 2026 ACM Conference on Intelligent User Interfaces (ACM IUI) is the annual premier
venue, where researchers and practitioners meet and discuss state-of-the-art advances
at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI).
Ideal IUI submissions should address practical HCI challenges using machine intelligence
and discuss both computational and human-centric aspects of such methodologies,
techniques and systems.
This year we had a record number of submissions, and consequently a record number of
accepted papers (114) and of posters and demos (53); we hope for a record number of
participants as well.
Furthermore, we have 8 workshops and 5 tutorials.
Finally, the technical program will feature two keynotes: Antonio Krüger on the role of
HCI in trusted AI, and Pattie Maes on designing AI interaction for human flourishing.
The detailed program of IUI 2026 can be found on the conference website:
https://iui.acm.org/2026/program/ .
The early registration deadline is on February 13th and the registration page is:
https://iui.acm.org/2026/registration/
We are looking forward to meeting everybody in Paphos.
Organisation
General Chairs
• Tsvi Kuflik, The University of Haifa, Israel
• Styliani Kleanthous, Open University of Cyprus, Cyprus
Local Organising Chair
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Chairs
• Li Chen, Hong Kong Baptist University, China
• Giulio Jacucci, University of Helsinki, Finland
• Alison Renner, Dataminr, USA
Dear all,
I would like to draw your attention to the position announced below. We are searching for a postdoc or an advanced PhD student whose research interests align with the interdisciplinary focus of the project. The position requires:
a background in NLP, including prior experience with prompting LLMs and data analysis,
familiarity with key concepts in linguistics and formal pragmatics (e.g. presuppositions),
enthusiasm for interdisciplinary research.
The position is funded for 20 months and runs until the end of 2027.
If you are interested, please submit a single PDF containing:
a brief motivation letter outlining your research interests and connection to the APHIC project;
a CV, including a publication list;
contact information for one to two references.
Applications should be sent to Agnieszka Faleńska (agnieszka.falenska at ims.uni-stuttgart.de) by February 13, 2026. The position will remain open until filled, so please feel free to get in touch even if you come across this call after February 13th.
Best regards,
Agnieszka Faleńska
———
Dear colleagues,
We invite applications for a Postdoctoral Researcher position in the interdisciplinary project "Authority Presuppositions in Human—AI Communication" (APHIC) at the University of Stuttgart. The project is led by Dr. Agnieszka Faleńska [1] and Prof. Judith Tonhauser [2] under the IRIS-HISIT initiative.
Project
APHIC is part of the "Human-Intelligent Systems Interaction and Teaming" (IRIS-HISIT) programme [3] funded by the Ministry of Science, Research, and the Arts of Baden-Württemberg [4]. The project investigates authority presuppositions—implicit assumptions that an AI system has the expertise to deliver high-stakes advice (e.g., medical or legal). The project aims to (1) develop a theoretically grounded taxonomy of authority presuppositions, and (2) design and evaluate conversational repair strategies that maintain trust and informativeness while ensuring user safety.
Position
• Duration: 20 months
• Earliest start: March 2026
• Salary: TV-L 13 (100%), see [5] for details
• Environment: The researcher will be embedded in the IRIS community [6] and collaborate closely with colleagues at the Institute for Natural Language Processing [7] and the Institute of Linguistics [8].
Candidate Profile
• PhD in computational linguistics or a related field
• Familiarity with key concepts in linguistics and formal pragmatics, e.g. presuppositions
• Experience in programming, prompting LLMs, and machine learning
• Excellent communication skills and enthusiasm for interdisciplinary research
• Proficiency in English (German not required)
How to Apply
Please submit one PDF containing:
• a brief motivation letter outlining your research interests,
• your CV with publication list
• contact information for one to two references.
Applications should be sent to: Agnieszka Faleńska (agnieszka.falenska at ims.uni-stuttgart.de). Applications received before 13 February 2026 will be given full consideration. The position will remain open until filled, so do not hesitate to get in touch if you find this opening after 13 February.
The University of Stuttgart would like to increase the proportion of women in the scientific field and is therefore particularly interested in applications from women. Severely disabled persons are given priority in the case of equal suitability.
University of Stuttgart
The University of Stuttgart is a technically oriented university in Germany. It is especially known for engineering and related topics, with its computer science department being ranked highly, both nationally and internationally.
The city of Stuttgart is the capital of the state of Baden-Württemberg in southwest Germany. It is a lively and international city, known for its strong economy and rich culture. With Germany's high-speed train system, it is well connected to many other interesting places, for instance, Munich and Cologne (~2.5 hours), Paris (~3.5 hours), Berlin (~5.5 hours), Strasbourg (<1.5 hours), and Lake Constance (~2.5 hours).
Links
[1] www.ims.uni-stuttgart.de/en/institute/team/Falenska
[2] www.ling.uni-stuttgart.de/en/institute/team/Tonhauser/
[3] www.iris.uni-stuttgart.de/research/human-intelligent-systems-interaction-an…
[4] mwk.baden-wuerttemberg.de/de/startseite
[5] oeffentlicher-dienst.info/c/t/rechner/tv-l/west?id=tv-l-2025
[6] www.iris.uni-stuttgart.de/
[7] www.ims.uni-stuttgart.de
[8] www.ling.uni-stuttgart.de
**** We apologize for the multiple copies of this email. If you are
already registered for the next webinar, you do not need to register
again. ****
-------------------------------------
Dear colleague,
We are happy to announce the next webinar in the Language Technology
webinar series organized by The HiTZ Chair of Artificial Intelligence
and Language Technology (https://hitz.eus/katedra). We are organizing
one seminar every month.
Next webinar:
Speaker: Henning Wachsmuth (Leibniz University Hannover)
Title: Toward Argumentative Large Language Models
Date: Thursday, February 5, 2026 - 15:00
Summary: Today's large language models (LLMs) are optimized toward
giving helpful answers in response to prompts. In many situations,
however, it may be preferable for an LLM to foster critical thinking
rather than just following an instruction. While recent LLMs are said to
'reason', they barely build on established reasoning concepts known from
argumentation theory. In this talk, I will give insights into recent
efforts of my group in making LLMs more argumentative. Starting from
basics of LLM training processes, I will present how to specialize LLMs
for argumentation tasks via instruction fine-tuning as well as how to
align the arguments they generate using reinforcement learning. From
there, I will give an outlook on how to improve the actual reasoning
capabilities of LLMs.
Bio: Henning Wachsmuth leads the Natural Language Processing Group at
the Institute of Artificial Intelligence of Leibniz University Hannover.
After receiving his PhD from Paderborn University in 2015, he worked as
a PostDoc at Bauhaus-Universität Weimar and as a junior professor in
Paderborn, before he became a full professor in Hannover in 2022. His
group does basic research on large language models for computational
argumentation, social bias detection and mitigation, as well as
explainable and educational NLP. Henning's main research interests
include the generation of audience-aware text, the assessment of
pragmatic text quality, and the modeling of bias and framing.
Registration: https://www.hitz.eus/webinar_izenematea
Upcoming webinars:
José Andrés González-López (March 5)
Ranjay Krishna (April 16)
Barbara Plank (May 7)
You can view the videos of previous webinars and the schedule for
upcoming webinars here: http://www.hitz.eus/webinars
If you cannot attend this seminar, but you want to be informed of the
following HiTZ webinars, please complete this registration form instead:
http://www.hitz.eus/webinar_info
Best wishes,
The HiTZ Chair of Artificial Intelligence and Language Technology
P.S: HiTZ will not grant any type of certificate for attendance at these
webinars.
[apologies for cross posting]
DeTermIt! Workshop @ LREC 2026
Second Workshop on Evaluating Text Difficulty in a Multilingual Context
Location: Palau de Congressos de Palma, Palma de Mallorca (Spain)
#####################
Second Call for Papers
Schedule
- Paper submissions: 23 February 2026
- Notification of acceptance: 13 March 2026
- Camera-ready due: 30 March 2026
- Workshop: one of 11, 12, or 16 May 2026 (half-day)
All deadlines are 11:59PM UTC-12:00 AoE (“Anywhere on Earth”)
For more information, please visit:
Website: https://determit2026.dei.unipd.it/
#####################
In today’s interconnected world, where information dissemination knows no linguistic bounds, it is crucial to ensure that knowledge is accessible to diverse audiences, regardless of language proficiency and domain expertise. Automatic Text Simplification (ATS) and text difficulty assessment are central to this goal, especially in the age of Large Language Models (LLMs) and Generative AI (GenAI), which increasingly mediate access to information.
The second edition of the DeTermIt! workshop focuses on the evaluation and modeling of text difficulty in multilingual, terminology-rich contexts, with a particular emphasis on the interaction between:
- text simplification,
- terminology and conceptual complexity, and
- LLM/GenAI-based generation and rewriting.
The 2026 edition builds on the first DeTermIt! workshop held at LREC-COLING 2024 (https://determit2024.dei.unipd.it/), as well as related initiatives such as the CLEF SimpleText track (https://simpletext-project.com/), which provides reusable data and benchmarks for scientific text summarization and simplification. DeTermIt! 2026 aims to bring together researchers and practitioners interested in terminology-aware simplification, lexical and conceptual difficulty, and evaluation protocols for GenAI systems.
We welcome contributions that address theoretical, methodological, and applied aspects of text difficulty, including resource creation and evaluation (e.g., corpora, datasets, and benchmarks), with a focus on how linguistic complexity, specialized terminology, and domain knowledge interact with human understanding. In particular, we encourage work that explores how LLMs and GenAI can be evaluated, constrained, or guided to produce readable, faithful, and accessible texts.
#####################
Topics of Interest
#####################
We invite submissions on (but not limited to) the following themes:
1. Theoretical and Modeling Perspectives
- Cognitive and linguistic models of text and lexical complexity.
- Multilingual readability and text difficulty prediction.
- Modeling conceptual difficulty and domain-specific terminology.
- Theoretical connections between lexicography, terminology, and text simplification.
2. Terminology and Conceptual Complexity
- Identification and classification of specialized terms and concepts.
- Estimation of term difficulty for lay readers and second language learners.
- Use of terminological databases, ontologies, and knowledge graphs in simplification pipelines.
- Methods for adapting domain-specific terminology for accessible communication (e.g., in medicine, law, technology).
3. Generative and Explainable AI for Text Simplification
- LLM- and GenAI-based approaches to text simplification and paraphrasing.
- Terminology-Augmented Generation (TAG) and term-preserving simplification.
- Evaluation of GenAI outputs: readability, factuality, terminology fidelity, and hallucination analysis.
- Readability-controlled or difficulty-controlled generation; controllable simplification.
- Human-centered and explainable approaches to text accessibility in GenAI systems.
4. Resources, Benchmarks, and Evaluation Frameworks
- Corpora, annotation schemes, and benchmarks for text difficulty and simplification.
- Datasets and methods for evaluating terminology-aware simplification and explanation.
- FAIR and reusable resources for multilingual text accessibility.
- Evaluation protocols and metrics for cross-lingual and cross-domain simplification and GenAI-based rewriting.
5. Applications and Case Studies
- Domain-specific simplification (e.g., healthcare, legal, scientific communication).
- Tools and systems for educational settings, language learning, or accessible communication.
- User studies, human evaluation setups, and mixed-method approaches to assessing text difficulty and GenAI-assisted simplification.
- Industrial and real-world experiences with integrating ATS and terminology into LLM-driven workflows.
#####################
Submission Guidelines
#####################
We invite original contributions, including research papers, case studies, negative results, and system demonstrations.
When submitting a paper through the START system of LREC 2026, authors will be asked to provide essential information about language resources (in a broad sense: data, tools, services, standards, evaluation packages, etc.) that have been used for the work described in the paper or are a new result of the research. ELRA strongly encourages all authors to share the resources described in their papers to support reproducibility and reusability.
Papers must be compliant with the stylesheet adopted for the LREC 2026 Proceedings (see https://lrec2026.info/authors-kit/).
Accepted papers will be published in the LREC 2026 workshop proceedings.
PAPER TYPES
We accept three types of submissions:
- Regular long papers – up to eight (8) pages of content, presenting substantial, original, completed, and unpublished work.
- Short papers – up to four (4) pages of content, describing smaller focused contributions, work in progress, negative results, or system demonstrations.
- Position papers – up to eight (8) pages of content, discussing key open challenges, methodological issues, and cross-disciplinary perspectives on text difficulty, terminology, and GenAI.
References do not count toward the page limits.
#####################
Organizers
#####################
Chairs
Giorgio Maria Di Nunzio, University of Padua, Italy
Federica Vezzani, University of Padua, Italy
Liana Ermakova, Université de Bretagne Occidentale, France
Hosein Azarbonyad, Elsevier, The Netherlands
Jaap Kamps, University of Amsterdam, The Netherlands
Scientific Committee
Florian Boudin - Nantes University, France
Lynne Bowker - University of Ottawa, Canada
Sara Carvalho - Universidade NOVA de Lisboa / Universidade de Aveiro, Portugal
Rute Costa - Universidade NOVA de Lisboa, Portugal
Eric Gaussier - University Grenoble Alpes, France
Natalia Grabar - CNRS, France
Ana Ostroški Anić - Institute of Croatian Language and Linguistics, Croatia
Tatiana Passali - Aristotle University of Thessaloniki, Greece
Grigorios Tsoumakas - Aristotle University of Thessaloniki, Greece
Sara Vecchiato - University of Udine, Italy
Cornelia Wermuth - KU Leuven, Belgium
#####################
Contact
#####################
For inquiries, please contact:
giorgiomaria.dinunzio(a)unipd.it
ComputEL-9: Ninth Workshop on the Use of Computational Methods in the
Study of Endangered Languages
First CALL FOR PAPERS for REGULAR SESSION
Submission deadline: March 20, 2026
Submission link: https://softconf.com/acl2026/ComputEL2026
ComputEL-9 will be co-located with ACL 2026 in San Diego, California.
We encourage submissions that explore the interface and intersection of
computational linguistics, documentary linguistics, and community-based
efforts in language revitalization and reclamation. This includes
submissions that:
(i) demonstrate new methods or technologies for tasks or applications
focused on low-resource settings, and in particular, endangered languages,
(ii) examine the use of specific methods in the analysis of data from
low-resource languages, or demonstrate new methods for analysis of such
data, oriented toward the goals of language reclamation and revitalization,
(iii) propose new models for the collection, management, and
mobilization of language data in community settings, with attention to
e.g. issues of data sovereignty and community protocols,
(iv) explore concrete steps for a more fruitful interaction among
computer scientists, documentary linguists, and language communities.
IMPORTANT DATES
20-Mar-2026 Deadline for submissions
1-May-2026 Notification of Acceptance
Early May Camera-ready papers due
(Exact date TBD by ACL)
July 3 or 4 Workshop (Exact date TBD by ACL)
PRESENTATIONS
Presentation of accepted papers will be in both oral sessions and a
poster session. The decision on whether a presentation for a paper will
be oral and/or poster will be made by the Organizing Committee on the
advice of the Program Committee, taking into account the subject matter
and how the content might be best conveyed. Oral and poster
presentations will not be distinguished in the Proceedings.
SUBMISSIONS
In line with our goal of reaching multiple overlapping communities,
authors can submit to one of the workshop’s tracks: (a) language
community perspective and (b) academic perspective.
All submissions must be anonymous following ACL guidelines and will be
peer-reviewed by the scientific Program Committee.
PROCEEDINGS
The authors of selected accepted full papers (long or short) will be
invited by the Organizing Committee to submit their papers for online
publication via the open-access ACL Anthology. Final versions of long
and short papers will be allotted one additional page (altogether 9 and
5 pages, respectively), excluding references.
Proceedings papers should be revised and improved versions of the work
that was submitted for, and which underwent, review. Any revisions
should concern responses to reviewer comments or the addition of
relevant details and clarifications, but not entirely new, unreviewed
content.
ADDITIONAL AND CONTACT INFORMATION
Please see the ComputEL-9 website for further information:
https://computel-workshop.org/computel-9/
Email for Organizing Committee: computel.workshop(a)gmail.com
----------------------------------------------------------------------
ICMI 2026 CALL FOR PAPERS
============================================
5-9 October 2026, Napoli - Italy
https://icmi.acm.org/2026/
============================================
The 28th International Conference on Multimodal Interaction (ICMI 2026) will be held in Napoli, Italy. ICMI is the premier international forum for advancing research at the intersection of multimodal artificial intelligence (AI) and social interaction to create technically innovative, effective, and human-centered multimodal interactive systems. A unique aspect of ICMI is its multidisciplinary nature, bringing together research in AI, multimodal data processing, human-machine, and human-human interaction to bridge behavioral understanding with technology, with an eye towards impactful applications that benefit people and society.
Novelty will be evaluated along two dimensions: scientific novelty and technical novelty. Accepted papers at ICMI 2026 must demonstrate novelty in at least one of these two dimensions.
The theme of this year's conference is "Context and Cultural Awareness for Multimodal Interaction", to explore how context and cultural factors influence multimodal interaction systems, including their design, implementation, and evaluation. We welcome papers that address the integration of contextual understanding, such as environmental, social, and emotional factors, into multimodal interaction systems. We also encourage contributions that explore cultural considerations in the development and deployment of interactive technologies.
Topics of interest include, but are not limited to:
* Affective computing and interaction
* User-adaptive systems
* Cognitive modelling and multimodal interaction
* Context-aware modelling
* Cross-cultural design and evaluation
* Gesture, touch, and haptics
* Healthcare, assistive technologies
* Human communication dynamics
* Human-robot/agent multimodal interaction
* Human-centred AI and ethics
* Interaction with a smart environment
* Machine learning for multimodal interaction
* Mobile and wearable multimodal systems
* Multimodal behaviour generation
* Multimodal datasets and validation
* Multimodal dialogue modeling
* Multimodal fusion and representation
* Multimodal interactive applications
* Novel multimodal datasets
* Spoken/visual behaviours in social interaction
* System components and multimodal platforms
* Virtual/augmented reality and multimodal interaction
Commitment to ethical conduct is mandatory, and submissions must adhere to ethical standards, in particular when human-derived data are employed. Authors are encouraged to consult the ACM Code of Ethics and Professional Conduct (https://ethics.acm.org/).
*** Important Dates
Abstract deadline April 13, 2026
Paper Submission April 20, 2026
Rebuttal Period June 1, 2026
Paper notification July 6, 2026
Camera-ready paper July 23, 2026
Presentation at the main conference October 6-8, 2026
*** ACM Publication Policies
ACM's New Open Access Publishing Model. Starting January 1, 2026, ACM will fully transition to Open Access. All ACM publications, including those from ACM-sponsored conferences, will be 100% Open Access. Authors will have two primary options for publishing Open Access articles with ACM: the ACM Open institutional model or by paying Article Processing Charges (APCs). With over 1,800 institutions already part of ACM Open, the majority of ACM-sponsored conference papers will not require APCs from authors or conferences (currently, around 70-75%).
Authors from institutions not participating in ACM Open will need to pay an APC to publish their papers, unless they qualify for a financial or discretionary waiver. To find out whether an APC applies to your article, please consult the list of participating institutions in ACM Open and review the APC Waivers and Discounts Policy. Keep in mind that waivers are rare and are granted based on specific criteria set by ACM.
Understanding that this change could present financial challenges, ACM has approved a temporary subsidy for 2026 to ease the transition and allow more time for institutions to join ACM Open. The subsidy will offer:
* $250 APC for ACM/SIG members
* $350 APC for non-members
This represents a 65% discount, funded directly by ACM. Authors are encouraged to help advocate for their institutions to join ACM Open during this transition period. This temporary subsidized pricing will apply to all conferences scheduled for 2026, including ICMI.