We are glad to announce the release and availability of the PhraseNET Database Toolbox v3, a web app for mobile phones and tablets with an icon menu offering 29 resources.
PhraseNET Database Toolbox for Mobile Devices (https://www.phrasenet.com/webapp/index.html) is part of an integrated cross-platform package that runs across Windows and Android environments. The focus here is on the reduced version, which offers an icon menu designed for mobile devices but also works on the desktop. The user interface provides dialogues for extracting and loading n-grams, collocations, phraseological units, phrasal verbs and keywords, for storing corpora in databases, and for concordancing web and database corpora. The concordancers align a variable number of words to the left and right of each keyword, giving an easily scannable view in which the keyword sits in the centre, together with a small sample of its context or the complete sentence.
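As an illustration of the concordance view just described, the sketch below shows a minimal keyword-in-context (KWIC) routine in Python. It is not PhraseNET code; the function name and default window size are our own assumptions.

    def kwic(tokens, keyword, window=4):
        # Yield (left context, keyword, right context) for every match,
        # mimicking the centred-keyword layout of a concordancer.
        for i, tok in enumerate(tokens):
            if tok.lower() == keyword.lower():
                left = " ".join(tokens[max(0, i - window):i])
                right = " ".join(tokens[i + 1:i + 1 + window])
                yield left, tok, right

    tokens = "the cat sat on the mat while the dog watched the cat".split()
    for left, kw, right in kwic(tokens, "cat"):
        # Right-align the left context so keywords line up in one column.
        print(f"{left:>24} | {kw} | {right}")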
CHADES (Corpus Hispanoamericano del Español) is integrated into PhraseNET in a way that allows the extraction of phraseological units through keyword search. The linear text, without hypertextualization, is available online at https://chades.com.br/chades/chades.html; the same corpus with hypertextualization, in non-linear form, is available at http://dhytex.com/dhytex.html. It is a set of compiled texts written by native speakers of many varieties of Spanish, drawn from literature and newspapers and based on both printed and digital sources.
Sincerely,
J. L. De LUCCA
Universidad Politécnica de Valencia, PhD, 2011
University of São Paulo, PhD, 2001
*** Call for Participation for MWAHAHA at SemEval 2026 (Task 1) ***
Can computers be funny?
MWAHAHA - Models Write Automatic Humor And Humans Annotate at SemEval
2026 (Task 1)
https://pln-fing-udelar.github.io/semeval-2026-humor-gen/
While Humor Understanding has been the focus of many shared tasks, Humor
Generation remains an even more challenging and largely unexplored
frontier. MWAHAHA, which stands for Models Write Automatic Humor And
Humans Annotate, is SemEval 2026's Task 1 [1] and is the first task
dedicated to advancing the state of the art in Computational Humor
Generation. We invite participants to develop systems capable of
generating genuinely humorous content under various constraints.
Our goal is to push models beyond memorization and towards true humorous
creativity. By using carefully designed constraints, we aim to ensure
fairness in evaluation and encourage the generation of novel jokes. This
task has significant implications for more engaging conversational AI,
creative writing tools, and a deeper understanding of the complex nature
of humor itself.
The task is organized into two subtasks:
* Subtask A: Text-based Humor Generation
Systems must generate jokes satisfying given constraints (e.g., related
to a news headline, or containing certain words). This subtask is in
English, Spanish, and Chinese.
* Subtask B: Image-Based Caption Generation
Given a GIF image, systems must generate a free-form humorous caption
that enhances its comedic effect. This subtask is in English only.
The participating systems will be evaluated based on human preferences
in 1-on-1 arena-style battles.
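As a purely illustrative aside (the task description does not specify
how battle outcomes are aggregated into a ranking), pairwise human votes
of this kind are commonly summarised with a rating scheme such as Elo;
the sketch below, including the system names, is hypothetical.

    def elo_update(r_a, r_b, a_wins, k=32.0):
        # One Elo update after a single A-vs-B battle; zero-sum by design.
        expected_a = 1.0 / (1.0 + 10 ** ((r_b - r_a) / 400.0))
        delta = k * ((1.0 if a_wins else 0.0) - expected_a)
        return r_a + delta, r_b - delta

    ratings = {"sysA": 1000.0, "sysB": 1000.0}
    votes = [("sysA", "sysB", True), ("sysA", "sysB", True),
             ("sysA", "sysB", False)]  # (first system, second, first wins?)
    for a, b, a_wins in votes:
        ratings[a], ratings[b] = elo_update(ratings[a], ratings[b], a_wins)
    print(sorted(ratings.items(), key=lambda kv: -kv[1]))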
To participate in this task, please join our CodaBench competition:
https://www.codabench.org/competitions/9719/
Important Dates:
* Development data release: September 1, 2025
* Evaluation trial phase starts: October 1, 2025
* Evaluation trial phase ends: December 15, 2025
* Evaluation period starts: January 10, 2026
* Evaluation period ends: January 31, 2026
* System description paper submission: February 28, 2026
* Notification of acceptance: March 31, 2026
* Camera-ready papers due: April 30, 2026
* SemEval 2026 Workshop: July 2026
Links:
------
[1] https://semeval.github.io/SemEval2026/
We are glad to inform you that *PROPOR 2026*, the 17th International
Conference on Computational Processing of Portuguese (Salvador, Bahia,
April 13-16, 2026),
has extended the submission date to *7 December 2025*.
Check the website for more details: https://propor2026.ufba.br/#cfp
Larissa Freitas and Diana Santos, Program Chairs
*** Second Combo Call for Workshop Papers ***
The Annual ACM Conference on Intelligent User Interfaces (IUI 2026)
March 23-26, 2026, 5* Coral Beach Hotel & Resort, Paphos, Cyprus
https://iui.hosting.acm.org/2026/
The ACM Conference on Intelligent User Interfaces (ACM IUI) is the leading annual venue
for researchers and practitioners to explore advancements at the intersection of Artificial
Intelligence (AI) and Human-Computer Interaction (HCI).
IUI 2026 attracted a record number of submissions for the main conference (561 full
paper submissions after an initial submission of 697 abstracts). Although the submission
deadline for the main conference is now over, we welcome the submission of papers to
a number of workshops that will be held as part of IUI 2026.
A list of these workshops, with a short description and the workshops' websites for
further information, follows below.
AgentCraft: Workshop on Agentic AI Systems Development (full-day workshop)
Organizers: Karthik Dinakar (Pienso), Justin D. Weisz (IBM Research), Henry Lieberman
(MIT CSAIL), Werner Geyer (IBM Research)
URL: https://agentcraft-iui.github.io/2026/
Ambitious efforts are underway to build AI agents powered by large language models
across many domains. Despite emerging frameworks, key challenges remain: autonomy,
reasoning, unpredictable behavior, and consequential actions. Developers struggle to
comprehend and debug agent behaviors, as well as determine when human oversight is
needed. Intelligent interfaces that enable meaningful oversight of agentic plans,
decisions, and actions are needed to foster transparency, build trust, and manage
complexity. We will explore interfaces for mixed-initiative collaboration during agent
development and deployment, design patterns for debugging agent behaviors, strategies
for determining developer control and oversight, and evaluation methods grounding
agent performance in real-world impact.
AI CHAOS! 1st Workshop on the Challenges for Human Oversight of AI Systems
(full-day workshop)
Organizers: Tim Schrills (University of Lübeck), Patricia Kahr (University of Zurich),
Markus Langer (University of Freiburg), Harmanpreet Kaur (University of Minnesota),
Ujwal Gadiraju (Delft University of Technology)
URL: https://sites.google.com/view/aichaos/iui-2026
As AI permeates high-stakes domains—healthcare, autonomous driving, criminal justice
—failures can endanger safety and rights. Human oversight is vital to mitigate harm, yet
methods and concepts remain unclear despite regulatory mandates. Poorly designed
oversight risks false safety and blurred accountability. This interdisciplinary workshop
unites AI, HCI, psychology, and regulation research to close this gap. Central questions
are: How can systems enable meaningful oversight? Which methods convey system states
and risks? How can interventions scale? Through papers, talks, and interactive
discussions, participants will map challenges, define stakeholder roles, survey tools,
methods, and regulations, and set a collaborative research agenda.
CURE 2026: Communicating Uncertainty to foster Realistic Expectations via Human-
Centered Design (half-day workshop)
Organizers: Jasmina Gajcin (IBM Research), Jovan Jeromela (Trinity College Dublin), Joel
Wester (Aalborg University), Sarah Schömbs (University of Melbourne), Styliani Kleanthous
(Open University of Cyprus), Karthikeyan Natesan Ramamurthy (IBM Research), Hanna
Hauptmann (Utrecht University), Rifat Mehreen Amin (LMU Munich)
URL: https://cureworkshop.github.io/cure-2026/
Communicating system uncertainty is essential for achieving transparency and can help
users calibrate their trust in, reliance on, and expectations from an AI system. However,
uncertainty communication is plagued by challenges such as cognitive
biases, limited numeracy skills, miscalibrated risk perception, and
increased cognitive load, with research finding that lay users can
struggle to interpret probabilities and uncertainty visualizations.
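As one concrete illustration of this design space (ours, not the
workshop's), probabilities are often easier for lay users to read when
rephrased as natural frequencies; the helper name below is hypothetical.

    def frequency_framing(p: float, reference: int = 100) -> str:
        # Rephrase a probability as a natural frequency, a common
        # risk-communication technique (e.g., '83 out of 100 cases').
        n = round(p * reference)
        return (f"In about {n} out of {reference} similar cases, "
                f"the system's answer was correct.")

    print(frequency_framing(0.83))
    # -> In about 83 out of 100 similar cases, the system's answer was correct.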
HealthIUI 2026: Workshop on Intelligent and Interactive Health User Interfaces
(half-day workshop)
Organizers: Peter Brusilovsky (University of Pittsburgh), Behnam Rahdari (Stanford
University), Shriti Raj (Stanford University), Helma Torkamaan (TU Delft)
URL: https://healthiui.github.io/2026/
As AI transforms health and care, integrating Intelligent User Interfaces (IUI) in wellness
applications offers substantial opportunities and challenges. This workshop brings
together experts from HCI, AI, healthcare, and related fields to explore how IUIs can
enhance long-term engagement, personalization, and trust in health systems. Emphasis
is on interdisciplinary approaches to create systems that are advanced, responsive to
user needs, mindful of context, ethics, and privacy. Through presentations, discussions,
and collaborative sessions, participants will address key challenges and propose
solutions to drive health IUI innovation.
MIRAGE: Misleading Impacts Resulting from AI-Generated Explanations (full-day
workshop)
Organizers: Simone Stumpf (University of Glasgow), Upol Ehsan (Northeastern University),
Elizabeth M. Daly (IBM Research), Daniele Quercia (Nokia Bell Labs)
URL: https://mirage-workshop.github.io
Explanations from AI systems can illuminate, yet they can misguide. MIRAGE at IUI
tackles pitfalls and dark patterns in AI explanations. Evidence now shows that
explanations may inflate unwarranted trust, warp mental models, and obscure power
asymmetries—even when designers intend no harm. We classify XAI harms as Dark
Patterns (intentional, e.g., trust-boosting placebos) and Explainability Pitfalls
(unintended effects without manipulative intent). These harms include error propagation
(model risks), over-reliance (interaction risks), and false security (systemic risks). We
convene an interdisciplinary group to define, detect, and mitigate these risks. MIRAGE
shifts focus to safe explanations, advancing accountable, human-centered AI.
PARTICIPATE-AI: Exploring the Participatory Turn in Citizen-Centred AI (half-day
workshop)
Organizers: Pam Briggs (Northumbria University), Cristina Conati (University of British
Columbia), Shaun Lawson (Northumbria University), Kyle Montague (Northumbria
University), Hugo Nicolau (University of Lisbon), Ana Cristina Pires (University of Lisbon),
Sebastien Stein (University of Southampton), John Vines (University of Edinburgh)
URL: https://sites.google.com/view/participate-ai/workshop
This workshop explores value alignment for participatory AI, focusing on interfaces and
tools that bridge citizen participation and technical development. As AI systems
increasingly impact society, meaningful and actionable citizen input in their development
becomes critical. However, current participatory approaches often fail to influence actual
AI systems, with citizen values becoming trivialized. This workshop will address
challenges such as risk articulation, value evolution, democratic legitimacy, and the
translation gap between community input and system implementation. Topics include
value elicitation within different communities, critical analysis of failed participatory
attempts, and methods for making citizen concerns actionable for developers.
SHAPEXR: Shaping Human-AI-Powered Experiences in XR (full-day workshop)
Organizers: Giuseppe Caggianese (National Research Council of Italy, Institute for High-
Performance Computing and Networking Napoli), Marta Mondellini (National Research
Council of Italy, Institute of Intelligent Industrial Systems and Technologies for Advanced
Manufacturing, Lecco), Nicola Capece (University of Basilicata), Mario Covarrubias
(Politecnico di Milano), Gilda Manfredi (University of Basilicata)
URL: https://shapexr.icar.cnr.it
This workshop explores how eXtended Reality (XR) can serve as a multimodal interface
for AI systems, including LLMs and conversational agents. It focuses on designing
adaptive, human-centered XR environments that incorporate speech, gesture, gaze, and
haptics for seamless interaction. Main topics include personalization, accessibility,
cognitive load, trust, and ethics in AI-driven XR experiences. Through presentations,
discussions, and collaborative sessions, the workshop aims to establish a subcommunity
within IUI to develop a roadmap that includes design principles and methodologies for
inclusive and adaptive intelligent interfaces, enhancing human capabilities across various
domains, such as healthcare, education, and collaborative environments.
TRUST-CUA: Trustworthy Computer-Using Generalist Agents for Intelligent User
Interfaces (full-day workshop)
Organizers: Toby Jia-Jun Li (University of Notre Dame), Segev Shlomov (IBM Research),
Xiang Deng (Scale AI), Ronen Brafman (Ben-Gurion University of the Negev), Avi Yaeli
(IBM Research), Zora (Zhiruo) Wang (Carnegie Mellon University)
URL: https://sites.google.com/view/trust-cuaiui26/home
Computer-Using Agents (CUAs) are moving from point automations to generalist agents
acting across GUIs, browsers, APIs, and CLIs—raising core IUI questions of trust,
predictability, and control. This workshop advances trustworthy-by-design CUAs
through human-centered methods: mixed-initiative interaction, explanation and
sensemaking, risk/uncertainty communication, and recovery/rollback UX. Outcomes
include (1) a practical TRUST-CUA checklist for oversight, consent, and auditing, (2) a
user-centered evaluation profile (“CUBench-IUI,” e.g., predictability, oversight effort,
time-to-recovery, policy-aligned success), and (3) curated design patterns and open
challenges for deployable, accountable agentic interfaces.
Important Dates
• Paper Submission: December 19, 2025
• Notification: February 2, 2026
All dates are 23:59h AoE (anywhere on Earth).
Organisation
General Chairs
• Tsvi Kuflik, The University of Haifa, Israel
• Styliani Kleanthous, Open University of Cyprus, Cyprus
Local Organising Chair
• George A. Papadopoulos, University of Cyprus, Cyprus
Workshop and Tutorial Chairs
• Karthik Dinakar, Pienso Inc, USA
• Werner Geyer, IBM Research, USA
• Patricia Kahr, University of Zurich, Switzerland
• Antonela Tommasel, ISISTAN, CONICET-UNCPBA, JKU, Argentina, Austria
(apologies for cross-postings)
Dear colleagues,
We are looking for two Postdoctoral Research Associates (one in Latin linguistics and one in computational linguistics) for the ERC-selected project “Computational Corpus Annotation for Quantitative Analysis of Latin Lexical Semantics” (COALA) at King’s College London. Led by Dr Barbara McGillivray, the project is funded by UKRI under the ERC Guarantee scheme and explores Latin semantics through large-scale corpus annotation and computational analysis, combining NLP with historical linguistics.
Location: Strand Campus, London
Fixed-term: 3 years
Closing date for applications: 2 December 2025
1. Post-doctoral research associate in Latin Linguistics
The successful candidate will design and curate a 1-million-token Latin corpus, conduct annotation of word senses in the corpus, and conduct case studies on Latin semantic change and variation.
Fixed-term: 3 years
Full details and application link: https://www.kcl.ac.uk/jobs/130725-post-doctoral-research-associate-in-latin…
2. Post-doctoral research associate in Computational Linguistics
The successful candidate will design and curate a 1-million-token Latin corpus, conduct annotation of word senses in the corpus, and conduct case studies on Latin semantic change and variation.
Fixed-term: 2 years and 11 months
Full details and application link: https://www.kcl.ac.uk/jobs/130705-post-doctoral-research-associate-in-compu…
Dr Barbara McGillivray, FHEA | @BarbaraMcGilli <https://twitter.com/BarbaraMcGilli>
Senior Lecturer in Digital and Cultural Humanities and convenor of the MA programme in Digital Humanities
Room 3.28, Department of Digital Humanities, King’s College London, Strand Campus, Strand, London, WC2R 2LS
Group lead of the Computational Humanities Research Group<https://www.kcl.ac.uk/research/computational-humanities-research-group>
Open Research Lead, Faculty of Arts and Humanities
Editor-in-chief of Journal of Open Humanities Data<https://openhumanitiesdata.metajnl.com/>
Second International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2026)
University of Alicante, Alicante, Spain
11 and 12 June 2026
https://nlpaics2026.gplsi.es/
Second Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning and Large Language Models (LLMs) have resulted in improved performance across a wide range of applications. In particular, there has been growing interest in employing AI methods in different Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened priority for both individual users and organisations. As the volume of online information grows exponentially, traditional security approaches often struggle to identify and prevent evolving security threats. The inadequacy of conventional security frameworks highlights the need for innovative solutions that can effectively navigate the complex digital landscape to ensure robust security. NLP and AI in Cyber Security have vast potential to significantly enhance threat detection and mitigation by fostering the development of advanced security systems for autonomous identification, assessment, and response to security threats in real-time. Recognising this challenge and the capabilities of NLP and AI approaches to fortify Cyber Security systems, the Second International Conference on Natural Language Processing (NLP) and Artificial Intelligence (AI) for Cyber Security (NLPAICS’2026) continues the tradition from NLPAICS’2024 to be a gathering place for researchers in NLP and AI methods for Cyber Security. We invite contributions that present the latest NLP and AI solutions for mitigating risks in processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to the employment of NLP and AI (and, in general, language studies and models) for Cyber Security, including but not limited to:
- Societal and Human Security and Safety
- Content Legitimacy and Quality
- Detection and mitigation of hate speech and offensive language
- Fake news, deepfakes, misinformation and disinformation
- Detection of machine generated language in multimodal context (text, speech and gesture)
- Trust and credibility of online information
- User Security and Safety
- Cyberbullying and identification of internet offenders
- Monitoring extremist fora
- Suicide prevention
- Clickbait and scam detection
- Fake profile detection in online social networks
- Technical Measures and Solutions
- Social engineering identification, phishing detection
- NLP for risk assessment
- Controlled languages for safe messages
- Prevention of malicious use of AI models
- Forensic linguistics
- Human Factors in Cyber Security
- Speech Technology and Multimodal Investigations for Cyber Security
- Voice-based security: Analysis of voice recordings or transcripts for security threats
- Detection of machine generated language in multimodal context (text, speech and gesture)
- NLP and biometrics in multimodal context
- Data and Software Security
- Cryptography
- Digital forensics
- Malware detection, obfuscation
- Models for documentation
- NLP for data privacy and leakage prevention (DLP)
- Addressing dataset “poisoning” attacks
- Human-Centric Security and Support
- Natural language understanding for chatbots: NLP-powered chatbots for user support and security incident reporting
- User behaviour analysis: analysing user-generated text data (e.g., chat logs and emails) to detect insider threats or unusual behaviour
- Human supervision of technology for Cyber Security
- Anomaly Detection and Threat Intelligence
- Text-Based Anomaly Detection
- Identification of unusual or suspicious patterns in logs, incident reports or other textual data
- Detecting deviations from normal behaviour in system logs or network traffic
- Threat Intelligence Analysis
- Processing and analysing threat intelligence reports, news, articles and blogs on latest Cyber Security threats
- Extracting key information and indicators of compromise (IoCs) from unstructured text (a toy extraction sketch follows this topic list)
- Systems and Infrastructure Security
- Systems Security
- Anti-reverse engineering for protecting privacy and anonymity
- Identification and mitigation of side-channel attacks
- Authentication and access control
- Enterprise-level mitigation
- NLP for software vulnerability detection
- Malware Detection through Code Analysis
- Analysing code and scripts for malware
- Detection using NLP to identify patterns indicative of malicious code
- Financial Cyber Security
- Financial fraud detection
- Financial risk detection
- Algorithmic trading security
- Secure online banking
- Risk management in finance
- Financial text analytics
- Ethics, Bias, and Legislation in Cyber Security
- Ethical and Legal Issues
- Digital privacy and identity management
- The ethics of NLP and speech technology
- Explainability of NLP and speech technology tools
- Legislation against malicious use of AI
- Regulatory issues
- Bias and Security
- Bias in Large Language Models (LLMs)
- Bias in security related datasets and annotations
- Datasets and Resources for Cyber Security Applications
- Specialised Security Applications and Open Topics
- Intelligence applications
- Emerging and innovative applications in Cyber Security
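Here is the toy sketch referenced in the IoC item above: a deliberately
simplified, regex-based illustration of extracting indicators of
compromise from free text. It is our example, not an NLPAICS artifact;
real IoC extractors cover many more types and handle obfuscations such
as defanged URLs.

    import re

    # Toy patterns for three common IoC types; real systems use far more.
    IOC_PATTERNS = {
        "ipv4": r"\b(?:\d{1,3}\.){3}\d{1,3}\b",
        "md5": r"\b[a-fA-F0-9]{32}\b",
        "domain": r"\b[a-z0-9-]+\.(?:com|net|org|io)\b",
    }

    def extract_iocs(text):
        # Return every match for each IoC type found in the text.
        return {name: re.findall(pat, text)
                for name, pat in IOC_PATTERNS.items()}

    report = ("Beacon to evil-c2.com from 10.0.0.5; payload hash "
              "d41d8cd98f00b204e9800998ecf8427e")
    print(extract_iocs(report))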
Special Theme Track - Future of Cyber Security in the Era of LLMs and Generative AI
NLPAICS 2026 will feature a special theme track with the goal of stimulating discussion around Large Language Models (LLMs), Generative AI and ensuring their safety. The latest generation of LLMs, such as ChatGPT, Gemini, DeepSeek, Llama and open-source alternatives, has showcased remarkable advancements in text and image understanding and generation. However, as we navigate through uncharted territory, it becomes imperative to address the challenges associated with employing these models in everyday tasks, focusing on aspects such as fairness, ethics, and responsibility. The theme track invites studies on how to ensure the safety of LLMs in various tasks and applications and what this means for the future of the field. Possible topics of discussion include (but are not limited to) the following:
• Detection of LLM-generated language in multimodal context (text, speech and gesture)
• LLMs for forensic linguistics
• Bias in LLMs
• Safety benchmarks for LLMs
• Legislation against malicious use of LLMs
• Tools to evaluate safety in LLMs
• Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two forms:
• Regular long papers: These can be up to eight (8) pages long, presenting substantial, original, completed, and unpublished work.
• Short (poster) papers: These can be up to four (4) pages long and are suitable for describing small, focused contributions, ongoing research, negative results, system demonstrations, etc. Short papers will be presented as part of a poster session.
Abstract-only submissions will not be considered or evaluated.
Accepted papers, both long and short, will be published as e-proceedings with an ISBN, available online on the conference website at the time of the conference, and are expected to be uploaded to the ACL Anthology.
To prepare your submission, please make sure to use the NLPAICS 2026 style files available here:
LaTeX - https://www.overleaf.com/read/sgwmrzbmjfhc#aeea77
Word - https://nlpaics2026.gplsi.es/wp-content/uploads/2025/11/NLPAICS2026_Proceed…
Papers should be submitted through Softconf/START using the following link: https://softconf.com/p/nlpaics2026/user/
The conference will feature a student workshop, and awards will be offered to the authors of the best papers.
Important dates
• Submissions due: 16 March 2026
• Reviewing process: 1 April – 30 April 2026
• Notification of acceptance: 5 May 2026
• Camera-ready due: 19 May 2026
• Conference camera-ready proceedings ready: 1 June 2026
• Conference: 11-12 June 2026
Organisation
Conference Chairs
Ruslan Mitkov (University of Alicante)
Rafael Muñoz (University of Alicante)
Programme Committee Chairs
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Publication Chair
Ernesto Estevanell (University of Alicante)
Sponsorship Chair
Andres Montoyo (University of Alicante)
Student Workshop Chair
Salima Lamsiyah (University of Luxembourg)
Best Paper Award Chair
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Publicity Chair
Beatriz Botella (University of Alicante)
Social Programme Chair
Alba Bonet (University of Alicante)
Venue
The Second International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2026) will take place at the University of Alicante and is organised by the University of Alicante GPLSI research group.
Further information and contact details
The follow-up calls will list keynote speakers and members of the programme committee once confirmed.
The conference website is https://nlpaics2026.gplsi.es/ and will be updated on a regular basis. For further information, please email nlpaics2026@dlsi.ua.es
Registration will open in February 2026.
Best Regards
Tharindu Ranasinghe
🎓 We are happy to remind you of the next webinar in the CIRCE online
seminar series, organized by the CIRCE <https://www.circe-project.eu/>
project in collaboration with DFCLAM, University of Siena
<https://www.dfclam.unisi.it/en>, the H2IOSC <https://www.h2iosc.cnr.it/>
project and CNR-ILC <https://www.ilc.cnr.it/en/>.
*Dr. Samantha Jackson*
/University of Toronto, Canada/
*/Biased ears: Investigating and reducing accent discrimination in
hiring evaluations/*
📅 *November 24, 2025*
🕓 *4:30 PM – 5:30 PM (CET)*
*Venue*: Online
*Attendees*: Researchers, secondary school teachers, language instructors
*Summary: *As international migration continues to increase (IOM 2024),
the last few years have seen a growth in nationalist politics (Bieber
2018) and anti-immigrant sentiments, despite economic reliance on
immigrants in many Global North countries. Immigrants bring not only
their skills and experience, but also their multifaceted identities,
which are partly reflected in their accents. A listener’s language
attitudes often reflect broader social attitudes (Lippi-Green 2012),
influencing their valuation of their interlocutor’s words. In the
workplace, such biases can influence decisions in hiring, promotion and
conflict resolution, creating pressures for immigrants to assimilate
linguistically. In the current climate, it is therefore important that
equitable workplace practices are engrained. This talk uses Canada as a
case study to explore accent discrimination against immigrants in the
job interview, a key “gatekeeping encounter” (Erickson 1976). Canada’s
laws promote mutual acceptance of all cultures and identities. The
project involved 96 Human Resources students trained in recruitment and
selection, who evaluated scripted interview responses from women born
and raised in Canada, China, England, Germany, India, Jamaica, and
Nigeria. Participants were unaware the responses were scripted. A mixed
methods analysis of scores and comments indicates significant bias
against non-Canadian accents. The standard Canadian accent was
associated with greater competence, comprehensibility, desirability and
aesthetic appeal. The talk concludes with an introduction to a new
cross-disciplinary project aimed at mitigating the biases that were
uncovered.
*Bio: *Samantha Jackson is an Assistant Professor, Teaching Stream, in
the Department of Linguistics and the Graduate Centre for Academic
Communication. Her research interests stem from the need to address
societal issues such as accent discrimination, bias in large language
models and under-documented language acquisition norms. Her pedagogical
interests focus on student engagement and research-informed teaching.
Register at the seminar registration page:
https://events.teams.microsoft.com/event/5322572e-6caf-42f4-826b-24e9a13fa9…
Make sure to have the Teams platform installed.
Upcoming webinars:
- Julia Swan (Monday, 15 December 2025)
The recording of the last CIRCE seminar by Onur Özkaynak is now
available on the H2IOSC Training Environment
<https://h2iosc-training-platform.ilc4clarin.ilc.cnr.it/en/login>. Once
logged in with your credentials, choose the course “Language and Accent
Discrimination - Online Seminar Series” and activate it with the code
PbK837GtE. For any inquiry, write to contact@circe-project.eu.
All the best,
Claudia Soria
CIRCE Project