Dear all,
We are organizing a workshop co-located with LREC 2026 on Identity-Aware
NLP. The details are as follows:
=====================================================================
SECOND CALL FOR PAPERS
Ethical and Technical Challenges for Identity-Aware NLP
Workshop at LREC 2026, Palma de Mallorca, Spain, May 11-16, 2026
https://identity-aware-ai.github.io/
=====================================================================
*Workshop Theme:* What makes each of us unique, and which ethical and
technical challenges does this imply?
*OVERVIEW*
What makes us unique? Language (and thus its automatic processing)
is about people and what they mean. However, current practice relies on
the assumptions that the humans involved are all the same, and that,
given enough data (and compute power), the resulting generalizations
will be robust enough and will represent the majority.
This approach often harms marginalized communities and ignores the
notion of identity in models and systems. Our interdisciplinary workshop
aims to raise the question of "what makes each of us unique?" to the NLP
community.
*WORKSHOP GOALS*
- The development of a shared and interdisciplinary understanding of
identities and how identity is treated in AI
- The development of new methods that push the effective, fair, and
inclusive treatment of individuals in AI to the next level
*TOPICS OF INTEREST*
We invite submissions on the following topics:
*Modeling subjective phenomena and disagreement:* Personalization and
perspectivist methods that challenge one-size-fits-all approaches by
leveraging disaggregated data and annotator metadata. Methods that learn
from disagreements rather than forcing consensus that erases unique
perspectives.
*Auditing and evaluating identity representation:* Techniques to measure
how well models represent diverse identities, diagnose failures in
capturing marginalized perspectives, and assess whether systems treat
all identities equitably. Frameworks for identity-aware performance
evaluation beyond aggregate metrics.
*Bias detection and fairness interventions:* Methods to identify when
models fail marginalized groups due to over-generalization, and
techniques to mitigate such harms while preserving model utility.
*Identity representation in LLMs:* How language models encode (or erase)
diverse identities, embody particular perspectives, and either reproduce
or challenge stereotypes. Measuring LLMs' capacity for reasoning about
identities beyond majority groups.
*Socio-political applications:* Modeling polarization, opinion
formation, and deliberation in ways that account for identity rather
than assuming homogeneous populations. How identity-aware approaches
improve accuracy for politically sensitive tasks.
*Methodological foundations from social sciences:* Best practices from
psychology and survey science for measuring identity constructs (values,
morals, narratives). Addressing challenges of using LLMs to model
diverse populations while avoiding erasure through aggregation.
*Accountability and responsible development:* Ethical responsibilities
when building systems that represent (or exclude) identities. Making AI
development processes accountable to marginalized communities most
affected by over-generalization.
*Identity-aware and community-informed evaluation and auditing:*
Community-informed bias evaluation and auditing. Human evaluation of
LLMs and other AI systems in an identity-aware manner.
*SUBMISSION TYPES*
We welcome the following types of submissions:
* Long papers: 4-8 pages of content (excluding references)
* Short papers: 4-8 pages of content (excluding references)
* Non-archival submissions, student project presentations, mixed-media
submissions
For non-archival submissions, we welcome creative formats including:
- Art, poetry, music
- Blog posts
- Jupyter notebooks
- Teaching materials
- Videos
- Findings papers
- Late-breaking papers
- Extended abstracts
For creative format submissions, please submit a PDF containing:
- A summary or abstract of your work
- A link to your work (if hosted externally)
- Any additional context or documentation
*SUBMISSION GUIDELINES*
* All submissions will be double-blind reviewed
* Submissions should follow LREC 2026 formatting guidelines available
at: https://lrec2026.info/authors-kit/
* Papers must be 4-8 pages in length (excluding references)
* Papers must include ethics and limitations sections
* No appendices are allowed in the initial submission; the camera-ready
version may be up to 10 pages
* Originality and simultaneous submissions: submissions must be
original, previously unpublished work. If a paper is submitted to or
under consideration at another venue at the same time, this must be
declared at submission time. If accepted here, it must be withdrawn from
other venues; if accepted elsewhere while under review here, please
notify us promptly.
* Preprints: there is no anonymity period at LREC 2026, so authors may
post preprints at any time; however, the version submitted for review
must still be anonymized
* Language resources (optional): at submission time, authors may share
related language resources with the community; repository entries are
linked to the LRE Map and provide metadata for the resource
* Submission site: https://softconf.com/lrec2026/IdentityAwareAI
* Proceedings and presentation: accepted papers will appear in the
workshop proceedings. All accepted papers will be presented as posters.
For remote participants, we will also organize a lightning round of
short virtual presentations to accompany the posters.
*WORKSHOP FORMAT*
The workshop will be a half-day event featuring:
- Keynote speeches from leading experts in the field
- Paper presentations (oral and lightning talks)
- Participatory design activity to develop a shared interdisciplinary
vocabulary, identify current gaps in datasets for studying identity, and
design a vision for collecting new datasets
We are committed to ensuring that our workshop is accessible to all. The
workshop will be held in a hybrid format, allowing both in-person and
virtual participation.
*IMPORTANT DATES*
All deadlines are 11:59 PM AoE (Anywhere on Earth)
* Submission Deadline: February 20, 2026
* Notification of Acceptance: March 20, 2026
* Camera-Ready Deadline: March 30, 2026
* Workshop Date: May 16, 2026
*DIVERSITY & INCLUSION
*
We actively encourage submissions from underrepresented communities and
countries. The workshop organizers will provide mentorship and thorough
feedback, especially to first-time authors and reviewers.
*ORGANIZERS*
Pranav A (University of Hamburg)
Valerio Basile (University of Turin)
Neele Falk (University of Stuttgart)
David Jurgens (University of Michigan)
Gabriella Lapesa (GESIS, Leibniz Institute for the Social Sciences &
Heinrich-Heine University of Düsseldorf)
Anne Lauscher (University of Hamburg)
Soda Marem Lo (University of Turin)
*CONTACT*
For queries, please contact: identity-aware-ai@googlegroups.com
Join us at Identity-Aware AI 2026 to contribute to this important
conversation!
Apologies for cross-posting.
---------------------------------------------------------------------------
*SIGUL 2026 Joint Workshop with ELE, EURALI, and DCLRL*
*Towards Inclusivity and Equality: Language Resources and Technologies for
Under-Resourced and Endangered Languages*
*https://sites.google.com/view/sigul2026/home-page*
------------------------------------
We are pleased to announce the upcoming SIGUL 2026 Joint Workshop with ELE,
EURALI, and DCLRL on Towards Inclusivity and Equality: Language Resources
and Technologies for Under-Resourced and Endangered Languages
(https://sites.google.com/view/sigul2026/home-page), co-located with *LREC
2026* in Palma, Mallorca, Spain. This workshop brings together researchers
working on less-resourced, endangered, minority, low-density, and
underrepresented languages to share novel techniques, resources,
strategies, and evaluation methods. We emphasize the entire pipeline: data
creation, modeling, adaptation/transfer, system development, evaluation,
deployment, and ethical/community engagement.
We invite contributions on, but not limited to, the following topics:
- Data collection, annotation, and curation for under-resourced languages
(crowdsourcing, participatory methods, gamification, unsupervised or weakly
supervised methods)
- Learning with limited supervision (zero- or few-shot, PEFT, RAG with
linguistic resources)
- Multilingual alignment, representation learning, and language embeddings,
including rare languages
- Speech, multimodal, and cross-modal technologies for under-resourced
languages (speech recognition, synthesis, speech-to-text, speech
translation, multimodal resources)
- Basic text processing (normalization, orthography, transliteration,
tokenization/segmentation, morphological and syntactic processing) in and
for low-resource settings
- Low-resource machine translation (pivoting, alignment, synthetic data)
- Evaluation frameworks, benchmarks, and metrics designed or adapted for
underrepresented languages
- Adaptation, domain adaptation, and robustness to domain shift in
low-resource contexts
- Responsible approaches, ethical issues, community engagement, data
sovereignty, and language revitalization
- Deployment, tools, and practical systems for underserved languages
(e.g., mobile apps, dictionary or translation apps, linguistic tools)
- Case studies of success and negative results (with lessons learned)
- Interoperability, standardization, and metadata practices for datasets
in low-resource scenarios
Special Themes
Language modeling for intra-language variation, dialects, accents, and
regional variants of less-resourced languages
Many less-resourced languages display rich internal diversity, including
dialects, accents, and regional or social varieties. This special theme
focuses on developing language models and speech technologies that capture
and respect intra-language variation rather than reduce it to a single
“standard.” We welcome work on dialect identification and adaptation,
accent-robust speech systems, normalization vs. diversity-preserving
modeling, and cross-dialect transfer in low-data scenarios. Approaches
combining linguistic insights, community participation, and ethical
awareness are especially encouraged. The aim is to build technologies that
reflect and sustain the true linguistic richness of under-resourced
languages.
Ultra-Low-Resource Language Adaptation
This special theme focuses on methods that enable effective language and
speech technology development under extreme data scarcity. We invite
research on transfer learning, cross-lingual adaptation, multilingual
pretraining, and self-supervised or few-shot approaches tailored to
ultra-low-resource settings. Work on evaluation, data augmentation
(including synthetic data), and leveraging typological or linguistic
knowledge is also welcome. The goal is to advance techniques that extend
modern language technologies to the most underrepresented languages,
ensuring inclusivity in the digital age.
Community-Led Project Showcase
To help ground research in community needs, we invite brief (5–10 min)
presentations by language community members, NGOs, or practitioners
describing real-world challenges or resource needs. Position papers or
research posters are appropriate formats for this category.
Important Dates
Paper Submission Deadline: February 20 (Friday), 2026
Notification of Acceptance: March 22 (Sunday), 2026
Submission of Camera-Ready: March 30 (Monday), 2026
Workshop Date: 11-12 May 2026
All deadlines are anywhere-on-earth (AoE).
Call for Papers
We welcome original research papers and ongoing work relevant to the topics
of the workshop. Each submission can be one of the following categories:
- research papers;
- position papers for reflective considerations of methodological, best
practice, and institutional issues (e.g., ethics, data ownership, speakers'
community involvement, de-colonizing approaches);
- posters, for work-in-progress projects in an early stage of development
or descriptions of new resources;
- demo papers and early-career/student papers (to be submitted as extended
abstracts and presented as posters).
The research and position papers should range from four (4) to eight (8)
pages, while demo papers are limited to four (4) pages. References don't
count towards page limits. Accepted papers will appear in the workshop
proceedings, which include both oral and poster papers in the same format.
Determination of the presentation format (oral vs. poster) is based solely
on an assessment of the optimal method of communication (more or less
interactive), given the paper content.
Submissions must be anonymous and follow LREC formatting guidelines
<https://lrec2026.info/authors-kit/>.
For inquiries, send an email to claudia.soria@cnr.it.
Identify, Describe and Share your LRs!
When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense, i.e. also
technologies, standards, evaluation kits, etc.) that have been used for the
work described in the paper or are a new result of their research. Moreover,
ELRA encourages all LREC authors to share the described LRs (data, tools,
services, etc.) to enable their reuse and replicability of experiments
(including evaluation ones).
Thanks,
Atul
The next meeting of the Edge Hill Corpus Research Group will take place online (via MS Teams) on Friday 6 February 2026, 2:00-3:30 pm (GMT).
Topic: Discourse Oriented Corpus Studies
Speaker: Dan Malone (https://www.researchgate.net/profile/Daniel-Malone), Edge Hill University, UK
Title: From Global Uncertainty to Domestic Danger: The lone wolf terrorist as a topos of threat in (poly)crisis discourses
The abstract and registration link are here: https://sites.edgehill.ac.uk/crg/next
Attendance is free. Registration closes on Wednesday 4 February.
If you have problems registering, or have any questions, please email the organiser, Costas Gabrielatos (gabrielc@edgehill.ac.uk).
*** First Call for Replication and Negative Results ***
37th IEEE International Symposium on Software Reliability Engineering
(ISSRE 2026)
October 20-23, 2026, 5* St. Raphael Resort and Marina
Limassol, Cyprus
https://cyprusconferences.org/issre2026/
The Replications and Negative Results (RENE) Track has been established in the software
engineering community for some time and has received overwhelmingly positive feedback.
This year, we introduce this track at ISSRE and invite researchers to (1) replicate results
from previous papers and (2) publish studies with important and relevant negative or null
results (results that fail to show an effect, yet demonstrate the research paths that did not
pay off).
We also encourage the publication of the negative results or replicable aspects of
previously published work. For example, authors of a published paper reporting a working
solution for a given problem can document in a “negative results paper” other (failed)
attempts they made before defining the working solution they published.
• Replication studies. The papers in this category must go beyond simply
re-implementing an algorithm and/or re-running the artifacts provided by the original paper.
Such submissions should at least apply the approach to new data sets (open-source or
proprietary). A replication study should clearly report on results that the authors were
able to replicate, as well as on the aspects of the work that were not replicable.
• Negative results papers. We seek papers that report on negative results from any type
of empirical research in software reliability engineering (qualitative, quantitative, case
study, experiment, etc.). For example, did your controlled experiment not show an
improvement over the baseline? Even if negative, the results obtained are still valuable
when they are either not obvious or disprove widely accepted wisdom.
Evaluation Criteria
Both Replication Studies and Negative Results submissions will be evaluated according to
the following standards:
• Depth and breadth of the empirical studies
• Clarity of writing
• Appropriateness of conclusions
• Amount of useful, actionable insights
• Availability of artifacts
• Underlying methodological rigor. A negative result due primarily to misaligned
expectations or due to lack of statistical power (small samples) is not a good submission.
The negative result should be a result of a lack of effect, not a lack of methodological
rigor.
Most importantly, we expect replication studies to clearly point out the artifacts upon
which the study is built, and to provide the links to all the artifacts in the submission (the
only exception will be given to those papers that replicate the results on proprietary
datasets that cannot be publicly released).
Submission Instructions
Submissions must be original, in the sense that the findings and writing have not been
previously published and are not under consideration elsewhere. However, as either
replication studies or negative results, some overlap with previous work is expected.
Please make clear in the paper the overlap with, and difference from, previous work.
All submissions must be in PDF format and conform, at time of submission, to the IEEE
Computer Society Format Guidelines:
(https://www.ieee.org/conferences/publishing/templates).
Authors are strongly encouraged to print the PDF and review it for integrity (fonts,
symbols, equations, etc.) before submission, as defective printing can undermine a
paper’s chance of success. By submitting to the ISSRE RENE Track, authors acknowledge
that they are aware of and agree to be bound by the IEEE Plagiarism FAQ. In particular,
papers submitted to the RENE track must not have been published elsewhere and must not
be under review or submitted for review elsewhere whilst under consideration for ISSRE
2026. Contravention of this concurrent submission policy will be deemed a serious breach
of scientific ethics, and appropriate action will be taken in all such cases. To check for
double submission and plagiarism issues, the chairs reserve the right to (1) share the list
of submissions with the PC Chairs of other conferences with overlapping review periods
and (2) use external plagiarism detection software, under contract to the IEEE, to detect
violations of these policies.
Submissions to the RENE Track can be made via the ISSRE RENE track submission site:
https://easychair.org/conferences?conf=issre2026 .
Submission Length: The ISSRE RENE Track accepts submissions of two lengths:
(1) New replication studies and new descriptions of negative results should have a length
of up to 10 pages, plus 2 pages which may only contain references.
(2) Negative results documented during the preparation of previously published work by
the authors should be described in up to 5 pages, plus 1 page, which may only contain
references (e.g., as previously mentioned, authors of a published paper can document
negative results they obtained while working on it, such as methodologically sound
solutions that did not work).
Important note 1: Both types of papers (replication and negative results) will be included
as part of the main conference proceedings.
Important note 2: The RENE track does not follow a double-anonymous review process.
Publication and Presentation
Upon notification of acceptance, all authors of accepted papers will receive further
instructions for preparing the camera-ready versions of their submissions. If a submission
is accepted, at least one author of the paper is required to have a full registration for ISSRE
2026, attend the conference, and present the paper in person. All accepted papers will be
published in the conference electronic proceedings. The presentation is expected to be
delivered in person, unless this is impossible due to travel limitations (e.g., related to
health or visa). Details about the presentations will follow the notifications.
The official publication date is the date the proceedings are made available in the IEEE
Digital Libraries. The official publication date affects the deadline for any patent filings
related to published work.
Purchases of additional pages in the proceedings are not allowed.
Important Dates (AoE)
• Submission deadline: July 5, 2026
• Notification of acceptance: August 12, 2026
• Camera-ready copy submission: August 19, 2026
• Author registration deadline: August 19, 2026
Organisation
General Chairs
• Leonardo Mariani, University of Milano - Bicocca, Italy
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Coordinator
• Roberto Natella, GSSI, Italy
Research Program Committee Chairs
• Domenico Cotroneo, UNC Charlotte, USA
• Jie M. Zhang, King's College London, UK
Industry Program Chairs
• Jinyang Liu, Bytedance, USA
• Sigrid Eldh, Ericsson AB, Sweden
Workshop Chairs
• Georgia Kapitsaki, University of Cyprus, Cyprus
• August Shi, The University of Texas at Austin, USA
Doctoral Symposium Chairs
• Stefan Winter, LMU Munich, Germany
• Lili Wei, McGill University, Canada
Fast Abstract Chairs
• Luigi Lavazza, University of Insubria, Italy
• Yintong Huo, SMU, Singapore
JIC2 Chair
• Helene Waeselynck, LAAS-CNRS, France
Publicity Chairs
• Allison K. Sullivan, The University of Texas at Arlington, USA
• Jose D'Abruzzo Pereira, University of Coimbra, Portugal
Publication Chairs
• Sherlock Licorish, Otago Business School, New Zealand
• Maria Teresa Rossi, GSSI, Italy
Artifact Evaluation Chairs
• Naghmeh Ivaki, University of Coimbra, Portugal
• Fumio Machida, University of Tsukuba, Japan
Diversity and Inclusion Chair
• Eleni Constantinou, University of Cyprus, Cyprus
Financial Chair
• Costas Pattichis, University of Cyprus, Cyprus
Web Chairs
• Michalis Ioannides, Easy Conferences LTD
• Elena Masserini, University of Milano - Bicocca, Italy
Registration Chair
• Easy Conferences LTD
We invite submissions to PoliticalNLP 2026, the 3rd Workshop on Natural Language Processing for Political Sciences, co-located with LREC 2026. The workshop will take place in Palma de Mallorca, Spain, at the Palau de Congressos de Palma.
Theme for 2026
Trust, Transparency and Generative AI in Political Discourse Analysis
Large language models and generative AI are increasingly shaping political communication, public opinion, and democratic processes. PoliticalNLP 2026 provides an interdisciplinary forum to examine these developments critically and responsibly, at the intersection of NLP, political science, law, and the social sciences.
Topics of interest include, but are not limited to
• Trustworthy, explainable, and fair NLP for political data
• Bias, misinformation, and ethical risks of LLMs
• Multilingual and cross-cultural political NLP
• Generative AI for policy analysis and deliberative democracy
• Reproducibility, transparency, and responsible AI practices
• Datasets, tools, and resources for political and civic technologies
Important dates
• Paper submission (long and short): 16 February 2026
• Notification: 11 March 2026
• Camera ready: 30 March 2026
• Workshop: 11 to 12 May 2026, or 16 May 2026 (final date to be confirmed by LREC)
Proceedings
Accepted papers will appear in the LREC 2026 Workshop Proceedings.
Submission and CFP
Full Call for Papers and details: https://sites.google.com/view/politicalnlp2026
Submission is electronic via the Softconf START system: https://softconf.com/lrec2026/PoliticalNLP2026/
Best regards,
PoliticalNLP 2026 Organizer
--
Wajdi Zaghouani, Ph.D.
Associate Professor in Residence,
Communication Program
Northwestern Qatar | Education City
T +974 4454 5232 | M +974 3345 4992
Second International Conference on Natural Language Processing
and Artificial Intelligence for Cyber Security
(NLPAICS'2026)
University of Alicante, Alicante, Spain
11 and 12 June 2026
https://nlpaics2026.gplsi.es/
Third Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning, and
Large Language Models (LLMs) have led to improved performance across many
applications. In particular, there has been growing interest in
employing AI methods in different Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened
priority for both individual users and organisations. As the volume of
online information grows exponentially, traditional security approaches
often struggle to identify and prevent evolving security threats. The
inadequacy of conventional security frameworks highlights the need for
innovative solutions that can effectively navigate the complex digital
landscape to ensure robust security. NLP and AI in Cyber Security have
vast potential to significantly enhance threat detection and mitigation
by fostering the development of advanced security systems for autonomous
identification, assessment, and response to security threats in real
time. Recognising this challenge and the capabilities of NLP and AI
approaches to fortify Cyber Security systems, the Second International
Conference on Natural Language Processing (NLP) and Artificial
Intelligence (AI) for Cyber Security (NLPAICS'2026) continues the
tradition of NLPAICS'2024 as a gathering place for researchers in
NLP and AI methods for Cyber Security. We invite contributions that
present the latest NLP and AI solutions for mitigating risks in
processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to
the employment of NLP and AI (and in general, language studies and
models) for Cyber Security, including but not limited to:
_Societal and Human Security and Safety_
* Content Legitimacy and Quality
* Detection and mitigation of hate speech and offensive language
* Fake news, deepfakes, misinformation and disinformation
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* Trust and credibility of online information
* User Security and Safety
* Cyberbullying and identification of internet offenders
* Monitoring extremist fora
* Suicide prevention
* Clickbait and scam detection
* Fake profile detection in online social networks
* Technical Measures and Solutions
* Social engineering identification, phishing detection
* NLP for risk assessment
* Controlled languages for safe messages
* Prevention of malicious use of AI models
* Forensic linguistics
* Human Factors in Cyber Security
_Speech Technology and Multimodal Investigations for Cyber Security_
* Voice-based security: Analysis of voice recordings or transcripts
for security threats
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* NLP and biometrics in multimodal context
_Data and Software Security_
* Cryptography
* Digital forensics
* Malware detection, obfuscation
* Models for documentation
* NLP for data privacy and leakage prevention (DLP)
* Addressing dataset "poisoning" attacks
_Human-Centric Security and Support_
* Natural language understanding for chatbots: NLP-powered chatbots
for user support and security incident reporting
* User behaviour analysis: analysing user-generated text data (e.g.,
chat logs and emails) to detect insider threats or unusual behaviour
* Human supervision of technology for Cyber Security
_Anomaly Detection and Threat Intelligence_
* Text-Based Anomaly Detection
* Identification of unusual or suspicious patterns in logs, incident
reports or other textual data
* Detecting deviations from normal behaviour in system logs or network
traffic
* Threat Intelligence Analysis
* Processing and analysing threat intelligence reports, news, articles
and blogs on latest Cyber Security threats
* Extracting key information and indicators of compromise (IoCs) from
unstructured text
_Systems and Infrastructure Security_
* Systems Security
* Anti-reverse engineering for protecting privacy and anonymity
* Identification and mitigation of side-channel attacks
* Authentication and access control
* Enterprise-level mitigation
* NLP for software vulnerability detection
* Malware Detection through Code Analysis
* Analysing code and scripts for malware
* Detection using NLP to identify patterns indicative of malicious
code
_Financial Cyber Security_
* Financial fraud detection
* Financial risk detection
* Algorithmic trading security
* Secure online banking
* Risk management in finance
* Financial text analytics
_Ethics, Bias, and Legislation in Cyber Security_
* Ethical and Legal Issues
* Digital privacy and identity management
* The ethics of NLP and speech technology
* Explainability of NLP and speech technology tools
* Legislation against malicious use of AI
* Regulatory issues
* Bias and Security
* Bias in Large Language Models (LLMs)
* Bias in security related datasets and annotations
_Datasets and resources for Cyber Security Applications_
_Specialised Security Applications and Open Topics_
* Intelligence applications
* Emerging and innovative applications in Cyber Security
_Special Theme Track - Future of Cyber Security in the Era of LLMs and
Generative AI_
NLPAICS 2026 will feature a special theme track with the goal of
stimulating discussion around Large Language Models (LLMs), Generative
AI and ensuring their safety. The latest generation of LLMs, such as
ChatGPT, Gemini, DeepSeek, LLaMA, and open-source alternatives, has
showcased remarkable advancements in text and image understanding and
generation. However, as we navigate through uncharted territory, it
becomes imperative to address the challenges associated with employing
these models in everyday tasks, focusing on aspects such as fairness,
ethics, and responsibility. The theme track invites studies on how to
ensure the safety of LLMs in various tasks and applications and what
this means for the future of the field. The possible topics of
discussion include (but are not limited to) the following:
* Detection of LLM-generated language in multimodal context (text,
speech and gesture)
* LLMs for forensic linguistics
* Bias in LLMs
* Safety benchmarks for LLMs
* Legislation against malicious use of LLMs
* Tools to evaluate safety in LLMs
* Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two
forms:
* Regular long papers: These can be up to eight (8) pages long,
presenting substantial, original, completed, and unpublished work.
* Short (poster) papers: These can be up to four (4) pages long and
are suitable for describing small, focused contributions, ongoing
research, negative results, system demonstrations, etc. Short papers
will be presented as part of a poster session.
The conference will not consider abstract-only submissions.
Accepted papers, both long and short, will be published as e-proceedings
with an ISBN, which will be available online on the conference website at
the time of the conference and are expected to be uploaded to the ACL
Anthology.
To prepare your submission, please make sure to use the NLPAICS 2026
style files available here:
LaTeX: NLPAICS_2026_LaTeX.zip [1]
Overleaf: https://www.overleaf.com/read/sgwmrzbmjfhc#aeea77
Word:
https://nlpaics2026.gplsi.es/wp-content/uploads/2025/11/NLPAICS2026_Proceed…
Papers should be submitted through Softconf/START using the following
link: https://softconf.com/p/nlpaics2026/user/
The conference will feature a student workshop, and awards will be
offered to the authors of the best papers.
Important dates
* Submissions due: 16 March 2026
* Reviewing process: 1 April - 30 April 2026
* Notification of acceptance: 5 May 2026
* Camera-ready due: 19 May 2026
* Camera-ready conference proceedings available: 1 June 2026
* Conference: 11-12 June 2026
Organisation
Conference Chairs
Ruslan Mitkov (University of Alicante)
Rafael Muñoz (University of Alicante)
Programme Committee Chairs
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Publication Chair
Ernesto Estevanell (University of Alicante)
Sponsorship Chair
Andres Montoyo (University of Alicante)
Student Workshop Chair
Salima Lamsiyah (University of Luxembourg)
Best Paper Award Chair
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Publicity Chair
Beatriz Botella (University of Alicante)
Social Programme Chair
Alba Bonet (University of Alicante)
Venue
The Second International Conference on Natural Language Processing and
Artificial Intelligence for Cyber Security (NLPAICS'2026) will take
place at the University of Alicante and is organised by the University
of Alicante GPLSI research group.
Further information and contact details
The follow-up calls will list keynote speakers and members of the
programme committee once confirmed. The conference website is
https://nlpaics2026.gplsi.es/ and will be updated on a regular basis.
For further information, please email nlpaics2026@dlsi.ua.es
Registration will open in March 2026.
Links:
------
[1] http://summer-school.gplsi.es/NLPAICS_2026_LaTeX.zip
The deadline for submitting abstracts to the Learner Corpus Research conference, to be held 16–19 September 2026 in Prague, Czech Republic (https://lcr2026.ff.cuni.cz), has been extended until 7 February 2026.
*** Call for Participation ***
The Annual ACM Conference on Intelligent User Interfaces (IUI 2026)
March 23-26, 2026, 5* Coral Beach Hotel & Resort, Paphos, Cyprus
https://iui.hosting.acm.org/2026/
(*** Early Registration Deadline: February 13, 2026 ***)
The 2026 ACM Conference on Intelligent User Interfaces (ACM IUI) is the annual premier
venue, where researchers and practitioners meet and discuss state-of-the-art advances
at the intersection of Artificial Intelligence (AI) and Human-Computer Interaction (HCI).
Ideal IUI submissions should address practical HCI challenges using machine intelligence
and discuss both computational and human-centric aspects of such methodologies,
techniques and systems.
This year we had a record number of submissions, so we have a record number of
accepted papers (114), a record number of posters and demos (53) and we hope for a
record number of participants.
Furthermore, we have 8 workshops and 5 tutorials.
Finally, the technical program will feature two keynotes: Antonio Krüger on the role of
HCI in trusted AI, and Pattie Maes on designing AI interaction for human flourishing.
The detailed program of IUI 2026 can be found on the conference website:
https://iui.acm.org/2026/program/ .
The early registration deadline is on February 13th and the registration page is:
https://iui.acm.org/2026/registration/
We are looking forward to meeting everybody in Paphos.
Organisation
General Chairs
• Tsvi Kuflik, The University of Haifa, Israel
• Styliani Kleanthous, Open University of Cyprus, Cyprus
Local Organising Chair
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Chairs
• Li Chen, Hong Kong Baptist University, China
• Giulio Jacucci, University of Helsinki, Finland
• Alison Renner, Dataminr, USA
Dear all,
I would like to draw your attention to the position announced below. We are searching for a postdoc or an advanced PhD student whose research interests align with the interdisciplinary focus of the project. The position requires:
- a background in NLP, including prior experience with prompting LLMs and data analysis,
- familiarity with key concepts in linguistics and formal pragmatics (e.g. presuppositions),
- enthusiasm for interdisciplinary research.
The position is funded for 20 months and runs until the end of 2027.
If you are interested, please submit a single PDF containing:
- a brief motivation letter outlining your research interests and connection to the APHIC project;
- a CV, including a publication list;
- contact information for one to two references.
Applications should be sent to Agnieszka Faleńska (agnieszka.falenska at ims.uni-stuttgart.de) by February 13, 2026. The position will remain open until filled, so please feel free to get in touch even if you come across this call after February 13.
Best regards,
Agnieszka Faleńska
———
Dear colleagues,
We invite applications for a Postdoctoral Researcher position in the interdisciplinary project "Authority Presuppositions in Human—AI Communication" (APHIC) at the University of Stuttgart. The project is led by Dr. Agnieszka Faleńska [1] and Prof. Judith Tonhauser [2] under the IRIS-HISIT initiative.
Project
APHIC is part of the "Human-Intelligent Systems Interaction and Teaming" (IRIS-HISIT) programme [3] funded by the Ministry of Science, Research, and the Arts of Baden-Württemberg [4]. The project investigates authority presuppositions—implicit assumptions that an AI system has the expertise to deliver high-stakes advice (e.g., medical or legal). The project aims to (1) develop a theoretically grounded taxonomy of authority presuppositions, and (2) design and evaluate conversational repair strategies that maintain trust and informativeness while ensuring user safety.
Position
• Duration: 20 months
• Earliest start: March 2026
• Salary: TV-L 13 (100%), see [5] for details
• Environment: The researcher will be embedded in the IRIS community [6] and collaborate closely with colleagues at the Institute for Natural Language Processing [7] and the Institute of Linguistics [8].
Candidate Profile
• PhD in computational linguistics or a related field
• Familiarity with key concepts in linguistics and formal pragmatics, e.g. presuppositions
• Experience in programming, prompting LLMs, and machine learning
• Excellent communication skills and enthusiasm for interdisciplinary research
• Proficiency in English (German not required)
How to Apply
Please submit one PDF containing:
• a brief motivation letter outlining your research interests,
• your CV with publication list
• contact information for one to two references.
Applications should be sent to: Agnieszka Faleńska (agnieszka.falenska at ims.uni-stuttgart.de). Applications received before 13 February 2026 will be given full consideration. The position will remain open until filled, so do not hesitate to get in touch if you find this opening after 13 February.
The University of Stuttgart would like to increase the proportion of women in the scientific field and is therefore particularly interested in applications from women. Severely disabled persons are given priority in the case of equal suitability.
University of Stuttgart
The University of Stuttgart is a technically oriented university in Germany. It is especially known for engineering and related topics, with its computer science department being ranked highly, both nationally and internationally.
The city of Stuttgart is the capital of the state of Baden-Württemberg in southwest Germany. It is a lively and international city, known for its strong economy and rich culture. With Germany's high-speed train system, it is well connected to many other interesting places, for instance, Munich and Cologne (~2.5 hours), Paris (~3.5 hours), Berlin (~5.5 hours), Strasbourg (<1.5 hours), and Lake Constance (~2.5 hours).
Links
[1] www.ims.uni-stuttgart.de/en/institute/team/Falenska
[2] www.ling.uni-stuttgart.de/en/institute/team/Tonhauser/
[3] www.iris.uni-stuttgart.de/research/human-intelligent-systems-interaction-an…
[4] mwk.baden-wuerttemberg.de/de/startseite
[5] oeffentlicher-dienst.info/c/t/rechner/tv-l/west?id=tv-l-2025
[6] www.iris.uni-stuttgart.de/
[7] www.ims.uni-stuttgart.de
[8] www.ling.uni-stuttgart.de
**** We apologize for the multiple copies of this email. If you are
already registered for the next webinar, you do not need to register
again. ****
-------------------------------------
Dear colleague,
We are happy to announce the next webinar in the Language Technology
webinar series organized by The HiTZ Chair of Artificial Intelligence
and Language Technology (https://hitz.eus/katedra). We are organizing
one seminar every month.
Next webinar:
Speaker: Henning Wachsmuth (Leibniz University Hannover)
Title: Toward Argumentative Large Language Models
Date: Thursday, February 5, 2026 - 15:00
Summary: Today's large language models (LLMs) are optimized toward
giving helpful answers in response to prompts. In many situations,
however, it may be preferable for an LLM to foster critical thinking
rather than just following an instruction. While recent LLMs are said to
'reason', they barely build on established reasoning concepts known from
argumentation theory. In this talk, I will give insights into recent
efforts of my group in making LLMs more argumentative. Starting from
basics of LLM training processes, I will present how to specialize LLMs
for argumentation tasks via instruction fine-tuning as well as how to
align the arguments they generate using reinforcement learning. From
there, I will give an outlook on how to improve the actual reasoning
capabilities of LLMs.
Bio: Henning Wachsmuth leads the Natural Language Processing Group at
the Institute of Artificial Intelligence of Leibniz University Hannover.
After receiving his PhD from Paderborn University in 2015, he worked as
a PostDoc at Bauhaus-Universität Weimar and as a junior professor in
Paderborn, before he became a full professor in Hannover in 2022. His
group does basic research on large language models for computational
argumentation, social bias detection and mitigation, as well as
explainable and educational NLP. Henning's main research interests
include the generation of audience-aware text, the assessment of
pragmatic text quality, and the modeling of bias and framing.
Registration: https://www.hitz.eus/webinar_izenematea
Upcoming webinars:
José Andrés González-López (March 5)
Ranjay Krishna (April 16)
Barbara Plank (May 7)
You can view the videos of previous webinars and the schedule for
upcoming webinars here: http://www.hitz.eus/webinars
If you cannot attend this seminar, but you want to be informed of the
following HiTZ webinars, please complete this registration form instead:
http://www.hitz.eus/webinar_info
Best wishes,
The HiTZ Chair of Artificial Intelligence and Language Technology
P.S.: HiTZ will not grant any type of certificate for attendance at these
webinars.