Call for Papers: ArgMining 2026 – Workshop on Argument Mining
The Workshop on Argument Mining (ArgMining) provides academic and industry researchers with a regular forum for presenting and discussing cutting-edge research in argument mining (a.k.a. argumentation mining). Continuing a series of twelve successful previous workshops, the 2026 edition welcomes submissions of long papers, short papers, extended abstracts, and PhD proposals.
Workshop Theme
The 2026 edition of ArgMining places a special focus on understanding and evaluating arguments in both human and machine reasoning. With this theme, we broaden the workshop’s scope to include reasoning—a long-standing area of AI research that has recently gained renewed interest within the ACL community, driven by the latest generation of large language models (LLMs).
Reasoning is tightly connected to argumentation, as it represents, analyzes, and evaluates the process of reaching conclusions based on available information. Viewing argumentation as a paradigm for capturing reasoning enables the evaluation of machines (particularly LLMs) based on their ability to address argument mining tasks.
Topics of Interest
Topics include, but are not limited to:
* Automatic extraction of textual patterns describing argument components in human and machine argumentation
* Cross-lingual, cross-cultural, and multi-perspective argument mining and reasoning
* Argument mining and generation from multimodal and/or multilingual data
* Explainability in argument mining through reasoning
* Modeling, assessing, and critically reflecting on the argumentation capabilities of LLMs
* Novel benchmarks in argument mining addressing recent developments in LLM reasoning
* Guidelines for assessing and documenting reasoning processes reflected in benchmarks
* Annotation guidelines, linguistic analysis, and argumentation corpora
* Real-world applications (e.g., social sciences, education, law, scientific writing; misinformation detection)
* Integration of commonsense and domain knowledge into argumentation models
* Combining information retrieval with argument mining (e.g., argumentative search engines)
* Ethical aspects and societal impact of argument mining and LLM reasoning
Submissions from all application areas are welcome.
Submission Types
The workshop accepts the following submission types:
* Long Papers (archival)
* Short Papers (archival)
* Extended Abstracts (non-archival)
* PhD Proposals (non-archival)
Accepted contributions will be presented as oral or poster presentations.
Archival Submissions
* Long papers:
* Substantial, original, completed, and unpublished work
* Up to 8 pages (excluding references)
* Unlimited references
* Up to 2 appendix pages
* 1 additional page in the final version to address reviewer comments
* Short papers:
* Original, unpublished work with a focused contribution
* Not shortened versions of long papers
* Up to 4 pages (excluding references)
* Unlimited references
* Up to 1 appendix page
* 1 additional page in the final version to address reviewer comments
Non-Archival Submissions
* Extended abstracts:
* Up to 2 pages including references
* 1 additional appendix page for tables/figures
* Selection based on workshop fit and the special theme
* Priority given to abstracts whose first authors are doctoral students unable to present at *CL conferences due to visa restrictions
* PhD proposals:
* Up to 4 pages
* Description of PhD project, research challenges, contributions, and future directions
* Presented in a dedicated poster session for feedback and discussion
Multiple Submissions Policy
ArgMining 2026 will not consider papers simultaneously under review elsewhere. Submissions overlapping significantly (>25%) with active ARR submissions will not be accepted. ARR-reviewed papers are allowed if reviews and meta-reviews are available by the ARR commitment deadline.
Submission Format
* Two-column ACL 2026 format
* LaTeX or Microsoft Word templates
* PDF submissions only
* Submissions via OpenReview
Important Dates
* Direct paper submission deadline (archival): March 5, 2026
* ARR commitment deadline (archival): March 24, 2026
* Direct paper submission deadline (non-archival): April 7, 2026
* Notification of acceptance: April 28, 2026
* Camera-ready deadline: May 12, 2026
* Workshop dates: July 2–3, 2026
Review Policy
Long and short papers will follow ACL double-blind review policies. Submissions must be anonymized, including self-references and links. Papers violating anonymity requirements will be rejected without review. Demo descriptions are exempt from anonymization.
Best Paper Award
ArgMining 2026 will present a Best Paper Award to recognize significant contributions to argument mining research. All accepted papers are eligible.
Contact and Information
Website: https://argminingorg.github.io/2026/
Email: argmining.org [at] gmail.com
Workshop Organizers
Mohamed Elaraby (University of Pittsburgh)
Annette Hautli-Janisz (University of Passau)
John Lawrence (University of Dundee)
Elena Musi (University of Liverpool)
Julia Romberg (GESIS Leibniz Institute for the Social Sciences)
Federico Ruggeri (University of Bologna)
CALL FOR PAPERS: THE 1ST WORKSHOP ON COMPUTATIONAL AFFECTIVE SCIENCE AT LREC 2026
Posted December 15, 2025 by vk22priya
Event Notification Type: Call for Papers
Abbreviated Title: First CfP: CAS Workshop@LREC 2026
Location: Palma de Mallorca, Mallorca, Spain
Contact: Christopher Bagdon <Christopher.Bagdon(a)uni-bamberg.de>, Krishnapriya Vishnubhotla <vkpriya(a)cs.toronto.edu>
Website: https://casworkshop.github.io/
Submission Deadline: Monday, 16 February 2026
First Call for Papers: The 1st Workshop on Computational Affective
Science (CAS 2026), co-located with the Language Resources and
Evaluation Conference (LREC) 2026 in Palma de Mallorca, Spain, May
11-16.
Website: https://casworkshop.github.io/
Contact: <workshop.cas1(a)gmail.com>
We invite submissions to the first Workshop on Computational Affective
Science (CAS 2026), co-located with LREC 2026, on research related to
the understanding of affect and emotions through language and
computation. CAS will accept archival long and short paper submissions,
featuring substantial, original, and unpublished research. We also
encourage submissions of extended abstracts from researchers in the
broader Affective Science community, with up to two pages of content
featuring the research background/hypotheses and a description of
methods/results. Extended abstracts are non-archival, leaving open the
option of later publication and presentation at other venues.
**Motivation**
Affect refers to the fundamental neural processes that generate and
regulate emotions, moods, and feeling states. Affect and emotions are
central to how we organize meaning, to our behavior, to our health and
well-being, and to our very survival. Despite this, and even though most
of us are intimately familiar with emotions in everyday life, there is
much we do not know about how emotions work and how they impact our
lives. Affective Science is a broad interdisciplinary field that
explores these and related questions about affect and emotions.
Since language is a powerful mechanism of emotion expression, there is a
growing use of language data and advanced natural language processing
(NLP) algorithms to shed light on fundamental questions about emotions.
The Workshop on Computational Affective Science (CAS) aims to be a
dedicated venue for work focused specifically on the link between NLP
and affective science.
**Interdisciplinary Scope**
The workshop takes an interdisciplinary approach to affective science
and aims at bringing together NLP researchers, scientists, and theorists
from many research areas, including psychology, sociology, neuroscience,
and philosophy. Although work in sentiment analysis is decades old, this
work often proceeds separately and in different fields from research and
theory in affective science. Meanwhile, affective scientists in
psychology, sociology, neuroscience and philosophy increasingly seek to
use linguistic tools to shed light on the nature of emotions, moods, and
feeling states. CAS is therefore co-organized by an interdisciplinary
group of researchers (spanning NLP and Affective Science) to foster
collaboration at this exciting frontier of research.
**Submissions**
We invite long and short archival paper submissions, as well as
non-archival extended abstracts on a broad range of topics at the
intersection of affective science and natural language processing,
including but not limited to:
1. The Nature of Affect and Computational Modeling of Emotions
Computational experiments that add to our understanding of affect and
emotions, including findings relevant to:
* theories and nature of emotion
* the biology or neuroscience of emotions
* appraisal models
* dimensional models (valence / arousal / dominance)
* models of constructed emotion
* cognitive-affective architectures
* emotion dynamics (emergence, intensification, decay, transitions)
* emotion granularity
* emotion regulation
* affective embodiment
* evolutionary and developmental affect
* emotion-cognition interactions
These areas are relevant not just to human affect, but may also apply to
animals and artificial agents.
2. Affective Data and Resources
Work on compiling and annotating affect-related information in text,
speech, facial and bodily expression, and physiological signals (ECG,
EEG, GSR, multimodal biosensing), with a focus on text data (monolingual
or multilingual) and multimodal data suitable for an NLP venue. Data
from underserved languages is especially encouraged.
3. Emotion Recognition, Prediction, and Inference
At the instance level:
* emotion classification (discrete emotions, dimensional ratings)
* emotion intensity estimation
* emotion cause detection
* context-aware affect inference (culture, situation, social setting)
* structured emotion analysis
At the aggregate level:
* creating emotion arcs
* determining broad trends in emotions over time or across locations
* tracking emotional responses toward entities of interest (e.g.,
climate change)
* document-level and cross-document emotion analysis
* labeling social networks
4. Applications
Including but not limited to:
* Affect and health, psychopathology, and mental disorders
* Affect and behavior/social science (e.g., interpersonal affect,
empathy, group-level affect, affect contagion, computational emotion
regulation)
* Affect and education
* Affect and literature/narratives/digital humanities
* Affect and commerce
5. Explainability and Interpretability in Computational Affective Models
Work aimed at improving the transparency and interpretability of
affective systems. This includes understanding how models represent and
infer emotions and identifying key cues driving predictions.
6. Ethics, Fairness, Theory Integration, Philosophical Implications
* Bias and generalizability of affective systems across demographics
* Privacy and ethics in affective data collection
* Examining whether automatic NLP systems rely on current and valid
theories of affect and emotion
* The implications of machines modeling or simulating affect
* Societal considerations surrounding affective artificial agents
**Important Dates (tentative):**
· Submission deadline: 16 Feb 2026
· Notification of acceptance: 16 March 2026
· Camera-ready paper due: 23 March 2026
· Workshop date: TBA (11-16 May 2026)
**Submission Details:**
We invite submissions for archival long and short papers, as well as
non-archival extended abstracts.
Archival long and short papers should feature novel and unpublished work
relating to the topics detailed above.
Archival Track:
· Long Paper: Consists of up to 8 pages of content, with additional
pages for references, limitations, ethical considerations, and
appendices.
· Short Paper: Consists of up to 4 pages of content, with additional
pages for references, limitations, ethical considerations, and
appendices.
(When preparing camera ready papers, you will be allowed one extra page
to address comments by the reviewers.)
Non-Archival Track:
· Extended Abstract: Up to 2 pages.
**Submission Format:**
All submissions must use the LREC 2026 template and follow the
guidelines found at: https://lrec2026.info/authors-kit/ (extended
abstracts may be 1-2 pages in length).
**Mandatory Ethics Section:** We ask all authors to include a section on
Ethical Considerations in their submission, touching on the ethical
concerns and broader societal impacts of the work. This discussion
section will not count towards the page limit.
**Submission Site:**
All submissions must be made through the SoftConf portal. The link to
the system will be shared shortly.
**Additional Details:**
Website: https://casworkshop.github.io/
Attendance: The workshop will follow the attendance policy of the main
conference (https://lrec2026.info/registration-policy/ ).
**Organizers:**
* Christopher Bagdon, University of Bamberg, Germany
* Krishnapriya Vishnubhotla, National Research Council Canada
* Kristen A. Lindquist, The Ohio State University, USA
* Lyle Ungar, University of Pennsylvania, USA
* Roman Klinger, University of Bamberg, Germany
* Saif M. Mohammad, National Research Council Canada
Contact us at <workshop.cas1(a)gmail.com> with any questions.
The 2nd Workshop on DHOW: Diffusion of Harmful Content on the Online Web
The workshop will be conducted in a *hybrid* format to ensure maximum
participation, accommodating attendees both *online* and in person.
Submission deadline: *July 11, 2025 (AoE)*
*Workshop site*: https://dhow-workshop.github.io/2025/
*Co-located with ACMMM 2025*
https://acmmm2025.org/
Dublin, Ireland, 27-31 October 2025
*Important Dates*
Submission deadline: extended to *July 11, 2025*
Notification of acceptance: August 01, 2025
Camera-ready papers due: August 11, 2025
Workshop date: October 27/28, 2025
*Workshop Description*
With the advancement of digital technologies and devices, online content
is easily accessible, and harmful content spreads alongside it. Harmful
content appears on many platforms and in multiple languages. The topic
is broad and spans multiple research directions, yet from the user's
perspective, all of it causes harm. It is often studied in isolation
(e.g., misinformation or hate speech), on a single platform, in a single
language, or around a particular issue. This allows spreaders of harmful
content to switch platforms and languages to reach new audiences.
Harmful content is not limited to social media but also appears in news
media, shared in posts, news articles, comments, and hyperlinks. There
is therefore a need to study harmful content across platforms,
languages, modalities, and topics.
We will bring research on harmful content under one umbrella so that
work on different topics (hate speech, misinformation, disinformation,
self-harm, offensive content, etc.) can yield novel methods and
recommendations for users, leveraging text analysis together with image,
audio, and video recognition to detect harmful content in diverse
formats. The workshop will also cover ongoing issues such as the wars
and elections of 2025.
We believe this workshop will provide a unique opportunity for
researchers and practitioners to exchange ideas, share the latest
developments, and collaborate on addressing the challenges associated
with harmful content spread across the Web. We expect the workshop to
generate insights and discussions that help advance the field of
societal artificial intelligence (AI) for the development of a safer
internet. In addition to attracting high-quality research contributions,
one aim of the workshop is to mobilise researchers working in related
areas to form a community.
*Submission Topics*
• Studying different types of harmful content
• Computational fact-checking and misinformation detection
• The role of generative AI in mitigating harmful content
• Harassment, bullying, and hate speech detection
• Explainable AI for harmful content analysis
• Multimodal and multilingual harmful content detection (e.g., fake news, spam, and troll detection)
• Deepfakes and synthetic media
• Ethical and societal implications of AI in content moderation
• Qualitative and quantitative studies of harmful content
• Psychological effects of harmful content (e.g., on mental health)
• Approaches for data collection or annotation using multimodal large models on harmful content
• User studies on the effects of harmful content on human beings
*Submissions*
- Submission Instructions: https://dhow-workshop.github.io/2025/#call
- Submission Link: https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/DHOW
*Workshop organizers*
• Thomas Mandl (University of Hildesheim, Germany)
• Haiming Liu (University of Southampton, United Kingdom)
• Gautam Kishore Shahi (University of Duisburg-Essen, Germany)
• Amit Kumar Jaiswal (University of Surrey, United Kingdom)
• Durgesh Nandini (University of Bayreuth, Germany)
DHOW 2025
*-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------*
*Call for Participation: DravidianLangTech-2026 Workshop and Shared
Tasks @ ACL 2026*
*-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------*
*CFP for the Sixth Workshop on Speech and Language Technologies for
Dravidian Languages - DravidianLangTech-2026 (Theme: Multilingual
Multicultural Multimodal LLMs)*
*---------------------------------------------------------------------------------------------------------------------------------------------------------*
*DravidianLangTech-2026 @ The 64th Annual Meeting of the Association for
Computational Linguistics (ACL) 2026*
*Venue: San Diego, California, United States*
*Conference Date: July 02 - 07, 2026*
*Workshop Website: https://sites.google.com/view/dravidianlangtech-2026*
-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------
With the rapid advancement of artificial intelligence and language
technologies, internet usage has continued to surge globally, enabling many
widely spoken languages to adapt successfully to the digital age. However,
regional and underresourced languages still face significant challenges due
to limited computational resources, annotated datasets, and specialized
tools. One such group is the Dravidian language family, primarily spoken in
South India and Sri Lanka, with communities across Nepal, Pakistan,
Malaysia, London, and other parts of the world. The Dravidian languages,
with a history spanning more than 4,500 years and spoken by millions of
speakers, are under-resourced in speech and natural language processing.
Despite growing research interest, gaps persist in areas such as speech
recognition, multimodal processing, and generative AI applications for
Dravidian languages. This is the sixth workshop on speech and language
technologies for Dravidian languages, building upon the success of the
previous editions. DravidianLangTech-2026 continues to serve as a
collaborative forum for researchers, practitioners, and students to share
insights and advance computational methods for Dravidian languages. The
insights and advance computational methods for Dravidian languages. The
broader objectives of DravidianLangTech-2026 are:
- To explore challenges and innovations in developing speech and
language resources for Dravidian languages.
- To design and adapt language technologies for multilingual,
multimodal, and code-mixed Dravidian contexts.
- To facilitate collaboration between the global Dravidian language
community and international scholars across computational linguistics, AI,
and digital humanities.
- To address ethical, cultural, and inclusivity aspects in the creation
of language technologies for under-represented communities.
- To encourage the integration of Agentic AI frameworks for building
interactive, explainable, and collaborative language systems in Dravidian
contexts.
*Call for Papers :*
DravidianLangTech-2026 welcomes theoretical, empirical, and
application-driven contributions on any Dravidian language (e.g., Tamil, Kannada,
Malayalam, Telugu, Tulu, Allar, Aranadan, Attapadya, Kurumba, etc.) that
advance language processing, speech technologies, multimodality, or
resource development. Submissions can address challenges in monolingual,
bilingual, and code-mixed settings as well as crosslingual and low-resource
transfer approaches.
Topics of interest include but are not limited to
   - Corpus (data) development, annotation tools, benchmarks, and evaluation
methodologies
- Detecting Hate Speech, Offensive Language, Misinformation, Fake News,
Spam, and Rumor
- Generative AI and Prompt Engineering for Dravidian languages
- Agentic AI and Multi-agent Systems: workflow orchestration, reasoning
agents, and collaborative agents for Dravidian language processing
- Multimodal processing: Text, Speech, Image, Video, and Memes in
Dravidian contexts
- Speech Technology: Automatic Speech Recognition, Speech Synthesis,
Voice Conversion
- Impaired/Normal Speech Recognition and Assistive Technologies for
Dravidian speech
- Accent Recognition, Verification, and Dialect Modeling
- Emotion and Sentiment Recognition from Dravidian Speech and Text
- Machine Translation and Cross-lingual Transfer in Dravidian languages
- Language Resources for Generative and Instruction-Tuned LLMs
- Document Analysis and Understanding for Dravidian texts and scripts
- Object Detection and Recognition in multimodal Dravidian datasets
- Ethical and Fair AI for Low-resource Language Communities
- Healthcare and Mental Health Applications (e.g., depression detection,
doctor-patient communication) in Dravidian speech
- Educational Applications: Digital literacy, inclusive tools for rural
Dravidian language communities
*---------------------------------------------------------------------------------------------------------------------------------------------------------*
*Workshop Paper Submission Link
<https://openreview.net/group?id=aclweb.org/ACL/2026/Workshop/DravidianLangT…>*
*---------------------------------------------------------------------------------------------------------------------------------------------------------*
*Important Dates*
*---------------------------------------------------------------------------------------------------------------------------------------------------------*
*First call for workshop papers: December 10, 2025*
*Second call for workshop papers: January 15, 2026*
*Third call for workshop papers: February 20, 2026*
*Direct paper submission deadline: March 5, 2026*
*Pre-reviewed ARR commitment deadline: March 24, 2026*
*Notification of acceptance: April 28, 2026*
*Camera-ready paper due: May 12, 2026*
*Pre-recorded video due (hard deadline): June 4, 2026*
*Workshop dates: July 2-3, 2026*
with regards,
Dr. Bharathi Raja Chakravarthi,
Assistant Professor / Lecturer-above-the-bar
Programme Director (MSc Computer Science - Artificial Intelligence)
<https://www.universityofgalway.ie/courses/taught-postgraduate-courses/compu…>
School of Computer Science, University of Galway, Ireland
Insight SFI Research Centre for Data Analytics, Data Science Institute,
University of Galway, Ireland
E-mail: bharathiraja.akr(a)gmail.com , bharathi.raja(a)universityofgalway.ie
<bharathiraja.asokachakravarthi(a)universityofgalway.ie>
Google Scholar: https://scholar.google.com/citations?user=irCl028AAAAJ&hl=en
Website:
https://research.universityofgalway.ie/en/persons/bharathi-raja-asoka-chakr…
Apologies for cross-posting.
---------------------------------------------------------------------------
*SIGUL 2026 Joint Workshop with ELE, EURALI, and DCLRL*
*Towards Inclusivity and Equality: Language Resources and Technologies for
Under-Resourced and Endangered Languages*
*https://sites.google.com/view/sigul2026/home-page*
------------------------------------
We are pleased to announce the upcoming SIGUL 2026 Joint Workshop with ELE,
EURALI, and DCLRL on Towards Inclusivity and Equality: Language Resources
and Technologies for Under-Resourced and Endangered Languages
co-located with *LREC 2026* in Palma, Mallorca, Spain. This workshop brings together researchers
working on less-resourced, endangered, minority, low-density, and
underrepresented languages to share novel techniques, resources,
strategies, and evaluation methods. We emphasize the entire pipeline: data
creation, modeling, adaptation/transfer, system development, evaluation,
deployment, and ethical/community engagement.
We invite contributions on, but not limited to, the following topics:
- Data collection, annotation, and curation for under-resourced languages (crowdsourcing, participatory methods, gamification, unsupervised or weakly supervised methods)
- Learning with limited supervision (zero- or few-shot, PEFT, RAG with linguistic resources)
- Multilingual alignment, representation learning, and language embeddings, including rare languages
- Speech, multimodal, and cross-modal technologies for under-resourced languages (speech recognition, synthesis, speech-to-text, speech translation, multimodal resources)
- Basic text processing (normalization, orthography, transliteration, tokenization/segmentation, morphological and syntactic processing) in and for low-resource settings
- Low-resource machine translation (pivoting, alignment, synthetic data)
- Evaluation frameworks, benchmarks, and metrics designed or adapted for underrepresented languages
- Adaptation, domain adaptation, and robustness to domain shift in low-resource contexts
- Responsible approaches, ethical issues, community engagement, data sovereignty, and language revitalization
- Deployment, tools, and practical systems for underserved languages (e.g., mobile apps, dictionary or translation apps, linguistic tools)
- Case studies of success and negative results (with lessons learned)
- Interoperability, standardization, and metadata practices for datasets in low-resource scenarios
Special Themes
Language modeling for intra-language variation, dialects, accents, and
regional variants of less-resourced languages
Many less-resourced languages display rich internal diversity, including
dialects, accents, and regional or social varieties. This special theme
focuses on developing language models and speech technologies that capture
and respect intra-language variation rather than reduce it to a single
“standard.” We welcome work on dialect identification and adaptation,
accent-robust speech systems, normalization vs. diversity-preserving
modeling, and cross-dialect transfer in low-data scenarios. Approaches
combining linguistic insights, community participation, and ethical
awareness are especially encouraged. The aim is to build technologies that
reflect and sustain the true linguistic richness of under-resourced
languages.
Ultra-Low-Resource Language Adaptation
This special theme focuses on methods that enable effective language and
speech technology development under extreme data scarcity. We invite
research on transfer learning, cross-lingual adaptation, multilingual
pretraining, and self-supervised or few-shot approaches tailored to
ultra-low-resource settings. Work on evaluation, data augmentation
(including synthetic data), and leveraging typological or linguistic
knowledge is also welcome. The goal is to advance techniques that extend
modern language technologies to the most underrepresented languages,
ensuring inclusivity in the digital age.
Community-Led Project Showcase
To help ground research in community needs, we invite brief (5–10 min)
presentations by language community members, NGOs, or practitioners
describing real-world challenges or resource needs. Position papers or
research posters are appropriate formats for this category.
Important Dates
Paper Submission Deadline: February 20 (Friday), 2026
Notification of Acceptance: March 22 (Sunday), 2026
Submission of Camera-Ready: March 30 (Monday), 2026
Workshop Date: 11-12 May 2026
All deadlines are anywhere-on-earth (AoE).
Call for Papers
We welcome original research papers and ongoing work relevant to the topics
of the workshop. Each submission can be one of the following categories:
- research papers;
- position papers, for reflective considerations of methodological, best-practice, and institutional issues (e.g., ethics, data ownership, speakers’ community involvement, de-colonizing approaches);
- posters, for work-in-progress projects in an early stage of development or descriptions of new resources;
- demo papers and early-career/student papers (to be submitted as extended abstracts and presented as posters).
The research and position papers should range from four (4) to eight (8)
pages, while demo papers are limited to four (4) pages. References don't
count towards page limits. Accepted papers will appear in the workshop
proceedings, which include both oral and poster papers in the same format.
Determination of the presentation format (oral vs. poster) is based solely
on an assessment of the optimal method of communication (more or less
interactive), given the paper content.
Submissions must be anonymous and follow LREC formatting guidelines
<https://lrec2026.info/authors-kit/>.
For inquiries, send an email to claudia.soria(a)cnr.it.
Identify, Describe and Share your LRs!
When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense, i.e. also
technologies, standards, evaluation kits, etc.) that have been used for the
work described in the paper or are a new result of your research. Moreover,
ELRA encourages all LREC authors to share the described LRs (data, tools,
services, etc.) to enable their reuse and replicability of experiments
(including evaluation ones).
Thanks,
Atul
Dear all,
We are organizing a workshop co-located with LREC 2026 on Identity-Aware
NLP. The details are as follows:
=====================================================================
SECOND CALL FOR PAPERS
Ethical and Technical Challenges for Identity-Aware NLP
Workshop at LREC 2026, Palma de Mallorca, Spain, May 11-16, 2026
https://identity-aware-ai.github.io/
=====================================================================
*Workshop Theme:* What makes each of us unique, and which ethical and
technical challenges does this imply?
*OVERVIEW*
What makes us unique? Language (and thus its automatic processing)
is about people and what they mean. However, current practice relies on
the assumptions that the humans involved are all the same, and that,
given enough data (and compute power), the resulting generalizations
will be robust enough and will represent the majority.
This approach often harms marginalized communities and ignores the
notion of identity in models and systems. Our interdisciplinary workshop
aims to raise the question of "what makes each of us unique?" to the NLP
community.
*WORKSHOP GOALS*
- The development of a shared and interdisciplinary understanding of
identities and how identity is treated in AI
- The development of new methods that push the effective, fair, and
inclusive treatment of individuals in AI to the next level
*TOPICS OF INTEREST*
We invite submissions on the following topics:
*Modeling subjective phenomena and disagreement:* Personalization and
perspectivist methods that challenge one-size-fits-all approaches by
leveraging disaggregated data and annotator metadata. Methods that learn
from disagreements rather than forcing consensus that erases unique
perspectives.
*Auditing and evaluating identity representation:* Techniques to measure
how well models represent diverse identities, diagnose failures in
capturing marginalized perspectives, and assess whether systems treat
all identities equitably. Frameworks for identity-aware performance
evaluation beyond aggregate metrics.
*Bias detection and fairness interventions:* Methods to identify when
models fail marginalized groups due to over-generalization, and
techniques to mitigate such harms while preserving model utility.
*Identity representation in LLMs:* How language models encode (or erase)
diverse identities, embody particular perspectives, and either reproduce
or challenge stereotypes. Measuring LLMs' capacity for reasoning about
identities beyond majority groups.
*Socio-political applications:* Modeling polarization, opinion
formation, and deliberation in ways that account for identity rather
than assuming homogeneous populations. How identity-aware approaches
improve accuracy for politically sensitive tasks.
*Methodological foundations from social sciences:* Best practices from
psychology and survey science for measuring identity constructs (values,
morals, narratives). Addressing challenges of using LLMs to model
diverse populations while avoiding erasure through aggregation.
*Accountability and responsible development:* Ethical responsibilities
when building systems that represent (or exclude) identities. Making AI
development processes accountable to marginalized communities most
affected by over-generalization.
*Identity-aware and community-informed evaluation and auditing:*
Community-informed bias evaluation and auditing. Human evaluation of
LLMs and other AI systems in an identity-aware manner.
*SUBMISSION TYPES*
We welcome the following types of submissions:
* Long papers: 4-8 pages of content (excluding references)
* Short papers: 4-8 pages of content (excluding references)
* Non-archival submissions, student project presentations, mixed-media
submissions
For non-archival submissions, we welcome creative formats including:
- Art, poetry, music
- Blog posts
- Jupyter notebooks
- Teaching materials
- Videos
- Findings papers
- Late-breaking papers
- Extended abstracts
For creative format submissions, please submit a PDF containing:
- A summary or abstract of your work
- A link to your work (if hosted externally)
- Any additional context or documentation
*SUBMISSION GUIDELINES*
* All submissions will be double-blind reviewed
* Submissions should follow LREC 2026 formatting guidelines available
at: https://lrec2026.info/authors-kit/
* Papers must be 4-8 pages in length (excluding references)
* Papers must include ethics and limitations sections
* No appendices are allowed in the initial submission; camera-ready
versions may be up to 10 pages
* Originality and simultaneous submissions: submissions must be
original, previously unpublished work. If a paper is submitted to or
under consideration at another venue at the same time, this must be
declared at submission time. If accepted here, it must be withdrawn from
other venues; if accepted elsewhere while under review here, please
notify us promptly.
* Preprints: there is no anonymity period at LREC 2026, so authors may
post preprints at any time; however, the version submitted for review
must still be anonymized
* Language resources (optional): at submission time, authors may share
related language resources with the community; repository entries are
linked to the LRE Map and provide metadata for the resource
* Submission site: https://softconf.com/lrec2026/IdentityAwareAI
* Proceedings and presentation: accepted papers will appear in the
workshop proceedings. All accepted papers will be presented as posters.
For remote participants, we will also organize a lightning round of
short virtual presentations to accompany the posters.
*WORKSHOP FORMAT*
The workshop will be a half-day event featuring:
- Keynote speeches from leading experts in the field
- Paper presentations (oral and lightning talks)
- Participatory design activity to develop a shared interdisciplinary
vocabulary, identify current gaps in datasets for studying identity, and
design a vision for collecting new datasets
We are committed to ensuring that our workshop is accessible to all. The
workshop will be held in a hybrid format, allowing both in-person and
virtual participation.
*IMPORTANT DATES*
All deadlines are 11:59 PM AoE (Anywhere on Earth)
* Submission Deadline: February 20, 2026
* Notification of Acceptance: March 20, 2026
* Camera-Ready Deadline: March 30, 2026
* Workshop Date: May 16, 2026
*DIVERSITY & INCLUSION*
We actively encourage submissions from underrepresented communities and
countries. The workshop organizers will provide mentorship and thorough
feedback, especially to first-time authors and reviewers.
*ORGANIZERS*
Pranav A (University of Hamburg)
Valerio Basile (University of Turin)
Neele Falk (University of Stuttgart)
David Jurgens (University of Michigan)
Gabriella Lapesa (GESIS, Leibniz Institute for the Social Sciences &
Heinrich-Heine University of Düsseldorf)
Anne Lauscher (University of Hamburg)
Soda Marem Lo (University of Turin)
*CONTACT*
For queries, please contact: identity-aware-ai(a)googlegroups.com
Join us at Identity-Aware AI 2026 to contribute to this important
conversation!
The next meeting of the Edge Hill Corpus Research Group will take place online (via MS Teams) on Friday 6 February 2026, 2:00-3:30 pm (GMT<https://time.is/United_Kingdom>).
Topic: Discourse Oriented Corpus Studies
Speaker: Dan Malone<https://www.researchgate.net/profile/Daniel-Malone> (Edge Hill University, UK)
Title: From Global Uncertainty to Domestic Danger: The lone wolf terrorist as a topos of threat in (poly)crisis discourses
The abstract and registration link are here: https://sites.edgehill.ac.uk/crg/next
Attendance is free. Registration closes on Wednesday 4 February.
If you have problems registering, or have any questions, please email the organiser, Costas Gabrielatos (gabrielc(a)edgehill.ac.uk).
*** First Call for Replication and Negative Results ***
37th IEEE International Symposium on Software Reliability Engineering
(ISSRE 2026)
October 20-23, 2026, 5-star St. Raphael Resort and Marina
Limassol, Cyprus
https://cyprusconferences.org/issre2026/
The Replications and Negative Results (RENE) Track has been established in the software
engineering community for some time and has received overwhelmingly positive feedback. This
year, we establish this track at ISSRE and invite researchers to (1) replicate results from
previous papers and (2) publish studies with important and relevant negative or null
results (results that fail to show an effect, yet demonstrate the research paths that did not
pay off).
We also encourage the publication of the negative results or replicable aspects of
previously published work. For example, authors of a published paper reporting a working
solution for a given problem can document in a “negative results paper” other (failed)
attempts they made before defining the working solution they published.
• Replication studies. The papers in this category must go beyond simply re-
implementing an algorithm and/or re-running the artifacts provided by the original paper.
Such submissions should at least apply the approach to new data sets (open-source or
proprietary). A replication study should clearly report on results that the authors were
able to replicate, as well as on the aspects of the work that were not replicable.
• Negative results papers. We seek papers that report on negative results for all
types of software reliability research in any empirical area (qualitative, quantitative,
case study, experiment, etc.). For example, did your controlled experiment not show an
improvement over the baseline? Even negative results are valuable when they are either
not obvious or disprove widely accepted wisdom.
Evaluation Criteria
Both Replication Studies and Negative Results submissions will be evaluated according to
the following standards:
• Depth and breadth of the empirical studies
• Clarity of writing
• Appropriateness of conclusions
• Amount of useful, actionable insights
• Availability of artifacts
• Underlying methodological rigor. A negative result due primarily to misaligned
expectations or due to lack of statistical power (small samples) is not a good submission.
The negative result should be a result of a lack of effect, not a lack of methodological
rigor.
Most importantly, we expect replication studies to clearly point out the artifacts upon
which the study is built, and to provide the links to all the artifacts in the submission (the
only exception will be given to those papers that replicate the results on proprietary
datasets that cannot be publicly released).
Submission Instructions
Submissions must be original, in the sense that the findings and writing have not been
previously published and are not under consideration elsewhere. However, as either
replication studies or negative results, some overlap with previous work is expected.
Please make clear in the paper the overlap with and difference from previous work.
All submissions must be in PDF format and conform, at time of submission, to the IEEE
Computer Society Format Guidelines:
(https://www.ieee.org/conferences/publishing/templates).
Authors are strongly encouraged to print the PDF and review it for integrity (fonts,
symbols, equations, etc.) before submission, as defective printing can undermine a
paper’s chance of success. By submitting to the ISSRE RENE Track, authors acknowledge
that they are aware of and agree to be bound by the IEEE Plagiarism FAQ. In particular,
papers submitted to the RENE track must not have been published elsewhere and must not
be under review or submitted for review elsewhere whilst under consideration for ISSRE
2026. Contravention of this concurrent submission policy will be deemed a serious breach
of scientific ethics, and appropriate action will be taken in all such cases. To check for
double submission and plagiarism issues, the chairs reserve the right to (1) share the list
of submissions with the PC Chairs of other conferences with overlapping review periods
and (2) use external plagiarism detection software, under contract to the IEEE, to detect
violations of these policies.
Submissions to the RENE Track can be made via the ISSRE RENE track submission site:
https://easychair.org/conferences?conf=issre2026 .
Submission Length: The ISSRE RENE Track accepts submissions of two lengths:
(1) New replication studies and new descriptions of negative results should have a length
of up to 10 pages, plus 2 pages which may only contain references.
(2) Negative results documented during the preparation of previously published work by
the authors should be described in up to 5 pages, plus 1 page, which may only contain
references (e.g., as previously mentioned, authors of a published paper can document
negative results they obtained while working on it, such as methodologically sound
solutions that did not work).
Important note 1: Both types of papers (replication and negative results) will be included
as part of the main conference proceedings.
Important note 2: The RENE track does not follow a double-anonymous review process.
Publication and Presentation
Upon notification of acceptance, all authors of accepted papers will receive further
instructions for preparing the camera-ready versions of their submissions. If a submission
is accepted, at least one author of the paper is required to have a full registration for ISSRE
2026, attend the conference, and present the paper in person. All accepted papers will be
published in the conference electronic proceedings. The presentation is expected to be
delivered in person, unless this is impossible due to travel limitations (e.g., related to
health or visa). Details about the presentations will follow the notifications.
The official publication date is the date the proceedings are made available in the IEEE
Digital Libraries. The official publication date affects the deadline for any patent filings
related to published work.
Purchases of additional pages in the proceedings are not allowed.
Important Dates (AoE)
• Submission deadline: July 5, 2026
• Notification of acceptance: August 12, 2026
• Camera-ready copy submission: August 19, 2026
• Author registration deadline: August 19, 2026
Organisation
General Chairs
• Leonardo Mariani, University of Milano - Bicocca, Italy
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Coordinator
• Roberto Natella, GSSI, Italy
Research Program Committee Chairs
• Domenico Cotroneo, UNC Charlotte, USA
• Jie M. Zhang, King's College London, UK
Industry Program Chairs
• Jinyang Liu, Bytedance, USA
• Sigrid Eldh, Ericsson AB, Sweden
Workshop Chairs
• Georgia Kapitsaki, University of Cyprus, Cyprus
• August Shi, The University of Texas at Austin, USA
Doctoral Symposium Chairs
• Stefan Winter, LMU Munich, Germany
• Lili Wei, McGill University, Canada
Fast Abstract Chairs
• Luigi Lavazza, University of Insubria, Italy
• Yintong Huo, SMU, Singapore
JIC2 Chair
• Helene Waeselynck, LAAS-CNRS, France
Publicity Chairs
• Allison K. Sullivan, The University of Texas at Arlington, USA
• Jose D'Abruzzo Pereira, University of Coimbra, Portugal
Publication Chairs
• Sherlock Licorish, Otago Business School, New Zealand
• Maria Teresa Rossi, GSSI, Italy
Artifact Evaluation Chairs
• Naghmeh Ivaki, University of Coimbra, Portugal
• Fumio Machida, University of Tsukuba, Japan
Diversity and Inclusion Chair
• Eleni Constantinou, University of Cyprus, Cyprus
Financial Chair
• Costas Pattichis, University of Cyprus, Cyprus
Web Chairs
• Michalis Ioannides, Easy Conferences LTD
• Elena Masserini, University of Milano - Bicocca, Italy
Registration Chair
• Easy Conferences LTD
We invite submissions to PoliticalNLP 2026, the 3rd Workshop on Natural Language Processing for Political Sciences, co-located with LREC 2026. The workshop will take place in Palma de Mallorca, Spain, at the Palau de Congressos de Palma.
Theme for 2026
Trust, Transparency and Generative AI in Political Discourse Analysis
Large language models and generative AI are increasingly shaping political communication, public opinion, and democratic processes. PoliticalNLP 2026 provides an interdisciplinary forum to examine these developments critically and responsibly, at the intersection of NLP, political science, law, and the social sciences.
Topics of interest include, but are not limited to
• Trustworthy, explainable, and fair NLP for political data
• Bias, misinformation, and ethical risks of LLMs
• Multilingual and cross-cultural political NLP
• Generative AI for policy analysis and deliberative democracy
• Reproducibility, transparency, and responsible AI practices
• Datasets, tools, and resources for political and civic technologies
Important dates
• Paper submission (long and short): 16 February 2026
• Notification: 11 March 2026
• Camera-ready: 30 March 2026
• Workshop: 11 to 12 May 2026, or 16 May 2026 (final date to be confirmed by LREC)
Proceedings
Accepted papers will appear in the LREC 2026 Workshop Proceedings.
Submission and CFP
Full Call for Papers and details: https://sites.google.com/view/politicalnlp2026
Submission is electronic via the Softconf START system: https://softconf.com/lrec2026/PoliticalNLP2026/
Best regards,
PoliticalNLP 2026 Organizer
--
Wajdi Zaghouani, Ph.D.
Associate Professor in Residence,
Communication Program
Northwestern Qatar | Education City
T +974 4454 5232 | M +974 3345 4992
Second International Conference on Natural Language Processing
and Artificial Intelligence for Cyber Security
(NLPAICS'2026)
University of Alicante, Alicante, Spain
11 and 12 June 2026
https://nlpaics2026.gplsi.es/
Third Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning and
Large Language Models (LLMs) have resulted in improved performance across
a wide range of applications. In particular, there has been growing
interest in employing AI methods in different Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened
priority for both individual users and organisations. As the volume of
online information grows exponentially, traditional security approaches
often struggle to identify and prevent evolving security threats. The
inadequacy of conventional security frameworks highlights the need for
innovative solutions that can effectively navigate the complex digital
landscape to ensure robust security. NLP and AI in Cyber Security have
vast potential to significantly enhance threat detection and mitigation
by fostering the development of advanced security systems for autonomous
identification, assessment, and response to security threats in real
time. Recognising this challenge and the capabilities of NLP and AI
approaches to fortify Cyber Security systems, the Second International
Conference on Natural Language Processing (NLP) and Artificial
Intelligence (AI) for Cyber Security (NLPAICS'2026) continues the
tradition from NLPAICS'2024 to be a gathering place for researchers in
NLP and AI methods for Cyber Security. We invite contributions that
present the latest NLP and AI solutions for mitigating risks in
processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to
the employment of NLP and AI (and in general, language studies and
models) for Cyber Security, including but not limited to:
_Societal and Human Security and Safety_
* Content Legitimacy and Quality
* Detection and mitigation of hate speech and offensive language
* Fake news, deepfakes, misinformation and disinformation
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* Trust and credibility of online information
* User Security and Safety
* Cyberbullying and identification of internet offenders
* Monitoring extremist fora
* Suicide prevention
* Clickbait and scam detection
* Fake profile detection in online social networks
* Technical Measures and Solutions
* Social engineering identification, phishing detection
* NLP for risk assessment
* Controlled languages for safe messages
* Prevention of malicious use of AI models
* Forensic linguistics
* Human Factors in Cyber Security
_Speech Technology and Multimodal Investigations for Cyber Security_
* Voice-based security: Analysis of voice recordings or transcripts
for security threats
* Detection of machine-generated language in multimodal context (text,
speech and gesture)
* NLP and biometrics in multimodal context
_Data and Software Security_
* Cryptography
* Digital forensics
* Malware detection, obfuscation
* Models for documentation
* NLP for data privacy and leakage prevention (DLP)
* Addressing dataset "poisoning" attacks
_Human-Centric Security and Support_
* Natural language understanding for chatbots: NLP-powered chatbots
for user support and security incident reporting
* User behaviour analysis: analysing user-generated text data (e.g.,
chat logs and emails) to detect insider threats or unusual behaviour
* Human supervision of technology for Cyber Security
_Anomaly Detection and Threat Intelligence_
* Text-Based Anomaly Detection
* Identification of unusual or suspicious patterns in logs, incident
reports or other textual data
* Detecting deviations from normal behaviour in system logs or network
traffic
* Threat Intelligence Analysis
* Processing and analysing threat intelligence reports, news, articles
and blogs on latest Cyber Security threats
* Extracting key information and indicators of compromise (IoCs) from
unstructured text
_Systems and Infrastructure Security_
* Systems Security
* Anti-reverse engineering for protecting privacy and anonymity
* Identification and mitigation of side-channel attacks
* Authentication and access control
* Enterprise-level mitigation
* NLP for software vulnerability detection
* Malware Detection through Code Analysis
* Analysing code and scripts for malware
* Detection using NLP to identify patterns indicative of malicious
code
_Financial Cyber Security_
* Financial fraud detection
* Financial risk detection
* Algorithmic trading security
* Secure online banking
* Risk management in finance
* Financial text analytics
_Ethics, Bias, and Legislation in Cyber Security_
* Ethical and Legal Issues
* Digital privacy and identity management
* The ethics of NLP and speech technology
* Explainability of NLP and speech technology tools
* Legislation against malicious use of AI
* Regulatory issues
* Bias and Security
* Bias in Large Language Models (LLMs)
* Bias in security related datasets and annotations
_Datasets and resources for Cyber Security Applications_
_Specialised Security Applications and Open Topics_
* Intelligence applications
* Emerging and innovative applications in Cyber Security
_Special Theme Track - Future of Cyber Security in the Era of LLMs and
Generative AI_
NLPAICS 2026 will feature a special theme track with the goal of
stimulating discussion around Large Language Models (LLMs), Generative
AI and ensuring their safety. The latest generation of LLMs, such as
ChatGPT, Gemini, DeepSeek, LLAMA and open-source alternatives, has
showcased remarkable advancements in text and image understanding and
generation. However, as we navigate through uncharted territory, it
becomes imperative to address the challenges associated with employing
these models in everyday tasks, focusing on aspects such as fairness,
ethics, and responsibility. The theme track invites studies on how to
ensure the safety of LLMs in various tasks and applications and what
this means for the future of the field. The possible topics of
discussion include (but are not limited to) the following:
* Detection of LLM-generated language in multimodal context (text,
speech and gesture)
* LLMs for forensic linguistics
* Bias in LLMs
* Safety benchmarks for LLMs
* Legislation against malicious use of LLMs
* Tools to evaluate safety in LLMs
* Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two
forms:
* Regular long papers: These can be up to eight (8) pages long,
presenting substantial, original, completed, and unpublished work.
* Short (poster) papers: These can be up to four (4) pages long and
are suitable for describing small, focused contributions, ongoing
research, negative results, system demonstrations, etc. Short papers
will be presented as part of a poster session.
The conference will not consider abstract-only submissions.
Accepted papers, both long and short, will be published as e-proceedings
with an ISBN, which will be available online on the conference website at
the time of the conference and are expected to be uploaded to the ACL
Anthology.
To prepare your submission, please make sure to use the NLPAICS 2026
style files available here:
LaTeX: NLPAICS_2026_LaTeX.zip [1]
Overleaf: https://www.overleaf.com/read/sgwmrzbmjfhc#aeea77
Word:
https://nlpaics2026.gplsi.es/wp-content/uploads/2025/11/NLPAICS2026_Proceed…
Papers should be submitted through Softconf/START using the following
link: https://softconf.com/p/nlpaics2026/user/
The conference will feature a student workshop, and awards will be
offered to the authors of the best papers.
Important dates
* Submissions due: 16 March 2026
* Reviewing process: 1 April - 30 April 2026
* Notification of acceptance: 5 May 2026
* Camera-ready due: 19 May 2026
* Conference camera-ready proceedings ready: 1 June 2026
* Conference: 11-12 June 2026
Organisation
Conference Chairs
Ruslan Mitkov (University of Alicante)
Rafael Muñoz (University of Alicante)
Programme Committee Chairs
Elena Lloret (University of Alicante)
Tharindu Ranasinghe (Lancaster University)
Publication Chair
Ernesto Estevanell (University of Alicante)
Sponsorship Chair
Andres Montoyo (University of Alicante)
Student Workshop Chair
Salima Lamsiyah (University of Luxembourg)
Best Paper Award Chair
Saad Ezzini (King Fahd University of Petroleum & Minerals)
Publicity Chair
Beatriz Botella (University of Alicante)
Social Programme Chair
Alba Bonet (University of Alicante)
Venue
The Second International Conference on Natural Language Processing and
Artificial Intelligence for Cyber Security (NLPAICS'2026) will take
place at the University of Alicante and is organised by the University
of Alicante GPLSI research group.
Further information and contact details
The follow-up calls will list keynote speakers and members of the
programme committee once confirmed. The conference website is
https://nlpaics2026.gplsi.es/ and will be updated on a regular basis.
For further information, please email nlpaics2026(a)dlsi.ua.es
Registration will open in March 2026.
Links:
------
[1] http://summer-school.gplsi.es/NLPAICS_2026_LaTeX.zip