First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2024)
Lancaster University, Lancaster, United Kingdom 29-30 July 2024 https://www.nlpaics.com
Second Call for Papers
Recent advances in Natural Language Processing (NLP), Deep Learning and Large Language Models (LLMs) have resulted in improved performance across a wide range of applications. In particular, there has been growing interest in employing AI methods in different Cyber Security applications.
In today's digital world, Cyber Security has emerged as a heightened priority for both individual users and organisations. As the volume of online information grows exponentially, traditional security approaches often struggle to identify and prevent evolving security threats. The inadequacy of conventional security frameworks highlights the need for innovative solutions that can effectively navigate the complex digital landscape to ensure robust security. NLP and AI have vast potential to significantly enhance threat detection and mitigation in Cyber Security by fostering the development of advanced security systems capable of autonomously identifying, assessing, and responding to security threats in real time. Recognising this challenge and the capabilities of NLP and AI approaches to fortify Cyber Security systems, the First International Conference on Natural Language Processing (NLP) and Artificial Intelligence (AI) for Cyber Security (NLPAICS’2024) serves as a gathering place for researchers in NLP and AI methods for Cyber Security. We invite contributions that present the latest NLP and AI solutions for mitigating risks in processing digital information.
Conference topics
The conference invites submissions on a broad range of topics related to the employment of NLP and AI (and in general, language studies and models) for Cyber Security including but not limited to:
Societal and Human Security and Safety
· Content Legitimacy and Quality
o Detection and mitigation of hate speech and offensive language
o Fake news, deepfakes, misinformation and disinformation
o Detection of machine-generated language in multimodal contexts (text, speech and gesture)
o Trust and credibility of online information
· User Security and Safety
o Cyberbullying and identification of internet offenders
o Monitoring extremist fora
o Suicide prevention
o Clickbait and scam detection
o Fake profile detection in online social networks
· Technical Measures and Solutions
o Social engineering identification, phishing detection
o NLP for risk assessment
o Controlled languages for safe messages
o Prevention of malicious use of AI models
o Forensic linguistics
· Human Factors in Cyber Security
Speech Technology and Multimodal Investigations for Cyber Security
· Voice-based security: Analysis of voice recordings or transcripts for security threats
· Detection of machine-generated language in multimodal contexts (text, speech and gesture)
· NLP and biometrics in multimodal context
Data and Software Security
· Cryptography
· Digital forensics
· Malware detection, obfuscation
· Models for documentation
· NLP for data privacy and leakage prevention (DLP)
· Addressing dataset “poisoning” attacks
Human-Centric Security and Support
· Natural language understanding for chatbots: NLP-powered chatbots for user support and security incident reporting
· User behaviour analysis: analysing user-generated text data (e.g., chat logs and emails) to detect insider threats or anomalous behaviour
· Human supervision of technology for Cyber Security
Anomaly Detection and Threat Intelligence
· Text-Based Anomaly Detection
o Identification of unusual or suspicious patterns in logs, incident reports or other textual data
o Detecting deviations from normal behaviour in system logs or network traffic
· Threat Intelligence Analysis
o Processing and analysing threat intelligence reports, news articles and blogs on the latest Cyber Security threats
o Extracting key information and indicators of compromise (IoCs) from unstructured text
Systems and Infrastructure Security
· Systems Security
o Anti-reverse engineering for protecting privacy and anonymity
o Identification and mitigation of side-channel attacks
o Authentication and access control
o Enterprise-level mitigation
o NLP for software vulnerability detection
· Malware Detection through Code Analysis
o Analysing code and scripts for malware
o Detection using NLP to identify patterns indicative of malicious code
Financial Cyber Security
· Financial fraud detection
· Financial risk detection
· Algorithmic trading security
· Secure online banking
· Risk management in finance
· Financial text analytics
Ethics, Bias, and Legislation in Cyber Security
· Ethical and Legal Issues
o Digital privacy and identity management
o The ethics of NLP and speech technology
o Explainability of NLP and speech technology tools
o Legislation against malicious use of AI
o Regulatory issues
· Bias and Security
o Bias in Large Language Models (LLMs)
o Bias in security related datasets and annotations
Datasets and resources for Cyber Security Applications
Specialised Security Applications and Open Topics
· Intelligence applications
· Emerging and innovative applications in Cyber Security
Special Theme Track - Future of Cyber Security in the Era of LLMs and Generative AI
We are excited to share that NLPAICS 2024 will have a special theme track with the goal of stimulating discussion around Large Language Models (LLMs), Generative AI and ensuring their safety. The latest generation of LLMs, such as ChatGPT, Gemini, Llama and open-source alternatives, has showcased remarkable advancements in text and image understanding and generation. However, as we navigate through uncharted territory, it becomes imperative to address the challenges associated with employing these models in everyday tasks, focusing on aspects such as fairness, ethics, and responsibility. The theme track invites studies on how to ensure the safety of LLMs in various tasks and applications and what this means for the future of the field. Possible topics of discussion include (but are not limited to) the following:
· Detection of LLM-generated language in multimodal contexts (text, speech and gesture)
· LLMs for forensic linguistics
· Bias in LLMs
· Safety benchmarks for LLMs
· Legislation against malicious use of LLMs
· Tools to evaluate safety in LLMs
· Methods to enhance the robustness of language models
Submissions and Publication
NLPAICS welcomes high-quality submissions in English, which can take two forms:
· Regular long papers: These can be up to eight (8) pages long, presenting substantial, original, completed, and unpublished work.
· Short papers: These can be up to four (4) pages long and are suitable for describing small, focused contributions, negative results, system demonstrations, etc.
Note that the page limits mentioned above exclude additional pages for references, ethical considerations, conflict-of-interest statements, as well as data and code availability statements.
Papers must be anonymised to support double-blind reviewing.
Please submit your work as a PDF using the following link: https://softconf.com/n/nlpaics2024/
Submission templates can be accessed here: LaTeX (Overleaf), LaTeX, MS Office
Accepted papers, both long and short, will be published in the same e-proceedings, which will be uploaded to the ACL Anthology.
Important dates
· Submissions due: 5 April 2024
· Reviewing process: 25 April-31 May 2024
· Notification of acceptance: 5 June 2024
· Camera-ready due: 20 June 2024
· Conference: 29-30 July 2024
Venue
The First International Conference on Natural Language Processing and Artificial Intelligence for Cyber Security (NLPAICS’2024) will take place at Lancaster University and is organised by the Lancaster University UCREL NLP research group.
Organisation
· Conference Chair
o Ruslan Mitkov (Lancaster University)
· Conference Programme Chairs
o Cengiz Acartürk (Jagiellonian University)
o Matthew Bradbury (Lancaster University)
o Mo El-Haj (Lancaster University)
o Paul Rayson (Lancaster University)
· Sponsorship Chair
o Saad Ezzini (Lancaster University)
· Publicity Chair
o Tharindu Ranasinghe (Aston University)
· Publication Chair
o Ignatius Ezeani (Lancaster University)