We're proud to announce a new research special interest group, SIGSEC, to cover work on LLM and NLP security. SIGSEC is part of the Association for Computational Linguistics (www.aclweb.org).
We host regular talks on NLP & LLM security, run a mailing list for anyone interested in the field, and organize an annual research workshop.
The ACL Special Interest Group on NLP Security exists to:
* provide infrastructure and community for the many ACL members working in NLP security;
* establish a serious research body that represents NLP and ACL interests in the burgeoning field of LLM and NLP security; and
* bridge the Information Security and Computational Linguistics communities, a link the Information Security community is already actively pursuing.
Membership is free, and we have an exciting talk series lined up. Video links are posted at https://sig.llmsecurity.net/talks/. We start with:
* Thursday November 2nd, 10.00 ET / 15.00 CET - Text Embeddings Reveal (Almost) As Much As Text - John X. Morris
* Thursday November 9th, 11.00 ET / 17.00 CET - LLM-Deliberation: Evaluating LLMs with Interactive Multi-Agent Negotiation Games - Sahar Abdelnabi
* Thursday November 23rd, 11.00 ET / 17.00 CET - Privacy Side Channels in Machine Learning Systems - Edoardo Debenedetti
All talks present cutting-edge research on LLM security vulnerabilities and assessment methods.
Join us here! https://sig.llmsecurity.net/join/
We look forward to welcoming you.
SIGSEC President: Leon Derczynski, ITU Copenhagen / NVIDIA Corp
SIGSEC Secretary: Muhao Chen, University of Southern California
SIGSEC Expert Advisor: Jekaterina Novikova, AI Risk and Vulnerability Alliance / Cambridge Cognition