We are happy to announce the next online seminars in the Neurocognition, Language and Visual Processing (NLVP) series, organized by the NLVP group and IDSAI at the University of Exeter. You can find the slides and videos of previous talks, as well as the schedule for upcoming talks, here: https://sites.google.com/view/neurocognit-lang-viz-group/seminars
Zoom meeting link: https://Universityofexeter.zoom.us/j/93707609239?pwd=ErfOgIy30fwkAH7V5iFFVgA... (Meeting ID: 937 0760 9239 Password: 259613)
***Seminar 1: Thursday, 12 Dec 2024, 16:00 to 17:00, GMT***
Speaker: Dr Vered Shwartz (University of British Columbia)
Title: Navigating Cultural Adaptation of LLMs: Knowledge, Context, and Consistency
Abstract: Despite their remarkable success, large language models and vision-and-language models suffer from several limitations. This talk focuses on one of them: the models' narrow Western, North American, or even US-centric lens, a result of training on web text and images primarily from US-based users. Consequently, users from diverse cultures who interact with these tools may feel misunderstood and find them less useful. Worse still, when such models are used in applications that make decisions about people's lives, a lack of cultural awareness may lead them to perpetuate stereotypes and reinforce societal inequalities. In this talk, I will present a line of work from our lab aimed at quantifying and mitigating this bias.
Speaker's short bio: Vered Shwartz is an Assistant Professor of Computer Science at the University of British Columbia, and a CIFAR AI Chair at the Vector Institute. Her research interests include commonsense reasoning, computational semantics and pragmatics, multimodal models, and cultural considerations in NLP. Previously, Vered was a postdoctoral researcher at the Allen Institute for AI (AI2) and the University of Washington, and received her PhD in Computer Science from Bar-Ilan University.
***Seminar 2: Thursday, 16 Jan 2025, 15:00 to 16:00, GMT***
Speaker: Prof Roberto Navigli (Sapienza University of Rome)
Title: What's Behind Text? The Long, Challenging Path Towards a Unified Language-Independent Representation of Meaning
Abstract: In the era of Large Language Models (LLMs), the pursuit of a unified, language-independent representation of meaning remains both essential and complex. This talk revisits the rationale for advancing semantic understanding beyond the capabilities of LLMs and highlights the development of a large-scale multilingual inter-task resource like MOSAICo and the design of innovative methods that bridge word- and sentence-level meanings across languages. I will also explore how building a robust, multilingual framework for interpreting meaning with greater precision and depth enhances the quality and reliability of system outputs, including text generated by LLMs.
Speaker's short bio: Roberto Navigli is Professor of Natural Language Processing at the Sapienza University of Rome, where he leads the Sapienza NLP Group. He has received two ERC grants on lexical and sentence-level multilingual semantics, highlighted among the 15 projects through which the ERC transformed science. He has also received several prizes, including two Artificial Intelligence Journal prominent paper awards and several outstanding/best paper awards from ACL. He is the co-founder of Babelscape, a successful deep-tech company that enables NLU in dozens of languages. He served as Associate Editor of the Artificial Intelligence Journal (2013-2020) and as Program Co-Chair of ACL-IJCNLP 2021. He is a Fellow of the ACL, ELLIS, and EurAI, and currently serves as General Chair of ACL 2025.
Check past and upcoming seminars at the following URL: https://sites.google.com/view/neurocognit-lang-viz-group/seminars.
To follow future NLVP seminars, you are welcome to join our *Google group*: https://groups.google.com/g/neurocognition-language-and-vision-processing-gr...
Best wishes, Hang Dong (https://computerscience.exeter.ac.uk/staff/hd524) on behalf of the NLVP group (https://sites.google.com/view/neurocognit-lang-viz-group/members)