Please consider contributing and/or forwarding to appropriate colleagues and groups.
****We apologize for the multiple copies of this e-mail****
----------------------------------------------------------------------------------------------------
Call for Participation
----------------------------------------------------------------------------------------------------
First Call for Participation:
Task: EXIST 2025: sEXism Identification in Social neTworks
Website: http://nlp.uned.es/exist2025/
EXIST is a series of scientific events and shared tasks on sexism identification in social networks. EXIST aims to foster the automatic detection of sexism in a broad sense, from explicit misogyny to other subtle expressions that involve implicit sexist behaviours (EXIST 2021, EXIST 2022, EXIST 2023, EXIST 2024). The fifth edition of the EXIST shared task will be held as a Lab in CLEF 2025, on September 9-12, 2025, at UNED, Madrid, Spain.
In the 2024 EXIST campaign, the datasets included multimedia content in the form of memes, advancing research on more robust techniques for identifying sexism in social networks. Following this line, this year the challenge will focus on TikTok videos, so that the dataset covers the three most important multimedia formats used to spread sexism: text, images and videos. Since platforms’ algorithms often amplify content that perpetuates gender stereotypes and internalized misogyny, it is essential to develop automated multimodal tools capable of detecting sexism in text, images and videos, in order to raise alarms or automatically remove such content from social networks. This lab will contribute to the creation of applications that identify sexist content in social media across all three formats.
As in the 2023 and 2024 editions, this edition will also embrace the Learning With Disagreement (LeWiDi) paradigm for both the development of the dataset and the evaluation of the systems. The LeWiDi paradigm does not rely on a single “correct” label for each example. Instead, the model is trained to handle and learn from conflicting or diverse annotations. This enables the system to consider the various annotators’ perspectives, biases and interpretations, resulting in a fairer learning process.
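To make the LeWiDi setting more concrete, the following is a minimal, illustrative sketch (not part of the official task material) of training a classifier against soft labels derived from several annotators’ votes instead of a single gold label; the vote counts, logits and loss choice are placeholder assumptions, written in Python/PyTorch.

# Illustrative sketch: learning from annotator disagreement via soft labels.
# The numbers below are invented placeholders, not EXIST data.
import torch
import torch.nn.functional as F

def soft_cross_entropy(logits, target_probs):
    # Cross-entropy against a probability distribution over labels,
    # e.g. 4 of 6 annotators said YES -> target [0.33, 0.67].
    log_probs = F.log_softmax(logits, dim=-1)
    return -(target_probs * log_probs).sum(dim=-1).mean()

logits = torch.tensor([[0.2, 1.1], [2.0, -0.5]])    # classifier outputs for 2 items (NO, YES)
votes = torch.tensor([[2., 4.], [5., 1.]])          # per-item annotator votes (NO, YES)
targets = votes / votes.sum(dim=-1, keepdim=True)   # normalize votes to soft labels
print(soft_cross_entropy(logits, targets).item())

In such a setting, systems can report a probability per class rather than only a hard label, which aligns with evaluation against the annotators’ label distribution.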
Participants will be asked to classify tweets, memes and videos (in English and Spanish) according to the following tasks:
TASK 1: Sexism detection in Tweets:
SUBTASK 1.1 - Sexism Identification in Tweets: The first subtask is a binary classification. The systems have to decide whether or not a given tweet contains sexist expressions or behaviours (i.e., it is sexist itself, describes a sexist situation or criticizes a sexist behaviour), and classify it according to two categories: YES and NO.
SUBTASK 1.2 - Source Intention in Tweets: For the tweets that have been classified as sexist, the second subtask aims to classify each tweet according to the intention of the person who wrote it. We propose a ternary classification: (i) DIRECT sexist message, (ii) REPORTED sexist message and (iii) JUDGEMENTAL message.
SUBTASK 1.3 - Sexism Categorization in Tweets: Once a message has been classified as sexist, the third subtask aims to categorize it according to the type of sexism it contains (following a categorization proposed by experts that takes into account the different facets of women that are undermined). In particular, each sexist tweet must be assigned one or more of the following categories: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE.
TASK 2: Sexism detection in Memes:
SUBTASK 2.1 - Sexism Identification in Memes: This is a binary classification subtask consisting of determining whether a meme is sexist (i.e., it is sexist itself, describes a sexist situation or criticizes a sexist behaviour), and classifying it into two categories: YES and NO.
SUBTASK 2.2 - Source Intention in Memes: As in subtask 1.2, this subtask aims to categorize each meme according to the intention of its author. Due to the characteristics of memes, the REPORTED label is virtually absent, so in this subtask systems should only classify memes into the DIRECT or JUDGEMENTAL categories.
SUBTASK 2.3 - Sexism Categorization in Memes: This subtask aims to classify sexist memes according to the categorization provided for subtask 1.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE.
TASK 3: Sexism detection in Videos:
SUBTASK 3.1 - Sexism Identification in Videos: This is a binary classification task as in Subtasks 1.1 and 2.1.
SUBTASK 3.2 - Source Intention in Videos: This subtask replicates subtask 2.2 for memes, but takes videos as its source.
SUBTASK 3.3 - Sexism Categorization in Videos: This subtask aims to classify sexist videos according to the categorization provided for subtask 1.3: (i) IDEOLOGICAL AND INEQUALITY, (ii) STEREOTYPING AND DOMINANCE, (iii) OBJECTIFICATION, (iv) SEXUAL VIOLENCE and (v) MISOGYNY AND NON-SEXUAL VIOLENCE.
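As an illustration of the categorization subtasks (1.3, 2.3 and 3.3), where each sexist item may receive one or more of the five categories, below is a minimal multi-label baseline sketch in Python/scikit-learn; the texts, labels and model choice are placeholder assumptions, not EXIST data or an official baseline.

# Illustrative multi-label sketch for the sexism categorization subtasks.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.multiclass import OneVsRestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MultiLabelBinarizer

CATEGORIES = [
    "IDEOLOGICAL AND INEQUALITY",
    "STEREOTYPING AND DOMINANCE",
    "OBJECTIFICATION",
    "SEXUAL VIOLENCE",
    "MISOGYNY AND NON-SEXUAL VIOLENCE",
]

# Placeholder items already judged sexist, with their (possibly multiple) categories.
texts = ["toy example of an objectifying comment",
         "toy example of a stereotyping remark"]
labels = [["OBJECTIFICATION", "SEXUAL VIOLENCE", "MISOGYNY AND NON-SEXUAL VIOLENCE"],
          ["STEREOTYPING AND DOMINANCE", "IDEOLOGICAL AND INEQUALITY"]]

# Encode label sets as a binary indicator matrix (one column per category).
mlb = MultiLabelBinarizer(classes=CATEGORIES)
y = mlb.fit_transform(labels)

# One binary classifier per category over TF-IDF features.
clf = make_pipeline(TfidfVectorizer(),
                    OneVsRestClassifier(LogisticRegression(max_iter=1000)))
clf.fit(texts, y)
print(mlb.inverse_transform(clf.predict(["another toy example"])))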
Although we recommend participating in all subtasks and in both languages, participants are allowed to take part in just one of them (e.g., Task 1) and in one language (e.g., English).
During the training phase, the task organizers will provide the participants with the manually-annotated EXIST 2025 dataset. For the evaluation of the systems, the unlabeled test data will be released.
We encourage participation from both academic institutions and industrial organizations. We invite participants to register for the lab at CLEF 2025 Labs Registration site (https://clef2025-labs-registration.dei.unipd.it/). You will receive information about how to join the Discord Group for the EXIST 2025 shared task.
Important Dates:
* 18 November 2024: Registration opens.
* 3 February 2025: Training and development sets available.
* 7 April 2025: Test set available.
* 25 April 2025: Registration closes.
* 18 May 2025: Runs submission due to organizers.
* 8 June 2025: Results notification to participants.
* 15 June 2025: Submission of Working Notes by participants.
* 29 June 2025: Notification of acceptance (peer reviews).
* 7 July 2025: Camera-ready participant papers due to organizers.
* 9-12 September 2025: EXIST 2025 at CLEF Conference.
** Note: All deadlines are 11:59PM UTC-12:00 ("anywhere on Earth") **
Organizers:
Laura Plaza, Universidad Nacional de Educación a Distancia (UNED)
Jorge Carrillo-de-Albornoz, Universidad Nacional de Educación a Distancia (UNED)
Enrique Amigó, Universidad Nacional de Educación a Distancia (UNED)
Julio Gonzalo, Universidad Nacional de Educación a Distancia (UNED)
Roser Morante, Universidad Nacional de Educación a Distancia (UNED)
Paolo Rosso, Universitat Politècnica de València (UPV)
Iván Arcos, Universitat Politècnica de València (UPV)
Damiano Spina, Royal Melbourne Institute of Technology (RMIT)
Contact: You can reach the organizers by writing to: jcalbornoz@lsi.uned.es
Website: http://nlp.uned.es/exist2025/