[Apologies for multiple postings]
ImageCLEF 2025
Multimedia Retrieval in CLEF
http://www.imageclef.org/2025/
We warmly invite you to take part in this year’s ImageCLEF evaluation campaign! With seven exciting and challenging tasks—each featuring multiple sub-tasks and unique research opportunities—there’s something for everyone. You and your team can begin development immediately, as all the training data is already available. Don’t miss the chance to showcase your skills and secure a spot on our leaderboard!
*** CALL FOR PARTICIPATION ***
ImageCLEF 2025 is an evaluation campaign conducted as part of the CLEF (Conference and Labs of the Evaluation Forum) labs. It features multiple research tasks, inviting teams from around the world to participate.
The campaign results are published in the working notes proceedings of CEUR Workshop Proceedings (CEUR-WS.org) and presented at the CLEF conference. Additionally, selected contributions from participants may be invited for publication in the following year’s Springer Lecture Notes in Computer Science (LNCS), alongside the annual lab overviews.
ImageCLEF’s target communities include, but are not limited to, researchers in information retrieval (text, vision, audio, multimedia, social media, sensor data, etc.), machine learning, deep learning, data mining, natural language processing, image and video processing, and computer vision. The campaign places particular emphasis on challenges related to multi-modality, multi-linguality, and interactive search.
*** 2025 TASKS ***
ImageCLEFmedical Automatic Image Captioning
ImageCLEFmedical Synthetic Medical Images Created via GANs
ImageCLEFmedical Visual Question Answering
ImageCLEFmedical Multimodal And Generative TelemedICine (MAGIC)
Image Retrieval/Generation for Arguments
ImageCLEFtoPicto
ImageCLEF Multimodal Reasoning
#ImageCLEFmedical Automatic Image Captioning (9th edition) - Training data released!
https://www.imageclef.org/2025/medical/caption
Interpreting and summarizing the insights gained from medical images such as radiology output is a time-consuming task that involves highly trained experts and often represents a bottleneck in clinical diagnosis pipelines. The Automatic Image Captioning task is split into two subtasks: the Concept Detection task, which involves identifying the presence and location of relevant concepts in a large corpus of medical images, and the Caption Prediction task, in which participating systems compose coherent captions for entire images.
Organizers: Hendrik Damm, Johannes Rückert, Christoph M. Friedrich, Louise Bloch, Raphael Brüngel, Ahmad Idrissi-Yaghir, Benjamin Bracke (University of Applied Sciences and Arts Dortmund, Germany), Asma Ben Abacha (Microsoft, USA), Alba García Seco de Herrera (University of Essex, UK), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland), Henning Schäfer, Tabea M. G. Pakull (Institute for Transfusion Medicine, University Hospital Essen, Germany), Cynthia S. Schmidt, Obioma Pelka (Institute for Artificial Intelligence in Medicine, Germany)
#ImageCLEFmedical Synthetic Medical Images Created via GANs (3rd edition) - Train & Test data released!
https://www.imageclef.org/2025/medical/gan
The task aims to further investigate the hypothesis that generative models produce synthetic medical images that retain "fingerprints" of the real images used during their training. These fingerprints raise important security and privacy concerns, particularly when personal medical image data is used to create artificial images for real-life applications. In the first subtask, participants will analyze synthetic biomedical images to determine whether specific real images were used in the training process of the generative models. In the second subtask, participants will link each synthetic biomedical image to the specific subset of real data used during its generation, i.e., identify the particular dataset of real images that contributed to the training of the generative model responsible for creating each synthetic image.
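For illustration only, the sketch below shows one naive way a participant might approach the first subtask: embed the candidate real images and the synthetic images (the feature extractor is left abstract here and stood in by random vectors), score each real candidate by its maximum cosine similarity to the synthetic set, and threshold that score to predict training-set membership. The function, threshold, and dummy data are hypothetical and are not part of the official task setup.

```python
import numpy as np

def membership_scores(real_feats: np.ndarray, synth_feats: np.ndarray) -> np.ndarray:
    """For each candidate real image, return its maximum cosine similarity
    to any synthetic image (a crude 'fingerprint' signal)."""
    real = real_feats / np.linalg.norm(real_feats, axis=1, keepdims=True)
    synth = synth_feats / np.linalg.norm(synth_feats, axis=1, keepdims=True)
    sims = real @ synth.T              # (n_real, n_synth) cosine similarities
    return sims.max(axis=1)            # best match per real candidate

# Dummy 512-d feature vectors standing in for embeddings of the task images.
rng = np.random.default_rng(0)
real_feats = rng.normal(size=(100, 512))
synth_feats = rng.normal(size=(200, 512))

scores = membership_scores(real_feats, synth_feats)
threshold = 0.9                        # would be tuned on the development data
used_in_training = scores >= threshold
print(f"{used_in_training.sum()} of {len(scores)} candidates flagged as 'used'")
```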
Organizers: Alexandra Andrei, Liviu-Daniel Ștefan, Mihai Gabriel Constantin, Mihai Dogariu, Bogdan Ionescu (National University of Science and Technology POLITEHNICA Bucharest, Romania), Ahmedkhan Radzhabov, Yuri Prokopchuk (National Academy of Science of Belarus, Minsk, Belarus), Vassili Kovalev (Belarusian Academy of Sciences, Minsk, Belarus), Henning Müller (University of Applied Sciences Western Switzerland, Sierre, Switzerland)
#ImageCLEFmedical Visual Question Answering (3rd edition) - Train & Test data released!
https://www.imageclef.org/2025/medical/vqa
This year, the challenge looks at the integration of Visual Question Answering (VQA) with synthetic gastrointestinal (GI) data, aiming to enhance diagnostic accuracy and learning algorithms. The challenge includes developing algorithms that can interpret and answer questions based on synthetic GI images, creating advanced synthetic images that mimic accurate diagnostic visuals in detail and variability, and evaluating the effectiveness of VQA techniques with both synthetic and real GI data.
The first subtask asks participants to build algorithms that can accurately interpret and respond to questions about gastrointestinal (GI) images, understanding the context and details within the images and providing precise answers that would assist in medical diagnostics. The second subtask focuses on generating synthetic GI images that are detailed and variable enough to closely resemble real medical images.
Organizers: Steven A. Hicks, Sushant Gautam, Michael A. Riegler, Vajira Thambawita, Pål Halvorsen (SimulaMet, Norway)
#ImageCLEFmedical Multimodal And Generative TelemedICine (MEDIQA-MAGIC) (3rd edition) - Train data is released!
https://www.imageclef.org/2025/medical/mediqa
The task extends the previous year’s dataset and challenge on multimodal dermatology response generation. Participants will be given a clinical narrative context along with accompanying images. The task is divided into two sub-parts: (i) segmentation of dermatological problem regions, and (ii) answering closed-ended questions: given a dermatological query, its accompanying images, and a closed-ended question with answer choices, systems must select the correct answer to each question.
Organizers: Asma Ben Abacha, Wen-wai Yim, Noel Codella (Microsoft), Roberto Andres Novoa (Stanford University), Josep Malvehy (Hospital Clinic of Barcelona)
#Image Retrieval/Generation for Arguments (4th edition) - In collaboration with Touché!
https://www.imageclef.org/2025/argument-images
Given a set of arguments, the task is to return for each argument several images that help convey the argument. A suitable image could depict the argument or show a generalization or specialization. Participants can optionally add a short caption that explains the meaning of the image. Images can be either retrieved from the focused crawl or generated using an image generator.
Organizers: Maximilian Heinrich, Johannes Kiesel, Benno Stein (Bauhaus-Universität Weimar), Moritz Wolter (Leipzig University), Martin Potthast (University of Kassel, hessian.AI, scads.AI)
#ImageCLEFtoPicto (3rd edition) - Train & Test data released!
https://www.imageclef.org/2025/topicto
The goal of ToPicto is to bring together linguists, computer scientists, and translators to develop new methods for translating either speech or text into a corresponding sequence of pictograms. The task addresses the relationship between text and related pictograms and is composed of two subtasks: the Text-to-Picto task, which focuses on the automatic generation of a corresponding sequence of pictogram terms, and the Speech-to-Picto task, which focuses on directly translating speech into pictogram terms.
Organizers: Diandra Fabre, Cécile Macaire, Benjamin Lecouteux, Didier Schwab (Université Grenoble Alpes, LIG, France)
#ImageCLEF Multimodal Reasoning (new) - Train data released!
https://www.imageclef.org/2025/multimodalreasoning
MultimodalReason is a new task focusing on Multilingual Visual Question Answering (VQA). The task is formulated as follows: given an image of a question with 3-5 possible answers, participants must identify the single correct answer. The task is split into multiple subtasks, each handling a different language (English, Bulgarian, Arabic, Serbian, Italian, Hungarian, Croatian, Urdu, Kazakh, Spanish, with a few more on the way). The task's goal is to assess modern LLMs' reasoning capabilities on complex inputs, presented in different languages, across various subjects.
Organizers: Dimitar Dimitrov, Ivan Koychev (Sofia University "St. Kliment Ohridski", Bulgaria), Rocktim Jyoti Das, Zhuohan Xie, Preslav Nakov (Mohamed bin Zayed University of Artificial Intelligence (MBZUAI), Abu Dhabi, UAE)
*** IMPORTANT DATES ***
(may vary depending on the task)
- Run submission deadline: May 10, 2025
- Working notes submission: May 30, 2025
- CLEF 2025 conference: September 9-12, 2025, Madrid, Spain
*** REGISTRATION ***
Follow the instructions here https://www.imageclef.org/2025
*** OVERALL COORDINATION ***
Bogdan Ionescu, Politehnica University of Bucharest, Romania
Henning Müller, HES-SO, Sierre, Switzerland
Dan-Cristian Stanciu, Politehnica University of Bucharest, Romania
On behalf of the organizers,
Dan-Cristian Stanciu
https://www.aimultimedialab.ro/
A PhD position at the University of Groningen, the Netherlands:
- Fully-funded 4-year position
- Research focus: using computational models (including small probabilistic
models and neural network language models) to study the acquisition of
modal verbs
- Programming skills and background in linguistics / language acquisition
required
- Supervisors: Annemarie van Dooren, Yevgen Matusevych, Arianna Bisazza
- Application deadline: 24 April 2025
- More details and application:
https://www.rug.nl/about-ug/work-with-us/job-opportunities/?details=00347-0…
--
Yevgen Matusevych
Assistant Professor
Computational Linguistics, University of Groningen
https://yevgen.web.rug.nl
We invite paper submissions to the 9th Workshop on Online Abuse and Harms (WOAH), which will take place on August 1 at ACL 2025.
Website: https://www.workshopononlineabuse.com/cfp.html
Important Dates
* Submission due: April 18, 2025
* ARR reviewed submission due: May 20, 2025
* Notification of acceptance: May 30, 2025
* Camera-ready papers due: June 13, 2025
* Workshop: August 1st, 2025
Overview
Digital technologies have brought significant benefits to society, transforming how people connect, communicate, and interact. However, these same technologies have also enabled the widespread dissemination and amplification of abusive and harmful content, such as hate speech, harassment, and misinformation. Given the sheer volume of content shared online, addressing abuse and harm at scale requires the use of computational tools. Yet, detecting and moderating online abuse remains a complex task, fraught with technical, social, legal, and ethical challenges.
The 9th Workshop on Online Abuse and Harms (WOAH) invites paper submissions from a diverse range of fields, including but not limited to natural language processing, machine learning, computational social science, law, political science, psychology, sociology, and cultural studies. We explicitly encourage interdisciplinary research, technical and non-technical contributions, and submissions that focus on under-resourced languages. Non-archival papers and civil society reports are also welcome.
Topics covered by WOAH include, but are not limited to:
* New models or methods for detecting abusive and harmful online content, including misinformation;
* Biases and limitations in existing detection models or datasets for abusive and harmful content, especially those in commercial use;
* Development of new datasets and taxonomies for online abuse and harms;
* Novel evaluation metrics and procedures for detecting harmful content;
* Analyses of the dynamics of online abuse, its propagation, and its impact on different communities;
* Social, legal, and ethical considerations in detecting, monitoring, and moderating online abuse.
Special Theme: Harms Beyond Hate Speech
In its 9th edition, WOAH highlights the theme Harms Beyond Hate Speech. We aim to expand the conversation beyond conventional definitions of harmful content by exploring the nuanced ways online harms manifest—such as technologically mediated inauthentic behavior, the power of technologies to reshape perceptions and opinions, and their potential to incite discrimination, hostility, violence, or even genocide. Additionally, we emphasize the diverse targets affected by such harms and the unique considerations computational interventions demand.
To facilitate this exploration, we invite NLP researchers, social scientists, cultural scholars, and practitioners to engage with key issues, including child sexual abuse material, radicalization, misinformation, platform policies, security, and the politics of computational approaches. By fostering interdisciplinary collaboration, we aim to deepen understanding of these complex phenomena and advance effective, ethical solutions.
Submission
Submission is electronic, using the Softconf START conference management system.
Submission link: https://softconf.com/acl2025/woah2025/
The workshop will accept three types of papers:
1) Academic Papers (long and short): Long papers of up to 8 pages, excluding references, and short papers of up to 4 pages, excluding references. Unlimited pages for references and appendices. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted.
2) Non-Archival Submissions: Up to 2 pages, excluding references, to summarise and showcase in-progress work and work published elsewhere.
3) Civil Society Reports: Non-archival submissions, with a minimum of 2 pages and no upper limit. Can include work published elsewhere.
All submissions must use the official ACL style files <https://github.com/acl-org/acl-style-files>. Submissions that do not conform to the required styles, including paper size, margin width, and font size restrictions, will be rejected without review. All submissions should adhere to the workshop policies: https://www.workshopononlineabuse.com/policies.html.
WOAH Community
We are excited to share the WOAH community Slack channel — a workspace for researchers interested in or working on understanding and addressing online abuse and harms!
Join us here: https://join.slack.com/t/hatespeechdet-47d7560/shared_invite/zt-2a8d96j4z-g…
Contact Info
Please send any questions about the workshop to organizers(a)workshopononlineabuse.com
Organisers
Agostina Calabrese, University of Edinburgh
Christine de Kock, University of Melbourne
Debora Nozza, Bocconi University
Flor Miriam Plaza-del-Arco, Bocconi University
Zeerak Talat, University of Edinburgh
Francielle Vargas, University of São Paulo
*****
Second Workshop on Automated Evaluation of Learning and Assessment Content
AIED 2025 workshop | Palermo (Italy) & Hybrid | 22-26 July 2025
https://sites.google.com/cam.ac.uk/eval-lac-2025
*****
We are happy to announce the second edition of the Workshop on Automated
Evaluation of Learning and Assessment Content will be held in Palermo
(Italy) & online during the AIED 2025 conference.
About the workshop
Evaluation of learning and assessment content has always been a crucial task in the educational domain, but traditional approaches based on human feedback are not always usable in modern educational settings. The advent of machine learning models, in particular Large Language Models (LLMs), has made it possible to quickly and automatically generate large quantities of content, making human evaluation unfeasible. Similarly, Massive Open Online Courses (MOOCs) enrol so many students that manually providing feedback to all of them is unsustainable. Thus, the need for accurate and automated techniques for evaluating educational content (e.g., questions, hints, and feedback) has become pressing. Building on the success of the First Workshop on the Automatic Evaluation of Learning and Assessment Content, held at AIED 2024, this workshop aims to attract professionals from both academia and industry and to offer an opportunity to discuss common challenges, share best practices, and explore promising new research directions.
Topics of interest include but are not limited to:
- Question evaluation (e.g., in terms of the pedagogical criteria listed above: alignment to the learning objectives, factual accuracy, language level, cognitive validity, etc.).
- Estimation of question statistics (e.g., difficulty, discrimination, response time, etc.).
- Evaluation of distractors in Multiple Choice Questions.
- Evaluation of reading passages in reading comprehension questions.
- Evaluation of lectures and course material.
- Evaluation of learning paths (e.g., in terms of prerequisites and topics taught before a specific exam).
- Evaluation of educational recommendation systems (e.g., personalised curricula).
- Evaluation of hints and scaffolding questions, as well as their adaptation to different students.
- Evaluation of automatically generated feedback provided to students.
- Evaluation of techniques for automated scoring.
- Evaluation of pedagogical alignment of LLMs.
- Evaluation of the ethical implications of using open-weight and commercial LLMs in education.
- Evaluation of bias in educational content and LLM outputs.
Human-in-the-loop approaches are welcome, provided that there is also an
automated component in the evaluation and there is a focus on the
scalability of the proposed approach. Papers on generation are also very
welcome, as long as there is an extensive focus on the evaluation step.
Important dates
Submission deadline: May 25, 2025
Notification of acceptance: June 15, 2025
Camera ready: June 22, 2025
Workshop: July 22 or July 26, 2025
Submission guidelines
There are two tracks, with different submission deadlines.
Full and short papers: We are accepting short papers (5 pages, excluding references) and long papers (10 pages, excluding references), formatted according to the workshop style (using either the LaTeX template <https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-w…> or the DOCX template <https://ceur-ws.org/Vol-XXX/CEUR-Template-1col.docx>).
Extended abstracts: We also accept extended abstracts (max 2 pages), to
showcase work in progress and preliminary results. Papers should be
formatted according to the workshop style (using either the LaTeX template
or the DOCX template).
Submissions should contain mostly novel work, but there can be some overlap
between the submission and work submitted elsewhere (e.g., summaries, focus
on the evaluation phase of a broader work). Each of the submissions will be
reviewed by the members of the Program Committee, and the proceedings
volume will be submitted for publication to CEUR Workshop Proceedings. Due
to CEUR-WS.org policies, only full and short papers will be submitted for
publication, not the extended abstracts.
Organisers
Luca Benedetto (1), Andrew Caines (1), George Dueñas (2), Diana Galvan-Sosa
(1), Gabrielle Gaudeau (1), Anastassia Loukina (3), Shiva Taslimipoor (1),
Torsten Zesch (4)
(1) ALTA Institute, Dept. of Computer Science and Technology, University of
Cambridge
(2) National Pedagogical University, Colombia
(3) Grammarly, Inc.
(4) FernUniversität in Hagen
Call for Papers: The 19th Linguistic Annotation Workshop (LAW-XIX)
We invite submissions for LAW-XIX, co-located with ACL 2025 in Vienna,
Austria, in July/Aug 2025.
The LAW-XIX will provide a forum for presentation and discussion of
innovative research on all aspects of linguistic annotation, including
creation/evaluation of annotation schemes, methods for automatic and
manual annotation, use and evaluation of annotation software and
frameworks, representation of linguistic data and annotations,
semi-supervised “human in the loop” methods of annotation,
crowd-sourcing approaches, and more.
Special Theme
The special theme of LAW-XIX is "*Subjectivity and variation in
linguistic annotations*". In addition to LAW's general topics, we
specifically invite submissions on:
* Subjectivity and human label variation in linguistic annotations
* Learning from annotation disagreements
* Detecting annotation noise in human label variation
* Accounting for subjectivity in label aggregation
* Ways to aggregate multiple annotators' labels beyond majority vote
* Any other topics related to the special theme.
Regarding subjectivity, we are particularly interested in work
addressing the *annotation of multidimensional constructs from the
political and social sciences* and encourage submissions on the
following topics:
* Theory-driven operationalization of complex political or
socio-psychological constructs, such as populism, moral values, or stereotypes
* Creation of linguistically annotated datasets that capture such constructs
* Relation between theories and textual annotations
* Challenges for the measurement of multidimensional constructs from text
* Challenges for validating (a) theories, (b) annotations
* Implications and risks for manual annotation and automatic
prediction of socio-psychological constructs from text.
Important Dates
All submission deadlines are 11:59 p.m. UTC-12:00 “anywhere on Earth.”
Workshop papers due (ARR Commitment) Mar 25, 2025
Workshop papers due (Direct Submission) April 04, 2025
Notification of acceptance May 16, 2025
Camera-ready papers due May 30, 2025
Workshop date July/Aug, 2025
Submissions
Please submit your paper here: https://softconf.com/acl2025/law2025
For more information on the workshop and submission formats, please
refer to the workshop homepage:
https://sigann.github.io/LAW-XIX-2025
If you have any questions, please feel free to contact the program
co-chairs at law2025workshop(a)gmail.com.
Workshop Organizers
Siyao (Logan) Peng (Program Co-Chair)
Ines Rehbein (Program Co-Chair)
Amir Zeldes (ACL SIGANN President)
--
Ines Rehbein
Data and Web Science Group
University of Mannheim, Germany
Dear Colleagues,
In response to multiple requests, we are pleased to announce an extension for abstract submissions to the Learner Corpus Research Graduate Conference 2025, organized under the aegis of the Learner Corpus Association. The revised submission deadline is April 25, 2025.
LCRGrad25 will be hosted by the Chair of English and Digital Linguistics, Chemnitz University of Technology, and will take place virtually on 22-23-24 October 2025.
The main aim of the conference, as in the previous editions, is to offer a space for MA and PhD students as well as researchers who have earned their doctoral degree in the last two years prior to the conference to discuss their (ongoing) projects. Researchers who already hold a doctoral degree are welcome and strongly encouraged to attend as panelists, mentors or non-presenting delegates, helping to ensure a fruitful academic dialogue and to foster the careers of graduate students and recent graduates within the field of Learner Corpus Research.
The central theme of this year’s conference is “The Pattern Beneath”. This theme celebrates the unique role of learner corpus research in uncovering the underlying structures and patterns of learner language. It emphasizes the field’s potential to provide insights into second language acquisition, linguistic development, and the intricacies of language use in educational contexts.
We invite submissions across a range of formats to foster diverse discussions and engagement:
• Papers (20-minute presentation): Original, completed research with substantial findings.
• Work-in-Progress Paper (15-minute presentation): Presentation of ongoing research for feedback, collaborative discussions and ideas for improvement.
• Workshops or Software Demonstrations (45-60 minutes): Hands-on, interactive sessions or demonstrations of corpus tools or technologies.
• Roundtable Discussions (45 minutes): Topic proposals for collaborative and in-depth discussions among participants are welcome.
• Panel Proposals (90 minutes): A panel with 3–4 speakers ideally made up of supervisors/senior researchers and graduate students on one of the sub-themes of LCRGrad25.
• Posters: Completed research or works-in-progress in a visually engaging format. Digital posters are to be submitted with a short video (max. 3 minutes) prior to the start of the conference. The videos will be made available throughout the conference for asynchronous comments and questions.
The call for papers and abstract submission guidelines can be found on the conference website: https://lcrgrad2025.tu-chemnitz.de
KEYNOTE SPEAKERS
Prof. Dr. Randi Reppen: Professor Emerita of Applied Linguistics and TESL at Northern Arizona University
Prof. Dr. Michaela Mahlberg: Professor of Digital Humanities, Alexander-von-Humboldt Professor at Friedrich-Alexander-Universität Erlangen-Nürnberg
Dr. Dana Gablasova: Senior Lecturer in Linguistics and English Language at Lancaster University
IMPORTANT DATES
25.04.2025 Abstract submission deadline
10.06.2025 Notification of acceptance
15.06.2025 Presenter registration deadline
Additional highlights of LCRGrad25:
- Registration is free of charge for all participants.
- Sessions will be hosted on Zoom, ensuring accessibility for participants world-wide.
- To accommodate participants from various time zones, the conference will adopt a flexible and inclusive schedule.
- Ask-Me-Anything Panels and Research Consultation Clinics with leading experts.
- Interactive sessions connecting Early Career Researchers and Senior Academics.
- Awards and recognitions for best paper and best poster.
- Soft skills workshops: Hands-on sessions to enhance skills like publishing, career planning, data visualization, and networking.
We look forward to your participation in this exciting virtual gathering.
Best,
Cansu Akan
English and Digital Linguistics
Chemnitz University of Technology (TU Chemnitz)
Call for Nominations for the 2025 Test-of-Time (ToT) Paper Awards
The ACL is pleased to open the call for nominations for the 2025 Test-of-Time (ToT) paper awards.
In 2025, the ToT paper awards will honor up to four influential papers from ACL events from 25 and 10 years ago, namely, up to two papers from 2000 ACL events and up to two papers from 2015 ACL events.
ACL ToT papers should describe research that has had a long-lasting influence on the field. That is, they should have had a significant impact on a subarea of CL, across subareas of CL, or outside of the CL research community. They may have proposed new research directions and new technologies, or released results and resources that have greatly benefited the community.
All nominations will be evaluated by the Test-of-Time paper award nomination committee to decide the winners. The winners will be honored at ACL 2025.
Please enter your nomination via the following form:
https://forms.gle/5CspmZ8zRhQQKnma9
The deadline for nominations is April 8th.
- Multiple nominations by the same nominator are allowed
- Self-nominations are allowed
- ACL workshops from the appropriate years are included in the eligible
venues.
For any further information, please contact us.
Best wishes,
Yuki Arase (ACL conference officer)
Yue Zhang (ToT paper award nomination committee co-chair)
Joyce Chai (ToT paper award nomination committee co-chair)
Michael Strube (ToT paper award nomination committee co-chair)
Read more:
https://www.aclweb.org/portal/content/call-nominations-2025-test-time-tot-p…
PhD positions at the Institute for Logic, Language and Computation
(ILLC) at the University of Amsterdam, Netherlands
Salary: EUR 2.901 - EUR 3.707 gross per month
Closing date: 21 April 2025
We have two open PhD positions in natural language processing (NLP),
starting in September 2025 or as soon as possible thereafter. The focus of
the project is on the development of methodologies for multilingual NLP and
alignment of large language models. We welcome applications from candidates
with an NLP / AI background and an interest in language and society.
For further information and to apply:
https://werkenbij.uva.nl/en/vacancies/two-phd-positions-in-natural-language…
For any questions, please send an email to e.shutova(a)uva.nl
*Registration closes today!*
****We apologize for multiple postings of this e-mail****
MentalRiskES 2025 is the third edition of a task on early risk identification of mental disorders in Spanish comments from social media sources. The first and second editions took place in the IberLEF evaluation forum as part of SEPLN 2023 and SEPLN 2024. The task is framed as an online problem, that is, participants have to detect a potential risk as early as possible in a continuous stream of data. Performance therefore depends not only on the accuracy of the systems but also on how quickly the problem is detected. These dynamics are reflected in the design of the tasks and in the metrics used to evaluate participants. For this third edition, we propose two novel subtasks: the first concerns the detection of gambling disorder, and the second consists of detecting the type of addiction.
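For illustration only, here is a minimal Python sketch of a latency-weighted decision score of the kind used in early-detection settings: a correct positive alert is discounted the more messages the system needed before raising it. The function, its parameters, and the logistic penalty are hypothetical and are not the official MentalRiskES metrics, which are defined on the task website.

```python
import math

def latency_weighted_score(decision: int, label: int, n_messages_seen: int,
                           halfway: int = 10) -> float:
    """Toy early-detection score: 1.0 for an immediate correct alert,
    decaying towards 0 the later the alert is raised; 0 for wrong decisions.
    'halfway' is the number of messages at which the penalty reaches 0.5."""
    if decision != label:
        return 0.0
    if label == 0:                     # correct negative: no latency penalty
        return 1.0
    # Logistic latency penalty, similar in spirit to latency-weighted F1/ERDE.
    return 1.0 / (1.0 + math.exp(n_messages_seen - halfway))

# A system that flags risk after 3 messages scores higher than one that
# needs 25 messages for the same (correct) positive decision.
print(latency_weighted_score(1, 1, 3))    # ~0.999
print(latency_weighted_score(1, 1, 25))   # ~0.0000003
```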
We would like to invite you to participate in the following tasks:
1. Risk Detection of Gambling Disorders (Binary classification)
2. Type of Addiction Detection (Multiclass classification)
Find out more at https://sites.google.com/view/mentalriskes2025.
MentalRiskES 2025 is part of the IberLEF Workshop and will be held in
conjunction with the SEPLN 2025 conference in Zaragoza (Spain).
-------------------------------------------------------------------------------
Important Dates
-------------------------------------------------------------------------------
Feb 14th Registration open
Feb 25th Release of trial corpora (trial server available)
Mar 19th Release of training corpora
*Mar 31st Registration closed*
Apr 7th Release of test corpora and start of the evaluation
campaign (test server available and trial submissions closed)
Apr 14th End of evaluation campaign (deadline for submission
of runs)
Apr 18th Publication of official results and release of test
gold labels
May 12th Deadline for paper submission
May 30th Acceptance notification
Jun 16th Camera-ready submission deadline
Sep TBD Publication of proceedings
Note: All deadlines are 11:59PM UTC-12:00
Please reach out to the organizers at MentalRiskEs@IberLEF2025.
The MentalRiskES 2025 organizing committee.
*Dear Colleagues,*
We are pleased to announce the *Multi-Domain Detection of AI-Generated Text (M-DAIGT)* shared task, hosted at *RANLP 2025*. This task brings together researchers to explore methods for detecting AI-generated text across multiple domains, with a focus on news articles and academic writing.
*We invite participation in two subtasks:*
1. *News Article Detection (NAD):* Classify news articles and snippets
as human-written or AI-generated.
2. *Academic Writing Detection (AWD):* Identify AI-generated content
within student coursework and academic research across various disciplines.
Participants will receive balanced datasets containing human-written and AI-generated texts from multiple language models. Evaluation will be conducted on the CodaLab platform.
*Evaluation Metrics:*
   - *Primary:* Accuracy, Precision, Recall, F1-score (see the sketch after this list)
- *Secondary:* Robustness across text lengths, domains, and generation
sources
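For reference, a minimal sketch of how the primary metrics could be computed from binary predictions, assuming scikit-learn is available; the official scoring runs on CodaLab and may differ in details such as averaging.

```python
from sklearn.metrics import accuracy_score, precision_recall_fscore_support

# Toy gold labels and system predictions: 1 = AI-generated, 0 = human-written.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 0, 0, 1, 1, 0]

accuracy = accuracy_score(y_true, y_pred)
precision, recall, f1, _ = precision_recall_fscore_support(
    y_true, y_pred, average="binary", pos_label=1
)
print(f"Accuracy={accuracy:.2f} Precision={precision:.2f} "
      f"Recall={recall:.2f} F1={f1:.2f}")
```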
*Important Dates:*
- Training Data Release: *March 31, 2025*
- Evaluation Data Release: *April 30, 2025*
- Evaluation Period: *May 2–15, 2025*
- Paper Submission Deadline: *May 25, 2025*
- Workshop Dates: *September 11–12, 2025*
*More Information and Registration:*
- *Website:* https://ezzini.github.io/M-DAIGT/
- *GitHub Repository:* https://github.com/ezzini/M-DAIGT
- *Registration: *Click here to register for solo or team participation
<https://docs.google.com/forms/d/e/1FAIpQLSextZDY7qjGRJSLCBNISPcBNQZwusRWKvy…>
- *Join us on Slack: *Slack Workspace
<https://mdaigtsharedt-xye5995.slack.com/?redir=%2Fssb%2Fredirect>
We look forward to your participation and encourage you to share this with
colleagues who may be interested. For any queries, feel free to reach out
to the organizers.
*Yours sincerely,*
*The M-DAIGT Organizers*