COLLOQUIUM "AUTHORSHIP AND INDIVIDUAL LANGUAGE USE" - Call for Abstracts
On November 14-15, 2025, a linguistics colloquium titled "Autorschaft und individueller Sprachgebrauch" (Authorship and Individual Language Use) will take place at Ruhr-Universität Bochum.
The event is organized by the German Studies department at RUB in cooperation with the Authorship Identification unit (Autorenerkennung) of the Federal Criminal Police Office (Bundeskriminalamt, BKA), Wiesbaden.
Depending on context and motivation (linguistic, philological, forensic, technical), authorship determination serves different goals. In the forensic domain, linguistic analysis is meant to help law enforcement identify the individual behind a text, since the text entails legal consequences for its author. Authorship attribution is therefore a central application area of forensic linguistics.
Whenever the question arises of who might have written a text, interest turns to the individualizing side of language use: the individual, and with it linguistic variation, its realization, and its functions, comes to the fore relative to the systemic, general side of language. Individual language use becomes the means by which a text is attributed to an author.
The event aims to stimulate discussion between different approaches to the topic and to offer a forum for exchange. The focus should be on research on German. Situating contributions in the forensic context is expressly encouraged, but other approaches are equally welcome. We invite colleagues who work on forensic authorship analysis on the basis of German-language material, as well as researchers who (beyond a forensic objective) are interested in, and work on, the connections between individual, language, and authorship.
We hope the contributions will provide impulses in the following areas:
• In which contexts and for which research questions are texts analyzed with respect to their authorship? Which linguistic aspects are considered relevant in doing so?
• What constitutes the individual in language use, and in what form can it be identified?
• Which linguistic variables enter into the analysis, and how are they realized in writing? How do further, extralinguistic factors complement the analysis?
• Which instruments and analytical procedures are used for author profiles and/or text comparisons, and with what results? How does one deal with more complex features (e.g., argumentation patterns, stylistic traits, speech acts)?
• Which forms of authorship are relevant (from a forensic-linguistic perspective)? What defines authorship in, e.g., collaborative writing, and how does one deal with multiple authorship (forensic-linguistically)?
• What hinders, and what facilitates, the identification of an individual's language use? Where does linguistic analysis reach its limits? What else must be considered for authors in a criminal context?
Further possible topics:
• variables/parameters of the author profile
• evaluation and interpretation of features
• inter- and intra-individual variation, style, and text type
• group language, group identity, and the author
• conditions of text production and their effect on individually typical features
• forms and strategies of disguise
• combinations of qualitative and quantitative approaches, corpus-based analyses
• application examples (from practice)
• interdisciplinary research questions and analytical approaches
All contributions that address these and similar questions from a theoretical, empirical, or practical perspective are welcome. Talks should not exceed 20 minutes and will each be followed by a 10-minute discussion. Please send abstracts of approx. 350 words (excluding references), with at most five references, as a PDF file to autorschaft2025(a)rub.de. A time slot for poster presentations is also planned. The languages of presentation are German and English.
Invited speakers:
Lars Bülow (LMU München)
Dana Roemling (University of Birmingham)
Markus Schiegg (Universität Freiburg)
Call for Abstracts:
May 30, 2025: Abstract submission deadline
July 31, 2025: Notification of speakers and poster presenters
Registration:
Registration for attendees will open on September 1, 2025. Further information will follow.
Venue:
Landesspracheninstitut (LSI) in der Ruhr-Universität Bochum, Max-Kade-Halle
Laerholzstraße 84
44801 Bochum
Organizing committee:
Maria Berger (RUB)
Eilika Fobbe (BKA)
Nora Giljohann (RUB)
Steffen Hessler (RUB)
Kerstin Kucharczik (RUB)
Karin Pittner (RUB)
Tatjana Scheffler (RUB)
Website: https://staff.germanistik.rub.de/kolloquium-autorschaftsanalyse/
Dear colleagues,
(Apologies if you receive multiple copies of this message via different mailing lists.)
We are delighted to announce the call for task proposals for NTCIR-19.
NTCIR (NII Testbeds and Community for Information Access Research) is a
series of evaluation conferences that mainly focus on information access
with East Asian languages and English. The first NTCIR conference (NTCIR-1)
took place in August/September 1999, and the latest NTCIR-18 conference
will be held on June 10-13, 2025. Research teams from all over the world
participate in one or more NTCIR tasks to advance the state of the art and
to learn from one another's experiences.
It is time to call for task proposals for the next NTCIR (NTCIR-19), which
will start in September 2025 and conclude in December 2026. Task proposals
will be reviewed by the NTCIR Program Committee, and organizers of accepted
tasks will have a chance to present their proposed tasks at the NTCIR-18
Conference, held at NII, Tokyo, Japan, on June 10-13, 2025.
* IMPORTANT DATES:
March 31, 2025: Task Proposal Submission Due (Anywhere on Earth)
May 15, 2025: Acceptance Notification of Task Proposals
June 10-13, 2025: NTCIR-18 Conference (organizers of accepted tasks have a
chance to present their proposed tasks)
* SUBMISSION LINK:
https://easychair.org/conferences/?conf=ntcir19proposal
* NTCIR-19 TENTATIVE SCHEDULE:
January 2026: Dataset release*
January-June 2026: Dry run*
March-July 2026: Formal run*
August 1, 2026: Evaluation results return
August 1, 2026: Task overview release (draft)
September 1, 2026: Submission due of participant papers (draft)
November 1, 2026: Camera-ready participant paper due
December 2026: NTCIR-19 Conference at NII, Tokyo, Japan
(* indicates that the schedule can be different for different tasks)
* WHO SHOULD SUBMIT NTCIR-19 TASK PROPOSALS?
We invite new task proposals within the expansive field of information
access. Organizing an evaluation task entails pinpointing significant
research challenges, strategically addressing them through collaboration
with fellow researchers (including co-organizers and participants),
developing the requisite evaluation framework to propel advancements in the
state of the art, and generating a meaningful impact on both the research
community and future developments.
Prospective applicants are urged to underscore the real-world applicability
of their proposed tasks by utilizing authentic data, focusing on practical
tasks, and solving tangible problems. Additionally, they should confront
challenges in evaluating information access technology, such as the
extensive number of assessments needed for evaluation, ensuring privacy
while using proprietary data, and conducting live tests with actual users.
In the era of large language models (LLMs), these models are anticipated to
significantly influence daily human activities. Nonetheless, the content
produced by LLMs often exhibits issues such as hallucinations. NTCIR-19
encourages tasks that focus on evaluating the quality of content generated
by LLMs (continuing a line of work from NTCIR-18), as well as information
access that exploits LLMs, including generative information retrieval (IR),
IR using generative queries, conversational search using generated
utterances, LLM-based evaluation (relevance judgments or language
annotation using LLMs), and retrieval-augmented generation (RAG).
* PROPOSAL TYPES:
We will accept two types of task proposals:
- Proposal of a Core task:
This is for fostering research on a particular information access problem
by providing researchers with a common ground for evaluation. New test
collections and evaluation methods may be developed through the
collaboration between task organizers (proposers) and task participants. At
NTCIR-18, the core tasks are AEOLLM, FairWeb-2, FinArg-2, Lifelog-6,
MedNLP-CHAT, RadNLP, and Transfer-2. Details can be found at
http://research.nii.ac.jp/ntcir/NTCIR-18/tasks.html.
- Proposal of a Pilot task:
This is recommended for organizers who want to focus on a novel
information access problem where there are still uncertainties in task
design or organization. A pilot task may address a sub-problem of an
information access problem and attract a smaller group of participating
teams than a core task, but it may grow into a core task in the next
round of NTCIR. At NTCIR-18, the pilot tasks are HIDDEN-RAD, SUSHI, and U4.
Details can be found at http://research.nii.ac.jp/ntcir/NTCIR-18/tasks.html.
Organizers are expected to run their tasks mainly with their own funding
and to make the task as self-sustaining as possible. NTCIR can cover part
of the cost through so-called "seed funding," which is usually restricted
to limited purposes such as hiring relevance assessors. The seed
funding allocated to each task varies depending on requirements and the
number of accepted tasks. Typical cases would be around 1M JPY for a core
task and around 0.5M JPY for a pilot task (note that the amount is subject
to change).
Please submit your task proposal as a PDF file via EasyChair by March 31,
2025 (Anywhere on Earth).
https://easychair.org/conferences/?conf=ntcir19proposal
* TASK PROPOSAL FORMAT:
The proposal should not exceed four pages in A4 single-column format. The
first three pages should contain the main part and appendix, and the last
page should contain only a description of the data to be used in the task.
Please describe the data in as much detail as possible so that we can help
your data release process after the proposal is accepted. In past
NTCIRs, preparing memorandums for data release took considerable time,
which sometimes slowed down task organization.
Main part
- Task name and short name
- Task type (core or pilot)
- Abstract
- Motivation
- Methodology
- Expected results
Appendix
- Names and contact information of the organizers
- Prospective participants
- Data to be used and/or constructed
- Budget planning
- Schedule
- Other notes
Data (to be used in your task)
- Details
(Please describe the details of the data, which should include the source
of the data, methods to collect the data, range of the data, etc.)
- License
(Please make sure that you have a license to distribute the data, and
details of the license should be provided. If you do not have permission to
release the data yet, please describe your plan to get the permission.)
- Distribution
(Please describe how you plan to distribute the data to participants. There
are mainly three choices: distributed by the data provider, distributed by
organizers, and distributed by NII.)
- Legal / Ethical issues
(If the data can cause legal or ethical problems, please describe how you
propose to address them; e.g., some medical data may need approval from an
ethics committee, and some Web data may need filtering to exclude
discriminatory messages.)
If you want NII to distribute your data to task participants on your
behalf, please email ntc-admin(a)nii.ac.jp before submitting your task
proposal, attaching the proposal.
* REVIEW CRITERIA:
- Importance of the task to the information access community and society
- Timeliness of the task
- Organizers' commitment to ensuring a successful task
- Financial sustainability (self-sustainable tasks are encouraged)
- Soundness of the evaluation methodology
- Detailed description of the data to be used
- Language scope
* NTCIR-19 PROGRAM CO-CHAIRS:
Qingyao Ai (Tsinghua University, China)
Chung-Chi Chen (National Institute of Advanced Industrial Science and
Technology (AIST), Japan)
Shoko Wakamiya (Nara Institute of Science and Technology (NAIST), Japan)
* NTCIR-19 GENERAL CHAIRS:
Charles Clarke (University of Waterloo, Canada)
Noriko Kando (National Institute of Informatics, Japan)
Makoto P. Kato (University of Tsukuba, Japan)
Yiqun Liu (Tsinghua University, China)
SciVQA: Scientific Visual Question Answering Shared Task
Hosted as part of the SDP 2025 Workshop
July 31 or August 1st, 2025 (tbc)
Vienna, Austria
(co-located with ACL 2025)
SciVQA Shared Task: https://sdproc.org/2025/scivqa.html
SDP 2025 Workshop: https://sdproc.org/2025/index.html
Task Overview
Scholarly articles convey valuable information not only through unstructured text but also via (semi-)structured figures such as charts and diagrams. Automatically interpreting the semantics of knowledge encoded in these figures can be beneficial for downstream tasks such as question answering (QA).
In the SciVQA challenge, participants will develop multimodal QA systems using a dataset of scientific figures from ACL Anthology and arXiv papers. Each figure image is annotated with seven QA pairs and includes metadata such as caption, figure ID, figure type (e.g., compound, line graph, bar chart, scatter plot), and QA pair type. This shared task focuses specifically on closed-ended visual questions (i.e., addressing visual attributes of a figure such as colour, shape, size, or height) and non-visual questions (not addressing visual attributes of a figure).
Evaluation
Systems will be evaluated using metrics such as BLEU, METEOR, and ROUGE. Automated evaluations of submitted systems will be done through the Codabench platform (link will be provided soon on the webpage).
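The official scoring setup will be provided on Codabench; purely as an illustration of how the n-gram overlap family these metrics belong to works, a ROUGE-1-style unigram precision/recall/F1 can be sketched as follows (a simplified sketch with naive whitespace tokenization; the function name is our own and this is not the task's scorer):

```python
from collections import Counter

def rouge1(candidate_tokens, reference_tokens):
    """ROUGE-1-style scores: precision, recall, F1 over unigram overlap.
    Illustrative sketch only; the official SciVQA scorer may differ."""
    cand = Counter(candidate_tokens)
    ref = Counter(reference_tokens)
    # Clipped overlap: each token is matched at most as often as it
    # occurs in the reference.
    overlap = sum(min(cand[tok], ref[tok]) for tok in cand)
    p = overlap / sum(cand.values()) if cand else 0.0
    r = overlap / sum(ref.values()) if ref else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

p, r, f1 = rouge1("the cat sat on the mat".split(),
                  "the cat is on the mat".split())
# 5 of 6 candidate unigrams also occur in the reference
```

BLEU additionally combines higher-order n-grams with a brevity penalty, and METEOR adds stemming and synonym matching; established toolkit implementations should be preferred in practice.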
Important Dates
Release of training data: April 1, 2025
Release of testing data: April 15, 2025
Deadline for system submissions: May 16, 2025
Paper submission deadline: May 23, 2025
Notification of acceptance: June 13, 2025
Camera-ready paper due: June 20, 2025
Workshop: July 31, 2025 or August 1, 2025 (TBA)
Participants are also invited to submit papers on their systems. Successful submissions will be published in the proceedings of the SDP 2025 workshop.
Organizers
Ekaterina Borisova (DFKI, Berlin, Germany)
Georg Rehm (DFKI, Berlin, Germany)
ClimateCheck: Shared Task on Scientific Fact-Checking of Social Media Claims on Climate Change
Hosted as part of the SDP 2025 Workshop
July 31 or August 1st, 2025 (tbc)
Vienna, Austria
(co-located with ACL 2025)
ClimateCheck Shared Task: https://sdproc.org/2025/climatecheck.html
SDP 2025 Workshop: https://sdproc.org/2025/index.html
Task Overview
Social media facilitates discussions on critical issues such as climate change, but it also contributes to the rapid dissemination of misinformation, which complicates efforts to maintain an informed public and create evidence-based policies. In this shared task, we emphasise the need to link public discourse to peer-reviewed scholarly articles by gathering claims from social media about climate change (both real-life and automatically generated ones) as well as a corpus of about 400K abstracts of publications from the climate sciences domains. The participants will be asked to retrieve relevant abstracts for each claim (subtask I) and classify the relation between the claim and abstract as ‘supports’, ‘refutes’, or ‘not enough information’ (subtask II). The task will be hosted on Codabench (link will be provided soon on the webpage). Participants are allowed to take part either in subtask I only, or in both subtasks.
Subtask I: Abstracts Retrieval
Task: given a claim from social media about climate change and a corpus of abstracts, retrieve the top K most relevant abstracts.
Evaluation: MAP and B-Pref, which credit the retrieval of relevant abstracts while not penalising unjudged documents.
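To make the "not penalising unjudged documents" point concrete, average precision can be computed over the condensed ranking, i.e., skipping documents that have no relevance judgment rather than treating them as non-relevant. The sketch below is illustrative only; the function names are our own and the exact Codabench scorer may differ:

```python
def average_precision(ranking, qrels):
    """AP over a condensed ranking.

    ranking: list of doc ids, best first.
    qrels:   dict mapping judged doc ids to 1 (relevant) or 0 (not).
    Documents absent from qrels are unjudged and simply skipped,
    so they do not lower the score."""
    num_relevant = sum(1 for rel in qrels.values() if rel)
    if num_relevant == 0:
        return 0.0
    hits, rank, precisions = 0, 0, []
    for doc in ranking:
        if doc not in qrels:
            continue  # unjudged: not penalised
        rank += 1
        if qrels[doc]:
            hits += 1
            precisions.append(hits / rank)
    return sum(precisions) / num_relevant

def mean_average_precision(runs, judgments):
    """MAP over all claims; runs and judgments are keyed by claim id."""
    return sum(average_precision(runs[c], judgments[c])
               for c in judgments) / len(judgments)
```

Here the unjudged document "x" in a ranking ["a", "x", "b", "c"] is simply ignored when ranks are counted. B-Pref follows the same spirit but scores each relevant document by how many judged non-relevant documents are ranked above it.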
Subtask II: Claim Verification
Task: given the claim-abstract pairs retrieved in the previous subtask, classify their relation as ‘supports’, ‘refutes’, or ‘not enough information’.
Evaluation: F1 score based on judged documents from gold data; unjudged documents will not be included in computing the score.
Important dates
Release of training data: April 1, 2025
Release of testing data: April 15, 2025
Deadline for system submissions: May 16, 2025
Paper submission deadline: May 23, 2025
Notification of acceptance: June 13, 2025
Camera-ready paper due: June 20, 2025
Workshop: July 31, 2025 or August 1, 2025 (TBA)
We encourage and invite participation from junior researchers and students from diverse backgrounds. Participants are also encouraged to submit a paper describing their systems to the SDP 2025 workshop.
Organisers
Raia Abu Ahmad (DFKI, Berlin, Germany)
Aida Usmanova (Leuphana University, Lüneburg, Germany)
Georg Rehm (DFKI, Berlin, Germany)
*** REGISTER NOW: Hybrid conference on Experimental Methods in Language (acquisition) Research (EMLaR), April 15-17, 2025 - Utrecht University (The Netherlands) ***
The Institute for Language Sciences (ILS) of Utrecht University is pleased to announce the 21st edition of EMLaR. This three-day conference will take place from April 15th – 17th 2025 (Tuesday to Thursday) in hybrid format. The physical location is Utrecht University, in the city center of Utrecht, The Netherlands.
EMLaR aims to train PhD students and advanced MA students in experimental methods of language (acquisition) research. Experts in various domains of linguistic research will give lectures and hands-on tutorials, and speakers will give method-oriented talks during plenary sessions. We also provide the opportunity to present your (ongoing) research at the poster session.
**Program**
Keynote speaker:
• Sonja Kotz (Maastricht University)
Invited speakers:
• Bram van Dijk (Leiden University Medical Center, Leiden Institute of Advanced Computer Science)
• Michael Franke (University of Tübingen)
• Mieke Slim (Max Planck Institute for Psycholinguistics)
• Roberta D’Alessandro (Utrecht University)
• Rowena Garcia (Leibniz-Centre General Linguistics, University of the Philippines)
Tutorials:
• Automatic Speech Recognition
• Bayesian Hypothesis Evaluation Using JASP and R
• Coloring Book – a tool for testing language comprehension with young children
• Computational Methods
• Event-related Brain Potentials (Introduction)
• Event-related Brain Potentials (Advanced*)
• Ethics and Privacy
• Eye-tracking
• Online experiments for language scientists
• Open (your) Science Using the Statistical Package JASP
• PRAAT
• Probabilistic Pragmatics
• Research with infants: Tips and tricks
• Statistics with R (Introduction)
• Statistics with R (Advanced*)
• Visual World Paradigm
For registration and more details, please visit our website: https://emlar.wp.hum.uu.nl/.
If you have any questions, please send an email to EMLAR2025(a)uu.nl.
We hope to see you there!
Kind regards,
EMLaR 2025 organization
FoRC 2025: Shared Task on Field of Research Classification
of Scholarly Publications
Hosted as part of the NSLP 2025 Workshop
1 or 2 June 2025 (tbc)
Portoroz, Slovenia
(co-located with ESWC 2025)
FoRC Shared Task: https://nfdi4ds.github.io/nslp2025/docs/forc_shared_task.html
NSLP 2025 Workshop: https://nfdi4ds.github.io/nslp2025/
A core application of Natural Scientific Language Processing (NSLP) is classifying scientific articles for their respective field of research (FoR). The 2025 iteration of the FoRC shared task builds on the data developed for Subtask II of FoRC in 2024 <https://nfdi4ds.github.io/nslp2024/docs/forc_shared_task.html>, adding to it a weakly supervised dataset of over 40K ACL publications. Participants are asked to design classification systems based on FoRC4CL, a corpus of 1500 English scholarly articles in Computational Linguistics (CL), collected from the ACL Anthology (CC BY 4.0) and manually annotated according to a novel hierarchical taxonomy, Taxonomy4CL, which consists of 170 core CL (sub-)topics. In addition, over 40K weakly supervised publications are provided to supplement the corpus and potentially increase model capabilities. Metadata fields include ACL Anthology ID, title, abstract, author(s), URL to the full text, publisher, publication year and month, proceedings title, DOI, venue, and the full text of the respective article.
Task Overview
Given an article from the ACL Anthology and a taxonomy of NLP/CL sub-topics (Taxonomy4CL), predict the entities from the taxonomy that correspond to the main contributions of the article.
As a highly unbalanced, multi-label, hierarchical classification problem, this task will be evaluated by computing micro, macro, and weighted precision, recall, and F1-score.
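For concreteness, the three averaging schemes can be sketched for set-valued (multi-label) predictions as follows (an illustrative sketch; the function names are our own and the actual scorer on Codabench may differ):

```python
from collections import Counter

def prf(tp, fp, fn):
    """Precision, recall, F1 with the 0/0 -> 0 convention."""
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f1

def multilabel_scores(y_true, y_pred):
    """Micro, macro, and weighted (precision, recall, F1) for lists of
    label sets. Weighted = macro weighted by each label's true support."""
    labels = set().union(*y_true, *y_pred)
    tp, fp, fn = Counter(), Counter(), Counter()
    for true, pred in zip(y_true, y_pred):
        for lab in labels:
            if lab in pred and lab in true:
                tp[lab] += 1
            elif lab in pred:
                fp[lab] += 1
            elif lab in true:
                fn[lab] += 1
    # Micro: pool all label decisions before computing P/R/F1.
    micro = prf(sum(tp.values()), sum(fp.values()), sum(fn.values()))
    # Macro: unweighted mean over per-label scores.
    per_label = {lab: prf(tp[lab], fp[lab], fn[lab]) for lab in labels}
    macro = tuple(sum(s[i] for s in per_label.values()) / len(labels)
                  for i in range(3))
    # Weighted: per-label scores weighted by true support.
    support = {lab: tp[lab] + fn[lab] for lab in labels}
    total = sum(support.values())
    weighted = tuple(sum(per_label[lab][i] * support[lab]
                         for lab in labels) / total for i in range(3))
    return {"micro": micro, "macro": macro, "weighted": weighted}
```

Micro-averaging lets frequent labels dominate, macro-averaging gives every taxonomy label equal weight regardless of frequency, and weighted averaging interpolates by true support; on an unbalanced hierarchy like Taxonomy4CL, the gap between micro and macro scores is a useful diagnostic.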
Codabench page for participation: https://www.codabench.org/competitions/5779
Important dates
Training and testing data release: February 18, 2025
System submissions deadline: March 25, 2025
Paper submissions: March 27, 2025
Notification of acceptance: April 10, 2025
Camera-ready submission: April 17, 2025
We encourage and invite participation from junior researchers and students from diverse backgrounds. Participants are also encouraged to submit a paper describing their systems to the NSLP 2025 workshop.
Organisers
Maria Francis (DFKI, Berlin, Germany & University of Trento, Italy)
Raia Abu Ahmad (DFKI, Berlin, Germany)
Ekaterina Borisova (DFKI, Berlin, Germany)
Georg Rehm (DFKI, Berlin, Germany)
SemEval-2026: Call for Task Proposals
URL: https://semeval.github.io/SemEval2026/cft
# Call for Task Proposals
We invite proposals for tasks to be run as part of SemEval-2026. SemEval
(the International Workshop on Semantic Evaluation) is an ongoing series of
evaluations of computational semantics systems, organized under the
umbrella of SIGLEX, the Special Interest Group on
the Lexicon of the Association for Computational Linguistics.
SemEval tasks explore the nature of meaning in natural languages: how to
characterize meaning and how to compute it. This is achieved in practical
terms, using shared datasets and standardized evaluation metrics to
quantify the strengths and weaknesses and possible
solutions. SemEval tasks encompass a broad range of semantic topics from
the lexical level to the discourse level, including word sense
identification, semantic parsing, coreference resolution, and sentiment
analysis, among others.
For SemEval-2026, we welcome tasks that can test an automatic system for
semantic analysis of text (e.g., intrinsic semantic evaluation, or an
application-oriented evaluation). We especially encourage tasks for
languages other than English, cross-lingual tasks, and tasks that develop
novel applications of computational semantics. See the websites of previous
editions of SemEval to get an idea about the range of tasks explored, e.g.
SemEval-2020 (http://alt.qcri.org/semeval2020/) and SemEval-2021 through
SemEval-2025 (https://semeval.github.io).
We strongly encourage proposals based on pilot studies that have already
generated initial data, evaluation measures and baselines. In this way, we
can avoid unforeseen challenges down the road that may delay the task. We
suggest providing a reasonable baseline (e.g.,
providing a BERT baseline for a classification task) apart from majority
vote / random guess.
In case you are not sure whether a task is suitable for SemEval, please
feel free to get in touch with the SemEval organizers at
semevalorganizers(a)gmail.com to discuss your idea.
## Task Selection
Task proposals will be reviewed by experts, and reviews will serve as the
basis for acceptance decisions. Everything else being equal, more
innovative new tasks will be given preference over task reruns. Task
proposals will be evaluated on:
- Novelty: Is the task on a compelling new problem that has not been
explored much in the community? Is the task a rerun, but covering
substantially new ground (new subtasks, new types of data, new languages,
etc. - one addition is not sufficient)?
- Interest: Is the proposed task likely to attract a sufficient number of
participants?
- Data: Are the plans for collecting data convincing? Will the resulting
data be of high quality? Will annotations have meaningfully high
inter-annotator agreements? Have all appropriate licenses for use and
re-use of the data after the evaluation been secured? Have all
international privacy concerns been addressed? Will the data annotation be
ready on time?
- Evaluation: Is the methodology for evaluation sound? Is the necessary
infrastructure available or can it be built in time for the shared task?
Will research inspired by this task be able to evaluate in the same manner
and on the same data after the initial task? Is the task significantly
challenging (e.g. room for improvement over the baselines)?
- Impact: What is the expected impact of the data in this task on future
research beyond the SemEval Workshop?
- Ethical: The data must be compliant with privacy policies, e.g.:
a) avoid personally identifiable information (PII); tasks aimed at
identifying specific people will not be accepted;
b) avoid medical decision making (comply with HIPAA; do not try to
replace medical professionals, especially for anything related to mental
health);
c) these examples are representative, not exhaustive.
## Submission Details
The task proposal should be a self-contained document of no longer than 3
pages (plus additional pages for references). Please see website for
further information.
## Important dates
- Task proposals due 31 March 2025 (Anywhere on Earth)
- Task selection notification 19 May 2025
## Preliminary timetable
- Sample data ready 15 July 2025
- Training data ready 1 September 2025
- Evaluation data ready 1 December 2025 (internal deadline; not for public
release)
- Evaluation start 10 January 2026
- Evaluation end by 31 January 2026 (latest date; task organizers may
choose an earlier date)
- Paper submission due February 2026
- Notification to authors March 2026
- Camera ready due April 2026
- SemEval workshop Summer 2026 (co-located with a major NLP conference)
Tasks that fail to keep up with crucial deadlines (such as the dates for
having the task and CodaLab website up and dates for uploading sample,
training, and evaluation data) may be cancelled at the discretion of
SemEval organizers. While consideration will be given to extenuating
circumstances, our goal is to provide sufficient time for the participants
to develop strong and well-thought-out systems. Cancelled tasks will be
encouraged to submit proposals for the subsequent year’s SemEval. To reduce
the risk of tasks failing to meet the deadlines, we are unlikely to accept
multiple tasks with overlap in the task organizers.
## Chairs
- Sara Rosenthal, IBM Research AI
- Aiala Rosá, Universidad de la República, Uruguay
- Marcos Zampieri, George Mason University, USA
- Debanjan Ghosh, Educational Testing Service
IndiREAD Workshop 2025: 1st Call for Papers
Saarbrücken, Germany, November 26-27, 2025
IndiREAD is a workshop jointly organized by the ERC Project
"Individualized Interaction in Discourse" IDDISC [1] and the MultiplEYE
COST [2] action "Enabling multilingual eye-tracking data collection for
human and machine language processing research".
While experimental research on reading has a long tradition of
identifying key factors that influence reading patterns (including text
properties such as font difficulty, word and structure frequency, word
predictability, and dependency length), recent studies have emphasized
the importance of individual variability in reading behaviour (e.g.,
Haeuser & Kray, 2024; Kuperman et al., 2018; Nicenboim et al., 2016;
Staub, 2021). This work has linked individual variability in reading
patterns to differences in working memory capacity, reading skills,
linguistic experience, and domain expertise among readers. This informs
our understanding of how text characteristics and individual reader
attributes interact to shape eye movements during reading.
IndiREAD aims to bring together researchers interested in investigating
individual differences in reading using both experimental and
computational approaches. This workshop will focus on methods such as
eye-tracking, self-paced reading, and the Maze task, with particular
interest in how reading behaviour is correlated with individual
differences. We also encourage submissions of computational models for
eye movements or reading behavior that shed light on the mechanisms
behind these differences. The goal is to foster collaboration between
experimental and computational researchers to better understand
individual variability among readers. We especially welcome submissions
of reading time experiments and modelling of languages beyond English.
The IndiREAD Workshop invites submissions of abstracts addressing the
following questions:
* How do individual differences impact the way people read?
* How do reading patterns vary across different languages,
particularly in bilinguals?
* How do reading patterns change across the lifespan?
* Which individual difference measures are most suitable for capturing
variability in reading patterns?
* How can we evaluate psycholinguistic theories of reading and
sentence processing across languages?
* How can computational models account for individual differences in
reading?
* How does text adaptation influence reading patterns and
comprehension among different individuals?
* What statistical methods are best suited for reliably identifying
latent groups and relating individual differences to reading
performance?
Workshop dates: November 26-27, 2025
Workshop format: The workshop will be held in-person in Saarbrücken,
Germany. It will feature presentations from invited speakers, as well as
contributions based on workshop submissions. The format of the
presentations (oral or poster) will be determined based on the number of
submissions we receive.
Submission deadline: July 23, 2025.
We invite 1000-word abstracts from interested presenters. Information
about submission and formatting will be available on our website soon.
Conference website: https://www.uni-saarland.de/indiread [3]
Contact email: indiread(a)lst.uni-saarland.de
Travel grants: This workshop is sponsored by the MultiplEYE COST Action,
which will provide financial support to cover travel expenses for a
limited number of participants. Authors will be invited to apply for
travel funding upon abstract acceptance. Funding may be partial, and
priority will be given to junior researchers.
Best,
Iza Škrjanec
IndiREAD Organizing Committee
Links:
------
[1]
https://www.uni-saarland.de/lehrstuhl/demberg/individualized-interaction-in…
[2] https://multipleye.eu/
[3] https://www.uni-saarland.de/indiread