[Apologies for cross-postings]
Call for Papers
First International Workshop on Extraction from Triplet
Text-Table-Knowledge Graph and associated Challenge
https://ecladatta.github.io/triplet2026/
in conjunction with the 23rd European Semantic Web Conference (ESWC 2026)
https://2026.eswc-conferences.org/, Dubrovnik, Croatia
Important dates:
- **Submission deadline**: 3 March, 2026 (11:59pm, AoE)
- **Notifications**: 31 March, 2026
- **Camera-ready deadline**: 15 April, 2026 (11:59pm, AoE)
- **Workshop**: Sunday 10 May OR Monday 11 May 2026
Motivation:
Understanding information spread across text and tables is essential for
tasks such as question answering and fact checking. Existing benchmarks
primarily deal with semantic table interpretation or reasoning over
tables for question answering, leaving a gap in evaluating models that
integrate tabular and textual information, perform joint information
extraction across modalities, or can automatically detect
inconsistencies between modalities.
This workshop aims to provide a forum for exchanging ideas between the
NLP community working on open information extraction and the vibrant
Semantic Web community working on the core challenge of matching tabular
data to Knowledge Graphs, on populating knowledge graphs using texts and
on reasoning across text, tabular data and knowledge graphs. The
workshop also targets researchers focusing on the intersection of
learning over structured data and information retrieval, for example, in
retrieval augmented generation (RAG) and question answering (QA)
systems. Hence, the goal of the workshop is to connect researchers and
trigger collaboration opportunities by bringing together views from the
Semantic Web, NLP, database, and IR disciplines.
Scope:
The topics of interest include but are not limited to:
- Semantic Table Interpretation
- Automated Tabular Data Understanding
- Using Large Language Models (LLMs) for Information Extraction
- Generative Models and LLMs for Structured Data
- Knowledge Graph Construction and Completion with Tabular Data and Texts
- Analysis of Tabular Data on the Web (Web Tables)
- Benchmarking and Evaluation Frameworks for Joint Text-Table Data Analysis
- Applications (e.g. data search, fact-checking, Question-Answering, KG
alignment)
Submission Guidelines:
We invite two types of submissions:
1. Full research papers (12-15 pages) including references and appendices
2. Challenge papers (6-8 pages) including references and appendices
All submissions should be formatted in the CEUR layout format,
https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-w…
Reviewing is double-blind and the workshop is non-archival. Submissions
are managed through EasyChair at
https://easychair.org/conferences/?conf=triplet2026. All accepted papers
will be presented as posters or oral talks.
**TRIPLET Challenge:**
In recent years, the research community has shown increasing interest in
the joint understanding of text and tabular data, often for tasks such
as question answering or fact checking, where evidence can be found in
both texts and tables. Hence, various benchmarks have been developed
for jointly querying tabular data and textual documents in domains such
as finance, scientific publications, and open domain. While benchmarks
for triple extraction from text for Knowledge Graph construction and
semantic annotation of tabular data exist in the community, there
remains a gap in benchmarks and tasks that specifically address the
joint extraction of triples from text and tables by leveraging
complementary clues across these different modalities.
The TRIPLET 2026 challenge proposes three sub-tasks and benchmarks for
understanding the complementarity between tables, texts, and knowledge
graphs, and in particular for developing a joint knowledge extraction
and reconciliation process.
# Sub-Task 1: Assessing the Relatedness Between Tables and Textual Passages
The goal of this task is to assess the relatedness between tables and
textual passages (within documents and across documents). For this
purpose, we have constructed LATTE (Linking Across Table and Text for
Relatedness Evaluation), a human annotated dataset comprising table–text
pairs with relatedness labels. LATTE consists of 7,674 unique tables and
41,880 unique textual paragraphs originating from 3,826 distinct
Wikipedia pages. Each text paragraph is drawn from the same or
contextually linked pages as the corresponding table, rather than being
artificially generated. LATTE provides a challenging benchmark for
cross-modal reasoning by requiring classification of related and
unrelated table–text pairs. Unlike prior resources centered on
table-to-text generation or text retrieval, LATTE emphasizes
fine-grained semantic relatedness between structured and unstructured data.
The Figure below, produced with a web-annotation tool we developed,
shows how we identify the relatedness between the sentence containing
the entity AirPort Extreme 802.11n (highlighted in orange) and the data
table providing information about output power and frequency for this
entity. Participants are provided with tables and textual passages that
need to be ranked. The evaluation will use metrics such as P@k, R@k and
F1@k.
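The official evaluation script will be provided on Codabench; as an illustration only, ranked-retrieval metrics of this kind are typically computed per table as in the following sketch (the function name and the data layout are our own, hypothetical choices, not the organizers' scorer):

```python
def precision_recall_f1_at_k(ranked, relevant, k):
    """Compute P@k, R@k and F1@k for one table's ranked passage list.

    ranked:   list of passage ids, ordered by the system's score
    relevant: set of passage ids annotated as related in the ground truth
    """
    top_k = ranked[:k]
    hits = sum(1 for pid in top_k if pid in relevant)
    p = hits / k
    r = hits / len(relevant) if relevant else 0.0
    f1 = 2 * p * r / (p + r) if (p + r) > 0 else 0.0
    return p, r, f1

# Example: a system ranks 5 passages; passages "a" and "d" are truly related.
p, r, f1 = precision_recall_f1_at_k(["a", "b", "d", "c", "e"], {"a", "d"}, k=3)
# p = 2/3, r = 1.0, f1 = 0.8
```

Scores would then be averaged over all tables in the test set.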
Go to https://www.codabench.org/competitions/12776/ and enroll to
participate in this Task.
# Sub-Task 2: Joint Relation Extraction Between Texts and Tables
The goal of this task is to automatically extract knowledge jointly from
tables and related texts. For this purpose, we created ReTaT, a dataset
that can be used to train and evaluate systems for extracting such
relations. This dataset is composed of (table, surrounding text) pairs
extracted from Wikipedia pages and has been manually annotated with
relation triples. ReTaT is organized in three subsets with distinct
characteristics: domain (business, telecommunication and female
celebrities), size (from 50 to 255 pairs), language (English vs French),
type of relations (data vs object properties), closed vs open list of
relations, and size of the surrounding text (paragraph vs full page). We
then assessed its quality and suitability for the joint table-text
relation extraction task using Large Language Models (LLMs).
Given a Wikipedia page containing texts and tables and a list of
predicates defined in Wikidata, a participant system should extract
triples composed of mentions located partly in the text and partly in
the table and disambiguated with entities and predicates identified in
the Wikidata reference knowledge graph. For example, in the Figure
below, an annotation triple <Q13567390, P2109, 24.57> is associated with
mentions highlighted in orange (subject), blue (predicate) and green
(object) to annotate the document available at
https://en.wikipedia.org/wiki/AirPort_Extreme. Similar to the
Text2KGBench evaluation
(https://link.springer.com/chapter/10.1007/978-3-031-47243-5_14), and
because the set of triples is not exhaustive for a given sentence, we
follow a locally closed approach to avoid false negatives, considering
only the relations that are part of the ground truth. The evaluation
then uses metrics such as P, R and F1.
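The official scorer is provided by the organizers; the sketch below is only one plausible reading of the locally closed protocol, assuming triples are plain (subject, predicate, object) string tuples and that predictions whose predicate is absent from the ground truth are ignored rather than counted as errors:

```python
def locally_closed_prf(predicted, gold):
    """Micro-averaged P/R/F1 over extracted triples under a locally
    closed world: predicted triples whose predicate never occurs in
    the ground truth are ignored instead of being counted as false
    positives. (Illustrative sketch, not the official scorer.)"""
    gold_predicates = {p for (_, p, _) in gold}
    scored = [t for t in predicted if t[1] in gold_predicates]
    tp = len(set(scored) & set(gold))
    prec = tp / len(scored) if scored else 0.0
    rec = tp / len(gold) if gold else 0.0
    f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
    return prec, rec, f1

gold = {("Q13567390", "P2109", "24.57")}
pred = [("Q13567390", "P2109", "24.57"),  # correct triple
        ("Q13567390", "P9999", "x")]      # predicate not in ground truth
# The P9999 triple is ignored under the locally closed assumption,
# so here P = R = F1 = 1.0.
```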
Go to https://www.codabench.org/competitions/12936/ and enroll to
participate in this Task.
# Sub-Task 3: Detecting Inconsistencies Between Texts, Tables and
Knowledge Graphs
The goal of this task is to check the consistency of knowledge extracted
from tables and texts against existing triples in the Wikidata knowledge
graph. Different kinds of inconsistencies will be considered in this
task. Participants in this task will be able to report on their findings
in their system paper.
See the Figure at
https://ecladatta.github.io/images/triplet_annotation_tool.png
# Data & Evaluation:
For the first two sub-tasks, we have released a training dataset with
ground-truth annotations, enabling participating teams to develop
machine-learning-based systems, in particular for training,
hyperparameter optimization and internal validation.
A separate blind test dataset will remain private and will be used for
ranking the submissions.
Participants should register on Codabench and then enroll for each
sub-task separately (Task 1:
https://www.codabench.org/competitions/12776/ and Task 2:
https://www.codabench.org/competitions/12936/). Each team is allowed a
limited number of daily submissions, and the highest achieved accuracy
will be reported as the team's final result. We encourage participants
to develop open-source solutions, to utilise and fine-tune pre-trained
language models, and to experiment with LLMs of various sizes in
zero-shot or few-shot settings.
# Challenge Important Dates:
- Release of training set: 13 February 2026
- Deadline for registering to the challenge: 15 March 2026
- Release of test set: 24 March 2026
- Submission of results: 10 April 2026
- System Results & Notification of Acceptance: 17 April 2026
- Submission of System Papers: 28 April 2026
- Presentations @ TRIPLET Workshop: May 2026
Workshop Organizers
- Raphael Troncy (EURECOM, France)
- Yoan Chabot (Orange, France)
- Véronique Moriceau (IRIT, France)
- Nathalie Aussenac-Gilles (IRIT, France)
- Mouna Kamel (IRIT, France)
Contact:
For discussions, please use our Google Group,
https://groups.google.com/g/triplet-challenge
The workshop is supported by the ECLADATTA project funded by the French
National Funding Agency ANR under the grant ANR-22-CE23-0020.
--
Raphaël Troncy
EURECOM, Campus SophiaTech
Data Science Department
450 route des Chappes, 06410 Biot, France.
e-mail: raphael.troncy(a)eurecom.fr & raphael.troncy(a)gmail.com
Tel: +33 (0)4 - 9300 8242
Fax: +33 (0)4 - 9000 8200
Web: http://www.eurecom.fr/~troncy/
Dear colleagues,
We are looking for a postdoctoral researcher with experience in explainable AI and human-computer interaction who will pursue an independent research agenda, acquire third-party funding, and eventually establish their own research group. More details can be found in our application portal https://jobs.dfki.de/en/vacancy/en-senior-researcher*in-fur-erklarbare-spra…
*The application deadline is March 1st.*
The position is initially assigned to the Efficient and Explainable NLP group in the Multilinguality and Language Technology research department at DFKI in Saarbrücken, which is headed by Dr. Simon Ostermann (https://www.dfki.de/en/web/research/research-departments/multilinguality-an…). The group works on national and international research and development projects in the field of explainable and efficient language processing. The environment offers a very active publication culture, high methodological expertise, a focus on basic research, and close ties to current developments in NLP and AI research.
The focus of the position is on the research and development of explainable AI systems with a special emphasis on user perspectives and interaction. Relevant topics include explainable multimodal models, user-centred explanation approaches, and the generation of rationales for complex model decisions. The specific design of the research agenda leaves room for your own ideas and new research directions.
A central goal of the position is to establish or expand an independent scientific profile. This includes, in particular, the development of your own project ideas, the acquisition of third-party funding, and the preparation and establishment of your own research group in the aforementioned field within the MLT research department.
The position is ideally to be filled on 1 April 2026; later employment is possible. The position is initially limited to three years, but may be made permanent if the candidate successfully assists in acquiring third-party funding.
Your tasks
- You will work on independent research questions in the field of explainable AI and interaction as part of a BMFTR-funded project.
- You will develop and evaluate user-centred explanatory approaches for complex AI models.
- You will publish your research at leading international conferences.
- You will design and apply for independent third-party funded projects in collaboration with other members of the group.
- You will actively participate in developing your own research agenda and take on scientific leadership tasks in the future.
- You will supervise master's theses and doctoral dissertations and participate in teaching.
Your qualifications
- Completed doctorate in computer science, AI, computational linguistics, human-computer interaction or a related field.
- Very good knowledge of explainable AI, interpretability and language processing.
- High motivation for independent research and profile building.
- Initial experience in acquiring third-party funding.
- A convincing track record of publications in the field of NLP and explainable AI, ideally at leading international conferences such as ACL, EMNLP, NAACL, EACL, COLING, CHI, FAccT, NeurIPS, ICLR or ICML.
- Very good publication and communication skills.
Your benefits
- An excellent research environment with high international visibility.
- Great scientific freedom while being part of an established research group.
- Active support in acquiring third-party funding and setting up your own working group.
- An interdisciplinary network at the interface of AI research, language processing and human-computer interaction.
- A young, motivated and collegial team.
- A working environment in which we place great value on positive, respectful and constructive collaboration.
The German Research Center for Artificial Intelligence (DFKI) has operated as a non-profit, Public-Private-Partnership (PPP) since 1988. DFKI combines scientific excellence and commercially-oriented value creation with social awareness and is recognized as a major "Center of Excellence" by the international scientific community. In the field of artificial intelligence, DFKI has focused on the goal of human-centric AI for more than 35 years. Research is committed to essential, future-oriented areas of application and socially relevant topics.
DFKI encourages applications from people with disability; DFKI intends to increase the proportion of female employees in the field of science and encourages women to apply for this position.
12th Workshop on the Challenges in the Management of Large Corpora (CMLC)
The next meeting of CMLC (see also http://corpora.ids-mannheim.de/cmlc.html) will be held as part of the LREC-2026 conference [3] in Palma, Mallorca.
3rd Call for Papers (with deadline extension)
Important dates
* Deadline for paper submission (extended): the 25th of February 2026 (Monday, 23:59 UTC)
* Notification of acceptance: the 12th of March 2026 (Thursday)
* Deadline for the submission of camera-ready papers: the 30th of March 2026 (Monday)
* Meeting: the 11th of May, morning slot
Paper submission
* We invite anonymised extended abstracts for oral presentations on the topics
listed below, as PDF created according to LREC-2026 templates [1].
Length and content: 4 to 8 pages in length, excluding acknowledgements, references,
potential Ethics Statements and discussion on Limitations. Appendices or
supplementary material are not permitted during the initial submission
phase, as papers should be self-contained and reviewable on their own.
However, appendices and supplementary material will be allowed in the
final, camera-ready version of the paper.
* CMLC has always reserved a track for national corpus project reports, and to
this end, we invite poster proposals of 500-750 words. National project
reports need not be anonymised.
* Submissions are accepted solely through the LREC START system [2].
* A volume of proceedings will be published online by ELRA. Oral and poster
contributions will have equal status.
Workshop description
As in the previous CMLC meetings, we wish to explore common areas of interest across a range of issues in language resource management, corpus linguistics, natural language processing, natural
language generation, and data science.
Large textual datasets require careful design, collection, cleaning, encoding, annotation, storage, retrieval, and curation to be of use for a wide range of research questions and to users across a
number of disciplines. A growing number of national and other very large corpora are being made available, many historical archives are being digitised, numerous publishing houses are opening their
textual assets for text mining, and many billions of words can be quickly sourced from the web and online social media.
A mixed blessing of the times is that many such texts, in mono- and multilingual arrangements, can now be created automatically by exploiting Large Language Models at various scales. That, on the
one hand, makes it possible to inflate the amounts of data where normally data would be scarce: in under-resourced languages or language varieties, in specific genres or for intricate and rarely
attested constructions. On the other hand, such procedures immediately raise concerns regarding the authenticity and quality of such data, casting doubt on the possibility of adequately (truthfully,
verifiably, reproducibly) addressing the kind of research questions that provoked the rapid but tainted increase of the available data volumes in the first place. Similar doubts may be directed at
mass creation of secondary and tertiary data ordinarily crucial for linguistic research: apart from potential legal constraints on the use of the initial amounts of human-created data, new questions
arise as to the legal status of the derived data, the ways to create e.g. provenance metadata of the derived resources, and the level of trust regarding mass-produced grammatical (and other)
annotation layers.
These new as well as more traditional questions lie at the base of the list of topics that management of large corpora (for any currently suitable definition of “large”) invokes or at least strongly
brushes against.
Topics of interest
This year's event adds new items to the standard range of CMLC themes and addresses some of LREC-2026 focus topics:
* Interoperability and accessibility
  - How to make corpora as accessible as possible
  - Interoperable APIs for query and analysis software
  - Provision of multiple levels of access for different tasks
* Machine/Deep Learning
  - Data preparation for machine learning input
  - Creation, curation, maintenance and dissemination of language models based on machine learning (e.g. word embeddings and entire deep learning networks)
  - Legal issues concerning language model distribution
* Linguistic content challenges
  - Dealing with the variety of language: multilinguality, minority and/or underrepresented languages, historical texts, noisy OCR texts, user-generated content, etc.
  - Diversity and inclusion in language resources
  - Integration of human computation (crowdsourcing) and automatic annotation
  - Quality management of annotations
  - Ensuring linguistic integrity of data through deduplication, correction of typos and errors, removal of incomplete or malformed sentences, filtering harmful, offensive and toxic content, etc.
  - Integrating different linguistic data types (text, audio, video, facsimiles, experimental data, neuroimaging data, …)
* Technical challenges
  - Storage and retrieval solutions for large text corpora: primary data (potentially including facsimiles, etc.), metadata, and annotation data
  - Corpus versioning and release management
  - Scalable and efficient NLP tooling for annotating and analysing large datasets: distributed and GPGPU computing; using big data analysis frameworks for language processing
  - Dealing with streaming data (e.g. Social Media) and rapidly changing corpora
  - Environmental impact of big language data computing
  - Engineering and management of research software
* Exploitation challenges
  - Legal and privacy issues
  - Query languages, data models, and standardisation
  - Licensing models of open and closed data, coping with intellectual property restrictions
  - Innovative approaches for aggregation and visualisation of text analytics
  - Repurposing or extending application areas of existing corpora and tools
National corpus initiatives
In the tradition of CMLC, we invite reports on national corpus initiatives; submitters of these reports should be prepared to present a poster. Given that it has been a while since the last round, we
would be happy to have a little "What's the news?" session, and we cordially invite both our veteran presenters and colleagues who have not yet introduced their national corpus projects.
Our poster sessions are usually scheduled to overlap with the coffee break, to ensure an informal atmosphere and to make maximal use of the time slot available to us. A flash presentation session is
planned for just before the poster session: ca. 3 minutes for the highlights.
LRE 2026 Map and the "Share your LRs!" initiative
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e. also technologies, standards, evaluation kits, etc.) that
have been used for the work described in the paper or are a new result of your research. Moreover, ELRA encourages all LREC authors to share the described LRs (data, tools, services, etc.) to enable
their reuse and the replicability of experiments (including evaluation ones).
Programme Committee
* Laurence Anthony (Waseda University, Japan)
* Vladimír Benko (Slovak Academy of Sciences)
* Felix Bildhauer (IDS Mannheim)
* Mark Davies (English-Corpora.org)
* Nils Diewald (IDS Mannheim)
* Kaja Dobrovoljc (University of Ljubljana / Jožef Stefan Institute)
* Jarle Ebeling (University of Oslo)
* Tomaž Erjavec (Jožef Stefan Institute, Ljubljana)
* Andrew Hardie (Lancaster University, UK)
* Serge Heiden (ENS de Lyon)
* Ulrich Heid (University of Hildesheim)
* Nancy Ide (Vassar College / Brandeis University)
* Olha Kanishcheva (Heidelberg University)
* Gražina Korvel (Vilnius University)
* Natalia Kocyba (Samsung Poland)
* Michal Křen (Charles University, Prague)
* Anna Latusek (ICS PAS, Warsaw)
* Paul Rayson (Lancaster University)
* Laurent Romary (INRIA)
* Thomas Schmidt (University of Duisburg-Essen)
* Serge Sharoff (University of Leeds)
* Maria Shvedova (Kharkiv Polytechnic Institute / University of Jena)
* Irena Spasić (Cardiff University)
* Martin Wynne (University of Oxford)
Organising Committee
* Piotr Bański (IDS Mannheim)
* Dawn Knight (Cardiff University)
* Marc Kupietz (IDS Mannheim)
* Andreas Witt (IDS Mannheim)
* Alina Wróblewska (ICS PAS, Warsaw)
[1] LREC-2026 templates https://lrec2026.info/authors-kit/
[2] LREC START system https://softconf.com/lrec2026/CMLC2026/
[3] LREC-2026 conference https://lrec2026.info/
**Final Call for Papers (with extended deadline)**
Gaze4NLP - The Second Workshop on Gaze Data and Natural Language Processing
12 May 2026, Palma de Mallorca, Spain (co-located with LREC 2026)
https://gaze4nlp.github.io/Gaze4NLP2026/
The Second Workshop on Gaze Data and Natural Language Processing
(Gaze4NLP) invites papers of a theoretical or experimental nature
describing research methodologies that employ interdisciplinary
perspectives, including computer science and engineering as well as the
cognitive sciences, and identifying challenges to resolve at the
intersection of the two domains: eye tracking and NLP. Gaze4NLP aims to
bring together researchers working on eyes on text and on NLP, and to
establish bridges between them for identifying future avenues of
research.
Workshop webpage:
https://gaze4nlp.github.io/Gaze4NLP2026/
Important Dates
Workshop paper submission deadline: 23 February 2026
Workshop paper acceptance notification: 16 March 2026
Workshop paper camera-ready versions: 30 March 2026
Workshop date: 12 May 2026
All deadlines are 11:59PM UTC-12:00 (anywhere on Earth)
Topics for the workshop will include, but are not limited to:
- Investigating the pillars for bridging the gap between research on
eyes on text and NLP: studying how to expand research methodologies
by employing interdisciplinary perspectives, including computer
science and engineering as well as the cognitive sciences, and
identifying challenges and issues to resolve.
- Exploring new areas so that both fields benefit from each other
better than in the past, identifying novel domains of exploration for
further research.
- Discussing how to develop cognitively inspired models that align
human reading data with LLMs.
Submissions
We solicit regular workshop papers, which will be included in the
proceedings as archival publications. The length of the papers should
be between 4 and 8 pages (excluding references). The submissions
should not include any appendices. Accepted papers will be presented
in the form of either oral or poster presentations.
Please note that camera-ready papers are allowed an additional page of
content to address reviewer comments, and unlimited pages for
appendices. The workshop proceedings will be part of the ACL
anthology. Accepted papers will also be given an opportunity with an
extended version to be published as part of an edited book.
Submissions will be handled via the START Conference Manager.
- Submission link: https://softconf.com/lrec2026/Gaze4NLP/
All submissions should follow the LREC style guidelines. We strongly
recommend the use of the LaTeX style files, OpenDocument, or Microsoft
Word templates created for LREC: <https://lrec2026.info/authors-kit/>.
All papers must be anonymous, i.e., they must not reveal the author(s)
on the title page or through self-references. For example, "We
previously showed (Smith, 2020)" should be avoided; instead, use
citations such as "Smith (2020) previously showed".
LRE-Map and Sharing Language Resources
When submitting a paper from the START page, authors will be asked to
provide essential information about resources (in a broad sense, i.e.
also technologies, standards, evaluation kits, etc.) that have been
used for the work described in the paper or are a new result of your
research. Moreover, ELRA encourages all LREC authors to share the
described LRs (data, tools, services, etc.) to enable their reuse and
replicability of experiments (including evaluation ones).
Organization Committee:
Cengiz Acarturk, Jagiellonian University, Poland
Jamal Nasir, University of Galway, Ireland
Burcu Can, University of Stirling, Scotland, UK
Cagri Coltekin, University of Tubingen, Germany
Dear colleagues,
Do you care about improving language technologies beyond mainstream languages? Do you wonder how to collect data for low-resource languages? Or how to create a first translation system, and then adapt it efficiently to various downstream tasks?
We are pleased to announce an upcoming LREC2026 tutorial
“Low-Resource, High-Impact: Building Corpora for Inclusive Language Technologies.”
This tutorial is aimed at NLP practitioners, researchers, and developers working with multilingual and low-resource languages who are interested in building more equitable, inclusive, and socially impactful language technologies.
**Tutorial overview**
The tutorial covers the full lifecycle of NLP technology development for a language, including:
* Data collection and corpus creation (e.g., web crawling and annotation)
* Parallel sentence mining and machine translation
* Downstream applications such as text classification and multimodal reasoning
* Strategies for addressing data scarcity, cultural variance, and reproducibility
* Fair and community-informed development practices
**Who should attend**
* Researchers and practitioners in NLP and multilingual technologies
* Corpus builders and linguists working on underrepresented languages
* Developers interested in low-resource or inclusive NLP
* Students and early-career researchers
**Scope and highlights**
* Case studies spanning 10+ languages from diverse language families and geopolitical contexts
* Coverage of both digitally resource-rich and severely underrepresented languages
* Emphasis on hands-on methods and applied modeling frameworks
**Save the date and place**:
Saturday, 16 May 2026, morning session, Room 6
More information:
https://tum-nlp.github.io/low-resource-tutorial/
Stay tuned for our website – we will fully open-source the tutorial materials!
Additionally, we would like to get an overview of the practices and challenges researchers face when working with non-mainstream languages. If you are such a researcher, if you are working on a very surprising language, or if you just have experience to share on the topic, please fill in this form to participate in the interview: https://forms.gle/L81hpvZGfemyMjtX7
**Organisers**:
Ekaterina (Katya) Artemova, Toloka.ai
Laurie Burchell, Common Crawl Foundation
Daryna Dementieva, Technical University of Munich
Shu Okabe, Technical University of Munich
Mariya Shmatova, Toloka.ai
Pedro Ortiz Suarez, Common Crawl Foundation
See you at LREC!
Best regards,
Daryna Dementieva
On behalf of Tutorial Organisers
The next meeting of the Edge Hill Corpus Research Group will take place online (via MS Teams) on Friday 6 March 2026, 10:00-11:30 am (GMT<https://time.is/United_Kingdom>).
Topic: LLMs, Corpus Linguistics, and Language Learning
Speaker: Peter Crosthwaite<https://languages-cultures.uq.edu.au/profile/2845/peter-crosthwaite> (University of Queensland, Australia)
Title: Corpora, Prompts, and Pedagogy: Human-AI Text Comparison in Applied Linguistics
The abstract and registration link are here: https://sites.edgehill.ac.uk/crg/next
Attendance is free. Registration closes on Wednesday 4 March.
If you have problems registering, or have any questions, please email the organiser, Costas Gabrielatos (gabrielc(a)edgehill.ac.uk<mailto:gabrielc@edgehill.ac.uk>).
Dear all,
I would like to draw your attention to the position announced below. We are searching for a postdoc or an advanced PhD student whose research interests align with the goals of the project. The position is funded until the end of 2027.
Please note that the position requires very good German language proficiency.
Best regards,
Antje Schweitzer
--
Dr. Antje Schweitzer
IMS Uni Stuttgart
0711-685 81376
https://www.ims.uni-stuttgart.de/~schweitz
Stellenausschreibung
PostDoc-Stelle (m/w/d, E 13 TV-L, 100%) im Projekt MEKI
Bereich KI in der Berufsbildung
1. Mai 2026 bis 31. Dezember 2027
Universität Stuttgart
Institut für Maschinelle Sprachverarbeitung
Arbeitsgruppe Digitale Phonetik, Prof. Dr. Thang Vu, https://www.ims.uni-stuttgart.de/en/institute/team/Vu-00002/
According to current forecasts, Germany will be short seven million workers by 2035. This dramatic skills shortage is exacerbated by the fact that more and more young adults have no vocational qualification. In addition, up to 28% of trainees drop out of their vocational training prematurely. We want to change this!
In the MEKI project ("Mehr erreichen mit KI", "Achieving more with AI"), we are developing AI-supported, open-source learning software intended above all to help weaker trainees at vocational schools complete their training successfully. The project focuses on the industrial-technical sector but develops concepts that can be transferred to other areas. It is funded by the BMBFSFJ under the InnoVET PLUS funding guideline. In MEKI we work with four consortium partners (two Chambers of Industry and Commerce (IHKs), as well as TU München and LMU München). The IHKs and the LMU are primarily responsible for determining the requirements; the software is developed jointly by the TU and us, with our main responsibility being the AI features.
The project has an open postdoc position (m/f/d, E 13 TV-L, 100%), to be filled at the earliest possible date, ideally from May 2026, and running until the end of the project in December 2027. Part-time employment (with the same duration) is possible by arrangement.
We are looking for a motivated colleague who is keen to engage outside academic education and to tackle a societal problem with AI methods. The use of AI in education opens up innovative possibilities such as individualized and multimodal learning, or gamification to increase motivation.
Desired profile:
• very good German skills, spoken and written
• enthusiasm for teamwork
• very good communication skills
• conscientiousness and analytical thinking
• an excellent PhD (summa or magna cum laude) in natural language processing, computational linguistics, digital humanities, data science, software development, or a comparable field
• extensive experience using LLMs in the course of the PhD
• very good programming skills, especially in Python
• experience in multimodal generation
• experience with deep learning is a plus, but not strictly required
• experience with Git
• experience in frontend development, e.g. React/TypeScript
Tasks in the project:
• Design and implementation of new AI-based software features, especially in the area of multimodal learning
• Participation in the ongoing, project-accompanying evaluation of the software's AI features
• Joint testing of the software in user studies with project partners
• Documentation and publication of project results
• Active collaboration with the consortium partners and active participation in consortium-internal meetings
• Development of new ideas for the use of AI in education for future projects
What we offer:
• A diverse, committed team and a pleasant working atmosphere, with an excellent research environment in an international and interdisciplinary setting at the Institut für Maschinelle Sprachverarbeitung of the Universität Stuttgart
• Topical research themes with societal relevance in the field of education
• Opportunities for cooperation and contact with other interdisciplinary projects on deep learning and LLMs at the institute
• Support from student assistants
• Professional development opportunities in teaching
Application procedure
Please submit a single PDF file containing:
• a short cover letter outlining your research interests,
• a CV including a list of publications,
• contact details for one or two references.
The Universität Stuttgart stands for lived diversity and equal opportunity, as well as for the compatibility of work and family. In areas in which women are underrepresented, female applicants with equal aptitude, qualifications, and professional performance will be given preferential consideration. Severely disabled applicants will be given priority in the case of equal qualification. We expressly welcome applications from people of other nationalities or with a migration background.
Applications should be sent to: Antje Schweitzer, <mailto:antje.schweitzer@ims.uni-stuttgart.de>
Applications received by 28 February 2026 will receive full consideration. The position remains open until filled.
About the environment
The Universität Stuttgart is a technically oriented university. It is particularly known for engineering and related subjects, and its computer science department is highly regarded both nationally and internationally.
The Institut für Maschinelle Sprachverarbeitung (IMS), part of the Faculty of Computer Science and Electrical Engineering, is one of the largest academic research institutes for natural language processing in Germany. It bridges fundamental research on language and the development of language technologies for society. With six professorships, more than 50 researchers, and 200 students in the B.Sc. and M.Sc. Computational Linguistics programs, the IMS is one of the largest computational linguistics sites in Germany and Europe.
Stuttgart is known for its strong economy and varied cultural scene, all at a manageable size. Its setting amidst numerous hills and vineyards is also worth seeing. Stuttgart is a lively city with an active bar and club scene and well-developed public transport. By train, Stuttgart is well connected to many other interesting places, for example Munich and Cologne (~2 hours), Paris (~3.5 hours), Berlin (~5.5 hours), Strasbourg (1 hour), or Lake Constance (2 hours).
InnoVET PLUS funding program of the BMBFSFJ:
https://www.inno-vet.de/innovet/de/innovet_plus/innovet-plus_node.html
Project description at the consortium coordinator, IHK Reutlingen:
https://www.reutlingen.ihk.de/ausbildung/azubis-hier-lang/digitale-lernange…
Institut für Maschinelle Sprachverarbeitung:
https://www.ims.uni-stuttgart.de
Digital Phonetics:
https://www.ims.uni-stuttgart.de/institut/arbeitsgruppen/dp/
*** First Call for Workshop Proposals ***
37th IEEE International Symposium on Software Reliability Engineering
(ISSRE 2026)
October 20-23, 2026, 5* St. Raphael Resort and Marina
Limassol, Cyprus
https://cyprusconferences.org/issre2026/
Objectives
ISSRE strives to be the conference that appeals to both researchers and practitioners. To
that end, we invite proposals for workshops to co-locate with the Symposium and provide
additional opportunities for collaborating and exchanging information. The workshops
aim at discussing research developments and challenges at an early stage. ISSRE welcomes
workshops that explore new ways to provide and assess software reliability, safety, and
security. We also seek workshops that deal with the provision of reliable, safe, and secure
software and systems in fast-growing, transformative application domains. Appropriately
defined workshop proposals have the following characteristics:
• They offer researchers a forum to exchange and discuss scientific and engineering ideas
at an early stage before maturation that would warrant conference or journal publication.
• They attract practitioners and researchers to working sessions to discuss and make
progress toward solutions to current and future problems in engineering high assurance
software and systems.
• They focus on collaborative discussions and information sharing between researchers
and industry practitioners.
Recurring Workshops
Workshops affiliated with ISSRE in previous years that were well organized and well
attended are pre-approved. Their organizers do not need to submit a new workshop
proposal; they are kindly asked to inform the workshop chairs that the workshop will
return to ISSRE in 2026.
Topics of Interest
Topics of interest include, but are not limited to, development and analysis methods and
models throughout the software development lifecycle:
• Primary dependability attributes (e.g., security, safety, maintainability) impacting software
reliability
• Secondary dependability attributes (e.g., survivability, resilience, robustness) impacting
software reliability
• Reliability threats, i.e. faults (defects, bugs, etc.), errors, failures
• Reliability means (fault prevention, fault removal, fault tolerance, fault forecasting)
• Machine Learning and AI-based approaches for enhancing reliability of systems
• Reliability, threats, and biases of AI-based software systems, in particular Large
Language Models
• Data-related reliability and vulnerability issues and risks
• Learning-based models of software systems, threats, and reliability estimates
• Automated debugging and program repair
• Metrics, measurements and threat estimation for reliability prediction and the interplay
with safety/security
• Reliability of software services
• Reliability of open source software
• Reliability in network softwarization
• Reliability of Software as a Service (SaaS)
• Reliability of software dealing with Big Data
• Reliability of model-based and auto-generated software
• Reliability of software in artificial intelligence based software systems
• Reliability of software within specific types of systems (e.g., autonomous and adaptive,
green and sustainable, mobile systems)
• Reliability of software within specific technological spaces (e.g., Internet of Things,
Cloud, 5G/6G, edge-to-cloud computing, Semantic Web/Web 3.0, Virtualization,
Blockchain)
• Normative/regulatory/ethical spaces pertaining to software reliability
• Societal aspects of software reliability
Proposal Submissions
Workshop proposals should include information about the proposed organizing committee
and address the following questions:
• Workshop length: Half day or one full day
• Workshop style: papers, panels, posters, workgroups
• Outline of themes and goals of the workshop
• How participation will be solicited (call for workshop papers, invitation only, etc.)
• Desired/estimated number of participants
• Organizing committee members and their past experience
Submissions must be made via EasyChair, selecting the appropriate track for workshop
proposals. The submission link is:
https://easychair.org/conferences?conf=issre2026
Proposal Evaluation
Workshop proposals will be evaluated by the ISSRE 2026 Organizing Committee. The
criteria include the alignment with the ISSRE charter, relevance to the larger ISSRE
community, and the strength and experience of the organizing team.
Logistics
The conference will be held in person; all presenters of accepted papers are expected to
attend the conference physically in Limassol, Cyprus.
Important Dates (AoE)
• Workshop proposal deadline: May 14, 2026
• Workshop proposal notification: May 21, 2026
• Workshop paper submission deadline: July 20, 2026
(NOTE: This date is only indicative – please refer to individual workshop webpages for
information about deadlines)
• Workshop paper notification to authors: August 10, 2026
• Camera ready papers: August 17, 2026
Organisation
General Chairs
• Leonardo Mariani, University of Milano - Bicocca, Italy
• George A. Papadopoulos, University of Cyprus, Cyprus
Program Coordinator
• Roberto Natella, GSSI, Italy
Research Program Committee Chairs
• Domenico Cotroneo, UNC Charlotte, USA
• Jie M. Zhang, King's College London, UK
Industry Program Chairs
• Jinyang Liu, Bytedance, USA
• Sigrid Eldh, Ericsson AB, Sweden
Workshop Chairs
• Georgia Kapitsaki, University of Cyprus, Cyprus
• August Shi, The University of Texas at Austin, USA
Doctoral Symposium Chairs
• Stefan Winter, LMU Munich, Germany
• Lili Wei, McGill University, Canada
Fast Abstract Chairs
• Luigi Lavazza, University of Insubria, Italy
• Yintong Huo, SMU, Singapore
JIC2 Chair
• Helene Waeselynck, LAAS-CNRS, France
Publicity Chairs
• Allison K. Sullivan, The University of Texas at Arlington, USA
• Jose D'Abruzzo Pereira, University of Coimbra, Portugal
Publication Chairs
• Sherlock Licorish, Otago Business School, New Zealand
• Maria Teresa Rossi, GSSI, Italy
Artifact Evaluation Chairs
• Naghmeh Ivaki, University of Coimbra, Portugal
• Fumio Machida, University of Tsukuba, Japan
Diversity and Inclusion Chair
• Eleni Constantinou, University of Cyprus, Cyprus
Financial Chair
• Costas Pattichis, University of Cyprus, Cyprus
Web Chairs
• Michalis Ioannides, Easy Conferences LTD
• Elena Masserini, University of Milano - Bicocca, Italy
Registration Chair
• Easy Conferences LTD
LT4HALA 2026 -- deadline extension -- The Fourth Workshop on Language Technologies for Historical and Ancient Languages @ LREC 2026
The Fourth Workshop on Language Technologies for Historical and Ancient Languages (LT4HALA 2026) will be held on Monday, May 11 in Palma, Mallorca (Spain), co-located with LREC 2026. This one-day workshop seeks to bring together scholars who are developing and/or using Language Technologies (LTs) for historically attested languages, so as to foster cross-fertilization between the Computational Linguistics community and the areas of the Humanities dealing with historical linguistic data, e.g. historians, philologists, linguists, archaeologists, and literary scholars.
* Submission deadline: 17th February 2026 **NEW DEADLINE: 23rd February 2026**
Website: https://circse.github.io/LT4HALA/2026/
Submission page: https://softconf.com/lrec2026/LT4HALA2026/
!!! DEADLINE APPROACHING !!!
ICMI 2026 CALL FOR SPECIAL SESSIONS
============================================
5-9 October 2026, Naples - Italy
https://icmi.acm.org/2026/
============================================
The ICMI 2026 organizing committee invites proposals for Special Sessions to be held during the 28th International Conference on Multimodal Interaction (ICMI 2026) in Naples, Italy. ICMI is the premier international forum that brings together multimodal artificial intelligence (AI) and social interaction research. Multimodal AI encompasses technical challenges in machine learning and computational modeling, such as representations, fusion, data, and systems.
* Important Dates
Special Session Proposal Submission Deadline February 18th, 2026
Notification of Acceptance February 27th, 2026
* Special Session Proposals
Special sessions provide an opportunity to explore emerging topics within multimodal interaction and are a key part of this year’s conference program. We are seeking proposals that will enhance the conference’s diversity and offer valuable insights into the conference theme of “Context and Cultural Awareness for Multimodal Interaction”.
Prospective special session organizers are invited to submit proposals via icmi2026-specialsessions-chairs@acm.org. Special Session Proposals should include the following information:
- Title: A title that shows the relevance of the session for the ICMI community and the novelty of the chosen topic.
- Aims and scope: Explain why the chosen topic is novel and relevant, with the potential to contribute to the growth of the ICMI community and/or is aligned with this year’s conference theme.
- Tentative speakers: A list of prospective contributing authors with tentative titles for their contributions. Special Sessions are normally expected to have 4 to 6 papers. While there is some flexibility for invited keynote and industry talks, the majority of the special session should consist of peer-reviewed papers.
- Organizers and bios: A short bio of the session organizers including their experience in the topic of the Special Session.
The primary criteria in evaluating the special session will include the relevance of the topic, the quality and track record of the proposers and speakers, diversity, the coherence of the proposal, and the expected value it will bring to the conference. We prioritize proposals that meet these criteria and welcome submissions from underrepresented communities.
The organizers of accepted special sessions will actively participate in the high-quality peer review process for submitted papers, following ICMI standards for the main track papers. They will serve as Area Chairs (ACs) for their proposed sessions, being responsible for tasks such as assigning reviewers and facilitating discussion phases. Papers submitted to an accepted Special Session will follow the same review process as the main conference track papers, including the same submission system (PCS), formatting guidelines (short or long papers), notification dates, and a rigorous peer review process.
For any questions and further information about the Special Sessions, please email icmi2026-specialsessions-chairs@acm.org or check https://icmi.acm.org/2026/special-sessions/.