We invite you to participate in our survey “Investigating the Use of
Large Language Models in Academic Research and Coding (LLM-ARCo)”🤖📚
The goal of this study is to better understand how researchers use large
language models in their academic work. The survey takes approximately
15 minutes⏱️. All responses are anonymous🔒, and the data will be used
only for research purposes.
As a small thank-you, 10 participants will be randomly selected🎁 to
receive a €50 Amazon voucher💶.
You can participate here:👉https://www.surveymonkey.com/r/RYBRXQL
Please feel free to share this invitation with colleagues who might be
interested 🔁👩🔬👨💻. Thank you very much for supporting our research! ❤️
On behalf of Dr. Younes,
Ansgar
** Call for Papers **
1st International Workshop on Science-Related Discourse on the Web
(SDW'26)
We are excited to announce the 1st International Workshop on
Science-Related Discourse on the Web (SDW'26), co-located with the ACM Web
Science Conference 2026, 26-29 May 2026, Braunschweig, Germany.
In recent years, a growing number of people have been engaging in
science-related discussions on online platforms. This typically informal
and sometimes decontextualized discourse may result in
oversimplification, misinterpretation or instrumentalization of
scientific knowledge. Analyzing such discourse is challenging: it
differs from general online talk, spans multiple platforms, and requires
interdisciplinary methods.
This workshop provides a venue for interdisciplinary exchange on
computational and social-scientific approaches and resources for
platform-specific and cross-platform analysis of science-related
communication.
***Workshop Themes***
- Methodological challenges related to the analysis of
science-related discourse on the Web, including data acquisition and
processing
- Insights concerning science-related discourse on the Web, e.g., its
characteristics, evolution, and impact
Topics of interest within these themes include, but are not limited to
the following:
- (Cross-platform) crawling approaches for science-related discourse
- Methods for the detection and filtering of science-related online
discourse data
- Issues and methods related to tracking/linking of users and
messages within and across different platforms (e.g., X, Bluesky,
Mastodon, Threads)
- Practical/legal/ethical issues concerning data access
- Detection of arguments, claims, evidence, sources, or stances in
science-related online discourse data
- Classification of scientific claims w.r.t. verifiability,
credibility, or veracity (including distinguishing different types of
misinformation such as oversimplification)
- Analysis of science-related discourse on social media platforms and
in online news
- Assessment of the expertise level of social media users (e.g.,
scientist, expert, lay person)
- Classification of sources w.r.t. credibility, political leaning,
and other biases
- Usage of social media platforms (e.g., X, Bluesky, Mastodon,
Threads) by different user groups
- Usage of memes in science-related discourse on social media
- Analysis of LLM-generated text in science-related discourse
- Usage of preprints and open access publications in science-related
Web discourse
***Submissions***
We invite contributions from Computer Science, Computational Social
Science, Communication Science, Science Communication, Media and
Communication Studies, Information Science, Computational Linguistics
and related fields. We accept both technical and non-technical
submissions, including research papers (completed or in progress and
unpublished work), annotated datasets, questionnaires, novel data
collections, tools, and other resources. Emphasis is placed on
discussion and the exchange of ideas; we therefore welcome submissions of
work in progress. Submission formats:
- full papers (6 - 10 pages, including references, appendices, etc.),
- short papers (up to 5 pages including references, appendices,
etc.),
- extended abstracts for posters and demos (up to 2 pages including
references, appendices, etc.).
All accepted contributions except extended abstracts will be part of the
WebSci'26 workshop proceedings, which will be published in a companion
volume in the ACM Digital Library.
Submission of all papers is electronic, using the EasyChair conference
management system: https://easychair.org/conferences/?conf=sdw26
Please find more information on the workshop website:
https://sdw2026.wordpress.com/dates-and-submission/
***Important Dates***
Papers due: March 24, 2026
Paper notifications: April 3, 2026
Paper camera-ready versions due: April 14, 2026
Workshop: May 26, 2026
*** Third Call for Research Papers ***
International Conference on Software and Systems Reuse, Product Lines,
and Configuration (VARIABILITY 2026)
29 September - 2 October 2026, 5* St. Raphael Resort and Marina,
Limassol, Cyprus
https://conf.researchr.org/home/variability-2026
The International Conference on Software and Systems Reuse, Product Lines, and
Configuration (VARIABILITY 2026) invites high-quality contributions from researchers and
practitioners in software engineering, systems engineering, and related disciplines
focussing on a broad spectrum of methods, concepts, and tools for variability.
VARIABILITY aims to be the premier forum for the exchange of ideas, experiences, and
results in all aspects of software and systems variability management, reuse, software
configuration, and customization.
As software and systems become increasingly configurable, reusable, and adaptable,
managing their variability across all lifecycle phases is more critical—and more challenging
—than ever. VARIABILITY 2026 seeks to bring together the diverse communities that
address these challenges from theoretical, technical, and practical perspectives.
VARIABILITY results from a merge of three prominent conferences focussing on software
and systems variability, configuration and reuse: SPLC (the International Systems and
Software Product Line Conference, 29 successful editions), VaMoS (the International
Working Conference on Variability Modelling of Software-Intensive Systems, 19 successful
editions), and ICSR (the International Conference on Systems and Software Reuse, 22
successful editions).
VARIABILITY is designed as an open conference. It welcomes new fields of
variability-intensive research, such as artificial intelligence and hybrid
software-hardware systems.
For this first edition of VARIABILITY, we strive to continue the success of the predecessor
conferences ICSR, SPLC, and VaMoS by welcoming high-quality submissions for the
research track in numerous closely related areas, such as systems and software product
lines, systems and software reuse, configurable systems and software, product
configuration, and systems and software variability. We will award the best research paper
and the best artifact paper.
Topics of Interest
We invite contributions on variability management, reuse, and configuration across all
phases of the software and systems lifecycle. The topics of interest include, but are not
limited to:
Requirements & Domain Engineering
• Domain analysis and variability modeling
• Decision modeling and support
• Customization and personalization specification
• Requirements variability and traceability
Architecture & Design
• Variability-aware software architectures
• Architecture-centric product line engineering
• Model-driven engineering (MDE)
• Multi-product lines, program families, product lines of product lines, software
ecosystems
Implementation & Code Generation
• Generative programming and code synthesis
• Modularization techniques for reusable code
• Programming languages and frameworks for variability
• Open-source strategies for software reuse
Testing, Verification & Quality Assurance
• Testing and analysis of configurable systems
• Safety and security in variable systems
• Formal Methods for Software Product Lines
• Non-functional properties: quality-aware analysis, quality-driven configuration
• Reuse in testing, verification, and quality assurance
Evolution, Maintenance & Operation
• Refactoring and restructuring of configurable systems
• Reverse engineering, variability mining, and refactoring
• Runtime variability and dynamic (software) product lines
• Maintenance strategies for large-scale reused systems
• Variability in DevOps and CI/CD pipelines
AI and Data-Driven Methods
• Machine learning for variability management
• AI-assisted product configuration
• Data and repository mining from product lines and configuration histories
• Recommendation systems for reuse and customization
Publication of Proceedings
Accepted papers will be published in the VARIABILITY 2026 proceedings by Springer in the
LNCS series.
Submission Guidelines
Paper Types
We invite the following types of submissions:
• Full Papers (up to 18 pages excluding references): Research papers must present
original, unpublished work with validated results through empirical evaluation, formal
analysis, or implementation-based experiments. Submissions must clearly articulate the
problem, its relevance, the proposed contribution, and validation results.
• Short Papers (6 - 8 pages excluding references): Short papers present early-stage
research, novel ideas, or conceptual proposals that are not yet fully developed or
validated but offer promising directions. These papers should articulate the vision,
motivation, and potential impact.
Formatting
Papers must use the Springer LNCS template according to:
https://www.springer.com/gp/computer-science/lncs/conference-proceedings-gu…
Springer provides author guidelines that should be consulted for further details:
https://resource-preview-cms.springernature.com/springer-cms/rest/v1/conten…
Submission Link
Submissions should be made via EasyChair, selecting the research track:
https://easychair.org/conferences?conf=variability2026
Paper Originality, Double-Anonymous Policy, Reviewing
All papers must be original and not under review elsewhere. Submissions will be double-
anonymous and reviewed by at least three experts. Submissions will be evaluated based
on their novelty, relevance, rigor, transparency, and presentation. Authors of submissions
to the first deadline might be invited to submit a revision of their papers to the second
deadline, which will be reviewed as a revision.
Revisions
Research-track papers can be submitted to the first or second cycle. In the first cycle,
papers can receive the following decisions: accept, revision, or reject. Revision means that
the reviewers believe that the paper has potential, but that its quality or contribution is not
yet ready for publication. Such papers are offered lightweight shepherding by a community
member, who is not necessarily a PC member or reviewer. Revised papers should be
submitted to the second cycle together with a response letter, explaining how the reviewer
comments were addressed. They are then reviewed by the same PC members. Papers
rejected in the first cycle can be resubmitted in the second cycle, but need to contain an
appendix “Changes to First-Cycle Submission” at the end of the PDF (after references,
regardless of the page limit) that lists the major changes in bullet-point format.
Best Paper Awards
Springer will sponsor the awards for best papers with an overall amount of €1000.
Journal Special Issue
Selected accepted papers will be invited to submit extended versions with at least 30%
additional and original material, to be published in a special issue in a reputable Software
Engineering journal (currently under negotiation).
Important Dates (AoE)
• Paper Submission Deadline: 2 April 2026
• Notification of Acceptance: 1 June 2026
• Camera-Ready Deadline: 15 July 2026
• Author Registration: 15 July 2026
Organisation
General Chairs
• George A. Papadopoulos, University of Cyprus, Cyprus
• Gilles Perrouin, FNRS & University of Namur, Belgium
Research Track Chairs
• Thorsten Berger, Ruhr University Bochum, Germany
• Ina Schaefer, KIT, Germany
Industry Track Chairs
• Shaukat Ali, Simula Research Lab and Oslo Metropolitan University, Norway
• Martin Becker, Fraunhofer IESE, Germany
Journal First Track Chairs
• Mathieu Acher, University Rennes, Inria, CNRS, IRISA, France
• Xhevahire Tërnava, LTCI, Télécom Paris, Institut Polytechnique de Paris, France
Doctoral Symposium Track Chairs
• Rick Rabiser, LIT CPS, Johannes Kepler University Linz, Austria
• Iris Reinhartz-Berger, University of Haifa, Israel
Demos and Tools Track Chairs
• Sandra Greiner, University of Southern Denmark, Denmark
• Leopoldo Teixeira, Federal University of Pernambuco, Brazil
Projects Showcase Chairs
• Daniel Strüber, Chalmers, University of Gothenburg, Radboud University, Sweden
• Dalila Tamzalit, Nantes Université, France
Hall of Fame Chairs
• Martin Becker, Fraunhofer IESE, Germany
• Goetz Botterweck, Lero - The Irish Software Research Centre and University of Limerick, Ireland
• Natsuko Noda, Shibaura Institute of Technology, Japan
Workshops Chairs
• Lidia Fuentes, Universidad de Malaga, Spain
• Malte Lochau, University of Siegen, Germany
Tutorials Chairs
• Loek Cleophas, Eindhoven University of Technology and Stellenbosch University, The Netherlands
• Mahsa Varshosaz, IT University of Copenhagen, Denmark
Proceedings Chair
• Sophie Fortz, King's College London, UK
Publicity Chairs
• Wesley Assunção, North Carolina State University, USA
• Kentaro Yoshimura, Hitachi Ltd, Japan
Local Organiser and Finance Chair
• George A. Papadopoulos, University of Cyprus, Cyprus
[Apologies for multiple postings]
The Early-Bird Deadline has been extended to March 15, 2026 (23:59 AoE)
All participants are encouraged to take advantage of the extended
deadline and register.
Main Conference authors must submit their camera-ready paper(s) and
complete their registration for LREC 2026 by this date.
We also encourage on-site participants to book their accommodation in
Palma as soon as possible.
Two partner hotel options are currently available for LREC participants.
1. Meliá Hotels: https://events.melia.com/en/events/palma-port/LREC-2026
2. Aubamar Palma Resort: https://www.aubamar.com/en/?promocode=LREC2026
Please consider using these options when arranging your stay.
LREC 2026 Management Chairs
---
Fees: https://lrec2026.info/registration-fees/
Registration Policy: https://lrec2026.info/registration-policy/
*LREC 2026 Contacts*
* Invitation and visa letters: german.rigau(a)ehu.es
* ELRA membership, payment, invoices: elrasecretariat(a)lrec2026.info
* Scientific programme and Main conference papers:
lrec2026-pcs(a)googlegroups.com
* Workshops: lrec2026-workshop-chairs(a)googlegroups.com
* Tutorials: lrec2026-tutorial-chairs(a)googlegroups.com
General contact: info(a)lrec2026.info
https://lrec2026.info
Follow us on LinkedIn
Final Call for papers
Final Call for Papers: The Seventh Workshop on Privacy in Natural Language Processing (PrivateNLP) co-located with ACL 2026, San Diego, July 2-7, 2026
Submission deadline extension
Website: https://sites.google.com/view/privatenlp2026/
PrivateNLP invites quality research contributions in different formats:
Original research papers (long and short)
Position and opinion papers
All submissions will undergo a double-blind review process, and accepted submissions will be presented at the workshop.
Topics of interest include but are not limited to:
Privacy-preserving machine learning for language models
Generating privacy-preserving test sets
Data extraction attacks on NLP systems (e.g. membership inference attacks)
Differential privacy for NLP models and data
Generating Differentially private derived data
NLP, privacy and regulatory compliance
Private Generative Adversarial Networks
Privacy in Active Learning and Crowdsourcing
Privacy and Federated Learning in NLP
User perceptions on privatized personal data
Auditing provenance in language models
Continual learning under privacy constraints
NLP for studying privacy policies and other texts about privacy
Ethical ramifications of AI/NLP in support of usable privacy
Homomorphic encryption for language models
Machine unlearning methods for language models
Auditing privacy-preserving methods applied to NLP models and data
Memorization of private information by language models
Important Dates
Submission deadline: Extended to March 19, 2026
Fast-track submission deadline: March 24, 2026
Acceptance notification: April 28, 2026
Camera-ready versions: May 12, 2026
Submission deadline for presenting findings papers: May 28, 2026
Workshop: July 2 or 3, 2026
All deadlines 23:59 Anywhere on Earth
Submission Instructions
Two types of submissions are invited: full papers and short papers. Please follow the ACL submission policies.
Full papers should not exceed eight (8) pages of text, plus unlimited references. Final versions of full papers will be given one additional page of content (up to 9 pages) so that reviewers' comments can be taken into account.
Short papers may consist of up to four (4) pages of content, plus unlimited references. Upon acceptance, short papers will still be given up to five (5) content pages in the proceedings.
We also ask authors to include a limitation section and broader impact statement, following guidelines from the main conference.
We will be using OpenReview for submissions:
https://openreview.net/group?id=aclweb.org/ACL/2026/Workshop/PrivateNLP
Please note OpenReview's moderation policy for newly created profiles:
New profiles created without an institutional email will go through a moderation process that can take up to two weeks.
New profiles created with an institutional email will be activated automatically.
No anonymity period will be required for papers submitted to the workshop, per the latest updates to the ACL anonymity policy. However, submissions must still remain fully anonymized.
Fast-Track Submission
If your paper has been reviewed by ACL, EMNLP, EACL, or ARR and the average rating is higher than 2.5 (either average soundness or excitement score), the paper is qualified to be submitted to the fast-track. In the appendix, please include the reviews and a short statement discussing what parts of the paper have been revised.
Link to fast-track submissions: Link TBD
Please upload the following 3 documents in a single ZIP file:
ARR reviews (including discussions and the meta-review) as a single PDF (e.g. printing the review webpage to PDF)
The submitted anonymous paper as PDF
A plain text file with the corresponding author's name and contact email
Dual Submission Policy
In addition to previously unpublished work, we invite papers on relevant topics which have been submitted to alternative venues (such as other NLP or ML conferences). Please follow the double-submission policy from ACL. Accepted cross-submissions will be presented as posters, with an indication of the original venue. Selection of cross-submissions will be determined solely by the organizing committee.
Non-Archival Option
There are no formatting or page restrictions for non-archival submissions. The accepted papers to the non-archival track will be displayed on the workshop website, but will NOT be included in the workshop proceedings or otherwise archived.
Workshop organizers
Ivan Habernal (ruhr-uni-bochum.de)
Sepideh Ghanavati (maine.edu)
Sara Haghighi (maine.edu)
Krithika Ramesh (jhu.edu)
Timour Igamberdiev (univie.ac.at)
Shomir Wilson (psu.edu)
Contact
privatenlp26-orga(a)lists.ruhr-uni-bochum.de
Join us at the 48th European Conference on Information Retrieval (ECIR 2026), Europe's premier forum for research in Information Retrieval (IR). ECIR brings together researchers, practitioners, and industry experts to share new, unpublished, and innovative research that is shaping the future of IR.
29 March – 2 April 2026
Delft, The Netherlands
ECIR 2026 will take place as a fully in-person conference, fostering vibrant discussions and meaningful networking opportunities. The programme includes tutorials on Sunday, 29 March, the main conference from 30 March to 1 April, and a dedicated workshop day on Thursday, 2 April.
What to Expect at ECIR 2026
Multiple research tracks covering recent advances in Information Retrieval
Three distinguished keynote speakers offering inspiring insights
A dedicated Industry Day bridging research and real-world innovation
Workshops, tutorials, posters, and rich networking opportunities
A welcoming environment with a strong tradition of supporting early-career researchers, including postgraduate students and postdoctoral fellows
Whether you are presenting new research, engaging in thought-provoking discussions, or connecting with leading experts in academia and industry, ECIR 2026 promises an inspiring and dynamic experience.
Explore the full program schedule:
https://ecir2026.eu/programme/program-schedule
Learn more about the conference, tracks, keynote speakers, and Industry Day:
https://ecir2026.eu/
Register now to secure your place:
https://ecir2026.dryfta.com/registration-page
We look forward to welcoming you to Delft as we explore the latest advancements and challenges in Information Retrieval at ECIR 2026!
Colleagues,
I have updated the corpuslab.com website with a simple concordance/collocation interface. You can search IMRDC sections of research articles in Arts, Economics, Engineering, Psychology and Social Science. It is not heavy-duty text analysis; you get 100 hits. You can sort by clicking the left or right context and you can highlight hedges, boosters, connectives, and reporting verbs. It is meant to be a gentle introduction to text analysis, an aid to student writers.
There are also details of quite advanced disciplinary writing books written by me and Claude. As noted in those, many disciplines don’t follow the IMRDC structure, but use alternative labels or thematic headings. I am assuming, in the concordance search described above, the language features found in the Method section will still be present in articles that use an alternative heading, such as Experiment 1. I should perhaps add some explicit statement about the typical macrostructure found in these disciplines.
Let me know if you have any questions, find any problems etc.
Michael
---
Michael Barlow. www.michaelbarlow.com<http://www.michaelbarlow.com/>
Assoc. Prof. Applied Linguistics, University of Auckland
Recent publications
M. Barlow. 2026. Writing the Social Science Research Article.<https://www.amazon.com/dp/0940753367/ref=sr_1_1_so_ABIS_BOOK?crid=25EQBROGY…> Athelstan.
M. Barlow. 2026. Writing the Humanities Research Article.<https://www.amazon.com/s?k=writing+the+humanities+research+article&i=stripb…> Athelstan.
M. Barlow. 2026. Writing the Business Research Article<https://www.amazon.com/dp/0940753405/ref=sr_1_2?crid=2XO1YCSNL8ZT&dib=eyJ2I…>. Athelstan.
M. Barlow. 2026. Writing the Economics Research Article<https://www.amazon.com/dp/0940753383/ref=sr_1_1?crid=2XO1YCSNL8ZT&dib=eyJ2I…>. Athelstan.
M. Barlow 2023. Ten Lectures on Corpora and Cognitive Linguistics<https://brill.com/display/title/61682>. Brill
Le, Pham & Barlow 2023. The Academic Discourse of Mechanical Engineering<https://benjamins.com/catalog/scl.107>. Benjamins
Dear colleagues,
We invite submissions to the First Workshop on Evaluating LLMs for Specialized Domains (Eval4SD), co-located with KONVENS 2026 in Hamburg, Germany (September 14th - 17th).
The workshop focuses on the evaluation of large language models in specialized domains such as—but not limited to—law, medicine, science, finance, digital humanities, social sciences, education, and politics. In this space, we have identified three core areas detailed below: LLM Benchmarking, Domain Research Replication, and Evaluation Methodology. Work that fits within the general theme but not any of the focus areas is also welcome!
- **LLM Benchmarking:** We invite contributions that evaluate multiple models, datasets, inference methods, or prompting techniques on existing data or introduce novel, specialized benchmarking datasets. Papers in this direction may seek to answer questions like: ‘Which model should I use for my social science project?’ ‘Are open-weight models inferior for specialized tasks?’, or ‘Given a limited budget, what is my best choice of LLM for my digital humanities question?’ We especially encourage submissions that evaluate performance in low- and medium-resource languages.
- **Domain Research Replication:** Does information automatically extracted using a different model or a slightly altered approach still support the same domain conclusions? We invite submissions that attempt to replicate existing domain research using a tweaked LLM setup. For us, testing open-weight models is especially important in light of replicability. We are excited to see how robust domain research is to adaptations of the automation setups, from prompting to model weights and training data.
- **Metrics and Evaluation Methodology:** We invite submissions on methodology for assessing LLM outputs in complex tasks. This includes work on LLM judge setups or novel rule-based metrics for specialized tasks.
We allow submissions in two categories:
- **Long Papers (up to 8 pages + references):** Complete research contributions with novel findings, experimental results, and thorough analysis. Suitable for mature work on LLM evaluation methodology or new benchmark proposals.
- **Short & Position Papers (up to 4 pages + references):** Preliminary results, position papers, system descriptions, and focused contributions. Great for provocative arguments or narrowly scoped empirical studies.
Submissions follow the ACL template; reviews are double-blind and are conducted via OpenReview (https://openreview.net/group?id=GSCL.org/KONVENS/2026/Workshop/Eval4SD).
Additionally, we welcome non-archival submissions to present recently published work or seek feedback on work-in-progress without violating dual-submission policies. Accepted papers will be presented at the workshop, but will not be included in the official proceedings.
Important dates:
- Submission deadline: July 03, 2026 (23:59 CEST)
- Notification of acceptance: July 31, 2026
- Camera-ready deadline: August 15, 2026
- Workshop date: co-located with KONVENS (14th - 17th), exact day TBA
Website: https://eval4sd.github.io/
Contact: eval4sd-organizers(a)googlegroups.com
[Apologies for cross-postings]
Call for Papers
First International Workshop on Extraction from Triplet
Text-Table-Knowledge Graph and associated Challenge
https://ecladatta.github.io/triplet2026/
in conjunction with the 23rd European Semantic Web Conference (ESWC 2026)
https://2026.eswc-conferences.org/, Dubrovnik, Croatia
Important dates:
- **Submission deadline (extended)**: 13 March, 2026 (11:59pm, AoE)
- **Notifications**: 31 March, 2026
- **Challenge registration deadline**: 15 March, 2026
- **Challenge results submission**: 10 April, 2026
- **Camera-ready deadline**: 15 April, 2026 (11:59pm, AoE)
- **Workshop**: Sunday 10 May OR Monday 11 May 2026
Motivation:
Understanding information spread across texts and tables is essential for
tasks such as question answering and fact checking. Existing benchmarks
primarily deal with semantic table interpretation or reasoning over
tables for question answering, leaving a gap in evaluating models that
integrate tabular and textual information, perform joint information
extraction across modalities, or can automatically detect
inconsistencies between modalities.
This workshop aims to provide a forum for exchanging ideas between the
NLP community working on open information extraction and the vibrant
Semantic Web community working on the core challenge of matching tabular
data to Knowledge Graphs, on populating knowledge graphs using texts and
on reasoning across text, tabular data and knowledge graphs. The
workshop also targets researchers focusing on the intersection of
learning over structured data and information retrieval, for example, in
retrieval augmented generation (RAG) and question answering (QA)
systems. Hence, the goal of the workshop is to connect researchers and
trigger collaboration opportunities by bringing together views from the
Semantic Web, NLP, database, and IR disciplines.
Scope:
The topics of interest include but are not limited to:
- Semantic Table Interpretation
- Automated Tabular Data Understanding
- Using Large Language Models (LLMs) for Information Extraction
- Generative Models and LLMs for Structured Data
- Knowledge Graph Construction and Completion with Tabular Data and Texts
- Analysis of Tabular Data on the Web (Web Tables)
- Benchmarking and Evaluation Frameworks for Joint Text-Table Data Analysis
- Applications (e.g. data search, fact-checking, Question-Answering, KG
alignment)
Submission Guidelines:
We invite two types of submissions:
1. Full research papers (12-15 pages) including references and appendices
2. Challenge papers (6-8 pages) including references and appendices
All submissions should be formatted in the CEUR layout format,
https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-w…
This workshop is double-blind and non-archival. Submissions are managed
through EasyChair at
https://easychair.org/conferences/?conf=triplet2026. All accepted papers
will be presented as posters or as oral talks.
**TRIPLET Challenge:**
In recent years, the research community has shown increasing interest in
the joint understanding of text and tabular data, often for tasks such
as question answering or fact checking, where evidence can be found in
both texts and tables. Hence, various benchmarks have been developed
for jointly querying tabular data and textual documents in domains such
as finance, scientific publications, and open domain. While benchmarks
for triple extraction from text for Knowledge Graph construction and
semantic annotation of tabular data exist in the community, there
remains a gap in benchmarks and tasks that specifically address the
joint extraction of triples from text and tables by leveraging
complementary clues across these different modalities.
The TRIPLET 2026 challenge proposes three sub-tasks and benchmarks
for understanding the complementarity between tables, texts, and
knowledge graphs, and in particular for developing a joint knowledge
extraction and reconciliation process.
#Sub-Task 1: Assessing the Relatedness Between Tables and Textual Passages
The goal of this task is to assess the relatedness between tables and
textual passages (within documents and across documents). For this
purpose, we have constructed LATTE (Linking Across Table and Text for
Relatedness Evaluation), a human annotated dataset comprising table–text
pairs with relatedness labels. LATTE consists of 7,674 unique tables and
41,880 unique textual paragraphs originating from 3,826 distinct
Wikipedia pages. Each text paragraph is drawn from the same or
contextually linked pages as the corresponding table, rather than being
artificially generated. LATTE provides a challenging benchmark for
cross-modal reasoning by requiring classification of related and
unrelated table–text pairs. Unlike prior resources centered on
table-to-text generation or text retrieval, LATTE emphasizes
fine-grained semantic relatedness between structured and unstructured data.
The Figure below provides an example, using a web-annotation tool we
developed, of how we identify the relatedness between the sentence
containing the entity AirPort Extreme 802.11n (highlighted in Orange)
and the data table providing information about output power and
frequency for this entity. Participants are provided with tables and
textual passages that would need to be ranked. The evaluation will use
metrics such as P@k, R@k and F1@k.
Go to https://www.codabench.org/competitions/12776/ and enroll to
participate in this Task.
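For orientation, the ranked-retrieval metrics named above (P@k, R@k, F1@k) can be sketched as follows. This is a generic illustration, not the organizers' official scoring script; the function names and data shapes are our own assumptions.

```python
# Sketch of P@k, R@k, F1@k for one table's ranked candidate passages.
# NOT the official TRIPLET evaluation code; shapes/names are illustrative.

def precision_at_k(ranked, relevant, k):
    """Fraction of the top-k ranked items that are truly related."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / k

def recall_at_k(ranked, relevant, k):
    """Fraction of all related items recovered within the top k."""
    hits = sum(1 for item in ranked[:k] if item in relevant)
    return hits / len(relevant) if relevant else 0.0

def f1_at_k(ranked, relevant, k):
    """Harmonic mean of P@k and R@k."""
    p = precision_at_k(ranked, relevant, k)
    r = recall_at_k(ranked, relevant, k)
    return 2 * p * r / (p + r) if (p + r) else 0.0

# Example: a system ranks five candidate passages for one table;
# passages "a" and "c" are annotated as related.
ranked = ["a", "b", "c", "d", "e"]
relevant = {"a", "c"}
print(precision_at_k(ranked, relevant, 3))  # 2 of the top 3 are related: 2/3
print(recall_at_k(ranked, relevant, 3))     # both related items found: 1.0
```

In practice these scores would be averaged over all query tables in the test set.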
#Sub-Task 2: Joint Relation Extraction Between Texts and Tables
The goal of this task is to automatically extract knowledge jointly from
tables and related texts. For this purpose, we created ReTaT, a dataset
that can be used to train and evaluate systems for extracting such
relations. This dataset is composed of (table, surrounding text) pairs
extracted from Wikipedia pages and has been manually annotated with
relation triples. ReTaT is organized in three subsets with distinct
characteristics: domain (business, telecommunication and female
celebrities), size (from 50 to 255 pairs), language (English vs French),
type of relations (data vs object properties), closed vs open list of
relations, and size of the surrounding text (paragraph vs full page). We then
assessed its quality and suitability for the joint table-text relation
extraction task using Large Language Models (LLMs).
Given a Wikipedia page containing texts and tables and a list of
predicates defined in Wikidata, a participant system should extract
triples composed of mentions located partly in the text and partly in
the table and disambiguated with entities and predicates identified in
the Wikidata reference knowledge graph. For example, in the Figure
below, an annotation triple <Q13567390, P2109, 24.57> is associated with
mentions highlighted in orange (subject), blue (predicate) and green
(object) to annotate the document available at
https://en.wikipedia.org/wiki/AirPort_Extreme. Similar to the
Text2KGBench evaluation
(https://link.springer.com/chapter/10.1007/978-3-031-47243-5_14), and
because the set of triples is not exhaustive for a given sentence, to
avoid false negatives, we follow a locally closed approach by only
considering the relations that are part of the ground truth. The
evaluation then uses metrics such as P, R and F1.
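The locally closed scoring described above can be sketched as follows. This is an illustrative implementation, not the official scorer: predicted triples whose predicate never occurs in the gold standard are simply ignored rather than counted as false positives. The gold triple is the AirPort Extreme example from the text; the second predicted triple (predicate P9999) is made up for illustration.

```python
def locally_closed_prf(gold, predicted):
    """Score predicted (subject, predicate, object) triples against the
    gold standard under a locally closed assumption: predictions whose
    predicate is absent from the gold set are dropped before scoring."""
    gold_predicates = {p for (_, p, _) in gold}
    scored = {t for t in predicted if t[1] in gold_predicates}
    tp = len(scored & gold)  # exact-match true positives
    precision = tp / len(scored) if scored else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if precision + recall else 0.0)
    return precision, recall, f1

gold = {("Q13567390", "P2109", "24.57")}
pred = {("Q13567390", "P2109", "24.57"),
        ("Q13567390", "P9999", "foo")}  # P9999 not in gold: ignored
p, r, f1 = locally_closed_prf(gold, pred)
# p = r = f1 = 1.0
```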
Go to https://www.codabench.org/competitions/12936/ and enroll to
participate in this Task.
# Sub-Task 3: Detecting Inconsistencies Between Texts, Tables and
Knowledge Graphs
The goal of this task is to check the consistency of knowledge extracted
from tables and texts with existing triples in the Wikidata knowledge
graph. Different kinds of inconsistencies will be considered in this
task. Participants in this task will be able to report on their findings
in their system papers.
See the Figure at
https://ecladatta.github.io/images/triplet_annotation_tool.png
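One simple family of inconsistencies, value conflicts for single-valued predicates, can be detected by comparing extracted triples against what the knowledge graph already asserts for the same subject and predicate. The sketch below is only illustrative (the KG value shown is invented); real systems would also need to handle multi-valued predicates, units, and rounding.

```python
def find_value_conflicts(extracted, kg):
    """Flag extracted (subject, predicate, value) triples whose value
    disagrees with the value the knowledge graph already states for
    the same (subject, predicate). Assumes single-valued predicates."""
    kg_index = {(s, p): o for (s, p, o) in kg}
    conflicts = []
    for s, p, o in extracted:
        known = kg_index.get((s, p))
        if known is not None and known != o:
            conflicts.append(((s, p, o), known))
    return conflicts

# Hypothetical KG value for P2109, chosen to conflict with the
# extracted value from the AirPort Extreme example.
kg = {("Q13567390", "P2109", "20.0")}
extracted = {("Q13567390", "P2109", "24.57")}
conflicts = find_value_conflicts(extracted, kg)
# conflicts == [(("Q13567390", "P2109", "24.57"), "20.0")]
```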
# Data & Evaluation:
For the first two sub-tasks, we have released a training dataset with
ground-truth annotations, enabling participant teams to develop
machine-learning-based systems, in particular for training,
hyperparameter optimization and internal validation.
A separate blind test dataset will remain private and be used for
ranking the submissions.
Participants should register on Codabench and then enroll for each
sub-task separately (Task 1:
https://www.codabench.org/competitions/12776/ and Task 2:
https://www.codabench.org/competitions/12936/). Each team is allowed a
limited number of daily submissions, and the highest achieved accuracy
will be reported as the team's final result. We encourage participants
to develop open-source solutions, to utilise and fine-tune pre-trained
language models, and to experiment with LLMs of various sizes in zero-shot
or few-shot settings.
# Challenge Important Dates:
- Release of training set: 13 February 2026
- Deadline for registering to the challenge: 15 March 2026
- Release of test set: 24 March 2026
- Submission of results: 10 April 2026
- System Results & Notification of Acceptance: 17 April 2026
- Submission of System Papers: 28 April 2026
- Presentations @ TRIPLET Workshop: May 2026
Workshop Organizers
- Raphael Troncy (EURECOM, France)
- Yoan Chabot (Orange, France)
- Véronique Moriceau (IRIT, France)
- Nathalie Aussenac-Gilles (IRIT, France)
- Mouna Kamel (IRIT, France)
Contact:
For discussions, please use our Google Group,
https://groups.google.com/g/triplet-challenge
The workshop is supported by the ECLADATTA project funded by the French
National Funding Agency ANR under the grant ANR-22-CE23-0020.
--
Raphaël Troncy
EURECOM, Campus SophiaTech
Data Science Department
450 route des Chappes, 06410 Biot, France.
e-mail: raphael.troncy(a)eurecom.fr & raphael.troncy(a)gmail.com
Tel: +33 (0)4 - 9300 8242
Fax: +33 (0)4 - 9000 8200
Web: http://www.eurecom.fr/~troncy/
Call for papers - Workshop on New Trends in Automatized Language Assessment (TALA)
The Workshop on New Trends in Automatized Language Assessment (TALA) will take place on 7 April 2026 in Louvain-la-Neuve, Belgium, and online (hybrid event). This meeting aims to provide an overview of recent approaches to automatized language assessment and to offer researchers, academics and (PhD) students an excellent opportunity to share and discuss recent trends and cutting-edge methods in language assessment research.
In particular, the workshop will focus on proficiency assessment by mainly targeting automatic readability assessment (ARA) and automated essay scoring (AES).
Automatic readability assessment constitutes an interdisciplinary field of research concerned with the linguistic, cognitive, and typographic factors that influence the ease with which a text can be read and understood by different audiences. It is gaining increasing importance across a wide range of domains, including education, institutional communication, digital accessibility, and automated assessment of language proficiency. It has been an active field within natural language processing since the beginning of the 21st century.
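As a point of reference for what a surface-level ARA baseline looks like, the classic Flesch Reading Ease formula combines average sentence length and average syllables per word; higher scores mean easier text. The sketch below approximates syllable counts by counting vowel groups, which is only a rough heuristic for English, and modern ARA systems go far beyond such surface features.

```python
import re

def flesch_reading_ease(text):
    """Flesch Reading Ease: 206.835 - 1.015*(words/sentences)
    - 84.6*(syllables/words). Syllables are approximated by
    counting vowel groups per word (a rough heuristic)."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(max(1, len(re.findall(r"[aeiouy]+", w.lower())))
                    for w in words)
    return (206.835
            - 1.015 * (n_words / sentences)
            - 84.6 * (syllables / n_words))

easy = flesch_reading_ease("The cat sat. The dog ran.")
hard = flesch_reading_ease("Interdisciplinary computational "
                           "methodologies necessitate sophisticated "
                           "evaluation.")
# easy scores far higher (easier) than hard
```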
Automated essay scoring aims to analyze written productions in order to generate an evaluation of writers’ competence in a specific field. For language-oriented AES, it is the written linguistic skills that are targeted. This task is particularly critical in language assessment contexts, but it can also support learning processes and the generation of formative feedback.
The workshop will include an invited speaker talk and some presentations based on abstract selection.
Invited speaker:
Rodrigo Wilkens (University of Exeter) is a specialist in computational readability modeling and automated essay scoring. His research focuses on multilingual proficiency assessment, linguistic feature modeling, and the use of large language models for educational applications. He has contributed to the development and evaluation of ARA and AES systems, with particular emphasis on non-English languages. His recent work explores the representational capacity of transformer models for proficiency prediction, interpretability in automated assessment, and readability-guided text generation.
We welcome abstracts addressing literature review, research results, ongoing research or negative results on the topics related to the main theme, with particular interest in the following subfields:
· AI and LLM-based approaches to automated language assessment, especially AES and ARA
· Computational and linguistic modeling of readability and writing proficiency
· Evaluation methodologies, validation frameworks, and interpretability in automated assessment
· Multilingual and non-English language assessment
· Corpus creation, annotation schemes, and new benchmark tasks
· Fairness, bias, and ethical considerations in automated assessment
· Theoretical perspectives linking linguistic features and proficiency modeling
· Critical reviews and meta-analyses in ARA/AES research
Submission format: Abstracts may be submitted in French or English and should be between 300 and 500 words. Authors are encouraged to include a short list of relevant references, which will not count toward the word limit. Abstracts should be anonymized for review. Author names and affiliations must be provided separately in the submission form. Submissions must be made via the online form available at: https://forms.gle/w1L6JNx8YAEgtswB7.
Please note that no proceedings will be published, as this workshop aims above all to foster scientific exchange.
Important dates:
Abstract deadline: 20 March 2026
Acceptance notification: 1 April 2026
Workshop: 7 April 2026
Organizing Committee:
Prof. Thomas François, Prof. Rodrigo Wilkens, Dr. Eleonora Guzzi, Lingyun Gao, Amandine Pay, Elodie Vanzeveren, Romane Werner.