Dear corpora-list members,
We are pleased to announce SemEval-2026 Task 7: Everyday Knowledge Across Diverse Languages and Cultures (https://github.com/BLEnD-SemEval2026/SemEval-2026-Task-7), organised under the BLEnD benchmark (Myung et al., 2024: https://proceedings.neurips.cc/paper_files/paper/2024/file/8eb88844dafefa92a26aaec9f3acad93-Paper-Datasets_and_Benchmarks_Track.pdf).
BLEnD is a hand-crafted benchmark, built by native speakers, for evaluating cultural knowledge in LLMs and NLP systems more broadly; such systems often lack culture-specific knowledge of daily life, particularly for diverse regions and non-English languages.
Task 7 extends BLEnD to over 30 language-culture pairs to evaluate models and provide insights for improving cross-cultural performance. BLEnD's manually constructed data is used only for validation and testing, ensuring that models are evaluated on their ability to generalise to unseen cultural and linguistic contexts.
We invite researchers, practitioners, and students to participate. The task is inclusive and junior-friendly: we will organise a live Q&A session and a tutorial on writing system-description papers for all participants.
Tracks
* Track 1: Short Answer Questions (SAQ) – Given a question in a target language, the model must generate an answer in that same language.
* Track 2: Multiple Choice Questions (MCQ) – Each question is in English, with four answer options reflecting different cultural perspectives (countries or regions).
Contact
* Competition website: https://www.codabench.org/competitions/10281/
* Discord channel (updates & discussion): https://discord.com/invite/8x6XP97kmw
* GitHub repository: https://github.com/BLEnD-SemEval2026/SemEval-2026-Task-7/ (issues: https://github.com/BLEnD-SemEval2026/SemEval-2026-Task-7/issues)
* Email organisers: semeval-2026-blend-organisers@googlegroups.com
* Participants’ Google Group: semeval-2026-task7-blend-participants@googlegroups.com
Important dates (subject to change)
* Evaluation start: 19 January 2026
* Evaluation end: 2 February 2026
* Paper submission due: ~February 2026
* SemEval workshop: co-located with ACL 2026
Best regards,
The SemEval-2026 Task 7 Organising Team