Call for Abstracts: Analysis of Linguistic VAriation for BEtter Tools (ALVABET) within the LLcD 2024 Conference (https://llcd2024.sciencesconf.org/)
Workshop
Variation plays a particularly important role in linguistic change, since every change stems from a state of variation; but not every state of variation ends in a change: the new variant can disappear, or variation can linger, though in different contexts. Access to sufficient amounts of data, and their quantification, in order to detect as precisely as possible the emergence of new variants and the recession or even disappearance of others, is a precious tool for the study of variation, whatever its dimension (diachronic, diatopic, …) and whatever the field (syntax, morphology, …). The appearance of large corpora has thus renewed the study of variation. NLP has contributed greatly to this renewal, providing tools for the enrichment and exploration of these corpora. In return, linguistic analysis can help explain some of the errors these tools make, deepening the picture where performance metrics tend to flatten everything into a single number, and can even help improve performance.
NLP annotation tools, such as syntactic parsers and morphological taggers, now achieve high performance when applied to data similar to those seen during their development. However, performance drops quickly as the target data diverge from the training scenario. This raises a number of issues when it comes to using automatically annotated data for linguistic studies.
This workshop aims at exploring bilateral contributions between Natural Language Processing and variation analysis in the fields of morphosyntax and syntax, from diachronic and diatopic perspectives but also across genre, domain and form of writing, with no restriction on the languages of interest.
We warmly welcome submissions dealing with the issues and contributions of applying NLP to variation analysis:
• Quantification of variation along its different dimensions (both external and internal, as well as in interaction with each other);
• Impact of annotation errors on the study of marginal structures (emergent or receding);
• Syntactic variation when it is induced by semantic changes.
But also submissions dealing with the contributions of variation analysis to NLP:
• Variation mitigation (spelling standardisation...);
• Domain adaptation (domain referring here to any variation dimension);
• Error analysis (in and out of domain) in light of known variation phenomena, amongst which (de-)grammaticalisation;
• The evolution of grammatical categories and its impact on prediction models;
• The place of variation studies in NLP in the large language model era.
These themes are only suggestions, and the workshop will gladly host any submission that deals substantially with the reciprocal contributions between NLP and variation analysis in the mentioned fields.
Full workshop description: https://llcd2024.sciencesconf.org/data/pages/WS12Eng.pdf
Important Dates
• Apr 30, 2024: extended deadline for abstract submission
• May 15, 2024: Notification
• Sep 9-11, 2024: Conference
Submissions
Abstracts must clearly state the research questions, approach, method, data and (expected) results. They must be anonymous: not only must they not contain the presenters' names, affiliations or addresses, but they must avoid any other information that might reveal their author(s). They should not exceed 500 words (including examples, but excluding bibliographical references).
Abstracts will be assessed by two members of the Scientific Committee and (one of) the workshop organizers.
The Content-Centered Computing
<https://www.cs.unito.it/do/gruppi.pl/Show?_id=453y> group at the
University of Turin, Italy, offers *two 14-month postdoc positions* in the
context of HARMONIA (Harmony in Hybrid Decision-Making: Knowledge-enhanced
Perspective-taking LLMs for Inclusive Decision-Making), funded by the
European Union under the NextGenerationEU program within the larger project
FAIR (Future Artificial Intelligence Research) Spoke 2 "Integrative AI"
<https://fair.fbk.eu/>. The project aims at developing methods for the
adoption of knowledge-enhanced Large Language Models (LLMs) in supporting
informed and inclusive political decisions within public decision-making
processes.
The topics of the postdoc fellowships are:
- Computational linguistics methods for knowledge-enhanced
perspective-taking LLMs to support Inclusive Decision-Making
- Perspective-taking LLMs for supporting Inclusive Decision-Making
(full descriptions below)
The team includes members of the Computer Science Department and Economics
and Statistics Department of the University of Turin.
A PhD in Computer Science, Computational Linguistics, or related areas is
highly recommended. Knowledge of Italian is not mandatory.
The deadline for application is *May 13th 2024*.
The gross salary is €25,328 per year (about €1,860/month net salary). Turin
<https://www.turismotorino.org/en/territory/torino-metropoli/torino> is a
vibrant and liveable city in Northern Italy, close to the beautiful Italian
Alps and with a manageable cost of living
<https://en.unito.it/living-turin/when-you-arrive/cost-living-turin>.
Link to the call (in Italian) and to the application platform:
<https://pica.cineca.it/unito/assegni-di-ricerca-unito-2024-i-pnrr/>.
Please write to <valerio.basile(a)unito.it> or <viviana.patti(a)unito.it> for
further information on how to apply.
Best regards,
Valerio Basile
--
*Computational linguistics methods for knowledge-enhanced
perspective-taking LLMs to support Inclusive Decision-Making*
The activity will focus on a) design of a semantic model to represent
interactions between urban services and citizens and integrate multi-source
hybrid data; b) data annotation by citizens with different socio-cultural
backgrounds to collect different perspectives on social issues. Data will
be collected and organized in a Knowledge Graph. The activity will be
supported by an interdisciplinary team of experts in KR, behavioral
economics and LLMs (link with the design of knowledge-enhanced LLMs).
*Perspective-taking LLMs for supporting Inclusive Decision-Making*
The activity will focus on a) exploring techniques for integrating
multi-source hybrid citizen data into LLMs (RAG and Knowledge Injection);
b) developing methods for training and evaluating perspective-taking LLMs,
which explicitly encode multiple perspectives, embodying the point of view
of different citizen communities on a topic. Planned activities include:
benchmark creation, error analysis, and evaluation of the efficiency and
reliability of the developed technologies.
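By way of illustration of the retrieval-augmented generation (RAG) technique named above, here is a minimal Python sketch of RAG-style prompt construction over citizen comments. Everything in it (the toy documents, the encoder choice, the prompt format) is an assumption for illustration, not the HARMONIA design:

from sentence_transformers import SentenceTransformer, util

# Toy multi-source "citizen data" (hypothetical examples).
documents = [
    "The new tram line should stop near the hospital.",
    "Elderly residents find the ticket machines hard to use.",
    "Cyclists ask for safer lanes along the river.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # any sentence encoder works
doc_emb = encoder.encode(documents, convert_to_tensor=True)

def build_prompt(question: str, k: int = 2) -> str:
    """Retrieve the k most similar comments and build a grounded LLM prompt."""
    q_emb = encoder.encode(question, convert_to_tensor=True)
    hits = util.semantic_search(q_emb, doc_emb, top_k=k)[0]
    context = "\n".join(documents[h["corpus_id"]] for h in hits)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(build_prompt("What do citizens say about public transport?"))

The same retrieval step generalizes to a Knowledge Graph backend: instead of raw comments, the retrieved items would be verbalized graph facts.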
*CALAMITA - Challenge the Abilities of LAnguage Models in ITAlian*
*Special event co-located with the Tenth Italian Conference on
Computational Linguistics - CLiC-it 2024, Pisa, 4-6 December 2024 -
https://clic2024.ilc.cnr.it/*
*Upcoming deadline: 17th May 2024, challenge pre-proposal submission!*
Pre-proposal form: https://forms.gle/u4rSt9yXHHYquKrB6
*Project Description*
AILC, the Italian Association for Computational Linguistics, is launching a
*collaborative* effort to develop a dynamic and growing benchmark for
evaluating LLMs’ capabilities in Italian.
In the *long term*, we aim to establish a suite of tasks in the form of a
benchmark which can be accessed through a shared platform and a live
leaderboard. This would allow for ongoing evaluation of existing and newly
developed Italian or multilingual LLMs.
In the *short term*, we are looking to start building this benchmark
through a series of challenges collaboratively constructed by the research
community. Concretely, this happens through the present call for challenge
contributions. In a similar style to standard Natural Language Processing
shared tasks, *participants are asked to contribute a task and the
corresponding dataset with which a set of LLMs should be challenged*.
Participants are expected to provide an explanation and motivation for a
given task, a dataset that reflects that task together with any information
relevant to the dataset (provenance, annotation, distribution of labels or
phenomena, etc.), and a rationale for constructing it that way.
Evaluation metrics and example prompts should also be provided. Existing
relevant datasets are also very welcome, together with related publications
if available. All proposed challenges, whether based on existing or new
datasets, will have to follow the challenge template, which will be
distributed in due time, for the write-up of a challenge paper.
In this first phase, all prospective participants are asked to submit a
*pre-proposal* by filling in this form https://forms.gle/u4rSt9yXHHYquKrB6.
Please fill in all the fields so we can get an idea of what challenge you’d
like to propose, how the model should be prompted to perform the task,
where you’d get the data and how much, whether it’s already available, etc.
The organizers will examine the submitted pre-proposals and select those
challenges that comply with the template’s requirements, with an eye to
balancing different challenge types. The selected challenges will be
expanded with a full dataset, longer descriptions, etc. according to the
aforementioned template which will be distributed later. The final report
of each accepted challenge must provide the code for the evaluation with an
example that must smoothly run on a pre-selected base LLM (most likely
LLaMa-2) which will be communicated by the organisers in the second phase.
All reports will be published as CEUR Proceedings related to the CALAMITA
event. Subsequently, all challenge organisers who wish to be involved can
participate in a broader follow-up paper, targeting a top venue, which will
describe the whole benchmark, procedures, results, and analyses.
Once this first challenge set is put together, the *CALAMITA organizers*
will run *zero-* or *few-shot* experiments with a selection of LLMs, and
write a final report. No tuning materials or experiments are expected at
this stage of the project.
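For proposers wondering what the required evaluation code might look like, below is a minimal zero-shot sketch in Python. The checkpoint name, the Italian toy prompts and the exact-match scoring are assumptions for illustration; the actual base model and template will be communicated by the organisers:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-2-7b-hf"  # hypothetical; "most likely LLaMa-2"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Toy challenge items: (prompt, gold answer). A real challenge ships a dataset.
items = [
    ("La frase 'Il gatto dorme' è grammaticale? Rispondi sì o no.", "sì"),
    ("La frase 'Gatto il dorme' è grammaticale? Rispondi sì o no.", "no"),
]

correct = 0
for prompt, gold in items:
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    with torch.no_grad():
        output = model.generate(**inputs, max_new_tokens=5, do_sample=False)
    # Keep only the newly generated tokens, then compare with the gold label.
    answer = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:],
                              skip_special_tokens=True)
    correct += int(answer.strip().lower().startswith(gold))

print(f"Zero-shot accuracy: {correct / len(items):.2f}")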
*Deadlines (tentative)*
- *17th May 2024: pre-proposal submission*
- 27th May 2024: notification of pre-proposal acceptance
- End of May 2024: distribution of challenge paper template and further
instructions
- 2nd September 2024: data and report submission
- 30th September 2024: benchmark ready with reports for each challenge
(after light review)
- October-November 2024: running selected models on the benchmark with
analyses
- 4th-6th December 2024: CALAMITA special event co-located with CLiC-it
2024 in Pisa
*Website:* https://clic2024.ilc.cnr.it/calamita (under construction)
*Mail: *calamita.ailc(a)gmail.com
*Organizers*
- Pierpaolo Basile (University of Bari Aldo Moro)
- Danilo Croce (University of Rome, Tor Vergata)
- Malvina Nissim (University of Groningen)
- Viviana Patti (University of Turin)
On behalf of Prof. Mark Sandler: a lyrics-generation project using LLMs. Note the closing deadline.
From: Mark Sandler <mark.sandler(a)qmul.ac.uk>
I am happy to announce that the Centre for Digital Music is now formally advertising the new research positions I posted last week. One area is lyrics generation and the other is music signal processing (instrument ID, loop ID, lyric transcription). Both are collaborations with London-based music industry companies, session and stage.
These are available immediately and can be offered as either post-doctoral or graduate research assistants, and can be either full- or part-time. Closing date is May 1 2024.
Details can be found here
https://www.qmul.ac.uk/jobs/vacancies/items/9619.html
https://www.qmul.ac.uk/jobs/vacancies/items/9617.html
Today's Topics:
1. WMT 2024: Low-Resource Indic Language Translation (Santanu Pal)
2. Final CFP: SIGIR eCom'24: May 3rd (Tracy Holloway King)
3. [2nd CFP] Special issue on Abusive Language Detection of the journal Traitement Automatique des Langues (TAL) (Farah Benamara)
4. [Call for Participation]: GermEval2024 Shared Task GerMS-Detect - Sexism Detection in German Online News Fora @Konvens 2024 (stephanie.gross(a)ofai.at)
Dear Colleagues,
We are pleased to inform you that we will be hosting the "Shared Task:
Low-Resource Indic Language Translation" again this year as part of WMT
2024. Following the outstanding success and enthusiastic participation
witnessed in the previous year's edition, we are excited to continue this
important initiative. Despite recent advancements in machine translation
(MT), such as multilingual translation and transfer learning techniques,
the scarcity of parallel data remains a significant challenge, particularly
for low-resource languages.
The WMT 2024 Indic Machine Translation Shared Task aims to address this
challenge by focusing on low-resource Indic languages from diverse language
families. Specifically, we are targeting languages such as Assamese, Mizo,
Khasi, Manipuri, Nyishi, Bodo, Mising, and Kokborok.
For inquiries and further information, please contact us at
lrilt.wmt24(a)gmail.com. Additionally, you can find more details and updates
on the task at:
https://www2.statmt.org/wmt24/indic-mt-task.html
We highly encourage participants to register in advance so that we can
periodically provide updates regarding data release dates and other
relevant information.
To register for the event, please fill out the registration form available
here:
https://docs.google.com/forms/d/e/1FAIpQLSd8LwriqdLLhVNAvUWEcGRJmKuBFQZ9BR_…
We look forward to your participation and contributions to advancing
low-resource Indic language translation.
with best regards,
Santanu
The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues.
Topics of interest
We welcome both full papers and extended abstract submissions on the
following topics, including but not limited to:
- LLM-based evaluation metrics for traditional IR and generative IR.
- Agreement between human and LLM labels (see the sketch after this list).
- Effectiveness and/or efficiency of LLMs to produce robust relevance
labels.
- Investigating LLM-based relevance estimators for potential systemic
biases.
- Automated evaluation of text generation systems.
- End-to-end evaluation of Retrieval Augmented Generation systems.
- Trustworthiness in the world of LLMs evaluation.
- Prompt engineering in LLMs evaluation.
- Effectiveness and/or efficiency of LLMs as ranking models.
- LLMs in specific IR tasks such as personalized search, conversational
search, and multimodal retrieval.
- Challenges and future directions in LLM-based IR evaluation.
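As a concrete instance of the human-LLM agreement topic flagged in the list above, a minimal Python sketch: it measures chance-corrected agreement between human and LLM relevance labels with Cohen's kappa. The label arrays are invented for illustration:

from sklearn.metrics import cohen_kappa_score

# Graded relevance judgments (0 = not relevant ... 3 = highly relevant)
# for the same ten query-document pairs (hypothetical values).
human_labels = [3, 0, 2, 1, 0, 3, 2, 0, 1, 2]
llm_labels   = [3, 0, 1, 1, 0, 3, 2, 1, 1, 2]

# Unweighted kappa treats labels as nominal; quadratic weighting penalises
# large disagreements more heavily, which suits graded relevance scales.
print("kappa:         ", cohen_kappa_score(human_labels, llm_labels))
print("weighted kappa:", cohen_kappa_score(human_labels, llm_labels,
                                           weights="quadratic"))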
Submission guidelines
We welcome the following submissions:
- Previously unpublished manuscripts will be accepted as extended
abstracts and full papers (any length between 1 and 9 pages) with unlimited
references, formatted according to the latest ACM SIG proceedings template
available at http://www.acm.org/publications/proceedings-template.
- Published manuscripts can be submitted in their original format.
All submissions should be made through Easychair:
https://easychair.org/conferences/?conf=llm4eval
All papers will be peer-reviewed (single-blind) by the program committee
and judged by their relevance to the workshop, especially to the main
themes identified above, and their potential to generate discussion. For
already published studies, the paper can be submitted in the original
format. These submissions will be reviewed for their relevance to this
workshop. All submissions must be in English (PDF format).
All accepted papers will have a poster presentation with a few selected for
spotlight talks. Accepted papers may be uploaded to arXiv.org, allowing
submission elsewhere as they will be considered non-archival. The
workshop’s website will maintain a link to the arXiv versions of the papers.
Important Dates
- Submission Deadline: May 2nd, 2024 (AoE time; extended from April 25th)
- Acceptance Notifications: May 31st, 2024 (AoE time)
- Workshop date: July 18, 2024
Website
For more information, visit the workshop website:
https://llm4eval.github.io/
Contact
For any questions about paper submission, you may contact the workshop
organizers at llm4eval(a)easychair.org
*First Call For papers: 17th International Natural Language Generation Conference INLG 2024*
We invite the submission of long and short papers, as well as system demonstrations, related to all aspects of Natural Language Generation (NLG), including data-to-text, concept-to-text, text-to-text and vision-to-text approaches. Accepted papers will be presented as oral talks or posters.
The event is organized under the auspices of the Special Interest Group on Natural Language Generation (SIGGEN) (https://aclweb.org/aclwiki/SIGGEN) of the Association for Computational Linguistics (ACL) (https://aclweb.org/). The event will be held from 23-27 September in Tokyo, Japan. INLG 2024 will be taking place after SIGDial 2024 (18-20 September) nearby in Kyoto.
**Important dates**
All deadlines are Anywhere on Earth (UTC-12)
• START system regular paper submission deadline: May 31, 2024
• ARR commitment to INLG deadline via START system: June 24, 2024
• START system demo paper submission deadline: June 24, 2024
• Notification: July 15, 2024
• Camera ready: August 16, 2024
• Conference: 23-27 September 2024
**Topics**
INLG 2024 solicits papers on any topic related to NLG. General topics of interest include, but are not limited to:
• Large Language Models (LLMs) for NLG
• Affect/emotion generation
• Analysis and detection of automatically generated text
• Bias and fairness in NLG systems
• Cognitive modelling of language production
• Computational efficiency of NLG models
• Content and text planning
• Corpora and resources for NLG
• Ethical considerations of NLG
• Evaluation and error analysis of NLG systems
• Explainability and Trustworthiness of NLG systems
• Generalizability of NLG systems
• Grounded language generation
• Lexicalisation
• Multimedia and multimodality in generation
• Natural language understanding techniques for NLG
• NLG and accessibility
• NLG in speech synthesis and spoken language models
• NLG in dialogue
• NLG for human-robot interaction
• NLG for low-resourced languages
• NLG for real-world applications
• Paraphrasing, summarization and translation
• Personalisation and variation in text
• Referring expression generation
• Storytelling and narrative generation
• Surface realization
• System architectures
**Submissions & Format**
Three kinds of papers can be submitted:
• Long papers are most appropriate for presenting substantial research results and must not exceed eight (8) pages of content, plus unlimited pages of ethical considerations, supplementary material statements, and references. The supplementary material statement provides detailed descriptions to support the reproduction of the results presented in the paper (see below for details). The final versions of long papers will be given one additional page of content (up to 9 pages) so that reviewers' comments can be taken into account.
• Short papers are more appropriate for presenting an ongoing research effort and must not exceed four (4) pages, plus unlimited pages of ethical considerations, supplementary material statements, and references. The final versions of short papers will be given one additional page of content (up to 5 pages) so that reviewers' comments can be taken into account.
• Demo papers should be no more than two (2) pages, including references, and should describe implemented systems relevant to the NLG community. It also should include a link to a short screencast of the working software. In addition, authors of demo papers must be willing to present a demo of their system during INLG 2024.
Submissions should follow ACL Author Guidelines (https://www.aclweb.org/adminwiki/index.php?title=ACL_Author_Guidelines) and policies for submission, review and citation, and be anonymised for double blind reviewing. Please use ACL 2023 style files; LaTeX style files and Microsoft Word templates are available at: https://acl-org.github.io/ACLPUB/formatting.html
Authors must honor the ethical code set out in the ACL Code of Ethics (https://www.aclweb.org/portal/content/acl-code-ethics). If your work raises any ethical issues, you should include an explicit discussion of those issues. This will also be taken into account in the review process. You may find the following checklist of use: https://aclrollingreview.org/responsibleNLPresearch/
Authors are strongly encouraged to ensure that their work is reproducible; see, e.g., the following reproducibility checklist (https://2021.aclweb.org/calls/reproducibility-checklist/). Papers involving any kind of experimental results (human judgments, system outputs, etc) should incorporate a data availability statement into their paper. Authors are asked to indicate whether the data is made publicly available. If the data is not made available, authors should provide a brief explanation why. (E.g. because the data contains proprietary information.) A statement guide is available on the INLG 2024 website: https://inlg2024.github.io/
To submit a long or short paper to INLG 2024, authors can either submit directly or commit a paper previously reviewed by ARR via the same paper submission site (https://softconf.com/n/inlg2024/). For direct submissions, the deadline for submitting papers is May 31, 2024, 11:59:59 AOE. If committing an ARR paper to INLG, the submission is also made through the INLG 2024 paper submission site, indicating the link of the paper on OpenReview. The deadline for committing an ARR paper to INLG is June 24, 2024, 11:59:59 AOE, and the last eligible ARR paper submission deadline for INLG 2024 is May 24, 2024. It is important to note that when committing an ARR paper to INLG, it should be submitted through the INLG 2024 paper submission site, just like a direct submission paper, with the only difference being the need to provide the OpenReview link to the paper and to provide an optional author response to reviews.
Demo papers should be submitted directly through the INLG 2024 paper submission site (https://softconf.com/n/inlg2024/) by June 24, 2024, 11:59:59 AOE.
All accepted papers will be published in the INLG 2024 proceedings and included in the ACL Anthology. A paper accepted for presentation at INLG 2024 must not have been presented at any other meeting with publicly available proceedings. Dual submission to other conferences is permitted, provided that authors clearly indicate this in the submission form. If the paper is accepted at both venues, the authors will need to choose which venue to present at, since they cannot present the same paper twice.
Finally, at least one of the authors of an accepted paper must register to attend the conference.
**Awards**
INLG 2024 will present several awards to recognize outstanding achievements in the field. These awards are:
• Best Long Paper Award: This award will be given to the best long paper submission based on its originality, impact, and contribution to the field of NLG.
• Best Short Paper Award: This award will be given to the best short paper submission based on its originality, impact, and contribution to the field of NLG.
• Best Demo Paper Award: This award will recognize the best demo paper submitted to the conference. This award considers not only the paper's quality but also the demonstration given at the conference. The demonstration will play a significant role in the judging process.
• Best Evaluation Award: The award is a new addition to INLG 2024. This award is designed to honor authors who have demonstrated the most comprehensive and insightful analysis in evaluating their results. This award aims to highlight papers where the authors have gone the extra mile in providing a thorough and detailed analysis of their results, offering a nuanced understanding of their findings.
Dear colleagues,
We are happy to announce that the journal *Research in Corpus Linguistics*,
sponsored by the *Spanish Association for Corpus Linguistics*, has been
indexed in the SCImago Journal & Country Rank, where it has been ranked in
Quartile 2 (Q2) for the period 2019-2023.
As SJR's indexing criteria show, RiCL has been an academically relevant
linguistics journal since 2019.
The editorial team is particularly happy with the coverage (2019-2023),
which benefits authors who have published in RiCL over that time-frame and
is evidence of the substantial work carried out by the team since 2018.
We would like to thank all contributors, guest editors and reviewers for
their collaboration with the journal, and we invite scholars to submit
their proposals.
Specific areas of interest include corpus design, compilation, and
typology; discourse, literary analysis and corpora; corpus-based
grammatical studies; corpus-based lexicology and lexicography; corpora,
contrastive studies and translation; corpus and linguistic variation;
corpus-based computational linguistics; corpora, language acquisition and
teaching; and special uses of corpus linguistics.
Further information at: *https://ricl.aelinco.es*
All the best,
Paula Rodríguez-Puente and Carlos Prado-Alonso (editors of RiCL)
--
Paula Rodríguez Puente
rodriguezppaula(a)uniovi.es
http://www.usc-vlcg.es/PRP.htm
***Apologies for possible cross-posting ***
Third Call for Papers:
5th International Workshop on Computational Approaches to Historical
Language Change (LChange’24)
/!\ Less than 3 weeks before the submission deadline /!\
We're organizing a full-day workshop co-located with the ACL conference on
Aug 15, 2024 in Bangkok and online. We hope to make this fifth edition
another resounding success!
This year, we are happy to host a *shared task* within LChange: the
*AXOLOTL-24* Shared Task on Explainable Semantic Change Modeling.
Workshop: https://www.changeiskey.org/event/2024-acl-lchange/
Shared task: https://github.com/ltgoslo/axolotl24_shared_task/.
Contact email: lchange(a)changeiskey.org
Workshop description
The LChange workshop targets all aspects of computational modeling of
language change, historical as well as synchronic change. It is running in
its fifth iteration following successful workshops in 2019
<https://languagechange.org/events/2019-acl-lcworkshop/>, 2021
<https://languagechange.org/events/2021-acl-lcworkshop/>, 2022
<https://languagechange.org/events/2022-acl-lchange/>, and 2023
<https://languagechange.org/events/2023-emnlp-lchange/>, and will be
co-located with ACL 2024 in Bangkok (Thailand), as a hybrid event. The
workshop will take place on Thursday 15 August 2024.
The main topics of the workshop remain the same: all aspects of
computational approaches to language change, with a focus on digital text
corpora. LChange explores state-of-the-art computational methodologies,
theories and digital text resources for exploring the time-varying nature
of human language.
The aim of this workshop is to provide pioneering researchers who work on
computational methods, evaluation, and large-scale modeling of language
change with an outlet for disseminating research on topics concerning
language change. Besides these goals, this workshop will also support
discussion on evaluating computational methodologies for uncovering
language change.
We'll also offer mentorship to students, who can discuss their research
topic with a member of the field regardless of whether they submit a paper.
Important Dates
* May 10, 2024: Paper submission
* June 20, 2024: Notification of acceptance
* June 30, 2024: Camera-ready papers due
* August 15, 2024: Workshop date
AXOLOTL-24 Shared Task
AXOLOTL-24 stands for “Ascertain and eXplain Overhauls of the Lexicon Over
Time at LChange'24” and is organized by Mariia Fedorova and Andrey Kutuzov
(University of Oslo), Timothee Mickus, Niko Partanen and Janine Siewert
(University of Helsinki), and Elena Spaziani (Sapienza University of Rome).
The AXOLOTL-24 shared task itself is finished, and the leaderboards are
published, but you can still participate in the post-evaluation phase and
submit a paper to LChange. The shared task comprises two subtasks:
- Subtask 1 (https://codalab.lisn.upsaclay.fr/competitions/18570): finding
the target word usages associated with new, gained senses
- Subtask 2 (https://codalab.lisn.upsaclay.fr/competitions/18572):
describing these senses in a way that facilitates understanding and
lexicographical research
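Purely as an illustration of one common approach to Subtask 1-style usage-level change detection (comparing contextualised embeddings of a target word across periods), here is a minimal Python sketch. The model choice, the example sentences and the matching heuristic are assumptions, not the AXOLOTL-24 baseline:

import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-multilingual-cased")
model = AutoModel.from_pretrained("bert-base-multilingual-cased")

def usage_embedding(sentence: str, target: str) -> torch.Tensor:
    """Mean hidden state over the target word's subtoken span."""
    enc = tokenizer(sentence, return_tensors="pt")
    with torch.no_grad():
        hidden = model(**enc).last_hidden_state[0]
    target_ids = tokenizer(target, add_special_tokens=False)["input_ids"]
    tokens = enc["input_ids"][0].tolist()
    # Naive search for the first occurrence of the target's subtoken span.
    for i in range(len(tokens) - len(target_ids) + 1):
        if tokens[i:i + len(target_ids)] == target_ids:
            return hidden[i:i + len(target_ids)].mean(dim=0)
    return hidden.mean(dim=0)  # fallback: whole-sentence average

old = usage_embedding("The mouse hid under the floorboards.", "mouse")
new = usage_embedding("Click the left button of the mouse.", "mouse")
# Low similarity between period-wise usages hints at a new, gained sense.
print(torch.cosine_similarity(old, new, dim=0).item())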
Submissions
We accept two types of submissions, long and short papers, consisting of up
to eight (8) and four (4) pages of content, respectively, plus unlimited
references; final versions will be given one additional page of content so
that reviewers' comments can be taken into account.
We also welcome papers focusing on releasing a dataset or a model; these
papers fall into the short paper category.
We invite original research papers from a wide range of topics, including
but not limited to:
- Novel methods for detecting diachronic semantic change and lexical
replacement
- Automatic discovery and quantitative evaluation of laws of language
change
- Computational theories and generative models of language change
- Sense-aware (semantic) change analysis
- Diachronic word sense disambiguation
- Novel methods for diachronic analysis of low-resource languages
- Novel methods for diachronic linguistic data visualization
- Novel applications and implications of language change detection
- Quantification of sociocultural influences on language change
- Cross-linguistic, phylogenetic, and developmental approaches to language
change
- Novel datasets for cross-linguistic and diachronic analyses of language
Accepted papers will be presented orally or as posters and included in the
workshop proceedings. Submissions are open to all and are to be submitted
anonymously. All papers will be refereed through a double-blind peer review
process by at least three reviewers with final acceptance decisions made by
the workshop organizers. If you have published in the field previously and
are interested in helping out on the program committee by reviewing papers,
please send us an email!
Workshop organizers
Nina Tahmasebi, University of Gothenburg
Syrielle Montariol, École polytechnique fédérale de Lausanne
Andrey Kutuzov, University of Oslo
Simon Hengchen, iguanodon.ai and Université de Genève
David Alfter, University of Gothenburg
Francesco Periti, University of Milan
Pierluigi Cassotti, University of Gothenburg
1st CALL FOR PARTICIPATION
We are pleased to announce the GermEval Shared Task GerMS-Detect on Sexism Detection in German Online News Fora, co-located with KONVENS 2024.
Competition Website: https://ofai.github.io/GermEval2024-GerMS/
Important Dates:
Trial phase: April 20 - April 29, 2024
Development phase: May 1 - June 5, 2024
Competition phase: June 7 - June 25, 2024
Paper submission due: July 1, 2024
Camera ready due: July 20, 2024
Shared Task @KONVENS: 10 September, 2024
Task description:
This shared task is about the detection of sexism/misogyny in comments, mostly in German, posted to the comment section of an Austrian online newspaper. The data was originally collected for the development of a classifier that supports human moderators in detecting potentially sexist comments or identifying comment fora with a high rate of sexist comments. For details see the Competition Website (https://ofai.github.io/GermEval2024-GerMS/).
Organizers:
The task is organized by the Austrian Research Institute for Artificial Intelligence (OFAI) (www.ofai.at).
Organizing team:
Brigitte Krenn (brigitte.krenn (AT) ofai.at)
Johann Petrak (johann.petrak (AT) ofai.at)
Stephanie Gross (stephanie.gross (AT) ofai.at)