DMR 2024 - 2nd Call for Papers
Timeline
When: Tuesday, May 21
Where: Torino, Italy
Mode: hybrid
Direct Submission Deadline: February 23 (EXTENDED)
ARR Commitment Deadline: March 25
Notification of Acceptance: March 27
Final Version Due: April 8
Workshop site: https://dmr2024.github.io/index.html
DMR 2024 will be co-located with LREC-COLING 2024 (the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation), 20-25 May, 2024 at the Lingotto Conference Centre, Torino, Italy. DMR 2024 will be a hybrid event (real-time virtual participation allowed), but in-person participation is encouraged.
DMR 2024 submission website: https://softconf.com/lrec-coling2024/dmr2024/
LREC-COLING 2024 website: https://lrec-coling-2024.org/
Contact us with questions at dmr.workshop.0@gmail.com
Overview
DMR 2024 invites submissions of long and short papers reporting original work on meaning representations. As the special theme of DMR 2024, we also invite submissions of original research that has in any way leveraged, expanded, or been inspired by the "Marthaverse of Meaning": the 50 years of gold-standard contributions to the field of NLP by 2023 ACL Lifetime Achievement Award recipient Dr. Martha Palmer.
Broader Goals
DMR intends to bring together researchers who are producers and consumers of meaning representations and, through their interaction, to gain a deeper understanding of the key elements of meaning representations that are the most valuable to the NLP community. The workshop will provide an opportunity for meaning representation researchers to present new frameworks and to critically examine existing frameworks, with the goal of using their findings to inform the design of next-generation meaning representations. One particular goal is to explore opportunities and identify challenges in the design and use of meaning representations in multilingual settings. Another is to understand the relationship between distributed meaning representations trained on large data sets using network models and the symbolic meaning representations that are carefully designed and annotated by NLP researchers, with the aim of gaining a deeper understanding of the areas where each type of meaning representation is most effective.
Special Theme: A Marthaverse of Meaning
In her 2023 ACL Lifetime Achievement Award acceptance speech, Dr. Martha Palmer (University of Colorado, Boulder) summed up her 50 years of research in AI and NLP in six words: "Finding meaning, quite literally, in words." This year's workshop honors Dr. Palmer's contributions with a special theme on resources, approaches, and applications that draw upon her manifold contributions to the field. These resources include Treebanks (the Chinese and Arabic TreeBanks and the Hindi and Urdu Treebanks), PropBanks (English, Chinese, and Arabic), VerbNet, OntoNotes, Abstract Meaning Representation (AMR), and Uniform Meaning Representation (UMR). These resources share attention to semantic detail combined with scalability and, therefore, an ability to generalize to and support a variety of NLP applications and tasks. Indeed, the applicability of her research extends beyond the textual to the multimodal, where she has contributed broadly to cross-modal event understanding.
DMR 2024 seeks to highlight the depth and breadth of Dr. Palmer's contributions and their influence on the field of natural language processing by inviting the submission of original works that have in any way leveraged, expanded, or been inspired by the "Marthaverse of Meaning." We also seek to recognize Dr. Palmer's long record of outstanding mentorship, which has been so formative for the many students who have gone on to shape the NLP research community and the field at large.
Topics
The workshop solicits papers that address one or more of the following topics:
- Treebanks and the syntax-semantics interface
- PropBanks, VerbNets, and semantic role labeling resources
- OntoNotes and word sense disambiguation resources
- Expansion or pairing of semantic resources with LLMs
- Design and annotation of meaning representations
- Cross-framework comparison of meaning representations
- Automatic parsing of meaning representations
- Automatic generation of text from meaning representations
- Strengths and weaknesses of existing meaning representations exposed as a result of using them in natural language applications or natural language understanding systems
- Use of meaning representations in real-world applications
- Issues in applying meaning representations to multilingual settings
- Issues in bringing multimodality into meaning representations
- The relationship between symbolic meaning representations and distributed semantic representations
- The use of LLMs to create meaning representations
- Formal properties of meaning representations
- Any other topics that address the design, processing, and use of meaning representations or Dr. Martha Palmer's contributions to NLP
Submission Details
Submissions should report original and unpublished research on topics of interest to the workshop. They should emphasize obtained results rather than intended work and should clearly indicate the state of completion of the reported results. Accepted papers are expected to be presented at the workshop and will be published in the workshop proceedings in the ACL Anthology. A paper accepted for presentation at the workshop must not be or have been presented at any other meeting with publicly available proceedings.
Submissions and Templates: Submission is electronic, using the Softconf START conference management system at https://softconf.com/lrec-coling2024/dmr2024/. Submissions must adhere to the two-column LREC-COLING format. Long papers must not exceed eight (8) pages of content and short papers must not exceed four (4) pages of content. If a paper is accepted, the authors will be given an additional page to address reviewers’ comments in the final version. References and appendices do not count against these limits.
When submitting a paper from the START page, authors will be asked to provide essential information about resources (in a broad sense, i.e., also technologies, standards, evaluation kits, etc.) that have been used for the work described in the paper or are a new result of their research. Moreover, ELRA encourages all LREC-COLING authors to share the described LRs (data, tools, services, etc.) to enable their reuse and the replicability of experiments (including evaluation ones). We also accept commitments from ACL Rolling Review (ARR). All ARR commitments to DMR must have received all reviews and meta-reviews by March 25, 2024. For more information on ARR in general, see https://aclrollingreview.org.
Author Responsibilities: Reviewing of papers will be double-blind. Therefore, the paper must not include the authors' names and affiliations or self-references that reveal any author's identity; e.g., "We previously showed (Smith, 1991) …" should be replaced with citations such as "Smith (1991) previously showed …". Submissions should also avoid links to non-anonymized repositories: the code should be submitted either as supplementary material in the final version of the paper or as a link to an anonymized repository (e.g., Anonymous GitHub or Anonym Share). Papers that do not conform to these requirements will be rejected without review. If the paper is available as a preprint, this must be indicated on the submission form but not in the paper itself. In addition, DMR 2024, in accordance with LREC-COLING 2024, will follow the same policy as ACL conferences, establishing an anonymity period during which non-anonymous posting of preprints is not allowed. Papers that are or will be under consideration for other venues at the same time must be declared at submission time. If a paper is accepted for publication at DMR 2024, it must be withdrawn from other venues immediately. If a paper under review at DMR 2024 is accepted elsewhere and the authors intend to proceed there, the workshop committee must be notified immediately.
Authors of papers that have been or will be submitted to other meetings or publications must provide this information to the workshop organizers (dmr.workshop.0@gmail.com). Authors of accepted papers must notify the program chairs within 10 days of acceptance if the paper is withdrawn for any reason.
Dear colleagues,
The Lattice laboratory has a vacancy for a Research Engineer (M/F) working on the application of large language models to the analysis of French literature (12 months, at the Lattice laboratory in Montrouge/Paris, starting April 1, 2024 or shortly thereafter).
Models like BERT, Llama, or Mistral are great, but their performance is still far from perfect on tasks such as coreference resolution in long texts or quotation attribution. This is particularly the case when working on languages other than English, such as French. Moreover, literary texts are especially challenging, which makes them a perfect testbed for these models.
We are looking for a research engineer with some experience with these models and an interest in challenging tasks and computational humanities/literature. French does not need to be your mother tongue, but some command of the language is necessary.
For more details (context, skills, salary, ...) and to apply, please use the CNRS job portal exclusively: https://emploi.cnrs.fr/Offres/CDD/UMR8094-THIPOI0-010/Default.aspx
You can contact me directly for questions not addressed in the job description.
Best regards,
Thierry Poibeau