*******************************************************
EAMT 2024: The 25th Annual Conference of
The European Association for Machine Translation
24 - 27 June 2024
Sheffield, UK
https://eamt2024.sheffield.ac.uk/
@eamt_2024 (X account)
Keynote speaker: Alexandra Birch (University of Edinburgh, UK)
Workshop proposal deadline: 31 January 2024
Workshop date: 27 June 2024
More information:
https://eamt2024.sheffield.ac.uk/conference-calls/call-for-workshops
*******************************************************
*** Overview ***
The European Association for Machine Translation (EAMT) invites proposals
for workshops to be held in conjunction with the EAMT 2024 conference
taking place in Sheffield, UK, from 24 to 27 June 2024, with workshops held
on 27 June. We solicit proposals in all areas of machine translation. EAMT
workshops are intended to provide the opportunity for MT-related
communities of interest to spend focused time together advancing the state
of thinking or the state of practice in their area of interest or
endeavour. Workshops are generally scheduled as full-day events. Every
effort will be made to accept or reject (with reason) workshop proposals as
soon as possible after they are received by the organising committee so
that the workshop organisers have adequate time to prepare the workshop.
*** Submission information ***
Proposals should be submitted as PDF documents. Note that submissions
should be ready to be turned into a Call for Papers to the workshop within
one week of notification. The proposals should be at most two pages for the
main proposal and at most two additional pages for information about the
organisers, programme committee, and references. Thus, the whole proposal
should not be more than four pages long. The two pages for the main
proposal must include:
- A title and authors, affiliations, and contact information.
- A title and a brief description of the workshop topic and content.
- A list of speakers and alternates whom you intend to invite to present at
the workshop.
- An estimate of the number of attendees.
- A description of any shared tasks associated with the workshop (if any),
and an estimate of the number of participants.
- A description of special requirements and technical needs.
- If the workshop has been held before, a note specifying where previous
workshops were held, how many submissions the workshop received, how many
papers were accepted (also specify if they were not regular papers, e.g.,
shared task system description papers), and how many attendees the workshop
attracted.
- An outline of the intended workshop timeline with details about the
following items:
---- First call for workshop papers: some date
---- Second call for workshop papers: some date
---- Workshop paper due: some date
---- Notification of acceptance: some date
---- Camera-ready papers due: some date
Workshops are expected to follow the timelines below, so please make sure
the dates above fit into the schedule:
- 1st Call: no later than 14 March
- 2nd Call: no later than 04 April
- Deadline: 15 April (no later than 20 April)
- Acceptance: no later than 20 May
- Camera ready: no later than 27 May
- Proceedings deadline: 12 June
- Workshops: 27 June
The two pages for information about the organisers, program committee, and
references must include the following:
- The names, affiliations, and email addresses of the organisers, with a
brief description (2-5 sentences) of their research interests, areas of
expertise, and experience in organising workshops and related events.
- A list of Programme Committee members, with an indication of which
members have already agreed.
- References
Submissions should be formatted according to the templates specified below.
Anonymisation is not required. Submissions should be no longer than 4
pages, and submitted as PDF files to OpenReview:
https://openreview.net/group?id=EAMT.org/2024/Workshops_Track.
*** Templates for writing your proposal ***
There are templates available in the following formats (check our website --
https://eamt2024.sheffield.ac.uk/conference-calls/call-for-papers):
- LaTeX
- Cloneable Overleaf template
- Word
- Libre Office/Open Office
- PDF
Please also use these templates for camera-ready workshop contributions to
comply with the format requirements for the workshop proceedings to be
published in the ACL Anthology.
*** Evaluation criteria ***
The workshop proposals will be evaluated according to their originality and
impact, and the quality of the organising team and Programme Committee.
*** Organiser Responsibilities ***
The organisers of the accepted proposals will be responsible for
publicising and running the workshop, including reviewing submissions,
producing the camera-ready workshop proceedings in the ACL Anthology
format, as well as organising the schedule with local EAMT organisers.
For every accepted workshop, we offer one free registration for the EAMT
2024 conference to one workshop organiser.
*** Important dates ***
- Proposal submission deadline: 31 January 2024
- Notification of acceptance: rolling basis (no later than 28/02/2024)
All deadlines are 23:59 CEST
*** Workshop Co-Chairs ***
Mary Nurminen (Tampere University)
Diptesh Kanojia (University of Surrey)
*** Local organising committee ***
Carolina Scarton (University of Sheffield)
Charlotte Prescott (ZOO Digital)
Chris Bayliss (ZOO Digital)
Chris Oakley (ZOO Digital)
Xingyi Song (University of Sheffield)
--
*Carolina Scarton*
Lecturer in Natural Language Processing
Department of Computer Science
University of Sheffield
http://staffwww.dcs.shef.ac.uk/people/C.Scarton/
*** First Call for Research Projects Exhibition ***
36th International Conference on Advanced Information Systems Engineering
(CAiSE'24)
June 3-7, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus
https://cyprusconferences.org/caise2024/
(*** Submission Deadline: 8th April, 2024 AoE ***)
CAiSE 2024 features a Research Project Exhibition (RPE@CAiSE'24) where researchers and
practitioners can present their ongoing research projects (e.g., H2020 or ERC projects,
national grants) in the context of Information Systems Engineering. The main objective of this
call is to serve as a forum where presenters can disseminate the intermediate results of their
projects or get feedback about research project proposals being developed. The exhibition
will also provide a warm environment to find potential research partners, foster existing
relationships, and discuss research ideas.
To participate in the RPE@CAiSE'24, the authors should submit a short paper (5-8 pages)
showcasing the project, including the participants, the main objectives of the project and
relevant results obtained so far (or expected results in the case of project proposals). Each
submission will be peer-reviewed on the relevance of the submitted paper in the context of
CAiSE 2024. If the paper is accepted, the authors will be invited to register for the conference
to present their work at the Research Projects Exhibition session at CAiSE 2024.
The accepted contributions will be proposed for publication in the CEUR proceedings using the
1-column CEUR-ART style. In addition, the authors of the most influential project presented
at the RPE@CAiSE'24 will receive an award distinguishing their contribution as the "Most
Influential Project of the Research Project Exhibition @CAiSE'24".
RESEARCH PROJECTS REQUIREMENTS
For the Research Projects Exhibition, we solicit submissions of projects related to the topics
of CAiSE that meet the following criteria:
• Projects funded by the European Union, by national or local funding organisations, or even
by individual universities and industries.
• Projects focused on fundamental research, applied research or more industry-oriented.
• Research projects carried out by an international consortium of partners or by a national
research team.
• Research statements for future projects concerning the Information Systems Engineering
community.
SUBMISSION GUIDELINES
Papers should be submitted via Easychair
(https://www.easychair.org/conferences/?conf=caise2024) by selecting the "Research
Projects Exhibition". Each submission of a research project should include:
• The project's full name, acronym, duration (from-to), participants, funding agency and URL.
• Names of presenter(s) and main contributors.
• Abstract and keywords.
• Summary of project objectives and expected tangible outputs.
• The relevance of the project (or one of its work packages) to the topics of the International
Conference on Advanced Information Systems Engineering.
• If the project is ongoing: summary of current status and intermediate results.
All submissions should be 5 to 8 pages long and be formatted as a 1-column CEUR-ART style
(templates available at https://ceur-ws.org/Vol-XXX/). An intention to submit should be
sent one week before the deadline, including the full name of the project, the authors'
names, and the abstract.
Each submission will be reviewed by at least two members of the Program Committee. In case
of disagreement, a third member of the Program Committee will review the submission. The
Program Committee will comprise international researchers with expertise in the field.
ATTENDANCE AND PRESENTATION
During the Research Projects Exhibition session, the authors of accepted contributions will
present the research project. Details about the format of the session and instructions to
prepare the presentation will be given to authors after the acceptance notification. At least
one author of each submission accepted for the Research Projects Exhibition must register
and attend the conference to present the work. The author needs a full registration to present
the research project.
IMPORTANT DATES
• Intention to Submit: 1st April, 2024 (AoE)
• Submission: 8th April, 2024 (AoE)
• Notification of Acceptance: 22nd April, 2024
• Camera Ready: 13th May, 2024
• Author Registration: 17th May, 2024
• Conference Dates: 3rd-7th June, 2024
RESEARCH PROJECTS EXHIBITION CHAIRS
• Raimundas Matulevicius, University of Tartu, Estonia
• Henderik A. Proper, TU Wien, Austria
The Fourth Workshop on Human Evaluation of NLP Systems (HumEval 2024)
invites the submission of long and short papers on current human evaluation
research and future directions. HumEval 2024 will take place in Turin
(Italy) on May 21 2024, during LREC-COLING 2024.
Website: https://humeval.github.io/
Important dates:
Submission deadline: 11 March 2024
Paper acceptance notification: 4 April 2024
Camera-ready versions: 19 April 2024
HumEval 2024: 21 May 2024
LREC-COLING 2024 conference: 20–25 May 2024
All deadlines are 23:59 UTC-12.
===============================================
Human evaluation plays a central role in NLP, from the large-scale
crowd-sourced evaluations carried out e.g. by the WMT workshops, to the
much smaller experiments routinely encountered in conference papers.
Moreover, while NLP embraced a number of automatic evaluation metrics, the
field has always been acutely aware of their limitations (Callison-Burch et
al., 2006; Reiter and Belz, 2009; Novikova et al., 2017; Reiter, 2018;
Mathur et al., 2020a), and has gauged their trustworthiness in terms of how
well, and how consistently, they correlate with human evaluation scores
(Gatt and Belz, 2008; Popović and Ney, 2011; Shimorina, 2018; Mille et
al., 2019; Dušek et al., 2020; Mathur et al., 2020b). Yet there is growing
unease about how human evaluations are conducted in NLP. Researchers have
pointed out the less than perfect experimental and reporting standards that
prevail (van der Lee et al., 2019; Gehrmann et al., 2023), and that
low-quality evaluations with crowdworkers may not correlate well with
high-quality evaluations with domain experts (Freitag et al., 2021). Only a
small proportion of papers provide enough detail for reproduction of human
evaluations, and in many cases the information provided is not even enough
to support the conclusions drawn (Belz et al., 2023). We have found that
more than 200 different quality criteria (such as Fluency, Accuracy,
Readability, etc.) have been used in NLP, and that different papers use the
same quality criterion name with different definitions, and the same
definition with different names (Howcroft et al., 2020). Furthermore, many
papers do not use a named criterion, asking the evaluators only to assess
'how good' the output is. Inter- and intra-annotator agreement are usually
given only in the form of an overall number without analysing the reasons
and causes for disagreement and potential to reduce them. A small number of
papers have aimed to address this from different perspectives, e.g.
comparing agreement for different evaluation methods (Belz and Kow, 2010),
or analysing errors and linguistic phenomena related to disagreement
(Pavlick and Kwiatkowski, 2019; Oortwijn et al., 2021; Thomson and Reiter,
2020; Popović, 2021). Context beyond sentences needed for a reliable
evaluation has also started to be investigated (e.g. Castilho et al.,
2020). The above aspects all interact in different ways with the
reliability and reproducibility of human evaluation measures. While
reproducibility of automatically computed evaluation measures has attracted
attention for a number of years (e.g. Pineau et al., 2018, Branco et al.,
2020), research on reproducibility of measures involving human evaluations
is a more recent addition (Cooper & Shardlow, 2020; Belz et al., 2023).
The HumEval workshops (previously at EACL 2021, ACL 2022, and RANLP 2023)
aim to create a forum for current human evaluation research and future
directions, a space for researchers working with human evaluations to
exchange ideas and begin to address the issues human evaluation in NLP
faces in many respects, including experimental design, meta-evaluation and
reproducibility. We invite papers on topics including, but not limited
to, the following, as addressed in any subfield of NLP:
- Experimental design and methods for human evaluations
- Reproducibility of human evaluations
- Inter-evaluator and intra-evaluator agreement
- Ethical considerations in human evaluation of computational systems
- Quality assurance for human evaluation
- Crowdsourcing for human evaluation
- Issues in meta-evaluation of automatic metrics by correlation with human
evaluations
- Alternative forms of meta-evaluation and validation of human evaluations
- Comparability of different human evaluations
- Methods for assessing the quality and the reliability of human evaluations
- Role of human evaluation in the context of Responsible and Accountable AI
Submissions for both short and long papers will be made directly via START,
following submission guidelines issued by LREC-COLING 2024. For full
submission details please refer to the workshop website.
The third ReproNLP Shared Task on Reproduction of Automatic and Human
Evaluations of NLP Systems will be part of HumEval, offering (A) an Open
Track for any reproduction studies involving human evaluation of NLP
systems; and (B) the ReproHum Track where participants will reproduce the
papers currently being reproduced by partner labs in the EPSRC ReproHum
project. A separate call will be issued for ReproNLP 2024.
--
Kind regards, Simone Balloccu.
The 6th Clinical Natural Language Processing Workshop @ NAACL 2024
<https://2024.naacl.org/>, 20 or 21 June 2024, Mexico City, Mexico.
https://clinical-nlp.github.io/2024
Clinical text is growing rapidly as electronic health records become
pervasive. Much of the information recorded in a clinical encounter is
located exclusively in provider narrative notes, which makes them
indispensable for supplementing structured clinical data in order to better
understand patient state and care provided. The methods and tools developed
for the clinical domain have historically lagged behind the scientific
advances in general-domain NLP. Despite recent strides in clinical NLP,
a substantial gap remains. The goal of this workshop is to
address this gap by establishing a regular event in CL conferences that
brings together researchers interested in developing state-of-the-art
methods for the clinical domain. The focus is on improving NLP technology
to enable clinical applications, and specifically, information extraction
and modeling of narrative provider notes from electronic health records,
patient encounter transcripts, and other clinical narratives.
Relevant topics for the workshop include, but are not limited to:
- Modeling clinical text in standard NLP tasks (tagging, chunking,
parsing, entity identification, entity linking/normalization, relation
extraction, coreference, summarization, etc.)
- De-identification and other handling of protected health information
- Disease detection and other coding of clinical documents (e.g., ICD)
- Structure of clinical documents (e.g., section identification)
- Information extraction from clinical text
- Integration of structured and textual data for clinical tasks
- Domain adaptation and transfer learning techniques for clinical data
- Generation of clinical notes: summarization, image-to-text, generation
of notes from clinical conversations, etc.
- Annotation schemes and annotation methodology for clinical data
- Evaluation techniques for the clinical domain
- Bias and fairness in clinical text
In 2024, Clinical NLP will encourage submissions from the following special
tracks:
- Clinical NLP in low-resource settings (e.g., languages other than
English)
- Clinical NLP for clinical conversations (e.g., doctor-patient)
- Risk analysis of large language models for clinical NLP (e.g.,
privacy, bias)
The 6th Clinical NLP Workshop will be co-located with NAACL 2024 in Mexico
City on June 20 or 21.
Shared Tasks
Clinical NLP 2024 is hosting four shared tasks:
- Task 1 - MEDIQA-CORR: Medical Error Detection & Correction
- Task 2 - MEDIQA-M3G: Multilingual & Multimodal Medical Answer
Generation
- Task 3 - EHRSQL: Reliable Text-to-SQL Modeling on Electronic Health
Records
- Task 4 - Chemotherapy Timelines Extraction
Please visit the shared task websites to register to participate and for
additional information about the shared tasks.
- MEDIQA-CORR: https://sites.google.com/view/mediqa2024/mediqa-corr
- MEDIQA-M3G: https://sites.google.com/view/mediqa2024/mediqa-m3g
- EHRSQL: https://github.com/glee4810/ehrsql-2024
- Chemotherapy Timelines Extraction:
http://chemotimelines2024.healthnlp.org
Submissions
The OpenReview submission site is:
-
https://openreview.net/group?id=aclweb.org/NAACL/2024/Workshop/Clinical_NLP
All submissions must follow ACL formatting guidelines
<https://acl-org.github.io/ACLPUB/formatting.html>, including:
- Submissions should be anonymous and must not include any identifying
information about the authors
<https://acl-org.github.io/ACLPUB/review-version.html>
- Long papers may have up to eight (8) pages of content and short papers
may have up to four (4) pages of content.
- You are allowed unlimited pages for references
<https://acl-org.github.io/ACLPUB/formatting.html#paper-length>. Any
“Limitations” section or “Ethics Statement” is treated like the references; it
does not count toward the page limit.
Clinical NLP 2024 has no preprint restrictions; you may post to arXiv at
any time. Clinical NLP 2024 workshop proceedings are archival and will be
published on the ACL Anthology
<https://aclanthology.org/venues/clinicalnlp/>.
We encourage submissions of papers submitted to but not accepted by EACL
2024 <https://2024.eacl.org/>, NAACL 2024 <https://2024.naacl.org/>, or ACL
Rolling Review <https://aclrollingreview.org/>, as long as the topics are
relevant to Clinical NLP.
Important Dates
All deadlines are 11:59PM UTC-12:00 (anywhere on Earth
<https://www.timeanddate.com/time/zones/aoe>).
General Dates
- Submission deadline: Tuesday, March 19, 2024
- Notification of acceptance: Tuesday, April 16, 2024
- Final versions of papers due: Wednesday, April 24, 2024
- Workshop: June 20 or 21, 2024
Shared Task Dates
- Shared task registration opens: Monday, January 8, 2024
- Shared task release of training/validation sets: Friday, January 26, 2024
- Shared task release of the test sets: Monday, February 26, 2024
- Shared task run submission deadline: Friday, March 1, 2024
- Shared task release of official results: Wednesday, March 6, 2024
- Paper-related deadlines: see General Dates above
Workshop Organizers
- Asma Ben Abacha (Microsoft)
- Danielle Bitterman (Harvard Medical School)
- Kirk Roberts (UTHealth Houston)
- Steven Bethard (University of Arizona)
- Tristan Naumann (Microsoft Research)
Contact
For inquiries, please contact:
clinical-nlp-workshop-organizers(a)googlegroups.com.
***************
Semantic Methods for Events and Stories, 2nd Edition (SEMMES 2024) – Call for Papers
***************
Website: https://anr-kflow.github.io/semmes/
Workshop co-located with the Extended Semantic Web Conference (ESWC) in Hersonissos, Greece
Submission deadline: March 7th, 2024
Scope
***************
An important part of human history and knowledge is made of events, which can be aggregated and connected to create stories, be they real or fictional. These events as well as the stories created from them can typically be inherently complex, reflect societal or political stances and be perceived differently across the world population. The Semantic Web offers technologies and methods to represent these events and stories, as well as to interpret the knowledge encoded into graphs and use it for different applications, spanning from narrative understanding and generation to fact-checking.
The aim of the 2nd edition of our workshop on Semantic Methods for Events and Stories (SEMMES) is to offer an opportunity to discuss the challenges related to dealing with events and stories, and how we can use semantic methods to tackle them. We welcome approaches which combine data, methods and technologies coming from the Semantic Web with methods from other fields, including machine learning, narratology or information extraction. This workshop wants to bring together researchers working on complementary topics, in order to foster collaboration and sharing of expertise in the context of events and stories.
Topics
***************
Topics of interest include, but are not limited to:
- Ontologies and data models for representing events, event relations, and narratives;
- Event extraction, co-reference and linking;
- Event Relation extraction and linking (e.g. temporal, causal, modal relationships);
- Methods combining KGs and LLMs targeting event- or narrative-related research;
- Fake events detection and event verification;
- Event-centric question answering;
- Event information visualisation;
- Event-centric knowledge graphs and vocabularies;
- Completion of event-centric knowledge graphs and reasoning;
- Event summarisation;
- Automatic narrative understanding and generation;
- Storytelling Applications/Demos.
Submission Guidelines
***************
We welcome the following types of contributions.
- Long papers (10-15 pages including references)
- Short papers (5-9 pages including references)
We welcome any types of research, resource and application papers, as well as (short only) demonstration submissions.
Submissions must be written in English and formatted using the template for submissions to CEUR Workshop Proceedings (https://www.overleaf.com/latex/templates/template-for-submissions-to-ceur-w…)
All papers and abstracts have to be submitted electronically via EasyChair: https://easychair.org/conferences/?conf=semmes2024.
Each accepted paper needs to be presented by one of the authors, who agrees to register and participate in SEMMES.
Authors may be requested to serve as reviewers for at most two papers.
Important Dates
***************
- Submission deadline: March 7th, 2024
- Notifications: April 4th, 2024
- Camera-ready version: April 18th, 2024
- Workshop day: May 26th or 27th, 2024 (half-day, TBA)
All deadlines are 23:59 anywhere on earth (UTC-12).
Proceedings
***************
The complete set of papers will be published with the joint CEUR ESWC Workshop Proceedings (http://CEUR-WS.org), indexed by DBLP.
--
Pasquale Lisena
EURECOM, Campus SophiaTech
450 route des Chappes, 06410 Biot, France
e-mail: pasquale.lisena(a)eurecom.fr
site: http://pasqlisena.github.io/
*** First Call for Papers ***
We invite paper submissions to the 8th Workshop on Online Abuse and Harms (WOAH), which will take place on June 20/21 at NAACL 2024.
Website: https://www.workshopononlineabuse.com/cfp.html
Important Dates
Submission due: March 10, 2024
ARR reviewed submission due: April 7, 2024
Notification of acceptance: April 14, 2024
Camera-ready papers due: April 24, 2024
Workshop: June 20-21, 2024
Overview
Digital technologies have brought many benefits for society, transforming how people connect, communicate and interact with each other. However, they have also enabled abusive and harmful content such as hate speech and harassment to reach large audiences, and for their negative effects to be amplified. The sheer amount of content shared online means that abuse and harm can only be tackled at scale with the help of computational tools. However, detecting and moderating online abuse and harms is a difficult task, with many technical, social, legal and ethical challenges. The Workshop on Online Abuse and Harms invites paper submissions from a wide range of fields, including natural language processing, machine learning, computational social sciences, law, politics, psychology, sociology and cultural studies. We explicitly encourage interdisciplinary submissions, technical as well as non-technical submissions, and submissions that focus on under-resourced languages. We also invite non-archival submissions and civil society reports.
The topics covered by WOAH include, but are not limited to:
* New models or methods for detecting abusive and harmful online content, including misinformation;
* Biases and limitations of existing detection models or datasets for abusive and harmful online content, particularly those in commercial use;
* New datasets and taxonomies for online abuse and harms;
* New evaluation metrics and procedures for the detection of harmful content;
* Dynamics of online abuse and harms, as well as their impact on different communities
* Social, legal, and ethical implications of detecting, monitoring and moderating online abuse
In addition, we invite submissions related to the theme for this eighth edition of WOAH, which will be online harms in the age of large language models. Highly capable Large Language Models (LLMs) are now widely deployed and easily accessible by millions across the globe. Without proper safeguards, these LLMs will readily follow malicious instructions and generate toxic content. Even the safest LLMs can be exploited by bad actors for harmful purposes. With this theme, we invite submissions that explore the implications of LLMs for the creation, dissemination and detection of harmful online content. We are interested in how to stop LLMs from following malicious instructions and generating toxic content, but also how they could be used to improve content moderation and enable countermeasures like personalised counterspeech. To support our theme, we have invited an interdisciplinary line-up of high-profile speakers across academia, industry and public policy.
Submission
Submission is electronic, using the Softconf START conference management system.
Submission link: TBD
The workshop will accept three types of papers.
* Academic Papers (long and short): Long papers of up to 8 pages, excluding references, and short papers of up to 4 pages, excluding references. Unlimited pages for references and appendices. Accepted papers will be given an additional page of content to address reviewer comments. Previously published papers cannot be accepted.
* Non-Archival Submissions: Up to 2 pages, excluding references, to summarise and showcase in-progress work and work published elsewhere.
* Civil Society Reports: Non-archival submissions, with a minimum of 2 pages and no upper limit. Can include work published elsewhere.
Format and styling
All submissions must use the official ACL two-column format, using the supplied official style files. The templates can be downloaded from Style Files and Formatting <https://github.com/acl-org/acl-style-files>.
Please send any questions about the workshop to organizers(a)workshopononlineabuse.com
Organisers
Paul Röttger, Bocconi University
Yi-Ling Chung, The Alan Turing Institute
Debora Nozza, Bocconi University
Aida Mostafazadeh Davani, Google Research
Agostina Calabrese, University of Edinburgh
Flor Miriam Plaza-del-Arco, Bocconi University
Zeerak Talat, MBZUAI
We are very pleased to share our second call for papers for our workshop on Reference, Framing, and Perspective co-located with LREC-COLING 2024.
* Workshop website: https://cltl.github.io/reference-framing-perspective/
* When: Saturday, May 25th, 2024
* Where: Torino, Italy (co-located with LREC-COLING 2024)
* Deadline for submissions: February 20, 2024
* Paper submission link: https://softconf.com/lrec-coling2024/reference-framing-perspective2024/user/
* Deadline for camera-ready papers: April 1, 2024
* Shared dataset: https://github.com/cltl/rfp_corpus_collection
When something happens in the world, we have access to an unlimited range of ways (from lexical choices to specific syntactic structures) to refer to the same real-world event. We can choose to express information explicitly or imply it. Variations in reference may convey radically different perspectives. This process of making reference to something by adopting a specific perspective is also known as framing. Although previous work in this area exists (see Ali and Hassan (2022) for a survey), a unitary framework is lacking, and only a few targeted datasets (Chen et al., 2019) and tools based on Large Language Models (Minnema et al., 2022) exist. In this workshop, we propose to adopt Frame Semantics (Fillmore, 1968, 1985, 2006) as a unifying theoretical framework and analysis method to understand the choices made in linguistic references to events. The semantic frames (expressed by predicates and roles) we choose give rise to our understanding, or framing, of an event. We aim to bring together different research communities interested in lexical and syntactic variation, referential grounding, frame semantics, and perspectives. We believe that there is significant overlap between the goals and interests of these communities, but not necessarily the common ground to enable collaborative work.
Referentially Grounded Shared Dataset
One way to study variation in framing is to conduct contrastive analyses of texts reporting on the same real-world event. Such an analysis can help to reveal the extent of variation in framing and possibly the underlying factors that lead to different choices in framing the same event. We collected such a corpus about the Eurovision Song Contest and make it available as a Shared Dataset for the Workshop. The purpose of this corpus is to enable exploratory analyses, facilitate discussion among participants, and, last but not least, make our workshop a real working workshop.
The corpus is composed of news articles reporting on the Eurovision Song Contest that took place in Rotterdam, the Netherlands (canceled in 2020 and held in 2021). The news articles have been collected using the structured data-to-text approach (Vossen et al., 2018). The corpus contains news articles in multiple languages. We invite participants to submit short and targeted analyses using the data (extended abstracts to be discussed in a hands-on data session). Participants are also free to use the data in regular contributions.
Regular contributions:
We aim to lay the groundwork for such efforts. We invite contributions (regular long papers of 8 pages or short papers of 4 pages) on any topic from the following non-exhaustive list:
* Theoretical models of framing and perspective
* Annotation frameworks for framing and perspectives
* Computational models of framing and perspective
* Approaches for creating and analyzing referentially grounded datasets (containing different perspectives, written at different points in time, written in different languages)
* Approaches for and analyses of texts about contested and divisive events triggering different opinions and perspectives
* Analyses of and methods for analyzing (diachronic) lexical variation and framing
* Language resources for reference, frames, and perspectives
* Approaches and tools to compare claims of sources
* Frames as expressions of bias in the representation of social groups
* User interfaces for the visualization of multiple perspectives
Extended abstracts:
We invite extended abstracts (1,500 words maximum) about small analyses or experiments conducted on our Shared Data. The abstracts will be non-archival and discussed in a dedicated data session.
Invited speakers:
Maria Antoniak
Vered Shwartz
Organizers:
Pia Sommerauer, Tommaso Caselli, Malvina Nissim, Levi Remijnse, Piek Vossen
Fully Funded PostDoc Position in NLP / Scientific Document Analysis
The Tübingen AI Center at the University of Tübingen is looking for a motivated postdoctoral researcher interested in natural language processing for scientific document analysis. The researcher will be supervised by Prof. Andreas Geiger (University of Tübingen) and Prof. Iryna Gurevych (TU Darmstadt) and will have the opportunity to supervise Master and PhD students.
Description: The body of scientific literature is growing at an ever-increasing rate. As a result, it is increasingly difficult for researchers to keep up to date. This hinders scientific progress at large and leads to a suboptimal use of resources, including research funds, compute, energy, and intellectual capacity. In this project, we plan to develop novel NLP methods and algorithms and to collect new datasets to advance research in scientific document processing. Research topics include:
* Efficient hierarchical and multi-modal document representations
* Structured intra- and inter-document models
* Distillation and adaptation of LLMs for scientific document analysis and generation
* Self-supervised learning with multi-scale pre-text tasks
* Explainable and grounded scientific document models
* Deployment of algorithms and collection of datasets (www.scholar-inbox.com)
Requirements: We are looking for candidates who hold a PhD degree and have published at top conferences in the field (ACL, EMNLP, NAACL, TACL).
About Us: The University of Tübingen is one of Germany's excellence universities, with an excellence cluster on machine learning, an ELLIS Unit, and the Tübingen AI Center. Embedded in the interdisciplinary research environment of Cyber Valley, the Autonomous Vision Group conducts curiosity-driven fundamental research, providing researchers with access to unique research facilities and great research teams. Currently, two PhD students are working on this project. Our culture is international, inclusive, and collaborative. We look forward to your application!
To apply, please send your application materials, including your CV, research statement, transcripts, and the names of referees, to: a.geiger(a)uni-tuebingen.de
Dear Colleagues,
Tomorrow is the last day for Early Bird registration for DHd2024 in Passau.
Late Bird registration will be open until 18 February 2024.
Please register through ConfTool.
More information:
https://dhd2024.dig-hum.de/registrierung/
For the DHd2024 Team in Passau
Thomas Haider