###################
########## Call for Papers
######## Special Session
###### EnGeoData'2023: Geospatial data analysis under the umbrella of One Health
#### https://simbig.org/engeodata/2023
###
## IEEE DSAA 2023
# The 10th IEEE International Conference on Data Science and Advanced Analytics
# October 9-13, 2023, Thessaloniki, Greece
### AIMS AND TOPICS
1. Abstract
The current context of urbanization, globalization, high mobility and trade, and climate change favors the (re-)emergence of known and unknown diseases. Geospatial and environmental data analysis for One Health is therefore crucial to provide insights into the connections between humans, animals, and the environment. This type of analysis allows us to identify and monitor health issues that arise from the interactions between these three areas. However, it is challenging due to: (1) the multi-modality of the data (e.g., unstructured, imaging, semantic, spatial, temporal); and (2) the difficulty of choosing the "most appropriate" knowledge discovery process for specific field needs (e.g., animal, plant, or human health; crisis and disaster surveillance).
EnGeoData 2023 aims to gather high-quality research addressing the challenges above, with theoretical and/or experimental approaches.
2. Topics
Topics of interest include (but are not limited to):
- Pre- and post-processing of environmental data
- Geographical information retrieval
- Spatial data mining, spatial data warehousing, and spatial data lake
- Knowledge discovery use-cases applied to environmental data
- Spatial text mining
- Spatial ontology
- Spatial recommendation and personalization
- Visual analytics for geo-spatial data
- Dedicated applications:
* Spatio-temporal analytics platform
* Agricultural decision support systems
* Urban traffic systems
* Trajectory analysis
* Land-use and urban policies
* Land-use and urban planning analysis
* Spatio-temporal analysis in ecology and agriculture
* Disease surveillance systems (One Health)
### SUBMISSION
All papers should be submitted electronically via EasyChair (https://easychair.org/my/conference?conf=dsaa2023) under the "Special Session" track.
- Paper Submission Deadline: May 22, 2023
- Paper Notification: July 17, 2023
- Paper Camera Ready Due: August 7, 2023
The length of each paper submitted to the special session should be no more than ten (10) pages, formatted in the standard 2-column U.S. letter style of the IEEE conference template. For further information and instructions, see the IEEE Proceedings Author Guidelines.
All submissions will be blind reviewed by the Program Committee on the basis of technical quality, relevance to the session's topics of interest, originality, significance, and clarity. Author names and affiliations must not appear in the submissions, and bibliographic references must be adjusted to preserve author anonymity. Submissions failing to comply with the paper formatting or author anonymity requirements will be rejected without review.
### CHAIRS
- Mathieu Roche, CIRAD, TETIS, France
- Antonio Lossio-Ventura, National Institutes of Health, USA
- Hamid Laga, Murdoch University, Australia
- Maguelonne Teisseire, INRAE, TETIS, France
For questions, please contact us at engeodata(a)teledetection.fr
AmericasNLP 2023 has extended its paper submission deadline. The new deadline is *April 22, 2023*. (The original deadline was April 15.) More information below!
The Third Workshop on NLP for Indigenous Languages of the Americas (AmericasNLP 2023)
The Third Workshop on NLP for Indigenous Languages of the Americas (AmericasNLP) will be co-located with the 61st Annual Meeting of the Association for Computational Linguistics (ACL 2023, https://2023.aclweb.org/), which is scheduled to be held in Toronto, Canada, July 9-14, 2023.
The goal of the workshop is to encourage and increase the visibility of
work on the Indigenous languages of the Americas. It aims to encourage
research on NLP, computational linguistics, corpus linguistics and speech
for Indigenous languages, to connect researchers and professionals from
underrepresented communities and native speakers of endangered languages
with the ACL community, and, more generally, to promote machine learning
approaches suitable for low-resource languages.
We invite the submission of:
- Long papers (8 pages) and short papers (4 pages) on substantial, original, and unpublished research
- Non-archival extended abstracts (2 pages), technical reports (8 pages), and work which has been presented at other venues (in the format of the original publication)
Submissions do not need to describe work on Indigenous languages directly, as long as it is clear how those languages can benefit from the described approaches.
Areas of interest include but are not limited to:
- Creation of datasets for NLP applications
- Incorporation of external knowledge into neural systems
- Linguistic typology and the use of typological features for NLP
- Transfer learning, meta-learning, and active learning
- Weakly supervised, semi-supervised, and unsupervised learning
- Machine translation of low-resource languages
- Morphology and phonology of low-resource languages
- NLP applications for Indigenous languages of the Americas
Important dates:
- Start of the anonymity period: March 15, 2023
- Submission deadline: *April 22, 2023* (extended from April 15, 2023)
- Notification of acceptance: May 15, 2023
- Camera-ready papers due: May 26, 2023
- Workshop: July 14, 2023
All deadlines are 11:59 pm UTC-12 (anywhere on earth).
Link to submission portal: https://softconf.com/acl2023/AmericasNLP2023/
The workshop also includes:
- A machine translation shared task on truly low-resource languages
- A mentoring program to support students and newcomers from underrepresented communities (application form: https://forms.gle/afBWauDfDQijXHTy9)
We also have a diverse set of invited speakers, focused on bridging the gap between linguists, NLP, and machine learning research!
- Steven Bird (linguistics; ethics)
- Angela Fan (NLP; machine translation)
- Kristine Stenzel (field linguistics; American Indigenous languages)
Organizing Committee
- Manuel Mager, AWS AI Labs
- Arturo Oncevay, University of Edinburgh
- Enora Rice, University of Colorado Boulder
- Abteen Ebrahimi, University of Colorado Boulder
- Shruti Rijhwani, Google Research
- Alexis Palmer, University of Colorado Boulder
- Katharina Kann, University of Colorado Boulder
More information and contact information can be found at http://turing.iimas.unam.mx/americasnlp/.
--
Dr. Katharina Kann
Assistant Professor of Computer Science
University of Colorado Boulder
Personal page: https://kelina.github.io
Group page: https://nala-cub.github.io
Apologies for cross-posting.
The task proposal deadline has been extended to April 24, 2023, anywhere on earth. We look forward to your proposals.
Important Dates:
- Task proposals due: April 24, 2023 (Anywhere on Earth; extended from April 17)
- Task selection notification: May 22, 2023
Contact: semevalorganizers(a)gmail.com
We invite proposals for tasks to be run as part of SemEval-2024. SemEval (the International Workshop on Semantic Evaluation) is an ongoing series of evaluations of computational semantics systems, organized under the umbrella of SIGLEX, the Special Interest Group on the Lexicon of the Association for Computational Linguistics.
SemEval tasks explore the nature of meaning in natural languages: how to characterize meaning and how to compute it. This is achieved in practical terms, using shared datasets and standardized evaluation metrics to quantify the strengths and weaknesses of possible solutions. SemEval tasks encompass a broad range of semantic topics from the lexical level to the discourse level, including word sense identification, semantic parsing, coreference resolution, and sentiment analysis, among others.
For SemEval-2024, we welcome any task that can test an automatic system for
the semantic analysis of text, which could be an intrinsic semantic
evaluation or an application-oriented evaluation. We especially encourage
tasks for languages other than English, cross-lingual tasks, and tasks that
develop novel applications of computational semantics. See the websites of previous editions, SemEval-2022 and SemEval-2023, to get an idea of the range of tasks explored.
We strongly encourage proposals based on pilot studies that have already
generated initial data, as this can provide concrete examples and can help
to foresee the challenges of preparing the full task. In the event of
receiving many proposals, preference will be given to proposals that have
already run a pilot study.
In case you are not sure whether a task is suitable for SemEval, please
feel free to get in touch with the SemEval organizers at
semevalorganizers(a)gmail.com to discuss your idea.
*=== Task Selection ===*
Task proposals will be reviewed by experts, and reviews will serve as the
basis for acceptance decisions. Everything else being equal, more
innovative new tasks will be given preference over task reruns. Task
proposals will be evaluated on:
- Novelty: Is the task on a compelling new problem that has not been explored much in the community? If the task is a rerun, does it cover substantially new ground (new subtasks, new types of data, new languages, etc.)?
- Interest: Is the proposed task likely to attract a sufficient number of participants?
- Data: Are the plans for collecting data convincing? Will the resulting data be of high quality? Will annotations have meaningfully high inter-annotator agreement? Have all appropriate licenses for use and re-use of the data after the evaluation been secured? Have all international privacy concerns been addressed? Will the data annotation be ready on time?
- Evaluation: Is the methodology for evaluation sound? Is the necessary infrastructure available, or can it be built in time for the shared task? Will research inspired by this task be able to evaluate in the same manner and on the same data after the initial task?
- Impact: What is the expected impact of the data in this task on future research beyond the SemEval Workshop?
*=== New Tasks vs. Task Reruns ===*
We welcome both new tasks and task reruns. For a new task, the proposal
should address whether the task would be able to attract participants.
Preference will be given to novel tasks that have not received much
attention yet.
For reruns of previous shared tasks (whether or not the previous task was
part of SemEval), the proposal should address the need for another
iteration of the task. Valid reasons include: a new form of evaluation
(e.g. a new evaluation metric, a new application-oriented scenario), new
genres or domains (e.g. social media, domain-specific corpora), or a
significant expansion in scale. We further discourage carrying over a
previous task and just adding new subtasks, as this can lead to the
accumulation of too many subtasks. Evaluating on a different dataset with
the same task formulation, or evaluating on the same dataset with a
different evaluation metric, typically should not be considered a separate
subtask.
*=== Task Organization ===*
We welcome people who have never organized a SemEval task before, as well
as those who have. Apart from providing a dataset, task organizers are
expected to:
- Verify the data annotations have sufficient inter-annotator agreement
- Verify licenses for the data to allow its use in the competition and
afterwards. In particular, text that is publicly available online is not
necessarily in the public domain; unless a license has been provided, the
author retains all rights associated with their work, including copying,
sharing and publishing. For more information, see:
https://creativecommons.org/faq/#what-is-copyright-and-why-does-it-matter
- Resolve any potential security, privacy, or ethical concerns about the
data
- Make the data available in a long-term repository under an appropriate
license, preferably using Zenodo: https://zenodo.org/communities/semeval/
- Provide task participants with format checkers and standard scorers.
- Provide task participants with baseline systems to use as a starting
point (in order to lower the obstacles to participation). A baseline system
typically contains code that reads the data, creates a baseline response
(e.g. random guessing, majority class prediction), and outputs the
evaluation results. Whenever possible, baseline systems should be written
in widely used programming languages and/or should be implemented as a
component for standard NLP pipelines.
- Create a mailing list and website for the task and post all relevant
information there.
- Create a CodaLab or other similar competition for the task and upload the
evaluation script.
- Manage submissions on CodaLab or a similar competition site.
- Write a task description paper to be included in SemEval proceedings, and
present it at the workshop.
- Manage participants’ submissions of system description papers, manage
participants’ peer review of each others’ papers, and possibly shepherd
papers that need additional help in improving the writing.
- Review other task description papers.
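The baseline system described in the duties above (read the data, produce a trivial prediction such as the majority class, and report the evaluation result) can be sketched in a few lines. This is an illustrative example, not an official SemEval artifact; the function and variable names are my own.

```python
from collections import Counter

def majority_class_baseline(train_labels, test_labels):
    """Predict the most frequent training label for every test instance
    and return (predicted_label, accuracy) against the gold test labels."""
    majority = Counter(train_labels).most_common(1)[0][0]
    predictions = [majority] * len(test_labels)
    correct = sum(p == g for p, g in zip(predictions, test_labels))
    return majority, correct / len(test_labels)

label, acc = majority_class_baseline(
    ["pos", "pos", "neg"],         # labels read from training data
    ["pos", "neg", "pos", "pos"],  # gold labels of the test set
)
print(label, acc)  # → pos 0.75
```

In a real task, the same script would also read the data files in the task's format and write predictions in the format expected by the official scorer, so that participants can use it as a drop-in starting point.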
*=== Important dates ===*
- Task proposals due: April 24, 2023 (Anywhere on Earth; extended from April 17)
- Task selection notification: May 22, 2023
*=== Preliminary timetable ===*
- Sample data ready July 15, 2023
- Training data ready September 1, 2023
- Evaluation data ready December 1, 2023 (internal deadline; not for public
release)
- Evaluation starts January 10, 2024
- Evaluation end by January 31, 2024 (latest date; task organizers may
choose an earlier date)
- Paper submission due February 2024
- Notification to authors in March 2024
- Camera-ready due April 2024
- SemEval workshop Summer 2024 (co-located with a major NLP conference)
Tasks that fail to keep up with crucial deadlines (such as the dates for
having the task and CodaLab website up and dates for uploading samples,
training, and evaluation data) may be cancelled at the discretion of
SemEval organizers. While consideration will be given to extenuating
circumstances, our goal is to provide sufficient time for the participants
to develop strong and well-thought-out systems. Cancelled tasks will be
encouraged to submit proposals for the subsequent year’s SemEval. To reduce
the risk of tasks failing to meet the deadlines, we are unlikely to accept
multiple tasks with overlap in the task organizers.
*=== Submission Details ===*
The task proposal should be a self-contained document of no longer than 3
pages (plus additional pages for references). All submissions must be in
PDF format, following the ACL template.
Each proposal should contain the following:
- Overview
- Summary of the task
- Why this task is needed and which communities would be interested in
participating
- Expected impact of the task
- Data & Resources
- How the training/testing data will be produced. Please discuss whether
existing corpora will be re-used.
- Details of copyright, so that the data can be used by the research
community both during the SemEval evaluation and afterwards
- How much data will be produced
- How data quality will be ensured and evaluated
- An example of what the data would look like
- Resources required to produce the data and prepare the task for
participants (annotation cost, annotation time, computation time, etc.)
- Assessment of any concerns with respect to ethics, privacy, or security
(e.g. personally identifiable information of private individuals; potential
for systems to cause harm)
- Pilot Task (strongly recommended)
- Details of the pilot task
- What lessons were learned and how these will impact the task design
- Evaluation
- The evaluation methodology to be used, including clear evaluation
criteria
- For Task Reruns
- Justification for why a new iteration of the task is needed (see
criteria above)
- What will differ from the previous iteration
- Expected impact of the rerun compared with the previous iteration
- Task organizers
- Names, affiliations, email addresses
- (optional) brief description of relevant experience or expertise
- (if applicable) years and task numbers, of any SemEval tasks you have
run in the past
Proposals will be reviewed by an independent group of area experts who may
not have familiarity with recent SemEval tasks, and therefore all proposals
should be written in a self-explanatory manner and contain sufficient
examples.
The submission webpage is:
https://openreview.net/group?id=aclweb.org/ACL/2023/Workshop/SemEval
*=== Chairs ===*
Atul Kr. Ojha, SFI Insight Centre for Data Analytics, DSI, University of
Galway
A. Seza Doğruöz, Ghent University
Giovanni Da San Martino, University of Padua
Harish Tayyar Madabushi, The University of Bath
Ritesh Kumar, Dr. Bhimrao Ambedkar University
Contact: semevalorganizers(a)gmail.com
Dear colleagues,
In the context of the Lexhnology ANR project (joint linguistic and NLP discourse structure modelling of legal texts for language pedagogy), which started in early 2023, we currently have one open position for a doctoral candidate with a background in Natural Language Processing or a related field.
# Thesis topics
Interest in the legal field has recently exploded in the NLP community. International evaluation campaigns address several semantic tasks such as legal information extraction, entailment, rhetorical role recognition, and judgement prediction (LegalEval@SemEval2023, COLIEE-2023). In addition, several conferences and workshops gathering researchers have recently been organised (ASAIL@ICAIL2023, JURISIN@IsAI-2023), showing the growing interest of the NLP community in this specific domain. Numerous datasets have been built and collected (PileOfLaw20, LexGLUE22), allowing the community to create specialised Large Language Models (LLMs) for the legal field (e.g. LegalBERT).
This surge of interest stems from the fact that legal texts have several specific characteristics that make their automatic processing difficult and require dedicated development: they are both language- and domain-specific, and often longer than the input length LLMs can handle.
The role of the PhD student to be recruited will be to:
- propose a framework for probing Pretrained Language Models in terms of
the captured discourse information
- research effective methods to inject discourse knowledge into Transformer-based language models (discourse-inspired self-learning tasks, multi-task learning, Transformer architecture revision, etc.)
- develop an argumentative structure recognition system which will be
used in an online platform by legal English users for supporting their
reading and understanding tasks
# Project context
The PhD fellowship is offered in the context of the Lexhnology project (joint linguistic and NLP discourse structure modelling of legal texts for language pedagogy), funded by the French National Research Agency (https://lexhnology.hypotheses.org/). Partners include CRINI (Nantes Université), LS2N (Nantes Université), ATILF (CNRS & Université de Lorraine), and LAIRDIL (Université de Toulouse).
The successful candidate will join the NLP research group at LS2N lab in
Nantes (https://taln-ls2n.github.io/). Nantes is located in the western
part of France, crossed by the Loire River, and situated just 50
kilometres away from the Atlantic coast
(https://www.levoyageanantes.fr/en/to-see/).
# Requirements
* Master's degree (completed or nearly completed) in Computer Science, Computational Linguistics, Natural Language Processing, Machine Learning, Data Science, or a closely related field
* Excellent academic records
* Practical experience in Machine Learning (esp. Deep Learning) methods
* Good knowledge of experimental design methodology and statistics
* Some level of familiarity with discourse analysis would be a plus
* Excellent programming skills (esp. Python)
* English (at least B2) and French proficiency both spoken and written
* Initiative and ability to work independently and as part of a team
# General information
* Supervisors: Prof. Richard Dufour, Dr Nicolas Hernandez, Dr Laura
Monceaux.
* Type of contract: PhD student contract / thesis offer
* Contract period: 36 months
* Start date of the thesis: 1 September 2023
* Proportion of work: full time, on site
* Remuneration: about €2,175 gross monthly (before taxes), partial reimbursement of public transport costs
# Additional information and application
Application deadline: 8 May 2023
For further information and application, contact Nicolas Hernandez
(nicolas.hernandez(a)univ-nantes.fr) AND Laura Monceaux
(laura.monceaux(a)univ-nantes.fr) AND Richard Dufour
(richard.dufour(a)univ-nantes.fr).
Applications should contain all the documents indicated below:
a. Free-style cover letter outlining your interest in the PhD/ANR project
b. Curriculum vitae
c. Transcripts of grades from first and second year of master's program
(or, if applicable, a document attesting to anticipated success)
d. Names and addresses of two references.
Shortlisted applicants will be interviewed online.
--
Dr. Nicolas Hernandez
Associate Professor (Maître de Conférences)
Nantes Université - LS2N UMR6004
https://nicolashernandez.github.io/
+33 (0)2 51 12 53 94
+33 (0)2 40 30 60 67
https://sciences-techniques.univ-nantes.fr/programme-du-m1-atal
Dear Members,
We are very excited to share a new tool with the community. The Multi-Feature Tagger of English (MFTE) is the Python version of the MFTE Perl (Le Foll 2021). This improved and extended Python version includes semantic tags from Biber (2006) and Biber et al. (1999), as well as additional tags, e.g., separate tags for third person singular male and female pronouns. The tagger first uses the Python NLP library stanza for grammatical part-of-speech tagging, then applies rule-based regular expressions to tag a range of more complex lexico-grammatical and semantic features typically used in multidimensional analysis (MDA; cf. Biber 1984; 1988). The software is available as a free and open-source Python command line tool with a simple GUI. The current version is a pre-alpha release, so bugs, errors, and incomplete documentation are to be expected. If you are interested in doing MDA studies, you may want to give it a try. Please feel free to report any errors or other glitches using the Issues tab on the GitHub repo. The software is available at the link below, along with instructions on how to install and use it.
https://github.com/mshakirDr/MFTE
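The two-stage design described above (POS tagging with stanza, then rule-based regular expressions refining the tags) can be illustrated with a minimal sketch of the second stage. The tag names and rules below are illustrative inventions, not MFTE's actual tag inventory or code; the input is assumed to be (word, POS) pairs as a tagger like stanza would produce.

```python
import re

# Hypothetical regex rules refining POS tags into finer-grained feature
# tags, e.g. separate tags for 3rd person singular male/female pronouns.
RULES = [
    (re.compile(r"^(he|him|his|himself)$", re.I), "TPP3M"),    # 3rd p. sing. male
    (re.compile(r"^(she|her|hers|herself)$", re.I), "TPP3F"),  # 3rd p. sing. female
]

def tag_tokens(tagged):
    """Apply the first matching rule to each (word, pos) pair;
    tokens matching no rule keep their original POS tag."""
    out = []
    for word, pos in tagged:
        feature = pos
        for pattern, tag in RULES:
            if pattern.match(word):
                feature = tag
                break
        out.append((word, feature))
    return out

print(tag_tokens([("She", "PRP"), ("saw", "VBD"), ("him", "PRP")]))
# → [('She', 'TPP3F'), ('saw', 'VBD'), ('him', 'TPP3M')]
```

The actual MFTE applies a much larger rule set, including multi-word and context-sensitive patterns; see the repository for the real implementation.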
Regards
Workshop on Automatic Translation for Signed and Spoken Languages
***** The submission deadline for AT4SSL has been extended to 24 April 2023 *****
SCOPE
According to the World Federation of the Deaf (WFD), over 70 million people are deaf and communicate primarily via Sign Language (SL). Currently, human interpreters are the main medium for sign-to-spoken, spoken-to-sign and sign-to-sign language translation. The availability and cost of these professionals are often a limiting factor in communication between signers and non-signers. Machine Translation (MT) is a core technique for reducing language barriers for spoken languages. Although MT has come a long way since its inception in the 1950s, it still has a long way to go to successfully cater to all communication needs and users. When it comes to the deaf and hard of hearing communities, MT is in its infancy. The complexity of the task of automatically translating between SLs, or between sign and spoken languages, requires a multidisciplinary approach.
The rapid technological and methodological advances in deep learning, and in AI in general, over the last decade have not only improved MT, the recognition of image, video and audio signals, language understanding, the synthesis of life-like 3D avatars, etc., but have also led to a fusion of interdisciplinary research innovations that lays the foundation for automated translation services between sign and spoken languages.
This one-day workshop aims to be a venue for presenting and discussing (complete, ongoing or future) research on automatic translation between sign and spoken languages and bring together researchers, practitioners, interpreters and innovators working in related fields. We are delighted to confirm that two interpreters for English<>International Sign (IS) will be present during the event, to make it as inclusive as possible to anyone who wishes to participate.
Theme of the workshop: Data is one of the key factors for the success of today’s AI, including language and translation models for sign and spoken languages. However, when it comes to SL, MT and Natural Language Processing, we face problems related to small volumes of (parallel) data, low veracity in terms of origin of annotations (deaf or hearing interpreters), non-standardized annotations (e.g. glosses differ across corpora), video quality or recording setting, and others. The theme of this edition of the workshop is Sign language parallel data – challenges, solutions and resolutions.
The AT4SSL workshop aims to open a (guided) discussion between participants about current challenges, innovations and future developments related to the automatic translation between sign and spoken languages. To this extent, AT4SSL will host a moderated round table around the following three topics: (i) quality of recognition and synthesis models and user-expectations; (ii) co-creation -- deaf, hearing and hard-of-hearing people joining forces towards a common goal and (iii) sign-to-spoken and spoken-to-sign translation technology in media.
TOPICS
This workshop aims to focus on the following topics. However, submissions related to the general topic of automatic translation between signed and spoken languages that deviate from these topics are also welcome:
* Data: resources, collection and curation, challenges, processing, data life cycle
* Use-cases, applications
* Ethics, privacy and policies
* Sign language linguistics
* Machine translation (with a focus on signed-to-signed, signed-to-spoken or spoken-to-signed language translation)
* Natural language processing
* Interpreting of sign and spoken languages
* Image and video recognition (for the purpose of sign language recognition)
* 3D avatar and virtual signers synthesis
* Usability and challenges of current methods and methodologies
* Sign language in the media
SUBMISSION FORMAT
Two types of submissions are going to be accepted for the AT4SSL workshop:
* Research, review, position and application papers
Unpublished papers that present original, completed work. Each paper should be between four (4) and eight (8) pages, with unlimited pages for references.
* Extended abstracts
Extended abstracts should present original, ongoing work or innovative ideas. The length of each extended abstract is four (4) pages, with unlimited pages for references.
Both types of submission should be formatted according to the official EAMT 2023 style templates (LaTeX<https://events.tuni.fi/uploads/2022/12/ee35fd56-latex_template.zip>, Overleaf<https://www.overleaf.com/read/mkjbkppndvxw>, MS Word<https://events.tuni.fi/uploads/2022/12/edd598d2-eamt23.docx>, Libre/Open Office<https://events.tuni.fi/uploads/2022/12/ece98f81-eamt23.odt>, PDF<https://events.tuni.fi/uploads/2022/12/6e89772e-eamt23.pdf>).
Accepted papers and extended abstracts will be published in the EAMT 2023 proceedings and will be presented at the conference.
SUBMISSION POLICY
* Submissions must be anonymized.
* Papers and extended abstracts should be submitted using EasyChair<https://easychair.org/my/conference?conf=at4ssl2023>.
* Work that has been or is planned to be submitted to other venues must be declared as such. Upon acceptance at AT4SSL, it must be withdrawn from the other venues.
* The review will be double-blind.
IMPORTANT DATES:
* First call for papers: 13-March-2023
* Second call for papers: 31-March-2023
* Submission deadline: 24-April-2023 (extended from 14-April-2023)
* Review process: between 25-April-2023 and 05-May-2023
* Acceptance notification: 12-May-2023
* Camera ready submission: 01-June-2023
* Submission of material for interpreters: 06-June-2023
* Programme will be finalised by: 01-June-2023
* Workshop date: 15-June-2023
ORGANISATION COMMITTEE:
Dimitar Shterionov (TiU)
Mirella De Sisto (TiU)
Mathias Muller (UZH)
Davy Van Landuyt (EUD)
Rehana Omardeen (EUD)
Shaun O’Boyle (DCU)
Annelies Braffort (Paris-Saclay University)
Floris Roelofsen (UvA)
Frédéric Blain (TiU)
Bram Vanroy (KU Leuven; UGent)
Eleftherios Avramidis (DFKI)
INTERPRETING:
We will provide English to International Sign (IS) interpreting during the workshop.
FOR CONTACTS:
Dimitar Shterionov, workshop chair: d.shterionov(a)tilburguniversity.edu
Registration will be handled by the EAMT2023 conference. (To be announced)
Corpus Approaches to Lexicogrammar (LxGr2023)
LAST CALL FOR PAPERS
Abstract submission closes tomorrow, 15 April 2023
The symposium will take place online on 7-8 July 2023.
Invited Speakers:
Gaëtanelle Gilquin<https://perso.uclouvain.be/gaetanelle.gilquin> (Université catholique de Louvain)
Thomas Herbst<https://www.angam.phil.fau.de/fields/engling/herbst/> (Friedrich-Alexander-Universität)
If you would like to present, send an abstract of 500 words (excluding references) to lxgr(a)edgehill.ac.uk<mailto:lxgr@edgehill.ac.uk>. Make sure that the abstract clearly specifies the research focus (research questions or hypotheses), the corpus, the methodology (techniques and metrics), the theoretical orientation, and the main findings. Abstracts will be double-blind reviewed, and decisions will be communicated within four weeks.
Full papers will be allocated 35 minutes (including 10 minutes for discussion).
Work-in-progress reports will be allocated 20 minutes (including 5 minutes for discussion).
There will be no parallel sessions.
Participation is free.
The focus of LxGr is the interaction of lexis and grammar. The focus is influenced by Halliday's view of lexis and grammar as "complementary perspectives" (1991: 32), and his conception of the two as notional ends of a continuum (lexicogrammar), in that "if you interrogate the system grammatically you will get grammar-like answers and if you interrogate it lexically you get lexis-like answers" (1992: 64).
For more information and details of past symposia, see here: https://ehu.ac.uk/lxgr.
If you have any questions, contact lxgr(a)edgehill.ac.uk<mailto:lxgr@edgehill.ac.uk>.
***** The submission deadline for GITT has been extended to April 24th for full papers and May 3rd for extended abstracts and research communications *****
First International Workshop on Gender-Inclusive Translation Technologies (GITT) at EAMT 2023
15 June 2023, Tampere, Finland
https://sites.google.com/tilburguniversity.edu/gitt2023
@GITT2023
*New timeline* (Time zone: Anywhere on Earth)
- Submission deadline:
- Papers: 24 April, 2023
- Research communications and extended abstracts: 3 May, 2023
- Notification of Acceptance: 12 May, 2023
- Camera Ready Copy due: 19 May, 2023
- Workshop: 15 June, 2023
*Aim and scope*
The Gender-Inclusive Translation Technologies Workshop (GITT) sets out to be the first workshop that focuses on gender-inclusive language in translation and cross-lingual scenarios. The workshop aims to bring together researchers from diverse areas, including industry partners, MT practitioners, and language professionals. GITT aims to encourage multidisciplinary research that develops and interrogates both solutions and challenges for addressing bias and promoting gender inclusivity in MT and translation tools.
*Topics*
GITT invites technical as well as non-technical submissions consisting of experimental, theoretical, or methodological contributions. We explicitly welcome interdisciplinary submissions, as well as submissions that focus on innovative, non-binary linguistic strategies and/or take sociolinguistically informed perspectives. The topics of interest include, but are not limited to:
- Models or methods for assessing and mitigating gender bias
- New resources for inclusive language and gender translation (e.g., datasets, translation memories, dictionaries)
- Social, cross-lingual, and ethical implications of gender bias
- Qualitative and quantitative analyses on the potential limits of current approaches to gender bias in translation and MT, error taxonomies as well as best practices and guidelines
- User-centric case studies on the impact of biased language and/or mitigating approaches which can include translators, post-editors, or monolingual MT users
GITT is also open to other topics aligned with the scope of the workshop, including work focusing on non-textual modalities (e.g., audiovisual translation).
*Submission*
We welcome three types of submissions:
- Research papers: 4 to 10 pages (including references)
- Extended Abstracts: up to 2 pages (including references)
- Research Communications: up to 2 pages (including references)
Accepted papers and extended abstracts presenting novel work will be published online as proceedings in the ACL Anthology. For research communications, a parallel submission policy applies to papers accepted at other venues in 2022; research communications will not be included in the proceedings, but will serve to promote the dissemination of research aligned with the scope of the workshop.
Submissions should adhere to the EAMT 2023 guidelines and style templates (PDF, LaTeX, Word) and be uploaded on EasyChair: https://easychair.org/my/conference?conf=gitt2023
*Activities*
During the workshop, there will be a guided discussion starting from examples of gender bias in MT output collected via the DeBiasByUs website.
Attendees are invited to contribute their own examples beforehand via DeBiasByUs (https://debiasbyus.ugent.be/share/).
More information about the project and the activity can be found on the workshop website.
*Workshop organizers*
Eva Vanmassenhove, University of Tilburg
Beatrice Savoldi, Fondazione Bruno Kessler
Luisa Bentivogli, Fondazione Bruno Kessler
Joke Daems, University of Ghent
Janiça Hackenbuchner, Cologne University of Applied Sciences
Second Call for Papers for CLEF 2023: Conference and Labs of the Evaluation Forum
18-21 September 2023, Thessaloniki, Greece
https://clef2023.clef-initiative.eu
Important Dates (Time zone: Anywhere on Earth)
- Submission of Long, Short, Best of 2023 Labs Papers: 12 May, 2023
- Notification of Acceptance: 9 June, 2023
- Camera Ready Copy due: 30 June, 2023
- Conference: 18-21 September, 2023
Aim and Scope
The CLEF Conference addresses all aspects of Information Access in any
modality and language. The CLEF conference includes presentation of
research papers and a series of workshops presenting the results of
lab-based comparative evaluation benchmarks.
CLEF 2023 is the 14th CLEF conference, continuing the popular CLEF campaigns which have run since 2000, contributing to the systematic evaluation of information access systems, primarily through experimentation on shared tasks. The CLEF conference has a clear focus on experimental IR as carried out within evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, RomIP, SemEval, and TAC), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users such as children, students, or impaired users in different tasks (e.g., academic, professional, or everyday-life). We invite paper submissions on significant new insights demonstrated on IR test collections, on analysis of IR test collections and evaluation measures, as well as on concrete proposals to push the boundaries of the Cranfield-style evaluation paradigm.
All submissions to the CLEF main conference will be reviewed on the basis
of relevance, originality, importance, and clarity. CLEF welcomes papers
that describe rigorous hypothesis testing regardless of whether the results
are positive or negative. CLEF also welcomes past runs/results/data
analysis and new data collections. Methods are expected to be written so
that they are reproducible by others, and the logic of the research design
is clearly described in the paper. The conference proceedings will be
published in the Springer Lecture Notes in Computer Science (LNCS).
Topics
Relevant topics for the CLEF 2023 Conference include but are not limited to:
- Information access in any language or modality: information retrieval, image retrieval, question answering, information extraction and summarisation, search interfaces and design, infrastructures, etc.
- Analytics for information retrieval: theoretical and practical results in the analytics field that are specifically targeted at information access data analysis, data enrichment, etc.
- User studies, either based on lab studies or crowdsourcing.
- In-depth analysis of past results/runs, both statistical and fine-grained.
- Evaluation initiatives: conclusions, lessons learned, impact and projection of any evaluation initiative after completing its cycle.
- Evaluation: methodologies, metrics, statistical and analytical tools, component-based evaluation, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc.
- Technology transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
- Interactive information retrieval evaluation: interactive evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive evaluation methods, simulation of interaction, etc.
- Specific application domains: information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, health information, legal documents, patents, news, books, and in the form of text, audio and/or image data.
- New data collections: presentation of new data collections with potential high impact on future research, specific collections from companies or labs, multilingual collections.
- Work on data from rare languages, collaborative, social data.
Format
Authors are invited to electronically submit original papers, which have
not been published and are not under consideration elsewhere, using the
LNCS proceedings format:
http://www.springer.com/it/computer-science/lncs/conference-proceedings-gui…
Two types of papers are solicited:
- Long papers: 12 pages max (excluding references). Aimed at reporting complete research works.
- Short papers: 6 pages max (excluding references). Position papers, new evaluation proposals, developments and applications, etc.
Review Process
Authors of long and short papers are asked to submit the following TWO
versions of their manuscript:
Methodology version: This version does NOT report anything related to the results of the study. At this stage, manuscripts will be evaluated based on the importance of the problem addressed and the soundness of the methodology. Manuscripts can include an introduction and a description of the proposed methodology and datasets used, but there should be no results or discussion sections. The authors should also remove mentions of results from the included sections (e.g., abstract, introduction).
Experimental version: This is the full version of the manuscript, containing all the sections of the paper, including the experiments and results.
Papers will be peer-reviewed by 3 members of the program committee in two stages. In the first stage, the members will review the methodology version of the manuscripts based on originality and methodology. In the second stage, the full version of the manuscripts that passed the first stage will be reviewed. Selection will be based on originality, clarity, and technical quality.
The deadline for the submission of both versions is 12 May, 2023.
Paper submission
Papers should be submitted in PDF format to the following address:
https://easychair.org/my/conference?conf=clef2023
- Submit the methodology version at the Methodology Track
- Submit the experimental version at the Experimental Track
Organisation
General Chairs
Evangelos Kanoulas, University of Amsterdam, the Netherlands
Theodora Tsikrika, Information Technologies Institute, CERTH, Greece
Stefanos Vrochidis, Information Technologies Institute, CERTH, Greece
Avi Arampatzis, Democritus University of Thrace, Greece
Program Chairs
Anastasia Giachanou, Utrecht University, the Netherlands
Dan Li, Elsevier
Evaluation Lab Chairs
Mohammad Aliannejadi, University of Amsterdam, the Netherlands
Michalis Vlachos, University of Lausanne, Switzerland
Lab Mentorship Chair
Jian-Yun Nie, University of Montreal, Canada
==============================================================
Call for Participation
2nd Cardiff NLP Workshop, 26-27 June 2023
==============================================================
We are organising the 2nd Cardiff NLP Summer Workshop, an in-person workshop on Natural Language Processing. It will take place from June 26 to June 27 2023 in the Abacws building in Cardiff (Wales, UK).
The workshop is especially designed for PhD students and early career researchers, and registration is free for everyone. Please fill in the following expression-of-interest form by April 24th if you are interested in joining the workshop: https://forms.gle/zzvHetEvWBTJeVW8A
Workshop Activities:
* Invited speakers from academia and industry
* Tutorials
* Poster session and networking
* Panel on large language models and NLP research
Important Dates:
* Application Period: 2 March-24 April
* Notification of Acceptance: 28 April
* Workshop: 26-27 June 2023 in Cardiff
For more details, please check the workshop website: https://www.cardiffnlpworkshop.org/
Cardiff NLP Organisation team
--
Jose Camacho Collados
http://www.josecamachocollados.com