Dear Members,
I would like to bring to your attention the following research associate
position in the field of psycholinguistics and hearing research at the
University of Oldenburg, Germany.
For more information about the position and how to apply, please refer to
the detailed description provided in the email below.
Best regards,
Jörge Minula
---
The cluster of excellence Hearing4all: Models, Technology and Solutions for
Diagnostics, Restoration and Support of Hearing at the Universität
Oldenburg (in collaboration with Medizinische Hochschule Hannover and
Leibniz Universität Hannover) is seeking to fill as soon as possible the
position of a
Research Associate (full-time)
in the field of psycholinguistics and hearing research (m/f/d)
in the Department of Dutch, Faculty of Linguistics and Cultural Studies.
The position is available from 1 October 2023 (or as soon as possible
thereafter) until 31 December 2025. Salary depends on previous
experience and education (German TV-L E13
<https://lohntastik.de/od-rechner/tv-salary-calculator/TV-L/E-13/1>). The
position is suitable for part-time work.
A paramount goal of the cluster of excellence Hearing4all (
www.hearing4all.de) is to transform audiology into an "exact" science based
on the interplay between experiment and theory as well as between basic
science and clinical research. Within the framework provided by the cluster,
the successful candidate is expected to contribute to research thread 1,
"Auditory processing deficits throughout the lifespan", one of whose goals
is to identify the impact of hearing loss in young and old age on cognitive
and language development and its decline. Specifically, the candidate is
expected to do research on the interaction of hearing abilities and
language processing/development.
Candidates are expected to have a PhD in (psycho)linguistics,
psychology, or a related discipline (with a specialization in
speech/language processing and/or language acquisition) and to have
demonstrated their ability to perform excellent scientific work, usually
through the outstanding quality of their doctoral/PhD research and a good
publication record. Experience in hearing research is an advantage.
We are seeking candidates with experience in statistical analysis as well
as knowledge of at least one of the following methods/areas: language
acquisition, online sentence processing, reaction-time studies, eye
tracking, ERP. MATLAB skills and/or experience with E-Prime will be
helpful, as will a working knowledge of German. Since the position entails
close interdisciplinary cooperation with several other disciplines
(audiology, psychology, physics), the willingness and ability to integrate
methods, concepts and issues of the 'other' disciplines into the theories,
concepts and methods current in one's own are required for successful work
in this project.
The University of Oldenburg is an equal opportunities employer. According
to § 21 para. 3 of the legislation governing Higher Education in Lower
Saxony (NHG), preference shall be given to female candidates in cases of
equal qualification. The same applies to persons with disabilities.
More information at: https://uol.de/stellen?stelle=69714
Dear All,
We have an exciting opportunity for highly motivated and talented
researchers to apply for our Postdoctoral position at GREYC Research Centre
(https://www.greyc.fr/en/home/), France. Since 2000, the GREYC has been a
joint research unit associated with the French National Centre for
Scientific Research (CNRS), the University of Caen Normandy (UNICAEN) and
the Ecole Nationale Supérieure d’Ingénieurs de Caen (ENSICAEN). The GREYC
lab conducts research in digital science, with activities
in image processing, machine learning, artificial intelligence, computer
security, fundamental computer science, Web science, and electronics. It has 7
research groups with faculty members from ENSICAEN, UNICAEN and CNRS, PhD
students and administrative & technical members.
The postdoctoral scholar will be working on Multimodal Neural Web Page
Segmentation with a primary goal to detect the different zones of a web
page. This interdisciplinary research project combines computer vision,
natural language processing, and machine learning techniques to develop
advanced algorithms capable of segmenting web pages into meaningful and
semantically distinct regions.
*Location:* GREYC Research Centre (https://www.greyc.fr/en/home/), France
*Closing Date:* 15 August 2023
*Benefits:*
The successful candidate will receive a competitive salary and other
benefits, as well as access to state-of-the-art research facilities and
resources. The fellowship will be initially offered for 12-18 months, with
the possibility of extension based on performance.
*Eligibility Criteria:*
Applicants interested in this position must meet the following criteria:
- Hold a recent Ph.D. degree in Computer Science, Electrical Engineering,
or a related field.
- Demonstrate a strong research background in natural language processing
or computer vision.
- Possess a track record of publications in top-tier conferences/journals
related to computer vision, NLP, or related areas.
- Strong programming skills.
- Excellent written and verbal communication and interpersonal skills.
*Application Process:*
Interested candidates should send an application with the following
documents directly to Prof. Gael Dias (email: gael.dias(a)unicaen.fr) and
Dr. Mohammed Hasanuzzaman (email: mohammed.hasanuzzaman(a)mtu.ie):
- Updated Curriculum Vitae (CV) with a list of publications.
- A cover letter outlining research interests, relevant background, and
motivation for applying.
- Contact information for three academic referees who can provide
recommendation letters.
------------------------------------------------------------------------------------------------------
Dr. Mohammed Hasanuzzaman, Lecturer, Munster Technological University
<https://www.mtu.ie/>
Funded Investigator, ADAPT Centre <https://www.adaptcentre.ie/>, a
World-Leading SFI Research Centre
Member, Lero, the SFI Research Centre for Software <https://lero.ie/>
Chercheur Associé, GREYC UMR CNRS 6072 Research Centre, France
<https://www.greyc.fr/en/home/>
Associate Editor: IEEE Transactions on Affective Computing, Nature
Scientific Reports, IEEE Transactions on Computational Social Systems, ACM
TALLIP, PLOS One, Computer Speech and Language
Dept. of CS, Munster Technological University, Bishopstown Campus, Cork,
Ireland
e: mohammed.hasanuzzaman(a)adaptcentre.ie
https://mohammedhasanuzzaman.github.io/
Dear Colleagues,
the 4th Workshop on Evaluation and Comparison for NLP systems (Eval4NLP), co-located at the 2023 Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (AACL 2023), invites the submission of long and short papers, with a theoretical or experimental nature, describing recent advances in system evaluation and comparison in NLP.
** Important Dates **
All deadlines are 11.59 pm UTC -12h (“Anywhere on Earth”).
- Direct submission to Eval4NLP deadline: August 25
- Submission of pre-reviewed papers to Eval4NLP (see below for details) : September 25
- Notification of acceptance: October 2
- Camera-ready papers due: October 15
- Workshop day: November 1
Please see the Call for Papers for more details [1].
** Shared Task **
This year’s version will come with a shared task on explainable evaluation of generated language (MT and summarization) with a focus on LLM prompts. Please find more information on the shared task page: [2].
** Topics **
Designing evaluation metrics:
- Proposing and/or analyzing metrics with desirable properties, e.g., high
correlations with human judgments, strong in distinguishing high-quality
outputs from mediocre and low-quality outputs, robust across lengths of
input and output sequences, efficient to run, etc.
- Reference-free evaluation metrics, which only require source text(s) and
system predictions
- Cross-domain metrics, which can reliably and robustly measure the quality
of system outputs from heterogeneous modalities (e.g., image and speech),
different genres (e.g., newspapers, Wikipedia articles and scientific
papers) and different languages
- Cost-effective methods for eliciting high-quality manual annotations
- Methods and metrics for evaluating interpretability and explanations of
NLP models
Creating adequate evaluation data:
- Proposing new datasets or analyzing existing ones by studying their
coverage and diversity, e.g., size of the corpus, covered phenomena,
representativeness of samples, distribution of sample types, variability
among data sources, eras, and genres
- Quality of annotations, e.g., consistency of annotations, inter-rater
agreement, and bias checks
Reporting correct results:
- Ensuring and reporting statistics for the trustworthiness of results,
e.g., via appropriate significance tests, and reporting of score
distributions rather than single-point estimates, to avoid chance findings
- Reproducibility of experiments, e.g., quantifying the reproducibility of
papers and issuing reproducibility guidelines
- Comprehensive and unbiased error analyses and case studies, avoiding
cherry-picking and sampling bias
** Submission Guidelines **
The workshop welcomes two types of submission -- long and short papers. Long papers may consist of up to 8 pages of content, plus unlimited pages of references. Short papers may consist of up to 4 pages of content, plus unlimited pages of references. Please follow the ACL ARR formatting requirements, using the official templates [3]. Final versions of both submission types will be given one additional page of content for addressing reviewers’ comments. The accepted papers will appear in the workshop proceedings. The review process is double-blind. Therefore, no author information should be included in the papers and the (optional) supplementary materials. Self-references that reveal the author's identity must be avoided. Papers that do not conform to these requirements will be rejected without review.
** The submission sites on Openreview **
Standard submissions: [4]
Pre-reviewed submissions: [5]
See below for more information on the two submission modes.
** Two submission modes: standard and pre-reviewed **
Eval4NLP features two modes of submissions. Standard submissions: We invite the submission of papers that will receive up to three double-blind reviews from the Eval4NLP committee, and a final verdict from the workshop chairs. Pre-reviewed: To a later deadline, we invite unpublished papers that have already been reviewed, either through ACL ARR, or recent AACL/EACL/ACL/EMNLP/COLING venues (these papers will not receive new reviews but will be judged together with their reviews via a meta-review; authors are invited to attach a note with comments on the reviews and describe possible revisions).
Final verdicts will be either accept, reject, or conditional accept, i.e., the paper is only accepted provided that specific (meta-)reviewer requirements have been met. Please also note the multiple submission policy.
** Optional Supplementary Materials **
Authors are allowed to submit (optional) supplementary materials (e.g., appendices, software, and data) to improve the reproducibility of results and/or to provide additional information that does not fit in the paper. All of the supplementary materials must be zipped into one single file (.tgz or .zip) and submitted via Openreview together with the paper. However, because supplementary materials are completely optional, reviewers may or may not review or even download them. So, the submitted paper should be fully self-contained.
** Preprints **
Papers uploaded to preprint servers (e.g., ArXiv) can be submitted to the workshop. There is no deadline concerning when the papers were made publicly available. However, the version submitted to Eval4NLP must be anonymized, and we ask the authors not to update the preprints or advertise them on social media while they are under review at Eval4NLP.
** Multiple Submission Policy **
Eval4NLP allows authors to submit a paper that is under review in another venue (journal, conference, or workshop) or to be submitted elsewhere during the Eval4NLP review period. However, the authors need to withdraw the paper from all other venues if they get accepted and want to publish in Eval4NLP. Note that AACL and ARR do not allow double submissions. Hence, papers submitted both to the main conference and AACL workshops (including Eval4NLP) will violate the multiple submission policy of the main conference. If authors would like to submit a paper under review by AACL to the Eval4NLP workshop, they need to withdraw their paper from AACL and submit it to our workshop before the workshop submission deadline.
** Best Paper Awards **
We will optionally award prizes to the best paper submissions (subject to availability; more details to come soon). Both long and short submissions will be eligible for prizes.
** Presenting Published Papers **
If you want to present a paper which has been published recently elsewhere (such as other top-tier AI conferences) at our workshop, you may send the details of your paper (Paper title, authors, publication venue, abstract, and a link to download the paper) directly to eval4nlp(a)gmail.com. We will select a few high-quality and relevant papers to present at Eval4NLP. This allows such papers to gain more visibility from the workshop audience and increases the variety of the workshop program. Note that the chosen papers are considered as non-archival here and will not be included in the workshop proceedings.
-------------------------------------------------
Best wishes,
Eval4NLP organizers
Website: https://eval4nlp.github.io/2023/index.html
Email: eval4nlp(a)gmail.com
[1] https://eval4nlp.github.io/2023/index.html
[2] https://eval4nlp.github.io/2023/shared-task.html
[3] https://github.com/acl-org/acl-style-files
[4] https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4N…
[5] https://openreview.net/group?id=aclweb.org/AACL-IJCNLP/2023/Workshop/Eval4N…
AICS (https://aics.asus.com/ ) is a young ASUS division incubating applied software and AI services to create a new generation of smart medical solutions. The team is building and deploying deep technologies in Data Analytics, Speech Recognition, Natural Language Understanding, and Computer Vision to help accelerate the transformation towards an AI-powered future in healthcare.
The role of our AI Researchers is to work on ambitious long-term research goals, while laying out intermediate milestones to solve large-scale, real-world problems. As a senior member of the team, you will also help define research directions and guide junior researchers. Specific research topics of interest include (but are not limited to) generative models, knowledge discovery, domain generalization, long-tailed learning, and self-supervised approaches. We are currently recruiting at the senior level, either at our Singapore or Taipei offices.
Job Responsibilities:
* Drive AI/ML research from concepts to concrete output
* Invent and implement innovative ML algorithms and tools
* Publish and present your work in the top ML/NLP venues
* Guide junior research team members and PhD students
* Work with the product and engineering team to transform research prototypes into production
* Forge research collaborations with university partners and clinicians
Requirements:
* Ph.D. degree in computer science or related discipline
* At least 5 years of post-PhD experience, preferably in industry
* Strong track record of publications at top peer-reviewed conferences and journals
* Active in the research community and recognized by peers (e.g. conference committee membership, highly cited papers, awards)
* Expertise in state-of-the-art machine learning methods, especially in
natural language processing
* Hands-on experience with deep learning frameworks such as PyTorch
* Familiarity with MLOps and deploying models in production will be beneficial
You may apply via https://aics.asus.com/career-en/ai-researcher/ or email me your CV directly. We look forward to hearing from you!
Best regards,
Stefan Winkler
--
Director of R&D
Asus AICS Singapore
https://aics.asus.com/
(apologies for cross-posting)
*-----Workshop for NLP Open Source Software (NLP-OSS)*
06 Dec 2023, Co-located with EMNLP 2023
https://nlposs.github.io/
Deadline for Long and Short Paper submission: 09 August, 2023 (23:59,
GMT-11)
-----
You have tried the latest, bestest, fastest LLMs, aired your grievances,
but found the solution after hours of coffee and staring at the screen.
Share that at NLP-OSS and suggest how open source could change for the
better (e.g., best practices, documentation, API design, etc.).
You came across an awesome SOTA system on NLP task X whose F1 score no LLM
has beaten. But now the code is stale and it takes a dinosaur to understand
it. Share your experience at NLP-OSS and propose how to "replicate" these
forgotten systems.
You saw this shiny GPT in a blog post, tried to reproduce similar results
on a different task, and it just doesn't work on your dataset. You did some
magic to the code and now it works. Show us how you did it! Even small
tweaks, when well-motivated and empirically tested, are valid submissions
to NLP-OSS.
You have tried 101 NLP tools and none of them really does what you want.
So you wrote your own shiny new package and made it open source. Tell us
why your package is better than the existing tools. How did you design the
code? Is it going to be a one-time thing? Or would you like to see
thousands of people using it?
You have heard enough about open-source LLMs and pseudo-open-source GPTs,
but not enough about how they can be used for your use case or your
commercial product at scale. So you contacted your legal department, and
they explained to you how data, model and code licenses work. Share that
knowledge with the NLP-OSS community.
You have a position/opinion to share about free vs open vs closed source
LLMs and have valid arguments, references or survey/data to support your
position. We would like to hear more about it.
At last, you've found an academic venue to air these issues: the NLP-OSS
workshop! Share your experiences, suggestions and analyses of NLP-OSS.
----
P/S: 2nd Call for Papers
------------------------------
The Third Workshop for NLP Open Source Software (NLP-OSS) will be
co-located with EMNLP 2023 on 06 Dec 2023.
Focusing more on the social and engineering aspect of NLP software and less
on scientific novelty or state-of-art models, the Workshop for NLP-OSS is
an academic forum to advance open source developments for NLP research,
teaching and application.
NLP-OSS also provides an academic workshop to announce new
software/features, promote the collaborative culture and best practices
that go beyond the conferences.
We invite full papers (8 pages) or short papers (4 pages) on topics related
to NLP-OSS broadly categorized into (i) software development, (ii)
scientific contribution and (iii) NLP-OSS case studies.
- *Software Development*
- Designing and developing NLP-OSS
- Licensing issues in NLP-OSS
- Backwards compatibility and stale code in NLP-OSS
- Growing, maintaining and motivating an NLP-OSS community
- Best practices for NLP-OSS documentation and testing
- Contribution to NLP-OSS without coding
- Incentivizing OSS contributions in NLP
- Commercialization and Intellectual Property of NLP-OSS
- Defining and managing NLP-OSS project scope
- Issues in API design for NLP
- NLP-OSS software interoperability
- Analysis of the NLP-OSS community
- *Scientific Contribution*
- Surveying OSS for specific NLP task(s)
- Demonstration, introductions and/or tutorial of NLP-OSS
- Small but useful NLP-OSS
- NLP components in ML OSS
- Citations and references for NLP-OSS
- OSS and experiment replicability
- Gaps between existing NLP-OSS
- Task-generic vs task-specific software
- *Case studies*
- Case studies of how a specific bug is fixed or feature is added
- Writing wrappers for other NLP-OSS
- Writing open-source APIs for open data
- Teaching NLP with OSS
- NLP-OSS in the industry
Submission should be formatted according to the EMNLP 2023 templates
<https://2023.emnlp.org/call-for-papers> and submitted to OpenReview
<https://openreview.net/group?id=EMNLP/2023/Workshop/NLP-OSS>
ORGANIZERS
Geeticka Chauhan, Massachusetts Institute of Technology
Dmitrijs Milajevs, Grayscale AI
Elijah Rippeth, University of Maryland
Jeremy Gwinnup, Air Force Research Laboratory
Liling Tan, Amazon
A number of 2-year fellowships are available for postdoc researchers.
Define your own research programme & move to beautiful Slovenia! Please
note that possible research areas include NLP topics, possible hosts
include the NLP groups at U Ljubljana and JSI, and that associated
placement partners include our NLP lab at Queen Mary University of London.
See below for more detail:
Dear colleagues,
The second call for applications for SMASH postdoctoral Fellowships,
co-funded by Marie Sklodowska Curie Actions, is now open! SMASH offers
excellent research opportunities that revolve around machine learning and
its applications to the fields of climate research, linguistics, precision
medicine and fundamental physics.
In this call, SMASH aims to hire 20 fellows who will be hosted in five
Slovenian institutions. They can also spend up to 1/3 of their 2-year
appointments at one of our international academic partners (including top
EU centres like Gravitation and Astroparticle Physics at the University of
Amsterdam, and world-leading institutions like CERN and UC Berkeley) or at
some of the most successful Slovenian companies.
Each 2-year fellowship offers excellent working conditions, access to top
infrastructure (including the HPC Vega), substantial research and travel
funds and a salary that is significantly higher than local costs of living.
The SMASH program values an inclusive culture and believes diversity is the
key to success, so SMASH warmly welcomes applicants from underrepresented
groups. Note also that SMASH offers a dedicated allowance to support
applicants with special needs.
In order to apply, candidates should contact their desired supervisor, who
will assist them in preparing a short research proposal and provide them
with the necessary letters of support from the host institutions.
For more information see: https://smash.ung.si/become-a-fellow/.
The application deadline is October 27th.
--
Matthew Purver - http://www.eecs.qmul.ac.uk/~mpurver/
Computational Linguistics Lab - http://compling.eecs.qmul.ac.uk/
Cognitive Science Research Group - http://cogsci.eecs.qmul.ac.uk/
School of Electronic Engineering and Computer Science
Queen Mary University of London, London E1 4NS, UK
*My working days for QMUL are Monday-Wednesday; responses to mail on other
days may be delayed.*
Apologies for cross-posting.
We are delighted to announce the Call for
Papers for the upcoming Second International Conference on Speech &
Language Technology for Low-resource Languages (SPELLL 2023), scheduled to
be held on 06-08 December 2023 at Kongu Engineering College, Erode, Tamil
Nadu, India. The previous edition, SPELLL 2022, was held at Sri
Sivasubramaniya Nadar College of Engineering, Chennai, India, during 23-25
November 2022. The proceedings of the first edition have been published in
the Springer series: Communications in Computer and Information Science
(CCIS). The proceedings can be accessed via
https://link.springer.com/book/10.1007/978-3-031-33231-9.
We would like to invite you to submit your research work and
contribute to the success of the second edition, SPELLL 2023
<http://spelll.org/callforpapers.html>. SPELLL 2023 aims to bring together
researchers, experts, and practitioners from diverse fields to foster
intellectual discussions, exchange knowledge, and explore innovative
solutions to the challenges of NLP. This interdisciplinary conference will
provide a platform for participants to present their latest research
findings, engage in vibrant discussions, and build valuable collaborations.
Conference Link: http://spelll.org/
CALL FOR PAPERS
This conference aims to bring together researchers from across the world
working on low-resource and minority languages to create more speech and
language technology for the world's languages.
We invite submissions on topics that include, but are not limited to, the
following:
Track 1 - Language Resources (LRs)
- Lexicons and machine-readable dictionaries
- Linguistic Theories, Phonology, Morphological analysis, Syntax and
Semantics
- Corpus development, tools, analysis and evaluation
- Issues in the design, construction and use of LRs: text, speech,
sign, gesture, image, in single or multimodal/multimedia data
- Exploitation of LRs in systems and applications
- Annotation, analysis, enrichment of text archives
Track 2 - Language Technologies (LT)
- Code-mixing
- Cognitive modeling and psycholinguistics
- Computer-assisted language learning (CALL)
- COVID-19 alerts, NLP applications for emergency situations and
crisis management
- Equality, diversity, and inclusion for language technology
- Fake news, spam, and rumour detection
- Hate speech detection and offensive language detection
- Machine translation, sentiment analysis, and text summarization
- Text and data mining for social sciences and humanities research
- Text and data mining of (bio) medical literature, including
pandemics
- Knowledge representation and reasoning
- Knowledge graphs for corpora processing and analysis
- Applications for language, data and knowledge
- Question answering and semantic search
- Text analytics on big data
- Semantic content management
- Computer-aided language learning
- Natural language interfaces to big data
- Knowledge-based NLP
Track 3 - Speech Technologies (ST)
- Speech technology and automatic speech recognition
- Spoken dialog systems and analysis of conversation
- Spoken language processing — translation, information retrieval,
summarization resources and evaluation
- Speaker verification and identification
- Multimodal/multimedia speaker recognition and diarization
- Analysis of speech and audio signals
- Speech coding and enhancement
- Speech recognition - architecture, search, and linguistic components
- Speech, voice, and hearing disorders
- Speech synthesis and spoken language generation
- Cross-lingual and multilingual components for speech recognition /
code switching
Track 4 - Other related topics
- Analysis of para-linguistics in speech and language
- Multimodal analysis
- Visualisation of social sciences and humanities research
------------------------------
SUBMISSION GUIDELINES
Regular Papers
Regular submissions must describe substantial, original, completed and
unpublished work. Wherever appropriate, concrete evaluation and analysis
should be included.
Regular papers may consist of 12-15 pages of content, including
references. Page limits will not be enforced strictly if the authors
need additional space to explain their work.
Short Papers
SPELLL 2023 also solicits short papers. Short paper submissions must
describe original and unpublished work. Short papers should have a point
that can be made in a few pages. Some kinds of short papers are:
- A small, focused contribution
- Work in progress
- Experience notes
Short papers may consist of 6-8 pages, including references. Short papers
will be presented in one or more oral or poster sessions. While short
papers will be distinguished from regular papers in the proceedings, there
will be no distinction in the proceedings between short papers presented
orally and those presented as posters. Page limits will not be enforced
strictly if the authors need additional space to explain their work.
Review Policy
All submissions to SPELLL 2023 will be reviewed on the basis of
originality, relevance, importance and clarity by at least two reviewers.
The review process will be double-blind, and authors should refer to
themselves in the third person when citing their own work. Phrases like "In
our earlier work..." or "We previously showed that..." should be avoided in
the submitted paper.
*Author Guidelines*
- Authors must follow the Springer LNCS formatting instructions.
- For camera-ready papers, use the LaTeX or Word style
<http://preview.springer.com/gp/computer-science/lncs/conference-proceedings…>
provided on the authors' page for the preparation of papers.
- A LaTeX proceedings template is also available on the Overleaf
scientific authoring platform.
<https://www.overleaf.com/latex/templates/springer-lecture-notes-in-computer…>
- Each paper will receive at least three reviews. At least one author of
each accepted paper must register by the early registration date indicated
on the conference website and present the paper.
------------------------------
Important Dates
Paper Submission Due: *August 4, 2023*
Acceptance Notification: *September 20, 2023*
Camera Ready Submission: *October 15, 2023*
Conference: *December 06-08, 2023*
PUBLICATION
Accepted papers that are presented at the conference will be published in
the Springer series: Communications in Computer and Information Science
(CCIS).
With regards,
Dr. Bharathi Raja Chakravarthi,
Assistant Professor / Lecturer-above-the-bar
School of Computer Science, University of Galway, Ireland
Insight SFI Research Centre for Data Analytics, Data Science Institute,
University of Galway, Ireland
E-mail: bharathiraja.akr(a)gmail.com , bharathi.raja(a)universityofgalway.ie
<bharathiraja.asokachakravarthi(a)universityofgalway.ie>
Google Scholar: https://scholar.google.com/citations?user=irCl028AAAAJ&hl=en
Website:
https://www.universityofgalway.ie/our-research/people/bharathirajaasokachak…
We are seeking a postdoctoral researcher to contribute to an
interdisciplinary project focused on enhancing the reasoning and analytical
rigor in reports analyzing complex geopolitical situations. This work is
part of IARPA's REASON <https://www.iarpa.gov/research-programs/reason>
initiative, under which we've united an extensive international team of
computer scientists and subject matter specialists. Our goal is to develop
and evaluate systems that aid intelligence analysts, and researchers more
broadly, in crafting superior analyses of complex scenarios.
Specifically, we aim to improve the quality of arguments. Are they sound and
persuasive? Does the report draw on appropriate evidence? Are there
overlooked elements that warrant inclusion? Have potential counterarguments
been adequately tackled? This work will involve building higher-level
reasoning structures that then call large language models as functions to
do the detailed analyses.
Apply here:
<https://docs.google.com/forms/d/e/1FAIpQLSfSl0x3IkKqg49HdZQoC6fCaCxOTQ-Gihr…>
This is joint work with Chris Callison-Burch, Mark Yatskar, Lyle Ungar and a large team.
--
Prof. Lyle H. Ungar
The University of Pennsylvania
https://www.cis.upenn.edu/~ungar
https://www.wwbp.org
In this newsletter:
LanguageArc featured in Babel magazine
Fall 2023 LDC Data Scholarship Program
New publications:
Mixer 7 Spanish Speech<https://catalog.ldc.upenn.edu/LDC2023S04>
LORELEI Thai Representative Language Pack<https://catalog.ldc.upenn.edu/LDC2023T08>
________________________________
LanguageArc featured in Babel magazine
The May 2023 edition of Babel<https://cloud.3dissue.com/18743/41457/106040/BabelNo43/index.html> (The Language Magazine) features an article about LDC's citizen science portal LanguageArc<https://languagearc.com/> (Language Analysis Research Community) and the diverse projects available there that utilize a variety of novel incentives to supplement traditional methods of creating data resources. Consider LanguageArc for your next collection project. Note: a subscription is necessary to view the article.
Fall 2023 LDC Data Scholarship Program
Student applications for the Fall 2023 LDC Data Scholarship program are being accepted now through September 15, 2023. This program provides eligible students with no-cost access to LDC data. Students must complete an application consisting of a data use proposal and letter of support from their advisor. For application requirements and program rules, visit the LDC Data Scholarships page<https://www.ldc.upenn.edu/language-resources/data/data-scholarships>.
________________________________
New publications:
Mixer 7 Spanish Speech<https://catalog.ldc.upenn.edu/LDC2023S04> was developed by LDC and contains 9,600 hours of audio recordings of interviews, transcript readings, and conversational telephone speech involving 191 distinct native Spanish speakers. This material was collected by LDC in 2011-2012 as part of the Mixer project, and the recordings were used in the 2012 NIST SRE test set.
Recruited speakers were connected through a robot operator to carry on casual conversations on a pre-set topic lasting up to 10 minutes. Participants also visited LDC's human subjects collection lab equipped with a 14-microphone array where they participated in interviews and transcript readings and conducted up to 3 telephone calls under varying conditions. Selected speaker metadata was also collected.
2023 members can access this corpus through their LDC accounts. This corpus is a members-only release and is not available for non-member licensing. Contact ldc(a)ldc.upenn.edu<mailto:ldc@ldc.upenn.edu> for information about membership.
LORELEI Thai Representative Language Pack<https://catalog.ldc.upenn.edu/LDC2023T08> comprises over 39 million words of Thai monolingual text, 2.85 million words of found Thai-English parallel text, and 141,000 Thai words translated from English data. Over 186,000 words were annotated for named entities, and more than 25,000 words were annotated for entity discovery and linking and for situation frames (identifying entities, needs, and issues). Data was collected from discussion forums, news, reference material, social networks, and weblogs.
The LORELEI (Low Resource Languages for Emergent Incidents) program was concerned with building human language technology for low resource languages in the context of emergent situations. Representative languages were selected to provide broad typological coverage.
The knowledge base for entity linking annotation is available separately as LORELEI Entity Detection and Linking Knowledge Base (LDC2020T10)<https://catalog.ldc.upenn.edu/LDC2020T10>.
2023 members can access this corpus through their LDC accounts. Non-members may license this data for a fee.
To unsubscribe from this newsletter, log in to your LDC account<https://catalog.ldc.upenn.edu/login> and uncheck the box next to "Receive Newsletter" under Account Options or contact LDC for assistance.