*Two-year fully funded postdoctoral position in quantitative text analysis/
NLP*
*Location:* University College Dublin, School of Politics and
International Relations
*Start date:* 1st September, 2024
*Deadline:* noon, 12th May 2024
University College Dublin is currently recruiting a post-doctoral
researcher to implement natural language processing (NLP) tools to analyse
interview data.
The main objective of this position is to develop tools to identify and
analyse so-called cognitive maps (Axelrod 1976) from interview data.
Dornschneider and Henderson (2016, 2023) and Dornschneider (2019) have
developed tools for the computational analysis of cognitive maps. What is
needed is a set of tools to infer cognitive maps from natural language.
This Irish Research Council funded project investigates the role of women
in Muslim resistance movements, based on Arabic interviews conducted by the
Principal Investigator. The cognitive mapping analysis has three main
objectives: (1) to show typical behavioral decisions (e.g. to join a
resistance movement) described by the interviewees; (2) to identify common
reasoning processes related to these decisions; and (3) to trace the role of
religious beliefs in these reasoning processes.
You will work with the PI, Dr. Stephanie Dornschneider-Elkink, to deliver
the research objectives of the project. You will support the development
and subsequent publication of new tools to convert text into cognitive
maps. Tasks will include but are not limited to POS tagging, sequence
analysis, word embeddings, and visualization. You will have the chance to
give substantial input to the analysis and to co-author papers with the PI.
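As a purely illustrative sketch (not the project's actual pipeline, and `extract_edges` is a hypothetical helper, not one of the published tools), inferring a single cognitive-map edge from an interview sentence might begin with simple causal-connective patterns, each edge linking a stated reason to a decision:

```python
import re

def extract_edges(sentence):
    """Extract (cause, effect) belief pairs from simple 'X because Y'
    sentences. A cognitive map is a directed graph of such pairs."""
    match = re.match(r"(?P<effect>.+?)\s+because\s+(?P<cause>.+)?$",
                     sentence.strip().rstrip("."), flags=re.IGNORECASE)
    if match is None:
        return []
    return [(match.group("cause").strip(), match.group("effect").strip())]

edges = extract_edges("I joined the movement because my family was threatened")
# -> [('my family was threatened', 'I joined the movement')]
```

Real interview language is far messier, so the advertised tasks (POS tagging, sequence analysis, word embeddings) would replace such surface patterns with learned representations.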
Full ad: https://my.corehr.com/pls/coreportal_ucdp/apply?id=017201
*References*
Axelrod, R. (ed.). 1976. *Structure of decision: The cognitive maps of
political elites*. Princeton: Princeton University Press.
Dornschneider-Elkink, S. and Henderson, N., 2023. Repression and Dissent:
How Tit-for-Tat Leads to Violent and Nonviolent Resistance. *Journal of
Conflict Resolution*, p.00220027231179102.
https://doi.org/10.1177/00220027231179102
Dornschneider, S., 2019. High‐Stakes Decision‐Making Within Complex Social
Environments: A Computational Model of Belief Systems in the Arab
Spring. *Cognitive
Science*, *43*(7), p.e12762. https://doi.org/10.1111/cogs.12762
Dornschneider, S. and Henderson, N., 2016. A computational model of
cognitive maps: Analyzing violent and nonviolent activity in Egypt and
Germany. *Journal of Conflict Resolution*, *60*(2), pp.368-399.
--
Dr Stephanie Dornschneider-Elkink
Assistant Professor, School of Politics & International Relations (SPIRe)
University College Dublin
Newman Building, F316, Belfield, Dublin 4, Ireland
http://www.dornschneider.net/
[Apologies for cross-posting]
23rd EDITION OF THE SEPLN AWARD FOR THE BEST DOCTORAL THESIS IN NATURAL LANGUAGE PROCESSING
[EXTENSION: May 15th, 2024]
The Spanish Society for Natural Language Processing announces the 23rd Edition of the SEPLN Award for the Best Doctoral Thesis in Natural Language Processing, which will be governed by the following bases:
1.- The purpose of this award is the promotion and dissemination of research in the field of natural language processing.
2.- The thesis will be awarded with a compact laptop (tablet) and €300 for attendance at the congress. The award will be presented at the 40th International Congress of the Spanish Society of Natural Language Processing (SEPLN 2024), after a brief presentation of the award-winning work by the author.
3.- In order to compete, the author of the doctoral thesis must be a member of the SEPLN at the time of submitting the work. No contestant may participate as author in more than one work.
4.- Doctoral theses read during the year 2023, written in a language of the Spanish State or in English, may be submitted to the competition.
In addition to the complete thesis, it is essential to send:
a) a 4-page summary of the thesis, clearly describing the topic and the relevance of the research, the objectives, methods, results achieved and contributions.
b) a brief description of the scientific career of the author of the thesis, detailing the participation in scientific activities such as organization of competitive tasks, congresses, generation of open access resources such as sets of data, language models, etc., and participation in projects, contracts, and/or patents.
The quality of the presentation, the technical and methodological correctness, the relevance, originality, the generation, evaluation and publication of resources, as well as the research trajectory during the pre-doctoral period will be the criteria used for the award of the prize by the jury.
The works will be submitted through the website of the Society's journal (http://journal.sepln.org) in PDF format before May 15th, 2024.
The final decision will be communicated during the 40th International Congress of the Spanish Society for Natural Language Processing (SEPLN 2024).
Submission instructions (http://www.sepln.org/sites/default/files/noticia/documentos_relacionados/20…)
For more information, contact aitziber.atucha(a)ehu.eus
Dear colleagues,
We have received many requests to extend the submission deadline for CMC-Corpora 2024 and are therefore pleased to announce an extension of the paper and abstract submission deadline to 23:59 CEST (GMT+2), April 26th, 2024.
We are also very happy to inform you that Susan Herring (Indiana University) will be our keynote speaker!
For submission details, please see the conference website: https://cmc-corpora-nice.sciencesconf.org/
Looking forward to receiving your submission!
On behalf of the organizing and steering committees,
Céline Poudat and Steven Coats
University Lecturer, Docent
English, Faculty of Humanities
University of Oulu
P.O. Box 8000, FI-90014 University of Oulu
Finland
https://cc.oulu.fi/~scoats
We invite the community to participate in a shared task organized in the
context of the CONDA workshop: https://conda-workshop.github.io/.
Data contamination, where evaluation data is inadvertently included in
the pre-training corpora of large-scale models, and language models (LMs) in
particular, has become a concern in recent times (Sainz et al. 2023
<https://aclanthology.org/2023.findings-emnlp.722/>; Jacovi et al. 2023
<https://aclanthology.org/2023.emnlp-main.308/>). The growing scale of
both models and data, coupled with massive web crawling, has led to the
inclusion of segments from evaluation benchmarks in the pre-training
data of LMs (Dodge et al., 2021
<https://aclanthology.org/2021.emnlp-main.98/>; OpenAI, 2023
<https://arxiv.org/abs/2303.08774>; Google, 2023
<https://arxiv.org/abs/2305.10403>; Elazar et al., 2023
<https://arxiv.org/abs/2310.20707>). The scale of internet data makes it
difficult to prevent this contamination from happening, or even detect
when it has happened (Bommasani et al., 2022
<https://arxiv.org/abs/2108.07258>; Mitchell et al., 2023
<https://arxiv.org/abs/2212.05129>). Crucially, when evaluation data
becomes part of pre-training data, it introduces biases and can
artificially inflate the performance of LMs on specific tasks or
benchmarks (Magar and Schwartz, 2022
<https://aclanthology.org/2022.acl-short.18/>). This poses a challenge
for fair and unbiased evaluation of models, as their performance may not
accurately reflect their generalization capabilities.
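A minimal sketch of how such contamination is commonly detected (verbatim word n-gram overlap between an evaluation example and pre-training documents; real pipelines use longer n-grams, heavier normalization, and indexed lookups rather than this quadratic scan):

```python
def ngrams(text, n=8):
    """Set of lowercased word n-grams, for verbatim-overlap matching."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def is_contaminated(eval_example, pretraining_docs, n=8):
    """Flag an evaluation example whose n-grams appear verbatim in any
    pre-training document -- a common heuristic, not a guarantee."""
    example_grams = ngrams(eval_example, n)
    return any(example_grams & ngrams(doc, n) for doc in pretraining_docs)
```

Overlap-based checks of this kind miss paraphrased or reformatted duplicates, which is one reason the true scope of contamination is hard to bound.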
The shared task is a community effort on centralized data contamination
evidence collection. While the problem of data contamination is
prevalent and serious, the breadth and depth of this contamination are
still largely unknown. The concrete evidence of contamination is
scattered across papers, blog posts, and social media, and it is
suspected that the true scope of data contamination in NLP is
significantly larger than reported.
With this shared task we aim to provide a structured, centralized
platform for contamination evidence collection to help the community
understand the extent of the problem and to help researchers avoid
repeating the same mistakes. The shared task also gathers evidence of
clean, non-contaminated instances. The platform is already available for
perusal at
https://huggingface.co/spaces/CONDA-Workshop/Data-Contamination-Report.
Participants in the shared task need to submit their contamination
evidence (see instructions below). The CONDA 2024 workshop organizers
will review the evidence through pull requests.
*Compilation Paper*
As a companion to the contamination evidence platform, we will produce a
paper that will provide a summary and overview of the evidence collected
in the shared task. The participants who contribute to the shared task
will be listed as co-authors in the paper.
*Instructions for Evidence Submission*
Each submission should report a case of contamination, or lack
thereof. The submission can be either about (1)
contamination in the corpus used to pre-train language models, where the
pre-training corpus contains a specific evaluation dataset, or about (2)
contamination in a model that shows evidence of having seen a specific
evaluation dataset while being trained. Each submission needs to mention
the corpus (or model) and the evaluation dataset, in addition to some
evidence of contamination. Alternatively, we also welcome evidence of a
lack of contamination.
Reports must be submitted through a Pull Request in the Data
Contamination Report space at HuggingFace. The reports must follow the
Contribution Guidelines provided in the space and will be reviewed by
the organizers. If you have any questions, please contact us at
conda-workshop(a)googlegroups.com or open a discussion in the space itself.
URL with contribution guidelines:
https://huggingface.co/spaces/CONDA-Workshop/Data-Contamination-Report
(“Contribution Guidelines” tab)
*Important dates*
* Deadline for evidence submission: July 1, 2024
* Workshop day: August 16, 2024
*Sponsors*
* AWS AI and Amazon Bedrock
* HuggingFace
* Google
*Contact*
* Website: https://conda-workshop.github.io/
* Email: conda-workshop(a)googlegroups.com
*Organizers*
Oscar Sainz, University of the Basque Country (UPV/EHU)
Iker García Ferrero, University of the Basque Country (UPV/EHU)
Eneko Agirre, University of the Basque Country (UPV/EHU)
Jon Ander Campos, Cohere
Alon Jacovi, Bar Ilan University
Yanai Elazar, Allen Institute for Artificial Intelligence and University
of Washington
Yoav Goldberg, Bar Ilan University and Allen Institute for Artificial
Intelligence
Dear all,
(Apologies for cross-posting)
This is the third CFP for the second Arabic Natural Language Processing
Conference (ArabicNLP 2024)
Co-located with ACL 2024 in Bangkok, Thailand, August 16, 2024. (Hybrid
Mode).
Conference URL: https://arabicnlp2024.sigarab.org/
Upcoming deadline: May 3, 2024: Abstract of direct conference paper
submissions due date (Open Review)
ArabicNLP 2024 builds on eight previous conference and workshop editions,
which have been very successful, drawing large and active participation in
various capacities (see Scholar Page
<https://scholar.google.com/citations?user=LGzh8jYAAAAJ>). This conference
is timely given the continued rise in research projects focusing on Arabic
NLP. The conference is organized by the Special Interest Group on Arabic
NLP (SIGARAB <https://www.sigarab.org/>), an Association for Computational
Linguistics Special Interest Group on Arabic NLP.
Call for Papers
We invite long (up to 8 pages), short (up to 4 pages), and demo paper (up
to 4 pages) submissions. Long and short papers will be presented orally or
as posters as determined by the program committee; presentation mode does
not reflect the quality of the work.
Submissions are invited on topics that include, but are not limited to, the
following:
- Enabling technologies: (any size) language models, diacritization,
lemmatization, morphological analysis, disambiguation, tokenization, POS
tagging, named entity detection, chunking, parsing, semantic role labeling,
sentiment analysis, Arabic dialect modeling, etc.
- Applications: dialog modeling, machine translation, speech recognition,
speech synthesis, optical character recognition, pedagogy, assistive
technologies, social media analytics, etc.
- Resources: dictionaries, annotated data, corpora, etc.
Submissions may include work in progress as well as finished work.
Submissions must have a clear focus on specific issues pertaining to the
Arabic language whether it is standard Arabic, dialectal, classical, or
mixed. Papers on other languages sharing problems faced by Arabic NLP
researchers, such as Semitic languages or languages using Arabic script,
are welcome provided that they propose techniques or approaches that would
be of interest to Arabic NLP, and they explain why this is the case.
Additionally, papers on efforts using Arabic resources but targeting other
languages are also welcome. Descriptions of commercial systems are welcome,
but authors should be willing to discuss the details of their work. We also
welcome position papers and surveys about any of the above topics.
Conference Paper Submission URL:
https://openreview.net/group?id=SIGARAB.org/ArabicNLP/2024/Conference
Important Dates for Conference Papers
- May 3, 2024: Abstract of direct conference paper submissions due date
(Open Review)
- May 10, 2024: Full direct conference paper submissions due date (Open
Review)
- May 17, 2024: ARR commitment date <https://aclrollingreview.org/dates>
- May 31, 2024: Reviews submission deadline
- June 17, 2024: Notification of acceptance
- July 1, 2024: Camera-ready papers due
- August 16, 2024: ArabicNLP conference
All deadlines are 11:59 pm UTC -12h
<https://www.timeanddate.com/time/zone/timezone/utc-12> (“Anywhere on
Earth”).
There are eight exciting shared tasks:
https://arabicnlp2024.sigarab.org/shared-tasks
- Task 1: AraFinNLP: Arabic Financial NLP
- Task 2: FIGNEWS 2024: Shared Task on News Media Narratives of the Israel
War on Gaza
- Task 3: ArAIEval: Propagandistic Techniques Detection in Unimodal and
Multimodal Arabic Content
- Task 4: StanceEval2024: Arabic Stance Evaluation Shared Task
- Task 5: WojoodNER 2024: The 2nd Arabic Named Entity Recognition Shared
Task
- Task 6: ArabicNLU Shared-Task: Arabic Natural Language Understanding
- Task 7: NADI 2024: Nuanced Arabic Dialect Identification
- Task 8: KSAA-CAD Shared Task: Contemporary Arabic Reverse Dictionary and
Word Sense Disambiguation
If you have any questions, please contact us at
arabicnlp-pc-chairs(a)sigarab.org
The ArabicNLP 2024 Organizing Committee
--
Salam Khalifa
PhD Student at Stony Brook Linguistics
<https://www.linguistics.stonybrook.edu/>.
Job title: Design of Information Extraction Tools to characterize
molecules produced or degraded by microbes and applications to
plant-fermented food ecosystems.
MaIAGE-Bibliome (INRAE, University Paris-Saclay), a transdisciplinary
research lab, offers a PhD position in NLP applied to biology and food
science. The candidate will work within the FAIROmics doctoral network,
a Marie Skłodowska-Curie Action that aims to leverage AI techniques to
improve and discover knowledge about fermented food.
The position is located in Jouy-en-Josas (near Paris) and includes a
twelve-month secondment at the Applied AI Research Group at the
University of Szeged (Hungary). Both universities will award the PhD
diploma.
We are looking for candidates with:
- Master’s degree in Computer Science with a solid background in NLP,
AI, and/or ML; a strong academic record is highly desirable.
- Experience in deep learning approaches for NLP.
- Programming skills in Python.
- Very good English skills (both writing and speaking).
- An interest in biology, bioinformatics, and food science.
Application deadline: 15/05/2024 23:59 - Europe/Brussels.
Application form: https://sondages.inrae.fr/index.php/342264 (select DC9)
Detailed description:
https://www.dn-fairomics.eu/open-phd-positions/dc-9-phd-position
FAIROmics Doctoral Network: https://www.dn-fairomics.eu
MaIAGE-Bibliome: https://maiage.inrae.fr/en/bibliome
Department of Software Engineering, University of Szeged:
https://www.sed.inf.u-szeged.hu
Feel free to contact us for any questions: Robert.Bossy(a)inrae.fr
Apologies for crossposting.
Call for Papers
Information Processing & Management (IPM), Elsevier
- CiteScore: 14.8
- Impact Factor: 8.6
Guest editors:
- Omar Alonso, Applied Science, Amazon, Palo Alto, California, USA.
E-mail: omralon(a)amazon.com
- Stefano Marchesin, Department of Information Engineering, University of
Padua, Padua, Italy. E-mail: stefano.marchesin(a)unipd.it
- Gianmaria Silvello, Department of Information Engineering, University
of Padua, Padua, Italy. E-mail: gianmaria.silvello(a)unipd.it
Special Issue on “Large Language Models and Data Quality for Knowledge
Graphs”
In recent years, Knowledge Graphs (KGs), encompassing millions of
relational facts, have emerged as central assets to support virtual
assistants and search and recommendations on the web. Moreover, KGs are
increasingly used by large companies and organizations to organize and
comprehend their data, with industry-scale KGs fusing data from various
sources for downstream applications. Building KGs involves data management
and artificial intelligence areas, such as data integration, cleaning,
named entity recognition and disambiguation, relation extraction, and
active learning.
However, the methods used to build these KGs rely on automated components
that are imperfect, resulting in KGs with high sparsity and incorporating
several inaccuracies and wrong facts. As a result, evaluating KG
quality plays a significant role, as it serves multiple purposes – e.g.,
gaining insights into the quality of data, triggering the refinement of the
KG construction process, and providing valuable information to downstream
applications. In this regard, the information in the KG must be correct to
ensure an engaging user experience for entity-oriented services like
virtual assistants. Despite its importance, there is little research on
data quality and evaluation for KGs at scale.
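For illustration only (not a method proposed in this call), the most basic form of KG quality evaluation at scale is estimating triple accuracy from a small, manually audited random sample:

```python
import math

def kg_accuracy_estimate(num_correct, sample_size, z=1.96):
    """Estimate KG accuracy from a manually audited random sample of
    triples, with a normal-approximation 95% confidence interval."""
    p = num_correct / sample_size
    half_width = z * math.sqrt(p * (1 - p) / sample_size)
    return p, max(0.0, p - half_width), min(1.0, p + half_width)

# e.g. 90 of 100 sampled triples judged correct:
# point estimate 0.90, interval roughly (0.84, 0.96)
```

The cost of the human audit is exactly what motivates LLM-assisted quality evaluation, provided the LLM judgments themselves are validated.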
In this context, the rise of Large Language Models (LLMs) opens up
unprecedented opportunities – and challenges – to advance KG construction
and evaluation, providing an intriguing intersection between human and
machine capabilities. On the one hand, integrating LLMs within KG
construction systems could trigger the development of more context-aware
and adaptive AI systems. At the same time, however, LLMs are known to
hallucinate and can thus generate mis/disinformation, which can affect the
quality of the resulting KG. In this sense, reliability and credibility
components are of paramount importance to manage the hallucinations
produced by LLMs and avoid polluting the KG. On the other hand,
investigating how to combine LLMs and quality evaluation has excellent
potential, as shown by promising results from using LLMs to generate
relevance judgments in information retrieval.
Thus, this special issue promotes novel research on human-machine
collaboration for KG construction and evaluation, fostering the
intersection between KGs and LLMs. To this end, we encourage submissions
related to using LLMs within KG construction systems, evaluating KG
quality, and applying quality control systems to empower KG and LLM
interactions on both research- and industrial-oriented scenarios.
Topics include but are not limited to:
- KG construction systems
- Use of LLMs for KG generation
- Efficient solutions to deploy LLMs on large-scale KGs
- Quality control systems for KG construction
- KG versioning and active learning
- Human-in-the-loop architectures
- Efficient KG quality assessment
- Quality assessment over temporal and dynamic KGs
- Redundancy and completeness issues
- Error detection and correction mechanisms
- Benchmarks and Evaluation
- Domain-specific applications and challenges
- Maintenance of industry-scale KGs
- LLM validation via reliable/credible KG data
Submission guidelines:
Authors are invited to submit original and unpublished papers. All
submissions will be peer-reviewed and judged on originality, significance,
quality, and relevance to the special issue topics of interest. Submitted
papers should not have appeared in or be under consideration for another
journal.
Papers can be submitted up to 1 September 2024. The estimated publication
date for the special issue is 15 January 2025.
Paper submission via the IP&M electronic submission system:
https://www.editorialmanager.com/IPM
To submit your manuscript to the special issue, please choose the article
type:
"VSI: LLMs and Data Quality for KGs".
More info here:
https://www.sciencedirect.com/journal/information-processing-and-management…
Instructions for authors:
https://www.sciencedirect.com/journal/information-processing-and-management…
Important dates:
- Submissions close: 1 September 2024
- Publication date (estimated): 15 January 2025
--
Stefano Marchesin, PhD
Assistant Professor (RTD/a)
Information Management Systems (IMS) Group
Department of Information Engineering
University of Padua
Via Gradenigo 6/a, 35131 Padua, Italy
Home page: http://www.dei.unipd.it/~marches1/
9th Symposium on Corpus Approaches to Lexicogrammar (LxGr2024)
CALL FOR PAPERS
Extended deadline for abstract submission: 15 April 2024
The symposium will take place online on Friday 5 and Saturday 6 July 2024.
Invited Speakers
Lise Fontaine<http://www.uqtr.ca/PagePerso/Lise.Fontaine> (Université du Québec à Trois-Rivières): Reconciling (or not) lexis and grammar
Ute Römer-Barron<http://alsl.gsu.edu/profile/ute-romer> (Georgia State University): Phraseology research in second language acquisition
LxGr primarily welcomes papers reporting on corpus-based research on any aspect of the interaction of lexis and grammar - particularly studies that interrogate the system lexicogrammatically to get lexicogrammatical answers. However, position papers discussing theoretical or methodological issues are also welcome, as long as they are relevant to both lexicogrammar and corpus linguistics.
If you would like to present, send an abstract of 500 words (excluding references) to lxgr(a)edgehill.ac.uk
Abstracts for research papers should specify the research focus (research questions or hypotheses), the corpus, the methodology (techniques, metrics), the theoretical orientation, and the main findings. Abstracts for position papers should specify the theoretical orientation and the potential contribution to both lexicogrammar and corpus linguistics.
Abstracts will be double-blind reviewed by members of the Programme Committee<https://sites.edgehill.ac.uk/lxgr/committee>.
Full papers will be allocated 35 minutes (including 10 minutes for discussion).
Work-in-progress reports will be allocated 20 minutes (including 5 minutes for discussion).
There will be no parallel sessions.
Participation is free.
For details, visit the LxGr website: https://sites.edgehill.ac.uk/lxgr/lxgr2024
If you have any questions, contact gabrielc(a)edgehill.ac.uk
The Department of Digital Humanities, Faculty of Arts, University of Helsinki, invites applications for the position of
UNIVERSITY LECTURER IN HUMANITIES DATA SCIENCE / COMPUTATIONAL HUMANITIES
for a permanent appointment starting 1st of September 2024.
https://jobs.helsinki.fi/job/Helsinki-University-Lecturer-in-Humanities-Dat…
Due date: April 25, 2024
The position relates to the application of computational and/or statistical methods in the humanities. The application areas are to be interpreted broadly, from area studies to cognitive science, linguistics to history, phonetics to literature. Application, on the other hand, is to be understood primarily from the viewpoint of end-use across this plethora of humanistic research, e.g. through matching approaches to research questions and data, and not as a focus on methodological development itself. The lecturer will be attached to the Liberal Arts and Sciences bachelor’s programme currently under preparation at the university.
——————————————
Jörg Tiedemann
University of Helsinki
https://blogs.helsinki.fi/language-technology/
The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues.
Topics of interest
We welcome both full papers and extended abstract submissions on the
following topics, including but not limited to:
- LLM-based evaluation metrics for traditional IR and generative IR.
- Agreement between human and LLM labels.
- Effectiveness and/or efficiency of LLMs to produce robust relevance
labels.
- Investigating LLM-based relevance estimators for potential systemic
biases.
- Automated evaluation of text generation systems.
- End-to-end evaluation of Retrieval Augmented Generation systems.
- Trustworthiness in LLM-based evaluation.
- Prompt engineering for LLM-based evaluation.
- Effectiveness and/or efficiency of LLMs as ranking models.
- LLMs in specific IR tasks such as personalized search, conversational
search, and multimodal retrieval.
- Challenges and future directions in LLM-based IR evaluation.
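As one concrete instance of the "agreement between human and LLM labels" topic above, a minimal stdlib-only sketch of Cohen's kappa, the standard chance-corrected agreement statistic (assuming one label per query-document pair from each annotator):

```python
from collections import Counter

def cohens_kappa(human_labels, llm_labels):
    """Cohen's kappa between two annotators (e.g. human vs. LLM
    relevance labels): observed agreement corrected for chance."""
    assert len(human_labels) == len(llm_labels)
    n = len(human_labels)
    observed = sum(h == m for h, m in zip(human_labels, llm_labels)) / n
    freq_h, freq_m = Counter(human_labels), Counter(llm_labels)
    expected = sum(freq_h[c] * freq_m[c] for c in freq_h) / n ** 2
    return (observed - expected) / (1 - expected)  # undefined if expected == 1
```

Kappa of 1.0 means perfect agreement; 0.0 means agreement no better than chance given each annotator's label frequencies.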
Submission guidelines
We welcome the following submissions:
- Previously unpublished manuscripts will be accepted as extended
abstracts and full papers (any length between 1 and 9 pages) with unlimited
references, formatted according to the latest ACM SIG proceedings template
available at http://www.acm.org/publications/proceedings-template.
- Published manuscripts can be submitted in their original format.
All submissions should be made through Easychair:
https://easychair.org/conferences/?conf=llm4eval
All papers will be peer-reviewed (single-blind) by the program committee
and judged by their relevance to the workshop, especially to the main
themes identified above, and their potential to generate discussion. For
already published studies, the paper can be submitted in the original
format. These submissions will be reviewed for their relevance to this
workshop. All submissions must be in English (PDF format).
All accepted papers will have a poster presentation with a few selected for
spotlight talks. Accepted papers may be uploaded to arXiv.org, allowing
submission elsewhere as they will be considered non-archival. The
workshop’s website will maintain a link to the arXiv versions of the papers.
Important Dates
- Submission Deadline: April 25th, 2024 (AoE time)
- Acceptance Notifications: May 31st, 2024 (AoE time)
- Workshop date: July 18, 2024
Website
For more information, visit the workshop website:
https://llm4eval.github.io/
Contact
For any questions about paper submission, you may contact the workshop
organizers at llm4eval(a)easychair.org