----------------------------
HealTAC 2024
June 12-14th, 2024, Lancaster (UK)
https://healtac2024.github.io/
----------------------------
1) Second call for contributions: deadline extended to April 8th
2) Keynote speakers: Suzan Verberne and Alistair Johnson
3) Tutorial: ‘Healthcare Text Analytics in the Era of LLMs’ (12 June)
Second call for contributions
The 7th Healthcare Text Analytics Conference (HealTAC 2024) invites contributions that address any aspect of healthcare text analytics. This year, we invite submissions in the form of extended abstracts describing either methodological or application work that has not previously been presented at a conference. Submissions (up to 2 pages) should be prepared using the template available on the conference website.
We also invite PhD and fellowship project submissions that describe ongoing PhD research (any stage) or a planned fellowship application. The conference will provide an opportunity to receive constructive feedback from a panel of experts.
As in previous years, there will be a post-conference open call to submit a journal length paper for further peer review and publication in Frontiers in Digital Health.
Submission site: https://easychair.org/conferences/?conf=healtac2024
Updated key dates:
Deadline for all contributions: April 8th 2024
Notification of acceptance: April 24th 2024
Tutorial: June 12th 2024
Conference: June 13-14th 2024
Keynote speakers
We are pleased to announce that the keynote speakers at HealTAC 2024 will be Suzan Verberne (Leiden University) and Alistair Johnson (Glowyr).
Tutorial
We are also pleased that a team from the Institute of Health Informatics, University College London (Yunsoo Kim, Jinge Wu, Honghan Wu) will deliver a tutorial on 'Healthcare Text Analytics in the Era of Large Language Models'.
Announcements
Follow the conference announcements on social media at #HEALTAC2024
We are looking forward to welcoming you to HealTAC 2024.
Jaya Chaturvedi
KCL DRIVE-Health CDT PhD Student (drive-health.org)
Department of Biostatistics and Health Informatics
C3.15, Social Genetic and Developmental Psychiatry Centre
Institute of Psychiatry, Psychology & Neuroscience
King’s College London
PO Box 80, De Crespigny Park
London SE5 8AF
Pronunciation: Juh-yaa
Pronouns: she/her
GitHub: https://github.com/jayachaturvedi
X: @JayaChatur
Project website: https://sites.google.com/view/pain-mental-health/home
If you receive an email from me outside of normal working hours, please do not feel the need to respond outside of your own working hours.
Hi there,
Could you please distribute the following call for papers? Thanks.
Best,
Pascal Denis
===================================================================================================================================================
Langues et langage à la croisée des disciplines ( LLcD ) is a new research network, supported by the French CNRS. It organizes an international conference in France aiming to advance the scientific understanding of human language and linguistic systems, as well as to foster new collaborations and cross-fertilization among different research areas and approaches.
The LLcD Meeting, slated to occur annually, will change its location and its thematic focus every year. It will be organized around a conference, as well as a panel discussion and a summer school, both within the theme promoted that year. The LLcD conference will feature plenary lectures delivered by invited keynote speakers, a general session and thematic workshops.
The first edition of the conference is scheduled to take place at Sorbonne University, Paris, from September 9th to September 11th, 2024. Its main theme will be: Languages, variations, changes: synchronic, diachronic, typological and comparative approaches.
We invite papers on all sub-fields of linguistics, all languages, and all approaches (descriptive, theoretical, empirical, interdisciplinary, etc.), in order to promote diversity across disciplines and methodologies. Proposals are not required to address the main theme of the conference. The languages of the conference are English and French. Abstracts may be submitted for thematic workshops or the general session, no later than April 20, 2024, on EasyChair.
List of workshops: https://llcd2024.sciencesconf.org/resource/page/id/18
Abstracts must clearly state the research questions, approach, method, data and (expected) results. They must be anonymous: not only must they not contain the presenters' names, affiliations or addresses, but they must avoid any other information that might reveal their author(s). They should not exceed 500 words (including examples, but excluding bibliographical references).
Each submission is subject to three reviews. Abstracts submitted for the general session will be evaluated by three members of the Scientific Committee. Abstracts submitted for workshops will be assessed by two members of the Scientific Committee and (one of) the workshop organizer(s).
EasyChair submission page: https://easychair.org/conferences/?conf=llcd2024
Contact: llcd@sciencesconf.org
--
Pascal
----
For an independent, transparent, and rigorous evaluation!
I support the Inria Evaluation Committee.
----
+++++++++++++++++++++++++++++++++++++++++++++++
Pascal Denis
Equipe MAGNET, INRIA Lille Nord Europe
Bâtiment B, Avenue Heloïse
Parc scientifique de la Haute Borne
59650 Villeneuve d'Ascq
Tel: +33 3 59 35 87 24
Url: http://researchers.lille.inria.fr/~pdenis/
+++++++++++++++++++++++++++++++++++++++++++++++
The “test suites” sub-task will be included for the sixth time in the
General MT Shared Task of the Conference on Machine Translation (WMT24).
*OVERVIEW*
Test suites are custom extensions to the test sets of the General MT
Shared Task, constructed so that they can focus on concrete aspects of
the MT output. They consist of a source-side test-set and a customized
evaluation service. As opposed to the standard evaluation process which
produces generic quality scores, test suites often produce separate
fine-grained results for each phenomenon.
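To make the mechanics concrete, here is a minimal Python sketch of how a test suite might compute a per-phenomenon score, assuming a hypothetical negation-transfer check and an invented item format; actual WMT evaluation services are custom-built per suite and may look quite different.

from collections import defaultdict

NEGATION_CUES = {"not", "no", "never", "none"}

def preserves_negation(source: str, translation: str) -> bool:
    # Toy check: negation in the source should also appear in the output.
    # Real test suites use hand-crafted, language-specific checks.
    src_neg = any(cue in source.lower().split() for cue in NEGATION_CUES)
    hyp_neg = any(cue in translation.lower().split() for cue in NEGATION_CUES)
    return src_neg == hyp_neg

def score_test_suite(items):
    # items: iterable of (phenomenon, source, system_translation) triples.
    # Returns one fine-grained accuracy per phenomenon rather than a
    # single generic corpus-level score.
    correct, total = defaultdict(int), defaultdict(int)
    for phenomenon, source, translation in items:
        total[phenomenon] += 1
        if phenomenon == "negation" and preserves_negation(source, translation):
            correct[phenomenon] += 1
    return {p: correct[p] / total[p] for p in total}

items = [
    ("negation", "She did not sign the contract.", "She signed the contract."),
    ("negation", "No changes were made.", "No changes were made."),
]
print(score_test_suite(items))  # {'negation': 0.5}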
Since the use of LLMs for translation is becoming more popular, and we
are expecting more LLM-based submissions in WMT this year, the theme of
this year’s test suite sub-task is "Help us break LLMs", i.e. to reveal
weaknesses and serious flaws of LLMs when translating, hidden within the
overall high-quality generation.
*IMPORTANT DATES*
* 11th April: Test suite source texts may be submitted for a pre-run
on SoTA MT systems
* 12th June: Test suite source texts must reach us
* 11th July: Translated test suites shipped back to test suite authors
* TBC - August: Test suite description and analysis paper
* 12th-13th November: Conference
Potential participants are kindly requested to fill in this form:
https://forms.office.com/e/e4JuMTSWFF
Further information can be found on the dedicated page of the WMT website:
http://www2.statmt.org/wmt24/testsuite-subtask.html
--
Eleftherios Avramidis, senior researcher
German Research Center for Artificial Intelligence (DFKI)
departments: Design Research eXplorations, Speech and Language Technology
short name: Lefteris, (pronouns: he/him), languages: English, German, Greek
Website: https://www.dfki.de/~elav01
Address: Alt Moabit 91c, 10559 Berlin, Germany
Tel.: +49 30 23895 1806
Sec.: +49 30 23895 1800
Fax.: +49 30 23895 1810
*** Last Call for Journal First Submissions ***
36th International Conference on Advanced Information Systems Engineering
(CAiSE'24)
June 3-7, 2024, 5* St. Raphael Resort and Marina, Limassol, Cyprus
https://cyprusconferences.org/caise2024/
(*** Submission Deadline: 31st March, 2024 AoE ***)
CAiSE 2024 is organising journal-first sessions as part of the scientific program. The aim of
these sessions is to disseminate recent important research contributions and spark
discussions between authors and researchers in the CAiSE community. Authors of selected
journal articles on CAiSE-related topics will be invited to present their work at the
conference.
SCOPE
For the journal-first sessions, we solicit submissions related to articles that have been
accepted for publication by a reputable journal and that meet the following criteria:
• The article relates to the topics of the CAiSE conference and the recent call for papers.
• The article is an original submission to the journal and not an extension of an earlier
conference or workshop paper.
• The article is an original research article; review articles or commentaries will not be
considered.
• The article was accepted for publication by a journal on or after 1 January 2023, the
acceptance must have been publicly announced, the article must be available at the
publisher’s website (e.g., as "articles in advance" or published on a journal’s website), and
the article must be written in English.
• The article has not been presented at, and is not under consideration for, journal-first
tracks of other conferences.
FORMAT
Accepted submissions will be presented as part of the CAiSE 2024 scientific programme.
SUBMISSION
Submissions must be made electronically via EasyChair
(https://easychair.org/my/conference?conf=caise2024) and include:
• Title and author information of the article.
• The original abstract and keywords.
• DOI of the original publication or, alternatively, a link to the publication at the journal’s
website.
EVALUATION
All submissions will be reviewed by the track chairs, with the aim of accepting all
qualifying submissions, subject to the ability to accommodate them in the program. If
needed, priority will be given to submissions according to their topical fit with the scope
of the conference, the importance of the contribution, and the standing of the respective
journal (including, but not limited to, the journal's impact factor and ranking results).
ATTENDANCE AND PRESENTATION
At least one author of each submission accepted for the journal-first track must register
and attend the conference to present the work. The author needs a full registration to
present the journal article. As the articles of the journal-first track have been published
already, they will not be part of the CAiSE 2024 proceedings. The articles will be listed in
the conference program and CAiSE 2024 participants will have access to the respective
abstracts and a pointer to the original journal article.
IMPORTANT DATES
• Submission: 31st March, 2024 (AoE)
• Notification of Acceptance: 14th April, 2024
• Author Registration: 17th May, 2024
• Conference Dates: 3rd-7th June, 2024
JOURNAL FIRST CHAIRS
• Paolo Giorgini, University of Trento, Italy
• Jeffrey Parsons, Memorial University of Newfoundland, Canada
Apologies for crossposting.
Call for Papers
Information Processing & Management (IPM), Elsevier
- CiteScore: 14.8
- Impact Factor: 8.6
Guest editors:
- Omar Alonso, Applied Science, Amazon, Palo Alto, California, USA. E-mail: omralon(a)amazon.com
- Stefano Marchesin, Department of Information Engineering, University of Padua, Padua, Italy. E-mail: stefano.marchesin(a)unipd.it
- Gianmaria Silvello, Department of Information Engineering, University of Padua, Padua, Italy. E-mail: gianmaria.silvello(a)unipd.it
Special Issue on “Large Language Models and Data Quality for Knowledge
Graphs”
In recent years, Knowledge Graphs (KGs), encompassing millions of
relational facts, have emerged as central assets to support virtual
assistants and search and recommendations on the web. Moreover, KGs are
increasingly used by large companies and organizations to organize and
comprehend their data, with industry-scale KGs fusing data from various
sources for downstream applications. Building KGs involves data management
and artificial intelligence areas, such as data integration, cleaning,
named entity recognition and disambiguation, relation extraction, and
active learning.
However, the methods used to build these KGs rely on automated components
that are still imperfect, resulting in KGs that are highly sparse and
contain inaccuracies and wrong facts. As a result, evaluating KG
quality plays a significant role, as it serves multiple purposes – e.g.,
gaining insights into the quality of data, triggering the refinement of the
KG construction process, and providing valuable information to downstream
applications. In this regard, the information in the KG must be correct to
ensure an engaging user experience for entity-oriented services like
virtual assistants. Despite its importance, there is little research on
data quality and evaluation for KGs at scale.
In this context, the rise of Large Language Models (LLMs) opens up
unprecedented opportunities – and challenges – to advance KG construction
and evaluation, providing an intriguing intersection between human and
machine capabilities. On the one hand, integrating LLMs within KG
construction systems could trigger the development of more context-aware
and adaptive AI systems. At the same time, however, LLMs are known to
hallucinate and can thus generate mis/disinformation, which can affect the
quality of the resulting KG. In this sense, reliability and credibility
components are of paramount importance to manage the hallucinations
produced by LLMs and avoid polluting the KG. On the other hand,
investigating how to combine LLMs and quality evaluation has excellent
potential, as shown by promising results from using LLMs to generate
relevance judgments in information retrieval.
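For illustration only, a minimal Python sketch of the kind of quality-control loop hinted at above: an LLM verdict gates each candidate triple, and uncertain cases are routed to human annotators. The prompt wording, the verdict vocabulary, and the ask_llm callable are assumptions of this sketch, not a prescribed design.

from typing import Callable, Tuple

PROMPT = (
    "Passage: {passage}\n"
    "Candidate fact: ({subj}, {rel}, {obj})\n"
    "Is the fact supported by the passage? "
    "Answer SUPPORTED, CONTRADICTED, or UNVERIFIABLE."
)

def verify_triple(ask_llm: Callable[[str], str],
                  triple: Tuple[str, str, str], passage: str) -> str:
    # Route each candidate triple based on the model's verdict; anything
    # the model cannot verify goes to a human annotator queue.
    subj, rel, obj = triple
    verdict = ask_llm(PROMPT.format(passage=passage, subj=subj,
                                    rel=rel, obj=obj)).strip().upper()
    if verdict.startswith("SUPPORTED"):
        return "accept"        # add to the KG
    if verdict.startswith("CONTRADICTED"):
        return "reject"        # likely hallucination or extraction error
    return "human-review"      # unverifiable or unparsable output

# Stub model standing in for a real chat-completion client:
stub = lambda prompt: "UNVERIFIABLE"
print(verify_triple(stub, ("Padua", "locatedIn", "Italy"),
                    "Padua is a city in northern Italy."))  # human-review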
Thus, this special issue promotes novel research on human-machine
collaboration for KG construction and evaluation, fostering the
intersection between KGs and LLMs. To this end, we encourage submissions
related to using LLMs within KG construction systems, evaluating KG
quality, and applying quality control systems to empower KG and LLM
interactions in both research- and industry-oriented scenarios.
Topics include but are not limited to:
- KG construction systems
- Use of LLMs for KG generation
- Efficient solutions to deploy LLMs on large-scale KGs
- Quality control systems for KG construction
- KG versioning and active learning
- Human-in-the-loop architectures
- Efficient KG quality assessment
- Quality assessment over temporal and dynamic KGs
- Redundancy and completeness issues
- Error detection and correction mechanisms
- Benchmarks and evaluation
- Domain-specific applications and challenges
- Maintenance of industry-scale KGs
- LLM validation via reliable/credible KG data
Submission guidelines:
Authors are invited to submit original and unpublished papers. All
submissions will be peer-reviewed and judged on originality, significance,
quality, and relevance to the special issue topics of interest. Submitted
papers should not have appeared in or be under consideration for another
journal.
Papers can be submitted from 1 June 2024 to 1 September 2024. The estimated
publication date for the special issue is 15 January 2025.
Paper submission via the IP&M electronic submission system:
https://www.editorialmanager.com/IPM
Instructions for authors:
https://www.sciencedirect.com/journal/information-processing-and-management…
To submit your manuscript to the special issue, please choose the article
type:
"VSI: LLMs and Data Quality for KGs".
More info here:
https://www.sciencedirect.com/journal/information-processing-and-management…
Important dates:
- Submissions open: 1 June 2024
- Submissions close: 1 September 2024
- Publication date: 15 January 2025
References:
G. Weikum, X. L. Dong, S. Razniewski, et al. (2021). Machine knowledge: creation and curation of comprehensive knowledge bases. Foundations and Trends in Databases, 10, 108-490.
A. Hogan, E. Blomqvist, M. Cochez, et al. (2021). Knowledge graphs. ACM Computing Surveys, 54, 71:1-71:37.
B. Xue and L. Zou (2023). Knowledge Graph Quality Management: A Comprehensive Survey. IEEE Transactions on Knowledge and Data Engineering, 35(5), 4969-4988.
G. Faggioli, L. Dietz, C. L. A. Clarke, G. Demartini, M. Hagen, C. Hauff, N. Kando, E. Kanoulas, M. Potthast, B. Stein, and H. Wachsmuth (2023). Perspectives on Large Language Models for Relevance Judgment. In Proceedings of the 2023 ACM SIGIR International Conference on Theory of Information Retrieval (ICTIR 2023), Taipei, Taiwan. ACM, 39-50.
S. MacAvaney and L. Soldaini (2023). One-Shot Labeling for Automatic Relevance Estimation. In Proceedings of the 46th International ACM SIGIR Conference on Research and Development in Information Retrieval (SIGIR 2023), Taipei, Taiwan. ACM, 2230-2235.
X. L. Dong (2023). Generations of Knowledge Graphs: The Crazy Ideas and the Business Impact. Proceedings of the VLDB Endowment, 16(12), 4130-4137.
S. Pan, L. Luo, Y. Wang, C. Chen, J. Wang, and X. Wu (2023). Unifying Large Language Models and Knowledge Graphs: A Roadmap. CoRR abs/2306.08302.
--
Stefano Marchesin, PhD
Assistant Professor (RTD/a)
Information Management Systems (IMS) Group
Department of Information Engineering
University of Padua
Via Gradenigo 6/a, 35131 Padua, Italy
Home page: http://www.dei.unipd.it/~marches1/
Join us for the 1st workshop on “Reclaiming the Narrative: Digital Recovery, AI & Mitigating Harm in Social Media” at ICWSM!
If you work in harm reduction, NLP, AI, Social Sciences, recovery, HCI, and adjacent narrative studies, this one’s for you!
* When: June 3, 2024.
* Format: Hybrid (@Buffalo, NY and Zoom)
* [NEW] Submission deadline extended to: April 1
We invite submissions of abstracts (2 pages), as well as Long (8 pages) and Short (4 pages) papers, excluding references and appendices. The Long and Short papers will be included in the ICWSM Workshop proceedings, published by AAAI Press.
For more details, please visit https://sites.google.com/view/reclaiming-the-narrative/
Overview
The first workshop on evaluating IR systems with Large Language Models
(LLMs) is accepting submissions that describe original research findings,
preliminary research results, proposals for new work, and recent relevant
studies already published in high-quality venues. The workshop will have
both an in-person and a virtual component, and submissions are welcome
from researchers who cannot attend in person, as they can present their
work in the virtual component.
Topics of interest
We welcome both full papers and extended abstract submissions on the
following topics, including but not limited to:
- LLM-based evaluation metrics for traditional IR and generative IR.
- Agreement between human and LLM labels (a toy agreement sketch appears after this list).
- Effectiveness and/or efficiency of LLMs to produce robust relevance
labels.
- Investigating LLM-based relevance estimators for potential systemic
biases.
- Automated evaluation of text generation systems.
- End-to-end evaluation of Retrieval Augmented Generation systems.
- Trustworthiness in LLM-based evaluation.
- Prompt engineering for LLM-based evaluation.
- Effectiveness and/or efficiency of LLMs as ranking models.
- LLMs in specific IR tasks such as personalized search, conversational
search, and multimodal retrieval.
- Challenges and future directions in LLM-based IR evaluation.
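Below is the toy sketch referenced in the agreement topic above: a self-contained Cohen's kappa computation in Python over invented human and LLM relevance labels. Real studies would use proper qrels and far larger samples; this only shows the shape of such a measurement.

from collections import Counter

def cohen_kappa(a, b):
    # Chance-corrected agreement between two label sequences.
    assert a and len(a) == len(b)
    n = len(a)
    observed = sum(x == y for x, y in zip(a, b)) / n
    ca, cb = Counter(a), Counter(b)
    expected = sum(ca[l] * cb[l] for l in set(a) | set(b)) / (n * n)
    if expected == 1:
        return 1.0
    return (observed - expected) / (1 - expected)

human = [1, 0, 1, 1, 0, 0, 1, 0]  # invented human relevance judgments
llm   = [1, 0, 1, 0, 0, 1, 1, 0]  # invented LLM-generated judgments
print(f"kappa = {cohen_kappa(human, llm):.2f}")  # kappa = 0.50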
Submission guidelines
We welcome the following submissions:
- Previously unpublished manuscripts will be accepted as extended
abstracts and full papers (any length from 1 to 9 pages) with unlimited
references, formatted according to the latest ACM SIG proceedings template
available at http://www.acm.org/publications/proceedings-template.
- Published manuscripts can be submitted in their original format.
All submissions should be made through Easychair:
https://easychair.org/conferences/?conf=llm4eval
All papers will be peer-reviewed (single-blind) by the program committee
and judged by their relevance to the workshop, especially to the main
themes identified above, and their potential to generate discussion. For
already published studies, the paper can be submitted in the original
format. These submissions will be reviewed for their relevance to this
workshop. All submissions must be in English (PDF format).
Please note the workshop will have an in-person (to be held with SIGIR
2024) and virtual component (to be held at a later date on SIGIR VF).
During submission, the authors should select their preferred component. All
accepted papers will have a poster presentation with a few selected for
spotlight talks. Accepted papers may be uploaded to arXiv.org, allowing
submission elsewhere as they will be considered non-archival. The
workshop’s website will maintain a link to the arXiv versions of the papers.
Important Dates
- Submission Deadline: April 25th, 2024 (AoE time)
- Acceptance Notifications: May 31st, 2024 (AoE time)
- Workshop date: July 18, 2024
Website and Contact
More details are available at https://llm4eval.github.io/cfp/.
For any questions about paper submission, you may contact the workshop
organizers at llm4eval(a)easychair.org
Dear colleagues,
We are happy to announce the availability of the following lexical resource:
A graded word list of American English, covering 126K words.
The publication is:
Flor, M., Holtzman, S., Deane, P., & I. Bejar (2024).
Mapping of American English vocabulary by grade levels.
ITL - International Journal of Applied Linguistics.
DOI: https://doi.org/10.1075/itl.22025.flo
The resource is available at GitHub:
https://github.com/maafiah/VXGL
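If it helps adoption, here is a hypothetical Python loader for such a list; the file name and the word/grade tab-separated column layout are assumptions on my part, so please check the repository's README for the actual format.

import csv

def load_graded_words(path: str) -> dict:
    # Assumed format: tab-separated lines of "word<TAB>grade".
    # Verify against the VXGL repository's README before relying on this.
    grades = {}
    with open(path, newline="", encoding="utf-8") as f:
        for row in csv.reader(f, delimiter="\t"):
            if len(row) >= 2:
                grades[row[0].lower()] = row[1]
    return grades

# Usage (hypothetical file name):
# grades = load_graded_words("vxgl.tsv")
# print(grades.get("molecule"))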
Michael Flor
Senior Research Scientist
Research Division
Educational Testing Service
Princeton, NJ, USA
mflor(a)ets.org
Dear colleagues,
A faculty position (Maître de Conférences, MCF) in Information and Communication Sciences is open for applications in the 2024 synchronized recruitment campaign.
The position profile, "Mutations de l'information et de la communication scientifique" (Transformations of scientific information and communication), is available on Galaxie: https://www.galaxie.enseignementsup-recherche.gouv.fr/ensup/ListesPostesPub…
The successful candidate will conduct research in the GERiiCO laboratory and teach in the INFODOC department of the Université de Lille.
Best regards,
Amel Fraisse.
Amel Fraisse
Maîtresse de Conférences (Associate Professor)
Head of the INFODOC Department
Université de Lille - Département INFODOC - Laboratoire GERiiCO
amel.fraisse(a)univ-lille.fr / https://pro.univ-lille.fr/amel-fraisse/
Domaine Universitaire de Pont de Bois - Villeneuve d'Ascq
Bât. 2 - bureau B2.467
T. +33 (0)3 20 41 69 38
[apologies if you received multiple copies of this call]
Dear colleagues and friends,
*We are pleased to release the 1st Call for Participation - LLMs4OL
Challenge collocated with The International Semantic Web Conference (ISWC
2024)*
*Overview:* LLMs4OL stands for "Large Language Models for Ontology
Learning." The LLMs4OL paradigm was first introduced in our research paper (
https://link.springer.com/chapter/10.1007/978-3-031-47240-4_22) published
in the ISWC 2023 main conference proceedings. In this context, we aimed to
test the readiness of LLMs to address the Ontology Learning (OL) task
w.r.t. three main subtasks: 1) Term Typing, 2) Type Taxonomy Discovery, and
3) Non-Taxonomic Relation Extraction. Our evaluations therein included
ontologies from various knowledge domains, i.e., lexicosemantics (WordNet),
geography (GeoNames), biomedicine (NCI, MEDICIN, SNOMEDCT), and web content
types (schema.org). With the ISWC-LLMs4OL 2024 challenge, we aim to
catalyze community-wide engagement in validating and expanding the use of
LLMs in OL by releasing our evaluation datasets publicly to the community.
This initiative is poised to advance our comprehension of LLMs’ roles
within the Semantic Web, encouraging innovation and collaboration in
developing scalable and accurate OL methods.
More info on the task website: https://sites.google.com/view/llms4ol/
The LLMs4OL Challenge will be divided into two evaluation phases:
- Evaluation Phase 1: Few-shot Testing;
- Evaluation Phase 2: Zero-shot Testing
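For readers new to the task, a minimal Python sketch of what a zero-shot Term Typing attempt (subtask 1 above) could look like; the prompt template and stub model are placeholders, not the official challenge harness or dataset format.

def term_typing_prompt(term: str, domain: str) -> str:
    # Illustrative zero-shot prompt: ask for a single ontological type.
    return (f"In the {domain} domain, what is the ontological type of "
            f"the term '{term}'? Answer with a single type label.")

def predict_type(ask_llm, term: str, domain: str) -> str:
    return ask_llm(term_typing_prompt(term, domain)).strip()

# Stub model standing in for a real LLM client:
stub = lambda prompt: "city"
print(predict_type(stub, "Heidelberg", "geography"))  # city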
*Dates*
Training datasets available: March 30, 2024
Test data available (Task A): May 27, 2024
Evaluation ends (Task A): June 4, 2024
Test data available (Tasks B & C): June 5, 2024
Evaluation ends (Tasks B & C): June 18, 2024
Participant papers due: June 28, 2024
Notification to authors: July 19, 2024
Camera ready due: July 30, 2024
ISWC 2024, Baltimore, Maryland, USA: 11-15 November 2024
*Task Organizers*
Hamed Babaei Giglou (TIB Leibniz Information Centre for Science and
Technology - Germany)
Jennifer D’Souza (TIB Leibniz Information Centre for Science and Technology
- Germany)
Sören Auer (TIB Leibniz Information Centre for Science and Technology -
Germany)
We look forward to having you on board!
*Contact:* llms4ol.challenge [at] gmail.com