Event Notification Type: Test set and submission instructions released.
Website:
https://pan.webis.de/clef24/pan24-web/oppositional-thinking-analysis.html#s…
*TEST SET AND INSTRUCTIONS RELEASED*
*Oppositional Thinking Analysis PAN@CLEF*
Dear all,
As announced, we are excited to share that the test set for the
evaluation phase, together with the instructions on how to participate, has
been released and can be consulted on the shared task website:
https://pan.webis.de/clef24/pan24-web/oppositional-thinking-analysis.html#s…
Please find below some important dates to keep in mind:
- February 23rd, 2024: Training Set Release
- May 15th, 2024: Test Set Release
- May 30th, 2024: Submission Deadline
- June 15th, 2024: Participant paper submission (midnight CEST)
- July 1st, 2024: Peer review notification
- July 7th, 2024: Camera-ready participant paper submission (midnight CEST)
Once again, thank you for your interest and support. If you have any
questions or need assistance at any point during the campaign, please feel
free to reach out to us.
Warm regards,
Francisco Rangel
on behalf of the Oppositional Thinking Analysis Task Committee
Dear colleagues,
Please find below the first call for proposals for this year's new shared
tasks, to be presented during the Generation Challenges session of INLG.
=======================================
GenChal @ 17th International Conference on Natural Language Generation
Tokyo, Japan, September 23-27 2024
INLG Twitter: @inlgmeeting
INLG website: https://inlg2024.github.io/
=======================================
Submission deadline: June 24th 2024
We invite submissions of papers describing ideas for future shared tasks in
the general area of language generation (Generation Challenges 2024).
Proposed tasks can be in the area of core NLG, or in other research areas
in which language is generated. Examples include, but are not limited to:
data-to-text NLG, text-to-text generation (including MT and summarisation),
combining core NLG and MT, combining core NLG and text summarisation, NLG
quality estimation, NLG evaluation metrics, and/or generating language from
heterogeneous data, including image and video.
The Generation Challenges (GenChal) are an umbrella event designed to bring
together a variety of shared-task efforts that involve the generation of
natural language. This year, Generation Challenges will be held as a
workshop at the 17th International Conference on Natural Language
Generation (INLG 2024 <https://inlg2024.github.io/>), scheduled for
September 23-27, 2024. The workshop will follow the format of previous
GenChal results sessions, with presentations of results by the organisers
of the generation challenges that are currently running, a poster session
for task participants to present their submissions, as well as
presentations of proposals for new shared tasks in the Task Proposals
Track, and discussion sessions. You can see some of the previous GenChal
tasks in the past GenChal proceedings on the ACL Anthology (see e.g. 2022
<https://aclanthology.org/volumes/2022.inlg-genchal/> or 2023
<https://aclanthology.org/volumes/2023.inlg-genchal/>) or on the dedicated
repository <https://sites.google.com/view/genchalrepository/home>.
Submissions should describe possible future tasks in detail, including
information regarding organisers, task description, motivating theoretical
interest and/or application context, size and state of completion of data
to be used, schedule and evaluation plans. Accepted shared tasks will be
run in the 2025 iteration of INLG.
Important dates
- Submission deadline: June 24th 2024
- Notification: July 15th 2024
- Camera-ready submission: August 16th 2024
- Workshop at INLG conference: September 23rd-24th 2024
All deadlines are 11:59 pm UTC-12 ("anywhere on Earth").
Submissions and format
Submissions in the Shared Task Proposals track should be no more than 4
(four) pages long excluding citations, and should follow the ACLPUB
formatting guidelines <https://acl-org.github.io/ACLPUB/formatting.html> (you
will find LaTeX style files and Microsoft Word templates under this link).
Proposals should be uploaded to the SoftConf GenChal submission page
<https://softconf.com/n/inlg2024/user/scmd.cgi?scmd=submitPaperCustom&pageid…>,
using the submission type "New shared task proposal".
Submissions will be peer-reviewed by the program committee. As reviewing
will not be blind, there is no need to anonymise papers.
This is not intended to be a selective process, since the aim is to discuss
new potential shared tasks with INLG delegates. However, the organisers
reserve the right to reject proposals which do not fall within the scope of
the GenChal initiative, or which do not follow the guidelines. Accepted
submissions will be published in separate GenChal 2024 proceedings on the
ACL Anthology, as was done in 2022
<https://aclanthology.org/volumes/2022.inlg-genchal/> and 2023
<https://aclanthology.org/volumes/2023.inlg-genchal/>.
Looking forward to seeing you at INLG!
Miruna and Simon, GenChal chairs, on behalf of the INLG'24 organisers
*ADAPT Research Centre / Ionaid Taighde ADAPT*
*School of Computing, Dublin City University, Glasnevin Campus
/ Scoil na Ríomhaireachta,
Campas Ghlas Naíon, Ollscoil Chathair Bhaile Átha Cliath*
PrivateNLP 2024: Fifth Workshop on Privacy in Natural Language Processing at ACL 2024
Final Call For Papers
ACL PrivateNLP is a full-day workshop taking place on August 15, 2024, in conjunction with ACL 2024.
Workshop website: https://sites.google.com/view/privatenlp/
Important Dates:
• [Extended] Submission Deadline: May 30, 2024
• Acceptance Notification: June 17, 2024
• Camera-ready versions: July 01, 2024
• Workshop: August 15, 2024
Privacy-preserving data analysis has become essential in the age of Large Language Models (LLMs), where access to vast amounts of data can provide gains over tuned algorithms. A large proportion of user-contributed data comes from natural language, e.g., text transcriptions from voice assistants.
It is therefore important to curate NLP datasets while preserving the privacy of the users whose data is collected, and to train LLMs that retain only non-identifying user data.
The workshop aims to bring together practitioners and researchers from academia and industry to discuss the challenges and approaches to designing, building, verifying, and testing privacy-preserving systems in the context of Natural Language Processing.
Topics of interest include but are not limited to:
* Privacy in Large Language Models
* Generating privacy preserving test sets
* Inference and identification attacks
* Generating Differentially private derived data
* NLP, privacy and regulatory compliance
* Private Generative Adversarial Networks
* Privacy in Active Learning and Crowdsourcing
* Privacy and Federated Learning in NLP
* User perceptions on privatized personal data
* Auditing provenance in language models
* Continual learning under privacy constraints
* NLP and summarization of privacy policies
* Ethical ramifications of AI/NLP in support of usable privacy
* Homomorphic encryption for language models
Submissions:
Accepted papers will be presented orally or as posters and included in the workshop proceedings. Submissions are open to all, and are to be submitted anonymously. All papers will be refereed through a double-blind peer review process by at least three reviewers with final acceptance decisions made by the workshop organizers.
OpenReview direct submission: https://openreview.net/group?id=aclweb.org/ACL/2024/Workshop/PrivateNLP
Organizers:
Sepideh Ghanavati, University of Maine
Abhilasha Ravichander, Allen AI
Niloofar Mireshghallah, University of Washington
Ivan Habernal, Paderborn University
Seyi Feyisetan, Amazon
Patricia Thaine, Private AI
Vijayanta Jain, University of Maine
Timour Igamberdiev, Technical University of Darmstadt
Contact us: privatenlp24-orga(a)lists.uni-paderborn.de
GermEval2024 Shared Task: GerMS-Detect -- Sexism Detection and Annotator Disagreement Prediction in German Online News Fora
=====================================================================================
2nd CALL FOR PARTICIPATION
We would like to invite you to the GermEval shared task GerMS-Detect on Sexism Detection and Annotator Disagreement Prediction in German Online News Fora, co-located with KONVENS 2024 (https://konvens-2024.univie.ac.at/).
Competition Website: https://ofai.github.io/GermEval2024-GerMS/
Important Dates
------------------
Development phase: May 1 - June 5, 2024 (ongoing)
Competition phase: June 7 - June 25, 2024
Paper submission due: July 1, 2024
Camera ready due: July 20, 2024
Shared Task @KONVENS: 10 September, 2024
Task description
------------------
This shared task goes beyond the detection of sexism/misogyny in comments posted (mostly) in German to the comment section of an Austrian online newspaper: many of the texts to be classified contain ambiguous language, express misogyny or sexism in very subtle ways, or lack important context. For these reasons, there can be considerable disagreement between annotators on the appropriate label, and in many cases there is no single correct label. The task is therefore not just about correctly predicting a single label chosen from those assigned by human annotators, but about building models that can predict the level of disagreement, the range of labels assigned by annotators, or the distribution of labels to expect from a specific group of annotators.
For details see the Competition Website (https://ofai.github.io/GermEval2024-GerMS/).
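To make the prediction targets concrete, here is a minimal, hypothetical sketch in plain JavaScript (not code from the shared task) of how a set of annotator labels could be turned into an empirical label distribution and a simple entropy-based disagreement score; the label names and the use of normalised entropy are illustrative assumptions, not the official evaluation measures.

// Hypothetical illustration only -- not the official GerMS-Detect evaluation code.
// Turn the labels assigned by several annotators into (a) an empirical label
// distribution and (b) a normalised-entropy disagreement score in [0, 1].
function labelDistribution(labels) {
  const counts = {};
  for (const l of labels) counts[l] = (counts[l] || 0) + 1;
  const dist = {};
  for (const [l, c] of Object.entries(counts)) dist[l] = c / labels.length;
  return dist;
}

function disagreement(labels, numClasses) {
  const dist = labelDistribution(labels);
  const entropy = -Object.values(dist).reduce((sum, p) => sum + p * Math.log2(p), 0);
  return numClasses > 1 ? entropy / Math.log2(numClasses) : 0; // normalise to [0, 1]
}

// Example: four annotators, two possible labels (label names are made up here).
const labels = ["sexist", "sexist", "not-sexist", "sexist"];
console.log(labelDistribution(labels)); // { sexist: 0.75, "not-sexist": 0.25 }
console.log(disagreement(labels, 2));   // ~0.81, i.e. substantial disagreement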
Organizers
------------
The task is organized by the Austrian Research Institute for Artificial Intelligence (OFAI).
Organizing team
------------------
Brigitte Krenn (brigitte.krenn (AT) ofai.at)
Johann Petrak (johann.petrak (AT) ofai.at)
Stephanie Gross (stephanie.gross (AT) ofai.at)
I have written several small text manipulation tools that I would like anyone interested to try out and give comments and suggestions on. They are pure JavaScript (no libraries) and work using arrays, so they can handle texts up to about the size of a novel. They can be used from the website below or, alternatively, saved and used offline on any device from a phone to a laptop.
The tools include:
vlviewtext.html: a tool for viewing a text file in either text or concordance mode, with fast switching between the two views
vlmakelist.html: a tool for creating a wordlist or frequency list from a text file, the former as a CSV file, the latter as HTML or CSV (a rough sketch of the frequency-list idea appears below)
vltaglist.html: a tool for creating or editing tagged wordlists with up to three levels of tags
The tools may be found at:
https://vincilingua.ca/Tools/index.html
Each comes with a basic online manual, and sample texts in English and French may be found on the site.
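For readers curious what the frequency-list functionality involves, here is a minimal, hypothetical sketch in plain JavaScript; it is not the actual vlmakelist.html code, and the tokenisation pattern and CSV layout are illustrative assumptions.

// Hypothetical illustration only -- not code from vlmakelist.html.
// Count word frequencies in a text and emit one "word,count" CSV row per word.
function frequencyList(text) {
  // Unicode-aware, lower-cased tokenisation; the token pattern is an assumption.
  const words = text.toLowerCase().match(/[\p{L}\p{M}'-]+/gu) || [];
  const counts = new Map();
  for (const w of words) counts.set(w, (counts.get(w) || 0) + 1);
  return [...counts.entries()]
    .sort((a, b) => b[1] - a[1])                 // most frequent first
    .map(([word, count]) => `${word},${count}`)  // one CSV row per word
    .join("\n");
}

console.log(frequencyList("To be, or not to be")); // to,2  be,2  or,1  not,1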
Please send any comments or suggestions to me directly at lessardg(a)protonmail.com.
With thanks in advance,
Greg Lessard
Dear All,
We invite paper submissions to the Workshop on COuntering Disinformation
with AI (CODAI), which will take place on 20 October at ECAI 2024.
*Website:* https://codai2024.github.io/
*Important dates*
Submission deadline: 24th May 2024
Accept/Reject Communications: 1st July 2024
Camera-ready papers due: 22nd July 2024
Workshop date: 20 October 2024
All deadlines are 11:59 pm UTC-12 (“anywhere on earth”).
*Overview*
Social media platforms, designed primarily to allow users to create and share
content with others, have become integral to modern communication, enabling
people to connect with each other and to broadcast information to a wider
audience. On the one hand, these platforms facilitate discussion in an open
and free environment. On the other hand, new societal issues have emerged on
them, among which misinformation is particularly prevalent. Misinformation is
an umbrella term that encompasses fake news, hoaxes, and rumors, among others.
While misinformation refers to the unintentional spread of inauthentic
information, disinformation denotes the spread of inauthentic information
with malign intent.
*Topics*
Areas of interest include, but are not limited to, the following:
- Information diffusion models for understanding and thwarting the
spread of low-quality information;
- Characterization and detection of coordinated inauthentic behavior;
- Novel techniques for detecting malicious accounts (e.g., bots, cyborgs
and trolls);
- Understanding and detection of disinformation;
- Study, inference and detection of narratives in disinformation
campaigns;
- Impact/Harm of misinformation on society.
- Case-studies on the spread and impact of fake news in controversial
topics such as politics, health, climate change, economics, migration.
- Social and psychological studies, or data analytics related to
misinformation spreaders.
- Metrics, tools and methods for measuring the impact of fake news and
of coordinated inauthentic behaviors;
- Datasets for evaluation.
*Submission Link:* https://chairingtool.com/
*Submission Types*
*Original submissions:* The submissions will be reviewed through a
double-blind process and must remain anonymous. They can be either short
papers (2-4 pages) or long papers (6-8 pages), with additional pages
allowed for references.
*Non-archival option:* In addition to regular paper submissions, authors
have the option of submitting previously published research or an abstract as non-archival.
Accepted submissions will be presented at the workshop as oral
presentations.
*Format and styling*
Submissions should be formatted according to the ECAI formatting
instructions and not exceed 7 pages (plus 1 extra page for references).
All submissions should use the ECAI 2024 template and formatting
requirements specified by ECAI.
Please send any questions about the workshop to codaihelp(a)gmail.com
*Organisers*
Rajesh Sharma, University of Tartu, Estonia
Anselmo Peñas, Universidad Nacional de Educación a Distancia (UNED), Spain
CALL FOR PAPERS: ACM Transactions on Recommender Systems
Special Issue on Recommender Systems for Good
Submission deadline: 1 September 2024
Guest Editors:
- Marko Tkalčič, University of Primorska, Slovenia
- Noemi Mauro, University of Turin, Italy
- Alan Said, University of Gothenburg, Sweden
- Nava Tintarev, University of Maastricht, Netherlands
- Antonela Tommasel, ISISTAN, CONICET-UNCPBA, Argentina
Recommender systems are among the most widely used applications of machine learning. Since they are so widely used, it is important that we, as practitioners and researchers, think about the impact these systems may have on users, society, and other stakeholders. In practice, the focus is often on business values such as improving key performance indicators (KPIs), for example increased sales or customer retention. Recommendation technology is currently underutilized as a means of serving societal goals that go beyond the business objectives of individual corporations.
However, other values, bound more closely to societal good, could be considered in the development and goals of a recommender system. In fact, recommender systems have already been explored to stimulate healthier eating behavior and improve health and well-being in general, to help low-income families make school choices, to suggest successful learning paths for students, to encourage climate-protecting energy-saving behavior, to support fair micro-lending, or to improve the information diets of news readers. Research in these areas is, however, limited in number compared to the many papers published every year proposing new models for improved movie recommendations.
Moreover, concerning the methodology and evaluation perspective in this area, it is essential to find a clear methodology and criteria for evaluating the effectiveness and "goodness" of the proposed algorithms. This includes acknowledging that different values may be conflicting, as well as resolving how and when (and by whom) certain values should be prioritized over others.
Research on "Recommender Systems for Good" may benefit from an interdisciplinary approach, drawing on insights from fields such as computer science, ethics, sociology, psychology, law, and economics. Collaborations with stakeholders from diverse backgrounds can enrich the research and ensure that recommendations are grounded in real-world needs and values.
This special issue aims to present state-of-the-art research in which recommender systems have a positive societal impact and help us address urgent societal challenges. It will thereby serve as a call to action for more research in these areas. Ultimately, through this special issue, we hope to establish a vision of "Recommender Systems for Good", following the spirit of the "AI for Good" initiative (https://aiforgood.itu.int) to achieve the United Nations Sustainable Development Goals (2015) and the more recent UNESCO recommendation on the Ethics of Artificial Intelligence (2024) (https://www.unesco.org/en/artificial-intelligence/recommendation-ethics).
Topics:
We aim to collect the latest research on recommender systems for societal good. The topics of the special issues include (but are not limited to):
- Recommender systems for safety, security, and privacy (e.g., reducing poverty and inequality)
- Recommender systems that protect the environment and ecosystems (e.g., lower energy consumption, water and energy management)
- Recommender systems that give control of data back to the users (e.g., transparency of data, models, and outputs)
- Recommender systems for the interconnected society (e.g., increase of solidarity, online conversational health, multi-stakeholder recommenders)
- Accountability in recommender systems, including addressing emerging regulations, such as the DSA (Digital Service Act)
- Recommender systems for the public good (e.g., mental and physical health, welfare, digital literacy, stakeholder engagement, e-learning)
- Introspective studies on the current state of RSs concerning societal good
- Fairness-preserving and fairness-enhancing recommender systems, unbiased recommendations (e.g. to preserve gender equality)
- Responsible recommendation (e.g., in social media and traditional news, avoiding filter bubbles and echo chambers)
- Sustainability and Cultural recommendations (e.g., art, cultural heritage)
- Recommendations to support disadvantaged groups (e.g., elderly, minorities)
- Recommender systems for personal development and well-being (e.g., behavioral change, fitness, self-actualization, personal growth)
Important Dates:
- Submission deadline: September 1, 2024
- First-round review decisions: December 1, 2024
- Deadline for revision submissions: February 1, 2025
- Notification of final decisions: April 1, 2025
Submissions that are received before the first deadline will be directly sent out for review; papers will be immediately published online after acceptance.
Submission Information:
The special issue welcomes technical research papers, survey papers, and opinion/reflective papers. Each paper should address one or more of the abovementioned topics or fall within other areas of Recommender Systems for Good. The special issue will also consider peer-reviewed journal versions (at least 30% new content) of top papers from related recommender-system conferences such as RecSys, SIGIR, KDD, CIKM, IUI, UMAP, CHI, WSDM, ACL, etc. The new content must consist of intellectual contributions, technical experiments, and findings.
Submissions must be prepared according to the TORS submission guidelines (https://dl.acm.org/journal/tors/author-guidelines) and must be submitted via Manuscript Central (https://mc.manuscriptcentral.com/tors).
For questions and further information, please contact the guest editors at rs4good [at] acm [dot] org.
We have **extended the submission deadline** for the Language Technologies
and Digital Humanities Conference (JT-DH 2024), which will take place on
September 19 and 20, 2024, in Ljubljana, Slovenia. More about the venue,
topics, templates, and programme is available here:
https://www.sdjt.si/wp/jtdh-2024-en.
Important dates
- May 31, 2024: **Extended deadline for abstract/paper submission**
- July 5, 2024: Notification of acceptance
- August 23, 2024: Final abstract/paper submission
- August 23, 2024: Registration deadline
- September 18, 2024: Pre-conference events and workshops
- September 19 & 20, 2024: JTDH 2024 Conference.
-- Reminder --
We are seeking a motivated candidate to join our research team in
MediaFutures, at the University of Bergen, Norway. The primary task of this
position will be to develop novel techniques for generating news articles.
This involves creating resources that adapt lexical, grammatical, and
stylistic choices based on various parameters, including user profiles,
cognitive accessibility, and journalistic formats. We are also interested
in exploring how news content can be versioned and adapted dynamically.
This includes tailoring news articles to different user preferences and
user segments, ensuring readability, and optimizing content delivery across
various platforms.
We expect that the candidate will explore how large language models can be
used for news generation while maintaining ethical and responsible
practices. The position also offers the opportunity to collaborate with
industry partners and gather domain-specific datasets from leading
Norwegian media houses. This real-world collaboration will enhance the
relevance and impact of the produced research.
The PhD candidate will work at MediaFutures in Work Package 5 and will
cooperate with researchers and partners in the work package, including the
Language Technology Group at the University of Oslo, the National Library
of Norway, Schibsted, Amedia, and TV 2, as well as relevant researchers and
partners in other work packages.
As an applicant you should have an excellent written and spoken command of
English. Proficiency in Norwegian is an important advantage, but *not* a
requirement.
The deadline is 25th May 2024. For more details about the position and how
to apply see:
https://www.jobbnorge.no/en/available-jobs/job/262259/phd-position-in-langu…
If you have any questions, do not hesitate to contact me.
Best,
Samia
*---*
*Samia Touileb*
*Associate Professor in Natural Language Processing*
*Department of Information Science and Media Studies,* *University of
Bergen*
*MediaFutures: Research Center for Responsible Media Technology &
Innovation*