Hi, good morning
This is to share with you that our research group has made publicly available the beta version of *the chatbot for the Portuguese language based on open LLMs, Evaristo.ai <https://evaristo.ai/>*.
One such LLM is Gervásio 8B <https://huggingface.co/PORTULAN/gervasio-8b-portuguese-ptpt-decoder>, which we developed for Portuguese and are also releasing now, and which is the LLM you will find active by default when arriving at this chatbot.
Though you may not be proficient in Portuguese, on-the-fly translators will likely help
you grasp its content and basic functioning. *We invite you, and your colleagues
and students, to visit it and try out this first test version. We welcome any help
you can give us in testing it, and any feedback and suggestions you can share with us.*
It has a unique set of features, among others: it is an open AI chatbot for
the Portuguese language; it is multi-model and multi-heteronym, as well as being agentic,
multi-tool and multi-modal; it does not track its users or pass on their content to third parties,
safeguarding user privacy and ownership of their content.
You'll find the presentation of its motivation in this press release <https://evaristo.ai/assets/pressRelease_EvaristoAI.pdf> (in English),
which can be complemented by the more complete description in the About section <https://evaristo.ai/about> (in Portuguese).
The current open LLMs available are typically between 10 and 100 times smaller than
the top-of-the-range closed LLMs used in commercial chatbots, so the costs associated
with training and operating them are much lower. The performance of open LLMs, however,
is much more satisfactory than this linear disproportion would suggest. They therefore have
an excellent ratio of performance quality versus cost, and are a viable option for
fully autonomous generative AI services focused on concrete use cases.
In this context, we see this chatbot as a milestone in the democratization of
generative technology for the Portuguese language, through open LLMs, by encouraging
more and more organizations to move forward with their own AI services,
running on their own computers, and focused on their concrete use cases.
Have a nice day,
António
Dear colleagues,
We are pleased to announce the final call for participation in the 1st Shared Task on Language Identification for Web Data at WMDQS @ COLM 2025.
Important information:
🗓️ Registration Deadline: July 23 (AoE)
📍 Montréal, Canada
🌐 https://wmdqs.org/shared-task/
Registration:
To register, please submit a one-page document with a title, a list of authors, a list of provisional languages that you want to focus on, and a brief description of your approach. This document should be sent to wmdqs-pcs(a)googlegroups.com. You can change the list of languages or the system description during the shared task. This document's only purpose is to register your participation in the shared task. The shared task will run until the last week of September.
Motivation:
The lack of training data—especially high-quality data—is the root cause of poor language model performance for many languages. One obstacle to improving the quantity and quality of available text data is language identification (LangID or LID). LangID remains far from solved for many languages. Several of the commonly used LangID models were introduced in 2017 (e.g. fastText and CLD3). The aim of this shared task is to encourage innovation in open-source language identification and improve accuracy on a broad range of languages.
All participants will be invited to contribute a larger paper, which will be submitted to a high-impact NLP venue.
Description:
The main shared task is to submit LangID models that work well on a wide variety of languages on web data. We encourage participants to employ a range of approaches, including the development of new architectures and the curation of novel high-quality annotated datasets.
We recommend using the GlotLID corpus as a starting point for training data. Access to the data will be managed through the Hugging Face repository. Please note that this data should not be redistributed. We will use the same language label format as GlotLID: an ISO 639-3 language code plus an ISO 15924 script code, separated by an underscore.
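For illustration, a label in this format looks like `eng_Latn` or `por_Latn`. A minimal sketch of how such labels could be validated and split (the helper name and regex are ours, not part of the shared task's tooling):

```python
import re

# GlotLID-style label: an ISO 639-3 language code (three lowercase letters)
# plus an ISO 15924 script code (one uppercase letter, three lowercase),
# joined by an underscore, e.g. "eng_Latn", "rus_Cyrl".
LABEL_RE = re.compile(r"^[a-z]{3}_[A-Z][a-z]{3}$")

def split_label(label: str) -> tuple[str, str]:
    """Split a label like 'eng_Latn' into (language, script); reject malformed input."""
    if not LABEL_RE.match(label):
        raise ValueError(f"not a valid language_Script label: {label!r}")
    lang, script = label.split("_")
    return lang, script
```

For example, `split_label("por_Latn")` yields `("por", "Latn")`, while a two-letter ISO 639-1 code such as `"en_Latn"` is rejected, since the task uses three-letter ISO 639-3 codes.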
Although all systems will be evaluated on the full range of languages in our test set, we encourage submissions that focus on a particular language or set of languages, especially if those language(s) present particular challenges for language identification.
The shared task will take place in rounds. The first round will only include data from already existing datasets; subsequent rounds will include data annotated by the community as it is collected and processed. More languages will also be added in subsequent rounds.
Organizers:
For any questions, please send an email to wmdqs-pcs(a)googlegroups.com
Program Chairs:
Pedro Ortiz Suarez (Common Crawl Foundation)
Sarah Luger (MLCommons)
Laurie Burchell (Common Crawl Foundation)
Kenton Murray (Johns Hopkins University)
Catherine Arnett (EleutherAI)
Organizing Committee:
Thom Vaughan (Common Crawl Foundation)
Sara Hincapié (Factored)
Rafael Mosquera (MLCommons)
Dear colleagues,
My name is Alessandra Teresa Cignarella, I'm a postdoctoral researcher in the Language and Translation Technology Team (LT3) at Ghent University in Belgium. My research project is called RAINBOW [??] and I'm currently studying stereotypes about LGBTQIA+ people, particularly on social media, in online discourse, and in AI systems.
We have developed a brief questionnaire to gather diverse perspectives from those who experience or recognize these stereotypes. Your participation will support the creation of a multilingual dataset (Italian, Dutch, and Farsi) aimed at improving the inclusivity and reducing the harm caused by AI technologies toward queer communities. Whether you identify as LGBTQIA+, are an ally, or are interested in this research area, your input is highly valued.
Please find the questionnaire here:
* ITALIAN: https://lnkd.in/dfPuyT6j
* DUTCH: https://lnkd.in/d-3Di7WY
* FARSI: https://lnkd.in/dfvWzWCu
Should you have any questions, please do not hesitate to contact me at: alessandrateresa.cignarella(a)ugent.be<mailto:alessandrateresa.cignarella@ugent.be>
I would greatly appreciate it if you could share this survey with your contacts who speak any of these three languages.
Thank you very much for your support!
Best regards,
Alessandra
Alessandra Teresa Cignarella (she/her)
MSCA postdoctoral fellow
LT3, Language and Translation Technology Team
Department of Translation, Interpreting and Communication
Ghent University
UCCTS 2025 - Call for Participation
The eighth edition of the UCCTS conference (www.uni-hildesheim.de/uccts2025) will be held on 8-10 September 2025 in Hildesheim, Germany.
The UCCTS conference series is meant to bring together researchers who collect, annotate, and analyze corpora and/or use them to inform contrastive linguistics and translation theory and/or to develop corpus-informed tools (in foreign language teaching, language testing and quality assessment, translation pedagogy, computer-aided/machine translation, or other related NLP domains). We invite original submissions on various topics within empirical contrastive linguistics and translation studies (see below). We welcome interdisciplinary contributions that combine corpus data with other types of empirical data (e.g. experimental data) and allow for an interplay between different methods and data types. Moreover, we encourage contributions applying information and computational technologies, including Large Language Models (LLMs).
Keynote speakers
* Elke Teich, Saarland University, Germany
* Dylan Glynn, Université Paris 8, Vincennes - St Denis
* Christian Hardmeier, IT University of Copenhagen
Programme details: https://www.uni-hildesheim.de/fb3/institute-1/institut-fuer-uebersetzungswi…
Information on registration: https://www.uni-hildesheim.de/fb3/institute-1/institut-fuer-uebersetzungswi…
The UCCTS conference in Hildesheim precedes the annual conference on computational linguistics, KONVENS, which will also take place in Hildesheim, on 10-12 September.
Questions and inquiries under uccts2025(at)uni-hildesheim.de
***********************************
***** 2nd Call for Abstracts *****
***********************************
*** NARNiHS 2026
*** North American Research Network in Historical Sociolinguistics
*** Eighth Annual Meeting
*** 100% IN PERSON
*** Co-Located with the Linguistic Society of America (LSA) Annual Meeting
*** New Orleans, Louisiana USA
*** 8-11 January 2026
This event offers an opportunity for historical sociolinguistics scholars from all over the world to gather and share leading research. We encourage our fellow historical sociolinguists and scholars in related fields from our global scholarly community to **join us in New Orleans** for our Eighth Annual Meeting.
Consult this Call for Abstracts on the web: https://narnihs.org/?page_id=3135 .
--------------- Call for Abstracts ---------------
Abstract submission online: https://easyabs.linguistlist.org/conference/NARNiHS_26/ .
Deadline: Friday, 15 August 2025, 11:59 PM US Eastern Time.
Late abstracts will not be considered.
The North American Research Network in Historical Sociolinguistics (NARNiHS) is accepting abstracts for its Eighth Annual Meeting in New Orleans, Thursday, January 8 -- Sunday, January 11, 2026. The 8th edition of this inclusive NARNiHS event seeks to provide a collaborative environment where presenters bring fully developed work for presentation and enrichment. We see the NARNiHS Annual Meeting as a place for showcasing excellent projects in historical sociolinguistics, seeking feedback from peers, and engaging in productive development of the field’s enduring questions.
NARNiHS welcomes papers in all areas of historical sociolinguistics, which is understood as the application and/or development of sociolinguistic theories, methods, and models for the study of historical language variation and change over time, or more broadly, the study of the interaction of language and society in historical periods and from historical perspectives. Thus, a wide range of linguistic areas, subdisciplines, methodologies, and adjacent disciplines easily find their place within historical sociolinguistics, and we encourage submission of abstracts that reflect this broad scope.
Abstracts will be accepted for both 20-minute papers and posters. Please note that, at the NARNiHS annual meeting, poster presentations are an integral part of the conference (not second-tier presentations). Abstracts will be assigned a paper or a poster presentation based on determinations in the review process about the most effective format for the submission. However, if you prefer that your submission be considered primarily for poster presentation, please specify this in your abstract.
Successful abstracts will demonstrate *thorough grounding* in historical sociolinguistics, *scientific rigor* in the formulation of research questions, and promise for rich discussion of ideas. Successful abstracts will be explicit about which *theoretical frameworks*, *methodological protocols*, and *analytical strategies* are being applied or critiqued. *Data sources and examples* should be sufficiently presented, so as to allow reviewers a full understanding of the scope and claims of the research. Please note that the *connection of your research to the field of historical sociolinguistics* should be explicitly outlined in your abstract. Failure to adhere to these criteria will likely result in rejection.
*** Abstract Format Guidelines***.
- Abstracts must be submitted in PDF format.
- Abstracts must fit on one 8.5x11 inch page, with margins no smaller than 1 inch and a font style and size no smaller than Times New Roman 12 point. You are encouraged to use the entire page, providing a full and robust description of the research. All additional supporting content (visualizations, trees, tables, figures, captions, examples, and references) must fit on a single (1) additional page. No exceptions to these requirements are allowed; abstracts longer than one page or with more than one additional page of supporting content will be rejected without review.
- Specify if you prefer your submission be considered primarily for a poster presentation.
- Anonymize your abstract. We realize that sometimes complete anonymity is not attainable, but there is a difference between the nature of the research creating an inability to anonymize and careless non-anonymizing (in citations, references, file names, etc.). Be sure to anonymize your PDF file (you may do so in Adobe Acrobat Reader by clicking on "File", then "Properties", removing your name if it appears in the "Author" line of the "Description" tab, and re-saving the file before submission). Do not use your name when saving your PDF (e.g. Smith_Abstract.pdf); file names will not be automatically anonymized by the EasyAbs system. Rather, use non-identifying information in your file name (e.g. HistSoc4Lyfe.pdf). Your name should only appear in the online form accompanying your abstract submission. Papers that are not sufficiently anonymized wherever possible will be rejected without review.
*** General Requirements ***.
- Abstracts must be submitted electronically using the following link: https://easyabs.linguistlist.org/conference/NARNiHS_26/ .
- Authors may submit a maximum of two abstracts: one single-author abstract and one co-authored abstract.
- Authors may not submit identical abstracts for presentation at the NARNiHS annual meeting and the LSA annual meeting or another LSA sister society meeting (ADS, ANS, NAHoLS, SCiL, SPCL, or SSILA).
- After submission, no changes of author, title, or wording of the abstract may occur. If your abstract is accepted, adjustment of typographical errors is permitted before a final version of the abstract is printed in the conference booklet.
- Papers and posters must be delivered as projected in the abstract or represent bona fide developments of the same research.
- Authors are expected to attend the conference in-person and present their own papers and posters. This will not be a hybrid event.
Contact us at NARNiHistSoc(a)gmail.com with any questions.
Ethical LLMs 2025: The first Workshop on Ethical Concerns in Training, Evaluating and Deploying Large Language Models<https://sites.google.com/view/ethical-llms-2025> @ RANLP2025<https://ranlp.org/ranlp2025/>
2nd Call for papers:
Scope
Large Language Models (LLMs) represent a transformative leap in Artificial Intelligence (AI), delivering remarkable language-processing capabilities that are reshaping how we interact with technology in our daily lives. With their ability to perform tasks such as summarisation, translation, classification, and text generation, LLMs have demonstrated unparalleled versatility and power. Drawing from vast and diverse knowledge bases, these models hold the potential to revolutionise a wide range of fields, including education, media, law, psychology, and beyond. From assisting educators in creating personalised learning experiences to enabling legal professionals to draft documents or supporting mental health practitioners with preliminary assessments, the applications of LLMs are both expansive and profound.
However, alongside their impressive strengths, LLMs also face significant limitations that raise critical ethical questions. Unlike humans, these models lack essential qualities such as emotional intelligence, contextual empathy, and nuanced ethical reasoning. While they can generate coherent and contextually relevant responses, they do not possess the ability to fully understand the emotional or moral implications of their outputs. This gap becomes particularly concerning when LLMs are deployed in sensitive domains where human values, cultural nuances, and ethical considerations are paramount. For example, biases embedded in training data can lead to unfair or discriminatory outcomes, while the absence of ethical reasoning may result in outputs that inadvertently harm individuals or communities.

These limitations highlight the urgent need for robust research in Natural Language Processing (NLP) to address the ethical dimensions of LLMs. Advancements in NLP research are crucial for developing methods to detect and mitigate biases, enhance transparency in model decision-making, and incorporate ethical frameworks that align with human values. By prioritising ethics in NLP research, we can better understand the societal implications of LLMs and ensure their development and deployment are guided by principles of fairness, accountability, and respect for human dignity. This workshop will dive into these pressing issues, fostering a collaborative effort to shape the future of LLMs as tools that not only excel in technical performance but also uphold the highest ethical standards.
Key Dates
Submissions Open - 1st June 2025
Paper Submission Deadline - 28th July 2025
Acceptance Notification - 10th August 2025
Camera-Ready Deadline - 20th August 2025
Submission Guidelines
We follow the RANLP 2025 standards for submission format and guidelines. EthicalLLMs 2025 invites the submission of long papers, up to eight pages in length, and short papers, up to six pages in length. These page limits only apply to the main body of the paper. At the end of the paper (after the conclusions but before the references) papers need to include a mandatory section discussing the limitations of the work and, optionally, a section discussing ethical considerations. Papers can include unlimited pages of references and an unlimited appendix.
To prepare your submission, please make sure to use the RANLP 2025 style files available here:
* Latex<https://ranlp.org/ranlp2025/wp-content/uploads/2025/05/ranlp2025-LaTeX.zip>
* Word<https://ranlp.org/ranlp2025/wp-content/uploads/2025/05/ranlp2025-word.docx>
Papers should be submitted through Softconf/START using the following link: https://softconf.com/ranlp25/EthicalLLMs2025/
Topics of interest
The workshop invites submissions on a broad range of topics related to the ethical development and evaluation of LLMs, including but not limited to the following.
1. Bias Detection and Mitigation in LLMs
Research focused on identifying, measuring, and reducing social, cultural, and algorithmic biases in large language models.
2. Ethical Frameworks for LLM Deployment
Approaches to integrating ethical principles—such as fairness, accountability, and transparency—into the development and use of LLMs.
3. LLMs in Sensitive Domains: Risks and Safeguards
Case studies or methodologies for deploying LLMs in high-stakes fields such as healthcare, law, and education, with an emphasis on ethical implications.
4. Explainability and Transparency in LLM Decision-Making
Techniques and tools for improving the interpretability of LLM outputs and understanding model reasoning.
5. Cultural and Contextual Understanding in NLP Systems
Strategies for enhancing LLMs’ sensitivity to cultural, linguistic, and social nuances in global and multilingual contexts.
6. Human-in-the-Loop Approaches for Ethical Oversight
Collaborative models that involve human expertise in guiding, correcting, or auditing LLM behaviour to ensure responsible use.
7. Mental Health and Emotional AI: Limits of LLM Empathy
Discussions on the role of LLMs in mental health support, highlighting the boundary between assistive technology and the need for human empathy.
Organisers
Damith Premasiri – Lancaster University, UK
Tharindu Ranasinghe – Lancaster University, UK
Hansi Hettiarachchi – Lancaster University, UK
Contact
If you have any questions regarding the workshop, please contact Damith: d.dolamullage(a)lancaster.ac.uk
Call for Participations and Papers
Shared Task for the 3rd International Workshop of AI Werewolf and
Dialog System (AIWolfDial2025) at the 18th International Natural
Language Generation conference (INLG 2025)
# Summary
Recent achievements of generative models such as ChatGPT are attracting
growing attention. However, there is still room to investigate whether
LLMs are sufficiently able to handle coherent responses, longer contexts,
common ground, and logic.
Werewolf is a social, hidden-identity game that requires debate
between players and coalition building. The goal of our AIWerewolf
contest is to build an AI agent that is able to play this game against
other AI agents. We will hold 5-player and 13-player tracks.
# Schedule
Shared tasks
August 9, 2025: Competition Registration Deadline
August 9, 2025: Preliminary Round (Self-play) Result Submission Deadline
Mid August 2025: Final Round (Online Matches)
Workshop papers
August 26, 2025: Paper Submission Deadline
September 24, 2025: Notification of Acceptance
October 3, 2025: Camera-ready Submission Deadline
INLG 2025 Conference Period
October 29 - November 2, 2025 (in Hanoi)
October 30, 2025: AIWolfDial 2025 Workshop in Hanoi/online (Paper
Presentations and Competition Results)
Our shared task is held as a part of our AIWolfDial 2025 workshop at
INLG 2025 (18th International Natural Language Generation Conference).
Our workshop will be held in Hanoi, Vietnam, and online on October
30th. It is not mandatory for shared task participants to attend
the INLG 2025 conference, but they are encouraged to submit their papers
to the workshop and present on the workshop day.
Please refer to our website for details, including technical requirements:
https://aiwolfdial.github.io/aiwolf-nlp/en/
We have a separate call for papers for our workshop.
# Why AI Werewolf?
Recent achievements of generative models such as ChatGPT are attracting
growing attention. However, such huge language models may not be
sufficiently able to handle coherent responses, longer contexts,
common ground, and logic.
The AIWolfDial 2025 contest, an international open contest
for automatic players of the conversation game "Mafia", requires
players not just to communicate but to infer, persuade, and deceive other
players via coherent logical conversations, while also engaging in
role-playing, non-task-oriented chat. We believe that this
contest reveals current issues in recent huge language models,
showing directions for the next breakthrough in the NLP area.
From the viewpoint of the Game AI area, players must hide information, in
contrast to perfect-information games such as chess or Reversi. Each
player acquires secret information from other players' conversations
and behavior, and acts while hiding information to accomplish their
objectives. Players need persuasion to earn confidence,
and speculation to detect fabrications.
Participants must build an artificial intelligence agent that can play
the werewolf game as humans do, using natural language. Participant
agents will be evaluated by a panel of judges, who will grade the
subjective quality of the dialog generated by the agent, in addition
to their win rates. Agents must communicate in English.
# Registration
A team should send required information via
https://forms.gle/WuZdfjFAvLV98NU49
Registration is free.
# System Evaluation
Participants should submit a paper to the workshop, or a system design
description document to the organizers. In addition to win rates,
reviewers will perform subjective evaluations on the game logs of
self-play games and multi-agent games, using the following criteria:
A. Natural utterance expressions
B. Contextually natural conversation
C. Coherent (not contradictory) conversation
D. Game actions (vote, attack, divine) coherent with conversation contents
E. Diverse utterance expressions, including coherent characterization
F. Team play
Please note that vague utterances that could be used regardless of
context are not always natural in the werewolf game.
# Call for Papers
We call for short papers and long papers, in the same format as the INLG
main conference, both for shared task papers and papers in general. Please
use the ACL format as specified on the INLG conference webpage.
The submission site will open soon.
Submitted papers will be peer-reviewed and published as part of our
workshop proceedings in the ACL anthology.
# Organizers
Organizers and Program Committee:
Yoshinobu Kano, Shizuoka University, Japan
Claus Aranha, University of Tsukuba, Japan
Takashi Otsuki, Yamagata University, Japan
Fujio Toriumi, The University of Tokyo, Japan
Hirotaka Osawa, Keio University, Japan
Daisuke Katagami, Tokyo Polytechnic University, Japan
Michimasa Inaba, The University of Electro-Communications, Japan
Kei Harada, The University of Electro-Communications, Japan
Takeshi Ito, The University of Electro-Communications, Japan
Local Organizers:
Yoshinobu Kano, Shizuoka University, Japan
Neo Watanabe, Shizuoka University, Japan
Yuto Sahashi, Shizuoka University, Japan
Yuya Harada, Shizuoka University, Japan
Links (same as above):
Registration https://forms.gle/WuZdfjFAvLV98NU49
Contest and workshop website https://aiwolfdial.github.io/aiwolf-nlp/en/
INLG 2025 https://2025.inlgmeeting.org/
Contact:
aiwolf(a)kanolab.net
On behalf of the AIWolf organizers
--
Yoshinobu Kano, Ph.D.
Professor, Research Fellow
Faculty of Informatics, Shizuoka University
personal webpage: http://kanolab.net/kano/ e-mail: kano(a)kanolab.net
kano(a)inf.shizuoka.ac.jp
Dear colleagues,
I'm recruiting at least one post-doc for a project at New York University
aimed at creating language models that process language more like humans than
mainstream LLMs do
<https://tallinzen.net/media/papers/huang_et_al_2024_jml.pdf>. We are
planning to explore architectural modifications, training data
interventions, and steering through interpretability.
One motivation for this project is the empirical finding
<https://direct.mit.edu/tacl/article/doi/10.1162/tacl_a_00548/115371/Why-Doe…>
that the better LLMs become in terms of perplexity and task performance,
the worse they are as cognitive models of how people read and learn
language; we think that to reverse this trend we need to find ways to
constrain them (in terms of e.g. working memory, parse parallelism, and
factual and linguistic knowledge), and improve them in other ways to make
up for these constraints, e.g. through increasing data efficiency
<https://tallinzen.net/media/papers/wilcox_et_al_2025_jml.pdf>.
We're planning to benchmark the models against behavioral and neural data
from humans: eyetracking, fMRI and intracranial recordings. Some of the
data already exists, and some will be collected by collaborators at other
universities specifically for this project. But we also expect to do a lot
of fundamental modeling and interpretability work.
You do not need to have existing experience in cognitive science, but you
should have a strong track record in computational research; and you should
be interested in using AI for science, in learning about cognitive science
and collaborating with linguistics and cognitive scientists, and in doing
open-ended fundamental research on LLMs.
There are no teaching requirements. The position will be renewed every
year, but we expect the funding for this project to last four years. You
will be affiliated with NYU's Center for Data Science, and, if relevant,
also with the department of linguistics. NYU has large NLP and
computational cognitive science communities, with lots of opportunities for
collaborations.
The start date is flexible, though of course you should have a PhD by the
time you start. Your application is most likely to be considered if you
apply before *August 10th.* Please fill out this lightweight form
<https://docs.google.com/forms/d/e/1FAIpQLSc5IwTU43CWVjQYsWbvPkDFH7dFKglqRfP…>
to express interest, and you can also email me directly if you have any
questions. I'll be at ACL 2025 and am happy to chat about the position. If
you're interested in working together but don't exactly fit the
description, don't hesitate to reach out!
--
Tal Linzen <https://tallinzen.net/>
Associate Professor of Linguistics and Data Science
New York University
Dear Corpora members,
this is a reminder of the call for the *"Emanuele Pianta" Award 2025*,
which recognizes outstanding Master's theses in Computational
Linguistics submitted at Italian universities.
To recognise excellence in student research as well as promote awareness
of our field and with the endorsement of the Italian Association of
Computational Linguistics (AILC), we are conferring the Emanuele Pianta
Award for the best Master’s Thesis (Laurea Magistrale) in Computational
Linguistics submitted at an Italian University. The prize consists of
€500.00 plus free membership to AILC for one year and free registration
to the upcoming CLiC-it 2025, where the author will have the chance to
present the thesis.
Master’s theses submitted to and defended at any University in Italy
within the yearly time frame specified in the call (see below) are
eligible for the prize. The thesis should address a topic in
computational linguistics or its applications, and may be written in
Italian or English. The sub-areas involved are those listed in the
yearly call for papers of the Italian Conference on Computational
Linguistics (CLiC-it).
The candidates’ works will be evaluated by a jury composed of three
members: one of the co-chairs of the previous CLiC-it conference, one
co-chair of the current CLiC-it conference (who agrees to serve for two
years, so as to ensure continuity), and a member of the board of AILC.
The jury will decide in consensus to which candidate the prize will be
awarded. The jury can also decide not to award a prize or to award the
prize to a maximum of two candidates. In the latter case, the money
prize will be shared.
Procedure
Master's theses *defended between August 1st 2024 and July 31st 2025* are
eligible for the 2025 prize.
The supervisor of the thesis submits the thesis by *August 1st, 2025*
(11:59 pm CEST) with a motivation letter (1 page) that explains why the
thesis deserves the prize. Both the thesis and the motivation letter
should be submitted through the START platform using the following link:
https://softconf.com/p/clic-it2025.
The prize will be awarded by a member of the jury during CLiC-it 2025.
The thesis must be available online, and will be presented during
CLiC-it 2025 by the author.
The full text of the call can also be found here:
https://clic2025.unica.it/emanuele-pianta-award-for-the-best-masters-thesis/
Dear colleagues,
We are pleased to invite you to the tutorial, “NLP for Counterspeech Against Hate and Misinformation” which will take place on Sunday, July 27, from 14:00 to 17:30 at ACL 2025 in Vienna.
Overview: This tutorial explores the use of counterspeech by individuals, activists, and organizations to combat abuse and misinformation, and how Natural Language Processing (NLP) and Natural Language Generation (NLG) can be used to automate it. It examines key challenges such as evaluating the effectiveness of counterspeech, integrating civil society expertise in dataset creation, and addressing fairness and bias in language models. The tutorial brings together insights from computer science, social sciences, and public policy through case studies, and highlights the emerging research challenge of addressing hate and misinformation together using NLP techniques.
For the full program and detailed agenda, please visit the tutorial website. https://sites.google.com/view/nlp4csham/
Invited speakers
Cathy Buerger (Director of Research at the Dangerous Speech Project)
Simone Fontana (Editorial Manager at Facta News)
Tutorial Organizers
Daniel Russo
Helena Bonaldi
Marco Guerini
Gavin Abercrombie
Yi-Ling Chung
We look forward to your participation and hope to see you there!
________________________________