The 2nd Workshop on DHOW: Diffusion of Harmful Content on the Online Web
The workshop will be conducted in a *hybrid* format to ensure maximum participation, accommodating attendees both *online* and in person.
Submission deadline: *July 11, 2025 (AOE)*
*Workshop site*: https://dhow-workshop.github.io/2025/
*Co-located with ACMMM 2025*
https://acmmm2025.org/
Dublin, Ireland, 27–31 October 2025
*Important Dates*
Submission deadline: extended to *July 11, 2025*
Notification of acceptance: August 01, 2025
Camera-ready papers due: August 11, 2025
Workshop date: October 27/28, 2025
*Workshop Description*
With the advancement of digital technologies and devices, online content has become easily accessible, and harmful content spreads along with it. Harmful content appears on many platforms and in multiple languages, and the topic is broad, covering multiple research directions, such as misinformation and hate speech, which are often studied individually. Much existing research focuses on a single platform, a single language, or a particular issue; this allows spreaders of harmful content to switch platforms and languages to reach their user base. From the user's perspective, however, all of these forms cause harm. Nor is harmful content limited to social media: it also appears in news media, shared through posts, news articles, comments, and hyperlinks. There is therefore a need to study harmful content by combining cross-platform, cross-language, multimodal data and topics.
We will bring research on harmful content under one umbrella, so that work on different topics (hate speech, misinformation, disinformation, self-harm, offensive content, etc.) can yield novel methods and recommendations for users, leveraging text analysis together with image, audio, and video recognition to detect harmful content in diverse formats. The workshop will also cover ongoing issues such as wars and the elections of 2025.
We believe this workshop will provide a unique opportunity for researchers and practitioners to exchange ideas, share the latest developments, and collaborate on addressing the challenges associated with harmful content spread across the Web. We expect the workshop to generate insights and discussions that help advance the field of societal artificial intelligence (AI) toward a safer internet. In addition to attracting high-quality research contributions, one aim of the workshop is to mobilise researchers working in related areas to form a community.
*Submission Topics*
•Studying different types of harmful content
•Computational fact-checking and misinformation detection
•Role of generative AI in mitigating harmful content
•Harassment, bullying, and hate speech detection
•Explainable AI for harmful content analysis
•Multimodal and multilingual harmful content detection (e.g., fake news, spam, and troll detection)
•Deepfakes and synthetic media
•Ethical and societal implications of AI in content moderation
•Qualitative and quantitative studies of harmful content
•Psychological effects of harmful content, e.g., on mental health
•Approaches for data collection or annotation of harmful content using multimodal large models
•User studies on the effects of harmful content on human beings
*Submissions*
- Submission Instructions: https://dhow-workshop.github.io/2025/#call
- Submission Link: https://openreview.net/group?id=acmmm.org/ACMMM/2025/Workshop/DHOW
*Workshop Organizers*
•Thomas Mandl (University of Hildesheim, Germany)
•Haiming Liu (University of Southampton, United Kingdom)
•Gautam Kishore Shahi (University of Duisburg-Essen, Germany)
•Amit Kumar Jaiswal (University of Surrey, United Kingdom)
•Durgesh Nandini (University of Bayreuth, Germany)
📬 🌐 **Call for Papers: DHOW 2026 – Diffusion of Harmful Content on the Web**
📍 *Co-located with WebSci 2026 | Braunschweig, Germany | May 26–29, 2026*
✨ We’re excited to announce the extended CFP for the DHOW 2026 workshop! Join us in tackling one of the most pressing challenges of our digital age: the **spread of harmful content across platforms, languages, and formats**.
🔍 Why this matters: With the rise of AI, social media, and global connectivity, harmful content — from misinformation 📰 and hate speech 🔥 to deepfakes 🎭 and self-harm triggers 🛑 — spreads faster than ever. But here’s the catch: researchers often study these issues in isolation — one platform, one language, one type. 👉 This leads to **"harmful content hopping"** — spreaders move to evade detection.
🎯 Our mission? Bring together diverse research under one umbrella 🤝 to:
- Study **cross-platform, multi-lingual, multimodal** harmful content
- Combine **text, image, audio, and video analysis** 🖼️🔊🎥
- Explore the role of **Generative AI** 🤖 and **Explainable AI** 🧠 in detection & defense
- Understand **psychological impacts** 🧠 and **user experiences** 👥
- Address urgent topics like **elections, war, and disinformation** in 2025 🌍
📊 We’re looking for scientific contributions on:
- 📊 Analysis of hate speech, misinformation, disinformation, self-harm, offensive content
- ✅ Computational fact-checking & AI-driven detection
- 🤖 Role of Generative AI in mitigating harm
- 🎭 Deepfakes & their societal impact
- 🌍 Multi-lingual & cross-platform detection (e.g., spam, bots, trolls)
- 🧪 Qualitative & quantitative studies on mental health effects
- 📥 LLM-assisted data collection & annotation
- 👥 User studies & human-AI collaboration in defense
- 🧩 Explainable AI for transparency & trust
📅 Important Dates:
- 📅 Submission deadline: **March 15, 2026 (AOE)**
- ✅ Notification of acceptance: **March 29, 2026**
- 📤 Camera-ready due: **April 2, 2026**
- 🎉 Workshop: **May 26–29, 2026**
🔗 Submit your work here: 👉 [Submission Portal (OpenReview)](https://openreview.net/group?id=acmmm.org/WebSci/2026/Workshop/DHOW)
🌐 [Workshop Website](https://dhow-workshop.github.io/2026/)
👥 We’re building a community! This workshop is more than a venue — it’s a **collaborative space** for researchers, practitioners, and policymakers to share insights, spark innovation, and shape a safer, more responsible internet 🌱.
👨‍💻 Organizing Team:
- Thomas Mandl (University of Hildesheim, Germany)
- Haiming Liu (University of Southampton, UK)
- Gautam Kishore Shahi (University of Duisburg-Essen, Germany)
- Amit Kumar Jaiswal (IIT BHU Varanasi, India)
- Durgesh Nandini (University of Bayreuth, Germany)
- Luis-Daniel Ibáñez (University of Southampton, UK)
#DHOW2026 #WebSci2026 #HarmfulContent #AIforGood #SocietalAI #DigitalSafety #ResearchCommunity