<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Hate Speech | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/hate-speech/</link><atom:link href="http://nlp.unibo.it/tag/hate-speech/index.xml" rel="self" type="application/rss+xml"/><description>Hate Speech</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Hate Speech</title><link>http://nlp.unibo.it/tag/hate-speech/</link></image><item><title>Hate Speech Detection with Argumentative Reasoning</title><link>http://nlp.unibo.it/proposals_am/hatespeech/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_am/hatespeech/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Hate speech often relies on implicit content and subtle reasoning nuances.
Our idea is to apply argumentative reasoning to hate speech to make its implicit content explicit, enabling more interpretable and user-friendly hate speech detection systems.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>, &lt;a href="mailto:arianna.muti@unibocconi.it">Arianna Muti&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Khalid Al-Khatib, Alberto Barrón-Cedeño, Tommaso Caselli&lt;br>
Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 21091–21107, 2024&lt;br>
&lt;a href="https://doi.org/10.18653/v1/2024.emnlp-main.1174" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://aclanthology.org/2024.emnlp-main.1174.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p>
&lt;p>&lt;strong>PejorativITy: Disambiguating Pejorative Epithets to Improve Misogyny Detection in Italian Tweets&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Cagri Toraman, Lorenzo Musetti, Samuel Algherini, Silvia Ronchi, Gianmarco Saretto, Caterina Zapparoli, Alberto Barrón-Cedeño&lt;br>
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), pp. 12700–12711, 2024&lt;br>
&lt;a href="https://aclanthology.org/2024.lrec-main.1112.pdf" target="_blank" rel="noopener">PDF&lt;/a>
| &lt;a href="https://aclanthology.org/2024.lrec-main.1112" target="_blank" rel="noopener">Anthology&lt;/a>&lt;/p></description></item><item><title>Multi-cultural Abusive and Hate Speech Detection</title><link>http://nlp.unibo.it/proposals_uki/hatespeech/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_uki/hatespeech/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
What counts as abusive or hate speech depends on the socio-cultural context.
The same text might be deemed offensive in one culture, acceptable in another, and, in the most extreme case, legally prosecutable in a third.
Our aim is to evaluate how machine learning models are affected by different definitions of abusive and hate speech, to promote awareness in developing accurate abusive speech detection systems.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>, &lt;a href="mailto:k.korre@athenarc.gr">Katerina Korre&lt;/a>, &lt;a href="mailto:arianna.muti@unibocconi.it">Arianna Muti&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Untangling Hate Speech Definitions: A Semantic Componential Analysis Across Cultures and Domains&lt;/strong>&lt;br>
Katerina Korre, Arianna Muti, Federico Ruggeri, Alberto Barrón-Cedeño&lt;br>
Findings of the Association for Computational Linguistics (NAACL Findings), pp. 3184–3198, 2025&lt;br>
&lt;a href="https://doi.org/10.18653/v1/2025.findings-naacl.175" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://aclanthology.org/2025.findings-naacl.175.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item></channel></rss>