Multi-cultural Abusive and Hate Speech Detection
Description:
What counts as abusive or hate speech depends on the socio-cultural context.
The same text may be deemed offensive in one culture, acceptable in another, and, in the most extreme case, legally prosecutable in a third.
Our aim is to evaluate how machine learning models are affected by different definitions of abusive and hate speech, in order to promote awareness in the development of accurate abusive speech detection systems.
Contact: Federico Ruggeri, Katerina Korre, Arianna Muti
References:
Katerina Korre, Arianna Muti, Federico Ruggeri, and Alberto Barrón-Cedeño. 2025. Untangling Hate Speech Definitions: A Semantic Componential Analysis Across Cultures and Domains. In Findings of the Association for Computational Linguistics: NAACL 2025, pages 3184–3198, Albuquerque, New Mexico. Association for Computational Linguistics.
DOI | PDF