Multimodal Argument Mining

Description:
Make use of speech information (e.g. prosody) to enrich the feature set used to detect arguments. Speech can be represented either via ad-hoc feature extraction methods (e.g. MFCCs) or via end-to-end architectures. Few existing corpora offer both argument annotation layers and speech data for a given text document.
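As an illustration of the ad-hoc feature extraction route mentioned above, the following is a minimal MFCC computation sketched in plain NumPy (framing, mel filterbank, log, DCT-II). Parameter values are illustrative defaults, not ones prescribed by the works below; in practice a library such as librosa or torchaudio would typically be used instead.

```python
import numpy as np

def mfcc(signal, sr=16000, n_fft=512, hop=160, n_mels=26, n_ceps=13):
    """Minimal MFCC sketch: frames -> power spectrum -> mel filterbank -> log -> DCT-II."""
    # Slice the signal into overlapping frames and apply a Hann window
    n_frames = 1 + (len(signal) - n_fft) // hop
    frames = np.stack([signal[i * hop : i * hop + n_fft] for i in range(n_frames)])
    frames = frames * np.hanning(n_fft)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel filterbank (filters equally spaced on the mel scale)
    hz_to_mel = lambda f: 2595 * np.log10(1 + f / 700)
    mel_to_hz = lambda m: 700 * (10 ** (m / 2595) - 1)
    mel_pts = np.linspace(hz_to_mel(0), hz_to_mel(sr / 2), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        l, c, r = bins[m - 1], bins[m], bins[m + 1]
        fbank[m - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)  # rising slope
        fbank[m - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)  # falling slope
    log_mel = np.log(power @ fbank.T + 1e-10)
    # DCT-II over the mel bands decorrelates them into cepstral coefficients
    n = np.arange(n_mels)
    dct_basis = np.cos(np.pi * np.outer(np.arange(n_ceps), (2 * n + 1) / (2 * n_mels)))
    return log_mel @ dct_basis.T  # shape: (n_frames, n_ceps)
```

The resulting per-frame coefficient matrix can then be pooled or fed to a sequence model alongside textual features for argument detection.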

Contact: Eleonora Mancini, Federico Ruggeri

References:

MAMKit: A Comprehensive Multimodal Argument Mining Toolkit.
Eleonora Mancini, Federico Ruggeri, Stefano Colamonaco, Andrea Zecca, Samuele Marro, and Paolo Torroni. 2024.
In Proceedings of the 11th Workshop on Argument Mining (ArgMining 2024), pages 69–82, Bangkok, Thailand. Association for Computational Linguistics.

Multimodal Fallacy Classification in Political Debates.
Eleonora Mancini, Federico Ruggeri, and Paolo Torroni. 2024.
In Proceedings of the 18th Conference of the European Chapter of the Association for Computational Linguistics (EACL 2024), pages 170–178.

Multimodal Argument Mining: A Case Study in Political Debates.
Eleonora Mancini, Federico Ruggeri, Andrea Galassi, and Paolo Torroni. 2022.
In Proceedings of the 9th Workshop on Argument Mining, pages 158–170, Online and in Gyeongju, Republic of Korea. International Conference on Computational Linguistics.