<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Natural Language Processing | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/natural-language-processing/</link><atom:link href="http://nlp.unibo.it/tag/natural-language-processing/index.xml" rel="self" type="application/rss+xml"/><description>Natural Language Processing</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 23 Jan 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Natural Language Processing</title><link>http://nlp.unibo.it/tag/natural-language-processing/</link></image><item><title>EquAl: Equitable Algorithms, Promoting Fairness and Countering Algorithmic Discrimination Through Norms and Technologies - Final Conference</title><link>http://nlp.unibo.it/news/equal/</link><pubDate>Fri, 23 Jan 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/equal/</guid><description>&lt;h2 id="project-info">Project info&lt;/h2>
&lt;p>The EquAl project addresses algorithmic evaluations, decisions, and predictions to promote fairness and counter discrimination affecting individuals and groups.
The research project is funded by the EU Commission under the NextGenerationEU program and the Italian Ministry of Education, University and Research
(PRIN 2022, Ref. prot. n. 2022KFLF3E-001, CUP J53D23005560001).&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/equal/en/equal_project" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="reproducibility-crysis.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>PRIMA: PRivacy Infringements Machine-Advice - Final Conference</title><link>http://nlp.unibo.it/news/prima/</link><pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/prima/</guid><description>&lt;h2 id="project-info">Project info&lt;/h2>
&lt;p>PRIMA (PRivacy Infringements Machine-Advice) studies the law and practice of privacy policies, develops methods and techniques for their automated analysis, and implements a prototype to assess their lawfulness.
It deploys legal analytics—a mix of data science, artificial intelligence, machine learning, natural language processing and statistics—to detect and assess privacy policies’ infringements.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/prima/en/project" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="explainability-via-highlights.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>ALMA-AI | Workshop LLM: a debate on technical experiences</title><link>http://nlp.unibo.it/news/alma-ai/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/alma-ai/</guid><description>&lt;h2 id="workshop-abstract">Workshop Abstract&lt;/h2>
&lt;p>The spread of Large Language Models across every field of human activity has made it necessary to assess the state of the art, as well as the limits, obstacles, and barriers of this powerful technology.
The rapid pace at which these technologies evolve requires broadening the horizons of research goals and methodologies.
The reasoning capabilities of Agentic AI point to new challenges.
At the same time, there is a growing need to assess in which domains these technologies can deliver real added value and when, also considering the resources they require, more traditional approaches are preferable.
This raises the question of how to evaluate models against one another (benchmarking).
Technical aspects intersect with ethical, legal, sustainability, and effectiveness concerns, as well as with the explainability of reasoning steps.
The workshop aims to investigate these topics, with particular attention to applications in the legal domain, where language is a constitutive pillar of the discipline.
The debate also helps shape future expectations from which institutions, such as Parliaments and public administrations, can benefit without ignoring risks and false expectations.&lt;/p>
&lt;h2 id="federico-ruggeris-speech">Federico Ruggeri&amp;rsquo;s speech&lt;/h2>
&lt;p>The talk addresses three main aspects of LLMs and reasoning capabilities.
First, we discuss which types of reasoning LLMs are tested for.
The short answer is that, in most cases, it is unclear which reasoning type(s) are considered.
Second, we discuss to what extent LLMs actually perform reasoning.
Some view LLMs as stochastic parrots, while others believe they acquire true reasoning capabilities.
Third, we show how reasoning and argumentation are tightly connected and discuss how argumentation is being progressively used as a way to assess reasoning capabilities in LLMs.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/hypermodelex/en" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="speech.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>The 13th Workshop on Argument Mining and Reasoning Co-located with ACL 2026</title><link>http://nlp.unibo.it/news/argmining2026/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/argmining2026/</guid><description>&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The 2026 edition of the ArgMining workshop places a special focus on understanding and evaluating arguments in both human and machine reasoning.
With this topic, we broaden the workshop&amp;rsquo;s focus to include reasoning, a long-standing area of research in AI that has recently gained renewed interest within the *ACL community, driven by the latest generation of LLMs.
Reasoning is tightly connected to argumentation as it represents, analyzes and evaluates the process of reaching conclusions on the basis of available information.
If we consider argumentation as a paradigm to capture reasoning, then machines (particularly LLMs) can be evaluated based on their ability to address argument mining tasks.&lt;/p>
&lt;p>The workshop will be co-located with ACL 2026 and held in San Diego, United States, in a hybrid format.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://argmining-org.github.io/2026/" target="_blank" rel="noopener">Workshop Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="argmining.org@gmail.com">e-Mail&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/argmining-org" target="_blank" rel="noopener">Github&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://x.com/ArgminingOrg" target="_blank" rel="noopener">X/Twitter&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://bsky.app/profile/argminingorg.bsky.social" target="_blank" rel="noopener">Bluesky&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>Overview of the CLEF-2025 CheckThat! Lab: Subjectivity, fact-checking, claim normalization, and retrieval</title><link>http://nlp.unibo.it/events/2025clef/</link><pubDate>Mon, 01 Sep 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/events/2025clef/</guid><description>&lt;h5 id="resources">Resources&lt;/h5>
&lt;ul>
&lt;li>&lt;a href="https://checkthat.gitlab.io/clef2025/" target="_blank" rel="noopener">CheckThat! 2025&lt;/a>&lt;/li>
&lt;li>&lt;a href="paper.pdf">Paper&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h5 id="abstract">Abstract&lt;/h5>
&lt;p>This paper presents the eighth edition of the CheckThat! lab, part of the 2025 Conference and Labs of the Evaluation Forum (CLEF). As in previous editions of CheckThat!, the lab offers tasks from the core of the verification pipeline, including check-worthiness, identifying previously fact-checked claims, supporting evidence retrieval, and claim verification as well as auxiliary tasks addressing different facets of individual steps of the pipeline: Task 1 is on identification of subjectivity (a follow-up of the CheckThat! 2024 edition), which is related to the check-worthiness task, Task 2 is on claim normalization, Task 3 addresses fact-checking numerical claims, and Task 4 focuses on scientific web discourse processing. These challenging classification and retrieval problems are offered in different mono-, multi- and crosslingual settings covering more than 20 languages. This year, CheckThat! was one of the most popular labs at CLEF-2025 in terms of team registrations: 177 teams registered, almost half of them actually participating (a total of 83 teams) and 54 submitted system description papers.&lt;/p>
&lt;hr>
&lt;h5 id="citation">Citation&lt;/h5>
&lt;p>Firoj Alam, Julia Maria Struß, Tanmoy Chakraborty, Stefan Dietze, Salim Hafid, Katerina Korre, Arianna Muti, Preslav Nakov, Federico Ruggeri, Sebastian Schellhammer, et al. Overview of the CLEF-2025 CheckThat! Lab: Subjectivity, fact-checking, claim normalization, and retrieval. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 199–223. Springer, 2025.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-latex" data-lang="latex">&lt;span class="line">&lt;span class="cl">@inproceedings&lt;span class="nb">{&lt;/span>alam-etal-2025-overview,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> title=&lt;span class="nb">{&lt;/span>Overview of the &lt;span class="nb">{&lt;/span>CLEF&lt;span class="nb">}&lt;/span>-2025 &lt;span class="nb">{&lt;/span>C&lt;span class="nb">}&lt;/span>heck&lt;span class="nb">{&lt;/span>T&lt;span class="nb">}&lt;/span>hat! &lt;span class="nb">{&lt;/span>L&lt;span class="nb">}&lt;/span>ab: &lt;span class="nb">{&lt;/span>S&lt;span class="nb">}&lt;/span>ubjectivity, fact-checking, claim normalization, and retrieval&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> author=&lt;span class="nb">{&lt;/span>Alam, Firoj and Stru&lt;span class="nb">{&lt;/span>&lt;span class="k">\ss&lt;/span>&lt;span class="nb">}&lt;/span>, Julia Maria and Chakraborty, Tanmoy and Dietze, Stefan and Hafid, Salim and Korre, Katerina and Muti, Arianna and Nakov, Preslav and Ruggeri, Federico and Schellhammer, Sebastian and others&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> booktitle=&lt;span class="nb">{&lt;/span>International Conference of the Cross-Language Evaluation Forum for European Languages&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> pages=&lt;span class="nb">{&lt;/span>199--223&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> year=&lt;span class="nb">{&lt;/span>2025&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> organization=&lt;span class="nb">{&lt;/span>Springer&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>Overview of MM-ArgFallacy2025 on Multimodal Argumentative Fallacy Detection and Classification in Political Debates</title><link>http://nlp.unibo.it/events/2025argfallacy/</link><pubDate>Tue, 01 Jul 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/events/2025argfallacy/</guid><description>&lt;table>
&lt;tr>
&lt;td>&lt;img src="argfallacy.webp"/>&lt;/td>
&lt;td>&lt;img src="argfallacy.webp"/>&lt;/td>
&lt;td>&lt;img src="argfallacy.webp"/>&lt;/td>
&lt;/tr>
&lt;/table>
&lt;p>Multimodal Argumentative Fallacy Detection and Classification on Political Debates Shared Task.&lt;/p>
&lt;p>Co-located with The &lt;a href="https://argmining-org.github.io/2025/" target="_blank" rel="noopener">12th Workshop on Argument Mining&lt;/a> in Vienna, Austria.&lt;/p>
&lt;h1 id="overview">Overview&lt;/h1>
&lt;p>This shared task focuses on detecting and classifying fallacies in &lt;strong>political debates&lt;/strong> by integrating text and audio data. Participants will tackle two sub-tasks:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Argumentative Fallacy Detection&lt;/strong>&lt;/li>
&lt;li>&lt;strong>Argumentative Fallacy Classification&lt;/strong>&lt;/li>
&lt;/ul>
&lt;p>We offer three input settings:&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Text-only:&lt;/strong> Analyze textual arguments.&lt;/li>
&lt;li>&lt;strong>Audio-only:&lt;/strong> Explore paralinguistic features.&lt;/li>
&lt;li>&lt;strong>Text + Audio:&lt;/strong> Combine both for a multimodal perspective.&lt;/li>
&lt;/ul>
&lt;p>Join us to advance multimodal argument mining and uncover new insights into human reasoning! 💬&lt;/p>
&lt;h1 id="tasks">Tasks&lt;/h1>
&lt;p>&lt;strong>Task A&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Input&lt;/strong>: a sentence, in the form of text or audio or both, extracted from a political debate.&lt;/li>
&lt;li>&lt;strong>Task&lt;/strong>: to determine whether the input contains an argumentative fallacy.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Task B&lt;/strong>&lt;/p>
&lt;ul>
&lt;li>&lt;strong>Input&lt;/strong>: a sentence, in the form of text or audio or both, extracted from a political debate, containing a fallacy.&lt;/li>
&lt;li>&lt;strong>Task&lt;/strong>: to determine the type of fallacy contained in the input, according to the classification introduced by &lt;a href="https://www.ijcai.org/proceedings/2022/575" target="_blank" rel="noopener">Goffredo et al. (2022)&lt;/a>. We only refer to macro categories.&lt;/li>
&lt;/ul>
&lt;hr>
&lt;p>For each sub-task, participants can leverage the debate context of a given input: all its previous sentences and corresponding aligned audio samples. For instance, consider the &lt;strong>text-only&lt;/strong> input mode. Given a sentence from a political debate at index &lt;em>i&lt;/em>, participants can use sentences with indexes from &lt;em>0&lt;/em> to &lt;em>i - 1&lt;/em>, where &lt;em>0&lt;/em> denotes the first sentence in the debate.&lt;/p>
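&lt;p>The context rule above can be sketched as a small helper. This is a minimal illustration only: &lt;code>get_context&lt;/code>, &lt;code>debate&lt;/code>, and &lt;code>window&lt;/code> are hypothetical names, not part of MAMKit&amp;rsquo;s API.&lt;/p>

```python
# Minimal sketch of the context rule: for the sentence at index i,
# only the preceding sentences (indexes 0 .. i-1) may be used.
# 'get_context', 'debate', and 'window' are illustrative names,
# not part of MAMKit's API.

def get_context(debate, i, window=None):
    """Return the sentences preceding index i, optionally only the last `window` of them."""
    context = debate[:i]  # indexes 0 .. i-1; never includes sentence i itself
    if window is not None:
        context = context[-window:]
    return context

debate = ["s0", "s1", "s2", "s3"]
print(get_context(debate, 2))            # ['s0', 's1']
print(get_context(debate, 3, window=1))  # ['s2']
print(get_context(debate, 0))            # [] -- the first sentence has no context
```

The same slicing applies unchanged to the aligned audio samples in the audio-only and text + audio settings.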
&lt;hr>
&lt;h1 id="data">Data&lt;/h1>
&lt;p>We use &lt;strong>MM-USED-fallacy&lt;/strong> and release a version of the dataset specifically designed for argumentative fallacy detection. This dataset includes 1,278 sentences from &lt;a href="https://aclanthology.org/P19-1463.pdf" target="_blank" rel="noopener">Haddadan et al.&amp;rsquo;s (2019)&lt;/a> dataset on US presidential elections. Each sentence is labeled with one of six argumentative fallacy categories, as introduced by &lt;a href="https://www.ijcai.org/proceedings/2022/575" target="_blank" rel="noopener">Goffredo et al. (2022)&lt;/a>.&lt;/p>
&lt;p>Inspired by observations from &lt;a href="https://www.ijcai.org/proceedings/2022/575" target="_blank" rel="noopener">Goffredo et al. (2022)&lt;/a> on the benefits of leveraging multiple argument mining tasks for fallacy detection and classification, we also provide additional datasets to encourage multi-task learning. A summary is provided in the table below:&lt;/p>
&lt;hr>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>&lt;strong>Dataset&lt;/strong>&lt;/th>
&lt;th>&lt;strong>Description&lt;/strong>&lt;/th>
&lt;th>&lt;strong>Size&lt;/strong>&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>&lt;strong>MM-USED-fallacy&lt;/strong>&lt;/td>
&lt;td>A multimodal extension of the USElecDeb60to20 dataset, covering US presidential debates (1960&#8211;2020). Includes labels for argumentative fallacy detection and argumentative fallacy classification.&lt;/td>
&lt;td>1,278 samples (updated version)&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>MM-USED&lt;/strong>&lt;/td>
&lt;td>A multimodal extension of the USElecDeb60to16 dataset, covering US presidential debates (1960–2016). Includes labels for argumentative sentence detection and component classification.&lt;/td>
&lt;td>23,505 sentences (updated version)&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>UKDebates&lt;/strong>&lt;/td>
&lt;td>386 sentences and audio samples from the 2015 UK Prime Ministerial elections. Sentences are labeled for argumentative sentence detection: containing or not containing a claim.&lt;/td>
&lt;td>386 sentences&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>&lt;strong>M-Arg&lt;/strong>&lt;/td>
&lt;td>A multimodal dataset for argumentative relation classification from the 2020 US Presidential elections. Sentences are labeled as attacking, supporting, or unrelated to another sentence.&lt;/td>
&lt;td>4,104 pairs&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr>
&lt;p>All datasets will be available through &lt;a href="https://nlp-unibo.github.io/mamkit/" target="_blank" rel="noopener">MAMKit&lt;/a>.&lt;/p>
&lt;p>Since many multimodal datasets cannot release audio samples due to copyright restrictions, MAMKit provides an interface to dynamically build datasets and promote reproducible research.&lt;/p>
&lt;p>Datasets are formatted as &lt;code>torch.Dataset&lt;/code> objects, containing input values (text, audio, or both) and corresponding task-specific labels. More details about data formats and dataset building are available in MAMKit&amp;rsquo;s documentation.&lt;/p>
&lt;h2 id="retrieving-the-data-through-mamkit">Retrieving the Data through MAMKit&lt;/h2>
&lt;p>To retrieve the datasets through MAMKit, you can use the following code interface:&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">mamkit.data.datasets&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">MMUSED&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">UKDebates&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">MArg&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">InputMode&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="kn">import&lt;/span> &lt;span class="nn">logging&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">pathlib&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">Path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">def&lt;/span> &lt;span class="nf">loading_data_example&lt;/span>&lt;span class="p">():&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">Path&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="vm">__file__&lt;/span>&lt;span class="p">)&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">resolve&lt;/span>&lt;span class="p">()&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">joinpath&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;data&amp;#39;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="c1"># MM-USED-fallacy dataset&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">mm_used_fallacy_loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;afc&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># Choose between &amp;#39;afc&amp;#39; or &amp;#39;afd&amp;#39; &lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_AUDIO&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># Choose between TEXT_ONLY, AUDIO_ONLY, or TEXT_AUDIO&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="c1"># MM-USED dataset&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">mm_used_loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">MMUSED&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;asd&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span>&lt;span class="c1">#Choose between &amp;#39;asd&amp;#39; or &amp;#39;acc&amp;#39; &lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_AUDIO&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># Choose between TEXT_ONLY, AUDIO_ONLY, or TEXT_AUDIO&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="c1"># UKDebates dataset&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">uk_debates_loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">UKDebates&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;asd&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_AUDIO&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># Choose between TEXT_ONLY, AUDIO_ONLY, or TEXT_AUDIO&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="c1"># M-Arg dataset&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">m_arg_loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">MArg&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;arc&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_AUDIO&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># Choose between TEXT_ONLY, AUDIO_ONLY, or TEXT_AUDIO&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Each loader is initialized with the appropriate task name (&lt;code>afc&lt;/code> for argumentative fallacy classification, &lt;code>asd&lt;/code> for argumentative sentence detection, and &lt;code>arc&lt;/code> for argumentative relation classification), input mode (&lt;code>InputMode.TEXT_ONLY&lt;/code>, &lt;code>InputMode.AUDIO_ONLY&lt;/code>, or &lt;code>InputMode.TEXT_AUDIO&lt;/code>), and the base data path.&lt;/p>
&lt;p>Ensure that you have MAMKit installed and properly configured in your environment to use these loaders.&lt;/p>
&lt;p>For more details, refer to the MAMKit &lt;a href="https://github.com/nlp-unibo/mamkit" target="_blank" rel="noopener">GitHub repository&lt;/a> and &lt;a href="https://nlp-unibo.github.io/mamkit/" target="_blank" rel="noopener">website&lt;/a>.&lt;/p>
&lt;h2 id="test-set-access-">Test Set Access 🔍&lt;/h2>
&lt;p>The test set for &lt;strong>mm-argfallacy-2025&lt;/strong> is now available! To use it, please:&lt;/p>
&lt;ol>
&lt;li>Create a fresh environment&lt;/li>
&lt;li>Clone the repository and install the requirements:&lt;/li>
&lt;/ol>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-bash" data-lang="bash">&lt;span class="line">&lt;span class="cl">git clone git@github.com:nlp-unibo/mamkit.git
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="nb">cd&lt;/span> mamkit
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">pip install -r requirements.txt
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">pip install --editable .
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;ol start="3"> &lt;li>Access MAMKit in your Python code:&lt;/li> &lt;/ol>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="kn">import&lt;/span> &lt;span class="nn">mamkit&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;p>Then, retrieve the data using the following code:&lt;/p>
&lt;h3 id="for-fallacy-classification-afc">For &lt;strong>Fallacy Classification&lt;/strong> (&lt;code>afc&lt;/code>):&lt;/h3>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">mamkit.data.datasets&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">InputMode&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">pathlib&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">Path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">def&lt;/span> &lt;span class="nf">loading_data_example&lt;/span>&lt;span class="p">():&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">Path&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="vm">__file__&lt;/span>&lt;span class="p">)&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">resolve&lt;/span>&lt;span class="p">()&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">joinpath&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;data&amp;#39;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;afc&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_ONLY&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># or TEXT_AUDIO or AUDIO_ONLY&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">split_info&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">loader&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">get_splits&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;mm-argfallacy-2025&amp;#39;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="for-fallacy-detection-afd">For &lt;strong>Fallacy Detection&lt;/strong> (&lt;code>afd&lt;/code>):&lt;/h3>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-python" data-lang="python">&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">mamkit.data.datasets&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="n">InputMode&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="kn">from&lt;/span> &lt;span class="nn">pathlib&lt;/span> &lt;span class="kn">import&lt;/span> &lt;span class="n">Path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="k">def&lt;/span> &lt;span class="nf">loading_data_example&lt;/span>&lt;span class="p">():&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">Path&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="vm">__file__&lt;/span>&lt;span class="p">)&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">parent&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">resolve&lt;/span>&lt;span class="p">()&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">joinpath&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;data&amp;#39;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">loader&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">MMUSEDFallacy&lt;/span>&lt;span class="p">(&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">task_name&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="s1">&amp;#39;afd&amp;#39;&lt;/span>&lt;span class="p">,&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">input_mode&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">InputMode&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">TEXT_ONLY&lt;/span>&lt;span class="p">,&lt;/span> &lt;span class="c1"># or TEXT_AUDIO or AUDIO_ONLY&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">base_data_path&lt;/span>&lt;span class="o">=&lt;/span>&lt;span class="n">base_data_path&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> &lt;span class="n">split_info&lt;/span> &lt;span class="o">=&lt;/span> &lt;span class="n">loader&lt;/span>&lt;span class="o">.&lt;/span>&lt;span class="n">get_splits&lt;/span>&lt;span class="p">(&lt;/span>&lt;span class="s1">&amp;#39;mm-argfallacy-2025&amp;#39;&lt;/span>&lt;span class="p">)&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h3 id="references">References&lt;/h3>
&lt;ul>
&lt;li>&lt;strong>MM-USED-fallacy&lt;/strong>: &lt;a href="https://aclanthology.org/2024.eacl-short.16.pdf" target="_blank" rel="noopener">Mancini et al. (2024)&lt;/a>. The version provided through MAMKit includes updated samples with a refined alignment process, resulting in a different number of samples than the original dataset.&lt;/li>
&lt;li>&lt;strong>MM-USED&lt;/strong>: &lt;a href="https://aclanthology.org/2022.argmining-1.15.pdf" target="_blank" rel="noopener">Mancini et al. (2022)&lt;/a>. The version provided through MAMKit includes updated samples with a refined alignment process, resulting in a different number of samples than the original dataset.&lt;/li>
&lt;li>&lt;strong>UK-Debates&lt;/strong>: &lt;a href="https://ojs.aaai.org/index.php/AAAI/article/view/10384" target="_blank" rel="noopener">Lippi and Torroni (2016)&lt;/a>.&lt;/li>
&lt;li>&lt;strong>M-Arg&lt;/strong>: &lt;a href="https://aclanthology.org/2021.argmining-1.8.pdf" target="_blank" rel="noopener">Mestre et al. (2021)&lt;/a>.&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Note&lt;/strong>: By &amp;ldquo;updated version,&amp;rdquo; we mean that the alignment process has been refined, which adjusts the number of samples relative to the original versions published in the referenced papers.&lt;/p>
&lt;h1 id="evaluation">Evaluation&lt;/h1>
&lt;p>For argumentative fallacy detection, we will compute the binary F1-score on predicted sentence-level labels.&lt;br>
For argumentative fallacy classification, we will compute the macro F1-score on predicted sentence-level labels.&lt;br>
Metrics will be computed on the hidden test set to determine the best system for each sub-task and input mode.&lt;/p>
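&lt;p>For illustration, both metrics can be reproduced with scikit-learn&amp;rsquo;s &lt;code>f1_score&lt;/code>. This is a minimal sketch with toy labels, not the official evaluation script:&lt;/p>

```python
# Sketch of the two evaluation metrics (toy labels, not real task data).
from sklearn.metrics import f1_score

# AFD: binary F1 on sentence-level labels (1 = contains a fallacy).
y_true_afd = [0, 1, 1, 0, 1]
y_pred_afd = [0, 1, 0, 0, 1]
afd_f1 = f1_score(y_true_afd, y_pred_afd, average="binary")  # 0.8

# AFC: macro F1 averaged over the six fallacy classes (labels 0-5).
y_true_afc = [0, 1, 2, 3, 4, 5, 1]
y_pred_afc = [0, 1, 2, 3, 5, 5, 0]
afc_f1 = f1_score(y_true_afc, y_pred_afc, average="macro")

print(f"AFD binary F1: {afd_f1:.4f}, AFC macro F1: {afc_f1:.4f}")
```

&lt;p>Binary F1 scores only the positive (fallacious) class, while macro F1 averages per-class F1 over all six fallacy types, weighting each class equally regardless of frequency.&lt;/p>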
&lt;p>Evaluation will be performed via the &lt;a href="https://codalab.lisn.upsaclay.fr/competitions/22739" target="_blank" rel="noopener">CodaLab platform&lt;/a>.&lt;br>
On CodaLab, participants will find the leaderboard, along with the results of the provided baselines.&lt;br>
Submission guidelines can be found under the &lt;em>Evaluation&lt;/em> section of the CodaLab competition page.&lt;/p>
&lt;p>🚨 &lt;strong>Important&lt;/strong>: On the evaluation website, you will also find a link to a &lt;strong>mandatory participation survey&lt;/strong>.&lt;br>
Filling out this survey is required in order to participate in the task.&lt;br>
We also provide the survey link here for convenience: &lt;a href="https://tinyurl.com/limesurvey-argfallacy" target="_blank" rel="noopener">https://tinyurl.com/limesurvey-argfallacy&lt;/a>&lt;/p>
&lt;h3 id="baseline-results-on-test-set">Baseline Results on Test Set&lt;/h3>
&lt;h4 id="argumentative-fallacy-classification-afc--macro-f1-score">Argumentative Fallacy Classification (AFC) – Macro F1-score&lt;/h4>
&lt;hr>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Input Modality&lt;/th>
&lt;th>Model&lt;/th>
&lt;th>F1-Score&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Text-only&lt;/td>
&lt;td>BiLSTM w/ GloVe&lt;/td>
&lt;td>47.21&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text-only&lt;/td>
&lt;td>RoBERTa&lt;/td>
&lt;td>39.25&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Audio-only&lt;/td>
&lt;td>BiLSTM w/ MFCCs&lt;/td>
&lt;td>15.82&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Audio-only&lt;/td>
&lt;td>WavLM&lt;/td>
&lt;td>6.43&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text + Audio&lt;/td>
&lt;td>BiLSTM (GloVe + MFCCs)&lt;/td>
&lt;td>21.91&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text + Audio&lt;/td>
&lt;td>MM-RoBERTa + WavLM&lt;/td>
&lt;td>38.16&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr>
&lt;h4 id="argumentative-fallacy-detection-afd--binary-f1-score">Argumentative Fallacy Detection (AFD) – Binary F1-score&lt;/h4>
&lt;hr>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Input Modality&lt;/th>
&lt;th>Model&lt;/th>
&lt;th>F1-Score&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>Text-only&lt;/td>
&lt;td>BiLSTM w/ GloVe&lt;/td>
&lt;td>24.62&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text-only&lt;/td>
&lt;td>RoBERTa&lt;/td>
&lt;td>27.70&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Audio-only&lt;/td>
&lt;td>BiLSTM w/ MFCCs&lt;/td>
&lt;td>0.00&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Audio-only&lt;/td>
&lt;td>WavLM&lt;/td>
&lt;td>0.00&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text + Audio&lt;/td>
&lt;td>BiLSTM (GloVe + MFCCs)&lt;/td>
&lt;td>23.37&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>Text + Audio&lt;/td>
&lt;td>MM-RoBERTa + WavLM&lt;/td>
&lt;td>28.48&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr>
&lt;h1 id="submission">Submission&lt;/h1>
&lt;p>All teams whose submissions are evaluated must commit to submitting a system description paper. You can choose between two options:&lt;/p>
&lt;ul>
&lt;li>
&lt;p>&lt;strong>Non-Archival Paper&lt;/strong>:&lt;br>
A 2-page paper describing your system, with unlimited pages for appendices and bibliography. These papers will &lt;em>not&lt;/em> be published in the workshop proceedings, but, upon acceptance, your system will be mentioned in the Overview Paper of the shared task.&lt;/p>
&lt;/li>
&lt;li>
&lt;p>&lt;strong>Archival Paper&lt;/strong>:&lt;br>
A 4-page paper describing your system, also with unlimited pages for appendices and bibliography. These papers &lt;em>will&lt;/em> be published in the official ACL workshop proceedings and must be presented at the workshop (poster or oral session).&lt;br>
⚠️ &lt;em>In accordance with ACL policy, at least one team member must register for the workshop in order to present an archival paper accepted for publication in the ACL proceedings.&lt;/em>&lt;/p>
&lt;/li>
&lt;/ul>
&lt;p>All papers must use the official &lt;a href="https://github.com/acl-org/acl-style-files" target="_blank" rel="noopener">ACL style templates&lt;/a>, available in both LaTeX and Word. We strongly recommend using the official &lt;a href="https://www.overleaf.com/project/5f64f1fb97c4c50001b60549" target="_blank" rel="noopener">Overleaf template&lt;/a> for convenience.&lt;/p>
&lt;p>We have sent an email to each team with all the details regarding the system description paper submission for MM-ArgFallacy2025. Please check your inbox (and spam folder just in case).&lt;/p>
&lt;ul>
&lt;li>🗓️ &lt;strong>Submissions open&lt;/strong>: May 1st, 2025 (the day after the end of the evaluation period)&lt;/li>
&lt;li>🗓️ &lt;strong>Submissions close&lt;/strong>: May 15th, 2025&lt;/li>
&lt;li>📢 &lt;strong>Notification of acceptance&lt;/strong>: May 20th, 2025&lt;/li>
&lt;li>📝 &lt;strong>Camera-ready deadline&lt;/strong>: May 25th, 2025&lt;/li>
&lt;/ul>
&lt;p>&lt;strong>Important notes&lt;/strong>:&lt;/p>
&lt;ul>
&lt;li>All accepted &lt;strong>archival papers&lt;/strong> will be presented during the workshop’s poster session and require at least one registered author.&lt;/li>
&lt;li>&lt;strong>Non-archival papers&lt;/strong> do &lt;em>not&lt;/em> require registration and are not presented at the workshop, but their systems will be acknowledged in the Overview Paper.&lt;/li>
&lt;/ul>
&lt;p>We look forward to receiving your submissions!&lt;/p>
&lt;h2 id="-leaderboard--shared-task-results">🏆 Leaderboard – Shared Task Results&lt;/h2>
&lt;h3 id="afc-task--argumentative-fallacy-classification">&lt;code>AFC Task – Argumentative Fallacy Classification&lt;/code>&lt;/h3>
&lt;h4 id="-text-only">📝 Text-only&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Macro&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Team NUST&lt;/td>
&lt;td>0.4856&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Baseline BiLSTM&lt;/td>
&lt;td>0.4721&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>alessiopittiglio&lt;/td>
&lt;td>0.4444&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Baseline RoBERTa&lt;/td>
&lt;td>0.3925&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>5&lt;/td>
&lt;td>Team CASS&lt;/td>
&lt;td>0.1432&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h4 id="-audio-only">🔊 Audio-only&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Macro&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>alessiopittiglio&lt;/td>
&lt;td>0.3559&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Team NUST&lt;/td>
&lt;td>0.1588&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Baseline BiLSTM + MFCCs&lt;/td>
&lt;td>0.1582&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Team CASS&lt;/td>
&lt;td>0.0864&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>5&lt;/td>
&lt;td>Baseline WavLM&lt;/td>
&lt;td>0.0643&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h4 id="-text-audio">🔁 Text-Audio&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Macro&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Team NUST&lt;/td>
&lt;td>0.4611&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>alessiopittiglio&lt;/td>
&lt;td>0.4403&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Baseline RoBERTa + WavLM&lt;/td>
&lt;td>0.3816&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Baseline BiLSTM + MFCCs&lt;/td>
&lt;td>0.2191&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>5&lt;/td>
&lt;td>Team CASS&lt;/td>
&lt;td>0.1432&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;hr>
&lt;h3 id="afd-task--argumentative-fallacy-detection">&lt;code>AFD Task – Argumentative Fallacy Detection&lt;/code>&lt;/h3>
&lt;h4 id="-text-only-1">📝 Text-only&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Binary&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Baseline RoBERTa&lt;/td>
&lt;td>0.2770&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Ambali_Yashovardhan&lt;/td>
&lt;td>0.2534&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Baseline BiLSTM&lt;/td>
&lt;td>0.2462&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Team EvaAdriana&lt;/td>
&lt;td>0.2195&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h4 id="-audio-only-1">🔊 Audio-only&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Binary&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Ambali_Yashovardhan&lt;/td>
&lt;td>0.2095&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Team EvaAdriana&lt;/td>
&lt;td>0.1690&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Baseline BiLSTM + MFCCs&lt;/td>
&lt;td>0.0000&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Baseline WavLM&lt;/td>
&lt;td>0.0000&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h4 id="-text-audio-1">🔁 Text-Audio&lt;/h4>
&lt;table>
&lt;thead>
&lt;tr>
&lt;th>Rank&lt;/th>
&lt;th>Team&lt;/th>
&lt;th>F1-Binary&lt;/th>
&lt;/tr>
&lt;/thead>
&lt;tbody>
&lt;tr>
&lt;td>1&lt;/td>
&lt;td>Baseline RoBERTa + WavLM&lt;/td>
&lt;td>0.2848&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>2&lt;/td>
&lt;td>Baseline BiLSTM + MFCCs&lt;/td>
&lt;td>0.2337&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>3&lt;/td>
&lt;td>Ambali_Yashovardhan&lt;/td>
&lt;td>0.2244&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td>4&lt;/td>
&lt;td>Team EvaAdriana&lt;/td>
&lt;td>0.1931&lt;/td>
&lt;/tr>
&lt;/tbody>
&lt;/table>
&lt;h1 id="key-dates-anywhere-on-earth">Key Dates (Anywhere on Earth)&lt;/h1>
&lt;ul>
&lt;li>&lt;strong>Release of Training Data&lt;/strong>: February 25th&lt;/li>
&lt;li>&lt;strong>Release of Test Set&lt;/strong>: &lt;del>March 24th&lt;/del> → April 7th&lt;/li>
&lt;li>&lt;strong>Evaluation Start&lt;/strong>: &lt;del>April 14th&lt;/del> → April 21st&lt;/li>
&lt;li>&lt;strong>Evaluation End&lt;/strong>: &lt;del>April 25th&lt;/del> → April 30th&lt;/li>
&lt;li>&lt;strong>Paper Submissions Open&lt;/strong>: May 1st&lt;/li>
&lt;li>&lt;strong>Paper Submission Close&lt;/strong>: May 15th&lt;/li>
&lt;li>&lt;strong>Notification of acceptance&lt;/strong>: May 20th&lt;/li>
&lt;li>&lt;strong>Camera-ready Due&lt;/strong>: May 25th&lt;/li>
&lt;li>&lt;strong>Workshop&lt;/strong>: July 31st&lt;/li>
&lt;/ul>
&lt;h1 id="task-organizers">Task Organizers&lt;/h1>
&lt;table>
&lt;tr>
&lt;td style="width: 20%;">&lt;img src="emancini.png"/>&lt;/td>
&lt;td style="width: 30%;">
&lt;a href="https://helemanc.github.io/">&lt;bold>&lt;h2>Eleonora Mancini&lt;/h2>&lt;/bold>&lt;/a>
Language Technologies Lab, University of Bologna, Italy
&lt;/td>
&lt;td style="width: 20%;">&lt;img src="fruggeri.png"/>&lt;/td>
&lt;td style="width: 30%;">
&lt;a href="https://federicoruggeri.github.io/">&lt;bold>&lt;h2>Federico Ruggeri&lt;/h2>&lt;/bold>&lt;/a>
Language Technologies Lab, University of Bologna, Italy
&lt;/td>
&lt;/tr>
&lt;tr>
&lt;td style="width: 20%;">&lt;img src="svillata.jpg" height="20%"/>&lt;/td>
&lt;td style="width: 30%;">
&lt;a href="https://webusers.i3s.unice.fr/~villata/Home.html">&lt;bold>&lt;h2>Serena Villata&lt;/h2>&lt;/bold>&lt;/a>
WIMMICS (Inria, Laboratoire I3S), CNRS, Sophia Antipolis, France
&lt;/td>
&lt;td style="width: 20%;">&lt;img src="ptorroni.png"/>&lt;/td>
&lt;td style="width: 30%;">
&lt;a href="https://www.unibo.it/sitoweb/p.torroni/en/">&lt;bold>&lt;h2>Paolo Torroni&lt;/h2>&lt;/bold>&lt;/a>
Language Technologies Lab, University of Bologna, Italy
&lt;/td>
&lt;/tr>
&lt;/table>
&lt;h1 id="contacts">Contacts&lt;/h1>
&lt;p>&lt;strong>&lt;a href="https://join.slack.com/t/mm-argfallacy2025/shared_invite/zt-2yjct5udc-vbuGSsSelR5FMiopSne~wQ" target="_blank" rel="noopener">Join the MM-ArgFallacy2025 Slack Channel!&lt;/a>&lt;/strong>&lt;/p>
&lt;h1 id="cite">Cite&lt;/h1>
&lt;p>Eleonora Mancini, Federico Ruggeri, Serena Villata, and Paolo Torroni. 2025. Overview of MM-ArgFallacy2025 on Multimodal Argumentative Fallacy Detection and Classification in Political Debates. In Proceedings of the 12th Argument mining Workshop, pages 358–368, Vienna, Austria. Association for Computational Linguistics.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-latex" data-lang="latex">&lt;span class="line">&lt;span class="cl">@inproceedings&lt;span class="nb">{&lt;/span>mancini-etal-2025-overview,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> title = &amp;#34;Overview of &lt;span class="nb">{&lt;/span>MM&lt;span class="nb">}&lt;/span>-&lt;span class="nb">{&lt;/span>A&lt;span class="nb">}&lt;/span>rg&lt;span class="nb">{&lt;/span>F&lt;span class="nb">}&lt;/span>allacy2025 on Multimodal Argumentative Fallacy Detection and Classification in Political Debates&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> author = &amp;#34;Mancini, Eleonora and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Ruggeri, Federico and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Villata, Serena and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Torroni, Paolo&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> editor = &amp;#34;Chistova, Elena and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Cimiano, Philipp and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Haddadan, Shohreh and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Lapesa, Gabriella and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Ruiz-Dolz, Ramon&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> booktitle = &amp;#34;Proceedings of the 12th Argument mining Workshop&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> month = jul,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> year = &amp;#34;2025&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> address = &amp;#34;Vienna, Austria&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> publisher = &amp;#34;Association for Computational Linguistics&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> url = &amp;#34;https://aclanthology.org/2025.argmining-1.35/&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> doi = &amp;#34;10.18653/v1/2025.argmining-1.35&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> pages = &amp;#34;358--368&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> ISBN = &amp;#34;979-8-89176-258-9&amp;#34;,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> abstract = &amp;#34;We present an overview of the MM-ArgFallacy2025 shared task on Multimodal Argumentative Fallacy Detection and Classification in Political Debates, co-located with the 12th Workshop on Argument Mining at ACL 2025. The task focuses on identifying and classifying argumentative fallacies across three input modes: text-only, audio-only, and multimodal (text+audio), offering both binary detection (AFD) and multi-class classification (AFC) subtasks. The dataset comprises 18,925 instances for AFD and 3,388 instances for AFC, from the MM-USED-Fallacy corpus on U.S. presidential debates, annotated for six fallacy types: Ad Hominem, Appeal to Authority, Appeal to Emotion, False Cause, Slippery Slope, and Slogan. A total of 5 teams participated: 3 on classification and 2 on detection. Participants employed transformer-based models, particularly RoBERTa variants, with strategies including prompt-guided data augmentation, context integration, specialised loss functions, and various fusion techniques. Audio processing ranged from MFCC features to state-of-the-art speech models. Results demonstrated textual modality dominance, with best text-only performance reaching 0.4856 F1-score for classification and 0.34 for detection. Audio-only approaches underperformed relative to text but showed improvements over previous work, while multimodal fusion showed limited improvements. This task establishes important baselines for multimodal fallacy analysis in political discourse, contributing to computational argumentation and misinformation detection capabilities.&amp;#34;
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div>&lt;h1 id="credits">Credits&lt;/h1>
&lt;table>
&lt;tr>
&lt;td style="width: 80%;">This shared task is partially supported by the project European Commission's NextGeneration EU programme, PNRR -- M4C2 -- Investimento 1.3, Partenariato Esteso, PE00000013 - FAIR - Future Artificial Intelligence Research'' -- Spoke 8 Pervasive AI’’.&lt;/td>
&lt;td style="width: 25%;">&lt;img src="eulogo.svg"/>&lt;/td>
&lt;/tr>
&lt;/table></description></item><item><title>Overview of the CLEF-2024 CheckThat! Lab: Check-Worthiness, Subjectivity, Persuasion, Roles, Authorities, and Adversarial Robustness</title><link>http://nlp.unibo.it/events/2024clef/</link><pubDate>Sun, 01 Sep 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/events/2024clef/</guid><description>&lt;hr>
&lt;h5 id="resources">Resources&lt;/h5>
&lt;ul>
&lt;li>&lt;a href="https://checkthat.gitlab.io/clef2024/" target="_blank" rel="noopener">CheckThat! 2024&lt;/a>&lt;/li>
&lt;li>&lt;a href="paper.pdf">Paper&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h5 id="abstract">Abstract&lt;/h5>
&lt;p>We describe the seventh edition of the CheckThat! lab, part of the 2024 Conference and Labs of the Evaluation Forum (CLEF). Previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, identifying previously fact-checked claims, supporting evidence retrieval, and claim verification. In this edition, we introduced some new challenges, offering six tasks in fifteen languages (Arabic, Bulgarian, English, Dutch, French, Georgian, German, Greek, Italian, Polish, Portuguese, Russian, Slovene, Spanish, and code-mixed Hindi-English): Task 1 on estimation of check-worthiness (the only task that has been present in all CheckThat! editions), Task 2 on identification of subjectivity (a follow up of the CheckThat! 2023 edition), Task 3 on identification of the use of persuasion techniques (a follow up of SemEval 2023), Task 4 on detection of hero, villain, and victim from memes (a follow up of CONSTRAINT 2022), Task 5 on rumor verification using evidence from authorities (new task), and Task 6 on robustness of credibility assessment with adversarial examples (new task). These are challenging classification and retrieval problems at the document and at the span level, including multilingual and multimodal settings. This year, CheckThat! was one of the most popular labs at CLEF-2024 in terms of team registrations: 130 teams. More than one-third of them (a total of 46) actually participated.&lt;/p>
&lt;hr>
&lt;h5 id="citation">Citation&lt;/h5>
&lt;p>Alberto Barrón-Cedeño, Firoj Alam, Julia Maria Struß, Preslav Nakov, Tanmoy Chakraborty, Tamer Elsayed, Piotr Przybyła, Tommaso Caselli, Giovanni Da San Martino, Fatima Haouari, et al. Overview of the CLEF-2024 CheckThat! lab: check-worthiness, subjectivity, persuasion, roles, authorities, and adversarial robustness. In International Conference of the Cross-Language Evaluation Forum for European Languages, pages 28–52. Springer, 2024.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-latex" data-lang="latex">&lt;span class="line">&lt;span class="cl">@inproceedings&lt;span class="nb">{&lt;/span>barron-etal-20240-overview,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> title=&lt;span class="nb">{&lt;/span>Overview of the CLEF-2024 CheckThat! lab: check-worthiness, subjectivity, persuasion, roles, authorities, and adversarial robustness&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> author=&lt;span class="nb">{&lt;/span>Barr&lt;span class="nb">{&lt;/span>&lt;span class="k">\&amp;#39;&lt;/span>o&lt;span class="nb">}&lt;/span>n-Cede&lt;span class="nb">{&lt;/span>&lt;span class="k">\~&lt;/span>n&lt;span class="nb">}&lt;/span>o, Alberto and Alam, Firoj and Stru&lt;span class="nb">{&lt;/span>&lt;span class="k">\ss&lt;/span>&lt;span class="nb">}&lt;/span>, Julia Maria and Nakov, Preslav and Chakraborty, Tanmoy and Elsayed, Tamer and Przyby&lt;span class="nb">{&lt;/span>&lt;span class="k">\l&lt;/span>&lt;span class="nb">}&lt;/span>a, Piotr and Caselli, Tommaso and Da San Martino, Giovanni and Haouari, Fatima and others&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> booktitle=&lt;span class="nb">{&lt;/span>International Conference of the Cross-Language Evaluation Forum for European Languages&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> pages=&lt;span class="nb">{&lt;/span>28--52&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> year=&lt;span class="nb">{&lt;/span>2024&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> organization=&lt;span class="nb">{&lt;/span>Springer&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item><item><title>Disruptive situation detection on public transport through speech emotion recognition</title><link>http://nlp.unibo.it/publication_journals/mancini-etal-2024-disruptive/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/publication_journals/mancini-etal-2024-disruptive/</guid><description>&lt;p>Add the &lt;strong>full text&lt;/strong> or &lt;strong>supplementary notes&lt;/strong> for the publication here using Markdown formatting.&lt;/p></description></item><item><title>Overview of the CLEF–2023 CheckThat! Lab on Checkworthiness, Subjectivity, Political Bias, Factuality, and Authority of News Articles and Their Source</title><link>http://nlp.unibo.it/events/2023clef/</link><pubDate>Fri, 01 Sep 2023 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/events/2023clef/</guid><description>&lt;hr>
&lt;h5 id="resources">Resources&lt;/h5>
&lt;ul>
&lt;li>&lt;a href="https://checkthat.gitlab.io/clef2023/" target="_blank" rel="noopener">CheckThat! 2023&lt;/a>&lt;/li>
&lt;li>&lt;a href="paper.pdf">Paper&lt;/a>&lt;/li>
&lt;/ul>
&lt;hr>
&lt;h5 id="abstract">Abstract&lt;/h5>
&lt;p>We describe the sixth edition of the CheckThat! lab, part of the 2023 Conference and Labs of the Evaluation Forum (CLEF). The five previous editions of CheckThat! focused on the main tasks of the information verification pipeline: check-worthiness, verifying whether a claim was fact-checked before, supporting evidence retrieval, and claim verification. In this sixth edition, we zoom into some new problems and for the first time we offer five tasks in seven languages: Arabic, Dutch, English, German, Italian, Spanish, and Turkish. Task 1 asks to determine whether an item —text or text plus image— is check-worthy. Task 2 aims to predict whether a sentence from a news article is subjective or not. Task 3 asks to assess the political bias of the news at the article and at the media outlet level. Task 4 focuses on the factuality of reporting of news media. Finally, Task 5 looks at identifying authorities in Twitter that could help verify a given target claim. For a second year, CheckThat! was the most popular lab at CLEF-2023 in terms of team registrations: 127 teams. About one-third of them (a total of 37) actually participated.&lt;/p>
&lt;hr>
&lt;h5 id="citation">Citation&lt;/h5>
&lt;p>Alberto Barrón-Cedeño, Firoj Alam, Andrea Galassi, Giovanni Da San Martino, Preslav Nakov, Tamer Elsayed, Dilshod Azizov, Tommaso Caselli, Gullal S. Cheema, Fatima Haouari, Maram Hasanain, Mücahid Kutlu, Chengkai Li, Federico Ruggeri, Julia Maria Struß, and Wajdi Zaghouani. Overview of the CLEF-2023 CheckThat! lab on checkworthiness, subjectivity, political bias, factuality, and authority of news articles and their source.
In Avi Arampatzis, Evangelos Kanoulas, Theodora Tsikrika, Stefanos Vrochidis, Anastasia Giachanou, Dan Li, Mohammad Aliannejadi, Michalis Vlachos, Guglielmo Faggioli, and Nicola Ferro, editors, Experimental IR Meets Multilinguality, Multimodality, and Interaction - 14th International Conference of the CLEF Association, CLEF 2023, Thessaloniki, Greece, September 18-21, 2023, Proceedings, volume 14163 of Lecture Notes in Computer Science, pages 251–275. Springer, 2023.&lt;/p>
&lt;div class="highlight">&lt;pre tabindex="0" class="chroma">&lt;code class="language-latex" data-lang="latex">&lt;span class="line">&lt;span class="cl">@inproceedings&lt;span class="nb">{&lt;/span>cedeno-etal-2023-clef-overview,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> author = &lt;span class="nb">{&lt;/span>Alberto Barr&lt;span class="nb">{&lt;/span>&lt;span class="k">\&amp;#39;&lt;/span>&lt;span class="nb">{&lt;/span>o&lt;span class="nb">}}&lt;/span>n&lt;span class="nb">{&lt;/span>-&lt;span class="nb">}&lt;/span>Cede&lt;span class="nb">{&lt;/span>&lt;span class="k">\~&lt;/span>&lt;span class="nb">{&lt;/span>n&lt;span class="nb">}}&lt;/span>o and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Firoj Alam and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Andrea Galassi and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Giovanni Da San Martino and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Preslav Nakov and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Tamer Elsayed and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Dilshod Azizov and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Tommaso Caselli and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Gullal S. Cheema and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Fatima Haouari and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Maram Hasanain and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> M&lt;span class="nb">{&lt;/span>&lt;span class="k">\&amp;#34;&lt;/span>&lt;span class="nb">{&lt;/span>u&lt;span class="nb">}}&lt;/span>cahid Kutlu and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Chengkai Li and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Federico Ruggeri and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Julia Maria Stru&lt;span class="nb">{&lt;/span>&lt;span class="k">\ss&lt;/span>&lt;span class="nb">}&lt;/span> and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Wajdi Zaghouani&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> editor = &lt;span class="nb">{&lt;/span>Avi Arampatzis and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Evangelos Kanoulas and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Theodora Tsikrika and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Stefanos Vrochidis and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Anastasia Giachanou and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Dan Li and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Mohammad Aliannejadi and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Michalis Vlachos and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Guglielmo Faggioli and
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Nicola Ferro&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> title = &lt;span class="nb">{&lt;/span>Overview of the &lt;span class="nb">{&lt;/span>CLEF-2023&lt;span class="nb">}&lt;/span> CheckThat! Lab on Checkworthiness, Subjectivity,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Political Bias, Factuality, and Authority of News Articles and Their
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> Source&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> booktitle = &lt;span class="nb">{&lt;/span>Experimental &lt;span class="nb">{&lt;/span>IR&lt;span class="nb">}&lt;/span> Meets Multilinguality, Multimodality, and Interaction
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> - 14th International Conference of the &lt;span class="nb">{&lt;/span>CLEF&lt;span class="nb">}&lt;/span> Association, &lt;span class="nb">{&lt;/span>CLEF&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> 2023, Thessaloniki, Greece, September 18-21, 2023, Proceedings&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> series = &lt;span class="nb">{&lt;/span>Lecture Notes in Computer Science&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> volume = &lt;span class="nb">{&lt;/span>14163&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> pages = &lt;span class="nb">{&lt;/span>251--275&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> publisher = &lt;span class="nb">{&lt;/span>Springer&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> year = &lt;span class="nb">{&lt;/span>2023&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> url = &lt;span class="nb">{&lt;/span>https://doi.org/10.1007/978-3-031-42448-9&lt;span class="k">\_&lt;/span>20&lt;span class="nb">}&lt;/span>,
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl"> doi = &lt;span class="nb">{&lt;/span>10.1007/978-3-031-42448-9&lt;span class="k">\_&lt;/span>20&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;span class="line">&lt;span class="cl">&lt;span class="nb">}&lt;/span>
&lt;/span>&lt;/span>&lt;/code>&lt;/pre>&lt;/div></description></item></channel></rss>