<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Reasoning | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/reasoning/</link><atom:link href="http://nlp.unibo.it/tag/reasoning/index.xml" rel="self" type="application/rss+xml"/><description>Reasoning</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Reasoning</title><link>http://nlp.unibo.it/tag/reasoning/</link></image><item><title>Hate Speech Detection with Argumentative Reasoning</title><link>http://nlp.unibo.it/proposals_am/hatespeech/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_am/hatespeech/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Hate speech often relies on implicit content and subtle reasoning nuances.
Our idea is to apply argumentative reasoning to hate speech to make implicit content explicit, enabling more interpretable and user-friendly hate speech detection systems.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>, &lt;a href="mailto:arianna.muti@unibocconi.it">Arianna Muti&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Khalid Al-Khatib, Alberto Barrón-Cedeño, Tommaso Caselli&lt;br>
Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 21091–21107, 2024&lt;br>
&lt;a href="https://doi.org/10.18653/v1/2024.emnlp-main.1174" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://aclanthology.org/2024.emnlp-main.1174.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p>
&lt;p>&lt;strong>PejorativITy: Disambiguating Pejorative Epithets to Improve Misogyny Detection in Italian Tweets&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Cagri Toraman, Lorenzo Musetti, Samuel Algherini, Silvia Ronchi, Gianmarco Saretto, Caterina Zapparoli, Alberto Barrón-Cedeño&lt;br>
Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING), pp. 12700–12711, 2024&lt;br>
&lt;a href="https://aclanthology.org/2024.lrec-main.1112.pdf" target="_blank" rel="noopener">PDF&lt;/a>
| &lt;a href="https://aclanthology.org/2024.lrec-main.1112" target="_blank" rel="noopener">Anthology&lt;/a>&lt;/p></description></item><item><title>ArgMining</title><link>http://nlp.unibo.it/students_workshops/argmining/</link><pubDate>Fri, 27 Feb 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/students_workshops/argmining/</guid><description>&lt;p>Argument mining (also known as &amp;ldquo;argumentation mining&amp;rdquo;) is a well-established research area in computational linguistics that focuses on the automatic identification of argumentative structures, such as premises, conclusions, and inference schemes.
Since its beginnings, the focus has been on the development of large-scale argumentation datasets and on tasks such as argument quality assessment, argument persuasiveness, and the synthesis of argumentative texts, spanning various domains, such as legal, social, medical, political, and scientific settings.&lt;/p></description></item><item><title>ALMA-AI | Workshop LLM: a debate on technical experiences</title><link>http://nlp.unibo.it/news/alma-ai/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/alma-ai/</guid><description>&lt;h2 id="workshop-abstract">Workshop Abstract&lt;/h2>
&lt;p>La diffusione dei Large Language Model in ogni ambito delle attività umane ha reso necessario valutare lo stato dell’arte, i limiti, gli ostacoli e le barriere che questa potente tecnologia offre.
I rapidi aggiornamenti di queste tecnologie impongono di allargare gli orizzonti degli obiettivi di ricerca e delle metodologie.
Le capacità di reasoning degli Agentic AI indicano nuove sfide.
Tuttavia, sempre più emerge la necessità di valutare in quali ambiti queste tecnologie possano produrre un reale valore aggiunto e quando invece, anche considerando l’uso delle risorse necessarie, sia meglio adottare approcci più tradizionali.
Emerge quindi il tema di come valutare i modelli messi a confronto fra loro (benchmarking).
Gli aspetti tecnici si intersecano con quelli etici, giuridici, di sostenibilità, di efficacia, nonché di spiegabilità dei passaggi di reasoning.
Il workshop intende investigare questi temi, con particolare riguardo alle applicazioni nell’ambito del diritto dove il linguaggio è un pilastro costitutivo della disciplina.
Il dibattito è utile a definire anche le aspettative future delle quali le istituzioni, come i Parlamenti e le pubbliche amministrazioni, potranno avvantaggiarsi senza tuttavia ignorare rischi e false illusioni.&lt;/p>
&lt;h2 id="federico-ruggeris-speech">Federico Ruggeri&amp;rsquo;s speech&lt;/h2>
&lt;p>The talk addresses three main aspects of LLMs and their reasoning capabilities.
First, we discuss which type of reasoning LLMs are tested for.
The short answer is that, in the majority of cases, it is unclear which reasoning type(s) are considered.
Second, we discuss to what extent LLMs perform reasoning.
Some view LLMs as stochastic parrots, while others believe they acquire true reasoning capabilities.
Third, we show how reasoning and argumentation are tightly connected and discuss how argumentation is progressively being used to assess reasoning capabilities in LLMs.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/hypermodelex/en" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="speech.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>The 13th Workshop on Argument Mining and Reasoning Co-located with ACL 2026</title><link>http://nlp.unibo.it/news/argmining2026/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/argmining2026/</guid><description>&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The 2026 edition of the ArgMining workshop therefore places a special focus on understanding and evaluating arguments in both human and machine reasoning.
With this topic, we broaden the workshop&amp;rsquo;s focus to include reasoning, a long-standing area of research in AI that has recently gained renewed interest within the *ACL community, driven by the latest generation of LLMs.
Reasoning is tightly connected to argumentation, as argumentation represents, analyzes, and evaluates the process of reaching conclusions on the basis of available information.
If we consider argumentation as a paradigm to capture reasoning, then machines (particularly LLMs) can be evaluated based on their ability to address argument mining tasks.&lt;/p>
&lt;p>The workshop will be co-located with ACL 2026 and held in San Diego, United States, in a hybrid format.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://argmining-org.github.io/2026/" target="_blank" rel="noopener">Workshop Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="argmining.org@gmail.com">e-Mail&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/argmining-org" target="_blank" rel="noopener">Github&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://x.com/ArgminingOrg" target="_blank" rel="noopener">X/Twitter&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://bsky.app/profile/argminingorg.bsky.social" target="_blank" rel="noopener">Bluesky&lt;/a>&lt;/li>
&lt;/ul></description></item></channel></rss>