<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Latest News | Language Technologies Lab</title><link>http://nlp.unibo.it/news/</link><atom:link href="http://nlp.unibo.it/news/index.xml" rel="self" type="application/rss+xml"/><description>Latest News</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Latest News</title><link>http://nlp.unibo.it/news/</link></image><item><title>Paper accepted at TACL!</title><link>http://nlp.unibo.it/news/tacl2026/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/tacl2026/</guid><description>&lt;h2 id="description">Description&lt;/h2>
&lt;p>Our paper &amp;lsquo;Let Guidelines Guide You: A Prescriptive Guideline-Centered Data Annotation Methodology&amp;rsquo; has been accepted to Transactions of the Association for Computational Linguistics (TACL)!&lt;/p>
&lt;p>Proceedings and the updated paper will be available soon!&lt;/p>
&lt;h2 id="read-more">Read More&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://arxiv.org/pdf/2406.14099v2" target="_blank" rel="noopener">ArXiv&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>EquAl: Equitable Algorithms, Promoting Fairness and Countering Algorithmic Discrimination Through Norms and Technologies - Final Conference</title><link>http://nlp.unibo.it/news/equal/</link><pubDate>Fri, 23 Jan 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/equal/</guid><description>&lt;h2 id="project-info">Project info&lt;/h2>
&lt;p>The EquAl project addresses algorithmic evaluations, decisions, and predictions to promote fairness and counter discrimination affecting individuals and groups.
The research project is funded by the EU Commission under the NextGenerationEU program and the Italian Ministry of Education, University and Research
(PRIN 2022. Ref. prot. n.: 2022KFLF3E-001 - CUP J53D23005560001).&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/equal/en/equal_project" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="reproducibility-crysis.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>PRIMA: PRivacy Infringements Machine-Advice - Final Conference</title><link>http://nlp.unibo.it/news/prima/</link><pubDate>Mon, 12 Jan 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/prima/</guid><description>&lt;h2 id="project-info">Project info&lt;/h2>
&lt;p>PRIMA (PRivacy Infringements Machine-Advice) studies the law and practice of privacy policies, develops methods and techniques for their automated analysis, and implements a prototype to assess their lawfulness.
It deploys legal analytics—a mix of data science, artificial intelligence, machine learning, natural language processing and statistics—to detect and assess privacy policies’ infringements.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/prima/en/project" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="explainability-via-highlights.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>ALMA-AI | Workshop LLM: a debate on technical experiences</title><link>http://nlp.unibo.it/news/alma-ai/</link><pubDate>Thu, 20 Nov 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/alma-ai/</guid><description>&lt;h2 id="workshop-abstract">Workshop Abstract&lt;/h2>
&lt;p>The spread of Large Language Models into every sphere of human activity has made it necessary to assess the state of the art, along with the limits, obstacles, and barriers this powerful technology presents.
The rapid pace at which these technologies evolve requires broadening the horizons of research goals and methodologies.
The reasoning capabilities of agentic AI point to new challenges.
At the same time, there is a growing need to assess in which domains these technologies can deliver real added value and when, also considering the resources they require, more traditional approaches are preferable.
This raises the question of how to evaluate models against one another (benchmarking).
Technical aspects intersect with ethical, legal, sustainability, and effectiveness concerns, as well as with the explainability of reasoning steps.
The workshop aims to investigate these themes, with particular attention to applications in the legal domain, where language is a constitutive pillar of the discipline.
The debate also helps define the future expectations from which institutions, such as parliaments and public administrations, can benefit without ignoring risks and false illusions.&lt;/p>
&lt;h2 id="federico-ruggeris-speech">Federico Ruggeri&amp;rsquo;s speech&lt;/h2>
&lt;p>The talk addresses three main aspects of LLMs and their reasoning capabilities.
First, we discuss what kind of reasoning LLMs are tested for.
The short answer is that, in the majority of cases, it is unclear which reasoning type(s) are considered.
Second, we discuss to what extent LLMs perform reasoning.
Some view LLMs as stochastic parrots, while others believe they acquire true reasoning capabilities.
Third, we show how reasoning and argumentation are tightly connected and discuss how argumentation is progressively being used as a way to assess reasoning capabilities in LLMs.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://site.unibo.it/hypermodelex/en" target="_blank" rel="noopener">Project Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="program.pdf">Workshop Program&lt;/a>&lt;/li>
&lt;li>&lt;a href="speech.pdf">Federico Ruggeri&amp;rsquo;s Speech&lt;/a>&lt;/li>
&lt;/ul></description></item><item><title>The 13th Workshop on Argument Mining and Reasoning Co-located with ACL 2026</title><link>http://nlp.unibo.it/news/argmining2026/</link><pubDate>Wed, 01 Oct 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/news/argmining2026/</guid><description>&lt;h2 id="introduction">Introduction&lt;/h2>
&lt;p>The 2026 edition of the ArgMining workshop places a special focus on understanding and evaluating arguments in both human and machine reasoning.
With this topic, we broaden the workshop&amp;rsquo;s focus to include reasoning, a long-standing area of research in AI that has recently gained renewed interest within the *ACL community, driven by the latest generation of LLMs.
Reasoning is tightly connected to argumentation, as it represents, analyzes, and evaluates the process of reaching conclusions on the basis of available information.
If we consider argumentation as a paradigm to capture reasoning, then machines (particularly LLMs) can be evaluated based on their ability to address argument mining tasks.&lt;/p>
&lt;p>The workshop will be co-located with ACL 2026 and held in San Diego, United States in a hybrid format.&lt;/p>
&lt;h2 id="useful-links">Useful Links&lt;/h2>
&lt;ul>
&lt;li>&lt;a href="https://argmining-org.github.io/2026/" target="_blank" rel="noopener">Workshop Page&lt;/a>&lt;/li>
&lt;li>&lt;a href="argmining.org@gmail.com">e-Mail&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://github.com/argmining-org" target="_blank" rel="noopener">Github&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://x.com/ArgminingOrg" target="_blank" rel="noopener">X/Twitter&lt;/a>&lt;/li>
&lt;li>&lt;a href="https://bsky.app/profile/argminingorg.bsky.social" target="_blank" rel="noopener">Bluesky&lt;/a>&lt;/li>
&lt;/ul></description></item></channel></rss>