<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>LLMs | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/llms/</link><atom:link href="http://nlp.unibo.it/tag/llms/index.xml" rel="self" type="application/rss+xml"/><description>LLMs</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>LLMs</title><link>http://nlp.unibo.it/tag/llms/</link></image><item><title>Hate Speech Detection with Argumentative Reasoning</title><link>http://nlp.unibo.it/proposals_am/hatespeech/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_am/hatespeech/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Hate speech often relies on implicit content and subtle reasoning nuances.
Our idea is to apply argumentative reasoning to hate speech, making implicit content explicit in order to build more interpretable and user-friendly hate speech detection systems.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>, &lt;a href="mailto:arianna.muti@unibocconi.it">Arianna Muti&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Language is Scary when Over-Analyzed: Unpacking Implied Misogynistic Reasoning with Argumentation Theory-Driven Prompts&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Khalid Al-Khatib, Alberto Barrón-Cedeño, Tommaso Caselli&lt;br>
Conference on Empirical Methods in Natural Language Processing (EMNLP), pp. 21091–21107, 2024&lt;br>
&lt;a href="https://doi.org/10.18653/v1/2024.emnlp-main.1174" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://aclanthology.org/2024.emnlp-main.1174.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p>
&lt;p>&lt;strong>PejorativITy: Disambiguating Pejorative Epithets to Improve Misogyny Detection in Italian Tweets&lt;/strong>&lt;br>
Arianna Muti, Federico Ruggeri, Cagri Toraman, Lorenzo Musetti, Samuel Algherini, Silvia Ronchi, Gianmarco Saretto, Caterina Zapparoli, Alberto Barrón-Cedeño.&lt;br>
In Proceedings of the 2024 Joint International Conference on Computational Linguistics, Language Resources and Evaluation (LREC-COLING 2024), pages 12700–12711, Torino, Italy. ELRA and ICCL.&lt;br>
&lt;a href="https://aclanthology.org/2024.lrec-main.1112.pdf" target="_blank" rel="noopener">PDF&lt;/a>
| &lt;a href="https://aclanthology.org/2024.lrec-main.1112" target="_blank" rel="noopener">Anthology&lt;/a>&lt;/p></description></item><item><title>Rationalization via LLMs</title><link>http://nlp.unibo.it/proposals_interpretability/llms/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/llms/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
LLMs are ubiquitous in NLP. Our aim is to evaluate the capabilities of LLMs in performing selective rationalization via prompting.
How do they compare with traditional SPP models?&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery&lt;/strong>&lt;br>
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An.&lt;br>
The Twelfth International Conference on Learning Representations, 2024.&lt;br>
&lt;a href="https://openreview.net/pdf?id=uGtfk2OphU" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p>
&lt;p>&lt;strong>Learning Robust Rationales for Model Explainability: A Guidance-Based Approach&lt;/strong>&lt;br>
Shuaibo Hu, Kui Yu.&lt;br>
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.&lt;br>
&lt;a href="https://doi.org/10.1609/aaai.v38i16.29783" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://ojs.aaai.org/index.php/AAAI/article/view/29783/31352" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item><item><title>Text Classification with Guidelines Only</title><link>http://nlp.unibo.it/proposals_uki/clf_guidelines/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_uki/clf_guidelines/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
The standard approach for training a machine learning model on a task is to provide an annotated dataset $(\mathcal{X}, \mathcal{Y})$.
The dataset is built by providing unlabeled data $\mathcal{X}$ to a group of annotators previously trained on a set of annotation guidelines $\mathcal{G}$.
Annotators label data $\mathcal{X}$ via a given class set $\mathcal{C}$.
The main issue with this approach is that annotators define the mapping from data $\mathcal{X}$ to the class set $\mathcal{C}$ via the guidelines $\mathcal{G}$, while machine learning models are trained to learn the same mapping without access to the guidelines $\mathcal{G}$.
Consequently, these models can learn any mapping from $\mathcal{X}$ to $\mathcal{C}$ that best fits the given data.
Our idea is to directly provide guidelines $\mathcal{G}$ to models without any access to class labels during training.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Let Guidelines Guide You: A Prescriptive Guideline-Centered Data Annotation Methodology&lt;/strong>&lt;br>
Federico Ruggeri, Eleonora Misino, Arianna Muti, Katerina Korre, Paolo Torroni, Alberto Barrón-Cedeño&lt;br>
September 2024&lt;br>
&lt;a href="https://arxiv.org/abs/2406.14099" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item><item><title>Transformers and LLMs for the detection and classification of unfair clauses</title><link>http://nlp.unibo.it/proposals_legal/unfairclauses/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_legal/unfairclauses/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
For several years, we have been working on tools for the automatic detection of unfair clauses in Terms of Service and Privacy Policy documents in English (see the CLAUDETTE and PRIMA &lt;a href="http://nlp.unibo.it/projects">Projects page&lt;/a>).
We have already conducted several studies on this topic, and we are interested in applying new, effective methods and techniques.
Right now, we are focused on LLMs, but we are also interested in alternative techniques.&lt;/p>
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:galassi@unibo.it">Andrea Galassi&lt;/a>, &lt;a href="mailto:marco.lippi@unifi.it">Marco Lippi&lt;/a>&lt;/p></description></item><item><title>Is It Worth Using LLMs for Unfair Clause Detection in Terms of Service?</title><link>http://nlp.unibo.it/publication_highlights/2025worth/</link><pubDate>Wed, 01 Jan 2025 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/publication_highlights/2025worth/</guid><description>&lt;p>&amp;#x1f3c6; Awarded the &amp;ldquo;Peter Jackson&amp;rdquo; Award for Best Innovative Application Paper&lt;/p></description></item><item><title>AI-based Smart Collaborative Manufacturing System (SmartCasm)</title><link>http://nlp.unibo.it/projects_national/smartcasm/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/smartcasm/</guid><description>&lt;p>The project involves using LLMs to integrate unstructured knowledge into industrial pipelines to speed up production and foster technical advancement.&lt;/p></description></item><item><title>Generative Models: Empowering Business Processes and Enhancing Workflows for Improved Performance (GeMEB)</title><link>http://nlp.unibo.it/projects_national/gemeb/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/gemeb/</guid><description>&lt;p>The project developing ad-hoc LLM-based solutions to speed up existing user assistance systems while guaranteeing privacy.&lt;/p></description></item></channel></rss>