<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Selective Rationalization | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/selective-rationalization/</link><atom:link href="http://nlp.unibo.it/tag/selective-rationalization/index.xml" rel="self" type="application/rss+xml"/><description>Selective Rationalization</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Selective Rationalization</title><link>http://nlp.unibo.it/tag/selective-rationalization/</link></image><item><title>Knowledge Extraction from Rationalization</title><link>http://nlp.unibo.it/proposals_interpretability/extraction/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/extraction/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Rationalization is a form of example-specific explanation: for each input, the model extracts a rationale, i.e., the subset of the text that supports its prediction.
However, samples belonging to the same class might share similar rationales.
The idea is to define ways to go from local explanations (i.e., individual rationales) to a global explanation (i.e., a knowledge base) by aggregating and summarizing the extracted rationales.
This can be done with LLMs (e.g., prompting techniques) or other solutions.&lt;/p>
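&lt;p>As a non-LLM baseline, the local-to-global aggregation step can be sketched with simple token counting: group rationales by class, then keep each class's most frequent rationale tokens as a crude knowledge base. The &lt;code>build_knowledge_base&lt;/code> function and the toy data are illustrative assumptions, not an implementation from the references.&lt;/p>

```python
from collections import Counter, defaultdict

def build_knowledge_base(examples, top_k=3):
    """Aggregate per-example rationales (local explanations) into
    per-class keyword lists (a crude global knowledge base)."""
    by_class = defaultdict(Counter)
    for label, rationale in examples:
        by_class[label].update(rationale.lower().split())
    return {label: [word for word, _ in counts.most_common(top_k)]
            for label, counts in by_class.items()}

# Toy (label, extracted rationale) pairs standing in for model output.
examples = [
    ("positive", "great acting and a moving story"),
    ("positive", "great soundtrack moving performances"),
    ("negative", "dull plot and wooden acting"),
]
kb = build_knowledge_base(examples)
```

&lt;p>An LLM-based variant would replace the counting step with a summarization prompt over each class's pooled rationales.&lt;/p>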
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>A Game Theoretic Approach to Class-wise Selective Rationalization&lt;/strong>&lt;br>
Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola.&lt;br>
33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, 2019.&lt;br>
&lt;a href="https://papers.neurips.cc/paper_files/paper/2019/file/5ad742cd15633b26fdce1b80f7b39f7c-Paper.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item><item><title>Mixture of Experts for Rationalization</title><link>http://nlp.unibo.it/proposals_interpretability/mixture/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/mixture/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Mixture of Experts (MoE) is a technique whereby several models are trained on the same data, each specializing in a certain subset.
MoE has been shown to be successful in a variety of applications, and its original formulation dates back to the early 1990s.
The idea is to investigate whether we can develop an MoE model for selective rationalization to address interlocking, i.e., the degenerate training dynamic in which the rationale selector and the predictor overfit to each other's suboptimal behavior.&lt;/p>
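&lt;p>A minimal numerical sketch of the MoE idea applied to rationale selection: each expert scores tokens for inclusion in the rationale, and a learned gate mixes the experts per example. All shapes, weights, and the top-2 selection rule are illustrative assumptions, not a proposed architecture.&lt;/p>

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def moe_token_scores(x, expert_weights, gate_weights):
    """x: (n_tokens, d) token embeddings for one example.
    expert_weights: list of (d,) scoring vectors, one per expert.
    gate_weights: (n_experts, d) gating matrix."""
    gate = softmax(gate_weights @ x.mean(axis=0))       # one weight per expert
    scores = np.stack([x @ w for w in expert_weights])  # (n_experts, n_tokens)
    return gate @ scores                                # mixed token scores

d, n_tokens, n_experts = 8, 5, 3
x = rng.normal(size=(n_tokens, d))
experts = [rng.normal(size=d) for _ in range(n_experts)]
gate_w = rng.normal(size=(n_experts, d))
s = moe_token_scores(x, experts, gate_w)
rationale_idx = np.argsort(s)[-2:]  # select top-2 tokens as the rationale
```

&lt;p>In a trained system, both experts and gate would be optimized end-to-end; the hope is that per-expert specialization loosens the single selector-predictor feedback loop behind interlocking.&lt;/p>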
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>A Survey on Mixture of Experts in Large Language Models&lt;/strong>&lt;br>
Weilin Cai, Juyong Jiang, Fan Wang, Jing Tang, Sunghun Kim, Jiayi Huang.&lt;br>
In IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 7, pp. 3896-3915, July 2025.&lt;br>
&lt;a href="https://doi.org/10.1109/TKDE.2025.3554028" target="_blank" rel="noopener">DOI&lt;/a>&lt;/p></description></item><item><title>Rationalization via LLMs</title><link>http://nlp.unibo.it/proposals_interpretability/llms/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/llms/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
LLMs are ubiquitous in NLP. Our aim is to evaluate LLM capabilities in performing selective rationalization via prompting.
How do they compare with traditional select-then-predict (SPP) models?&lt;/p>
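&lt;p>A minimal sketch of the prompting setup, with the model call itself left out: build an extraction prompt, then verify that the returned rationale is actually a span of the input (selective rationales must be extractive). The prompt wording and helper names are hypothetical.&lt;/p>

```python
def make_rationalization_prompt(text, label):
    """Ask the model to justify a label by copying a verbatim span."""
    return (
        "Classify the sentiment of the text and copy, verbatim, the minimal "
        "span that justifies your answer.\n"
        f"Text: {text}\nLabel: {label}\nRationale:"
    )

def is_extractive(text, rationale):
    # Sanity check: a selective rationale must be a substring of the input.
    return rationale.strip() in text
```

&lt;p>A comparison against SPP baselines would then score the extracted spans with the usual rationale metrics (e.g., token-level F1 against human highlights).&lt;/p>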
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery&lt;/strong>&lt;br>
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An.&lt;br>
The Twelfth International Conference on Learning Representations, 2024.&lt;br>
&lt;a href="https://openreview.net/pdf?id=uGtfk2OphU" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p>
&lt;p>&lt;strong>Learning Robust Rationales for Model Explainability: A Guidance-Based Approach&lt;/strong>&lt;br>
Shuaibo Hu, Kui Yu.&lt;br>
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.&lt;br>
&lt;a href="https://doi.org/10.1609/aaai.v38i16.29783" target="_blank" rel="noopener">DOI&lt;/a>
| &lt;a href="https://ojs.aaai.org/index.php/AAAI/article/view/29783/31352" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item><item><title>Structured Rationalization via Tree kernel methods</title><link>http://nlp.unibo.it/proposals_interpretability/treekernels/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/treekernels/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
There are several techniques for transforming text into abstract structured representations (AMR graphs, parse trees, etc.).
We are interested in applying rationalization in these contexts while also enforcing structural constraints that depend on the application scenario.
The constraints describe which types of structures the rationalization system is allowed to extract.
In the case of tree kernels, these structures are different types of trees.&lt;/p>
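&lt;p>One such constraint can be sketched concretely: require that a structured rationale be a complete subtree of the input's parse tree. Trees are represented here as nested &lt;code>(label, children)&lt;/code> tuples; this encoding and the check are illustrative assumptions.&lt;/p>

```python
def subtrees(tree):
    """Yield every complete subtree of a (label, children) tuple tree."""
    yield tree
    label, children = tree
    for child in children:
        yield from subtrees(child)

def is_valid_rationale(tree, candidate):
    # Structural constraint: the candidate rationale must be a
    # complete subtree of the input parse tree.
    return any(candidate == t for t in subtrees(tree))

# Toy parse tree: S -> NP VP, NP -> D N, VP -> V
tree = ("S", [("NP", [("D", []), ("N", [])]), ("VP", [("V", [])])])
```

&lt;p>Tree-kernel methods would go further, scoring candidates by subtree overlap rather than exact membership.&lt;/p>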
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>Tree-constrained Graph Neural Networks for Argument Mining&lt;/strong>&lt;br>
Federico Ruggeri, Marco Lippi, Paolo Torroni.&lt;br>
arXiv preprint, September 2021.&lt;br>
&lt;a href="https://arxiv.org/abs/2110.00124" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item></channel></rss>