<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Local Explanation | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/local-explanation/</link><atom:link href="http://nlp.unibo.it/tag/local-explanation/index.xml" rel="self" type="application/rss+xml"/><description>Local Explanation</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Local Explanation</title><link>http://nlp.unibo.it/tag/local-explanation/</link></image><item><title>Knowledge Extraction from Rationalization</title><link>http://nlp.unibo.it/proposals_interpretability/extraction/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/extraction/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Rationalization is a form of example-specific (i.e., local) explanation.
However, samples belonging to the same class might share similar rationales.
The idea is to define methods for moving from local explanations (i.e., individual rationales) to a global explanation (i.e., a knowledge base) by aggregating and summarizing the extracted rationales.
This can be done with LLMs (e.g., prompting techniques) or other solutions.&lt;/p>
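&lt;p>&lt;em>Illustrative sketch:&lt;/em> one possible local-to-global pipeline groups extracted rationales by class and asks an LLM to summarize each group into class-level rules. The sketch below is an assumption, not a prescribed design: the &lt;code>(label, rationale)&lt;/code> input format and the &lt;code>call_llm&lt;/code> callable are hypothetical placeholders for whatever rationalizer and LLM client are actually used.&lt;/p>
&lt;pre>&lt;code># Hedged sketch: aggregate per-class rationales and summarize them into
# class-level statements, i.e., a small global knowledge base.
from collections import defaultdict

def build_knowledge_base(examples, call_llm):
    """examples: iterable of (class_label, rationale_text) pairs from a rationalizer.
    call_llm: any callable mapping a prompt string to a completion string (placeholder)."""
    by_class = defaultdict(list)
    for label, rationale in examples:
        by_class[label].append(rationale)

    knowledge_base = {}
    for label, rationales in by_class.items():
        bullets = "\n".join(f"- {r}" for r in rationales)
        prompt = (
            f"The rationales below justify predictions of class '{label}':\n"
            f"{bullets}\n"
            "Summarize them into a few general rules shared by this class."
        )
        knowledge_base[label] = call_llm(prompt)
    return knowledge_base
&lt;/code>&lt;/pre>
&lt;p>A non-LLM variant could swap &lt;code>call_llm&lt;/code> for clustering or pattern mining over the rationales, in line with the "other solutions" mentioned above.&lt;/p>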
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>A Game Theoretic Approach to Class-wise Selective Rationalization&lt;/strong>&lt;br>
Shiyu Chang, Yang Zhang, Mo Yu, Tommi S. Jaakkola.
33rd Conference on Neural Information Processing Systems (NeurIPS), Vancouver, Canada, 2019.&lt;br>
&lt;a href="https://papers.neurips.cc/paper_files/paper/2019/file/5ad742cd15633b26fdce1b80f7b39f7c-Paper.pdf" target="_blank" rel="noopener">PDF&lt;/a>&lt;/p></description></item></channel></rss>