<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Interlocking | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/interlocking/</link><atom:link href="http://nlp.unibo.it/tag/interlocking/index.xml" rel="self" type="application/rss+xml"/><description>Interlocking</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 02 Mar 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Interlocking</title><link>http://nlp.unibo.it/tag/interlocking/</link></image><item><title>Mixture of Experts for Rationalization</title><link>http://nlp.unibo.it/proposals_interpretability/mixture/</link><pubDate>Mon, 02 Mar 2026 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/proposals_interpretability/mixture/</guid><description>&lt;p>&lt;strong>Description:&lt;/strong>&lt;br>
Mixture of Experts (MoE) is a technique whereby several expert models are trained on the same data, each specializing in a different subset of the input.
MoE has proven successful in a variety of applications, and its original formulation dates back to the early 1990s.
The goal of this proposal is to investigate whether an MoE model can be developed for selective rationalization in order to address interlocking, i.e., the failure mode of select-then-predict models in which the rationale generator and the predictor reinforce each other's suboptimal rationale selections.&lt;/p>
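&lt;p>A minimal PyTorch sketch of the MoE idea, included for illustration only: the &lt;code>Expert&lt;/code> and &lt;code>MoELayer&lt;/code> names, the soft-routing gate, and the number of experts are assumptions of this sketch, not design choices of the proposal.&lt;/p>
&lt;pre>&lt;code class="language-python">import torch
import torch.nn as nn
import torch.nn.functional as F


class Expert(nn.Module):
    """One expert: a small feed-forward network over token representations."""

    def __init__(self, hidden_dim):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, hidden_dim),
        )

    def forward(self, x):
        return self.net(x)


class MoELayer(nn.Module):
    """Soft-routing MoE: a learned gate weighs every expert's output per token."""

    def __init__(self, hidden_dim, num_experts=4):
        super().__init__()
        self.experts = nn.ModuleList([Expert(hidden_dim) for _ in range(num_experts)])
        self.gate = nn.Linear(hidden_dim, num_experts)

    def forward(self, x):
        # x: (batch, seq_len, hidden_dim)
        weights = F.softmax(self.gate(x), dim=-1)   # (batch, seq, experts)
        outputs = torch.stack(                      # (batch, seq, hidden, experts)
            [expert(x) for expert in self.experts], dim=-1
        )
        # Convex combination of expert outputs, so each token is handled
        # mostly by the experts the gate deems responsible for it.
        return (outputs * weights.unsqueeze(-2)).sum(dim=-1)


# Usage example on dummy data.
layer = MoELayer(hidden_dim=64, num_experts=4)
tokens = torch.randn(2, 10, 64)
print(layer(tokens).shape)  # torch.Size([2, 10, 64])
&lt;/code>&lt;/pre>
&lt;p>In a selective rationalization setting, the gate could play a role analogous to the rationale selector, routing tokens to experts that learn to rely only on the selected evidence; how to adapt this routing to mitigate interlocking is precisely the question of the proposal.&lt;/p>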
&lt;p>&lt;strong>Contact:&lt;/strong> &lt;a href="mailto:federico.ruggeri6@unibo.it">Federico Ruggeri&lt;/a>&lt;/p>
&lt;p>&lt;strong>References:&lt;/strong>&lt;/p>
&lt;p>&lt;strong>A Survey on Mixture of Experts in Large Language Models&lt;/strong>&lt;br>
W. Cai, J. Jiang, F. Wang, J. Tang, S. Kim and J. Huang.&lt;br>
In IEEE Transactions on Knowledge and Data Engineering, vol. 37, no. 7, pp. 3896-3915, July 2025.&lt;br>
&lt;a href="https://doi.org/10.1109/TKDE.2025.3554028" target="_blank" rel="noopener">DOI&lt;/a>&lt;/p></description></item></channel></rss>