Rationalization via LLMs

Description:
LLMs are ubiquitous in NLP. Our aim is to evaluate how well LLMs perform selective rationalization via prompting: can a prompted LLM select the input spans that justify its prediction, and how do these rationales compare with those of traditional SPP models?
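As a starting point, prompted selective rationalization can be sketched as asking the model to return a label together with verbatim input spans, then filtering out spans that do not occur in the input. This is a minimal illustration, not a prescribed setup: the prompt template, the label set, and the mocked LLM reply below are all hypothetical.

```python
# Minimal sketch of selective rationalization via prompting.
# Assumptions (not from the project description): the prompt template,
# the label set, and the mocked LLM reply are illustrative only.
import json

def build_prompt(text: str, labels: list[str]) -> str:
    """Ask the LLM for a prediction plus a rationale made of verbatim input spans."""
    return (
        "Select the minimal spans of the input that justify your prediction "
        f"(the rationale), then predict one label from {labels}.\n"
        'Answer in JSON: {"rationale": [<spans>], "label": <label>}.\n'
        f"Input: {text}"
    )

def parse_response(raw: str, text: str) -> dict:
    """Parse the JSON answer; keep only spans that appear verbatim in the input,
    so the extracted rationale stays grounded in the original text."""
    out = json.loads(raw)
    out["rationale"] = [span for span in out["rationale"] if span in text]
    return out

# Example with a mocked LLM reply (no real model call is made here):
text = "The movie was slow, but the acting was superb."
prompt = build_prompt(text, ["positive", "negative"])
mock_reply = '{"rationale": ["the acting was superb"], "label": "positive"}'
result = parse_response(mock_reply, text)
# result["label"] == "positive"; result["rationale"] == ["the acting was superb"]
```

The verbatim-span filter mirrors what SPP-style models enforce by construction (rationales are subsets of the input), which is what makes the prompted and trained approaches directly comparable.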

Contact: Federico Ruggeri

References:

Towards Faithful Explanations: Boosting Rationalization with Shortcuts Discovery
Linan Yue, Qi Liu, Yichao Du, Li Wang, Weibo Gao, Yanqing An.
The Twelfth International Conference on Learning Representations (ICLR), 2024.

Learning Robust Rationales for Model Explainability: A Guidance-Based Approach
S. Hu, K. Yu.
Proceedings of the AAAI Conference on Artificial Intelligence, 2024.