<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>National Projects | Language Technologies Lab</title><link>http://nlp.unibo.it/projects_national/</link><atom:link href="http://nlp.unibo.it/projects_national/index.xml" rel="self" type="application/rss+xml"/><description>National Projects</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Fri, 27 Feb 2026 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>National Projects</title><link>http://nlp.unibo.it/projects_national/</link></image><item><title>AI-based Smart Collaborative Manufacturing System (SmartCasm)</title><link>http://nlp.unibo.it/projects_national/smartcasm/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/smartcasm/</guid><description>&lt;p>The project involves using LLMs to integrate unstructured knowledge into industrial pipelines to speed up production and foster technical advancement.&lt;/p></description></item><item><title>Generative Models: Empowering Business Processes and Enhancing Workflows for Improved Performance (GeMEB)</title><link>http://nlp.unibo.it/projects_national/gemeb/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/gemeb/</guid><description>&lt;p>The project developing ad-hoc LLM-based solutions to speed up existing user assistance systems while guaranteeing privacy.&lt;/p></description></item><item><title>PRivacy Infringements Machine-Advice (PRIMA)</title><link>http://nlp.unibo.it/projects_national/prima/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/prima/</guid><description>&lt;p>PRIMA (PRivacy Infringements Machine-Advice) studies the law and practice of privacy policies, develops methods and techniques for their 
automated analysis, and implements a prototype to assess their lawfulness.&lt;/p></description></item><item><title>Sustainable Development Goals Artificial Intelligence Enhance (ALMA-GAIE)</title><link>http://nlp.unibo.it/projects_national/alma-gaie/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/alma-gaie/</guid><description>&lt;p>&amp;#x1f3c6; The project received the &lt;a href="https://magazine.unibo.it/en/articles/artificial-intelligence-for-sustainable-pa-alma-gaie-wins-first-place" target="_blank" rel="noopener">“PA a colori” 2024 award&lt;/a> for sustainability in public administrations.&lt;/p>
&lt;p>A project funded by the University of Bologna that aims to develop an AI system for the automatic classification of the University&amp;rsquo;s research and educational products according to their contribution to the 17 Goals of the United Nations&amp;rsquo; 2030 Agenda for Sustainable Development.&lt;/p></description></item><item><title>Equitable Algorithms, Promoting Fairness and Countering Algorithmic Discrimination Through Norms and Technologies (EquAl)</title><link>http://nlp.unibo.it/projects_national/equal/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/equal/</guid><description>&lt;p>The EquAl project addresses algorithmic evaluations, decisions, and predictions to promote fairness and counter discrimination affecting individuals and groups.
The research project is funded by the EU Commission under the NextGenerationEU program and the Italian Ministry of Education, University and Research. (PRIN 2022. Ref. prot. n.: 2022KFLF3E-001 - CUP J53D23005560001)
EquAl aims (i) to provide an understanding of the concepts of algorithmic unfairness and discrimination, bridging the notions adopted in social sciences, law, statistics, and artificial intelligence;
(ii) to identify the ways in which algorithmic unfairness originates and spreads in different social contexts, affecting individuals and groups, and in particular to identify the cases in which algorithmic unfairness leads to prohibited discrimination;
(iii) to analyse the ways in which the law currently addresses algorithmic discrimination and to propose appropriate measures to implement or upgrade the existing regulatory framework;
(iv) to examine the ways in which technologies can promote fairness and support the detection and countering of algorithmic unfairness and discrimination, in particular with regard to the assessment of asylum requests.
By identifying and remedying algorithmic unfairness and discrimination, EquAl will contribute to preventing and mitigating harms to individuals and groups and favour the law-abiding deployment of AI.
EquAl is premised on the fast-growing application of AI techniques for the purposes of prediction, evaluation, and decision making.
Algorithmic approaches have the potential to transform many aspects of economic and social life, delivering cost-effective solutions and increasing the equity, efficiency, controllability, and precision of decision-making processes.
However, they may also lead to new and more subtle, opaque, and resilient forms of unfairness and discrimination.
Some discriminatory effects have already been addressed by case-law in Europe and beyond, and some proposals exist to regulate aspects of automated decision-making, but no comprehensive regulatory framework exists yet.
EquAl aims to place Italian legal research at the forefront of the domain of algorithmic fairness and non-discrimination, by: (a) delivering new insights on the specific nature, functioning, and evolution of fair and unfair instances of algorithmic decision-making; (b) evaluating existing anti-discrimination technologies and developing new methods to detect instances of unfairness in human and automated decisions and protect vulnerable individuals; (c) providing ethical and legal guidance; and (d) supporting public bodies, NGOs, and local communities, in particular in the examination of asylum applications.
EquAl’s contribution is crucial to enhancing interdisciplinary cross-fertilisation, since different research communities (legal scholars, sociologists, computer scientists, statisticians) currently use different criteria and terminologies in debating algorithmic fairness and non-discrimination, and to ensuring that the corpus of EU and Italian anti-discrimination law, regulations, and case-law can be effectively applied in the algorithmic domain.&lt;/p></description></item><item><title>Legal Analytics for Italian Law (LAILA)</title><link>http://nlp.unibo.it/projects_national/laila/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/laila/</guid><description>&lt;p>The project concerns the application of Legal Analytics methods to a vast and heterogeneous set of legal information: legislation, contracts, and judgments. The purpose is the
application of Artificial Intelligence, Machine Learning, and Natural Language Processing
to extract legal knowledge, infer relationships, and produce data-driven forecasts.&lt;/p></description></item><item><title>Argument Mining In Covid-19 Articles (AMICA)</title><link>http://nlp.unibo.it/projects_national/amica/</link><pubDate>Fri, 01 Jan 2021 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/amica/</guid><description>&lt;p>The objective of the AMICA project was to exploit the argumentative content present
in the scientific literature regarding Covid-19 to improve the retrieval of relevant and
reliable articles. The project involved both medical and artificial intelligence experts and
aimed to develop an argument mining-based search engine, specifically designed for the
analysis of scientific literature related to Covid-19.&lt;/p></description></item></channel></rss>