<?xml version="1.0" encoding="utf-8" standalone="yes"?><rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom"><channel><title>Fairness | Language Technologies Lab</title><link>http://nlp.unibo.it/tag/fairness/</link><atom:link href="http://nlp.unibo.it/tag/fairness/index.xml" rel="self" type="application/rss+xml"/><description>Fairness</description><generator>Hugo Blox Builder (https://hugoblox.com)</generator><language>en-us</language><lastBuildDate>Mon, 01 Jan 2024 00:00:00 +0000</lastBuildDate><image><url>http://nlp.unibo.it/media/icon_hu_7613a4a452ac7087.png</url><title>Fairness</title><link>http://nlp.unibo.it/tag/fairness/</link></image><item><title>Future Artificial Intelligence Research (FAIR)</title><link>http://nlp.unibo.it/projects_international/fair/</link><pubDate>Mon, 01 Jan 2024 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_international/fair/</guid><description>&lt;p>The objective of the FAIR project is to contribute to addressing the research questions, methodologies, models, technologies, and ethical and legal rules needed to build AI systems capable of interacting and collaborating with humans, perceiving and acting in evolving contexts, aware of their own limits and able to adapt to new situations, conscious of the perimeters of safety and trust, and mindful of the environmental and social impact that their creation and operation may cause.&lt;/p></description></item><item><title>Equitable Algorithms, Promoting Fairness and Countering Algorithmic Discrimination Through Norms and Technologies (EquAl)</title><link>http://nlp.unibo.it/projects_national/equal/</link><pubDate>Sat, 01 Jan 2022 00:00:00 +0000</pubDate><guid>http://nlp.unibo.it/projects_national/equal/</guid><description>&lt;p>The EquAl project addresses algorithmic evaluations, decisions, and predictions, to promote fairness and counter discrimination affecting individuals and groups.
The research project is funded by the EU Commission under the NextGenerationEU program and the Italian Ministry of Education, University and Research (PRIN 2022. Ref. prot. n.: 2022KFLF3E-001 - CUP J53D23005560001).
EquAl aims: (i) to provide an understanding of the concepts of algorithmic unfairness and discrimination, bridging the notions adopted in social sciences, law, statistics, and artificial intelligence;
(ii) to identify the ways in which algorithmic unfairness originates and spreads in different social contexts, affecting individuals and groups, and in particular the cases in which algorithmic unfairness leads to prohibited discrimination;
(iii) to analyse the ways in which the law currently addresses algorithmic discrimination and to propose appropriate measures to implement or upgrade the existing regulatory framework;
(iv) to examine the ways in which technologies can promote fairness and support the detection and countering of algorithmic unfairness and discrimination, in particular with regard to the assessment of asylum requests.
By identifying and remedying algorithmic unfairness and discrimination, EquAl will contribute to preventing and mitigating harms to individuals and groups and favour the law-abiding deployment of AI.
EquAl is premised on the fast-growing application of AI techniques for the purposes of prediction, evaluation, and decision making.
Algorithmic approaches have the potential to transform many aspects of economic and social life, delivering cost-effective solutions and increasing the equity, efficiency, controllability, and precision of decision-making processes.
However, they may also lead to new and more subtle, opaque, and resilient forms of unfairness and discrimination.
Some discriminatory effects have already been addressed by case-law in Europe and beyond, and some proposals exist to regulate aspects of automated decision-making, but no comprehensive regulatory framework exists yet.
EquAl aims to place Italian legal research at the forefront of the domain of algorithmic fairness and non-discrimination by: (a) delivering new insights on the specific nature, functioning, and evolution of fair and unfair instances of algorithmic decision-making; (b) evaluating existing anti-discrimination technologies and developing new methods to detect instances of unfairness in human and automated decisions and to protect vulnerable individuals; (c) providing ethical and legal guidance; and (d) supporting public bodies, NGOs, and local communities, in particular in the examination of asylum applications.
EquAl’s contribution is crucial to enhancing interdisciplinary cross-fertilisation, since different research communities (legal scholars, sociologists, computer scientists, statisticians) currently use different criteria and terminologies when debating algorithmic fairness and non-discrimination, and to ensuring that the corpus of EU and Italian anti-discrimination law, regulations, and case-law can be effectively applied in the algorithmic domain.&lt;/p></description></item></channel></rss>