AI – Fairness and Equity

Artificial intelligence (AI) offers many opportunities to contribute to the well-being of individuals and the advancement of economies and societies, but it also raises a variety of novel ethical, legal, social, and technological challenges. There are side effects as well: AI systems can be vulnerable to security attacks and can exhibit considerable bias when handling errors, putting humans at risk and leaving machines, robots, and data defenseless. In addition, since AI systems are increasingly involved in decision-making processes, the concept of trust and its management has received considerable attention: trust is perceived as the basis for decision making in many contexts and as the motivation for maintaining long-term relationships based on cooperation and collaboration.

Trustworthy AI (TAI) can therefore represent a possible answer: it is based on the idea that trust forms the foundation of societies, economies, and sustainable development, and that individuals, organizations, and societies will only realize the full potential of AI if trust can be established in its development, deployment, and use. The concept of TAI therefore guides the rest of the topics presented in this category, as it can safeguard human values and the environment.

In particular, TAI comprises several key elements, such as "Accountability & Privacy", "Safety & Security", "Fairness & Transparency", and "Explainability & Controllability". Moreover, it is important to investigate engineering pitfalls and to assess the typical threats and risks associated with AI systems.

Following the activity of SC42 WG3 (ISO/IEC JTC 1/SC 42), TAI can be defined as the degree to which a user or other stakeholder has confidence that a product or system will behave as intended. In detail, we consider the following elements: i) unbiased and non-discriminative AI; ii) explainable AI (XAI); iii) robustness; iv) privacy.

We are aware that this list is not exhaustive, but the field is so broad that some selection is necessary: we regard these aspects as the most relevant and the most investigated in the literature (ISO/IEC JTC 1/SC 42, 2020; Ribeiro, 2016; Chamola et al., 2023; Simion et al., 2023; Kaur et al., 2021).

Main Question

Does the AI-based solution deliver fair and equitable outcomes?

Sub-Questions:

  1. Has the ability of the AI system been considered? The ability of an AI system can be defined as its capability to perform a specific task (for example, in terms of robustness, safety, reliability, etc.).
  2. Has the integrity of the AI system been considered? Integrity is regarded as the assurance that the model's inputs will not be maliciously manipulated by the AI system itself (for example, in terms of completeness, accuracy, certainty, consistency, etc.).
  3. Has the benevolence of the AI system been considered? Benevolence is defined as the extent to which the system is believed to do good (in other words, the extent to which the "Do Not Harm" principle is respected).
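As an illustration of how the main question above might be probed quantitatively, the sketch below computes a demographic parity difference, i.e. the gap in positive-outcome rates between two groups of users. This metric, the hypothetical prediction data, and the helper names are assumptions introduced here for illustration; they are not prescribed by the source.

```python
# Illustrative sketch (assumption, not from the source): checking one common
# fairness criterion, demographic parity, on hypothetical model outputs.

def positive_rate(predictions):
    """Share of positive (1) outcomes among the given binary predictions."""
    return sum(predictions) / len(predictions)

def demographic_parity_difference(preds_group_a, preds_group_b):
    """Absolute gap between the two groups' positive-outcome rates.

    A value near 0 suggests the outcomes are equitable under this criterion;
    a large gap is a signal that the system's fairness needs closer review.
    """
    return abs(positive_rate(preds_group_a) - positive_rate(preds_group_b))

# Hypothetical binary predictions (e.g. loan approvals) for two groups.
group_a = [1, 1, 0, 1, 0, 1, 1, 0]  # positive rate 5/8 = 0.625
group_b = [1, 0, 0, 1, 0, 0, 1, 0]  # positive rate 3/8 = 0.375

gap = demographic_parity_difference(group_a, group_b)
print(f"Demographic parity difference: {gap:.3f}")  # prints 0.250
```

Demographic parity is only one of several fairness criteria (others include equalized odds and predictive parity), and which criterion is appropriate depends on the deployment context.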

References

  • Ribeiro, 2016