AI

Automated Driving (AD) functions are being developed to perform the primary driving task. These technologies hold great promise for enhancing safety and improving mobility. Advances in artificial intelligence (AI) have made this possible, contributing to the development and deployment of AD vehicles thanks to the ever-growing availability of large datasets from various sensing devices and of advanced computing resources [1]. AI has thus become an essential component of AD vehicles, above all for perceiving the surrounding environment, but also for making appropriate decisions in complex environments and dynamic conditions. To get the most out of AI for AD while minimizing its side effects, it is important to understand how AI works in these systems and how it can be implemented in their design, which is exactly the goal of this category.

In this context, AI – like any transformative technology – is a work in progress, continually growing in its capabilities and its societal impact. It is therefore necessary to recognize the real-world effects that AI can have on people and society, and to aim at using this power responsibly, for positive change. To address these points, a specific approach to AI development, called Trustworthy AI (TAI), prioritizes safety and transparency for the people who interact with it. Under TAI, AI systems must be explainable, fair, transparent, secure, safe, and robust, mitigating potential harms to people and organizations. TAI involves various requirements and components, such as unbiased data, interpretable models, oversight, and collaborative human-machine decision-making. Reviews discuss challenges such as AI bias and suggest solutions, including ethical oversight, accountability, and regulation, to build trust in AI systems for societal benefit. In addition to complying with privacy and consumer-protection laws, TAI models are tested for safety, security, and mitigation of unwanted bias [2]-[3]. In detail, there are some fundamental principles for TAI, whose goal is to enable trust and transparency in AI. They include:

  • Privacy, namely complying with regulations and safeguarding data (it is true that the more data an algorithm is trained on, the more accurate its predictions become; however, to develop TAI, it is important to consider not only whether data are legally available for use, but also whether using them is socially responsible).
  • Safety and Security, namely avoiding unintended harm and malicious threats (once deployed, AI systems have real-world impact, so it is essential that they perform as intended to preserve user safety).
  • Nondiscrimination, that is, minimizing bias (AI models are trained by humans, often using data that are limited in size, scope, and diversity. To ensure that all people and communities can benefit from this technology, it is important to reduce unwanted bias in AI systems).
  • Transparency, that is, making AI explainable (to create a TAI model, the algorithm cannot be a black box: its developers, users, and stakeholders must be able to understand how the AI works in order to trust its results). [4]-[5]
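Checks like the nondiscrimination principle above can be made concrete with simple fairness metrics computed over a model's outputs. The following is a minimal sketch, assuming demographic parity as the metric of interest; the toy predictions, the group labels "a" and "b", and the function names are illustrative assumptions, not part of the text above.

```python
# Minimal sketch: measuring one fairness metric (demographic parity)
# over a model's predictions. All data below are illustrative toys.

def positive_rate(preds, groups, group):
    """Share of positive (1) predictions within one group."""
    in_group = [p for p, g in zip(preds, groups) if g == group]
    return sum(in_group) / len(in_group)

def demographic_parity_gap(preds, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = [positive_rate(preds, groups, g) for g in set(groups)]
    return max(rates) - min(rates)

# Toy binary predictions for two groups, "a" and "b".
preds  = [1, 1, 0, 1, 1, 0, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

gap = demographic_parity_gap(preds, groups)
print(f"demographic parity gap: {gap:.2f}")  # 0.75 vs 0.25 -> 0.50
```

A gap near zero suggests the model assigns positive outcomes at similar rates across groups; a large gap is a signal to investigate the training data and model for unwanted bias.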

In particular, transparency in AI is a set of best practices, tools, and design principles that help users and other stakeholders understand how an AI model was trained and how it works. Explainable AI, or XAI, is a subset of transparency covering tools that inform stakeholders how an AI model makes particular predictions and decisions. These aspects are in some way addressed and included in our CoP through the different questions and sub-questions. [6]-[7]
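The core idea behind XAI tools such as the one in [4] can be sketched in a few lines: approximate a black-box model around a single input with a local linear surrogate, and read the surrogate's coefficients as feature attributions. This is a simplified sketch of that idea, not the cited tool itself; the `model` function, sample count, and noise scale are illustrative assumptions.

```python
# Minimal sketch of local surrogate explanation (the idea behind [4]):
# perturb one input, query the black box, fit a linear model to the
# responses, and treat its coefficients as per-feature attributions.
import numpy as np

def explain_locally(black_box, x, n_samples=500, scale=0.1, seed=0):
    """Fit a linear surrogate to black_box near x; return coefficients."""
    rng = np.random.default_rng(seed)
    # Sample perturbed inputs in a small neighborhood of x.
    X = x + scale * rng.standard_normal((n_samples, x.size))
    y = black_box(X)
    # Least-squares fit with an intercept column appended.
    A = np.hstack([X, np.ones((n_samples, 1))])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef[:-1]  # drop the intercept; keep per-feature weights

# Toy black box: feature 0 dominates, feature 2 is ignored.
def model(X):
    return 3.0 * X[:, 0] - 1.0 * X[:, 1] + 0.0 * X[:, 2]

weights = explain_locally(model, np.array([1.0, 2.0, 3.0]))
print(weights)  # approximately [ 3. -1.  0.]
```

Because the toy black box is itself linear, the surrogate recovers its weights almost exactly; for a real nonlinear model the coefficients describe only the local behavior around the chosen input.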

References

  • [1] Y. F. Ma, Z. Y. Wang, H. Yang, and L. Yang, “Artificial intelligence applications in the development of autonomous vehicles: a survey,” IEEE/CAA J. Autom. Sinica, vol. 7, no. 2, pp. 315–329, Mar. 2020.
  • [2] S. A. Seshia and D. Sadigh, “Towards verified artificial intelligence,” CoRR, abs/1606.08514, 2016.
  • [3] D. Amodei, C. Olah, J. Steinhardt, P. Christiano, J. Schulman, and D. Mané, “Concrete problems in AI safety,” CoRR, abs/1606.06565, 2016.
  • [4] M. T. Ribeiro, S. Singh, and C. Guestrin, “‘Why should I trust you?’: Explaining the predictions of any classifier,” in Proc. ACM SIGKDD International Conference on Knowledge Discovery and Data Mining (KDD 2016), 2016.
  • [5] B. Shneiderman, “Bridging the gap between ethics and practice: Guidelines for reliable, safe, and trustworthy human-centered AI systems,” ACM Trans. Interact. Intell. Syst., Article 26, 31 pages, Oct. 2020. https://doi.org/10.1145/3419764
  • [6] E. Kazim, D. M. T. Denny, and A. Koshiyama, “AI auditing and impact assessment: according to the UK information commissioner’s office,” AI Ethics, vol. 1, pp. 301–310, 2021. https://doi.org/10.1007/s43681-021-00039-2
  • [7] “EU AI Act: first regulation on artificial intelligence.” Published: 08-06-2023. Last updated: 19-02-2025.