This question addresses both topics, “Dataset Design for AI” and “Unbiased and Non-discriminatory AI”: the primary goal of this headline is to understand how AI works and how it can be implemented to design reliable and responsible autonomous driving (AD) systems while minimizing side effects.
Main Question
Is unwanted bias reduced in the training data and algorithms to ensure fair outcomes for all people, regardless of background?
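One common way to make this question operational is to quantify bias in model outcomes with a group fairness metric. The sketch below computes the demographic parity difference (the gap in positive-outcome rates between groups); the function, group labels, and predictions are all hypothetical illustrations, not part of any framework cited here.

```python
def demographic_parity_difference(predictions, groups):
    """Return the largest gap in positive-outcome rates across groups.

    A value near 0 suggests the model issues positive outcomes at
    similar rates for every group; larger values indicate disparity.
    """
    counts = {}  # group -> (total, positives)
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + (1 if pred == 1 else 0))
    rates = [positives / total for total, positives in counts.values()]
    return max(rates) - min(rates)

# Hypothetical binary predictions for two demographic groups, A and B:
# group A receives positive outcomes at 3/4, group B at 1/4.
preds = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # 0.5
```

A metric like this can be tracked during dataset design and model evaluation; which groups and which fairness definition are appropriate depends on the AD deployment context.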
Sub-Questions
- Do the algorithms and data-collection processes uphold the social responsibility of safeguarding sensitive data?
- Should AD systems with deployed AI be protected against misuse and cyber threats?
References
- Kazim, 2021
- Ma, Y. et al. (2020) ‘Artificial intelligence applications in the development of Autonomous Vehicles: A survey’, IEEE/CAA Journal of Automatica Sinica, 7(2), pp. 315–329. https://doi.org/10.1109/jas.2020.1003021.
- Salay, R., Queiroz, R. and Czarnecki, K. (2017) ‘An Analysis of ISO 26262: Using Machine Learning Safely’. Automotive Software. https://doi.org/10.48550/arXiv.1709.02435.
- Ghahramani, Z. (2015) ‘Probabilistic machine learning and artificial intelligence’, Nature, 521(7553), pp. 452–459. https://doi.org/10.1038/nature14541.