Existing standards for the regulation of functional safety, in particular ISO 26262, are not compatible with typical AI methods, and with ML in particular (Salay et al., 2017; Ma et al., 2020). In fact, a common characteristic of AI systems is that they process uncertain data in contexts that themselves carry uncertainty, so their results may in turn be associated with some degree of uncertainty (Ghahramani, 2015).
Against this background, the topic “Verification & Validation” (V&V for short) focuses on “Verification” for this question, with the goal of formally guaranteeing properties of ML techniques (especially deep neural networks) such as robustness, safety, and correctness.
In particular, this first question concerns assessment methods related to safety (Kazim, 2021).
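To make the idea of formally guaranteeing robustness concrete, the following is a minimal sketch (not part of the source, and not a production verifier) of interval bound propagation (IBP), one common technique for certifying neural-network robustness. It propagates lower and upper bounds through a small ReLU network; if the bounds on the output scores show that the true class provably stays on top for every input in an L-infinity ball, the prediction is certified robust in that ball. All weights and function names here are illustrative assumptions.

```python
import numpy as np

def ibp_bounds(lower, upper, weights, biases):
    """Propagate interval bounds through an MLP with ReLU hidden layers."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        # Split weights by sign: positive weights preserve bound order,
        # negative weights swap lower and upper.
        W_pos, W_neg = np.maximum(W, 0), np.minimum(W, 0)
        new_lower = W_pos @ lower + W_neg @ upper + b
        new_upper = W_pos @ upper + W_neg @ lower + b
        if i < len(weights) - 1:  # ReLU on hidden layers only
            new_lower = np.maximum(new_lower, 0)
            new_upper = np.maximum(new_upper, 0)
        lower, upper = new_lower, new_upper
    return lower, upper

def certify_robust(x, eps, weights, biases, true_class):
    """True if the true class provably wins for all perturbations within eps."""
    lo, hi = ibp_bounds(x - eps, x + eps, weights, biases)
    worst_other = np.delete(hi, true_class).max()
    return bool(lo[true_class] > worst_other)

# Toy single-layer "network" with identity weights, purely for illustration.
weights = [np.eye(2)]
biases = [np.zeros(2)]
x = np.array([1.0, 0.0])
print(certify_robust(x, 0.1, weights, biases, true_class=0))  # small ball: certified
print(certify_robust(x, 0.6, weights, biases, true_class=0))  # large ball: not certified
```

Note that IBP is sound but conservative: a `False` result means the bounds could not certify robustness, not that an adversarial example necessarily exists.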
Main Question
Can a “safety assessment” be achieved?
Sub-Questions
- Were risks identified and compared against the safety assessment performed before market introduction?
- Were risks or anomalies identified and addressed during the safety assessment?
- Is continued safety assessment guaranteed after market introduction?
References
- Kazim, 2021
- Ma, Y. et al. (2020) ‘Artificial intelligence applications in the development of Autonomous Vehicles: A survey’, IEEE/CAA Journal of Automatica Sinica, 7(2), pp. 315–329. doi:https://doi.org/10.1109/jas.2020.1003021.
- Salay, R., Queiroz, R. and Czarnecki, K. (2017) ‘An Analysis of ISO 26262: Using Machine Learning Safely’. Automotive Software. doi:https://doi.org/10.48550/arXiv.1709.02435.
- Ghahramani, Z. (2015) ‘Probabilistic machine learning and artificial intelligence’. Nature, 521(7553), pp.452–459. doi:https://doi.org/10.1038/nature14541.