Uncertainty is an inherent property of any ML model / AI system. It results either from epistemic sources, i.e., limitations of the model and its training data (e.g., labelling errors or insufficient data coverage), or from aleatoric sources, i.e., the inherent non-determinism of real-world cases (e.g., the cognitive status of the drivers, their driving behaviour, etc.).
Main Question
Have uncertainty estimation methods been considered, in order to evaluate the uncertainty related to a specific DNN prediction, given a specific input at run-time? (Kendall and Gal, 2017)
Sub-Questions
- Have any popular uncertainty estimation methods been considered, such as methods based on Monte-Carlo or Bayesian approaches (possibly in addition to techniques purely based on post-processing calibration)? (Gal and Ghahramani, 2016)
- Have rigorous tests been considered, in order to verify that the estimated uncertainty increases when the AI system (a DNN in particular) is applied outside of its training data distribution? (Sicking et al., 2021)
- Can the negative impact of highly uncertain predictions be detected and mitigated (e.g., by rejecting such predictions or deferring to a fallback)?
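As an illustration of the first sub-question, Monte-Carlo dropout (Gal and Ghahramani, 2016) keeps dropout active at inference time and treats the spread of repeated stochastic forward passes as an uncertainty estimate. The following is a minimal NumPy sketch on a toy, untrained one-hidden-layer network with illustrative fixed weights (the network, its weights, and all function names are assumptions for demonstration only, not a specific system under assessment):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy single-hidden-layer regression network with fixed, illustrative weights.
W1 = rng.normal(size=(1, 32))
W2 = rng.normal(size=(32, 1)) / np.sqrt(32)

def predict_with_dropout(x, p_drop=0.5):
    """One stochastic forward pass: dropout remains active at inference time."""
    h = np.maximum(x @ W1, 0.0)             # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop     # Bernoulli dropout mask
    h = h * mask / (1.0 - p_drop)           # inverted-dropout scaling
    return h @ W2

def mc_dropout_predict(x, n_samples=200):
    """Monte-Carlo dropout: predictive mean and std over n stochastic passes."""
    samples = np.stack([predict_with_dropout(x) for _ in range(n_samples)])
    return samples.mean(axis=0), samples.std(axis=0)

# Compare an input near the nominal range with one far outside it.
mean_in, std_in = mc_dropout_predict(np.array([[0.5]]))
mean_out, std_out = mc_dropout_predict(np.array([[50.0]]))
print(f"std (nominal input):  {std_in.item():.4f}")
print(f"std (extreme input):  {std_out.item():.4f}")
```

In this sketch the predictive standard deviation grows with the magnitude of the input, which loosely mirrors the behaviour probed by the second sub-question; a real assessment would instead evaluate the trained model on held-out in-distribution and out-of-distribution data.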
References
- Kendall, A. and Gal, Y. (2017) ‘What Uncertainties Do We Need in Bayesian Deep Learning for Computer Vision?’. Proceedings of the Conference on Neural Information Processing Systems (NIPS/NeurIPS) (Long Beach, CA, USA, 2017), pp. 5574–5584. Available at: https://proceedings.neurips.cc/paper_files/paper/2017/file/2650d6089a6d640c5e85b2b88265dc2b-Paper.pdf (Accessed 22 May 2024).
- Gal, Y. and Ghahramani, Z. (2016) ‘Dropout as a Bayesian Approximation: Representing Model Uncertainty in Deep Learning’. Proceedings of the International Conference on Machine Learning (ICML) (New York, NY, USA, 2016), pp. 1050–1059. Available at: https://arxiv.org/abs/1506.02142 (Accessed 22 May 2024).
- Sicking, J., Akila, M., Pintz, M., Wirtz, T., Fischer, A. and Wrobel, S. (2021) ‘A Novel Regression Loss for Non-Parametric Uncertainty Optimization’. Proceedings of the Symposium on Advances in Approximate Bayesian Inference (Virtual conference, 2021), pp. 1–27. doi: https://doi.org/10.48550/arXiv.2101.02726.