Lectures: Uncertainty Quantification in Machine Learning
We had the privilege of hosting Prof. Dr. Eyke Hüllermeier for an engaging and insightful talk on Uncertainty Quantification in Machine Learning. A huge thank you to Prof. Hüllermeier for sharing his expertise and sparking meaningful discussions around this critical topic.
Why is Uncertainty Quantification important? As Prof. Hüllermeier highlighted, understanding and addressing uncertainty in AI is key to building trustworthy and robust machine learning models. This is especially crucial in safety-critical applications like autonomous driving and medical diagnostics, where the cost of errors can be significant.
He explained two main types of uncertainties in machine learning:
- Aleatoric Uncertainty: Stemming from inherent randomness in the data-generating process, this type of uncertainty cannot be reduced. For instance, when snow makes a traffic sign barely visible, even the best model cannot interpret it reliably.
- Epistemic Uncertainty: This arises from the model's lack of knowledge or capacity. For example, if a traffic sign is of a type the model has never seen, it may fail to predict it correctly. Unlike aleatoric uncertainty, this kind can often be reduced with more data or better models (see the sketch after this list for one way to separate the two).
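One common way to make this distinction concrete, discussed among other approaches in the survey linked below, is an entropy-based decomposition over an ensemble of classifiers: the entropy of the averaged prediction (total uncertainty) splits into the average entropy of the individual members (aleatoric) and the remaining gap (epistemic, i.e., disagreement among members). The following Python sketch is a minimal illustration of that idea; the function name and toy probability values are our own and were not part of the talk.

```python
import numpy as np

def uncertainty_decomposition(member_probs):
    """Entropy-based split of predictive uncertainty for an ensemble.

    member_probs: array of shape (n_members, n_classes), each row the
    predictive distribution of one ensemble member for a single input.
    Returns (total, aleatoric, epistemic) in nats.
    """
    eps = 1e-12  # guard against log(0)
    mean_probs = member_probs.mean(axis=0)

    # Total uncertainty: entropy of the averaged prediction.
    total = -np.sum(mean_probs * np.log(mean_probs + eps))

    # Aleatoric part: average entropy of the individual members
    # (noise every member agrees is there; cannot be reduced).
    aleatoric = -np.mean(np.sum(member_probs * np.log(member_probs + eps), axis=1))

    # Epistemic part: the mutual-information gap, i.e., disagreement
    # among members that more data or better models could reduce.
    epistemic = total - aleatoric
    return total, aleatoric, epistemic

# Members agree on a noisy 50/50 prediction: purely aleatoric.
agree = np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])
# Members disagree confidently: largely epistemic.
disagree = np.array([[0.99, 0.01], [0.01, 0.99], [0.5, 0.5]])

for name, probs in [("agree", agree), ("disagree", disagree)]:
    t, a, e = uncertainty_decomposition(probs)
    print(f"{name}: total={t:.3f} aleatoric={a:.3f} epistemic={e:.3f}")
```

Both toy cases have the same total uncertainty (the averaged prediction is 50/50), yet the decomposition attributes it to noise in the first case and to model disagreement in the second, which is exactly the distinction the lecture emphasized.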
Key findings from Prof. Hüllermeier's group in this context:
- A survey on aleatoric and epistemic uncertainty in machine learning demonstrates the relevance of the topic and reviews a range of approaches: https://link.springer.com/article/10.1007/s10994-021-05946-3
- In a recent study, they found that Evidential Deep Learning cannot faithfully quantify epistemic uncertainty, since no proper scoring rule exists under the assumptions these methods make: https://proceedings.mlr.press/v235/juergens24a.html
This lecture was a wonderful reminder of the value of uncertainty quantification in ensuring the reliability of AI systems. It’s not just about making accurate predictions but also about knowing when to trust those predictions.
A big thanks to everyone who attended and contributed to the thought-provoking discussions. Let’s keep exploring how we can address uncertainties to create more robust AI models.