Approaches to Uncertainty Quantification in Federated Deep Learning
2021
Conference / Medium
Research Hub C: Secure Systems
RC 9: Intelligent Security Systems
Trustworthy machine learning encompasses both data privacy and a robust assessment of predictive uncertainty. Methods for quantifying uncertainty in deep learning have recently gained attention, while federated deep learning makes it possible to utilize distributed data sources in a privacy-preserving manner. In this paper, we integrate several approaches for uncertainty quantification into federated deep learning. In particular, we show that prominent approaches such as MC-dropout and stochastic weight averaging Gaussian (SWAG) can be extended efficiently to the federated setup. Moreover, we demonstrate that deep ensembles integrate naturally into the federated learning framework. Our empirical evaluation confirms that trustworthy uncertainty quantification on out-of-distribution data is possible in federated learning with little (SWAG) to no (MC-dropout, ensembles) additional communication.
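MC-dropout, mentioned above, estimates uncertainty by keeping dropout active at test time and averaging several stochastic forward passes. A minimal stdlib-only sketch of this idea (the toy one-layer classifier, its weights, and all function names are illustrative assumptions, not the paper's implementation):

```python
import math
import random

def dropout(values, p, rng):
    # Zero each activation with probability p; scale survivors by 1/(1-p).
    return [0.0 if rng.random() < p else v / (1.0 - p) for v in values]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(l - m) for l in logits]
    total = sum(exps)
    return [e / total for e in exps]

def forward(x, weights, p, rng):
    # Hypothetical one-layer classifier: dropout on the input, then a linear map.
    h = dropout(x, p, rng)
    logits = [sum(w_i * h_i for w_i, h_i in zip(row, h)) for row in weights]
    return softmax(logits)

def mc_dropout_predict(x, weights, p=0.5, passes=100, seed=0):
    # Dropout stays active at prediction time; average T stochastic passes.
    rng = random.Random(seed)
    samples = [forward(x, weights, p, rng) for _ in range(passes)]
    n_classes = len(samples[0])
    mean = [sum(s[c] for s in samples) / passes for c in range(n_classes)]
    # Predictive entropy of the averaged distribution as the uncertainty score.
    entropy = -sum(q * math.log(q) for q in mean if q > 0)
    return mean, entropy

weights = [[1.0, -0.5], [-1.0, 0.5]]  # toy 2-class weights (illustrative only)
mean, entropy = mc_dropout_predict([0.2, 0.8], weights)
```

Because only the already-trained model is sampled, this scheme adds no communication beyond standard federated training, which matches the abstract's claim for MC-dropout.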
While all methods perform well in our empirical analysis and should serve as baselines for future developments in this field, deep ensembles and MC-dropout allow for better uncertainty-based identification of out-of-distribution and misclassified data.
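The natural fit between deep ensembles and federated learning can be sketched as follows: each client trains its own model on local data, and the server treats the collected client models as ensemble members, using their disagreement as an uncertainty signal. This is a hedged stdlib-only illustration (the logistic-regression "clients", data shards, and hyperparameters are assumptions for the sketch, not the paper's setup):

```python
import math
import random

def train_client(data, seed, steps=200, lr=0.1):
    # Hypothetical local training: 1-D logistic regression on one client's
    # shard, from a random init so ensemble members differ.
    rng = random.Random(seed)
    w, b = rng.uniform(-1, 1), rng.uniform(-1, 1)
    for _ in range(steps):
        for x, y in data:
            p = 1.0 / (1.0 + math.exp(-(w * x + b)))
            w -= lr * (p - y) * x
            b -= lr * (p - y)
    return w, b

def ensemble_predict(models, x):
    # Average member probabilities; the variance across members is a simple
    # disagreement-based uncertainty estimate.
    probs = [1.0 / (1.0 + math.exp(-(w * x + b))) for w, b in models]
    mean = sum(probs) / len(probs)
    var = sum((p - mean) ** 2 for p in probs) / len(probs)
    return mean, var

# Each "client" holds its own shard; its final model is one ensemble member,
# so no extra communication beyond sending the trained models is needed.
shards = [[(-2.0, 0), (2.0, 1)],
          [(-1.5, 0), (1.5, 1)],
          [(-1.0, 0), (2.5, 1)]]
models = [train_client(shard, seed=i) for i, shard in enumerate(shards)]
mean, var = ensemble_predict(models, 0.5)
```

The design choice mirrors the abstract: ensemble members fall out of federated training for free, whereas SWAG needs a little extra communication to aggregate the weight statistics that define its Gaussian posterior.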