Ruhr-Uni-Bochum

On Hyperparameters and Backdoor-Resistance in Horizontal Federated Learning

2025

Conference / Journal

Research Hub

Research Hub C: Secure Systems

Research Challenges

RC 9: Intelligent Security Systems

Abstract

Horizontal Federated Learning (HFL) is particularly vulnerable to backdoor attacks, as adversaries can easily manipulate both the training data and the training process to execute sophisticated attacks. In this work, we study the impact of training hyperparameters on the effectiveness of backdoor attacks and defenses in HFL. More specifically, we show both analytically and by means of measurements that the choice of hyperparameters by benign clients influences not only model accuracy but also, significantly, backdoor attack success. This stands in sharp contrast with the multitude of contributions in the area of HFL security, which often rely on custom ad-hoc hyperparameter choices for benign clients, leading to more pronounced backdoor attack strength and diminished impact of defenses. Our results indicate that properly tuning benign clients' hyperparameters, such as the learning rate, batch size, and number of local epochs, can significantly curb the effectiveness of backdoor attacks, regardless of the malicious clients' settings. We support this claim with an extensive robustness evaluation of state-of-the-art attack-defense combinations, showing that carefully chosen hyperparameters yield across-the-board improvements in robustness without sacrificing main task accuracy. For example, we show that the 50%-lifespan of the strong A3FL attack can be reduced by 98.6%, all without using any defense and while incurring a drop of only 2.9 percentage points in clean task accuracy.
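To make concrete where the hyperparameters discussed above enter the picture, the following minimal sketch shows a benign client's local update in one HFL round, parameterized by learning rate, batch size, and number of local epochs. It is an illustrative assumption-based example (function and variable names are hypothetical), not the paper's implementation.

# Illustrative sketch (not the paper's code): a benign client's local update in
# one HFL round, showing where the studied hyperparameters (learning rate,
# batch size, number of local epochs) enter the training loop.
import copy
import torch
from torch import nn
from torch.utils.data import DataLoader, TensorDataset

def local_update(global_model: nn.Module,
                 dataset: TensorDataset,
                 lr: float = 0.01,          # benign-client learning rate
                 batch_size: int = 64,      # benign-client batch size
                 local_epochs: int = 2):    # benign-client number of local epochs
    """Train a copy of the global model on local data and return the parameter deltas."""
    model = copy.deepcopy(global_model)
    loader = DataLoader(dataset, batch_size=batch_size, shuffle=True)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()

    model.train()
    for _ in range(local_epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss = criterion(model(x), y)
            loss.backward()
            optimizer.step()

    # The server aggregates these deltas (e.g., via FedAvg). The paper's point is
    # that the benign choices of lr, batch_size, and local_epochs above measurably
    # affect how well a backdoored update survives aggregation.
    return {name: p.detach() - q.detach()
            for (name, p), (_, q) in zip(model.named_parameters(),
                                         global_model.named_parameters())}

In this framing, the paper's evaluation can be read as sweeping these benign-client settings while keeping the malicious clients' settings fixed, and measuring the resulting backdoor success and clean task accuracy.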

Tags

Machine Learning