Adversarial Preprocessing: Understanding and Preventing Image-Scaling Attacks in Machine Learning
2020
Conference / Journal
Research Hub
Research Hub C: Secure Systems
Research Challenges
RC 9: Intelligent Security Systems
Abstract
Machine learning has made remarkable progress in the last years, yet its success has been overshadowed by different attacks that can thwart its correct operation. While a large body of research has studied attacks against learning algorithms, vulnerabilities in the preprocessing for machine learning have received little attention so far. An exception is the recent work of Xiao et al. that proposes attacks against image scaling. In contrast to prior work, these attacks are agnostic to the learning algorithm and thus impact the majority of learning-based approaches in computer vision. The mechanisms underlying the attacks, however, are not understood yet, and hence their root cause remains unknown.

In this paper, we provide the first in-depth analysis of image-scaling attacks. We theoretically analyze the attacks from the perspective of signal processing and identify their root cause as the interplay of downsampling and convolution. Based on this finding, we investigate three popular imaging libraries for machine learning (OpenCV, TensorFlow, and Pillow) and confirm the presence of this interplay in different scaling algorithms. As a remedy, we develop a novel defense against image-scaling attacks that prevents all possible attack variants. We empirically demonstrate the efficacy of this defense against non-adaptive and adaptive adversaries.
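The leverage exploited by image-scaling attacks can be made concrete with a small example. The sketch below is not the paper's attack code; it only assumes OpenCV's standard cv2.resize with nearest-neighbour interpolation and a 512x512 to 32x32 downscaling. Because the output is computed from a sparse set of source pixels, modifying well under one percent of the input can fully control what the downscaled image, and hence the learning model, sees.

```python
# Minimal sketch (not the authors' attack code): illustrates why downsampling
# gives an attacker leverage. With a large scaling ratio, nearest-neighbour
# resize reads only a sparse grid of source pixels, so changing just those
# pixels controls the small output while barely altering the large input.
import cv2
import numpy as np

src = np.full((512, 512), 255, dtype=np.uint8)   # benign all-white source image
target = np.zeros((32, 32), dtype=np.uint8)      # image the attacker wants after scaling

# Positions sampled when downscaling 512x512 -> 32x32 with nearest neighbour:
# roughly one pixel per 16x16 block (exact offsets depend on the library/algorithm).
step = 512 // 32
attack = src.copy()
for i in range(32):
    for j in range(32):
        attack[i * step, j * step] = target[i, j]  # assumed sampled positions

scaled = cv2.resize(attack, (32, 32), interpolation=cv2.INTER_NEAREST)
modified = np.mean(attack != src)

print(f"fraction of source pixels modified: {modified:.4%}")   # about 0.39 %
print(f"downscaled output matches attacker target: {np.array_equal(scaled, target)}")
```

The same interplay of downsampling and convolution exists, in more involved form, for bilinear and bicubic scaling, where each output pixel is a weighted sum over a narrow convolution window rather than a single sampled pixel.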