Ruhr-Uni-Bochum

Florian Tramèr (Stanford University)

"Measuring and Enhancing the Security of Machine Learning"


Abstract. Failures of machine learning systems can threaten both the security and privacy of their users. Florian Tramèr's research studies these failures from an adversarial perspective, by building new attacks that highlight critical vulnerabilities in the machine learning pipeline and by designing new defenses that protect users against the identified threats. In the first part of this talk, he will explain why machine learning models are so vulnerable to adversarially chosen inputs. He will also show that many proposed defenses are ineffective and cannot protect models deployed in overtly adversarial settings, such as content moderation on the Web. In the second part of the talk, he will focus on the issue of data privacy in machine learning systems and demonstrate how to enhance privacy by combining techniques from cryptography, statistics, and computer security.
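As background for the "adversarially chosen inputs" mentioned in the abstract (this is not material from the talk itself), the sketch below shows the classic fast gradient sign method of Goodfellow et al., one of the simplest ways to craft such an input. It assumes PyTorch; the function name and the parameters `model`, `x`, `label`, and `eps` are illustrative placeholders.

```python
# Minimal sketch of the fast gradient sign method (FGSM), assuming PyTorch.
import torch
import torch.nn as nn

def fgsm_example(model: nn.Module, x: torch.Tensor, label: torch.Tensor,
                 eps: float = 0.03) -> torch.Tensor:
    """Perturb input x by eps in the direction that maximizes the model's loss."""
    x = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x), label)
    loss.backward()
    # Step in the sign of the input gradient and clamp to the valid pixel range.
    x_adv = x + eps * x.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```

Even a small `eps`, imperceptible to humans, often suffices to flip a classifier's prediction, which is the vulnerability the first part of the talk examines.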

Biography. Florian Tramèr is a PhD student at Stanford University advised by Dan Boneh. His research interests lie in computer security, cryptography, and machine learning security. In his current work, he studies the worst-case behavior of deep learning systems from an adversarial perspective in order to understand and mitigate long-term threats to the safety and privacy of users. Florian is supported by a fellowship from the Swiss National Science Foundation and a gift from the Open Philanthropy Project.

To the YouTube video