Ruhr-Uni-Bochum
Cyber Security in the Age of Large-Scale Adversaries

CASA Distinguished Lectures

 

In the CASA Distinguished Lectures, we welcome selected international and national researchers to the HGI. Each of these talks by excellent guest speakers, usually an hour long, is always followed by a discussion with the attendees. In this way, we pursue our goal of driving a lively exchange of ideas within cyber security research and opening up new perspectives.

Due to the current situation surrounding the COVID-19 pandemic, the lectures are being held online, making them accessible to interested audiences around the world. The access link for each event is shared below the information about the speakers.

 

Services Related to the Distinguished Lectures

On our YouTube channel, you can watch several past Distinguished Lectures in full. If you would like to be notified about upcoming talks, please sign up for our newsletter.

N. Asokan, Buse Gul Atli, Sebastian Szyller (University of Waterloo and Aalto University)

"Extraction of Complex DNN Models: Real Threat or Boogeyman?"


Abstract. The success of deep learning in many application domains has been nothing short of dramatic. This success has also brought the spotlight onto security and privacy concerns with deep learning. One of them is the threat of "model extraction": when a machine learning model is made available to customers via an inference interface, a malicious customer can issue repeated queries to this interface and use the information gained to construct a surrogate model. In this talk, I will describe our work exploring whether model extraction constitutes a realistic threat. I will also discuss possible countermeasures and the challenges in deploying them in popular machine learning configurations like federated learning.
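To make the threat model concrete, the following minimal sketch (in Python, using scikit-learn) illustrates the extraction loop described in the abstract: the attacker sees only a label-returning inference interface, issues repeated queries, and fits a surrogate on the transcript of queries and responses. The victim model, the synthetic data, and the inference_api function are hypothetical placeholders for illustration, not the setup studied in the talk.

    # Minimal illustrative sketch of model extraction; all models and
    # data below are hypothetical placeholders, not from the talk.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)

    # Victim model: the attacker never sees it directly, only its predictions.
    X_train = rng.normal(size=(1000, 20))
    y_train = (X_train[:, 0] + X_train[:, 1] > 0).astype(int)
    victim = RandomForestClassifier(n_estimators=50, random_state=0)
    victim.fit(X_train, y_train)

    def inference_api(x):
        """Black-box inference interface: returns predicted labels only."""
        return victim.predict(x)

    # Attacker: issue repeated queries and record the responses.
    queries = rng.normal(size=(2000, 20))
    labels = inference_api(queries)

    # Fit a surrogate model on the query/response transcript.
    surrogate = LogisticRegression().fit(queries, labels)

    # Agreement with the victim on fresh inputs indicates how much
    # of the victim's functionality the surrogate has captured.
    X_test = rng.normal(size=(500, 20))
    agreement = (surrogate.predict(X_test) == inference_api(X_test)).mean()
    print(f"surrogate/victim agreement: {agreement:.2%}")

The fraction of fresh inputs on which surrogate and victim agree is a common way to quantify how closely the attacker has replicated the victim's behavior without ever accessing its parameters or training data.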

Biography.

N. Asokan is a professor of computer science and a David R. Cheriton Chair at the University of Waterloo. He is also an adjunct professor of computer science at Aalto University. His research interests are broadly in the domain of systems security, with particular emphasis on platform security and the interplay between artificial intelligence and security/privacy problems. Asokan joined academia after a long career in industrial research, first at IBM and subsequently at Nokia. He is a fellow of both IEEE and ACM. For more information about his research, visit his homepage or follow him on Twitter at @nasokan.

Buse Gul Atli is a senior doctoral student in the Secure Systems Group. She obtained her M.Sc. degree in Signal, Speech and Language Processing from Aalto University in 2017. She was an intern at Nokia Bell Labs, where she worked on designing machine learning methods for cybersecurity. Her work focuses on both offensive and defensive methods related to the security and privacy of machine learning.

Sebastian Szyller is a senior doctoral student in the Secure Systems Group. Sebastian is an expert on the security and privacy of AI. He holds an M.Sc. degree in Machine Learning and Data Mining from Aalto University. Prior to joining Aalto, Sebastian worked as a software engineer in investment banking, where he designed and implemented high-throughput systems that facilitate trading.

To the YouTube video
