Ruhr-Uni-Bochum
Cyber Security in the Age of Large-Scale Adversaries

CASA Distinguished Lectures

 

In the CASA Distinguished Lectures, we welcome selected international and national researchers to the HGI.
The talks by these outstanding guest speakers, usually one hour long, are always followed by a discussion with the audience. In this way we pursue our goal of fostering a lively exchange of ideas within cyber security research and opening up new perspectives.

Due to the current situation surrounding the COVID-19 pandemic, the lectures are held online, making them accessible to interested audiences around the world. The access link for each event is shared below the information about the respective speaker.

 

Resources for the Distinguished Lectures

On our YouTube channel, you can watch several past Distinguished Lectures in full. If you would like to be informed about upcoming talks, please sign up for our newsletter.

Elissa Redmiles (Safety & Society group, Max Planck Institute for Software Systems)

"Learning from the People: From Normative to Descriptive Solutions to Problems in Security, Privacy & Machine Learning"

Abstract. A variety of experts -- computer scientists, policy makers, judges -- constantly make decisions about best practices for computational systems. They decide which features are fair to use in a machine learning classifier predicting whether someone will commit a crime, and which security behaviors to recommend and require from end-users. Yet, the best decision is not always clear. Studies have shown that experts often disagree with each other, and, perhaps more importantly, with the people for whom they are making these decisions: the users.
This raises a question: Is it possible to learn best practices directly from the users? The field of moral philosophy suggests yes, through the process of descriptive decision-making, in which we observe people's preferences and infer best practice from them, rather than relying on experts' normative (prescriptive) determinations of best practice. In this talk, I will explore the benefits and challenges of applying such a descriptive approach to making computationally relevant decisions regarding: (i) selecting security prompts for an online system; (ii) determining which features to include in a classifier for jail sentencing; (iii) defining standards for ethical virtual reality content.

Biography. Dr. Elissa M. Redmiles is a faculty member and research group leader of the Safety & Society group at the Max Planck Institute for Software Systems. She additionally serves as a consultant and researcher at multiple institutions, including Microsoft Research and Facebook. Dr. Redmiles uses computational, economic, and social science methods to understand users' security, privacy, and online safety-related decision-making processes. Her work has been featured in popular press publications such as Scientific American, Wired, Business Insider, Newsweek, Schneier on Security, and CNET, and has been recognized with multiple Distinguished Paper Awards at USENIX Security as well as research awards including a Facebook Research Award and the John Karat Usable Privacy and Security Research Award. Dr. Redmiles received her B.S. (Cum Laude), M.S., and Ph.D. in Computer Science from the University of Maryland.

To the YouTube video

 
