Cyber Security in the Age of Large-Scale Adversaries

CASA Distinguished Lectures

In the CASA Distinguished Lectures we welcome selected international and national researchers to the HGI.
The mostly one-hour talks by these outstanding guest speakers are always followed by a discussion with the audience. In this way we pursue our goal of fostering a lively exchange of ideas within cyber-security research and opening up new perspectives.

Due to the current situation surrounding the COVID-19 pandemic, the lectures are held online, which makes them accessible to interested people all over the world. The access link for each event is shared below the information on the respective speakers.

In addition, some of our Distinguished Lectures can be watched in full length at any time on our YouTube channel.

Upcoming dates in the 2020 summer semester

Wednesday, June 10, 2020, 10:15 a.m.

Battista Biggio (PRA Lab, University of Cagliari, Italy)


Link to the Zoom webinar

Wild Patterns: Ten Years after the Rise of Adversarial Machine Learning


Abstract. Data-driven AI and machine-learning technologies have become pervasive, and are even able to outperform humans on specific tasks. However, it has been shown that they suffer from hallucinations known as adversarial examples, i.e., imperceptible adversarial perturbations of images, text and audio that fool these systems into perceiving things that are not there. This has cast serious doubt on their suitability for mission-critical applications, including self-driving cars and other autonomous vehicles. This phenomenon is even more evident in cybersecurity domains with a clearer adversarial nature, like malware and spam detection, in which data is purposely manipulated by cybercriminals to undermine the outcome of automatic analyses.

As current data-driven AI and machine-learning methods were not designed to deal with the intrinsic, adversarial nature of these problems, they exhibit specific vulnerabilities that attackers can exploit either to mislead learning or to evade detection. Identifying these vulnerabilities and analyzing the impact of the corresponding attacks on learning algorithms has thus become one of the main open issues in the research field of adversarial machine learning, along with the design of more secure and explainable learning algorithms.

In this talk, I review previous work on evasion attacks, where malicious samples are manipulated at test time to evade detection, and poisoning attacks, which can mislead learning by manipulating even only a small fraction of the training data. I discuss some defense mechanisms against both attacks in the context of real-world applications, including computer vision, biometric identity recognition and computer security. Finally, I briefly discuss our ongoing work on attacks against deep-learning algorithms, and sketch some promising future research directions.
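The evasion setting described above can be illustrated with a toy linear detector. This is a minimal sketch only, not one of Biggio et al.'s actual algorithms: the weights, features and step size `eps` are made-up numbers for the example.

```python
# Toy evasion attack on a linear malware detector: take one
# gradient-sign (FGSM-style) step in feature space to push a
# detected sample below the decision threshold.

def score(w, b, x):
    """Linear decision score; positive means 'malicious'."""
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def evade(w, b, x, eps):
    """Perturb each feature by eps against the sign of its weight,
    which maximally lowers the score under an L-infinity budget."""
    sign = lambda v: (v > 0) - (v < 0)
    return [xi - eps * sign(wi) for wi, xi in zip(w, x)]

w, b = [0.8, -0.3, 0.5], -0.2      # hypothetical trained detector
x = [1.0, 0.2, 0.6]                # sample flagged as malicious
x_adv = evade(w, b, x, eps=0.6)    # small feature-space perturbation
```

With these toy numbers the perturbed sample crosses the decision boundary (the score drops from about 0.84 to about -0.12) although each feature moves by at most 0.6; in a real attack the perturbation must additionally respect domain constraints, e.g. keeping a malware sample functional.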

Bio. Battista Biggio (MSc ’06, PhD ‘10) is an Assistant Professor at the Department of Electrical and Electronic Engineering at the University of Cagliari, Italy, and a co-founder of Pluribus One, a startup company developing secure AI algorithms for cybersecurity tasks. In 2011, he visited the University of Tuebingen, Germany. His pioneering research on adversarial machine learning involved the development of secure learning algorithms for spam and malware detection, and computer-vision problems, playing a leading role in the establishment and advancement of this research field. On these topics, he has published more than 70 papers, collecting more than 4600 citations (Google Scholar, April 2020).

Dr. Biggio regularly serves as a reviewer and program committee member for several international conferences and journals on the aforementioned research topics (including CVPR, NeurIPS, IEEE Symp. S&P and ACM CCS), co-organizes three well-established workshops (AISec, DLS, S+SSPR), and is an Associate Editor for three high-impact journals (Pattern Recognition, IEEE TNNLS, and IEEE Comp. Intell. Magazine). He is chair of the IAPR TC1 on Statistical Pattern Recognition, a senior member of the IEEE and a member of the IAPR and ACM.


Thursday, June 18, 2020, 1:30 p.m.

Andreas Zeller (CISPA Helmholtz Center for Information Security)


Link to the Zoom webinar

Learning the Language of Failure

Joint work with Rahul Gopinath and Zeller’s team at CISPA

Abstract. When diagnosing why a program fails, one of the first steps is to precisely understand the circumstances of the failure – that is, when the failure occurs and when it does not. Such circumstances are necessary for three reasons. First, one needs them to precisely predict when the failure takes place; this is important to assess the severity of the failure. Second, one needs them to design a precise fix: a fix that addresses only a subset of circumstances is incomplete, while a fix that addresses a superset may alter behavior in non-failing scenarios. Third, one can use them to create test cases that reproduce the failure and eventually validate the fix.

In this talk, I present tools and techniques that automatically learn the circumstances of a given failure, expressed over features of input elements. I show how to automatically infer input languages as readable grammars, how to use these grammars for massive fuzzing, and how to systematically and precisely characterize the set of inputs that causes a given failure – the "language of failure".
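The grammar-based fuzzing step can be sketched in a few lines. This is a minimal illustration in the spirit of the talk, not Zeller's actual tooling; the arithmetic-expression grammar is a made-up example.

```python
import random

# A context-free grammar: each nonterminal maps to a list of
# expansion alternatives, each alternative a sequence of symbols.
GRAMMAR = {
    "<expr>":  [["<term>", "+", "<expr>"], ["<term>"]],
    "<term>":  [["<digit>", "<term>"], ["<digit>"]],
    "<digit>": [[d] for d in "0123456789"],
}

def fuzz(grammar, symbol="<expr>", depth=0, max_depth=8):
    """Randomly expand `symbol`; past the depth budget, always pick the
    shortest alternative so that expansion terminates."""
    if symbol not in grammar:
        return symbol                      # terminal symbol: emit as-is
    alternatives = grammar[symbol]
    alt = (random.choice(alternatives) if depth < max_depth
           else min(alternatives, key=len))
    return "".join(fuzz(grammar, s, depth + 1, max_depth) for s in alt)

print(fuzz(GRAMMAR))   # prints a random, syntactically valid expression
```

Every generated input is syntactically valid by construction, which is what lets grammar-based fuzzers reach code behind the input parser instead of being rejected immediately.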

Bio. Andreas Zeller is a faculty member at the CISPA Helmholtz Center for Information Security and professor for Software Engineering at Saarland University, both in Saarbrücken, Germany. In 2010, Zeller was inducted as a Fellow of the ACM for his contributions to automated debugging and mining software archives, for which he also obtained the ACM SIGSOFT Outstanding Research Award in 2018. His current work focuses on specification mining and test case generation, funded by grants from the DFG and the European Research Council (ERC).


Past dates in the 2020 summer semester

Joan Daemen (Radboud University Nijmegen)


You can find the video of the talk here

On deck functions


Abstract. Modern symmetric encryption and/or authentication schemes are built as modes of operation of block ciphers. Often these schemes have a proof of security under the condition that the underlying block cipher is PRP- or SPRP-secure: keyed with a fixed and unknown key, it shall be hard to distinguish from a random permutation. The PRP and SPRP security notions have become so accepted that they are referred to as the standard model. (S)PRP security cannot be proven, but thanks to this clean split into primitives and modes, the assurance of block-cipher-based cryptographic schemes relies on public scrutiny of the block cipher in a simple standard scenario.

Security proofs of modes can become quite complicated, and errors have been made. This complexity can be reduced if we add an input to the block cipher, a so-called tweak. The resulting primitive is called a tweakable block cipher, and its (S)PRP security is tweakable (S)PRP security. The presence of the tweak makes these primitives more costly for the same target security strength, due to the increase in degrees of freedom for the adversary. Another approach is to abandon block ciphers altogether and replace them by permutations.

During the last decade a field of permutation-based cryptography has emerged that defines modes on top of these primitives, and many new permutations have been proposed. At their core these modes often have a duplex-like construction or its parallel nephew, farfalle. However, while it is reasonable to assume one can build a block cipher that is (S)PRP secure, it is impossible to formalize what it means for a permutation to behave like an ideal permutation. We show that permutation-based crypto can have its own standard model, with (keyed) duplex functions or farfalle-based functions at its center. Both are instances of what we call deck functions, and the standard model is the pseudorandom function (PRF) security of deck functions.

Modes can be defined in terms of deck functions and can be proven secure in the setting where the keyed deck function is hard to distinguish from a random oracle. The PRF security of the deck function is then the subject of public scrutiny. In this talk I will discuss some modes on top of deck functions and some concrete deck functions.
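The keyed duplex object at the center of this picture can be sketched in a few lines. This is a toy illustration only: the 8-byte "permutation" below (an invertible LCG/xorshift mix) stands in for a real permutation such as Keccak-p, and nothing here is cryptographically secure.

```python
RATE, WIDTH = 4, 8          # bytes: 4-byte rate part, 4-byte capacity part

def toy_permutation(state):
    """Bijective stand-in for a cryptographic permutation (NOT secure):
    an odd-multiplier LCG and a xorshift, both invertible on 64 bits."""
    x = int.from_bytes(state, "big")
    x = (x * 6364136223846793005 + 1442695040888963407) % 2**64
    x ^= x >> 29
    return x.to_bytes(WIDTH, "big")

class Duplex:
    """Keyed duplex: absorb the key, then alternate input and output."""
    def __init__(self, key):
        self.state = bytes(WIDTH)
        self.duplexing(key)

    def duplexing(self, block):
        """XOR up to RATE input bytes into the state, permute, and
        return RATE output bytes (usable e.g. as keystream)."""
        block = block[:RATE].ljust(RATE, b"\x00")
        rate = bytes(a ^ b for a, b in zip(self.state[:RATE], block))
        self.state = toy_permutation(rate + self.state[RATE:])
        return self.state[:RATE]
```

The same key and input sequence always yield the same output stream; a deck function abstracts exactly this input-sequence-to-output-stream behavior, so that modes can be proven secure assuming only its PRF security.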

Bio. After graduating in electromechanical engineering, Joan Daemen was awarded his PhD from KU Leuven in 1995. After his contract at COSIC ended, he privately continued his crypto research and contacted Vincent Rijmen to continue their collaboration, which would lead to the Rijndael block cipher, selected by NIST as the new Advanced Encryption Standard in 2000. After over 20 years of security industry experience, including work as a security architect and cryptographer for STMicroelectronics, he is now a professor in the Digital Security Group at Radboud University Nijmegen.

He co-designed the Keccak cryptographic hash function that was selected as the SHA-3 hash standard by NIST in 2012, and is one of the founders of the permutation-based cryptography movement and co-inventor of the sponge, duplex and farfalle constructions. In 2017 he won the Levchin Prize for Real World Cryptography and in 2020 the RSA Award for Excellence in Mathematics. In 2018 he was awarded an ERC Advanced Grant called ESCADA and an NWO TOP grant called SCALAR, both for the design and analysis of symmetric crypto.

Michele Mosca (University of Waterloo)


Watch the talk on YouTube.

Toward a Quantum-Safe Future

ABSTRACT: There has been tremendous progress in the many layers needed to realize large-scale quantum computing, from the hardware layers to the high level software. There has also been vastly increased exploration into the potentially useful applications of quantum computers, which will drive the desire to build quantum computers and make them available to users. I will describe some of my research in quantum algorithmics and quantum compiling.

The knowledge and tools developed for these positive applications give us insight into the cost of implementing quantum cryptanalysis of today's cryptographic algorithms, which is a key factor in estimating when quantum computers will be cryptographically relevant (the "collapse time"). In addition to my own estimates, I will summarize the estimates of 22 other thought leaders in quantum computing.

What quantum cryptanalysis means to an organization or a sector depends not only on the collapse time, but also on the time to migrate to quantum-safe algorithms as well as the shelf-life of information assets being protected. In recent years, we have gained increasing insight into the challenges of a wide-scale migration of existing systems. We must also be proactive as we deploy new systems. Open-source platforms, like OpenQuantumSafe and OpenQKDNetwork, are valuable resources in helping meet many of these challenges.
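The timing argument in the paragraph above is often written down as Mosca's inequality: if x (how long information assets must stay confidential) plus y (how long the migration takes) exceeds z (the collapse time), then data protected today will still need protection after quantum cryptanalysis arrives. A trivial encoding, with purely hypothetical numbers:

```python
def quantum_risk(shelf_life_x, migration_y, collapse_z):
    """Mosca's inequality: at risk iff x + y > z (all in years)."""
    return shelf_life_x + migration_y > collapse_z

# Hypothetical illustration: records that must stay confidential for
# 10 years, an 8-year migration, and a 15-year collapse estimate.
print(quantum_risk(10, 8, 15))   # True: migration must start earlier
```

The point of the inequality is that the migration deadline is set not by z alone but by z minus the shelf-life of the data, which is why "harvest now, decrypt later" adversaries matter today.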

While awareness of the challenges and the path forward has increased immensely, there is still a long road ahead as we work together with additional stakeholders not only to prepare our digital economy to be resilient to quantum attacks, but also to make us more resilient to other threats that emerge.

Michele Mosca is co-founder of the Institute for Quantum Computing at the University of Waterloo, a Professor in the Department of Combinatorics & Optimization of the Faculty of Mathematics, and a founding member of Waterloo's Perimeter Institute for Theoretical Physics. He was the founding Director of CryptoWorks21, a training program in quantum-safe cryptography. He co-founded the ETSI-IQC workshop series in Quantum-Safe Cryptography. He co-founded evolutionQ Inc. to support organizations as they evolve their quantum-vulnerable systems to quantum-safe ones and softwareQ Inc. to provide quantum software tools and services. 

He obtained his doctorate in Mathematics in 1999 from Oxford on the topic of Quantum Computer Algorithms, an MSc in Mathematics and the Foundations of Computer Science in 1996 from Oxford, and a BMath in Combinatorics & Optimization and Pure Mathematics in 1995 from Waterloo. 
His research interests include quantum computation and cryptographic tools designed to be safe against quantum technologies. He is globally recognized for his drive to help academia, industry and government prepare our cyber systems to be safe in an era with quantum computers. Dr. Mosca’s awards and honours include Fellow of the Institute for Combinatorics and its Applications (since 2000), 2010 Canada's Top 40 Under 40, Queen Elizabeth II Diamond Jubilee Medal (2013), SJU Fr. Norm Choate Lifetime Achievement Award (2017), and a Knighthood (Cavaliere) in the Order of Merit of the Italian Republic (2018).

Andreas Huelsing (Eindhoven University of Technology)


Watch the talk on YouTube.

From hash-function security in a post-quantum world to SPHINCS+

ABSTRACT: In this talk, I will discuss the security and applications of cryptographic hash functions in a post-quantum world. In the first half of the talk I will focus on security properties. Taking adversaries with quantum-computing abilities into account has an influence on security models and requires us to re-assess security against generic attacks. On the other hand, new applications of cryptographic hash functions require new security properties. I will discuss some such new properties and models.

Afterwards, I will move on to applications and present the stateless hash-based signature proposal SPHINCS+, which is a contender in the 2nd round of the NIST PQC competition. I will cover new results on the security of SPHINCS+ and its performance.
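The generic attacks mentioned above follow standard query-complexity estimates (textbook figures, not results from the talk): for an ideal n-bit hash, Grover's algorithm roughly halves the preimage-security exponent, and the Brassard-Hoyer-Tapp algorithm lowers the collision exponent from n/2 to n/3 queries.

```python
def generic_security_bits(n):
    """Approximate generic-attack security levels (in bits) for an
    ideal n-bit hash function."""
    return {
        "preimage_classical":  n,        # brute-force search
        "preimage_quantum":    n / 2,    # Grover's algorithm
        "collision_classical": n / 2,    # birthday bound
        "collision_quantum":   n / 3,    # Brassard-Hoyer-Tapp (queries)
    }

print(generic_security_bits(256))
```

Hash-based schemes pick output lengths with these reduced exponents in mind; the talk's point is that the exact models and constants deserve care (for instance, the memory costs of quantum collision search are debated).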

Personal Homepage

Peter Schwabe (Radboud University Nijmegen)


Post-quantum crypto on embedded microcontrollers

ABSTRACT: Asymmetric crypto deployed today is essentially completely based on RSA and (elliptic-curve) discrete logarithms. It has long been known that these cryptosystems are no longer secure in a world where attackers are equipped with a large universal quantum computer. This is why not only academic researchers, but also government agencies, standardization bodies, and industry are putting effort into transitioning our cryptographic infrastructure to post-quantum primitives.

Probably the most prominent effort in this field is the NIST post-quantum crypto (PQC) project, which started in 2016 and aims at selecting and eventually standardizing several suitable post-quantum signature and key-encapsulation schemes. This effort by NIST is supported by the international research community.

In my talk I will first present the pqm4 project, a library, testing, and benchmarking framework for post-quantum cryptography on the ARM Cortex-M4. The long-term goal of this framework is to collect optimized and also side-channel-protected implementations of all NIST PQC candidates. In the second part of my talk I will zoom in on the optimization effort for some of these schemes, specifically lattice-based key-encapsulation mechanisms.

Peter Schwabe is an associate professor at Radboud University Nijmegen. He graduated from RWTH Aachen University in computer science in 2006 and received a Ph.D. from the Faculty of Mathematics and Computer Science of Eindhoven University of Technology in 2011.

He then worked as a postdoctoral researcher at the Institute of Information Science and the Research Center for Information Technology Innovation of Academia Sinica, Taiwan, and at National Taiwan University. His research is in the area of cryptographic engineering, in particular the design and secure implementation of cryptographic primitives and protocols for real-world applications.

In recent years his research has mainly focused on post-quantum cryptography. He is a co-submitter of seven round-2 candidates in the NIST PQC project, and since 2018 he has been leading research in the project "EPOQUE -- Engineering post-quantum cryptography", which is supported by the European Research Council through an ERC Starting Grant.

Daniel J. Bernstein (University of Illinois at Chicago)


Sorting integer arrays: security, speed, and verification

ABSTRACT: This talk will explain (1) the security concept of "constant-time" software; (2) how to build constant-time software to sort arrays of integers; (3) how to make constant-time sorting software run so quickly that it beats Intel's "Integrated Performance Primitives" library; and (4) how to automatically verify that the resulting software works correctly for all possible inputs.
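The core of points (1) and (2) can be sketched as a branchless compare-exchange plugged into a fixed sorting network. This is an illustrative Python rendering of the logic only: Python itself gives no timing guarantees, and the talk's verified, vectorized implementation is far more sophisticated.

```python
def ct_minmax(a, b, bits=32):
    """Branchless (min, max) of two unsigned `bits`-wide integers.
    Uses Python's signed arithmetic shift; C code would extract the
    borrow bit of the subtraction instead."""
    mask = (1 << bits) - 1
    lt = ((a - b) >> bits) & mask      # all-ones if a < b, else zero
    d = (a ^ b) & lt                   # swap mask applied to the XOR
    return b ^ d, a ^ d                # = (min, max), with no branch

def ct_sort(xs, bits=32):
    """Odd-even transposition network: the sequence of compare-exchange
    operations is fixed in advance and independent of the data."""
    xs = list(xs)
    n = len(xs)
    for rnd in range(n):               # n alternating phases sort n items
        for i in range(rnd % 2, n - 1, 2):
            xs[i], xs[i + 1] = ct_minmax(xs[i], xs[i + 1], bits)
    return xs
```

Because the network performs the same compare-exchanges for every input of a given length, its branch and memory-access pattern is data-independent; a production implementation would replace this O(n^2) network with a more efficient merging network and vector instructions.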

Daniel J. Bernstein is the designer of the "tinydns" software used by Facebook to publish server addresses, the "ChaCha20" cipher used in the Wireguard VPN, the "dnscache" software used by Cisco's OpenDNS to handle 175 billion address requests per day from 90 million Internet users, the "SipHash" hash function (co-designed with Jean-Philippe Aumasson) used by Python to protect against hash-flooding attacks, and the "Curve25519" public-key system used by WhatsApp for end-to-end encryption. Cryptographic algorithms designed by Bernstein are used by default in Apple's iOS, Google's Chrome browser, Android, etc., encrypting data for billions of users.

Yasemin Acar (Leibniz University Hannover)


A human-centered approach to the secure application of cryptography

ABSTRACT: In this talk I will outline my research that aims to bridge the gap between cryptography experts on the one side, and end users on the other side. Cryptography comes in many different shapes: we see it in password storage, digital signatures, encrypting files, and secure network connections. When implementations are less than ideal, security properties are lost. End users rarely have a chance to make informed choices about where to use which method of securing their data or communications, and crypto experts rarely implement the crypto all the way down to the application level that is actually used by end users. In-between are various actors, and their overarching problem can be summarized as “You are not your user”. Library developers, who implement crypto algorithms and make them available to software developers, may be crypto experts, but are rarely focused on library usability.

Software developers, the users of crypto libraries, are rarely crypto experts, and are rarely focused on human factors to the degree that they can meaningfully communicate security features (or the lack thereof) to end users. My research aims to connect the dots to secure end users by helping developers write secure code.

Yasemin Acar is a researcher at Leibniz University Hannover, where she works on human-centered security and privacy. She is the winner of the John Karat Usable Security and Privacy Student Research Award 2018. One of her papers on the impact of documentation usability on code security won the NSA Best Scientific Cybersecurity Paper Competition in 2016. She was a visiting researcher at the National Institute of Standards and Technology (NIST, USA) in the summer of 2019, where she worked on improving privacy workflows for professionals as well as helping developers choose secure software libraries. She has previously been a researcher at the Center for IT-Security, Privacy and Accountability (CISPA) at Saarland University.

Personal Homepage

Distinguished Lectures in the 2019 summer semester

Nico Döttling (CISPA Helmholtz Center for Information Security)


Trapdoor Hash Functions and their Applications

ABSTRACT: We introduce a new primitive, called trapdoor hash functions (TDH), which are compressing hash functions with additional trapdoor function-like properties. Specifically, given an index i, TDHs allow for sampling an encoding key ek (which hides i) along with a corresponding trapdoor. Furthermore, given a hash value H(x), a hint value E(ek,x), and the trapdoor corresponding to ek, the i-th bit of x can be efficiently recovered. In this setting, one of our main questions is: How small can the hint value E(ek,x) be? We obtain constructions where the hint is only one bit long based on DDH, QR, DCR, or LWE.

As the main application, we obtain the first constructions of private information retrieval (PIR) protocols with communication cost poly-logarithmic in the database size based on DDH or QR. These protocols are in fact rate-1 when considering block PIR.

I am a tenure-track faculty member at the Helmholtz Center for Information Security (CISPA) in Saarbrücken. The focus of my research is public-key encryption and secure two-party computation.

From 2017 to 2018 I was an assistant professor at Friedrich-Alexander-Universität Erlangen-Nürnberg. Prior to that, I was a postdoc in the group of Sanjam Garg at UC Berkeley, supported by a DAAD fellowship, from 2016 to 2017, and a postdoc in the crypto group of Aarhus University, working with Ivan Damgård and Jesper Buus Nielsen, from 2014 to 2016. I finished my PhD in 2014 at the Karlsruhe Institute of Technology under the supervision of Jörn Müller-Quade. I am the 2014 winner of the biennial Erika and Dr. Wolfgang Eichelberger Dissertation Award.

Research Homepage

Daniel Gruss (Graz University of Technology)


Transient Execution Attacks

ABSTRACT: In this talk we will deepen our understanding of transient execution attacks and defenses. We will discuss the differences between all the Spectre variants in terms of microarchitectural (prediction) elements, the attacker model, and the attack strategy. We will discuss blind spots that we should look at in the future. With this knowledge we are prepared to discuss which defenses against transient execution attacks are effective. We will see that there are good defenses, but most are neither effective nor efficient. Finally, we will discuss open problems for defenses.

Daniel Gruss (@lavados) is an Assistant Professor at Graz University of Technology. He finished his PhD with distinction in less than 3 years. He has been involved in teaching operating system undergraduate courses since 2010. Daniel's research focuses on side channels and transient execution attacks. He implemented the first remote fault attack running in a website, known as Rowhammer.js. He frequently speaks at top international venues, such as Black Hat, Usenix Security, IEEE S&P, ACM CCS, Chaos Communication Congress, and others. His research team was one of the teams that found the Meltdown and Spectre bugs published in early 2018.


Claudia Diaz (COSIC research group, Department of Electrical Engineering, KU Leuven)


Strong network anonymity with mixnets

ABSTRACT: This talk will motivate the need for anonymity at the network layer and introduce basic anonymity concepts and metrics that are applicable to communication settings. We will review the relevant adversary models and introduce mixnets, a type of anonymous communication system that protects communications against more powerful adversaries than Tor. We will explain the different features that need to be considered when designing mixnet routing protocols and introduce the Katzenpost mixnet architecture, which is an implementation of the Loopix anonymity system developed by the EU funded project Panoramix, and conclude with open challenges for such systems.
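The batching idea at the heart of a mix node can be illustrated with a toy threshold mix. This is a minimal sketch only; real designs such as Loopix add layered encryption, cover traffic and randomized (Poisson) delays on top.

```python
import random

class ThresholdMix:
    """Buffer incoming messages; once `threshold` messages have
    arrived, flush them all in random order, unlinking the arrival
    order from the departure order."""

    def __init__(self, threshold):
        self.threshold = threshold
        self.pool = []

    def receive(self, message):
        self.pool.append(message)
        if len(self.pool) < self.threshold:
            return None                 # keep buffering, emit nothing
        batch, self.pool = self.pool, []
        random.shuffle(batch)           # break input/output correlation
        return batch                    # flush the whole batch at once
```

An observer who sees messages enter and leave the node cannot tell which output corresponds to which input within a batch, which is the basic anonymity property that chains of such nodes amplify.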

Claudia Diaz is an Associate Professor at the COSIC research group of the Department of Electrical Engineering (ESAT) at KU Leuven, where she leads the Privacy Technologies Team. She holds a Master's degree in Telecommunications Engineering from the University of Vigo (Spain, 2000) and a Ph.D. in Engineering from KU Leuven (Belgium, 2005). Her research is focused on the design, analysis, and applications of technologies to protect online privacy, and in particular technologies that offer protection for metadata to prevent traffic analysis, tracking, localisation, or behavioral profiling.

Personal Homepage

Isabel Valera (Max Planck Institute for Intelligent Systems, Tübingen)


Expressive, Robust and Accountable Machine Learning for Real-world Data

ABSTRACT: In this talk, I will start by discussing the main challenges of deploying machine-learning methods in real-world applications. Then, I will provide an overview of my research, where I aim to develop machine-learning methods that are i) expressive, to capture the complex statistical properties of real-world data; ii) robust, to provide accurate uncertainty estimates on these properties; and iii) accountable, to ensure fairness and interpretability. I will then go into the details of two of my main projects: first, how to design machine-learning methods for event data using temporal point processes; and second, how to handle biases in the data and enforce a fairness notion in the outcomes of decision-making systems. Finally, I will briefly describe my research agenda towards a trustworthy use of machine learning in the real world.

Isabel Valera is a Minerva research group leader at the Max Planck Institute for Intelligent Systems (MPI-IS). Isabel obtained her PhD in 2014 and her MSc degree in 2012, both in Multimedia and Communications from the University Carlos III in Madrid, Spain. After her PhD, she worked at the MPI for Software Systems as a postdoctoral fellow under the supervision of Dr. Manuel Gomez Rodriguez, and at the University of Cambridge as an associated researcher under the supervision of Prof. Ghahramani. She has held a German Humboldt Post-Doctoral Fellowship, and last year she was granted a "Minerva fast track" research group by the Max Planck Society, Germany. On an annual basis, the Minerva fast track programme offers two outstanding female scientists a long-term career opportunity with the aim of establishing their own research group within an MPI.

Catalin Hritcu (Inria Paris, Prosecco team)


When Good Components Go Bad: Formally Secure Compilation Despite Dynamic Compromise

ABSTRACT: We propose a new formal criterion for evaluating secure compartmentalization schemes for unsafe languages like C and C++, expressing end-to-end security guarantees for software components that may become compromised after encountering undefined behavior---for example, by accessing an array out of bounds. Our criterion is the first to model dynamic compromise in a system of mutually distrustful components with clearly specified privileges. It articulates how each component should be protected from all the others---in particular, from components that have encountered undefined behavior and become compromised.

To illustrate the model, we construct a secure compilation chain for a small unsafe language with buffers, procedures, and components, targeting a simple abstract machine with built-in compartmentalization. We give a machine-checked proof in Coq that this compiler satisfies our secure compilation criterion. Finally, we show that the protection guarantees offered by the compartmentalized abstract machine can be achieved at the machine-code level using either software fault isolation or a tag-based reference monitor.

Catalin Hritcu is a researcher at Inria Paris, where he works on security foundations. He is particularly interested in formal methods for security (secure compilation, compartmentalization, memory safety, security protocols, integrity, information flow), programming languages (program verification, type systems, proof assistants, semantics, formal metatheory, certified tools, property-based testing), and the design and verification of security-critical systems (reference monitors, secure compilation chains, secure hardware). He was awarded an ERC Starting Grant on formally secure compilation and is also actively involved in the design of the F* verification system and its use for building a formally verified HTTPS stack.

Catalin received his PhD from Saarland University, supported by fellowships from the International Max Planck Research School for Computer Science and Microsoft Research Cambridge. Recently he received a Habilitation degree from ENS Paris, and was previously also a Research Associate at University of Pennsylvania and a Visiting Researcher at Microsoft Research Redmond.

Katharina Krombholz (CISPA Helmholtz Center for Information Security)


A User-Centric Approach to Securing the HTTPS Ecosystem

ABSTRACT: HTTPS is one of the most important protocols used to secure communication and is, fortunately, becoming more pervasive. However, the long tail of websites in particular is still not sufficiently secured. HTTPS involves different types of users, e.g., end users who are confronted with trust indicators and warnings, or administrators who are required to deal with cryptographic fundamentals and complex decisions concerning compatibility.

In this talk, I present recent user-centric research that explains why different types of users still struggle to make informed security decisions. Based on empirical studies with administrators and end users, I discuss multidimensional reasons for vulnerabilities in the HTTPS ecosystem and how a more human-centric approach to the design of cryptographic protocols could mitigate them.


Mathias Payer (EPFL School of Computer and Communication Sciences, IC)


Security Testing Hard to Reach Code

ABSTRACT: Memory corruption has plagued systems since the dawn of computing. Attacks have evolved alongside the development of ever stronger defenses, resulting in an eternal war in memory. Despite the rise of strong mitigations such as stack cookies, ASLR, DEP, or, most recently, Control-Flow Integrity, exploits are still prevalent, as none of these defenses offers complete protection. This situation calls for program-testing techniques that discover reachable vulnerabilities before the attacker does. Finding and fixing bugs is the only way to protect against any exploitation.

We develop fuzzing techniques that follow an adversarial approach, focusing on the exposed attack surface and exploring potentially reachable vulnerabilities. In this talk we will discuss two areas of hard-to-reach code: (i) areas of a program that are guarded by hard-to-satisfy checks (such as checksums or equivalence checks) and (ii) drivers that interact with peripherals.

First, whenever the fuzzer hits a coverage wall and no longer makes progress, we detect checks in the code that the current inputs could not satisfy. Through transformational fuzzing we target these underexplored program components and fine-tune the program under test to particular use cases. Second, by providing a custom-tailored emulation environment, we create mock Trojan devices that allow fuzzing the peripheral/driver interface. In these projects we developed new techniques to test different kinds of hard-to-reach code and exposed large numbers of vulnerabilities.
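The "coverage wall" scenario can be made concrete with a toy parser guarded by a checksum. This is an illustrative sketch of the transformational idea only, not the talk's actual tools: the parser, the hidden bug, and the `check_crc` switch are invented for the example.

```python
import zlib

def parse(data, check_crc=True):
    """Toy parser: the last 4 bytes must be the CRC-32 of the body.
    Random fuzzing essentially never satisfies this check, so the code
    behind it stays unexplored unless the check is disabled."""
    if len(data) < 4:
        return "too short"
    body, stored = data[:-4], data[-4:]
    if check_crc and zlib.crc32(body).to_bytes(4, "big") != stored:
        return "bad checksum"          # the fuzzer's coverage wall
    if body.startswith(b"BUG"):
        raise RuntimeError("deep bug reached")   # code behind the wall
    return "ok"
```

Inputs that crash the transformed program (`check_crc=False`) are then repaired to work on the original program, here simply by recomputing the CRC over the crashing body.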

Mathias Payer is a security researcher and an assistant professor at the EPFL school of computer and communication sciences (IC), leading the HexHive group. His research focuses on protecting applications in the presence of vulnerabilities, with a focus on memory corruption and type violations. He is interested in software security, system security, binary exploitation, effective mitigations, fault isolation/privilege separation, strong sanitization, and software testing (fuzzing) using a combination of binary analysis and compiler-based techniques.

After 4 years at Purdue University, he joined EPFL in 2018. Before joining Purdue in 2014, he spent two years as a postdoc in Dawn Song's BitBlaze group at UC Berkeley. He graduated from ETH Zurich with a Dr. sc. ETH in 2012, focusing on enforcing security policies through low-level binary translation. All prototype implementations are open source. He co-founded the EPFL polygl0t and Purdue b01lers CTF teams.

