Ruhr-Uni-Bochum

Liberating Libraries through Automated Fuzz Driver Generation: Striking a Balance without Consumer Code

2025

Conference / Journal

Authors

Mathias Payer, Zurab Tsinadze, Nicolas Badoux, Flavio Toffalini

Research Hub

Research Hub C: Secure Systems

Research Challenges

RC 7: Building Secure Systems

Abstract

Fuzz testing a software library requires developers to write fuzz drivers, specialized programs that exercise the library. Given a driver, fuzzers generate interesting inputs that trigger the library's bugs. Writing fuzz drivers manually is a cumbersome process, and hand-written drivers frequently hit a coverage plateau, calling for more diverse drivers. To alleviate the need for human expert knowledge, emerging automatic driver generation techniques invest computational time in tasks beyond input generation. Therefore, to maximize the number of bugs found, it is crucial to carefully balance the available computational resources between generating valid drivers and testing them thoroughly. Current works model driver generation and testing as a single problem, i.e., they mutate both the driver's code and its input together. This simple approach is limited, as many libraries need a combination of non-trivial library usage and complex inputs. For example, consider a JPEG manipulation library: bugs appear only when specific library functions and corrupted images are tested together, a coincidence that is difficult to trigger when both are mutated synchronously.
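To make the notion of a fuzz driver concrete, the sketch below shows a minimal driver using libFuzzer's standard `LLVMFuzzerTestOneInput` entry point. The target here is the C standard library's `strtol`, chosen purely as an illustrative stand-in; a real driver for a library like the JPEG example above would instead call that library's API with the fuzzer-provided bytes.

```c
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>
#include <string.h>

/* Minimal fuzz driver sketch (libFuzzer entry point).
   strtol stands in for the library under test; a real driver
   would feed the bytes to the target library's functions. */
int LLVMFuzzerTestOneInput(const uint8_t *data, size_t size) {
  char buf[64];
  if (size == 0 || size >= sizeof(buf))
    return 0;                     /* skip inputs that do not fit */
  memcpy(buf, data, size);
  buf[size] = '\0';               /* the target expects a NUL-terminated string */
  (void)strtol(buf, NULL, 10);    /* exercise the target with fuzzer input */
  return 0;                       /* non-zero is reserved by libFuzzer */
}
```

Compiled with `clang -fsanitize=fuzzer`, the fuzzer repeatedly invokes this function with mutated byte buffers; the driver's job is only to translate those bytes into a valid library usage.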

We introduce libErator, a novel library testing approach that balances constrained computational resources to achieve two goals: (a) quickly generating valid fuzz drivers and (b) deeply testing these drivers to find bugs. To achieve these goals, libErator employs three main techniques. First, we leverage insights from a novel static analysis of the library code to improve the likelihood of generating meaningful drivers. Second, we design a method to quickly discard non-functional drivers, further reducing the resources wasted on unfruitful drivers. Finally, we present an effective driver selection method that avoids redundant tests. We deploy libErator on 15 open-source libraries and evaluate it against manually written and automatically generated drivers. We show that libErator reaches coverage comparable to manually written drivers and, on average, exceeds the coverage of existing automated driver generation techniques. More importantly, libErator automatically finds 24 confirmed bugs, 21 of which are already fixed and upstreamed. Among the bugs found, one was assigned a CVE, while others contributed to the projects' test suites, showcasing libErator's ability to create valid library usages. Finally, libErator achieves a 25% true-positive ratio, double that of the state of the art.


Tags

Fuzzing
Program Analysis
Software Security