Mission Statement
Overview
We have identified a set of general weaknesses in the ongoing evaluation of fuzzers: inconsistent evaluation practices, the limitations of available measurement methods, the unmaintainability and irreproducibility of fuzzer experiments, and the lack of extensibility in existing benchmarking platforms. To address this, we are developing libfuzzbench: a component-based approach to constructing benchmarks for specific measurement goals.
With this in mind, our goals are as follows:

- consistent, comparable fuzzer evaluations;
- measurement methods that can be adapted to specific goals;
- maintainable, reproducible experiments; and
- an extensible platform that is easy to build upon.
Usage
The central components of our benchmarker will be defined generically: different types of fuzzer specify how they interact with targets, and different measurers specify how to inspect each trial. A user defines an experiment by specifying the fuzzers, benchmarks, and measurement strategies in configuration files. These configuration files are then concretised into a program by finding and emitting a corresponding code-based implementation, which statically checks their compatibility.
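The configuration format itself is not pinned down here; purely as an illustration, an experiment description and the structure it might deserialise into could look like the following Rust sketch, assuming a TOML-like format (the field names and component names such as `aflplusplus` are invented for the example):

```rust
// Hypothetical experiment description. The TOML format, the field
// names, and the component names are all assumptions, not part of
// the libfuzzbench design described above.
use serde::Deserialize;

#[derive(Debug, Deserialize)]
struct Experiment {
    fuzzers: Vec<String>,
    benchmarks: Vec<String>,
    measurements: Vec<String>,
}

fn main() -> Result<(), toml::de::Error> {
    let raw = r#"
        fuzzers      = ["aflplusplus"]
        benchmarks   = ["libpng"]
        measurements = ["branch-coverage"]
    "#;

    // In the design above, a description like this would be
    // concretised into a generated program rather than interpreted
    // at runtime; parsing it here just shows the shape of the data.
    let experiment: Experiment = toml::from_str(raw)?;
    println!("{experiment:?}");
    Ok(())
}
```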
When users build new fuzzers, benchmarks, or measurement strategies, they implement them generically in code and define a mechanism by which they may be referred to later in configurations. They must also specify how they interact with other components. For example, a potential design might define types for harnesses (e.g., `LLVMHarness` for harnesses which define `LLVMFuzzerTestOneInput`) and for build types (which specify an associated type (AT) that defines the corresponding harness type).

In this way, components are incrementally checked for compatibility and composed in a way that allows for flexible and expressive benchmarking. Users who wish to extend this need only implement the corresponding support: a user who wants a new way of sampling inputs merely implements a corresponding monitor; a user who wants a new kind of analysis implements a corresponding measurer and analyser; and a user who wants a new runner (e.g., in the cloud, within Docker, within a VM) need only specify how the fuzzer is launched and how to deploy the corresponding monitors.
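A minimal sketch of how the harness/build example above might be encoded so that compatibility is checked statically, assuming a Rust trait-based design; apart from `LLVMHarness` and `LLVMFuzzerTestOneInput`, every identifier below is hypothetical:

```rust
// Sketch of statically checked component composition. Only
// `LLVMHarness` and `LLVMFuzzerTestOneInput` come from the text;
// every other name is hypothetical.

/// Marker trait for the kinds of harness a target may expose.
trait HarnessType {}

/// Harnesses which define `LLVMFuzzerTestOneInput`.
struct LLVMHarness;
impl HarnessType for LLVMHarness {}

/// A build type names, via an associated type, the harness type
/// it produces.
trait BuildType {
    type Harness: HarnessType;
}

/// Example build that yields a libFuzzer-style harness.
struct LibFuzzerBuild;
impl BuildType for LibFuzzerBuild {
    type Harness = LLVMHarness;
}

/// A fuzzer declares the harness type it knows how to drive.
trait Fuzzer {
    type Harness: HarnessType;
}

struct SomeFuzzer;
impl Fuzzer for SomeFuzzer {
    type Harness = LLVMHarness;
}

/// Composition only type-checks when the fuzzer and the build
/// agree on the harness type; mismatches are rejected at compile
/// time rather than mid-experiment.
fn compose<F, B>(_fuzzer: F, _build: B)
where
    F: Fuzzer,
    B: BuildType<Harness = F::Harness>,
{
    // ...launch a trial and attach the configured monitors here.
}

fn main() {
    compose(SomeFuzzer, LibFuzzerBuild);
}
```

Under a shape like this, extending the framework means implementing the relevant trait, and an incompatible fuzzer/build pairing fails to compile before an experiment ever runs.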
Timeline
For now, we will discuss individual ideas that we want to support and consider how each would be accomplished within a composable framework. Once an initial set of goals is established, we will implement the generic core and the corresponding initial support.