Speaker: Dr Ulf Kargén, LiU
Due to its scalability, cost-effectiveness, and ability to uncover critical security bugs, fuzzing has become the de facto standard approach to automated security testing. Today, many companies stipulate fuzzing as a mandatory activity during the development of security-critical software, and fuzzing has consequently received significant attention in academia in recent years. One area of fuzzing research that remains underdeveloped, however, is fuzzing evaluation, measurement, and benchmarking. This makes it difficult, for example, to reason about why a particular fuzzing tool (fuzzer) performs better than others on certain software. In this talk, I will discuss some of our ongoing research on evaluating and comparing fuzzers, including a method for automatically generating fuzzing benchmark suites using techniques from automatic program repair. I will also give an overview of some of our earlier work on improving fuzzing using dynamic data-flow analysis.