
David MacIver: Delta debugging with inferred separators


Delta debugging, or something akin to it, tends to be phase one of most file-based shrinkers: shrinking is much easier the faster your test function is, test functions tend to be faster on smaller data, and shrinking is also easier on smaller data because you've got fewer things to try. So anything you can do to prune down the file before you get onto the hard work is good, and delta-debugging-like algorithms do a pretty good job of that.

But delta debugging in its original form is very line oriented: You’re trying to make the file one-minimal in the sense that you can’t delete any single line. What do you do if you want to delete within lines? Or if your file is binary and not line oriented at all?

Delta debugging really works on arbitrary sequences, so you can always just do bytewise delta debugging and it will work perfectly well, but there are usually a lot more bytes than there are lines, and deleting an arbitrary block of bytes is much less likely to work than deleting a sequence of lines is. So it works, but it takes what is often a rather slow process and makes it even slower.
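
As a rough sketch of the kind of pass I mean, here is a simplified, greedy delta-debugging loop over a sequence of chunks (lines, bytes, or whatever you split into). It is not the full ddmin algorithm, and interesting is a hypothetical predicate standing in for "the joined-up candidate still reproduces the behaviour we care about".

    def delta_debug(chunks, interesting):
        # Greedy block deletion with shrinking block sizes. This is a
        # simplification of the classic ddmin algorithm, not a faithful copy.
        block_size = max(1, len(chunks) // 2)
        while block_size > 0:
            i = 0
            while i + block_size <= len(chunks):
                # Try deleting the block chunks[i:i + block_size].
                candidate = chunks[:i] + chunks[i + block_size:]
                if candidate and interesting(candidate):
                    # The deletion worked: keep it and retry at the same index.
                    chunks = candidate
                else:
                    i += block_size
            block_size //= 2
        return chunks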

I spotted a trick this morning that works rather well. Define the set of separators for a string as the set of all of its one- and two-byte substrings.
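
In code, that definition might look something like this (a direct transcription; a real implementation would probably want to bound the set for very large files):

    def separators(data):
        # All one and two byte substrings of the file, per the definition above.
        result = set()
        for i in range(len(data)):
            result.add(data[i:i + 1])
            if i + 2 <= len(data):
                result.add(data[i:i + 2])
        return result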

Now, for each separator, we perform delta debugging on the file split by that separator (i.e. we treat it as if it were the newline in classic delta debugging: we split by the separator and try to minimize that sequence, subject to the constraint that joining by that separator is still interesting).
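
A per-separator pass could then look roughly like the following, reusing the delta_debug sketch above and the same hypothetical interesting predicate, now applied to whole byte strings:

    def shrink_with_separator(data, sep, interesting):
        # Treat `sep` as the "newline": split, delta debug the components,
        # and join back with the same separator for every interestingness test.
        components = data.split(sep)
        shrunk = delta_debug(
            components,
            lambda candidate: interesting(sep.join(candidate)),
        )
        return sep.join(shrunk)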

On its own this works, but not all that well, because most separators don't work or work no better than shrinking bytewise. But there's a refinement we can make, which is to give each separator a score: the score of a separator is the smallest length in bytes of any component in the split. We then process separators in decreasing order of score.
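
As a sketch, that score might be computed like this:

    def score(data, sep):
        # Smallest component length in bytes: a successful shrink with this
        # separator deletes at least one component, so at least this many bytes.
        return min(len(component) for component in data.split(sep))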

The reason this works is that it guarantees that a successful shrink always deletes at least as many bytes as the separator's score (because we always delete at least one component). This lets us focus on separators which produce large shrinks first, so by the time we get to the less useful separators we should have much less left to do and should already be running pretty fast.
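
Putting the pieces together, a driver that processes separators in decreasing order of score could look roughly like this (still using the hypothetical interesting predicate and the sketches above):

    def shrink(data, interesting):
        # Separators with the largest guaranteed deletions are tried first,
        # while the file is still big; scores are computed on the original data.
        for sep in sorted(separators(data), key=lambda s: score(data, s), reverse=True):
            if sep in data:  # earlier passes may have removed this separator entirely
                data = shrink_with_separator(data, sep, interesting)
        return data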

So far I've only run this on one example (John Regehr's from Reducers are Fuzzers), but it works pretty well there: much better than straightforward bytewise delta debugging, certainly, and the file has a lot of reduction potential beyond deleting whole lines.

In this example at least, it's particularly interesting to notice which separators seem to work well: the obvious ones like ' ' and '\n' do of course, but '{' worked particularly well for this example, as did '.c' and 'e;'. I think this is because, whether by luck or not, they are capturing some nicely exploitable facet of the structure of the example, which suggests this idea has broader application.

I'm not sure these techniques will make it into Hypothesis, as Hypothesis has some special hinting to figure out what's best to delete that is probably already good enough, and this seems like a technique that works better for the problem of reducing large files with slow test functions. But I'm still pretty pleased with it and will probably find an opportunity to use it somewhere.

