
David MacIver: How to generate a list plus an element of it


A common problem when using Hypothesis is that you want to generate two arguments: One is a list, the other is some element of that list (or some index into it, or some subset of it, etc).

This is pretty easy to do, but there are different ways of doing it that work variably well, so I thought I would unpack some of the options. You should absolutely feel free to ignore this and do whatever is easiest for you: Almost everything that works will work reasonably well, this is really just if you’re interested in The Best way to do it.

That being said, first a caveat. Don’t use this one:

def list_and_element(elements):
    return tuples(lists(elements), elements).filter(
        lambda p: p[1] in p[0])

The main reason not to use this one is that it will appear to work fine but will actually be working really badly: It almost exclusively generates pairs whose second element has a very high probability of being drawn from the element distribution. For example, if you try using this with string elements you’ll find that the second element of the pair is almost always the empty string and basically never a string with more than one character.
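
If you want to see this skew for yourself, a quick way (this sketch is mine, not from the post) is to record an event per example and look at the statistics Hypothesis collects; with pytest you can print them via --hypothesis-show-statistics, and recent Hypothesis versions may also complain that this filter rejects too many examples:

from hypothesis import event, given
from hypothesis.strategies import lists, text, tuples

# The problematic strategy from above.
bad_pairs = tuples(lists(text()), text()).filter(lambda p: p[1] in p[0])

@given(bad_pairs)
def test_observe_skew(p):
    xs, x = p
    # Record which case we hit; the "empty" bucket should dominate.
    event("chosen element is empty" if x == "" else "chosen element is non-empty")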

Basically, as long as you don’t do something like that, which I don’t think I’ve ever seen someone do in the wild anyway, you will probably be fine. That caveat aside, let’s look at some of the alternatives.

The easiest thing to do, and the one people most commonly reach for, is to use flatmap:

def list_and_element(elements):
    return lists(elements, min_size=1).flatmap(
        lambda xs: tuples(just(xs), sampled_from(xs)))

This is pretty self-explanatory if you understand what flatmap does: We’re doing literally exactly what the problem says, but we have to break it into two stages. First we draw a list (with at least one element, so that there is something to draw from), then we draw an element from that list and return both the list and that drawn element.
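
For completeness, here’s that strategy wired into a self-contained test; the asserted property is just a placeholder I’ve made up to show the shape of the test, not something from the post:

from hypothesis import given
from hypothesis.strategies import just, lists, sampled_from, text, tuples

def list_and_element(elements):
    return lists(elements, min_size=1).flatmap(
        lambda xs: tuples(just(xs), sampled_from(xs)))

@given(list_and_element(text()))
def test_chosen_element_is_in_list(p):
    xs, x = p
    # Trivially true by construction; stands in for a real property.
    assert x in xs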

There’s nothing wrong with this solution and you should feel free to use it. However there are two ways to improve upon it:

The first is that whenever you use flatmap you should have a think about whether there’s a reasonable way to instead not do that. It’s not that flatmap is especially dangerous per se, it’s just that because it has extremely general purpose semantics Hypothesis has to work a lot harder every time you use it, and performance and example quality may suffer a bit (if you don’t have a timeout it should never hurt the ability to find an example, only the quality of the end result).

map and filter do not have this problem (if you want a technical explanation why: Hypothesis operates on an intermediate representation which is where it does most of its work. map and filter have the same intermediate representation as the underlying strategy, but flatmap cannot and instead has a really complicated one which is quite hard to work with), and it’s often possible to replace flatmap usage with them, so sometimes it’s worth thinking about whether you can. In this case we can, quite easily. The following does this:

def list_and_element(elements):
    return tuples(integers(min_value=0), lists(elements, min_size=1)).filter(
        lambda p: p[0] < len(p[1])
    ).map(lambda p: (p[1], p[1][p[0]]))

What we’re doing here is we’re simultaneously generating the list and an index into it, then we’re filtering to the range where that index is valid, then we’re taking the element at that index.
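
The same map/filter idea adapts directly to the other variations mentioned at the start. For instance, if what you actually want is the list plus a valid index into it, you can keep the index rather than looking it up (this adaptation is mine, not code from the post):

from hypothesis.strategies import integers, lists, tuples

def list_and_index(elements):
    return tuples(integers(min_value=0), lists(elements, min_size=1)).filter(
        lambda p: p[0] < len(p[1])
    ).map(lambda p: (p[1], p[0]))  # (the list, a valid index into it)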

There’s a complication though: Did you notice how we’re swapping the order of the pair halfway through?

The reason for this is that it makes Hypothesis better able to simplify the example in some interesting cases. What will happen is that Hypothesis will first simplify the first element of the tuple as far as it can, then the second element, then back to the first, and so on until it hits a point where it can’t simplify any further.

And what this means is that by putting the index first, we automatically seek to the first element of the list that makes the example pass. This both gives us more freedom to trim down the array by deleting later elements and also stops us getting stuck in the case where we have an easy to satisfy property that happens not to be satisfied by the trivial element but can’t shrink the array. For example, when doing it the other way I found that Hypothesis gave me (['', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '', '0'], '0') as a minimal example of a pair where the element chosen was non-empty: It had first shrunk all the elements of the array leading up to the one we’d chosen, then it couldn’t shrink the index because all of the earlier elements were empty.
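
For concreteness, “the other way” here means putting the list first and the index second, roughly like this (my reconstruction of that ordering, not code from the post):

from hypothesis.strategies import integers, lists, tuples

def list_and_element_index_last(elements):
    # List first, index second: Hypothesis shrinks the list's elements
    # before it revisits the index, which is how you end up with a long
    # run of empty strings in front of the chosen element.
    return tuples(lists(elements, min_size=1), integers(min_value=0)).filter(
        lambda p: p[1] < len(p[0])
    ).map(lambda p: (p[0], p[0][p[1]]))
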
This is also potentially a problem with the sampled_from + flatmap approach, but as it happens in this particular case you get lucky: What will end up happening is that if you try to shrink the length of the array past the index you’d chosen, you’ll end up redrawing a new index. This will usually get you past being stuck in this case, although no guarantees.

Then there’s the final option, which is my favourite: Don’t do this at all.

A thing that I have often found to be a very useful reframing is instead of asserting "this example is not a counter-example to this property", assert "a counter-example to this property does not exist within this example". As long as the number of possible counter-examples doesn’t grow too quickly with the size of the example (e.g. if it’s O(n) you should be fine) this often leads to much better tests: You both vastly increase the chances of finding a counter-example and improve the end result, because you gain the ability to shrink past a bunch of accidental blockers. So instead of writing something like:

@given(list_and_element(text()))
def test_some_stuff(p):
    xs, x = p
    ...

You write:

@given(lists(text(), min_size=1))
def test_some_stuff(xs):
    for x in xs:
        ...

This is a technique I first noticed in my properties for testing optimization work. It’s not without its problems: It can be awkward if your tests need to mutate your arguments, and it of course makes your tests slower. For a lot of cases it seems to be worth trying though, as it can improve matters significantly.
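
To make that concrete, here’s what the reframed test might look like with an actual property filled in; the round-trip check is made up for illustration, not one of the post’s examples:

from hypothesis import given
from hypothesis.strategies import lists, text

@given(lists(text(), min_size=1))
def test_no_element_breaks_roundtrip(xs):
    # Assert that no element of the list is a counter-example, rather
    # than generating a single chosen element up front.
    for x in xs:
        assert x.encode("utf-8").decode("utf-8") == x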


Viewing all articles
Browse latest Browse all 22462

Trending Articles