I noticed, about two hours after the release, that there’s a massive performance problem with the recursive data implementation I shipped in Hypothesis 1.9.0. Obviously this made me rather sad.
You have to do something slightly weird in order to hit this: Use recursive data as the base case for another recursive data implementation (actually using it in the expansion would probably work too).
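Concretely, something along these lines triggers it. This is a sketch using the current public API; the exact spelling back in 1.9.0 may have differed slightly:

```python
from hypothesis import strategies as st

# A recursive strategy: integers, or arbitrarily nested lists thereof.
inner = st.recursive(st.integers(), st.lists)

# The problematic pattern: using that recursive strategy as the base
# case of *another* recursive strategy.
outer = st.recursive(inner, st.lists)
```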
The reason for this turns out to be nothing like what I expected and is kinda interesting, and I was really stuck as to how to solve it until I realised that with a little bit of lazy evaluation the problem was literally trivial to solve.
We’ll need to look at the implementation a bit to see what was happening and how to fix it.
Internally, Hypothesis’s recursive data implementation is just one big union of strategies. recursive(A, f) is basically just A | f(A) | f(A | f(A)) | …, with clauses added until any subsequent one would almost never fit under the size limit.
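In sketch form it looks something like this. This is illustrative, not the real implementation: I’m hand-waving the size-limit logic as a fixed depth, and assuming strategies overload | for union, as Hypothesis’s do:

```python
def recursive(base, extend, depth=10):
    # Build up A | f(A) | f(A | f(A)) | ... by repeatedly unioning the
    # expansion of everything-so-far onto everything-so-far.
    strategy = base
    for _ in range(depth):
        strategy = strategy | extend(strategy)
    return strategy
```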
In order to understand why this is a problem, you need to understand a bit about how Hypothesis generates data. It happens in two stages: the first is that we draw a parameter value at random, and the second is that we pass that parameter value back into the strategy and draw a value of the type we actually want (it’s actually more complicated than that too, but we’ll ignore that). This approach gives us higher quality data and lets us shape the distribution better.
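In sketch form the contract looks something like this; the names draw_parameter and draw_value are illustrative rather than Hypothesis’s actual internals:

```python
import random

class Strategy:
    def draw_parameter(self, rnd):
        """Stage one: draw an opaque parameter that shapes the distribution."""
        raise NotImplementedError

    def draw_value(self, rnd, parameter):
        """Stage two: draw an actual value, guided by the parameter."""
        raise NotImplementedError

def example(strategy, rnd=random.Random()):
    # Every draw goes through both stages.
    parameter = strategy.draw_parameter(rnd)
    return strategy.draw_value(rnd, parameter)
```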
Parameters can be literally any object at all. There are no valid operations on a parameter except to pass it back to the strategy you got it from.
So what does the parameter for a strategy of the form x | y | … look like?
Well, it’s a weighting amongst the branches plus a parameter for each of the underlying strategies. You pick a branch, then you feed the parameter you have for that branch to the underlying strategy.
Notably, drawing this parameter requires drawing a parameter from each of the underlying strategies, i.e. it’s O(n) in the number of branches.
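Continuing the sketch, a union strategy might look like the following. The eager list comprehension in draw_parameter is the O(n) cost:

```python
class OneOf(Strategy):
    def __init__(self, branches):
        self.branches = branches

    def draw_parameter(self, rnd):
        # A weighting amongst the branches...
        weights = [rnd.random() for _ in self.branches]
        # ...plus, eagerly, a parameter for every single branch: O(n)
        # recursive draws, even though we'll only ever use one of them.
        sub_params = [b.draw_parameter(rnd) for b in self.branches]
        return (weights, sub_params)

    def draw_value(self, rnd, parameter):
        weights, sub_params = parameter
        # Pick a branch according to the weights, then delegate to it.
        i = rnd.choices(range(len(self.branches)), weights=weights)[0]
        return self.branches[i].draw_value(rnd, sub_params[i])
```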
Which means that if you have something like the recursive case above, where each branch is itself a big union, you’re doing O(n) operations which are each themselves O(n): you’re accidentally quadratic. Moreover, it turns out that the constant factor on this can be really bad.
But there turns out to be an easy fix here: Almost all of those O(n^2) leaf parameters we’re producing are literally never used – you only ever need the parameter for the branch you actually select.
Which means we can fix this problem with lazy evaluation. Instead of storing a parameter for each branch, we store a deferred calculation that will produce a parameter on demand. Then when we select a branch, we force that calculation to be evaluated (and save the result in case we need it again) and use that. If we never need a particular parameter, we never evaluate it.
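Continuing the sketch, the fix is to replace those eager sub-draws in OneOf with memoised thunks:

```python
class LazyParameter:
    """A deferred draw_parameter call, evaluated at most once."""

    def __init__(self, strategy, rnd):
        self.strategy = strategy
        self.rnd = rnd
        self.evaluated = False
        self.value = None

    def force(self):
        if not self.evaluated:
            self.value = self.strategy.draw_parameter(self.rnd)
            self.evaluated = True
        return self.value

class OneOf(Strategy):
    def __init__(self, branches):
        self.branches = branches

    def draw_parameter(self, rnd):
        weights = [rnd.random() for _ in self.branches]
        # O(1) per branch: we wrap each sub-draw instead of performing it.
        return (weights, [LazyParameter(b, rnd) for b in self.branches])

    def draw_value(self, rnd, parameter):
        weights, lazy_params = parameter
        i = rnd.choices(range(len(self.branches)), weights=weights)[0]
        # Only the chosen branch's parameter is ever actually drawn.
        return self.branches[i].draw_value(rnd, lazy_params[i].force())
```

(Capturing rnd inside the thunk is a simplification: in practice you’d want to be more careful about where the randomness comes from, so that draws stay reproducible.)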
And this means we’re still doing O(n) work when we draw the parameter for a union, but only O(1) work per individual branch until we actually need its value. In the recursive case we also save work when we do evaluate a branch, because its sub-parameters are lazy too, so we only ever draw the parameters we actually use. This greatly reduces the amount of work we have to do: we’re now doing only as much work as we needed to do anyway to draw the value, which more or less removes this case as a performance bottleneck. It’s still a little slower than I’d like, but it’s back in the category of “Hypothesis is probably less likely to be the bottleneck than typical tests are”.
In retrospect this is probably obvious – it falls into the category of “the fastest code is the code that doesn’t execute” – but it wasn’t obvious to me until I thought about it in the right way, so I thought I’d share it in case it helps anyone else.