In case you haven’t noticed, some parts of Hypothesis are designed with a lot of attention to detail. Some parts (particularly internals or anything that’s been around since the beginning) are a bit sloppy, some are quite well polished, and some of them are pedantic beyond the ken of mortal man, and you would be forgiven for wondering what on earth I was on when I was writing them.
The repr you get from standard strategies is one of those sections of which I am really quite proud, in an also slightly embarrassed sort of way.
```python
>>> import hypothesis.strategies as st
>>> st.integers()
integers()
>>> st.integers(min_value=1)
integers(min_value=1)
>>> st.integers(min_value=1).map(lambda x: x * 2)
integers(min_value=1).map(lambda x: <unknown>)
>>> st.integers(min_value=1) | st.booleans()
integers(min_value=1) | booleans()
>>> st.lists(st.integers(min_value=1) | st.booleans(), min_size=3)
lists(elements=integers(min_value=1) | booleans(), min_size=3)
```
Aren’t those reprs nice?
The lambda one bugs me a bit. If this had been in a file you’d have actually got the body of the lambda, but I can’t currently make that work in the python console. It works in ipython, and fixing it to work in the normal console would require me to write or vendor a decompiler in order to get good reprs and… well I’d be lying if I said I hadn’t considered it but so far a combination of laziness and judgement have prevailed.
This becomes more interesting when you realise that depending on the arguments you pass in, a strategy function may return radically different implementations. e.g. if you do floats(min_value=-0.0, max_value=5e-324) then there are only three floating point numbers in that range, and you get back something that is more or less equivalent to sampled_from((-0.0, 0.0, 5e-324)).
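As a quick aside, you can check that claim with nothing but the standard library (this is plain stdlib, not Hypothesis code): -0.0, 0.0 and 5e-324 occupy three distinct bit patterns, and they are the only three in that range.

```python
import struct

# -0.0, 0.0 and 5e-324 compare in a confusing way (-0.0 == 0.0), but
# they are three distinct IEEE 754 doubles: three distinct bit patterns.
for x in (-0.0, 0.0, 5e-324):
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    print('%r -> %#018x' % (x, bits))
```

5e-324 is the smallest positive subnormal double, so its bit pattern is exactly 1; -0.0 differs from 0.0 only in the sign bit.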
How does all this work?
Well, most of this is done with a single decorator and a bunch of pain:
```python
def defines_strategy(strategy_definition):
    from hypothesis.internal.reflection import proxies, arg_string, \
        convert_positional_arguments
    argspec = getargspec(strategy_definition)
    defaults = {}
    if argspec.defaults is not None:
        for k in hrange(1, len(argspec.defaults) + 1):
            defaults[argspec.args[-k]] = argspec.defaults[-k]

    @proxies(strategy_definition)
    def accept(*args, **kwargs):
        result = strategy_definition(*args, **kwargs)
        args, kwargs = convert_positional_arguments(
            strategy_definition, args, kwargs)
        kwargs_for_repr = dict(kwargs)
        for k, v in defaults.items():
            if k in kwargs_for_repr and kwargs_for_repr[k] is defaults[k]:
                del kwargs_for_repr[k]
        representation = u'%s(%s)' % (
            strategy_definition.__name__,
            arg_string(strategy_definition, args, kwargs_for_repr))
        return ReprWrapperStrategy(result, representation)
    return accept
```
What’s this doing?
Well, ReprWrapperStrategy is more or less what it sounds like: it wraps a strategy and provides it with a custom repr string. proxies is basically functools.wraps but with a bit more attention given to getting the argspec exactly right.
So what we’re doing here is:
- Converting all positional arguments to their kwargs equivalent where possible
- Removing any keyword arguments that are exactly the default
- Producing an argument string that, when invoked with the remaining args (from varargs) and any keyword args, would be equivalent to the ones that were actually passed in. (Special note: the keyword arguments are ordered in the order of the argument list, with anything passed via kwargs sorted alphabetically after the real keyword arguments. This ensures that we have a stable repr that doesn’t depend on hash iteration order. Why are kwargs not an OrderedDict?)
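The same pattern can be sketched in miniature with the modern inspect.signature API. To be clear, this is not the actual Hypothesis code: ReprWrapper and the defines_strategy below are toy stand-ins, and real Hypothesis handles varargs, argspec proxying and Python 2 compatibility that this skips.

```python
import inspect

class ReprWrapper:
    """Toy stand-in for ReprWrapperStrategy: wraps a value and
    gives it a fixed repr string."""
    def __init__(self, wrapped, representation):
        self.wrapped = wrapped
        self._repr = representation

    def __repr__(self):
        return self._repr

def defines_strategy(fn):
    sig = inspect.signature(fn)

    def accept(*args, **kwargs):
        result = fn(*args, **kwargs)
        # Bind positional arguments to their parameter names, then
        # drop anything that is (by identity) exactly its default.
        bound = sig.bind(*args, **kwargs)
        parts = []
        for name, value in bound.arguments.items():
            if value is sig.parameters[name].default:
                continue
            parts.append('%s=%r' % (name, value))
        return ReprWrapper(result, '%s(%s)' % (fn.__name__, ', '.join(parts)))
    return accept

@defines_strategy
def integers(min_value=None, max_value=None):
    return object()  # stand-in for the real strategy object

print(integers(min_value=1))  # integers(min_value=1)
print(integers())             # integers()
print(integers(1))            # positional arg converted: integers(min_value=1)
```

Note that the identity check (`is`, not `==`) mirrors the real decorator: it only suppresses an argument when the caller genuinely omitted it, rather than happening to pass a value equal to the default.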
Most of the heavy lifting in here is done in the reflection module, which is named such mostly because myhateforthepythonobjectmodelburnswiththefireoftenthousandsuns was too long a module name.
Then we have the bit with map().
Here is the definition of repr for map:
```python
def __repr__(self):
    if not hasattr(self, u'_cached_repr'):
        self._cached_repr = u'%r.map(%s)' % (
            self.mapped_strategy,
            get_pretty_function_description(self.pack))
    return self._cached_repr
```
We cache the repr on first evaluation because get_pretty_function_description is quite slow (not outrageously slow, but quite slow), so we neither want to call it lots of times nor want to calculate it if you don’t need it.
For non-lambda functions, get_pretty_function_description returns their __name__. For lambdas, it tries to figure out their source code through a mix of inspect.getsource (which doesn’t actually work, and the fact that it doesn’t work is considered not-a-bug, wontfix) and some terrible, terrible hacks. In the event of something going wrong here it returns the “lambda x: &lt;unknown&gt;” we saw above. If you pass something that isn’t a function (e.g. a functools.partial) it just returns the repr, so you see things like:
```python
>>> from hypothesis.strategies import integers
>>> from functools import partial
>>> def add(x, y):
...     return x + y
...
>>> integers().map(partial(add, 1))
integers().map(functools.partial(<function add at 0x...>, 1))
```
I may at some point add a special case for functools.partial because I am that pedantic.
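A much-simplified sketch of the dispatch logic described above might look like this. This is not the real get_pretty_function_description (the actual hacks are considerably worse); pretty_function_description is a hypothetical name, and the real thing does far more work to recover lambda bodies.

```python
import functools
import inspect

def pretty_function_description(f):
    """Simplified sketch: named functions report __name__, lambdas
    fall back to whatever source inspect can find, anything without
    a __name__ (e.g. functools.partial) just gets its repr."""
    name = getattr(f, '__name__', None)
    if name is None:
        # functools.partial objects have no __name__ attribute.
        return repr(f)
    if name != '<lambda>':
        return name
    try:
        # This is the part that "doesn't actually work" reliably,
        # e.g. in the plain Python console there is no source file.
        return inspect.getsource(f).strip()
    except (OSError, TypeError):
        return 'lambda ...: <unknown>'

def add(x, y):
    return x + y

print(pretty_function_description(add))                        # add
print(pretty_function_description(functools.partial(add, 1)))  # the partial's repr
```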
This union repr is much more straightforward in implementation but still worth having:
```python
def __repr__(self):
    return u' | '.join(map(repr, self.element_strategies))
```
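To see why this is all the union repr needs, here is a toy version of the pattern. These classes are illustrative stand-ins, not Hypothesis’s actual OneOfStrategy: each side of `|` already knows how to repr itself, so the union just joins them.

```python
class OneOf:
    """Toy union: holds the strategies on either side of `|` and
    joins their reprs with ' | '."""
    def __init__(self, element_strategies):
        self.element_strategies = element_strategies

    def __or__(self, other):
        # Chaining a | b | c flattens into one OneOf rather than nesting.
        return OneOf(list(self.element_strategies) + [other])

    def __repr__(self):
        return ' | '.join(map(repr, self.element_strategies))

class Named:
    """Stand-in strategy with a fixed repr."""
    def __init__(self, name):
        self.name = name

    def __or__(self, other):
        return OneOf([self, other])

    def __repr__(self):
        return self.name

print(Named('integers()') | Named('booleans()'))  # integers() | booleans()
```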
Is all this worth it? I don’t know. Almost nobody has commented on it, but it makes me feel better. Examples in documentation look a bit prettier, it renders some error messages and reporting better, and generally makes it a lot more transparent what’s actually going on when you’re looking at a repr.
It probably isn’t worth the amount of effort I’ve put into the functionality it’s built on top of, but most of the functionality was already there – I don’t think I added any new functions to reflection to write this, it’s all code I’ve repurposed from other things.
Should you copy me? No, probably not. Nobody actually cares about repr quality as much as I do, but it’s a nice little touch that makes interactive usage of the library a little bit easier, so it’s at least worth thinking about.