Channel: Planet Python

PyBites: PyBites Twitter Digest - Issue 27, 2018


One more day for PSF Fellowship Nominations! Get them in ASAP!

Decorator library to configure function arguments

Submitted by @clamytoe.

Mockaroo! Who knew? Create your own data to use in Sketch!

Submitted by @bohemianjack.

Very cool use case of OpenCV

Nice to see Netflix doing stuff like this

Deep Learning basics by Sentdex!

This is a great security step. Nice going PyPI!

This is why the UX is so important.

Altair version 2.2 released

Django v Wordpress

How to use Bootstrap 4 forms in Django

PyConAU is all sold out! Who's going next week?

Python text-to-speech!

Nice! A web UI for pdb!


>>> from pybites import Bob, Julian
Keep Calm and Code in Python!

Podcast.__init__: Don't Just Stand There, Get Programming! with Ana Bell

Summary

Writing a book is hard work, especially when you are trying to teach a concept as broad as programming. In this episode Ana Bell discusses her recent work in writing Get Programming: Learn To Code With Python, including her views on how to separate the principles from the implementation, making the book evergreen in its appeal, and how her experience as a lecturer at MIT has helped her maintain the perspective of beginners. She also shares her views on the value of learning about programming even when you have no intention of pursuing it as a career, and ways to take the next steps if that is your goal.

Preface

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to scale up. Go to podcastinit.com/linode to get a $20 credit and launch a new server in under a minute.
  • As you know, Python has become one of the most popular programming languages in the world, due to the size, scope, and friendliness of the language and community. But, it can be tough learning it when you’re just starting out. Luckily, there’s an easy way to get involved. Written by MIT lecturer Ana Bell and published by Manning Publications, Get Programming: Learn to code with Python is the perfect way to get started working with Python. Ana’s experience as a teacher of Python really shines through, as you get hands-on with the language without being drowned in confusing jargon or theory. Filled with practical examples and step-by-step lessons to take on, Get Programming is perfect for people who just want to get stuck in with Python. Get your copy of the book with a special 40% discount for Podcast.__init__ listeners at podcastinit.com/get-programming using code: Bell40!
  • Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com.
  • To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
  • Join the community in the new Zulip chat workspace at podcastinit.com/chat
  • Your host as usual is Tobias Macey and today I’m interviewing Ana Bell about her book, Get Programming: Learn to code with Python, and her approach to teaching how to code

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing your motivation for writing a book about learning to program?
    • Who is the target audience for this book?
    • What level of competence do you want the reader to have when they have completed it?
  • What were the most challenging aspects of writing a book for beginning programmers?
    • What did you do to recapture the “beginner mind” while writing?
  • There are a large variety of books on learning to program and at least as many approaches. Can you describe the techniques that you use in your book to help readers grasp the concepts that you cover?
  • One of the problems of writing a book about technology is that there is no stationary target to aim for due to the constant advancement of the industry. How do you reconcile that reality with the need for a book to remain relevant for an extended period of time?
    • How do you decide what to include and what to leave out when writing about learning how to program?
  • What advice do you have for people who have read your book and want to continue on to a career in development?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Mike Driscoll: PyDev of the Week: Jessica Ingrassellino


This week we welcome Jessica Ingrassellino (@jess_ingrass) as our PyDev of the Week. Jessica is the founder of teachcode.org, where you can schedule teaching classes with Jessica. She is also the author of Python Projects for Kids. Let’s take some time to get to know her better!

Can you tell us a little about yourself (hobbies, education, etc):

Before I had a career in tech, I studied for my EdD in Music Education and was a music teacher for ten years in public schools. I spend my free time reading (I really have a book problem!), writing, and playing violin and viola in local orchestras. I love learning, and so I go to different lectures that are available in NYC as well. Outside of work, I focus on doing teacher training and support through my organization, teachcode.org. I speak at conferences about software testing and using Python and Python libraries as software testing tools.

Why did you start using Python?

So, I actually stumbled into Python by accident. After about 6 months of doing software testing, I became really frustrated by regression testing after each release. I thought there must be a better way, and when I talked to developers, some of them knew about automated UI testing. They weren’t sure about the specifics, but they knew how to point me in the right direction. I looked up free coding classes, and found a Python MOOC by Rice University. I got about 3 assignments in, and then Hurricane Sandy knocked our company offline and I had to spend weeks helping to get things up and running again, but that class got me started in Python.

What other programming languages do you know and which is your favorite?

I know HTML, some CSS, Ruby, some C++, and Python. My favorite is Python because it makes the most sense to me. Honestly, I always call myself a “slow coder” (which annoys my colleagues) and Python is where I have understood code concepts with the most clarity and speed (which is fast for me, but slow for many others!).

What projects are you working on now?

Right now, I am working on a book for adults who are beginning to code. The biggest areas of this book I really want to focus on are addressing some of the assumptions about prior knowledge that I see in other books, and also including unit testing as a part of learning about code. It’s actually tricky to write those things in a way that makes sense and is not confusing, so I’m hoping I will be able to succeed.

At my day job, I’m helping to build out the robot framework for automated testing, although that has taken a bit of a backseat since I became the Director.

Which Python libraries are your favorite (core or 3rd party)?

I’m really enjoying pyglet right now, as I am using it for my book. For work, I use Robot Framework, and that’s been great in a testing context. For unit testing, I’ve enjoyed looking at pytest, and I am looking into Hypothesis for property testing. I love that the Python ecosystem is so strong and has so many great libraries.

How did you get started with the Python Education Summit?

I got started with the education summit because I believe that Python is a great language for people to begin learning to code. I wanted to know how I could help others begin their code journey (because it is a journey!).

What are you most excited about when it comes to programming and education?

I am excited by the possibilities. When I am teaching my middle and high school code students, I like to focus on using code as a tool to solve a problem. What happens is that I can see how each student learns, innovates, and understands code. I am excited because code, as a discipline, has not yet been codified in the way that other disciplines have. There are still lots of arguments about the “best” ways to do things, but ultimately, there is room to try something, watch it fail or succeed, and learn how to do it better the next time. The freedom to get things wrong is almost artistic or improvisational, and I really think that environment is phenomenal for fostering learning and creative problem solving.

Is there anything else you’d like to say?

Wherever you are, keep going. We all start at the beginning. In fact, I would still consider myself in the beginning or maybe intermediate phases of my coding career and life. Move forward with a learner’s mindset and ask a lot of questions. Learning never ends.

Thanks for doing the interview, Jessica!

Dusty Phillips: Computer Vision in Three Lines of Code plus a bunch more lines

My wife and I both have a tendency to leave the garage door open. You’re in and out, grabbing garden tools or supplies, and at the end of the day you enter the house through the back door and forget to check the garage. Luckily, we live in rural Canada, surrounded by wonderful people, where the door could sit open for days without anything “disappearing”. But it still makes me feel nervous to discover it’s been forgotten, if only because it is a waste of heat in the winter (not to mention the chance of blowing full of snow!).

Matthew Rocklin: Cloud Lock-in and Open Standards


This post is from conversations with Peter Wang, Yuvi Panda, and several others. Yuvi expresses his own views on this topic on his blog.

Summary

When moving to the cloud we should be mindful to avoid vendor lock-in by adopting open standards.

Adoption of cloud computing

Cloud computing is taking over both for-profit enterprises and public/scientific institutions. The Cloud is cheap, flexible, requires little up-front investment, and enables greater collaboration. Cloud vendors like Amazon Web Services (AWS), Google Cloud Platform (GCP), and Microsoft Azure compete to create stable, easy-to-use platforms to serve the needs of a variety of institutions, both big and small. This presents a great opportunity for society, but also a risk of large-scale lock-in in the future.

Cloud vendors build services to lock in users

Some of the competition between cloud vendors is about providing lower costs, higher availability, improved scaling, and so on, that are strictly a benefit for consumers. This is great.

However some of the competition is in the form of services that are specialized to a particular commercial cloud, like Amazon Lambda, Google Tensor Processing Units (TPUs), or Azure Notebooks. These services are valuable to enterprise and public institutions alike, but they lock users in long-term. If you build your system around one of these systems then moving to a different cloud in the future becomes expensive. This stickiness to your current cloud “locks you in” to using that particular cloud, even if policies change in the future in ways that you dislike.

This is OK; lock-in is a standard business practice. We shouldn’t fault these commercial companies for making good business decisions. However, it’s something that we should keep in mind as we invest public effort in these technologies.

Open standards counter lock-in technologies

One way to counter lock-in is to promote the adoption of open standard technologies that are shared among cloud platforms. If these open standard technologies become popular enough then cloud platforms must offer them alongside their proprietary technologies in order to stay competitive, removing one of their options for lock-in.

Examples with Kubernetes and Parquet

For example, consider Kubernetes, a popular resource manager for clusters. While Kubernetes was originally promoted by Google, it was developed in the open by a broader community, gained global adoption, and is now available across all three major commercial clouds. Today if you write your infrastructure on Kubernetes you can move your distributed services between clouds easily, or can move your system onto an on-premises cluster if that becomes necessary. You retain the freedom to move around in the future with low cost.

Consider also the open Parquet data format. If you store your data in Parquet then you can move that data between any cloud’s storage system easily, or can move that data to your in-house hardware without going through a painful database export process.

Technologies like Kubernetes and Parquet displace proprietary technologies like Amazon’s Elastic Container Service (ECS), which locks users into AWS, or Google’s BigQuery, which keeps users on GCP with data gravity. This is fine; Amazon and Google can still compete for users with any of their other excellent services, but they’ve been pushed up the stack a bit, away from technologies that are infrastructural and toward technologies that are more about convenience and high-level usability.

What we can do

Wide adoption of open standard infrastructure protects us from the control of cloud vendors.

If you are a public institution considering the cloud then please consider the services that you plan to adopt and their potential to lock your institution in over the long run. These services may still make sense, but you should probably have a conversation with your team and adopt them mindfully. You might consider developing a plan to extract yourself from that cloud in the future and see how your decisions affect the cost of that plan.

If you are an open source developer then please consider investing your effort around open standards instead of around proprietary tooling. By focusing our effort on open standards we provide public institutions with viable options for a safe cloud.

Real Python: Sets in Python


Perhaps you recall learning about sets and set theory at some point in your mathematical education. Maybe you even remember Venn diagrams:

Venn diagram

If this doesn’t ring a bell, don’t worry! This tutorial should still be easily accessible for you.

In mathematics, a rigorous definition of a set can be abstract and difficult to grasp. Practically though, a set can be thought of simply as a well-defined collection of distinct objects, typically called elements or members.

Grouping objects into a set can be useful in programming as well, and Python provides a built-in set type to do so. Sets are distinguished from other object types by the unique operations that can be performed on them.

Here’s what you’ll learn in this tutorial: You’ll see how to define set objects in Python and discover the operations that they support. As with the earlier tutorials on lists and dictionaries, when you are finished with this tutorial, you should have a good feel for when a set is an appropriate choice. You will also learn about frozen sets, which are similar to sets except for one important detail.

Defining a Set

Python’s built-in set type has the following characteristics:

  • Sets are unordered.
  • Set elements are unique. Duplicate elements are not allowed.
  • A set itself may be modified, but the elements contained in the set must be of an immutable type.

Let’s see what all that means, and how you can work with sets in Python.

A set can be created in two ways. First, you can define a set with the built-in set() function:

x = set(<iter>)

In this case, the argument <iter> is an iterable—again, for the moment, think list or tuple—that generates the list of objects to be included in the set. This is analogous to the <iter> argument given to the .extend() list method:

>>> x = set(['foo', 'bar', 'baz', 'foo', 'qux'])
>>> x
{'qux', 'foo', 'bar', 'baz'}
>>> x = set(('foo', 'bar', 'baz', 'foo', 'qux'))
>>> x
{'qux', 'foo', 'bar', 'baz'}

Strings are also iterable, so a string can be passed to set() as well. You have already seen that list(s) generates a list of the characters in the string s. Similarly, set(s) generates a set of the characters in s:

>>> s = 'quux'
>>> list(s)
['q', 'u', 'u', 'x']
>>> set(s)
{'x', 'u', 'q'}

You can see that the resulting sets are unordered: the original order, as specified in the definition, is not necessarily preserved. Additionally, duplicate values are only represented in the set once, as with the string 'foo' in the first two examples and the letter 'u' in the third.

Alternately, a set can be defined with curly braces ({}):

x = {<obj>, <obj>, ..., <obj>}

When a set is defined this way, each <obj> becomes a distinct element of the set, even if it is an iterable. This behavior is similar to that of the .append() list method.

Thus, the sets shown above can also be defined like this:

>>> x = {'foo', 'bar', 'baz', 'foo', 'qux'}
>>> x
{'qux', 'foo', 'bar', 'baz'}
>>> x = {'q', 'u', 'u', 'x'}
>>> x
{'x', 'q', 'u'}

To recap:

  • The argument to set() is an iterable. It generates a list of elements to be placed into the set.
  • The objects in curly braces are placed into the set intact, even if they are iterable.

Observe the difference between these two set definitions:

>>> {'foo'}
{'foo'}
>>> set('foo')
{'o', 'f'}

A set can be empty. However, recall that Python interprets empty curly braces ({}) as an empty dictionary, so the only way to define an empty set is with the set() function:

>>> x = set()
>>> type(x)
<class 'set'>
>>> x
set()
>>> x = {}
>>> type(x)
<class 'dict'>

An empty set is falsy in Boolean context:

>>> x = set()
>>> bool(x)
False
>>> x or 1
1
>>> x and 1
set()

You might think the most intuitive sets would contain similar objects—for example, even numbers or surnames:

>>> s1 = {2, 4, 6, 8, 10}
>>> s2 = {'Smith', 'McArthur', 'Wilson', 'Johansson'}

Python does not require this, though. The elements in a set can be objects of different types:

>>> x = {42, 'foo', 3.14159, None}
>>> x
{None, 'foo', 42, 3.14159}

Don’t forget that set elements must be immutable. For example, a tuple may be included in a set:

>>> x = {42, 'foo', (1, 2, 3), 3.14159}
>>> x
{42, 'foo', 3.14159, (1, 2, 3)}

But lists and dictionaries are mutable, so they can’t be set elements:

>>> a = [1, 2, 3]
>>> {a}
Traceback (most recent call last):
  File "<pyshell#70>", line 1, in <module>
    {a}
TypeError: unhashable type: 'list'
>>> d = {'a': 1, 'b': 2}
>>> {d}
Traceback (most recent call last):
  File "<pyshell#72>", line 1, in <module>
    {d}
TypeError: unhashable type: 'dict'

Set Size and Membership

The len() function returns the number of elements in a set, and the in and not in operators can be used to test for membership:

>>> x = {'foo', 'bar', 'baz'}
>>> len(x)
3
>>> 'bar' in x
True
>>> 'qux' in x
False

Operating on a Set

Many of the operations that can be used for Python’s other composite data types don’t make sense for sets. For example, sets can’t be indexed or sliced. However, Python provides a whole host of operations on set objects that generally mimic the operations that are defined for mathematical sets.
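For instance, here is a short sketch of what happens if you try to index a set anyway (the element values are arbitrary examples):

```python
# Sets can't be indexed or sliced, because their elements have no position.
x = {'foo', 'bar', 'baz'}

try:
    x[0]
except TypeError as e:
    print(e)  # 'set' object is not subscriptable

# Membership testing and iteration still work fine:
print('foo' in x)   # True
print(sorted(x))    # ['bar', 'baz', 'foo']
```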

Operators vs. Methods

Most, though not quite all, set operations in Python can be performed in two different ways: by operator or by method. Let’s take a look at how these operators and methods work, using set union as an example.

Given two sets, x1 and x2, the union of x1 and x2 is a set consisting of all elements in either set.

Consider these two sets:

x1 = {'foo', 'bar', 'baz'}
x2 = {'baz', 'qux', 'quux'}

The union of x1 and x2 is {'foo', 'bar', 'baz', 'qux', 'quux'}.

Note: Notice that the element 'baz', which appears in both x1 and x2, appears only once in the union. Sets never contain duplicate values.

In Python, set union can be performed with the | operator:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1 | x2
{'baz', 'quux', 'qux', 'bar', 'foo'}

Set union can also be obtained with the .union() method. The method is invoked on one of the sets, and the other is passed as an argument:

>>> x1.union(x2)
{'baz', 'quux', 'qux', 'bar', 'foo'}

The way they are used in the examples above, the operator and method behave identically. But there is a subtle difference between them. When you use the | operator, both operands must be sets. The .union() method, on the other hand, will take any iterable as an argument, convert it to a set, and then perform the union.

Observe the difference between these two statements:

>>> x1 | ('baz', 'qux', 'quux')
Traceback (most recent call last):
  File "<pyshell#43>", line 1, in <module>
    x1 | ('baz', 'qux', 'quux')
TypeError: unsupported operand type(s) for |: 'set' and 'tuple'
>>> x1.union(('baz', 'qux', 'quux'))
{'baz', 'quux', 'qux', 'bar', 'foo'}

Both attempt to compute the union of x1 and the tuple ('baz', 'qux', 'quux'). This fails with the | operator but succeeds with the .union() method.

Available Operators and Methods

Below is a list of the set operations available in Python. Some are performed by operator, some by method, and some by both. The principle outlined above generally applies: where a set is expected, methods will typically accept any iterable as an argument, but operators require actual sets as operands.

x1.union(x2[, x3 ...])

x1 | x2 [| x3 ...]

Compute the union of two or more sets.

[Image: Set Union]

x1.union(x2) and x1 | x2 both return the set of all elements in either x1 or x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1.union(x2)
{'foo', 'qux', 'quux', 'baz', 'bar'}
>>> x1 | x2
{'foo', 'qux', 'quux', 'baz', 'bar'}

More than two sets may be specified with either the operator or the method:

>>> a = {1, 2, 3, 4}
>>> b = {2, 3, 4, 5}
>>> c = {3, 4, 5, 6}
>>> d = {4, 5, 6, 7}
>>> a.union(b, c, d)
{1, 2, 3, 4, 5, 6, 7}
>>> a | b | c | d
{1, 2, 3, 4, 5, 6, 7}

The resulting set contains all elements that are present in any of the specified sets.

x1.intersection(x2[, x3 ...])

x1 & x2 [& x3 ...]

Compute the intersection of two or more sets.

[Image: Set Intersection]

x1.intersection(x2) and x1 & x2 return the set of elements common to both x1 and x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1.intersection(x2)
{'baz'}
>>> x1 & x2
{'baz'}

You can specify multiple sets with the intersection method and operator, just like you can with set union:

>>> a = {1, 2, 3, 4}
>>> b = {2, 3, 4, 5}
>>> c = {3, 4, 5, 6}
>>> d = {4, 5, 6, 7}
>>> a.intersection(b, c, d)
{4}
>>> a & b & c & d
{4}

The resulting set contains only elements that are present in all of the specified sets.

x1.difference(x2[, x3 ...])

x1 - x2 [- x3 ...]

Compute the difference between two or more sets.

[Image: Set Difference]

x1.difference(x2) and x1 - x2 return the set of all elements that are in x1 but not in x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1.difference(x2)
{'foo', 'bar'}
>>> x1 - x2
{'foo', 'bar'}

Another way to think of this is that x1.difference(x2) and x1 - x2 return the set that results when any elements in x2 are removed or subtracted from x1.

Once again, you can specify more than two sets:

>>> a = {1, 2, 3, 30, 300}
>>> b = {10, 20, 30, 40}
>>> c = {100, 200, 300, 400}
>>> a.difference(b, c)
{1, 2, 3}
>>> a - b - c
{1, 2, 3}

When multiple sets are specified, the operation is performed from left to right. In the example above, a - b is computed first, resulting in {1, 2, 3, 300}. Then c is subtracted from that set, leaving {1, 2, 3}:

[Image: set difference with multiple sets, performed left to right]

x1.symmetric_difference(x2)

x1 ^ x2 [^ x3 ...]

Compute the symmetric difference between sets.

[Image: Set Symmetric Difference]

x1.symmetric_difference(x2) and x1 ^ x2 return the set of all elements in either x1 or x2, but not both:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1.symmetric_difference(x2)
{'foo', 'qux', 'quux', 'bar'}
>>> x1 ^ x2
{'foo', 'qux', 'quux', 'bar'}

The ^ operator also allows more than two sets:

>>> a = {1, 2, 3, 4, 5}
>>> b = {10, 2, 3, 4, 50}
>>> c = {1, 50, 100}
>>> a ^ b ^ c
{100, 5, 10}

As with the difference operator, when multiple sets are specified, the operation is performed from left to right.

Curiously, although the ^ operator allows multiple sets, the .symmetric_difference() method doesn’t:

>>> a = {1, 2, 3, 4, 5}
>>> b = {10, 2, 3, 4, 50}
>>> c = {1, 50, 100}
>>> a.symmetric_difference(b, c)
Traceback (most recent call last):
  File "<pyshell#11>", line 1, in <module>
    a.symmetric_difference(b, c)
TypeError: symmetric_difference() takes exactly one argument (2 given)

x1.isdisjoint(x2)

Determines whether or not two sets have any elements in common.

x1.isdisjoint(x2) returns True if x1 and x2 have no elements in common:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1.isdisjoint(x2)
False
>>> x2 - {'baz'}
{'quux', 'qux'}
>>> x1.isdisjoint(x2 - {'baz'})
True

If x1.isdisjoint(x2) is True, then x1 & x2 is the empty set:

>>> x1 = {1, 3, 5}
>>> x2 = {2, 4, 6}
>>> x1.isdisjoint(x2)
True
>>> x1 & x2
set()

Note: There is no operator that corresponds to the .isdisjoint() method.

x1.issubset(x2)

x1 <= x2

Determine whether one set is a subset of the other.

In set theory, a set x1 is considered a subset of another set x2 if every element of x1 is in x2.

x1.issubset(x2) and x1 <= x2 return True if x1 is a subset of x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x1.issubset({'foo', 'bar', 'baz', 'qux', 'quux'})
True
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1 <= x2
False

A set is considered to be a subset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x.issubset(x)
True
>>> x <= x
True

It seems strange, perhaps. But it fits the definition—every element of x is in x.

x1 < x2

Determines whether one set is a proper subset of the other.

A proper subset is the same as a subset, except that the sets can’t be identical. A set x1 is considered a proper subset of another set x2 if every element of x1 is in x2, and x1 and x2 are not equal.

x1 < x2 returns True if x1 is a proper subset of x2:

>>> x1 = {'foo', 'bar'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 < x2
True
>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 < x2
False

While a set is considered a subset of itself, it is not a proper subset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x <= x
True
>>> x < x
False

Note: The < operator is the only way to test whether a set is a proper subset. There is no corresponding method.

x1.issuperset(x2)

x1 >= x2

Determine whether one set is a superset of the other.

A superset is the reverse of a subset. A set x1 is considered a superset of another set x2 if x1 contains every element of x2.

x1.issuperset(x2) and x1 >= x2 return True if x1 is a superset of x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x1.issuperset({'foo', 'bar'})
True
>>> x2 = {'baz', 'qux', 'quux'}
>>> x1 >= x2
False

You have already seen that a set is considered a subset of itself. A set is also considered a superset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x.issuperset(x)
True
>>> x >= x
True

x1 > x2

Determines whether one set is a proper superset of the other.

A proper superset is the same as a superset, except that the sets can’t be identical. A set x1 is considered a proper superset of another set x2 if x1 contains every element of x2, and x1 and x2 are not equal.

x1 > x2 returns True if x1 is a proper superset of x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar'}
>>> x1 > x2
True
>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'bar', 'baz'}
>>> x1 > x2
False

A set is not a proper superset of itself:

>>> x = {1, 2, 3, 4, 5}
>>> x > x
False

Note: The > operator is the only way to test whether a set is a proper superset. There is no corresponding method.

Modifying a Set

Although the elements contained in a set must be of immutable type, sets themselves can be modified. Like the operations above, there are a mix of operators and methods that can be used to change the contents of a set.

Augmented Assignment Operators and Methods

Each of the union, intersection, difference, and symmetric difference operators listed above has an augmented assignment form that can be used to modify a set. For each, there is a corresponding method as well.

x1.update(x2[, x3 ...])

x1 |= x2 [| x3 ...]

Modify a set by union.

x1.update(x2) and x1 |= x2 add to x1 any elements in x2 that x1 does not already have:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}
>>> x1 |= x2
>>> x1
{'qux', 'foo', 'bar', 'baz'}
>>> x1.update(['corge', 'garply'])
>>> x1
{'qux', 'corge', 'garply', 'foo', 'bar', 'baz'}

x1.intersection_update(x2[, x3 ...])

x1 &= x2 [& x3 ...]

Modify a set by intersection.

x1.intersection_update(x2) and x1 &= x2 update x1, retaining only elements found in both x1 and x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}
>>> x1 &= x2
>>> x1
{'foo', 'baz'}
>>> x1.intersection_update(['baz', 'qux'])
>>> x1
{'baz'}

x1.difference_update(x2[, x3 ...])

x1 -= x2 [| x3 ...]

Modify a set by difference.

x1.difference_update(x2) and x1 -= x2 update x1, removing elements found in x2:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}
>>> x1 -= x2
>>> x1
{'bar'}
>>> x1.difference_update(['foo', 'bar', 'qux'])
>>> x1
set()

x1.symmetric_difference_update(x2)

x1 ^= x2

Modify a set by symmetric difference.

x1.symmetric_difference_update(x2) and x1 ^= x2 update x1, retaining elements found in either x1 or x2, but not both:

>>> x1 = {'foo', 'bar', 'baz'}
>>> x2 = {'foo', 'baz', 'qux'}
>>> x1 ^= x2
>>> x1
{'bar', 'qux'}
>>> x1.symmetric_difference_update(['qux', 'corge'])
>>> x1
{'bar', 'corge'}

Other Methods For Modifying Sets

Aside from the augmented operators above, Python supports several additional methods that modify sets.

x.add(<elem>)

Adds an element to a set.

x.add(<elem>) adds <elem>, which must be a single immutable object, to x:

>>> x = {'foo', 'bar', 'baz'}
>>> x.add('qux')
>>> x
{'bar', 'baz', 'foo', 'qux'}

x.remove(<elem>)

Removes an element from a set.

x.remove(<elem>) removes <elem> from x. Python raises an exception if <elem> is not in x:

>>> x = {'foo', 'bar', 'baz'}
>>> x.remove('baz')
>>> x
{'bar', 'foo'}
>>> x.remove('qux')
Traceback (most recent call last):
  File "<pyshell#58>", line 1, in <module>
    x.remove('qux')
KeyError: 'qux'

x.discard(<elem>)

Removes an element from a set.

x.discard(<elem>) also removes <elem> from x. However, if <elem> is not in x, this method quietly does nothing instead of raising an exception:

>>> x = {'foo', 'bar', 'baz'}
>>> x.discard('baz')
>>> x
{'bar', 'foo'}
>>> x.discard('qux')
>>> x
{'bar', 'foo'}

x.pop()

Removes an arbitrary element from a set.

x.pop() removes and returns an arbitrarily chosen element from x. If x is empty, x.pop() raises an exception:

>>> x = {'foo', 'bar', 'baz'}
>>> x.pop()
'bar'
>>> x
{'baz', 'foo'}
>>> x.pop()
'baz'
>>> x
{'foo'}
>>> x.pop()
'foo'
>>> x
set()
>>> x.pop()
Traceback (most recent call last):
  File "<pyshell#82>", line 1, in <module>
    x.pop()
KeyError: 'pop from an empty set'

x.clear()

Clears a set.

x.clear() removes all elements from x:

>>> x = {'foo', 'bar', 'baz'}
>>> x
{'foo', 'bar', 'baz'}
>>> x.clear()
>>> x
set()

Frozen Sets

Python provides another built-in type called a frozenset, which is in all respects exactly like a set, except that a frozenset is immutable. You can perform non-modifying operations on a frozenset:

>>> x = frozenset(['foo', 'bar', 'baz'])
>>> x
frozenset({'foo', 'baz', 'bar'})
>>> len(x)
3
>>> x & {'baz', 'qux', 'quux'}
frozenset({'baz'})

But methods that attempt to modify a frozenset fail:

>>> x = frozenset(['foo', 'bar', 'baz'])
>>> x.add('qux')
Traceback (most recent call last):
  File "<pyshell#127>", line 1, in <module>
    x.add('qux')
AttributeError: 'frozenset' object has no attribute 'add'
>>> x.pop()
Traceback (most recent call last):
  File "<pyshell#129>", line 1, in <module>
    x.pop()
AttributeError: 'frozenset' object has no attribute 'pop'
>>> x.clear()
Traceback (most recent call last):
  File "<pyshell#131>", line 1, in <module>
    x.clear()
AttributeError: 'frozenset' object has no attribute 'clear'
>>> x
frozenset({'foo', 'bar', 'baz'})

Deep Dive: Frozensets and Augmented Assignment

Since a frozenset is immutable, you might think it can’t be the target of an augmented assignment operator. But observe:

>>> f = frozenset(['foo', 'bar', 'baz'])
>>> s = {'baz', 'qux', 'quux'}
>>> f &= s
>>> f
frozenset({'baz'})

What gives?

Python does not perform augmented assignments on frozensets in place. The statement x &= s is effectively equivalent to x = x & s. It isn’t modifying the original x. It is reassigning x to a new object, and the object x originally referenced is gone.

You can verify this with the id() function:

>>> f = frozenset(['foo', 'bar', 'baz'])
>>> id(f)
56992872
>>> s = {'baz', 'qux', 'quux'}
>>> f &= s
>>> f
frozenset({'baz'})
>>> id(f)
56992152

f has a different integer identifier following the augmented assignment. It has been reassigned, not modified in place.

Some objects in Python are modified in place when they are the target of an augmented assignment operator. But frozensets aren’t.
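For contrast, the same augmented assignment on a regular (mutable) set is performed in place, which id() confirms:

```python
# Mutable set: &= calls set.__iand__, modifying the object in place
s = {'foo', 'bar', 'baz'}
s_id = id(s)
s &= {'baz', 'qux', 'quux'}
assert s == {'baz'}
assert id(s) == s_id  # same object: modified in place

# Frozenset: &= rebinds the name to a newly created frozenset
f = frozenset(['foo', 'bar', 'baz'])
f_id = id(f)
f &= {'baz', 'qux', 'quux'}
assert f == frozenset({'baz'})
assert id(f) != f_id  # different object: f was reassigned
```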

Frozensets are useful in situations where you want to use a set, but you need an immutable object. For example, you can’t define a set whose elements are also sets, because set elements must be immutable:

>>> x1 = set(['foo'])
>>> x2 = set(['bar'])
>>> x3 = set(['baz'])
>>> x = {x1, x2, x3}
Traceback (most recent call last):
  File "<pyshell#38>", line 1, in <module>
    x = {x1, x2, x3}
TypeError: unhashable type: 'set'

If you really feel compelled to define a set of sets (hey, it could happen), you can do it if the elements are frozensets, because they are immutable:

>>> x1 = frozenset(['foo'])
>>> x2 = frozenset(['bar'])
>>> x3 = frozenset(['baz'])
>>> x = {x1, x2, x3}
>>> x
{frozenset({'bar'}), frozenset({'baz'}), frozenset({'foo'})}

Likewise, recall from the previous tutorial on dictionaries that a dictionary key must be immutable. You can’t use the built-in set type as a dictionary key:

>>> x = {1, 2, 3}
>>> y = {'a', 'b', 'c'}

>>> d = {x: 'foo', y: 'bar'}
Traceback (most recent call last):
  File "<pyshell#3>", line 1, in <module>
    d = {x: 'foo', y: 'bar'}
TypeError: unhashable type: 'set'

If you find yourself needing to use sets as dictionary keys, you can use frozensets:

>>> x = frozenset({1, 2, 3})
>>> y = frozenset({'a', 'b', 'c'})

>>> d = {x: 'foo', y: 'bar'}
>>> d
{frozenset({1, 2, 3}): 'foo', frozenset({'c', 'a', 'b'}): 'bar'}

Conclusion

In this tutorial, you learned how to define set objects in Python, and you became familiar with the functions, operators, and methods that can be used to work with sets.

You should now be comfortable with the basic built-in data types that Python provides.

Next, you will begin to explore how the code that operates on those objects is organized and structured in a Python program.



Vasudev Ram: Nice Vim trick for Python code

By Vasudev Ram

Here's a Vim editing trick which can be useful for indenting Python code (in some situations):

Let's say I have this program, p1bad.py:
$ type p1bad.py

def foo(args):
print "in foo"
def bar():
print "in bar"
Running it gives:

$ python p1bad.py
File "p1bad.py", line 3
print "in foo"
^
IndentationError: expected an indented block
The indentation is incorrect; the first print statement should be indented one level under the "def foo" line, and the second one should be similarly indented under the "def bar" line.

Imagine that the incorrect indentation was due to fast typing, or was done by a beginner. We can fix this by opening the file in vim and typing this vim command (with the cursor being anywhere in the file):

    gg=G

How does it work?

The gg moves the cursor to the first line of the file.

Then, the = means indent some text. What text? The text that would have been moved over by the following cursor movement. And the next movement command is G (which is short for $G). That means move to the last line of the file, since $ means the last line, and G means move to a specified line (in this context).

So the net result is that the part of the file that would have been moved over (in the absence of the = part of the command), gets indented instead. In this case, it is the whole file, since we were at the top and said to move to the bottom.

Then I save the file as p1good.py. The result is:
$ type p1good.py

def foo(args):
    print "in foo"
def bar():
    print "in bar"
We can see that the code is now correctly indented.

Another example, p2bad.py:
$ type p2bad.py

def foo(args):
print "in foo"
    def bar():
    print "in bar"
Here, the user has again not indented the two print statements correctly. Note that in this case, function bar is nested under function foo (which is okay, since nested functions are allowed in Python).

Running p2bad.py gives:
$ python p2bad.py
File "p2bad.py", line 3
print "in foo"
^
IndentationError: expected an indented block
We get the same error.

If we now type the same command, gg=G, and save the file as p2good.py, we get:
$ type p2good.py

def foo(args):
    print "in foo"
    def bar():
        print "in bar"
Again, the indentation has been corrected.

If we were already at the start of the file, we could just type:

    =G

Note that this technique may not work in all cases. For example, in the case of p2bad.py above, the user may not have meant to nest function bar under function foo. They may have meant it to be a non-nested function, indented at the same level as foo. And vim may not be able to determine the intention of the user, in this and some other cases.

So use the technique with care, and preferably make a backup copy of your file before changing it with this technique.
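If you want to check whether the re-indented file is at least syntactically valid Python before running it, you can compile it programmatically. This is a generic sanity check, not part of the original post, and it checks files written for the Python version you run it with:

```python
import py_compile

def compiles_ok(path):
    """Return True if the file compiles cleanly (no syntax/indentation errors)."""
    try:
        py_compile.compile(path, doraise=True)
        return True
    except py_compile.PyCompileError:
        return False
```

Note that a file can compile cleanly and still be indented differently from what you intended, as in the p2bad.py case above, so this check complements rather than replaces a visual review.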


If you're new to vi/vim, and want to learn its basics fast, check out my vi quickstart tutorial on Gumroad. It's short, so you can read it and get going with vi/vim in half an hour or less.

- Enjoy.

- Vasudev Ram - Online Python training and consulting




PyCon.DE & PyData Karlsruhe: PyLadies X Micropython @ PyConDE


As part of PyCon DE 18 PyLadies and MicroPython will be running a beginner friendly full day hands-on workshop on MicroPython and the Internet of Things. We welcome anyone with existing programming knowledge or who is code curious to join us.

MicroPython

You received a PyBoard last year during PyCon DE but haven't really done anything with it yet? We want to change this! So bring your board along, or any other MicroPython-related hardware.

If you have existing hardware feel free to bring it along, you can also purchase hardware here (https://store.micropython.org/product/PYBLITEv1.0-ACH) or on the day. We will also have units you can work with without having to purchase them. Lunch will be provided and at the end of the day we will run a presentation of what has been produced (optional).

Please note that buying the hardware does not automatically sign you up for this workshop. MicroPython is an open source project that funds itself solely through selling hardware. If you would like to support this, please consider buying the hardware.

Pyladies

is an international mentorship group for women with a mission to promote, educate, and advance a diverse Python community and to provide a friendly support network with a bridge to the larger Python world.

MicroPython

is a lean and efficient implementation of Python 3 that includes a small subset of the Python standard library and is optimised to run on microcontrollers and in constrained environments. The MicroPython Pyboard is a compact electronic circuit board that runs MicroPython on the bare metal, giving you a low-level Python operating system that can be used to control all kinds of electronic projects. The pyboard is the official MicroPython microcontroller board with full support for software features.

Learn more here: http://micropython.org

Gender policy

We believe knowledge is for all, and at the same time our events aim primarily to empower the women-in-tech community. We ask non-female attendees to be aware of this and to keep their presence discreet, e.g. by coming with a female plus-one to ensure gender balance, and by avoiding speaking more than the rest of the attendees in discussions and question sections.

Photography / video consent

We take photos and videos during the event to use for documentation and on social media such as our photo albums, Facebook, Twitter, etc. By coming to the meetup, you consent to photos and videos being taken of you. If you do not want to give your consent, please let us know at check-in.

Contact

Interested in speaking at one of our events? Have a good idea for a Meetup? Get in touch with us at berlinpyladies@gmail.com

For help or questions before the event please join the PyLadies slack and go to the channel #pyladies-micropython

Invite: https://pyladies-berlin.herokuapp.com/

Slack: https://pyladies-berlin.slack.com


NumFOCUS: NumFOCUS Awards Development Grants to Open Source Projects – Summer 2018

Bhishan Bhandari: vis.js Network Examples


The intention of this post is to host example code snippets so people can take ideas from them to make great visualizations for themselves using vis.js. vis.js is a dynamic, browser-based visualization library. The library is designed to be easy to use, to handle large amounts of dynamic data, and to enable manipulation of […]

The post vis.js Network Examples appeared first on The Tara Nights.

Continuum Analytics Blog: Anaconda Funded by Citi Ventures


Scott Collison, CEO Today, we’re incredibly happy to announce funding from Citi Ventures and welcome them as a new investor and partner. Following its initial investment in Anaconda and led by a belief in our products and the success we’ve had, Citi also became an Anaconda customer to take advantage of our leading platform for …
Read more →

The post Anaconda Funded by Citi Ventures appeared first on Anaconda.

Codementor: Scaling Python Microservices with Kubernetes

We wrote in depth about setting up microservices in one of our previous posts (https://blog.apcelent.com/setup-microservices-architecture-in-python-with-zeromq-docker.html). In this post we are...

Python Anywhere: System update this morning


We deployed a new version of PythonAnywhere this morning. Everything went pretty smoothly; there were a few problems with some hosted websites shortly afterwards (an error in a load-distribution algorithm put too many websites on some servers, and not enough on others), but some sharp-eyed customers spotted the problem and let us know, and we were able to rebalance things and fix the issue quickly.

There are a couple of great new features in the new system, but we're doing some last-minute testing before making them live -- watch this space for more information :-)

Codementor: Working with Strings in Python

In this article we look at how we can manipulate strings using basic functions and methods available in Python.

Doug Hellmann: Planting Acorns

This post is based on the closing keynote I gave for PyTennessee in February 2018, where I talked about how the governance of an open source project impacts  the health of the project, and some lessons we learned in building the OpenStack community that can be applied to other projects. OpenStack is a cloud computing system …

Randy Zwitch: Creating a MapD ODBC Connection in RStudio Server


MapD ODBC RStudio Server

In my post Installing MapD on Microsoft Azure, I showed how to install MapD Community Edition on Microsoft Azure, using Ubuntu 16.04 LTS as the base image. One thing I glossed over during the firewall/security section was that I opened ports for Jupyter Notebook and other data science tools, but I didn’t actually show how to install any of those tools.

For this post, I’ll cover how to install MapD ODBC drivers and create a connection within RStudio server.

1. Installing RStudio Server on Microsoft Azure

With an Ubuntu VM running MapD, installing RStudio Server takes but a handful of commands. The RStudio Server download/install page has fantastic instructions, but if you are looking for Azure-specific RStudio Server install instructions, this blog post from Jumping Rivers does a great job.

2. Installing an ODBC Driver Manager

There are two major ODBC driver managers for Linux and macOS: unixODBC and iODBC. I have had more overall ODBC driver installation success with unixODBC than iODBC; here are the instructions for building unixODBC from source:

#download source and extract
wget ftp://ftp.unixodbc.org/pub/unixODBC/unixODBC-2.3.7.tar.gz
gunzip unixODBC*.tar.gz
tar xvf unixODBC*.tar

#compile and install
cd unixODBC-2.3.7
./configure
make
sudo make install

If you want to check everything is installed correctly, you can run the following command:

odbc_config --cflags
#result
-DHAVE_UNISTD_H -DHAVE_PWD_H -DHAVE_SYS_TYPES_H -DHAVE_LONG_LONG -DSIZEOF_LONG_INT=8 -I/usr/local/include

3. Installing MapD ODBC Driver System-wide

With unixODBC installed, the next step is to install the MapD ODBC drivers. ODBC drivers for MapD are provided as part of MapD Enterprise Edition, so you’ll need to contact your sales representative to get the appropriate version for your MapD installation.

For Linux, the MapD ODBC drivers are provided as a tarball, which when extracted provides all of the necessary ODBC driver files:

#make a directory to extract files into
mkdir mapd_odbc && cd mapd_odbc
tar -xvf ../mapd_odbc_installer_linux_3.80.1.36.tar.gz

#move to /opt/mapd/mapd_odbc (or wherever the other MapD files are)
cd .. && mv mapd_odbc /opt/mapd/mapd_odbc

By convention, MapD suggests placing the ODBC drivers in the same directory as your installation (frequently, /opt/mapd). Wherever you choose to place the directory, you need to add that location to the /etc/odbcinst.ini file:

[MapD Driver]
Driver          = /opt/mapd/mapd_odbc/libs/libODBC.so

At this point, we have everything we need to define a connection string within R using odbc:

library(odbc)
conn <- dbConnect(odbc::odbc(),
                  Driver   = "MapD Driver",
                  Server   = "localhost",
                  Database = "mapd",
                  UID      = "mapd",
                  PWD      = "helloRusers!",
                  Port     = 9091)

Depending on your use case/security preferences, there are two downsides to this method: 1) the credentials sit in plain text in the middle of the script, and 2) the RStudio Connection pane also shows them in plain text until you delete the connection. This can be remedied by defining a DSN (data source name).

4. Defining A DSN

A DSN is what people usually think of when installing ODBC drivers, as it holds some/all of the actual details for connecting to the database. DSN files can be placed in two locations: system-wide in /etc/odbc.ini or in an individual user’s home directory (needs to be ~/.odbc.ini, a hidden file).

In order to have the credentials completely masked in the RStudio session, place the following in the /etc/odbc.ini file:

[MapD Production]
Driver=MapD Driver
PWD=helloRusers!
UID=mapd
HOST=localhost
DATABASE=mapd
PORT=9091

Within the RStudio Connection pane, we can now test our DSN:

MapD ODBC RStudio Server DSN Test

With the DSN defined, the R connection code becomes much shorter, with no credentials exposed within the R session:

library(DBI)
con <- dbConnect(odbc::odbc(), "MapD Production")

ODBC: A Big Bag Of Hurt, But Super Useful

While the instructions above aren’t the easiest to work through, once you have ODBC set up and working one time, it’s usually just a matter of appending various credentials to the existing files to add databases.

From a MapD perspective, ODBC is supported through our Enterprise Edition, but it is the slowest way to work with the database. Up to this point, we’ve focused mostly on supporting Python through the pymapd package and the MapD Ibis backend, but there’s no technical reason why R can’t also be a first-class citizen.

So if you’re interested in helping develop an R package for MapD, whether using reticulate to wrap pymapd or to help develop Apache Thrift bindings and Apache Arrow native code, send me a Twitter message or connect via LinkedIn (or any other way to contact me) and we’ll figure out how to collaborate!

Mike Driscoll: Python 101: Episode #21 – Using Threads

PyCharm: PyCharm 2018.2.2


PyCharm 2018.2.2 is now available, with some small improvements. Get it now from our Website. If you’re still on PyCharm 2018.1, we’ve also got a release candidate for our new bugfix update PyCharm 2018.1.5

New in 2018.2.2

  • Some improvements to our pipenv support: if the pipfile specifies packages which aren’t compatible with your computer, they will no longer be suggested. Also, if you choose to create a pipenv for a project you’ve already opened, the project’s dependencies will now automatically be installed. This matches the behavior of pipenv on the command-line.
  • A regression where virtualenvs weren’t automatically detected has been resolved.
  • Some issues in version control support were ironed out: when you right-click a commit to reword it (change its commit message) in some cases PyCharm wasn’t able to identify the new hash of the commit correctly, this has been cleared up.
  • And much more, see the release notes for details.

Interested?

Get PyCharm from the JetBrains website

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm, and stay up to date. You can find the installation instructions on our website.

PyCharm 2018.1.5 RC

If you’re using PyCharm 2018.1, and for some reason can’t upgrade to PyCharm 2018.2, we also have a new version for you. The release candidate for PyCharm 2018.1.5 can be downloaded from Confluence.

In this release, we’ve fixed an issue where unshelving files would create a large number of threads. If you’re interested in learning about other issues that were resolved, check out the release notes here.

Codementor: Build Email Verification from Scratch With Masonite Framework and JSON Web Tokens

Masonite Framework (https://github.com/MasoniteFramework/masonite) is a modern and developer centric Python web framework. The architecture of Masonite is much more similar to the Laravel...

Real Python: Primer on Python Decorators


In this tutorial on decorators, we’ll look at what they are and how to create and use them. Decorators provide a simple syntax for calling higher-order functions.

By definition, a decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it.

This sounds confusing, but it’s really not, especially after you’ve seen a few examples of how decorators work. You can find all the examples from this article here.

Free Bonus: Click here to get access to a free "The Power of Python Decorators" guide that shows you 3 advanced decorator patterns and techniques you can use to write cleaner and more Pythonic programs.

Decorators Cheat Sheet:Click here to get access to a free 3-page Python decorators cheat sheet that summarizes the techniques explained in this tutorial.

Updates:

  • 08/22/2018: Major update adding more examples and more advanced decorators
  • 01/12/2016: Updated examples to Python 3 (v3.5.1) syntax and added a new example
  • 11/01/2015: Added a brief explanation on the functools.wraps() decorator

Functions

Before you can understand decorators, you must first understand how functions work. For our purposes, a function returns a value based on the given arguments. Here is a very simple example:

>>> def add_one(number):
...     return number + 1
>>> add_one(2)
3

In general, functions in Python may also have side effects rather than just turning an input into an output. The print() function is a basic example of this: it returns None while having the side effect of outputting something to the console. However, to understand decorators, it is enough to think about functions as something that turns given arguments into a value.
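print() itself is a handy way to see this distinction between return value and side effect in the REPL:

```python
result = print("Hello")  # side effect: the text appears on the console
assert result is None    # ...but the function's return value is None
```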

Note: In functional programming, you work (almost) only with pure functions without side effects. While not a purely functional language, Python supports many of the functional programming concepts, including functions as first-class objects.

First-Class Objects

In Python, functions are first-class objects. This means that functions can be passed around and used as arguments, just like any other object (string, int, float, list, and so on). Consider the following three functions:

def say_hello(name):
    return f"Hello {name}"

def be_awesome(name):
    return f"Yo {name}, together we are the awesomest!"

def greet_bob(greeter_func):
    return greeter_func("Bob")

Here, say_hello() and be_awesome() are regular functions that expect a name given as a string. The greet_bob() function however, expects a function as its argument. We can, for instance, pass it the say_hello() or the be_awesome() function:

>>> greet_bob(say_hello)
'Hello Bob'
>>> greet_bob(be_awesome)
'Yo Bob, together we are the awesomest!'

Note that greet_bob(say_hello) refers to two functions, but in different ways: greet_bob() and say_hello. The say_hello function is named without parentheses. This means that only a reference to the function is passed. The function is not executed. The greet_bob() function, on the other hand, is written with parentheses, so it will be called as usual.

Inner Functions

It’s possible to define functions inside other functions. Such functions are called inner functions. Here’s an example of a function with two inner functions:

def parent():
    print("Printing from the parent() function")

    def first_child():
        print("Printing from the first_child() function")

    def second_child():
        print("Printing from the second_child() function")

    second_child()
    first_child()

What happens when you call the parent() function? Think about this for a minute. The output will be as follows:

>>> parent()
Printing from the parent() function
Printing from the second_child() function
Printing from the first_child() function

Note that the order in which the inner functions are defined does not matter. Like with any other functions, the printing only happens when the inner functions are executed.

Furthermore, the inner functions are not defined until the parent function is called. They are locally scoped to parent(): they only exist inside the parent() function as local variables. Try calling first_child(). You should get an error:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'first_child' is not defined

Whenever you call parent(), the inner functions first_child() and second_child() are also called. But because of their local scope, they aren’t available outside of the parent() function.

Returning Functions From Functions

Python also allows you to use functions as return values. The following example returns one of the inner functions from the outer parent() function:

def parent(num):
    def first_child():
        return "Hi, I am Emma"

    def second_child():
        return "Call me Liam"

    if num == 1:
        return first_child
    else:
        return second_child

Note that you are returning first_child without the parentheses. Recall that this means that you are returning a reference to the function first_child. In contrast first_child() with parentheses refers to the result of evaluating the function. This can be seen in the following example:

>>> first = parent(1)
>>> second = parent(2)
>>> first
<function parent.<locals>.first_child at 0x7f599f1e2e18>
>>> second
<function parent.<locals>.second_child at 0x7f599dad5268>

The somewhat cryptic output simply means that the first variable refers to the local first_child() function inside of parent(), while second points to second_child().

You can now use first and second as if they are regular functions, even though the functions they point to can’t be accessed directly:

>>> first()
'Hi, I am Emma'
>>> second()
'Call me Liam'

Finally, note that in the earlier example you executed the inner functions within the parent function, for instance first_child(). However, in this last example, you did not add parentheses to the inner functions—first_child—upon returning. That way, you got a reference to each function that you could call in the future. Make sense?

Simple Decorators

Now that you’ve seen that functions are just like any other object in Python, you’re ready to move on and see the magical beast that is the Python decorator. Let’s start with an example:

def my_decorator(func):
    def wrapper():
        print("Something is happening before the function is called.")
        func()
        print("Something is happening after the function is called.")
    return wrapper

def say_whee():
    print("Whee!")

say_whee = my_decorator(say_whee)

Can you guess what happens when you call say_whee()? Try it:

>>> say_whee()
Something is happening before the function is called.
Whee!
Something is happening after the function is called.

To understand what’s going on here, look back at the previous examples. We are literally just applying everything you have learned so far.

The so-called decoration happens at the following line:

say_whee = my_decorator(say_whee)

In effect, the name say_whee now points to the wrapper() inner function. Remember that you return wrapper as a function when you call my_decorator(say_whee):

>>> say_whee
<function my_decorator.<locals>.wrapper at 0x7f3c5dfd42f0>

However, wrapper() has a reference to the original say_whee() as func, and calls that function between the two calls to print().

Put simply: decorators wrap a function, modifying its behavior.

Before moving on, let’s have a look at a second example. Because wrapper() is a regular Python function, the way a decorator modifies a function can change dynamically. So as not to disturb your neighbors, the following example will only run the decorated code during the day:

from datetime import datetime

def not_during_the_night(func):
    def wrapper():
        if 7 <= datetime.now().hour < 22:
            func()
        else:
            pass  # Hush, the neighbors are asleep
    return wrapper

def say_whee():
    print("Whee!")

say_whee = not_during_the_night(say_whee)

If you try to call say_whee() after bedtime, nothing will happen:

>>> say_whee()
>>>

Syntactic Sugar!

The way you decorated say_whee() above is a little clunky. First of all, you end up typing the name say_whee three times. In addition, the decoration gets a bit hidden away below the definition of the function.

Instead, Python allows you to use decorators in a simpler way with the @ symbol, sometimes called the “pie” syntax. The following example does the exact same thing as the first decorator example:

def my_decorator(func):
    def wrapper():
        print("Something is happening before the function is called.")
        func()
        print("Something is happening after the function is called.")
    return wrapper

@my_decorator
def say_whee():
    print("Whee!")

So, @my_decorator is just an easier way of saying say_whee = my_decorator(say_whee). It’s how you apply a decorator to a function.

Reusing Decorators

Recall that a decorator is just a regular Python function. All the usual tools for easy reusability are available. Let’s move the decorator to its own module that can be used in many other functions.

Create a file called decorators.py with the following content:

def do_twice(func):
    def wrapper_do_twice():
        func()
        func()
    return wrapper_do_twice

Note: You can name your inner function whatever you want, and a generic name like wrapper() is usually okay. You’ll see a lot of decorators in this article. To keep them apart, we’ll name the inner function with the same name as the decorator but with a wrapper_ prefix.

You can now use this new decorator in other files by doing a regular import:

from decorators import do_twice

@do_twice
def say_whee():
    print("Whee!")

When you run this example, you should see that the original say_whee() is executed twice:

>>> say_whee()
Whee!
Whee!

Free Bonus: Click here to get access to a free "The Power of Python Decorators" guide that shows you 3 advanced decorator patterns and techniques you can use to write cleaner and more Pythonic programs.

Decorating Functions With Arguments

Say that you have a function that accepts some arguments. Can you still decorate it? Let’s try:

from decorators import do_twice

@do_twice
def greet(name):
    print(f"Hello {name}")

Unfortunately, running this code raises an error:

>>> greet("World")
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: wrapper_do_twice() takes 0 positional arguments but 1 was given

The problem is that the inner function wrapper_do_twice() does not take any arguments, but name="World" was passed to it. You could fix this by letting wrapper_do_twice() accept one argument, but then it would not work for the say_whee() function you created earlier.

The solution is to use *args and **kwargs in the inner wrapper function. Then it will accept an arbitrary number of positional and keyword arguments. Rewrite decorators.py as follows:

def do_twice(func):
    def wrapper_do_twice(*args, **kwargs):
        func(*args, **kwargs)
        func(*args, **kwargs)
    return wrapper_do_twice

The wrapper_do_twice() inner function now accepts any number of arguments and passes them on to the function it decorates. Now both your say_whee() and greet() examples work:

>>> say_whee()
Whee!
Whee!
>>> greet("World")
Hello World
Hello World

Returning Values From Decorated Functions

What happens to the return value of decorated functions? Well, that’s up to the decorator to decide. Let’s say you decorate a simple function as follows:

from decorators import do_twice

@do_twice
def return_greeting(name):
    print("Creating greeting")
    return f"Hi {name}"

Try to use it:

>>> hi_adam = return_greeting("Adam")
Creating greeting
Creating greeting
>>> print(hi_adam)
None

Oops, your decorator ate the return value from the function.

Because wrapper_do_twice() doesn’t explicitly return a value, the call return_greeting("Adam") ended up returning None.

To fix this, you need to make sure the wrapper function returns the return value of the decorated function. Change your decorators.py file:

def do_twice(func):
    def wrapper_do_twice(*args, **kwargs):
        func(*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper_do_twice

The return value from the last execution of the function is returned:

>>> return_greeting("Adam")
Creating greeting
Creating greeting
'Hi Adam'

Who Are You, Really?

A great convenience when working with Python, especially in the interactive shell, is its powerful introspection ability. Introspection is the ability of an object to know about its own attributes at runtime. For instance, a function knows its own name and documentation:

>>> print
<built-in function print>
>>> print.__name__
'print'
>>> help(print)
Help on built-in function print in module builtins:

print(...)
    <full help message>

The introspection works for functions you define yourself as well:

>>> say_whee
<function do_twice.<locals>.wrapper_do_twice at 0x7f43700e52f0>
>>> say_whee.__name__
'wrapper_do_twice'
>>> help(say_whee)
Help on function wrapper_do_twice in module decorators:

wrapper_do_twice()

However, after being decorated, say_whee() has gotten very confused about its identity. It now reports being the wrapper_do_twice() inner function inside the do_twice() decorator. Although technically true, this is not very useful information.

To fix this, decorators should use the @functools.wraps decorator, which will preserve information about the original function. Update decorators.py again:

import functools

def do_twice(func):
    @functools.wraps(func)
    def wrapper_do_twice(*args, **kwargs):
        func(*args, **kwargs)
        return func(*args, **kwargs)
    return wrapper_do_twice

You do not need to change anything about the decorated say_whee() function:

>>> say_whee
<function say_whee at 0x7ff79a60f2f0>
>>> say_whee.__name__
'say_whee'
>>> help(say_whee)
Help on function say_whee in module whee:

say_whee()

Much better! Now say_whee() is still itself after decoration.

Technical Detail: The @functools.wraps decorator uses the function functools.update_wrapper() to update special attributes like __name__ and __doc__ that are used in the introspection.

A Few Real World Examples

Let’s look at a few more useful examples of decorators. You’ll notice that they’ll mainly follow the same pattern that you’ve learned so far:

import functools

def decorator(func):
    @functools.wraps(func)
    def wrapper_decorator(*args, **kwargs):
        # Do something before
        value = func(*args, **kwargs)
        # Do something after
        return value
    return wrapper_decorator

This formula is a good boilerplate template for building more complex decorators.

Note: In later examples, we will assume that these decorators are saved in your decorators.py file as well. Recall that you can download all the examples in this tutorial.

Timing Functions

Let’s start by creating a @timer decorator. It will measure the time a function takes to execute and print the duration to the console. Here’s the code:

import functools
import time

def timer(func):
    """Print the runtime of the decorated function"""
    @functools.wraps(func)
    def wrapper_timer(*args, **kwargs):
        start_time = time.perf_counter()    # 1
        value = func(*args, **kwargs)
        end_time = time.perf_counter()      # 2
        run_time = end_time - start_time    # 3
        print(f"Finished {func.__name__!r} in {run_time:.4f} secs")
        return value
    return wrapper_timer

@timer
def waste_some_time(num_times):
    for _ in range(num_times):
        sum([i**2 for i in range(10000)])

This decorator works by storing the time just before the function starts running (at the line marked # 1) and just after the function finishes (at # 2). The time the function takes is then the difference between the two (at # 3). We use the time.perf_counter() function, which does a good job of measuring time intervals. Here are some examples of timings:

>>> waste_some_time(1)
Finished 'waste_some_time' in 0.0010 secs

>>> waste_some_time(999)
Finished 'waste_some_time' in 0.3260 secs

Run it yourself. Work through the code line by line. Make sure you understand how it works. Don’t worry if you don’t get it, though. Decorators are advanced beings. Try to sleep on it or make a drawing of the program flow.

Note: The @timer decorator is great if you just want to get an idea about the runtime of your functions. If you want to do more precise measurements of code, you should instead consider the timeit module in the standard library. It temporarily disables garbage collection and runs multiple trials to strip out noise from quick function calls.
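As a minimal sketch of what switching to timeit could look like (the statement and repetition count here are illustrative, not part of the original example):

```python
import timeit

# timeit runs the statement many times and reports the total elapsed
# time, which averages out the noise you would see when timing a
# single quick call.
total = timeit.timeit("sum([i**2 for i in range(10000)])", number=100)
print(f"100 runs took {total:.4f} secs in total")
```

For heavier profiling needs, the standard library also offers cProfile, but timeit is the usual tool for micro-benchmarks like this one.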

Debugging Code

The following @debug decorator will print the arguments a function is called with as well as its return value every time the function is called:

import functools

def debug(func):
    """Print the function signature and return value"""
    @functools.wraps(func)
    def wrapper_debug(*args, **kwargs):
        args_repr = [repr(a) for a in args]                      # 1
        kwargs_repr = [f"{k}={v!r}" for k, v in kwargs.items()]  # 2
        signature = ", ".join(args_repr + kwargs_repr)           # 3
        print(f"Calling {func.__name__}({signature})")
        value = func(*args, **kwargs)
        print(f"{func.__name__!r} returned {value!r}")           # 4
        return value
    return wrapper_debug

The signature is created by joining the string representations of all the arguments. The numbers in the following list correspond to the numbered comments in the code:

  1. Create a list of the positional arguments. Use repr() to get a nice string representing each argument.
  2. Create a list of the keyword arguments. The f-string formats each argument as key=value where the !r specifier means that repr() is used to represent the value.
  3. The lists of positional and keyword arguments are joined together into one signature string with each argument separated by a comma.
  4. The return value is printed after the function is executed.

Let’s see how the decorator works in practice by applying it to a simple function with one positional and one keyword argument:

@debug
def make_greeting(name, age=None):
    if age is None:
        return f"Howdy {name}!"
    else:
        return f"Whoa {name}! {age} already, you are growing up!"

Note how the @debug decorator prints the signature and return value of the make_greeting() function:

>>> make_greeting("Benjamin")
Calling make_greeting('Benjamin')
'make_greeting' returned 'Howdy Benjamin!'
'Howdy Benjamin!'

>>> make_greeting("Richard", age=112)
Calling make_greeting('Richard', age=112)
'make_greeting' returned 'Whoa Richard! 112 already, you are growing up!'
'Whoa Richard! 112 already, you are growing up!'

>>> make_greeting(name="Dorrisile", age=116)
Calling make_greeting(name='Dorrisile', age=116)
'make_greeting' returned 'Whoa Dorrisile! 116 already, you are growing up!'
'Whoa Dorrisile! 116 already, you are growing up!'

This example might not seem immediately useful since the @debug decorator just repeats what you just wrote. It’s more powerful when applied to small convenience functions that you don’t call directly yourself.

The following example calculates an approximation to the mathematical constant e:

import math
from decorators import debug

# Apply a decorator to a standard library function
math.factorial = debug(math.factorial)

def approximate_e(terms=18):
    return sum(1 / math.factorial(n) for n in range(terms))

This example also shows how you can apply a decorator to a function that has already been defined. The approximation of e is based on the following series expansion:

e = 1/0! + 1/1! + 1/2! + 1/3! + ... = sum of 1/n! for n = 0, 1, 2, ...

When calling the approximate_e() function, you can see the @debug decorator at work:

>>> approximate_e(5)
Calling factorial(0)
'factorial' returned 1
Calling factorial(1)
'factorial' returned 1
Calling factorial(2)
'factorial' returned 2
Calling factorial(3)
'factorial' returned 6
Calling factorial(4)
'factorial' returned 24
2.708333333333333

In this example, you get a decent approximation to the true value e = 2.718281828, adding only 5 terms.

Slowing Down Code

This next example might not seem very useful. Why would you want to slow down your Python code? Probably the most common use case is that you want to rate-limit a function that continuously checks whether a resource—like a web page—has changed. The @slow_down decorator will sleep one second before it calls the decorated function:

import functools
import time

def slow_down(func):
    """Sleep 1 second before calling the function"""
    @functools.wraps(func)
    def wrapper_slow_down(*args, **kwargs):
        time.sleep(1)
        return func(*args, **kwargs)
    return wrapper_slow_down

@slow_down
def countdown(from_number):
    if from_number < 1:
        print("Liftoff!")
    else:
        print(from_number)
        countdown(from_number - 1)

To see the effect of the @slow_down decorator, you really need to run the example yourself:

>>> countdown(3)
3
2
1
Liftoff!

Note: The countdown() function is a recursive function. In other words, it’s a function calling itself. To learn more about recursive functions in Python, see our guide on Thinking Recursively in Python.

The @slow_down decorator always sleeps for one second. Later, you’ll see how to control the rate by passing an argument to the decorator.

Registering Plugins

Decorators don’t have to wrap the function they’re decorating. They can also simply register that a function exists and return it unwrapped. This can be used, for instance, to create a light-weight plug-in architecture:

import random

PLUGINS = dict()

def register(func):
    """Register a function as a plug-in"""
    PLUGINS[func.__name__] = func
    return func

@register
def say_hello(name):
    return f"Hello {name}"

@register
def be_awesome(name):
    return f"Yo {name}, together we are the awesomest!"

def randomly_greet(name):
    greeter, greeter_func = random.choice(list(PLUGINS.items()))
    print(f"Using {greeter!r}")
    return greeter_func(name)

The @register decorator simply stores a reference to the decorated function in the global PLUGINS dict. Note that you do not have to write an inner function or use @functools.wraps in this example because you are returning the original function unmodified.

The randomly_greet() function randomly chooses one of the registered functions to use. Note that the PLUGINS dictionary already contains references to each function object that is registered as a plugin:

>>> PLUGINS
{'say_hello': <function say_hello at 0x7f768eae6730>,
 'be_awesome': <function be_awesome at 0x7f768eae67b8>}

>>> randomly_greet("Alice")
Using 'say_hello'
'Hello Alice'

The main benefit of this simple plugin architecture is that you do not need to maintain a list of which plugins exist. That list is created when the plugins register themselves. This makes it trivial to add a new plugin: just define the function and decorate it with @register.
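To see how little effort a new plugin takes, here is a self-contained sketch of the same registry with a made-up third greeter, say_bonjour, added (the name is hypothetical, not from the original example):

```python
PLUGINS = dict()

def register(func):
    """Register a function as a plug-in (same decorator as above)"""
    PLUGINS[func.__name__] = func
    return func

@register
def say_hello(name):
    return f"Hello {name}"

# Adding a new plugin requires nothing beyond defining the function
# with @register applied -- no central list needs editing:
@register
def say_bonjour(name):
    return f"Bonjour {name}"

print(sorted(PLUGINS))  # ['say_bonjour', 'say_hello']
```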

If you are familiar with globals() in Python, you might see some similarities to how the plugin architecture works. globals() gives access to all global variables in the current scope, including your plugins:

>>> globals()
{..., # Lots of variables not shown here.
 'say_hello': <function say_hello at 0x7f768eae6730>,
 'be_awesome': <function be_awesome at 0x7f768eae67b8>,
 'randomly_greet': <function randomly_greet at 0x7f768eae6840>}

Using the @register decorator, you can create your own curated list of interesting variables, effectively hand-picking some functions from globals().

Is the User Logged In?

The final example before moving on to some fancier decorators is commonly used when working with a web framework. In this example, we are using Flask to set up a /secret web page that should only be visible to users that are logged in or otherwise authenticated:

from flask import Flask, g, request, redirect, url_for
import functools

app = Flask(__name__)

def login_required(func):
    """Make sure user is logged in before proceeding"""
    @functools.wraps(func)
    def wrapper_login_required(*args, **kwargs):
        if g.user is None:
            return redirect(url_for("login", next=request.url))
        return func(*args, **kwargs)
    return wrapper_login_required

@app.route("/secret")
@login_required
def secret():
    ...

While this gives an idea about how to add authentication to your web framework, you should usually not write these types of decorators yourself. For Flask, you can use the Flask-Login extension instead, which adds more security and functionality.

Fancy Decorators

So far, you’ve seen how to create simple decorators. You already have a pretty good understanding of what decorators are and how they work. Feel free to take a break from this article to practice everything you’ve learned.

In the second part of this tutorial, we’ll explore more advanced features, including how to use the following:

  • Decorators on classes
  • Several decorators on one function
  • Decorators with arguments
  • Decorators that can optionally take arguments
  • Stateful decorators
  • Classes as decorators

Decorating Classes

There are two different ways you can use decorators on classes. The first one is very close to what you have already done with functions: you can decorate the methods of a class. This was one of the motivations for introducing decorators back in the day.

Some commonly used decorators that are even built-ins in Python are @classmethod, @staticmethod, and @property. The @classmethod and @staticmethod decorators are used to define methods inside a class namespace that are not connected to a particular instance of that class. The @property decorator is used to customize getters and setters for class attributes. Expand the box below for an example using these decorators.

The following definition of a Circle class uses the @classmethod, @staticmethod, and @property decorators:

class Circle:
    def __init__(self, radius):
        self._radius = radius

    @property
    def radius(self):
        """Get value of radius"""
        return self._radius

    @radius.setter
    def radius(self, value):
        """Set radius, raise error if negative"""
        if value >= 0:
            self._radius = value
        else:
            raise ValueError("Radius must be positive")

    @property
    def area(self):
        """Calculate area inside circle"""
        return self.pi() * self.radius**2

    def cylinder_volume(self, height):
        """Calculate volume of cylinder with circle as base"""
        return self.area * height

    @classmethod
    def unit_circle(cls):
        """Factory method creating a circle with radius 1"""
        return cls(1)

    @staticmethod
    def pi():
        """Value of π, could use math.pi instead though"""
        return 3.1415926535

In this class:

  • .cylinder_volume() is a regular method.
  • .radius is a mutable property: it can be set to a different value. However, by defining a setter method, we can do some error testing to make sure it’s not set to a nonsensical negative number. Properties are accessed as attributes without parentheses.
  • .area is an immutable property: properties without .setter() methods can’t be changed. Even though it is defined as a method, it can be retrieved as an attribute without parentheses.
  • .unit_circle() is a class method. It’s not bound to one particular instance of Circle. Class methods are often used as factory methods that can create specific instances of the class.
  • .pi() is a static method. It’s not really dependent on the Circle class, except that it is part of its namespace. Static methods can be called on either an instance or the class.

The Circle class can for example be used as follows:

>>> c = Circle(5)
>>> c.radius
5

>>> c.area
78.5398163375

>>> c.radius = 2
>>> c.area
12.566370614

>>> c.area = 100
AttributeError: can't set attribute

>>> c.cylinder_volume(height=4)
50.265482456

>>> c.radius = -1
ValueError: Radius must be positive

>>> c = Circle.unit_circle()
>>> c.radius
1

>>> c.pi()
3.1415926535

>>> Circle.pi()
3.1415926535

Let’s define a class where we decorate some of its methods using the @debug and @timer decorators from earlier:

from decorators import debug, timer

class TimeWaster:
    @debug
    def __init__(self, max_num):
        self.max_num = max_num

    @timer
    def waste_time(self, num_times):
        for _ in range(num_times):
            sum([i**2 for i in range(self.max_num)])

Using this class, you can see the effect of the decorators:

>>> tw = TimeWaster(1000)
Calling __init__(<time_waster.TimeWaster object at 0x7efccce03908>, 1000)
'__init__' returned None

>>> tw.waste_time(999)
Finished 'waste_time' in 0.3376 secs

The other way to use decorators on classes is to decorate the whole class. This is, for example, done in the new dataclasses module in Python 3.7:

from dataclasses import dataclass

@dataclass
class PlayingCard:
    rank: str
    suit: str

The meaning of the syntax is similar to the function decorators. In the example above, you could have done the decoration by writing PlayingCard = dataclass(PlayingCard).

A common use of class decorators is to be a simpler alternative to some use-cases of metaclasses. In both cases, you are changing the definition of a class dynamically.
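As a minimal sketch of a class decorator changing a class definition (add_repr is a made-up name, not a standard library decorator):

```python
def add_repr(cls):
    """Class decorator: attach a __repr__ built from instance attributes."""
    def __repr__(self):
        attrs = ", ".join(f"{k}={v!r}" for k, v in vars(self).items())
        return f"{cls.__name__}({attrs})"
    cls.__repr__ = __repr__
    return cls

@add_repr
class Point:
    def __init__(self, x, y):
        self.x = x
        self.y = y

print(repr(Point(1, 2)))  # Point(x=1, y=2)
```

A metaclass could achieve the same result, but the decorator keeps the machinery visible at the class definition and composes easily with other decorators.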

Writing a class decorator is very similar to writing a function decorator. The only difference is that the decorator will receive a class and not a function as an argument. In fact, all the decorators you saw above will work as class decorators. When you are using them on a class instead of a function, their effect might not be what you want. In the following example, the @timer decorator is applied to a class:

from decorators import timer

@timer
class TimeWaster:
    def __init__(self, max_num):
        self.max_num = max_num

    def waste_time(self, num_times):
        for _ in range(num_times):
            sum([i**2 for i in range(self.max_num)])

Decorating a class does not decorate its methods. Recall that @timer is just shorthand for TimeWaster = timer(TimeWaster).

Here, @timer only measures the time it takes to instantiate the class:

>>> tw = TimeWaster(1000)
Finished 'TimeWaster' in 0.0000 secs

>>> tw.waste_time(999)
>>>

Later, you will see an example defining a proper class decorator, namely @singleton, which ensures that there is only one instance of a class.

Nesting Decorators

You can apply several decorators to a function by stacking them on top of each other:

from decorators import debug, do_twice

@debug
@do_twice
def greet(name):
    print(f"Hello {name}")

Think about this as the decorators being executed in the order they are listed. In other words, @debug calls @do_twice, which calls greet(), or debug(do_twice(greet)):

>>> greet("Eva")
Calling greet('Eva')
Hello Eva
Hello Eva
'greet' returned None

Observe the difference if we change the order of @debug and @do_twice:

from decorators import debug, do_twice

@do_twice
@debug
def greet(name):
    print(f"Hello {name}")

In this case, @do_twice will be applied to @debug as well:

>>> greet("Eva")
Calling greet('Eva')
Hello Eva
'greet' returned None
Calling greet('Eva')
Hello Eva
'greet' returned None

Decorators With Arguments

Sometimes, it’s useful to pass arguments to your decorators. For instance, @do_twice could be extended to a @repeat(num_times) decorator. The number of times to execute the decorated function could then be given as an argument.

This would allow you to do something like this:

@repeat(num_times=4)
def greet(name):
    print(f"Hello {name}")
>>> greet("World")
Hello World
Hello World
Hello World
Hello World

Think about how you could achieve this.

So far, the name written after the @ has referred to a function object that can be called with another function. To be consistent, you then need repeat(num_times=4) to return a function object that can act as a decorator. Luckily, you already know how to return functions! In general, you want something like the following:

def repeat(num_times):
    def decorator_repeat(func):
        ...  # Create and return a wrapper function
    return decorator_repeat

Typically, the decorator creates and returns an inner wrapper function, so writing the example out in full will give you an inner function within an inner function. While this might sound like the programming equivalent of the Inception movie, we’ll untangle it all in a moment:

def repeat(num_times):
    def decorator_repeat(func):
        @functools.wraps(func)
        def wrapper_repeat(*args, **kwargs):
            for _ in range(num_times):
                value = func(*args, **kwargs)
            return value
        return wrapper_repeat
    return decorator_repeat

It looks a little messy, but we have only put the same decorator pattern you have seen many times by now inside one additional def that handles the arguments to the decorator. Let’s start with the innermost function:

def wrapper_repeat(*args, **kwargs):
    for _ in range(num_times):
        value = func(*args, **kwargs)
    return value

This wrapper_repeat() function takes arbitrary arguments and returns the value of the decorated function, func(). This wrapper function also contains the loop that calls the decorated function num_times times. This is no different from the earlier wrapper functions you have seen, except that it is using the num_times parameter that must be supplied from the outside.

One step out, you’ll find the decorator function:

def decorator_repeat(func):
    @functools.wraps(func)
    def wrapper_repeat(*args, **kwargs):
        ...
    return wrapper_repeat

Again, decorator_repeat() looks exactly like the decorator functions you have written earlier, except that it’s named differently. That’s because we reserve the base name—repeat()—for the outermost function, which is the one the user will call.

As you have already seen, the outermost function returns a reference to the decorator function:

def repeat(num_times):
    def decorator_repeat(func):
        ...
    return decorator_repeat

There are a few subtle things happening in the repeat() function:

  • Defining decorator_repeat() as an inner function means that repeat() will refer to a function object—decorator_repeat. Earlier, we used repeat without parentheses to refer to the function object. The added parentheses are necessary when defining decorators that take arguments.
  • The num_times argument is seemingly not used in repeat() itself. However, by passing num_times, a closure is created in which the value of num_times is stored until it is used later by wrapper_repeat().

With everything set up, let’s see if the results are as expected:

@repeat(num_times=4)
def greet(name):
    print(f"Hello {name}")
>>> greet("World")
Hello World
Hello World
Hello World
Hello World

Just the result we were aiming for.

Both Please, But Never Mind the Bread

With a little bit of care, you can also define decorators that can be used both with and without arguments. Most likely, you don’t need this, but it is nice to have the flexibility.

As you saw in the previous section, when a decorator uses arguments, you need to add an extra outer function. The challenge is for your code to figure out if the decorator has been called with or without arguments.

Since the function to decorate is only passed in directly if the decorator is called without arguments, the function must be an optional argument. This means that the decorator arguments must all be specified by keyword. You can enforce this with the special * syntax, which means that all following parameters are keyword-only:

def name(_func=None, *, kw1=val1, kw2=val2, ...):  # 1
    def decorator_name(func):
        ...  # Create and return a wrapper function.

    if _func is None:
        return decorator_name                      # 2
    else:
        return decorator_name(_func)               # 3

Here, the _func argument acts as a marker, noting whether the decorator has been called with arguments or not:

  1. If name has been called without arguments, the decorated function will be passed in as _func. If it has been called with arguments, then _func will be None, and some of the keyword arguments may have been changed from their default values. The * in the argument list means that the remaining arguments can’t be called as positional arguments.
  2. In this case, the decorator was called with arguments. Return a decorator function that can read and return a function.
  3. In this case, the decorator was called without arguments. Apply the decorator to the function immediately.
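To isolate the keyword-only behavior of the special * syntax, here is a tiny sketch (configure and its parameters are made-up names for illustration):

```python
def configure(_pos=None, *, retries=3):
    # Everything after the bare * must be passed by keyword.
    return _pos, retries

print(configure("x"))        # ('x', 3)
print(configure(retries=5))  # (None, 5)

try:
    configure("x", 5)        # passing retries positionally is rejected
except TypeError as err:
    print("TypeError:", err)
```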

Using this boilerplate on the @repeat decorator in the previous section, you can write the following:

def repeat(_func=None, *, num_times=2):
    def decorator_repeat(func):
        @functools.wraps(func)
        def wrapper_repeat(*args, **kwargs):
            for _ in range(num_times):
                value = func(*args, **kwargs)
            return value
        return wrapper_repeat

    if _func is None:
        return decorator_repeat
    else:
        return decorator_repeat(_func)

Compare this with the original @repeat. The only changes are the added _func parameter and the if-else at the end.

Recipe 9.6 of the excellent Python Cookbook shows an alternative solution using functools.partial().

These examples show that @repeat can now be used with or without arguments:

@repeat
def say_whee():
    print("Whee!")

@repeat(num_times=3)
def greet(name):
    print(f"Hello {name}")

Recall that the default value of num_times is 2:

>>> say_whee()
Whee!
Whee!

>>> greet("Penny")
Hello Penny
Hello Penny
Hello Penny

Stateful Decorators

Sometimes, it’s useful to have a decorator that can keep track of state. As a simple example, we will create a decorator that counts the number of times a function is called.

Note: In the beginning of this guide, we talked about pure functions returning a value based on given arguments. Stateful decorators are quite the opposite, where the return value will depend on the current state, as well as the given arguments.

In the next section, you will see how to use classes to keep state. But in simple cases, you can also get away with using function attributes:

import functools

def count_calls(func):
    @functools.wraps(func)
    def wrapper_count_calls(*args, **kwargs):
        wrapper_count_calls.num_calls += 1
        print(f"Call {wrapper_count_calls.num_calls} of {func.__name__!r}")
        return func(*args, **kwargs)
    wrapper_count_calls.num_calls = 0
    return wrapper_count_calls

@count_calls
def say_whee():
    print("Whee!")

The state—the number of calls to the function—is stored in the function attribute .num_calls on the wrapper function. Here is the effect of using it:

>>> say_whee()
Call 1 of 'say_whee'
Whee!

>>> say_whee()
Call 2 of 'say_whee'
Whee!

>>> say_whee.num_calls
2

Classes as Decorators

The typical way to maintain state is by using classes. In this section, you’ll see how to rewrite the @count_calls example from the previous section using a class as a decorator.

Recall that the decorator syntax @my_decorator is just an easier way of saying func = my_decorator(func). Therefore, if my_decorator is a class, it needs to take func as an argument in its .__init__() method. Furthermore, the class needs to be callable so that it can stand in for the decorated function.

For a class to be callable, you implement the special .__call__() method:

class Counter:
    def __init__(self, start=0):
        self.count = start

    def __call__(self):
        self.count += 1
        print(f"Current count is {self.count}")

The .__call__() method is executed each time you try to call an instance of the class:

>>> counter = Counter()
>>> counter()
Current count is 1

>>> counter()
Current count is 2

>>> counter.count
2

Therefore, a typical implementation of a decorator class needs to implement .__init__() and .__call__():

import functools

class CountCalls:
    def __init__(self, func):
        functools.update_wrapper(self, func)
        self.func = func
        self.num_calls = 0

    def __call__(self, *args, **kwargs):
        self.num_calls += 1
        print(f"Call {self.num_calls} of {self.func.__name__!r}")
        return self.func(*args, **kwargs)

@CountCalls
def say_whee():
    print("Whee!")

The .__init__() method must store a reference to the function and can do any other necessary initialization. The .__call__() method will be called instead of the decorated function. It does essentially the same thing as the wrapper() function in our earlier examples. Note that you need to use the functools.update_wrapper() function instead of @functools.wraps.

This @CountCalls decorator works the same as the one in the previous section:

>>> say_whee()
Call 1 of 'say_whee'
Whee!

>>> say_whee()
Call 2 of 'say_whee'
Whee!

>>> say_whee.num_calls
2

More Real World Examples

We’ve come a long way now, having figured out how to create all kinds of decorators. Let’s wrap up by putting our newfound knowledge into a few more examples that might actually be useful in the real world.

Slowing Down Code, Revisited

As noted earlier, our previous implementation of @slow_down always sleeps for one second. Now you know how to add parameters to decorators, so let’s rewrite @slow_down using an optional rate argument that controls how long it sleeps:

import functools
import time

def slow_down(_func=None, *, rate=1):
    """Sleep given amount of seconds before calling the function"""
    def decorator_slow_down(func):
        @functools.wraps(func)
        def wrapper_slow_down(*args, **kwargs):
            time.sleep(rate)
            return func(*args, **kwargs)
        return wrapper_slow_down

    if _func is None:
        return decorator_slow_down
    else:
        return decorator_slow_down(_func)

We’re using the boilerplate introduced in the Both Please, But Never Mind the Bread section to make @slow_down callable both with and without arguments. The same recursive countdown() function as earlier now sleeps two seconds between each count:

@slow_down(rate=2)
def countdown(from_number):
    if from_number < 1:
        print("Liftoff!")
    else:
        print(from_number)
        countdown(from_number - 1)

As before, you must run the example yourself to see the effect of the decorator:

>>> countdown(3)
3
2
1
Liftoff!

Creating Singletons

A singleton is a class with only one instance. There are several singletons in Python that you use frequently, including None, True, and False. It is the fact that None is a singleton that allows you to compare for None using the is keyword, like you saw in the Both Please section:

if _func is None:
    return decorator_name
else:
    return decorator_name(_func)

Using is returns True only for objects that are the exact same instance. The following @singleton decorator turns a class into a singleton by storing the first instance of the class as an attribute. Later attempts at creating an instance simply return the stored instance:

import functools

def singleton(cls):
    """Make a class a Singleton class (only one instance)"""
    @functools.wraps(cls)
    def wrapper_singleton(*args, **kwargs):
        if not wrapper_singleton.instance:
            wrapper_singleton.instance = cls(*args, **kwargs)
        return wrapper_singleton.instance
    wrapper_singleton.instance = None
    return wrapper_singleton

@singleton
class TheOne:
    pass

As you see, this class decorator follows the same template as our function decorators. The only difference is that we are using cls instead of func as the parameter name to indicate that it is meant to be a class decorator.

Let’s see if it works:

>>> first_one = TheOne()
>>> another_one = TheOne()

>>> id(first_one)
140094218762280

>>> id(another_one)
140094218762280

>>> first_one is another_one
True

It seems clear that first_one is indeed the exact same instance as another_one.

Note: Singleton classes are not really used as often in Python as in other languages. The effect of a singleton is usually better implemented as a global variable in a module.
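The reason a module-level variable works so well is that modules are themselves singletons: after the first import, Python caches the module object in sys.modules, so every later import returns the exact same object. A quick sketch using a standard library module:

```python
import math
import math as math_again

# Both names refer to the single cached module object from sys.modules,
# so any state stored on the module is shared by all importers.
print(math is math_again)  # True
```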

Caching Return Values

Decorators can provide a nice mechanism for caching and memoization. As an example, let’s look at a recursive definition of the Fibonacci sequence:

from decorators import count_calls

@count_calls
def fibonacci(num):
    if num < 2:
        return num
    return fibonacci(num - 1) + fibonacci(num - 2)

While the implementation is simple, its runtime performance is terrible:

>>> fibonacci(10)
<Lots of output from count_calls>
55

>>> fibonacci.num_calls
177

To calculate the tenth Fibonacci number, you should really only need to calculate the preceding Fibonacci numbers, but this implementation somehow needs a whopping 177 calculations. It gets worse quickly: 21891 calculations are needed for fibonacci(20) and almost 2.7 million calculations for the 30th number. This is because the code keeps recalculating Fibonacci numbers that are already known.
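The call counts quoted above can be checked with a short recurrence: each fibonacci(n) call for n >= 2 makes two recursive calls, so the total number of calls satisfies calls(n) = calls(n-1) + calls(n-2) + 1. A quick sketch (calls is a made-up helper for this check, not part of the original example):

```python
def calls(n):
    """Number of calls the naive recursive fibonacci(n) makes."""
    if n < 2:
        return 1
    return calls(n - 1) + calls(n - 2) + 1

print(calls(10))  # 177, matching fibonacci.num_calls above
print(calls(20))  # 21891
```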

The usual solution is to implement Fibonacci numbers using a for loop and a lookup table. However, simple caching of the calculations will also do the trick:

import functools
from decorators import count_calls

def cache(func):
    """Keep a cache of previous function calls"""
    @functools.wraps(func)
    def wrapper_cache(*args, **kwargs):
        cache_key = args + tuple(kwargs.items())
        if cache_key not in wrapper_cache.cache:
            wrapper_cache.cache[cache_key] = func(*args, **kwargs)
        return wrapper_cache.cache[cache_key]
    wrapper_cache.cache = dict()
    return wrapper_cache

@cache
@count_calls
def fibonacci(num):
    if num < 2:
        return num
    return fibonacci(num - 1) + fibonacci(num - 2)

The cache works as a lookup table, so now fibonacci() only does the necessary calculations once:

>>> fibonacci(10)
Call 1 of 'fibonacci'
...
Call 11 of 'fibonacci'
55

>>> fibonacci(8)
21

Note that in the final call to fibonacci(8), no new calculations were needed, since the eighth Fibonacci number had already been calculated for fibonacci(10).

In the standard library, a Least Recently Used (LRU) cache is available as @functools.lru_cache.

This decorator has more features than the one you saw above. You should use @functools.lru_cache instead of writing your own cache decorator:

import functools

@functools.lru_cache(maxsize=4)
def fibonacci(num):
    print(f"Calculating fibonacci({num})")
    if num < 2:
        return num
    return fibonacci(num - 1) + fibonacci(num - 2)

The maxsize parameter specifies how many recent calls are cached. The default value is 128, but you can specify maxsize=None to cache all function calls. However, be aware that this can cause memory problems if you are caching many large objects.
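As a short sketch of the unbounded variant: with maxsize=None the cache never evicts, so the naive recursion collapses to one real calculation per Fibonacci number (keep the memory caveat above in mind):

```python
import functools

@functools.lru_cache(maxsize=None)
def fib(num):
    if num < 2:
        return num
    return fib(num - 1) + fib(num - 2)

print(fib(30))           # 832040
print(fib.cache_info())  # 31 misses: one calculation per number 0..30
```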

You can use the .cache_info() method to see how the cache performs, and you can tune it if needed. In our example, we used an artificially small maxsize to see the effect of elements being removed from the cache:

>>> fibonacci(10)
Calculating fibonacci(10)
Calculating fibonacci(9)
Calculating fibonacci(8)
Calculating fibonacci(7)
Calculating fibonacci(6)
Calculating fibonacci(5)
Calculating fibonacci(4)
Calculating fibonacci(3)
Calculating fibonacci(2)
Calculating fibonacci(1)
Calculating fibonacci(0)
55

>>> fibonacci(8)
21

>>> fibonacci(5)
Calculating fibonacci(5)
Calculating fibonacci(4)
Calculating fibonacci(3)
Calculating fibonacci(2)
Calculating fibonacci(1)
Calculating fibonacci(0)
5

>>> fibonacci(8)
Calculating fibonacci(8)
Calculating fibonacci(7)
Calculating fibonacci(6)
21

>>> fibonacci(5)
5

>>> fibonacci.cache_info()
CacheInfo(hits=17, misses=20, maxsize=4, currsize=4)

Adding Information About Units

The following example is somewhat similar to the Registering Plugins example from earlier, in that it does not really change the behavior of the decorated function. Instead, it simply adds unit as a function attribute:

def set_unit(unit):
    """Register a unit on a function"""
    def decorator_set_unit(func):
        func.unit = unit
        return func
    return decorator_set_unit

The following example calculates the volume of a cylinder based on its radius and height in centimeters:

import math

@set_unit("cm^3")
def volume(radius, height):
    return math.pi * radius**2 * height

This .unit function attribute can later be accessed when needed:

>>> volume(3, 5)
141.3716694115407
>>> volume.unit
'cm^3'

Note that you could have achieved something similar using function annotations:

import math

def volume(radius, height) -> "cm^3":
    return math.pi * radius**2 * height

However, since annotations are used for type hints, it would be hard to combine such units as annotations with static type checking.
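Storing the unit as a function attribute sidesteps this conflict: the annotations stay available for type hints, and the unit lives alongside them. A sketch combining the set_unit decorator from above with ordinary type hints:

```python
import math

def set_unit(unit):
    """Register a unit on a function"""
    def decorator_set_unit(func):
        func.unit = unit
        return func
    return decorator_set_unit

@set_unit("cm^3")
def volume(radius: float, height: float) -> float:
    return math.pi * radius**2 * height

print(volume(3, 5))  # 141.3716694115407
print(volume.unit)   # 'cm^3'
```

A static type checker sees an ordinary function returning a float, while the unit is still there for any code that wants it.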

Units become even more powerful and fun when connected with a library that can convert between units. One such library is pint. With pint installed (pip install Pint), you can for instance convert the volume to cubic inches or gallons:

>>> import pint
>>> ureg = pint.UnitRegistry()
>>> vol = volume(3, 5) * ureg(volume.unit)
>>> vol
<Quantity(141.3716694115407, 'centimeter ** 3')>
>>> vol.to("cubic inches")
<Quantity(8.627028576414954, 'inch ** 3')>
>>> vol.to("gallons").m  # Magnitude
0.0373464440537444

You could also modify the decorator to return a pint Quantity directly. Such a Quantity is made by multiplying a value with the unit. In pint, units must be looked up in a UnitRegistry. The registry is stored as a function attribute to avoid cluttering the namespace:

def use_unit(unit):
    """Have a function return a Quantity with given unit"""
    use_unit.ureg = pint.UnitRegistry()
    def decorator_use_unit(func):
        @functools.wraps(func)
        def wrapper_use_unit(*args, **kwargs):
            value = func(*args, **kwargs)
            return value * use_unit.ureg(unit)
        return wrapper_use_unit
    return decorator_use_unit

@use_unit("meters per second")
def average_speed(distance, duration):
    return distance / duration

With the @use_unit decorator, converting units is practically effortless:

>>> bolt = average_speed(100, 9.58)
>>> bolt
<Quantity(10.438413361169102, 'meter / second')>
>>> bolt.to("km per hour")
<Quantity(37.578288100208766, 'kilometer / hour')>
>>> bolt.to("mph").m  # Magnitude
23.350065679064745

Validating JSON

Let’s look at one last use case. Take a quick look at the following Flask route handler:

@app.route("/grade", methods=["POST"])
def update_grade():
    json_data = request.get_json()
    if "student_id" not in json_data:
        abort(400)
    # Update database
    return "success!"

Here we ensure that the key student_id is part of the request. Although this validation works, it really does not belong in the function itself. Plus, perhaps there are other routes that use the exact same validation. So, let’s keep it DRY and abstract out any unnecessary logic with a decorator. The following @validate_json decorator will do the job:

from flask import Flask, request, abort
import functools

app = Flask(__name__)

def validate_json(*expected_args):                  # 1
    def decorator_validate_json(func):
        @functools.wraps(func)
        def wrapper_validate_json(*args, **kwargs):
            json_object = request.get_json()
            for expected_arg in expected_args:      # 2
                if expected_arg not in json_object:
                    abort(400)
            return func(*args, **kwargs)
        return wrapper_validate_json
    return decorator_validate_json

In the above code, the decorator accepts a variable number of arguments, so we can pass in as many strings as necessary, each representing a key used to validate the JSON data:

  1. The list of keys that must be present in the JSON is given as arguments to the decorator.
  2. The wrapper function validates that each expected key is present in the JSON data.

The route handler can then focus on its real job—updating grades—as it can safely assume that JSON data are valid:

@app.route("/grade", methods=["POST"])
@validate_json("student_id")
def update_grade():
    json_data = request.get_json()
    # Update database.
    return "success!"

Conclusion

This has been quite a journey! You started this tutorial by looking a little closer at functions, particularly how they can be defined inside other functions and passed around just like any other Python object. Then you learned about decorators and how to write them such that:

  • They can be reused.
  • They can decorate functions with arguments and return values.
  • They can use @functools.wraps to look more like the decorated function.

In the second part of the tutorial, you saw more advanced decorators and learned how to:

  • Decorate classes
  • Nest decorators
  • Add arguments to decorators
  • Keep state within decorators
  • Use classes as decorators

You saw that, to define a decorator, you typically define a function returning a wrapper function. The wrapper function uses *args and **kwargs to pass on arguments to the decorated function. If you want your decorator to also take arguments, you need to nest the wrapper function inside another function. In this case, you usually end up with three return statements.
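That three-level pattern can be summarized in one template. The sketch below uses a repeat decorator as a placeholder example; the three return statements are marked:

```python
import functools

def repeat(num_times):
    """Decorator factory: takes the decorator's arguments."""
    def decorator_repeat(func):
        """Decorator proper: takes the function to decorate."""
        @functools.wraps(func)
        def wrapper_repeat(*args, **kwargs):
            """Wrapper: runs around each call to func."""
            value = None
            for _ in range(num_times):
                value = func(*args, **kwargs)
            return value            # Return 1: the decorated function's result
        return wrapper_repeat       # Return 2: the wrapper
    return decorator_repeat         # Return 3: the decorator
```

Swap the for loop for whatever behavior you need: timing, caching, validation, and so on. The surrounding scaffolding stays the same.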

You can find the code from this tutorial online.

Further Reading

If you are still looking for more, our book Python Tricks has a section on decorators, as does the Python Cookbook by David Beazley and Brian K. Jones.

For a deep dive into the historical discussion on how decorators should be implemented in Python, see PEP 318 as well as the Python Decorator Wiki. More examples of decorators can be found in the Python Decorator Library.

Also, we’ve put together a short & sweet Python decorators cheat sheet for you:

Decorators Cheat Sheet: Click here to get access to a free 3-page Python decorators cheat sheet that summarizes the techniques explained in this tutorial.
