Channel: Planet Python

Stack Abuse: GUI Development with Python Tkinter: An Introduction


Introduction

If you're reading this article, there's a chance that you are one of those people who appreciate software operated via a simple command-line interface. It's quick, easy on your system's resources, and probably much faster to use for a keyboard virtuoso like yourself. However, it's no secret that if we want to reach a wider user base with our software, offering only a command-line solution might scare a large portion of potential users off. For most people, the most obvious way of interacting with a program is using a GUI, a Graphical User Interface.

While using a GUI, the user interacts with and manipulates the elements of the interface called widgets. Some widgets, like buttons and checkboxes, let the user interact with the program. Others, like windows and frames, serve as containers for other widgets.

There are many packages for building GUIs in Python, but there's only one such package that is considered a de facto standard, and is distributed with all default Python installs. This package is called Tkinter. Tkinter is Python's binding to Tk - an open-source, cross-platform GUI toolkit.

Creating your First Window

As mentioned before, Tkinter is available with standard Python installs, so regardless of your operating system, creating your first window should be super quick. All you need are 3 lines of code:

import tkinter

root = tkinter.Tk()

root.mainloop()  

Output:

After importing the tkinter package in line 1, in line 3 we create our application's main (root) window widget. In order for the program to work properly, there should only be one root window widget in our interface, and, because all other widgets will be lower in the hierarchy than root, it has to be created before any other widgets.

In line 5, we initialize the root's mainloop. Thanks to this line, the window remains in a loop that waits for events (such as user interaction) and updates the interface accordingly. The loop ends when the user closes the window, or a quit() method is called.

Adding Simple Widgets to the Root Window

In the following example, we'll learn the general two-step philosophy of creating widgets that can be applied to all widgets except windows. The first step is to create an instance of a specific widget's class. In the second step, we have to use one of the available methods to place the new widget inside another already-existing widget (a parent widget). The simplest widget you can put in your Tkinter interface is a label, which simply displays some text. The following example creates a simple label widget:

import tkinter

root = tkinter.Tk()

simple_label = tkinter.Label(root, text="Easy, right?")

simple_label.pack()

root.mainloop()  

Output:

We create the Label class instance in line 5 of the code above. In the first argument we point to the label's desired parent widget, which in this example is our root window. In the second argument we specify the text we want the label to display.

Then, in line 7, we apply a method of orienting our label inside the root window. The simplest method of orienting widgets that Tkinter offers is pack(). The label is the only widget inside the window, so it's simply displayed in the middle of the window.

We'll learn more on how it works in the next example, when we add another widget to the window. Note that the window's size automatically adjusts to the widget placed inside it.

Adding a Functional Button

Now, let's add something the user can interact with. The most obvious choice is a simple button. Let's put a button in our window that gives us an additional way of closing our window.

import tkinter

root = tkinter.Tk()

root.title("Hello!")

simple_label = tkinter.Label(root, text="Easy, right?")  
closing_button = tkinter.Button(root, text="Close window", command=root.destroy)

simple_label.pack()  
closing_button.pack()

root.mainloop()  

Output:

In line 8 we create our Button class instance in a very similar way we created our label. As you can probably see, though, we added a command argument where we tell the program what should happen after the button is clicked. In this case root's dramatically-sounding destroy() method is called, which will close our window when executed.

In lines 10 and 11 we again use the pack() method. This time we can understand it a bit better, as we now use it to place two widgets inside the window. Depending on the order in which we pack our widgets, the method just throws them one on top of the other, centered horizontally. The window's height and width adjust to the widgets' sizes.

You probably noticed another new line. In line 5, we specify the root window's title. Unfortunately, the widest widget of our interface is not wide enough for the window's title to become visible. Let's do something about it.

Controlling the Window's Size

Let's take a look at three new lines that will let us easily resize our window.

import tkinter

root = tkinter.Tk()

root.title("Hello!")

root.resizable(width="false", height="false")

root.minsize(width=300, height=50)  
root.maxsize(width=300, height=50)

simple_label = tkinter.Label(root, text="Easy, right?")  
closing_button = tkinter.Button(root, text="Close window", command=root.destroy)

simple_label.pack()  
closing_button.pack()

root.mainloop()  

Output:

In line 7 we define if the program's user should be able to modify the window's width and height. In this case, both arguments are set to "false", so the window's size depends only on our code. If it wasn't for lines 9 and 10, it would depend on sizes of the widgets oriented inside the window.

However, in this example, we use root's minsize and maxsize methods to control the maximum and minimum values of our window's width and height. Here, we define exactly how wide and tall the window is supposed to be, but I encourage you to play with these three lines to see how the resizing works depending on the size of our widgets, and on what minimum and maximum values we define.

More about Widget Orientation

As you probably already noticed, using the pack() method does not give us too much control over where the widgets end up after packing them in their parent containers. Not that the pack() method is unpredictable – it's just that throwing widgets into the window in a single column, where one widget is placed on top of the previous one, is not always consistent with our sophisticated sense of aesthetics. For those cases, we can either use pack() with some clever arguments, or use grid(), another method of orienting widgets inside containers.

First, let's maybe give pack() one more chance. By modifying lines 15 and 16 from the previous example, we can slightly improve our interface:

simple_label.pack(fill="x")  
closing_button.pack(fill="x")  

Output:

In this simple manner we tell the pack() method to stretch the label and the button all the way along the horizontal axis. We can also change the way pack() throws new widgets inside the window. For example, by using the following argument:

simple_label.pack(side="left")  
closing_button.pack(side="left")  

Output:

We can pack widgets in the same row, starting from the window's left side. However, pack() is not the only method of orienting the widgets inside their parent widgets. The method that gives the prettiest results is probably the grid() method, which lets us order the widgets in rows and columns. Take a look at the following example.

import tkinter

root = tkinter.Tk()

simple_label = tkinter.Label(root, text="Easy, right?")  
another_label = tkinter.Label(root, text="More text")  
closing_button = tkinter.Button(root, text="Close window", command=root.destroy)  
another_button = tkinter.Button(root, text="Do nothing")

simple_label.grid(column=0, row=0, sticky="ew")  
another_label.grid(column=0, row=1, sticky="ew")  
closing_button.grid(column=1, row=0, sticky="ew")  
another_button.grid(column=1, row=1, sticky="ew")

root.mainloop()  

Output:

To make this example a bit clearer, we got rid of the lines that changed the root window's title and size. In lines 6 and 8 we added one more label and one more button (note that clicking on it won't do anything as we haven't attached any command to it).

Most importantly though, pack() was replaced by grid() in all cases. As you can probably easily figure out, the arguments column and row let us define which cell of the grid our widget will occupy. Keep in mind that if you define the same coordinates for two different widgets, the one rendered further in your code will be displayed on top of the other one.

The sticky argument is probably not as obvious. Using this option we can stick the edges of our widgets to edges of their respective grid cells – northern (upper), southern (bottom), eastern (right) and western (left). We do that by passing a simple string that contains a configuration of letters n, s, e and w.

In our example, we stick the edges of all four widgets to their cells' eastern and western edges, therefore the string is ew. This results in the widgets being stretched horizontally. You can play with different configurations of those four letters. Their order in the string doesn't matter.

Now that you know two different methods of orienting the widgets, keep in mind that you should never mix grid() and pack() inside the same container.

Frames

Windows are not the only widgets that can contain other widgets. In order to make your complex interfaces clearer, it is usually a good idea to segregate your widgets into frames.

Let's try to do that with our four simple widgets:

import tkinter

root = tkinter.Tk()

frame_labels = tkinter.Frame(root, borderwidth="2", relief="ridge")  
frame_buttons = tkinter.Frame(root, borderwidth="2", relief="ridge")

simple_label = tkinter.Label(frame_labels, text="Easy, right?")  
another_label = tkinter.Label(frame_labels, text="More text")

closing_button = tkinter.Button(frame_buttons, text="Close window", command=root.destroy)  
another_button = tkinter.Button(frame_buttons, text="Do nothing")

frame_labels.grid(column=0, row=0, sticky="ns")  
frame_buttons.grid(column=1, row=0)

simple_label.grid(column=0, row=0, sticky="ew")  
another_label.grid(column=0, row=1, sticky="ew")

closing_button.pack(fill="x")  
another_button.pack(fill="x")

root.mainloop()  

Output:

Let's carefully go through the example shown above. In lines 5 and 6 we define two new Frame widgets. Obviously, in the first argument we point to their parent widget, which is the root window.

By default, the frames' borders are invisible, but let's say we would like to see where exactly they are placed. In order to show their borders, we have to give them a certain width (in our example, 2 pixels) and the style of relief (a 3D effect of sorts) in which the border will be drawn. There are 5 different relief styles to choose from - in our example, we use ridge.
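If you'd like to compare the relief styles visually, a small sketch along these lines would place one labelled frame per style side by side (demo() is a hypothetical helper name, not from the article; run it where a display is available):

```python
import tkinter

# The five classic Tk relief styles mentioned above.
RELIEF_STYLES = ["flat", "raised", "sunken", "ridge", "groove"]

def demo():
    root = tkinter.Tk()
    for style in RELIEF_STYLES:
        frame = tkinter.Frame(root, borderwidth=2, relief=style)
        tkinter.Label(frame, text=style).pack(padx=10, pady=10)
        frame.pack(side="left", padx=5, pady=5)
    root.mainloop()
```

Calling demo() opens a window with five bordered frames, each labelled with the relief style it uses.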

Label and Button definitions were also modified slightly (lines 8-12). We wanted to place our labels in our frame_labels frame and our buttons in our frame_buttons frame. Thus, we had to replace their previous parent, root, with their respective new frame parents.

In lines 14 and 15, we orient the frames inside the root window using the grid() method. Then, we use the grid() method to orient the labels (lines 17-18), and the pack() method to orient the buttons (lines 20-21). The labels and buttons are now in separate containers, so nothing stops us from orienting the widgets using different methods.

Top Level Windows

Your interface shouldn't contain more than one root window – but you can create many windows that are children of the root window. The best way to do that is by using the Toplevel class.

import tkinter

root = tkinter.Tk()

new_window = tkinter.Toplevel()  
new_window.withdraw()

frame_labels = tkinter.Frame(root, borderwidth="2", relief="ridge")  
frame_buttons = tkinter.Frame(root, borderwidth="2", relief="ridge")

simple_label = tkinter.Label(frame_labels, text="Easy, right?")  
another_label = tkinter.Label(frame_labels, text="More text")

closing_button = tkinter.Button(frame_buttons, text="Close window", command=root.destroy)  
window_button = tkinter.Button(frame_buttons, text="Show new window", command=new_window.deiconify)

frame_labels.grid(column=0, row=0, sticky="ns")  
frame_buttons.grid(column=1, row=0)

simple_label.grid(column=0, row=0, sticky="ew")  
another_label.grid(column=0, row=1, sticky="ew")

closing_button.pack(fill="x")  
window_button.pack(fill="x")

root.mainloop()  

In the example above, we create our new window in line 5. Because a window is an entity that is not anchored inside any other widget, we don't have to point to its parent, nor orient it inside a parent widget.

We'd like to show the new window after a button is pressed. Line 5 displays it right away, so we use the withdraw() method in line 6 in order to hide it. We then modify the button definition in line 15.

Aside from the new variable name and text, the button now executes a command – the new_window object's method, deiconify, which will make the window reappear after the user clicks the window_button button.

Conclusions

As you can see, using Tkinter you can easily and quickly create GUIs for non-expert users of your software. The library is included in all Python installs, so building your first, simple window is only a couple of lines of code away. The examples shown above barely scratch the surface of the package's capabilities. Stay tuned for further parts of the Tkinter tutorial, which hopefully will let you learn how to create complex, intuitive and pretty Graphical Interfaces.


Real Python: Python's range() Function (Guide)


Python’s built-in range function is a handy tool to know when you need to perform an action a specific number of times.

By the end of this article, you’ll:

  • Understand how Python’s range function works
  • Know how the implementations differ in Python 2 and Python 3
  • Have seen a number of hands-on range() examples
  • Be equipped to work around some of its limitations

Let’s get cracking!

Free Bonus: Click here to get our free Python Cheat Sheet that shows you the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.

The History of range()

Although range() in Python 2 and range() in Python 3 may share a name, they are entirely different animals. In fact, range() in Python 3 is just a renamed version of a function that is called xrange in Python 2.

Originally, both range() and xrange() produced numbers that could be iterated over with for-loops, but the former generated a list of those numbers all at once while the latter produced numbers lazily, meaning numbers were returned one at a time as they were needed.

Having huge lists hang around takes up memory, so it’s no surprise that xrange() replaced range(), name and all. You can read more about this decision and the xrange() vs range() background in PEP 3100.
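The memory difference is easy to demonstrate in Python 3, where range() stores only its start, stop, and step no matter how many numbers it describes:

```python
import sys

# A range object is lazy: it records start, stop, and step, nothing more.
lazy = range(1_000_000)
print(sys.getsizeof(lazy))        # a few dozen bytes, regardless of length

# Materializing the same numbers as a list allocates storage for every element.
print(sys.getsizeof(list(lazy)))  # several megabytes
```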

Note: PEP stands for Python Enhancement Proposal. PEPs are documents that can cover a wide range of topics, including proposed new features, style, governance, and philosophy.

There are a ton of them. PEP 1 explains how they work and is a great place to start.

For the rest of this article, you’ll be using the function as it exists in Python 3.

Here we go!

Let’s Loop

Before we dive into seeing how range() works, we need to take a look at how looping works. Looping is a key computer science concept. If you want to be a good programmer, mastering loops is among the first steps you need to take.

Here’s an example of a for-loop in Python:

captains = ['Janeway', 'Picard', 'Sisko']

for captain in captains:
    print(captain)

The output looks like this:

Janeway
Picard
Sisko

As you can see, a for-loop enables you to execute a specific block of code however many times you want. In this case, we looped through a list of captains and printed each of their names.

Although Star Trek is great and everything, you may want to do more than simply loop through a list of captains. Sometimes, you just want to execute a block of code a specific number of times. Loops can help you do that!

Try the following code with numbers that are divisible by three:

numbers_divisible_by_three = [3, 6, 9, 12, 15]

for num in numbers_divisible_by_three:
    quotient = num / 3
    print("{} divided by 3 is {}.".format(num, int(quotient)))

The output of that loop will look like this:

3 divided by 3 is 1.
6 divided by 3 is 2.
9 divided by 3 is 3.
12 divided by 3 is 4.
15 divided by 3 is 5.

That’s the output we wanted, so the loop got the job done adequately, but there is another way to get the same result by using range().

Note: That last code example had some string formatting. To learn more on that topic, you can check out Python String Formatting Best Practices and Python 3’s f-Strings: An Improved String Formatting Syntax (Guide).

Now that you’re more familiar with loops, let’s see how you can use range() to simplify your life.

Getting Started With range()

So how does Python’s range function work? In simple terms, range() allows you to generate a series of numbers within a given range. Depending on how many arguments you pass to the function, you can decide where that series of numbers will begin and end as well as how big the difference will be between one number and the next.

Here’s a sneak peek of range() in action:

for i in range(3, 16, 3):
    quotient = i / 3
    print("{} divided by 3 is {}.".format(i, int(quotient)))

In this for-loop, you were able to simply create a range of numbers that are divisible by 3, so you didn’t have to provide each of them yourself.

Note: While this example shows an appropriate use of range(), it’s usually frowned upon to use range() too often in for-loops.

For example, the following use of range() would generally be considered not Pythonic:

captains = ['Janeway', 'Picard', 'Sisko']

for i in range(len(captains)):
    print(captains[i])

range() is great for creating iterables of numbers, but it’s not the best choice when you need to iterate over data that could be looped over with the in operator.

If you want to know more, check out How to Make Your Python Loops More Pythonic.
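When you do need the index alongside each item, the usual Pythonic alternative is enumerate(), sketched here:

```python
captains = ['Janeway', 'Picard', 'Sisko']

# enumerate() yields (index, item) pairs, so there is no need
# to index back into the list via range(len(captains)).
for index, captain in enumerate(captains):
    print(index, captain)
```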

There are three ways you can call range():

  1. range(stop) takes one argument.
  2. range(start, stop) takes two arguments.
  3. range(start, stop, step) takes three arguments.
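Wrapping each of the three forms in list() makes the numbers they produce easy to compare:

```python
print(list(range(4)))         # stop only:             [0, 1, 2, 3]
print(list(range(2, 6)))      # start and stop:        [2, 3, 4, 5]
print(list(range(1, 10, 3)))  # start, stop, and step: [1, 4, 7]
```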

range(stop)

When you call range() with one argument, you will get a series of numbers that starts at 0 and includes every whole number up to, but not including, the number you have provided as the stop.

Here’s what that looks like in practice:

for i in range(3):
    print(i)

The output of your loop will look like this:

0
1
2

That checks out: we have all the whole numbers from 0 up to but not including 3, the number you provided as the stop.

range(start, stop)

When you call range() with two arguments, you get to decide not only where the series of numbers stops but also where it starts, so you don’t have to start at 0 all the time. You can use range() to generate a series of numbers from A to B using a range(A, B). Let’s find out how to generate a range starting at 1.

Try calling range() with two arguments:

for i in range(1, 8):
    print(i)

Your output will look like this:

1
2
3
4
5
6
7

So far, so good: you have all the whole numbers from 1 (the number you provided as the start) up to but not including 8 (the number you provided as the stop).

But if you add one more argument, then you’ll be able to reproduce the output you got earlier when you were using the list named numbers_divisible_by_three.

range(start, stop, step)

When you call range() with three arguments, you can choose not only where the series of numbers will start and stop but also how big the difference will be between one number and the next. If you don’t provide a step, then range() will automatically behave as if the step is 1.

Note: step can be a positive number or a negative number, but it can’t be 0:

>>> range(1, 4, 0)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: range() arg 3 must not be zero

If you try to use 0 as your step, then you’ll get an error.

Now that you know how to use step, you can finally revisit that loop we saw earlier with division by 3.

Try it for yourself:

for i in range(3, 16, 3):
    quotient = i / 3
    print("{} divided by 3 is {}.".format(i, int(quotient)))

Your output will look exactly like the output of the for-loop you saw earlier in this article, when you were using the list named numbers_divisible_by_three:

3 divided by 3 is 1.
6 divided by 3 is 2.
9 divided by 3 is 3.
12 divided by 3 is 4.
15 divided by 3 is 5.

As you see in this example, you can use the step argument to increase towards a higher number. That’s called incrementing.

Incrementing With range()

If you want to increment, then you need step to be a positive number. To get an idea of what this means in practice, type in the following code:

for i in range(3, 100, 25):
    print(i)

If your step is 25, then the output of your loop will look like this:

3
28
53
78

You got a range of numbers that were each greater than the preceding number by 25, the step you provided.

Now that you’ve seen how you can step forwards through a range, it’s time to see how you can step backwards.

Decrementing With range()

If your step is positive, then you move through a series of increasing numbers and are incrementing. If your step is negative, then you move through a series of decreasing numbers and are decrementing. This allows you to go through the numbers backwards.

In the following example, your step is -2. That means that you’ll be decrementing by 2 for each loop:

for i in range(10, -6, -2):
    print(i)

The output of your decrementing loop will look like this:

10
8
6
4
2
0
-2
-4

You got a range of numbers that were each smaller than the preceding number by 2, the absolute value of the step you provided.

The most Pythonic way to create a range that decrements is to use range(start, stop, step). But Python does have a built-in reversed function. If you wrap range() inside reversed(), then you can print the integers in reverse order.

Give this a try:

for i in reversed(range(5)):
    print(i)

You’ll get this:

4
3
2
1
0

range() makes it possible to iterate over a decrementing sequence of numbers, whereas reversed() is generally used to loop over a sequence in reverse order.

Note: reversed() also works with strings. You can learn more about the functionality of reversed() with strings in How to Reverse a String in Python.

Going Deeper With range()

Now that you know the basics of how to use range(), it’s time to dig a little deeper.

range() is mainly used for two purposes:

  1. Executing the body of a for-loop a specific number of times
  2. Creating more efficient iterables of integers than can be done using lists or tuples

The first use is probably the most common, and you could make the case that itertools gives you a more efficient way to construct iterables than range() does.

Here are a few more points to keep in mind when you use range().

range() is a type in Python:

>>> type(range(3))
<class 'range'>

You can access items in a range() by index, just as you would with a list:

>>> range(3)[1]
1
>>> range(3)[2]
2

You can even use slicing notation on a range(), but the output in a REPL may seem a little strange at first:

>>> range(6)[2:5]
range(2, 5)

Although that output may look odd, slicing a range() just returns another range().

The fact that you can access elements of a range() by index and slice a range() highlights an important fact: range() is lazy, unlike a list, but isn’t an iterator.
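The distinction is easy to see in code: a range can be walked through many times, while an iterator built from it is consumed after one pass:

```python
r = range(3)

# A range is reusable: each loop over it starts from the beginning.
print(list(r))   # [0, 1, 2]
print(list(r))   # [0, 1, 2] again -- r was not consumed

# An iterator over the same range is one-shot.
it = iter(r)
print(next(it))  # 0
print(list(it))  # [1, 2] -- the first element is already gone
```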

Floats and range()

You may have noticed that all of the numbers we have been dealing with so far have been whole numbers, which are also called integers. That’s because range() can take only integers as arguments.

A Word on Floats

In Python, if a number is not a whole number, then it is a float. There are some differences between integers and floats.

An integer (int data type):

  • Is a whole number
  • Does not include a decimal point
  • Can be positive, negative, or 0

A floating point number (float data type):

  • Can be any number that includes a decimal point
  • Can be positive or negative

Try calling range() with a float and see what happens:

for i in range(3.3):
    print(i)

You should get the following error message:

Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: 'float' object cannot be interpreted as an integer

If you need to find a workaround that will allow you to use floats, then you can use NumPy.

Using NumPy

NumPy is a third-party Python library. If you are going to use NumPy, your first step is to check if you have it installed.

Here’s how you can do that in your REPL:

>>> import numpy

If you get a ModuleNotFoundError, then you need to install it. To do so, go to your command line and enter pip install numpy.

Once you have it installed, put in the following:

import numpy as np

np.arange(0.3, 1.6, 0.3)

It will return this:

array([0.3, 0.6, 0.9, 1.2, 1.5])

If you want to print each number on its own line, you can do the following:

import numpy as np

for i in np.arange(0.3, 1.6, 0.3):
    print(i)

This is the output:

0.3
0.6
0.8999999999999999
1.2
1.5

Where did 0.8999999999999999 come from?

Computers have trouble representing decimal fractions exactly in binary floating point. This leads to all sorts of unexpected representations of numbers.

Note: To learn more about why there are issues representing decimals, you can check out this article and the Python docs.

You might also want to take a look at the decimal library, which is a bit of a downgrade in terms of performance and readability but allows you to represent decimal numbers exactly.
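As a sketch of what that might look like, here is a hypothetical decimal_range() helper (not part of the standard library) that steps through exact decimal values; passing the numbers as strings keeps them from being rounded through binary floats first:

```python
from decimal import Decimal

def decimal_range(start, stop, step):
    # Hypothetical helper: yields exact Decimal steps from start up to,
    # but not including, stop.
    current, stop, step = Decimal(start), Decimal(stop), Decimal(step)
    while current < stop:
        yield current
        current += step

print(list(decimal_range("0.3", "1.6", "0.3")))
# [Decimal('0.3'), Decimal('0.6'), Decimal('0.9'), Decimal('1.2'), Decimal('1.5')]
```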

Another option is to use round(), which you can read more about in How to Round Numbers in Python. Keep in mind that round() has its own quirks that might generate some surprising results!

Whether or not these floating point errors are an issue for you depends on the problem you’re solving. The errors are going to be in something like the 16th decimal place, which is insignificant most of the time. They are so small that, unless you’re working on calculating satellite orbital trajectories or something, you don’t need to worry about it.

Alternatively, you could also use np.linspace(). It does essentially the same thing but uses different parameters. With np.linspace(), you specify start and end (both inclusive) as well as the length of the array (instead of step).

For instance, np.linspace(1, 4, 20) gives 20 equally spaced numbers: 1.0, ..., 4.0. On the other hand, np.linspace(0, 0.5, 51) gives 0.00, 0.01, 0.02, 0.03, ..., 0.49, 0.50.
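Under the hood, linspace divides the interval into num - 1 equal steps. Here is a rough pure-Python sketch of the idea (the real np.linspace returns a NumPy array and handles many more options and edge cases):

```python
def linspace(start, stop, num):
    # Rough sketch: num evenly spaced points, both endpoints included.
    step = (stop - start) / (num - 1)
    return [start + step * i for i in range(num)]

print(linspace(1, 4, 4))  # [1.0, 2.0, 3.0, 4.0]
```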

Note: To learn more, you can read Look Ma, No For-Loops: Array Programming With NumPy and this handy NumPy reference.

Go Forth and Loop

You now understand how to use range() and work around its limitations. You also have an idea of how this important function has evolved between Python 2 and Python 3.

The next time you need to perform an action a specific number of times, you’ll be all set to loop your heart out!

Happy Pythoning!



PyCon: PyCon 2019 Call for Proposals is Open!

The time is upon us again! PyCon 2019’s Call for Proposals has officially opened for talks, tutorials, posters, education summit presentations, as well as the hatchery program PyCon Charlas. PyCon is made by you, so we need you to share what you’re working on, how you’re working on it, what you’ve learned, what you’re learning, and so much more.

Please make note of important deadlines for submissions:
  • Tutorial proposals are due November 26, 2018.
  • Talk, Charlas, Poster, and Education Summit proposals are due January 3, 2019.

Who should write a proposal? Everyone!

If you’re reading this post, you should write a proposal. PyCon is about uniting and building the Python community, and we won’t advance as an open community if we’re not open with each other about what we’ve learned throughout our time in it. It isn’t about being the smartest one in the room, so we don’t just pick all of the expert talks. It’s about helping everyone move together. “A rising tide lifts all boats,” if you will.

We need beginner, intermediate, and advanced proposals on all sorts of topics. We also need beginner, intermediate, and advanced speakers to give said presentations. You don’t need to be a 20 year veteran who has spoken at dozens of conferences. On all fronts, we need all types of people. That’s what this community is comprised of, so that’s what this conference’s schedule should be made from.

If you can speak Spanish, why not submit a proposal for PyCon Charlas? If you speak Spanish as a second, third or twelfth language, please do not hesitate to participate! The PyCon Charlas call for proposals opens with the rest of the CFP.

When should you write your proposal? As soon as possible!

What we need now is for your submissions to start rolling in. We review proposals as soon as they’re entered, maximizing your time in front of the program committee before they begin voting to determine the schedule. While we accept proposals right up to the deadline, the longer your proposal has been available for review, the better we can help you make it. That extra help goes a long way when you consider the large volume of proposals we anticipate receiving.

For PyCon 2017, we received 705 talk proposals, which makes for a 14% acceptance rate. The tutorial acceptance rate was at 29%, with 107 submissions.

Who can help you with your proposal? A lot of people!

Outside of our program committee, a great source of assistance with proposals comes from your local community. User groups around the world have had sessions where people bring ideas to the table and walk away with a full-fledged proposal. These sessions are especially helpful if you’re new to the process, and if you’re experienced with the process, it’s a great way for you to reach out and help people level up. We’ll be sure to share these events as we find out about them, and be sure to tell us your plans if you want to host a proposal event of your own!

We’re again going to provide a mechanism to connect willing mentors and those seeking assistance through our site, helping not only with the brainstorming process but also with the proposal, slides, and presentation itself. Our goal is to improve the mentorship program by connecting those interested much sooner. Read on to find out more and check out the “Mentoring” section of https://us.pycon.org/2019/speaking/talks/.

Where should you submit your proposal? In your dashboard!

After you have created an account at https://us.pycon.org/2019/account/signup/, you’ll want to create a speaker profile in your dashboard. While there, enter some details about yourself and check the various boxes about giving or receiving mentorship, as well as grant needs. Like proposals, you can come back and edit this later.

After that’s done, clicking on the “Submit a new proposal” button gives you the choice of proposal type, and from there you enter your proposal. We’ve provided some guidelines on the types of proposals you can submit, so please be sure to check out the following pages for more information:



We look forward to seeing all of your proposals in the coming months!
________________________________________________________________________
*Note: Main content is from post written by Brian Curtin for 2018 launch

Python Anywhere: Auto-renewing your Let's Encrypt certificate with scheduled tasks


Let's Encrypt certificates are really useful for custom domains -- you can get HTTPS working on your site for free. Their one downside is that the certificate only lasts for 90 days, so you need to remember to renew it.

The good news is that you can set up a scheduled task to do that all for you -- no need to put anything in your calendar. Once you've done the initial Let's Encrypt setup to get the original certificate installed, and you've confirmed that it's all working, go to the "Tasks" tab, and set up a daily task with this command:

cd ~/letsencrypt && ~/dehydrated/dehydrated --cron --domain www.yourdomain.com --out . --challenge http-01 && pa_install_webapp_letsencrypt_ssl.py www.yourdomain.com

Don't forget to replace both instances of www.yourdomain.com with your actual website's hostname.

Most days, this will fail with a message like this from the dehydrated script:

Valid till Nov 12 15:23:59 2018 GMT (Longer than 30 days). Skipping renew!

Followed by a message from the pa_install_webapp_letsencrypt_ssl.py script saying something like this:

POST to set SSL details via API failed, got <Response [400]>:{"cert":["Certificate has not changed."]}

...but this is harmless. When your certificate really does have just 30 days to go, it will succeed and your certificate will be renewed, and the new one installed.

PyCharm: Support framework of a strong relationship. 30% off PyCharm and 100% to Django


In summer 2017, JetBrains PyCharm partnered with the Django Software Foundation for the second year in a row to generate a big boost to the Django fundraising campaign. The campaign was a huge success. We raised a total of $66,094 USD for the Django Software Foundation!

This year we really hope to repeat this success of the previous year. For the next three weeks, buy a new individual license for PyCharm Professional Edition at 30% OFF, and all the money raised will go to the DSF’s general fundraising and the Django Fellowship program.

Promotion details

Up until November 1, you can effectively donate to Django by purchasing a New Individual PyCharm Professional annual subscription at 30% off. It’s very simple:

1. When buying a new annual PyCharm subscription in our e-store, on the checkout page, click “Have a discount code?”.

2. Enter the following 30% discount promo code:  

ISUPPORTDJANGO 

3. Fill in the other required fields on the page and click the “Place order” button.

Alternatively, just click this shortcut link to go to the e-store with the code automatically applied.

All of the income from this promotion code will go to the DSF fundraising campaign 2018: not just the profits, but the entire sales amount, including taxes and transaction fees. The campaign will help the DSF to maintain the healthy state of the Django project and help them continue contributing to their different outreach and diversity programs.

Read more details on the special promotion page.

“Django has grown to be a world-class web framework, and coupled with PyCharm’s Django support, we can give tremendous developer productivity,” says Frank Wiles, DSF President. “Last year JetBrains was a great partner for us in support of raising money for the Django Software Foundation, on behalf of the community, I would like to extend our deepest thanks for their generous help. Together we hope to make this a yearly event!”

If you have any questions, get in touch with Django at fundraising@djangoproject.com or JetBrains at sales@jetbrains.com.

Test and Code: 49: tox - Oliver Bestwalter


tox is a simple yet powerful tool that is used by many Python projects.

tox is not just a tool to help you test a Python project against multiple versions of Python. In this interview, Oliver and Brian just scratch the surface of this simple yet powerful automation tool.

This is from the tox documentation:

tox is a generic virtualenv management and test command line tool you can use for:

  • checking your package installs correctly with different Python versions and interpreters
  • running your tests in each of the environments, configuring your test tool of choice
  • acting as a frontend to Continuous Integration servers, greatly reducing boilerplate and merging CI and shell-based testing.

Yet tox is so much more. It can help create development environments, hold all of your admin scripts, ...

I hope you enjoy this wonderful discussion of tox with Oliver Bestwalter, one of the core maintainers of tox.

Special Guest: Oliver Bestwalter.

Sponsored By:

  • PyCharm Professional (http://testandcode.com/pycharm): any time before December 1, you can get an Individual PyCharm Professional 4-month subscription for free. If you value your time, you owe it to yourself to try PyCharm.

Support Test and Code: https://www.patreon.com/testpodcast

Links:

  • tox project documentation: https://tox.readthedocs.io/en/latest/
  • tox recreate, "Have you turned it off and on again?" for tox: https://tox.readthedocs.io/en/latest/config.html#conf-recreate
  • "Hello world" of tox: https://twitter.com/obestwalter/status/1042830213460250630
  • tox also has plugins: https://tox.readthedocs.io/en/latest/plugins.html
  • talk by Bernát Gábor about a tox based workflow at EuroPython 2018: https://www.youtube.com/watch?v=SFqna5ilqig
  • adding a description to your environments: https://tox.readthedocs.io/en/latest/config.html#conf-description
  • detox, distributed tox: https://github.com/tox-dev/detox
  • devpi, private package index: https://www.devpi.net/
  • PyCharm plugin to easily set the project interpreter via context menu (PyVenvManage): https://github.com/nokia/PyVenvManage
  • power mode in atom: https://atom.io/packages/activate-power-mode
  • Power Mode for PyCharm: https://plugins.jetbrains.com/plugin/8251-power-mode-ii

Mike C. Fletcher: Yay, django 1.11 broken frozen/pyc migrations again


So somehow Django stopped being able to support .pyc files as migrations between our last builds with 1.11.lower and the current builds with 1.11.higher. Frozen environments use these to distribute just the .pyc (no source)... but somehow this got reverted. And how did it get reverted in the Django 1.11 (LTS) branch? Apparently it will be restored in 2.1, but we're still deploying on Python 2.7, so that's out of reach as of yet. Argh.

Programming Ideas With Jake: Python Descriptors 2nd Edition!

The second edition of my book was just published and is available at the source and on Amazon. To purchase, just click one of the links in the sidebar! Also, I’ll be writing up a new article to be published on here this weekend, so look forward to that!

Will McGugan: Adding type hints to the Django ORM


It occurred to me that Django's ORM could do with a bit of a revamp to make use of recent developments in the Python language.

The main area where I think Django's models are missing out is the lack of type hinting (hardly surprising since Django pre-dates type hints). Adding type hints allows Mypy to detect bugs before you even run your code. It may only save you minutes each time, but multiply that by the number of code + run iterations you do each day, and it can save hours of development time. Multiply that by the lifetime of your project, and it could save weeks or months. A clear win.

Typing Django Models

I'd love to be able to use type hints with the Django ORM, but it seems that the magic required to create Django models is just too dynamic and would defy any attempts to use typing. Fortunately that may not necessarily be the case. Type hints can be inspected at runtime, and we could use this information when building the model, while still allowing Mypy to analyze our code. Take the following trivial Django model:

class Foo(models.Model):
    count = models.IntegerField(default=0)

The same information could be encoded in type hints as follows:

class Foo(TypedModel):
    count: int = 0

The TypedModel class could inspect the type hints and create the integer field in the same way as models.Model uses IntegerField and friends. But this would also tell Mypy that instances of Foo have an integer attribute called count.

But what of nullable fields? How can we express those in type hints? The following would cover it:

class Foo(TypedModel):
    count: Optional[int] = 0

The Optional type hint tells Mypy that the attribute could be None, which could also be used to instruct TypedModel to create a nullable field.
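Nothing like TypedModel exists as a library yet, but here is a minimal sketch (hypothetical, Python 3.8+) of how such a base class could recover the field type, nullability, and default from the hints at class-creation time:

```python
import typing


class TypedModel:
    """Hypothetical base class: collects field specs from type hints."""

    def __init_subclass__(cls, **kwargs):
        super().__init_subclass__(**kwargs)
        cls.field_specs = {}
        for name, hint in typing.get_type_hints(cls).items():
            # Optional[X] is Union[X, None]; record it as a nullable X
            nullable = (typing.get_origin(hint) is typing.Union
                        and type(None) in typing.get_args(hint))
            if nullable:
                hint = next(a for a in typing.get_args(hint)
                            if a is not type(None))
            default = getattr(cls, name, None)
            # A real implementation would build e.g. an IntegerField here
            cls.field_specs[name] = (hint, nullable, default)


class Foo(TypedModel):
    count: typing.Optional[int] = 0
    name: str = ""
```

Here `Foo.field_specs["count"]` comes out as `(int, True, 0)`, which is exactly the information needed to construct `models.IntegerField(null=True, default=0)`, while Mypy still sees plain annotated attributes.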

So type hints contain enough information to set the type of the field, the default value, and whether the field is nullable, but there are other pieces of information associated with fields in Django models; a CharField has a max_length attribute, for instance:

class Bar(models.Model):
    name = models.CharField(max_length=30)

There's nowhere in the type hinting to express the maximum length of a string, so we would have to use a custom object in addition to the type hints. Here's how that might be implemented:

class Bar(TypedModel):
    name: str = String(max_length=30)

The String class contains the maximum length information and additional meta information for the field. This class would have to be a subclass of the type specified in the hint, i.e. str, or Mypy would complain. Here's an example implementation:

class String(str):
    def __new__(cls, max_length=None):
        obj = super().__new__(cls)
        obj.max_length = max_length
        return obj

The above class creates an object that acts like a str, but has properties that could be inspected by the TypedModel class.
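A quick check (reusing the String class from above) confirms that such an object really does act like a str while carrying the extra metadata:

```python
class String(str):
    """str subclass from the post: carries field metadata like max_length."""

    def __new__(cls, max_length=None):
        obj = super().__new__(cls)  # an empty string by default
        obj.max_length = max_length
        return obj


name = String(max_length=30)
```

Because `name` is a genuine `str` instance, Mypy is satisfied by the `name: str` annotation, while a TypedModel-style base class could still read `name.max_length` when building the CharField.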

The entire model could be built using these techniques. Here's a larger example of what the proposed changes might look like:

class Student(TypedModel):
    name: str = String(max_length=30)  # CharField
    notes: str = ""  # TextField with empty default 
    birthday: datetime  # DateTimeField
    teacher: Optional[Staff] = None  # Nullable ForeignKey to Staff table
    classes: List[Subject]   # ManyToMany 

It's more terse than a typical Django model, which is a nice benefit, but the main advantage is that Mypy can detect errors (VS Code will even highlight such errors right in the editor).

For instance there is a bug in this line of code:

return {"teacher_name": student.teacher.name}

If the teacher field is ever null, that line will throw something like NoneType has no attribute "name". A silly error which may go unnoticed, even after a code review and 100% unit test coverage. No doubt only occurring in production at the weekend when your boss/client is giving a demo. But with typing, Mypy would catch that.
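Once Mypy flags the problem, the fix is an explicit None check that narrows the Optional. A self-contained sketch (the Teacher and Student classes here are hypothetical stand-ins for the models above):

```python
from typing import Optional


class Teacher:
    def __init__(self, name: str) -> None:
        self.name = name


class Student:
    def __init__(self, teacher: Optional[Teacher] = None) -> None:
        self.teacher = teacher


def teacher_payload(student: Student) -> dict:
    # Narrowing the Optional before attribute access satisfies Mypy
    # and avoids the AttributeError at runtime.
    if student.teacher is None:
        return {"teacher_name": None}
    return {"teacher_name": student.teacher.name}
```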

Specifying Meta

Another area where I think modern Python could improve Django models is specifying the model's meta information.

This may be subjective, but I've never been a huge fan of the way Django uses an inner class (a class defined in a class) to store additional information about the model. Python 3 gives us another option: we can add keyword args to the class statement (where you would specify the metaclass). This feels like a better place to add additional information about the model. Let's compare...

Here's an example taken from the docs:

class Ox(models.Model):
    horn_length = models.IntegerField()

    class Meta:
        ordering = ["horn_length"]
        verbose_name_plural = "oxen"

Here's the equivalent, using class keyword args:

class Ox(TypedModel, ordering=["horn_length"], verbose_name_plural="oxen"):
    horn_length: int

The extra keywords args may result in a large line, but these could be formatted differently (in the style preferred by black):

class Ox(
    TypedModel,
    ordering=["horn_length"],
    verbose_name_plural="oxen"
):
    horn_length: int

I think the class keyword args are neater, but YMMV.
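For what it's worth, the keyword-arg form is straightforward to support with __init_subclass__. A hypothetical ModelBase sketch (not real Django code) that captures the options:

```python
class ModelBase:
    """Hypothetical stand-in for TypedModel: captures meta options
    passed as class keyword arguments instead of an inner Meta class."""

    def __init_subclass__(cls, ordering=None, verbose_name_plural=None, **kwargs):
        super().__init_subclass__(**kwargs)
        cls._meta = {
            "ordering": ordering or [],
            "verbose_name_plural": verbose_name_plural,
        }


class Ox(ModelBase, ordering=["horn_length"], verbose_name_plural="oxen"):
    horn_length: int
```

The keyword arguments never become attributes of Ox itself; they flow straight into __init_subclass__, which is what makes this spelling a natural home for meta information.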

Code?

I'm sorry to say that none of this exists in code form (unless somebody else has come up with the same idea). I do think it could be written in such a way that the TypedModel and traditional models.Model definitions would be interchangeable, since all I'm proposing is a little syntactic sugar and no changes in functionality.

It did occur to me to start work on this, but then I remembered I have plenty of projects and other commitments to keep me busy for the near future. I'm hoping that this will be picked up by somebody strong on typing who understands metaclasses well enough to take this on.

Mike Driscoll: Testing Jupyter Notebooks


The more you do programming, the more you will hear about how you should test your code. You will hear about things like Extreme Programming and Test Driven Development (TDD). These are great ways to create quality code. But how does testing fit in with Jupyter? Frankly, it really doesn’t. If you want to test your code properly, you should write your code outside of Jupyter and import it into cells if you need to. This allows you to use Python’s unittest module or py.test to write tests for your code separately from Jupyter. This will also let you add on test runners like nose or put your code into a Continuous Integration setup using something like Travis CI or Jenkins.

However, all is not lost. You can do some testing of your Jupyter Notebooks even though you won’t have the full flexibility that you would get from keeping your code separate. We will look at some ideas that you can use to do some basic testing with Jupyter.


Execute and Check

One popular method of “testing” a Notebook is to run it from the command line and send its output to a file. Here is the example syntax that you could use if you wanted to do the execution on the command line:

jupyter-nbconvert --to notebook --execute --output output_file_path input_file_path

Of course, we want to do this programmatically and we want to be able to capture errors. To do that, we will take our Notebook runner code from my exporting Jupyter Notebook article and re-use it. Here it is again for your convenience:

# notebook_runner.py
import os

import nbformat
from nbconvert.preprocessors import ExecutePreprocessor


def run_notebook(notebook_path):
    nb_name, _ = os.path.splitext(os.path.basename(notebook_path))
    dirname = os.path.dirname(notebook_path)

    with open(notebook_path) as f:
        nb = nbformat.read(f, as_version=4)

    proc = ExecutePreprocessor(timeout=600, kernel_name='python3')
    proc.allow_errors = True

    proc.preprocess(nb, {'metadata': {'path': '/'}})
    output_path = os.path.join(dirname, '{}_all_output.ipynb'.format(nb_name))

    with open(output_path, mode='wt') as f:
        nbformat.write(nb, f)

    errors = []
    for cell in nb.cells:
        if 'outputs' in cell:
            for output in cell['outputs']:
                if output.output_type == 'error':
                    errors.append(output)

    return nb, errors


if __name__ == '__main__':
    nb, errors = run_notebook('Testing.ipynb')
    print(errors)

You will note that I have updated the code to run a new Notebook. Let’s go ahead and create a Notebook that has two cells of code in it. After creating the Notebook, change the title to Testing and save it. That will cause Jupyter to save the file as Testing.ipynb. Now enter the following code in the first cell:

def add(a, b):
    return a + b
 
add(5, 6)

And enter the following code into cell #2:

1 / 0

Now you can run the Notebook runner code. When you do, you should get the following output:

[{'ename': 'ZeroDivisionError',
  'evalue': 'integer division or modulo by zero',
  'output_type': 'error',
  'traceback': ['\x1b[0;31m\x1b[0m',
                '\x1b[0;31mZeroDivisionError\x1b[0mTraceback (most recent call ''last)',
                '\x1b[0;32m<ipython-input-2-bc757c3fda29>\x1b[0m in ''\x1b[0;36m<module>\x1b[0;34m()\x1b[0m\n''\x1b[0;32m----> 1\x1b[0;31m \x1b[0;36m1\x1b[0m ''\x1b[0;34m/\x1b[0m ''\x1b[0;36m0\x1b[0m\x1b[0;34m\x1b[0m\x1b[0m\n''\x1b[0m',
                '\x1b[0;31mZeroDivisionError\x1b[0m: integer division or ''modulo by zero']}]

This indicates that we have some code that outputs an error. In this case, we did expect that as this is a very contrived example. In your own code, you probably wouldn’t want any of your code to output an error. Regardless, this Notebook runner script isn’t enough to actually do a real test. You need to wrap this code with testing code. So let’s create a new file that we will save to the same location as our Notebook runner code. We will save this script with the name “test_runner.py”. Put the following code in your new script:

# test_runner.py
import unittest

import notebook_runner


class TestNotebook(unittest.TestCase):

    def test_runner(self):
        nb, errors = notebook_runner.run_notebook('Testing.ipynb')
        self.assertEqual(errors, [])


if __name__ == '__main__':
    unittest.main()

This code uses Python’s unittest module. Here we create a testing class with a single test function inside of it called test_runner. This function calls our Notebook runner and asserts that the errors list should be empty. To run this code, open up a terminal and navigate to the folder that contains your code. Then run the following command:

python test_runner.py

When I ran this, I got the following output:

F
======================================================================
FAIL: test_runner (__main__.TestNotebook)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "test_runner.py", line 10, in test_runner
    self.assertEqual(errors, [])
AssertionError: Lists differ: [{'output_type': u'error', 'ev... != []
 
First list contains 1 additional elements.
First extra element 0:
{'ename': 'ZeroDivisionError',
 'evalue': 'integer division or modulo by zero',
 'output_type': 'error',
 'traceback': ['\x1b[0;31m---------------------------------------------------------------------------\x1b[0m',
               '\x1b[0;31mZeroDivisionError\x1b[0m                         '
               'Traceback (most recent call last)',
               '\x1b[0;32m<ipython-input-2-bc757c3fda29>\x1b[0m in'
               '\x1b[0;36m<module>\x1b[0;34m()\x1b[0m\n'
               '\x1b[0;32m---->1\x1b[0;31m \x1b[0;36m1\x1b[0m '
               '\x1b[0;34m/\x1b[0m \x1b[0;36m0\x1b[0m\x1b[0;34m\x1b[0m\x1b[0m\n'
               '\x1b[0m',
               '\x1b[0;31mZeroDivisionError\x1b[0m: integer division or modulo '
               'by zero']}
 
Diff is 677 characters long. Set self.maxDiff to None to see it.
 
----------------------------------------------------------------------
Ran 1 test in 1.463s
 
FAILED (failures=1)

This clearly shows that our code failed. If you remove the cell that has the divide by zero issue and re-run your test, you should get this:

.
----------------------------------------------------------------------
Ran 1 test in 1.324s
 
OK

By removing the cell (or just correcting the error in that cell), you can make your tests pass.


The py.test Plugin

I discovered a neat plugin you can use that appears to help you out by making the workflow a bit easier. I am referring to the py.test plugin for Jupyter, which you can learn more about here.

Basically it gives py.test the ability to recognize Jupyter Notebooks and check if the stored inputs match the stored outputs and also that Notebooks run without error. After installing the nbval package, you can run it with py.test like this (assuming you have py.test installed):

py.test --nbval

Frankly, you can actually run py.test with no extra options on the test file we already created and it will use our test code as is. The main benefit of adding nbval is that you won’t necessarily need to add wrapper code around Jupyter if you do so.


Testing within the Notebook

Another way to run tests is to just include some tests in the Notebook itself. Let’s add a new cell to our Testing Notebook that contains the following code:

import unittest


class TestNotebook(unittest.TestCase):

    def test_add(self):
        self.assertEqual(add(2, 3), 5)

This will eventually test the add function from the first cell. We could add a bunch of different tests here. For example, we might want to test what happens if we add a string type with a None type. But you may have noticed that if you try to run this cell, you get no output. The reason is that the tests aren’t actually run yet. We need to call unittest.main to do that. So while it’s good to run that cell to get it into Jupyter’s memory, we actually need to add one more cell with the following code:

unittest.main(argv=[''], verbosity=2, exit=False)

This code should be put in the last cell of your Notebook so it can run all the tests that you have added. It is basically telling Python to run with verbosity level of 2 and not to exit. When you run this code you should see the following output in your Notebook:

test_add (__main__.TestNotebook) ... ok 
----------------------------------------------------------------------
Ran 1 test in 0.003s
 
OK
 
<unittest.main.TestProgram at 0x7fbc8fffc0d0>

You can do something similar with Python’s doctest module inside of Jupyter Notebooks as well.
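For instance, a cell could define its tests as doctests in a function's docstring and a final cell could run them with doctest.testmod() (the add function here mirrors the one from our Testing Notebook):

```python
import doctest


def add(a, b):
    """Return the sum of a and b.

    >>> add(2, 3)
    5
    >>> add(-1, 1)
    0
    """
    return a + b


# In a Notebook this call would go in the last cell,
# just like unittest.main in the previous example.
results = doctest.testmod(verbose=False)
```

results is a named tuple of (failed, attempted), so a quick `results.failed == 0` check tells you all doctests passed.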


Wrapping Up

As I mentioned at the beginning, while you can test your code in your Jupyter Notebooks, it is actually much better if you just test your code outside of it. However there are workarounds and since some people like to use Jupyter for documentation purposes, it is good to have a way to verify that they are working correctly. In this chapter you learned how to run Notebooks programmatically and verify that the output was as you expected. You could enhance that code to verify certain errors are present if you wanted to as well.

You also learned how to use Python’s unittest module in your Notebook cells directly. This does offer some nice flexibility as you can now run your code all in one place. Use these tools wisely and they will serve you well.


Related Reading

PyBites: Code Challenge 54 - Query the Spotify API - Review


In this article we review last week's Python Clipboard History code challenge.

Reminder: new structure review post / Hacktoberfest is back!

From now on we will merge our solution into our Community branch and include anything noteworthy here, because:

  • we are learning just like you, we are all equals :)

  • we need the PRs too ;) ... as part of Hacktoberfest No. 5 that just kicked off (5 PRs and you get a cool t-shirt)

Don't be shy, share your work!

Community Pull Requests

A good 10+ PRs this week, amazing!

Check out the awesome PRs by our community for PCC54 (or from fork: git checkout community && git merge upstream/community):

Featured

vipinreyo's Clipboard Viewer


Lanseuo's Clipboard


PCC54 Lessons

Refreshed pyperclip and sqlite modules. PyQt5 documentation is evolving, hence there is not much code available in the public domain to play around with, which is a constraint in designing GUIs for Python apps using Qt.

I had to really think about how to monitor the clipboard and copy the text from it just ONCE, i.e., no immediate duplicates. It was more the thought process around it.

I learned some new things about tkinter

Gave me the chance to finally play with python 3.7's dataclasses, although not by much though.

Really nice one to practice various skills. I made a clipboard cache queue, a bit like vim buffers (used: deque, clear terminal, class, property, pyperclip, termcolor)

Read Code for Fun and Profit

You can look at all submitted code here and/or on our Community branch.

Other learnings we spotted in Pull Requests for other challenges this week:

(PCC01) how with works in python

(PCC13) I tweaked your tests in order to make it pass with my data structure.

(PCC39) Played around with 'fixture' and the scope of the fixture.

(PCC47) This one was time consuming because I had to look up how to graph all of these, but it was an excellent learning exercise!

(PCC51) Expanded my skills of working with the databases within python and brushed up on some rusty SQL skills

Thanks to everyone for your participation in our blog code challenges! Keep the PRs coming and include a README.md with one or more screenshots if you want to be featured in this weekly review post.

Keep the PRs coming, again this month it counts for Hacktoberfest!

Need more Python Practice?

Subscribe to our blog (sidebar) to get a new PyBites Code Challenge (PCC) in your inbox every start of the week.

And/or take any of our 50+ challenges on our platform.

Prefer coding self contained Python exercises in the comfort of your browser? Try our growing collection of Bites of Py.

Want to do the #100DaysOfCode but not sure what to work on? Take our course and/or start logging your progress on our platform.


Keep Calm and Code in Python!

-- Bob and Julian

PyBites: Code Challenge 55 - #100DaysOfCode Curriculum Generator


There is an immense amount to be learned simply by tinkering with things. - Henry Ford

Hey Pythonistas,

It's time for another code challenge! This week we're asking you to create your own #100DaysOfCode Curriculum Generator.

Sounds exciting? It gets even better: with this challenge you can even be featured on our platform! Read on ...

The Challenge

Did you notice that every serious progress starts with a plan? This is why we are big advocates of the #100DaysOfCode. Heck, we even built a whole Python course around it.

So here is the deal: PyBites is expanding its 100 Days tracker ("grid") feature: we want folks to add their own curriculums or learning paths.

Only one requirement: return a valid JSON response

You can make this as simple or sophisticated as you want, the only thing we request is a standard response JSON template so we can easily parse it on the platform:

Built with ObjGen -> http://www.objgen.com/json/models/q2S4Q

    {
    "title": "title of your 100 days",
    "version": 0.1,
    "startDate": "2018-10-14T00:00:00.000Z",
    "goals": "what do you want to achieve?",
    "github_repo": "https://github.com/pybites/100DaysOfCode",
    "tasks": [
        {
        "day": 1,
        "activity": "what you need to do this day?",
        "done": false
        },
        {
        "day": 2,
        "activity": "what you need to do this day?",
        "done": false
        },
        {
        "day": 3,
        "activity": "what you need to do this day?",
        "done": false
        },
    ...
    ...
        {
        "day": 100,
        "activity": "milestone ... 100 days done",
        "done": false
        }
    ]
    }

An example

Here is what we plan to do, maybe it serves as an idea how you could code this challenge up:

  • as I (Bob) want to learn Data Science I am selecting 4 or 5 books I want to go through
  • as #100DaysOfCode works best by spending an hour a day I am dividing the books in n pages to read every day
  • I am going to add the books to our reading list app
  • keeping it generic, my script will accept a bunch of book IDs (URLs) from that app and scrape the title and number of pages for each book
  • I calculate the daily number of pages to read every day and define page ranges for each of the 100 days
  • I convert this to the required JSON output above
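Here's one way the books idea could be sketched in code (the helper name, book titles, and page counts below are all made up; only the JSON shape is prescribed by the challenge):

```python
import json
from datetime import datetime, timezone


def book_curriculum(title, goals, books, days=100):
    """Spread (book_title, page_count) pairs over `days` daily reading
    tasks, emitting the JSON template required by the challenge."""
    total = sum(pages for _, pages in books)
    per_day = total / days
    tasks, read = [], 0.0
    for day in range(1, days + 1):
        start = int(read) + 1
        read = min(read + per_day, total)
        tasks.append({
            "day": day,
            "activity": "Read pages {}-{}".format(start, int(read)),
            "done": False,
        })
    return json.dumps({
        "title": title,
        "version": 0.1,
        "startDate": datetime.now(timezone.utc).strftime("%Y-%m-%dT00:00:00.000Z"),
        "goals": goals,
        "github_repo": "https://github.com/pybites/100DaysOfCode",
        "tasks": tasks,
    }, indent=2)


curriculum = json.loads(book_curriculum(
    "100 Days of Data Science",
    "work through my Data Science reading list",
    [("Book A", 350), ("Book B", 650)],  # made-up titles and page counts
))
```

A real version would scrape the titles and page counts from the reading list app instead of hard-coding them, but the JSON it returns already matches the template above.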

More ideas

Of course it does not have to be centered around books, it can be any other way you like to plan your #100DaysOfCode. As long as you return the required JSON.

Other ideas that come to mind:

  • Set out your plan in a Google sheet and parse that,
  • Make a curriculum pointing to various Lynda/Safaribooks/Pluralsight courses and try to make a daily task list scraping those sites,
  • Make a curriculum parsing one or more (Pycon) YouTube feeds,
  • Make a curriculum parsing our blog challenges and Bites of Py exercises,
  • It all comes down to planning your resources and breaking them down into 100 digestible units.

As usual, this is a challenge that came about wanting to scratch our own itch. Lack ideas? Remember there is always something you can enhance or automate for yourself or somebody else, and by doing so sharpening your coding skills!

Be featured

If you want to share your learning path with our community let us know in your PR linking to your JSON file and a short description. We will then add it to our 100 days grid app.

If you need help getting ready with Github, see our new instruction video.

PyBites Community

A few more things before we take off:

  • Do you want to discuss this challenge and share your Pythonic journey with other passionate Pythonistas? Confirm your email on our platform then request access to our Slack via settings.

  • PyBites is here to challenge you because becoming a better Pythonista requires practice, a lot of it. For any feedback, issues or ideas use GH Issues, tweet us or ping us on our Slack.


>>> from pybites import Bob, Julian
Keep Calm and Code in Python!

Codementor: Celery Task Routing: The Basics

Stack Abuse: A Brief Introduction to matplotlib for Data Visualization


Introduction

Python has a wide variety of useful packages for machine learning and statistical analysis such as TensorFlow, NumPy, scikit-learn, Pandas, and more. One package that is essential to most data science projects is matplotlib.

Available for any Python distribution, it can be installed on Python 3 with pip. Other methods are also available, check https://matplotlib.org/ for more details.

Installation

If you use an OS with a terminal, the following command would install matplotlib with pip:

$ python3 -m pip install matplotlib

Importing & Environment

In a Python file, we want to import the pyplot function that allows us to interface with a MATLAB-like plotting environment. We also import a lines function that lets us add lines to plots:

import matplotlib.pyplot as plt  
import matplotlib.lines as mlines  

Essentially, this plotting environment lets us save figures and their attributes as variables. These plots can then be printed and viewed with a simple command. For an example, we can look at the stock price of Google: specifically the date, open, close, volume, and adjusted close price (date is stored as an np.datetime64) for the most recent 250 days:

import numpy as np  
import matplotlib.pyplot as plt  
import matplotlib.cbook as cbook

with cbook.get_sample_data('goog.npz') as datafile:  
    price_data = np.load(datafile)['price_data'].view(np.recarray)
price_data = price_data[-250:] # get the most recent 250 trading days  

We then transform the data in a way that is common for time series: we compute the relative (percent) change, $d_i$, between each observation and the one before it:

$$d_i = \frac{y_i - y_{i-1}}{y_{i-1}}$$

delta1 = np.diff(price_data.adj_close) / price_data.adj_close[:-1]  
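On a tiny toy series (hypothetical prices, not the Google data above), the transformation works like this:

```python
import numpy as np

# a small hypothetical price series, just to illustrate the transformation
adj_close = np.array([100.0, 102.0, 99.96])
delta = np.diff(adj_close) / adj_close[:-1]  # (y_i - y_{i-1}) / y_{i-1}
print(delta)  # relative day-over-day changes: [ 0.02 -0.02]
```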

We can also look at the transformations of different variables, such as volume and closing price:

# Marker size in units of points^2
volume = (15 * price_data.volume[:-2] / price_data.volume[0])**2  
close = 0.003 * price_data.close[:-2] / 0.003 * price_data.open[:-2]  

Plotting a Scatter Plot

To actually plot this data, you can use the subplots() function from plt (matplotlib.pyplot). By default this generates both the figure area and the axes of a plot.

Here we will make a scatter plot of the differences between successive days. To elaborate, x is the difference between day i and the previous day. y is the difference between day i+1 and the previous day (i):

fig, ax = plt.subplots()  
ax.scatter(delta1[:-1], delta1[1:], c=close, s=volume, alpha=0.5)

ax.set_xlabel(r'$\Delta_i$', fontsize=15)  
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=15)  
ax.set_title('Volume and percent change')

ax.grid(True)  
fig.tight_layout()

plt.show()  

We then create labels for the x and y axes, as well as a title for the plot. We choose to plot this data with grids and a tight layout.

plt.show() displays the plot for us.
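If you are running without a display, or simply want to keep the figure, fig.savefig() writes it to a file instead of (or in addition to) showing it. A minimal, self-contained sketch with dummy data (the backend choice and filename here are just examples):

```python
import matplotlib
matplotlib.use('Agg')  # non-interactive backend, so this runs headless
import matplotlib.pyplot as plt

fig, ax = plt.subplots()
ax.scatter([1, 2, 3], [3, 1, 2])
fig.savefig('scatter_example.png', dpi=150)  # written to the working directory
```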

Adding a Line

We can add a line to this plot by providing x and y coordinates as lists to a Line2D instance:

import matplotlib.lines as mlines

fig, ax = plt.subplots()  
line = mlines.Line2D([-.15,0.25], [-.07,0.09], color='red')  
ax.add_line(line)

# reusing scatterplot code
ax.scatter(delta1[:-1], delta1[1:], c=close, s=volume, alpha=0.5)

ax.set_xlabel(r'$\Delta_i$', fontsize=15)  
ax.set_ylabel(r'$\Delta_{i+1}$', fontsize=15)  
ax.set_title('Volume and percent change')

ax.grid(True)  
fig.tight_layout()

plt.show()  

Plotting Histograms

To plot a histogram, we follow a similar process and use the hist() function from pyplot. We will generate 10000 random data points, x, with a mean of 100 and standard deviation of 15.

The hist function takes the data, x, number of bins, and other arguments such as density, which normalizes the data to a probability density, or alpha, which sets the transparency of the histogram.

We will also use the library mlab to add a line representing a normal density function with the same mean and standard deviation:

import numpy as np  
import matplotlib.mlab as mlab  
import matplotlib.pyplot as plt

mu, sigma = 100, 15  
x = mu + sigma*np.random.randn(10000)

# the histogram of the data
n, bins, patches = plt.hist(x, 30, density=1, facecolor='blue', alpha=0.75)

# add a 'best fit' line
y = mlab.normpdf(bins, mu, sigma)  
l = plt.plot(bins, y, 'r--', linewidth=4)

plt.xlabel('IQ')  
plt.ylabel('Probability')  
plt.title(r'$\mathrm{Histogram\ of\ IQ:}\ \mu=100,\ \sigma=15$')  
plt.axis([40, 160, 0, 0.03])  
plt.grid(True)

plt.show()  
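Note that mlab.normpdf has been deprecated and removed in newer matplotlib releases. The same best-fit line can be computed directly with numpy (a sketch; the bin edges below match the 40–160 axis range used above):

```python
import numpy as np

mu, sigma = 100, 15
bins = np.linspace(40, 160, 31)
# the normal density written out with numpy, replacing mlab.normpdf
y = np.exp(-0.5 * ((bins - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))
print(round(float(y.max()), 4))  # the peak of the density, at the mean
```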

Bar Charts

While histograms help us visualize densities, bar charts help us view counts of data. To plot a bar chart with matplotlib, we use the bar() function. This takes the bar positions and the counts as its first two arguments, along with other optional arguments.

As an example, we could look at a sample of the number of programmers that use different languages:

import numpy as np  
import matplotlib.pyplot as plt

objects = ('Python', 'C++', 'Java', 'Perl', 'Scala', 'Lisp')  
y_pos = np.arange(len(objects))  
performance = [10,8,6,4,2,1]

plt.bar(y_pos, performance, align='center', alpha=0.5)  
plt.xticks(y_pos, objects)  
plt.ylabel('Usage')  
plt.title('Programming language usage')

plt.show()  
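The same data can also be drawn sideways with barh(), which is often easier to read with long category labels. A sketch (the Agg backend and output filename are just so it runs headless):

```python
import matplotlib
matplotlib.use('Agg')  # headless backend so the sketch runs anywhere
import matplotlib.pyplot as plt
import numpy as np

objects = ('Python', 'C++', 'Java', 'Perl', 'Scala', 'Lisp')
y_pos = np.arange(len(objects))
performance = [10, 8, 6, 4, 2, 1]

plt.barh(y_pos, performance, align='center', alpha=0.5)  # horizontal bars
plt.yticks(y_pos, objects)  # the labels now go on the y axis
plt.xlabel('Usage')
plt.savefig('barh_example.png')
```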

Plotting Images

Analyzing images is very common in Python. Not surprisingly, we can use matplotlib to view images. We use the cv2 library to read in images.

The read_image() function summary is below:

  • reads the image file
  • splits the color channels
  • changes them to RGB
  • resizes the image
  • returns a matrix of RGB values

The rest of the code reads in the first five images of cats and dogs from data used in an image recognition CNN. The pictures are concatenated and printed on the same axis:

import matplotlib.pyplot as plt  
import numpy as np  
import os, cv2

cwd = os.getcwd()  
TRAIN_DIR = cwd + '/data/train/'

ROWS = 256  
COLS = 256  
CHANNELS = 3

train_images = [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR)] # use this for full dataset  
train_dogs =   [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'dog' in i]  
train_cats =   [TRAIN_DIR+i for i in os.listdir(TRAIN_DIR) if 'cat' in i]

def read_image(file_path):  
    img = cv2.imread(file_path, cv2.IMREAD_COLOR) #cv2.IMREAD_GRAYSCALE
    b,g,r = cv2.split(img)
    img2 = cv2.merge([r,g,b])
    return cv2.resize(img2, (ROWS, COLS), interpolation=cv2.INTER_CUBIC)

for a in range(0,5):  
    cat = read_image(train_cats[a])
    dog = read_image(train_dogs[a])
    pair = np.concatenate((cat, dog), axis=1)
    plt.figure(figsize=(10,5))
    plt.imshow(pair)
    plt.show()
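The split/merge pair inside read_image() reverses OpenCV's BGR channel order into RGB (which is what matplotlib expects). The same reordering can be done with a plain numpy slice, shown here on a dummy array so it runs without cv2:

```python
import numpy as np

# a dummy 2x2 "image" with distinct blue, green, and red channel values
img_bgr = np.zeros((2, 2, 3), dtype=np.uint8)
img_bgr[..., 0] = 10  # blue channel
img_bgr[..., 1] = 20  # green channel
img_bgr[..., 2] = 30  # red channel

img_rgb = img_bgr[..., ::-1]  # reverse the channel axis: BGR -> RGB
print(img_rgb[0, 0])  # [30 20 10]
```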

Conclusion

In this post we saw a brief introduction of how to use matplotlib to plot data in scatter plots, histograms, and bar charts. We also added lines to these plots. Finally, we saw how to read in images using the cv2 library and used matplotlib to plot the images.

Randy Zwitch: Using pandas and pymapd for ETL into OmniSci


I’ve got PyData NYC 2018 in two days and, rather than finishing up my talk, I just realized that my source data has a silent corruption due to non-standard timestamps. Here’s how I fixed this using pandas and then uploaded the data to OmniSci.

Computers Are Dumb, MAKE THINGS EASIER FOR THEM!

Literally every data tool in the world can read the ISO-8601 timestamp format. Conversely, not every tool in the world can read Excel or whatever horrible other tool people use to generate the CSV files seen in the wild. While I should’ve been more diligent checking my data ingestion, I didn’t until I created a wonky report…

Let’s take a look at the format that tripped me up:

Excel data format sucks

Month/Day/Year Hour:Minute:Second AM/PM feels very much like an Excel date format that you get when Excel is used as a display medium. Unfortunately, when you write CSV files like this, the next tool to read them has to understand 1) that these columns are timestamps and 2) if the user doesn’t specify the format, has to guess the format.

In my case, I didn’t do descriptive statistics on my timestamp columns and had a silent truncation(!) of the AM/PM portion of the data. So instead of having 24 hours in the day, the parser read the data as follows (the #AM and #PM are my comments for clarity):

datetime_beginning_utc
2001-01-01 01:00:00 #AM
2001-01-01 01:00:00 #PM
2001-01-01 02:00:00 #AM
2001-01-01 02:00:00 #PM
2001-01-01 03:00:00 #AM
2001-01-01 03:00:00 #PM
2001-01-01 04:00:00 #AM
2001-01-01 04:00:00 #PM
2001-01-01 05:00:00 #AM
2001-01-01 05:00:00 #PM
2001-01-01 06:00:00 #AM
2001-01-01 06:00:00 #PM
2001-01-01 07:00:00 #AM
2001-01-01 07:00:00 #PM
2001-01-01 08:00:00 #AM
2001-01-01 08:00:00 #PM
2001-01-01 09:00:00 #AM
2001-01-01 09:00:00 #PM
2001-01-01 10:00:00 #AM
2001-01-01 10:00:00 #PM
2001-01-01 11:00:00 #AM
2001-01-01 11:00:00 #PM
2001-01-01 12:00:00 #AM
2001-01-01 12:00:00 #PM

So while the data looks like it was imported correctly (because, it is a timestamp), it wasn’t until I realized that hours 13-23 were missing from my data that I realized I had an error.

Pandas To The Rescue!

Fixing this issue is as straight-forward as reading the CSV into python using pandas and specifying the date format:

import pandas as pd
import datetime

df = pd.read_csv("/mnt/storage1TB/hrl_load_metered/hrl_load_metered.csv",
                 parse_dates=[0, 1],
                 date_parser=lambda x: datetime.datetime.strptime(x, "%m/%d/%Y %I:%M:%S %p"))

Yay pandas!

We can see from the code above that pandas has taken our directive about the format and it appears the data have been parsed correctly. A good secondary check here is that the difference in timestamps is -5, which is the offset of the East Coast of the United States relative to UTC.
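The format string itself is easy to sanity-check on a single value (the sample timestamp below is hypothetical, in the same style as the file):

```python
import datetime

# %I is the 12-hour clock and %p picks up AM/PM -- the piece the
# default parser silently dropped
ts = datetime.datetime.strptime("1/1/2001 1:00:00 PM", "%m/%d/%Y %I:%M:%S %p")
print(ts)  # 2001-01-01 13:00:00
```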

Uploading to OmniSci Directly From Pandas

Since my PyData talk is going to be using OmniSci, I need to upload this corrected data or rebuild all my work (I’ll opt for fixing my source). Luckily, the pymapd package provides tight integration to an OmniSci database, providing a means of uploading the data directly from a pandas dataframe:

import pymapd

# connect to database
conn = pymapd.connect(host="localhost", port=9091, user="mapd",
                      password="HyperInteractive", dbname="mapd")

# truncate table so that table definition can be reused
conn.execute("truncate table hrl_load_metered")

# re-load data into table
# with none of the optional arguments, pymapd infers that this is an insert
# operation, since the table name exists
conn.load_table_columnar("hrl_load_metered", df)

I have a pre-existing table hrl_load_metered on the database, so I can truncate the table to remove its (incorrect) data but keep the table structure. Then I can use load_table_columnar to insert the cleaned up data into my table and now my data is correct.

Computers May Be Dumb, But Humans Are Lazy

At the beginning, I joked that computers are dumb. Computers are just tools that do exactly what a human programs them to do, and really, it was my laziness that caused this data error. Luckily, I did catch this before my talk and the fix is pretty easy.

I’d like to say I’m going to remember to check my data going forward, but in reality, I’m just documenting this here for the next time I make the same, lazy mistake.


A. Jesse Jiryu Davis: Recap: PyGotham 2018 Speaker Coaching

With your help, we raised money for twelve PyGotham speakers to receive free training from opera singer and speaking coach Melissa Collom. Most of the speakers were new to the conference scene; Melissa helped them focus on their value to the audience, clarify their ideas, and speak with confidence and charisma. In a survey, nearly all speakers said the session was “very beneficial” and made them “much more likely” to propose conference talks again.

Python Bytes: #99 parse - the regex antidote in Python

Andrea Grandi: Using ipdb with Python 3.7.x breakpoint


Python 3.7.x introduced a new way to insert a breakpoint in the code. Before Python 3.7.x, to insert a debugging point we had to write import pdb; pdb.set_trace(), which honestly I could never remember (I even created a snippet in VS Code to auto-complete it).

Now you can just write breakpoint(). That's it!

Now... the only problem is that by default that command will use pdb, which is not exactly the best debugger you can have. I usually use ipdb, but there wasn't an intuitive way of using it... and no, just installing it in your virtual environment won't make it the default.

How do you use it, then? It's very simple. The new debugging command reads an environment variable named PYTHONBREAKPOINT. If you set it properly, you will be able to use ipdb instead of pdb.

export PYTHONBREAKPOINT=ipdb.set_trace

At this point, any time you use breakpoint() in your code, ipdb will be used instead of pdb.
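The same variable also gives you an off switch: setting it to 0 turns every breakpoint() call into a no-op, which is handy when you want to leave breakpoints in place but run straight through. A small sketch:

```python
import os

# "0" disables breakpoint() entirely; the default hook re-reads
# PYTHONBREAKPOINT on every call, so setting it at runtime works too
os.environ["PYTHONBREAKPOINT"] = "0"
breakpoint()  # a no-op now; execution simply continues
status = "still running"
print(status)
```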

References

  • https://hackernoon.com/python-3-7s-new-builtin-breakpoint-a-quick-tour-4f1aebc444c

Vasudev Ram: The 2018 Python Developer Survey

By Vasudev Ram

Reposting a PSF-Community email as a PSA:

Participate in the 2018 Python Developer Survey.

Excerpt from an email to the psf-community@python.org and psf-members-announce@python.org mailing lists:

[ As some of you may have seen, the 2018 Python Developer Survey is available. If you haven't taken the survey yet, please do so soon! Additionally, we'd appreciate any assistance you all can provide with sharing the survey with your local Python groups, schools, work colleagues, etc. We will keep the survey open through October 26th, 2018.

Python Developers Survey 2018

We’re counting on your help to better understand how different Python developers use Python and related frameworks, tools, and technologies. We also hope you'll enjoy going through the questions.

The survey is organized in partnership between the Python Software Foundation and JetBrains. Together we will publish the aggregated results. We will randomly choose and announce 100 winners to receive a Python Surprise Gift Pack (must complete the full survey to qualify). ]

To my readers: I'll post the answer to A Python email signature puzzle soon, in my next post.


- Vasudev Ram - Online Python training and consulting


Mike Driscoll: Jupyter Notebook Debugging


Debugging is an important skill. It is the process of figuring out what is wrong with your code, or simply of understanding it. There are many times when I come to unfamiliar code and need to step through it in a debugger to grasp how it works. Most Python IDEs have good debuggers built into them. I personally like Wing IDE, for instance. Others like PyCharm or PyDev. But what if you want to debug the code in your Jupyter Notebook? How does that work?

In this chapter we will look at a couple of different methods of debugging a Notebook. The first one is by using Python’s own pdb module.


Using pdb

The pdb module is Python’s debugger module. Just as C++ has gdb, Python has pdb.

Let’s start by opening up a new Notebook and adding a cell with the following code in it:

def bad_function(var):
    return var + 0

bad_function("Mike")

If you run this code, you should end up with some output that looks like this:

---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-1-2f23ed1cac1e> in <module>()
      2     return var + 0
      3 
----> 4 bad_function("Mike")

<ipython-input-1-2f23ed1cac1e> in bad_function(var)
      1 def bad_function(var):
----> 2     return var + 0
      3 
      4 bad_function("Mike")

TypeError: cannot concatenate 'str' and 'int' objects

What this means is that you cannot concatenate a string and an integer. This is a pretty common problem if you don’t know what types a function accepts. You will find that this is especially true when working with complex functions and classes, unless they happen to be using type hinting. One way to figure out what is going on is by adding a breakpoint using pdb’s set_trace() function:

def bad_function(var):
    import pdb
    pdb.set_trace()
    return var + 0

bad_function("Mike")

Now when you run the cell, you will get a prompt in the output which you can use to inspect the variables and basically run code live. If you happen to have Python 3.7, then you can simplify the example above by using the new breakpoint built-in, like this:

def bad_function(var):
    breakpoint()
    return var + 0

bad_function("Mike")

This code is functionally equivalent to the previous example but uses the new breakpoint function instead. When you run this code, it should act the same way as the code in the previous section did.

You can read more about how to use pdb here.

You can use any of pdb’s command right inside of your Jupyter Notebook. Here are some examples:

  • w(here) – Print the stack trace
  • d(own) – Move the current frame X number of levels down. Defaults to one.
  • u(p) – Move the current frame X number of levels up. Defaults to one.
  • b(reak) – With a lineno argument, set a breakpoint at that line number in the current file / context
  • s(tep) – Execute the current line and stop at the next possible line
  • c(ontinue) – Continue execution

Note that these are single-letter commands: w, d, u and b are the commands. You can use these commands to interactively debug your code in your Notebook along with the other commands listed in the documentation listed above.


ipdb

IPython also has a debugger called ipdb. However it does not work with Jupyter Notebook directly. You would need to connect to the kernel using something like Jupyter console and run it from there to use it. If you would like to go that route, you can read more about using Jupyter console here.

However there is an IPython debugger that we can use called IPython.core.debugger.set_trace. Let’s create a cell with the following code:

from IPython.core.debugger import set_trace

def bad_function(var):
    set_trace()
    return var + 0

bad_function("Mike")

Now you can run this cell and get the ipdb debugger. Here is what the output looked like on my machine:

The IPython debugger uses the same commands as the Python debugger does. The main difference is that it provides syntax highlighting and was originally designed to work in the IPython console.

There is one other way to open up the ipdb debugger and that is by using the %pdb magic. Here is some sample code you can try in a Notebook cell:

%pdb 
def bad_function(var):
    return var + 0 
bad_function("Mike")

When you run this code, you should end up seeing the `TypeError` traceback and then the ipdb prompt will appear in the output, which you can then use as before.


What about %%debug?

There is yet another way that you can open up a debugger in your Notebook. You can use `%%debug` to debug the entire cell like this:

%%debug
 
def bad_function(var):
    return var + 0 
bad_function("Mike")

This will start the debugging session immediately when you run the cell. What that means is that you would want to use some of the commands that pdb supports to step into the code and examine the function or variables as needed.

Note that you could also use `%debug` if you want to debug a single line.


Wrapping Up

In this chapter we learned of several different methods that you can use to debug the code in your Jupyter Notebook. I personally prefer to use Python’s pdb module, but you can use the IPython.core.debugger to get the same functionality and it could be better if you prefer to have syntax highlighting.

There is also a newer “visual debugger” package called the PixieDebugger from the pixiedust package:

I haven’t used it myself. Some reviewers say it is amazing and others have said it is pretty buggy. I will leave that one up to you to determine if it is something you want to add to your toolset.

As far as I am concerned, I think using pdb or IPython’s debugger work quite well and should work for you too.


Related Reading


