Channel: Planet Python

Codementor: Quick Guide: Call a Celery Task Without Access to the Codebase

This post was originally published on Distributed Python (https://www.distributedpython.com/) on June 19th, 2018. The standard Celery docs and tutorials usually assume that your Celery and your API...

Weekly Python StackOverflow Report: (cxliv) stackoverflow python report


Stefan Behnel: What's new in Cython 0.29?


I'm currently preparing the next Cython release, 0.29. Expect it in the wild within the next few weeks. Testing and reporting bugs is much appreciated, especially before we release. :) Download the latest install archive right away and give it a try.

In case you didn't hear about Cython before, it's the most widely used statically optimising Python compiler out there. It translates Python (2/3) code to C, and makes it as easy as Python itself to tune the code all the way down into fast native code.

So, what makes this another great Cython release?

The contributors

First of all, our contributors. A substantial part of the changes in this release was written by users and non-core developers and contributed via pull requests. A big "Thank You!" to all of our contributors and bug reporters! You really made this a great release.

Above all, Gabriel de Marmiesse has invested a remarkable amount of time into restructuring and rewriting the documentation. It now has a lot less historic smell, and much better, tested (!) code examples. And he obviously found more than one problematic piece of code in the docs that we were able to fix along the way.

Cython 3.0

And this will be the last 0.x release of Cython. The Cython compiler has been in production-critical use for years, all over the world, and there is really no good reason for it to have a 0.x version scheme. In fact, the 0.x release series can easily be counted as 1.x, which is one of the reasons why we decided to skip the 1.x series altogether. And, while we're at it, why not the 2.x prefix as well. The next release will be 3.0. The main reason is that we want 3.0 to do two things: a) switch the default language compatibility level from Python 2.x to 3.x and b) break with some backwards compatibility issues that get more in the way than they help. We have started collecting a list of things to rethink and change in our bug tracker.

Flipping the language level switch is a tiny code change for us, but a larger change for our users and the millions of source lines in their code bases. In order to avoid any resemblance to the years of effort that went into the Py2/3 switch, we are planning to take measures that allow users to choose how much effort they want to invest, from "almost none at all" to "as much as they want".

Cython has a long tradition of helping users adapt their code for both Python 2 and Python 3, ever since we ported it to Python 3.0. We used to joke back in 2008 that Cython was the easiest way to migrate an existing Py2 code base to Python 3, and it was never really meant as a joke. Many annoying details are handled internally in the compiler, such as the range versus xrange renaming, or dict iteration. Cython supported dict and set comprehensions before they were backported to Py2.7, and has long provided three string types (or four, if you want) instead of two. It distinguishes between bytes, str and unicode (and it knows basestring), where str is the type that changes between Py2's byte string and Py3's Unicode string. This distinction helps users to be explicit, even at the C level, about what kind of character or byte sequence they want, and how it should behave across the Py2/3 boundary.

For Cython 3.0, the plan is to switch only the default language level, which users can change via a command line option or the compiler directive language_level. To be clear, Cython will continue to support the existing language semantics. They will just no longer be the default, and users will have to select them explicitly by setting language_level=2. That's the "almost none at all" case. In order to prepare for this switch, Cython now issues a warning when no language level is explicitly requested, and thus pushes users into being explicit about what semantics their code requires. We obviously hope that many of our users will take the opportunity and migrate their code to the nicer Python 3 semantics, which Cython has long supported as language_level=3.
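Being explicit about the level is a one-line change. A sketch of the usual ways to set it (the module name is a placeholder):

```python
# cython: language_level=3
# (The directive comment above must appear at the top of the .pyx/.py file.)
#
# Equivalent alternatives:
#   on the command line:  cython -3 mymodule.pyx
#   in setup.py:          cythonize("mymodule.pyx",
#                                   compiler_directives={"language_level": "3"})
```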

At the same time, we are considering providing a new "in between" kind of setting, which would enable all the nice Python 3 goodies that are not syntax compatible with Python 2.x, but without requiring all unprefixed string literals to become Unicode strings. This was one of the biggest problems in the general Py3 migration. And in the context of Cython's integration with C code, it gets in the way of our users even a bit more than it would in Python code. Our goals are to make it easy for new users who come from Python 3 to compile their code with Cython and to allow existing (Cython/Python 2) code bases to make use of the benefits before they can make a 100% switch.

But all of this is still the (not so far) future; let's see what the present release has to offer.

Module initialisation like Python does

One great change under the hood is that we managed to enable PEP-489 support (again). It was already mostly available in Cython 0.27, but led to problems that made us back-pedal at the time. Now we believe that we have found a way to bring the saner module initialisation of Python 3.5 to our users, without risking the previous breakage. Most importantly, features like subinterpreter support or module reloading are detected and disabled, so that Cython compiled extension modules cannot be mistreated in such environments. Actual support for these little-used features will probably come at some point, but will certainly require an opt-in from users, since it is expected to reduce the overall performance of Python operations quite visibly. The more important features, like a correct __file__ path being available at import time, and in fact, extension modules looking and behaving exactly like Python modules during the import, are much more helpful to most users.

Compiling Python code with OpenMP and memory views

Another PEP is worth mentioning next, actually two PEPs: 484 and 526, vulgo type annotations. Cython has supported type declarations in Python code for years, has switched to PEP-484/526 compatible typing with release 0.27 (more than one year ago), and has now gained several new features that make static typing in Python code much more widely usable. Users can now declare their statically typed Python functions as not requiring the GIL, and thus call them from a parallel OpenMP loop, all without leaving Python code compatibility. Even exceptions can now be raised directly from thread-parallel code, without first having to acquire the GIL explicitly.

And memory views are available in Python typing notation:

import cython
from cython.parallel import prange

@cython.cfunc
@cython.nogil
def compute_one_row(row: cython.double[:]) -> cython.int:
    ...

def process_2d_array(data: cython.double[:, :]):
    i: cython.Py_ssize_t
    for i in prange(data.shape[0], num_threads=16, nogil=True):
        compute_one_row(data[i])

This code will work with NumPy arrays when run in Python, and with any data provider that supports the Python buffer interface when compiled with Cython. As a compiled extension module, it will execute at full C speed, in parallel, with 16 OpenMP threads, as requested. As a normal Python module, it will support all the great Python tools for code analysis, test coverage reporting, debugging, and whatnot. Cython also has direct support for a couple of those by now: profiling (with cProfile) and coverage analysis (with coverage.py) have been around for several releases, for example. But debugging a Python module in the interpreter is obviously still easier than debugging a native extension module, with all the edit-compile-run cycle overhead.

Cython's support for compiling pure Python code combines the best of both worlds: native C speed, and easy Python code development, with full support for all the great Python 3.7 language features, even if you still need your (compiled) code to run in Python 2.7.

More speed

Several improvements make use of the new dict versioning in CPython 3.6. It allows module global names to be looked up much faster, close to the speed of static C globals. Also, the attribute lookup for calls to cpdef methods (C methods with Python wrappers) benefits a lot; it can become up to 4x faster. The changelog lists several other optimisations and improvements.

Many important bug fixes

We've had a hard time following a change in CPython 3.7 that "broke the world", as Mark Shannon put it. It was meant as a mostly internal change on their side that improved the handling of exceptions inside of generators, but it turned out to break all extension modules out there that were built with Cython, and then some. A minimal fix was already released in Cython 0.28.4, but 0.29 brings complete support for the new generator exception stack in CPython 3.7, which allows exceptions raised or handled by Cython implemented generators to interact correctly with CPython's own generators. Upgrading is therefore warmly recommended for better CPython 3.7 support. As usual with Cython, translating your existing code with the new release will make it benefit from the new features, improvements and fixes.

Stackless Python has not been a big focus for Cython development so far, but the developers noticed a problem with Cython modules earlier this year. Normally, they try to keep Stackless binary compatible with CPython, but there are corner cases where this is not possible, and one of these broke the compatibility with Cython compiled modules. Cython 0.29 now contains a fix that makes it play nicely with Stackless 3.x.

A funny bug that is worth noting is a mysteriously disappearing string multiplier in earlier Cython versions. A constant expression like "x" * 5 correctly results in the string "xxxxx", but "x" * 5 + "y" came out as "xy" instead of "xxxxxy". Apparently not a common code construct, since no user ever complained about it.

Long-time users of Cython and NumPy will be happy to hear that Cython's memory views are now API-1.7 clean, which means that they can get rid of the annoying Using deprecated NumPy API warnings in the C compiler output. Simply append the C macro definition ('NPY_NO_DEPRECATED_API', 'NPY_1_7_API_VERSION') to the macro setup of your distutils extensions to make them disappear. Note that this does not apply to the old low-level ndarray[...] syntax, which exposes several deprecated internals of the NumPy C-API that are not easy to replace. Memory views are a fast high-level abstraction that does not rely specifically on NumPy and therefore does not suffer from these API constraints.
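As a sketch, the macro goes into the define_macros list of the distutils/setuptools Extension (the module and file names here are placeholders):

```python
from setuptools import Extension, setup
from Cython.Build import cythonize
import numpy

extensions = [
    Extension(
        "fastmod",              # placeholder module name
        ["fastmod.pyx"],        # placeholder source file
        include_dirs=[numpy.get_include()],
        # Silences the "Using deprecated NumPy API" C compiler warnings:
        define_macros=[("NPY_NO_DEPRECATED_API", "NPY_1_7_API_VERSION")],
    )
]

setup(ext_modules=cythonize(extensions))
```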

Less compilation :)

And finally, as if to make a point that static compilation is a great tool but not always a good idea, we decided to reduce the number of modules that Cython compiles of itself from 13 down to 8, thus keeping 5 more modules normally interpreted by Python. This makes the compiler runs about 5-7% slower, but reduces the packaged size and the installed binary size by about half, thus reducing download times in CI builds and virtualenv creations. Python is a very efficient language when it comes to functionality per line of code, and its byte code is similarly high-level and efficient. Compiled native code is a lot larger and more verbose in comparison, and this can easily make a difference of megabytes of shared libraries versus kilobytes of Python modules.

We therefore repeat our recommendation to focus Cython's usage on the major pain points in your application, on the critical code sections that a profiler has pointed you at. Compiling those, and tuning them at the C level, is what Cython is best at.

REPL|REBL: 3D rotating wireframe cube with MicroPython — Basic 3D model rotation and projection


An ESP8266 is never going to compete with an actual graphics card. It certainly won't produce anything approaching modern games. But it still makes a nice platform for exploring the basics of 3D graphics. In this short tutorial we'll go through the basics of creating a 3D scene and displaying it on an OLED screen using MicroPython.

This kind of mono wireframe 3D reminds me of early ZX Spectrum 3D games, involving shooting one wobbly line at another, and looking at the resulting wobbly lines. It was awesome.

The 3D code here is based on this example for Pygame with some simplifications and the display code modified for working with framebuf.

Setting up

The display used here is a 128x64 OLED which communicates over I2C. We're using the ssd1306 module for OLED displays available in the MicroPython repository to handle this communication for us, and provide a framebuf drawing interface.

Upload the ssd1306.py file to your device's filesystem using the ampy tool (or the WebREPL).

ampy --port /dev/tty.wchusbserial141120 put ssd1306.py

With the ssd1306.py file on your Wemos D1, you should be able to import it like any other Python module. Connect to your device, and then in the REPL enter:

from machine import I2C, Pin
import ssd1306

If the import ssd1306 succeeds, the package is correctly uploaded and you're good to go.

Wire up the OLED display, connecting pins D1 to SCL and D2 to SDA. Provide power from G and 5V.

I2C OLED display wired to Wemos D1

To work with the display, we need to create an I2C object, connecting via pins D1 and D2 (GPIO 5 and 4 respectively). Passing the resulting i2c object into our SSD1306_I2C class, along with the screen dimensions, gets us our interface to draw with.

from machine import I2C, Pin
import ssd1306
import math

i2c = I2C(scl=Pin(5), sda=Pin(4))
display = ssd1306.SSD1306_I2C(128, 64, i2c)

Modelling 3D objects

The simplest way to model objects in 3D space is to store and manipulate their vertices only — for a cube, that means the 8 corners.

To rotate the cube we manipulate these points in 3 dimensional space. To draw the cube, we project these points onto a 2-dimensional plane, to give a set of x,y coordinates, and connect the vertices with our edge lines.

Rotation along each axis and the projection onto a 2D plane is described below.

The full code is available for download here if you want to skip ahead and start experimenting.

3D Rotation

Rotating an object in 3 dimensions is no different than rotating an object on a 2D surface; it's just a matter of perspective.

Take a square drawn on a flat piece of paper, and rotate it 90°. Comparing before and after the rotation, the X and Y coordinates of any given corner change, but the square is still flat on the paper. This is analogous to rotating any 3D object along its Z axis, the axis that comes out of the middle of the object and straight up.

The same applies to rotation along any axis — the coordinates in the axis of rotation remain unchanged, while coordinates along other axes are modified.

# Rotation along X
y' = y*cos(a) - z*sin(a)
z' = y*sin(a) + z*cos(a)
x' = x


# Rotation along Y
z' = z*cos(a) - x*sin(a)
x' = z*sin(a) + x*cos(a)
y' = y

# Rotation along Z
x' = x*cos(a) - y*sin(a)
y' = x*sin(a) + y*cos(a)
z' = z
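These formulas translate directly into Python. As a quick sanity check (plain functions, not the Point3D class used later), rotating the point (1, 0, 0) by 90° about the Z axis should land on (0, 1, 0):

```python
import math

def rotate_z(x, y, z, deg):
    """Rotate the point (x, y, z) about the Z axis by deg degrees."""
    rad = math.radians(deg)
    cos_a, sin_a = math.cos(rad), math.sin(rad)
    # z is the axis of rotation, so it is returned unchanged
    return (x * cos_a - y * sin_a,
            x * sin_a + y * cos_a,
            z)

# Rotating (1, 0, 0) by 90 degrees about Z
x, y, z = rotate_z(1, 0, 0, 90)
```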

The equivalent Python code for rotation along the X axis is shown below. It maps directly to the math already described. Note that when rotating in the X dimension, the x coordinate is returned unchanged, and that we need to convert from degrees to radians (we could of course write this function to accept radians instead).

def rotateX(self, x, y, z, deg):
    """ Rotates this point around the X axis the given number of
        degrees. Returns the x, y, z coordinates of the result. """
    rad = deg * math.pi / 180
    cosa = math.cos(rad)
    sina = math.sin(rad)
    new_y = y * cosa - z * sina
    new_z = y * sina + z * cosa
    return x, new_y, new_z

Projection

Since we're displaying our 3D objects on a 2D surface we need to be able to convert, or project, the 3D coordinates onto 2D. The approach we are using here is perspective projection.

If you imagine an object moving away from you, it gradually shrinks in size until it disappears into the distance. If it is directly in front of you, the edges of the object will gradually move towards the middle as it recedes. Similarly, a large square transparent object will have the rear edges appear 'within' the bounds of the front edges. This is perspective.

To recreate this in our 2D projection, we need to move points towards the middle of our screen the further away from our 'viewer' they are. Our x & y coordinates are zeroed around the center of the screen (x < 0 means left of the center point), so dividing the x & y coordinates by some function of z will move them towards the middle, making them appear 'further away'.

The specific formula we're using is shown below. We take into account the field of view (how much of an area the viewer can see), the viewer distance, and the screen height and width to project onto our framebuf.

x' = x * fov / (z + viewer_distance) + screen_width / 2
y' = -y * fov / (z + viewer_distance) + screen_height / 2
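As a standalone sketch of the formula (the parameter values are just illustrative): with fov=64, viewer_distance=4 and a 128x64 screen, the origin projects to the screen centre, and moving an off-centre point further away (larger z) pulls its projection towards the middle:

```python
def project(x, y, z, fov=64, viewer_distance=4,
            screen_width=128, screen_height=64):
    """Perspective-project a 3D point onto 2D screen coordinates."""
    factor = fov / (viewer_distance + z)
    return (x * factor + screen_width / 2,
            -y * factor + screen_height / 2)

center = project(0, 0, 0)   # the origin lands at the screen centre
near = project(1, 0, 0)     # off-centre point, close to the viewer
far = project(1, 0, 4)      # same point, further away: closer to centre
```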

Point3D code

The complete code for a single Point3D is shown below, containing the methods for rotation around all 3 axes, and for projection onto a 2D plane. Each of these methods returns a new Point3D object, allowing us to chain multiple transformations and to avoid altering the original points we define.

class Point3D:
    def __init__(self, x=0, y=0, z=0):
        self.x, self.y, self.z = x, y, z

    def rotateX(self, angle):
        """ Rotates this point around the X axis the given number of degrees. """
        rad = angle * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        y = self.y * cosa - self.z * sina
        z = self.y * sina + self.z * cosa
        return Point3D(self.x, y, z)

    def rotateY(self, angle):
        """ Rotates this point around the Y axis the given number of degrees. """
        rad = angle * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        z = self.z * cosa - self.x * sina
        x = self.z * sina + self.x * cosa
        return Point3D(x, self.y, z)

    def rotateZ(self, angle):
        """ Rotates this point around the Z axis the given number of degrees. """
        rad = angle * math.pi / 180
        cosa = math.cos(rad)
        sina = math.sin(rad)
        x = self.x * cosa - self.y * sina
        y = self.x * sina + self.y * cosa
        return Point3D(x, y, self.z)

    def project(self, win_width, win_height, fov, viewer_distance):
        """ Transforms this 3D point to 2D using a perspective projection. """
        factor = fov / (viewer_distance + self.z)
        x = self.x * factor + win_width / 2
        y = -self.y * factor + win_height / 2
        return Point3D(x, y, self.z)

3D Simulation

We can now create a scene by arranging Point3D objects in 3-dimensional space. To create a cube, rather than 8 discrete points, we will connect our vertices to their adjacent vertices after projecting them onto our 2D surface.

Vertices

The vertices for a cube are shown below. Our cube is centered around 0 in all 3 axes, and rotates around this centre.

self.vertices = [
    Point3D(-1, 1, -1),
    Point3D(1, 1, -1),
    Point3D(1, -1, -1),
    Point3D(-1, -1, -1),
    Point3D(-1, 1, 1),
    Point3D(1, 1, 1),
    Point3D(1, -1, 1),
    Point3D(-1, -1, 1)
]

Polygons or Lines

As we're drawing a wireframe cube, we actually have a couple of options — polygons or lines.

The cube has 6 faces, which means 6 polygons. Drawing a single polygon requires 4 lines, making a total of 24 lines for the wireframe cube drawn as polygons. That is more lines than needed, because each edge is shared by two faces, so every edge gets drawn twice.

In contrast, drawing only the lines that are actually required, a wireframe cube can be drawn using only 12 lines, half as many.

For a filled cube, polygons would make sense, but here we're going to use the lines only, which we call edges. This is an array of indices into our vertices list.

self.edges = [
    # Back
    (0, 1), (1, 2), (2, 3), (3, 0),
    # Front
    (5, 4), (4, 7), (7, 6), (6, 5),
    # Front-to-back
    (0, 4), (1, 5), (2, 6), (3, 7),
]

On each iteration we apply the rotational transformations to each point, then project it onto our 2D surface.

r = v.rotateX(angleX).rotateY(angleY).rotateZ(angleZ)
# Transform the point from 3D to 2D
p = r.project(*self.projection)
# Put the point in the list of transformed vertices
t.append(p)

Then we iterate our list of edges, and retrieve the relevant transformed vertices from our list t. A line is then drawn between the x, y coordinates of two points making up the edge.

for e in self.edges:
    display.line(*to_int(t[e[0]].x, t[e[0]].y, t[e[1]].x, t[e[1]].y, 1))

to_int is just a simple helper function that converts its float arguments into a list of ints, to make updating the OLED display simpler (you can't draw half a pixel).

def to_int(*args):
    return [int(v) for v in args]

The complete simulation code is given below.

class Simulation:
    def __init__(self, width=128, height=64, fov=64, distance=4,
                 rotateX=5, rotateY=5, rotateZ=5):
        self.vertices = [
            Point3D(-1, 1, -1),
            Point3D(1, 1, -1),
            Point3D(1, -1, -1),
            Point3D(-1, -1, -1),
            Point3D(-1, 1, 1),
            Point3D(1, 1, 1),
            Point3D(1, -1, 1),
            Point3D(-1, -1, 1)
        ]
        # Define the edges, the numbers are indices to the vertices above.
        self.edges = [
            # Back
            (0, 1), (1, 2), (2, 3), (3, 0),
            # Front
            (5, 4), (4, 7), (7, 6), (6, 5),
            # Front-to-back
            (0, 4), (1, 5), (2, 6), (3, 7),
        ]
        # Dimensions
        self.projection = [width, height, fov, distance]
        # Rotational speeds
        self.rotateX = rotateX
        self.rotateY = rotateY
        self.rotateZ = rotateZ

    def run(self):
        # Starting angle (unrotated in any dimension)
        angleX, angleY, angleZ = 0, 0, 0
        while 1:
            # It will hold transformed vertices.
            t = []
            for v in self.vertices:
                # Rotate the point around X axis, then around Y axis,
                # and finally around Z axis.
                r = v.rotateX(angleX).rotateY(angleY).rotateZ(angleZ)
                # Transform the point from 3D to 2D
                p = r.project(*self.projection)
                # Put the point in the list of transformed vertices
                t.append(p)
            display.fill(0)
            for e in self.edges:
                display.line(*to_int(t[e[0]].x, t[e[0]].y, t[e[1]].x, t[e[1]].y, 1))
            display.show()
            # Continue the rotation
            angleX += self.rotateX
            angleY += self.rotateY
            angleZ += self.rotateZ

Running a simulation

To display our cube we need to create a Simulation object, and then call .run() to start it running.

s = Simulation()
s.run()

Simulation with default parameters

You can pass in different values for rotateX, rotateY, rotateZ to alter the speed of rotation. Set a negative value to rotate in reverse.

s = Simulation(rotateX=-5, rotateY=0, rotateZ=10)  # example values
s.run()

The fov and distance parameters are set at sensible values for the 128x64 OLED by default (based on testing). So you don't need to change these, but you can.

s = Simulation(fov=32, distance=8)
s.run()

Simulation with fov=32, distance=8

The width and height are defined by the display, so you won't want to change these unless you're using a different display output.

PyBites: PyBites Twitter Digest - Issue 30, 2018


Python 3 is the way!

Must Watch PyCon Videos

Submitted by @Erik

Learn about the OSI Model! Something every tech head should know!

Another quality TalkPy episode. Ned Batchelder this time round!

AI that writes music, very interesting. Ha!

Speaking of music, this looks promising!

Why we love the Python Community

A Python program that plays Super Mario Bros!

A walkthrough of unit testing in Python

It's all about the deliberate practice! Time for another Code Challenge right?

I always forget about this!

Python 3 only Matplotlib. It's all happening!

Cryptocurrency Predicting Neural Network in Python

OpenCV, OCR and Text Recognition

We have the power!!


>>> from pybites import Bob, Julian

Keep Calm and Code in Python!

Codementor: PYTHON HELP

Hello, I'm learning Python currently and I needed some help solving this question, so I'd really appreciate some help! :/ A company wants to transmit data over the telephone, but it is concerned...

Vasudev Ram: How many ways can you substring a string? Part 2


By Vasudev Ram


Twine image attribution

Hi readers,

In my last post, How many ways can you substring a string? Part 1, I said that there can be other ways of doing it, and that some enhancements were possible. This post (Part 2) is about that.

Here is another algorithm to find all substrings of a given string:

Let s be the input string.
Let n be the length of s.
Find and yield all substrings of s of length 1.
Find and yield all substrings of s of length 2.
...
Find and yield all substrings of s of length n.

Even without doing any formal analysis of the algorithm, we can intuitively see that it is correct, because it accounts for all possible cases (except for the empty string, but adding that is trivial).

[ BTW, what about uses for this sort of program? Although I just wrote it for fun, one possible use could be in word games like Scrabble. ]

The code for this new algorithm is in program all_substrings2.py below.
"""
all_substrings2.py
Function and program to find all substrings of a given string.
Author: Vasudev Ram
Copyright 2018 Vasudev Ram
Web site: https://vasudevram.github.io
Blog: https://jugad2.blogspot.com
Twitter: https://mobile.twitter.com/vasudevram
Product store: https://gumroad.com/vasudevram
"""

from __future__ import print_function
import sys
from error_exit import error_exit
from debug1 import debug1

def usage():
    message_lines = [
        "Usage: python {} a_string".format(sa[0]),
        "Print all substrings of a_string.",
        "",
    ]
    sys.stderr.write("\n".join(message_lines))

def all_substrings2(s):
    """
    Generator function that yields all the substrings
    of a given string s.
    Algorithm used:
    1. len_s = len(s)
    2. if len_s == 0, return ""
    3. (else len_s is > 0):
       for substr_len in 1 to len_s:
           find all substrings of s that are of length substr_len
           yield each such substring
    Expected output for some strings:
    For "a":
        "a"
    For "ab":
        "a"
        "b"
        "ab"
    For "abc":
        "a"
        "b"
        "c"
        "ab"
        "bc"
        "abc"
    For "abcd":
        "a"
        "b"
        "c"
        "d"
        "ab"
        "bc"
        "cd"
        "abc"
        "bcd"
        "abcd"
    """
    len_s = len(s)
    substr_len = 1
    while substr_len <= len_s:
        start = 0
        end = start + substr_len
        while end <= len_s:
            debug1("s[{}:{}] = {}".format(
                start, end, s[start:end]))
            yield s[start:end]
            start += 1
            end = start + substr_len
        substr_len += 1

def main():
    if lsa != 2:
        usage()
        error_exit("\nError: Exactly one argument must be given.\n")

    if sa[1] == "":
        print("")
        sys.exit(0)

    for substring in all_substrings2(sa[1]):
        print(substring)

sa = sys.argv
lsa = len(sa)

if __name__ == "__main__":
    main()
BTW, I added the empty string as the last item in the message_lines list (in the usage() function), as a small trick, to avoid having to explicitly add an extra newline after the joined string in the write() call.

Here are some runs of the program, with outputs, using Python 2.7 on Linux:

(pyo, in the commands below, is a shell alias I created for 'python -O', to disable debugging output. And a*2*y expands to all_substrings2.py, since no other filename in my current directory matches that wildcard pattern. It's a common Unix shortcut to save typing. In bash, typing the pattern and pressing Tab expands it to the full filename, but the expansion also happens without Tab if you just type the command and hit Enter. You do have to know for sure, up front, that the wildcard expands to only one filename (if that's what you want), or you can get wrong results: e.g. if the wildcard expands to 3 filenames, and your program expects command-line arguments, the 2nd and 3rd filenames will be treated as command-line arguments for the program named by the 1st filename. This will likely not be what you want, and may create problems.)

Run it without any arguments:
$ pyo a*2*y
Usage: python all_substrings2.py a_string
Print all substrings of a_string.

Error: Exactly one argument must be given.
Run a few times with some input strings of incrementally longer lengths:
$ pyo a*2*y a
a
$ pyo a*2*y ab
a
b
ab
$ pyo a*2*y abc
a
b
c
ab
bc
abc
$ pyo a*2*y abcd
a
b
c
d
ab
bc
cd
abc
bcd
abcd
Count the number of substrings in the above run for string abcd:
$ pyo a*2*y abcd | wc -l
10
$ pyo a*2*y abcde
a
b
c
d
e
ab
bc
cd
de
abc
bcd
cde
abcd
bcde
abcde
Count the number of substrings in the above run for string abcde:
$ pyo a*2*y abcde | wc -l
15
$ pyo a*2*y abcdef
a
b
c
d
e
f
ab
bc
cd
de
ef
abc
bcd
cde
def
abcd
bcde
cdef
abcde
bcdef
abcdef
Count the number of substrings in the above run for string abcdef:
$ pyo a*2*y abcdef | wc
21 21 77
Now a few more with only the count:
$ pyo a*2*y abcdefg | wc
28 28 112
$ pyo a*2*y abcdefgh | wc
36 36 156
$ pyo a*2*y abcdefghi | wc
45 45 210
Notice a pattern?

The count of substrings for each succeeding run (which has one more character in its input string than the preceding run) is equal to the sum of the count for the preceding run and the length of the input string for the succeeding run; e.g. 10 + 5 = 15, 15 + 6 = 21, 21 + 7 = 28, etc. In other words, the count for a string of length n is the sum of the first n natural numbers.

There is a well-known formula for that sum: n * (n + 1) / 2.

There is a story (maybe apocryphal) that the famous mathematician Gauss was posed this problem - to find the sum of the numbers from 1 to 100 - by his teacher, after he misbehaved in class. To the surprise of the teacher, he gave the answer in seconds. From the Wikipedia article about Gauss:

[ Gauss's presumed method was to realize that pairwise addition of terms from opposite ends of the list yielded identical intermediate sums: 1 + 100 = 101, 2 + 99 = 101, 3 + 98 = 101, and so on, for a total sum of 50 × 101 = 5050. ]

From this we can see that the sum of this sequence satisfies the formula n * (n + 1) / 2, where n = 100, i.e. 100 * (100 + 1) / 2 = 50 * 101.
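The formula is easy to check against a brute-force count of substrings (a small sketch using Python's slicing indices directly, not the program above):

```python
def count_substrings(s):
    """Count all non-empty substrings of s, duplicates included."""
    n = len(s)
    # One substring per (start, end) pair with start < end
    return sum(1 for start in range(n)
                 for end in range(start + 1, n + 1))

# The count matches n * (n + 1) / 2 for every length tried
for n in range(1, 10):
    assert count_substrings("abcdefghi"[:n]) == n * (n + 1) // 2
```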

(Wikipedia says that Gauss "is ranked among history's most influential mathematicians".)

We can also run the all_substrings2.py program multiple times with different inputs, using a for loop in the shell:
$ for s in a ab abc abcd
> do
> echo All substrings of $s:
> pyo al*2*py $s
> done

All substrings of a:
a
All substrings of ab:
a
b
ab
All substrings of abc:
a
b
c
ab
bc
abc
All substrings of abcd:
a
b
c
d
ab
bc
cd
abc
bcd
abcd
Some remarks on the program versions shown (i.e. all_substrings.py and all_substrings2.py, in Parts 1 and 2 respectively):

Both versions use a generator function, to lazily yield each substring on demand. Either version can easily be changed to use a list instead of a generator (and the basic algorithm used will not need to change in either case). To do that, we delete the yield statement, collect all the generated substrings in a new list, and at the end return that list to the caller. The caller's code will not need to change, although we will now be iterating over the list returned from the function, not over the values yielded by the generator. Some of the pros and cons of the two approaches (generator vs. list) are:

- the list approach has to create and store all the substrings first, before it can return them. So it uses memory proportional to the sum of the sizes of all the substrings generated, with some overhead due to Python's dynamic nature (but that per-string overhead exists for the generator approach too). (See this post: Exploring sizes of data types in Python.) The list approach will need a bit of overhead for the list too. But the generator approach needs to handle only one substring at a time, before yielding it to the caller, and does not use a list. So it will potentially use much less memory, particularly for larger input strings. The generator approach may even be faster than the list version, since repeated memory (re)allocation for the list (as it expands) has some overhead. But that is speculation on my part as of now. To be sure of it, one would have to do some analysis and/or some speed measurements of relevant test programs.

- the list approach gives you the complete list of substrings (after the function that generates them returns). So, in the caller, if you want to do multiple processing passes over them, you can. But the generator approach gives you each substring immediately as it is generated, you have to process it, and then it is gone. So you can only do one processing pass over the substrings generated. In other words, the generator's output is sequential-access, forward-only, one-item-at-a-time-only, and single-pass-only. (Unless you store all the yielded substrings, but then that becomes the same as the list approach.)
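The list conversion described above can be sketched like this (a compact list-returning variant of the same substrings-by-length algorithm, written with range loops rather than the while loops of the program above):

```python
def all_substrings2_list(s):
    """Return a list of all substrings of s (list variant of the generator)."""
    result = []
    # Substrings of length 1, then length 2, ... up to len(s)
    for substr_len in range(1, len(s) + 1):
        for start in range(len(s) - substr_len + 1):
            result.append(s[start:start + substr_len])
    return result

subs = all_substrings2_list("abc")
```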

Another enhancement that can be useful is to output only the unique substrings. As I showed in Part 1, if there are any repeated characters in the input string, there can be duplicate substrings in the output. There are two obvious ways of getting only unique substrings:

1) By doing it internal to the program, using a Python dict. All we have to do is add each substring as a key (with the corresponding value being anything, say None) to a dict, as and when the substring is generated. The substrings in the dict are then guaranteed to be unique. At the end, we just print the substrings from the dict instead of from the list. If we want to print the substrings in the same order they were generated, we can use an OrderedDict.

See: Python 2 OrderedDict
and: Python 3 OrderedDict

(Note: In Python 3.7, OrderedDict may no longer be needed, because dicts are defined as keeping insertion order.)
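Approach 1) might be sketched like this (assuming the same shortest-first generation as before; a plain dict keeps insertion order on Python 3.7+, so substitute collections.OrderedDict on older versions):

```python
def unique_substrings(s):
    """Return the unique substrings of s, in first-generated order."""
    seen = {}
    for length in range(1, len(s) + 1):
        for start in range(len(s) - length + 1):
            # Duplicate substrings collapse onto the same key;
            # the value (None) is irrelevant.
            seen[s[start:start + length]] = None
    return list(seen)

print(unique_substrings("aabbb"))
```

For "aabbb" this yields 11 unique substrings, the same count the sort | uniq pipeline below produces.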

2) By piping the output of the program (which is all the generated substrings, one per line) to the Unix uniq command, whose purpose is to select only unique items from its input. But for that, we have to sort the list first, since uniq only removes adjacent duplicate lines, so its input must be sorted to work properly. We can do that with pipelines like the following:

First, without sort and uniq; there are duplicates:

$ pyo all_substrings2.py aabbb | nl -ba
1 a
2 a
3 b
4 b
5 b
6 aa
7 ab
8 bb
9 bb
10 aab
11 abb
12 bbb
13 aabb
14 abbb
15 aabbb

Then with sort and uniq; now there are no duplicates:

$ pyo all_substrings2.py aabbb | sort | uniq | nl -ba
1 a
2 aa
3 aab
4 aabb
5 aabbb
6 ab
7 abb
8 abbb
9 b
10 bb
11 bbb

The man pages for sort and uniq are here:

sort
uniq

That's it for now. I have a few more points which I may want to add; if I decide to do so, I'll do them in a Part 3 post.

The image at the top of the post is of spools of twine (a kind of string) from Wikipedia.

- Enjoy.


- Vasudev Ram - Online Python training and consulting

I conduct online courses on Python programming, Unix/Linux (commands and shell scripting) and SQL programming and database design, with personal coaching sessions.

Contact me for details of course content, terms and schedule.

DPD: Digital Publishing for Ebooks and Downloads.

Hit the ground running with my vi quickstart tutorial. I wrote it at the request of two Windows system administrator friends who were given additional charge of some Unix systems. They later told me that it helped them to quickly start using vi to edit text files on Unix.

Check out WP Engine, powerful WordPress hosting.

Creating online products for sale? Check out ConvertKit, email marketing for online creators.

Teachable: feature-packed course creation platform, with unlimited video, courses and students.

Track Conversions and Monitor Click Fraud with Improvely.

Posts about: Python * DLang * xtopdf

My ActiveState Code recipes


Podcast.__init__: Django, Channels, And The Asynchronous Web with Andrew Godwin


Summary

Once upon a time the web was a simple place with one main protocol and a predictable sequence of request/response interactions with backend applications. This is the era when Django began, but in the intervening years there has been an explosion of complexity with new asynchronous protocols and single page Javascript applications. To help bridge the gap and bring the most popular Python web framework into the modern age Andrew Godwin created Channels. In this episode he explains how the first version of the asynchronous layer for Django applications was created, how it has changed in the jump to version 2, and where it will go in the future. Along the way he also discusses the challenges of async development, his work on designing ASGI as the spiritual successor to WSGI, and how you can start using all of this in your own projects today.

Preface

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to scale up. Go to podcastinit.com/linode to get a $20 credit and launch a new server in under a minute.
  • Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com
  • To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
  • Join the community in the new Zulip chat workspace at podcastinit.com/chat
  • Your host as usual is Tobias Macey and today I’m interviewing Andrew Godwin about Django Channels 2.x and the ASGI specification for modern, asynchronous web protocols

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start with an overview of the problem that Channels is aiming to solve?
  • Asynchronous frameworks have existed in Python for a long time. What are the tradeoffs in those frameworks that would lead someone to prefer the combination of Django and Channels?
  • For someone who is familiar with traditional Django or working on an existing application, what are the steps involved in integrating Channels?
  • Channels is a project that you have been working on for a significant amount of time and which you recently re-architected. What were the shortcomings in the 1.x release that necessitated such a major rewrite?
    • How is the current system architected?
  • What have you found to be the most challenging or confusing aspects of managing asynchronous web protocols both as an author of Channels/ASGI and someone building on top of them?
    • While reading through the documentation there were mentions of the synchronous nature of the Django ORM. What are your thoughts on asynchronous database access and how important that is for future versions of Django and Channels?
  • As part of your implementation of Channels 2.x you introduced a new protocol for asynchronous web applications in Python in the form of ASGI. How does this differ from the WSGI standard and what was your process for developing this specification?
    • What are your hopes for what the Python community will do with ASGI?
  • What are your plans for the future of Channels?
  • What are some of the most interesting or unexpected uses of Channels and/or ASGI?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA


PyBites: Career Development for Programmers


What makes you excel in your career? Becoming an expert in x, y, z? Sure, you need to learn technical skills, quite a lot of them. However, there is a lot more to it. If you want to succeed in your job, business and life, you want to build a portfolio, share your learning, become a reader and a good writer, and last but not least stick to daily exercising. I published this article on my blog 2 years ago and find that a lot of it is still relevant today and will serve our community. I also added an update towards the end.

Success is a matter of choice, not chance ~ Deepak Mehra

How to improve your programming career:

  • The first thing obviously is to code. A lot. I was reminded of this again reading this great post this morning:

    wakeUp();
    
    workOut();
    
    var currentHour = new Date().getHours();
    
    var bedtimeHour = 22;
    
    while(currentHour < bedtimeHour) {
      code();
      currentHour = new Date().getHours();
    };
    

    Source: myLife.js. I bought an ABC t-shirt to remind myself of this as well (OK, also because I like Baldwin's sales speech in Glengarry Glen Ross lol ... Always Be Closing = Always Be Coding!). Whether the 10,000-Hour Rule is true or not, with practice comes skill, and generally the more (challenging) practice, the more skill.

  • Related: build side projects. For example I was learning some new Python tricks and idioms the other day, but it was not until using them that this stuff really sunk in. I needed nested defaultdicts and * unpacking to a named tuple, and I went to StackOverflow a few times more. Books are a good foundation, but skills come from getting your hands dirty. It's then that exceptions happen and you need creativity to solve emerging problems. Yesterday I wrote a Twitter bot. I had done this before, yet I still learned new things: using the requests and logging modules, writing my own exceptions, hiding credentials in a GitHub repo, to name a few. Deploying a solution to a remote server (a different environment) causes challenges too. So practice, practice, practice! Aim every day to improve on yesterday's version of yourself.

  • Read the passionate programmer. I made some notes here. The Developer's code is also a great read. Books to reread from time to time.

  • Share your learning. Start a blog. It has a myriad of advantages. Technical blogging is a great resource. I can also recommend Show your work.

  • Become a time management freak. One thing John Sonmez recommends in Soft Skills is keeping a log of time spent. We lose a lot of time on trivia, especially in this highly interruptive social media society. I probably will give this one a try ...

  • Muscles need stimulation to grow; likewise, a software developer needs constant challenge to stretch to new levels. The aforementioned 10,000-Hour Rule is only as good as the new challenges you find for yourself!

    Your comfort zone is a place where you keep yourself in a self-illusion and nothing can grow there but your potentiality can grow only when you can think and grow out of that zone. - Rashedur Ryan Rahman

  • Use idle time for rest, but it never hurts to listen to podcasts or do some video courses. Any success starts with a thought, and feeding your mind with good ideas can have surprising effects. I devour podcasts on my walks.

  • Collaborate with other developers (and do code reviews). It is not until you have to explain your code, or make it maintainable/ extensible, that you really start thinking about the underlying design. When you manage to explain your skill to another person you generally master it. Blogging is a great tool for this.

  • Become a better writer. Apart from being an essential skill in many areas of life, clear writing has a lot in common with good coding. And read! Good reading is a precursor to good writing. Similarly, reading a lot of source code makes you a better coder (I picked up this advice in the fascinating book Coders at work).

  • Try new things. I have been using Python a lot which I love, but there are a lot of other good languages out there. I certified in Java which has very good parts. I'd like to learn more Ruby. In terms of concepts Machine Learning is a hot topic, worth checking out (as well as NLP). Or do you code solely in PHP? Try Django, or full stack JS. You get the point. Even within a language you already use you can always learn more, for example to learn more Python: collections, iterators, generators, data science modules like NumPy and Pandas, functional programming constructs, Unicode, advanced OOP features, etc. The important thing is to never become complacent.

  • Jump on things others don't like. You can gain specialization, persistence skills and relatively high reward. Take data analysis for instance: everybody loves the analytics part, however data is usually not delivered in a ready-to-use format. Data cleaning is not the sexiest job but it is what makes the fun stuff possible. You see this in Big Data as well: everybody talks about nice tools like Spark/ Hive/ Pig, but what about getting a good copy of the data loaded in? ETL (Extract, Transform and Load) is a non-trivial part of the total effort. Same story with Excel.

  • Don't ignore the marketing side. We all love the tech side, but sometimes we have to step out of this mindset and wonder who the customer is and what the need is, the problem we're trying to solve. Sometimes we build stuff before checking the audience. Understanding the market is a fundamental skill. And more generally: become effective (solve the right problem) before focusing on efficiency per se (becoming proficient at a skill).

  • Have an accountability partner. I notice that when you are held accountable by a friend or colleague you perform much better. And the joint learning is fun and fruitful. Update: read Darren Hardy's The Compound Effect where he states:

    To up your chances of success, get a success buddy, someone who’ll keep you accountable as you cement your new habit while you return the favor.

  • Last but not least: exercise! Daily exercise instantly pays for itself: you feel better, you stay in good health (your most important asset), and you sharpen your mind (= better code). It's like garbage collection: you can only hold that many things at once, and stress increases with time, so you need to clear your mind periodically. Besides, oftentimes it's when we leave our desk that we actually solve a problem we're stuck on. Getting in shape can be overwhelming; this book is a good start.


Remember you are in charge of your career, it's not up to your boss or even family and close friends to tell you what to do. You have to seek opportunities, and when they present themselves, grab them with both hands.

Secondly, technical skills alone only get you so far. Often a decently skilled and personable guy outperforms a super expert who lacks people skills. In that regard I highly recommend Soft Skills which deals with a lot of this. I am also reading John Sonmez' second book - The Complete Software Developer’s Career Guide - which he is publishing on his blog. Update: this book went to press since and is on to-read list.

What are you consistently doing to get ahead in your career? Share your thoughts in the comments below ...


Update 23rd of Sept 2018:

Difficult roads often lead to beautiful destinations. The best is yet to come ~ Zig Ziglar

  • I missed the words habit and goals above. Both are paramount to achieving any form of success. Goals keep you focused; you should regularly measure your performance and make the necessary adjustments to your plan to stay on track. Secondly, I think any success strongly depends on your habits. I highly recommend reading Charles Duhigg's The Power of Habit.

  • Since this article appeared I made an internal job transfer and can confirm that getting familiar with a new large and complex code base can be daunting. It was uncomfortable at times but after biting through (I did not think Bites of Py would help me verbally one day lol), I realized: 1. I have read more code than ever before, 2. by looking at more experienced developers, I significantly improved how I write code and think about design, and 3. getting into uncharted waters grows your confidence. So again: seek new opportunities, even if they seem scary at first. I like what Tim Ferriss says in this context:

    A person's success in life can usually be measured by the number of uncomfortable conversations he or she is willing to have.

  • This article was written exactly 6 months before we founded PyBites (scary)! I feel that a lot of these points were central to getting where we are today. I am also proud and happy that we have been able to provide the tools to help developers write more code, share their progress and guest post to our blog.

  • Leaders are readers, and to become successful you need a constant stream of inspiration, ideas and knowledge. We made a Django app to keep track of your reading: PyBites My Reading List. Looking for titles? Check out my reading list which has some other pointers as well. Never stop learning!

    screenshot PyBites Reading App


Keep Calm and Code in Python!

-- Bob

Mike Driscoll: PyDev of the Week: Hillel Wayne


This week we welcome Hillel Wayne (@Hillelogram) as our PyDev of the Week! Hillel is the author of “Learn TLA+” and is currently writing “Practical TLA+” with Apress. You should check out his website / blog as it is a good place to learn more about him. Hillel also recently spoke at PyCon US on testing. Let’s take a few moments to chat with Hillel!

Can you tell us a little about yourself (hobbies, education, etc):

I was planning on being a physicist for the longest time until my fourth year of college, when I suddenly switched to wanting to be a programmer. I spent a while as a fullstack in the Bay Area and a backend dev in Chicago, where I discovered a deep love of formal methods, or the practice of “mathematically” designing and building software. If you’ve read The Coming Software Apocalypse, that’s a really good introduction to the why and the what. I now do consulting and workshops, helping people with the how; these tools are way too powerful and useful to stay niche.

Beyond formal methods, my interests in tech are as follows:

  • Software Safety: where do bugs actually come from, and how we do make them less likely?
  • Empirical Software Engineering: what do we actually know is true in software engineering, and what do we just think is true?
  • Software History: how did we get where we are, and what can we learn from the past?
  • Weird and interesting niche ideas, languages, techniques, etc.

Outside of tech, I do a lot of juggling and cooking. I’m a super avid confectioner and chocolatier. There’s a place in Chicago that sells 10-pound bars of chocolate for 50 bucks and I usually go through about four bars a year.

Why did you start using Python?

Around 2012 or so I suddenly realized that I didn’t want to do physics grad school and had to find some way of making money in the real world. Since the part of physics research I actually enjoyed was programming the hardware, I decided to look for programming jobs. But lab tech was mostly in Matlab and C so I picked up Python to diversify my skills and was instantly hooked. Not having to worry about memory or string problems was just so nice! It quickly became my primary hacking language, especially for learning new programming techniques. For example, I learned TDD by implementing Theseus and the Minotaur in Python, and for a while I used a Django app to help manage my depression.

I’m doing a lot less programming in general these days, but I still reach for Python whenever I need to test a new idea, or communicate an idea with a snippet of code. Even people who don’t know Python can usually get the hang of what’s going on in a Python snippet.

What other programming languages do you know and which is your favorite?

My last couple of jobs were as a Rails dev, so I know the tool but not so much the Ruby language itself. Besides that, I tend to pick new programming languages for how much they mess with my head. I’m in a Prolog class right now which I’m enjoying, so shoutouts to Annie Ogborn for running it. I also have done a lot in J and, funnily enough, AutoHotKey, which has been by far the most useful in my day to day life (except Python).

My specialty, though, is specification languages, the notations we use to describe our overall systems, as opposed to implement them. I’ve done a lot in both Alloy and TLA+ and am in the middle of writing a book on the latter. TLA+ is what I’m both the best at and the happiest working in, although it makes me feel like an idiot whenever I break a spec.

What projects are you working on now?

I’m currently writing a book on TLA+ and working on a 3-day workshop for companies. After that, one idea that’s been bouncing around in my head is doing a short piece called “Python for Science Coders”. It would be something aimed at teaching my grad school friends who use Python how to use it better. Things like “this is what an IDE is, here’s how it makes coding easier”, “copy/paste means mistakes, try using functions instead”, “here’s how to check you got the right result and didn’t make a typo”, etc. I think there’s a lot of opportunity to make our science code better and, by extension, our science research safer.

Which Python libraries are your favorite (core or 3rd party)?

Third party’s gotta be hypothesis, which is a “property based testing” library for python. Instead of defining tests as “check this input gives us this output”, you say “outputs should have these properties, now generate 100 inputs and check them all.” It takes a bit more thinking of what the properties of your code are, but it can catch really hairy bugs. I did a recent talk on it, actually, about how it can be used to generate integration tests for you.
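hypothesis handles input generation for you and shrinks failing cases to minimal examples; the core idea it automates can be sketched by hand like this (a simplified illustration of the concept only, not hypothesis's actual API):

```python
import random

def my_sort(lst):
    """The code under test -- here just a wrapper around sorted()."""
    return sorted(lst)

def check_sort_properties(trials=100):
    """Property-based check: generate many random inputs and assert
    properties that must hold for every one of them."""
    for _ in range(trials):
        lst = [random.randint(-1000, 1000)
               for _ in range(random.randint(0, 50))]
        out = my_sort(lst)
        # Property 1: the output is ordered.
        assert all(a <= b for a, b in zip(out, out[1:]))
        # Property 2: the output is a permutation of the input.
        assert sorted(out) == sorted(lst)
    return True

check_sort_properties()
```

The thinking shift is exactly as described above: you state properties of the output ("ordered", "same elements") rather than pinning down one input/output pair.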

Core library is abc. In particular, I’m kind of obsessed with __subclasshook__. In most languages things declare their own types: if Foo inherits from Bar, then Foo is a Bar. But with ABCs you can flip that around and have abstract types declare what other classes count as that type. For example, you can have an ABC say “every class whose name is a palindrome counts as a TimeFarbler.” I have no idea how this is useful in practice, but the idea just delights me.
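The palindrome idea Hillel describes might be sketched like this (the class names are purely illustrative):

```python
from abc import ABCMeta

class TimeFarbler(metaclass=ABCMeta):
    # The abstract type decides which classes count as it:
    # any class whose name is a palindrome is "claimed".
    @classmethod
    def __subclasshook__(cls, C):
        name = C.__name__.lower()
        if name == name[::-1]:   # palindromic class name?
            return True
        return NotImplemented    # fall back to the normal check

class Abba:       # "abba" is a palindrome
    pass

class Farble:     # "farble" is not
    pass

print(issubclass(Abba, TimeFarbler))    # True
print(issubclass(Farble, TimeFarbler))  # False
```

Returning NotImplemented (rather than False) for the negative case lets ordinary inheritance and abc registration still apply.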

I see you are the author of a book on TLA+. Can you tell us what TLA+ is?

There are (very roughly) two kinds of errors in software: implementation errors and design errors. Implementation errors are when you make a mistake translating your design to code, such as writing return l.sort() instead of return sorted(l). Design errors are when your code faithfully follows your design, but the design doesn’t actually do what you thought it did. One common example is missing an edge case: what if someone clicks the “submit order” button twice? What happens if an item in the cart goes out of stock? How much of a discount should I get if I have both a “$5 off” and a “10% off” coupon?

Design errors are often more dangerous and more subtle than implementation errors. They are also much harder to check, and almost all of our current tools only help us with implementation errors. To test our designs we use a specification language like TLA+. We describe our system and the properties it should have, and then we use the model checker to explore all possible evolutions of that system and confirm it does what we expect. For example, if I was creating a message processor, I might write something like

\A m \in Message: m \in Sent ~> m \in Received

To say that every message that is sent is eventually received (for our definitions of “message”, “sent”, and “received”). I could then check that that property actually holds with respect to my existing system.

TLA+ has a strong track record in industry. Amazon used it to fix S3 and Microsoft used it to find bugs in the Xbox. At a less “international megacorp” level, I’ve used it to fix business logic and Murat Demirbas uses it to teach students distributed systems.

What did you learn from writing a book and what would you do differently if you could start over?

Oddly enough, the hardest part for me wasn’t the actual content, it was consistency. If I rename a chapter, did I remember to update all of the references to it? Am I certain I introduced concept A before concept B? Stuff like that. I started writing in Word and switched to markdown with pandoc halfway through, and neither really helped with this problem. I think if I started again I’d spend some time researching what format best enforced this kind of consistency. Probably RST or LaTeX.

(Fun fact: Leslie Lamport created both TLA+ and LaTeX!)

About halfway through I started building an automated toolchain for things like error checking, converting documents, and uploading. It’s a huge time saver. For the next book I’d spend a lot more time building a stronger toolchain from the start.

Last discovery: pair with the technical editor. The book is pretty dense, so it’s very slow to proofread. We started moving a lot faster (and I got much deeper feedback) when we started scheduling time to sit down and go over it together.

Thanks for doing the interview!

EuroPython: EuroPython 2019: Seeking venues


Dear EuroPython’istas,

We are in preparations of our venue RFP for the EuroPython 2019 edition and are asking for your help in finding the right locations for us to choose from.

If you know of a larger venue - hotel or conference center - that can accommodate at least 1400 attendees, please send the venue details to board@europython.eu. We will then make sure to include them in our RFP once we send it out.

The more venues we gather to reach out to, the better a selection process we can guarantee, which, in turn, will ultimately result in a better conference experience for everybody involved.

When sending us venue suggestions, please make sure to provide us with the following: name and URL of the venue, country and city, as well as the contact details of the sales person in charge of inquiries (full name, email and phone).

We were planning to start the RFP process in the coming days, so please make sure you send us your recommendations as soon as possible.

Thank you,

EuroPython Society Board
https://www.europython-society.org/


PyBites: Code Challenge 51 - Analyse NBA Data with SQL/sqlite3 - Review


In this article we review last week's Analyse NBA Data with SQL/sqlite3 code challenge.

Our solution

Check out our solution for this challenge.

Some learnings:

  • Use cursor.executemany to bulk insert records.

  • We were using cursor.fetchall but to get one record/row you can use fetchone (thanks @clamytoe)

  • Practice GROUP BY (year_with_most_drafts)

  • Simple SQLite arithmetic (games/active AS games_per_year)

  • Probably don't need CAST if you add types to DB columns (looking at other PRs!)
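A toy sketch of those sqlite3 patterns (the table and column names here are made up for illustration, not the challenge's actual NBA schema):

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE players "
            "(name TEXT, draft_year INTEGER, games INTEGER, active INTEGER)")

# executemany bulk-inserts a sequence of parameter tuples:
rows = [("Jordan", 1984, 1072, 15),
        ("Pippen", 1987, 1178, 17),
        ("Rodman", 1986, 911, 14)]
cur.executemany("INSERT INTO players VALUES (?, ?, ?, ?)", rows)

# fetchone returns a single row (or None):
cur.execute("SELECT name FROM players WHERE draft_year = 1984")
first = cur.fetchone()
print(first[0])  # Jordan

# GROUP BY plus simple arithmetic in the SELECT:
cur.execute("SELECT draft_year, COUNT(*), games / active AS games_per_year "
            "FROM players GROUP BY draft_year")
summary = cur.fetchall()
print(summary)
conn.close()
```

Since both columns are INTEGER, games / active is integer division in SQLite, which is why typing your columns can save you a CAST.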

Community solutions

Check out solutions PR'd by our community.

Some learnings taken from these Pull Requests:

  • Refreshed SQL. Learned about sqlite command line. Learned PyCharm DataSource integration and querying. Refreshed git commands.

  • I used this challenge as a chance to experiment with Jupyter notebook to help visualize the data

Read Code for Fun and Profit

You can look at all submitted code here and/or by pulling our Community branch.

Other learnings we spotted in Pull Requests this week: itertools, difflib / similarity measures, collections, pytest and patch.

Thanks to everyone for your participation in our blog code challenges!

Need more Python Practice?

Subscribe to our blog (sidebar) to get a new PyBites Code Challenge (PCC) in your inbox each Monday.

And/or take any of our 50+ challenges on our platform.

Prefer coding self-contained exercises in the comfort of your browser? Try our growing collection of Bites of Py.

Want to do the #100DaysOfCode but not sure what to work on? Take our course and/or start logging your progress on our platform.


Keep Calm and Code in Python!

-- Bob and Julian

PyBites: Code Challenge 52 - Create your own Pomodoro Timer


It's not that I'm so smart, it's just that I stay with problems longer. - A. Einstein

Hey Pythonistas,

It's TIME for another Code Challenge! (Pun totally intended!)

We're keeping it simple this week. Create your own Pomodoro Timer!

Pomodoro?

What's a Pomodoro Timer? We're glad you asked! (If you didn't ask, you can read anyway!)

A Pomodoro Timer is a countdown timer that enables you to focus on a given task. You set the timer for a specific duration, 20 minutes for example, and for that duration you are completely offline and focused. No email, no phone, no texts, no kids (a man can dream!)... no interruptions. Just pure focus. This is the Pomodoro Technique.

At the end of the timer, you're back online.

The idea is that the minutes of focus time allow you to achieve more than you otherwise would given the usual swathe of interruptions we all suffer.

The real fanatics will do a set period of the Pomodoro Technique followed by a short break, then another Pomodoro set and repeat.

The Challenge

Now for the challenge:

  • At its simplest, create a timer for a set duration (eg 20 minutes) that "alarms" or notifies you at completion.

  • Go a step further and allow the user to specify the amount of time the Pomodoro Timer goes for.

  • Again, further develop the app by allowing it to loop. That is, Pomodoro Time > break time > Pomodoro Time > break time. Just like the pros!

  • Create a user interface if you have the time! PyGame or argparse perhaps? Maybe even make it web based with Flask or your other favourite web framework.

Here's an example: Tomato Timer!

It doesn't matter how complex or simple you make your app - just that you get coding.
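If you want a nudge for the first bullet, a minimal countdown sketch might look like this (everything here, from the terminal-bell "alarm" to the injectable sleep function, is just a placeholder to build on):

```python
import time

def format_remaining(seconds):
    """Format a number of seconds as MM:SS."""
    mins, secs = divmod(seconds, 60)
    return f"{mins:02d}:{secs:02d}"

def pomodoro(minutes, sleep=time.sleep):
    """Count down `minutes` minutes on one line, then sound an alarm."""
    for remaining in range(minutes * 60, 0, -1):
        print("\r" + format_remaining(remaining), end="", flush=True)
        sleep(1)
    print("\a\nTime's up -- take a break!")  # \a rings the terminal bell

# pomodoro(20)  # one 20-minute focus session
```

Passing sleep in as a parameter keeps the timer testable (swap in a no-op during tests), and looping Pomodoro/break sessions is then just a matter of calling pomodoro repeatedly.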

If you need help getting ready with Github, see our new instruction video.

PyBites Community

A few more things before we take off:

  • Do you want to discuss this challenge and share your Pythonic journey with other passionate Pythonistas? Confirm your email on our platform then request access to our Slack via settings.

  • PyBites is here to challenge you because becoming a better Pythonista requires practice, a lot of it. For any feedback, issues or ideas use GH Issues, tweet us or ping us on our Slack.


>>> from pybites import Bob, Julian

Keep Calm and Code in Python!

Real Python: Python Community Interview With Mahdi Yusuf


Today I’m joined by Mahdi Yusuf, one of the founders of Pycoder’s Weekly.

By day he’s the CTO of Gyroscope, the OS for the human body. By night, he’s a sports and movie fan with a controversial opinion on who is the best Batman… Let’s get into it.

Ricky: Let’s start with an easy one. How’d you get into programming, and when did you start using Python?

Mahdi Yusuf

Mahdi: I was actually quite a late bloomer. I didn’t even know what a pointer was until I was in university studying computer engineering, but I was always into computers and always playing around with machines. (Mostly Windows. Gasp!)

I started using Python in the last couple years of university, and I really enjoyed how simple and “batteries included” it was. Almost everything I wanted to do had a library for it, and in the last 10 years it’s only gotten better.

What drew me to it was that you could do anything you wanted. You weren’t niched into a specific role like you would be with some other programming languages.

Ricky: People may know you as one of the founders of Pycoder’s Weekly. For those reading who may not be aware of Pycoder’s Weekly, can you briefly explain what you offer to your readers, and how it came to fruition?

Mahdi: It was actually just a by-product of having no way to get Python news in a nice, curated manner that was consistent.

Mike (co-founder) and I were just chatting and thought it would be something cool to do, so we just got it started. I bought the domain and created our first landing page, and we were off to the races.

Next thing you know, before we even sent out the first issue, we had 2,000 subscribers.

Ricky: You gave a talk at PyCon Canada 2013, titled “How to Make Friends and Influence Developers.” It’s a great talk where you emphasize how developers can grow their community and products by focusing on the value they can offer and not just the problems they can solve. Do you still feel this is an area that developers can work on in 2018? What challenges do you think developers will be facing in 2019?

Mahdi: Absolutely. Moving forward, I think most problems have been trivialized, from racking your own servers in the early 2000s to point-and-click in the 2010s. Things are only getting easier, but the problems people have will always be there, so focusing on user goals instead of problems is always the best way to think.

Ricky: For your day job, you work as the CTO for Gyroscope. I first heard about the app in your interview with the legendary Scott Hanselman on the Hanselminutes Podcast. It’s an amazing app that has quite clear benefits for its users. Could you tell us a bit about the app and the plans you have for it going forward?

Mahdi: Gyroscope is essentially the operating system for your body. It helps people keep track of their bodies much like they would their computers, tracking important health metrics like their heart rate, blood pressure, and blood sugar, in addition to simpler things like where they are spending all their time and how productive they are during work hours.

We are working on a few things that are going to bridge the gap between your health and all the data you are generating and be a part of your everyday life. Stay tuned! You can follow us @gyroscope_app if this is something that interests you, or you can reach out to me @myusuf3.

Ricky: What other projects do you have going on that you’d like to share? What takes up your time outside of Gyroscope and Pycoder’s?

Mahdi: I have been playing with building an AI-powered home monitoring system that can identify members of my family with trained pictures of them gathered around the house. It could fire notifications of who is around the house and potentially detect when people it doesn’t know enter the camera’s field of view and ignore my family members as they pass throughout the house.

It’s been on the shelf for a couple of weeks. I’ve been busy with work, but it’s getting me writing more Python lately.

Ricky: And now for my last question: there’s more to us than coding, so what other hobbies and interests do you have? Any you’d like to share and/or plug?

Mahdi: I love playing sports and am a huge movie guy.

I have been playing basketball since I was young, and it’s the sport I enjoy playing the most, but as I am getting older, I am spending more time icing my knees than I would like.

As for movies, the summer is just wrapping up, and there was tons of great stuff this summer, but I am just going to put it out there… I am Team Marvel. Ben Affleck is the real Batman.


Thank you Mahdi for the interview. You can follow Mahdi on Twitter here. If you haven’t already, you can sign up to the Pycoder’s Weekly newsletter here.

If there is someone you would like me to interview in the future, reach out to me in the comments below, or send me a message on Twitter.




Codementor: Concurrency and Parallelism

This post was originally published on Distributed Python (https://www.distributedpython.com/) on September 14th, 2018. Concurrency is often misunderstood and mistaken for parallelism. However,...

pgcli: Release v1.11.0


Pgcli is a command line interface for the Postgres database that does auto-completion and syntax highlighting. You can install this version using:

$ pip install -U pgcli

A very small release this time, but coming next is pgcli’s migration to Python Prompt Toolkit 2.0.

Features:

  • Respect \pset pager on and use pager when output is longer than terminal height (Thanks: Max Rothman)

Tiago Montes: The mystery behind del() and why it works


The other day, while reviewing the Assignment Section exercises for a training course I was about to deliver, wanting to type dir() into a Python REPL, my fingers went for del() instead. At first I didn’t even notice it but then, something in the back of my head called …

Wingware Blog: Developing and Debugging Python Code Running on Vagrant Containers

Learn how to use Wing Pro to develop, test, and debug Python Code running in Vagrant containers.

Mike Driscoll: Creating Presentations with Jupyter Notebook


Jupyter Notebook can be turned into a slide presentation that is kind of like using Microsoft Powerpoint, except that you can run the slide’s code live! It’s really neat how well it works. The only con in my book is that there isn’t a lot of theming that can be applied to your slides, so they do end up looking a bit plain.

In this article, we will look at two methods of creating a slideshow out of your Jupyter Notebook. The first method is by using Jupyter Notebook’s built-in slideshow capabilities. The second is by using a plug-in called RISE.

Let’s get started!

Note: This article assumes that you already have Jupyter Notebook installed. If you don’t, then you might want to go to their website and learn how to do so.


The first thing we need to do is to create a new Notebook. Once you have that done and running, let’s create three cells so that we can have three slides. Your Notebook should now look like the following:

An empty notebook with 3 cells

Now let’s turn on the “slideshow” tools. Go to the View menu and then click on the Cell Toolbar menu option. You will find a sub-menu in there that is called Slideshow. Choose that. Now your Notebook’s cell should look like this:

An empty slideshow

There are now little comboboxes on the top right of each cell. These widgets give you the following options:

  • Slide
  • Sub-Slide
  • Fragment
  • Skip
  • Notes

You can just create a series of Slides if you like, but you can make the slideshow a bit more interesting by adding Sub-Slides and Fragments. A Sub-Slide is a slide that appears below the previous one, while a Fragment is a piece of content revealed incrementally within the previous slide. (As an aside, I have actually never used Fragments myself.) You can also set a cell to Skip, which just leaves it out of the slideshow, or to Notes, which turns it into speaker notes.
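Under the hood, the Slideshow toolbar simply records your choice in each cell's metadata in the saved `.ipynb` file. Here is a sketch of what one such cell looks like in the notebook JSON (hand-written for illustration, not output generated by Jupyter):

```python
import json

# A Markdown cell marked as a "slide". The toolbar writes the chosen
# type under metadata["slideshow"]["slide_type"]; nbconvert reads this
# key later to decide how to lay out the presentation.
cell = {
    "cell_type": "markdown",
    "metadata": {"slideshow": {"slide_type": "slide"}},
    "source": ["# Hello Slideshow"],
}

print(json.dumps(cell["metadata"], indent=2))
```

The other toolbar options ("subslide", "fragment", "skip", "notes") are stored the same way, as different values of `slide_type`.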

Let’s add some text to our first cell. We will add the text “# Hello Slideshow” to it and set the cell type to Markdown. Note the pound sign at the beginning of the text. This will cause the text to be a heading.

In cell two, we can add a simple function. Let’s use the following code:

def double(x):
    print(x * 2)

double(4)

For the last cell, we will add the following text:

# The end

Make sure you set that to be a Markdown cell as well. This is what my cells ended up looking like when I was done:

Getting the slideshow ready

To make things simple, just set each of the cell’s individual comboboxes to Slide.

Now we just need to turn it into an actual slideshow. To do that, you will need to save your Notebook and shut down the Jupyter Notebook server. Next you will need to run the following command:

jupyter nbconvert slideshow.ipynb --to slides --post serve
Running the slideshow

To navigate your slideshow, you can use your left and right arrow keys, or you can use SPACEBAR to go forward and SHIFT+SPACEBAR to go back. This creates a pretty nice and simple slideshow, but it doesn’t allow you to run the cells. For that, we will need to use the RISE plugin!

Getting Started with RISE

Reveal.js – Jupyter/IPython Slideshow Extension (RISE) is a plugin that uses reveal.js to make the slideshow run live. What that means is that you will now be able to run your code in the slideshow without exiting the slideshow. The first thing we need to learn is how to get RISE installed.

Installing RISE with conda

If you happen to be an Anaconda user, then this is the method you would use to install RISE:

conda install -c conda-forge rise

This is the easiest method of installing RISE. However, most people still use regular CPython, so next we will learn how to use pip!

Installing RISE with pip

You can use Python’s pip installer tool to install RISE like this:

pip install RISE

You can also do `python -m pip install RISE` if you want to. Once the package is installed, you have a second step of installing the JS and CSS in the proper places, which requires you to run the following command:

jupyter-nbextension install rise --py --sys-prefix

If you somehow get a version of RISE that is older than 5.3.0, then you would also need to enable the RISE extension in Jupyter. However, I recommend just using the latest version so you don’t have to worry about that.

Using RISE for a SlideShow

Now that we have RISE installed and enabled, let’s re-open the Jupyter Notebook we created earlier. Your Notebook should now look like this:

Adding RISE

You will notice that I circled a new button that was added by RISE to your Notebook. If you mouse over that button, you will see a tooltip that says “Enter/Exit RISE Slideshow”. Click it and you should see a slideshow that looks a lot like the previous one. The difference here is that you can actually edit and run all the cells while in the slideshow. Just double-click on the first slide and you should see it transform to the following:

Running with RISE

After you are done editing, press SHIFT+ENTER to run the cell. Here are the primary shortcuts you will need to run the slideshow effectively:

  • SPACEBAR – Goes forward a slide in the slideshow
  • SHIFT+SPACEBAR – Goes back a slide in the slideshow
  • SHIFT+ENTER – Runs the cell on the current slide
  • DOUBLE-CLICK – To edit a Markdown cell

You can view all the Keyboard shortcuts by going to the Help menu when not in Slideshow mode and clicking the Keyboard Shortcuts option. Most if not all of these shortcuts should work inside of a RISE slideshow.

If you want to start the slideshow on a specific cell, just select that cell and then press the Enter Slideshow button.

RISE also works with Notebook widgets. Try creating a new cell with the following code:

from ipywidgets import interact
 
def my_function(x):
    return x
 
# create a slider
interact(my_function, x=20)

Now start the slideshow on that cell and try running the cell (SHIFT+ENTER). You should see something like this:

Using a widget in RISE

You can use RISE to add neat widgets, graphs and other interactive elements to your slideshow that you can edit live to demonstrate concepts to your attendees. It’s really quite fun and I have used RISE personally for presenting intermediate level material in Python to engineers.

RISE also has several different themes that you can apply as well as minimal support for slide transitions. See the documentation for full information.


Wrapping Up

In this article we learned about two good methods for creating presentations out of our Jupyter Notebooks. You can use Jupyter directly via their nbconvert tooling to generate a slideshow from the cells in your Notebook. This is nice to have, but I personally like RISE better. It makes the presentations so much more interactive and fun. I highly recommend it. You will find that using Jupyter Notebook for your presentations will make the slides that much more engaging, and it is so nice to be able to fix slides during the presentation too!


Related Reading
