
Tryton News: Tryton Release 5.4



We are proud to announce the 5.4 release of Tryton.
In addition to many bug fixes and performance improvements, this release improves the user experience in many places. It also greatly extends the existing workflows to support more use cases. We see 8 new modules landing as official.

You can give it a try on the demo server, use the docker image or download it here.
As usual, migration from the previous series is fully supported. Some manual operations may be required; see Migration from 5.2 to 5.4.

Here is the list of the most noticeable changes:
(For a more complete list, see the change log of each package)


Changes For The User

We can now show a visual context on rows or cells. The visual contexts are muted, success, warning and danger. Many modules have been updated to use them, for example for the payable and receivable amounts due today on the party, or when an invoice is due, etc.

In the search bar of the clients, we enabled direct search on fields of relational field types, like One2Many, Many2Many and Many2One. This is done by appending a dot to the relational field name, followed by the name of the field in the related model. E.g. on the products filter you can use the search clause Variants.Code: PROD to find all products that have a variant with the code PROD.
The search entry provides completion for such related fields.
By default only one level of completion is activated, but customization can activate more. This feature also works on the keys of dictionary fields, like the product attributes.

The clients now display a more user-friendly error message for domain validation errors: they show the exact failing constraint using the same format as the search bar.

All the list views have been reviewed. The most important fields are now expanded to take advantage of the available space. On the main editable views, new records are created at the top instead of the bottom, to avoid loading all the records.

A CSV export created by a user is now, by default, only available to that user. The administrator can make an export available to a group of users.

Desktop Client

Now, when drag & drop is available on a view, we show a draggable icon to notify the user and to provide a drag handle, which is easier to use when the list is editable.

Draggable handle in Tryton

Web Client

The web client now supports drag and drop to order list and tree rows, like in the desktop client. There is one small difference when inserting a row inside a non-expanded row: the user must drop it below the row while pressing the CTRL key. Otherwise the row is dropped next to the target row.

The column sizing of the web client has been improved. Columns now have a minimal width (depending on the type), and a double scrollbar (top and bottom) is displayed if there is not enough space to show all the columns in the view-port.

Accounting

The constraint that prevented using the same invoice sequence twice per fiscal year has been relaxed. Tryton now only checks that the sequence was not used to number an invoice with a later date.

Since 14 September 2019, Strong Customer Authentication (SCA) has been required by EU regulators to reduce online fraud and make the internet a safer place to transact. For that, Stripe has introduced the Setup Intent and Payment Intent mechanisms for credit card payments. Tryton now supports them in addition to the former mechanism (for SEPA, SOFORT etc.).

Some financial institutions have precise requirements about which initiator identifier to use in SEPA messages. For that, we added a configuration option on the payment journal. The available options for now are: “SEPA Creditor Identifier”, “Belgian Enterprise Number” and “Spanish VAT Number”.

Until now, it was possible to cancel a posted supplier invoice but not a posted customer invoice, because in many countries the latter is not allowed. To be more flexible, we added an option on the company to allow cancelling customer invoices.

When manually creating an accounting move, we now default the date to today if the current period is selected. Otherwise it is still the start date of the period.

The wizard that automatically renews the fiscal year now updates the sequence name if it contains the year.

When migrating to Tryton, the accountant needs to fill in the depreciation of existing assets which have already been partially depreciated. To ease the encoding and ensure a correct computation, we added a field to store the amount already depreciated for the asset. This amount is deducted from the asset value before continuing the depreciation computation.

To ensure that every tax line is reported in the tax statement, the tax is now always required on the tax line.

It is now possible to define default values for the customer and supplier tax rules. This can be useful to apply a local tax rule based on subdivision by default.

In addition to the country, the tax rules can now be written using the subdivisions of origin and/or destination. A child subdivision will match the rule based on an upper level subdivision. This is useful for countries that have different tax rates for some subdivisions.

The income statement is now included in the balance sheet for the Spanish accounting (as is done for other countries). So the running income of the current year is already included before the year closing.

Bank

The BICs of banks are now validated and formatted. This avoids encoding errors and eases searching.

Party

Until now, the subdivision on an address was limited to the top-level subdivisions of the country. It is now possible to define which types of subdivision are allowed to be used. Tryton comes with configurations following each country's rules for postal address formats.

Product

It is now possible to configure a sequence for the product code, which is used to fill it in at creation time. This can be used to ensure a unique code per product, even when a product is duplicated.

As for parties, we added a list of identifiers on products. By default, Tryton supports and validates these numbers: EAN, ISAN, ISBN, ISIL, ISIN and ISMN. Non-standard identifiers are also supported. These identifiers are used for matching when searching products by name.

The product cost price can now be used in price lists. It uses the cost price of the company set in the context. This makes it possible to build price lists that define a margin to apply on the cost.

You can now define which unit of measure is the basis for quantities used in a price list. In standard modules we support the default unit (the original one) and the sale unit.

Purchase

It is now possible to configure, on the supplier party, the customer code of the current company. The code is displayed on the request for quotation.

The same processing delay as for purchases has been added to requisitions. This allows resetting an approved requisition to draft if it has not yet been automatically processed.

Sale

We added the same processing delay as for sales to the sale complaints. So you can reset a complaint to draft after it has been approved or rejected, as long as it has not yet been automatically processed.

We added an option to deactivate a subscription service. This prevents using such services for new subscriptions.

We now allow finishing a subscription line before the next consumption. This gives more flexibility when ending subscriptions.

Stock

Users can now set a default warehouse in their preferences.
This is useful for companies with multiple warehouses: it saves time, as users can have the warehouse they work from already filled in.

You can use consumable products in an inventory if needed. They are still not required, and the inventory is not automatically filled with products of this type.

When you are looking at the evolution of the stock quantity for a product, you can open the date to see the moves involved for the changes.

When opening the graph of product quantities by warehouse, if there was no move on the current date, the user could not see the current quantity. We now always add an entry for the current date.

We now force the order point to always have a minimal quantity. This avoids confusion in the case where it was not set. Now, if you do not want to trigger a purchase or production for any quantity, you must set an explicit negative quantity.

Timesheet

When using the “Enter Timesheet” wizard, we now display the date in the window name (next to the employee name). The date shown is the one selected in the first step of the wizard.

New Modules

Secondary Unit

These modules allow defining a different secondary unit and factor on the product, for sale and for purchase.
The quantity of sale and purchase lines can be entered using the secondary unit fields (quantity and unit price); the main unit fields are automatically updated using the product's factor.
On related documents like the invoice or shipment, the secondary fields are displayed using the factor stored on the sale or purchase.

Amendment

The amendment modules allow you to change sales and purchases that are being processed while keeping track of those changes. When an amendment is validated, the document is updated and given a new revision. If needed, the invoices or shipments are also updated or recreated to match the new order.

Purchase and Sale History

These modules activate the history on sales and purchases but also add a revision number which is incremented each time the document is reset to draft. The revision number is appended to the document number to ensure parties are communicating about the same version.

New Languages

  • Indonesian

Changes For The Developer

It is now possible to use SQL expressions as values with the create/write methods. The main purpose is to be able to use the time functions of the database server, which are linked to the transaction, instead of those provided by the Tryton server.
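To illustrate, here is a minimal sketch using python-sql, which Tryton is built on (the model, records and field name are illustrative):

from sql.functions import CurrentTimestamp

# Store the database server's transaction timestamp instead of a value
# computed by the Tryton server.
MyModel.write(records, {
        'confirmed_at': CurrentTimestamp(),
        })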

The expand attribute has been changed from a Boolean (1 or 0) into an integer. The integer represents the proportion of available space which is taken among all expanded columns.

The format_date method on Report can now take an optional format parameter if you don’t want to use the default format of the language.
Report also receives a new method, format_timedelta. It uses the same representation as the clients to format duration field values.

There is now an environment variable to set the default logging level when running trytond as a WSGI application.

We now have a lazy_gettext method which allows deferring a translation by using a LazyString. It can be used as the label or help text of fields. This is useful for base Model classes and Mixins, to avoid duplicating the translation of the same string for each derived class.
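A small sketch of how this can look (the mixin and message ID are illustrative):

from trytond.i18n import lazy_gettext
from trytond.model import fields


class NamedMixin:
    "Mixin reused by many models; the label is translated once, lazily."
    name = fields.Char(lazy_gettext('my_module.msg_name'))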

We now prevent setting a value for an unknown field in proteus scripts and in Tryton modules' model definitions. For that, we add __slots__ automatically on each model. A positive side effect is that this also reduces the memory consumption of each instance.

The PYSON Eval now supports dotted notation. This feature is a common expectation of beginners, so we decided it was good to support it.

We already have a multi-selection widget to use with Many2Many fields, but now we also have a MultiSelection field which stores a list of values as a JSON list in the database. This is useful when the selection has only a few options. For now, the widget is also available on list views (but not editable), and the field is usable in the search bar of the client.
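A minimal sketch of such a field (the model name and values are illustrative):

from trytond.model import ModelSQL, fields


class Meeting(ModelSQL):
    "Meeting"
    __name__ = 'training.meeting'
    # Stored in the database as a JSON list of the selected values
    weekdays = fields.MultiSelection([
            ('monday', "Monday"),
            ('wednesday', "Wednesday"),
            ('friday', "Friday"),
            ], "Week Days")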

You can now define a different start date when using PYSON Date or DateTime with delta.

We now give the possibility to define a different order than the alphabetical one for the keys of a Dict field.

Even though cron jobs are relaunched, it is better to directly retry them a few times when a DatabaseOperationalError is raised. This also avoids unnecessary errors in the logs.

Missing a depends on a method is a common mistake. We have improved the generic test to catch more cases, like a missing or empty parent or an unknown field. All the modules have been checked and corrected against these new tests.

Accounting

The generic checkout page for Stripe has been updated to use Stripe.js and to support setup and payment intents.
Until now, it was required to set up a webhook from Stripe to Tryton in order to receive the events that asynchronously update the workflow of the payments. Now, if you do not set up such a webhook, a cron task will periodically fetch new events and process them. This is useful for testing, or when Tryton cannot be reached from outside.

We now require a fresh session to post a statement. If the session is not fresh, the client will ask the user to re-enter their password (or use any other authentication method configured).

Country/Currency

The countries, subdivisions and currencies are no longer loaded from XML at module installation, but by proteus scripts which use pycountry data: trytond_import_countries and trytond_import_currencies. The translations are also loaded by those scripts.
This reduces the maintenance load of each release and allows users to keep their database up to date without relying on Tryton releases.

Party

As countries are no longer managed as XML data in Tryton (but imported by script), the address formats now use country and language codes instead of Many2One fields. So a format can be created even if the country or the language does not yet exist.

Stock

As we now keep a link between the inventory moves and the outgoing moves, we simplified the synchronization algorithm to use this link. Another advantage is that if the product is changed on the inventory move, the outgoing move is also updated instead of creating a new move.



Programiz: Python time Module

In this article, we will explore the time module in detail. We will learn to use the different time-related functions defined in the time module, with the help of examples.
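For a first taste, here is a small example using a few of those functions from the standard library:

import time

now = time.time()                # seconds since the epoch, as a float
local = time.localtime(now)      # convert to a struct_time in local time
print(time.strftime("%Y-%m-%d %H:%M:%S", local))

time.sleep(0.5)                  # pause the program for half a second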

Real Python: Cool New Features in Python 3.8


In this course, you’ll get a look at the newest version of Python. On October 14th, 2019, the first official version of Python 3.8 was released.

You’ll learn about the following:

  • Using assignment expressions to simplify some code constructs
  • Enforcing positional-only arguments in your own functions
  • Specifying more precise type hints
  • Using f-strings for simpler debugging
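For a quick taste of the first two features and the new f-string debugging syntax, here is a small example (requires Python 3.8+):

data = list(range(12))

# Assignment expression ("walrus operator"): bind and test in one step
if (n := len(data)) > 10:
    # The f-string "=" specifier prints both the expression and its value
    print(f"{n=}")  # n=12

# Positional-only parameters: base and exp cannot be passed by keyword
def power(base, exp, /, mod=None):
    return pow(base, exp, mod)

print(power(2, 10))  # 1024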

With a few exceptions, Python 3.8 contains many small improvements over the earlier versions. Towards the end of the course, you’ll see many of these less attention-grabbing changes, as well as a discussion about some of the optimizations that make Python 3.8 faster than its predecessors.

If you want to learn more, additional resources will be referenced and linked to throughout the course.



PyCon: CFP Deadline for PyCon 2020 Coming Up!


Call for Proposals deadlines are fast approaching. PyCon US is looking for speakers of all experience levels and backgrounds to contribute to our conference program. We want you and your ideas at PyCon US!

Be sure to create your account on us.pycon.org/2020 in order to access all the submission forms.

More information about speaking at PyCon can be found here.

Tutorials

Tutorial proposals are due November 22, 2019.

We're looking for tutorials that can grow this community at any level. Tutorials that will advance Python, advance this community, and shape the future are preferred. More details about tutorial proposals and submission can be found here.

Talks

Talks, Charlas, Poster, and Education Summit proposals are due December 20, 2019.

The Education Summit will be held on Thursday, April 16. Talks and Charlas are part of the main conference schedule from Friday, April 17 through Sunday, April 19. The poster session will be held on Sunday, April 19 along with the Job Fair. For details on each proposal type, go to the individual links provided. Submission forms for each can be accessed through your dashboard.

Hatchery

Hatchery proposals are due January 3, 2020.

The Hatchery Program is an effort to establish a path for introducing new tracks, summits, demos, and other events which share and fulfill the mission of the Python Software Foundation into PyCon’s schedule. More information about this exciting program can be found here. Submit your proposal here.

StartUp Row

StartUp Row applications are due January 17, 2020.

Startup Row is where early-stage companies go to show off what they’re doing with Python at PyCon US. More information about this event can be found here. Submit your application here.

---------------------------------------

PyCon US is dedicated to featuring a diverse and inclusive mix of speakers in the lineup.

We need beginner, intermediate, and advanced proposals on all sorts of topics as well as beginner, intermediate, and advanced speakers to present talks. You don’t need to be a 20-year veteran who has spoken at dozens of conferences. On all fronts, we need all types of people. That’s what this community is comprised of, so that’s what this conference’s schedule should be made from.

Don't wait to submit your proposal!




Stack Abuse: Understanding OpenGL through Python


Introduction

Following this article by Muhammad Junaid Khalid, where basic OpenGL concepts and setup were explained, we'll now be looking at how to make more complex objects and how to animate them.

OpenGL is very old, and you won't find many tutorials online on how to properly use it and understand it because all the top dogs are already knee-deep in new technologies.

To understand modern OpenGL code, you have to first understand the ancient concepts that were written on stone tablets by the wise Mayan game developers.

In this article, we'll jump into several fundamental topics you'll need to know:

In the last section we'll take a look at how to actually use OpenGL with the Python libraries PyGame and PyOpenGL.

In the next article (coming soon!) we'll take a deeper look at how to use OpenGL with Python and the libraries mentioned above.

Basic Matrix Operations

To properly be able to use many of the functions in OpenGL, we'll need some geometry.

Every single point in space can be represented with Cartesian coordinates, which define a point's location by its X, Y and Z values.

We'll be practically using them as 1x3 matrices, or rather 3-dimensional vectors (more on matrices later on).

Here are examples of some coordinates:

a = (5, 3, 4)
b = (9, 1, 2)

a and b are points in space; their x-coordinates are 5 and 9 respectively, their y-coordinates 3 and 1, and so on.

In computer graphics, more often than not, homogeneous coordinates are utilized instead of regular old Cartesian coordinates. They're basically the same thing, only with an additional utility parameter, which for the sake of simplicity we'll say is always 1.

So if the regular coordinates of a are (5,3,4), the corresponding homogeneous coordinates would be (5,3,4,1). There's a lot of geometric theory behind this, but it isn't really necessary for this article.

Next, an essential tool for representing geometric transformations: matrices. A matrix is basically a two-dimensional array (in this case of size n*n; it's very important for our matrices to have the same number of rows and columns).

Now, matrix operations are, more often than not, pretty straightforward - addition, subtraction, etc. But of course, the most important operation is also the most complicated one: multiplication. Let's take a look at some basic matrix operation examples:

$$
A = \begin{bmatrix} 1 & 2 & 5 \\ 6 & 1 & 9 \\ 5 & 5 & 2 \end{bmatrix} \quad \text{Example matrix}
$$

$$
\begin{bmatrix} 1 & 2 & 5 \\ 6 & 1 & 9 \\ 5 & 5 & 2 \end{bmatrix} + \begin{bmatrix} 2 & 5 & 10 \\ 12 & 2 & 18 \\ 10 & 10 & 4 \end{bmatrix} = \begin{bmatrix} 3 & 7 & 15 \\ 18 & 3 & 27 \\ 15 & 15 & 6 \end{bmatrix} \quad \text{Matrix addition}
$$

$$
\begin{bmatrix} 2 & 4 & 10 \\ 12 & 2 & 18 \\ 10 & 10 & 4 \end{bmatrix} - \begin{bmatrix} 1 & 2 & 5 \\ 6 & 1 & 9 \\ 5 & 5 & 2 \end{bmatrix} = \begin{bmatrix} 1 & 2 & 5 \\ 6 & 1 & 9 \\ 5 & 5 & 2 \end{bmatrix} \quad \text{Matrix subtraction}
$$

Now, as all math tends to do, it gets relatively complicated when you actually want something practical out of it.

The formula for matrix multiplication goes as follows:

$$
c[i,j] = \sum_{k=1}^{n}a[i,k]*b[k,j]
$$

c being the resulting matrix, a and b being the multiplicand and the multiplier.

There's actually a simple explanation for this formula. Every element c[i,j] is constructed by summing the products of the elements in the i-th row of a and the j-th column of b. This is why, in a[i,k], the i is fixed and the k is used to iterate through the elements of the corresponding row. The same principle applies to b[k,j].

Knowing this, there's an additional condition that needs to be fulfilled for matrix multiplication to be possible. If we want to multiply matrices A and B of dimensions a*b and c*d, the number of elements in a single row of the first matrix (b) has to be the same as the number of elements in a column of the second matrix (c), so that the formula above can be used properly.

A very good way of visualizing this concept is highlighting the row and column whose elements are going to be used in the multiplication for a given element. Imagine the two highlighted lines laid over each other, as if they're in the same matrix.

The element where they intersect is the position of the resulting element of the summation of their products:

[Figure: the highlighted row and column intersect at the position of the resulting element]
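To make the formula concrete, here is a small pure-Python sketch of matrix multiplication (NumPy's @ operator does the same job in practice):

def matmul(a, b):
    """Multiply matrix a (n x m) by matrix b (m x p) using the formula above."""
    n, m, p = len(a), len(b), len(b[0])
    assert len(a[0]) == m, "columns of a must match rows of b"
    # c[i][j] sums the products over the shared dimension k
    return [[sum(a[i][k] * b[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

A = [[1, 2, 5], [6, 1, 9], [5, 5, 2]]
E = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
print(matmul(A, E) == A)  # True - multiplying by the identity changes nothing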

Matrix multiplication is so important because if we want to explain the following expression in simple terms: A*B (A and B being matrices), we would say:

We are transforming A using B.

This is why matrix multiplication is the quintessential tool for transforming any object in OpenGL or geometry in general.

The last thing you need to know about matrix multiplication is that it has a neutral (identity) element. This means there is a unique element (a matrix, in this case) E which, when multiplied with any other element A, doesn't change A's value, that is:

$$
(\exists!{E}\ \ \forall{A})\ E*A=A
$$

The exclamation point in conjunction with the exists symbol means: A unique element E exists which...

In case of multiplication with normal integers, E has the value of 1. In case of matrices, E has the following values in normal Cartesian (E1) and homogeneous coordinates (E2) respectively:

$$
E_1 = \begin{bmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{bmatrix} \qquad
E_2 = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Every single geometric transformation has its own unique transformation matrix with a pattern of some sort, of which the most important are:

  • Translation
  • Scaling
  • Reflection
  • Rotation
  • Shearing

Translation

Translation is the act of literally moving an object by a set vector. The object that's affected by the transformation doesn't change its shape in any way, nor does it change its orientation - it's just moved in space (that's why translation is classified as a movement transformation).

Translation can be described with the following matrix form:

$$
T = \begin{bmatrix} 1 & 0 & 0 & t_x \\ 0 & 1 & 0 & t_y \\ 0 & 0 & 1 & t_z \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

The t values represent by how much the object's x, y and z location values will be changed.

So, after we transform any coordinates (in homogeneous form) with the translation matrix T, we get:

$$
T*[x,y,z,1]^T = [x+t_x,\ y+t_y,\ z+t_z,\ 1]^T
$$
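Here is a quick NumPy sketch of the same thing, translating the point a = (5, 3, 4) from earlier by (2, 3, 4):

import numpy as np

T = np.array([[1, 0, 0, 2],   # t_x = 2
              [0, 1, 0, 3],   # t_y = 3
              [0, 0, 1, 4],   # t_z = 4
              [0, 0, 0, 1]])

p = np.array([5, 3, 4, 1])    # the point in homogeneous coordinates
print(T @ p)                  # [7 6 8 1] - the point moved by (2, 3, 4)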

Translation is implemented with the following OpenGL function:

void glTranslatef(GLfloat tx, GLfloat ty, GLfloat tz);

As you can see, if we know the form of the translation matrix, understanding the OpenGL function is very straightforward. This is the case with all OpenGL transformations.

Don't mind the GLfloat - it's just a clever data type that lets OpenGL work on multiple platforms. You can look at it like this:

typedef float GLfloat;
typedef double GLdouble;
typedef someType GLsomeType;

This is a necessary measure because not all systems have the same storage space for a char, for example.

Rotation

Rotation is a bit more complicated a transformation, because of the simple fact that it depends on two factors:

  • Pivot: Around what line in 3D space (or point in 2D space) we'll be rotating
  • Amount: By how much (in degrees or radians) we'll be rotating

Because of this, we first need to define rotation in a 2D space, and for that we need a bit of trigonometry.

Here's a quick reference:

[Figure: trigonometry quick reference]

These trigonometric functions can only be used inside a right-angled triangle (one of the angles has to be 90 degrees).

The base rotation matrix for rotating an object in 2D space around the vertex (0,0) by the angle A goes as follows:

$$
\begin{bmatrix} \cos A & -\sin A & 0 \\ \sin A & \cos A & 0 \\ 0 & 0 & 1 \end{bmatrix}
$$

Again, the 3rd row and 3rd column are there just in case we want to stack translation transformations on top of other transformations (which we will in OpenGL). It's OK if you don't fully grasp why they're there right now - things should clear up in the composite transformation example.

This was all in 2D space, now let's move on to 3D space. In 3D space we need to define a matrix that can rotate an object around any line.

As a wise man once said: "Keep it simple and stupid!" Fortunately, math magicians did for once keep it simple and stupid.

Every single rotation around a line can be broken down into a few transformations:

  • Rotation around the x axis
  • Rotation around the y axis
  • Rotation around the z axis
  • Utility translations (which will be touched upon later)

So, the only three things we need to construct for any 3D rotation are matrices that represent rotation around the x, y, and z axis by an angle A:

$$
R_x = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & \cos A & -\sin A & 0 \\ 0 & \sin A & \cos A & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
R_y = \begin{bmatrix} \cos A & 0 & \sin A & 0 \\ 0 & 1 & 0 & 0 \\ -\sin A & 0 & \cos A & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
R_z = \begin{bmatrix} \cos A & -\sin A & 0 & 0 \\ \sin A & \cos A & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

3D rotation is implemented with the following OpenGL function:

void glRotatef(GLfloat angle, GLfloat x, GLfloat y, GLfloat z);
  • angle: angle of rotation in degrees (0-360)
  • x,y,z: vector around which the rotation is executed
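As a quick sanity check, here is a NumPy sketch rotating a point 90 degrees around the z axis using the R_z matrix from above:

import numpy as np

A = np.radians(90)
Rz = np.array([[np.cos(A), -np.sin(A), 0, 0],
               [np.sin(A),  np.cos(A), 0, 0],
               [0,          0,         1, 0],
               [0,          0,         0, 1]])

p = np.array([1, 0, 0, 1])   # a point on the x axis, homogeneous coordinates
print(np.round(Rz @ p))      # [0. 1. 0. 1.] - rotated onto the y axis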

Scaling

Scaling is the act of multiplying any dimension of the target object by a scalar. This scalar can be <1 if we want to shrink the object, and it can be >1 if we want to enlarge the object.

Scaling can be described with the following matrix form:

$$
S = \begin{bmatrix} s_x & 0 & 0 & 0 \\ 0 & s_y & 0 & 0 \\ 0 & 0 & s_z & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

sx, sy, sz are the scalars that are multiplied with the x, y, and z values of the target object.

After we transform any coordinates with the scaling matrix S we get:

$$
[x,y,z]*S = [s_x*x,\ s_y*y,\ s_z*z]
$$

This transformation is particularly useful when scaling an object uniformly by a factor k (for example, k=2 means the resulting object is twice as big), which is achieved by setting sx=sy=sz=k:

$$
[x,y,z]*S = [k*x,\ k*y,\ k*z]
$$

A special case of scaling is known as reflection. It's achieved by setting either sx, sy or sz to -1. This just means we invert the sign of one of the object's coordinates.

In simpler terms, we put the object on the other side of the x, y or z axis.

This transformation can be modified to work for any plane of reflection, but we don't really need that for now.

Scaling is implemented with the following OpenGL function:

void glScalef(GLfloat sx, GLfloat sy, GLfloat sz);

Composite Transformations

Composite transformations are transformations which consist of more than one basic transformation (listed above). Transformations A and B are combined by matrix-multiplying the corresponding transformation matrices M_a and M_b.

This may seem like very straightforward logic, however there are some things that can be confusing. For example:

  • Matrix multiplication is not commutative:
$$
A*B \neq B*A \quad \text{(A and B being matrices)}
$$
  • Every single one of these transformations has an inverse transformation. An inverse transformation is a transformation that cancels out the original one:
$$
T = \begin{bmatrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
T^{-1} = \begin{bmatrix} 1 & 0 & 0 & -a \\ 0 & 1 & 0 & -b \\ 0 & 0 & 1 & -c \\ 0 & 0 & 0 & 1 \end{bmatrix} \quad
E = \begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$
$$
T*T^{-1} = E
$$
  • When we want to make an inverse of a composite transformation, we have to reverse the order of the elements used:
$$
(A*B*C)^{-1} = C^{-1}*B^{-1}*A^{-1}
$$

The point is - the topological order of matrix utilization is very important, just like ascending to a certain floor of a building.

If you're on the first floor, and you want to get to the fourth floor, first you need to go to the third floor and then to the fourth.

But if you want to descend back to the second floor, you would then have to go to the third floor and then to the second floor (in reverse topological order).
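Here is a small NumPy sketch demonstrating that the order matters (the helper functions just build the T and S matrices defined earlier):

import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

def scaling(sx, sy, sz):
    return np.diag([sx, sy, sz, 1.0])

T, S = translation(1, 0, 0), scaling(2, 2, 2)
p = np.array([1, 0, 0, 1])

print(T @ S @ p)  # [3. 0. 0. 1.] - scaled first, then translated
print(S @ T @ p)  # [4. 0. 0. 1.] - translated first, then scaled: different!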

Transformations that Involve a Referral Point

As previously mentioned, when a transformation has to be done relative to a specific point in space - for example, rotating around a referral point A=(a,b,c) in 3D space rather than the origin O=(0,0,0) - we first need to turn that referral point into the origin by translating everything by T(-a,-b,-c).

Then we can do whatever transformation we need, and when we're done, translate everything back by T(a,b,c), so that everything returns to its original position.

The matrix form of this example is:

$$
T*M*T^{-1} = \begin{bmatrix} 1 & 0 & 0 & a \\ 0 & 1 & 0 & b \\ 0 & 0 & 1 & c \\ 0 & 0 & 0 & 1 \end{bmatrix} * M * \begin{bmatrix} 1 & 0 & 0 & -a \\ 0 & 1 & 0 & -b \\ 0 & 0 & 1 & -c \\ 0 & 0 & 0 & 1 \end{bmatrix}
$$

Where M is the transformation we wish to do on an object.
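And a NumPy sketch of rotating the point (2, 1, 0) by 90 degrees around the z-parallel line through the referral point A = (1, 1, 0):

import numpy as np

def translation(tx, ty, tz):
    m = np.eye(4)
    m[:3, 3] = (tx, ty, tz)
    return m

angle = np.radians(90)
Rz = np.array([[np.cos(angle), -np.sin(angle), 0, 0],
               [np.sin(angle),  np.cos(angle), 0, 0],
               [0, 0, 1, 0],
               [0, 0, 0, 1]])

a, b, c = 1, 1, 0  # the referral point
M = translation(a, b, c) @ Rz @ translation(-a, -b, -c)
print(np.round(M @ np.array([2, 1, 0, 1])))  # [1. 2. 0. 1.]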

The whole point to learning these matrix operations is so that you can fully understand how OpenGL works.

Modeling Demonstration

With all of that out of the way, let's take a look at a simple modeling demonstration.

In order to do anything with OpenGL through Python, we'll use two modules - PyGame and PyOpenGL:

$ python3 -m pip install -U pygame --user
$ python3 -m pip install PyOpenGL PyOpenGL_accelerate

Because it's redundant to unload three books' worth of graphics theory on you, we'll be using the PyGame library. It will essentially shorten the process from project initialization to actual modeling and animating.

To start off, we need to import everything necessary from both OpenGL and PyGame:

import sys

import pygame as pg
from pygame.locals import *

from OpenGL.GL import *
from OpenGL.GLU import *
from OpenGL.GLUT import *  # provides glutInit and the glutSolid* shapes used below

In the following example, we can see that to model an unconventional object, all we need to know is how the complex object can be broken down into smaller and simpler pieces.

Because we still don't know what some of these functions do, I'll give some surface-level definitions in the code itself, just so you can see how OpenGL can be used. In the next article, all of these will be covered in detail - this is just to give you a basic idea of what working with OpenGL looks like:

def draw_gun():
    # Setting up materials, ambient, diffuse, specular and shininess properties are all
    # different properties of how a material will react in low/high/direct light for
    # example.
    ambient_coeffsGray = [0.3, 0.3, 0.3, 1]
    diffuse_coeffsGray = [0.5, 0.5, 0.5, 1]
    specular_coeffsGray = [0, 0, 0, 1]
    glMaterialfv(GL_FRONT, GL_AMBIENT, ambient_coeffsGray)
    glMaterialfv(GL_FRONT, GL_DIFFUSE, diffuse_coeffsGray)
    glMaterialfv(GL_FRONT, GL_SPECULAR, specular_coeffsGray)
    glMateriali(GL_FRONT, GL_SHININESS, 1)

    # OpenGL is very finicky when it comes to transformations, for all of them are global,
    # so it's good to separate the transformations which are used to generate the object
    # from the actual global transformations like animation, movement and such.
    # The glPushMatrix() ----code----- glPopMatrix() pair just means that the code between
    # these two function calls is isolated from the rest of your project.
    # Even inside this push-pop (pp for short) block, we can use nested pp blocks,
    # which are used to further isolate code in its entirety.
    glPushMatrix()

    glPushMatrix()
    glTranslatef(3.1, 0, 1.75)
    glRotatef(90, 0, 1, 0)
    glScalef(1, 1, 5)
    glScalef(0.2, 0.2, 0.2)
    glutSolidTorus(0.2, 1, 10, 10)
    glPopMatrix()

    glPushMatrix()
    glTranslatef(2.5, 0, 1.75)
    glScalef(0.1, 0.1, 1)
    glutSolidCube(1)
    glPopMatrix()

    glPushMatrix()
    glTranslatef(1, 0, 1)
    glRotatef(10, 0, 1, 0)
    glScalef(0.1, 0.1, 1)
    glutSolidCube(1)

    glPopMatrix()

    glPushMatrix()
    glTranslatef(0.8, 0, 0.8)
    glRotatef(90, 1, 0, 0)
    glScalef(0.5, 0.5, 0.5)
    glutSolidTorus(0.2, 1, 10, 10)
    glPopMatrix()

    glPushMatrix()
    glTranslatef(1, 0, 1.5)
    glRotatef(90, 0, 1, 0)
    glScalef(1, 1, 4)
    glutSolidCube(1)
    glPopMatrix()

    glPushMatrix()
    glRotatef(8, 0, 1, 0)
    glScalef(1.1, 0.8, 3)
    glutSolidCube(1)
    glPopMatrix()

    glPopMatrix()

def main():
    # Initialization of PyGame modules
    pg.init()
    # Initialization of Glut library
    glutInit(sys.argv)
    # Setting up the viewport, camera, background and display mode
    display = (800,600)
    pg.display.set_mode(display, DOUBLEBUF|OPENGL)
    glClearColor(0.1,0.1,0.1,0.3)
    gluPerspective(45, (display[0]/display[1]), 0.1, 50.0)
    gluLookAt(5,5,3,0,0,0,0,0,1)

    glTranslatef(0.0,0.0, -5)
    while True:
        # Listener for exit command
        for event in pg.event.get():
            if event.type == pg.QUIT:
                pg.quit()
                quit()

        # Clears the screen for the next frame to be drawn over
        glClear(GL_COLOR_BUFFER_BIT|GL_DEPTH_BUFFER_BIT)
        ############## INSERT CODE FOR GENERATING OBJECTS ##################
        draw_gun()
        ####################################################################
        # Function used to advance to the next frame essentially
        pg.display.flip()
        pg.time.wait(10)

This whole bunch of code yields us:

[Figure: the model rendered by the code above]

Conclusion

OpenGL is very old, and you won't find many tutorials online on how to properly use it and understand it because all the top dogs are already knee-deep in new technologies.

To properly use OpenGL, one needs to grasp the basic concepts in order to understand the implementations through OpenGL functions.

In this article, we've covered basic matrix operations (translation, rotation, and scaling) as well as composite transformations and transformations that involve a referral point.

In the next article, we'll be using PyGame and PyOpenGL to initialize a project, draw objects, animate them, etc.

PyCoder’s Weekly: Issue #393 (Nov. 5, 2019)


#393 – NOVEMBER 5, 2019


Python Adopts a 12-Month Release Cycle (PEP 602)

The CPython team moves to a consistent annual release schedule. More info here in PEP 602.
LWN.NET

Build a Mobile App With the Kivy Python Framework

Learn how to build a mobile application with Python and the Kivy GUI framework. You’ll discover how to develop an application that can run on your desktop as well as your phone. Then, you’ll package your app for iOS, Android, Windows, and macOS.
REAL PYTHON

Become a Python Guru With PyCharm


PyCharm is the Python IDE for Professional Developers by JetBrains providing a complete set of tools for productive Python, Web and scientific development. Be more productive and save time while PyCharm takes care of the routine →
JETBRAINS sponsor

The 2019 Python Developer Survey

“[We] aim to identify how the Python development world looks today and how it compares to the last two years. The results of the survey will serve as a major source of knowledge about the current state of the Python community and how it is changing over the years, so we encourage you to participate and make an invaluable contribution to this community resource. The survey takes approximately 10 minutes to complete.”
PSF BLOG

You Don’t Have to Migrate to Python 3

“Python 3 is great! But not every Python 2 project has to be migrated. There are different ways how you can prepare for the upcoming Python 2 End of Life.”
SEBASTIAN WITOWSKI

Why You Should Use python -m pip

Arguments for why you should always use python -m pip over pip/pip3 to control exactly which Python environment is used.
BRETT CANNON

Thank You, Guido

“After six and a half years, Guido van Rossum, the creator of Python, is leaving Dropbox and heading into retirement.”
DROPBOX.COM

Python Jobs

Django Full Stack Web Developer (Austin, TX, USA)

Zeitcode

Full Stack Developer (Toronto, ON, Canada)

Beanfield Metroconnect

Full Stack Software Developer (Remote)

Cybercoders

Full-Stack Python/Django Developer (Remote)

Kimetrica, LLC

Sr. Python Data Engineer (Remote)

TEEMA Solutions Group

More Python Jobs >>>

Articles & Tutorials

Cool New Features in Python 3.8

What does Python 3.8 bring to the table? Learn about some of the biggest changes and see how you can best make use of them.
REAL PYTHON video

Practical Log Viewers With Sanic and Elasticsearch

How to view log output from Docker containers in an automated CI/CD system in your GitHub pull requests, using Elasticsearch and a Python REST API built with Sanic.
CRISTIAN MEDINA • Shared by Cristian Medina

Python Developers Are in Demand on Vettery


Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY sponsor

Traffic Sign Classification With Keras and Deep Learning

How to train your own traffic sign classifier/recognizer capable of obtaining over 95% accuracy using Keras and Deep Learning.
ADRIAN ROSEBROCK

Python REST APIs With Flask, Connexion, and SQLAlchemy

In Part 4 of this series, you’ll learn how to create a Single-Page Application (SPA) to interface with the REST API backend that you built in Part 3. Your SPA will use HTML, CSS, and JavaScript to present this REST API to a user as a browser-based web application.
REAL PYTHON

How We Spotted and Fixed a Performance Degradation in Our Python Code

A post-mortem of how Omer’s team tracked down and fixed a performance regression introduced by a switch from Celery to RQ.
OMER LACHISH

Python: Better Typed Than You Think

MyPy assisted error handling, exception mechanisms in other languages, fun with pattern matching and type variance.
DMITRII GERASIMOV

Finding Definitions From a Source File and a Line Number in Python

Considering a filename and a line number, can you tell which function, method or class a line of code belongs to?
JULIEN DANJOU

Visual Studio Online: Web-Based IDE & Collaborative Code Editor

Microsoft announced Visual Studio Online, an online IDE and cloud-based development environment based on VS Code.
MICROSOFT.COM

Serving Static Files From Flask With WhiteNoise and Amazon CloudFront

This tutorial shows how to manage static files with Flask, WhiteNoise, and Amazon CloudFront.
MICHAEL HERMAN

Easily Build Beautiful Video Experiences Into Your Python App

Mux Video is an API-first platform, powered by data and designed by video experts. Test it out to build video for your Python app that streams beautifully, everywhere.
MUX sponsor

Projects & Code

Events

Python Miami

November 9 to November 10, 2019
PYTHONDEVELOPERSMIAMI.COM

PiterPy Meetup

November 12, 2019
PITERPY.COM


Happy Pythoning!
This was PyCoder’s Weekly Issue #393.

Catalin George Festila: Python 3.7.5 : About PEP 3107.

PEP 3107 introduces a syntax for adding arbitrary metadata annotations to Python functions. Function annotations attach an expression to a function's parameters:

def my_function(x: expression, y: expression = 5):
    ...

For example:

>>> import numpy as np
>>> def show(myvar: np.float64):
...     print(type(myvar))
...     print(myvar)
...
>>> show(1.1)
<class 'float'>
1.1
>>> def files(filename: str, dot='.') -> list:
...     print

Codementor: Top 5 Most Popular Web Programming Languages You Should Learn

We’re living in the digital era and information technology is developing quickly. In fact, there are various programming languages in use globally. As we enter 2020, here are the top 5 most popular web programming languages you should learn.

Zato Blog: Publish/subscribe, Zato services and asynchronous API integrations


This article introduces features built into Zato that let one take advantage of publish/subscribe topics and message queues in communication between Zato services, API clients and backend systems.

Overview

Let's start by recalling the basic means through which services can invoke each other.

  • Using self.invoke will invoke another service directly, in a blocking manner, with the calling service waiting for a response from the target one.
self.invoke('my.service', 'my.request')
  • With self.invoke_async, the calling service will invoke another one in background, without waiting for a response from the one being called.
self.invoke_async('my.service', 'my.request')

There are other variants too, like async patterns, and all of them work great, but what they all have in common is that the entire communication between services takes place in RAM. In other words, as long as Zato servers are up and running, services will be able to communicate.

What happens, though, when a service invokes another one and the server the other one is running on is abruptly stopped? The answer is that, without publish/subscribe, a given request will be lost irrevocably - after all, it was in RAM only so there is no way for it to be available across server restarts.

Sometimes this is desirable and sometimes it is not - publish/subscribe topics and message queues focus on scenarios of the latter kind. Let's discuss publish/subscribe, then.

Introducing publish/subscribe


In its essence, publish/subscribe is about building message-based workflows and processes revolving around asynchronous communication.

Publishers send messages to topics, subscribers have queues for data from topics and Zato ensures that messages will be delivered to subscribers even if servers are not running or if subscribers are not currently available.

Publish/subscribe and Zato services

Publish/subscribe, as it pertains to Zato services, is an extension of the idea of message topics and queues. In this case, it is internal or user-defined services that use the topics and queues. Whereas previously publishers and subscribers were external applications, here both of these roles are fulfilled by services running in Zato servers.

In the diagram above, an external API client invokes a channel service, for instance a REST one. Now, instead of using self.invoke or self.invoke_async, the channel service will use self.pubsub.publish to publish the received message to a backend service, which will in turn deliver it to external, backend systems.

The nice part of it is that, given how Zato publish/subscribe works, even if all servers are stopped and even if some of the recipients are not available, the message will be delivered eventually. That is the cornerstone of the design - if everything works smoothly, the message will be delivered immediately, but if anything goes wrong along the way, the message is retained and attempts are made periodically to deliver it to its destination.

Python code

As usual in Zato, the Python code needed is straightforward. Below, one service publishes a message to another - the programmer does not need to think about the inner details of publish/subscribe, about the locations of servers, re-deliveries or guaranteed delivery. Merely using self.pubsub.publish suffices for everything to work out of the box.

# -*- coding: utf-8 -*-

from __future__ import absolute_import, division, print_function, unicode_literals

# Zato
from zato.server.service import Service

# ################################################################################

class MyService(Service):
    name = 'pub.sub.source.1'

    def handle(self):

        # What service to publish the message to
        target = 'pub.sub.target.1'

        # Data to invoke the service with; here, we are just taking as-is
        # what we were given on input.
        data = self.request.raw_request

        # An optional correlation ID to assign to the published message;
        # if given, it can be anything as long as it is unique.
        cid = self.cid

        self.pubsub.publish(target, data=data, cid=cid)

        # Return the correlation ID to our caller
        self.response.payload = cid

# ################################################################################

class PubSubTarget(Service):
    name = 'pub.sub.target.1'

    def handle(self):

        # This is how the incoming message can be accessed
        msg = self.request.raw_request

        # Now, the message can be processed accordingly.
        # The actual code is skipped here - it will depend
        # on one's particular needs.

# ################################################################################

Asynchronous message flows

The kinds of message flows that publish/subscribe topics promote are called asynchronous because, seeing as in the general case it is not possible to guarantee that communication will be instantaneous, the initial callers (e.g. API clients), and possibly other participants too, should only submit their requests without expecting that responses, if any, will appear immediately.

Consider the simple case of topping up a pay-as-you-go mobile phone. Such a process will invariably require participation from at least several backend systems, all of which can be coordinated by Zato.

Let's say that the initial caller, the API client initiating the process, is a cell phone itself, sending a text message with a top-up code from a gift card.

Clearly, there is no need for the phone itself to actively wait for the response. With several backend systems involved, it may take anything from seconds to minutes before the card is actually recharged, and there is no need to keep all of the systems involved, including the cell phone, occupied.

At the same time, in case some of the backend systems are down and the initial request is lost, we cannot expect that the end user will keep purchasing more cards - we need some kind of a guaranteed delivery mechanism, which is precisely where Zato topics are useful with their ability to retain messages in case immediate delivery is not possible.

With topics, if a response is needed, instead of waiting in a synchronous manner, API callers can be given a correlation ID (CID) on output when they submit a request. A CID is just a random piece of string, uniquely identifying the request.

In the Python code example, self.cid is used for the CID. It is convenient to use it because it is already available for each service, and Zato knows how to use it in other parts of the platform - for instance, if the request arrives via HTTP (REST or SOAP), the correlation ID will be saved in Apache-style HTTP access logs. This facilitates answering typical support questions, such as 'What happened to this or that message, when was it processed or when was the response produced?'

We have a CID, but why is it useful? It is because it may be used as an ID to key messages in two ways:

  • API callers may save it and then be notified later on by Zato-based services that the request with a particular CID has already been processed

  • API callers may save it and then periodically query Zato-based services to check whether a given request has already been processed

Which style to use ultimately depends on the overall business and technical processes that Zato publish/subscribe and services support - sometimes it is more desirable to receive notifications, yet sometimes it is not possible at all, e.g. if the recipients are rarely accessible, for instance if they join networks irregularly.

Web-admin GUI

In parting words, it needs to be mentioned that a very convenient aspect of Zato services being part of the bigger publish/subscribe mechanism is that the web-admin GUI treats them just like any other endpoint: we can browse their topics, inspect the last messages published, check how many messages were published, or carry out any other tasks that it is capable of.

Zato Blog: Configuring Zato for high-performance Oracle Database connections


If you need to configure Zato for Oracle DB connections and you want to ensure the highest performance possible, this is the post that goes through the process step by step. Read on for details.

Overview

Note that Zato 3.1+ is required. The tasks involved are:

  • Installing Zato
  • Installing an Oracle client
  • Greenifying the Oracle client
  • Starting servers
  • Creating database connection definitions
  • Confirming the installation

Installing Zato

  • Choose your preferred operating system and follow the general installation instructions - Oracle DB connections will work the same no matter the system Zato runs on

  • Create a Zato environment with as many servers as required. Again, there are no Oracle-specific steps at this point yet.

Installing an Oracle client

  • Download an Oracle client and install it on all the systems with Zato servers

  • Add the client's installation path to LD_LIBRARY_PATH. For instance, if the client is installed to /opt/zato/instantclient_19_3, add the following to ~/.bashrc:

    export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:/opt/zato/instantclient_19_3

Greenifying the Oracle client

  • This step is crucial for achieving the highest performance of SQL queries

  • For each Zato server installed, open its server.conf file and find stanza [greenify]

  • If there is no such stanza in server.conf, create it yourself

  • Modify the stanza to greenify the libclntsh.so library - this is a library that the Oracle client ships with. For instance, if the client's installation path is /opt/zato/instantclient_19_3, the full path to the library is /opt/zato/instantclient_19_3/libclntsh.so

  • The stanza should read as below, assuming the installation path as above:

  [greenify]
  /opt/zato/instantclient_19_3/libclntsh.so=True
  • Note that entries in this stanza are followed by =True, which is a signal to Zato that a particular library should be processed

Starting servers

  • Start Zato servers
  • For each one, make sure that an entry such as the one below is saved in the server logs

    INFO - Greenified library `/opt/zato/instantclient_19_3/libclntsh.so.19.1`
    

Creating database connection definitions

  • Go to web-admin and create a new Oracle DB connection via Connections -> Outgoing -> SQL, as below:


Create a new SQL connection
  • Update the newly created connection's password by clicking Change password

  • Click Ping to confirm the connectivity to the remote server

  • This concludes the process, the connection is ready for use in Zato services now
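To show where this leads, here is a minimal sketch of a Zato service using the new connection (the connection name, table and service name are illustrative; see the Zato documentation for the exact SQL API details):

# -*- coding: utf-8 -*-

from zato.server.service import Service

class GetAccountCount(Service):
    name = 'demo.get-account-count'

    def handle(self):

        # Obtain the outgoing SQL connection defined in web-admin
        conn = self.outgoing.sql.get('My Oracle DB')

        # Open a session, run a query against the Oracle database
        # and close the session afterwards.
        session = conn.session()
        try:
            result = session.execute('SELECT count(*) FROM accounts')
            self.response.payload = result.scalar()
        finally:
            session.close()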

Hynek Schlawack: Python Application Dependency Management in 2018


We have more ways to manage dependencies in Python applications than ever. But how do they fare in production? Unfortunately this topic turned out to be quite polarizing and was at the center of a lot of heated debates. This is my attempt at an opinionated review through a DevOps lens.

Rene Dudfield: Draft 2 of, ^Let's write a unit test!^

So, I started writing this for people who want to 'contribute' to Community projects, and also Free Libre or Open source projects. Maybe you'd like to get involved, but are unsure of where to begin? Follow along with this tutorial, and peek at the end in the "what is a git for?" section for explanations of what some of the words mean.
Draft 1, 2018/07/18 - initial draft.
Draft 2, 2019/11/04 - two full unit test examples, assertions, making a pull request, use python 3 unittest substring search, "good first issue" is a thing now. Started "What is a git for? Jargon" section.


What's first? A test is first.

A unit test is a piece of code which tests that one thing works well, in isolation from other parts of software. In this guide, I'm going to explain how to write one using the standard python unittest module, for the pygame game library. You can apply this advice to most python projects, or free/libre open source projects in general.

A minimal test.

What pygame.draw.ellipse should do: http://www.pygame.org/docs/ref/draw.html#pygame.draw.ellipse
Where to put the test: https://github.com/pygame/pygame/blob/master/test/draw_test.py

def test_ellipse(self):
    import pygame.draw
    surf = pygame.Surface((320, 200))
    pygame.draw.ellipse(surf, (255, 0, 0), (10, 10, 25, 20))

All the test does is call the draw function on the surface with a color and a rectangle. That's it - a minimal, useful test. If you have a github account, you can even edit the test file in the browser to submit your PR. If you have email or internet access, you can email me or someone else on the internet and ask them to add it to pygame.

An easy test to write... but it provides really good value.
  • Shows an example of using the code.
  • Makes sure the function arguments are correct.
  • Makes sure the code runs on 20+ different platforms and python versions.
  • No "regressions" (Code that starts failing because of a change) can be introduced in the future. The code for draw ellipse with these arguments should not crash in the future.

But why write a unit test anyway?

Unit tests help pygame make sure things don't break on multiple platforms. When your code is running on dozens of CPUs and just as many operating systems things get a little tricky to test manually. So we write a unit test and let all the build robots do that work for us.

A great way to contribute to libre/free and open source projects is to contribute a test. Less bugs in the library means less bugs in your own code. Additionally, you get some public credit for your contribution.

The best part about it, is that it's a great way to learn python, and about the thing you are testing. Want to know how graphics algorithms should work, in lots of detail? Start writing tests for them.
The simplest test is to just call the function. Just calling it is a great first test. Easy, and useful.

At the time of writing there are 39 functions that aren't even called when running the pygame tests. Why not join me on this adventure?


Let's write a unit test!

In this guide I'm going to write a test for pygame.draw.ellipse to make sure a thick circle has the correct colors in it, and not lots of black spots. There are a bunch of tips and tricks to help you along your way. Whilst you can just edit a test in your web browser and submit a PR, it might be more comfortable to do it in your normal development environment.

Grab a fork, and let's dig in.

Set up git for github if you haven't already. Then you'll want to 'fork' pygame on https://github.com/pygame/pygame so you have your own local copy.
Note, we also accept patches by email, or on github issues. So you can skip all this github business if you want to. https://www.pygame.org/wiki/patchesandbugs
  • Fork the repository (see top right of the pygame repo page)
  • Make the change locally. Push to your copy of the fork.
  • Submit a pull request
So you've forked the repo, and now you can clone your own copy of the git repo locally.

$ git clone https://github.com/YOUR-USERNAME/pygame
$ cd pygame/
$ python test/draw_test.py
...
----------------------------------------------------------------------
Ran 3 tests in 0.007s

OK

You'll see all of the tests in the test/ folder.

Browse the test folder online: https://github.com/pygame/pygame/tree/master/test


If you have an older version of pygame, you can use this little program to see the issue.


There is some more extensive documentation in the test/README file. Including on how to write a test that requires manual interaction.


Standard unittest module.

pygame uses the standard python unittest module. With a few enhancements to make it nicer for developing C code.
Fun fact: pygame included the unit testing module before python did.
We will go over the basics in this guide, but for more detailed information please see:
https://docs.python.org/3/library/unittest.html



How to run a single test?

Running all the tests at once can take a while. What if you just want to run a single test?

If we look inside draw_test.py, each test is a class name, and a function. There is a "DrawModuleTest" class, and there should be a "def test_ellipse" function.

So, let's run the test...

~/pygame/ $ python test/draw_test.py DrawModuleTest.test_ellipse
Traceback (most recent call last):
...
AttributeError: type object 'DrawModuleTest' has no attribute 'test_ellipse'


Starting with failure. Our test isn't there yet.

Good. This fails. It's because we don't have a test called "def test_ellipse" in there yet. What there is, is a method called 'todo_test_ellipse'. This is an extension the pygame testing framework has, so we can easily see which functionality we still need to write tests for.

~/pygame/ $ python -m pygame.tests --incomplete
...
FAILED (errors=39)

Looks like there are currently 39 functions or methods without a test. Easy pickings.

Python 3 to the rescue.

Tip: Python 3.7 makes it easier to run tests with the magic "-k" argument. With this you can run tests that match a substring. So to run all the tests with "ellipse" in their name you can do this:

~/pygame/ $ python3 test/draw_test.py -k ellipse



Digression: Good first issue, low hanging fruit, and help wanted. 

Something that's easy to do.

A little digression for a moment... what is a good first issue?

Low hanging fruit is easy to get off the tree. You don't need a ladder, or robot arms with a claw on the end. So I guess that's what people are talking about in the programming world when they say "low hanging fruit".

pygame low hanging fruit


Many projects keep a list of "good first issue", "low hanging fruit", or "help wanted" labeled issues. Like the pygame "good first issue" list. Ones other people don't think will be all that super hard to do. If you can't find any on there labeled like this, then ask them. Perhaps they'll know of something easy to do, but haven't had the time to mark one yet.

One little trick is that writing a simple test is quite easy for most projects. So if they don't have any marked "low hanging fruit", or "good first issue" go take a look in their test folder and see if you can add something in there.

Don't be afraid to ask questions. If you look at an issue, and you can't figure it out, or get stuck on something, ask a nice question in there for help.

Digression: Contribution guide.

There's usually also a contribution guide.  Like the pygame Contribute wiki page. Or it may be called developer docs, or there may be a CONTRIBUTING.md file in the source code repository. Often there is a separate place the developers talk on. For pygame it is the pygame mailing list, but there is also a chat server which is a bit more informal.

A full example of a test.

The unittest module arranges tests inside functions that start with "test_" that live in a class.

Here is a full example:

import unittest


class TestEllipse(unittest.TestCase):

    def test_ellipse(self):
        import pygame.draw
        surf = pygame.Surface((320, 200))
        pygame.draw.ellipse(surf, (255, 0, 0), (10, 10, 25, 20))


if __name__ == '__main__':
    unittest.main()

You can save that in a file yourself (test_draw1.py, for example) and run it to see if it passes.
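If everything is set up, the output will look something like this (the exact time will vary):

$ python test_draw1.py
.
----------------------------------------------------------------------
Ran 1 test in 0.007s

OK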

Committing your test, and making a Pull Request.

Here you need to make sure you have "git" setup. Also you should have "forked" the repo you want to make changes on, and done a 'git clone' of it.

# create a "branch"
git checkout -b my-draw-test-branch

# save your changes locally.
git commit test/draw_test.py -m "test for the draw.ellipse function"

# push your changes
git push origin my-draw-test-branch


Here we see a screenshot of a terminal running the commands to commit something and push it up to a repo.
When you push your changes, it will print out some progress, and then give you a URL at which you can create a "pull request".

When you git push it prints out these instructions:
remote: Create a pull request for 'my-draw-test-branch' on GitHub by visiting:
remote: https://github.com/YOURUSERNAME/pygame/pull/new/my-draw-test-branch


You can also go to your online fork to create a pull request there.

Writing your pull request text.

When you create a pull request, you are saying "hey, I made these changes. Do you want them? What do you think? Do you want me to change anything? Is this ok?"

It's usually good to link your pull request to an "issue". Maybe you're starting to fix an existing problem with the code.


Different "checks" are run by robots to try and catch problems before the code is merged in.



Testing the result with assertEqual.


How about if we want to test whether the draw function actually draws something?
Put this code into test_draw2.py


import unittest


class TestEllipse(unittest.TestCase):

    def test_ellipse(self):
        import pygame.draw
        black = pygame.Color('black')
        red = pygame.Color('red')

        surf = pygame.Surface((320, 200))
        surf.fill(black)

        # The area the ellipse is contained in, is held by rect.
        #
        # 10 pixels from the left,
        # 11 pixels from the top.
        # 225 pixels wide.
        # 95 pixels high.
        rect = (10, 11, 225, 95)
        pygame.draw.ellipse(surf, red, rect)

        # To see what is drawn you can save the image.
        # pygame.image.save(surf, "test_draw2_image.png")

        # The ellipse should not draw over the black in the top left spot.
        self.assertEqual(surf.get_at((0, 0)), black)

        # It should be red in the middle of the ellipse.
        middle_of_ellipse = (125, 55)
        self.assertEqual(surf.get_at(middle_of_ellipse), red)


if __name__ == '__main__':
    unittest.main()


Red ellipse drawn at (10, 11, 225, 95)



What is a git for? Jargon.

jargon - internet slang used by programmers. Rather than use a paragraph to explain something, people made up all sorts of strange words and phrases.
git - for sharing versions of source code. It lets people work together, and provides tools for them to do so.
pull request (PR) - "Dear everyone, I request that you git pull my commits." A pull request is a conversation starter: "Hey, I made a PR. Can you have a look?" You "git push" your commits (upload your changes), and then ask the project to pull them in.
unit test - does this thing (unit) even work (test)?!!? A program to test if another program works (how you think it should). Rather than test manually over and over again, a unit test can be written and then automatically test your code. A unit test is a nice example of how to use what you've made too. So when you do a pull request the people looking at it know what the code is supposed to do, and that the machine has already checked the code works for them.
assert - "assert 1 == 1". An assert is saying something is true. "I assert that one equals one!". You can also assert things about variables.
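As a quick illustration of plain asserts next to the unittest spelling (the variable name here is arbitrary):

width = 320
assert width == 320  # plain Python assert
assert width > 0, "width must be positive"  # with a failure message

# Inside a unittest.TestCase method the same check is usually spelled:
#     self.assertEqual(width, 320)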


This is a draft remember? So what is there left to finish in this doc?


Any feedback? Leave an internet comment. Or send me an electronic mail to: rene@pygame.org







pygame book

This article will be part of a book called "pygame 4000". A book dedicated to the joy of making software for making. Teaching collaboration, low level programming in C, high level programming in Python, GPU graphics programming with a shader language, design, music, tools, quality, and shipping.

It's a bit of a weird book. There's a little bit of swearing in it (consider yourself fucking warned), and all profits go towards pygame development (the library, the community, and the website).

Rene Dudfield: post modern C tooling - draft 6

Contemporary C tooling for making higher quality C, faster or more safely.

DRAFT 0 - 10/11/18, 
DRAFT 1 - 9/16/19, 7:19 PM, I'm still working on this, but it's already useful and I'd like some feedback - so I decided to share it early.
DRAFT 2 - 10/1/19, mostly additions to static analysis tools.
DRAFT 3 - 10/4/19, updates on build systems, package management, and complexity analysis. 
DRAFT 4 - 10/6/19, run time dynamic verification and instrumentation, sanitizers (asan/ubsan/etc), performance tools, static analyzers.
DRAFT 5 - C interpreter(s). 
DRAFT 6 - 11/6/19, mention TermDebug vim,  more windows debugging tools, C drawing for intro.



In 2001 or so people started using the phrase "Modern C++". So now that it's 2019, I guess we're in the post modern era? Anyway, this isn't a post about C++ code, but some of this information applies there too.


No logo, but it's used everywhere.


Welcome to the post modern era.

Some of the C++ people have pulled off one of the cleverest and sneakiest tricks ever. They required 'modern' C99 and C11 features in 'recent' C++ standards. Microsoft has famously still clung onto some 80s version of C with their compiler for the longest time. So it's been a decade of hacks for people writing portable code in C. For a while I thought we'd be stuck in the 80s with C89 forever. However, now that some C99 and C11 features are more widely available in the Microsoft compiler, we can use these features in highly portable code (but forget about C17/C18 ISO/IEC 9899:2018/C2X stuff!!). Check out the "New" Features in C talk, and the Modern C book for more details.

So, we have some pretty new language features in C with C11.  But what about tooling?

Tools and protection for our feet.

C, whilst a workhorse used in everything from toasters, trains, phones, and web browsers (everything, basically), is also an excellent tool for shooting yourself in the foot.

Noun

footgun (plural footguns)
  1. (informal, humorous, derogatory) Any feature whose addition to a product results in the user shooting themselves in the foot. C.

Tools like linters, test coverage checkers, static analyzers, memory checkers, documentation generators, thread checkers, continuous integration, nice error messages, ... and such help protect our feet.

How do we do continuous delivery with a language that lets us do the most low-level footgun-y things ever? On a dozen CPU architectures, 32 bit, 64 bit, little endian, big endian, 64 bit with 32 bit pointers (wat?!?), with multiple compilers, on a dozen different OSes, with dozens of different versions of your dependencies?

Surely there won't be enough time to do releases, and have time left to eat my vegan shaved ice dessert after lunch?



Debuggers

"Give me 15 minutes, and I'll change your mind about GDB." --
https://www.youtube.com/watch?v=PorfLSr3DDI
Firstly, did you know gdb has a curses-based 'GUI' which works in a terminal? It's quite a bit easier to use than the command line text interface. It's called TUI. It's built in, and uses emacs key bindings.

But what if you are used to VIM key bindings? cgdb to the rescue.

https://cgdb.github.io/

VIM has integrated gdb debugging with TermDebug since version 8.1.

Also, there's a fairly easy to use web based front end for GDB called gdbgui (https://www.gdbgui.com/), for those who don't use an IDE with debugging support built in (such as Visual Studio by Microsoft or XCode by Apple).





Reverse debugger

Normally a program runs forwards. But what about when you are debugging and you want to run the program backwards?

Set breakpoints and data watchpoints and quickly reverse-execute to where they were hit.

How do you tame non determinism to allow a program to run the same way it did when it crashed? In C, and with threads, sometimes it's really hard to reproduce problems.

rr helps with this. It's actual magic.

https://rr-project.org/






LLDB - the LLVM debugger.

Apart from the ever improving gdb, there is a new debugger from the LLVM people - lldb (https://lldb.llvm.org/).


IDE debugging

Visual Studio by Microsoft, and XCode by Apple are the two heavyweights here.

The free Visual Studio Code also supports debugging with GDB. https://code.visualstudio.com/docs/languages/cpp

Sublime is another popular editor, and there is good GDB integration for it too in the SublimeGDB package (https://packagecontrol.io/packages/SublimeGDB).

Windows debugging

Suppose you want to do post-mortem debugging? With procdump and WinDbg you can.

Launch a process and then monitor it for exceptions:
C:\>procdump -e 1 -f "" -x c:\dumps consume.exe
This makes some process dumps when it crashes, which you can then open with WinDbg (https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/debugging-using-windbg-preview).


Portable building, and package management

C doesn't have a package manager... or does it?

Ever since Debian dpkg, Redhat rpm, and Perl started doing package management in the early 90s, people worldwide have been able to share pieces of software more easily. Following those systems, many other systems like Ruby gems, JavaScript npm, and Python's cheese shop came into being, allowing many more people to share code easily.

But what about C? How can we define dependencies on different 'packages' or libraries and have them compile on different platforms?

How do we build with Microsoft's compiler, with gcc, with clang, or with Intel's C compiler? How do we build on Mac, on Windows, on Ubuntu, on Arch Linux? Sometimes we want to use an Integrated Development Environment (IDE) because they provide lots of nice tools, but that might mean three IDEs (XCode, Microsoft Visual C, CLion, ...) depending on the platform, and we don't want to have to keep several IDE project files up to date. We also want to integrate nicely with OS packagers like Debian and FreeBSD, so people can use apt-get install for their dependencies if they want. And we want to cross compile code on our beefy workstations to work on microcontrollers or slower low memory systems (like earlier Raspberry Pi systems).



The Meson Build System.

If CMake is modern, then The Meson Build System (https://mesonbuild.com/index.html) is post modern.
"Meson is an open source build system meant to be both extremely fast, and, even more importantly, as user friendly as possible.
The main design point of Meson is that every moment a developer spends writing or debugging build definitions is a second wasted. So is every second spent waiting for the build system to actually start compiling code."
Its first major user was GStreamer, a multi platform multimedia toolkit which is highly portable. Now it is especially popular in the FreeDesktop world, with projects like gstreamer, GTK, and systemd using it, amongst many others.

The documentation is excellent, and it's very fast compared to autotools or cmake.
https://www.youtube.com/watch?v=gHdTzdXkhRY


Example meson.build for the example project polysnake (https://github.com/jpakkane/polysnake), "A Python extension module that uses C, C++, Fortran and Rust":

project('polysnake', 'c', 'cpp', 'rust', 'fortran',
  default_options : ['cpp_std=c++14'],
  license : 'GPL3+')

py3_mod = import('python3')
py3_dep = dependency('python3')

# Rust integration is not perfect.
rustlib = static_library('func', 'func.rs')

py3_mod.extension_module('polysnake',
  'polysnake.c',
  'func.cpp',
  'ffunc.f90',
  link_with : rustlib,
  dependencies : py3_dep)

IDEs are supported by exporting to XCode+Visual Studio, and they provide their own interface (which a few less well known IDEs are starting to use).


Conan package manager

There are several packaging tools for C these days, but one of the top contenders is Conan (https://conan.io/).
"Conan, the C / C++ Package Manager for Developers  The open source, decentralized and multi-platform package manager to create and share all your native binaries."
What does a CMake conan project look like? (https://github.com/conan-io/hello)
What does a conan Meson project look like? (https://docs.conan.io/en/latest/reference/build_helpers/meson.html)


CMake

"Modern CMake" is the build tool of choice for many C projects.

Just don't read the official docs, or the official book - they're quite out of date.
An Introduction to Modern CMake (https://cliutils.gitlab.io/modern-cmake/)
CGold: The Hitchhiker’s Guide to the CMake (https://cgold.readthedocs.io/en/latest/)

"CMake is a meta build system. It can generate real native build tool files from abstracted text configuration. Usually such code lives in CMakeLists.txt files."

Around 2015-2016 a bunch of IDEs got support for CMake: Microsoft Visual Studio, CLion, QtCreator, KDevelop, and Android Studio (NDK). And a lot of people tried extra hard to like it, and a lot of C projects started supporting it.

Apart from wide IDE support, it is also supported quite well by package managers like VCPkg and Conan.


Interpreter and REPL

Usually C is compiled.
Bic is an interpreter for C (https://github.com/hexagonal-sun/bic).

bic: A C interpreter and API explorer

Additionally there is "Cling" which is based on the LLVM infrastructure and can even do C++.
https://github.com/root-project/cling




Testing coverage.

Tests let us know that a certain function is running OK. But which code do we still need to test?

gcov - a tool you can use in conjunction with GCC to test code coverage in your programs.
lcov - a graphical front-end for GCC's coverage testing tool gcov.


Instructions from codecov.io on how to use it with C, and clang or gcc. (codecov.io is free for public open source repos).
https://github.com/codecov/example-c


Here's documentation for how CPython gets coverage results for C.
 https://devguide.python.org/coverage/#measuring-coverage-of-c-code-with-gcov-and-lcov

Here is the CPython Travis CI configuration they use.
https://github.com/python/cpython/blob/master/.travis.yml#L69
    - os: linux
      language: c
      compiler: gcc
      env: OPTIONAL=true
      addons:
        apt:
          packages:
            - lcov
            - xvfb
      before_script:
        - ./configure
        - make coverage -s -j4
        # Need a venv that can parse covered code.
        - ./python -m venv venv
        - ./venv/bin/python -m pip install -U coverage
        - ./venv/bin/python -m test.pythoninfo
      script:
        # Skip tests that re-run the entire test suite.
        - xvfb-run ./venv/bin/python -m coverage run --pylib -m test --fail-env-changed -uall,-cpu -x test_multiprocessing_fork -x test_multiprocessing_forkserver -x test_multiprocessing_spawn -x test_concurrent_futures
      after_script:  # Probably should be after_success once test suite updated to run under coverage.py.
        # Make the `coverage` command available to Codecov w/ a version of Python that can parse all source files.
        - source ./venv/bin/activate
        - make coverage-lcov
        - bash <(curl -s https://codecov.io/bash)




Static analysis

"Static analysis has not been helpful in finding bugs in SQLite. More bugs have been introduced into SQLite while trying to get it to compile without warnings than have been found by static analysis." -- https://www.sqlite.org/testing.html

According to David Wheeler in "How to Prevent the next Heartbleed" (https://dwheeler.com/essays/heartbleed.html#static-not-found), only one static analysis tool found the Heartbleed vulnerability (the security problem with a logo, a website, and a marketing team) before it was known. This tool is called CQual++. One reason projects don't use these tools is that they have been (and some still are) hard to use. The LLVM project only started using the clang static analysis tool on its own projects recently, for example. However, since Heartbleed in 2014 tools have improved in both usability and their ability to detect issues.

I think it's generally accepted that static analysis tools are incomplete, in that each tool does not guarantee detecting every problem or even always detecting the same issues all the time. Using multiple tools can therefore find more types of problems.


Compilers are kind of smart

The most basic static analysis tools are compilers themselves. Over the years they have gained more and more checks which used to only be available in dedicated Static Analyzers and Lint tools.
"The reason variable shadowing and format-string mismatches can be detected reliably and quickly is because both gcc and clang do this detection as part of their regular compile." -- Bruce Dawson
Here we see two issues (which used to be) very common in C being detected by the two most popular C compilers themselves.

Compiling code with gcc "-Wall -Wextra -pedantic" options catches quite a number of potential or actual problems (https://gcc.gnu.org/onlinedocs/gcc/Warning-Options.html). Other compilers check different things as well. So using multiple compilers with their warnings can find plenty of different types of issues for you.

Compiler warnings should be turned into errors on CI.

By getting your warnings down to zero on Continuous Integration there is less chance of new warnings being introduced that are missed in code review. Note that there are problems with distributing your code with warnings turned into errors, so that should only be done on CI, not in released builds.

Some points for people implementing this:
  • -Werror can be used to turn warnings into errors
  • specific warnings can be kept as warnings, e.g. -Wno-error=unknown-pragmas
  • should run only in CI, and not in the build by default. See werror-is-not-your-friend (https://embeddedartistry.com/blog/2017/5/3/-werror-is-not-your-friend).
  • Use most recent gcc, and most recent clang (change two travis linux builders to do this).
  • first have to fix all the warnings (and hopefully not break something in the process).
  • consider adding extra warnings to gcc: "-Wall -Wextra -Wpedantic" See C tooling
  • Also the Microsoft compiler MSVC on Appveyor can be configured to treat warnings as errors. The /WX argument option treats all warnings as errors. See MSVC warning levels
  • For MSVC on Appveyor, /wdnnnn Suppresses the compiler warning that is specified by nnnn. For example, /wd4326 suppresses compiler warning C4326.
If you run your code on different CPU architectures, these compilers can find even more issues. For example 32 bit and 64 bit, Big Endian and Little Endian.


Static analysis tool overview.

Static analysis can be much slower than the analysis usually provided by compilation with gcc and clang. It trades off more CPU time for (hopefully) better results.


cppcheck focuses on low false positives and can find many actual problems.
Coverity, a commercial static analyzer, free for open source developers
CppDepend, a commercial static analyzer based on Clang
codechecker, https://github.com/Ericsson/codechecker
cpplint, a command-line tool to check C/C++ files for style issues following Google's C++ style guide.
Awesome static analysis, a page full of static analysis tools for C/C++. https://github.com/mre/awesome-static-analysis#cc
PVS-Studio, a commercial static analyzer, free for open source developers.


The Clang Static Analyzer

The Clang Static Analyzer (http://clang-analyzer.llvm.org/) is a free to use static analyzer that is quite high quality.
The Clang Static Analyzer is a source code analysis tool that finds bugs in C, C++, and Objective-C programs. Currently it can be run either as a standalone tool or within Apple Xcode. The standalone tool is invoked from the command line, and is intended to be run in tandem with a build of a codebase.
The talk "Clang Static Analysis" (https://www.youtube.com/watch?v=UcxF6CVueDM) talks about an LLVM tool called codechecker (https://github.com/Ericsson/codechecker).

On MacOS an up to date scan-build and scan-view is included with brew install llvm.

SCANBUILD=`ls /usr/local/Cellar/llvm/*/bin/scan-build`
$SCANBUILD -V python3 setup.py build

On Ubuntu you can install scan-view with apt-get install clang-tools.


cppcheck 

Cppcheck is an analysis tool for C/C++ code. It provides unique code analysis to detect bugs and focuses on detecting undefined behaviour and dangerous coding constructs. The goal is to detect only real errors in the code (i.e. have very few false positives).

The quote below was particularly interesting to me because it echoes the sentiments of other developers, that testing will find more bugs. But here is one of the static analysis tools saying so as well.
"You will find more bugs in your software by testing your software carefully, than by using Cppcheck."
To install cppcheck:
http://cppcheck.sourceforge.net/ and https://github.com/danmar/cppcheck
The manual can be found here: http://cppcheck.net/manual.pdf

brew install cppcheck bear
sudo apt-get install cppcheck bear

To run cppcheck on C code:
You can use the bear (build ear) tool to record a compilation database (compile_commands.json), so cppcheck can know which C files and header files you are using.

# call your build tool, like `bear make` to record. 
# See cppcheck manual for other C environments including Visual Studio.
bear python setup.py build
cppcheck --quiet --language=c --enable=all -D__x86_64__ -D__LP64__ --project=compile_commands.json

It does seem to find some errors and style improvements that other tools do not suggest. Note that you can control the level of issues reported, from errors only up to portability and style issues and more. See cppcheck --help and the manual for more details about --enable options.

For example these ones from the pygame code base:
[src_c/math.c:1134]: (style) The function 'vector_getw' is never used.
[src_c/base.c:1309]: (error) Pointer addition with NULL pointer.
[src_c/scrap_qnx.c:109]: (portability) Assigning a pointer to an integer is not portable.
[src_c/surface.c:832] -> [src_c/surface.c:819]: (warning) Either the condition '!surf' is redundant or there is possible null pointer dereference: surf.

/analyze in Microsoft Visual Studio

Visual studio by Microsoft can do static code analysis too. ( https://docs.microsoft.com/en-us/visualstudio/code-quality/code-analysis-for-c-cpp-overview?view=vs-2017)

"Using SAL annotations to reduce code defects." (https://docs.microsoft.com/en-us/visualstudio/code-quality/using-sal-annotations-to-reduce-c-cpp-code-defects?view=vs-2019)
"In GNU C and C++, you can use function attributes to specify certain function properties that may help the compiler optimize calls or check code more carefully for correctness."
https://gcc.gnu.org/onlinedocs/gcc/Function-Attributes.html

Custom static analysis for API usage

Probably one of the most useful parts of static analysis is being able to write your own checks. This allows you to do checks specific to your code base for which general checks will not work. One example of this is the gcc cpychecker (https://gcc-python-plugin.readthedocs.io/en/latest/cpychecker.html). With this, gcc can find API usage issues within CPython extensions written in C, including reference counting bugs, NULL pointer de-references, and other types of issues. You can write custom checkers for LLVM as well; see the "Checker Developer Manual" (https://clang-analyzer.llvm.org/checker_dev_manual.html).

There is a list of GCC plugins (https://gcc.gnu.org/wiki/plugins) among them are some Linux security plugins by grsecurity.


Runtime checks and Dynamic Verification

Dynamic verification tools examine code whilst it is running. By running your code under them you can detect bugs, either by testing manually or by running your automated tests under their watchful eye. Runtime dynamic verification tools can detect certain errors that static analysis tools can't.

Some of these tools are quite easy to add to a build in Continuous Integration(CI). So you can run your automated tests with some extra dynamic runtime verification enabled.

Take a look at how easy they are to use:
./configure CFLAGS="-fsanitize=address,undefined -g" LDFLAGS="-fsanitize=address,undefined"
make
make check

Address Sanitizer

Doing a test run with an address sanitizer apparently helps to detect various types of bugs.
AddressSanitizer is a fast memory error detector. It consists of a compiler instrumentation module and a run-time library. The tool can detect the following types of bugs:
  • Out-of-bounds accesses to heap, stack and globals
  • Use-after-free
  • Use-after-return (runtime flag ASAN_OPTIONS=detect_stack_use_after_return=1)
  • Use-after-scope (clang flag -fsanitize-address-use-after-scope)
  • Double-free, invalid free
  • Memory leaks (experimental)
How to compile a python C extension with clang on MacOS:
LDFLAGS="-g -fsanitize=address" CFLAGS="-g -fsanitize=address -fno-omit-frame-pointer" python3 setup.py install



Undefined Behaviour Sanitizer

From https://clang.llvm.org/docs/UndefinedBehaviorSanitizer.html
UndefinedBehaviorSanitizer (UBSan) is a fast undefined behavior detector. UBSan modifies the program at compile-time to catch various kinds of undefined behavior during program execution, for example:
  • Using misaligned or null pointer
  • Signed integer overflow
  • Conversion to, from, or between floating-point types which would overflow the destination
You can use the Undefined Behaviour Sanitizer with clang and gcc. Here is the gcc documentation for Instrumentation Options and UBSAN (https://gcc.gnu.org/onlinedocs/gcc/Instrumentation-Options.html).
 
From https://www.sqlite.org/testing.html
To help ensure that SQLite does not make use of undefined or implementation defined behavior, the test suites are rerun using instrumented builds that try to detect undefined behavior. For example, test suites are run using the "-ftrapv" option of GCC. And they are run again using the "-fsanitize=undefined" option on Clang. And again using the "/RTC1" option in MSVC.
To compile a python C extension with a UBSAN with clang on Mac do:
LDFLAGS="-g -fsanitize=undefined" CFLAGS="-g -fsanitize=undefined -fno-omit-frame-pointer" python3 setup.py install

Microsoft Visual Studio Runtime Error Checks

The Microsoft Visual Studio compiler can use the Run Time Error Checks feature to find some issues. /RTC (Run-Time Error Checks) (https://docs.microsoft.com/en-us/cpp/build/reference/rtc-run-time-error-checks?view=vs-2019)

From How to: Use Native Run-Time Checks (https://docs.microsoft.com/en-us/visualstudio/debugger/how-to-use-native-run-time-checks?view=vs-2019)
  • Stack pointer corruption.
  • Overruns of local arrays.
  • Stack corruption.
  • Dependencies on uninitialized local variables.
  • Loss of data on an assignment to a shorter variable.

App Verifier

"Any Windows developers that are listening to this: if you’re not using App Verifier, you are making a mistake." -- Bruce Dawson
Strangely, App Verifier does not have very good online documentation. The best article available online about it is: Increased Reliability Through More Crashes (https://randomascii.wordpress.com/2011/12/07/increased-reliability-through-more-crashes/)
Application Verifier (AppVerif.exe) is a dynamic verification tool for user-mode applications. This tool monitors application actions while the application runs, subjects the application to a variety of stresses and tests, and generates a report about potential errors in application execution or design.
https://docs.microsoft.com/en-us/windows-hardware/drivers/debugger/application-verifier
https://docs.microsoft.com/en-us/windows/win32/win7appqual/application-verifier
https://docs.microsoft.com/en-us/security-risk-detection/concepts/application-verifier

  • Buffer over runs
  • Use after free issues
  • Thread issues including using TLS properly.
  • Low resource handling
  • Race conditions
If you have a memory corruption bug, App Verifier might be able to help you find it. If you are using Windows APIs wrong, have some threading issues, or want to make sure your app runs under harsh conditions -- App Verifier might help you find it.

Related to App Verifier is the PageHeap tool (https://support.microsoft.com/en-us/help/286470/how-to-use-pageheap-exe-in-windows-xp-windows-2000-and-windows-server). It helps you find memory heap corruptions on Windows.




Performance profiling and measurement

“The objective (not always attained) in creating high-performance software is to make the software able to carry out its appointed tasks so rapidly that it responds instantaneously, as far as the user is concerned.”  Michael Abrash. “Michael Abrash’s Graphics Programming Black Book.”
Reducing the energy usage and run time requirements of apps is often a requirement, and sometimes a hard necessity. For a mobile or embedded application it can mean the difference between being able to run the program at all. Performance can be directly related to user happiness, but also to the financial performance of a piece of software.

But how do we measure the performance of a program, and how do we know which parts of a program need improvement? Tooling can help.

Valgrind

Valgrind has its own section here because it does lots of different things for us. It's a great tool, or rather set of tools, for improving your programs. It used to be available only on Linux, but is now also available on MacOS.

Apparently Valgrind would have caught the heartbleed issue if it was used with a fuzzer.

http://valgrind.org/docs/manual/quick-start.html

Apple Performance Tools

Apple provides many performance related development tools. Along with the gcc and llvm based tools, the main tool is called Instruments. Instruments (part of Xcode) allows you to record and analyse programs for lots of different aspects of performance - including graphics, memory activity, file system, energy and other program events. Being able to record and analyse different types of events together makes it convenient to find performance issues.


LLVM performance tools.

Many of the low level parts of the tools in XCode are made open source through the LLVM project. See the "LLVM Machine Code Analyzer" (https://llvm.org/docs/CommandGuide/llvm-mca.html) as one example, and also the LLVM XRay instrumentation (https://llvm.org/docs/XRayExample.html).

There's also an interesting talk on XRay here "XRay in LLVM: Function Call Tracing and Analysis" (https://www.youtube.com/watch?v=jyL-__zOGcU) by Dean Michael Berris.


GNU/Linux tools



Microsoft performance tools.


Intel performance tools.

https://software.intel.com/en-us/vtune




Caching builds

https://ccache.samba.org/

ccache is very useful for reducing the compile time of large C projects, especially when you are doing a 'rebuild from scratch'. This is because ccache can reuse the cached compilation of files that have not changed.
http://itscompiling.eu/2017/02/19/speed-cpp-compilation-compiler-cache/

This is also useful for speeding up CI builds, and especially when large parts of the code base rarely change.


Distributed building.


distcc https://github.com/distcc/distcc
icecream https://github.com/icecc/icecream


Complexity of code.

"Complex is better than complicated. It's OK to build very complex software, but you don't have to build it in a complicated way. Lizard is a free open source tool that analyse the complexity of your source code right away supporting many programming languages, without any extra setup. It also does code clone / copy-paste detection."
Lizard is a modern complexity command line tool,
that also has a website UI: http://www.lizard.ws/
https://github.com/terryyin/lizard

# install lizard
python3 -m pip install lizard
# show warnings only and include/exclude some files.
lizard src_c/ -x"src_c/_sdl2*" -w 

# Can also run it as a python module.
python3 -m lizard src_c/ -x"src_c/_sdl2*" -w

# Show a full report, not just warnings (-w).
lizard src_c/ -x"src_c/_sdl2*" -x"src_c/_*" -x"src_c/SDL_gfx*" -x"src_c/pypm.c"

Want people to understand your code? Want Static Analyzers to understand your code better? Want to be able to test your code more easily? Want your code to run faster because of fewer branches? Then you may want to find complicated code and refactor it.

Lizard can also make a pretty word cloud from your source.

Lizard complexity analysis can be run in Continuous Integration (CI). You can also give it lists of functions to ignore and skip if you can't refactor some function right away. Perhaps you want to stop new complex functions from entering your codebase? To do that via CI, make a suppression list of all the current warnings, then make your CI use that list and fail if there are new warnings.
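Since lizard is itself a Python package, you can also drive it programmatically for a custom CI gate. A minimal sketch, assuming lizard's documented analyze_file API; the file path and complexity threshold here are only examples:

import sys

import lizard

# Analyze one C file and collect functions over a chosen complexity threshold.
result = lizard.analyze_file("src_c/surface.c")
too_complex = [f for f in result.function_list if f.cyclomatic_complexity > 15]
for func in too_complex:
    print(func.name, func.cyclomatic_complexity)

# Fail the CI job if anything crossed the line.
sys.exit(1 if too_complex else 0)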



Testing your code on different OS/architectures.

Sometimes you need to be able to fix an issue on an OS or architecture that you don't have access to. Luckily these days there are many tools available to quickly use a different system through emulation, or container technology.


Vagrant
Virtualbox
Docker
Launchpad, compile and run tests on many architectures.
Mini cloud (ppc machines for debugging)

If you pay Travis CI, they allow you to connect to the testing host with ssh when a test fails.


Code Formatting

clang-format

clang-format - rather than manually fixing various formatting errors found with a linter, many projects just use clang-format to format the code into some coding standard.



Services

LGTM is an 'automated code review tool' with github (and other code repos) support. https://lgtm.com/help/lgtm/about-automated-code-review

Coveralls provides a store for test coverage results with github (and other code repos) support. https://coveralls.io/




Coding standards for C

There are lots of coding standards for C, and there are tools to check them.


An older set of standards is MISRA C (https://en.wikipedia.org/wiki/MISRA_C), which aims to facilitate code safety, security, and portability for embedded systems.

The Linux Kernel Coding standard (https://www.kernel.org/doc/html/v4.10/process/coding-style.html) is well known mainly because of the popularity of the Linux Kernel. But this is mainly concerned with readability.

A newer one is the CERT C coding standard (https://wiki.sei.cmu.edu/confluence/display/seccode/SEI+CERT+Coding+Standards), and it is a secure coding standard (not a safety one).

The website for the CERT C coding standard is quite amazing. It links to tools that can detect each of the problems automatically (when they can be). It is very well researched, and links each problem to other relevant standards, and gives issues priorities. A good video to watch on CERT C is "How Can I Enforce the SEI CERT C Coding Standard Using Static Analysis?" (https://www.youtube.com/watch?v=awY0iJOkrg4). They do releases of the website, which is edited as a wiki. At the time of writing the last release into book form was in 2016.







How are other projects tested?

We can learn a lot from how other C projects go about their business today.
Also, thanks to CI testing tools defining things in code we can see how automated tests are run on services like Travis CI and Appveyor.

SQLite

"How SQLite Is Tested"

Curl

"Testing Curl"
https://github.com/curl/curl/blob/master/.travis.yml

Python

"How is CPython tested?"
https://github.com/python/cpython/blob/master/.travis.yml

OpenSSL

"How is OpenSSL tested?"

https://github.com/openssl/openssl/blob/master/.travis.yml
They use Coverity too: https://github.com/openssl/openssl/pull/9805
https://github.com/openssl/openssl/blob/master/fuzz/README.md

libsdl

"How is SDL tested?" [No response]

Linux

https://stackoverflow.com/questions/3177338/how-is-the-linux-kernel-tested
https://www.linuxjournal.com/content/linux-kernel-testing-and-debugging

Haproxy

https://github.com/haproxy/haproxy/blob/master/.travis.yml



There is some discussion of Post Modern C Tooling on the "C_Programming" reddit forum.



pygame book

This article will be part of a book called "pygame 4000". A book dedicated to the joy of making software for making. Teaching collaboration, low level programming in C, high level programming in Python, GPU graphics programming with a shader language, design, music, tools, quality, and shipping (software).

It's a bit of a weird book. There's a little bit of swearing in it (consider yourself fucking warned), and all profits go towards pygame development (the library, the community, and the website).

Python Bytes: #155 Guido van Rossum retires

Talk Python to Me: #237 A gut feeling about Python

Let's start with a philosophical question: Are you human? Are you sure? We could begin to answer the question physically. Are you made up of cells that would typically be considered as belonging to the human body?

Real Python: When to Use a List Comprehension in Python


Python is famous for allowing you to write code that’s elegant, easy to write, and almost as easy to read as plain English. One of the language’s most distinctive features is the list comprehension, which you can use to create powerful functionality within a single line of code. However, many developers struggle to fully leverage the more advanced features of a list comprehension in Python. Some programmers even use them too much, which can lead to code that’s less efficient and harder to read.

By the end of this tutorial, you’ll understand the full power of Python list comprehensions and how to use their features comfortably. You’ll also gain an understanding of the trade-offs that come with using them so that you can determine when other approaches are preferable.

In this tutorial, you’ll learn how to:

  • Rewrite loops and map() calls as a list comprehension in Python
  • Choose between comprehensions, loops, and map() calls
  • Supercharge your comprehensions with conditional logic
  • Use comprehensions to replace filter()
  • Profile your code to solve performance questions


How to Create Lists in Python

There are a few different ways you can create lists in Python. To better understand the trade-offs of using a list comprehension in Python, let’s first see how to create lists with these approaches.

Using for Loops

The most common type of loop is the for loop. You can use a for loop to create a list of elements in three steps:

  1. Instantiate an empty list.
  2. Loop over an iterable or range of elements.
  3. Append each element to the end of the list.

If you want to create a list containing the first ten perfect squares, then you can complete these steps in three lines of code:

>>> squares = []
>>> for i in range(10):
...     squares.append(i * i)
...
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Here, you instantiate an empty list, squares. Then, you use a for loop to iterate over range(10). Finally, you multiply each number by itself and append the result to the end of the list.

Using map() Objects

map() provides an alternative approach that’s based in functional programming. You pass in a function and an iterable, and map() will create an object. This object contains the output you would get from running each iterable element through the supplied function.

As an example, consider a situation in which you need to calculate the price after tax for a list of transactions:

>>> txns = [1.09, 23.56, 57.84, 4.56, 6.78]
>>> TAX_RATE = .08
>>> def get_price_with_tax(txn):
...     return txn * (1 + TAX_RATE)
...
>>> final_prices = map(get_price_with_tax, txns)
>>> list(final_prices)
[1.1772000000000002, 25.4448, 62.467200000000005, 4.9248, 7.322400000000001]

Here, you have an iterable txns and a function get_price_with_tax(). You pass both of these arguments to map(), and store the resulting object in final_prices. You can easily convert this map object into a list using list().

Using List Comprehensions

List comprehensions are a third way of making lists. With this elegant approach, you could rewrite the for loop from the first example in just a single line of code:

>>> squares = [i * i for i in range(10)]
>>> squares
[0, 1, 4, 9, 16, 25, 36, 49, 64, 81]

Rather than creating an empty list and adding each element to the end, you simply define the list and its contents at the same time by following this format:

new_list = [expression for member in iterable]

Every list comprehension in Python includes three elements:

  1. expression is the member itself, a call to a method, or any other valid expression that returns a value. In the example above, the expression i * i is the square of the member value.
  2. member is the object or value in the list or iterable. In the example above, the member value is i.
  3. iterable is a list, set, sequence, generator, or any other object that can return its elements one at a time. In the example above, the iterable is range(10).

Because the expression requirement is so flexible, a list comprehension in Python works well in many places where you would use map(). You can rewrite the pricing example with its own list comprehension:

>>> txns = [1.09, 23.56, 57.84, 4.56, 6.78]
>>> TAX_RATE = .08
>>> def get_price_with_tax(txn):
...     return txn * (1 + TAX_RATE)
...
>>> final_prices = [get_price_with_tax(i) for i in txns]
>>> final_prices
[1.1772000000000002, 25.4448, 62.467200000000005, 4.9248, 7.322400000000001]

The only distinction between this implementation and map() is that the list comprehension in Python returns a list, not a map object.
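You can see that distinction directly in the REPL, continuing with the names defined above:

>>> type(map(get_price_with_tax, txns))
<class 'map'>
>>> type([get_price_with_tax(i) for i in txns])
<class 'list'>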

Benefits of Using List Comprehensions

List comprehensions are often described as being more Pythonic than loops or map(). But rather than blindly accepting that assessment, it’s worth it to understand the benefits of using a list comprehension in Python when compared to the alternatives. Later on, you’ll learn about a few scenarios where the alternatives are a better choice.

One main benefit of using a list comprehension in Python is that it’s a single tool that you can use in many different situations. In addition to standard list creation, list comprehensions can also be used for mapping and filtering. You don’t have to use a different approach for each scenario.

This is the main reason why list comprehensions are considered Pythonic, as Python embraces simple, powerful tools that you can use in a wide variety of situations. As an added side benefit, whenever you use a list comprehension in Python, you won’t need to remember the proper order of arguments like you would when you call map().

List comprehensions are also more declarative than loops, which means they’re easier to read and understand. Loops require you to focus on how the list is created. You have to manually create an empty list, loop over the elements, and add each of them to the end of the list. With a list comprehension in Python, you can instead focus on what you want to go in the list and trust that Python will take care of how the list construction takes place.

How to Supercharge Your Comprehensions

In order to understand the full value that list comprehensions can provide, it’s helpful to understand their range of possible functionality. You’ll also want to understand the changes that are coming to the list comprehension in Python 3.8.

Using Conditional Logic

Earlier, you saw this formula for how to create list comprehensions:

new_list = [expression for member in iterable]

While this formula is accurate, it’s also a bit incomplete. A more complete description of the comprehension formula adds support for optional conditionals. The most common way to add conditional logic to a list comprehension is to add a conditional to the end of the expression:

new_list = [expression for member in iterable (if conditional)]

Here, your conditional statement comes just before the closing bracket.

Conditionals are important because they allow list comprehensions to filter out unwanted values, which would normally require a call to filter():

>>> sentence = 'the rocket came back from mars'
>>> vowels = [i for i in sentence if i in 'aeiou']
>>> vowels
['e', 'o', 'e', 'a', 'e', 'a', 'o', 'a']

In this code block, the conditional statement filters out any characters in sentence that aren’t a vowel.

The conditional can test any valid expression. If you need a more complex filter, then you can even move the conditional logic to a separate function:

>>> sentence = 'The rocket, who was named Ted, came back \
... from Mars because he missed his friends.'
>>> def is_consonant(letter):
...     vowels = 'aeiou'
...     return letter.isalpha() and letter.lower() not in vowels
...
>>> consonants = [i for i in sentence if is_consonant(i)]
>>> consonants
['T', 'h', 'r', 'c', 'k', 't', 'w', 'h', 'w', 's', 'n', 'm', 'd',
 'T', 'd', 'c', 'm', 'b', 'c', 'k', 'f', 'r', 'm', 'M', 'r', 's', 'b',
 'c', 's', 'h', 'm', 's', 's', 'd', 'h', 's', 'f', 'r', 'n', 'd', 's']

Here, you create a complex filter is_consonant() and pass this function as the conditional statement for your list comprehension. Note that the member value i is also passed as an argument to your function.

You can place the conditional at the end of the statement for simple filtering, but what if you want to change a member value instead of filtering it out? In this case, it’s useful to place the conditional near the beginning of the expression:

new_list = [expression (if conditional) for member in iterable]

With this formula, you can use conditional logic to select from multiple possible output options. For example, if you have a list of prices, then you may want to replace negative prices with 0 and leave the positive values unchanged:

>>> original_prices = [1.25, -9.45, 10.22, 3.78, -5.92, 1.16]
>>> prices = [i if i > 0 else 0 for i in original_prices]
>>> prices
[1.25, 0, 10.22, 3.78, 0, 1.16]

Here, your expression i contains a conditional statement, if i > 0 else 0. This tells Python to output the value of i if the number is positive, but to change i to 0 if the number is negative. If this seems overwhelming, then it may be helpful to view the conditional logic as its own function:

>>> def get_price(price):
...     return price if price > 0 else 0
...
>>> prices = [get_price(i) for i in original_prices]
>>> prices
[1.25, 0, 10.22, 3.78, 0, 1.16]

Now, your conditional statement is contained within get_price(), and you can use it as part of your list comprehension expression.

Using Set and Dictionary Comprehensions

While the list comprehension in Python is a common tool, you can also create set and dictionary comprehensions. A set comprehension is almost exactly the same as a list comprehension in Python. The difference is that set comprehensions make sure the output contains no duplicates. You can create a set comprehension by using curly braces instead of brackets:

>>> quote = "life, uh, finds a way"
>>> unique_vowels = {i for i in quote if i in 'aeiou'}
>>> unique_vowels
{'a', 'e', 'u', 'i'}

Your set comprehension outputs all the unique vowels it found in quote. Unlike lists, sets don’t guarantee that items will be saved in any particular order. This is why the first member of the set is a, even though the first vowel in quote is i.

Dictionary comprehensions are similar, with the additional requirement of defining a key:

>>> squares = {i: i * i for i in range(10)}
>>> squares
{0: 0, 1: 1, 2: 4, 3: 9, 4: 16, 5: 25, 6: 36, 7: 49, 8: 64, 9: 81}

To create the squares dictionary, you use curly braces ({}) as well as a key-value pair (i: i * i) in your expression.
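Once built, it behaves like any other dictionary; for example:

>>> squares[3]
9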

Using the Walrus Operator

Python 3.8 will introduce the assignment expression, also known as the walrus operator. To understand how you can use it, consider the following example.

Say you need to make twenty requests to an API that will return temperature data. You only want to return results that are greater than 100 degrees Fahrenheit. Assume that each request will return different data. In this case, there’s no way to use a list comprehension in Python to solve the problem. The formula expression for member in iterable (if conditional) provides no way for the conditional to assign data to a variable that the expression can access.

The walrus operator solves this problem. It allows you to run an expression while simultaneously assigning the output value to a variable. The following example shows how this is possible, using get_weather_data() to generate fake weather data:

>>> import random
>>> def get_weather_data():
...     return random.randrange(90, 110)
...
>>> hot_temps = [temp for _ in range(20) if (temp := get_weather_data()) >= 100]
>>> hot_temps
[107, 102, 109, 104, 107, 109, 108, 101, 104]

You won’t often need to use the assignment expression inside of a list comprehension in Python, but it’s a useful tool to have at your disposal when necessary.

When Not to Use a List Comprehension in Python

List comprehensions are useful and can help you write elegant code that’s easy to read and debug, but they’re not the right choice for all circumstances. They might make your code run more slowly or use more memory. If your code is less performant or harder to understand, then it’s probably better to choose an alternative.

Watch Out for Nested Comprehensions

Comprehensions can be nested to create combinations of lists, dictionaries, and sets within a collection. For example, say a climate laboratory is tracking the high temperature in five different cities for the first week of June. The perfect data structure for storing this data could be a Python list comprehension nested within a dictionary comprehension:

>>> cities = ['Austin', 'Tacoma', 'Topeka', 'Sacramento', 'Charlotte']
>>> temps = {city: [0 for _ in range(7)] for city in cities}
>>> temps
{'Austin': [0, 0, 0, 0, 0, 0, 0],
 'Tacoma': [0, 0, 0, 0, 0, 0, 0],
 'Topeka': [0, 0, 0, 0, 0, 0, 0],
 'Sacramento': [0, 0, 0, 0, 0, 0, 0],
 'Charlotte': [0, 0, 0, 0, 0, 0, 0]}

You create the outer collection temps with a dictionary comprehension. The expression is a key-value pair, which contains yet another comprehension. This code will quickly generate a list of data for each city in cities.

Nested lists are a common way to create matrices, which are often used for mathematical purposes. Take a look at the code block below:

>>> matrix = [[i for i in range(5)] for _ in range(6)]
>>> matrix
[[0, 1, 2, 3, 4],
 [0, 1, 2, 3, 4],
 [0, 1, 2, 3, 4],
 [0, 1, 2, 3, 4],
 [0, 1, 2, 3, 4],
 [0, 1, 2, 3, 4]]

The outer list comprehension [... for _ in range(6)] creates six rows, while the inner list comprehension [i for i in range(5)] fills each of these rows with values.

So far, the purpose of each nested comprehension is pretty intuitive. However, there are other situations, such as flattening nested lists, where the logic arguably makes your code more confusing. Take this example, which uses a nested list comprehension to flatten a matrix:

>>> matrix = [
...     [0, 0, 0],
...     [1, 1, 1],
...     [2, 2, 2],
... ]
>>> flat = [num for row in matrix for num in row]
>>> flat
[0, 0, 0, 1, 1, 1, 2, 2, 2]

The code to flatten the matrix is concise, but it may not be so intuitive to understand how it works. On the other hand, if you were to use for loops to flatten the same matrix, then your code will be much more straightforward:

>>> matrix = [
...     [0, 0, 0],
...     [1, 1, 1],
...     [2, 2, 2],
... ]
>>> flat = []
>>> for row in matrix:
...     for num in row:
...         flat.append(num)
...
>>> flat
[0, 0, 0, 1, 1, 1, 2, 2, 2]

Now you can see that the code traverses one row of the matrix at a time, pulling out all the elements in that row before moving on to the next one.

While the single-line nested list comprehension might seem more Pythonic, what’s most important is to write code that your team can easily understand and modify. When you choose your approach, you’ll have to make a judgment call based on whether you think the comprehension helps or hurts readability.

Choose Generators for Large Datasets

A list comprehension in Python works by loading the entire output list into memory. For small or even medium-sized lists, this is generally fine. If you want to sum the squares of the first one-thousand integers, then a list comprehension will solve this problem admirably:

>>> sum([i * i for i in range(1000)])
332833500

But what if you wanted to sum the squares of the first billion integers? If you tried that on your machine, then you might notice that your computer becomes non-responsive. That’s because Python is trying to create a list with one billion integers, which consumes more memory than your computer would like. Your computer may not have the resources it needs to generate an enormous list and store it in memory. If you try to do it anyway, then your machine could slow down or even crash.

When the size of a list becomes problematic, it’s often helpful to use a generator instead of a list comprehension in Python. A generator doesn’t create a single, large data structure in memory, but instead returns an iterable. Your code can ask for the next value from the iterable as many times as necessary or until you’ve reached the end of your sequence, while only storing a single value at a time.
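You can see this one-value-at-a-time behavior by pulling values from a generator manually:

>>> gen = (i * i for i in range(3))
>>> next(gen)
0
>>> next(gen)
1
>>> next(gen)
4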

If you were to sum the first billion squares with a generator, then your program will likely run for a while, but it shouldn’t cause your computer to freeze. The example below uses a generator:

>>> sum(i * i for i in range(1000000000))
333333332833333333500000000

You can tell this is a generator because the expression isn’t surrounded by brackets or curly braces. Generator expressions can optionally be surrounded by parentheses; here the parentheses of the sum() call are enough.

The example above still requires a lot of work, but it performs the operations lazily. Because of lazy evaluation, values are only calculated when they’re explicitly requested. After the generator yields a value (for example, 567 * 567), it can add that value to the running sum, then discard that value and generate the next value (568 * 568). When the sum function requests the next value, the cycle starts over. This process keeps the memory footprint small.

map() also operates lazily, meaning memory won’t be an issue if you choose to use it in this case:

>>> sum(map(lambda i: i * i, range(1000000000)))
333333332833333333500000000

It’s up to you whether you prefer the generator expression or map().

Profile to Optimize Performance

So, which approach is faster? Should you use list comprehensions or one of their alternatives? Rather than adhere to a single rule that’s true in all cases, it’s more useful to ask yourself whether or not performance matters in your specific circumstance. If not, then it’s usually best to choose whatever approach leads to the cleanest code!

If you’re in a scenario where performance is important, then it’s typically best to profile different approaches and listen to the data. timeit is a useful library for timing how long it takes chunks of code to run. You can use timeit to compare the runtime of map(), for loops, and list comprehensions:

>>> import random
>>> import timeit
>>> TAX_RATE = .08
>>> txns = [random.randrange(100) for _ in range(100000)]
>>> def get_price(txn):
...     return txn * (1 + TAX_RATE)
...
>>> def get_prices_with_map():
...     return list(map(get_price, txns))
...
>>> def get_prices_with_comprehension():
...     return [get_price(txn) for txn in txns]
...
>>> def get_prices_with_loop():
...     prices = []
...     for txn in txns:
...         prices.append(get_price(txn))
...     return prices
...
>>> timeit.timeit(get_prices_with_map, number=100)
2.0554370979998566
>>> timeit.timeit(get_prices_with_comprehension, number=100)
2.3982384680002724
>>> timeit.timeit(get_prices_with_loop, number=100)
3.0531821520007725

Here, you define three functions that each use a different approach for creating a list. Then, you tell timeit to run each of those functions 100 times. timeit returns the total time it took to run those 100 executions.

As the code demonstrates, the biggest difference is between the loop-based approach and map(), with the loop taking 50% longer to execute. Whether or not this matters depends on the needs of your application.
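
If you need more stable numbers, timeit.repeat() runs the whole measurement several times, and taking the minimum filters out interference from other processes on your machine. A small sketch, reusing the functions defined above:

>>> best = min(timeit.repeat(get_prices_with_map, number=100, repeat=5))

Here best holds the fastest of the five 100-run totals, which is usually the most reproducible figure to compare across implementations.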

Conclusion

In this tutorial, you learned how to use a list comprehension in Python to accomplish complex tasks without making your code overly complicated.

Now you can:

  • Simplify loops and map() calls with declarative list comprehensions
  • Supercharge your comprehensions with conditional logic
  • Create set and dictionary comprehensions
  • Determine when code clarity or performance dictates an alternative approach

Whenever you have to choose a list creation method, try multiple implementations and consider what’s easiest to read and understand in your specific scenario. If performance is important, then you can use profiling tools to give you actionable data instead of relying on hunches or guesses about what works the best.

Remember that while Python list comprehensions get a lot of attention, your intuition and ability to use data when it counts will help you write clean code that serves the task at hand. This, ultimately, is the key to making your code Pythonic!


Armin Ronacher: Open Source, SaaS and Monetization

By the time you're reading this blog post, Sentry, which I have been working on for the last few years, has undergone a license change. Making money with Open Source has always been a complex topic, and over the years my own ideas of how this should be done have become less and less clear. The following text is an attempt to summarize my thoughts on it, and to put some more clarification on how we ended up picking the BSL license for Sentry.

Making Money with Open Source

My personal relationship with Open Source and monetization is pretty clear cut: I never wanted money to be involved in libraries, but I always encouraged people to monetize applications. This is also why I was always very liberal with my own choice of license (BSD, MIT, Apache) and encouraged others to do the same. Open Source libraries under permissive licenses help us all as developers.

I understand that there are many developers out there who are trying to monetize libraries, but I have no answer for them. Money and Open Source libraries is tricky territory to which I have nothing to add.

However, when it comes to monetizing Open Source applications, I see many different approaches. One of them is what we did at Sentry: we Open Sourced our server and client libraries and monetized our SaaS installation. From where I stand, this is a pretty optimal solution, because it allows developers to use the software on their own and contribute to it, while still letting you monetize the value you provide through the SaaS installation. In the case of Sentry it has worked out very well for us, and there is very little I would change about that.

But there is a catch …

The SaaS Problem

Obviously there is an issue with this, which is why we're playing around with changing the license. We love Open Source and continue to love it, but at some point someone has to make money somewhere, and that is better done in the clearest way possible. I don't want a company that runs on donations or has a business model that just happens to work by accident. For SaaS businesses there is always the risk that they turn into a margin business: what stops someone from taking the Sentry code and competing with the sentry.io installation without investing any development effort into it?

This is not a new problem and many companies have faced it before. This is where a pragmatic solution is necessary.

The goal is to ensure that companies like Sentry can exist and produce Open Source code, but to prevent competition on the core business from their own forks.

Open Source — Eventually

Open Source is pretty clear cut: it does not discriminate. If you get the source, you can do with it what you want, no matter who you are (within the terms of the license). However, as Open Source is defined (and also how I see it), Open Source comes with no strings attached. The moment we restrict what you can do with it, like not compete, it becomes something else.

The license of choice is the BSL. We looked at many options, and the one we came to is the idea of putting a form of natural delay into our releases, which the BSL does. We make sure that as time passes, everything we have becomes Open Source again; until that point, it's almost Open Source, but with strings attached. This means that for as long as we innovate, there is a natural disadvantage for someone competing with the core product, while still ensuring that our product stays around and healthy in the Open Source space.

If enough time passes everything becomes available again under the Apache 2 license.

This ensures that no matter what happens to Sentry the company or product, it will always be there for the Open Source community. Worst case, it just requires some time.

I'm personally really happy with the BSL. I cannot guarantee that no better ideas will come along in a few years, but of everything I have seen, this is the approach I feel most satisfied with and can stand behind.

Money and Libraries

The situation is much more complex with libraries, frameworks, and the like. The BSL would not solve anything here; it would cause a lot of friction with reusing code. For instance, if someone wanted to pull reusable code out of Sentry, they would have to wait for the license conversion to kick in, find an older version that is already Open Source, or reach out to us to get a snippet converted earlier. All of this would be a problem for libraries.

At Sentry we very purposefully selected what falls under the license. For instance, we chose not to apply the BSL to components where we believe that pooling efforts is particularly important. Our native symbolication libraries and the underlying service (symbolicator) will not get the BSL, because we want to encourage others to contribute to them and bundle efforts. Symbolicator, like symbolic, is a component very similar to a library; they are not products by themselves. I could not monetize Flask, Jinja, or anything like them this way, and I have absolutely no desire to do so.

At the same time, I cannot count how many emails I have received over the years from people asking why I don't monetize my own projects, or how they should go about monetizing their code.

I do not have an answer.

I feel like there is no answer to this. I remember too many cases of people who tried dual licensing their code and ended up regretting it after ownership was transferred or they had a falling-out with other partners.

I do, however, want to continue evaluating whether there are ways libraries can be monetized. For now, the best I have is the suggestion that people build more Open Source companies with an Open Source (maybe BSL licensed) product and encourage true open source contributions to the underlying libraries that become popular. Open Source companies dedicating some of their revenue to helping libraries is a good thing from where I stand. We should do more of that.

I would however love to hear how others feel about money and Open Source. Reach out to me in person, by mail, twitter or whatever else.

Reuven Lerner: Podcasts, podcasts, and even more podcasts

I’ve recently appeared on a whole bunch of podcasts about Python, freelancing, and even (believe it or not) learning Chinese! If you’re interested in any or all of these subjects, then you might want to catch my interviews:

  • Talk Python to Me: I spoke with Michael Kennedy (and Casey Kinsen) about freelancing in Python — and things to consider when you’re thinking of freelancing.
  • Programming Leadership: I spoke with Marcus Blankenship about why companies offer training to their employees, how they should look for training, and how best to take advantage of a course.
  • Profitable Python: I spoke with Ben McNeill about the world of Python training — how training works (for me, companies that invite me to train, and the people in my courses), how to build up an online business, and the difference between B2C vs. B2B. You can watch the video on YouTube, or listen to the audio version of the podcast!
  • Teaching Python: I spoke with Kelly Paredes and Sean Tibor about what it’s like to teach adults vs. children, and what tricks I use to help keep my students engaged. I learned quite a bit about how they teach Python to middle-school students!
  • You Can Learn Chinese: I’ve been studying Chinese for a few years, and spent some time chatting with Jared Turner about my experience, how I continue to improve, and how my Chinese studies have affected my work teaching Python. The entire episode is great, and my interview starts about halfway through.

In related news, you might know that I’ve been a co-panelist on the Freelancers Show podcast for the last few years. The entire panel (including me) recently left the show, and we’re currently discussing how/when/where we’ll restart.

I’ll be sure to post to my blog here when there are updates — but if you’re a freelancer of any level (new or experienced) who might be interested in sharing your stories with us, please contact me, so we can speak with you when we re-start in our new format.

The post Podcasts, podcasts, and even more podcasts appeared first on Reuven Lerner.

Erik Marsja: How to Handle Coroutines with asyncio in Python

When a program becomes very long and complex, it is convenient to divide it into subroutines, each of which implements a specific task. However, subroutines cannot be executed independently, but only at the request of the main program, which is responsible for coordinating the use of subroutines.

In this post, we introduce a generalization of the concept of subroutines, known as coroutines: just like subroutines, coroutines compute a single computational step, but unlike subroutines, there is no main program to coordinate the results. The coroutines link themselves together to form a pipeline without any supervising function responsible for calling them in a particular order. 

This post is taken from the book Python Parallel Programming Cookbook  (2nd Ed.) by Giancarlo Zaccone. In this book, you will implement effective programming techniques in Python to build scalable software that saves time and memory. 

In a coroutine, the execution point can be suspended and resumed later, since the coroutine keeps track of the state of execution. Having a pool of coroutines, it is possible to interleave the computations: the first one runs until it yields control back, then the second runs and goes on down the line.


The interleaving is managed by the event loop. It keeps track of all the coroutines and schedules when they will be executed.

Other important aspects of coroutines are as follows:

  • Coroutines allow for multiple entry points that can yield multiple times.
  • Coroutines can transfer execution to any other coroutine.

The term yield is used here to describe a coroutine pausing and passing the control flow to another coroutine.
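
To make this passing of control concrete, here is a minimal sketch (not from the book) of two coroutines interleaving through the event loop, written in the same pre-3.5 notation the rest of this post uses. Note that the @asyncio.coroutine decorator is deprecated and was removed in Python 3.11, where you would use async def and await instead:

import asyncio

@asyncio.coroutine
def worker(name, steps):
    for step in range(steps):
        print('%s: step %d' % (name, step))
        # Suspend this coroutine and hand control back to the
        # event loop, which resumes the other pending coroutine.
        yield from asyncio.sleep(0)

loop = asyncio.get_event_loop()
loop.run_until_complete(asyncio.gather(worker('A', 3), worker('B', 3)))
loop.close()

Running this prints the steps of A and B alternately, because each yield from gives the event loop a chance to schedule the other coroutine.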

Getting ready to work with coroutines

We will use the following notation to work with coroutines:

import asyncio 

@asyncio.coroutine
def coroutine_function(function_arguments):
    ............
    DO_SOMETHING
    ............

Coroutines use the yield from syntax, introduced in PEP 380 (read more at https://www.python.org/dev/peps/pep-0380/), to stop the execution of the current computation and suspend the coroutine’s internal state.

In particular, in the case of yield from future, the coroutine is suspended until future is done, and then its result is propagated (or an exception is raised); in the case of yield from coroutine, the coroutine waits for the other coroutine to produce a result, which is then propagated (or an exception is raised).

As we shall see in the next example, in which the coroutines will be used to simulate a finite state machine, we will use the yield from coroutine notation.

More on coroutines with asyncio is available at https://docs.python.org/3.5/library/asyncio-task.html.

Using coroutines to simulate a finite state machine

In this example, we see how to use coroutines to simulate a finite state machine with five states.

A finite state machine, or finite state automaton, is a mathematical model that is widely used in engineering disciplines, as well as in sciences such as mathematics and computer science.

The automaton whose behavior we want to simulate using coroutines is described below.

The states of the system are S0, S1, S2, S3, and S4, with 0 and 1 being the values for which the automaton can pass from one state to the next (this operation is called a transition). So, for example, state S0 can pass to state S1, but only for the value 1, and S0 can pass to state S2, but only for the value 0.

The following Python code simulates a transition of the automaton from state S0 (the start state), up to state S4 (the end state):

1) The first step is obviously to import the relevant libraries:

import asyncio
import time
from random import randint

2) Then, we define the coroutine relative to start_state. The input_value variable is evaluated randomly; it can be 0 or 1. If it is 0, then the control goes to the coroutine state2; otherwise, it changes to the coroutine state1:

@asyncio.coroutine
def start_state():
    print('Start State called\n')
    input_value = randint(0, 1)
    time.sleep(1)
    if input_value == 0:
        result = yield from state2(input_value)
    else:
        result = yield from state1(input_value)
    print('Resume of the Transition:\nStart State calling ' + result)

3) Here is the coroutine for state1. The input_value variable is evaluated randomly; it can be 0 or 1. If it is 0, then the control goes to state3; otherwise, it changes to state2:

@asyncio.coroutine
def state1(transition_value):
    output_value = 'State 1 with transition value = %s\n' % \
                                             transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state3(input_value)
    else:
        result = yield from state2(input_value)
    return output_value + 'State 1 calling %s' % result

4) The coroutine for state2 has the transition_value argument that allowed the passage of the state. Also in this case, input_value is randomly evaluated. If it is 0, then the state transitions to state1; otherwise, the control changes to state3:

@asyncio.coroutine
def state2(transition_value):
    output_value = 'State 2 with transition value = %s\n' %\
                                             transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state1(input_value)
    else:
        result = yield from state3(input_value)
    return output_value + 'State 2 calling %s' % result

5) The coroutine for state3 has the transition_value argument, which allowed the passage of the state. input_value is randomly evaluated. If it is 0, then the state transitions to state1; otherwise, the control changes to end_state:

@asyncio.coroutine
def state3(transition_value):
    output_value = 'State 3 with transition value = %s\n' %\
                                                 transition_value
    input_value = randint(0, 1)
    time.sleep(1)
    print('...evaluating...')
    if input_value == 0:
        result = yield from state1(input_value)
    else:
        result = yield from end_state(input_value)
    return output_value + 'State 3 calling %s' % result

6) Finally, end_state prints out the transition_value argument, which allowed the passage of the state, and then stops the computation:

@asyncio.coroutine
def end_state(transition_value):
    output_value = 'End State with transition value = %s\n' % \
                                                transition_value
    print('...stop computation...')
    return output_value

7) In the __main__ block, the event loop is acquired, and then we start the simulation of the finite state machine by calling the automaton’s start_state:

if __name__ == '__main__':
    print('Finite State Machine simulation with Asyncio Coroutine')
    loop = asyncio.get_event_loop()
    loop.run_until_complete(start_state())

How coroutines simulate a finite state machine

Each state of the automaton has been defined by using the decorator:

@asyncio.coroutine

For example, state S0 is defined here:

@asyncio.coroutine
def start_state():
    print('Start State called\n')
    input_value = randint(0, 1)
    time.sleep(1)
    if input_value == 0:
        result = yield from state2(input_value)
    else:
        result = yield from state1(input_value)

The transition to the next state is determined by input_value, which is defined by the randint(0,1) function of Python’s random module. This function randomly provides a value of 0 or 1.

In this manner, randint randomly determines the state to which the finite state machine will pass:

input_value = randint(0, 1)

After determining the values to pass, the coroutine calls the next coroutine using the yield from command:

if input_value == 0:
    result = yield from state2(input_value)
else:
    result = yield from state1(input_value)

The result variable is the value that each coroutine returns. It is a string, and, at the end of the computation, we can reconstruct the transition from the initial state of the automaton, start_state, up to end_state.

The main program starts the evaluation inside the event loop:

if __name__ == "__main__":
    print("Finite State Machine simulation with Asyncio Coroutine")
    loop = asyncio.get_event_loop()
    loop.run_until_complete(StartState())

Running the code, we have an output like this:

Finite State Machine simulation with Asyncio Coroutine
Start State called
...evaluating...
...evaluating...
...evaluating...
...evaluating...
...stop computation...
Resume of the Transition:
Start State calling State 1 with transition value = 1
State 1 calling State 2 with transition value = 1
State 2 calling State 1 with transition value = 0
State 1 calling State 3 with transition value = 0
State 3 calling End State with transition value = 1

Handling coroutines with asyncio in Python 3.5

Before Python 3.5 was released, the asyncio module used generators to mimic asynchronous calls and, therefore, had a different syntax than the current version of Python 3.5.

Python 3.5 introduced the async and await keywords: async def marks a function as a native coroutine, and await takes the place of the yield from syntax shown earlier.

The following is an example of “Hello, world!“, using asyncio with the new syntax introduced by Python 3.5+:

import asyncio
 
async def main():
    print(await func())
 
async def func():
    # Do time intensive stuff...
    return "Hello, world!"
 
if __name__ == "__main__":
    loop = asyncio.get_event_loop()
    loop.run_until_complete(main())
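
As a side note, on Python 3.7 and later you can replace the explicit event loop management with asyncio.run(), which creates the loop, runs the coroutine, and closes the loop for you. A minimal sketch of the same "Hello, world!" program:

import asyncio

async def func():
    # Do time intensive stuff...
    return "Hello, world!"

async def main():
    print(await func())

if __name__ == "__main__":
    asyncio.run(main())  # Python 3.7+: creates, runs, and closes the loop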

In this post, we learned how to handle coroutines with asyncio. To learn more features of asynchronous programming in Python, you may go through the book Python Parallel Programming Cookbook  (2nd Ed.) by Packt Publishing.

The post How to Handle Coroutines with asyncio in Python appeared first on Erik Marsja.

Shannon -jj Behrens: "How to Give a Talk" and "Building Video Games for Fun with PyGame"
