Channel: Planet Python

Catalin George Festila: Using LibROSA python module.

LibROSA is a Python package for music and audio analysis; it provides the building blocks necessary to create music information retrieval systems.
C:\Python364>cd Scripts
C:\Python364\Scripts>pip install librosa
Collecting librosa
...
Successfully installed audioread-2.1.6 joblib-0.13.0 librosa-0.6.2 llvmlite-0.26.0 numba-0.41.0 resampy-0.2.1 scikit-learn-0.20.2
Let's create a waveform and a spectrogram with this Python module.
A waveform is a depiction of the pattern of sound pressure variation (amplitude) in the time domain.
A spectrogram (also known as a sonograph, voiceprint, or voicegram) is a visual representation of the spectrum of frequencies of a sound or other signal as it varies with time.
I used a free WAV sound file from here.
The waveform and spectrogram for that audio file are shown in the next screenshots:


My example shows the waveform first; you need to close it to see the spectrogram.
Let's see the source code of this example:
import librosa
import librosa.display
import matplotlib.pyplot as plt

plt.figure(figsize=(14, 5))
path = "merry_christmas.wav"
# librosa.load returns the audio time series and its sampling rate
out, samples = librosa.load(path)
print(out.shape, samples)
# Plot the waveform (amplitude over time)
librosa.display.waveplot(out, sr=samples)
plt.show()
# Compute the short-time Fourier transform and convert the amplitude to decibels
stft_array = librosa.stft(out)
stft_array_db = librosa.amplitude_to_db(abs(stft_array))
# Plot the spectrogram (frequency content over time)
librosa.display.specshow(stft_array_db, sr=samples, x_axis='time', y_axis='hz')
plt.colorbar()
plt.show()

Codementor: Python, For The ❤ of It - part 1

My Journey Into One of the World's Most Awesome Languages

Python Sweetness: Mitogen v0.2.4 released


Mitogen for Ansible v0.2.4 has been released. This version is noteworthy as it contains major improvements to the core library and the Ansible extension to improve their behaviour in the face of larger Ansible runs.

Work in this area continues, as it progresses towards inclusion of a patch held back since last summer to introduce per-CPU multiplexers. The current goal is to exhaust profiling gains from a single process before landing that patch, as all single-CPU gains continue to apply in that case, and there is much less risk of inefficiency being hidden beneath the noise created by multiple multiplexer processes.

Please kick the tires, and as always, bug reports are welcome!

Just tuning in?

Codementor: Some simple CodeWars problems

A few solutions to some easy-ish CodeWars problems.

codingdirectional: Create a filter for the audio and image files with python


Hello and welcome back. In this chapter, we will create two methods that filter out unwanted audio and image files. The rules of filtering are as follows.

  1. The file name cannot contain a blank space, a '-', '_', or ',' character, or any digits.
  2. The file extension cannot contain a capital letter.
  3. Only a few valid audio or image file extensions are allowed to get past the filter.
  4. There must be a file extension.

Any rule violation will make the program return False; otherwise the program returns True.

Below are the methods that act as the filter.

def is_audio(file_name):

    if not prelim(file_name):
        return False

    audio = ['.mp3', '.flac', '.alac', '.aac']

    for ex in audio:
        if file_name.endswith(ex):
            return True

    return False

def is_img(file_name):

    if not prelim(file_name):
        return False

    img = ['.jpg', '.jpeg', '.png', '.bmp', '.gif']

    for ex in img:
        if file_name.endswith(ex):
            return True

    return False

def prelim(file_name):

    if " " in file_name or "-" in file_name or "_" in file_name or "," in file_name:
        return False

    # no uppercase for the file extension
    try:
        index_ = file_name.index('.')
        if file_name[index_:].isupper():
            return False
    except ValueError:
        # no file extension at all
        return False

    numbers = ['1', '2', '3', '4', '5', '6', '7', '8', '9', '0']

    for num in numbers:
        if num in file_name[0:index_]:
            return False

    # all rules passed
    return True

OK, now let us try out the above program.

print(is_audio('x_.mp3'))  # False
print(is_audio('x-.mp3'))  # False
print(is_img('x10.gif'))  # False
print(is_img('x.PNG'))  # False
print(is_img('x.png'))  # True
print(is_audio('y,.flac'))  # False
print(is_img('x'))  # False
print(is_audio('y.png'))  # False
print(is_audio('y m c a.alac'))  # False
print(is_audio('YOUNGMANYOUGOTNOPLACETOGO.alac'))  # True

Well, there you have it. We will start our next Python project in the next chapter, as promised.

Real Python: The Factory Method Pattern and Its Implementation in Python


This article explores the Factory Method design pattern and its implementation in Python. Design patterns became a popular topic in the late 90s after the so-called Gang of Four (GoF: Gamma, Helm, Johnson, and Vlissides) published their book Design Patterns: Elements of Reusable Object-Oriented Software.

The book describes design patterns as a core design solution to reoccurring problems in software and classifies each design pattern into categories according to the nature of the problem. Each pattern is given a name, a problem description, a design solution, and an explanation of the consequences of using it.

The GoF book describes Factory Method as a creational design pattern. Creational design patterns are related to the creation of objects, and Factory Method is a design pattern that creates objects with a common interface.

This is a recurrent problem that makes Factory Method one of the most widely used design patterns, so it’s very important to understand it and know how to apply it.

By the end of this article, you will:

  • Understand the components of Factory Method
  • Recognize opportunities to use Factory Method in your applications
  • Learn to modify existing code and improve its design by using the pattern
  • Learn to identify opportunities where Factory Method is the appropriate design pattern
  • Choose an appropriate implementation of Factory Method
  • Know how to implement a reusable, general purpose solution of Factory Method


Introducing Factory Method

Factory Method is a creational design pattern used to create concrete implementations of a common interface.

It separates the process of creating an object from the code that depends on the interface of the object.

For example, an application requires an object with a specific interface to perform its tasks. The concrete implementation of the interface is identified by some parameter.

Instead of using a complex if/elif/else conditional structure to determine the concrete implementation, the application delegates that decision to a separate component that creates the concrete object. With this approach, the application code is simplified, making it more reusable and easier to maintain.

Imagine an application that needs to convert a Song object into its string representation using a specified format. Converting an object to a different representation is often called serializing. You’ll often see these requirements implemented in a single function or method that contains all the logic and implementation, like in the following code:

# In serializer_demo.py

import json
import xml.etree.ElementTree as et

class Song:
    def __init__(self, song_id, title, artist):
        self.song_id = song_id
        self.title = title
        self.artist = artist

class SongSerializer:
    def serialize(self, song, format):
        if format == 'JSON':
            song_info = {
                'id': song.song_id,
                'title': song.title,
                'artist': song.artist
            }
            return json.dumps(song_info)
        elif format == 'XML':
            song_info = et.Element('song', attrib={'id': song.song_id})
            title = et.SubElement(song_info, 'title')
            title.text = song.title
            artist = et.SubElement(song_info, 'artist')
            artist.text = song.artist
            return et.tostring(song_info, encoding='unicode')
        else:
            raise ValueError(format)

In the example above, you have a basic Song class to represent a song and a SongSerializer class that can convert a song object into its string representation according to the value of the format parameter.

The .serialize() method supports two different formats: JSON and XML. Any other format specified is not supported, so a ValueError exception is raised.

Let’s use the Python interactive shell to see how the code works:

>>> import serializer_demo as sd
>>> song = sd.Song('1', 'Water of Love', 'Dire Straits')
>>> serializer = sd.SongSerializer()
>>> serializer.serialize(song, 'JSON')
'{"id": "1", "title": "Water of Love", "artist": "Dire Straits"}'
>>> serializer.serialize(song, 'XML')
'<song id="1"><title>Water of Love</title><artist>Dire Straits</artist></song>'
>>> serializer.serialize(song, 'YAML')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./serializer_demo.py", line 30, in serialize
    raise ValueError(format)
ValueError: YAML

You create a song object and a serializer, and you convert the song to its string representation by using the .serialize() method. The method takes the song object as a parameter, as well as a string value representing the format you want. The last call uses YAML as the format, which is not supported by the serializer, so a ValueError exception is raised.

This example is short and simplified, but it still has a lot of complexity. There are three logical or execution paths depending on the value of the format parameter. This may not seem like a big deal, and you’ve probably seen code with more complexity than this, but the above example is still pretty hard to maintain.

The Problems With Complex Conditional Code

The example above exhibits all the problems you’ll find in complex logical code. Complex logical code uses if/elif/else structures to change the behavior of an application. Using if/elif/else conditional structures makes the code harder to read, harder to understand, and harder to maintain.

The code above might not seem hard to read or understand, but wait till you see the final code in this section!

Nevertheless, the code above is hard to maintain because it is doing too much. The single responsibility principle states that a module, a class, or even a method should have a single, well-defined responsibility. It should do just one thing and have only one reason to change.

The .serialize() method in SongSerializer will require changes for many different reasons. This increases the risk of introducing new defects or breaking existing functionality when changes are made. Let’s take a look at all the situations that will require modifications to the implementation:

  • When a new format is introduced: The method will have to change to implement the serialization to that format.

  • When the Song object changes: Adding or removing properties to the Song class will require the implementation to change in order to accommodate the new structure.

  • When the string representation for a format changes (plain JSON vs JSON API): The .serialize() method will have to change if the desired string representation for a format changes because the representation is hard-coded in the .serialize() method implementation.

The ideal situation would be if any of those changes in requirements could be implemented without changing the .serialize() method. Let’s see how you can do that in the following sections.

Looking for a Common Interface

The first step when you see complex conditional code in an application is to identify the common goal of each of the execution paths (or logical paths).

Code that uses if/elif/else usually has a common goal that is implemented in different ways in each logical path. The code above converts a song object to its string representation using a different format in each logical path.

Based on the goal, you look for a common interface that can be used to replace each of the paths. The example above requires an interface that takes a song object and returns a string.

Once you have a common interface, you provide separate implementations for each logical path. In the example above, you will provide an implementation to serialize to JSON and another for XML.

Then, you provide a separate component that decides the concrete implementation to use based on the specified format. This component evaluates the value of format and returns the concrete implementation identified by its value.

In the following sections, you will learn how to make changes to existing code without changing the behavior. This is referred to as refactoring the code.

Martin Fowler in his book Refactoring: Improving the Design of Existing Code defines refactoring as “the process of changing a software system in such a way that does not alter the external behavior of the code yet improves its internal structure.”

Let’s begin refactoring the code to achieve the desired structure that uses the Factory Method design pattern.

Refactoring Code Into the Desired Interface

The desired interface is an object or a function that takes a Song object and returns a string representation.

The first step is to refactor one of the logical paths into this interface. You do this by adding a new method ._serialize_to_json() and moving the JSON serialization code to it. Then, you change the client to call it instead of having the implementation in the body of the if statement:

class SongSerializer:
    def serialize(self, song, format):
        if format == 'JSON':
            return self._serialize_to_json(song)
        # The rest of the code remains the same

    def _serialize_to_json(self, song):
        payload = {
            'id': song.song_id,
            'title': song.title,
            'artist': song.artist
        }
        return json.dumps(payload)

Once you make this change, you can verify that the behavior has not changed. Then, you do the same for the XML option by introducing a new method ._serialize_to_xml(), moving the implementation to it, and modifying the elif path to call it.

The following example shows the refactored code:

class SongSerializer:
    def serialize(self, song, format):
        if format == 'JSON':
            return self._serialize_to_json(song)
        elif format == 'XML':
            return self._serialize_to_xml(song)
        else:
            raise ValueError(format)

    def _serialize_to_json(self, song):
        payload = {
            'id': song.song_id,
            'title': song.title,
            'artist': song.artist
        }
        return json.dumps(payload)

    def _serialize_to_xml(self, song):
        song_element = et.Element('song', attrib={'id': song.song_id})
        title = et.SubElement(song_element, 'title')
        title.text = song.title
        artist = et.SubElement(song_element, 'artist')
        artist.text = song.artist
        return et.tostring(song_element, encoding='unicode')

The new version of the code is easier to read and understand, but it can still be improved with a basic implementation of Factory Method.

Basic Implementation of Factory Method

The central idea in Factory Method is to provide a separate component with the responsibility to decide which concrete implementation should be used based on some specified parameter. That parameter in our example is the format.

To complete the implementation of Factory Method, you add a new method ._get_serializer() that takes the desired format. This method evaluates the value of format and returns the matching serialization function:

class SongSerializer:
    def _get_serializer(self, format):
        if format == 'JSON':
            return self._serialize_to_json
        elif format == 'XML':
            return self._serialize_to_xml
        else:
            raise ValueError(format)

Note: The ._get_serializer() method does not call the concrete implementation; it just returns the function object itself.

Now, you can change the .serialize() method of SongSerializer to use ._get_serializer() to complete the Factory Method implementation. The next example shows the complete code:

class SongSerializer:
    def serialize(self, song, format):
        serializer = self._get_serializer(format)
        return serializer(song)

    def _get_serializer(self, format):
        if format == 'JSON':
            return self._serialize_to_json
        elif format == 'XML':
            return self._serialize_to_xml
        else:
            raise ValueError(format)

    def _serialize_to_json(self, song):
        payload = {
            'id': song.song_id,
            'title': song.title,
            'artist': song.artist
        }
        return json.dumps(payload)

    def _serialize_to_xml(self, song):
        song_element = et.Element('song', attrib={'id': song.song_id})
        title = et.SubElement(song_element, 'title')
        title.text = song.title
        artist = et.SubElement(song_element, 'artist')
        artist.text = song.artist
        return et.tostring(song_element, encoding='unicode')

The final implementation shows the different components of Factory Method. The .serialize() method is the application code that depends on an interface to complete its task.

This is referred to as the client component of the pattern. The interface defined is referred to as the product component. In our case, the product is a function that takes a Song and returns a string representation.

The ._serialize_to_json() and ._serialize_to_xml() methods are concrete implementations of the product. Finally, the ._get_serializer() method is the creator component. The creator decides which concrete implementation to use.

Because you started with some existing code, all the components of Factory Method are members of the same class SongSerializer.

Usually, this is not the case and, as you can see, none of the added methods use the self parameter. This is a good indication that they should not be methods of the SongSerializer class, and they can become external functions:

class SongSerializer:
    def serialize(self, song, format):
        serializer = get_serializer(format)
        return serializer(song)

def get_serializer(format):
    if format == 'JSON':
        return _serialize_to_json
    elif format == 'XML':
        return _serialize_to_xml
    else:
        raise ValueError(format)

def _serialize_to_json(song):
    payload = {
        'id': song.song_id,
        'title': song.title,
        'artist': song.artist
    }
    return json.dumps(payload)

def _serialize_to_xml(song):
    song_element = et.Element('song', attrib={'id': song.song_id})
    title = et.SubElement(song_element, 'title')
    title.text = song.title
    artist = et.SubElement(song_element, 'artist')
    artist.text = song.artist
    return et.tostring(song_element, encoding='unicode')

Note: The .serialize() method in SongSerializer does not use the self parameter.

The rule above tells us it should not be part of the class. This is correct, but you are dealing with existing code.

If you remove SongSerializer and change the .serialize() method to a function, then you’ll have to change all the locations in the application that use SongSerializer and replace the calls to the new function.

Unless you have a very high percentage of code coverage with your unit tests, this is not a change that you should be doing.

The mechanics of Factory Method are always the same. A client (SongSerializer.serialize()) depends on a concrete implementation of an interface. It requests the implementation from a creator component (get_serializer()) using some sort of identifier (format).

The creator returns the concrete implementation according to the value of the parameter to the client, and the client uses the provided object to complete its task.

You can execute the same set of instructions in the Python interactive interpreter to verify that the application behavior has not changed:

>>> import serializer_demo as sd
>>> song = sd.Song('1', 'Water of Love', 'Dire Straits')
>>> serializer = sd.SongSerializer()
>>> serializer.serialize(song, 'JSON')
'{"id": "1", "title": "Water of Love", "artist": "Dire Straits"}'
>>> serializer.serialize(song, 'XML')
'<song id="1"><title>Water of Love</title><artist>Dire Straits</artist></song>'
>>> serializer.serialize(song, 'YAML')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./serializer_demo.py", line 13, in serialize
    serializer = get_serializer(format)
  File "./serializer_demo.py", line 23, in get_serializer
    raise ValueError(format)
ValueError: YAML

You create a song and a serializer, and use the serializer to convert the song to its string representation by specifying a format. Since YAML is not a supported format, a ValueError is raised.

Recognizing Opportunities to Use Factory Method

Factory Method should be used in every situation where an application (client) depends on an interface (product) to perform a task and there are multiple concrete implementations of that interface. You need to provide a parameter that can identify the concrete implementation and use it in the creator to decide the concrete implementation.

There is a wide range of problems that fit this description, so let’s take a look at some concrete examples.

Replacing complex logical code: Complex logical structures in the format if/elif/else are hard to maintain because new logical paths are needed as requirements change.

Factory Method is a good replacement because you can put the body of each logical path into separate functions or classes with a common interface, and the creator can provide the concrete implementation.

The parameter evaluated in the conditions becomes the parameter to identify the concrete implementation. The example above represents this situation.

Constructing related objects from external data: Imagine an application that needs to retrieve employee information from a database or other external source.

The records represent employees with different roles or types: managers, office clerks, sales associates, and so on. The application may store an identifier representing the type of employee in the record and then use Factory Method to create each concrete Employee object from the rest of the information on the record.
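As a rough sketch of this idea (the record fields, role names, and Employee classes below are invented for illustration, not part of the article's code), a creator can map the role identifier stored in each record to the matching class:

```python
# Hypothetical sketch: record layout and role names are assumptions.
class Manager:
    def __init__(self, name):
        self.name = name

class Clerk:
    def __init__(self, name):
        self.name = name

# Maps the role identifier stored in a record to a concrete Employee class
_employee_classes = {'manager': Manager, 'clerk': Clerk}

def employee_from_record(record):
    cls = _employee_classes.get(record['role'])
    if cls is None:
        raise ValueError(record['role'])
    # The rest of the record provides the data to initialize the object
    return cls(record['name'])
```

The conditional logic collapses into a dictionary lookup, and supporting a new role means adding one entry rather than a new elif branch.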

Supporting multiple implementations of the same feature: An image processing application needs to transform a satellite image from one coordinate system to another, but there are multiple algorithms with different levels of accuracy to perform the transformation.

The application can allow the user to select an option that identifies the concrete algorithm. Factory Method can provide the concrete implementation of the algorithm based on this option.
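A minimal sketch of that idea (the option names and placeholder "algorithms" here are made up; real coordinate transformations would be far more involved) maps the user's option to a transformation function:

```python
# Hypothetical sketch: the transforms are stand-ins, not real projections.
def _fast_transform(point):
    # Low-accuracy placeholder: snap coordinates to integers
    return (round(point[0]), round(point[1]))

def _accurate_transform(point):
    # Higher-accuracy placeholder: keep full precision
    return (float(point[0]), float(point[1]))

def get_transform(option):
    transforms = {'fast': _fast_transform, 'accurate': _accurate_transform}
    try:
        return transforms[option]
    except KeyError:
        raise ValueError(option)
```

The client code calls `get_transform(option)` once and then applies the returned function to every point, never inspecting the option again.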

Combining similar features under a common interface: Following the image processing example, an application needs to apply a filter to an image. The specific filter to use can be identified by some user input, and Factory Method can provide the concrete filter implementation.

Integrating related external services: A music player application wants to integrate with multiple external services and allow users to select where their music comes from. The application can define a common interface for a music service and use Factory Method to create the correct integration based on a user preference.

All these situations are similar. They all define a client that depends on a common interface known as the product. They all provide a means to identify the concrete implementation of the product, so they all can use Factory Method in their design.

You can now look at the serialization problem from previous examples and provide a better design by taking into consideration the Factory Method design pattern.

An Object Serialization Example

The basic requirements for the example above are that you want to serialize Song objects into their string representation. It seems the application provides features related to music, so it is plausible that the application will need to serialize other types of objects, like Playlist or Album.

Ideally, the design should support adding serialization for new objects by implementing new classes without requiring changes to the existing implementation. The application requires objects to be serialized to multiple formats like JSON and XML, so it seems natural to define an interface Serializer that can have multiple implementations, one per format.

The interface implementation might look something like this:

# In serializers.py

import json
import xml.etree.ElementTree as et

class JsonSerializer:
    def __init__(self):
        self._current_object = None

    def start_object(self, object_name, object_id):
        self._current_object = {'id': object_id}

    def add_property(self, name, value):
        self._current_object[name] = value

    def to_str(self):
        return json.dumps(self._current_object)

class XmlSerializer:
    def __init__(self):
        self._element = None

    def start_object(self, object_name, object_id):
        self._element = et.Element(object_name, attrib={'id': object_id})

    def add_property(self, name, value):
        prop = et.SubElement(self._element, name)
        prop.text = value

    def to_str(self):
        return et.tostring(self._element, encoding='unicode')

Note: The example above doesn’t implement a full Serializer interface, but it should be good enough for our purposes and to demonstrate Factory Method.

The Serializer interface is an abstract concept due to the dynamic nature of the Python language. Static languages like Java or C# require that interfaces be explicitly defined. In Python, any object that provides the desired methods or functions is said to implement the interface. The example defines the Serializer interface to be an object that implements the following methods or functions:

  • .start_object(object_name, object_id)
  • .add_property(name, value)
  • .to_str()

This interface is implemented by the concrete classes JsonSerializer and XmlSerializer.

The original example used a SongSerializer class. For the new application, you will implement something more generic, like ObjectSerializer:

# In serializers.py

class ObjectSerializer:
    def serialize(self, serializable, format):
        serializer = factory.get_serializer(format)
        serializable.serialize(serializer)
        return serializer.to_str()

The implementation of ObjectSerializer is completely generic, and it only mentions a serializable and a format as parameters.

The format is used to identify the concrete implementation of the Serializer and is resolved by the factory object. The serializable parameter refers to another abstract interface that should be implemented on any object type you want to serialize.

Let’s take a look at a concrete implementation of the serializable interface in the Song class:

# In songs.py

class Song:
    def __init__(self, song_id, title, artist):
        self.song_id = song_id
        self.title = title
        self.artist = artist

    def serialize(self, serializer):
        serializer.start_object('song', self.song_id)
        serializer.add_property('title', self.title)
        serializer.add_property('artist', self.artist)

The Song class implements the Serializable interface by providing a .serialize(serializer) method. In the method, the Song class uses the serializer object to write its own information without any knowledge of the format.

As a matter of fact, the Song class doesn’t even know the goal is to convert the data to a string. This is important because you could use this interface to provide a different kind of serializer that converts the Song information to a completely different representation if needed. For example, your application might require in the future to convert the Song object to a binary format.
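To illustrate that point (this class is not part of the article's code, just a hedged sketch), a serializer implementing the same .start_object()/.add_property() interface could collect the properties and emit bytes instead of text:

```python
import pickle

# Hypothetical sketch: a serializer with the same interface that yields bytes.
class BinarySerializer:
    def __init__(self):
        self._current_object = None

    def start_object(self, object_name, object_id):
        self._current_object = {'id': object_id}

    def add_property(self, name, value):
        self._current_object[name] = value

    def to_bytes(self):
        # Same collected data, completely different representation
        return pickle.dumps(self._current_object)
```

A Song's .serialize() method would work with this object unchanged, because it only ever calls .start_object() and .add_property().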

So far, we’ve seen the implementation of the client (ObjectSerializer) and the product (serializer). It is time to complete the implementation of Factory Method and provide the creator. The creator in the example is the variable factory in ObjectSerializer.serialize().

Factory Method as an Object Factory

In the original example, you implemented the creator as a function. Functions are fine for very simple examples, but they don’t provide too much flexibility when requirements change.

Classes can provide additional interfaces to add functionality, and they can be derived to customize behavior. Unless you have a very basic creator that will never change in the future, you want to implement it as a class and not a function. These types of classes are called object factories.

You can see the basic interface of SerializerFactory in the implementation of ObjectSerializer.serialize(). The method uses factory.get_serializer(format) to retrieve the serializer from the object factory.

You will now implement SerializerFactory to meet this interface:

# In serializers.py

class SerializerFactory:
    def get_serializer(self, format):
        if format == 'JSON':
            return JsonSerializer()
        elif format == 'XML':
            return XmlSerializer()
        else:
            raise ValueError(format)

factory = SerializerFactory()

The current implementation of .get_serializer() is the same you used in the original example. The method evaluates the value of format and decides the concrete implementation to create and return. It is a relatively simple solution that allows us to verify the functionality of all the Factory Method components.

Let’s go to the Python interactive interpreter and see how it works:

>>> import songs
>>> import serializers
>>> song = songs.Song('1', 'Water of Love', 'Dire Straits')
>>> serializer = serializers.ObjectSerializer()
>>> serializer.serialize(song, 'JSON')
'{"id": "1", "title": "Water of Love", "artist": "Dire Straits"}'
>>> serializer.serialize(song, 'XML')
'<song id="1"><title>Water of Love</title><artist>Dire Straits</artist></song>'
>>> serializer.serialize(song, 'YAML')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "./serializers.py", line 39, in serialize
    serializer = factory.get_serializer(format)
  File "./serializers.py", line 52, in get_serializer
    raise ValueError(format)
ValueError: YAML

The new design of Factory Method allows the application to introduce new features by adding new classes, as opposed to changing existing ones. You can serialize other objects by implementing the Serializable interface on them. You can support new formats by implementing the Serializer interface in another class.

The missing piece is that SerializerFactory has to change to include the support for new formats. This problem is easily solved with the new design because SerializerFactory is a class.

Supporting Additional Formats

The current implementation of SerializerFactory needs to be changed when a new format is introduced. Your application might never need to support any additional formats, but you never know.

You want your designs to be flexible, and as you will see, supporting additional formats without changing SerializerFactory is relatively easy.

The idea is to provide a method in SerializerFactory that registers a new Serializer implementation for the format we want to support:

# In serializers.py

class SerializerFactory:
    def __init__(self):
        self._creators = {}

    def register_format(self, format, creator):
        self._creators[format] = creator

    def get_serializer(self, format):
        creator = self._creators.get(format)
        if not creator:
            raise ValueError(format)
        return creator()

factory = SerializerFactory()
factory.register_format('JSON', JsonSerializer)
factory.register_format('XML', XmlSerializer)

The .register_format(format, creator) method allows registering new formats by specifying a format value used to identify the format and a creator object. Here, the creator object is the concrete Serializer class itself. This is possible because all the Serializer classes provide a default .__init__() to initialize the instances.

The registration information is stored in the _creators dictionary. The .get_serializer() method retrieves the registered creator and creates the desired object. If the requested format has not been registered, then ValueError is raised.

You can now verify the flexibility of the design by implementing a YamlSerializer and get rid of the annoying ValueError you saw earlier:

# In yaml_serializer.py

import yaml
import serializers

class YamlSerializer(serializers.JsonSerializer):
    def to_str(self):
        return yaml.dump(self._current_object)

serializers.factory.register_format('YAML', YamlSerializer)

Note: To implement the example, you need to install PyYAML in your environment using pip install PyYAML.

JSON and YAML are very similar formats, so you can reuse most of the implementation of JsonSerializer and override .to_str() to complete the implementation. The format is then registered with the factory object to make it available.

Let’s use the Python interactive interpreter to see the results:

>>> import serializers
>>> import songs
>>> import yaml_serializer
>>> song = songs.Song('1', 'Water of Love', 'Dire Straits')
>>> serializer = serializers.ObjectSerializer()
>>> print(serializer.serialize(song, 'JSON'))
{"id": "1", "title": "Water of Love", "artist": "Dire Straits"}
>>> print(serializer.serialize(song, 'XML'))
<song id="1"><title>Water of Love</title><artist>Dire Straits</artist></song>
>>> print(serializer.serialize(song, 'YAML'))
{artist: Dire Straits, id: '1', title: Water of Love}

By implementing Factory Method using an Object Factory and providing a registration interface, you are able to support new formats without changing any of the existing application code. This minimizes the risk of breaking existing features or introducing subtle bugs.

A General Purpose Object Factory

The implementation of SerializerFactory is a huge improvement from the original example. It provides great flexibility to support new formats and avoids modifying existing code.

Still, the current implementation is specifically targeted to the serialization problem above, and it is not reusable in other contexts.

Factory Method can be used to solve a wide range of problems. An Object Factory gives additional flexibility to the design when requirements change. Ideally, you’ll want an implementation of Object Factory that can be reused in any situation without replicating the implementation.

There are some challenges to providing a general purpose implementation of Object Factory, and in the following sections you will look at those challenges and implement a solution that can be reused in any situation.

Not All Objects Can Be Created Equal

The biggest challenge to implement a general purpose Object Factory is that not all objects are created in the same way.

Not all situations allow us to use a default .__init__() to create and initialize the objects. It is important that the creator, in this case the Object Factory, returns fully initialized objects.

This is important because if it doesn’t, then the client will have to complete the initialization and use complex conditional code to fully initialize the provided objects. This defeats the purpose of the Factory Method design pattern.

To understand the complexities of a general purpose solution, let’s take a look at a different problem. Let’s say an application wants to integrate with different music services. These services can be external to the application or internal in order to support a local music collection. Each of the services has a different set of requirements.

Note: The requirements I define for the example are for illustration purposes and do not reflect the real requirements you will have to implement to integrate with services like Pandora or Spotify.

The intent is to provide a different set of requirements that shows the challenges of implementing a general purpose Object Factory.

Imagine that the application wants to integrate with a service provided by Spotify. This service requires an authorization process where a client key and secret are provided for authorization.

The service returns an access code that should be used on any further communication. This authorization process is very slow, and it should only be performed once, so the application wants to keep the initialized service object around and use it every time it needs to communicate with Spotify.

At the same time, other users want to integrate with Pandora. Pandora might use a completely different authorization process. It also requires a client key and secret, but it returns a consumer key and secret that should be used for other communications. As with Spotify, the authorization process is slow, and it should only be performed once.

Finally, the application implements the concept of a local music service where the music collection is stored locally. The service requires that the location of the music collection in the local system be specified. Creating a new service instance is done very quickly, so a new instance can be created every time the user wants to access the music collection.

This example presents several challenges. Each service is initialized with a different set of parameters. Also, Spotify and Pandora require an authorization process before the service instance can be created.

They also want to reuse that instance to avoid authorizing the application multiple times. The local service is simpler, but it doesn’t match the initialization interface of the others.

In the following sections, you will solve these problems by generalizing the creation interface and implementing a general purpose Object Factory.

Separate Object Creation to Provide Common Interface

The creation of each concrete music service has its own set of requirements. This means a common initialization interface for each service implementation is not possible or recommended.

The best approach is to define a new type of object that provides a general interface and is responsible for the creation of a concrete service. This new type of object will be called a Builder. The Builder object has all the logic to create and initialize a service instance. You will implement a Builder object for each of the supported services.

Let’s start by looking at the application configuration:

# In program.py
config = {
    'spotify_client_key': 'THE_SPOTIFY_CLIENT_KEY',
    'spotify_client_secret': 'THE_SPOTIFY_CLIENT_SECRET',
    'pandora_client_key': 'THE_PANDORA_CLIENT_KEY',
    'pandora_client_secret': 'THE_PANDORA_CLIENT_SECRET',
    'local_music_location': '/usr/data/music'
}

The config dictionary contains all the values required to initialize each of the services. The next step is to define an interface that will use those values to create a concrete implementation of a music service. That interface will be implemented in a Builder.

Let’s look at the implementation of the SpotifyService and SpotifyServiceBuilder:

# In music.py
class SpotifyService:
    def __init__(self, access_code):
        self._access_code = access_code

    def test_connection(self):
        print(f'Accessing Spotify with {self._access_code}')


class SpotifyServiceBuilder:
    def __init__(self):
        self._instance = None

    def __call__(self, spotify_client_key, spotify_client_secret, **_ignored):
        if not self._instance:
            access_code = self.authorize(spotify_client_key, spotify_client_secret)
            self._instance = SpotifyService(access_code)
        return self._instance

    def authorize(self, key, secret):
        return 'SPOTIFY_ACCESS_CODE'

Note: The music service interface defines a .test_connection() method, which should be enough for demonstration purposes.

The example shows a SpotifyServiceBuilder that implements .__call__(spotify_client_key, spotify_client_secret, **_ignored).

This method is used to create and initialize the concrete SpotifyService. It specifies the required parameters and ignores any additional parameters provided through **_ignored. Once the access_code is retrieved, it creates and returns the SpotifyService instance.

Notice that SpotifyServiceBuilder keeps the service instance around and only creates a new one the first time the service is requested. This avoids going through the authorization process multiple times as specified in the requirements.

Let’s do the same for Pandora:

# In music.py
class PandoraService:
    def __init__(self, consumer_key, consumer_secret):
        self._key = consumer_key
        self._secret = consumer_secret

    def test_connection(self):
        print(f'Accessing Pandora with {self._key} and {self._secret}')


class PandoraServiceBuilder:
    def __init__(self):
        self._instance = None

    def __call__(self, pandora_client_key, pandora_client_secret, **_ignored):
        if not self._instance:
            consumer_key, consumer_secret = self.authorize(pandora_client_key, pandora_client_secret)
            self._instance = PandoraService(consumer_key, consumer_secret)
        return self._instance

    def authorize(self, key, secret):
        return 'PANDORA_CONSUMER_KEY', 'PANDORA_CONSUMER_SECRET'

The PandoraServiceBuilder implements the same interface, but it uses different parameters and processes to create and initialize the PandoraService. It also keeps the service instance around, so the authorization only happens once.

Finally, let’s take a look at the local service implementation:

# In music.py
class LocalService:
    def __init__(self, location):
        self._location = location

    def test_connection(self):
        print(f'Accessing Local music at {self._location}')


def create_local_music_service(local_music_location, **_ignored):
    return LocalService(local_music_location)

The LocalService just requires the location where the music collection is stored in order to be initialized.

A new instance is created every time the service is requested because there is no slow authorization process. The requirements are simpler, so you don’t need a Builder class. Instead, a function returning an initialized LocalService is used. This function matches the interface of the .__call__() methods implemented in the builder classes.

A Generic Interface to Object Factory

A general purpose Object Factory (ObjectFactory) can leverage the generic Builder interface to create all kinds of objects. It provides a method to register a Builder based on a key value and a method to create the concrete object instances based on the key.

Let’s look at the implementation of our generic ObjectFactory:

# In object_factory.py
class ObjectFactory:
    def __init__(self):
        self._builders = {}

    def register_builder(self, key, builder):
        self._builders[key] = builder

    def create(self, key, **kwargs):
        builder = self._builders.get(key)
        if not builder:
            raise ValueError(key)
        return builder(**kwargs)

The implementation structure of ObjectFactory is the same as the one you saw in SerializerFactory.

The difference is in the interface it exposes to support creating any type of object. The builder parameter can be any object that implements the callable interface. This means a Builder can be a function, a class, or an object that implements .__call__().

The .create() method requires that additional arguments are specified as keyword arguments. This allows the Builder objects to specify the parameters they need and ignore the rest in no particular order. For example, you can see that create_local_music_service() specifies a local_music_location parameter and ignores the rest.
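The flexibility of the callable-builder interface can be sketched in isolation. The following is a minimal, self-contained demonstration; ObjectFactory is the class defined above, while build_a and BBuilder are made-up illustrative builders, not part of the article's music example:

```python
class ObjectFactory:
    def __init__(self):
        self._builders = {}

    def register_builder(self, key, builder):
        self._builders[key] = builder

    def create(self, key, **kwargs):
        builder = self._builders.get(key)
        if not builder:
            raise ValueError(key)
        return builder(**kwargs)


# A plain function works as a builder...
def build_a(value_a, **_ignored):
    return f'A:{value_a}'


# ...and so does any object that implements .__call__()
class BBuilder:
    def __call__(self, value_b, **_ignored):
        return f'B:{value_b}'


factory = ObjectFactory()
factory.register_builder('A', build_a)
factory.register_builder('B', BBuilder())

# Each builder picks out the keyword arguments it needs and ignores the rest
print(factory.create('A', value_a=1, value_b=2))  # A:1
print(factory.create('B', value_a=1, value_b=2))  # B:2
```

Because .create() forwards all keyword arguments unchanged, every builder sees the same bag of arguments and is free to use only the ones it declared.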

Let’s create the factory instance and register the builders for the services you want to support:

# In music.py
import object_factory

# Omitting other implementation classes shown above

factory = object_factory.ObjectFactory()
factory.register_builder('SPOTIFY', SpotifyServiceBuilder())
factory.register_builder('PANDORA', PandoraServiceBuilder())
factory.register_builder('LOCAL', create_local_music_service)

The music module exposes the ObjectFactory instance through the factory attribute. Then, the builders are registered with the instance. For Spotify and Pandora, you register an instance of their corresponding builder, but for the local service, you just pass the function.

Let’s write a small program that demonstrates the functionality:

# In program.py
import music

config = {
    'spotify_client_key': 'THE_SPOTIFY_CLIENT_KEY',
    'spotify_client_secret': 'THE_SPOTIFY_CLIENT_SECRET',
    'pandora_client_key': 'THE_PANDORA_CLIENT_KEY',
    'pandora_client_secret': 'THE_PANDORA_CLIENT_SECRET',
    'local_music_location': '/usr/data/music'
}

pandora = music.factory.create('PANDORA', **config)
pandora.test_connection()
spotify = music.factory.create('SPOTIFY', **config)
spotify.test_connection()
local = music.factory.create('LOCAL', **config)
local.test_connection()

pandora2 = music.factory.create('PANDORA', **config)
print(f'id(pandora) == id(pandora2): {id(pandora) == id(pandora2)}')
spotify2 = music.factory.create('SPOTIFY', **config)
print(f'id(spotify) == id(spotify2): {id(spotify) == id(spotify2)}')

The application defines a config dictionary representing the application configuration. The configuration is used as the keyword arguments to the factory regardless of the service you want to access. The factory creates the concrete implementation of the music service based on the specified key parameter.

You can now run our program to see how it works:

$ python program.py
Accessing Pandora with PANDORA_CONSUMER_KEY and PANDORA_CONSUMER_SECRET
Accessing Spotify with SPOTIFY_ACCESS_CODE
Accessing Local music at /usr/data/music
id(pandora) == id(pandora2): True
id(spotify) == id(spotify2): True

You can see that the correct instance is created depending on the specified service type. You can also see that requesting the Pandora or Spotify service always returns the same instance.

Specializing Object Factory to Improve Code Readability

General solutions are reusable and avoid code duplication. Unfortunately, they can also obscure the code and make it less readable.

The example above shows that, to access a music service, music.factory.create() is called. This may lead to confusion. Other developers might believe that a new instance is created every time and decide that they should keep around the service instance to avoid the slow initialization process.

You know that this is not what happens because the Builder class keeps the initialized instance and returns it for subsequent calls, but this isn’t clear from just reading the code.

A good solution is to specialize a general purpose implementation to provide an interface that is concrete to the application context. In this section, you will specialize ObjectFactory in the context of our music services, so the application code communicates the intent better and becomes more readable.

The following example shows how to specialize ObjectFactory, providing an explicit interface to the context of the application:

# In music.py
class MusicServiceProvider(object_factory.ObjectFactory):
    def get(self, service_id, **kwargs):
        return self.create(service_id, **kwargs)


services = MusicServiceProvider()
services.register_builder('SPOTIFY', SpotifyServiceBuilder())
services.register_builder('PANDORA', PandoraServiceBuilder())
services.register_builder('LOCAL', create_local_music_service)

You derive MusicServiceProvider from ObjectFactory and expose a new method .get(service_id, **kwargs).

This method invokes the generic .create(key, **kwargs), so the behavior remains the same, but the code reads better in the context of our application. You also renamed the previous factory variable to services and initialized it as a MusicServiceProvider.

As you can see, the updated application code reads much better now:

import music

config = {
    'spotify_client_key': 'THE_SPOTIFY_CLIENT_KEY',
    'spotify_client_secret': 'THE_SPOTIFY_CLIENT_SECRET',
    'pandora_client_key': 'THE_PANDORA_CLIENT_KEY',
    'pandora_client_secret': 'THE_PANDORA_CLIENT_SECRET',
    'local_music_location': '/usr/data/music'
}

pandora = music.services.get('PANDORA', **config)
pandora.test_connection()
spotify = music.services.get('SPOTIFY', **config)
spotify.test_connection()
local = music.services.get('LOCAL', **config)
local.test_connection()

pandora2 = music.services.get('PANDORA', **config)
print(f'id(pandora) == id(pandora2): {id(pandora) == id(pandora2)}')
spotify2 = music.services.get('SPOTIFY', **config)
print(f'id(spotify) == id(spotify2): {id(spotify) == id(spotify2)}')

Running the program shows that the behavior hasn’t changed:

$ python program.py
Accessing Pandora with PANDORA_CONSUMER_KEY and PANDORA_CONSUMER_SECRET
Accessing Spotify with SPOTIFY_ACCESS_CODE
Accessing Local music at /usr/data/music
id(pandora) == id(pandora2): True
id(spotify) == id(spotify2): True

Conclusion

Factory Method is a widely used, creational design pattern that can be used in many situations where multiple concrete implementations of an interface exist.

The pattern removes complex logical code that is hard to maintain, and replaces it with a design that is reusable and extensible. The pattern avoids modifying existing code to support new requirements.

This is important because changing existing code can introduce changes in behavior or subtle bugs.

In this article, you learned:

  • What the Factory Method design pattern is and what its components are
  • How to refactor existing code to leverage Factory Method
  • Situations in which Factory Method should be used
  • How Object Factories provide more flexibility to implement Factory Method
  • How to implement a general purpose Object Factory and its challenges
  • How to specialize a general solution to provide a better context

Further Reading

If you want to learn more about Factory Method and other design patterns, I recommend Design Patterns: Elements of Reusable Object-Oriented Software by the GoF, which is a great reference for widely adopted design patterns.

Also, Heads First Design Patterns: A Brain-Friendly Guide by Eric Freeman and Elisabeth Robson provides a fun, easy-to-read explanation of design patterns.

Wikipedia has a good catalog of design patterns with links to pages for the most common and useful patterns.



Mike Driscoll: PyDev of the Week: Paolo Melchiorre


This week we welcome Paolo Melchiorre (@pauloxnet) as our PyDev of the Week! Paolo is a core developer of the Django web framework. He has spoken at several different Python-related conferences in Europe and also writes over on his blog. Let’s take a few minutes to get to know him better!

Can you tell us a little about yourself (hobbies, education, etc)?

Paolo Melchiorre

I graduated with a degree in Computer Science from the University of Bologna. My thesis was about Free Software and since then I’ve been a Free Software advocate.

I’ve been a GNU/Linux user for 20 years and now I’m a happy user of Ubuntu.

In 2007 I attended my first conference, the Plone Conference, and since then I’ve attended many other pythonic conferences in Europe.

In 2017 I presented a talk at PyCon Italy and at EuroPython and since then I have been a conference speaker for local and international events, both in Italian and in English.

Giving a talk at EuroPython 2017

I’ve lived and worked in Rome and London, and since 2015 I’ve been a remote worker located in my hometown of Pescara in Italy, which is close to the beach and the mountains.

I love nature and spending my time swimming, snowboarding or hiking, but also traveling with my wife around the world.

I like improving my English skills by reading fiction books or listening to audiobooks, watching TV series and movies, listening to podcasts and attending local English speaking meetups.

I answer questions on Stack Overflow, tweet at @pauloxnet and occasionally post at paulox.net.

Why did you start using Python?

I started using Python in my first job because we developed websites with Plone and Zope.

I realized how much better Python was for me than other languages I’ve studied and used before because it’s easier to learn, it’s focused on code simplicity and readability, it’s extensible and fast to write and has a fantastic community.

When I stopped using Plone I continued using Python as my main programming language.

What other programming languages do you know and which is your favorite?

I started programming with Pascal during high school and then I learned HTML and CSS on my own to develop my first website as my high school final essay.

At university I studied some different languages like C, C++, C#, Java, SQL and Javascript and I used some of them at work in the past.

In the last 10 years I’ve predominantly used Python, and it’s without a doubt the language I prefer, although sometimes I still use SQL, Javascript and obviously HTML and CSS.

Which Python libraries are your favorite (core or 3rd party)?

I work every day a lot with Django and PostgreSQL so apart from the Django framework itself I think my favourite python library is the Python-PostgreSQL database adapter psycopg2 because it’s pretty solid and allows me to work with the database without the Django ORM when I need to do very low level operations and use all the great features of PostgreSQL.

How did you get started contributing to Django?

Sprinting at DjangoCon Europe 2017

I started contributing to the core of Django during the sprint day at DjangoCon Europe 2017 with a pull request that integrated the PostgreSQL crypto extension in its contrib package and it was merged in Django 2.0.

I presented a talk about the Django Full-text search feature at the Pycon Italy 2017 conference and then wrote the article “Full-text search with Django and PostgreSQL” based on this, but I realized that the Django Full-text search function was not used on the djangoproject.com site.

Sprinting at EuroPython 2017

At EuroPython 2017 I organized a sprint about the search module of the djangoproject.com.

I completed a pull request that replaced Elasticsearch with the PostgreSQL Full-text search function on the official Django website and I continued updating this function with improvements in speed and multilingual support.

I presented a talk about this experience as an example of contribution to the Django project.

Why did you choose Django over other Python web frameworks?

I started working with Plone and the Zope application framework which stores all information in Zope’s built-in transactional object database (ZODB).

I started using Django when I needed to store data in a relational database like PostgreSQL, and after some research, I realized it was the best choice.

I appreciate its architecture, the ORM, the admin module, the PostgreSQL support, all its ready-to-use modules like GeoDjango, all the 3rd party packages, and particularly the community behind it.

What projects are you working on now?

Coaching at Django Girls EuroPython 2017

I contribute to the Django project, its website and some related packages.

I’m attending some Django Girls workshops as a coach and I’ve contributed to its tutorial.

In addition, I’m updating a django queries project with code I’ve used in my talks which lets people try it on their own.

I’m working on a Django project template we use at work to speed up the bootstrap of a project deployed on uWSGI.

I’m updating my Pelican-based technical blog where I post some articles, information about me, my projects and my talks.

I’m updating my YouTube channel with all my recorded talks and my Speaker Deck account with all my talk slides.

I’m also answering as many python-related questions as I can on Stack Overflow, particularly related to Django, Full-text search and PostgreSQL and I wrote an article based on one of them.

What are the top three things you have learned as an open source developer?

I think Free Software is the one of the best inventions in the last century, and being part of it is very rewarding.

In particular being a Free Software developer has taught me:

  1. Sharing knowledge (in form of ideas, code, documentation, skills) is the best way to better yourself as a person and a developer
  2. The best part about Free Software is its community of human beings
  3. Some things not code-related are very important for improving Free Software and its community, such as choosing a good license, adding contributing guidelines and not forgetting about documentation

Is there anything else you’d like to say?

Having fun at PyFiorentina during Pycon Italy 2018

Being a conference speaker at Free Software related conferences has given me the opportunity to meet a lot of people and become a better person.

I encourage everyone to join meetups, get out in the community and attend conferences and, of course, if we meet at some conference, please say hello.

I also want to say to all native English-speaking developers that there are a lot of excellent developers who hesitate to contribute to Free Software because of their lack of English knowledge.

Personally, I waited a long time before contributing to projects and actively participating in the community, and then I forced myself to improve my English skills at a lot of cost in terms of time, effort and money.

So I would just like to remind people to be patient and inclusive when it comes to non-native English speakers as we need a bit more time and effort to open an issue, send a pull request, ask questions online and at conferences or simply speak and write about ourselves and our ideas in an interview like this.

Thanks for doing the interview, Paolo!

Tryton News: Tryton Unconference 2019: In Marseille on the 6th & 7th of June


@nicoe wrote:

The Tryton Foundation is happy to announce the venue and date of the next Tryton Unconference.

We will go to the sunny city of Marseille in the south of France on the 6th and 7th of June. Contrary to previous editions of the Tryton Unconferences, the coding sprint will be organized during the two days preceding the conference.

Both events will take place at the École de Commerce et de Management. We will publish a website with more detailed information shortly.

Many thanks to adiczion, the organizer of this year's event!



Django Weblog: Django security releases issued: 2.1.6, 2.0.11 and 1.11.19


In accordance with our security release policy, the Django team is issuing Django 1.11.19, Django 2.1.6, and Django 2.0.11. These releases address the security issue detailed below. We encourage all users of Django to upgrade as soon as possible.

CVE-2019-6975: Memory exhaustion in django.utils.numberformat.format()

If django.utils.numberformat.format() -- used by contrib.admin as well as the floatformat, filesizeformat, and intcomma template filters -- received a Decimal with a large number of digits or a large exponent, it could lead to significant memory usage due to a call to '{:f}'.format().

To avoid this, decimals with more than 200 digits are now formatted using scientific notation.
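The failure mode is easy to demonstrate on a small scale with just the standard library. This is an illustrative sketch, not Django code, and the exponent below is deliberately modest compared to what an attacker could send in a request:

```python
from decimal import Decimal

# Tiny to store: the Decimal keeps only a coefficient and an exponent
d = Decimal('1E+10000')

# '{:f}'.format() expands every digit, so the output length grows with
# the exponent - the memory-exhaustion vector behind CVE-2019-6975
print(len('{:f}'.format(d)))  # 10001

# Scientific notation stays short, which is why patched Django versions
# switch to it for decimals with more than 200 digits
print('{:e}'.format(d))  # 1e+10000
```

Scale the exponent up to something like 1E+1000000000 and the fixed-point string alone would need roughly a gigabyte of memory, which is why the patch caps fixed-point formatting at 200 digits.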

Thanks Sjoerd Job Postmus for reporting this issue.

Affected supported versions

  • Django master branch
  • Django 2.2 (which will be released in a separate blog post later today)
  • Django 2.1
  • Django 2.0
  • Django 1.11

Per our supported versions policy, Django 1.10 and older are no longer supported.

Resolution

Patches to resolve the issue have been applied to Django's master branch and the 2.2, 2.1, 2.0, and 1.11 release branches. The patches may be obtained from the following changesets:

The following releases have been issued:

The PGP key ID used for these releases is Carlton Gibson: E17DF5C82B4F9D00.

General notes regarding security reporting

As always, we ask that potential security issues be reported via private email to security@djangoproject.com, and not via Django's Trac instance or the django-developers list. Please see our security policies for further information.

Django Weblog: Django 2.2 beta 1 released


Django 2.2 beta 1 is now available. It represents the second stage in the 2.2 release cycle and is an opportunity for you to try out the changes coming in Django 2.2.

Django 2.2 has a salmagundi of new features which you can read about in the in-development 2.2 release notes.

Only bugs in new features and regressions from earlier versions of Django will be fixed between now and 2.2 final (also, translations will be updated following the "string freeze" when the release candidate is issued). The current release schedule calls for a release candidate a month from now, with the final release to follow about two weeks after that, around April 1. Early and often testing from the community will help minimize the number of bugs in the release. Updates on the release schedule are available on the django-developers mailing list.

As with all alpha and beta packages, this is not for production use. But if you'd like to take some of the new features for a spin, or to help find and fix bugs (which should be reported to the issue tracker), you can grab a copy of the beta package from our downloads page or on PyPI.

The PGP key ID used for this release is Carlton Gibson: E17DF5C82B4F9D00.

Zato Blog: Zato: A successful Python 3 migration story


Now that Python 3 support is available as a preview for developers, this post summarizes the effort that went into making sure that Zato works smoothly using both Python 2.7 and 3.x.

In fact, the work required was remarkably straightforward and trouble-free, and the article discusses the thought process behind it, as well as some of the techniques applied and tools used.

Background

Zato is an enterprise API integration platform and backend application server. We support a couple dozen protocols, data formats, several sorts of IPC and other means to exchange messages across applications.

In other words, on the lowest level, passing bytes around, transforming, extracting, changing, collecting, manipulating, converting, encoding, decoding and comparing them, including support for all kinds of natural languages from around the world, is what Zato is about at its core when it is considered from the perspective of the programming language it is implemented in.

The codebase is around 130,000 lines big, out of which Python is 60,000 lines of code. This is not everything, though, because we also have 170+ external dependencies that also need to work with Python 2.7 and 3.x.

The work took two people a total of 80 hours. It was spread over a much longer calendar time, except for the final sprint that required more attention for several days in a row.

Preparations

Since the very beginning, it was clear that Python 3 would have to be supported one day, so the number one thing that each and every Python module has always had is this preamble:

from __future__ import absolute_import, division, print_function, unicode_literals

This is what every Python file contains and it easily saved 90% of any potential work required to support Python 3 because, among other less demanding things, it enforced a separation, though still not as strict as in Python 3, between byte and Unicode objects. The separation is a good thing and the more one works with Python 3 the clearer it becomes.

In Python 2, it was sometimes possible to mix the two. Imagine that there is a Python-derived language where JSON dicts and Python dicts can sometimes be used interchangeably.

For instance, this is a JSON object: {"key1": "value1"} and it so happens that it is also a valid Python dict so in this hypothetical language, this would work:

json = '{"key1": "value1"}'
python = {'key2': 'value2'}
result = json + python

Now the result is this:

{'key1': 'value1', 'key2': 'value2'}

Or wait, perhaps it should be this?

'{"key1": "value1", "key2": "value2"}'

This is the central thing - they are distinct types and they should not be mixed merely because they may be related or seem similar.

Conceptually, upon receiving a JSON request from the network, a Python application will decode it into a canonical representation, such as a dict, a list or another Python object.

The same should happen to other bytes, including ones that happen to represent text or similar information. In this case, the canonical format is called Unicode, and that is the whole point of employing it in one's application.

All of this was clear from the outset, and the from __future__ statements helped in its execution; even if theoretically one could still mix bytes and Unicode, it was simply a matter of using the correct canonical format in a given context, i.e. a case of making sure the architecture was clean.

This particular __future__ statement was first announced in 2008 so there was plenty of time to prepare for it.

As part of the preparations, it is good to read a book about Unicode. Not just a 'Unicode for overburdened developers' kind of article, but an actual book that will let one truly appreciate the standard's breadth and scope. While reading it, do not resist the temptation to learn at least the basics of two or more natural languages that you did not know before. This will only help you develop into a better person and this is not a joke.

While programming with bytes and Unicode, it is convenient to forget whether an object is a 'str', 'bytes' or 'unicode' instance - it is easier simply to think of bytes and text, or bytes and Unicode. There are bytes that can mean anything, and there is text whose native, canonical form is Unicode. This is not always 100% accurate, because Unicode can represent marvellous gems such as Byzantine musical notation and more, but if a given application's scope is mostly constrained to text then this will work - there are bytes and there is text.
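In Python 3 this boundary discipline becomes mechanical: decode once on the way in, work with text internally, encode once on the way out. A minimal sketch (the sample bytes are invented for illustration):

```python
# UTF-8 bytes as they might arrive off the wire
raw = b'Zato \xc5\xbc\xc3\xb3\xc5\x82w'

# Decode once at the application boundary...
text = raw.decode('utf8')
assert isinstance(text, str)  # the canonical in-memory form is text

# ...work with text internally, and encode only when data leaves again
assert text.encode('utf8') == raw
print(text)
```

Trying to mix the two forms, e.g. raw + text, raises TypeError in Python 3, which is exactly the separation the unicode_literals import anticipated.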

This is all fine with our own code, but there are still the external libraries that Zato uses, and some of them will want bytes, not text, or the other way around, in seemingly similar situations. There can even be cases like a library expecting protocol header keys to be text and protocol header values to be bytes, for rather unclear reasons. Just accept it as a fact of life and move on with your work.

Side projects

It was good to try out Python 3 first in a few new, smaller side projects - GUI or command-line tools that are not part of the core yet are important in the overall picture. The most important part of it was that creating a Python 3 application from scratch was no different than in Python 2; this served as a gentle introduction to Python 3-specific constructs and that knowledge was easily transferred later on to the main porting job.

Dependencies

Out of a total of 170+ dependencies, around 10 were not Python 3-compatible. None of them had been updated in eight, twelve or more years. At this point, it is safe to assume that if a dependency was last updated in 2009 and has no Python 3 support, it never will.

What to do next depended on the particular case - all of them were some kind of convenience library, and sometimes they had to be dropped, sometimes forked. The most complex changes required in a fork were changing 'print' to 'print()' or doing away with complex installation setups that predated contemporary pip-based configuration options.

Other than that, there were no issues with dependencies, all of them were ready for Python 3.

Idioms and imports

Most of the reference information needed to write code for both Python 2 and 3 was available via the python-future project, which in itself was a great help. Installing this library, along with its dependencies, sufficed in 99% of cases. There were some lesser requirements that were incorporated into a Zato-specific submodule directly, e.g. sys.maxint is useful as a loop terminator but ints in Python 3 have no limit, so an equivalent had to be added to our own code.
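For instance, the sys.maxint replacement mentioned above can be sketched like this (the constant name is illustrative, not Zato's actual one):

```python
import sys

# sys.maxint is gone in Python 3 because ints are unbounded, so a sentinel
# has to be chosen explicitly; 2**63 - 1 mirrors the old 64-bit value.
try:
    MAXINT = sys.maxint      # Python 2
except AttributeError:
    MAXINT = 2 ** 63 - 1     # Python 3
```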

Note that the page above does not show all the idioms, and some changes were not always immediately obvious, like modifications to __slots__ or the way metaclasses can be declared, but there were no truly impossible cases, just different things to use, either built into Python 3 or available via the future or six libraries.

A nice thing is that one is not required to change all the imports in one go - they can be changed in smaller increments, e.g. 'basestring' is still available via 'from past.builtins import basestring'.
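A dependency-free sketch of the same idea, for reference (the future package's actual shim is more thorough):

```python
# Minimal stand-in for 'from past.builtins import basestring' - on Python 3,
# let isinstance checks against basestring cover both text and bytes.
try:
    basestring                   # exists as a builtin on Python 2
except NameError:
    basestring = (str, bytes)    # Python 3 stand-in
```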

Testing

A really important aspect of the migration was the ability to test sub-components of an application in isolation. This does not only include unit tests, which may be too low-level, but also things such as starting only selected parts of Zato without having to boot up whole servers, which in turn meant each change could be tested within one second rather than ten. To a degree, this was an unexpected but really useful test of how modular our design was.

Intellectually, this was certainly the most challenging part because it required maintaining and traversing several trains of thought at once, sometimes for several days on end. This, in turn, means that it really is not a job for late afternoons only and it cannot be an afterthought, things can simply get complex very quickly.

String formatting

There is one thing that was not expected - the way str.format works with bytes and text.

For instance, this will fail in Python 3:

>>> 'aaa' + b'bbb'
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: Can't convert 'bytes' object to str implicitly
>>>

Just for reference, in Python 2 it does not fail:

>>> 'aaa' + b'bbb'
'aaabbb'
>>>

Still using Python 2, let's use string formatting:

>>> template = '{}.{}'
>>> template.format('aaa', b'bbb')
'aaa.bbb'
>>>

In Python 3, this is the result:

>>> template = '{}.{}'
>>> template.format('aaa', b'bbb')
"aaa.b'bbb'"
>>> 

In the context of a Python 3 migration, it would probably be more in line with other changes to the language if this had been special-cased to reject such constructs altogether. Otherwise, it initially leads to rather inexplicable error messages, because the code that produces such string constants may be completely unaware of where they are used further on. But after witnessing it once or twice, the root cause was apparent and it could be easily dealt with.
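One habit that follows from this: decode bytes explicitly before they ever reach str.format, so the surprising output cannot occur. A minimal illustration:

```python
# Decoding up front yields the intended result instead of "aaa.b'bbb'".
template = '{}.{}'
value = b'bbb'

formatted = template.format('aaa', value.decode('utf-8'))
```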

Things that are missed

One small, yet convenient, feature of Python 2 was availability of some of the common codecs directly in the string objects, e.g.:

>>> u'abc'.encode('hex')
'616263'
>>> u'abc'.encode('base64')
'YWJj\n'
>>> u'ελληνική'.encode('idna')
'xn--jxangifdar'
>>>

This will not work as-is in Python 3:

>>> u'abc'.encode('hex')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
LookupError: 'hex' is not a text encoding; use codecs.encode() to handle arbitrary codecs
>>>

Naturally, the functionality as such is still available in Python 3, just not via the same means.
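The same functionality can be reached through the stdlib modules that Python 3 points you to; a quick sketch:

```python
import base64
import binascii
import codecs

# hex: gone from str.encode, still available via binascii or codecs
assert binascii.hexlify(b'abc') == b'616263'
assert codecs.encode(b'abc', 'hex') == b'616263'

# base64: the base64 module takes over
assert base64.b64encode(b'abc') == b'YWJj'

# idna, notably, still works directly on str objects
assert 'ελληνική'.encode('idna').startswith(b'xn--')
```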

Python 2.7

On the server side, Python 2.7 will be around for many years. After all, this is a great language that let thousands and millions of people complete amazing projects, and most enterprise applications do not get rewritten solely because one of their technical components (here, Python) changes in a way that is partly incompatible with previous versions.

Both RHEL and Ubuntu ship with Python 2.7 and both of them have long-term support well into the 2020s, so the language as such will not go away. Yet, piece by piece, applications will be changed, modified, modularized or rewritten, and Python 2.7's usage will gradually diminish.

In Zato, Python 2.7 will be supported for as long as it is feasible, and one of the current migration's explicit goals was to make sure that existing user environments based on Python 2.7 continue to work out-of-the-box, so it makes no difference which Python version one chooses - both are supported and can be used.

Summary

An extraordinary aspect of the migration is that it was so unextraordinary. There were no hard-won battles, no true gotchas and no unlooked-for hurdles. This can likely be attributed to the facts that:

  • Python developers offered information about what to expect during such a job
  • Unicode was not treated as an afterthought
  • Zato reuses common libraries that are all ported to Python 3 already
  • The Internet offers guides, hints and other pieces of information about what to do
  • It was easy to test Zato components in isolation
  • Time was explicitly put aside for the most difficult parts without having to share it with other tasks

The next version of Zato, to be released in June 2019, will come with pre-built packages using Python 2.7 and 3, but for now installation from source is needed.

PyPy Development: PyPy v7.0.0: triple release of 2.7, 3.5 and 3.6-alpha

The PyPy team is proud to release version 7.0.0 of PyPy, which includes three different interpreters:
  • PyPy2.7, which is an interpreter supporting the syntax and the features of Python 2.7
  • PyPy3.5, which supports Python 3.5
  • PyPy3.6-alpha: this is the first official release of PyPy to support 3.6 features, although it is still considered alpha quality.
All the interpreters are based on much the same codebase, thus the triple release.
Until we can work with downstream providers to distribute builds with PyPy, we have made wheels available for some common packages.
The GC hooks, which can be used to gain more insight into GC performance, have been improved, and it is now possible to manage the GC manually by using a combination of gc.disable and gc.collect_step. See the GC blog post.
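As a sketch of that pattern: gc.collect_step exists only on PyPy 7.x (the attribute name below follows PyPy's gc documentation), so this guards for it and falls back to an ordinary full collection elsewhere.

```python
import gc

gc.disable()                     # stop automatic collections
try:
    if hasattr(gc, 'collect_step'):
        # PyPy: run one incremental GC step at a time until a major
        # collection completes
        while not gc.collect_step().major_is_done:
            pass
    else:
        gc.collect()             # CPython fallback: one full collection
finally:
    gc.enable()
```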
We updated the cffi module included in PyPy to version 1.12, and the cppyy backend to 1.4. Please use these to wrap your C and C++ code, respectively, for a JIT friendly experience.
As always, this release is 100% compatible with the previous one and fixed several issues and bugs raised by the growing community of PyPy users. We strongly recommend updating.
The PyPy3.6 release and the Windows PyPy3.5 release are still not production quality so your mileage may vary. There are open issues with incomplete compatibility and c-extension support.
The utf8 branch that changes the internal representation of unicode to utf8 did not make it into the release, so there is still more goodness coming. You can download the v7.0 releases here:
http://pypy.org/download.html
We would like to thank our donors for the continued support of the PyPy project. If PyPy is not quite good enough for your needs, we are available for direct consulting work.
We would also like to thank our contributors and encourage new people to join the project. PyPy has many layers and we need help with all of them: PyPy and RPython documentation improvements, tweaking popular modules to run on pypy, or general help with making RPython's JIT even better.

What is PyPy?

PyPy is a very compliant Python interpreter, almost a drop-in replacement for CPython 2.7, 3.5 and 3.6. It's fast (PyPy and CPython 2.7.x performance comparison) due to its integrated tracing JIT compiler.
We also welcome developers of other dynamic languages to see what RPython can do for them.
The PyPy release supports:
  • x86 machines on most common operating systems (Linux 32/64 bits, Mac OS X 64 bits, Windows 32 bits, OpenBSD, FreeBSD)
  • big- and little-endian variants of PPC64 running Linux,
  • s390x running Linux
Unfortunately at the moment of writing our ARM buildbots are out of service, so for now we are not releasing any binary for the ARM architecture.

What else is new?

PyPy 6.0 was released in April, 2018. There are many incremental improvements to RPython and PyPy, the complete listing is here.

Please update, and continue to help us make PyPy better.


Cheers, The PyPy team

Jahongir Rahmonov: How to write a Python web framework. Part I.


"Don't reinvent the wheel" is one of the most frequent mantras we hear every day. But what if I want to learn more about the wheel? What if I want to learn how to make this damn wheel? I think it is a great idea to reinvent it for the purpose of learning. Thus, in these series, we will write our own Python web framework to see how all that magic is done in Flask, Django and other frameworks.

In this first part of the series, we will build the most important parts of the framework. At the end of it, we will have request handlers (think Django views) and routing: both simple (like /books/) and parameterized (like /greet/{name}). If you like it after reading, please let me know in the comments what other features we should implement next.

Before I start doing something new, I like to think about the end result. In this case, at the end of the day, we want to be able to use this framework in production and thus we want our framework to be served by a fast, lightweight, production-level application server. I have been using gunicorn in all of my projects in the last few years and I am very satisfied with the results. So, let's go with gunicorn.

Gunicorn is a WSGI HTTP Server, so it expects a specific entrypoint to our application. If you don't know what WSGI is, go find out; I will wait. Otherwise, you will not understand a huge chunk of this blog post.

Have you learnt what WSGI is? Good. Let's continue.

To be WSGI-compatible, we need a callable object (a function or a class) that expects two parameters (environ and start_response) and returns a WSGI-compatible response. Don't worry if it doesn't make sense yet. Hopefully it will "click" for you while writing the actual code. So, let's get started with the code.

Think of a name for your framework and create a folder with that name. I named it bumbo:

mkdir bumbo

Go into this folder, create a virtual env and activate it:

cd bumbo
python3.6 -m venv venv
source venv/bin/activate

Now, create the file named app.py where we will store our entrypoint for gunicorn:

touch app.py

Inside this app.py, let's write a simple function to see if it works with gunicorn:

# app.py
def app(environ, start_response):
    response_body = b"Hello, World!"
    status = "200 OK"
    start_response(status, headers=[])
    return iter([response_body])

As mentioned above, this entrypoint callable receives two params. One of them is environ, where all kinds of info about the request are stored, such as the request method, URL, query params and the like. The second is start_response, which starts the response, as the name suggests. Now, let's try to run this code with gunicorn. To do that, install gunicorn and run it like so:

pip install gunicorn
gunicorn app:app

The first app is the file which we created and the second app is the name of the function we just wrote. If all is good, you will see something like the following in the output:

[2019-02-09 17:58:56 +0500] [30962] [INFO] Starting gunicorn 19.9.0
[2019-02-09 17:58:56 +0500] [30962] [INFO] Listening at: http://127.0.0.1:8000 (30962)
[2019-02-09 17:58:56 +0500] [30962] [INFO] Using worker: sync
[2019-02-09 17:58:56 +0500] [30966] [INFO] Booting worker with pid: 30966

If you see this, open your browser and go to http://localhost:8000. You should see our good old friend: the Hello, World! message. Awesome! We will build off of this.

Now, let's turn this function into a class because we will need quite a few helper methods and they are much easier to write inside a class. Create an api.py file:

touch api.py

Inside this file, create the following API class. I will explain what it does in a bit:

# api.py
class API:
    def __call__(self, environ, start_response):
        response_body = b"Hello, World!"
        status = "200 OK"
        start_response(status, headers=[])
        return iter([response_body])

Now, delete everything inside app.py and write the following:

# app.py
from api import API

app = API()

Restart your gunicorn and check the result in the browser. It should be the same as before because we simply converted our function named app to a class called API and overrode its __call__ method which is called when you call the instances of this class:

app = API()
app()  # this is where __call__ is called

Now that we created our class, I want to make the code more elegant because all those bytes (b"Hello World") and start_response seem confusing to me. Thankfully, there is a cool package called WebOb that provides objects for HTTP requests and responses by wrapping the WSGI request environment and response status, headers and body. By using this package, we can pass the environ and start_response to the classes provided by this package and not have to deal with them ourselves. Before we continue, I suggest you take a look at the documentation of WebOb to better understand what I am talking about and to get to know WebOb's API.

Here is how we will go about refactoring this code. First, install WebOb:

pip install webob

Import the Request and Response classes at the beginning of the api.py file:

# api.py
from webob import Request, Response

...

and now we can use them inside the __call__ method:

# api.py
from webob import Request, Response


class API:
    def __call__(self, environ, start_response):
        request = Request(environ)

        response = Response()
        response.text = "Hello, World!"

        return response(environ, start_response)

Looks much better! Restart the gunicorn and you should see the same result as before. And the best part is I don't have to explain what is being done here. It is all self-explanatory. We are creating a request, a response and then returning that response. Awesome! I do have to note that request is not being used here yet because we are not doing anything with it. So, let's use this chance and use the request object as well. Also, let's refactor the response creation into its own method. We will see why it is better later:

# api.py
from webob import Request, Response


class API:
    def __call__(self, environ, start_response):
        request = Request(environ)

        response = self.handle_request(request)

        return response(environ, start_response)

    def handle_request(self, request):
        user_agent = request.environ.get("HTTP_USER_AGENT", "No User Agent Found")

        response = Response()
        response.text = f"Hello, my friend with this user agent: {user_agent}"

        return response

Restart your gunicorn and you should see this new message in the browser. Did you see it? Cool. Let's go on.

At this point, we handle all the requests in the same way. Whatever request we receive, we simply return the same response which is created in the handle_request method. Ultimately, we want it to be dynamic. That is, we want to serve the request coming from /home/ differently than the one coming from /about/.

To that end, inside app.py, let's create two methods that will handle those two requests:

# app.py
from api import API

app = API()


def home(request, response):
    response.text = "Hello from the HOME page"


def about(request, response):
    response.text = "Hello from the ABOUT page"

Now, we need to somehow associate these two methods with the above mentioned paths: /home/ and /about/. I like the Flask way of doing it that would look like this:

# app.py
from api import API

app = API()


@app.route("/home")
def home(request, response):
    response.text = "Hello from the HOME page"


@app.route("/about")
def about(request, response):
    response.text = "Hello from the ABOUT page"

What do you think? Looks good? Then let's implement this bad boy!

As you can see, the route method is a decorator, accepts a path and wraps the methods. It shouldn't be too difficult to implement:

# api.py
class API:
    def __init__(self):
        self.routes = {}

    def route(self, path):
        def wrapper(handler):
            self.routes[path] = handler
            return handler

        return wrapper

    ...

Here is what we did. In the __init__ method, we simply defined a dict called self.routes where we store paths as keys and handlers as values. It can look like this:

print(self.routes)
{
    "/home": <function home at 0x1100a70c8>,
    "/about": <function about at 0x1101a80c3>
}

In the route method, we took path as an argument and in the wrapper method simply put this path in the self.routes dictionary as a key and the handler as a value.

At this point, we have all the pieces of the puzzle. We have the handlers and the paths associated with them. Now, when a request comes in, we need to check its path, find an appropriate handler, call that handler and return an appropriate response. Let's do that:

# api.py
from webob import Request, Response


class API:
    ...

    def handle_request(self, request):
        response = Response()

        for path, handler in self.routes.items():
            if path == request.path:
                handler(request, response)
                return response

    ...

Wasn't too difficult, was it? We simply iterated over self.routes, compared each path with the path of the request and, if there was a match, called the handler associated with that path.

Restart the gunicorn and try those paths in the browser. First, go to http://localhost:8000/home/ and then go to http://localhost:8000/about/. You should see the corresponding messages. Pretty cool, right?

As the next step, we can answer the question of "What happens if the path is not found?". Let's create a method that returns a simple HTTP response of "Not found." with the status code of 404:

# api.py
from webob import Request, Response


class API:
    ...

    def default_response(self, response):
        response.status_code = 404
        response.text = "Not found."

    ...

Now, let's use it in our handle_request method:

# api.py
from webob import Request, Response


class API:
    ...

    def handle_request(self, request):
        response = Response()

        for path, handler in self.routes.items():
            if path == request.path:
                handler(request, response)
                return response

        self.default_response(response)
        return response

    ...

Restart the gunicorn and try some nonexistent routes. You should see this lovely "Not found." page. Now, let's refactor finding a handler out into its own method for the sake of readability:

# api.py
from webob import Request, Response


class API:
    ...

    def find_handler(self, request_path):
        for path, handler in self.routes.items():
            if path == request_path:
                return handler

    ...

Just like before, it simply iterates over self.routes, compares each path with the request path and returns the handler if the paths are the same. It returns None if no handler was found. Now, we can use it in our handle_request method:

# api.py
from webob import Request, Response


class API:
    ...

    def handle_request(self, request):
        response = Response()

        handler = self.find_handler(request_path=request.path)

        if handler is not None:
            handler(request, response)
        else:
            self.default_response(response)

        return response

    ...

I think it looks much better and is pretty self-explanatory. Restart your gunicorn to see that everything is working just like before.

At this point, we have routes and handlers. It is pretty awesome but our routes are simple. They don't support keyword parameters in the url path. What if we want to have this route of @app.route("/hello/{person_name}") and be able to use this person_name inside our handlers like this:

def say_hello(request, response, person_name):
    response.text = f"Hello, {person_name}"

For that, if someone goes to /hello/Matthew/, we need to be able to match this path against the registered /hello/{person_name}/ and find the appropriate handler. Thankfully, there is already a package called parse that does exactly that for us. Let's go ahead and install it:

pip install parse

Let's test it out:

>>> from parse import parse
>>> result = parse("Hello, {name}", "Hello, Matthew")
>>> print(result.named)
{'name': 'Matthew'}

As you can see, it parsed the string Hello, Matthew and was able to identify that Matthew corresponds to the {name} that we provided.

Let's use it in our find_handler method to find not only the method that corresponds to the path but also the keyword params that were provided:

# api.py
from webob import Request, Response
from parse import parse


class API:
    ...

    def find_handler(self, request_path):
        for path, handler in self.routes.items():
            parse_result = parse(path, request_path)
            if parse_result is not None:
                return handler, parse_result.named

        return None, None

    ...

We are still iterating over self.routes and now instead of comparing the path to the request path, we are trying to parse it and if there is a result, we are returning both the handler and keyword params as a dictionary. Now, we can use this inside handle_request to send those params to the handlers like this:

# api.py
from webob import Request, Response
from parse import parse


class API:
    ...

    def handle_request(self, request):
        response = Response()

        handler, kwargs = self.find_handler(request_path=request.path)

        if handler is not None:
            handler(request, response, **kwargs)
        else:
            self.default_response(response)

        return response

    ...

The only changes are that we now get both the handler and kwargs from self.find_handler, and pass those kwargs to the handler as **kwargs.

Let's write a handler with this type of route and try it out:

# app.py
...

@app.route("/hello/{name}")
def greeting(request, response, name):
    response.text = f"Hello, {name}"

...

Restart your gunicorn and go to http://localhost:8000/hello/Matthew/. You should see the wonderful message Hello, Matthew. Awesome, right? Add a couple more handlers of your own. You can also indicate the type of a given param. For example, you can do @app.route("/tell/{age:d}") so that you have the param age inside the handler as a digit.
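Under the hood, parse does something conceptually similar to this stdlib-only sketch (a toy stand-in with illustrative names, not parse's actual implementation):

```python
import re

def match_route(pattern, path):
    """Match "/tell/{age:d}" style patterns against a request path."""
    # Remember which fields were declared as digits with ":d"
    int_fields = set(re.findall(r'\{(\w+):d\}', pattern))
    # Turn "{age:d}" into a digits-only named group, "{name}" into a generic one
    regex = re.sub(r'\{(\w+):d\}', r'(?P<\1>\\d+)', pattern)
    regex = re.sub(r'\{(\w+)\}', r'(?P<\1>[^/]+)', regex)
    match = re.fullmatch(regex, path)
    if match is None:
        return None
    # Fields declared with :d come back as ints, everything else as str
    return {key: int(value) if key in int_fields else value
            for key, value in match.groupdict().items()}
```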

Conclusion

This was a long ride but I think it was great. I personally learned a lot while writing this. If you liked this blog post, please let me know in the comments what other features we should implement in our framework. I am thinking of class based handlers, support for templates and static files.

Fight on!

Python Software Foundation: The Steady Leader of the Python Community, Alex Gaynor, Receives Community Service Award

Going through the big names in the Python community, one is not likely to miss Alex Gaynor. Alex was Director of both the Python Software Foundation as well as the Django Software Foundation, and he is currently an Infrastructure Staff member of the PSF. Overall, Alex has been a valuable member of the Python community, contributing to the structure of the PSF on an administrative level, and actively encouraging the growth of Python through his personal efforts.

For this reason, the Python Software Foundation has awarded Alex Gaynor the Q3 2018 Community Service Award:

RESOLVED, that the Python Software Foundation award the Q3 2018 Community Service Award to Alex Gaynor for his contributions to the Python Community and the Python Software Foundation. Alex previously served as a PSF Director in 2015-2016. He currently serves as an Infrastructure Staff member and contributes to legacy PyPI and the next generation warehouse and has helped legacy warehouse in security (disabling unsupported OpenID) and cutting bandwidth costs by compressing 404 images.

Alex attended Rensselaer Polytechnic Institute, where he received his Bachelor of Science degree in Computer Science. Originally from Chicago, he is currently living in Washington DC. In the past, Alex worked for the United States Digital Service on various impactful projects such as the United States Refugee Admissions Program and the Veterans Affairs disability benefits appeals process. He is now working for Mozilla on their Firefox Security Team.

Alex originally began contributing to the Python community by serving on the PyCon programming committee. “I was fortunate that right after I joined the community PyCon was in my hometown of Chicago, which made it easy to get involved.” Alex then decided to take up the responsibility of being a Director of the Python Software Foundation when the organization was going through many changes. “[W]e were adopting a Code of Conduct, starting to work on the new membership model, and significantly growing the grant funding we were offering. I think my proudest accomplishment is being a part of the team that kept all of these great initiatives on the rails (I certainly can't take credit for any of them on my own!); since my time the PSF has significantly scaled up its ability to help guide and support the global Python community.”

Aside from his contributions to the Python Software Foundation, Alex also served as a Director of the Django Software Foundation and a member of the Django core team. As mentioned in the resolution above, Alex is currently working as a PSF Infrastructure Staff member, where he works on legacy PyPI and the next generation warehouse. Alex has improved security on legacy PyPI by disabling support for OpenID (an open, decentralized authentication protocol that is no longer supported), and has cut bandwidth costs by compressing 404 images.

Being on the PSF Infrastructure team with Alex, Director of Infrastructure Ernest W. Durbin III has enjoyed working with him and appreciates his contributions to the team:

“Alex has been one of the most steady and reliable motivators for improved security throughout our entire community. Alex stays ruthlessly up to date on current best practices and makes a consistent effort to help encourage and implement pragmatic security at all levels. While far from an exhaustive list, the Python community can thank Alex for his advocacy and knowledge on rock solid TLS for pypi.org, sharing his knowledge and experience with the Python Security Response Team, and contributions to security in the Python language as well as core cryptographic libraries on PyPI.”

Glyph Lefkowitz, creator of the Twisted framework, additionally observed that Alex’s contributions across multiple projects, from PyCA's Cryptography, to Django, to CPython, to PyPy to Twisted, have been transformational for the Python ecosystem, and have, in particular, made it a much safer and more secure community for users. “When he sees a problem that needs addressing, his willingness to work across projects and layers is an ongoing source of inspiration for everyone that calls themselves a 'maintainer',” noted Glyph.

As a long-time member of the Python community, Alex says what he appreciates most about the community is its commitment to getting more people involved in Python specifically, and programming in general. He is particularly impressed by the PSF’s efforts to support the growth of Python on multiple scales. “I don't think there's any organization like the PSF that does as much work issuing grants and supporting local groups teaching getting people involved in coding and Python.”


Moving forward, Alex hopes to see more knowledge being shared regarding potential funding in the community. “From PyPI to PyCon and beyond there's a lot of costs associated with making these community resources happen, and we've learned a lot about how to raise money to make them happen. I think we could do a better job sharing these lessons learned with the broader open source ecosystem and helping to push new innovation in this space.”

Additionally, to anyone out there looking to make impactful contributions to our community, Alex’s advice is to simply jump in and contribute in whatever ways that work for you. With numerous volunteering opportunities with the PSF working groups, local meetups, regional conferences, and many more, it is easier than ever to be a part of, and help promote the Python community.

As the final note, the PSF would like to congratulate Alex Gaynor again for this prestigious award, and thank him for his continued contributions to our organization in particular, and to the general Python community as a whole.

gamingdirectional: Create the player animation


Hello and welcome back. In this chapter we will create a method that accepts either an x increment or a y increment from the game manager object, which in turn receives those increments from the main pygame file when the user presses the up, down, left or right arrow key on the keyboard. We will not make the player move yet in this chapter, but just animate that player object; we will make the...
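As a rough sketch of the idea described above (the class and method names here are assumptions, not the author's actual code), the player object can simply record the increments it receives from the game manager without moving yet:

```python
# Hypothetical sketch; Player and set_increment are assumed names.
class Player:
    def __init__(self):
        # Pending offsets received from the game manager; the player
        # is not moved yet, the values only drive the animation step.
        self.x_inc = 0
        self.y_inc = 0

    def set_increment(self, x_inc, y_inc):
        # Called by the game manager when the user presses the up,
        # down, left or right arrow key in the main pygame file.
        self.x_inc = x_inc
        self.y_inc = y_inc


player = Player()
player.set_increment(1, 0)  # e.g. the right arrow key was pressed
print(player.x_inc, player.y_inc)
```

In the real game, the manager would call a method like this from its key-event handler, and a separate update step would use the stored values to pick the right animation frame.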



Django Weblog: Django bugfix releases: 2.1.7, 2.0.12 and 1.11.20


Today we've issued the 2.1.7, 2.0.12 and 1.11.20 bugfix releases.

The release package and checksums are available from our downloads page, as well as from the Python Package Index. The PGP key ID used for this release is Carlton Gibson: E17DF5C82B4F9D00.

Mike Driscoll: Less than 2 Days to Go on wxPython Book Kickstarter


My latest book, Create GUI Applications with wxPython, is coming along nicely. I just wanted to let my readership know that the Kickstarter for it is coming to a close in a little less than 2 days.

If you’d like to get a copy at a cheaper price than it will be when it is released in May later this year, the Kickstarter is really the way to go. You can check out the current table of contents in this post from last week.

Thanks for your support!

Stack Abuse: Python Programming in Interactive vs Script Mode


In Python, there are two options/methods for running code:

  • Interactive mode
  • Script mode

In this article, we will see the difference between the modes and will also discuss the pros and cons of running scripts in both of these modes.

Interactive Mode

Interactive mode, also known as the REPL (read-eval-print loop), provides us with a quick way of running blocks or a single line of Python code. The code executes via the Python shell, which comes with the Python installation. Interactive mode is handy when you just want to execute basic Python commands, or when you are new to Python programming and want to get your hands dirty with this beautiful language.

To access the Python shell, open the terminal of your operating system and then type "python". Press the enter key and the Python shell will appear. This is the same Python executable you use to execute scripts, which comes installed by default on Mac and Unix-based operating systems.

C:\Windows\system32>python  
Python 3.5.0 (v3.5.0:374f501f4567, Sep 13 2015, 02:27:37) [MSC v.1900 64 bit (AMD64)] on win32  
Type "help", "copyright", "credits" or "license" for more information.  
>>>

The >>> indicates that the Python shell is ready to execute and send your commands to the Python interpreter. The result is immediately displayed on the Python shell as soon as the Python interpreter interprets the command.

To run your Python statements, just type them and hit the enter key. You will get the results immediately, unlike in script mode. For example, to print the text "Hello World", we can type the following:

>>> print("Hello World")
Hello World  
>>>

Here are other examples:

>>> 10
10  
>>> print(5 * 20)
100  
>>> "hi" * 5
'hihihihihi'  
>>>

We can also run multiple statements on the Python shell. A good example of this is when we need to declare many variables and access them later. This is demonstrated below:

>>> name = "Nicholas"
>>> age = 26
>>> course = "Computer Science"
>>> print("My name is " + name + ", aged " + str(age) + ", taking " + course)

Output

My name is Nicholas, aged 26, taking Computer Science  

Using the method demonstrated above, you can run multiple Python statements without having to create and save a script. You can also copy your code from another source then paste it on the Python shell.

Consider the following example:

>>> if 5 > 10:
...     print("5 is greater than 10")
... else:
...     print("5 is less than 10")
...
5 is less than 10  
>>>

The above example also demonstrates how we can run multiple Python statements in interactive mode. The two print statements have been indented using four spaces. Just like in script mode, if you don't indent properly you will get an error. Also, to end the block and see the output of the last print statement, press the enter key on the empty continuation line (that is, press enter a second time without typing anything).

Getting Help

You can also get help with regards to a particular command in interactive mode. Just type the help() command on the shell and then hit the enter key. You will see the following:

>>> help()

Welcome to Python 3.5's help utility!

If this is your first time using Python, you should definitely check out  
the tutorial on the Internet at http://docs.python.org/3.5/tutorial/.

Enter the name of any module, keyword, or topic to get help on writing  
Python programs and using Python modules.  To quit this help utility and  
return to the interpreter, just type "quit".

To get a list of available modules, keywords, or topics, type "modules",  
"keywords", or "topics".  Each module also comes with a one-line summary
of what it does; to list the modules whose summaries contain a given word  
such as "spam", type "modules spam".

help>  

Now to find the help for a particular command, simply type that command. For instance, to find help for the print command, type print and hit the enter key. The result will look like this:

Help on built-in function print in module builtins:

print(...)  
    print(value, ..., sep=' ', end='\n', file=sys.stdout, flush=False)

    Prints the values to a stream, or to sys.stdout by default.
    Optional keyword arguments:
    file:  a file-like object (stream); defaults to the current sys.stdout.
    sep:   string inserted between values, default a space.
    end:   string appended after the last value, default a newline.
    flush: whether to forcibly flush the stream.

As shown in the above output, the help utility returned useful information regarding the print command including what the command does and what are some of the arguments that can be used with the command.

To exit help, type q for "quit" and then hit the enter key. You will be taken back to the Python shell.

Pros and Cons of Interactive Mode

The following are the advantages of running your code in interactive mode:

  1. Helpful when your script is extremely short and you want immediate results.
  2. Faster as you only have to type a command and then press the enter key to get the results.
  3. Good for beginners who need to understand Python basics.

The following are the disadvantages of running your code in the interactive mode:

  1. Editing the code in interactive mode is hard as you have to move back to the previous commands or else you have to rewrite the whole command again.
  2. It's very tedious to run long pieces of code.

Next, we will be discussing the script mode.

Script Mode

If you need to write a long piece of Python code or your Python script spans multiple files, interactive mode is not recommended. Script mode is the way to go in such cases. In script mode, you write your code in a text file and then save it with a .py extension, which stands for "Python". Note that you can use any text editor for this, including Sublime, Atom, Notepad++, etc.

If you are using IDLE, the editor and shell bundled with Python, you can click "File" then choose "New File", or simply hit "Ctrl + N" on your keyboard, to open a blank script in which you can write your code. You can then press "Ctrl + S" to save it.

After writing your code, you can run it by clicking "Run" then "Run Module" or simply press F5.

Let us create a new file from the Python shell and give it the name "hello.py". We need to run the "Hello World" program. Add the following code to the file:

print("Hello World")  

Click "Run" then choose "Run Module". This will run the program:

Output

Hello World  

Other than executing the program from the graphical user interface, we can do it from the terminal of the operating system. However, you must be aware of the path to the directory where you have saved the file.

Open the terminal of your operating system then navigate to the location of the file. Of course, you will use the "cd (change directory)" command for this.

Once you reach the directory with the file, you will need to invoke the Python interpreter on the file. This can be done using the following syntax:

> python <filename>

To run the Python file from the terminal, you just have to type the python keyword followed by the name of the file. In our case, we need to run a file named "hello.py". We need to type the following on the terminal of the operating system:

> python hello.py
Hello World  
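Putting the steps together, here is a hypothetical end-to-end session (the directory name is an assumption; on systems where python still points at Python 2, use python3 as shown):

```shell
# Create a folder, save hello.py there, change into it, and run it.
mkdir -p ~/scripts
printf 'print("Hello World")\n' > ~/scripts/hello.py
cd ~/scripts
python3 hello.py    # prints: Hello World
```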

If you want to get to the Python shell after getting the output, add the -i option to the command. This is demonstrated below:

> python -i hello.py
Hello World  

The following example demonstrates how to execute the multiple lines of code using the Python script.

name = "Nicholas"  
age = 26  
course = "Computer Science"  
print("My name is " + name + ", aged " + str(age) + ", taking " + course)  

Pros and Cons of Script Mode

The following are the advantages of running your code in script mode:

  1. It is easy to run large pieces of code.
  2. Editing your script is easier in script mode.
  3. Good for both beginners and experts.

The following are the disadvantages of using the script mode:

  1. Can be tedious when you need to run only a single or a few lines of code.
  2. You must create and save a file before executing your code.

Key Differences Between Interactive and Script Mode

Here are the key differences between programming in interactive mode and programming in script mode:

  1. In script mode, a file must be created and saved before executing the code to get results. In interactive mode, the result is returned immediately after pressing the enter key.
  2. In script mode, you are provided with a direct way of editing your code. This is not possible in interactive mode.

Conclusion

There are two modes through which we can create and run Python scripts: interactive mode and script mode. Interactive mode involves running your code directly on the Python shell, which can be accessed from the terminal of the operating system. In script mode, you create a file, give it a name with a .py extension, and then run your code. Interactive mode is suitable when running a few lines of code. Script mode is recommended when you need to create large applications.

Programiz: Python IDEs and Code Editors

In this guide, you will learn about various Python IDEs and code editors for beginners and professionals.

Python Software Foundation: Python Community service award Q3: Mario Corchero




The PSF community service awards go to those individuals whose work and commitment complement and strengthen the PSF mission: to support and facilitate the growth of a diverse global Python community. So when thinking about individuals that go above and beyond to support the global community Mario Corchero is a name that comes easily to mind.


Not only is Mario a Senior Software Engineer for Bloomberg but he also devotes incredible amounts of his time to organise PyCon ES (Spain), PyLondinium, and more recently, the Spanish speaking track of PyCon: Las PyCon Charlas.

Mario is the true embodiment of the Python community spirit and for this reason, the Python Software Foundation has awarded Mario Corchero with the Q3 2018 Community Service Award.

RESOLVED, that the Python Software Foundation award the Q3 2018 Community Service Award to Mario Corchero for helping organize PyLondinium, the PyCon Charlas track, and PyCon Spain.


Mario's contributions to the Python Community


PyConES


With the growing popularity and global adoption of Python there also comes the need to bring together diverse community groups. Although large events such as PyCon US are incredibly important in bringing these groups together, they are not always accessible to the whole community. Smaller, localized events such as PyCon ES, France, Namibia, Colombia, and many, many others help with the goal of bringing cohesion to the global community.

According to David Naranjo (co-organiser of PyConES), PyConES was the first event of this kind that he and Mario attended together. They loved it so much that while at PyConES16 they decided to submit an application to organise and bring this event to their region: Malaga.

On top of the many challenges that come with organising an event of this type (i.e. drafting the programme, getting talks accepted, running the event on the day), they have faced an additional layer of complexity: neither of them lives in the region anymore.

This has made the organisation of PyConES a true community effort: from the organising committee to the sponsors and the volunteers that work together to make this a huge success. PyConES is now a staple Python event in Europe with more than 600 attendees every year, and it owes a great deal of its success to Mario's efforts.



PyLondinium


A year after organising his first PyConES, Mario embarked on yet another journey: the organisation of PyLondinium, an event focused on showcasing the many use cases of Python, as opposed to more specialised events such as the PyData conferences.


PyLondinium is not only focused on bringing together the Python community but also to raise money for the PSF and its programmes around the world. In this particular case, Bloomberg, a long-time Python supporter, has played an important role in the success of the event. Not only do they host the event at their Europe headquarters in the heart of London but they also help to cover some of the costs as the main event sponsor, keeping the ticket prices at an affordable level.



Pylondinium 2018

Accessibility for the wider community


As a passionate community builder from a non-English speaking country, localization and accessibility of the Python language is something that matters to Mario. Most of the coding resources out in the world are written in English, which can be a barrier to those whose primary language is not English or who simply do not speak the language at all. That is why, when he was presented with the opportunity to chair the Spanish track of PyCon US 2017 (Las PyCon Charlas), he did so wholeheartedly, embarking on yet another community journey alongside PSF Director Naomi Ceder.

Again, like his other endeavours, Las Charlas was an absolute success. It gathered people from all over Latin America and Spain for a full day of talks in Spanish on such topics as machine learning, astronomy and security. In fact, it was such a success that the Charlas track is back this coming year and the organisers are already receiving talk submissions (for more details visit https://us.pycon.org/2019/speaking/).



PyCon Charlas 2018


When asked why he organises all of these events, his answer is rather simple and honest: "It is usually driven by a 'how come no one is doing this yet?'," says Mario. But when digging deeper it becomes evident that Mario's motivations lie in bringing the community together and nurturing it. Mario is extremely dedicated to the community and to helping others get involved. From creating Spanish tracks for PyCon US to creating events serving specific areas or regions, Mario is constantly finding ways to bring Pythonistas together.


