
Spyder IDE: Spyder featured on Episode 1 of Open Source Directions web show


Quansight, the company recently founded by NumPy, SciPy and Anaconda creator Travis Oliphant to help connect companies with open source communities built around data science and machine learning, just released Episode 1 of its live webcast series, and it was all about Spyder! Spyder maintainer Carlos Córdoba, recently hired by Quansight and funded part-time to work on Spyder development as we announced a few weeks ago, was the featured guest on the show.

Carlos first shared his perspective on some of the key moments in Spyder's nearly 10-year development history, from its original creation by Pierre Raybaut and Carlos' initial involvement in the project to its more recent challenges and successes. He also demonstrated basic usage of Spyder, as well as some of its standout features, in a live on-screen demo. Carlos then went on to outline the current roadmap for Spyder 4 in the near future, and explained some of the key new features planned for it. Finally, he took the time to answer a variety of Spyder-related questions asked live by viewers, ranging from specific current and planned Spyder features to suggestions on keeping a good work-life balance. While technical difficulties (since identified and resolved) interrupted some of his commentary, particularly in the roadmap section, a full account of the latter will be published here shortly.

If you missed the webcast, Quansight recorded it and uploaded it to their new YouTube channel, so you can watch it on-demand right here if you're curious about any of the above. Give it a like if you enjoy it to show Quansight some love for helping support Spyder's further development!

We'll have a new post on the release of Spyder 3.3.1 (and 3.3.0) in a few days' time, plus articles on our new docs, Spyder 4 beta 1, and our full roadmap all in the next week or so; there's plenty going on that you won't want to miss. Keep it right here to catch all that, and in the meantime, happy Spydering!


Codementor: How to Build a Data Science Portfolio

How do you get a job in data science? Knowing enough statistics, machine learning, programming, etc. to be able to get a job is difficult. One thing I have found lately is that quite a few people may have the required skills to get a job, but no portfolio. While a resume matters, having a portfolio of public evidence of your data science skills can do wonders for your job prospects. Even if you have a referral, being able to show potential employers what you can do, instead of just telling them you can do something, is important.

Mike Driscoll: Python 101: Episode #20 – The sys module

Semaphore Community: Getting Started with Mocking in Python


This article is brought with ❤ to you by Semaphore.

Introduction

Mocking is simply the act of replacing the part of the application you are testing with a dummy version of that part called a mock.

Instead of calling the actual implementation, you would call the mock, and then make assertions about what you expect to happen.
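Here's a minimal sketch of the idea (the send_email function name is made up for illustration):

from unittest.mock import Mock

# Stand-in for a real, slow or side-effecting email function
send_email = Mock(return_value=True)

result = send_email("alice@example.com", subject="Hi")

assert result is True  # the mock returns what we configured
send_email.assert_called_once_with("alice@example.com", subject="Hi")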

What are the benefits of mocking?

  • Increased speed: Tests that run quickly are extremely beneficial. For example, if you have a very resource-intensive function, a mock of that function would cut down on unnecessary resource usage during testing, reducing test run time.

  • Avoiding undesired side effects during testing: If you are testing a function that makes calls to an external API, you may not want to make an actual API call every time you run your tests; the API may be rate-limited, and you would have to change your tests every time the API changes. Mocking helps you avoid both problems.

Prerequisites

You will need to have Python 3.3 or higher installed. Get the correct version for your platform here. I will be using version 3.6.0 for this tutorial.

Once you have that installed, set up a virtual environment:

python3 -m venv mocking

Activate the virtual environment by running:

source mocking/bin/activate

After that, add a main.py file where our code will reside and a test.py file for our tests.

touch main.py test.py

Basic Usage

Imagine a simple class:

class Calculator:
    def sum(self, a, b):
        return a + b

This class implements one method, sum, that takes two arguments: the numbers to be added, a and b. It returns a + b.

A simple test case for this could be as follows:

from unittest import TestCase

from main import Calculator

class TestCalculator(TestCase):
    def setUp(self):
        self.calc = Calculator()

    def test_sum(self):
        answer = self.calc.sum(2, 4)
        self.assertEqual(answer, 6)

You can run this test case using the command:

python -m unittest

You should see output that looks approximately like this:

.
----------------------------------------------------------------------
Ran 1 test in 0.003s

OK

Pretty fast, right?

Now, imagine the code looked like this:

import time

class Calculator:
    def sum(self, a, b):
        time.sleep(10)  # long running process
        return a + b

Since this is a simple example, we are using time.sleep() to simulate a long running process. The previous test case now produces the following output:

.
----------------------------------------------------------------------
Ran 1 test in 10.003s

OK

That process has just considerably slowed down our tests. It is clearly not a good idea to call the sum method as is every time we run tests. This is a situation where we can use mocking to speed up our tests and avoid an undesired effect at the same time.

Let's refactor the test case so that instead of calling sum every time the test runs, we call a mock sum function with well defined behavior.

from unittest import TestCase
from unittest.mock import patch

class TestCalculator(TestCase):
    @patch('main.Calculator.sum', return_value=9)
    def test_sum(self, sum):
        self.assertEqual(sum(2, 3), 9)

We are importing the patch decorator from unittest.mock. It replaces the actual sum function with a mock function that behaves exactly how we want. In this case, our mock function always returns 9. During the lifetime of our test, the sum function is replaced with its mock version. Running this test case, we get this output:

.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

While this may seem counter-intuitive at first, remember that mocking allows you to provide a so-called fake implementation of the part of your system you are testing. This gives you a lot of flexibility during testing. You'll see how to provide a custom function to run when your mock is called instead of hard coding a return value in the section titled Side Effects.

A More Advanced Example

In this example, we'll be using the requests library to make API calls. You can get it via pip install.

pip install requests

Our code under test in main.py looks as follows:

import requests

class Blog:
    def __init__(self, name):
        self.name = name

    def posts(self):
        response = requests.get("https://jsonplaceholder.typicode.com/posts")
        return response.json()

    def __repr__(self):
        return '<Blog: {}>'.format(self.name)

This code defines a class Blog with a posts method. Invoking posts on the Blog object will trigger an API call to jsonplaceholder, a JSON generator API service.

In our test, we want to mock out the unpredictable API call and only test that a Blog object's posts method returns posts. We will need to patch all Blog objects' posts methods as follows.

from unittest import TestCase
from unittest.mock import patch, Mock

class TestBlog(TestCase):
    @patch('main.Blog')
    def test_blog_posts(self, MockBlog):
        blog = MockBlog()
        blog.posts.return_value = [
            {
                'userId': 1,
                'id': 1,
                'title': 'Test Title',
                'body': 'Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun.'
            }
        ]

        response = blog.posts()
        self.assertIsNotNone(response)
        self.assertIsInstance(response[0], dict)

You can see from the code snippet that the test_blog_posts function is decorated with the @patch decorator. When a function is decorated using @patch, a mock of the class, method or function passed as the target to @patch is returned and passed as an argument to the decorated function.

In this case, @patch is called with the target main.Blog and returns a Mock which is passed to the test function as MockBlog. It is important to note that the target passed to @patch should be importable in the environment @patch is being invoked from. In our case, an import of the form from main import Blog should be resolvable without errors.

Also, note that MockBlog is just a variable name representing the created mock; you can name it however you want.

Calling blog.posts() on our mock blog object returns our predefined JSON. Running the tests should pass.

.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK

Note that testing the mocked value instead of an actual blog object allows us to make extra assertions about how the mock was used.

For example, a mock allows us to test how many times it was called, the arguments it was called with and even whether the mock was called at all. We'll see additional examples in the next section.

Other Assertions We Can Make on Mocks

Using the previous example, we can make some more useful assertions on our Mock blog object.

import main

from unittest import TestCase
from unittest.mock import patch

class TestBlog(TestCase):
    @patch('main.Blog')
    def test_blog_posts(self, MockBlog):
        blog = MockBlog()
        blog.posts.return_value = [
            {
                'userId': 1,
                'id': 1,
                'title': 'Test Title',
                'body': 'Far out in the uncharted backwaters of the unfashionable end of the western spiral arm of the Galaxy lies a small unregarded yellow sun.'
            }
        ]

        response = blog.posts()
        self.assertIsNotNone(response)
        self.assertIsInstance(response[0], dict)

        # Additional assertions
        assert MockBlog is main.Blog  # The mock is equivalent to the original
        assert MockBlog.called  # The mock was called

        blog.posts.assert_called_with()  # We called the posts method with no arguments
        blog.posts.assert_called_once_with()  # We called the posts method once with no arguments
        # blog.posts.assert_called_with(1, 2, 3) - This assertion would fail,
        # since we called blog.posts with no arguments

        blog.reset_mock()  # Reset the mock object
        blog.posts.assert_not_called()  # After resetting, posts has not been called

As stated earlier, the mock object allows us to test how it was used by checking the way it was called and which arguments were passed, not just the return value.

Mock objects can also be reset to a pristine state, i.e. a state in which the mock has not been called yet. This is especially useful when you want to make multiple calls to your mock and want each one to run on a fresh instance of the mock.

Side Effects

These are the things that you want to happen when your mock function is called. Common examples are calling another function or raising exceptions.

Let us revisit our sum function. What if, instead of hard coding a return value, we wanted to run a custom sum function instead? Our custom function will mock out the undesired long running time.sleep call and only remain with the actual summing functionality we want to test. We can simply define a side_effect in our test.

from unittest import TestCase
from unittest.mock import patch

def mock_sum(a, b):
    # mock sum function without the long running time.sleep
    return a + b

class TestCalculator(TestCase):
    @patch('main.Calculator.sum', side_effect=mock_sum)
    def test_sum(self, sum):
        self.assertEqual(sum(2, 3), 5)
        self.assertEqual(sum(7, 3), 10)

Running the tests should pass:

.
----------------------------------------------------------------------
Ran 1 test in 0.001s

OK
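A side_effect can also be an exception class or instance, in which case calling the mock raises it. Here's a small sketch reusing the same Calculator target (the error message is illustrative):

from unittest import TestCase
from unittest.mock import patch

class TestCalculatorError(TestCase):
    # side_effect=ValueError(...) makes every call to the mock raise that error
    @patch('main.Calculator.sum', side_effect=ValueError('invalid input'))
    def test_sum_raises(self, mock_sum):
        with self.assertRaises(ValueError):
            mock_sum(2, 3)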

Continuous Integration Using Semaphore CI

Adding Continuous Integration with Semaphore is very easy. Once you have everything committed and pushed to Github or Bitbucket, go here and create a new account or sign into an existing account. We'll be using a Github repo containing the Blog class example and test.

From your dashboard, click Add New Project.


You will be asked to select either Github or Bitbucket as a source. Pick a source as per your preference.


After selecting a source, select the repository.


Next, select the branch to build from.


Semaphore will analyze the project and show you the build settings:


Customize your plan to look as follows:


After that, click Build with these settings at the bottom of that page.


Once your build passes, that's it. You have successfully set up continuous integration on Semaphore CI.

Conclusion

In this article, we have gone through the basics of mocking with Python. We have covered using the @patch decorator and also how to use side effects to provide alternative behavior to your mocks. We also covered how to run a build on Semaphore.

You should now be able to use Python's inbuilt mocking capabilities to replace parts of your system under test to write better and faster tests.

For more detailed information, the official docs are a good place to start.

Please feel free to leave your comments and questions in the comments section below.


Peter Bengtsson: Django lock decorator with django-redis


Here's the code. It's quick-n-dirty but it works wonderfully:

import functools
import hashlib

from django.core.cache import cache
from django.utils.encoding import force_bytes

def lock_decorator(key_maker=None):
    """
    When you want to lock a function from more than 1 call at a time.
    """

    def decorator(func):
        @functools.wraps(func)
        def inner(*args, **kwargs):
            if key_maker:
                key = key_maker(*args, **kwargs)
            else:
                key = str(args) + str(kwargs)
            lock_key = hashlib.md5(force_bytes(key)).hexdigest()
            with cache.lock(lock_key):
                return func(*args, **kwargs)

        return inner

    return decorator

How To Use It

This has saved my bacon more than once. I use it on functions that really need to be made synchronous. For example, suppose you have a function like this:

def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

That function is quite dangerous, because if it's executed by two concurrent web requests, they will trigger two "identical" calls to some_internet_fetching. And if the database didn't have the name already, that will most likely trigger two calls to Thing.objects.create(name=name, ...), which could lead to integrity errors; or if it doesn't, the whole function breaks down, because it assumes that there is only 1 or 0 of these Thing records.

Easy to solve, just add the lock_decorator:

@lock_decorator()
def fetch_remote_thing(name):
    try:
        return Thing.objects.get(name=name).result
    except Thing.DoesNotExist:
        # Need to go out and fetch this
        result = some_internet_fetching(name)  # Assume this is sloooow
        Thing.objects.create(name=name, result=result)
        return result

Now, thanks to Redis distributed locks, the function is always allowed to finish before it starts another one. All the hairy locking (in particular, the waiting) is implemented deep down in Redis which is rock solid.

Bonus Usage

Another use that has also saved my bacon is functions that aren't necessarily called with the same input argument but each call is so resource intensive that you want to make sure it only does one of these at a time. Suppose you have a Django view function that does some resource intensive work and you want to stagger the calls so that it only runs it one at a time. Like this for example:

def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...
    return http.JsonResponse({'data': data})

If you just put @lock_decorator() on this Django view function, and you have some (almost) concurrent calls to this function, for example from a uWSGI server running with threads and multiple processes, then it will not synchronize the calls.

The solution to this is to write your own function for generating the lock key, like this for example:

@lock_decorator(key_maker=lambda request, part: 'api_stats_calculations')
def api_stats_calculations(request, part):
    if part == 'users-per-month':
        data = _calculate_users_per_month()  # expensive
    elif part == 'pageviews-per-week':
        data = _calculate_pageviews_per_week()  # intensive
    elif part == 'downloads-per-day':
        data = _calculate_download_per_day()  # slow
    elif you == 'get' and the == 'idea':
        ...
    return http.JsonResponse({'data': data})

Now it works.

How Time-Expensive Is It?

Perhaps you worry that 99% of your calls to the function don't have the problem of calling the function concurrently. How much is this overhead of this lock costing you? I wondered that too and set up a simple stress test where I wrote a really simple Django view function. It looked something like this:

@lock_decorator(key_maker=lambda request: 'samekey')
def sample_view_function(request):
    return http.HttpResponse('Ok\n')

I started a Django server with uWSGI with multiple processes and threads enabled. Then I bombarded this function with a simple concurrent stress test and observed the requests per minute. The cost was extremely tiny and almost negligible (compared to not using the lock decorator). Granted, in this test I used Redis on redis://localhost:6379/0, but generally the conclusion was that the call is extremely fast and not something to worry too much about. But your mileage may vary, so do your own experiments for your context.

What's Needed

You need to use django-redis as your Django cache backend. I've blogged before about using django-redis, for example Fastest cache backend possible for Django and Fastest Redis configuration for Django.
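For reference, a typical django-redis setup in settings.py looks something like this (adjust the LOCATION URL to point at your own Redis instance):

# settings.py -- a typical django-redis cache configuration
CACHES = {
    "default": {
        "BACKEND": "django_redis.cache.RedisCache",
        "LOCATION": "redis://127.0.0.1:6379/0",
        "OPTIONS": {
            "CLIENT_CLASS": "django_redis.client.DefaultClient",
        },
    }
}

With that in place, django.core.cache.cache exposes the lock() method the decorator above relies on.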

Djangostars: How to build your own blockchain for a financial product


Technologies are changing fast; people are not. – Jakob Nielsen

Blockchain is a relatively new technology that many assume is used only for buying Bitcoin. People try to implement it in whatever sphere comes to mind, whether it is fashion, education or healthcare. I would say that is okay: too little time has passed to determine which areas of human activity can benefit the most from applying this technology. To understand the practical application of blockchain, we must first define why it appeared, and then study the cases where blockchain can make a significant difference.

Note: This article does not explain the blockchain concepts; instead, it focuses on developing a fintech application using this technology. I will explain why fintech can already adopt the blockchain, and most importantly, focus on developing a decentralized application using this technology.

Industries That Are Ready For Blockchain

Don Norman once wrote that many products failed because they were released at the wrong time. I can rework this statement and say: many technologies fail to find practical applications. When the Internet became widely available in the beginning of the '90s, every sphere tried to apply it to their business. It was a catastrophe, and its consequences are still visible in thousands of never-visited websites with horrible interfaces, clumsily created by anyone who had a computer. We are currently witnessing virtually the same situation: the most promising technology of the decade is associated with speculation on crypto-exchanges. It is widely used for financial scams, although it was initially created for the contrary.

Source: Microsoft Azure

An attempt to exclude the human factor from the business was one reason why blockchain appeared. That is why the industries that may have blockchain successfully implemented are those that (1) heavily depend on human activity, and (2) suffer most from human errors, like finance.

Important: Blockchain is being applied to various products from different industries; we just need more daring entrepreneurs who are willing to put a lot at stake.

Fintech deals with a very thorny matter – money. It is exactly where most fraud takes place. The desire to become richer is one fundamental mechanism that pushes people to do things, often bad things. Fintech startups aim to improve on traditional financial institutions, for example by excluding the human factor from financial activities.

Utilizing blockchain excludes third parties from financial transactions, like a bank that verifies the parties between which the transaction is made. It can be used for managing inventory and logistics, trading goods, optimizing person identification, tracking transactions, and more.

Read: What you need to consider before building a fintech product

It does not mean that every fintech product can easily adopt blockchain. Here are some cases when you might want to use blockchain:

  • You want to attract investments. Like it or not, blockchain is still a buzzword. It attracts more investments than real, working products.
  • You want to increase your competitiveness in the market. If you manage to build a product on blockchain successfully, you will instantly show your professionalism, thus becoming more attractive to investors and customers.
  • You are ready to experiment. Yes, any blockchain-based product is an experiment because few know what this technology is capable of. If you are ready to make a breakthrough in your industry, blockchain could be a right choice.

I do not suggest implementing blockchain in the following cases:

  • You are limited in resources. It is a high-risk way to create a product because (1) there are few blockchain engineers, and (2) it is expensive to have them on the team.
  • You are not ready for significant changes. The changes include operational management and human resources. If you are a bank that has implemented blockchain, you will most likely need to let many employees go since there will be less work for people.
  • You have short-term vision. Blockchain is about long-term perspective. It cannot be implemented in a month or so. Unless you have a long-term product roadmap, do not bother yourself with dreams about changing everything tomorrow.

What you can do tomorrow, and even while reading this article, is to build a simple blockchain. It is the focus of part 2. I will tell you about the main components that are required to build a blockchain for fintech products, propose some tools, and show real pieces of code with explanations.

How To Apply Blockchain In Fintech

‘Frameworks’ to use

CryptoNote

CryptoNote is an open-source project that allows you to create crypto coins. They have a simple, step-by-step guide to creating a cryptocurrency. To launch it, you will need to have two nodes which will be used to run the Monero server.

Useful links:

How to create a coin

How to create a wallet

Ethereum

A popular open software platform for building decentralized applications. Its focus is running the programming code of your blockchain-based app. Quoting the Ethereum website: “Ethereum is a decentralized platform that runs smart contracts: applications that run exactly as programmed without any possibility of downtime, censorship, fraud or third party interference.”

ZeroNet

ZeroNet is used for creating decentralized websites. It uses the Bitcoin addressing and verification mechanisms, and the BitTorrent distributed content delivery network to create sites that cannot be censored, forged or blocked.

Build simple blockchain

Now that you know the tools – Cryptonote, Ethereum and ZeroNet – we are moving to building a basic blockchain of our own. I will be using Python in this example, but if it is not your primary coding language, you will still understand the logic and be able to write it in another language.

First, I will explain the fundamental elements required to build a block. I will start with date of creation, nonce, checksum and transaction data. Transaction data in our case could be just a string to simplify the code.

Date of creation

It is the current date and time in Unix format. It is required for the future development of your blockchain: when there are many running nodes and you add a new block to your branch, the node will decide which block to use based on the date of creation.

Nonce

It’s a unique set of symbols that we need to add to the block data to build a checksum that fits the requirement. For example, if the nonce value is 5, then we append 5 nonce symbols ('ZZZZZ' in the code below) to the block data to calculate the right checksum.

Checksum

Also sometimes referred to as hash value, hash code, or simply a hash. It is the SHA-256 hash of the block data and nonce plus the checksum of the previous block; this chaining protects the ledger from being rewritten.
How it works: a node calculates the checksum and compares it to that of the new block; if they match, the block is added to the blockchain.

Data

It’s the set of data that will be stored in the block and signed. It can contain any sort of data: e.g., Bitcoin stores a list of transactions, not only the last transaction; or you can store information about the computer that created the block, like its MAC address; or you can have a more detailed date of creation, say, adding the time zone.

Proof of work

Proof of work (PoW) is a unique consensus algorithm in a blockchain network. It is used to validate operations and the creation of new chains in the blockchain network. The main idea of PoW is to add complexity to building a block on the client side and reduce the load on the server side. For example, if I say the checksum has to have 5 leading zeros, it means that we will keep increasing the nonce until the checksum has 5 leading zeros.

Let’s start with the code

First of all, I will create a class for a block. It will be a very simple class with a constructor, a method for calculating the checksum, and a property to check that the block is valid. We will have two constants: one for the number of leading zeros in the checksum, and a second to identify which symbol we will use as the nonce.

import time  
from hashlib import sha256

class Block:  
    CHECKSUM_LEAD_ZEROS = 5
    NONCE_SYMBOL = 'Z'

    def __init__(self, prev_block, data):
        self._prev_block = prev_block
        self.data = data
        self.checksum = None
        self.nonce = 0
        self.timestamp = time.time()

    @property
    def is_valid(self):
        checksum = self.calculate_checksum()

        return (
            checksum[:self.CHECKSUM_LEAD_ZEROS] == '0' * self.CHECKSUM_LEAD_ZEROS
            and checksum == self.checksum
        )

    def calculate_checksum(self):
        data = '|'.join([
            str(self.timestamp),
            self.data,
            self._prev_block.checksum,
        ])
        data += self.NONCE_SYMBOL * self.nonce

        return sha256(bytes(data, 'utf-8')).hexdigest()
Constructor

The constructor accepts only two parameters; the first is the previous block, and the second is the current block data. It also creates the time mark and sets the nonce to zero as its initial value.

Is valid

A property that recalculates the checksum and checks that the stored one is equal to the calculated one and has the right number of leading zeros.

Calculate checksum

The most complicated method in our code. This method packs the time mark, data, and checksum of the previous block into one string. Then we add a nonce string; in our case, it will be a run of 'Z' characters. Then it calculates the checksum of the resulting string.

Now we have a simple yet fully functional block. I will move on to creating a chain of blocks. For now, it will be a simple chain without the ability to store and load data, but it will convey the main idea.

import json

class Chain:

    def __init__(self):
        self._chain = [
            self._get_genesis_block(),
        ]

    def is_valid(self):
        prev_block = self._chain[0]
        for block in self._chain[1:]:
            assert block._prev_block.checksum == prev_block.checksum
            assert block.is_valid  # is_valid is a property, not a method
            prev_block = block

    def add_block(self, data):
        block = Block(self._chain[-1], data)
        block = self._find_nonce(block)
        self._chain.append(block)

        return block

    def printchain(self):
        # Block objects aren't JSON-serializable directly, so serialize their attributes
        print(json.dumps(self._chain, indent=4, default=lambda obj: obj.__dict__))

    @staticmethod
    def _get_genesis_block():
        genesis_block = Block(None, None)
        genesis_block.checksum = '00000453880b6f9179c0661bdf8ea06135f1575aa372e0e70a19b04de0d4cbc7'

        return genesis_block

    @staticmethod
    def _find_nonce(block):
        beginning = '0' * Block.CHECKSUM_LEAD_ZEROS
        while True:
            checksum = block.calculate_checksum()
            if checksum[:Block.CHECKSUM_LEAD_ZEROS] == beginning:
                break
            block.nonce += 1

        return block

Let’s take a look at methods in our chain class:

Constructor

I just created a chain with only one block: a genesis block. The genesis block is the first block of the chain and has only a checksum. This block is required for adding the first real block to the chain, because each real block requires the checksum of the last block in the chain.

Adding a new block

It has only one parameter: the data for a new block. This method creates a new block with the given data and runs the method that finds a correct nonce value. Only then will it append the new block to the chain.

Find the nonce

This method aims to find the right nonce for a block. It has an infinite loop where I increase the nonce and calculate a new checksum. Then it compares the checksum with the rules — for now, it is only the number of zeros.

Validate the chain

This method tells us if the chain is valid. It goes through all blocks in the chain and checks if each block is valid.
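To see the pieces working together, here's a quick usage sketch (assuming the Block and Chain classes above are in the same module; mining with 5 leading zeros may take a few seconds per block):

chain = Chain()

block = chain.add_block("Alice pays Bob 5 coins")  # finds a nonce, then appends
print(block.nonce, block.checksum)                 # checksum starts with '00000'

chain.add_block("Bob pays Carol 2 coins")
chain.is_valid()  # passes silently; raises AssertionError if a block was tampered with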

Bottom Line

In this article, I attempted to prove that building a simple yet working blockchain is not as difficult as it may seem. My general advice is to take the lesson you learned here and start playing with blockchain by experimenting with blocks and data. All great products, including blockchain, were experiments once.

If you are from the fintech industry, I suggest you study more about the products that are using blockchain. Some things about them are certain; they are more secure, more attractive for investments, and, if they succeed in the global market, will be called game changers. The first step of adapting the new technology has been taken. The next step is to spread the knowledge and educate people about the features of blockchain.

Stay tuned for the next part about blockchain and fintech, most likely with more complex pieces of code and suggestions on its practical application in fintech.

Spyder IDE: Spyder 3.3.0 and 3.3.1 released!


We're pleased to release the next significant update in the stable Spyder 3 line, 3.3.0, along with its follow-on bugfix point release, 3.3.1, which is now live on PyPI and conda. As always, you can update with conda update spyder in the Anaconda Prompt/Terminal/command line (on Windows/macOS/Linux, respectively) if on Anaconda (recommended), or pip install --upgrade spyder otherwise. If you run into any trouble, please carefully read our new installation documentation and consult our Troubleshooting Guide, which contains straightforward solutions to the vast majority of install-related issues users have reported.

As a new minor version (3.3), it makes several substantial changes to Spyder's underpinnings that deserve some explanation, particularly the newly modular and portable console system that's now separated into its own spyder-kernels package, opening up several new options for users running Spyder in different environments. There's also a brand-new error reporting process, new options in the IPython console, usability and performance improvements for the Variable Explorer, multiple new and changed dependency requirements and more, so there's plenty to go over. Finally, we'd like to briefly share a few final notes on this release and the latest on our plans going forward.

Modular, flexible Console architecture

The biggest single change with version 3.3.0/3.3.1 is a major overhaul of how IPython Consoles are started and managed in Spyder. More precisely, we've moved all the kernel-related code from the Spyder core into a new modular package, spyder-kernels, available on the same distribution channels as Spyder itself (and installed automatically when updating to >=3.3.0). While the most dramatic differences are under the hood, there's plenty for everyone to like (and a few things to be aware of).

Most importantly, for our everyday users, this makes Spyder much more flexible and powerful when working with multiple Python environments. With the changes, Spyder itself does not need to be present in every environment you'd like to launch a kernel in; you can install the full IDE in whatever manner you prefer, and then set it to run code and consoles in any Anaconda environment, virtualenv/venv, or even a totally separate Python installation, so long as it has the spyder-kernels package available. Just set the path under Tools -> Preferences -> Python interpreter -> Use the following Python interpreter to the desired Python executable, and any new Console you open will start in the selected environment. Check out our new wiki page on using environments with Spyder for more details and tips on the subject, and keep an eye out for the further improvements coming in Spyder 4, which will greatly simplify the process and include full GUI-based project, package and environment management functionality built right in.

Python interpreter pane of the Spyder preferences dialog

Furthermore, the new package allows you to independently launch a kernel from anywhere (on your local computer, or a remote machine, server or even supercomputing cluster), connect to it with Spyder, and use it just like a "natively" started one. After installing spyder-kernels on the host environment, you can start one with python -m spyder_kernels.console, and then enter the kernel's 4-digit ID (and SSH connection details, if a remote machine) in Spyder's Connect to an existing kernel dialog (under the IPython Console pane's context- or "gear"-menu). For more information on the process, see the Connecting to a Console section in our new documentation.

A remote kernel running in a system console alongside Spyder's connect to kernel dialog

Best of all, no matter how or where a kernel is started, every console now supports the full suite of Spyder's features, including completion, the Variable Explorer, interactive Help and more, unlike before. You can even mix and match internal, external and remote kernels in different environments, all in the same Spyder session, by either changing the Python interpreter preferences setting between starting a console, or starting and connecting to multiple consoles externally, or both! Finally, for those of us (and those of you!) who help develop Spyder, the changes also make it easier to maintain and improve the code, and open the door to one of the biggest features coming in Spyder 4: a new, full-featured debugging kernel that many of you have been asking for.

The one key thing to remember: make sure you install the appropriate version of spyder-kernels for your version of Spyder. For most users, that will be spyder-kernels 0.x (currently 0.2.6) to match Spyder 3 on our stable 3.x branch; if testing a Spyder 4 beta or Github clone of the master branch, you'll want the latest 1.x version of spyder-kernels (currently 1.1.0). To install the correct build, you can use the following conda command,

conda install spyder-kernels=<0 or 1>.*

or with pip,

pip install spyder-kernels==<0 or 1>.*

replacing <0 or 1> with the major version number (0 or 1) to match your Spyder version. Further details specific to installing a development build can be found in our Contributing Guide or our install documentation.

New IPython Console completion and plotting features

Advanced tab of the IPython console pane of Spyder's preferences, with the new Jedi completion section highlighted

Spyder's IPython Consoles can now use an advanced jedi-based completion engine that, like the Editor, analyzes your code without actually having to run it first. This allows for advanced completion functionality on objects not yet assigned to a variable, similar to the existing "greedy" completion option, but without the need for dynamic evaluation. It can be slow when working with very large Pandas DataFrames, so it is disabled by default, but you can activate it under Tools -> Preferences -> IPython console -> Advanced Settings -> Jedi completion. The descriptive text for the "greedy" completion option (also off by default) was also clarified, particularly to explain an IPython bug (not present in the jedi completer) with the feature and a consequent workaround.

Graphics tab of the IPython console pane of Spyder's preferences, with the new tight layout option

We've also added a new plotting setting, Use a tight layout for inline plots, for the Inline Matplotlib graphics backend. The default behavior (as in previous Spyder versions) sets bbox_inches to "tight" in Matplotlib calls when drawing to the inline backend. However, if you prefer your own bbox_inches argument be respected even when plots are rendered in the Console, you can now do so by unchecking this option under Tools -> Preferences -> IPython console -> Graphics -> Inline backend.

Comparison of inline plots in Spyder's IPython Console with and without the tight layout option

Better Variable Explorer usability and performance

We've made several changes and optimizations to greatly improve performance and efficiency in the Variable Explorer, to make it much faster and use less memory when opening and editing large objects. In particular, we've fixed several major memory leaks when saving edited objects and closing the Variable Explorer dialogs through better length validation and garbage collection, and now skip the whole saving process entirely if the object was not modified (or cannot be modified). We've also changed the names and functions of the Cancel and Ok buttons in Variable Explorer dialogs to be easier to understand and use. They now feature a Close button which exits the dialog without saving any edits to the object's contents, and a Save and Close button—automatically enabled once modifications are made—that commits the changes back to the kernel.

A Variable Explorer DataFrame editor dialog, showing the new Save and Close button

Streamlined error reporting experience

While we hope you never need to use it, Spyder 3.3.0 includes a brand-new error handling backend that can submit bug reports directly through the Github API. Based off Colin Duquesnoy's excellent QCrash framework, this is a major improvement in speed, functionality, reliability and user convenience over the old approach (essentially just opening a link in a web browser). Just as before, we won't send anything without your explicit consent, you need a Github account (or create one for free), and you can view and edit the report on Github at any time.

The new authentication dialogs for submitting a Github report, with a username/password and a token option

You will need to enter your Github credentials the first time you submit a report. For this, you can create an app token which only grants the very limited permissions needed to create a public issue report, can be easily revoked and re-created, and works with two-factor authentication (which you should be using); however, if you have not yet enabled 2FA, it also offers the option to enter your Github username and password. Either way, Spyder can securely remember your login using the keyring package, so you only have to do this once on any given machine (if you select the "remember" option).

The new error reporting interface, with a title field, more descriptive text, and a polished UI

The dialog itself has also been made more functional and user-friendly, designed to help encourage high-quality, useful reports, and with more accessible, descriptive text. The reports themselves also contain more useful data about the problem, and there is now a --safe-mode command-line option for Spyder to start in a clean, temporary config directory, so you can test to see if the problem reoccurs without the hassle of a spyder --reset, and play around with other settings without impacting your main configuration. Finally, we've fixed over 40 bugs in this release and further improved our documentation and troubleshooting material, so hopefully you'll see this less often.

Cleaner under the hood and more

Alongside the aforementioned internal changes, we've also made a number of other under-the-hood changes to clean out old cruft and improve maintainability, readability and performance of our codebase. In particular, we've officially dropped support for Python 3.3, PyQt4, and PyQt5 < 5.5, all versions which have been end-of-life for years, and (aside from PyQt4) have minimal or no remaining Spyder users. Furthermore, dropping PyQt4 in particular allows us to avoid or resolve a number of unfixable bugs specific to that version that have been causing problems for users, and opens the door to easier development in the future. Finally, we moved our legacy documentation (and its many associated images) from the main Spyder codebase to its own repo, executed a major overhaul to greatly modernize and expand the text, images, style, and presentation, and now deploy them onto their own subdomain of our site, all of which we will discuss in a separate post coming soon.

Even more fixes and refinements with Spyder 3.3.1

As a quick follow-on to the 3.3.0 release, Spyder 3.3.1 fixed a handful of bugs and minor issues with the new functionality and cleaned up several other existing ones, as well as making a number of lower-level maintenance and development-oriented changes, over two dozen in all. Furthermore, several user-visible enhancements made it into the release, primarily aimed at improving usability. To make it easier for users to manage multiple environments, the selection UI under Preferences > Python interpreter > Use the following Python interpreter remembers the executables you've previously selected and allows quick switching between them.

Python interpreter pane of Spyder's preferences, showing the new environment selection UI

In the Console, mundane ipdb commands are automatically filtered from the history, and the Editor now supports syntax highlighting for the new numeric literal syntax introduced in Python 3.6. Spyder's tutorial has been re-written for modern Spyder, as well as to be clearer and more understandable, and overhauled for better and more consistent formatting and visuals with the rest of our documentation. Finally, our update checker now consults the Anaconda defaults channel rather than PyPI to determine if an update is available, so it doesn't bug the majority of our users on Anaconda until they can actually acquire the package.

What to know and what's next

If you have any questions, problems or feedback, we'd love to hear from you (just make sure you read our documentation, Troubleshooting Guide and the other previously-mentioned resources first)! For general questions or install issues that aren't addressed by the above, our Google Group and Gitter live chat are a good place to ask, while our Github is the place to report bugs, request features, or help develop Spyder itself (though make sure to search our issues list to ensure it hasn't already been submitted). Finally, you can follow our Facebook and Twitter for the latest Spyder news, releases, previews and tips, and help support Spyder development on OpenCollective.

There's plenty to look forward to in the coming days, with the official release of our all-new documentation (that's already live now), Spyder 4 beta 1 having just been released on PyPI, conda-forge and our own spyder-ide channel (with a blog post coming soon), an upcoming article on our official Spyder 4 feature roadmap and more, so stay tuned! In the meantime, happy Spydering and enjoy the new 3.3.1!

Ned Batchelder: SQLite data storage for coverage.py


I’m starting to make some progress on Who Tests What. The first task is to change how coverage.py records the data it collects during execution. Currently, all of the data is held in memory, and then written to a JSON file at the end of the process.

But Who Tests What is going to increase the amount of data. If your test suite has N tests, you will have roughly N times as much data to store. Keeping it all in memory will become unwieldy. Also, since the data is more complicated, you’ll want a richer way to access the data.

To solve both these problems, I’m switching over to using SQLite to store the data. This will give us a way to write the data as it is collected, rather than buffering it all to write at the end. BTW, there’s a third side-benefit to this: we would be able to measure processes without having to control their ending.

When running with --parallel, coverage adds the process id and a random number to the name of the data file, so that many processes can be measured independently. With JSON storage, we didn’t need to decide on this filename until the end of the process. With SQLite, we need it at the beginning. This has required a surprising amount of refactoring. (You can follow the carnage on the data-sqlite branch.)

There’s one problem I don’t know how to solve: a process can start coverage measurement, then fork, and continue measurement in both of the child processes, as described in issue 56. With JSON storage, the in-memory data is naturally forked when the processes fork, and then each copy proceeds on its way. When each process ends, it writes its data to a file that includes the (new) process id, and all the data is recorded.

How can I support that use case with SQLite? The file name will be chosen before the fork, and data will be dribbled into the file as it happens. After the fork, both child processes will be trying to write to the same database file, which will not work (SQLite is not good at concurrent access).

Possible solutions:

  1. Even with SQLite, buffer all the data in memory. This imposes a memory penalty on everyone just for the rare case of measuring forking processes, and loses the extra benefit of measuring non-ending processes.
  2. Make buffer-it-all be an option. This adds to the complexity of the code, and will complicate testing. I don’t want to run every test twice, with buffering and not. Does pytest offer tools for conveniently doing this only for a subset of tests?
  3. Keep JSON storage as an option. This doesn’t have an advantage over #2, and has all the complications.
  4. Somehow detect that two processes are now writing to the same SQLite file, and separate them then? (See the sketch after this list.)
  5. Use a new process just to own the SQLite database, with coverage talking to it over IPC. That sounds complicated.
  6. Monkeypatch os.fork so we can deal with the split? Yuck.
  7. Some other thing I haven’t thought of?
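As a rough illustration of option 4 (names and schema invented for the example): record the pid when the database file is chosen, and re-check it on every write; if it has changed, we're in a forked child and should switch to a fresh file.

import os
import sqlite3

class DataFile:
    """Sketch: notice a fork by re-checking the pid on each write."""

    def __init__(self, basename):
        self.basename = basename
        self.pid = None
        self.con = None

    def _connect(self):
        # Name the file by pid, similar to what --parallel does.
        self.pid = os.getpid()
        self.con = sqlite3.connect("{}.{}".format(self.basename, self.pid))
        self.con.execute("CREATE TABLE IF NOT EXISTS line (file TEXT, lineno INT)")

    def record(self, filename, lineno):
        if self.con is None or os.getpid() != self.pid:
            # First write, or we are in a forked child: use a new database file.
            self._connect()
        with self.con:
            self.con.execute("INSERT INTO line VALUES (?, ?)", (filename, lineno))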

Expect to see an alpha of coverage.py in the next few weeks with SQLite data storage, and please test it. I’m sure there are other use cases that might experience some turbulence...


Python Engineering at Microsoft: Python in Visual Studio 2017 version 15.8


We have released the 15.8 update to Visual Studio 2017. You will see a notification in Visual Studio within the next few days, or you can download the new installer from visualstudio.com.

In this post, we're going to look at some of the new features we have added for Python developers: IntelliSense with typeshed definitions, faster debugging, and support for Python 3.7. For a list of all changes in this release, check out the Visual Studio release notes.

Faster debugging, on by default

We first released a preview of our ptvsd 4.0 debug engine in the 15.7 release of Visual Studio; in the 15.8 release it is now the default, offering faster and more reliable debugging for all users.

If you encounter issues with the new debug engine, you can revert back to the previous debug engine by selecting Use legacy debugger from Tools > Options > Python > Debugging.

Richer IntelliSense

We are continuing to make improvements to IntelliSense for Python in Visual Studio 2017. In this release you will notice completions that are faster, more reliable, and have better understanding of the surrounding code, and tooltips with more focused and useful information. Go To Definition and Find All References are better at taking you to the module a value was imported from, and Python packages that include type annotations will provide richer completions. These changes were made as part of our ongoing effort to make our Python analysis from Visual Studio available as an independent Microsoft Python Language Server.

As an example, below shows improved tooltips with richer information when hovering over the os module in 15.8 compared to 15.7:

We have also added initial support for using typeshed definitions to provide more completions for places where our static analysis is unable to infer complete information. We are still working through some known issues with this though, so results may be limited and expect to see better support for typeshed in future releases.

Support for Python 3.7

We have updated Visual Studio so that all of our features work with Python 3.7, which was recently released. Most functionality of Visual Studio works with Python 3.7 in the 15.7 release, and in the 15.8 release we made specific fixes so that debug attach, profiling, and mixed-mode (cross-language) debugging features work with Python 3.7.

Give Feedback

Be sure to download the latest version of Visual Studio and try out the above improvements. If you encounter any issues, please use the Report a Problem tool to let us know (this can be found under Help, Send Feedback) or continue to use our GitHub page. Follow our Python blog to make sure you hear about our updates first, and thank you for using Visual Studio!

Mike Driscoll: Book Contest: ReportLab: PDF Processing with Python


I recently released a new book entitled ReportLab: PDF Processing with Python. In celebration of a successful launch, I have decided to do a little contest.

Rules

  • Post a comment telling me why you would want a copy
  • The most clever or heartfelt commenter will be chosen by me

The contest will run starting now until Friday, August 17th @ 11:59 p.m. CST.

Runners up will receive a free copy of the eBook. The grand prize will be a signed paperback copy + the eBook version!

Vasudev Ram: pyperclip, a cool Python clipboard module

By Vasudev Ram


I recently came across this neat Python library, pyperclip, while browsing the net. It provides programmatic copy-and-paste functionality. It's by Al Sweigart.

pyperclip is very easy to use.

I whipped up a couple of simple programs to try it out.

Here's the first one, pyperclip_json_test.py:
from __future__ import print_function
import pyperclip as ppc
import json

d1 = {}
keys = ("TS", "TB")
vals = [
    ["Tom Sawyer", "USA", "North America"],
    ["Tom Brown", "England", "Europe"],
]
for k, v in zip(keys, vals):
    d1[k] = v
print("d1:")
for k in keys:
    print("{}: {}".format(k, d1[k]))

ppc.copy(json.dumps(d1))
print("Data of dict d1 copied as JSON to clipboard.")
d2 = json.loads(ppc.paste())
print("Data from clipboard copied as Python object to dict d2.")
print("d1 == d2:", d1 == d2)
The program creates a dict, d1, with some values, converts it to JSON and copies that JSON data to the clipboard using pyperclip.
Then it pastes the clipboard data into a Python string and converts that to a Python dict, d2.

Here's a run of the program:
$ python pyperclip_json_test.py
d1:
TS: ['Tom Sawyer', 'USA', 'North America']
TB: ['Tom Brown', 'England', 'Europe']
Data of dict d1 copied as JSON to clipboard.
Data from clipboard copied as Python object to dict d2.
d1 == d2: True
Comparing d1 and d2 shows they are equal, which means the copy from Python program to clipboard and paste back to Python program worked okay.

Here's the next program, pyperclip_text_stats.py:
from __future__ import print_function
import pyperclip as ppc

text = ppc.paste()
words = text.split()
print("Text copied from clipboard:")
print(text)
print("Stats for text:")
print("Words:", len(words), "Lines:", text.count("\n"))

"""
the quick brown fox
jumped over the lazy dog
and then it flew over the rising moon
"""
The program pastes the current clipboard content into a string, then finds and prints the number of words and lines in that string. No copy in this case, just a paste, so your clipboard should already have some text in it.

Here are two runs of the program. Notice the three lines of text in a triple-quoted comment at the end of the program above. That's my test data. For the first run below, I selected the first two lines of that comment in my editor (gvim on Windows) and copied them to the clipboard with Ctrl-C. Then I ran the program. For the second run, I copied all the three lines and did Ctrl-C again. You can see from the results that it worked; it counted the number of lines and words that it pasted from the clipboard text, each time.
$ python pyperclip_text_stats.py
Text copied from clipboard:
the quick brown fox
jumped over the lazy dog

Stats for text:
Words: 9 Lines: 2

$ python pyperclip_text_stats.py
Text copied from clipboard:
the quick brown fox
jumped over the lazy dog
and then it flew over the rising moon

Stats for text:
Words: 17 Lines: 3
So we can see that pyperclip, as used in this second program, can be useful to do a quick word and line count of any text you are working on, such as a blog post or article. You just need that text to be in the clipboard, which can be arranged by selecting your text in whatever app and pressing Ctrl-C, then running the above program.

Of course, this technique is limited by the capacity of the clipboard, so it may not work for large text files. That limit could be found by trial and error, e.g. by copying successively larger chunks of text to the clipboard, pasting them back somewhere else, comparing the two, and checking whether or not the whole text was preserved across the copy-paste.

There could be a workaround, and I thought of a partial solution. It would involve accumulating the stats for each paste into variables, e.g. total_words += words and total_lines += lines. The user would need to keep copying successive chunks of text to the clipboard. How to sync the two, user and this modified program? I need to think it through, and it might be a bit clunky. Anyway, this was just a proof of concept.
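Here's a rough sketch of that accumulator idea (assuming Python 3 for input(); the 'q'-to-quit convention is mine):

from __future__ import print_function
import pyperclip as ppc

total_words = total_lines = 0
while True:
    # The user copies the next chunk of text in some app, then presses Enter here.
    reply = input("Copy the next chunk, then press Enter (or type q to quit): ")
    if reply.strip().lower() == "q":
        break
    text = ppc.paste()
    total_words += len(text.split())
    total_lines += text.count("\n")
    print("Running totals -- Words:", total_words, "Lines:", total_lines)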

As the pyperclip docs say, it only supports plain text from the clipboard, not rich text or other kinds of data. But even with that limitation, it is a useful library.

The image at the top of the post is a partial screenshot of my vim editor session, showing the menu icons for cut, copy and paste.

You can read about the history and evolution of cut, copy and paste here:

Cut, copy, and paste

New to vi/vim and want to learn its basics fast? Check out my vi quickstart tutorial. I first wrote it for a couple of Windows sysadmin friends of mine, who needed to learn vi to administer Unix systems they were given charge of. They said the tutorial helped them to quickly grasp the basics of text editing with vi.

Of course, vi/vim is present, or just a download away, on many other operating systems by now, including Windows, Linux, MacOS and many others. In fact, it is pretty ubiquitous, which is why vi is a good skill to have - you can edit text files on almost any machine with it.

- Enjoy.


- Vasudev Ram - Online Python training and consulting

Get updates (via Gumroad) on my forthcoming apps and content.

Jump to posts: Python * DLang * xtopdf

Subscribe to my main blog (jugad2) by email

My ActiveState Code recipes

Follow me on: LinkedIn * Twitter


Mike Driscoll: Face Detection Using Python and OpenCV


Machine Learning, artificial intelligence and face recognition are big topics right now. So I thought it would be fun to see how easy it is to use Python to detect faces in photos. This article will focus on just detecting faces, not face recognition, which is actually assigning a name to a face. The most popular and probably the simplest way to detect faces using Python is by using the OpenCV package. OpenCV is a computer vision library that's written in C++ and has Python bindings. It can be kind of complicated to install depending on which OS you are using, but for the most part you can just use pip:

pip install opencv-python

I have had issues with OpenCV on older versions of Linux where I just can’t get the newest version to install correctly. But this works fine on Windows and seems to work okay for the latest versions of Linux right now. For this article, I am using the 3.4.2 version of OpenCV’s Python bindings.


Finding Faces

There are basically two primary ways to find faces using OpenCV:

  • Haar Classifier
  • LBP Cascade Classifier

Most tutorials use Haar because it is more accurate, but it is also much slower than LBP. I am going to stick with Haar for this tutorial. The OpenCV package actually has all the data you need to use Haar effectively: basically, you just need an XML file with the right face data in it. You could create your own if you knew what you were doing, or you can just use what comes with OpenCV. I am not a data scientist, so I will be using the built-in classifier. In this case, you can find it in the OpenCV library that you installed: just go to the /Lib/site-packages/cv2/data folder in your Python installation and look for haarcascade_frontalface_alt.xml. I copied that file out and put it in the same folder as my face detection code.
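As an aside, recent opencv-python wheels also expose that data folder programmatically through cv2.data.haarcascades, so you can skip copying the file (this assumes your installed version provides the cv2.data module):

import os
import cv2

# Build the full path to the bundled frontal face cascade
cascade_path = os.path.join(cv2.data.haarcascades,
                            'haarcascade_frontalface_alt.xml')
haar_classifier = cv2.CascadeClassifier(cascade_path)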

Haar works by looking at a series of positive and negative images. Basically someone went and tagged the features in a bunch of photos as either relevant or not and then ran it through a machine learning algorithm or a neural network. Haar looks at edge, line and four-rectangle features. There’s a pretty good explanation over on the OpenCV site. Once you have the data, you don’t need to do any further training unless you need to refine your detection algorithm.

Now that we have the preliminaries out of the way, let’s write some code:
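Here's a minimal sketch of the script (it assumes the cascade XML file sits next to it and a test image named headshot.jpg; the full version with eye detection appears later in the article):

import cv2
import os

def find_faces(image_path):
    image = cv2.imread(image_path)
    # Make a copy to prevent us from modifying the original
    color_img = image.copy()
    filename = os.path.basename(image_path)
    # OpenCV works best with gray images
    gray_img = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    # Use OpenCV's built-in Haar classifier
    haar_classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
    faces = haar_classifier.detectMultiScale(gray_img, scaleFactor=1.1,
                                             minNeighbors=5)
    print('Number of faces found: {faces}'.format(faces=len(faces)))
    for (x, y, width, height) in faces:
        cv2.rectangle(color_img, (x, y), (x + width, y + height),
                      (0, 255, 0), 2)
    # Show the result
    cv2.imshow(filename, color_img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    find_faces('headshot.jpg')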

The first thing we do here is handle our imports. The OpenCV bindings are called cv2 in Python. Then we create a function that accepts a path to an image file. We use OpenCV’s imread method to read the image file and then we create a copy of it to prevent us from accidentally modifying the original image. Next we convert the image to grayscale. You will find that computer vision almost always works better in gray than it does in color, or at least that is the case with OpenCV.

The next step is to load up the Haar classifier using OpenCV’s XML file. Now we can attempt to find faces in our image using the classifier object’s detectMultiScale method. I print out the number of faces that we found, if any. The classifier object actually returns an iterator of tuples. Each tuple contains the x/y coordinates of the face it found as well as width and height of the face. We use this information to draw a rectangle around the face that was found using OpenCV’s rectangle method. Finally we show the result:

That worked pretty well with a photo of myself looking directly at the camera. Just for fun, let’s try running this royalty free image I found through our code:

When I ran this image in the code, I ended up with the following:

As you can see, OpenCV only found two of the four faces, so that particular cascades file isn’t good enough for finding all the faces in the photo.


Finding Eyes in Photos

OpenCV also has a Haar Cascade eye XML file for finding the eyes in photos. If you do a lot of photography, you probably know that when you do portraiture, you want to try to focus on the eyes. In fact, some cameras even have an eye autofocus capability. For example, I know Sony has been bragging about their eye focus function for a couple of years now, and it actually works pretty well in my tests of one of their cameras. It is likely using something like Haar cascades itself to find the eye in real time.

Anyway, we need to modify our code a bit to make an eye finder script:

import cv2
import os

def find_faces(image_path):
    image = cv2.imread(image_path)
    # Make a copy to prevent us from modifying the original
    color_img = image.copy()
    filename = os.path.basename(image_path)
    # OpenCV works best with gray images
    gray_img = cv2.cvtColor(color_img, cv2.COLOR_BGR2GRAY)
    # Use OpenCV's built-in Haar classifiers
    haar_classifier = cv2.CascadeClassifier('haarcascade_frontalface_alt.xml')
    eye_cascade = cv2.CascadeClassifier('haarcascade_eye.xml')
    faces = haar_classifier.detectMultiScale(gray_img, scaleFactor=1.1,
                                             minNeighbors=5)
    print('Number of faces found: {faces}'.format(faces=len(faces)))
    for (x, y, width, height) in faces:
        cv2.rectangle(color_img, (x, y), (x + width, y + height),
                      (0, 255, 0), 2)
        roi_gray = gray_img[y:y + height, x:x + width]
        roi_color = color_img[y:y + height, x:x + width]
        eyes = eye_cascade.detectMultiScale(roi_gray)
        for (ex, ey, ew, eh) in eyes:
            cv2.rectangle(roi_color, (ex, ey), (ex + ew, ey + eh),
                          (0, 255, 0), 2)
    # Show the faces / eyes found
    cv2.imshow(filename, color_img)
    cv2.waitKey(0)
    cv2.destroyAllWindows()

if __name__ == '__main__':
    find_faces('headshot.jpg')

Here we add a second cascade classifier object. This time around, we use OpenCV’s built-in haarcascade_eye.xml file. The other change is in our loop where we loop over the faces found. Here we also attempt to find the eyes and loop over them while drawing rectangles around them. I tried running my original headshot image through this new example and got the following:

This did a pretty good job, although it didn’t draw the rectangle that well around the eye on the right.

Wrapping Up

OpenCV has lots of power to get you started doing computer vision with Python. You don’t need to write very many lines of code to create something useful. Of course, you may need to do a lot more work than is shown in this tutorial, training your data and refining your dataset, to make this kind of code work properly. My understanding is that the training portion is the really time-consuming part. Anyway, I highly recommend checking out OpenCV and giving it a try. It’s a really neat library with decent documentation.


Related Reading

PyBites: How Promotions work in Large Corporations


We are stoked to have Cristian Medina (tryexceptpass.org) deliver our first soft skills article. He will go into depth on the topic of promotions and how to better position yourself as a developer. He will discuss performance reviews, the role your manager can play, networking and much more. Enjoy and keep challenging yourself! Enter Cris ...

Introduction

I'm going on year 16 of my professional engineering career, most of it spent in large and mid-size corporations. In this time, I was exposed to a number of interesting situations and processes related to performance reviews and promotions. While I was at PyCon 2018, the topic came up in some hallway conversations. Specifically, what does one need to do to get promoted?

Anthony Shaw actually covered the basics in his Can we talk about tech salaries article about negotiating better pay. But the hallway discussion left me thinking that there are a few more things I can add to the list, perhaps some that might help explain the process better for folks who have never been through it, especially how it works in larger corporations.

When the guys from Pybites suggested collaborating on a new post on soft-skills, I thought this would be a good topic to cover. So here we are.

The Not-So-Obvious Obvious

First things first: be good at what you do (i.e. your primary job); otherwise the conversation is over before it even started. Where it gets tricky, though, is how you measure "good". Each company has multiple ways of doing that, usually differing between business units.

Second thing: don't just stop at your job responsibilities; make it your mission to learn how to improve your environment. I know this is broad, but that's the point. Whether it means learning a new programming language, a new tool, or new methodologies, it's on you to stay up to date with your chosen profession and how you can apply recent developments to your environment. Not only is it good for you, but keeping your department and organization up to date can even make them resilient to future complications.

Some businesses have a core set of values against which they'll evaluate what you're delivering. These tend to be "esoteric" things like "innovation that matters", usually very abstract and hard to measure (this is likely on purpose). To use this phrase as an example: who is the innovation supposed to matter to? Your clients? Your coworkers? The "business"?

Other organizations will rate you on specific measurable criteria. This brings the problem of understanding which criteria matter to your job role. For example, while it might mean a lot to you personally that you've made 5000 commits (more than anyone else) to the most important codebase the company owns, maybe the company wants to optimize dollars spent building code. In this case you just cost the company more money than everyone else, and therefore had the worst job performance.

Other companies will instead have a list of specific criteria for each of the "steps" in your career ladder. Keep in mind though that this ladder is for the career that your job category falls into, not necessarily the one that YOU want to climb. The criteria could also be abstract concepts, like step 1 would be "implements the vision", while step 2 is "interprets the vision", and step 3 is "has vision". And yes, there are all kinds of jokes you can make about what you need to do to have visions, but this is a real thing I've seen in several places.

It's very important that you first understand how to provide value inside your company. Which is not to say that it works the same inside your organization, or even your department. Usually there's other "flavoring" added to each of those items depending on where you work, who you work for, who they report to, and even who their managers report to.

Make sure you find mentors or other folks in the organization who have gone through several steps in the ladder WHILE WORKING AT YOUR COMPANY. They can help you understand what matters. Your manager can help point you to these folks, and if not, a peer manager should have some input as well.

Performance Reviews

Ok, now that you've determined how value is measured, it's important to make your performance reviews, especially the written ones, all about how you deliver on that value. Sometimes you won't be able to put things in the same terms, but that's where you talk to your manager and ask for advice. Don't forget to mention any research or studies you completed, even if their conclusions were not what you expected.

Performance reviews are not a place to hold back, this is where you get to be a rockstar. Being humble will not help you here. You don't need to write an essay, bullet lists are usually better. You could categorize the bullets by your organization's criteria, if it helps.

Your Manager

Managing people is NOT easy. In general, management chains tend to get a bad rep for bad decisions, but people hardly ever talk about the good ones, though I suppose this is how it should be. I haven't been a manager myself, but I have been in team-lead or "coaching" roles in different organizations, and even outside of business environments. You learn real quick that people aren't easy. Keeping track of who has what problem, when, why, who can fix it, where they can fix it and with what kind of help is NOT a simple thing.

YOUR job is not only to be good at what you do, but to make your manager's job easy. If your manager gets a ping from someone external to your department about something dumb you did, that's yet one more thing they have to deal with in their day. If they show up at some higher-up meeting and are asked something that you did not prep them for, they might look stupid without a good answer. That's one more thing they have to worry about next time they present your project.

It bears repeating: YOUR job is not only to be good at what you do, but to make your manager's job easy. If you can't point to things you do to make your manager's job easier, then moving up the chain gets a little harder.

Money

Promotions DO NOT imply more money. Sometimes they do, sometimes they don't. When you're looking for a promotion, make sure to understand what you're getting yourself into. Sometimes there's a promise of money, but it winds up being a 1% raise for 50% more responsibility. If you don't ask, you'll find out the hard way. Don't make your life more complicated than it needs to be.

Back to an earlier point about the steps in the career ladder: it's important to have an understanding of salary ranges for each of those steps. Usually there's a very strict range for each step, and where you are in that range is very important. If you're on the lower end, then your higher priority is to keep doing what you're doing and look for a raise. If you're on the higher end, there's no point in having a conversation about a raise, because you need to be promoted first before you can get one. Your manager can usually tell you where you stand; some companies even require them to do so.

Networking

On top of all this, there's "tribal knowledge". The grapevine is a real thing, and it always has information about what certain management chains may or may not want, who might be leaving their position soon, who might be wanting to come in, who might be on the outs with their manager. This is NOT about gossip or hearsay; instead, it's about taking the pulse of your organization. You need to understand it so that you can gain insight into the opportunities that may or may not interest you.

Sometimes it's not until you have these conversations with your coworkers that you realize that things aren't heading in the direction you need them to go. This can help you determine whether it's better to spend time vying for a promotion, or to start looking for another job.

Networking also helps you find other jobs within the same company that may have the career ladders you'd prefer to climb. Or different environments where you think you can better excel. They might even be in departments with peer managers, which makes life easier because you already understand the organization.

The Meeting

Large corporations don't tend to go around thinking: "Oh! This guy did a great job! Promote him!" It doesn't matter how much they want you to think they do, that's not how it works. It's all a numbers game.

For example, the business may have a percentage of the budget set aside for promotions, which they usually equate to a count of how many people they can promote for the year. Then that number gets distributed amongst all the business units, which then divide it by the different steps in the ladders (i.e. we can do 50 step 0-to-1 promotions, 25 1-to-2, 10 2-to-3, etc.) They then trickle it down to the organizations and departments, normally stopping at the 2nd-line manager level. The distribution method varies greatly between companies, some base it on how the business units did against their goals for the year, some base it on % revenue generated by the units, etc.

At this point, there's usually a meeting where your 2nd-line gathers his troops (your manager and his peers), to decide who gets what promotion to which step. Now comes the tricky part. Each manager brings a list of his candidates, and they all discuss each candidate and their accomplishments.

Some folks don't really understand the significance of this point: every peer-manager in your organization will likely have a say on whether you get a promotion or not. On top of that, as we discussed earlier, a promotion to each step of the ladder has its own set of rules, which also involves approvals. The higher the step in the ladder, the higher up the management chain you go for approvals. That's why it's sometimes easy to get the very first promotion, which only takes your direct manager and their manager to approve.

What does this mean to you? It's not enough to do a good job for YOUR manager; you should also do things to help your coworkers in other departments. If the peer-managers haven't even heard of you, how can they be OK with giving up one of their people's promotions to you? Back to making your manager's job easy: this is a key aspect of it.

What can you do to improve your chances?

If you help other people, make note of it in your performance reviews. Remember, it's not about bragging; it's about making your manager's job easy. When that peer manager doesn't know who you are, your manager could say: "That's the person who helped you with the XYZ task you were stuck on last month."

Let's say you were super helpful and your colleagues want to take you out for lunch. They want to thank you for this cool thing you made for them that greatly simplifies their lives. Tell them you'd love to go out for lunch with them, but they should save their money and instead of buying lunch, email your manager AND their manager thanking you for the work. On the flip side, you should do the same for them! When you think someone did a great job at something, email them and copy your managers. It helps everyone.

Finding opportunities to help

Helping other organizations or departments doesn't have to be complicated. As a programmer, this might be simpler than you think. Here's a few quick ideas:

  • Develop tests for fellow programmers. Having another set of eyes that don't know the codebase usually leads to interesting questions.

  • Help review someone else's code.

  • If other departments maintain APIs, try writing wrappers for those APIs. This usually provides good insight and will help them with testing.

  • Run some short training sessions on topics you find interesting or anything new that you've learned. Passing new knowledge onto other coworkers is a great way for you to easily retain the info.

  • Many times you'll engage with individuals or departments that have to run a lot of metrics. Sometimes, these folks are burdened by company "tools" and their limitations. Try lending a hand in configuring those tools so they make more sense, or help formulate "advanced queries" with actual SQL instead of the limited options given by GUIs.

  • Since you now know what your company finds important to measure, why not put some internal website together to help visualize it. You can graph defect data, support cases, performance data, test execution, etc.

  • Monitoring tools are also useful. If you run any kind of infrastructure that's expected to be up-and-running most of the time, it's not hard to make a few simple systems that can alert when they go down, or call REST APIs to check on statuses.

  • Keep an eye out for scriptable work. Generating reports is a great example, as well as tasks like onboarding recruits or cleaning up resources after other people.

  • There's always some kind of resource management or inventory system that could help a department track things better, or automate something.

  • Did you recently fail at trying to implement something with a given approach or technology? Pass on the knowledge in a tech talk.


When I was naive and early in my career, I definitely wish someone had sat me down and gone over these points. It would've saved me lots of frustration and heartache. I hope you find them useful.

Keep Calm and Code in Python!

Cris

Codementor: Taming Snakes inside a Container

In this post, let's talk about taming snakes inside a container. The article is a summary of lessons learned while dockerizing python microservices. In case you want to see a detailed...

Import Python: ImportPython - Issue 182

Worthy Read

This blog series from Sheroy Marker covers the principles of CD for microservices. Get a practical guide on designing CD workflows for microservices, testing strategies, trunk-based development, feature toggles and environment plans.
microservices, advert

Several modern programming languages have so-called "null-coalescing" or "null-aware" operators, including C#, Dart, Perl, Swift, and PHP (starting in version 7). These operators provide syntactic sugar for common patterns involving null references.
PEP

In this review, we’ll be taking a look at our favorite options and explain which ones to use.
static analysis

Recently, I was given a dataset that contained sensitive information about customers and that should not under any circumstance be made public. The dataset resided on one of our servers, which I deem a reasonably secure location. I wanted to copy the data to my local drive to work with it more comfortably, while not having to fear that the data would be any less safe. So I wrote a little script that changes the data while still preserving some key information. I will detail all the steps I took and highlight some handy tricks along the way.
pandas

I am creating a series of blog posts to help you develop, deploy and run (mostly) Python applications on AWS Lambda using the Serverless Framework.
aws lambda

Implementer’s Guide to Scalable and Robust Internet Telephony with Session Initiation Protocol in Client-Server and Peer-to-Peer modes in Python
SIP

Python extends its lead, and Assembly enters the Top Ten
ranking

We illustrate the application of two linear compression algorithms in python: Principal component analysis (PCA) and least-squares feature selection. Both can be used to compress a passed array, and they both work by stripping out redundant columns from the array. The two differ in that PCA operates in a particular rotated frame, while the feature selection solution operates directly on the original columns. As we illustrate below, PCA always gives a stronger compression. However, the feature selection solution is often comparably strong, and its output has the benefit of being relatively easy to interpret — a virtue that is important for many applications.
data science

In this tutorial you will learn how to build a “people counter” with OpenCV and Python. Using OpenCV, we’ll count the number of people who are heading “in” or “out” of a department store in real-time.
image processing

I have been experimenting with keyword extraction techniques against the NIPS Papers dataset, consisting of titles, abstracts and full text of all papers from the Neural Information Processing Systems (NIPS) conference from 1987-2017, and contributed by Ben Hamner. The collection has 7239 papers written by 9785 authors. The reason I preferred this dataset to others such as Reuters or Medline is because it is smaller, and I can be both programmer and domain expert, and because I might learn interesting things while combing through the text of the papers looking for patterns to exploit.
topic modeling

In this article, we will be going through building queries for Wikidata with Python and SPARQL by taking a look at where mayors in Europe are born.
datascience, sparql


Projects

Deep-Learning-World - 1373 Stars, 95 Fork
Organized Resources for Deep Learning Researchers and Developers

kefir - 288 Stars, 21 Fork
Kefir is a natural language processing kit for Turkic languages

zalo_landmark - 139 Stars, 19 Fork
Zalo landmark identification challenge, 103 classes, > 100k images (PyTorch)

SMBetray - 135 Stars, 15 Fork
SMB MiTM tool with a focus on attacking clients through file content swapping, lnk swapping, as well as compromising any data passed over the wire in cleartext.

img_term - 76 Stars, 5 Fork
Display image and video camera in your ANSI terminal!

PaperTTY - 72 Stars, 3 Fork
PaperTTY - Python module to render a TTY on e-ink

gluon-reid - 72 Stars, 4 Fork
A code gallery for person re-identification with mxnet-gluon, and I will reproduce many SOTA algorithms.

fagan - 36 Stars, 11 Fork
A variant of the Self Attention GAN named: FAGAN (Full Attention GAN)

django-vue-template - 32 Stars, 3 Fork
Django Rest + Vue JS Template

django-deployment-book - 20 Stars, 4 Fork
The Unix system administration guide for Django developers

decli - 9 Stars, 0 Fork
Minimal, easy-to-use, declarative cli tool

aira - 8 Stars, 0 Fork
Aira is a simple script language based on python3

csv-position-reader - 5 Stars, 0 Fork
A custom CSV reader implementation with direct file access

cookiecutter-django-shop - 3 Stars, 0 Fork
Cookiecutter django-SHOP is a blueprint for an e-commerce site based on django-CMS.

ews - 3 Stars, 1 Fork
Ethereum Web Service


Real Python: The Ultimate Guide to Django Redirects


When you build a Python web application with the Django framework, you’ll at some point have to redirect the user from one URL to another.

In this guide, you’ll learn everything you need to know about HTTP redirects and how to deal with them in Django. At the end of this tutorial, you’ll:

  • Be able to redirect a user from one URL to another URL
  • Know the difference between temporary and permanent redirects
  • Avoid common pitfalls when working with redirects

This tutorial assumes that you’re familiar with the basic building blocks of a Django application, like views and URL patterns.

Django Redirects: A Super Simple Example

In Django, you redirect the user to another URL by returning an instance of HttpResponseRedirect or HttpResponsePermanentRedirect from your view. The simplest way to do this is to use the function redirect() from the module django.shortcuts. Here’s an example:

# views.py
from django.shortcuts import redirect

def redirect_view(request):
    response = redirect('/redirect-success/')
    return response

Just call redirect() with a URL in your view. It will return an HttpResponseRedirect instance, which you then return from your view.

A view returning a redirect has to be added to your urls.py, like any other view:

# urls.py
from django.urls import path
from .views import redirect_view

urlpatterns = [
    path('/redirect/', redirect_view),
    # ... more URL patterns here
]

Assuming this is the main urls.py of your Django project, the URL /redirect/ now redirects to /redirect-success/.

To avoid hard-coding the redirect URL, you can call redirect() with the name of a view or URL pattern, or with a model. You can also create a permanent redirect by passing the keyword argument permanent=True.

This article could end here, but then it could hardly be called “The Ultimate Guide to Django Redirects.” We will take a closer look at the redirect() function in a minute and also get into the nitty-gritty details of HTTP status codes and different HttpRedirectResponse classes, but let’s take a step back and start with a fundamental question.

Why Redirect

You might wonder why you’d ever want to redirect a user to a different URL in the first place. To get an idea where redirects make sense, have a look at how Django itself incorporates redirects into features that the framework provides by default:

  • When you are not logged-in and request a URL that requires authentication, like the Django admin, Django redirects you to the login page.
  • When you log in successfully, Django redirects you to the URL you requested originally.
  • When you change your password using the Django admin, you are redirected to a page that indicates that the change was successful.
  • When you create an object in the Django admin, Django redirects you to the object list.

What would an alternative implementation without redirects look like? If a user has to log in to view a page, you could simply display a page that says something like “Click here to log in.” This would work, but it would be inconvenient for the user.

URL shorteners like http://bit.ly are another example of where redirects come in handy: you type a short URL into the address bar of your browser and are then redirected to a page with a long, unwieldy URL.

In other cases, redirects are not just a matter of convenience. Redirects are an essential instrument to guide the user through a web application. After performing some kind of operation with side effects, like creating or deleting an object, it’s a best practice to redirect to another URL to prevent accidentally performing the operation twice.

One example of this use of redirects is form handling, where a user is redirected to another URL after successfully submitting a form. Here’s a code sample that illustrates how you’d typically handle a form:

from django import forms
from django.http import HttpResponseRedirect
from django.shortcuts import redirect, render

def send_message(name, message):
    # Code for actually sending the message goes here
    pass

class ContactForm(forms.Form):
    name = forms.CharField()
    message = forms.CharField(widget=forms.Textarea)

def contact_view(request):
    # The request method 'POST' indicates
    # that the form was submitted
    if request.method == 'POST':  # 1
        # Create a form instance with the submitted data
        form = ContactForm(request.POST)  # 2
        # Validate the form
        if form.is_valid():  # 3
            # If the form is valid, perform some kind of
            # operation, for example sending a message
            send_message(form.cleaned_data['name'],
                         form.cleaned_data['message'])
            # After the operation was successful,
            # redirect to some other page
            return redirect('/success/')  # 4
    else:  # 5
        # Create an empty form instance
        form = ContactForm()

    return render(request, 'contact_form.html', {'form': form})

The purpose of this view is to display and handle a contact form that allows the user to send a message. Let’s follow it step by step:

  1. First the view looks at the request method. When the user visits the URL connected to this view, the browser performs a GET request.

  2. If the view is called with a POST request, the POST data is used to instantiate a ContactForm object.

  3. If the form is valid, the form data is passed to send_message(). This function is not relevant in this context and therefore not shown here.

  4. After sending the message, the view returns a redirect to the URL /success/. This is the step we are interested in. For simplicity, the URL is hard-coded here. You’ll see later how you can avoid that.

  5. If the view receives a GET request (or, to be precise, any kind of request that is not a POST request), it creates an instance of ContactForm and uses django.shortcuts.render() to render the contact_form.html template.

If the user now hits reload, only the /success/ URL is reloaded. Without the redirect, reloading the page would re-submit the form and send another message.

Behind the Scenes: How an HTTP Redirect Works

Now you know why redirects make sense, but how do they work? Let’s have a quick recap of what happens when you enter a URL in the address bar of your web browser.

A Quick Primer on HTTP

Let’s assume you’ve created a Django application with a “Hello World” view that handles the path /hello/. You are running your application with the Django development server, so the complete URL is http://127.0.0.1:8000/hello/.

When you enter that URL in your browser, it connects to port 8000 on the server with the IP address 127.0.0.1 and sends an HTTP GET request for the path /hello/. The server replies with an HTTP response.

HTTP is text-based, so it’s relatively easy to look at the back and forth between the client and the server. You can use the command line tool curl with the option --include to have a look at the complete HTTP response including the headers, like this:

$ curl --include http://127.0.0.1:8000/hello/
HTTP/1.1 200 OK
Date: Sun, 01 Jul 2018 20:32:55 GMT
Server: WSGIServer/0.2 CPython/3.6.3
Content-Type: text/html; charset=utf-8
X-Frame-Options: SAMEORIGIN
Content-Length: 11

Hello World

As you can see, an HTTP response starts with a status line that contains a status code and a status message. The status line is followed by an arbitrary number of HTTP headers. An empty line indicates the end of the headers and the start of the response body, which contains the actual data the server wants to send.

HTTP Redirects Status Codes

What does a redirect response look like? Let’s assume the path /redirect/ is handled by redirect_view(), shown earlier. If you access http://127.0.0.1:8000/redirect/ with curl, your console looks like this:

$ curl --include http://127.0.0.1:8000/redirect/
HTTP/1.1 302 Found
Date: Sun, 01 Jul 2018 20:35:34 GMT
Server: WSGIServer/0.2 CPython/3.6.3
Content-Type: text/html; charset=utf-8
Location: /redirect-success/
X-Frame-Options: SAMEORIGIN
Content-Length: 0

The two responses might look similar, but there are some key differences. The redirect:

  • Returns a different status code (302 versus 200)
  • Contains a Location header with a relative URL
  • Ends with an empty line because the body of the redirect response is empty

The primary differentiator is the status code. The specification of the HTTP standard says the following:

The 302 (Found) status code indicates that the target resource resides temporarily under a different URI. Since the redirection might be altered on occasion, the client ought to continue to use the effective request URI for future requests. The server SHOULD generate a Location header field in the response containing a URI reference for the different URI. The user agent MAY use the Location field value for automatic redirection. (Source)

In other words, whenever the server sends a status code of 302, it says to the client, “Hey, at the moment, the thing you are looking for can be found at this other location.”

A key phrase in the specification is “MAY use the Location field value for automatic redirection.” It means that you can’t force the client to load another URL. The client can choose to wait for user confirmation or decide not to load the URL at all.

Now you know that a redirect is just an HTTP response with a 3xx status code and a Location header. The key takeaway here is that an HTTP redirect is like any old HTTP response, but with an empty body, 3xx status code, and a Location header.

That’s it. We’ll tie this back into Django momentarily, but first let’s take a look at two types of redirects in that 3xx status code range and see why they matter when it comes to web development.

Temporary vs. Permanent Redirects

The HTTP standard specifies several redirect status codes, all in the 3xx range. The two most common status codes are 301 Moved Permanently and 302 Found.

A status code 302 Found indicates a temporary redirect. A temporary redirect says, “At the moment, the thing you’re looking for can be found at this other address.” Think of it like a store sign that reads, “Our store is currently closed for renovation. Please go to our other store around the corner.” As this is only temporary, you’d check the original address the next time you go shopping.

Note: In HTTP 1.0, the message for status code 302 was Moved Temporarily. The message was changed to Found in HTTP 1.1.

As the name implies, permanent redirects are supposed to be permanent. A permanent redirect tells the browser, “The thing you’re looking for is no longer at this address. It’s now at this new address, and it will never be at the old address again.”

A permanent redirect is like a store sign that reads, “We moved. Our new store is just around the corner.” This change is permanent, so the next time you want to go to the store, you’d go straight to the new address.

Note: Permanent redirects can have unintended consequences. Finish this guide before using a permanent redirect or jump straight to the section “Permanent redirects are permanent.”

Browsers behave similarly when handling redirects: when a URL returns a permanent redirect response, this response is cached. The next time the browser encounters the old URL, it remembers the redirect and directly requests the new address.

Caching a redirect saves an unnecessary request and makes for a better and faster user experience.

Furthermore, the distinction between temporary and permanent redirects is relevant for Search Engine Optimization.

Redirects in Django

Now you know that a redirect is just an HTTP response with a 3xx status code and a Location header.

You could build such a response yourself from a regular HttpResponse object:

def hand_crafted_redirect_view(request):
    response = HttpResponse(status=302)
    response['Location'] = '/redirect/success/'
    return response

This solution is technically correct, but it involves quite a bit of typing.

The HttpResponseRedirect Class

You can save yourself some typing with the class HttpResponseRedirect, a subclass of HttpResponse. Just instantiate the class with the URL you want to redirect to as the first argument, and the class will set the correct status and Location header:

def redirect_view(request):
    return HttpResponseRedirect('/redirect/success/')

You can play with the HttpResponseRedirect class in the Python shell to see what you’re getting:

>>> from django.http import HttpResponseRedirect
>>> redirect = HttpResponseRedirect('/redirect/success/')
>>> redirect.status_code
302
>>> redirect['Location']
'/redirect/success/'

There is also a class for permanent redirects, which is aptly named HttpResponsePermanentRedirect. It works the same as HttpResponseRedirect; the only difference is that it has a status code of 301 (Moved Permanently).

Note: In the examples above, the redirect URLs are hard-coded. Hard-coding URLs is bad practice: if the URL ever changes, you have to search through all your code and change any occurrences. Let’s fix that!

You could use django.urls.reverse() to build a URL, but there is a more convenient way as you will see in the next section.

The redirect() Function

To make your life easier, Django provides the versatile shortcut function you’ve already seen in the introduction: django.shortcuts.redirect().

You can call this function with:

  • A model instance, or any other object, with a get_absolute_url() method
  • A URL or view name and positional and/or keyword arguments
  • A URL

It will take the appropriate steps to turn the arguments into a URL and return an HttpResponseRedirect. If you pass permanent=True, it will return an instance of HttpResponsePermanentRedirect, resulting in a permanent redirect.

Here are three examples to illustrate the different use cases:

  1. Passing a model:

    from django.shortcuts import redirect

    def model_redirect_view(request):
        product = Product.objects.filter(featured=True).first()
        return redirect(product)

    redirect() will call product.get_absolute_url() and use the result as redirect target. If the given class, in this case Product, doesn’t have a get_absolute_url() method, this will fail with a TypeError.

  2. Passing a URL name and arguments:

    from django.shortcuts import redirect

    def fixed_featured_product_view(request):
        ...
        product_id = settings.FEATURED_PRODUCT_ID
        return redirect('product_detail', product_id=product_id)

    redirect() will try to use its given arguments to reverse a URL. This example assumes your URL patterns contain a pattern like this:

    path('/product/<product_id>/', product_detail_view, name='product_detail')
    
  3. Passing a URL:

    from django.shortcuts import redirect

    def featured_product_view(request):
        return redirect('/products/42/')

    redirect() will treat any string containing a / or . as a URL and use it as redirect target.

The RedirectView Class-Based View

If you have a view that does nothing but return a redirect, you could use the class-based view django.views.generic.base.RedirectView.

You can tailor RedirectView to your needs through various attributes.

If the class has a .url attribute, it will be used as a redirect URL. String formatting placeholders are replaced with named arguments from the URL:

# urls.py
from django.urls import path
from .views import SearchRedirectView

urlpatterns = [
    path('/search/<term>/', SearchRedirectView.as_view())
]

# views.py
from django.views.generic.base import RedirectView

class SearchRedirectView(RedirectView):
    url = 'https://google.com/?q=%(term)s'

The URL pattern defines an argument term, which is used in SearchRedirectView to build the redirect URL. The path /search/kittens/ in your application will redirect you to https://google.com/?q=kittens.

Instead of subclassing RedirectView to overwrite the url attribute, you can also pass the keyword argument url to as_view() in your urlpatterns:

# urls.py
from django.urls import path
from django.views.generic.base import RedirectView

urlpatterns = [
    path('/search/<term>/',
         RedirectView.as_view(url='https://google.com/?q=%(term)s')),
]

You can also overwrite get_redirect_url() to get a completely custom behavior:

from random import choice
from django.views.generic.base import RedirectView

class RandomAnimalView(RedirectView):
    animal_urls = ['/dog/', '/cat/', '/parrot/']
    is_permanent = True

    def get_redirect_url(self, *args, **kwargs):
        return choice(self.animal_urls)

This class-based view redirects to a URL picked randomly from .animal_urls.

django.views.generic.base.RedirectView offers a few more hooks for customization. Here is the complete list:

  • .url

    If this attribute is set, it should be a string with a URL to redirect to. If it contains string formatting placeholders like %(name)s, they are expanded using the keyword arguments passed to the view.

  • .pattern_name

    If this attribute is set, it should be the name of a URL pattern to redirect to. Any positional and keyword arguments passed to the view are used to reverse the URL pattern (see the sketch after this list).

  • .permanent

    If this attribute is True, the view returns a permanent redirect. It defaults to False.

  • .query_string

    If this attribute is True, the view appends any provided query string to the redirect URL. If it is False, which is the default, the query string is discarded.

  • get_redirect_url(*args, **kwargs)

    This method is responsible for building the redirect URL. If this method returns None, the view returns a 410 Gone status.

    The default implementation first checks .url. It treats .url as an “old-style” format string, using any named URL parameters passed to the view to expand any named format specifiers.

    If .url is not set, it checks if .pattern_name is set. If it is, it uses it to reverse a URL with any positional and keyword arguments it received.

    You can change that behavior in any way you want by overwriting this method. Just make sure it returns a string containing a URL.
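To make .pattern_name concrete, here's a brief sketch (the URL names and the product_detail_view import are hypothetical, not from the original article): the old URL's captured arguments are reused to reverse the new pattern.

# urls.py
from django.urls import path
from django.views.generic.base import RedirectView
from .views import product_detail_view  # hypothetical view

urlpatterns = [
    # The captured product_id is reused to reverse
    # the 'product_detail' pattern below
    path('old-products/<int:product_id>/',
         RedirectView.as_view(pattern_name='product_detail')),
    path('products/<int:product_id>/', product_detail_view,
         name='product_detail'),
]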

Note: Class-based views are a powerful concept but can be a bit difficult to wrap your head around. Unlike regular function-based views, where it’s relatively straightforward to follow the flow of the code, class-based views are made up of a complex hierarchy of mixins and base classes.

A great tool to make sense of a class-based view class is the website Classy Class-Based Views.

You could implement the functionality of RandomAnimalView from the example above with this simple function-based view:

from random import choice
from django.shortcuts import redirect

def random_animal_view(request):
    animal_urls = ['/dog/', '/cat/', '/parrot/']
    return redirect(choice(animal_urls))

As you can see, the class-based approach does not provide any obvious benefit while adding some hidden complexity. That raises the question: when should you use RedirectView?

If you want to add a redirect directly in your urls.py, using RedirectView makes sense. But if you find yourself overwriting get_redirect_url, a function-based view might be easier to understand and more flexible for future enhancements.

Advanced Usage

Once you know that you probably want to use django.shortcuts.redirect(), redirecting to a different URL is quite straightforward. But there are a couple of advanced use cases that are not so obvious.

Passing Parameters with Redirects

Sometimes, you want to pass some parameters to the view you’re redirecting to. Your best option is to pass the data in the query string of your redirect URL, which means redirecting to a URL like this:

http://example.com/redirect-path/?parameter=value

Let’s assume you want to redirect from some_view() to product_view(), but pass an optional parameter category:

from django.urls import reverse
from urllib.parse import urlencode

def some_view(request):
    ...
    base_url = reverse('product_view')  # 1 /products/
    query_string = urlencode({'category': category.id})  # 2 category=42
    url = '{}?{}'.format(base_url, query_string)  # 3 /products/?category=42
    return redirect(url)  # 4

def product_view(request):
    category_id = request.GET.get('category')  # 5
    # Do something with category_id

The code in this example is quite dense, so let’s follow it step by step:

  1. First, you use django.urls.reverse() to get the URL mapping to product_view().

  2. Next, you have to build the query string. That’s the part after the question mark. It’s advisable to use urllib.parse.urlencode() for that, as it will take care of properly encoding any special characters.

  3. Now you have to join base_url and query_string with a question mark. A format string works fine for that.

  4. Finally, you pass url to django.shortcuts.redirect() or to a redirect response class.

  5. In product_view(), your redirect target, the parameter will be available in the request.GET dictionary. The parameter might be missing, so you should use request.GET.get('category') instead of request.GET['category']. The former returns None when the parameter does not exist, while the latter would raise an exception.

Note: Make sure to validate any data you read from query strings. It might seem like this data is under your control because you created the redirect URL.

In reality, the redirect could be manipulated by the user and must not be trusted, like any other user input. Without proper validation, an attacker might be able to gain unauthorized access.

Special Redirect Codes

Django provides HTTP response classes for the status codes 301 and 302. Those should cover most use cases, but if you ever have to return status codes 303, 307, or 308, you can quite easily create your own response class. Simply subclass HttpResponseRedirectBase and overwrite the status_code attribute:

class HttpResponseTemporaryRedirect(HttpResponseRedirectBase):
    status_code = 307

Alternatively, you can use the django.shortcuts.redirect() function to create a response object and then change its status code before returning it. This approach makes sense when you have the name of a view or URL or a model you want to redirect to:

def temporary_redirect_view(request):
    response = redirect('success_view')
    response.status_code = 307
    return response

Note: There is actually a third class with a status code in the 3xx range: HttpResponseNotModified, with the status code 304. It indicates that the content at the URL has not changed and that the client can use a cached version.

One could argue that 304 Not Modified response redirects to the cached version of a URL, but that’s a bit of a stretch. Consequently, it is no longer listed in the “Redirection 3xx” section of the HTTP standard.

Pitfalls

Redirects That Just Won’t Redirect

The simplicity of django.shortcuts.redirect() can be deceiving. The function itself doesn’t perform a redirect: it just returns a redirect response object. You must return this response object from your view (or in a middleware). Otherwise, no redirect will happen.

But even if you know that just calling redirect() is not enough, it’s easy to introduce this bug into a working application through a simple refactoring. Here’s an example to illustrate that.

Let’s assume you are building a shop and have a view that is responsible for displaying a product. If the product does not exist, you redirect to the homepage:

def product_view(request, product_id):
    try:
        product = Product.objects.get(pk=product_id)
    except Product.DoesNotExist:
        return redirect('/')
    return render(request, 'product_detail.html', {'product': product})

Now you want to add a second view to display customer reviews for a product. It should also redirect to the homepage for non-existing products, so as a first step, you extract this functionality from product_view() into a helper function get_product_or_redirect():

def get_product_or_redirect(product_id):
    try:
        return Product.objects.get(pk=product_id)
    except Product.DoesNotExist:
        return redirect('/')

def product_view(request, product_id):
    product = get_product_or_redirect(product_id)
    return render(request, 'product_detail.html', {'product': product})

Unfortunately, after the refactoring, the redirect does not work anymore.

The result of redirect() is returned from get_product_or_redirect(), but product_view() does not return it. Instead, it is passed to the template.

Depending on how you use the product variable in the product_detail.html template, this might not result in an error message and just display empty values.

Redirects That Just Won’t Stop Redirecting

When dealing with redirects, you might accidentally create a redirect loop, by having URL A return a redirect that points to URL B which returns a redirect to URL A, and so on. Most HTTP clients detect this kind of redirect loop and will display an error message after a number of requests.

Unfortunately, this kind of bug can be tricky to spot because everything looks fine on the server side. Unless your users complain about the issue, the only indication that something might be wrong is that you’ve got a number of requests from one client that all result in a redirect response in quick succession, but no response with a 200 OK status.

Here’s a simple example of a redirect loop:

def a_view(request):
    return redirect('another_view')

def another_view(request):
    return redirect('a_view')

This example illustrates the principle, but it’s overly simplistic. The redirect loops you’ll encounter in real-life are probably going to be harder to spot. Let’s look at a more elaborate example:

def featured_products_view(request):
    featured_products = Product.objects.filter(featured=True)
    if len(featured_products) == 1:
        return redirect('product_view',
                        product_id=featured_products[0].id)
    return render(request, 'featured_products.html',
                  {'product': featured_products})

def product_view(request, product_id):
    try:
        product = Product.objects.get(pk=product_id, in_stock=True)
    except Product.DoesNotExist:
        return redirect('featured_products_view')
    return render(request, 'product_detail.html', {'product': product})

featured_products_view() fetches all featured products, in other words Product instances with .featured set to True. If only one featured product exists, it redirects directly to product_view(). Otherwise, it renders a template with the featured_products queryset.

The product_view looks familiar from the previous section, but it has two minor differences:

  • The view tries to fetch a Product that is in stock, indicated by having .in_stock set to True.
  • The view redirects to featured_products_view() if no product is in stock.

This logic works fine until your shop becomes a victim of its own success and the one featured product you currently have goes out of stock. If you set .in_stock to False but forget to set .featured to False as well, then any visitor to your featured_products_view() will now be stuck in a redirect loop.

There is no bullet-proof way to prevent this kind of bug, but a good starting point is to check if the view you are redirecting to uses redirects itself.

Permanent Redirects Are Permanent

Permanent redirects can be like bad tattoos: they might seem like a good idea at the time, but once you realize they were a mistake, it can be quite hard to get rid of them.

When a browser receives a permanent redirect response for a URL, it caches this response indefinitely. Any time you request the old URL in the future, the browser doesn’t bother loading it and directly loads the new URL.

It can be quite tricky to convince a browser to load a URL that once returned a permanent redirect. Google Chrome is especially aggressive when it comes to caching redirects.

Why can this be a problem?

Imagine you want to build a web application with Django. You register your domain at myawesomedjangowebapp.com. As a first step, you install a blog app at https://myawesomedjangowebapp.com/blog/ to build a launch mailing list.

Your site’s homepage at https://myawesomedjangowebapp.com/ is still under construction, so you redirect to https://myawesomedjangowebapp.com/blog/. You decide to use a permanent redirect because you heard that permanent redirects are cached and caching makes things faster, and faster is better because speed is a factor for ranking in Google search results.

As it turns out, you’re not only a great developer, but also a talented writer. Your blog becomes popular, and your launch mailing list grows. After a couple of months, your app is ready. It now has a shiny homepage, and you finally remove the redirect.

You send out an announcement email with a special discount code to your sizeable launch mailing list. You lean back and wait for the sign-up notifications to roll in.

To your horror, your mailbox fills with messages from confused visitors who want to visit your app but are always being redirected to your blog.

What has happened? Your blog readers had visited https://myawesomedjangowebapp.com/ when the redirect to https://myawesomedjangowebapp.com/blog/ was still active. Because it was a permanent redirect, it was cached in their browsers.

When they clicked on the link in your launch announcement mail, their browsers never bothered to check your new homepage and went straight to your blog. Instead of celebrating your successful launch, you’re busy instructing your users how to fiddle with chrome://net-internals to reset the cache of their browsers.

The permanent nature of permanent redirects can also bite you while developing on your local machine. Let’s rewind to the moment when you implemented that fateful permanent redirect for myawesomedjangowebapp.com.

You start the development server and open http://127.0.0.1:8000/. As intended, your app redirects your browser to http://127.0.0.1:8000/blog/. Satisfied with your work, you stop the development server and go to lunch.

You return with a full belly, ready to tackle some client work. The client wants some simple changes to their homepage, so you load the client’s project and start the development server.

But wait, what is going on here? The homepage is broken: it now returns a 404! Due to the afternoon slump, it takes you a while to notice that you’re being redirected to http://127.0.0.1:8000/blog/, which doesn’t exist in the client’s project.

To the browser, it doesn’t matter that the URL http://127.0.0.1:8000/ now serves a completely different application. All that matters to the browser is that this URL once in the past returned a permanent redirect to http://127.0.0.1:8000/blog/.

The takeaway from this story is that you should only use permanent redirects on URLs that you’ve no intention of ever using again. There is a place for permanent redirects, but you must be aware of their consequences.

Even if you’re confident that you really need a permanent redirect, it’s a good idea to implement a temporary redirect first and only switch to its permanent cousin once you’re 100% sure everything works as intended.

Unvalidated Redirects Can Compromise Security

From a security perspective, redirects are a relatively safe technique. An attacker cannot hack a website with a redirect. After all, a redirect just redirects to a URL that an attacker could just type in the address bar of their browser.

However, if you use some kind of user input, like a URL parameter, without proper validation as a redirect URL, this could be abused by an attacker for a phishing attack. This kind of redirect is called an open or unvalidated redirect.

There are legitimate use cases for redirecting to a URL that is read from user input. A prime example is Django’s login view. It accepts a URL parameter next that contains the URL of the page the user is redirected to after login. To redirect the user to their profile after login, the URL might look like this:

https://myawesomedjangowebapp.com/login/?next=/profile/

Django does validate the next parameter, but let’s assume for a second that it doesn’t.

Without validation, an attacker could craft a URL that redirects the user to a website under their control, for example:

https://myawesomedjangowebapp.com/login/?next=https://myawesomedjangowebapp.co/profile/

The website myawesomedjangowebapp.co might then display an error message and trick the user into entering their credentials again.

The best way to avoid open redirects is to not use any user input when building a redirect URL.

If you cannot be sure that a URL is safe for redirection, you can use the function django.utils.http.is_safe_url() to validate it. The docstring explains its usage quite well:

is_safe_url(url, host=None, allowed_hosts=None, require_https=False)

Return True if the url is a safe redirection (i.e. it doesn’t point to a different host and uses a safe scheme). Always return False on an empty url. If require_https is True, only ‘https’ will be considered a valid scheme, as opposed to ‘http’ and ‘https’ with the default, False. (Source)

Let’s look at some examples.

A relative URL is considered safe:

>>> # Import the function first.
>>> from django.utils.http import is_safe_url
>>>
>>> is_safe_url('/profile/')
True

A URL pointing to another host is generally not considered safe:

>>> is_safe_url('https://myawesomedjangowebapp.com/profile/')
False

A URL pointing to another host is considered safe if its host is provided in allowed_hosts:

>>> is_safe_url('https://myawesomedjangowebapp.com/profile/',
...             allowed_hosts={'myawesomedjangowebapp.com'})
True

If the argument require_https is True, a URL using the http scheme is not considered safe:

>>> is_safe_url('http://myawesomedjangowebapp.com/profile/',
...             allowed_hosts={'myawesomedjangowebapp.com'},
...             require_https=True)
False
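
Putting these pieces together, here is a minimal sketch of a view that validates a user-supplied next parameter before redirecting. The view name and the fallback URL are illustrative assumptions, not Django’s actual login view:

from django.shortcuts import redirect
from django.utils.http import is_safe_url

def after_login(request):
    # Hypothetical view: read the redirect target supplied by the user.
    next_url = request.GET.get('next', '/profile/')
    # Reject anything that doesn't stay on this site's own host.
    if not is_safe_url(next_url, allowed_hosts={request.get_host()}):
        next_url = '/profile/'
    return redirect(next_url)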

Summary

This wraps up this guide on HTTP redirects with Django. Congratulations: you have now touched on every aspect of redirects all the way from the low-level details of the HTTP protocol to the high-level way of dealing with them in Django.

You learned how an HTTP redirect looks under the hood, what the different status codes are, and how permanent and temporary redirects differ. This knowledge is not specific to Django and is valuable for web development in any language.

You can now perform a redirect with Django, either by using the redirect response classes HttpResponseRedirect and HttpResponsePermanentRedirect, or with the convenience function django.shortcuts.redirect(). You saw solutions for a couple of advanced use cases and know how to steer clear of common pitfalls.

If you have any further questions about HTTP redirects, leave a comment below, and in the meantime, happy redirecting!


Python Bytes: #91 Will there be a PyBlazor?

Continuum Analytics Blog: Introducing Skein: Deploy Python on Apache YARN the Easy Way


By Jim Crist. This post is reprinted with permission from Jim Crist’s blog. The original post can be found here. In this post, I introduce Skein, a new tool and library for deploying applications on Apache YARN. I provide background on why this work was necessary, and demonstrate deploying a simple Python application on a YARN cluster. Introduction …
Read more →

The post Introducing Skein: Deploy Python on Apache YARN the Easy Way appeared first on Anaconda.

Peter Bengtsson: django-pipeline and Zopfli


tl;dr; I wrote my own extension to django-pipeline that uses Zopfli to create .gz files from static assets collected in Django. Here's the code.

Nginx and Gzip

What I wanted was to continue to use django-pipeline, which does a great job of reading a settings.BUNDLES setting and generating things like /static/js/myapp.min.a206ec6bd8c7.js. It has configurable options to not just make those files but also generate /static/js/myapp.min.a206ec6bd8c7.js.gz, which means that with gzip_static in Nginx, Nginx doesn't have to Gzip compress static files on-the-fly but can basically just read them from disk. Nginx doesn't care how the file got there, but an immediate advantage of preparing the file on disk is that the compression can be higher (smaller .gz files). That means smaller responses to be sent to the client and less CPU work needed from Nginx. Your job is to set gzip_static on; in your Nginx config (per location) and make sure every compressible file exists on disk with the same name but with the .gz suffix.

In other words, when the client does GET https://example.com/static/foo.js, Nginx quickly checks the file system to see if ROOT/static/foo.js.gz exists and, if so, returns that. If the file doesn't exist and you have gzip on; in your config, Nginx will read ROOT/static/foo.js into memory, compress it (usually with a lower compression level) and return that. Nginx figures out whether to do this at all, dynamically, by reading the Accept-Encoding header from the request.
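
For reference, a minimal sketch of the corresponding Nginx configuration — the location and root paths here are assumptions; adapt them to your deployment:

location /static/ {
    root /var/www;      # assumes collected files live under /var/www/static/
    gzip_static on;     # serve foo.js.gz from disk when it exists
    gzip on;            # otherwise compress on-the-fly
}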

Zopfli

The best solution today to generate these .gz files is Zopfli. Zopfli is slower than good old regular gzip but the files get smaller. To manually compress a file you can install the zopfli executable (e.g. brew install zopfli or apt install zopfli) and then run zopfli $ROOT/static/foo.js which creates a $ROOT/static/foo.js.gz file.
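
If you wanted to script that manual step over a whole tree of collected files, a rough sketch might look like this — the static_root path and the set of file extensions are assumptions to adjust for your project:

import os
import subprocess

static_root = "/var/www/static"  # assumption: your collectstatic output directory

for dirpath, _dirnames, filenames in os.walk(static_root):
    for name in filenames:
        if name.endswith((".js", ".css", ".svg")):
            # zopfli creates <file>.gz next to the original file.
            subprocess.run(["zopfli", os.path.join(dirpath, name)], check=True)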

So your task is to build some pipelining code that generates a .gz version of every static file your Django server creates.
At first I tried django-static-compress, which has an extension to the regular Django staticfiles storage. The default staticfiles storage is django.contrib.staticfiles.storage.StaticFilesStorage and that's what django-static-compress extends.

But I wanted more. I wanted all the good bits from django-pipeline (minification, hashes in filenames, concatenation, etc.). Also, in django-static-compress you can't control the parameters to zopfli, such as the number of iterations. And with django-static-compress you have to install Brotli, which I can't use because I don't want to compile my own Nginx.

Solution

So I wrote my own little mashup. I took some ideas from how django-pipeline does regular gzip compression as a post-process step. And in my case, I never want to bother with any of the other files that are put into the settings.STATIC_ROOT directory from the collectstatic command.

Here's my implementation: peterbecom.storage.ZopfliPipelineCachedStorage. Check it out. It's very tailored to my personal preferences and use case, but it works great. To use it, I have this in my settings.py: STATICFILES_STORAGE = "peterbecom.storage.ZopfliPipelineCachedStorage"
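
To give a sense of the shape of such a class, here is a heavily simplified sketch. The base class and the post_process hook are assumptions based on django-pipeline's storage API at the time; see the linked implementation for the real code:

import subprocess

from pipeline.storage import PipelineCachedStorage


class ZopfliPipelineCachedStorage(PipelineCachedStorage):
    def post_process(self, *args, **kwargs):
        # Let django-pipeline do its minification/hashing first, then
        # compress each successfully processed .js/.css file with zopfli.
        for name, hashed_name, processed in super().post_process(*args, **kwargs):
            if processed is True and hashed_name.endswith((".js", ".css")):
                # Writes hashed_name + ".gz" next to the original file.
                subprocess.run(["zopfli", self.path(hashed_name)], check=True)
            yield name, hashed_name, processed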

I know what you're thinking

Why not try to get this into django-pipeline or into django-static-compress? The answer is frankly laziness. Hopefully someone else can pick up this task. I have fewer and fewer projects where I use Django to handle static files. These days most of my projects are single-page apps that are 100% static and use Django for XHR requests to get the data.

Codementor: How and why I built Transport Management System
