Channel: Planet Python

Made With Mu: Announcing Mu-moot 2.0


Sign up for free for the second ever Mu-moot! Thursday October 18th at 6pm in the offices of Kano at 3 Finsbury Ave, London, EC2M 2PA. It’s a friendly and supportive meeting for learners, teachers and software developers interested in computing, mentoring, technology, pedagogy, programming and education. While Mu provides a focus for our meetings we are not limited to Mu related activities, discussions or presentations. Rather, Mu-moot is a place to encounter new ideas, share knowledge and meet a diverse group of like-minded people.

What will we be doing?

  • We’ll start with a welcome of pizza and refreshments from our wonderful hosts at Kano.
  • Things will kick off properly at 6:30pm with a presentation on the short, medium and long term future plans for Mu. This will be followed by a short question and answer session.
  • Next comes 15 minutes for lightning talks. If you’ll be there and have something to share, this is where and when you tell us!
  • We end with a practical workshop called, “Debugging With Mu - What is debugging? Why do I need it? How do I do it?”.


At the end, around 8pm-ish, we’ll find a local pub and continue our discussions in a more relaxed atmosphere.

Future Mu-moots look to be even more interesting. Dan Pope, one of the originators of Mu, will be presenting and running a workshop about PyGameZero at the November moot. Tim Golden, one of Mu’s most prolific contributors, will hopefully present and run a workshop about NetworkZero at the December-moot.

If you have an idea for contributing to a future Mu-moot, please get in touch.


NumFOCUS: Successful Sustainer Weeks Fundraiser for Open Source Scientific Computing Software

Kay Hayen: Nuitka Release 0.6.0


This is to inform you about the new stable release of Nuitka. It is the extremely compatible Python compiler. Please see the page "What is Nuitka?" for an overview.

This release adds massive improvements for optimization and a couple of bug fixes.

It also marks the milestone of doing actual type inference, even if only in a very limited form.

The new version number also brings a lot of UI changes. The options that control recursion into modules have all been renamed, some now have different defaults, and the output filenames have changed.

Bug Fixes

  • Python3.5: Fix, the awaiting flag was not removed for exceptions thrown into a coroutine, so next time it appeared to be awaiting instead of finished.
  • Python3: Classes in generators that were using built-in functions crashed the compilation with C errors.
  • Some regressions for XML outputs from previous changes were fixed.
  • Fix, hasattr was not raising an exception if used with non-string attributes.
  • For really large compilations, the MSVC linker could choke on the input file due to line length limits; this is now fixed in the inline copy of Scons.
  • Standalone: Follow the changed hidden dependency of PyQt5 on PyQt5.sip for newer versions.
  • Standalone: Include the certificate file used by the requests module as a data file in some cases.

New Optimization

  • Enabled the C target type nuitka_bool for variables that are stored with boolean shape only, and generate C code for those.
  • Using the C target type nuitka_bool, many more expressions are now handled better in conditions.
  • Enhanced is and is not to be aware of the C source type, so they can be much faster.
  • Use C target type for bool built-in giving more efficient code for some source values.
  • Annotate the not result to have boolean type shape, allowing for more compile time optimization with it.
  • Restored previously lost optimization of loop break handling StopIteration which makes loops much faster again.
  • Restore lost optimization of subscripts with constant integer values making them faster again.
  • Optimize in-place operations for cases where left, right, or both sides have known type shapes for some values. Initially only a few variants were added, but there is more to come.
  • When adjacent parts of an f-string become known string constants, join them at compile time.
  • When there is only one remaining part in an f-string, use that directly as the result.
  • Optimize empty f-strings directly into empty strings constant during the tree building phase.
  • Added specialized attribute check for use in re-formulations that doesn't expose exceptions.
  • Remove locals sync operation in scopes without local variables, e.g. classes or modules, making exec and the like slightly leaner there.
  • Remove try nodes that did only re-raise exceptions.
  • The del of variables is now driven fully by C types and generates more compatible code.
  • Removed useless double exception exits annotated for expressions of conditions and added code that allows conditions to adapt themselves to the target shape bool during optimization.
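
To give a rough idea of the kinds of Python constructs these optimizations target, here is a small illustrative snippet (not taken from the release notes):

# Illustrative only: constructs that the optimizations above can speed up.
flag = True                        # boolean shape, eligible for the C nuitka_bool target type
empty = f""                        # an empty f-string folds into an empty string constant
greeting = f"{'Hello'}{' world'}"  # adjacent constant parts are joined at compile time

total = 0
for item in [1, 2, 3]:
    if item is None:               # 'is' comparisons can become plain C-level checks
        break                      # loop break handling is optimized again
    total += item                  # in-place operation with a known type shape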

New Features

  • Added support for using .egg files in PYTHONPATH, one of the more rare uses, where Nuitka wasn't yet compatible.
  • Output binaries in standalone mode with platform suffix, on non-Windows that means no suffix. In accelerated mode on non-Windows, use .bin as a suffix to avoid collision with files that have no suffix.
  • Windows: It's now possible to use clang-cl.exe for CC with Nuitka as a third compiler on Windows, but it requires an existing MSVC install to be used for resource compilation and linking.
  • Windows: Added support for using ccache.exe and clcache.exe, so that object files can now be cached for re-compilation.
  • For debug mode, report missing in-place helpers. These kinds of reports are to become more universal and are aimed at recognizing missed optimization chances in Nuitka. This feature is still in its infancy. Subsequent releases will add more like these.

Organizational

  • Disabled comments on the web site, we are going to use Twitter instead, once the site is migrated to an updated Nikola.
  • The static C code is now formatted with clang-format to make it easier for contributors to understand.
  • Moved the construct runner to top level binary and use it from there, with future changes coming that should make it generally useful outside of Nuitka.
  • Enhanced the issue template to tell people how to get the develop version of Nuitka to try it out.
  • Added documentation to the User Manual on how to use object caching on Windows.
  • Removed the included GUI, originally intended for debugging, but XML outputs are more powerful anyway, and it had been in disrepair for a long time.
  • Removed long deprecated options, e.g. --exe, which has long been the default and is no longer accepted.
  • Renamed options to include plugin files to --include-plugin-directory and --include-plugin-files for more clarity.
  • Renamed options for recursion control to e.g. --follow-imports to better express what they actually do.
  • Removed --python-version support for switching the version during compilation. This has only worked for very specific circumstances and has been deprecated for a while.
  • Removed --code-gen-no-statement-lines support for not having line numbers updated at run time. This has long been hidden and probably would never gain all that much, while causing a lot of incompatibility.

Cleanups

  • Moved command line arguments to dedicated module, adding checks was becoming too difficult.
  • Moved rich comparison helpers to a dedicated C file.
  • Dedicated binary and unary node bases for clearer distinction and more efficient memory usage of unary nodes. Unary operations also no longer have in-place operation as an issue.
  • Major cleanup of variable accesses, split up into multiple phases and all including module variables being performed through C types, with no special cases anymore.
  • Partial cleanups of C type classes with code duplications, there is much more to resolve though.
  • Windows: The way exec was performed is discouraged in the subprocess documentation, so use a variant that cannot block instead.
  • Code providing information about built-in names and values used constructs that were not very portable, and is now written in a way that PyPy also likes.

Tests

  • Avoid using 2to3 for the basic operators test, removing tests of some Python2-only constructs that are covered elsewhere.
  • Added ability to cache output of CPython when comparing to it. This is to allow CI tests to not execute the same code over and over, just to get the same value to compare with. This is not enabled yet.

Summary

This release marks a point from which performance improvements are likely in every coming release. The C target types are a major milestone. More C target types are in the works, e.g. void is coming for expressions whose results are not used; that is scheduled for the next release.

Although there will be a need to also adapt optimization to take full advantage of it, progress should be quick from here. There is a lot of ground to cover, with more C types to come, and all of them needing specialized helpers. But as soon as e.g. int and str are covered, many more programs are going to benefit from this.

NumFOCUS: Inaugural NumFOCUS Awards and New Contributor Recognition

Codementor: Python Expressions | Values

Understand how Python evaluates an expression.

Gocept Weblog: Saltlabs Sprint: last minute information


Earl Zope is now nearly settled down in Python 3 wonderland. At the Zope and Plone sprint from Monday, 1st until Friday, 5th of October 2018 in Halle (Saale), Germany, we will work towards the final Zope 4 release, aka the final permission for the Python 3 wonderland.

There are currently 33 participants registered for the sprint, so be prepared for a huge sprint with many interesting people. The Saltlabs have a café (called KOFFIJ) we can use, a big meeting room with a big display (aka the Thronsaal) and many smaller rooms, including the offices of gocept. So there will be enough room to work in bigger and smaller groups.

To keep the organisational overhead low with this number of participants, we plan to split into two teams: Zope and Plone. The teams will organise themselves individually, and we will have a short daily meeting after lunch to share the status with the other team in a condensed manner. Direct communication in case of a difficult problem is, of course, always possible.

We reserved up to one hour after the daily meeting for talks and presentations by you about interesting topics around Zope and Plone, successful migration stories, or something else you want to share with the community. So if you have some interesting slides, please bring them with you and register during the week for a slot.

Our current schedule:

  • Sunday
  • Monday
    • 9:00 Breakfast at KOFFIJ (This is the café in the ground floor of Saltlabs aka the window to the left on the picture above.)
    • 10:00 Welcome at KOFFIJ and start sprinting afterwards
    • 12:30 Lunch
    • 13:30 Sprint planning and introduction for all sprinters at Thronsaal
    • between 15:00 and 17:00 coffee break at KOFFIJ
    • 18:00 Lights out
  • All other days:
    • 8:30 Breakfast
    • 9:00 Standup in the team (Zope, Plone)
    • 12:30 Lunch
    • 13:30 Daily meeting at Thronsaal
    • 14:00 (Lightning) Talks at Thronsaal
    • between 15:00 and 17:00 coffee break at KOFFIJ
    • 18:00 Lights out
  • Tuesday:
    • 11:00 till 17:00 Massages, there will be a list to register on Monday
    • 19:00 social evening at Eigenbaukombinat (local hacker space) with pizza, beer and mate
  • Friday:
    • 13:30 Closing meeting with presentations at Thronsaal
    • 17:00 Lights out

If you cannot make it to the Welcome meeting, ask at KOFFIJ for one of the gocept staff to get a personal introduction.

Parking: As Saltlabs is located in a pedestrian zone, the availability of parking spots is rather low. Please use one of the parking decks nearby.

As an organizational tool to coordinate the work, we will try to use GitHub projects this time, as it allows cross-repository tracking of issues.

One last hint: The location of the sprint is Leipziger Str. 70, Halle (Saale), Germany.

Test and Code: 47: Automation Panda - Andy Knight


Interview with Andy Knight, the Automation Panda.

  • Selenium & WebDriver
  • Headless Chrome
  • Gherkin
  • BDD
  • Given When Then
  • pytest-bdd
  • PyCharm
  • Writing Good Gherkin
  • Overhead of Gherkin and if it's worth it
  • When to use pytest vs pytest-bdd
  • The art of test automation

Special Guest: Andy Knight.

Sponsored By:

  • PyCharm Professional (http://testandcode.com/pycharm): We have a special offer for you: any time before December 1, you can get an Individual PyCharm Professional 4-month subscription for free! If you value your time, you owe it to yourself to try PyCharm.

Support Test and Code: https://www.patreon.com/testpodcast

Links:

  • Automation Panda | A blog for software development and testing: https://automationpanda.com/
  • Karate REST API test framework: https://github.com/intuit/karate
  • BDD | Automation Panda: https://automationpanda.com/bdd/
  • Testing | Automation Panda: https://automationpanda.com/testing/
  • The pytest Book: https://amzn.to/2IqkJZO

Robin Wilson: PyCon UK 2018: My talk on xarray


Last week I attended PyCon UK 2018 in Cardiff, and had a great time. I’m going to write a few posts about this conference – and this first one is focused on my talk.

I spoke in the ‘PyData’ track, with a talk entitled XArray: the power of pandas for multidimensional arrays. PyCon UK always do a great job of getting the videos up online very quickly, so you can watch the video of my talk below:

The slides for my talk are available here and a Github repository with the notebook which was used to create the slides here.

I think the talk went fairly well, although I found my positioning a bit awkward as I was trying to keep out of the way of the projector, while also being in range of the microphone, and trying to use my pointer to point out specific parts of the screen.

Feedback was generally good, with some useful questions afterwards and a number of positive comments from people throughout the rest of the conference. One person emailed me to say that my talk was “the highlight of the conference” for him – which was very pleasing. My tweet with a link to the video of my talk also got a number of retweets, including from the PyData and NumFOCUS accounts, which got it quite a few views.

In the interests of full transparency, I have posted online the full talk proposal that I submitted, as this may be helpful to others trying to come up with PyCon talk proposals.

Next up in my PyCon UK series of posts: a general review of the conference.


Djangostars: What is Docker and How to Use it With Python (Tutorial)


This is an introductory tutorial on Docker containers. By the end of this article, you will know how to use Docker on your local machine. Along with Python, we are going to run Nginx and Redis containers. Those examples assume that you are familiar with the basic concepts of those technologies. There will be lots of shell examples, so go ahead and open the terminal.


What is Docker?

Docker is an open-source tool that automates the deployment of an application inside a software container. The easiest way to grasp the idea behind Docker is to compare it to, well... standard shipping containers.

Back in the day, transportation companies faced the following challenges:

  • How to transport different (incompatible) types of goods side by side (like food and chemicals, or glass and bricks).
  • How to handle packages of various sizes using the same vehicle.

After the introduction of containers, bricks could be put over glass, and chemicals could be stored next to food. Cargo of various sizes can be put inside a standardized container and loaded/unloaded by the same vehicle.

Let’s go back to containers in software development.

When you develop an application, you need to provide your code along with all possible dependencies like libraries, the web server, databases, etc. You may end up in a situation where the application works on your computer but won’t even start on the staging server, or on the dev or QA machine.

This challenge can be addressed by isolating the app to make it independent of the system.



How does this differ from virtualization?

Traditionally, virtual machines were used to avoid this unexpected behavior. The main problem with VMs is that the “extra OS” on top of the host operating system adds gigabytes of space to the project. Most of the time your server will host several VMs that take up even more space. And by the way, at the moment, most cloud-based server providers will charge you for that extra space. Another significant drawback of VMs is slow boot times.

Docker eliminates all the above by simply sharing the OS kernel across all the containers running as separate processes of the host OS.


Keep in mind that Docker is not the first and not the only containerization platform. However, at the moment Docker is the biggest and the most powerful player on the market.



Why do we need Docker?

The short list of benefits includes:

  • Faster development process
  • Handy application encapsulation
  • The same behaviour on local machine / dev / staging / production servers
  • Easy and clear monitoring
  • Easy to scale

Faster development process

There is no need to install 3rd-party apps like PostgreSQL, Redis, and Elasticsearch on the system – you can run them in containers. Docker also gives you the ability to run different versions of the same application simultaneously. For example, say you need to do some manual data migration from an older version of Postgres to a newer version. You can run into this situation in a microservice architecture when you want to create a new microservice that uses a newer version of the 3rd-party software.

It could be quite complex to keep two different versions of the same app on one host OS. In this case, Docker containers could be a perfect solution – you receive isolated environments for your applications and 3rd-parties.

Handy application encapsulation

You can deliver your application in one piece. Most programming languages, frameworks and all operating systems have their own packaging managers. And even if your application can be packed with its native package manager, it could be hard to create a port for another system.

Docker gives you a unified image format to distribute your applications across different host systems and cloud services. You can deliver your application in one piece with all the required dependencies (included in an image) ready to run.

Same behaviour on local machine / dev / staging / production servers

Docker can’t guarantee 100% dev / staging / production parity, because there is always the human factor. But it reduces to almost zero the probability of errors caused by different versions of operating systems, system dependencies, etc.

With the right approach to building Docker images, your application will use the same base image with the same OS version and the required dependencies.

Easy and clear monitoring

Out of the box, you have a unified way to read log files from all running containers. You don't need to remember all the specific paths where your app and its dependencies store log files and write custom hooks to handle this.
You can integrate an external logging driver and monitor your app log files in one place.

Easy to scale

A correctly wrapped application will cover most of the Twelve Factors. By design, Docker forces you to follow its core principles, such as configuration via environment variables, communication over TCP/UDP ports, etc. And if you’ve done your application right, it will be ready for scaling not only in Docker.
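
For example, twelve-factor-style configuration read from environment variables could look like this in a Python app (the setting names and defaults below are purely illustrative):

import os

# Configuration comes from the environment, not from files baked into the image.
# The names and defaults here are only an illustration.
DATABASE_URL = os.environ.get('DATABASE_URL', 'postgres://localhost:5432/app')
REDIS_HOST = os.environ.get('REDIS_HOST', 'localhost')
DEBUG = os.environ.get('DEBUG', 'false').lower() == 'true'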

Supported platforms

Docker’s native platform is Linux, as it’s based on features provided by the Linux kernel. However, you can still run it on macOS and Windows. The only difference is that on macOS and Windows, Docker is encapsulated into a tiny virtual machine. At the moment, Docker for macOS and Windows has reached a significant level of usability and feels more like a native app.

Installation

You can check out the installation instructions for Docker here.
If you're running Docker on Linux, you need to run all the following commands as root or add your user to the docker group and re-login:

sudo usermod -aG docker $(whoami)

Terminology

  • Container – a running instance that encapsulates required software. Containers are always created from images. A container can expose ports and volumes to interact with other containers or/and the outer world. Containers can be easily killed / removed and re-created again in a very short time. Containers don't keep state.
  • Image – the basic element for every container. When you create an image, every step is cached and can be reused (Copy On Write model). Depending on the image, it can take some time to build. Containers, on the other hand, can be started from images right away.
  • Port – a TCP/UDP port in its original meaning. To keep things simple, let’s assume that ports can be exposed to the outer world (accessible from the host OS) or connected to other containers – i.e., accessible only from those containers and invisible to the outer world.
  • Volume – can be described as a shared folder. Volumes are initialized when a container is created. Volumes are designed to persist data, independent of the container’s lifecycle.
  • Registry – the server that stores Docker images. It can be compared to GitHub – you can pull an image from the registry to deploy it locally, and push locally built images to the registry.
  • Docker Hub – a registry with a web interface provided by Docker Inc. It stores a lot of Docker images with different software. Docker Hub is a source of the "official" Docker images made by the Docker team or in cooperation with the original software manufacturer (it doesn't necessarily mean that these "original" images are from official software manufacturers). Official images list their potential vulnerabilities. This information is available to any logged-in user. There are both free and paid accounts available. You can have one private image per account and an infinite amount of public images for free.
  • Docker Store – a service very similar to Docker Hub. It's a marketplace with ratings, reviews, etc. My personal opinion is that it's marketing stuff. I'm totally happy with Docker Hub.


Example 1: hello world

It's time to run your first container:

docker run ubuntu /bin/echo 'Hello world'  

Console output:

Unable to find image 'ubuntu:latest' locally  
latest: Pulling from library/ubuntu  
6b98dfc16071: Pull complete  
4001a1209541: Pull complete  
6319fc68c576: Pull complete  
b24603670dc3: Pull complete  
97f170c87c6f: Pull complete  
Digest:sha256:5f4bdc3467537cbbe563e80db2c3ec95d548a9145d64453b06939c4592d67b6d  
Status: Downloaded newer image for ubuntu:latest  
Hello world  
  • docker run is a command to run a container.
  • ubuntu is the image you run. For example, the Ubuntu operating system image. When you specify an image, Docker looks first for the image on your Docker host. If the image does not exist locally, then the image is pulled from the public image registry – Docker Hub.
  • /bin/echo 'Hello world' is the command that will run inside a new container. This container simply prints “Hello world” and stops the execution.

Let’s try to create an interactive shell inside a Docker container:

docker run -i -t --rm ubuntu /bin/bash  
  • -t flag assigns a pseudo-tty or terminal inside the new container.
  • -i flag allows you to make an interactive connection by grabbing the standard input (STDIN) of the container.
  • --rm flag automatically removes the container when the process exits. By default, containers are not deleted. This container exists as long as we keep the shell session open and terminates when we exit the session (like an SSH session with a remote server).

If you want to keep the container running after the end of the session, you need to daemonize it:

docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
  • --name daemon assigns the name daemon to the new container. If you don’t specify a name explicitly, Docker will generate and assign one automatically.
  • -d flag runs the container in the background (i.e., daemonizes it).

Let’s see what containers we have at the moment:

docker ps -a  

Console output:

CONTAINER ID  IMAGE   COMMAND                 CREATED             STATUS                         PORTS  NAMES  
1fc8cee64ec2  ubuntu  "/bin/sh -c 'while..."  32 seconds ago      Up 30 seconds                         daemon  
c006f1a02edf  ubuntu  "/bin/echo 'Hello ..."  About a minute ago  Exited (0) About a minute ago         gifted_nobel  
  • docker ps is a command to list containers.
  • -a shows all containers (without -a flag ps will show only running containers).

The ps shows us that we have two containers:

  • gifted_nobel (the name for this container was generated automatically – it will be different on your machine). It's the first container we created, the one that printed 'Hello world' once.
  • daemon– the third container we created, which runs as a daemon.

Note: there is no second container (the one with interactive shell) because we set the --rm option. As a result, this container is automatically deleted right after execution.

Let’s check the logs and see what the daemon container is doing right now:

docker logs -f daemon  

Console output:

...
hello world  
hello world  
hello world  
  • docker logs fetches the logs of a container.
  • -f flag follows the log output (it works like tail -f).

Now let’s stop the daemon container:

docker stop daemon  

Make sure the container has stopped.

docker ps -a  

Console output:

CONTAINER ID  IMAGE   COMMAND                 CREATED        STATUS                      PORTS  NAMES  
1fc8cee64ec2  ubuntu  "/bin/sh -c 'while..."  5 minutes ago  Exited (137) 5 seconds ago         daemon  
c006f1a02edf  ubuntu  "/bin/echo 'Hello ..."  6 minutes ago  Exited (0) 6 minutes ago           gifted_nobel  

The container is stopped. We can start it again:

docker start daemon  

Let’s ensure that it’s running:

docker ps -a  

Console output:

CONTAINER ID  IMAGE   COMMAND                 CREATED        STATUS                    PORTS  NAMES  
1fc8cee64ec2  ubuntu  "/bin/sh -c 'while..."  5 minutes ago  Up 3 seconds                     daemon  
c006f1a02edf  ubuntu  "/bin/echo 'Hello ..."  6 minutes ago  Exited (0) 7 minutes ago         gifted_nobel  

Now, stop it again and remove all the containers manually:

docker stop daemon  
docker rm <your first container name>  
docker rm daemon  

To remove all containers, we can use the following command:

docker rm -f $(docker ps -aq)  
  • docker rm is the command to remove the container.
  • -f flag (for rm) stops the container if it's running (i.e., force deletion).
  • -q flag (for ps) is to print only container IDs.
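
If you prefer to drive Docker from Python rather than from the shell, the Docker SDK for Python (the docker package on PyPI, installed with pip install docker) exposes the same operations. Here is a rough, illustrative equivalent of the commands above (not part of the original example):

import docker

client = docker.from_env()  # connect to the local Docker daemon

# docker run --name daemon -d ubuntu /bin/sh -c "while true; do echo hello world; sleep 1; done"
container = client.containers.run(
    "ubuntu",
    ["/bin/sh", "-c", "while true; do echo hello world; sleep 1; done"],
    name="daemon",
    detach=True,
)

print(client.containers.list(all=True))  # docker ps -a
print(container.logs(tail=5))            # docker logs daemon
container.stop()                         # docker stop daemon
container.remove()                       # docker rm daemon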

Example 2: Environment variables and volumes

Starting from this example, you'll need several additional files you can find on my GitHub repo. You can clone my repo or simply use the following link to download the sample files.

It’s time to create and run a more meaningful container, like Nginx.
Change the directory to examples/nginx:

docker run -d --name "test-nginx" -p 8080:80 -v $(pwd):/usr/share/nginx/html:ro nginx:latest  

Warning: This command looks quite heavy, but it's just an example to explain volumes and env variables. In 99% of real-life cases, you won't start Docker containers manually – you'll use orchestration services (we'll cover docker-compose in example #4) or write a custom script to do it.

Console output:

Unable to find image 'nginx:latest' locally  
latest: Pulling from library/nginx  
683abbb4ea60: Pull complete  
a470862432e2: Pull complete  
977375e58a31: Pull complete  
Digest: sha256:a65beb8c90a08b22a9ff6a219c2f363e16c477b6d610da28fe9cba37c2c3a2ac  
Status: Downloaded newer image for nginx:latest  
afa095a8b81960241ee92ecb9aa689f78d201cff2469895674cec2c2acdcc61c  
  • -p is a ports mapping HOST PORT:CONTAINER PORT.
  • -v is a volume mounting HOST DIRECTORY:CONTAINER DIRECTORY.

Important: the run command accepts only absolute paths for volume mounts. In our example, we've used $(pwd) to get the absolute path of the current directory. Now check this URL in your web browser: 127.0.0.1:8080.

We can try to change examples/nginx/index.html (which is mounted as a volume to the /usr/share/nginx/html directory inside the container) and refresh the page.

Let’s get the information about the test-nginx container:

docker inspect test-nginx  

This command displays detailed low-level information about the test-nginx container in JSON format. It includes the network settings, mounted volumes, environment variables, exposed ports, etc.
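
The same container can be started and inspected from Python with the Docker SDK. The sketch below only illustrates the API and is not part of the original example:

import os
import docker

client = docker.from_env()

# Rough equivalent of:
# docker run -d --name "test-nginx" -p 8080:80 -v $(pwd):/usr/share/nginx/html:ro nginx:latest
nginx = client.containers.run(
    "nginx:latest",
    name="test-nginx",
    detach=True,
    ports={"80/tcp": 8080},  # HOST PORT 8080 -> CONTAINER PORT 80
    volumes={os.getcwd(): {"bind": "/usr/share/nginx/html", "mode": "ro"}},
)

# Rough equivalent of `docker inspect test-nginx`: the raw data is in .attrs
print(nginx.attrs["NetworkSettings"]["Ports"])
print(nginx.attrs["Mounts"])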

Example 3: Writing your first Dockerfile

To build a Docker image, you need to create a Dockerfile. It is a plain text file with instructions and arguments. Here is the description of the instructions we’re going to use in our next example:

  • FROM -- set base image
  • RUN -- execute command in container
  • ENV -- set environment variable
  • WORKDIR -- set working directory
  • VOLUME -- create mount-point for a volume
  • CMD -- set executable for container

You can check Dockerfile reference for more details.

Let’s create an image that will fetch the contents of a website with curl and store it in a text file. We need to pass the website URL via the environment variable SITE_URL. The resulting file will be placed in a directory mounted as a volume.

Place a file named Dockerfile in the examples/curl directory with the following contents:

FROM ubuntu:latest  
RUN apt-get update \  
    && apt-get install --no-install-recommends --no-install-suggests -y curl \
    && rm -rf /var/lib/apt/lists/*
ENV SITE_URL http://example.com/  
WORKDIR /data  
VOLUME /data  
CMD sh -c "curl -Lk $SITE_URL > /data/results"

Dockerfile is ready. It’s time to build the actual image.

Go to examples/curl directory and execute the following command to build an image:

docker build . -t test-curl  

Console output:

Sending build context to Docker daemon  3.584kB  
Step 1/6 : FROM ubuntu:latest  
 ---> 113a43faa138
Step 2/6 : RUN apt-get update     && apt-get install --no-install-recommends --no-install-suggests -y curl     && rm -rf /var/lib/apt/lists/*  
 ---> Running in ccc047efe3c7
Get:1 http://archive.ubuntu.com/ubuntu bionic InRelease [242 kB]  
Get:2 http://security.ubuntu.com/ubuntu bionic-security InRelease [83.2 kB]  
...
Removing intermediate container ccc047efe3c7  
 ---> 8d10d8dd4e2d
Step 3/6 : ENV SITE_URL http://example.com/  
 ---> Running in 7688364ef33f
Removing intermediate container 7688364ef33f  
 ---> c71f04bdf39d
Step 4/6 : WORKDIR /data  
Removing intermediate container 96b1b6817779  
 ---> 1ee38cca19a5
Step 5/6 : VOLUME /data  
 ---> Running in ce2c3f68dbbb
Removing intermediate container ce2c3f68dbbb  
 ---> f499e78756be
Step 6/6 : CMD sh -c "curl -Lk $SITE_URL > /data/results"  
 ---> Running in 834589c1ac03
Removing intermediate container 834589c1ac03  
 ---> 4b79e12b5c1d
Successfully built 4b79e12b5c1d  
Successfully tagged test-curl:latest  
  • docker build command builds a new image locally.
  • -t flag sets the name tag to an image.

Now we have the new image, and we can see it in the list of existing images:

docker images  

Console output:

REPOSITORY  TAG     IMAGE ID      CREATED         SIZE  
test-curl   latest  5ebb2a65d771  37 minutes ago  180 MB  
nginx       latest  6b914bbcb89e  7 days ago      182 MB  
ubuntu      latest  0ef2e08ed3fa  8 days ago      130 MB  

We can create and run the container from the image. Let’s try it with the default parameters:

docker run --rm -v $(pwd)/vol:/data/:rw test-curl  

To see the results saved to file, run:

cat ./vol/results  

Let’s try with Facebook.com:

docker run --rm -e SITE_URL=https://facebook.com/ -v $(pwd)/vol:/data/:rw test-curl  

To see the results saved to file, run:

cat ./vol/results  

Best practices for creating images

  • Include only the necessary context – use a .dockerignore file (like .gitignore in git; a sample is shown after this list).
  • Avoid installing unnecessary packages – they will consume extra disk space.
  • Use cache. Add context that changes a lot (for example, the source code of your project) at the end of Dockerfile – it will utilize Docker cache effectively.
  • Be careful with volumes. You should remember what data is in volumes. Because volumes are persistent and don’t die with the containers, the next container will use data from the volume created by the previous container.
  • Use environment variables (in RUN, EXPOSE, VOLUME). It will make your Dockerfile more flexible.
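
For instance, a minimal .dockerignore for a typical Python project might look like this (the entries are only an illustration):

.git
.dockerignore
__pycache__/
*.pyc
.env
docs/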

Alpine images

A lot of Docker images (versions of images) are created on top of Alpine Linux – a lightweight distribution that allows you to reduce the overall size of Docker images.

I recommend that you use images based on Alpine for third-party services, such as Redis, Postgres, etc. For your app images, use images based on buildpack – it will be easy to debug inside the container, and you'll have a lot of pre-installed system-wide requirements.

Only you can decide which base image to use, but you can get the maximum benefit by using one basic image for all images, because in this case the cache will be used more effectively.

Example 4: Connection between containers

Docker Compose is a CLI utility used to connect containers with each other. You can install docker-compose via pip:

sudo pip install docker-compose  

In this example, I am going to connect Python and Redis containers.

version: '3.6'  
services:  
  app:
    build:
      context: ./app
    depends_on:
      - redis
    environment:
      - REDIS_HOST=redis
    ports:
      - "5000:5000"
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:  
  redis_data:

Go to examples/compose and execute the following command:

docker-compose up  

Console output:

Building app  
Step 1/9 : FROM python:3.6.3  
3.6.3: Pulling from library/python  
f49cf87b52c1: Pull complete  
7b491c575b06: Pull complete  
b313b08bab3b: Pull complete  
51d6678c3f0e: Pull complete  
09f35bd58db2: Pull complete  
1bda3d37eead: Pull complete  
9f47966d4de2: Pull complete  
9fd775bfe531: Pull complete  
Digest: sha256:cdef88d8625cf50ca705b7abfe99e8eb33b889652a9389b017eb46a6d2f1aaf3  
Status: Downloaded newer image for python:3.6.3  
 ---> a8f7167de312
Step 2/9 : ENV BIND_PORT 5000  
 ---> Running in 3b6fe5ca226d
Removing intermediate container 3b6fe5ca226d  
 ---> 0b84340fa920
Step 3/9 : ENV REDIS_HOST localhost  
 ---> Running in a4f9a1d6f541
Removing intermediate container a4f9a1d6f541  
 ---> ebe63bf5959e
Step 4/9 : ENV REDIS_PORT 6379  
 ---> Running in fd06aa65fd33
Removing intermediate container fd06aa65fd33  
 ---> 2a581c31ff4f
Step 5/9 : COPY ./requirements.txt /requirements.txt  
 ---> 671093a12829
Step 6/9 : RUN pip install -r /requirements.txt  
 ---> Running in b8ea53bc6ba6
Collecting flask==1.0.2 (from -r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/7f/e7/08578774ed4536d3242b14dacb4696386634607af824ea997202cd0edb4b/Flask-1.0.2-py2.py3-none-any.whl (91kB)
Collecting redis==2.10.6 (from -r /requirements.txt (line 2))  
  Downloading https://files.pythonhosted.org/packages/3b/f6/7a76333cf0b9251ecf49efff635015171843d9b977e4ffcf59f9c4428052/redis-2.10.6-py2.py3-none-any.whl (64kB)
Collecting click>=5.1 (from flask==1.0.2->-r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/34/c1/8806f99713ddb993c5366c362b2f908f18269f8d792aff1abfd700775a77/click-6.7-py2.py3-none-any.whl (71kB)
Collecting Jinja2>=2.10 (from flask==1.0.2->-r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/7f/ff/ae64bacdfc95f27a016a7bed8e8686763ba4d277a78ca76f32659220a731/Jinja2-2.10-py2.py3-none-any.whl (126kB)
Collecting itsdangerous>=0.24 (from flask==1.0.2->-r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/dc/b4/a60bcdba945c00f6d608d8975131ab3f25b22f2bcfe1dab221165194b2d4/itsdangerous-0.24.tar.gz (46kB)
Collecting Werkzeug>=0.14 (from flask==1.0.2->-r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/20/c4/12e3e56473e52375aa29c4764e70d1b8f3efa6682bef8d0aae04fe335243/Werkzeug-0.14.1-py2.py3-none-any.whl (322kB)
Collecting MarkupSafe>=0.23 (from Jinja2>=2.10->flask==1.0.2->-r /requirements.txt (line 1))  
  Downloading https://files.pythonhosted.org/packages/4d/de/32d741db316d8fdb7680822dd37001ef7a448255de9699ab4bfcbdf4172b/MarkupSafe-1.0.tar.gz
Building wheels for collected packages: itsdangerous, MarkupSafe  
  Running setup.py bdist_wheel for itsdangerous: started
  Running setup.py bdist_wheel for itsdangerous: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/2c/4a/61/5599631c1554768c6290b08c02c72d7317910374ca602ff1e5
  Running setup.py bdist_wheel for MarkupSafe: started
  Running setup.py bdist_wheel for MarkupSafe: finished with status 'done'
  Stored in directory: /root/.cache/pip/wheels/33/56/20/ebe49a5c612fffe1c5a632146b16596f9e64676768661e4e46
Successfully built itsdangerous MarkupSafe  
Installing collected packages: click, MarkupSafe, Jinja2, itsdangerous, Werkzeug, flask, redis  
Successfully installed Jinja2-2.10 MarkupSafe-1.0 Werkzeug-0.14.1 click-6.7 flask-1.0.2 itsdangerous-0.24 redis-2.10.6  
You are using pip version 9.0.1, however version 10.0.1 is available.  
You should consider upgrading via the 'pip install --upgrade pip' command.  
Removing intermediate container b8ea53bc6ba6  
 ---> 3117d3927951
Step 7/9 : COPY ./app.py /app.py  
 ---> 84a82fa91773
Step 8/9 : EXPOSE $BIND_PORT  
 ---> Running in 8e259617b7b5
Removing intermediate container 8e259617b7b5  
 ---> 55f447f498dd
Step 9/9 : CMD [ "python", "/app.py" ]  
 ---> Running in 2ade293ecb25
Removing intermediate container 2ade293ecb25  
 ---> b85b4246e9f8

Successfully built b85b4246e9f8  
Successfully tagged compose_app:latest  
WARNING: Image for service app was built because it did not already exist. To rebuild this image you must use `docker-compose build` or `docker-compose up --build`.  
Creating compose_redis_1 ... done  
Creating compose_app_1   ... done  
Attaching to compose_redis_1, compose_app_1  
redis_1  | 1:C 08 Jul 18:12:21.851 # Warning: no config file specified, using the default config. In order to specify a config file use redis-server /path/to/redis.conf  
redis_1  |                 _._  
redis_1  |            _.-``__ ''-._  
redis_1  |       _.-``    `.  `_.  ''-._           Redis 3.2.12 (00000000/0) 64 bit  
redis_1  |   .-`` .-```.  ```\/    _.,_ ''-._  
redis_1  |  (    '      ,       .-`  | `,    )     Running in standalone mode  
redis_1  |  |`-._`-...-` __...-.``-._|'` _.-'|     Port: 6379  
redis_1  |  |    `-._   `._    /     _.-'    |     PID: 1  
redis_1  |   `-._    `-._  `-./  _.-'    _.-'  
redis_1  |  |`-._`-._    `-.__.-'    _.-'_.-'|  
redis_1  |  |    `-._`-._        _.-'_.-'    |           http://redis.io  
redis_1  |   `-._    `-._`-.__.-'_.-'    _.-'  
redis_1  |  |`-._`-._    `-.__.-'    _.-'_.-'|  
redis_1  |  |    `-._`-._        _.-'_.-'    |  
redis_1  |   `-._    `-._`-.__.-'_.-'    _.-'  
redis_1  |       `-._    `-.__.-'    _.-'  
redis_1  |           `-._        _.-'  
redis_1  |               `-.__.-'  
redis_1  |  
redis_1  | 1:M 08 Jul 18:12:21.852 # WARNING: The TCP backlog setting of 511 cannot be enforced because /proc/sys/net/core/somaxconn is set to the lower value of 128.  
redis_1  | 1:M 08 Jul 18:12:21.852 # Server started, Redis version 3.2.12  
redis_1  | 1:M 08 Jul 18:12:21.852 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.  
redis_1  | 1:M 08 Jul 18:12:21.852 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.  
redis_1  | 1:M 08 Jul 18:12:21.852 * The server is now ready to accept connections on port 6379  
app_1    |  * Serving Flask app "app" (lazy loading)  
app_1    |  * Environment: production  
app_1    |    WARNING: Do not use the development server in a production environment.  
app_1    |    Use a production WSGI server instead.  
app_1    |  * Debug mode: on  
app_1    |  * Running on http://0.0.0.0:5000/ (Press CTRL+C to quit)  
app_1    |  * Restarting with stat  
app_1    |  * Debugger is active!  
app_1    |  * Debugger PIN: 170-528-240  

The current example increments a view counter in Redis. Open the address 127.0.0.1:5000 in your web browser and check it.
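
For reference, here is a minimal sketch of what the Flask application behind this compose file could look like. The actual app.py lives in the example repository; this version only illustrates the idea:

import os

from flask import Flask
from redis import StrictRedis

app = Flask(__name__)
redis = StrictRedis(host=os.environ.get('REDIS_HOST', 'localhost'),
                    port=int(os.environ.get('REDIS_PORT', 6379)))


@app.route('/')
def index():
    # INCR is atomic, so concurrent requests are counted correctly
    hits = redis.incr('hits')
    return f'This page has been viewed {hits} times\n'


if __name__ == '__main__':
    app.run(host='0.0.0.0',
            port=int(os.environ.get('BIND_PORT', 5000)),
            debug=True)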

How to use docker-compose is a topic for a separate tutorial. To get started, you can play with some images from Docker Hub. If you want to create your own images, follow the best practices listed above. The only thing I can add in terms of using docker-compose is that you should always give explicit names to your volumes in docker-compose.yml (if the image has volumes). This simple rule will save you from an issue in the future when you inspect your volumes.

version: '3.6'  
services:  
  ...
  redis:
    image: redis:3.2-alpine
    volumes:
      - redis_data:/data
volumes:  
  redis_data:

In this case, redis_data will be the name inside the docker-compose.yml file; the real volume name will be prefixed with the project name.

To see volumes, run:

docker volume ls  

Console output:

DRIVER              VOLUME NAME  
local               apptest_redis_data  

Without an explicit volume name, there will be a UUID. Here’s an example from my local machine:

DRIVER              VOLUME NAME  
local               ec1a5ac0a2106963c2129151b27cb032ea5bb7c4bd6fe94d9dd22d3e72b2a41b  
local               f3a664ce353ba24dd43d8f104871594de6024ed847054422bbdd362c5033fc4c  
local               f81a397776458e62022610f38a1bfe50dd388628e2badc3d3a2553bb08a5467f  
local               f84228acbf9c5c06da7be2197db37f2e3da34b7e8277942b10900f77f78c9e64  
local               f9958475a011982b4dc8d8d8209899474ea4ec2c27f68d1a430c94bcc1eb0227  
local               ff14e0e20d70aa57e62db0b813db08577703ff1405b2a90ec88f48eb4cdc7c19  
local               polls_pg_data  
local               polls_public_files  
local               polls_redis_data  
local               projectdev_pg_data  
local               projectdev_redis_data  

Docker way

Docker has some restrictions and requirements, depending on the architecture of your system (applications that you pack into containers). You can ignore these requirements or find some workarounds, but in this case, you won't get all the benefits of using Docker. My strong advice is to follow these recommendations:

  • 1 application = 1 container.
  • Run the process in the foreground (don't use systemd, upstart or any other similar tools).
  • Keep data out of containers – use volumes.
  • Do not use SSH (if you need to step into a container, you can use the docker exec command).
  • Avoid manual configurations (or actions) inside containers.

Conclusion

To summarize this tutorial, alongside the IDE and Git, Docker has become a must-have developer tool. It's a production-ready tool with a rich and mature infrastructure.

Docker can be used on all types of projects, regardless of size and complexity. In the beginning, you can start with compose and Swarm. When the project grows, you can migrate to cloud services like Amazon Container Services or Kubernetes.

Like standard containers used in cargo transportation, wrapping your code in Docker containers will help you build faster and more efficient CI/CD processes. This is not just another technological trend promoted by a bunch of geeks – it's a new paradigm that is already being used in the architecture of large companies like PayPal, Visa, Swisscom, General Electric, Splink, etc.



Python Bytes: #97 Java goes paid

Davy Wybiral: Building a Programmable Laser Turret

Using only a couple of servos, a laser module, some hot glue, and a microcontroller of your choice, you can easily build your own laser turret to annoy your cat.

Catalin George Festila: Python 2.7 : Python geocoding without key.

Today I will show a simple example of geocoding.
I used the json and requests Python modules with Python version 2.7.
For geocoding I use the service provided by datasciencetoolkit.
You can use this service for free and you don't need to register to get a key.
Let's see the Python script:
import requests
import json

url = u'http://www.datasciencetoolkit.org/maps/api/geocode/json'
par = {
    u'sensor': False,
    u'address': u'London'
}

my = requests.get(
    url,
    par
)
json_out = json.loads(my.text)

if json_out['status'] == 'OK':
    print([r['geometry']['location'] for r in json_out['results']])
I ran this script and compared the result with Google Maps to see if it works well.
This is the output, which shows the geocoding service working correctly:

EuroPython Society: EuroPython 2019: RFP for Venues


We are happy to announce that we have started the RFP for venues to host the EuroPython 2019 conference.

We have sent out the details to almost 40 venues.

Unlike last year, we also want to give the chance to other venues who were not on our list to participate in the RFP. For this purpose, we are making the details available in this blog post as well.

RFP Introduction

The EuroPython Society is the organization behind the EuroPython conference, which is the largest Python programming language conference in Europe, with more than 1100 professionals from IT, science and educational fields attending to learn about new developments, network and learn from the experience of others in the field.

Python is a very popular open source programming language, with a large following in the web development and data science fields (see http://python.org/ for details).

EuroPython was initiated in 2002, with the first conference in Charleroi, Belgium, and has since toured Europe for a total of 17 editions so far. For EuroPython 2019 we are looking for a new location and venue and are reaching out to potential venues we have identified to participate in an RFP selection process.

If you’d like to participate in this process, please have a look at the RFP questionnaire, an Excel spreadsheet with a list of questions, and our EuroPython 2017 sponsor brochure with more details about the conference, the demographics and our offerings for sponsors, to give you an idea of what we are looking for.

Please see the first tab in the spreadsheet for a description of the submission process. If you have questions, please write to board@europython.eu.

You can also check these other resources:

Timeline

This is the timeline for the RFP:

First round:

  • Start of RFP process: 2018-09-28
  • Deadline for RFP vendor questions: 2018-10-05
  • Vendor questions answered by: 2018-10-12
  • First round submission deadline: 2018-10-19
  • Second round candidates will be informed by: 2018-10-31

Second round:

  • Second round RFP questions posted: 2018-11-09
  • Deadline for RFP vendor questions: 2018-11-14
  • Vendor questions answered by: 2018-11-16
  • Final submission deadline: 2018-11-21
  • Final candidate will be informed by: 2018-11-30

Many thanks,
– 
EuroPython Society Board
https://www.europython-society.org/


Python Celery - Weekly Celery Tutorials and How-tos: Custom Celery task states


Celery tasks always have a state. If a task finished executing successfully, its state is SUCCESS. If a task execution resulted in an exception, its state is FAILURE. Celery knows six built-in states:

  • PENDING (waiting for execution or unknown task id)

  • STARTED (task has been started)

  • SUCCESS (task executed successfully)

  • FAILURE (task execution resulted in exception)

  • RETRY (task is being retried)

  • REVOKED (task has been revoked)

In case you wonder why you have never come across the STARTED state, it is not reported by default. You have to enable it explicitly via the Celery config, setting task_track_started = True.
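
For example, with the lowercase settings style this is a one-line change (the broker and backend URLs below are just placeholders):

from celery import Celery

app = Celery('worker',
             broker='redis://localhost:6379/0',    # placeholder broker URL
             backend='redis://localhost:6379/0')   # placeholder result backend
app.conf.task_track_started = True                 # report the STARTED state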

The update_state method

The Celery task object provides an update_state method. This method lets you do three things:

  • set the task’s state to one of the built-in states

  • provide additional meta data

  • set the task’s state to any custom state you define.

All you need to define your own state is a unique name. It is just a string and does not need to be registered anywhere. For example, if you have a long running task, you can define a PROGRESS state and publish the progress made via the meta argument (a JSON-serializable dict):

import time
from worker import app


@app.task(bind=True)
def task(self):
    n = 30
    for i in range(0, n):
        self.update_state(state='PROGRESS', meta={'done': i, 'total': n})
        time.sleep(1)

    return n

This task runs for ~30 seconds and sends a task state update every ~1 second, broadcasting a custom PROGRESS state and the number of total and completed iterations. Let’s execute the task asynchronously, wait for the task to finish and capture the state and meta data while it’s still running:

import time
import tasks

task = tasks.task.s().delay()

while not task.ready():
    print(f'State={task.state}, info={task.info}')
    time.sleep(1)

print(f'State={task.state}, info={task.info}')

Which produces something like this:

State=PENDING, info=None
State=PROGRESS, info={'done': 0, 'total': 30}
State=PROGRESS, info={'done': 1, 'total': 30}
State=PROGRESS, info={'done': 2, 'total': 30}
State=PROGRESS, info={'done': 3, 'total': 30}
...
State=SUCCESS, info=29

This is a very simple example. But if we take a closer look, there are a few very interesting learnings:

  • any string can be a custom state
  • a custom state is only temporary and is eventually overridden by a Celery built-in state as soon as the task finishes successfully - or throws an exception, is retried or revoked (the same applies if we use update_state with a built-in state but custom meta data - the custom meta data is ultimately overwritten by Celery)
  • while the task is in a custom state, the meta argument we published via update_state is available as the info property of the AsyncResult object (the object that .delay() returns on the calling side)
  • when the task is in the built-in SUCCESS state, the info property returns the task result (when the task failed, the info property returns the exception type and stacktrace, try it yourself by throwing an exception in the implementation of the task function above)

Built-in state with manual task result handling

Say you want to provide some additional custom data for a failed task. Unfortunately, as we established above, Celery will overwrite the custom meta data, even if we use a built-in state type. Fortunately, there is a way to prevent this: raising a celery.exceptions.Ignore() exception. This means no state will be recorded for the task, but the message is still removed from the queue.

from celery import states
from celery.exceptions import Ignore
from worker import app


@app.task(bind=True)
def task(self):
    try:
        raise ValueError('Some error')
    except Exception as ex:
        self.update_state(state=states.FAILURE, meta={'custom': '...'})
        raise Ignore()

This works… at least, kind of. This time Celery does not overwrite the meta data:

>>> import tasks
>>> task = tasks.task.s().delay()
>>> print(task.backend.get(task.backend.get_key_for_task(task.id)))
b'{"status": "FAILURE", "result": {"custom": "..."}, "traceback": null, "children": [], "task_id": "1df4b70c-1206-41e5-bcd3-786295d21267"}'

But, it turns out that, depending on the built-in task state, Celery expects the corresponding meta data dictionary to be in a particular format. And here, the meta data itself is incompatible with the FAILURE state:

>>> print(task.state)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
  File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 471, in state
    return self._get_task_meta()['status']
  File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 410, in _get_task_meta
    return self._maybe_set_cache(self.backend.get_task_meta(self.id))
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 359, in get_task_meta
    meta = self._get_task_meta_for(task_id)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 674, in _get_task_meta_for
    return self.decode_result(meta)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 278, in decode_result
    return self.meta_from_decoded(self.decode(payload))
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 274, in meta_from_decoded
    meta['result'] = self.exception_to_python(meta['result'])
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 248, in exception_to_python
    from_utf8(exc['exc_type']), __name__)
KeyError: 'exc_type'

We can fix this by adding the exc_type and exc_message keys to our custom meta dictionary, effectively mimicking Celery’s default FAILURE meta structure.

import traceback

from celery import states
from celery.exceptions import Ignore
from worker import app


@app.task(bind=True)
def task(self):
    try:
        raise ValueError('Some error')
    except Exception as ex:
        self.update_state(
            state=states.FAILURE,
            meta={
                # mimic the structure Celery expects for FAILURE meta data
                'exc_type': type(ex).__name__,
                'exc_message': traceback.format_exc().split('\n'),
                'custom': '...'
            })
        raise Ignore()

This time we can get the task’s state and info without Celery throwing an exception, and we also have access to the custom field. Note that to see the custom field we have to retrieve the raw result from the backend via task.backend.get(...), because Celery parses the result dict depending on the task’s state.

>>> import tasks
>>> task = tasks.task.s().delay()
>>> print(task.state)
'FAILURE'
>>> print(task.info)
ValueError('Traceback (most recent call last):', '  File "/app/tasks.py", line 16, in task', "    raise ValueError('Some error')", 'ValueError: Some error', '')
>>> print(task.backend.get(task.backend.get_key_for_task(task.id)))
b'{"status": "FAILURE", "result": {"exc_type": "ValueError", "exc_message": ["Traceback (most recent call last):", "  File \\"/app/tasks.py\\", line 16, in task", "    raise ValueError(\'Some error\')", "ValueError: Some error", ""], "custom": "..."}, "traceback": null, "children": [], "task_id": "d2f60111-aec6-4c58-83a7-24f0edb7ac5f"}'

Custom state

We can use the same Ignore() trick from above to instruct Celery to not overwrite our temporary custom state from the initial example:

from celery.exceptions import Ignore
from worker import app


@app.task(bind=True)
def task(self):
    self.update_state(state='SOME-CUSTOM-STATE', meta={'custom': '...'})
    raise Ignore()

This time, the task remains in our custom state. Also, Celery does not assume any specific meta dict structure:

>>> import tasks
>>> task = tasks.task.s().delay()
>>> print(task.state)
'SOME-CUSTOM-STATE'
>>> print(task.info)
{'custom': '...'}
>>> print(task.result)
{'custom': '...'}

Conclusion

Celery provides a lot of flexibility when it comes to custom task states and custom meta data. Transient custom states in combination with custom meta data can be used to implement task progress trackers. Or, you might have a good reason to implement your own final custom task state, which Celery can equally cater for. You can even enrich the built-in FAILURE task state with additional data. For further information, I encourage you to read the docs and play around with a few code examples.


Codementor: Python Vs R : The Eternal Question for Data Scientists

Python vs R, which is better? This has been an important question for data science students and practitioners. In this blog, we will evaluate both.

pgcli: Release v2.0.0

Pgcli is a command line interface for the Postgres database, with auto-completion and syntax highlighting. You can install this version using:

$ pip install -U pgcli
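
Once installed, you connect by pointing pgcli at a database; for example (hostname, user and database name below are just placeholders):

$ pgcli -h localhost -U postgres mydb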

This pgcli release has only one feature in it: migration to Python Prompt Toolkit 2.0. The migration was non-trivial, because prompt-toolkit was reworked almost from the ground up, and 2.0 is not compatible with 1.x. But now, pgcli can finally use those exciting new features that prompt-toolkit 2.0 has.

Many thanks to Jonathan Slenders and Dick Marinus, who worked hard on the migration!

Things to be aware of

  • mycli did not yet migrate to prompt-toolkit 2.0. Until it does, pgcli and mycli can't be installed into the same venv.
  • Same goes for ipython: it has already migrated, but that version is not yet released. To install ipython into the same venv as pgcli, you'll have to install it from master:
$ pip install git+https://github.com/ipython/ipython.git

As always, we are here to help in case of any issues with the new release:

https://github.com/dbcli/pgcli/issues

REPL|REBL: Dictionary Views & Set Operations — Working with dictionary view objects

The keys, values and items from a dictionary can be accessed using the .keys(), .values() and .items() methods. These methods return view objects which provide a view on the source dictionary.

The view objects dict_keys and dict_items support set-like operations (the latter only when all values are hashable) which can be used to combine and filter dictionary elements.
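
As a quick aside (a small illustrative snippet, not one of the worked examples below), a view is not a copy: it stays in sync with the dictionary it came from.

>>> d = {'key1': 'value1', 'key2': 'value2'}
>>> keys = d.keys()
>>> keys
dict_keys(['key1', 'key2'])
>>> d['key3'] = 'value3'
>>> keys  # the view reflects the change to the underlying dict
dict_keys(['key1', 'key2', 'key3'])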

Keys

Dictionary keys are always hashable, so set operations are always available on the dict_keys view object.

All keys (set union)

To get all keys from multiple dictionaries, you can use the set union.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.keys() | d2.keys()
{'key5', 'key3', 'key2', 'key1'}  # this is a set, not a dict

You can use the same approach to combine dict_items and merge dictionaries.

Keys in common (set intersection)

The keys common to two dictionaries can be determined using set intersection (&).

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.keys() & d2.keys()
{'key3'}  # this is a set, not a dictionary

You could use the resulting set to filter your dictionary using a dictionary comprehension.

>>> keys = d1.keys() & d2.keys()
>>> {k: d1[k] for k in keys}
{'key3': 3}

Unique keys (set difference)

To retrieve keys unique to a given dictionary, you can use set difference (-). Keys from the right hand dict_keys are removed from the left, resulting in a set of the remaining keys.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.keys() - d2.keys()
{'key1', 'key2'}

Unique keys from both (set symmetric difference)

If you want the keys that appear in only one of the two dictionaries, the set symmetric difference (^) returns this. The result contains the keys unique to either the left or the right hand side of the comparison.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.keys() ^ d2.keys()
{'key5', 'key2', 'key1'}

Items

If both the keys and values of a dictionary are hashable, the dict_items view will support set-like operations.

If the values are not hashable, all of these operations will raise a TypeError.
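
For example (an illustrative snippet with list values, which are not hashable):

>>> d1 = {'key1': ['not', 'hashable']}
>>> d2 = {'key1': ['not', 'hashable']}
>>> d1.items() & d2.items()
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unhashable type: 'list'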

Merge (set union)

You can use set union operations to merge dictionaries.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d3 = {'key4': 'value4', 'key6': 'value6'}
>>> d = dict(d1.items() | d2.items() | d3.items())
>>> d
{'key1': 'value1', 'key2': 'value2', 'key3': 'value3-new', 'key5': 'value5', 'key4': 'value4', 'key6': 'value6'}

Since it is quite common for dictionary values to not be hashable, you will probably want to use one of the other approaches for merging dictionaries instead.
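
One such approach (an aside that does not rely on view objects at all) is dictionary unpacking, available since Python 3.5, where later dictionaries win on duplicate keys:

>>> d1 = {'key1': 'value1', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> {**d1, **d2}
{'key1': 'value1', 'key3': 'value3-new', 'key5': 'value5'}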

Common entries (set intersection)

The items common to two dictionaries can be determined using set intersection (&). Both the key and value must match — items are compared as (key, value) tuples.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key1': 'value1', 'key5': 'value5'}
>>> d1.items() & d2.items()
{('key1', 'value1')}  # this is a set, not a dict

Unique entries (set difference)

To retrieve items unique to a given dictionary, you can use set difference (-). Items from the right hand dict_items are removed from the left, resulting in a set of the remaining (key, value) tuples.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.items() - d2.items()
{('key3', 3), ('key2', 'value2'), ('key1', 'value1')}

Unique entries from both (set symmetric difference)

If you want the items that appear in only one of the two dictionaries, the set symmetric difference (^) returns this. The result is the (key, value) tuples unique to either the left or the right hand side of the comparison.

>>> d1 = {'key1': 'value1', 'key2': 'value2', 'key3': 3}
>>> d2 = {'key3': 'value3-new', 'key5': 'value5'}
>>> d1.items() ^ d2.items()
{('key2', 'value2'), ('key5', 'value5'), ('key1', 'value1'), ('key3', 3), ('key3', 'value3-new')}

Weekly Python StackOverflow Report: (cxlv) stackoverflow python report

Nikola: Nikola v8.0.1 is out!

On behalf of the Nikola team, I am pleased to announce the immediate availability of Nikola v8.0.1. Some bugs were fixed; more importantly, we pinned down the Markdown package version due to incompatibilities.

What is Nikola?

Nikola is a static site and blog generator, written in Python. It can use Mako and Jinja2 templates, and takes input in many popular markup formats, such as reStructuredText and Markdown — and can even turn Jupyter Notebooks into blog posts! It also supports image galleries, and is multilingual. Nikola is flexible, and page builds are extremely fast, courtesy of doit (which rebuilds only what has changed).

Find out more at the website: https://getnikola.com/

Downloads

Install using pip install Nikola. (Python 3-only since v8.0.0.)

Changes

  • Nikola is not yet compatible with Markdown 3.x, so this release pins the requirement to 2.x (until 3.x support is done)

Features

  • Make URL displayed by nikola auto and nikola serve clickable (Issue #3166)

Bugfixes

  • Pandoc compiler was passing deleted argument (Issue #3171)
  • Make nikola version --check work again (Issue #3170)
  • Set logging level for reST to warning in order to limit noise
  • Fix docinfo removal for sites that use reST docinfo metadata (Issue #3167)