Channel: Planet Python

Real Python: Python Community Interview With Mike Grouchy


If you saw the last Python Community Interview with Mahdi Yusuf, then you have already met one half of the Pycoder’s Weekly team. This time, I’m joined by Mahdi’s partner in crime, Mike Grouchy.

Mike tells us how Pycoder’s really began and what it means to be a “hoops junkie.” We’ll also learn more about Mike’s secret projects. Let’s get into it.

Ricky: Welcome! Let’s start with how you got into programming and when you started using Python.

Mike Grouchy

Mike: Hi! My first introduction to computers was from my dad when I was probably 8 or 9 years old. He wasn’t computer-savvy by any stretch of the imagination, but he has always been a tinkerer, and when he learned about BBSs and the communities that existed there, he played around with them and showed me how to play games over them on our 2400 baud modem. (Ha!)

At the time, I was just fascinated that you could connect to other computers remotely and play games, write messages, etc. This stoked my interest and got me generally tinkering with computers until, a few years later, when we got dial-up internet through the local university. (My dad was taking a course there through work.)

That was it for me. The internet appealed to me in the same way BBSs did early on. The fact that you could interact with web pages that people from all over the world published on the internet just unlocked my imagination.

So, like lots of people, the first thing I ever did with programming was learning how to put web pages on the internet. There was a logical progression here. As I wanted to make more complicated websites, I delved into writing CGI scripts in Perl, and later in high school I learned PHP and got really into Linux.

This led me naturally to start writing a lot of Python to automate things on my Linux box. That was my first real introduction to Python.

Ricky: People may know you as one half of Pycoder’s Weekly, which has been going now for nearly 7 years. How did the idea for Pycoder’s come about, and did it meet your expectations?

Mike: The idea of Pycoder’s came along pretty naturally. Mahdi and I were co-workers at a startup at the time, and we had toyed around with a bunch of projects outside of work before this.

But, day in and day out, we were writing Python, and we were attending local meetup groups. (I had started a Django group that just got folded into the local Python group.) We were constantly looking for Python resources.

Programming newsletters were getting kind of “hot” at that time, and we couldn’t really find a newsletter out there that we would want to read, so we decided we would try to see if people were interested in that type of thing. We just immediately whipped up a landing page with a form to collect emails, and threw it up on Hacker News. Even though there were a few skeptics in that humble thread, we got ~2000 signups and launched the first issue 2 weeks later.

I would say Pycoder’s has far exceeded my expectations. We have been around for a long time, and it’s been a bit crazy. It certainly helped Mahdi and me grow our Python knowledge. We have also learned a few things about creating and managing communities and building an audience.

The best thing that came from it in all this time has been all the great people we have interacted with over the years. We have met a lot of people through PyCons over the years and had some really fun experiences meeting those people and being an active part of the community.

Ricky: Curating the content for the newsletter must take up quite a bit of your time. Have you come across something that helped you solve a problem or changed how you approach a task, something you might not have come across otherwise?

Mike: It can be a little time consuming, but it’s just something I have built into my daily routine, and I read a lot normally anyway, so it’s not a chore.

In terms of whether it has helped me solve a problem or approach a task differently, I can’t pinpoint one single thing, but I guarantee you it has. One of the great things about reading and accumulating a breadth of knowledge on a topic (or many topics) is that, over the years, you end up seeing so many things that you can recognize something you have already seen and go back to find a novel solution to a problem you may have.

Also, having seen all these things changes your perspective on how you solve all kinds of problems and can really help you hone your approach to problem solving.

Ricky: Okay, it’s time to switch gears and talk about basketball. You’re a self-proclaimed “hoops junkie.” What is it about basketball that fascinates you so much, and have you found a way to combine your love of coding and basketball?

Mike: I do love basketball. (Go Lakers!)

I have been playing since I was a kid, and the game has always been beautiful to me. It’s a game of skill and incredible athleticism, so I have always enjoyed watching. The new age of sports analytics has added a whole new way to look at the game. (I would be embarrassed to tell you how much time I have spent on Basketball Reference.) It’s incredible how deep you can dig into a game of basketball.

Analytics is still new to basketball, unlike in a game like baseball, so people are still just figuring it out, and it’s really exciting. I’d love to attend the MIT Sloan Sports Analytics Conference.

The NBA really has a lot of characters and certainly quite a bit of good sports writing. This inspired me to work on a project a few years ago that attempted to combine a little bit of NLP with Machine Learning (ML) to auto-curate sports news—similar to Techmeme, but without the use of editors. It was called hoopsmachine!

Needless to say, it’s a project that never really made it off the ground, but it’s something I think about sparking up from time to time with all the new and improved ML tools that are available today.

Ricky: Now for my last question: what other hobbies and interests do you have, aside from Python? Any you’d like to share and/or plug?

Mike: I have 2 kids at home (ages 4 and 2) and am VP, Engineering at PageCloud, a software startup building the next generation of website creation tools. So that takes up quite a bit of time!

However, in the little bit of downtime that I have, I read quite a bit, and finding the next book is always a hassle. So I have been working on a little side project that I hope will help me and other people discover the book they should be reading next.

I am also pretty into tinkering with home automation, home labs, hacking on Raspberry Pis, that kind of stuff. (So is Mahdi.) We have a project we are working on for people who are interested in that, too. Neither of these things is public yet, but if they sound interesting, throw me a follow on Twitter, and I will be sure to post about both of them when there is something to show!


Thank you, Mike, for joining me for this week’s interview. To keep up-to-date with Mike’s secret projects, you can follow him on Twitter or look him up on his website. Personally, I’m off to spy on his GitHub for clues…

If there is someone you would like me to interview in the future, reach out to me in the comments below, or send me a message on Twitter.




Continuum Analytics Blog: Anaconda Enterprise 5.2.2: Now With Apache Zeppelin and GPU improvements


Anaconda Enterprise 5.2 introduced exciting features such as GPU-acceleration, scalable machine learning, and cloud-native model management in July. Today we’re releasing Anaconda Enterprise 5.2.2 with a number of enhancements in IDEs (Integrated Development Environments), GPU resource management, source code control, and (of course) bug fixes. One of the biggest new benefits is the addition of Apache Zeppelin …

The post Anaconda Enterprise 5.2.2: Now With Apache Zeppelin and GPU improvements appeared first on Anaconda.

PyCharm: PyCharm 2018.3 EAP 6


You can now get the sixth release in the Early Access Program (EAP) for PyCharm 2018.3. Download it now from our website.

New in This Version

Gitignore File Generation

Gitignore

Have you ever accidentally checked in files in the .idea folder which should have stayed private? PyCharm now helps you by creating a .gitignore file for you, both when you create a git repository from the VCS menu and when you open a project that has a git repository but no ignore file yet.

Docker-Compose Command Customization

Compose Command

Do you have specific needs for your Docker Compose configuration? You can now specify custom parameters in run configurations that use a Docker Compose interpreter.

Further Improvements

  • AT TIME ZONE is now correctly highlighted for Microsoft SQL Server. PyCharm Professional Edition bundles all SQL features from JetBrains DataGrip
  • SCSS and LESS now get their own code style settings. PyCharm Professional Edition has all the web language features from JetBrains WebStorm
  • And more, see the release notes here

Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP, and stay up to date. You can find the installation instructions on our website.

PyCharm 2018.3 is still in development during the EAP phase, so not all new features are available yet. More features will be added in the coming weeks. As PyCharm 2018.3 is pre-release software, it is not as stable as the release versions. Furthermore, we may decide to change and/or drop certain features as the EAP progresses.

All EAP versions will ship with a built-in EAP license, which means that these versions are free to use for 30 days after the day that they are built. As EAPs are released weekly, you’ll be able to use PyCharm Professional Edition EAP for free for the duration of the EAP program, as long as you upgrade at least once every 30 days.

The No Title® Tech Blog: Optimize Images v1.3 – Dynamic by default


In this new release, Optimize Images will, by default, try to determine dynamically the best quality setting for each JPEG image.

Peter Bengtsson: Fancy linkifying of text with Bleach and domain checks (with Python)


Bleach is awesome. Thank you for it @willkg! It's a Python library for sanitizing text as well as "linkifying" text for HTML use. For example, consider this:

>>> import bleach
>>> bleach.linkify("Here is some text with a url.com.")
'Here is some text with a <a href="http://url.com" rel="nofollow">url.com</a>.'

Note that sanitizing is a separate thing, but if you're curious, consider this example:

>>> bleach.linkify(bleach.clean("Here is <script> some text with a url.com."))
'Here is &lt;script&gt; some text with a <a href="http://url.com" rel="nofollow">url.com</a>.'

With that output you can confidently template interpolate that string straight into your HTML.

Getting fancy

That's a great start, but I wanted more. For one, I don't always want the rel="nofollow" attribute on all links, in particular links within the site itself. Secondly, a lot of things look like a domain but aren't. For example, "This is a text.at the start" would naively become...:

>>> bleach.linkify("This is a text.at the start")
'This is a <a href="http://text.at" rel="nofollow">text.at</a> the start'

...because text.at looks like a domain.

So here is how I use it here on www.peterbe.com to linkify blog comments:

def custom_nofollow_maker(attrs, new=False):
    href_key = (None, u"href")
    if href_key not in attrs:
        return attrs
    if attrs[href_key].startswith(u"mailto:"):
        return attrs
    p = urlparse(attrs[href_key])
    if p.netloc not in settings.NOFOLLOW_EXCEPTIONS:
        # Before we add the `rel="nofollow"` let's first check that this is a
        # valid domain at all.
        root_url = p.scheme + "://" + p.netloc
        try:
            response = requests.head(root_url)
            if response.status_code == 301:
                redirect_p = urlparse(response.headers["location"])
                # If the only difference is that it redirects to https instead
                # of http, then amend the href.
                if (
                    redirect_p.scheme == "https"
                    and p.scheme == "http"
                    and p.netloc == redirect_p.netloc
                ):
                    attrs[href_key] = attrs[href_key].replace("http://", "https://")
        except ConnectionError:
            return None

        rel_key = (None, u"rel")
        rel_values = [val for val in attrs.get(rel_key, "").split(" ") if val]
        if "nofollow" not in [rel_val.lower() for rel_val in rel_values]:
            rel_values.append("nofollow")
        attrs[rel_key] = " ".join(rel_values)
    return attrs


html = bleach.linkify(text, callbacks=[custom_nofollow_maker])

This basically takes the default nofollow callback and extends it a bit.

By the way, here is the complete code I use for sanitizing and linkifying blog comments here on this site: render_comment_text.

Caveats

This is slow because it requires network IO every time a piece of text needs to be linkified (if it has domain-looking things in it), but that's best alleviated by only doing it once and either caching it or persistently storing the cleaned and rendered output.

Also, the check uses try: requests.head() except requests.exceptions.ConnectionError: as the method to see if the domain works. I considered doing a whois lookup or something, but that felt a little wrong, because just because a domain exists doesn't mean there's a website there. Either way, it could be that the domain/URL is perfectly fine, but in that very unlucky instant you checked, your own server's internet connection or some DNS lookup was busted. Perhaps it should be wrapped in a retry, doing try: requests.head() except requests.exceptions.RetryError: instead.
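The retry idea can be sketched with a small stdlib-only helper. This is my own illustration, not code from the site; the name `with_retries` and its parameters are hypothetical:

```python
import time


def with_retries(fn, attempts=3, delay=0.2, exceptions=(OSError,)):
    """Call fn(), retrying on transient errors; re-raise after the last attempt."""
    for attempt in range(attempts):
        try:
            return fn()
        except exceptions:
            if attempt == attempts - 1:
                raise
            time.sleep(delay)
```

In the linkify callback, this would wrap the requests.head(root_url) call, with requests.exceptions.ConnectionError in the exceptions tuple.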

Lastly, the business logic I chose was to rewrite http:// to https:// only if the URL http://domain does a 301 redirect to https://domain. So if the original link was http://bit.ly/redirect-slug, it is left as is. Perhaps a fancier version would look at the domain name ending. For example, HEAD http://google.com 301-redirects to https://www.google.com, so you could use the fact that "www.google.com".endswith("google.com").
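The rewrite rule described above can be expressed as a small standalone helper (my own sketch, factored out of the callback for illustration): upgrade the scheme only when the redirect target differs from the original URL by scheme alone.

```python
from urllib.parse import urlparse


def maybe_upgrade_to_https(href, redirect_location):
    """Return href rewritten to https:// only if the 301 redirect target is
    the same host with just the scheme switched from http to https."""
    p = urlparse(href)
    r = urlparse(redirect_location)
    if p.scheme == "http" and r.scheme == "https" and p.netloc == r.netloc:
        return href.replace("http://", "https://", 1)
    return href
```

With this rule, http://bit.ly/redirect-slug stays untouched (the redirect target is a different host), and http://google.com also stays untouched because the target adds a www. prefix, which is exactly the case the fancier endswith() heuristic would cover.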

UPDATE Oct 10 2018

Moments after publishing this, I discovered a bug where it would fail badly if the text contained a URL with an ampersand in it. Turns out, it was a known bug in Bleach. It only happens when you try to pass a filter to the bleach.Cleaner() class.

So I simplified my code and now things work. Apparently, using bleach.Cleaner(filters=[...]) is faster so I'm losing that. But, for now, that's OK in my context.

Also, in another later fix, I improved the function some more by avoiding non-HTTP links (with the exception of mailto: and tel:). Otherwise it would attempt to run requests.head('ssh://server.example.com') which doesn't make sense.

Marc Richter: Create your own Telegram bot with Django on Heroku – Part 8 – Integrating the database


This article was published at marc-richter.info .
If you are reading this on any other page, which is not some “planet” or aggregator, you are reading stolen content. Please read this article at its source, which is linked before; thank you! ❤

Django_Pony

In the previous part of this series, we went through a bit of term definition to make it easier for beginners of Django to understand what I am talking about. Also, we created a Django app called “bot”, created a URL routing for it to be available at (https://dry-tundra-61874.herokuapp.com)/bot/* (or whatever your URL looks like), and saw how to direct URLs to a view.

Originally, I planned to also show how to start using a database in Django to hold your bot’s data. But since the article grew larger than anticipated, I had to cut that down, unfortunately (sorry for that 😰).
Today, I will deliver that part in its own article. We will learn how to work with databases in Django, what migrations are, and how to interact with the database from within Django’s admin backend.

Why do we need a database?

A database is needed to store and retrieve all data for your applications which is neither code nor file-like assets such as pictures, audio files, CSS, and so on. Regarding our Telegram project, a database is needed to store those parts of the JSON elements of Telegram messages which are forwarded to our webhook by our Telegram bot, as we saw in Part 4. We could also have our applications write this data into plain text files somewhere on our storage, but that is not a mature solution, since it does not scale well when it comes to indexing and finding pieces of information, and it does not deal well with concurrency. Also, since DB abstraction for many database systems is already built into Django, it’s easier to simply make use of this than to write something of your own.
Last but not least, Django makes it possible to not touch a single line of SQL code, since it creates the necessary queries from pure Python code, which also makes it easy to fetch and filter data.

Supported database systems

I like MariaDB / MySQL / SQLite3 / Oracle better than PostgreSQL – can’t I use that instead?

Django supports a wide variety of common relational database systems (RDBS), including PostgreSQL, MariaDB/MySQL, SQLite3, and Oracle, out of the box. When it comes to something more exotic like Firebird, you need to look for a 3rd-party extension module (like django-firebird, for example; attention: I did not test this! ⚠).
But I absolutely cannot recommend doing this, since the most benevolent description I can give such modules is “experimental”. Better stick with one of the built-in backends.

As already mentioned in Part 6, we will stick to PostgreSQL for various reasons, including personal preference of the author of these articles.

Will I have to write several variants of code then, for each database system I want my app to support?

Absolutely not! One of the benefits of the Django framework is that it offers an abstraction layer between your code and the database backend, so you can write your code without caring about the database backend at all. You can even use a different database for local development on your workstation than for your deployed application in production, without changing a single line of code.

For example, one common pattern is to use the SQLite3 backend on your workstation, which stores all your database content in a single file inside your project directory, to avoid the overhead of installing and maintaining a local PostgreSQL or MySQL service.
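One way to express that pattern in settings.py is to pick the backend from the environment. This is a minimal sketch of the idea only; on Heroku, this series relies on django_heroku to configure DATABASES from the DATABASE_URL environment variable, so treat the function and the fallback names here as illustrative:

```python
import os


def database_config():
    """Return a Django DATABASES['default'] dict: PostgreSQL when a
    DATABASE_URL-style variable is present (as on Heroku), SQLite3 otherwise.
    Sketch only; real deployments typically use dj-database-url/django_heroku."""
    if os.environ.get("DATABASE_URL"):
        return {
            "ENGINE": "django.db.backends.postgresql",
            "NAME": os.environ.get("DB_NAME", "dtbot"),  # hypothetical fallback name
        }
    return {
        "ENGINE": "django.db.backends.sqlite3",
        "NAME": "db.sqlite3",
    }
```

In settings.py you would then write DATABASES = {"default": database_config()}, and local runs transparently use the db.sqlite3 file while production uses PostgreSQL.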

Migrations

First, I need to clarify something which I did not really pick up in a previous part of this series (Part 6): I advised you to issue the command python manage.py migrate more or less blindly, without explaining what happens when you do that or what a migration is. Let me explain this now:

A migration is a collection of files, semi-automatically created by Django for you, which contain the SQL commands that create the database structure your project needs so far.
“Semi”-automatically, because you still need to execute a management command which triggers Django to inspect your code and create these files, aligning the database layout with what you have defined. And “so far” means that this is not a one-shot approach: you need to create additional migration files as soon as your model definitions change.
I will describe what that means in detail in a minute; for now, you just need to know that there is a bunch of commands built into Django which take care of all database management for you, to match the requirements of your code.

Initiate your database

To recap a bit and to have a clean state for everyone, please stop the server now if you have it running, and (re-)move the file db.sqlite3 from the root of your Django project by deleting or renaming it.
Now, your Django project is in a state as if we had never initialized the database using python manage.py migrate.

Let’s have Django searching for any necessary migrations and create them, first:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py makemigrations
No changes detected
(dtbot-hT9CNosh) ~/dtbot $

None – fine. This does not mean that no change needs to be applied to the database; it only means that Django already has all the migration files needed to reflect your models in a database, once they are applied. makemigrations is about preparing migration files from your code. If it did create any for you, you probably changed more than we did in this series so far, or deleted existing migration files. Anyway: there is no need to be concerned in that case; as long as no error is reported, you should be fine.

Next, let’s check which migrations would be applied to the database if we issued python manage.py migrate:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py showmigrations
admin
 [ ] 0001_initial
 [ ] 0002_logentry_remove_auto_add
 [ ] 0003_logentry_add_action_flag_choices
auth
 [ ] 0001_initial
 [ ] 0002_alter_permission_name_max_length
 [ ] 0003_alter_user_email_max_length
 [ ] 0004_alter_user_username_opts
 [ ] 0005_alter_user_last_login_null
 [ ] 0006_require_contenttypes_0002
 [ ] 0007_alter_validators_add_error_messages
 [ ] 0008_alter_user_username_max_length
 [ ] 0009_alter_user_last_name_max_length
contenttypes
 [ ] 0001_initial
 [ ] 0002_remove_content_type_name
sessions
 [ ] 0001_initial
(dtbot-hT9CNosh) ~/dtbot $

What do we see here? Each section heading like “admin” or “auth” represents an app which has one or more migrations defined. These are built-in apps which provide some core functionality of the Django framework, like the admin backend, which we will see in a minute.
The names of the migrations are listed below each app. As you can see, they are prefixed with a ‘####’ pattern (like ‘0001_*’). This is because the order of application is important: each migration depends on the clean state the former ones left behind. This is why it is a bad idea to manipulate the database layout manually, bypassing the migrations mechanism.
Each migration has a leading checkbox ( [ ] ), informing about its state: whether it was already applied to the database backend or not. In this case, none have been applied yet.

Before we (re-)create that SQLite3 database file, please add it to the list of files ignored by Git, to prevent it from being added to your Git repository and distributed to your production servers or version control that way:

echo "db.sqlite3">> .gitignore

So, let’s do this: let’s apply these outstanding migrations to our new SQLite3 database by executing python manage.py migrate:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying sessions.0001_initial... OK
(dtbot-hT9CNosh) ~/dtbot $

When we now check the status of the migrations again, we should notice that the status checkboxes turned from [ ] to [X], indicating that the migrations have been applied:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py showmigrations
admin
 [X] 0001_initial
 [X] 0002_logentry_remove_auto_add
 [X] 0003_logentry_add_action_flag_choices
auth
 [X] 0001_initial
 [X] 0002_alter_permission_name_max_length
 [X] 0003_alter_user_email_max_length
 [X] 0004_alter_user_username_opts
 [X] 0005_alter_user_last_login_null
 [X] 0006_require_contenttypes_0002
 [X] 0007_alter_validators_add_error_messages
 [X] 0008_alter_user_username_max_length
 [X] 0009_alter_user_last_name_max_length
contenttypes
 [X] 0001_initial
 [X] 0002_remove_content_type_name
sessions
 [X] 0001_initial
(dtbot-hT9CNosh) ~/dtbot $

Perfect! All are recognized as being applied! 👍

Create a superuser for your project 💪

If not disabled, Django comes with an admin backend configured by default. With this, you can log in to your Django project and make changes to the content of your database tables, manage users, etc. Before we can log in, we need to create an administrative user (the superuser), since by default there is none.

Create it using the following command; make sure to use some fair complexity when choosing your password, since by default there are some password validators enabled which prevent a user from choosing too simple a password (too short, just numbers, too similar to the username, etc.). Take note of the password you set here:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py createsuperuser
Username (leave blank to use 'testuser'): mrichter
Email address: none@nowhere.abc
Password: 
Password (again): 
Superuser created successfully.
(dtbot-hT9CNosh) ~/dtbot $
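As an aside, the validators mentioned above come from the AUTH_PASSWORD_VALIDATORS setting. A default django-admin startproject template fills it in like this (reproduced as a reference sketch; check your own settings.py, as your project may differ):

```python
AUTH_PASSWORD_VALIDATORS = [
    # Rejects passwords too similar to the username, email, etc.
    {"NAME": "django.contrib.auth.password_validation.UserAttributeSimilarityValidator"},
    # Rejects passwords shorter than 8 characters by default
    {"NAME": "django.contrib.auth.password_validation.MinimumLengthValidator"},
    # Rejects passwords found on a list of common passwords
    {"NAME": "django.contrib.auth.password_validation.CommonPasswordValidator"},
    # Rejects passwords that are entirely numeric
    {"NAME": "django.contrib.auth.password_validation.NumericPasswordValidator"},
]
```

Removing entries from this list relaxes the rules, but for anything reachable from the internet you should leave them in place.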

Login to Django’s admin site

This user was created in the Django data structure which is stored in your local db.sqlite3 file. Do not expect this to work in your production environment yet, since that is a different database.

To log in, we first need to start a local instance of our Django project, either by using python manage.py runserver or heroku local; I recommend the former, since it is the same for any hosting provider and gives a bit more info without further configuration:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
October 10, 2018 - 13:09:51
Django version 2.1.2, using settings 'dtbot.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.

You can navigate to the admin-backend by pointing your browser to http://127.0.0.1:8000/admin/ now, but don’t be surprised to see “Server Error (500)” in your browser and on the shell:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py runserver
Performing system checks...

System check identified no issues (0 silenced).
October 10, 2018 - 13:09:51
Django version 2.1.2, using settings 'dtbot.settings'
Starting development server at http://127.0.0.1:8000/
Quit the server with CONTROL-C.
[10/Oct/2018 13:10:01] "GET /admin/ HTTP/1.1" 302 0
[10/Oct/2018 13:10:01] "GET /admin/login/?next=/admin/ HTTP/1.1" 500 27

We did nothing wrong, but again, the django_heroku module makes one additional step necessary:
Since it introduces the whitenoise middleware (PyPI), which makes it easy for any web-based application to keep track of its static files on its own, without having to rely on complicated Apache or nginx configurations, we need to collect all static files (like CSS files, images, etc.) in a defined directory. There’s a manage.py task for that:
First, we need to stop the Django server (CONTROL-C) and execute the following command to make sure all needed files are available at the expected location:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py collectstatic --noinput

119 static files copied to '/home/testuser/dtbot/staticfiles', 375 post-processed.
(dtbot-hT9CNosh) ~/dtbot $

It’s also wise to add this folder to the .gitignore file, since this command is executed on each Heroku deployment anyway, and files in staticfiles/ are nothing that should be in the VCS:

echo "staticfiles/">> .gitignore

When we restart the Django server using python manage.py runserver, we should be able to access http://127.0.0.1:8000/admin/ successfully:

Django admin login mask

Using the credentials you just created the superuser with, logging in should work and look somewhat similar to this:

Django admin backend

Create your models

Well, this still looks a bit boring, doesn’t it? 😴
Let’s populate it with something useful: our own models to store the users your bot will accept messages from 🤩

Wait, what is a model?

A model is Python code which defines a data structure by creating classes which extend specific Django classes. … If I read that sentence in a tutorial, I’d be discouraged from continuing, since it is soooo unclear what to do next 😰. But stick with me, I will show you what I mean in an example:

Creating a model which holds your users

Once more, fire up your code editor and open the file bot/models.py. By default, it looks like this:

from django.db import models

# Create your models here.

Let’s keep our first example easy and change this to read:

from django.db import models

class User(models.Model):
    user_id     = models.IntegerField(unique=True, primary_key=True)
    first_name  = models.CharField(max_length=64)
    last_name   = models.CharField(max_length=64)

Let me explain line by line what this does:

  1. The first line hasn’t changed. It just imports what will be used as a basis for our models.
  2. In line 3 we define a new class called 
    User
    , which extends the class 
    models.Model
     , which we imported in the first line. This class later will be inserted into the database as a table.
  3. From line 4 onwards, we define the fields for our model (which later will become columns in the table of the class 
    User
     . So far so clear: How this is done is the interesting part here:
    The 
    models
     module contains additional classes, each defining a field type. If you are familiar with SQL, you most certainly recognize this from the database definition and creation. In SQL, you need to define a skeleton for a table layout before you can add any data to them. This is not limited to naming the columns, but you also need to define the data-type for fields in that column, like “this is an integer“, “this is a string” or “this is a date“. Also, you need to define several other things which variate from data-type to data-type, like the maximal length of a string stored to a “string field” (which really is called a “CharField” in Django, but I think “string” is more commonly to understand for Pythonistas).
    Here in line 4, we are defining that the column 
    user_id
     inside of the table 
    User
     should be an 
    IntegerField
     , which must be unique (no other line in the whole table is allowed to have the same content like any other in this column) and which is a primary key (something which makes it possible to uniquely and reliably select one and only one specific row).
  4. first_name
     is defined as being a 
    CharField
     . Char fields are used to store strings which are not considered “large”. Otherwise, it’s encouraged to use a 
    TextField
     instead.
    This mainly is not so very relevant for the database, but for Django, since this type decides what kind of input field is used to edit these fields in forms: When smaller text strings are expected, like a name or a state name, then a form should offer a one-line input field to ask for this. If it’s a whole article for a blog or similar, than a whole input-box should be rendered instead. Django decides this depending on this field type you define here.
    Since we do not expect any 
    first_name
     to extend 64 characters, we add 
    max_length=64
     as an argument here. The reason why you limit this in SQL usually is that the RDBS reserves a specific amount of storage for each line of this table which is always the same size, no matter if the value is 6 or 64 characters long. The more you define here which remains unused, the more “waste” of storage and performance you risk here. This might not appear like the worst thing for a user table, but for tables which soon contain millions of rows, it becomes relevant quite quickly. So take this as “good practice” advice.
  5. Exactly the same as in line 5, just for last_name.

What have we just done and why?

With this background, you can surely tell what we just did here: we defined a database table named "User" which holds records consisting of three pieces of information each:

  • a numeric user id
  • a string of up to 64 characters for the first name
  • a string of up to 64 characters for the last name

The idea is: when you operate a bot on Telegram, everyone can send messages to it, right? So whatever your bot is supposed to do, you may not want it to process messages from just anyone, but only from a fixed list of users. This depends on the intention of your bot: if it is a public service which sends everyone who registers a message when a new article is published, you do not need to limit who can use it.
If you are planning to create a bot which tracks household cash information for just a few people (like we do in this article series), then you definitely do not want everybody to be able to ask your bot to "add 1000 💵 for cocaine" to your and your wife's monthly calculation.

Let’s have a look at the JSON data which Telegram sends to the Webhook of our bot as described in Part 4:

{'message': {'chat': {'first_name': 'Marc',
                       'id': REMOVED,
                       'last_name': 'Richter',
                       'type': 'private'},
              'date': 1533248344,
              'from': {'first_name': 'Marc',
                       'id': REMOVED,
                       'is_bot': False,
                       'language_code': 'de',
                       'last_name': 'Richter'},
              'message_id': 4,
              'text': 'Test'},
  'update_id': 941430900}

Where it says REMOVED here, normally a unique numerical value is listed which unambiguously identifies a specific user in Telegram. The plan is that this value can be entered as user_id in the User table, so your code can decide whether to process an incoming message (store it in the database, send a reply, add the received numbers to your monthly sum, …) or not.
We will see how this can be done later. For now, let’s get familiar with database modeling, first.
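Stripped of all Django specifics, the gating itself is ordinary Python: extract the sender's id from the update dictionary shown above and test membership in a set of allowed ids. A minimal sketch (the ids and the helper name is_authorized are made up for illustration, not part of the series' code):

```python
# Hypothetical allow-list of Telegram user ids (the REMOVED values above).
ALLOWED_USER_IDS = {123456789, 987654321}

def is_authorized(update):
    """Return True if the update's sender is on the allow-list."""
    # Navigate the nested dict defensively; missing keys yield {} / None.
    sender = update.get('message', {}).get('from', {})
    return sender.get('id') in ALLOWED_USER_IDS

update = {'message': {'from': {'id': 123456789, 'first_name': 'Marc',
                               'last_name': 'Richter', 'is_bot': False}}}
print(is_authorized(update))  # True, because the id is in the set
```

Later in the series, the hard-coded set will be replaced by a lookup against the User table.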

Register the app to Django

For a start, this is enough to enable Django to create a migration for it. But if you execute python manage.py makemigrations now, you will notice that Django states No changes detected.
This is because Django does not know about your app yet. If you remember from Part 7, the python manage.py startapp bot command just creates a new folder holding some files; none of the existing files is altered to make Django aware of this new app folder. Thus, we need to do that now, after the initial preparations have been made.

Once more, edit settings.py and head for INSTALLED_APPS. You will notice that this already contains some built-in apps by default. We add our app to this list now to make it look like this:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'bot.apps.BotConfig',
]

Having the migrations created and applied

If you now fire up python manage.py makemigrations, a migration for our model definition should be detected and created:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py makemigrations
Migrations for 'bot':
  bot/migrations/0001_initial.py
    - Create model User
(dtbot-hT9CNosh) ~/dtbot $ ls -l bot/migrations/
total 8
-rw-rw-r-- 1 testuser testuser  536 Oct 10 17:01 0001_initial.py
-rw-rw-r-- 1 testuser testuser    0 Sep 21 22:06 __init__.py
drwxrwxr-x 2 testuser testuser 4096 Oct 10 17:01 __pycache__
(dtbot-hT9CNosh) ~/dtbot $

If this has worked well, we can apply that migration and thereby have the table created in the database:

(dtbot-hT9CNosh) ~/dtbot $ python manage.py migrate
Operations to perform:
  Apply all migrations: admin, auth, bot, contenttypes, sessions
Running migrations:
  Applying bot.0001_initial... OK
(dtbot-hT9CNosh) ~/dtbot $

Let’s have a look at the database! 👁

What has all this caused? Let's have a look!
With the following commands, I'm connecting to the SQLite3 database file db.sqlite3 and listing its contents:

(dtbot-hT9CNosh) ~/dtbot $ sqlite3
SQLite version 3.23.1 2018-04-10 17:39:29
Enter ".help" for usage hints.
Connected to a transient in-memory database.
Use ".open FILENAME" to reopen on a persistent database.
sqlite> .open db.sqlite3
sqlite> .tables
auth_group                  bot_user                  
auth_group_permissions      django_admin_log          
auth_permission             django_content_type       
auth_user                   django_migrations         
auth_user_groups            django_session            
auth_user_user_permissions
sqlite> .schema bot_user
CREATE TABLE IF NOT EXISTS "bot_user" ("user_id" integer NOT NULL PRIMARY KEY, "first_name" varchar(64) NOT NULL, "last_name" varchar(64) NOT NULL);
sqlite> .quit
(dtbot-hT9CNosh) ~/dtbot $

Ignore the other tables for now; these come from the built-in apps' migrations.
What is interesting here is that our table User ends up as bot_user in the database. This can be changed, but normally this is quite a meaningful default: the name is lowercased and prefixed with the app's name plus "_", so everyone can immediately see which app a table belongs to.
The CREATE TABLE statement is plain SQL; we can find all our model's field definitions in it.
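In case you ever do want a different table name, the default can be overridden via the model's inner Meta class. A sketch, assuming we wanted to call the table telegram_users (we will not do this in this series; db_table is a standard Django Meta option):

```python
class User(models.Model):
    user_id    = models.IntegerField(unique=True, primary_key=True)
    first_name = models.CharField(max_length=64)
    last_name  = models.CharField(max_length=64)

    class Meta:
        db_table = 'telegram_users'  # would replace the default "bot_user"
```

Changing this after the first migration requires another migration, so it is best decided early.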

How to enter data to this?

We could either use Python code to do this already, or we could start with the admin backend. Let's do the latter, since entering data from Python code will be covered later, when we are storing our messages. This way, we get to know both approaches.

Launch your Django server and head for the admin backend at http://127.0.0.1:8000/admin/ . You will notice that nothing has changed yet. This is because we first need to register our models with the admin backend.
This is done by editing the file bot/admin.py to look like this:

from django.contrib import admin
from .models import User

admin.site.register(User)

Save this, restart the server and hit the admin backend again; you should notice another section called “BOT” is listed now, containing one element called “Users”:

Django admin backend with bot

This is our model!! 🤑
Let's click that "Users" link. You will see a more or less blank page stating "Select user to change". Not very impressive without any record ready to be edited. But there's also a button in the upper right corner labeled "ADD USER" – let's click that!
You will be shown a form which asks for three things:

  1. User id
  2. First name
  3. Last name

You may start by doing some experiments with this:

  • From what we know, "User id" should only be able to store integers, since we defined that field to be of type IntegerField. Try to save something else: a float, a string, whatever.
  • "First name" and "Last name" should allow 64 characters at most. Try to store a string which is 65 characters long and see what happens (take a close look, comparing both strings).
  • From the SQLite3 schema, we know that all three fields were created with NOT NULL, which means they must not be blank. See whether you can fill in only two of the fields and still get away with saving.

When you are done experimenting with this, delete all records again.
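If you prefer, the cleanup can also be done from the Django shell. This is only a sketch of the ORM calls we will use properly later in the series (run inside the project's virtualenv):

```python
(dtbot-hT9CNosh) ~/dtbot $ python manage.py shell
>>> from bot.models import User
>>> User.objects.create(user_id=1, first_name='Test', last_name='User')
>>> User.objects.all().delete()  # removes every experiment record
```

Both routes end up issuing the same SQL against bot_user.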

Make it pretty 👯

It may have come to your attention that a newly created record is displayed in the record overview as "User object (#)". I do not like that; I'd prefer "First_name Last_name". Let's change that!

Create at least one record, so we can see the effect.

Edit your models.py file once more and override the __str__ function of the class User by changing it like this:

class User(models.Model):
    user_id     = models.IntegerField(unique=True, primary_key=True)
    first_name  = models.CharField(max_length=64)
    last_name   = models.CharField(max_length=64)

    def __str__(self):
        return f'{self.first_name} {self.last_name}'

Save this, restart the Django server and reload the admin backend; "User object (#)" should now be displayed as whatever you chose for "First_name Last_name" when you created those records.

Example of how __str__ works

In case you do not know what __str__ does: it defines how an object is represented when it is displayed with the print() function, for example. Let's see an easy example of this:

>>> class foobar():
...     def __init__(self, name, mood="Good"):
...         self.name = name
...         self.mood = mood
... 
>>> a = foobar('Carl')
>>> print(a)
<foobar object at 0x7f8cf4c94cc0>

>>> class foobar():
...     def __init__(self, name, mood="Good"):
...         self.name = name
...         self.mood = mood
...     def __str__(self):
...         return f"Hi, I'm {self.name} and my mood is {self.mood}!"
... 
>>> b = foobar('Curt')
>>> print(b)
Hi, I'm Curt and my mood is Good!
>>>

Outlook for the next part of the series

Phew – again, this turned out to be quite an exhaustive article! Let's stop here before it becomes even longer.
We just learned what a database is, what it's good for, which database systems are supported by Django, how to utilize a database in your Django apps, and how to use the admin backend to manipulate the records in the database. Also, we learned what "migrations" and "models" are.

In the next article of this series, we will see how these new moves can be used in our Python code to receive and store messages and how you can interact with your database from your Python code.

If you liked or disliked this article, I’d love to read that in the comments!

Enjoy coding!

Born in 1982, Marc Richter has been an IT enthusiast since 1994. He became addicted when he first got his hands on his family's PC and has never stopped investigating and exploring new things since then.
He is married to Jennifer Richter and proud father of two wonderful children, Lotta and Linus.
His current professional focus is DevOps and Python development.

An exhaustive bio can be found at this blog post.

Found my articles useful? Maybe you would like to support my efforts and give me a tip then?

Trey Hunner: Asterisks in Python: what they are and how to use them


There are a lot of places you’ll see * and ** used in Python. These two operators can be a bit mysterious at times, both for brand new programmers and for folks moving from many other programming languages which may not have completely equivalent operators. I’d like to discuss what those operators are and the many ways they’re used.

The * and ** operators have grown in ability over the years and I’ll be discussing all the ways that you can currently use these operators and noting which uses only work in modern versions of Python. So if you learned * and ** back in the days of Python 2, I’d recommend at least skimming this article because Python 3 has added a lot of new uses for these operators.

If you’re newer to Python and you’re not yet familiar with keyword arguments (a.k.a. named arguments), I’d recommend reading my article on keyword arguments in Python first.

What we’re not talking about

When I discuss * and ** in this article, I'm talking about the * and ** prefix operators, not the infix operators.

So I’m not talking about multiplication and exponentiation:

>>> 2 * 5
10
>>> 2 ** 5
32

So what are we talking about?

We’re talking about the * and ** prefix operators, that is the * and ** operators that are used before a variable. For example:

>>> numbers = [2, 1, 3, 4, 7]
>>> more_numbers = [*numbers, 11, 18]
>>> print(*more_numbers, sep=', ')
2, 1, 3, 4, 7, 11, 18

Two of the uses of * are shown in that code and no uses of ** are shown.

This includes:

  1. Using * and ** to pass arguments to a function
  2. Using * and ** to capture arguments passed into a function
  3. Using * to accept keyword-only arguments
  4. Using * to capture items during tuple unpacking
  5. Using * to unpack iterables into a list/tuple
  6. Using ** to unpack dictionaries into other dictionaries

Even if you think you’re familiar with all of these ways of using * and **, I recommend looking at each of the code blocks below to make sure they’re all things you’re familiar with. The Python core developers have continued to add new abilities to these operators over the last few years and it’s easy to overlook some of the newer uses of * and **.

Asterisks for unpacking into function call

When calling a function, the * operator can be used to unpack an iterable into the arguments in the function call:

>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> print(fruits[0], fruits[1], fruits[2], fruits[3])
lemon pear watermelon tomato
>>> print(*fruits)
lemon pear watermelon tomato

That print(*fruits) line is passing all of the items in the fruits list into the print function call as separate arguments, without us even needing to know how many arguments are in the list.

The * operator isn’t just syntactic sugar here. This ability of sending in all items in a particular iterable as separate arguments wouldn’t be possible without *, unless the list was a fixed length.

Here’s another example:

def transpose_list(list_of_lists):
    return [
        list(row)
        for row in zip(*list_of_lists)
    ]

Here we’re accepting a list of lists and returning a “transposed” list of lists.

>>> transpose_list([[1, 4, 7], [2, 5, 8], [3, 6, 9]])
[[1, 2, 3], [4, 5, 6], [7, 8, 9]]

The ** operator does something similar, but with keyword arguments. The ** operator allows us to take a dictionary of key-value pairs and unpack it into keyword arguments in a function call.

>>> date_info = {'year': "2020", 'month': "01", 'day': "01"}
>>> filename = "{year}-{month}-{day}.txt".format(**date_info)
>>> filename
'2020-01-01.txt'

From my experience, using ** to unpack keyword arguments into a function call isn’t particularly common. The place I see this most is when practicing inheritance: calls to super() often include both * and **.

Both * and ** can be used multiple times in function calls, as of Python 3.5.

Using * multiple times can sometimes be handy:

>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> numbers = [2, 1, 3, 4, 7]
>>> print(*numbers, *fruits)
2 1 3 4 7 lemon pear watermelon tomato

Using ** multiple times looks similar:

>>> date_info = {'year': "2020", 'month': "01", 'day': "01"}
>>> track_info = {'artist': "Beethoven", 'title': 'Symphony No 5'}
>>> filename = "{year}-{month}-{day}-{artist}-{title}.txt".format(
...     **date_info,
...     **track_info,
... )
>>> filename
'2020-01-01-Beethoven-Symphony No 5.txt'

You need to be careful when using ** multiple times though. Functions in Python can’t have the same keyword argument specified multiple times, so the keys in each dictionary used with ** must be distinct or an exception will be raised.
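To see that rule in action, here is a small demonstration (describe is just a throwaway function for the example, not from the article above):

```python
def describe(**kwargs):
    return kwargs

first = {'color': 'red'}
second = {'color': 'blue'}

print(describe(**first))  # {'color': 'red'}

try:
    describe(**first, **second)
except TypeError as error:
    # Both dictionaries supply the 'color' keyword, so the call fails.
    print(error)
```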

Asterisks for packing arguments given to function

When defining a function, the * operator can be used to capture an unlimited number of positional arguments given to the function. These arguments are captured into a tuple.

from random import randint

def roll(*dice):
    return sum(randint(1, die) for die in dice)

This function accepts any number of arguments:

>>> roll(20)
18
>>> roll(6, 6)
9
>>> roll(6, 6, 6)
8

Python’s print and zip functions accept any number of positional arguments. This argument-packing use of * allows us to make our own functions which, like print and zip, accept any number of arguments.

The ** operator also has another side to it: we can use ** when defining a function to capture any keyword arguments given to the function into a dictionary:

def tag(tag_name, **attributes):
    attribute_list = [
        f'{name}="{value}"'
        for name, value in attributes.items()
    ]
    return f"<{tag_name} {' '.join(attribute_list)}>"

That ** will capture any keyword arguments we give to this function into a dictionary, which that attributes argument will reference.

>>> tag('a', href="http://treyhunner.com")
'<a href="http://treyhunner.com">'
>>> tag('img', height=20, width=40, src="face.jpg")
'<img height="20" width="40" src="face.jpg">'

Positional arguments with keyword-only arguments

As of Python 3, we now have a special syntax for accepting keyword-only arguments to functions. Keyword-only arguments are function arguments which can only be specified using the keyword syntax, meaning they cannot be specified positionally.

To accept keyword-only arguments, we can put named arguments after a * usage when defining our function:

def get_multiple(*keys, dictionary, default=None):
    return [
        dictionary.get(key, default)
        for key in keys
    ]

The above function can be used like this:

>>> fruits = {'lemon': 'yellow', 'orange': 'orange', 'tomato': 'red'}
>>> get_multiple('lemon', 'tomato', 'squash', dictionary=fruits, default='unknown')
['yellow', 'red', 'unknown']

The arguments dictionary and default come after *keys, which means they can only be specified as keyword arguments. If we try to specify them positionally we’ll get an error:

>>> fruits = {'lemon': 'yellow', 'orange': 'orange', 'tomato': 'red'}
>>> get_multiple('lemon', 'tomato', 'squash', fruits, 'unknown')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: get_multiple() missing 1 required keyword-only argument: 'dictionary'

This behavior was introduced to Python through PEP 3102.

Keyword-only arguments without positional arguments

That keyword-only argument feature is cool, but what if you want to require keyword-only arguments without capturing unlimited positional arguments?

Python allows this with a somewhat strange *-on-its-own syntax:

def with_previous(iterable, *, fillvalue=None):
    """Yield each iterable item along with the item before it."""
    previous = fillvalue
    for item in iterable:
        yield previous, item
        previous = item

This function accepts an iterable argument, which can be specified positionally (as the first argument) or by its name and a fillvalue argument which is a keyword-only argument. This means we can call with_previous like this:

>>> list(with_previous([2, 1, 3], fillvalue=0))
[(0, 2), (2, 1), (1, 3)]

But not like this:

>>> list(with_previous([2, 1, 3], 0))
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: with_previous() takes 1 positional argument but 2 were given

This function accepts two arguments and one of them, fillvalue, must be specified as a keyword argument.

I usually see keyword-only arguments used while capturing any number of positional arguments, but I do sometimes use this * to enforce an argument to only be specified by its name.

Python’s built-in sorted function actually uses this approach. If you look at the help information on sorted you’ll see the following:

>>> help(sorted)
Help on built-in function sorted in module builtins:

sorted(iterable, /, *, key=None, reverse=False)
    Return a new list containing all items from the iterable in ascending order.

    A custom key function can be supplied to customize the sort order, and the
    reverse flag can be set to request the result in descending order.

There’s an *-on-its-own, right in the documented arguments for sorted.

Asterisks in tuple unpacking

Python 3 also added a new way of using the * operator that is only somewhat related to the *-when-defining-a-function and *-when-calling-a-function features above.

The * operator can also be used in tuple unpacking now:

>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> first, second, *remaining = fruits
>>> remaining
['watermelon', 'tomato']
>>> first, *remaining = fruits
>>> remaining
['pear', 'watermelon', 'tomato']
>>> first, *middle, last = fruits
>>> middle
['pear', 'watermelon']

If you’re wondering “where could I use this in my own code”, take a look at the examples in my article on tuple unpacking in Python. In that article I show how this use of the * operator can sometimes be used as an alternative to sequence slicing.

Usually when I teach * I note that you can only use one * expression in a single multiple assignment call. That’s technically incorrect because it’s possible to use two in a nested unpacking (I talk about nested unpacking in my tuple unpacking article):

>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> ((first_letter, *remaining), *other_fruits) = fruits
>>> remaining
['e', 'm', 'o', 'n']
>>> other_fruits
['pear', 'watermelon', 'tomato']

I’ve never seen a good use for this though and I don’t think I’d recommend using it even if you found one because it seems a bit cryptic.

The PEP that added this to Python 3.0 is PEP 3132 and it’s not a very long one.

Asterisks in list literals

Python 3.5 introduced a ton of new *-related features through PEP 448. One of the biggest new features is the ability to use * to dump an iterable into a new list.

Say you have a function that takes any sequence and returns a list with the sequence and the reverse of that sequence concatenated together:

def palindromify(sequence):
    return list(sequence) + list(reversed(sequence))

This function needs to convert things to lists a couple times in order to concatenate the lists and return the result. In Python 3.5, we can type this instead:

def palindromify(sequence):
    return [*sequence, *reversed(sequence)]

This code removes some needless list calls so our code is both more efficient and more readable.

Here’s another example:

def rotate_first_item(sequence):
    return [*sequence[1:], sequence[0]]

That function returns a new list where the first item in the given list (or other sequence) is moved to the end of the new list.

This use of the * operator is a great way to concatenate iterables of different types together. The * operator works for any iterable, whereas using the + operator only works on particular sequences which have to all be the same type.
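A small demonstration of that difference (the values here are arbitrary, not from the article):

```python
letters = ['a', 'b']
points = (1, 2)

# letters + points raises:
# TypeError: can only concatenate list (not "tuple") to list

# * doesn't care about the types of the iterables being combined:
combined = [*letters, *points, *range(3)]
print(combined)  # ['a', 'b', 1, 2, 0, 1, 2]
```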

This isn’t just limited to creating lists either. We can also dump iterables into new tuples or sets:

>>> fruits = ['lemon', 'pear', 'watermelon', 'tomato']
>>> (*fruits[1:], fruits[0])
('pear', 'watermelon', 'tomato', 'lemon')
>>> uppercase_fruits = (f.upper() for f in fruits)
>>> {*fruits, *uppercase_fruits}
{'lemon', 'watermelon', 'TOMATO', 'LEMON', 'PEAR', 'WATERMELON', 'tomato', 'pear'}

Notice that the last line above takes a list and a generator and dumps them into a new set. Before this use of *, there wasn’t previously an easy way to do this in one line of code. There was a way to do this before, but it wasn’t easy to remember or discover:

>>> set().union(fruits, uppercase_fruits)
{'lemon', 'watermelon', 'TOMATO', 'LEMON', 'PEAR', 'WATERMELON', 'tomato', 'pear'}

Double asterisks in dictionary literals

PEP 448 also expanded the abilities of ** by allowing this operator to be used for dumping key/value pairs from one dictionary into a new dictionary:

>>> date_info = {'year': "2020", 'month': "01", 'day': "01"}
>>> track_info = {'artist': "Beethoven", 'title': 'Symphony No 5'}
>>> all_info = {**date_info, **track_info}
>>> all_info
{'year': '2020', 'month': '01', 'day': '01', 'artist': 'Beethoven', 'title': 'Symphony No 5'}

I wrote another article on how this is now the idiomatic way to merge dictionaries in Python.

This can be used for more than just merging two dictionaries together though.

For example we can copy a dictionary while adding a new value to it:

>>> date_info = {'year': '2020', 'month': '01', 'day': '7'}
>>> event_info = {**date_info, 'group': "Python Meetup"}
>>> event_info
{'year': '2020', 'month': '01', 'day': '7', 'group': 'Python Meetup'}

Or copy/merge dictionaries while overriding particular values:

>>> event_info = {'year': '2020', 'month': '01', 'day': '7', 'group': 'Python Meetup'}
>>> new_info = {**event_info, 'day': "14"}
>>> new_info
{'year': '2020', 'month': '01', 'day': '14', 'group': 'Python Meetup'}

Python’s asterisks are powerful

Python’s * and ** operators aren’t just syntactic sugar. Some of the things they allow you to do could be achieved through other means, but the alternatives to * and ** tend to be more cumbersome and more resource intensive. And some of the features they provide are simply impossible to achieve without them: for example there’s no way to accept any number of positional arguments to a function without *.

After reading about all the features of * and **, you might be wondering what the names for these odd operators are. Unfortunately, they don’t really have succinct names. I’ve heard * called the “packing” and “unpacking” operator. I’ve also heard it called “splat” (from the Ruby world) and I’ve heard it called simply “star”.

I tend to call these operators “star” and “double star” or “star star”. That doesn’t distinguish them from their infix relatives (multiplication and exponentiation), but context usually makes it obvious whether we’re talking about prefix or infix operators.

If you don’t understand * and ** or you’re concerned about memorizing all of their uses, don’t be! These operators have many uses and memorizing the specific use of each one isn’t as important as getting a feel for when you might be able to reach for these operators. I suggest using this article as a cheat sheet or to making your own cheat sheet to help you use * and ** in Python.

Codementor: The One reason you should learn Python

When you speak with researchers, data scientists, and practitioners who are involved in any capacity with data, you are bound to hear one word multiple times in a conversation: Python. It is...

NumFOCUS: What Happened at the NumFOCUS Summit 2018?

Matt Layman: Build Native Mobile Apps with Python (BeeWare)

You can build mobile applications with Python? Absolutely. At Python Frederick’s October 2018 presentation, Bob Marchese showed us how to use BeeWare, a suite of tools for building mobile apps (among other things). Bob’s presentation material are available on the Python Frederick talks repository on GitHub.

Jeff Knupp: Write Better Python Functions


In Python, like most modern programming languages, the function is a primary method of abstraction and encapsulation. You've probably written hundreds of functions in your time as a developer. But not all functions are created equal. And writing "bad" functions directly affects the readability and maintainability of your code. So what, then, is a "bad" function and, more importantly, what makes a "good" function?

A Quick Refresher

Math is lousy with functions, though we might not remember them, so let's think back to everyone's favorite topic: calculus. You may remember seeing formulas like the following f(x) = 2x + 3. This is a function, called f, that takes an argument x, and "returns" two times x + 3. While it may not look like the functions we're used to in Python, this is directly analogous to the following code:

def f(x):
    return 2 * x + 3

Functions have long existed in math, but have far more power in computer science. With this power, though, comes various pitfalls. Let's now discuss what makes a "good" function and warning signs of functions that may need some refactoring.

Keys To A Good Function

What differentiates a "good" Python function from a crappy one? You'd be surprised at how many definitions of "good" one can use. For our purposes, I'll consider a Python function "good" if it can tick off most of the items on this checklist (some are not always possible):

  • Is sensibly named
  • Has a single responsibility
  • Includes a docstring
  • Returns a value
  • Is not longer than 50 lines
  • Is idempotent and, if possible, pure

For many of you, this list may seem overly draconian. I promise you, though, if your functions follow these rules, your code will be so beautiful it will make unicorns weep. Below, I'll devote a section to each of the items, then wrap things up with how they work in harmony to create "good" functions.

Naming

There's a favorite saying of mine on the subject, often misattributed to Donald Knuth, but which actually came from Phil Karlton:

There are only two hard things in Computer Science: cache invalidation and naming things.

-- Phil Karlton

As silly as it sounds, naming things well is difficult. Here's an example of a "bad" function name:

defget_knn_from_df(df):

Now, I've seen bad names literally everywhere, but this example comes from Data Science (really, Machine Learning), where its practitioners typically write code in Jupyter notebooks and later try to turn those various cells into a comprehensible program.

The first issue with the name of this function is its use of acronyms/abbreviations. Prefer full English words to abbreviations and non-universally known acronyms. The only reason one might abbreviate words is to save typing, but every modern editor has autocomplete, so you'll only be typing that full name once. Abbreviations are an issue because they are often domain specific. In the code above, knn refers to "K-Nearest Neighbors", and df refers to "DataFrame", the ubiquitous pandas data structure. If another programmer not familiar with those acronyms is reading the code, almost nothing about the name will be comprehensible to her.

There are two other minor gripes about this function's name: the word "get" is extraneous. For most well-named functions, it will be clear that something is being returned from the function, and its name will reflect that. The from_df bit is also unnecessary. Either the function's docstring or (if living on the edge) type annotation will describe the type of the parameter if it's not already made clear by the parameter's name.

So how might we rename this function? Simple:

defk_nearest_neighbors(dataframe):

It is now clear even to the lay person what this function calculates, and the parameter's name (dataframe) makes it clear what type of argument should be passed to it.

Single Responsibility

Straight from "Uncle" Bob Martin, the Single Responsibility Principle applies just as much to functions as it does classes and modules (Mr. Martin's original targets). It states that (in our case) a function should have a single responsibility. That is, it should do one thing and only one thing. One great reason is that if every function only does one thing, there is only one reason ever to change it: if the way in which it does that thing must change. It also becomes clear when a function can be deleted: if, when making changes elsewhere, it becomes clear the function's single responsibility is no longer needed, simply remove it.

An example will help. Here's a function that does more than one "thing":

import statistics

def calculate_and_print_stats(list_of_numbers):
    total = sum(list_of_numbers)
    mean = statistics.mean(list_of_numbers)
    median = statistics.median(list_of_numbers)
    mode = statistics.mode(list_of_numbers)
    print('-----------------Stats-----------------')
    print('SUM: {}'.format(total))
    print('MEAN: {}'.format(mean))
    print('MEDIAN: {}'.format(median))
    print('MODE: {}'.format(mode))

This function does two things: it calculates a set of statistics about a list of numbers and prints them to STDOUT. The function violates the rule that there should be only one reason to change a function. There are two obvious reasons this function would need to change: new or different statistics might need to be calculated, or the format of the output might need to change. This function is better written as two separate functions: one which performs and returns the results of the calculations, and another that takes those results and prints them. One dead giveaway that a function has multiple responsibilities is the word "and" in the function's name.

This separation also allows for much easier testing of the function's behavior and also allows the two parts to be separated not just into two functions in the same module, but possibly live in different modules altogether if appropriate. This, too, leads to cleaner testing and easier maintenance.

Finding a function that only does two things is actually rare. Much more often, you'll find functions that do many, many more things. Again, for readability and testability purposes, these jack-of-all-trade functions should be broken up into smaller functions that each encapsulate a single unit of work.
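As a sketch of how the stats example above could be split (the function names here are my choice, not prescribed by the article):

```python
import statistics

def calculate_stats(numbers):
    """Return summary statistics for a list of numbers."""
    return {
        'sum': sum(numbers),
        'mean': statistics.mean(numbers),
        'median': statistics.median(numbers),
        'mode': statistics.mode(numbers),
    }

def print_stats(stats):
    """Print a dictionary of statistics to STDOUT."""
    print('-----------------Stats-----------------')
    for name, value in stats.items():
        print('{}: {}'.format(name.upper(), value))

print_stats(calculate_stats([1, 2, 2, 3]))
```

Now the calculation can be unit-tested without capturing STDOUT, and the output format can change without touching the math.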

Docstrings

While everyone seems to be aware of PEP-8, defining the style guide for Python, far fewer seem to be aware of PEP-257, which does the same for docstrings. Rather than simply rehash the contents of PEP-257, feel free to read it at your leisure. The main takeaways, however, are:

  • Every function requires a docstring
  • Use proper grammar and punctuation; write in complete sentences
  • Begin with a one-sentence summary of what the function does
  • Use prescriptive rather than descriptive language

This is an easy one to tick off when writing functions. Just get in the habit of always writing docstrings, and try to write them before you write the code for the function. If you can't write a clear docstring describing what the function will do, it's a good indication you need to think more about why you're writing the function in the first place.

Return Values

Functions can (and should) be thought of as little self-contained programs. They take some input in the form of parameters and return some result. Parameters are, of course, optional. Return values, however, are not optional, from a Python internals perspective. Even if you try to create a function that doesn't return a value, you can't. If a function would otherwise not return a value, the Python interpreter "forces it" to return None. Don't believe me? Test out the following yourself:

❯ python3
Python 3.7.0 (default, Jul 23 2018, 20:22:55)
[Clang 9.1.0 (clang-902.0.39.2)] on darwin
Type "help", "copyright", "credits" or "license"for more information.
>>> def add(a, b):
...   print(a + b)
...
>>> b = add(1, 2)
3
>>> b
>>> b is None
True

You'll see that the value of b really is None. So, even if you write a function with no return statement, it's still going to return something. And it should return something. After all, it's a little program, right? How useful are programs that produce no output, not even an indication of whether they executed correctly? But most importantly, how would you test such a program?

I'll even go so far as to make the following statement: every function should return a useful value, even if only for testability purposes. Code that you write should be tested (that's not up for debate). Just think of how gnarly testing the add function above would be (hint: you'd have to redirect I/O and things go south from there quickly). Also, returning a value allows for method chaining, a concept that allows us to write code like this:

with open('foo.txt', 'r') as input_file:
    for line in input_file:
        if line.strip().lower().endswith('cat'):
            # ... do something useful with these lines

The line if line.strip().lower().endswith('cat'): works because each of the string methods (strip(), lower(), endswith()) return a string as the result of calling the function.

Here are some common reasons people give when asked why a given function they wrote doesn't return a value:

"All it does is [some I/O related thing like saving a value to a database]. I can't return anything useful."

I disagree. The function can return True if the operation completed successfully.

"We modify one of the parameters in place, using it like a reference parameter."

Two points, here. First, do your best to avoid this practice. For others, providing something as an argument to your function only to find that it has been changed can be surprising in the best case and downright dangerous in the worst. Instead, much like the string methods, prefer returning a new instance of the parameter with the changes applied to it. Even when this isn't feasible because making a copy of some parameter is prohibitively expensive, you can still fall back to the old "Return True if the operation completed successfully" suggestion.
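
As a small sketch of that preference (an example of my own, not from the article):

```python
def with_tax(prices, rate):
    """Return a new list of *prices* with *rate* tax applied.

    The input list is left untouched, mirroring how the str methods
    return new strings rather than modifying anything in place.
    """
    return [round(price * (1 + rate), 2) for price in prices]
```

The caller's list is never mutated, so there are no surprises, and the return value gives tests something concrete to assert on.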

"I need to return multiple values. There is no single value I could return that would make sense."

This is a bit of a straw-man argument, but I have heard it. The answer, of course, is to do exactly what the author wanted to do but didn't know how to do: use a tuple to return more than one value.
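
For instance (an illustrative example of my own):

```python
def min_max_mean(numbers):
    """Return the minimum, maximum, and mean of *numbers* as a tuple."""
    return min(numbers), max(numbers), sum(numbers) / len(numbers)


# Tuple unpacking at the call site gives each value a name again.
low, high, average = min_max_mean([3, 1, 4, 1, 5])
```

The comma-separated return expression builds the tuple implicitly, and callers who only want one of the values can simply ignore the rest.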

And perhaps the most compelling argument for always returning a useful value is that callers are always free to ignore them. In short, returning a value from a function is almost certainly a good idea and very unlikely to break anything, even in existing code bases.

Function Length

I've said a number of times that I'm pretty dumb. I can only hold about 3 things in my head at once. If you make me read a 200 line function and ask what it does, my eyes are likely to glaze over after about 10 seconds. The length of a function directly affects readability and, thus, maintainability. So keep your functions short. 50 lines is a totally arbitrary number that seemed reasonable to me. Most functions you write will (hopefully) be quite a bit shorter.

If a function is following the Single Responsibility Principle, it is likely to be quite short. If it is pure or idempotent (discussed below), it is also likely to be short. These ideas all work in concert together to produce good, clean code.

So what do you do if a function is too long? REFACTOR! Refactoring is something you probably do all the time, even if the term isn't familiar to you. It simply means changing a program's structure without changing its behavior. So extracting a few lines of code from a long function and turning them into a function of their own is a type of refactoring. It also happens to be the fastest and most common way to shorten a long function in a productive way. And since you're giving all those new functions appropriate names, the resulting code reads much more easily. I could write a whole book on refactoring (in fact, it's been done many times) and won't go into specifics here. Just know that if you have a function that's too long, the way to fix it is through refactoring.

Idempotency and Functional Purity

The title of this subsection may sound a bit intimidating, but the concepts are simple. An idempotent function always returns the same value given the same set of arguments, regardless of how many times it is called. The result does not depend on non-local variables, the mutability of arguments, or data from any I/O streams. The following add_three(number) function is idempotent:

def add_three(number):
    """Return *number* + 3."""
    return number + 3

No matter how many times one calls add_three(7), the answer will always be 10. Here's a different take on the function that is not idempotent:

def add_three():
    """Return 3 + the number entered by the user."""
    number = int(input('Enter a number: '))
    return number + 3

This admittedly contrived example is not idempotent because the return value of the function depends on I/O, namely the number entered by the user. It's clearly not true that every call to add_three() will return the same value. If it is called twice, the user could enter 3 the first time and 7 the second, making the call to add_three() return 6 and 10, respectively.

A real-world example of idempotency is hitting the "up" button in front of an elevator. The first time it's pushed, the elevator is "notified" that you want to go up. Because pressing the button is idempotent, pressing it over and over again is harmless. The result is always the same.

Why is idempotency important?

Testability and maintainability. Idempotent functions are easy to test because they are guaranteed to always return the same result when called with the same arguments. Testing is simply a matter of checking that various calls to the function return the expected values. What's more, these tests will be fast, an important and often overlooked issue in unit testing. And refactoring when dealing with idempotent functions is a breeze: no matter how you change your code outside the function, the result of calling it with the same arguments will always be the same.
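
To make that concrete, here's what a test for the idempotent add_three(number) from above could look like (a sketch using plain asserts; in practice you'd likely run this under pytest):

```python
def add_three(number):
    """Return *number* + 3."""
    return number + 3


def test_add_three():
    # Pure input/output checks: no setup, no mocks, no teardown.
    assert add_three(7) == 10
    assert add_three(-3) == 0
    # Calling it again with the same argument gives the same answer.
    assert add_three(7) == add_three(7)
```

Contrast this with the input()-based version, which you could only test by redirecting STDIN.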

What is a "pure" function?

In functional programming, a function is considered pure if it is both idempotent and has no observable side effects. Remember, a function is idempotent if it always returns the same value for a given set of arguments. Nothing external to the function can be used to compute that value. However, that doesn't mean the function can't affect things like non-local variables or I/O streams. For example, if the idempotent version of add_three(number) above printed the result before returning it, it would still be considered idempotent, because while it accessed an I/O stream, that access had no bearing on the value returned from the function. The call to print() is simply a side effect: some interaction with the rest of the program or the system itself aside from returning a value.

Let's take our add_three(number) example one step further. We can write the following snippet of code to determine how many times add_three(number) was called:

add_three_calls = 0


def add_three(number):
    """Return *number* + 3."""
    global add_three_calls
    print(f'Returning {number + 3}')
    add_three_calls += 1
    return number + 3


def num_calls():
    """Return the number of times *add_three* was called."""
    return add_three_calls

We're now printing to the console (a side effect) and modifying a non-local variable (another side effect), but since neither of these affect the value returned by the function, it is still idempotent.

A pure function has no side effects. Not only does it not use any "outside data" to compute its value, it has no interaction with the rest of the system/program other than computing and returning said value. Thus while our new add_three(number) definition is still idempotent, it is no longer pure.

Pure functions do not have logging statements or print() calls. They do not make use of database or internet connections. They don't access or modify non-local variables. And they don't call any other non-pure functions.

In short, they are incapable of what Einstein called "spooky action at a distance" (in a Computer Science setting). They don't modify the rest of the program or system in any way. In imperative programming (the kind you're doing when you write Python code), they are the safest functions of all. They are eminently testable and maintainable and, even more so than mere idempotent functions, testing them is guaranteed to be basically as fast as executing them. And the tests themselves are simple: there are no database connections or other external resources to mock, no setup code required, and nothing to clean up afterwards.

To be clear, idempotency and purity are aspirational, not required. That is, we'd love to only write pure or idempotent functions because of the benefits mentioned, but that isn't always possible. The key, though, is that we naturally begin to arrange our code to isolate side effects and external dependencies. This has the effect of making every line of code we write easier to test, even if we're not always writing pure or idempotent functions.

Summing Up

So that's it. The secret to writing good functions is not a secret at all. It just involves following a number of established best-practices and rules-of-thumb. I hope you found this article helpful. Now go forth and tell your friends! Let's all agree to just always write great code in all cases :). Or at least do our best not to put more "bad" code into the world. I'd be able to live with that...

Made With Mu: Allez Mu!


Check out Girls can Code!, programming workshops for girls based in France. Guess what? They use Mu!

I was contacted by Antoine Pietri, one of the organisers of the event, who tells me,

I went to Nicholas’ presentation of Micro:bits at PyParis 2017, and it struck me as an excellent learning tool for the Girls Can Code! summer camp we organize at Prologin.

We decided to use Micro:bits with MicroPython for this edition, in the Paris and Lyon camps, so I naturally thought of using the Mu editor and its Micro:bit mode for that. I followed the development of Mu in the last couple of years and saw it slowly grow into a fully-fledged general purpose editor, so I suggested that we use it for the whole camp.

We were thrilled at how easy and useful it was as a learning tool for all our activities, ranging from Hello World to writing games with PyGame. We will definitely use it in the future, probably even when teaching more experienced students.

This is great to hear. Back in 2017 I was invited to give a keynote address at PyParis. It proved to be a fruitful trip: not only was the conference a wonderful experience and I made lots of new friends, but I was able to meet French computing educators like Antoine.

This proved to be a key moment in the development of Mu as I was able to gather feedback from users for whom English is an additional language. Our discussions led to the following conclusion about non-English speaking learners: the biggest barrier to learning Python isn’t that its keywords are in English, but that the tools and resources for programming in Python are.

Put simply, “je voudrais Mu en français”.

Upon my return to the UK, I immediately investigated Python’s robust capabilities in internationalisation (also known as “i18n”) and this has led to Mu being available in ten human languages (including French) with more on the way.

I hope you’ll be hearing more about Mu’s use in a French speaking context very soon. I also welcome non-English contributions to this blog too. Mu is definitely cosmopolitan in outlook and we want to encourage and help beginner programmers no matter their native tongue. If you have a story to tell about how you use Mu in your own region and language, please don’t hesitate to get in touch.

Allez Mu!

Codementor: Hitchhiker's guide to Exploratory Data Analysis

Hitchhiker's guide to Exploratory Data Analysis is a complete guide to get you started in the field of Data Science. Learn about Python libraries and how to architect questions to get conclusive results from the data.

Matthew Rocklin: So you want to contribute to open source


Welcome new open source contributor!

I appreciated receiving the e-mail where you said you were excited about getting into open source and were particularly interested in working on a project that I maintain. This post has a few thoughts on the topic.

First, please forgive me for sending you to this post rather than responding with a personal e-mail. Your situation is common today, so I thought I’d write up thoughts in a public place, rather than respond personally.

This post has two parts:

  1. Some pragmatic steps on how to get started
  2. A personal recommendation to think twice about where you focus your time

Look for good first issues on Github

Most open source software (OSS) projects have a “Good first issue” label on their Github issue tracker. Here is a screenshot of how to find the “good first issue” label on the Pandas project:

(note that this may be named something else like “Easy to fix”)

This contains a list of issues that are important, but also good introductions for new contributors like yourself. I recommend looking through that list to see if something interests you. The issue should include a clear description of the problem, and some suggestions on how it might be resolved. If you need to ask questions, you can make an account on Github and ask them there.

It is very common for people to ask questions on Github. We understand that this may cause some anxiety your first time (I always find it really hard to introduce myself to a new community), but a “Good first issue” issue is a safe place to get started. People expect newcomers to show up there.

Read developer guidelines

Many projects will specify developer guidelines like how to check out a codebase, run tests, write and style code, formulate a pull request, and so on. This is usually in their documentation under a label like “Developer guidelines”, “Developer docs”, or “Contributing”.

If you do a web search for "pandas developer docs", this page is the first hit: pandas.pydata.org/pandas-docs/stable/contributing.html

These pages can be long, but they have a lot of good information. Reading through them is a good learning experience.

But this may not be as fun as you think

Open source software is a field of great public interest today, but day-to-day it may be more tedious than you expect. Most OSS work is dull. Maintainers spend most of their time discussing grammatical rules for documentation, discovering obscure compiler bugs, or handling e-mails. They spend very little time inventing cool algorithms. You may notice this yourself as you look through the issue list. What fraction of them excite you?

I say this not to discourage you (indeed, please come help!) but just as a warning. Many people leave OSS pretty quickly. This can be for many reasons, but lack of interest is certainly one of them.

The desire to maintain software is rarely enough to keep people engaged in open source long term

So work on projects that are personal to you

You are more than a programmer. You already have life experience and skills that can benefit your programming, and you have life needs that programming can enhance.

The people who stay with an OSS project are often people who need that project for something else.

  • A musician may contribute to a composition or recording software that they use at work
  • A teacher may contribute to educational software to help their students
  • Community organizers may contribute to geospatial software to help them plan activities or understand local issues.

So my biggest piece of advice to you is not to try to contribute to a package because it is popular or exciting, but rather to wait until you run into a problem with a piece of software that you use daily, and then contribute a fix to that project. It can be more rewarding to contribute to something that is already in your life, and as an active user you already have a ton of experience and a place in the community. You are much more likely to be successful contributing to a project if you have been using it for a long time.

Davy Wybiral: LoRa IoT Network Programming

Hey everyone, so I just got some LoRa modules from REYAX to experiment with long range network applications and these things are so cool! So far I've made a long range security alarm, a button to water plants on the other side of my property, and some bridge code to interact with IP and BLE networks.

Just thought I'd do a quick video update on this stuff:


Dusty Phillips: Should languages be designed with editor support in mind?

One of many things I love about Python is how whitespace is an integral part of the language. Python was the first popular programming language designed with the idea that “code is read much more often than it is written.” Forcing authors to indent code in a maintainable fashion seemed a brilliant idea when I first encountered Python fifteen years ago. The lack of braces scattered throughout the code made for easier reading.

Talk Python to Me: #181 30 amazing Python projects

Listeners often tell me one of the really valuable aspects of this podcast is the packages and libraries that they learn about and start using in their projects from guests and myself. On this episode, I've invited Brian Okken (my co-host over on Python Bytes) to take this to 11. We are going to cover the top 30 Python packages from the past year (metric to be determined later in the show).

NumFOCUS: NumFOCUS Announces NVIDIA as Gold Sponsor

Peter Bengtsson: Switching from AWS S3 (boto3) to Google Cloud Storage (google-cloud-storage) in Python


I'm in the midst of rewriting a big app that currently uses AWS S3 and will soon be switched over to Google Cloud Storage. This blog post is a rough attempt to log various activities in both Python libraries:

Disclaimer: I'm manually copying these snippets from a real project and I have to manually scrub the code clean of unimportant quirks, hacks, and other unrelated things that would just add noise.

Install

boto3

$ pip install boto3
$ emacs ~/.aws/credentials

google-cloud-storage

$ pip install google-cloud-storage
$ cat ./google_service_account.json

Note: You need to create a service account and then that gives you a .json file which you download and make sure you pass its path when you create a client.

I suspect there are more/other ways to do this with environment variables alone but I haven't got there yet.
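
One likely candidate (my note, not verified in the original post): the library supports Application Default Credentials, so pointing the GOOGLE_APPLICATION_CREDENTIALS environment variable at the service account file should let you construct a client without an explicit path:

```shell
$ export GOOGLE_APPLICATION_CREDENTIALS=./google_service_account.json
$ python -c "from google.cloud import storage; storage.Client()"
```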

Making a "client"

boto3

Note: there are easier shortcuts for this, but with this pattern you have full control over things like read_timeout, connect_timeout, etc. via that config_params keyword.

import boto3
from botocore.config import Config


def get_s3_client(region_name=None, **config_params):
    options = {"config": Config(**config_params)}
    if region_name:
        options["region_name"] = region_name
    session = boto3.session.Session()
    return session.client("s3", **options)

google-cloud-storage

from google.cloud import storage


def get_gcs_client():
    return storage.Client.from_service_account_json(
        settings.GOOGLE_APPLICATION_CREDENTIALS_PATH
    )

Checking if a bucket exists and if you have access to it

boto3 (for s3_client here, see above)

from botocore.exceptions import ClientError, EndpointConnectionError

try:
    s3_client.head_bucket(Bucket=bucket_name)
except ClientError as exception:
    if exception.response["Error"]["Code"] in ("403", "404"):
        raise BucketHardError(
            f"Unable to connect to bucket={bucket_name!r} "
            f"ClientError ({exception.response!r})"
        )
    else:
        raise
except EndpointConnectionError:
    raise BucketSoftError(
        f"Unable to connect to bucket={bucket_name!r} "
        f"EndpointConnectionError"
    )
else:
    print("It exists and we have access to it.")

google-cloud-storage

from google.api_core.exceptions import BadRequest

try:
    gcs_client.get_bucket(bucket_name)
except BadRequest as exception:
    raise BucketHardError(
        f"Unable to connect to bucket={bucket_name!r}, "
        f"because bucket not found due to {exception}"
    )
else:
    print("It exists and we have access to it.")

Checking if an object exists

boto3

from botocore.exceptions import ClientError


def key_existing(client, bucket_name, key):
    """Return a tuple of (
        key's size if it exists or 0,
        S3 key metadata
    )

    If the object doesn't exist, return None for the metadata.
    """
    try:
        response = client.head_object(Bucket=bucket_name, Key=key)
        return response["ContentLength"], response.get("Metadata")
    except ClientError as exception:
        if exception.response["Error"]["Code"] == "404":
            return 0, None
        raise

Note: if you do this a lot and often find that the object doesn't exist, then using list_objects_v2 is probably faster.

google-cloud-storage

def key_existing(client, bucket_name, key):
    """Return a tuple of (
        key's size if it exists or 0,
        S3 key metadata
    )

    If the object doesn't exist, return None for the metadata.
    """
    bucket = client.get_bucket(bucket_name)
    blob = bucket.get_blob(key)
    if blob:
        return blob.size, blob.metadata
    return 0, None

Uploading a file with a special Content-Encoding

Note: You have to use your imagination with regards to the source. In this example, I'm assuming that the source is a file on disk and that it might have already been compressed with gzip.

boto3

def upload(file_path, bucket_name, key_name, metadata=None, compressed=False):
    content_type = get_key_content_type(key_name)
    metadata = metadata or {}

    # boto3 will raise a botocore.exceptions.ParamValidationError
    # error if you try to do something like:
    #
    #   s3.put_object(Bucket=..., Key=..., Body=..., ContentEncoding=None)
    #
    # ...because apparently 'NoneType' is not a valid type.
    # We /could/ set it to something like '' but that feels like an
    # actual value/opinion. Better just avoid if it's not something
    # really real.
    extras = {}
    if content_type:
        extras["ContentType"] = content_type
    if compressed:
        extras["ContentEncoding"] = "gzip"
    if metadata:
        extras["Metadata"] = metadata

    with open(file_path, "rb") as f:
        s3_client.put_object(Bucket=bucket_name, Key=key_name, Body=f, **extras)

google-cloud-storage

def upload(file_path, bucket_name, key_name, metadata=None, compressed=False):
    content_type = get_key_content_type(key_name)
    metadata = metadata or {}

    bucket = gcs_client.get_bucket(bucket_name)
    blob = bucket.blob(key_name)
    if content_type:
        blob.content_type = content_type
    if compressed:
        blob.content_encoding = "gzip"
    blob.metadata = metadata

    with open(file_path, "rb") as f:
        blob.upload_from_file(f)

Downloading and uncompressing a gzipped object

boto3

from io import BytesIO
from gzip import GzipFile

from botocore.exceptions import ClientError

from .utils import iter_lines


def get_stream(bucket_name, key_name):
    try:
        response = s3_client.get_object(Bucket=bucket_name, Key=key_name)
    except ClientError as exception:
        if exception.response["Error"]["Code"] == "NoSuchKey":
            raise KeyHardError("key not in bucket")
        raise

    stream = response["Body"]
    # But if the content encoding is gzip we have to re-wrap the stream.
    if response.get("ContentEncoding") == "gzip":
        body = response["Body"].read()
        bytestream = BytesIO(body)
        stream = GzipFile(None, "rb", fileobj=bytestream)

    for line in iter_lines(stream):
        yield line.decode("utf-8")

google-cloud-storage

from io import BytesIO

from .utils import iter_lines


def get_stream(bucket_name, key_name):
    bucket = gcs_client.get_bucket(bucket_name)
    blob = bucket.get_blob(key_name)
    if blob is None:
        raise KeyHardError("key not in bucket")

    bytestream = BytesIO()
    blob.download_to_file(bytestream)
    bytestream.seek(0)

    for line in iter_lines(bytestream):
        yield line.decode("utf-8")

Note that here blob.download_to_file works a bit like requests.get() in that it automatically notices the Content-Encoding metadata and does the gunzip on the fly.

Conclusion

It's not fair to compare them on style because I think boto3 came out of boto which probably started back in the day when Google was just web search and web emails.

I wanted to include a section about how to unit test against these. Especially how to mock them. But what I had for a draft was getting ugly. Yes, it works for the testing needs I have in my app but it's very personal taste (aka. appropriate for the context) and admittedly quite messy.

Django Weblog: Support framework of a strong relationship. 30% off PyCharm and 100% to Django


Support framework of a strong relationship. 30% off PyCharm and 100% to Django

In summer 2017, JetBrains PyCharm partnered with the Django Software Foundation for the second year in a row to generate a big boost to the Django fundraising campaign. The campaign was a huge success. We raised a total of $66,094 USD for the Django Software Foundation!

This year we really hope to repeat the success of the previous year. For the next three weeks, buy a new individual license for PyCharm Professional Edition at 30% OFF, and all the money raised will go to the DSF’s general fundraising and the Django Fellowship program.

Promotion details

Up until November 1, you can effectively donate to Django by purchasing a New Individual PyCharm Professional annual subscription at 30% off. It’s very simple:

  1. When buying a new annual PyCharm subscription in our e-store, on the checkout page, click “Have a discount code?”.
  2. Enter the following 30% discount promo code:
    IDONATETODJANGO

Alternatively, just click this shortcut link to go to the e-store with the code automatically applied

  1. Fill in the other required fields on the page and click the “Place order” button.

All of the income from this promotion code will go to the DSF fundraising campaign 2018 – not just the profits, but actually the entire sales amount including taxes, transaction fees – everything! The campaign will help the DSF to maintain the healthy state of the Django project and help them continue contributing to their different outreach and diversity programs.

Read more details on the special promotion page.

“Django has grown to be a world-class web framework, and coupled with PyCharm’s Django support, we can give tremendous developer productivity,” says Frank Wiles, DSF President. “Last year JetBrains was a great partner for us in support of raising money for the Django Software Foundation, on behalf of the community, I would like to extend our deepest thanks for their generous help. Together we hope to make this a yearly event!”

If you have any questions, get in touch with Django at fundraising@djangoproject.com or JetBrains at sales@jetbrains.com.
