Channel: Planet Python

Codementor: Wanted: Microsoft Dynamics data in Python scripts

There is plenty of exciting stuff in Microsoft Dynamics that you can use for data analysis or data visualization projects in large organizations. So how do you get your hands on it using a Python script? Good news, it is not that complicated!

Roberto Alsina: Airflow By Example (II)


Second twitter thread about following a simple example to learn Apache Airflow.


PyCoder’s Weekly: Issue #408 (Feb. 18, 2020)


#408 – FEBRUARY 18, 2020



Finding the Perfect Python Code Editor

Find your perfect Python development setup with this review of Python IDEs and code editors. Writing Python using IDLE or the Python REPL is great for simple things, but not ideal for larger programming projects. With this course you’ll get an overview of the most common Python coding environments to help you make an informed decision.
REAL PYTHON video

Overloading Functions in Python

Python does not natively support function overloading (having multiple functions with the same name). See how you can implement this functionality using common language constructs like decorators and dictionaries. Related discussion on Hacker News.
ARPIT BHAYANI
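
As a taste of the approach the article describes, here is a minimal, illustrative sketch (not the author’s exact implementation) of overloading by registering functions in a dictionary keyed by name and argument count, via a decorator:

registry = {}

def overload(func):
    # Register this definition under (function name, number of parameters).
    registry[(func.__name__, func.__code__.co_argcount)] = func

    def dispatcher(*args):
        # Pick the registered function whose parameter count matches the call.
        target = registry.get((func.__name__, len(args)))
        if target is None:
            raise TypeError(f"no overload of {func.__name__}() takes {len(args)} arguments")
        return target(*args)

    return dispatcher

@overload
def area(radius):
    return 3.14159 * radius ** 2

@overload
def area(width, height):
    return width * height

print(area(2))     # 12.56636 -> one-argument overload
print(area(3, 4))  # 12       -> two-argument overload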

Python Developers Are in Demand on Vettery


Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY sponsor

PEP 614 (Draft): Relaxing Grammar Restrictions on Decorators

“Python currently requires that all decorators consist of a dotted name, optionally followed by a single call. This PEP proposes removing these limitations and allowing decorators to be any valid expression.” For example, this would become a valid decoration: @buttons[1].clicked.connect
PYTHON.ORG
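
Roughly, the change looks like this; the sketch below uses hypothetical FakeButton/FakeSignal stand-ins so it runs without a GUI toolkit:

class FakeSignal:
    def connect(self, func):
        print(f"connected {func.__name__}")
        return func

class FakeButton:
    def __init__(self):
        self.clicked = FakeSignal()

buttons = [FakeButton(), FakeButton()]

# Today the decorator grammar only allows a dotted name (optionally called),
# so the expression has to be bound to a plain name first:
button_clicked = buttons[1].clicked.connect

@button_clicked
def on_click():
    pass

# Under PEP 614 the workaround would no longer be needed; this would parse:
# @buttons[1].clicked.connect
# def on_click():
#     pass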

Building Good Python Tests

A collection of testing maxims, tips, and gotchas, with a few pytest-specific notes. Things to do and not to do when it comes to writing automated tests.
CHRIS NEJAME • Shared by Chris NeJame

Types at the Edges in Python

Adding more strict typing around the edges of a Python system for better error messages and design, using Pydantic and mypy. Interesting read!
STEVE BRAZIER

Python 3.9 StatsProfile

The author of the profiling API improvements coming to Python 3.9 demonstrates the feature and explains how it was added to CPython.
DANIEL OLSHANSKY

Robots and Generative Art and Python, Oh My!

How to make cool looking plotter art with NumPy, SciPy, and Matplotlib.
GEOFFREY BRADWAY

Python Jobs

Senior Python/Django Engineer (London, UK)

Zego

Python Developer (Malta)

Gaming Innovation Group

Senior Python Software Engineer (London, UK)

Tessian

Senior Backend Engineer (Denver, CO)

CyberGRX

Senior Software Developer (Vancouver, BC, Canada)

AbCellera

More Python Jobs >>>

Articles & Tutorials

Python Community Interview With Brett Slatkin

Brett Slatkin is a principal software engineer at Google and the author of the Python programming book Effective Python. Join us as we discuss Brett’s experience working with Python at Google, refactoring, and the challenges he faced when writing the second edition of his book.
REAL PYTHON

My Unpopular Opinion About Black Code Formatter

“In this post, I will try to gather all my thoughts on the topic of automatic code formatting and why I personally don’t like this approach.”
LUMINOUSMEN.COM

How to Build a Blockchain in Python


Blockchain, the system behind Bitcoin, is immutable, unhackable, persistent and distributed, and has many potential applications. Check out ActiveState’s tutorial on how to build a blockchain in Python and Flask using a pre-built runtime environment →
ACTIVESTATE sponsor

Uniquely Managing Test Execution Resources Using WebSockets

Learn about managing resources for test execution, while building an asynchronous WebSocket client-server application that tracks them using Python and Sanic.
CRISTIAN MEDINA

Refactoring and Asking for Forgiveness

“Recently, I had a great interaction with one of my coworkers that I think is worth sharing, with the hope you may learn a bit about refactoring and Python.”
CHRIS MAY

Guide to Python’s Newer String Format Techniques

In the last tutorial in this series, you learned how to format string data using the string modulo operator. In this tutorial, you’ll see two more items to add to your Python string formatting toolkit. You’ll learn about Python’s string format method and the formatted string literal, or f-string.
REAL PYTHON

Full Text Search With Postgres and Django [2017]

“In this post I will walk through the process of building decent search functionality for a small to medium sized website using Django and Postgres.”
SCOTT CZEPIEL

Python Tools for Record Linking and Fuzzy Matching

Useful Python tools for linking record sets and fuzzy matching on text fields. These concepts can also be used to deduplicate data.
CHRIS MOFFITT
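
The article focuses on dedicated tools; just to illustrate the underlying idea with nothing but the standard library, difflib can score how similar two text fields are (the records and the 0.6 threshold below are made up):

from difflib import SequenceMatcher

def similarity(a: str, b: str) -> float:
    # Ratio in [0, 1]; 1.0 means the strings are identical.
    return SequenceMatcher(None, a.lower(), b.lower()).ratio()

records_a = ["Acme Corp.", "Globex Corporation"]
records_b = ["ACME Corporation", "Initech"]

for left in records_a:
    for right in records_b:
        score = similarity(left, right)
        if score > 0.6:  # illustrative threshold for a "probable match"
            print(f"{left!r} ~ {right!r} (score={score:.2f})")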

Classify Valentine’s Day Texts With TensorFlow and Twilio

Use TensorFlow and Machine Learning to classify Twilio texts into two categories: “loves me” and “loves me not.”
LIZZIE SIEGLE • Shared by Lizzie Siegle

Tour of Python Itertools

Explore the itertools and more_itertools Python libraries and see how to leverage them for data processing.
MARTIN HEINZ • Shared by Martin Heinz
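
For a quick, illustrative taste of what these libraries offer (itertools is in the standard library; more_itertools is a separate package, not used here):

from itertools import chain, groupby, islice

# Flatten several data sources into one lazy stream.
events = chain([1, 2, 2], [3, 3, 3], [4])

# Group consecutive equal items and count each run.
counts = [(value, len(list(group))) for value, group in groupby(events)]
print(counts)  # [(1, 1), (2, 2), (3, 3), (4, 1)]

# Lazily take only the first two results.
print(list(islice(iter(counts), 2)))  # [(1, 1), (2, 2)]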

Python Static Analysis Tools

Find and fix the bugs and code smells in your Python code with the popular tools for analyzing code.
LUMINOUSMEN.COM

Getting the Most Out of Python Collections

A guide to comprehensions, generators and useful functions and classes.
NICK THAPEN

Blackfire Profiler Public Beta Open—Get Started in Minutes

Blackfire Profiler now supports Python, through a Public Beta. Profile Python code with Blackfire’s intuitive developer experience and appealing user interface. Spot bottlenecks in your code, and compare code iterations profiles.
BLACKFIRE sponsor

Projects & Code

Deadsnakes PPA Builds for Debian in Docker

The Deadsnakes PPA project builds older and newer Python versions not found on a specific Ubuntu release. Originally based on the Debian source packages, they can still be built on Debian and not just on Ubuntu.
GITHUB.COM/JHERMANN • Shared by Jürgen Hermann

Events

PyCon Namibia 2020

February 18 to February 21, 2020
PYCON.ORG

Python Northwest

February 20, 2020
PYNW.ORG.UK

PyLadies Dublin

February 20, 2020
PYLADIES.COM

Open Source Festival

February 20 to February 23, 2020
OSCAFRICA.ORG

PyCon Belarus 2020

February 21 to February 23, 2020
PYCON.ORG


Happy Pythoning!
This was PyCoder’s Weekly Issue #408.


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

PyCon: The Hatchery Returns with Nine Events!

Since its start in 2018, the PyCon US Hatchery Program has become a fundamental part of how PyCon as a conference adapts to best serve the Python community as it grows and changes with time. In keeping with that focus on innovation, the Hatchery Program itself has continued to evolve.
Initially we wanted to gauge community interest for this type of program, and in 2018 we launched our first trial program to learn more about what kind of events the community might propose. At the end of that inaugural program, we accepted the PyCon Charlas as our first Hatchery event and it has grown into a permanent track offered at PyCon US.
PyCon US 2019 presented three new hatchery programs, Mentored Sprints, the Maintainers Summit, and the Art of Python. Those events were quite different from one another, but they all foreshadowed trends we are seeing now.
For PyCon US 2020 there were a dozen proposals, which set us thinking about how we could accommodate as many events as possible. In addition to the return of last year's successful offerings, we received several other proposals that seem to show three general directions: summits, code events, and artistic presentations. In response to those trends, we decided to tweak the structure a bit, and starting in 2020 the Hatchery will reflect those three broad areas.
While we will always be open to new and innovative proposals, this framework will allow us to better plan for coming years. Within the limits of our venue's available space, thinking in terms of these categories will make us better able to provide resources for as many events as possible. We're excited to see this year's hatchlings!
Here are the Hatchery areas and their events for 2020:

The Hatchery Codes

Events where members of the community come together to teach, learn, and practice the art of coding in Python.
This year there will be three events in this area:

Mentored Sprints for Diverse Beginners

A newcomer’s introduction to contributing to an open source project. These are mentored sprints for individuals from underrepresented groups who want to start contributing to Python projects. This event will provide a supportive, friendly, and safe environment for all the attendees and partner open source projects. To achieve this goal, we are seeking to work with a number of Python projects and their maintainers interested in providing mentorship to these individuals. In return, we will provide guidance and advice on how to prepare the projects for the day and to better serve a diverse range of contributors.
To learn more about how to take part in this event see https://us.pycon.org/2020/hatchery/mentoredsprints/.

Trans*Code Hackday

Trans*Code aims to help draw attention to trans/non-binary issues and community through a topic-focused hackday. Anyone is welcome to participate, whether they work in tech or not. This event is free of expectations and free of schedule. You can come to present an idea, listen to other ideas, or just get together with other participants to explore a new technology or brainstorm something.

Beginners Data Workshop for Minorities

Would you like to learn to code but don’t know where to start? Learning how to code can seem like an impossible task so we’ve decided to put on a workshop to show 60 beginners how it can be done and get them excited about the world of technology! Join us on 18th April 2020 for a workshop where you’ll learn the basics of programming in Python, as well as how to use tools such as Jupyter Notebook to analyze data.

The Hatchery Summits

There are many smaller sub-communities within the larger Python community that struggle to find a dedicated time and space to meet and discuss their issues. PyCon, as one of the largest gatherings of Python folk in the world, seems like a great place to offer this option.
This year there will be four summits as part of the Hatchery:

Regional PyCon Organizers' Summit

The Regional Conference Organizers’ Summit is a place for people who run or are interested in running Python conferences to gather to share knowledge, seek advice, and work together to help build better Python conferences throughout the world. The Summit is a half-day “unconference”-style event. That means we aren’t calling for prepared presentations. Instead we’ll have moderated discussions where anyone attending can contribute: from experienced organizers offering advice, to new and interested organizers asking questions about where to start. To guide the day, we will have a set agenda, so you can prepare questions, or come along with ideas.

Python Trainers Summit

This summit seeks to forge connections among trainers from all practices, to formalize their community of practice, and to connect them with others working in education contexts. It will provide the space and platform for professional trainers to engage with more formal education practices, and for those educators to connect with the needs of corporate and professional development audiences.
To learn more or submit a talk proposal visit https://us.pycon.org/2020/hatchery/trainers/.

Maintainers Summit

Python is much more than a programming language. It is a vibrant community made up of individuals with diverse skills and backgrounds. Maintainers Summit at PyCon USA is where the community comes together to discuss and build a community of practice for Python project maintainers and key contributors. Come and learn from your peers how to maintain and develop sustainable projects and thriving communities.
We are inviting Python community members to get on stage and share their insight and experience with the PyCon 2020 audience. Talk proposals from first-time speakers and Pythonistas from the underrepresented groups within the tech community are strongly encouraged.

Python Packaging Summit

The Python Packaging Summit is an event primarily for people contributing to the Python packaging ecosystem of any interpreter or distribution (CPython, PyPy, Conda, and so on) to share information, discuss shared problems, and, hopefully, come up with solutions to tackle them. These issues might relate to any of the Python packaging projects, regardless of whether they are hosted under the PyPA umbrella. The Summit focuses on discussion and consensus-seeking around the problems we face and how we should solve them.
We welcome developers who maintain any of the Python packaging tools, as well as active contributors to those tools.
If you want to tackle some problem in the world of Python packaging please visit https://us.pycon.org/2020/hatchery/packaging/ to learn more about how to submit your topic, and sign up to attend.

The Hatchery Presents

Finally, it should come as no surprise to anyone that the Python world is full of creative people, and that PyCon is a natural place for them to express that creativity. Building on the success of The Art of Python last year, this year we will have two events with an artistic flair:

The Art of Python

The event this year will be roughly 2 hours, the evening of Friday, April 17th. The first half will be composed of five 5-15 minute performances. The second half will involve creative exercises to inspire and workshop new pieces. There are lots of performances and venues for code that creates art; Art of Python, however, is a venue for technologists to create creative works from their experience of working in technology. Remember, the goal of this festival is to impart perspective about the emotional and challenging work of programming Python through the medium of entertainment.

No Signal: Python for Computational Arts

The purpose of this project is to showcase the artistic possibilities that exist when creative coders leverage Python and computer science. Making art with code is an engaging way for beginner technologists to start learning new skills, and also allows folks with existing skills to express themselves in new ways. Python has become a prominent language for scripting in game engines and 3D creation suites, graphic design, sound design, and circuitry; along with its active user community and so many additional add-on libraries, Python is an approachable tool to use for artistic purposes. We hope to give pythonistas, career computer scientists, and all patrons of the conference a chance to see exciting work and projects, and in turn find inspiration for their own projects and learning.
To find out how to submit your project go to https://us.pycon.org/2020/hatchery/computationalarts/.

Michał Bultrowicz: Universal app reload with entr


A useful feature many web frameworks have is auto-reload. Your app is running in the background, you change the code, and the app is restarted with those changes, so you can try them out immediately. What if you wanted that behavior for everything that you’re writing? And without any coding to implement it over and over in every little project?

Kushal Das: Which version of Python are you running?


The title of this post is misleading.

I actually want to ask: which version of Python 3 are you running? Yes, it is a question I have to ask myself based on the projects I am working on. I am sure there are many more people in the world who are in a similar situation.

Just to see, here are all the versions of Python I am running in different places:

  • Python 3.7.3
  • Python 3.5.2
  • Python 3.6.9
  • Python 3.7.4
  • Python 2.7.5
  • Python 3.7.6

What about you?
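
If you want a quick way to check which Python is behind a given command or virtual environment, something like this works (sys is in the standard library):

import sys

print(sys.version)       # full version string, e.g. "3.7.6 (default, ...)"
print(sys.version_info)  # structured form you can compare against

if sys.version_info < (3, 6):
    raise SystemExit("This project needs Python 3.6 or newer")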

Test and Code: 101: Application Security - Anthony Shaw


Application security is best designed into a system from the start.
Anthony Shaw is doing something about it by creating an editor plugin that actually helps you write more secure application code while you are coding.

On today's Test & Code, Anthony and I discuss his security plugin, but also application security in general, as well as other security components you need to consider.

Security is something every team needs to think about, whether you are a single person team, a small startup, or a large corporation.

Anthony and I also discuss where to start if it's just a few of you, or even just one of you.

Topics include:

  • Finding security risks while writing code.
  • What are the risks for your applications.
  • Thinking about attack surfaces.
  • Static and dynamic code analysis.
  • Securing the environment an app is running in.
  • Tools for scanning live sites for vulnerabilities.
  • Secret management.
  • Hashing algorithms.
  • Authentication systems.
  • Anthony's upcoming CPython Internals book.

Special Guest: Anthony Shaw.

Sponsored By:

  • Oxylabs: Visit oxylabs.io/testandcode to find out more about their services and to apply for a free trial of their Next-Generation Residential Proxies.

Support Test & Code: Python Software Testing & Engineering

Links:

  • Python Security - plugin for PyCharm
  • Bandit
  • Hack The Box

Programiz: Python Programming


Catalin George Festila: Python 3.7.5 : The PyQtChart from python Qt5.

PyQtChart is a set of Python bindings for The Qt Company’s Qt Charts library and is implemented as a single module. Let's install this Python package with the pip3 tool:

[mythcat@desk ~]$ pip3 install PyQtChart --user
...
Installing collected packages: PyQtChart
Successfully installed PyQtChart-5.14.0

Let's test with a simple example:

from PyQt5.QtWidgets import QApplication, QMainWindow
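
The post stops right after the import, so here is a hedged guess at a minimal chart it might build, assuming the standard QChart/QLineSeries API from PyQt5.QtChart (the data points are made up):

import sys
from PyQt5.QtWidgets import QApplication, QMainWindow
from PyQt5.QtChart import QChart, QChartView, QLineSeries

app = QApplication(sys.argv)

# A simple line series with a few illustrative points.
series = QLineSeries()
for x, y in [(0, 1), (1, 3), (2, 2), (3, 5)]:
    series.append(x, y)

chart = QChart()
chart.addSeries(series)
chart.createDefaultAxes()
chart.setTitle("Simple line chart")

window = QMainWindow()
window.setCentralWidget(QChartView(chart))
window.resize(480, 320)
window.show()

sys.exit(app.exec_())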

Peter Bengtsson: Build pyenv Python versions on macOS Catalina 10.15


I'm still working on getting pyenv in my bloodstream. It seems like totally the right tool for having different versions of Python available on macOS that don't suddenly break when you run brew upgrade periodically. But everything I tried failed with an error similar to this:

python-build: use openssl from homebrew
python-build: use readline from homebrew
Installing Python-3.7.0...
python-build: use readline from homebrew

BUILD FAILED (OS X 10.15.x using python-build 20XXXXXX)

Inspect or clean up the working tree at /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751
Results logged to /var/folders/mw/0ddksqyn4x18lbwftnc5dg0w0000gn/T/python-build.20190528163135.60751.log

Last 10 log lines:
./Modules/posixmodule.c:5924:9: warning: this function declaration is not a prototype [-Wstrict-prototypes]
    if (openpty(&master_fd, &slave_fd, NULL, NULL, NULL) != 0)
        ^
./Modules/posixmodule.c:6018:11: error: implicit declaration of function 'forkpty' is invalid in C99 [-Werror,-Wimplicit-function-declaration]
    pid = forkpty(&master_fd, NULL, NULL, NULL);
          ^
./Modules/posixmodule.c:6018:11: warning: this function declaration is not a prototype [-Wstrict-prototypes]
2 warnings and 2 errors generated.
make: *** [Modules/posixmodule.o] Error 1
make: *** Waiting for unfinished jobs....

I read through the Troubleshooting FAQ and the "Common build problems" documentation. xcode was up to date and I had all the related brew packages upgraded. Nothing seemed to work.

Until I saw this comment on an open pyenv issue: "Unable to install any Python version on MacOS"

All I had to do was replace 10.14 with 10.15, and now it finally works here on Catalina 10.15. So the magical line was this:

SDKROOT=/Applications/Xcode.app/Contents/Developer/Platforms/MacOSX.platform/Developer/SDKs/MacOSX10.15.sdk \
MACOSX_DEPLOYMENT_TARGET=10.15 \
PYTHON_CONFIGURE_OPTS="--enable-framework" \
pyenv install -v 3.7.6

Hopefully, by blogging about it, you'll find this from Googling and I'll remember it the next time I need it, because it ate 2 hours of precious evening coding time.

Real Python: Null in Python: Understanding Python's NoneType Object


If you have experience with other programming languages, like C or Java, then you’ve probably heard of the concept of null. Many languages use this to represent a pointer that doesn’t point to anything, to denote when a variable is empty, or to mark default parameters that you haven’t yet supplied. null is often defined to be 0 in those languages, but null in Python is different.

Python uses the keyword None to define null objects and variables. While None does serve some of the same purposes as null in other languages, it’s another beast entirely. As the null in Python, None is not defined to be 0 or any other value. In Python, None is an object and a first-class citizen!

In this tutorial, you’ll learn:

  • What None is and how to test for it
  • When and why to use None as a default parameter
  • What None and NoneType mean in your traceback
  • How to use None in type checking
  • How null in Python works under the hood

Free Bonus: Click here to get a Python Cheat Sheet and learn the basics of Python 3, like working with data types, dictionaries, lists, and Python functions.

Understanding Null in Python

None is the value a function returns when there is no return statement in the function:

>>> def has_no_return():
...     pass
...
>>> has_no_return()
>>> print(has_no_return())
None

When you call has_no_return(), there’s no output for you to see. When you print a call to it, however, you’ll see the hidden None it returns.

In fact, None so frequently appears as a return value that the Python REPL won’t print None unless you explicitly tell it to:

>>> None
>>> print(None)
None

None by itself has no output, but printing it displays None to the console.

Interestingly, print() itself has no return value. If you try to print a call to print(), then you’ll get None:

>>> print(print("Hello, World!"))
Hello, World!
None

It may look strange, but print(print("...")) shows you the None that the inner print() returns.

None is also often used as a signal for missing or default parameters. For instance, None appears twice in the docs for list.sort:

>>> help(list.sort)
Help on method_descriptor:

sort(...)
    L.sort(key=None, reverse=False) -> None -- stable sort *IN PLACE*

Here, None is the default value for the key parameter as well as the type hint for the return value. The exact output of help can vary from platform to platform. You may get different output when you run this command in your interpreter, but it will be similar.

Using Python’s Null Object None

Often, you’ll use None as part of a comparison. One example is when you need to check and see if some result or parameter is None. Take the result you get from re.match. Did your regular expression match a given string? You’ll see one of two results:

  1. Return a Match object: Your regular expression found a match.
  2. Return a None object: Your regular expression did not find a match.

In the code block below, you’re testing if the pattern "Goodbye" matches a string:

>>> import re
>>> match = re.match(r"Goodbye", "Hello, World!")
>>> if match is None:
...     print("It doesn't match.")
...
It doesn't match.

Here, you use is None to test if the pattern matches the string "Hello, World!". This code block demonstrates an important rule to keep in mind when you’re checking for None:

  • Do use the identity operators is and is not.
  • Do not use the equality operators == and !=.

The equality operators can be fooled when you’re comparing user-defined objects that override them:

>>> class BrokenComparison:
...     def __eq__(self, other):
...         return True
...
>>> b = BrokenComparison()
>>> b == None  # Equality operator
True
>>> b is None  # Identity operator
False

Here, the equality operator == returns the wrong answer. The identity operator is, on the other hand, can’t be fooled because you can’t override it.

Note: For more info on how to compare with None, check out Do’s and Don’ts: Python Programming Recommendations.

None is falsy, which means not None is True. If all you want to know is whether a result is falsy, then a test like the following is sufficient:

>>> some_result = None
>>> if some_result:
...     print("Got a result!")
... else:
...     print("No result.")
...
No result.

The output doesn’t show you that some_result is exactly None, only that it’s falsy. If you must know whether or not you have a None object, then use is and is not.

The following objects are all falsy as well:

  • Empty lists, tuples, sets, and dictionaries
  • Empty strings
  • Zero of any numeric type
  • False

For more on comparisons, truthy, and falsy values, check out How to Use the Python or Operator.

Declaring Null Variables in Python

In some languages, variables come to life from a declaration. They don’t have to have an initial value assigned to them. In those languages, the initial default value for some types of variables might be null. In Python, however, variables come to life from assignment statements. Take a look at the following code block:

>>> print(bar)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
NameError: name 'bar' is not defined
>>> bar = None
>>> print(bar)
None

Here, you can see that a variable with the value None is different from an undefined variable. All variables in Python come into existence by assignment. A variable will only start life as null in Python if you assign None to it.

Using None as a Default Parameter

Very often, you’ll use None as the default value for an optional parameter. There’s a very good reason for using None here rather than a mutable type such as a list. Imagine a function like this:

def bad_function(new_elem, starter_list=[]):
    starter_list.append(new_elem)
    return starter_list

bad_function() contains a nasty surprise. It works fine when you call it with an existing list:

>>> my_list = ['a', 'b', 'c']
>>> bad_function('d', my_list)
['a', 'b', 'c', 'd']

Here, you add 'd' to the end of the list with no problems.

But if you call this function a couple times with no starter_list parameter, then you start to see incorrect behavior:

>>> bad_function('a')
['a']
>>> bad_function('b')
['a', 'b']
>>> bad_function('c')
['a', 'b', 'c']

The default value for starter_list is evaluated only once, at the time the function is defined, so the code reuses it every time you don’t pass an existing list.

The right way to build this function is to use None as the default value, then test for it and instantiate a new list as needed:

 1 >>> def good_function(new_elem, starter_list=None):
 2 ...     if starter_list is None:
 3 ...         starter_list = []
 4 ...     starter_list.append(new_elem)
 5 ...     return starter_list
 6 ...
 7 >>> good_function('e', my_list)
 8 ['a', 'b', 'c', 'd', 'e']
 9 >>> good_function('a')
10 ['a']
11 >>> good_function('b')
12 ['b']
13 >>> good_function('c')
14 ['c']

good_function() behaves as you want by making a new list with each call where you don’t pass an existing list. It works because your code will execute lines 2 and 3 every time it calls the function with the default parameter.

Using None as a Null Value in Python

What do you do when None is a valid input object? For instance, what if good_function() could either add an element to the list or not, and None was a valid element to add? In this case, you can define a class specifically for use as a default, while being distinct from None:

>>> class DontAppend: pass
...
>>> def good_function(new_elem=DontAppend, starter_list=None):
...     if starter_list is None:
...         starter_list = []
...     if new_elem is not DontAppend:
...         starter_list.append(new_elem)
...     return starter_list
...
>>> good_function(starter_list=my_list)
['a', 'b', 'c', 'd', 'e']
>>> good_function(None, my_list)
['a', 'b', 'c', 'd', 'e', None]

Here, the class DontAppend serves as the signal not to append, so you don’t need None for that. That frees you to add None when you want.

You can use this technique when None is a possibility for return values, too. For instance, dict.get returns None by default if a key is not found in the dictionary. If None was a valid value in your dictionary, then you could call dict.get like this:

>>> class KeyNotFound: pass
...
>>> my_dict = {'a': 3, 'b': None}
>>> for key in ['a', 'b', 'c']:
...     value = my_dict.get(key, KeyNotFound)
...     if value is not KeyNotFound:
...         print(f"{key}->{value}")
...
a->3
b->None

Here you’ve defined a custom class KeyNotFound. Now, instead of returning None when a key isn’t in the dictionary, you can return KeyNotFound. That frees you to return None when that’s the actual value in the dictionary.

Deciphering None in Tracebacks

When NoneType appears in your traceback, it means that something you didn’t expect to be None actually was None, and you tried to use it in a way that you can’t use None. Almost always, it’s because you’re trying to call a method on it.

For instance, you called append() on my_list many times above, but if my_list somehow became anything other than a list, then append() would fail:

>>> my_list.append('f')
>>> my_list
['a', 'b', 'c', 'd', 'e', None, 'f']
>>> my_list = None
>>> my_list.append('g')
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'append'

Here, your code raises the very common AttributeError because the underlying object, my_list, is not a list anymore. You’ve set it to None, which doesn’t know how to append(), and so the code throws an exception.

When you see a traceback like this in your code, look for the attribute that raised the error first. Here, it’s append(). From there, you’ll see the object you tried to call it on. In this case, it’s my_list, as you can tell from the code just above the traceback. Finally, figure out how that object got to be None and take the necessary steps to fix your code.

Checking for Null in Python

There are two type checking cases where you’ll care about null in Python. The first case is when you’re returning None:

>>> def returns_None() -> None:
...     pass

This case is similar to when you have no return statement at all, which returns None by default.

The second case is a bit more challenging. It’s where you’re taking or returning a value that might be None, but also might be some other (single) type. This case is like what you did with re.match above, which returned either a Match object or None.

The process is similar for parameters:

from typing import Any, List, Optional

def good_function(new_elem: Any, starter_list: Optional[List] = None) -> List:
    pass

You modify good_function() from above, importing Optional from typing so that the starter_list parameter can be annotated as Optional[List].

Taking a Look Under the Hood

In many other languages, null is just a synonym for 0, but null in Python is a full-blown object:

>>> type(None)
<class 'NoneType'>

This line shows that None is an object, and its type is NoneType.

None itself is built into the language as the null in Python:

>>> dir(__builtins__)
['ArithmeticError', ..., 'None', ..., 'zip']

Here, you can see None in the list of __builtins__ which is the dictionary the interpreter keeps for the builtins module.

None is a keyword, just like True and False. But because of this, you can’t reach None directly from __builtins__ as you could, for instance, ArithmeticError. However, you can get it with a getattr() trick:

>>> __builtins__.ArithmeticError
<class 'ArithmeticError'>
>>> __builtins__.None
  File "<stdin>", line 1
    __builtins__.None
                    ^
SyntaxError: invalid syntax
>>> print(getattr(__builtins__, 'None'))
None

When you use getattr(), you can fetch the actual None from __builtins__, which you can’t do by simply asking for it with __builtins__.None.

Even though Python prints the word NoneType in many error messages, NoneType is not an identifier in Python. It’s not in builtins. You can only reach it with type(None).

None is a singleton. That is, the NoneType class only ever gives you the same single instance of None. There’s only one None in your Python program:

>>> my_None = type(None)()  # Create a new instance
>>> print(my_None)
None
>>> my_None is None
True

Even though you try to create a new instance, you still get the existing None.

You can prove that None and my_None are the same object by using id():

>>> id(None)
4465912088
>>> id(my_None)
4465912088

Here, the fact that id outputs the same integer value for both None and my_None means they are, in fact, the same object.

Note: The actual value produced by id will vary across systems, and even between program executions. Under CPython, the most popular Python runtime, id() does its job by reporting the memory address of an object. Two objects that live at the same memory address are the same object.

If you try to assign to None, then you’ll get a SyntaxError:

>>> None = 5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
SyntaxError: can't assign to keyword
>>> None.age = 5
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'age'
>>> setattr(None, 'age', 5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
AttributeError: 'NoneType' object has no attribute 'age'
>>> setattr(type(None), 'age', 5)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: can't set attributes of built-in/extension type 'NoneType'

All the examples above show that you can’t modify None or NoneType. They are true constants.

You can’t subclass NoneType, either:

>>> class MyNoneType(type(None)):
...     pass
...
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: type 'NoneType' is not an acceptable base type

This traceback shows that the interpreter won’t let you make a new class that inherits from type(None).

Conclusion

None is a powerful tool in the Python toolbox. Like True and False, None is an immutable keyword. As the null in Python, you use it to mark missing values and results, and even default parameters where it’s a much better choice than mutable types.

Now you can:

  • Test for None with is and is not
  • Choose when None is a valid value in your code
  • Use None and its alternatives as default parameters
  • Decipher None and NoneType in your tracebacks
  • Use None and Optional in type hints

How do you use the null in Python? Leave a comment down in the comments section below!


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Python Bytes: #169 Jupyter Notebooks natively on your iPad

Mike Driscoll: Python 101 2nd Edition Fully Funded + Stretch Goals


The second edition of my book, Python 101, has been successfully funded on Kickstarter. As is tradition, I have added a couple of stretch goals for adding more content to this already hefty book.

Python 101

Here are the goals:

1) $5000 – Get 4 Bonus Chapters

These chapters would cover the following topics:

  • Assignment Expressions
  • How to Create a GUI
  • How to Create Graphs
  • How to Work with Images in Python

2) $7500 – Add Chapter Review Questions

The additional chapters are pretty exciting to me as they are fun things to do with Python while also being useful. The assignment expression chapter is also something that is new in Python and may be of use to you soon.

Adding chapter review questions was something I have always wanted to do with Python 101. Hopefully you will find that idea interesting as well.

If you are interested in getting the book or supporting this site, you can head over to Kickstarter now. There are some really good deals for some of my other books there too!

The post Python 101 2nd Edition Fully Funded + Stretch Goals appeared first on The Mouse Vs. The Python.

Moshe Zadka: Forks and Threats


What is a threat? From a game-theoretical perspective, a threat is an attempt to get a better result by saying: "if you do not give me this result, I will do something that is bad for both of us". Note that it has to be bad for both sides: if it is good for the threatening side, they would do it anyway. While if it is good for the threatened side, it is not a threat.

Threats rely on credibility and reputation: the threatening side has to be believed for the threat to be useful. One way to gain that reputation is to follow up on threats, and have that be a matter of public record. This means that the threatening side needs to take into account that they might have to act on the threat, thereby doing something against their own interests. This leads to the concept of a "credible" or "proportionate" threat.

For most of our analysis, we will use the example of a teacher union striking. Similar analysis can be applied to nuclear war, or other cases. People mostly have positive feelings for teachers, and when teacher unions negotiate, they want to take advantage of those feelings. However, the one thing that leads people to be annoyed with teachers is a strike: this causes large amounts of unplanned scheduling crisis in people's lives.

In our example, a teacher union striking over, say, a minor salary raise disagreement is not credible: the potential harm is small, while the strike will significantly harm the teachers' image.

However, strikes are, to a first approximation, the only tool teacher unions have in their arsenal. Again, take the case of a minor salary raise. Threatening with a strike is so disproportional that there is no credibility. We turn to one of the fundamental insights of game theory: rational actors treat utility as linear in probability. So, while starting a strike that is twice as long is not twice as bad, increasing the probability of starting a strike from 0 to 1 is twice as bad (exactly!) as increasing the probability from 0 to 0.5.

(If you are a Bayesian who does not believe in 0 and 1 as probabilities, note that the argument works with approximations too: increasing the probability from a small e to 0.5 is approximately twice as bad as increasing it from e to 1-e.)

All one side has is a strike. Assume the disutility of a strike to that side is -1,000,000. Assume the utility of winning the salary negotiation is 1. They can threaten that if their position is not accepted, they will generate a random number, and if it is below 1/1,000,000, they will start the strike. Now the threat is credible. But to gain that reputation, this number has to be generated in public, in an uncertain way: otherwise, no reputation is gained for following up on threats.
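
A quick back-of-the-envelope check of why this probabilistic threat is proportionate, using the illustrative numbers from the paragraph above:

# Illustrative numbers only, taken from the example above.
strike_disutility = -1_000_000  # cost to the threatening side of actually striking
negotiation_stake = 1           # value of winning the minor salary raise
p = 1 / 1_000_000               # probability of triggering the strike

expected_cost = p * strike_disutility
print(expected_cost)  # -1.0: the same order as the 1-point stake,
                      # so the scaled-down threat is credible and proportionate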

In practice, usually the randomness is generated by "inflaming the base". The person in charge will give impassioned speeches on how important this negotiation is. With some probability, their base will pressure them to start the strike, without them being able to resist it.

Specifically, note that often a strike is determined by a direct vote of the members, not the union leaders. This means that union leaders can credibly say, "please do not vote for the strike, we are against it". With some probability, that depends on how much they inflamed the base, the membership will ignore the request. The more impassioned the speech, the higher the probability. By limiting their direct control over the decision to strike, union leaders gain the ability to threaten probabilistically.

Nuclear war and union strikes are both well-studied topics in applied game theory. The explanation above is a standard part of many text books: in my case, I summarized the explanation from Games of Strategy, pg. 487.

What is not well studied are the dynamics of open source projects. There, we have a set of owners who can directly influence such decisions as which patches land, and when versions are released. More people will offer patches, or ask for a release to happen. The only credible threat they have is to fork the project if they do not like how it is managed. But forking is often a disproportionate threat: a patch not landing often just means an ugly work-around in user code. There is a cost, but the cost of maintaining a fork is much greater.

But similar to a union strike, or launching a nuclear war, we can consider a "probabilistic fork". Rant on Twitter, or on appropriate mailing lists. Link to the discussion, especially to places that show the owners in a less than favorable light. Someone might decide to "rage-fork". More rants, or more extreme rants, increase the probability. A fork has to be possible in the first place: this is why the best way to evaluate whether something is open source is to ask "how possible is a fork?"

This is why the possibility of a fork changes the dynamics of a project, even if forks are rare: because the main thing that happens are "low-probability maybe-forks".

Codementor: The Best Android Apps for Learning How to Code

As a senior software developer, I’m often asked for advice on learning programming. Since I believe that the tech market always benefits from having more high-quality developers, I’m happy to share tips and hacks that helped me become a better software engineer.

Codementor: Learn To Code By Playing These Games

Apart from an ambition to become a programmer and have an interesting well-paid job, there are plenty of reasons to learn coding even for those who see themselves in other professions.

Ned Batchelder: Getting Started Testing with pytest


Next week I am presenting Getting Started Testing: pytest edition at Boston Python (event page).

This talk has been through a few iterations. In 2011, I gave a presentation at Boston Python about Getting Started Testing, based on the standard library unittest module. In 2014, I updated it and presented it at PyCon. Now I’ve updated it again, and will be presenting it at Boston Python.

The latest edition, Getting Started Testing: pytest edition, uses pytest throughout. It’s a little long for one evening of talking, but I really wanted to cover the material in it. I wanted to touch on not just the mechanics of testing, but the philosophy and central challenges as well.

I’m sure there are important things I left out, and probably digressions I could trim, but it’ll do. Thoughts welcome.

Ruslan Spivak: Let’s Build A Simple Interpreter. Part 18: Executing Procedure Calls


“Do the best you can until you know better. Then when you know better, do better.” ― Maya Angelou

It’s a huge milestone for us today! Because today we will extend our interpreter to execute procedure calls. If that’s not exciting, I don’t know what is. :)

Are you ready? Let’s get to it!


Here is the sample program we’ll focus on in this article:

program Main;

procedure Alpha(a : integer; b : integer);
var x : integer;
begin
   x := (a + b) * 2;
end;

begin { Main }

   Alpha(3 + 5, 7);  { procedure call }

end.  { Main }

It has one procedure declaration and one procedure call. We will limit our focus today to procedures that can access their parameters and local variables only. We will cover nested procedure calls and accessing non-local variables in the next two articles.


Let’s describe an algorithm that our interpreter needs to implement to be able to execute the Alpha(3 + 5, 7) procedure call in the program above.

Here is the algorithm for executing a procedure call, step by step:

  1. Create an activation record

  2. Save procedure arguments (actual parameters) in the activation record

  3. Push the activation record onto the call stack

  4. Execute the body of the procedure

  5. Pop the activation record off the stack

Procedure calls in our interpreter are handled by the visit_ProcedureCall method. The method is currently empty:

class Interpreter(NodeVisitor):
    ...

    def visit_ProcedureCall(self, node):
        pass

Let’s go over each step in the algorithm and write code for the visit_ProcedureCall method to execute procedure calls.

Let’s get started!


Step 1. Create an activation record

If you remember from the previous article, an activation record (AR) is a dictionary-like object for maintaining information about the currently executing invocation of a procedure or function, and also the program itself. The activation record for a procedure, for example, contains the current values of its formal parameters and the current values of its local variables. So, to store the procedure’s arguments and local variables, we need to create an AR first. Recall that the ActivationRecord constructor takes 3 parameters: name, type, and nesting_level. And here’s what we need to pass to the constructor when creating an AR for a procedure call:

  • We need to pass the procedure’s name as the name parameter to the constructor

  • We also need to specify PROCEDURE as the type of the AR

  • And we need to pass 2 as the nesting_level for the procedure call because the program’s nesting level is set to 1 (You can see that in the visit_Program method of the interpreter)

Before we extend the visit_ProcedureCall method to create an activation record for a procedure call, we need to add the PROCEDURE type to the ARType enumeration. Let’s do this first:

class ARType(Enum):
    PROGRAM   = 'PROGRAM'
    PROCEDURE = 'PROCEDURE'

Now, let’s update the visit_ProcedureCall method to create an activation record with the appropriate arguments that we described earlier in the text:

def visit_ProcedureCall(self, node):
    proc_name = node.proc_name

    ar = ActivationRecord(
        name=proc_name,
        type=ARType.PROCEDURE,
        nesting_level=2,
    )

Writing code to create an activation record was easy once we figured out what to pass to the ActivationRecord constructor as arguments.


Step 2. Save procedure arguments in the activation record

ASIDE: Formal parameters are parameters that show up in the declaration of a procedure. Actual parameters (also known as arguments) are different variables and expressions passed to the procedure in a particular procedure call.

Here is a list of steps that describes the high-level actions the interpreter needs to take to save procedure arguments in the activation record:

  1. Get a list of the procedure’s formal parameters
  2. Get a list of the procedure’s actual parameters (arguments)
  3. For each formal parameter, get the corresponding actual parameter and save the pair in the procedure’s activation record by using the formal parameter’s name as a key and the actual parameter (argument), after having evaluated it, as the value

If we have the following procedure declaration and procedure call:

procedure Alpha(a : integer; b : integer);

Alpha(3 + 5, 7);

Then after the above three steps have been executed, the procedure’s AR contents should look like this:

2: PROCEDURE Alpha
   a : 8
   b : 7

Here is the code that implements the steps above:

proc_symbol = node.proc_symbol

formal_params = proc_symbol.formal_params
actual_params = node.actual_params

for param_symbol, argument_node in zip(formal_params, actual_params):
    ar[param_symbol.name] = self.visit(argument_node)

Let’s take a closer look at the steps and the code.


a) First, we need to get a list of the procedure’s formal parameters. Where can we get them from? They are available in the respective procedure symbol created during the semantic analysis phase. To jog your memory, here is the definition of the ProcedureSymbol class:

class Symbol:
    def __init__(self, name, type=None):
        self.name = name
        self.type = type


class ProcedureSymbol(Symbol):
    def __init__(self, name, formal_params=None):
        super().__init__(name)
        # a list of VarSymbol objects
        self.formal_params = [] if formal_params is None else formal_params

And here’s the contents of the global scope (program level), which shows a string representation of the Alpha procedure symbol with its formal parameters:

SCOPE (SCOPED SYMBOL TABLE)
===========================
Scope name     : global
Scope level    : 1
Enclosing scope: None
Scope (Scoped symbol table) contents
------------------------------------
INTEGER: <BuiltinTypeSymbol(name='INTEGER')>
   REAL: <BuiltinTypeSymbol(name='REAL')>
  Alpha: <ProcedureSymbol(name=Alpha, parameters=[<VarSymbol(name='a', type='INTEGER')>, <VarSymbol(name='b', type='INTEGER')>])>

Okay, we now know where to get the formal parameters from. How do we get to the procedure symbol from the ProcedureCall AST node variable? Let’s take a look at the visit_ProcedureCall method code that we’ve written so far:

def visit_ProcedureCall(self, node):
    proc_name = node.proc_name

    ar = ActivationRecord(
        name=proc_name,
        type=ARType.PROCEDURE,
        nesting_level=2,
    )

We can get access to the procedure symbol by adding the following statement to the code above:

proc_symbol = node.proc_symbol

But if you look at the definition of the ProcedureCall class from the previous article, you can see that the class doesn’t have proc_symbol as a member:

class ProcedureCall(AST):
    def __init__(self, proc_name, actual_params, token):
        self.proc_name = proc_name
        self.actual_params = actual_params  # a list of AST nodes
        self.token = token

Let’s fix that and extend the ProcedureCall class to have the proc_symbol field:

class ProcedureCall(AST):
    def __init__(self, proc_name, actual_params, token):
        self.proc_name = proc_name
        self.actual_params = actual_params  # a list of AST nodes
        self.token = token
        # a reference to procedure declaration symbol
        self.proc_symbol = None

That was easy. Now, where should we set the proc_symbol so that it has the right value (a reference to the respective procedure symbol) for the interpretation phase? As I’ve mentioned earlier, the procedure symbol gets created during the semantic analysis phase. We can store it in the ProcedureCall AST node during the node traversal done by the semantic analyzer’s visit_ProcedureCall method.

Here is the original method:

class SemanticAnalyzer(NodeVisitor):
    ...

    def visit_ProcedureCall(self, node):
        for param_node in node.actual_params:
            self.visit(param_node)

Because we have access to the current scope when traversing the AST tree in the semantic analyzer, we can look up the procedure symbol by the procedure name and then store the procedure symbol in the proc_symbol variable of the ProcedureCall AST node. Let’s do this:

class SemanticAnalyzer(NodeVisitor):
    ...

    def visit_ProcedureCall(self, node):
        for param_node in node.actual_params:
            self.visit(param_node)

        proc_symbol = self.current_scope.lookup(node.proc_name)
        # accessed by the interpreter when executing procedure call
        node.proc_symbol = proc_symbol

In the code above, we simply resolve a procedure name to its procedure symbol, which is stored in one of the scoped symbol tables (in our case in the global scope, to be exact), and then assign the procedure symbol to the proc_symbol field of the ProcedureCall AST node.

For our sample program, after the semantic analysis phase and the actions described above, the AST tree will have a link to the Alpha procedure symbol in the global scope:

As you can see in the picture above, this setup allows us to get the procedure’s formal parameters from the interpreter’s visit_ProcedureCall method - when evaluating a ProcedureCall node - by simply accessing the formal_params field of the proc_symbol variable stored in the ProcedureCall AST node:

proc_symbol = node.proc_symbol
proc_symbol.formal_params  # aka parameters


b) After we get the list of formal parameters, we need to get a list of the procedure’s actual parameters (arguments). Getting the list of arguments is easy because they are readily available from the ProcedureCall AST node itself:

node.actual_params  # aka arguments


c) And the last step. For each formal parameter, we need to get the corresponding actual parameter and save the pair in the procedure’s activation record, using the formal parameter’s name as the key and the actual parameter (argument), after having evaluated it, as the value.

Let’s take a look at the code that builds the key-value pairs using the Python zip() function:

proc_symbol = node.proc_symbol

formal_params = proc_symbol.formal_params
actual_params = node.actual_params

for param_symbol, argument_node in zip(formal_params, actual_params):
    ar[param_symbol.name] = self.visit(argument_node)

Once you know how the Python zip() function works, the for loop above should be easy to understand. Here’s a Python shell demonstration of the zip() function in action:

>>> formal_params = ['a', 'b', 'c']
>>> actual_params = [1, 2, 3]
>>>
>>> zipped = zip(formal_params, actual_params)
>>>
>>> list(zipped)
[('a', 1), ('b', 2), ('c', 3)]

The statement to store the key-value pairs in the activation record is very straightforward:

ar[param_symbol.name] = self.visit(argument_node)

The key is the name of a formal parameter, and the value is the evaluated value of the argument passed to the procedure call.

Here is the interpreter’s visit_ProcedureCall method with all the modifications we’ve done so far:

class Interpreter(NodeVisitor):
    ...

    def visit_ProcedureCall(self, node):
        proc_name = node.proc_name

        ar = ActivationRecord(
            name=proc_name,
            type=ARType.PROCEDURE,
            nesting_level=2,
        )

        proc_symbol = node.proc_symbol

        formal_params = proc_symbol.formal_params
        actual_params = node.actual_params

        for param_symbol, argument_node in zip(formal_params, actual_params):
            ar[param_symbol.name] = self.visit(argument_node)


Step 3. Push the activation record onto the call stack

After we’ve created the AR and put all the procedure’s parameters into the AR, we need to push the AR onto the stack. It’s super easy to do. We need to add just one line of code:

self.call_stack.push(ar)

Remember: an AR of a currently executing procedure is always at the top of the stack. This way the currently executing procedure has easy access to its parameters and local variables. Here is the updated visit_ProcedureCall method:

def visit_ProcedureCall(self, node):
    proc_name = node.proc_name

    ar = ActivationRecord(
        name=proc_name,
        type=ARType.PROCEDURE,
        nesting_level=2,
    )

    proc_symbol = node.proc_symbol

    formal_params = proc_symbol.formal_params
    actual_params = node.actual_params

    for param_symbol, argument_node in zip(formal_params, actual_params):
        ar[param_symbol.name] = self.visit(argument_node)

    self.call_stack.push(ar)


Step 4. Execute the body of the procedure

Now that everything has been set up, let’s execute the body of the procedure. The only problem is that neither the ProcedureCall AST node nor the procedure symbol proc_symbol knows anything about the body of the respective procedure declaration.

How do we get access to the body of the procedure declaration during execution of a procedure call? In other words, when traversing the AST tree and visiting the ProcedureCall AST node during the interpretation phase, we need to get access to the block_node variable of the corresponding ProcedureDecl node. The block_node variable holds a reference to an AST sub-tree that represents the body of the procedure. How can we access that variable from the visit_ProcedureCall method of the Interpreter class? Let’s think about it.

We already have access to the procedure symbol that contains information about the procedure declaration, like the procedure’s formal parameters, so let’s find a way to store a reference to the block_node in the procedure symbol itself. The right spot to do that is the semantic analyzer’s visit_ProcedureDecl method. In this method we have access to both the procedure symbol and the procedure’s body, the block_node field of the ProcedureDecl AST node that points to the procedure body’s AST sub-tree.

We have a procedure symbol, and we have a block_node. Let’s store a pointer to the block_node in the block_ast field of the proc_symbol:

class SemanticAnalyzer(NodeVisitor):
    def visit_ProcedureDecl(self, node):
        proc_name = node.proc_name
        proc_symbol = ProcedureSymbol(proc_name)
        ...
        self.log(f'LEAVE scope: {proc_name}')

        # accessed by the interpreter when executing procedure call
        proc_symbol.block_ast = node.block_node

And to make it explicit, let’s also extend the ProcedureSymbol class and add the block_ast field to it:

class ProcedureSymbol(Symbol):
    def __init__(self, name, formal_params=None):
        ...
        # a reference to procedure's body (AST sub-tree)
        self.block_ast = None

In the picture below you can see the extended ProcedureSymbol instance that stores a reference to the corresponding procedure’s body (a Block node in the AST):

With all the above, executing the body of the procedure during a procedure call becomes as simple as visiting the procedure declaration’s Block AST node, accessible through the block_ast field of the procedure’s proc_symbol:

self.visit(proc_symbol.block_ast)


Here is the fully updated visit_ProcedureCall method of the Interpreter class:

def visit_ProcedureCall(self, node):
    proc_name = node.proc_name

    ar = ActivationRecord(
        name=proc_name,
        type=ARType.PROCEDURE,
        nesting_level=2,
    )

    proc_symbol = node.proc_symbol

    formal_params = proc_symbol.formal_params
    actual_params = node.actual_params

    for param_symbol, argument_node in zip(formal_params, actual_params):
        ar[param_symbol.name] = self.visit(argument_node)

    self.call_stack.push(ar)

    # evaluate procedure body
    self.visit(proc_symbol.block_ast)

If you remember from the previous article, the visit_Assign and visit_Var methods use the AR at the top of the call stack to store and access variables:

def visit_Assign(self, node):
    var_name = node.left.value
    var_value = self.visit(node.right)

    ar = self.call_stack.peek()
    ar[var_name] = var_value


def visit_Var(self, node):
    var_name = node.value

    ar = self.call_stack.peek()
    var_value = ar.get(var_name)

    return var_value

These methods stay unchanged. When interpreting the body of a procedure, these methods will store and access values from the AR of the currently executing procedure, which will be at the top of the stack. We’ll see shortly how it all fits and works together.
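As a quick refresher, here is a minimal sketch of the call stack machinery those methods rely on. It is an approximation reconstructed from the operations used in this article (push, pop, peek, item assignment, and get); the versions in spi.py also implement __str__, which is what produces the CALL STACK dumps you will see below:

class CallStack:
    def __init__(self):
        self._records = []

    def push(self, ar):
        self._records.append(ar)

    def pop(self):
        return self._records.pop()

    def peek(self):
        # the AR of the currently executing routine is always on top
        return self._records[-1]


class ActivationRecord:
    def __init__(self, name, type, nesting_level):
        self.name = name
        self.type = type
        self.nesting_level = nesting_level
        self.members = {}

    def __setitem__(self, key, value):
        self.members[key] = value

    def __getitem__(self, key):
        return self.members[key]

    def get(self, key):
        return self.members.get(key)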


Step 5. Pop the activation record off the stack

After we’re done evaluating the body of the procedure, we no longer need the procedure’s AR, so we pop it off the call stack right before leaving the visit_ProcedureCall method. Remember, the top of the call stack contains the AR of the currently executing procedure, function, or program, so once we’re done evaluating one of those routines, we need to pop its AR off the call stack using the call stack’s pop() method:

self.call_stack.pop()

Let’s put it all together and also add some logging to the visit_ProcedureCall method to log the contents of the call stack right after pushing the procedure’s AR onto the call stack and right before popping it off the stack:

def visit_ProcedureCall(self, node):
    proc_name = node.proc_name

    ar = ActivationRecord(
        name=proc_name,
        type=ARType.PROCEDURE,
        nesting_level=2,
    )

    proc_symbol = node.proc_symbol

    formal_params = proc_symbol.formal_params
    actual_params = node.actual_params

    for param_symbol, argument_node in zip(formal_params, actual_params):
        ar[param_symbol.name] = self.visit(argument_node)

    self.call_stack.push(ar)

    self.log(f'ENTER: PROCEDURE {proc_name}')
    self.log(str(self.call_stack))

    # evaluate procedure body
    self.visit(proc_symbol.block_ast)

    self.log(f'LEAVE: PROCEDURE {proc_name}')
    self.log(str(self.call_stack))

    self.call_stack.pop()


Let’s take our modified interpreter for a ride and see how it executes procedure calls. Download the following sample program from GitHub or save it as part18.pas:

program Main;

procedure Alpha(a : integer; b : integer);
var x : integer;
begin
   x := (a + b) * 2;
end;

begin { Main }

   Alpha(3 + 5, 7);  { procedure call }

end.  { Main }

Download the interpreter file spi.py from GitHub and run it on the command line with the following arguments:

$ python spi.py part18.pas --stack
ENTER: PROGRAM Main
CALL STACK
1: PROGRAM Main


ENTER: PROCEDURE Alpha
CALL STACK
2: PROCEDURE Alpha
   a                   : 8
   b                   : 7
1: PROGRAM Main


LEAVE: PROCEDURE Alpha
CALL STACK
2: PROCEDURE Alpha
   a                   : 8
   b                   : 7
   x                   : 30
1: PROGRAM Main


LEAVE: PROGRAM Main
CALL STACK
1: PROGRAM Main

So far, so good. Let’s take a closer look at the output and inspect the contents of the call stack during program and procedure execution.

1. The interpreter first prints

ENTER: PROGRAM Main
CALL STACK
1: PROGRAM Main

when visiting the Program AST node before executing the body of the program. At this point the call stack has one activation record. This activation record is at the top of the call stack and it’s used for storing global variables. Because we don’t have any global variables in our sample program, there is nothing in the activation record.

2. Next, the interpreter prints

ENTER: PROCEDURE Alpha
CALL STACK
2: PROCEDURE Alpha
   a                   : 8
   b                   : 7
1: PROGRAM Main

when it visits the ProcedureCall AST node for the Alpha(3 + 5, 7) procedure call. At this point the body of the Alpha procedure hasn’t been evaluated yet and the call stack has two activation records: one for the Main program at the bottom of the stack (nesting level 1) and one for the Alpha procedure call at the top of the stack (nesting level 2). The AR at the top of the stack holds the values of the procedure arguments a and b only; there is no value for the local variable x in the AR because the body of the procedure hasn’t been evaluated yet.

3. Up next, the interpreter prints

LEAVE: PROCEDURE Alpha
CALL STACK
2: PROCEDURE Alpha
   a                   : 8
   b                   : 7
   x                   : 30
1: PROGRAM Main

when it’s about to leave the ProcedureCall AST node for the Alpha(3 + 5, 7) procedure call but before popping the AR for the Alpha procedure off the stack.

From the output above, you can see that in addition to the procedure arguments, the AR for the currently executing procedure Alpha now also contains the value of the local variable x, which is the result of executing the x := (a + b) * 2; statement in the body of the procedure. At this point the call stack visually looks like this:

4. And finally the interpreter prints

LEAVE: PROGRAM Main
CALL STACK
1: PROGRAM Main

when it leaves the Program AST node but before it pops off the AR for the main program. As you can see, the activation record for the main program is the only AR left on the stack because the AR for the Alpha procedure call was popped off earlier, right before the interpreter finished executing the Alpha procedure call.


That’s it. Our interpreter successfully executed a procedure call. If you’ve made it this far, congratulations!

It is a huge milestone for us. Now you know how to execute procedure calls. And if you’ve been waiting for this article for a long time, thank you for your patience.

That’s all for today. In the next article, we’ll expand on the current material and talk about executing nested procedure calls. So stay tuned and see you next time!


Resources used in preparation for this article (links are affiliate links):

  1. Language Implementation Patterns: Create Your Own Domain-Specific and General Programming Languages (Pragmatic Programmers)
  2. Writing Compilers and Interpreters: A Software Engineering Approach
  3. Programming Language Pragmatics, Fourth Edition

PyCharm: PyCharm 2020.1 EAP 4


We have a new Early Access Program (EAP) version of PyCharm that can be now downloaded from our website.

We’ve been hard at work making PyCharm easier to use and adding and improving features to get PyCharm 2020.1 ready for release. We have some good ones for you to try in this build. This EAP also includes loads of fixes from the IntelliJ Platform teams.

New in PyCharm

Flake8-style # noqa suppression

Linters are incredibly useful tools for Python programmers. But sometimes the linter makes mistakes, and you get false positives. In such cases, you might want to disable or suppress the warnings.

To suppress warnings, # noqa comments have become a community standard for various third-party Python linters, such as pycodestyle.py and Flake8. Before, people who used these tools in addition to PyCharm (e.g. by running them as commit hooks or on CI) had to use both the IntelliJ-specific # noinspection XXX and # noqa comments to suppress warnings about the same error, which was both tedious and messy in the code.


We have improved our inspection capabilities: in addition to the IntelliJ-style # noinspection comments, Flake8-style # noqa comments are now recognized and treated as universal suppression markers.


What’s more, in cases where an existing Flake8 check directly matches one of our inspections, it’s possible to specify an exact Flake8 error code to suppress a particular message. The same is true for pycodestyle.py errors. So, for example, suppressing “E711 comparison to None should be ‘if cond is None:’” doesn’t prevent formatting errors on the same line from being reported.
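For illustration, here is a small made-up snippet showing both forms: a bare # noqa that silences every warning on the line, and a code-qualified one that silences a single check (E401 and E711 are standard pycodestyle error codes; the surrounding code itself is just an example):

import os, sys  # noqa  a bare comment suppresses every warning on this line (E401, F401, ...)

x = None
if x == None:  # noqa: E711  suppresses only "comparison to None should be 'if cond is None:'"
    print('x is unset; running on', sys.platform)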


To learn more about this support, check out our documentation on disabling and enabling inspections.

Auto-import for Django custom template tags

With Django, you can set up a custom template tag to introduce functionality that is not covered by the built-in template tags. You are now prompted to auto-import and add {% load %} if a tag used in a Django template exists in the project but it wasn’t loaded. Place the caret at the custom tag name, press Alt+Enter, and select the file to load. PyCharm adds the {% load %} tag to the template file to fix the reference. Check out the documentation for more about this feature.
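As a quick illustration (the app, module, and tag names below are hypothetical, not something PyCharm or Django ships), a custom tag is typically defined in a templatetags package and then loaded in the template, which is the {% load %} line PyCharm can now insert for you:

# myapp/templatetags/my_tags.py
from django import template

register = template.Library()

@register.simple_tag
def shout(text):
    """Return the given text upper-cased."""
    return text.upper()

# In a template you would then write:
#     {% load my_tags %}
#     <h1>{% shout "hello" %}</h1>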


Other Improvements

In the spirit of making changes to improve the experience of working with PyCharm a little smoother:

  • PyCharm will apply all the settings from your previous version to your new version without you having to explicitly tell it to.
  • You can now update multiple plugins more effectively using the UpdateAll action. PyCharm will wait until all the plugins are downloaded before prompting you to restart. This way, you only need to restart PyCharm once to apply all the changes.
  • Git users can now see their favorite branches first in the branch dashboard if grouping is enabled in the tree.
  • If you are working with databases, you will be glad to know that TRUNCATE doesn’t trigger schema synchronization.
  • Using “Dump with ‘mysqldump’” on your local MySQL database no longer fails if your user password is empty.
  • Starting from v2020.1, the configuration files will be stored in a different folder. For more information on where exactly these files will be stored on your machine, please refer to this article.
  • For the full list of what’s in this version, see the release notes.

Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.
If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP and stay up to date. You can find the installation instructions on our website.

NumFOCUS: MDAnalysis joins NumFOCUS Sponsored Projects


NumFOCUS is pleased to announce the newest addition to our fiscally sponsored projects: MDAnalysis. MDAnalysis is a Python library for the analysis of computer simulations of many-body systems at the molecular scale, spanning use cases from interactions of drugs with proteins to novel materials. It is widely used in the scientific community and is written […]
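To give a flavor of the library, here is a minimal sketch of the kind of analysis MDAnalysis enables. It assumes the MDAnalysis and MDAnalysisTests packages are installed (the test package provides the small example topology and trajectory files used here):

# A minimal MDAnalysis example using the bundled test data
import MDAnalysis as mda
from MDAnalysis.tests.datafiles import PSF, DCD  # small example topology + trajectory

u = mda.Universe(PSF, DCD)            # load topology and trajectory into a Universe
calphas = u.select_atoms('name CA')   # select the alpha-carbon atoms
print(f'{len(calphas)} CA atoms across {len(u.trajectory)} frames')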

The post MDAnalysis joins NumFOCUS Sponsored Projects appeared first on NumFOCUS.


