
Catalin George Festila: Python Qt5 - Create a spectrum equalizer.

I haven't written much for a while about Python and PyQt5; the main reason was a lack of time and the focus of my effort on more pressing parts of my life. Today I will show a complex example of QtMultimedia and how to create a spectrum equalizer. The PyQt5 bindings come with the Python module named QtMultimedia. Let's start with the few lines of source code that show us

Catalin George Festila: Python 3.6.9 : My colab tutorials - part 001.

Today I start this tutorial series for the Colab tool. To share my work with the Colab tool I created a GitHub project. This project has two colab files, catafest_001.ipynb and catafest_002.ipynb, both created using Colaboratory. The first colab notebook comes with a simple tutorial. The next colab notebook is a little bit more complex and shares more information about how

Ruslan Spivak: EOF is not a character


I was reading Computer Systems: A Programmer’s Perspective the other day and in the chapter on Unix I/O the authors mention that there is no explicit “EOF character” at the end of a file.

If you’ve spent some time reading and/or playing with Unix I/O and have written some C programs that read text files and run on Unix/Linux, that statement is probably obvious. But let’s take a closer look at the following two points related to the statement in the book:

  1. EOF is not a character
  2. EOF is not a character you find at the end of a file


1. Why would anyone say or think that EOF is a character? I think it may be because in some C programs you can find code that explicitly checks for EOF using getchar() and getc() routines:

#include <stdio.h>
...
while ((c = getchar()) != EOF)
    putchar(c);

OR

FILE *fp;
int c;
...
while ((c = getc(fp)) != EOF)
    putc(c, stdout);

And if you check the man page for getchar() or getc(), you’ll read that both routines get the next character from the input stream. So that could be what leads to a confusion about the nature of EOF, but that’s just me speculating. Let’s get back to the point that EOF is not a character.

What is a character anyway? A character is the smallest component of a text. ‘A’, ‘a’, ‘B’, ‘b’ are all different characters. A character has a numeric value that is called a code point in the Unicode standard. For example, the English character ‘A’ has a numeric value of 65 in decimal. You can check this quickly in a Python shell:

$ python
>>> ord('A')
65
>>> chr(65)
'A'


Or you could look it up in the ASCII table on your Unix/Linux box:

$ man ascii


Let’s check the value of EOF by writing a little C program. In ANSI C, EOF is defined in <stdio.h> as part of the standard library. Its value is usually -1. Save the following code in file printeof.c, compile it, and run it:

#include <stdio.h>

int main(int argc, char *argv[])
{
    printf("EOF value on my system: %d\n", EOF);
    return 0;
}


$ gcc -o printeof printeof.c

$ ./printeof
EOF value on my system: -1

Okay, so on my system the value is -1 (I tested it both on Mac OS and Ubuntu Linux). Is there a character with a numerical value of -1? Again, you could check the available numeric values in the ASCII table or check the official Unicode page to find the legitimate range of numeric values for representing characters. But let’s fire up a Python shell and use the built-in chr() function to return a character for -1:

$ python
>>> chr(-1)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
ValueError: chr() arg not in range(0x110000)

As expected, there is no character with a numeric value of -1. Okay, so EOF (as seen in C programs) is not a character.

Onto the second point.


2. Is EOF a character that you can find at the end of a file? I think at this point you already know the answer, but let’s double check our assumption.

Let’s take a simple text file helloworld.txt and get a hexdump of the contents of the file. We can use xxd for that:

$ cat helloworld.txt
Hello world!

$ xxd helloworld.txt
00000000: 4865 6c6c 6f20 776f 726c 6421 0a         Hello world!.

As you can see, the last character at the end of the file is the hex 0a. You can find in the ASCII table that 0a represents nl, the newline character. Or you can check it in a Python shell:

$ python
>>> chr(0x0a)
'\n'


Okay. If EOF is not a character and it’s not a character that you find at the end of a file, what is it then?

EOF (end-of-file) is a condition that can be detected by an application when a read operation reaches the end of a file.

Let’s see how we can detect the EOF condition in various programming languages when reading a text file using high-level I/O routines provided by the languages. For this purpose, we’ll write a very simple cat version called mcat that reads an ASCII-encoded text file byte by byte (character by character) and explicitly checks for EOF. Let’s write our cat version in the following programming languages:

  • ANSI C
  • Python
  • Go
  • JavaScript (node.js)

You can find source code for all of the examples in this article on GitHub. Okay, let’s get started with the venerable C programming language.

  1. ANSI C (a modified cat version from The C Programming Language book)

    /* mcat.c */
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
        FILE *fp;
        int c;

        if ((fp = fopen(*++argv, "r")) == NULL) {
            printf("mcat: can't open %s\n", *argv);
            return 1;
        }
        while ((c = getc(fp)) != EOF)
            putc(c, stdout);
        fclose(fp);
        return 0;
    }

    Compile

    $ gcc -o mcat mcat.c
    

    Run

    $ ./mcat helloworld.txt
    Hello world!
    


    Quick explanation of the code above:

    • The program opens a file passed as a command line argument
    • The while loop copies data from the file to the standard output one byte at a time until it reaches the end of the file.
    • On reaching EOF, the program closes the file and terminates
  2. Python 3

    Python doesn’t have a mechanism to explicitly check for EOF like in ANSI C, but if you read a text file one character at a time, you can determine the end-of-file condition by checking if the character read is empty:

    # mcat.py
    import sys

    with open(sys.argv[1]) as fin:
        while True:
            c = fin.read(1)  # read max 1 char
            if c == '':      # EOF
                break
            print(c, end='')


    $ python mcat.py helloworld.txt
    Hello world!
    

    Python 3.8+ (a shorter version of the above using the walrus operator):

    # mcat38.py
    import sys

    with open(sys.argv[1]) as fin:
        while (c := fin.read(1)) != '':  # read max 1 char at a time until EOF
            print(c, end='')


    $ python3.8 mcat38.py helloworld.txt
    Hello world!
    
  3. Go

    In Go we can explicitly check if the error returned by Read() is EOF.

    // mcat.go
    package main

    import (
        "fmt"
        "io"
        "os"
    )

    func main() {
        file, err := os.Open(os.Args[1])
        if err != nil {
            fmt.Fprintf(os.Stderr, "mcat: %v\n", err)
            os.Exit(1)
        }
        buffer := make([]byte, 1) // 1-byte buffer
        for {
            bytesread, err := file.Read(buffer)
            if err == io.EOF {
                break
            }
            fmt.Print(string(buffer[:bytesread]))
        }
        file.Close()
    }


    $ go run mcat.go helloworld.txt
    Hello world!
    
  4. JavaScript (node.js)

    There is no explicit check for EOF, but the end event on a stream is fired when the end of a file is reached and a read operation tries to read more data.

    /* mcat.js */
    const fs = require('fs');
    const process = require('process');

    const fileName = process.argv[2];

    var readable = fs.createReadStream(fileName, {
        encoding: 'utf8',
        fd: null,
    });

    readable.on('readable', function () {
        var chunk;
        while ((chunk = readable.read(1)) !== null) {
            process.stdout.write(chunk); /* chunk is one byte */
        }
    });

    readable.on('end', () => {
        console.log('\nEOF: There will be no more data.');
    });


    $ node mcat.js helloworld.txt
    Hello world!
    
    EOF: There will be no more data.
    


How do the high-level I/O routines in the examples above determine the end-of-file condition? On Linux systems the routines either directly or indirectly use the read() system call provided by the kernel. The getc() function (or macro) in C, for example, uses the read() system call and returns EOF if read() indicated the end-of-file condition. The read() system call returns 0 to indicate the EOF condition.

Let’s write a cat version called syscat using Unix system calls only, both for fun and potentially some profit. Let’s do that in C first:

/* syscat.c */
#include <sys/types.h>
#include <sys/stat.h>
#include <fcntl.h>
#include <unistd.h>

int main(int argc, char *argv[])
{
    int fd;
    char c;

    fd = open(argv[1], O_RDONLY, 0);
    while (read(fd, &c, 1) != 0)
        write(STDOUT_FILENO, &c, 1);
    return 0;
}


$ gcc -o syscat syscat.c

$ ./syscat helloworld.txt
Hello world!

In the code above, you can see that we use the fact that the read() function returns 0 to indicate EOF.

And the same in Python 3:

# syscat.py
import sys
import os

fd = os.open(sys.argv[1], os.O_RDONLY)
while True:
    c = os.read(fd, 1)
    if not c:  # EOF
        break
    os.write(sys.stdout.fileno(), c)


$ python syscat.py helloworld.txt
Hello world!

And in Python3.8+ using the walrus operator:

# syscat38.py
import sys
import os

fd = os.open(sys.argv[1], os.O_RDONLY)
while c := os.read(fd, 1):
    os.write(sys.stdout.fileno(), c)


$ python3.8 syscat38.py helloworld.txt
Hello world!


Let’s recap the main points about EOF again:

  • EOF is not a character
  • EOF is not a character that you find at the end of a file
  • EOF is a condition provided by the kernel that can be detected by an application when a read operation reaches the end of a file

Happy learning and have a great day!


Resources used in preparation for this article (some links are affiliate links):

  1. Computer Systems: A Programmer’s Perspective (3rd Edition)
  2. C Programming Language, 2nd Edition
  3. The Unix Programming Environment (Prentice-Hall Software Series)
  4. Advanced Programming in the UNIX Environment, 3rd Edition
  5. Go Programming Language, The (Addison-Wesley Professional Computing Series)
  6. Unicode HOWTO
  7. Node.js Stream module
  8. Go io package
  9. cat (Unix)

Roberto Alsina: Episodio 26: como el REPL, pero mejor


The Python interpreter's interactive mode is great! It's really useful! But... it can be improved. Let's look at 3 or 4 alternatives.

Roberto Alsina: Episodio 26: El REPL, pero mejor.

Roberto Alsina: Episodio 24: I like Windows!

Matt Layman: A Week At A Time - Building SaaS #46

In this episode, we worked on a weekly view for the Django app. We made navigation that would let users click from one week to the next, then fixed up the view to pull time from that particular week. The first thing that I did was focus on the UI required to navigate to a new weekly view in the app. We mocked out the UI and talked briefly about the flexbox layout that is available to modern browsers.

Mike Driscoll: PyDev of the Week: Doug Farrell


This week we welcome Doug Farrell (@writeson) as our PyDev of the Week! Doug is working on a Python book entitled The Well-Grounded Python Developer for Manning. He is also a contributor to Real Python. You can find out more about Doug on his website. Now let’s spend some time learning more about Doug!

Doug Farrell

Can you tell us a little about yourself (hobbies, education, etc.):

I’m a developer with a lot of other interests and have a varied background. After a couple passes through college, I graduated with an AS degree in commercial art in 1980, and a BS in Physics in 1983. Two clearly related fields. Part of why I graduated so late was having spent five years working at a bronze sculpture foundry. As fun as that was, it took me a while to realize the physical toll of working there wasn’t sustainable, and I went back to school. I guess I’m a slow learner.

During my last year of school, I bought a Tandy Color Computer and learned BASIC and a little 6809 assembler, and the programming hook was set in me. I’ve worked as a software developer in quite a few industries: process control, embedded systems, retail CD-ROM software, Internet reference titles, and web applications for production systems. I’ve also worked in several languages during that time: Pascal, Fortran, C/C++, Visual Basic, PHP, Python, and JavaScript.

My wife and I are bicyclists and have ridden quite a few organized century rides. We’ve shortened our distances and ride more for enjoyment and fitness now, and of course, to compete with each other. I have also gotten back into artistic pursuits and have started painting. This is challenging for me, as I never did any creative painting work before, or worked in a larger format. I know I tend to be a realist, but I’m trying to get more expressive by fooling around with abstraction.

Susan and I have one daughter and son-in-law, and one grandson who just turned 3 and is fantastic!

Why did you start using Python?

In 2000 I changed jobs to join a school and library publisher putting some of their encyclopedias online. Before this, I’d been a longtime Windows developer, and now I was jumping into a Sun/Unix world. At the time, they were developing their web applications using C/C++ as CGI programs. Offline tasks and processing work were beginning to be done with Perl. I was horrified by the Perl syntax and was fortunate enough to find Python. Python appealed to me immediately because of its clean syntax, “one obvious way to do things” philosophy, and in particular, its native support for object-oriented programming. I was firmly in the OOP camp after working with C++ for a few years.

In 2006 I was fortunate enough to join a company where I could program exclusively in Python and have stayed with it as my favorite language ever since.

What other programming languages do you know and which is your favorite?

As mentioned before, I’ve worked in several different languages over the years, Assembler (6809), Pascal, Fortran, C/C++, VB, PHP, Python, and JavaScript. Though I was a “speed freak” working with C for embedded systems and loved that very close to the hardware kind of work, I don’t really want to go back to doing that. Python is my current favorite language, and language of choice for most, if not all, my projects. I also like JavaScript/TypeScript because I like to work on both the front and backend of applications. I enjoy that interface between Python and JavaScript, and in many ways, find lots of parallels between them.

What projects are you working on now?

The book project is pretty much taking up most of my free time, but I do have a few other projects I was working on and want to get back to. Ages ago, I wrote a Python Twisted based logging server that would allow multiple applications across multiple machines to log to a single file. This was very handy for seeing the chronological logging activity of an entire system in one place. I want to rebuild that to use Python asyncio with a WebSocket interface to an Angular web application to display a live/searchable view of the logs in the system.

There is a STEM place near me called Robotics & Beyond, where I’ve taught Python programming a few times. The material I wrote for the classes is also used by the teenage mentors to teach the course as well. This is fun, as kids are fun and challenging to present to!

I also want to/need to rebuild my personal website as it’s out of date. Since I sort of use that to “show off”, it’s shameful the state it’s in!! Here’s a link to my website.

As a complete geek, I have a Lego EV3 Mindstorms system. Now that Lego has released MicroPython for it, I want to get into that as I really like controlling hardware devices with code. This might also lead to trying to do some contracting (possibly) using MicroPython rather than PLC’s or custom C.

Which Python libraries are your favorite (core or 3rd party)?

Some of my answers are obvious. I like and use the requests library quite a bit. I use the Flask web framework for the web applications I build because of its “add it as you need it” approach to what it provides. This also means I use SQLAlchemy quite a bit for database work, because I’d rather work with Python objects than rows of raw data. I’m not much of a database wonk, so the abstraction SQLAlchemy provides appeals to me. Because I like to build rich web applications, I create quite a few REST APIs as part of my Flask work. For this I use two libraries. The Connexion library is a great way to generate these APIs. Its configuration-file approach to defining the API, what it accepts, and the path to how it’s handled really appeals to me. And the automatic Swagger interface to the API is a huge bonus! In conjunction, I use Marshmallow to serialize the data back and forth between SQLAlchemy and Connexion.

You are a contributor to Real Python. Can you tell us how that came about?

I got into giving presentations at the engineering all-hands at my place of work. This was fun because where I work is a heterogeneous tech stack, and I’m part of the mission to promote and convert people to Python. I turned some of my presentations into articles for Dan on dbader.org, and have written for RealPython since he acquired the site.

You can find the articles I’ve written there by going to realpython.com and searching for “farrell”.

How did you end up writing a book on Python?

It was the articles at RealPython that attracted the attention of one of the Manning Publishing acquisition editors. We talked quite a bit about the kinds of things we’d like to do/see, and eventually I wrote a book proposal they were interested in. I’m currently writing “The Well-Grounded Python Developer” for them, which has been a fun, interesting, and sometimes nerve-wracking project.

Here’s a link to the book.

What have you learned as an author?

Writing is wonderfully improved if approached as a collaborative process, the work benefits a great deal from the input of others. The editors at RealPython have helped me improve my writing quite a bit. I also find writing to have certain aspects in common with programming. It’s like optimization, the process could go on nearly forever, but eventually you have to call it “Done!” and walk away. Shipping is a feature!

My Dad, who was a great writer, taught me the beauty of a simple, declarative sentence.

I’ve also learned that editing and read-throughs are essential, and to never fall in love with a sentence. Cutting is important.

Is there anything else you’d like to say?

I’ve been working from home for the last two years, and while I don’t miss the commute at all, being able to interact with my colleagues and peers directly is something I do miss. Being part of the RealPython community has been great because I’ve broadened my circle of peers to include people who work in all kinds of different domains. Besides helping me to stay sharp, it’s exposed me to all sorts of different thoughts about how to solve problems.

Thanks so much for doing the interview, Doug!

The post PyDev of the Week: Doug Farrell appeared first on The Mouse Vs. The Python.


PyBites: Productivity Mondays - What Can You Do Consistently This Week?


Continuous effort - not strength or intelligence - is the key to unlocking our potential ― Winston S. Churchill.

Last week I learned about lagging and leading indicators and why it's important to focus on the latter.

Without going into too much theory the difference can be summed up as:

  • Lagging indicators are all about OUTPUTS, they are easy to measure but hard to improve or influence.

  • Leading indicators then, you guessed it, are about INPUTS, they are easier to influence but hard(er) to measure.

Lagging indicators are history / reactive, leading indicators are future oriented / proactive.

You want to focus on the latter.

A great example to further clarify this I found here:

For many of us a personal goal is weight loss. A clear lagging indicator that is easy to measure. You step on a scale and you have your answer. But how do you actually reach your goal? For weight loss there are 2 leading indicators: 1. Calories taken in and 2. Calories burned. These 2 indicators are easy to influence but very hard to measure.

This resonated with me because I lost quite some weight over the last few years thanks to small daily things I could control and was consistent about:

  • Manage calories
  • Eat healthy foods
  • Go to the gym every weekday
  • Get more hours of sleep

So let's translate this to Python:

Goal: land a developer job or upgrade an existing one.

This can be a monstrous goal. Depending where you are in your journey, it will also take quite some time.

However if you translate this into smaller steps you'll be amazed how much more likely this becomes:

  • Read 30 min about SW development -> this can result in 10-15 books after a year.

  • Write a post about SW / Python every week -> 52 articles on your blog after a year.

  • Connect with 3-5 people on IN every week -> 200-250 people added to your network after a year who see your stuff and might reach out.

  • Solve an exercise every day -> become a black belt on our platform in 6-12 months.

  • Write 30 min of code towards your project every day -> have some high quality projects in your portfolio after 6-12 months.

  • Etc.

Again, YOU CONTROL the lead indicators, and THESE WILL GET YOU TO the lag indicators.

However accept that it takes time. I am the first one to be impatient at times, but when that happens I always remind myself what Bill Gates (and Tony Robbins) said:

Most people overestimate what they can do in one year and underestimate what they can do in ten years.

The fastest way to burn out is to stuff a month's work into a week. It leads to unfocused action and frustration, and therefore messes with your confidence.

Instead take small CONSISTENT steps towards your goal, every single day.

To a large extent consistency beats smarts.


Comment below what your daily reps will be this week.

-- Bob

With so many avenues to pursue in Python it can be tough to know what to do. If you're looking for some direction or want to take your Python code and career to the next level, schedule a call with us now. We can help you!

IslandT: What is python programming language?


Python is the most popular programming language in the world, above Java and above C/C++/C#. We can use Python for free to develop web applications or desktop software and then sell that application or software in the marketplace. Python's source code is available under an open-source license (the GPL-compatible Python Software Foundation License) which guarantees end-users the freedom to run, study, share and modify the source code. Python was created by Guido van Rossum. In my opinion, the Python programming language looks like a combination of the Java, JavaScript, and Perl programming languages, so there is nothing new and nothing we have not seen before if we have already learned those languages.

Python is a high-level, interpreted (processed at runtime by the interpreter, with no need to compile our program before executing it, though it can also be compiled to byte-code for building large-scale applications), interactive (Python supports an interactive mode that allows interactive testing and debugging of snippets of code), object-oriented (a programming language model that organizes software design around data, or objects), functional (creating sets of instructions within function blocks) and structured (conditional programming) scripting language. Python provides very high-level dynamic data types (a dynamic type escapes type checking at compile time; instead, types are resolved at run time) and supports dynamic type checking. It supports automatic garbage collection, just like Java. Python can be easily integrated with C, C++, COM, ActiveX, CORBA, and Java. The bulk of Python's library is very portable and cross-platform compatible with UNIX, Windows, and Macintosh. Python can run on a wide variety of operating systems (Windows, Linux, and Mac) and has almost the same interface on all platforms. You can add low-level modules to the Python interpreter; these modules enable programmers to extend or customize their tools to be more efficient. Python provides interfaces to all major commercial databases. Python supports GUI applications, which can be created using Tkinter.

Python is used in server-side web development, desktop software development, mathematics (using NumPy) and system scripting. Python uses English keywords, just like other programming languages, which makes it easy to learn and understand. Python can also connect to database systems or read and modify files on your computer's hard drive.

The most recent version of Python is Python 3.8.2, which we shall be using in this tutorial series. However, Python 2, although no longer receiving anything other than security updates, is still quite popular because some old Python modules still require Python 2 instead of 3.

Here are two quick views about Python programming code structure.

  1. The semicolon is not needed in Python code.
  2. Python relies on indentation, using whitespace, to define scope; such as the scope of loops, functions, and classes instead of curly-brackets for this purpose.
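The two points above can be seen in a tiny example (the function name and data are made up for illustration):

```python
# No semicolons are needed, and indentation alone defines each block's scope.
def classify_evens(numbers):
    evens = []
    for n in numbers:        # the loop body is the indented block below
        if n % 2 == 0:       # the if-block is indented one level further
            evens.append(n)
    return evens

print(classify_evens([1, 2, 3, 4]))  # prints [2, 4]
```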

There you have it, this will mark the beginning of our Python programming language tutorial series. In the next chapter, we will look at various IDEs that we can use to write the Python program.

PyCharm: PyCharm 2020.1 EAP 5


We have a new Early Access Program (EAP) version of PyCharm that can be now downloaded from our website.

We are getting closer every week to the 2020.1 release, and we are pushing hard on all the new features we want to make it into it. There are some big ones to try out in this EAP.

New in PyCharm

Generate requirements.txt

Wouldn’t it be nice to generate a requirements.txt file from the packages list in the project interpreter pane and then also have it stay in sync with that list? Actually, yes it would be nice. Really nice. So that is what we have now done.
PyCharm provides integration with the major means of requirements management and makes it possible to track unsatisfied requirements in your projects. And now PyCharm has the ability to generate/update requirements.txt from the project's interpreter packages. From the Tools menu, select Sync Python Requirements. Check out the documentation here.

py_requirements_txt_example
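A generated requirements.txt is just a plain list of pinned packages; the names and versions below are purely illustrative, not output from PyCharm:

```text
Flask==1.1.1
requests==2.22.0
SQLAlchemy==1.3.13
```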

Smart step into is the default

If there is any way to make debugging easier, you can rest assured we are going to take it seriously. Stepping into and out of methods when debugging your code was incredibly helpful and a ‘step’ (yeah we went there) in the right direction. Now though, we have taken more of a leap and made Smart Step Into the default. Smart step into is helpful when there are several method calls on a line, and say, you want to be specific about which method to enter. This feature will allow you to specifically select the method call you are interested in. It is just much smarter, really.

Smart Step into

Simpler Jupyter notebook editing

For our PyCharm professional users, we’ve added a simple way to edit Jupyter notebook files and apply editing actions to the single cells and the whole notebook. It is like a “Select All” action that can be applied to a cell. Simply press Ctrl+A once to select a single cell at the caret, or press Ctrl+A twice to select all cells in the notebook.

Edit Jupyter notebooks

When editing notebook files, note that PyCharm updates the source code and the preview of the notebook if it has been changed externally.

Further Improvements

  • On the list of developers most wanted: the capability to display terminal sessions vertically/horizontally side-by-side is now a reality. Use “Split Vertically”/”Split Horizontally” from the context menu.
  • The Python Interpreter widget has been tinkered with to make it even better. It used to be that you had to open at least one file in the editor to show the interpreter settings in the status bar. This limitation has been removed.
  • The list of Recent projects after importing settings from previous versions now shows you the recent projects in the right order and it doesn’t include deleted projects.
  • As we are getting ever closer to the release, we are coming down hard on bugs. You can check out all the fixes that have been made to this release in the release notes.

Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP and stay up to date. You can find the installation instructions on our website.

Podcast.__init__: The Advanced Python Task Scheduler

Most long-running programs have a need for executing periodic tasks. APScheduler is a mature and open source library that provides all of the features that you need in a task scheduler. In this episode the author, Alex Grönholm, explains how it works, why he created it, and how you can use it in your own applications. He also digs into his plans for the next major release and the forces that are shaping the improved feature set. Spare yourself the pain of triggering events at just the right time and let APScheduler do it for you.
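As a rough illustration of the chore a task scheduler takes off your hands, here is a sketch of manual periodic re-scheduling using only the standard library's sched module (this is not APScheduler's API; the function and variable names are made up for the example):

```python
import sched
import time

results = []
scheduler = sched.scheduler(time.time, time.sleep)

def tick(count):
    # Record a "tick", then re-schedule ourselves -- the bookkeeping
    # that a real scheduling library automates for you.
    results.append(count)
    if count < 3:
        scheduler.enter(0.01, 1, tick, argument=(count + 1,))

scheduler.enter(0.01, 1, tick, argument=(1,))
scheduler.run()  # blocks until no events remain
print(results)   # prints [1, 2, 3]
```

A library like APScheduler layers triggers, persistence, and executors on top of this basic pattern so you never write the re-scheduling loop yourself.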


Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 Gbit/s private networking, node balancers, a 40 Gbit/s public network, and a brand new managed Kubernetes platform, all controlled by a convenient API you’ve got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models, they’ve got dedicated CPU and GPU instances. Go to pythonpodcast.com/linode to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
  • You listen to this show to learn and stay up to date with the ways that Python is being used, including the latest in machine learning and data analysis. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to pythonpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
  • Your host as usual is Tobias Macey and today I’m interviewing Alex Grönholm about APScheduler, a library for scheduling tasks in your Python projects

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you start by describing what APScheduler is and the main use cases that APScheduler is designed for?
    • What was your motivation for creating it?
  • What is the workflow for integrating APScheduler into an application?
    • In the documentation it says not to run more than one instance of the scheduler, what are some strategies for scaling schedulers?
  • What are some common architectures for applications that take advantage of APScheduler?
    • What are some potential pitfalls that developers should be aware of?
  • Can you describe how APScheduler is implemented and how its design has evolved since you first began working on it?
    • What have you found to be the most complex or challenging aspects of building or using a scheduling framework?
  • What are some of the most interesting/innovative/unexpected ways that you have seen APScheduler used?
  • What are some of the features or capabilities that you have consciously left out?
    • What design strategies or features of APScheduler are often overlooked or underappreciated?
  • What are some of the most useful or interesting lessons that you have learned while building and maintaining APScheduler?
  • When is APScheduler the wrong choice for managing task execution?
  • What do you have planned for the future of the project?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Real Python: Python Bindings: Calling C or C++ From Python


Are you a Python developer with a C or C++ library you’d like to use from Python? If so, then Python bindings allow you to call functions and pass data from Python to C or C++, letting you take advantage of the strengths of both languages. Throughout this tutorial, you’ll see an overview of some of the tools you can use to create Python bindings.

In this tutorial, you’ll learn about:

  • Why you want to call C or C++ from Python
  • How to pass data between C and Python
  • What tools and methods can help you create Python bindings

This tutorial is aimed at intermediate Python developers. It assumes basic knowledge of Python and some understanding of functions and data types in C or C++. You can get all of the example code you’ll see in this tutorial by clicking on the link below:

Get Sample Code: Click here to get the sample code you'll use to learn about Python Bindings in this tutorial.

Let’s dive into looking at Python bindings!

Python Bindings Overview

Before you dive into how to call C from Python, it’s good to spend some time on why. There are several situations where creating Python bindings to call a C library is a great idea:

  1. You already have a large, tested, stable library written in C++ that you’d like to take advantage of in Python. This may be a communication library or a library to talk to a specific piece of hardware. What it does is unimportant.

  2. You want to speed up a particular section of your Python code by converting a critical section to C. Not only does C have faster execution speed, but it also allows you to break free from the limitations of the GIL, provided you’re careful.

  3. You want to use Python test tools to do large-scale testing of your systems.

All of the above are great reasons to learn to create Python bindings to interface with your C library.

Note: Throughout this tutorial, you’ll be creating Python bindings to both C and C++. Most of the general concepts apply to both languages, and so C will be used unless there’s a specific difference between the two languages. In general, each of the tools will support either C or C++, but not both.

Let’s get started!

Marshalling Data Types

Wait! Before you start writing Python bindings, take a look at how Python and C store data and what types of issues this will cause. First, let’s define marshalling. This concept is defined by Wikipedia as follows:

The process of transforming the memory representation of an object to a data format suitable for storage or transmission. (Source)

For your purposes, marshalling is what the Python bindings are doing when they prepare data to move it from Python to C or vice versa. Python bindings need to do marshalling because Python and C store data in different ways. C stores data in the most compact form in memory possible. If you use a uint8_t, then it will only use 8 bits of memory total.

In Python, on the other hand, everything is an object. This means that each integer uses several bytes in memory. How many will depend on which version of Python you’re running, your operating system, and other factors. This means that your Python bindings will need to convert a C integer to a Python integer for each integer passed across the boundary.
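You can see that per-object overhead directly with sys.getsizeof. The exact byte counts vary by CPython version and platform, so treat the numbers as illustrative:

```python
import sys

# Even a tiny Python int is a full object with header overhead,
# far more than the 1 byte a C uint8_t would use.
small = sys.getsizeof(0)
large = sys.getsizeof(2 ** 64)  # bigger values need more digit storage
print(small, large)
```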

Other data types have similar relationships between the two languages. Let’s look at each in turn:

  • Integers store counting numbers. Python stores integers with arbitrary precision, meaning that you can store very, very, large numbers. C specifies the exact sizes of integers. You need to be aware of data sizes when you’re moving between languages to prevent Python integer values from overflowing C integer variables.

  • Floating-point numbers are numbers with a decimal place. Python’s float is typically stored as a C double, which can hold much larger (and much smaller) values than a 32-bit C float. This means that you’ll also have to pay attention to those values to ensure they stay in range when you marshal them.

  • Complex numbers are numbers with an imaginary part. While both Python and C have complex numbers, there’s no built-in method for marshalling between them. To marshal complex numbers, you’ll need to build a struct or class in the C code to manage them.

  • Strings are sequences of characters. For being such a common data type, strings will prove to be rather tricky when you’re creating Python bindings. As with other data types, Python and C store strings in quite different formats. (Unlike the other data types, this is an area where C and C++ differ as well, which adds to the fun!) Each of the solutions you’ll examine has slightly different methods for dealing with strings.

  • Boolean variables can have only two values. Since they’re supported in C, marshalling them will prove to be fairly straightforward.

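A quick sketch with ctypes makes the integer and floating-point range pitfalls from the list above concrete. Note that ctypes does no range checking on its fixed-size types:

```python
import ctypes
import math

# A Python int that exceeds a C type's range silently wraps around...
print(ctypes.c_uint8(300).value)  # 300 % 256 == 44

# ...and a Python float too large for a 32-bit C float becomes infinity.
overflowed = ctypes.c_float(1e300).value
print(math.isinf(overflowed))
```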
Besides data type conversions, there are other issues you’ll need to think about as you build your Python bindings. Let’s keep exploring them.

Understanding Mutable and Immutable Values

In addition to all of these data types, you’ll also have to be aware of how Python objects can be mutable or immutable. C has a similar concept with function parameters when talking about pass-by-value or pass-by-reference. In C, all parameters are pass-by-value. If you want to allow a function to change a variable in the caller, then you need to pass a pointer to that variable.

You might be wondering if you can get around the immutable restriction by simply passing an immutable object to C using a pointer. Unless you go to ugly and non-portable extremes, Python won’t give you a pointer to an object, so this just doesn’t work. If you want to modify a Python object in C, then you’ll need to take extra steps to achieve this. These steps will be dependent on which tools you use, as you’ll see below.
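One common workaround, sketched here with ctypes, is to pass C code a mutable buffer rather than the immutable str or bytes object itself:

```python
import ctypes

# create_string_buffer allocates a mutable, C-compatible block of memory,
# so C code (or Python) can modify it in place -- unlike a bytes object.
buf = ctypes.create_string_buffer(b"hello")
buf[0] = b"H"   # in-place modification; bytes would raise TypeError here
print(buf.value)
```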

So, you can add immutability to your checklist of items to consider as you create Python bindings. Your final stop on the grand tour of creating this checklist is how to handle the different ways in which Python and C deal with memory management.

Managing Memory

C and Python manage memory differently. In C, the developer must manage all memory allocations and ensure they’re freed once and only once. Python takes care of this for you using a garbage collector.

While each of these approaches has its advantages, it does add an extra wrinkle into creating Python bindings. You’ll need to be aware of where the memory for each object was allocated and ensure that it’s only freed on the same side of the language barrier.

For example, a Python object is created when you set x = 3. The memory for this is allocated on the Python side and needs to be garbage collected. Fortunately, with Python objects, it’s quite difficult to do anything else. Take a look at the converse in C, where you directly allocate a block of memory:

int *iPtr = (int*)malloc(sizeof(int));

When you do this, you need to ensure that this pointer is freed in C. This may mean manually adding code to your Python bindings to do this.
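To make this split concrete, here’s a sketch that calls libc’s malloc and free through ctypes. It assumes a Unix-like system where the C library can be located:

```python
import ctypes
import ctypes.util

# Memory allocated by C must be freed by C -- Python's garbage collector
# knows nothing about this block. (Assumes a Unix-like system.)
libc = ctypes.CDLL(ctypes.util.find_library("c") or None)
libc.malloc.restype = ctypes.c_void_p
libc.malloc.argtypes = [ctypes.c_size_t]
libc.free.argtypes = [ctypes.c_void_p]

ptr = libc.malloc(ctypes.sizeof(ctypes.c_int))  # allocated on the C side...
libc.free(ptr)  # ...so it must be freed on the C side, exactly once
```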

That rounds out your checklist of general topics. Let’s start setting up your system so you can write some code!

Setting Up Your Environment

For this tutorial, you’re going to be using pre-existing C and C++ libraries from the Real Python GitHub repo to show a test of each tool. The intent is that you’ll be able to use these ideas for any C library. To follow along with all of the examples here, you’ll need to have the following:

  • A C++ library installed and knowledge of the path for command-line invocation
  • Python development tools:
    • For Linux, this is the python3-dev or python3-devel package, depending on your distro.
    • For Windows, there are multiple options.
  • Python 3.6 or greater
  • A virtual environment (recommended, but not required)
  • The invoke tool

The last one might be new to you, so let’s take a closer look at it.

Using the invoke Tool

invoke is the tool you’ll be using to build and test your Python bindings in this tutorial. It has a similar purpose to make but uses Python instead of Makefiles. You’ll need to install invoke in your virtual environment using pip:

$ python3 -m pip install invoke

To run it, you type invoke followed by the task you wish to execute:

$ invoke build-cmult
==================================================
= Building C Library
* Complete

To see which tasks are available, you use the --list option:

$ invoke --list
Available tasks:

  all              Build and run all tests
  build-cffi       Build the CFFI Python bindings
  build-cmult      Build the shared library for the sample C code
  build-cppmult    Build the shared library for the sample C++ code
  build-cython     Build the cython extension module
  build-pybind11   Build the pybind11 wrapper library
  clean            Remove any built objects
  test-cffi        Run the script to test CFFI
  test-ctypes      Run the script to test ctypes
  test-cython      Run the script to test Cython
  test-pybind11    Run the script to test PyBind11

Note that when you look in the tasks.py file where the invoke tasks are defined, you’ll see that the name of the second task listed is build_cffi. However, the output from --list shows it as build-cffi. The minus sign (-) can’t be used as part of a Python name, so the file uses an underscore (_) instead.

For each of the tools you’ll examine, there will be a build- and a test- task defined. For example, to run the code for CFFI, you could type invoke build-cffi test-cffi. An exception is ctypes, as there’s no build phase for ctypes. In addition, there are two special tasks added for convenience:

  • invoke all runs the build and test tasks for all tools.
  • invoke clean removes any generated files.

Now that you’ve got a feeling for how to run the code, let’s take a peek at the C code you’ll be wrapping before hitting the tools overview.

C or C++ Source

In each of the example sections below, you’ll be creating Python bindings for the same function in either C or C++. These sections are intended to give you a taste of what each method looks like, rather than an in-depth tutorial on that tool, so the function you’ll wrap is small. The function you’ll create Python bindings for takes an int and a float as input parameters and returns a float that’s the product of the two numbers:

// cmult.c
float cmult(int int_param, float float_param) {
    float return_value = int_param * float_param;
    printf("    In cmult : int: %d float %.1f returning  %.1f\n",
           int_param, float_param, return_value);
    return return_value;
}

The C and C++ functions are almost identical, with minor name and string differences between them. You can get a copy of all of the code by clicking on the link below:

Get Sample Code: Click here to get the sample code you'll use to learn about Python Bindings in this tutorial.

Now you’ve got the repo cloned and your tools installed, you can build and test the tools. So let’s dive into each section below!

ctypes

You’ll start with ctypes, which is a tool in the standard library for creating Python bindings. It provides a low-level toolset for loading shared libraries and marshalling data between Python and C.

How It’s Installed

One of the big advantages of ctypes is that it’s part of the Python standard library. It was added in Python version 2.5, so it’s quite likely you already have it. You can import it just like you do with the sys or time modules.

Calling the Function

All of the code to load your C library and call the function will be in your Python program. This is great since there are no extra steps in your process. You just run your program, and everything is taken care of. To create your Python bindings in ctypes, you need to do these steps:

  1. Load your library.
  2. Wrap some of your input parameters.
  3. Tell ctypes the return type of your function.

You’ll look at each of these in turn.

Library Loading

ctypes provides several ways for you to load a shared library, some of which are platform-specific. For your example, you’ll create a ctypes.CDLL object directly by passing in the full path to the shared library you want:

# ctypes_test.py
import ctypes
import pathlib

if __name__ == "__main__":
    # Load the shared library into ctypes
    libname = pathlib.Path().absolute() / "libcmult.so"
    c_lib = ctypes.CDLL(libname)

This will work for cases when the shared library is in the same directory as your Python script, but be careful when you attempt to load libraries that are from packages other than your Python bindings. There are many details for loading libraries and finding paths in the ctypes documentation that are platform and situation-specific.

NOTE: Many platform-specific issues can arise during library loading. It’s best to make incremental changes once you get an example working.
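For libraries installed in standard system locations, ctypes.util.find_library can locate the file for you. Here’s a sketch wrapping the system math library (it assumes a Unix-like system with libm, and it previews the restype and argtypes attributes you’ll meet shortly):

```python
import ctypes
import ctypes.util

# Locate and load the C math library from the standard search paths.
libm = ctypes.CDLL(ctypes.util.find_library("m"))

# Tell ctypes the C signature so values marshal correctly.
libm.sqrt.restype = ctypes.c_double
libm.sqrt.argtypes = [ctypes.c_double]
print(libm.sqrt(9.0))  # 3.0
```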

Now that you have the library loaded into Python, you can try calling it!

Calling Your Function

Remember that the function prototype for your C function is as follows:

// cmult.h
float cmult(int int_param, float float_param);

You need to pass in an integer and a float and can expect to get a float returned. Integers and floats have native support in both Python and in C, so you expect this case will work for reasonable values.

Once you’ve loaded the library into your Python bindings, the function will be an attribute of c_lib, which is the CDLL object you created earlier. You can try to call it just like this:

x, y = 6, 2.3
answer = c_lib.cmult(x, y)

Oops! This doesn’t work. This line is commented out in the example repo because it fails. If you attempt to run with that call, then Python will complain with an error:

$ invoke test-ctypes
Traceback (most recent call last):
  File "ctypes_test.py", line 16, in <module>
    answer = c_lib.cmult(x, y)
ctypes.ArgumentError: argument 2: <class 'TypeError'>: Don't know how to convert parameter 2

It looks like you need to tell ctypes about any parameters that aren’t integers. ctypes doesn’t have any knowledge of the function unless you tell it explicitly. Any parameter that’s not marked otherwise is assumed to be an integer. ctypes doesn’t know how to convert the value 2.3 that’s stored in y to an integer, so it fails.

To fix this, you’ll need to create a c_float from the number. You can do that in the line where you’re calling the function:

# ctypes_test.py
answer = c_lib.cmult(x, ctypes.c_float(y))
print(f"    In Python: int: {x} float {y:.1f} return val {answer:.1f}")

Now, when you run this code, it returns the product of the two numbers you passed in:

$ invoke test-ctypes
    In cmult : int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 48.0

Wait a minute… 6 multiplied by 2.3 is not 48.0!

It turns out that, much like the input parameters, ctypes assumes your function returns an int. In actuality, your function returns a float, which is getting marshalled incorrectly. Just like the input parameter, you need to tell ctypes to use a different type. The syntax here is slightly different:

# ctypes_test.py
c_lib.cmult.restype = ctypes.c_float
answer = c_lib.cmult(x, ctypes.c_float(y))
print(f"    In Python: int: {x} float {y:.1f} return val {answer:.1f}")

That should do the trick. Let’s run the entire test-ctypes target and see what you’ve got. Remember, the first section of output is before you fixed the restype of the function to be a float:

$ invoke test-ctypes
==================================================
= Building C Library
* Complete
==================================================
= Testing ctypes Module
    In cmult : int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 48.0
    In cmult : int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 13.8

That’s better! While the first, uncorrected version is returning the wrong value, your fixed version agrees with the C function. Both C and Python get the same result! Now that it’s working, take a look at why you may or may not want to use ctypes.

Strengths and Weaknesses

The biggest advantage ctypes has over the other tools you’ll examine here is that it’s built into the standard library. It also requires no extra steps, as all of the work is done as part of your Python program.

In addition, the concepts used are low-level, which makes exercises like the one you just did manageable. However, more complex tasks grow cumbersome with the lack of automation. In the next section, you’ll see a tool that adds some automation to the process.

CFFI

CFFI is the C Foreign Function Interface for Python. It takes a more automated approach to generating Python bindings. CFFI gives you two independent choices for how to build and use your bindings, which combine into four possible modes:

  • ABI vs API: API mode uses a C compiler to generate a full Python module, whereas ABI mode loads the shared library and interacts with it directly. Without running the compiler, getting the structures and parameters correct is error-prone. The documentation heavily recommends using the API mode.

  • in-line vs out-of-line: The difference between these two modes is a trade-off between speed and convenience:

    • In-line mode compiles the Python bindings every time your script runs. This is convenient, as you don’t need an extra build step. It does, however, slow down your program.
    • Out-of-line mode requires an extra step to generate the Python bindings a single time and then uses them each time the program is run. This is much faster, but that may not matter for your application.

For this example, you’ll use the API out-of-line mode, which produces the fastest code and, in general, looks similar to other Python bindings you’ll create later in this tutorial.

How It’s Installed

Since CFFI is not a part of the standard library, you’ll need to install it on your machine. It’s recommended that you create a virtual environment for this. Fortunately, CFFI installs with pip:

$ python3 -m pip install cffi

This will install the package into your virtual environment. If you’ve already installed from the requirements.txt, then this should be taken care of. You can take a look at requirements.txt by accessing the repo at the link below:

Get Sample Code: Click here to get the sample code you'll use to learn about Python Bindings in this tutorial.

Now that you have CFFI installed, it’s time to take it for a spin!

Calling the Function

Unlike ctypes, with CFFI you’re creating a full Python module. You’ll be able to import the module like any other module in the standard library. There is some extra work you’ll have to do to build your Python module. To use your CFFI Python bindings, you’ll need to take the following steps:

  • Write some Python code describing the bindings.
  • Run that code to generate a loadable module.
  • Modify the calling code to import and use your newly created module.

That might seem like a lot of work, but you’ll walk through each of these steps and see how it works.

Write the Bindings

CFFI provides methods to read a C header file to do most of the work when generating Python bindings. In the documentation for CFFI, the code to do this is placed in a separate Python file. For this example, you’ll place that code directly into the build tool invoke, which uses Python files as input. To use CFFI, you start by creating a cffi.FFI object, which provides the three methods you need:

# tasks.py
import cffi
...
""" Build the CFFI Python bindings """
print_banner("Building CFFI Module")
ffi = cffi.FFI()

Once you have the FFI, you’ll use .cdef() to process the contents of the header file automatically. This creates wrapper functions for you to marshal data from Python:

# tasks.py
this_dir = pathlib.Path().absolute()
h_file_name = this_dir / "cmult.h"
with open(h_file_name) as h_file:
    ffi.cdef(h_file.read())

Reading and processing the header file is the first step. After that, you need to use .set_source() to describe the source file that CFFI will generate:

# tasks.py
ffi.set_source(
    "cffi_example",
    # Since you're calling a fully-built library directly, no custom source
    # is necessary. You need to include the .h files, though, because behind
    # the scenes cffi generates a .c file that contains a Python-friendly
    # wrapper around each of the functions.
    '#include "cmult.h"',
    # The important thing is to include the pre-built lib in the list of
    # libraries you're linking against:
    libraries=["cmult"],
    library_dirs=[this_dir.as_posix()],
    extra_link_args=["-Wl,-rpath,."],
)

Here’s a breakdown of the parameters you’re passing in:

  • "cffi_example" is the base name for the source file that will be created on your file system. CFFI will generate a .c file, compile it to a .o file, and link it to a .<system-description>.so or .<system-description>.dll file.

  • '#include "cmult.h"' is the custom C source code that will be included in the generated source before it’s compiled. Here, you just include the .h file for which you’re generating bindings, but this can be used for some interesting customizations.

  • libraries=["cmult"] tells the linker the name of your pre-existing C library. This is a list, so you can specify several libraries if required.

  • library_dirs=[this_dir.as_posix(),] is a list of directories that tells the linker where to look for the above list of libraries.

  • extra_link_args=['-Wl,-rpath,.'] is a set of options that generate a shared object, which will look in the current path (.) for other libraries it needs to load.

Build the Python Bindings

Calling .set_source() doesn’t build the Python bindings. It only sets up the metadata to describe what will be generated. To build the Python bindings, you need to call .compile():

# tasks.py
ffi.compile()

This wraps things up by generating the .c file, .o file, and the shared library. The invoke task you just walked through can be run on the command line to build the Python bindings:

$ invoke build-cffi
==================================================
= Building C Library
* Complete
==================================================
= Building CFFI Module
* Complete

You have your CFFI Python bindings, so it’s time to run this code!

Calling Your Function

After all of the work you did to configure and run the CFFI compiler, using the generated Python bindings looks just like using any other Python module:

# cffi_test.py
import cffi_example

if __name__ == "__main__":
    # Sample data for your call
    x, y = 6, 2.3
    answer = cffi_example.lib.cmult(x, y)
    print(f"    In Python: int: {x} float {y:.1f} return val {answer:.1f}")

You import the new module, and then you can call cmult() directly. To test it out, use the test-cffi task:

$ invoke test-cffi
==================================================
= Testing CFFI Module
    In cmult : int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 13.8

This runs your cffi_test.py program, which tests out the new Python bindings you’ve created with CFFI. That completes the section on writing and using your CFFI Python bindings.

Strengths and Weaknesses

It might seem that ctypes requires less work than the CFFI example you just saw. While this is true for this use case, CFFI scales to larger projects much better than ctypes due to automation of much of the function wrapping.

CFFI also produces quite a different user experience. ctypes allows you to load a pre-existing C library directly into your Python program. CFFI, on the other hand, creates a new Python module that can be loaded like other Python modules.

What’s more, with the out-of-line-API method you used above, the time penalty for creating the Python bindings is done once when you build it and doesn’t happen each time you run your code. For small programs, this might not be a big deal, but CFFI scales better to larger projects in this way, as well.

Like ctypes, using CFFI only allows you to interface with C libraries directly. C++ libraries require a good deal of work to use. In the next section, you’ll see a Python bindings tool that focuses on C++.

PyBind11

PyBind11 takes quite a different approach to creating Python bindings. In addition to shifting the focus from C to C++, it also uses C++ to specify and build the module, allowing it to take advantage of the metaprogramming tools in C++. Like CFFI, the Python bindings generated by PyBind11 are a full Python module that can be imported and used directly.

PyBind11 is modeled after the Boost::Python library and has a similar interface. It restricts its use to C++11 and newer, however, which allows it to simplify and speed things up compared to Boost, which supports everything.

How It’s Installed

The First Steps section of the PyBind11 documentation walks you through how to download and build the test cases for PyBind11. While this doesn’t appear to be strictly required, working through these steps will ensure you’ve got the proper C++ and Python tools set up.

Note: Most of the examples for PyBind11 use cmake, which is a fine tool for building C and C++ projects. For this demo, however, you’ll continue to use the invoke tool, which follows the instructions in the Building Manually section of the docs.

You’ll want to install this tool into your virtual environment:

$ python3 -m pip install pybind11

PyBind11 is a header-only library, similar to much of Boost. This allows pip to install the actual C++ source for the library directly into your virtual environment.

Calling the Function

Before you dive in, please note that you’re using a different C++ source file, cppmult.cpp, instead of the C file you used for the previous examples. The function is essentially the same in both languages.

Writing the Bindings

Similar to CFFI, you need to create some code to tell the tool how to build your Python bindings. Unlike CFFI, this code will be in C++ instead of Python. Fortunately, there’s a minimal amount of code required:

// pybind11_wrapper.cpp
#include <pybind11/pybind11.h>
#include <cppmult.hpp>

PYBIND11_MODULE(pybind11_example, m) {
    m.doc() = "pybind11 example plugin";  // Optional module docstring
    m.def("cpp_function", &cppmult, "A function that multiplies two numbers");
}

Let’s look at this a piece at a time, as PyBind11 packs a lot of information into a few lines.

The first two lines include the pybind11.h file and the header file for your C++ library, cppmult.hpp. After that, you have the PYBIND11_MODULE macro. This expands into a block of C++ code that’s well described in the PyBind11 source:

This macro creates the entry point that will be invoked when the Python interpreter imports an extension module. The module name is given as the first argument and it should not be in quotes. The second macro argument defines a variable of type py::module which can be used to initialize the module. (Source)

What this means for you is that, for this example, you’re creating a module called pybind11_example and that the rest of the code will use m as the name of the py::module object. On the next line, inside the C++ function you’re defining, you create a docstring for the module. While this is optional, it’s a nice touch to make your module more Pythonic.

Finally, you have the m.def() call. This will define a function that’s exported by your new Python bindings, meaning it will be visible from Python. In this example, you’re passing three parameters:

  • cpp_function is the exported name of the function that you’ll use in Python. As this example shows, it doesn’t need to match the name of the C++ function.
  • &cppmult takes the address of the function to be exported.
  • "A function..." is an optional docstring for the function.

Now that you have the code for the Python bindings, take a look at how you can build this into a Python module.

Build the Python Bindings

The tool you use to build the Python bindings in PyBind11 is the C++ compiler itself. You may need to modify the defaults for your compiler and operating system.

To begin, you must build the C++ library for which you’re creating bindings. For an example this small, you could build the cppmult library directly into the Python bindings library. However, for most real-world examples, you’ll have a pre-existing library you want to wrap, so you’ll build the cppmult library separately. The build is a standard call to the compiler to build a shared library:

# tasks.py
invoke.run(
    "g++ -O3 -Wall -Werror -shared -std=c++11 -fPIC cppmult.cpp "
    "-o libcppmult.so "
)

Running this with invoke build-cppmult produces libcppmult.so:

$ invoke build-cppmult
==================================================
= Building C++ Library
* Complete

The build for the Python bindings, on the other hand, requires some special details:

 1 # tasks.py
 2 invoke.run(
 3     "g++ -O3 -Wall -Werror -shared -std=c++11 -fPIC "
 4     "`python3 -m pybind11 --includes` "
 5     "-I /usr/include/python3.7 -I .  "
 6     "{0} "
 7     "-o {1}`python3.7-config --extension-suffix` "
 8     "-L. -lcppmult -Wl,-rpath,.".format(cpp_name, extension_name)
 9 )

Let’s walk through this line-by-line. Line 3 contains fairly standard C++ compiler flags that indicate several details, including that you want all warnings caught and treated as errors, that you want a shared library, and that you’re using C++11.

Line 4 is the first step of the magic. It calls the pybind11 module to have it produce the proper include paths for PyBind11. You can run this command directly on the console to see what it does:

$ python3 -m pybind11 --includes
-I/home/jima/.virtualenvs/realpython/include/python3.7m -I/home/jima/.virtualenvs/realpython/include/site/python3.7

Your output should be similar but show different paths.

In line 5 of your compilation call, you can see that you’re also adding the path to the Python dev includes. While it’s recommended that you don’t link against the Python library itself, the source needs some code from Python.h to work its magic. Fortunately, the code it uses is fairly stable across Python versions.

Line 5 also uses -I . to add the current directory to the list of include paths. This allows the #include <cppmult.hpp> line in your wrapper code to be resolved.

Line 6 specifies the name of your source file, which is pybind11_wrapper.cpp. Then, on line 7 you see some more build magic happening. This line specifies the name of the output file. Python has some particular ideas on module naming, which include the Python version, the machine architecture, and other details. Python also provides a tool to help with this called python3.7-config:

$ python3.7-config --extension-suffix
.cpython-37m-x86_64-linux-gnu.so

You may need to modify the command if you’re using a different version of Python, and your results will likely differ if you’re on a different operating system.
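If you’d rather query this value from within Python than shell out to python3.7-config, the standard-library sysconfig module exposes the same suffix:

```python
import sysconfig

# EXT_SUFFIX is the platform-specific filename suffix for extension
# modules, e.g. ".cpython-37m-x86_64-linux-gnu.so" on Linux.
print(sysconfig.get_config_var("EXT_SUFFIX"))
```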

The final line of your build command, line 8, points the linker at the libcppmult library you built earlier. The rpath section tells the linker to add information to the shared library to help the operating system find libcppmult at runtime. Finally, you’ll notice that this string is formatted with the cpp_name and the extension_name. You’ll be using this function again when you build your Python bindings module with Cython in the next section.

Run this command to build your bindings:

$ invoke build-pybind11
==================================================
= Building C++ Library
* Complete
==================================================
= Building PyBind11 Module
* Complete

That’s it! You’ve built your Python bindings with PyBind11. It’s time to test it out!

Calling Your Function

Similar to the CFFI example above, once you’ve done the heavy lifting of creating the Python bindings, calling your function looks like normal Python code:

# pybind11_test.py
import pybind11_example

if __name__ == "__main__":
    # Sample data for your call
    x, y = 6, 2.3
    answer = pybind11_example.cpp_function(x, y)
    print(f"    In Python: int: {x} float {y:.1f} return val {answer:.1f}")

Since you used pybind11_example as the name of your module in the PYBIND11_MODULE macro, that’s the name you import. In the m.def() call you told PyBind11 to export the cppmult function as cpp_function, so that’s what you use to call it from Python.

You can test it with invoke as well:

$ invoke test-pybind11
==================================================
= Testing PyBind11 Module
    In cppmul: int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 13.8

That’s what PyBind11 looks like. Next, you’ll see when and why PyBind11 is the right tool for the job.

Strengths and Weaknesses

PyBind11 is focused on C++ instead of C, which makes it different from ctypes and CFFI. It has several features that make it quite attractive for C++ libraries:

  • It supports classes.
  • It handles polymorphic subclassing.
  • It allows you to add dynamic attributes to objects from Python and many other tools, which would be quite difficult to do from the C-based tools you’ve examined.

That being said, there's a fair bit of setup and configuration you need to do to get PyBind11 up and running. Getting the installation and build correct can be a bit finicky, but once that's done, it seems fairly solid. Also, PyBind11 requires at least C++11. This is unlikely to be a big restriction for most projects, but it may be a consideration for you.

Finally, the extra code you need to write to create the Python bindings is in C++ and not Python. This may or may not be an issue for you, but it is different than the other tools you’ve looked at here. In the next section, you’ll move on to Cython, which takes quite a different approach to this problem.

Cython

The approach Cython takes to creating Python bindings uses a Python-like language to define the bindings and then generates C or C++ code that can be compiled into the module. There are several methods for building Python bindings with Cython. The most common one is to use setup from distutils. For this example, you’ll stick with the invoke tool, which will allow you to play with the exact commands that are run.

How It’s Installed

Cython is a Python module that can be installed into your virtual environment from PyPI:

$ python3 -m pip install cython

Again, if you’ve installed the requirements.txt file into your virtual environment, then this will already be there. You can grab a copy of requirements.txt by clicking on the link below:

Get Sample Code:Click here to get the sample code you'll use to learn about Python Bindings in this tutorial.

That should have you ready to work with Cython!

Calling the Function

To build your Python bindings with Cython, you’ll follow similar steps to those you used for CFFI and PyBind11. You’ll write the bindings, build them, and then run Python code to call them. Cython can support both C and C++. For this example, you’ll use the cppmult library that you used for the PyBind11 example above.

Write the Bindings

The most common form of declaring a module in Cython is to use a .pyx file:

 1 # cython_example.pyx
 2 """ Example cython interface definition """
 3
 4 cdef extern from "cppmult.hpp":
 5     float cppmult(int int_param, float float_param)
 6
 7 def pymult(int_param, float_param):
 8     return cppmult(int_param, float_param)

There are two sections here:

  1. Lines 4 and 5 tell Cython that you're using cppmult() from cppmult.hpp.
  2. Lines 7 and 8 create a wrapper function, pymult(), to call cppmult().

The language used here is a special mix of C, C++, and Python. It will look fairly familiar to Python developers, though, as the goal is to make the process easier.

The first section with cdef extern... tells Cython that the function declarations below are also found in the cppmult.hpp file. This is useful for ensuring that your Python bindings are built against the same declarations as your C++ code. The second section looks like a regular Python function—because it is! This section creates a Python function that has access to the C++ function cppmult.

Now that you’ve got the Python bindings defined, it’s time to build them!

Build the Python Bindings

The build process for Cython has similarities to the one you used for PyBind11. You first run Cython on the .pyx file to generate a .cpp file. Once you’ve done this, you compile it with the same function you used for PyBind11:

 1 # tasks.py
 2 def compile_python_module(cpp_name, extension_name):
 3     invoke.run(
 4         "g++ -O3 -Wall -Werror -shared -std=c++11 -fPIC "
 5         "`python3 -m pybind11 --includes` "
 6         "-I /usr/include/python3.7 -I .  "
 7         "{0} "
 8         "-o {1}`python3.7-config --extension-suffix` "
 9         "-L. -lcppmult -Wl,-rpath,.".format(cpp_name, extension_name)
10     )
11
12 def build_cython(c):
13     """ Build the cython extension module """
14     print_banner("Building Cython Module")
15     # Run cython on the pyx file to create a .cpp file
16     invoke.run("cython --cplus -3 cython_example.pyx -o cython_wrapper.cpp")
17
18     # Compile and link the cython wrapper library
19     compile_python_module("cython_wrapper.cpp", "cython_example")
20     print("* Complete")

You start by running cython on your .pyx file. There are a few options you use on this command:

  • --cplus tells the compiler to generate a C++ file instead of a C file.
  • -3 switches Cython to generate Python 3 syntax instead of Python 2.
  • -o cython_wrapper.cpp specifies the name of the file to generate.

Once the C++ file is generated, you use the C++ compiler to generate the Python bindings, just as you did for PyBind11. Note that the call to produce the extra include paths using the pybind11 tool is still in that function. It won’t hurt anything here, as your source will not need those.

Running this task in invoke produces this output:

$ invoke build-cython
==================================================
= Building C++ Library
* Complete
==================================================
= Building Cython Module
* Complete

You can see that it builds the cppmult library and then builds the cython module to wrap it. Now you have the Cython Python bindings. (Try saying that quickly…) It’s time to test it out!

Calling Your Function

The Python code to call your new Python bindings is quite similar to what you used to test the other modules:

 1 # cython_test.py
 2 import cython_example
 3
 4 # Sample data for your call
 5 x, y = 6, 2.3
 6
 7 answer = cython_example.pymult(x, y)
 8 print(f"    In Python: int: {x} float {y:.1f} return val {answer:.1f}")

Line 2 imports your new Python bindings module, and you call pymult() on line 7. Remember that the .pyx file provided a Python wrapper around cppmult() and renamed it to pymult. Using invoke to run your test produces the following:

$ invoke test-cython
==================================================
= Testing Cython Module
    In cppmul: int: 6 float 2.3 returning  13.8
    In Python: int: 6 float 2.3 return val 13.8

You get the same result as before!

Strengths and Weaknesses

Cython is a relatively complex tool that can provide you a deep level of control when creating Python bindings for either C or C++. Though you didn’t cover it in depth here, it provides a Python-esque method for writing code that manually controls the GIL, which can significantly speed up certain types of problems.

That Python-esque language is not quite Python, however, so there’s a slight learning curve when you’re coming up to speed in figuring out which parts of C and Python fit where.

Other Solutions

While researching this tutorial, I came across several different tools and options for creating Python bindings. While I limited this overview to some of the more common options, there are several other tools I stumbled across. The list below is not comprehensive. It’s merely a sampling of other possibilities if one of the above tools doesn’t fit your project.

PyBindGen

PyBindGen generates Python bindings for C or C++ and is written in Python. It’s targeted at producing readable C or C++ code, which should simplify debugging issues. It wasn’t clear if this has been updated recently, as the documentation lists Python 3.4 as the latest tested version. There have been yearly releases for the last several years, however.

Boost.Python

Boost.Python has an interface similar to PyBind11, which you saw above. That’s not a coincidence, as PyBind11 was based on this library! Boost.Python is written in full C++ and supports most, if not all, versions of C++ on most platforms. In contrast, PyBind11 restricts itself to modern C++.

SIP

SIP is a toolset for generating Python bindings that was developed for the PyQt project. It’s also used by the wxPython project to generate their bindings, as well. It has a code generation tool and an extra Python module that provides support functions for the generated code.

Cppyy

cppyy is an interesting tool that has a slightly different design goal than what you’ve seen so far. In the words of the package author:

“The original idea behind cppyy (going back to 2001), was to allow Python programmers that live in a C++ world access to those C++ packages, without having to touch C++ directly (or wait for the C++ developers to come around and provide bindings).” (Source)

Shiboken

Shiboken is a tool for generating Python bindings that’s developed for the PySide project associated with the Qt project. While it was designed as a tool for that project, the documentation indicates that it’s neither Qt- nor PySide-specific and is usable for other projects.

SWIG

SWIG is a different tool than any of the others listed here. It’s a general tool used to create bindings to C and C++ programs for many other languages, not just Python. This ability to generate bindings for different languages can be quite useful in some projects. It, of course, comes with a cost as far as complexity is concerned.

Conclusion

Congrats! You’ve now had an overview of several different options for creating Python bindings. You’ve learned about marshalling data and issues you need to consider when creating bindings. You’ve seen what it takes to be able to call a C or C++ function from Python using the following tools:

  • ctypes
  • CFFI
  • PyBind11
  • Cython

You now know that, while ctypes allows you to load a DLL or shared library directly, the other three tools take an extra step, but still create a full Python module. As a bonus, you've also played a little with the invoke tool to run command-line tasks from Python. You can get all of the code you saw in this tutorial by clicking the link below:

Get Sample Code:Click here to get the sample code you'll use to learn about Python Bindings in this tutorial.

Now pick your favorite tool and start building those Python bindings! Special thanks to Loic Domaigne for the extra technical review of this tutorial.


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Anarcat: Moving dconf entries to git


I've been managing my UNIX $HOME repository with version control for almost two decades now (first under CVS, then with git). Once in a while, I find a little hack to make this work better.

Today, it's dconf/gsettings, or more specifically, Workrave that I want to put in git. I noticed my laptop was extremely annoying compared with my office workstation and realized I never figured out how to write Workrave's configuration to git. The reason is that its configuration is stored in dconf, a binary database format, but, blissfully, I had forgotten about this and tried to figure out where the heck its config was.

I was about to give up when I re-remembered this, and figured I would just do a quick search ("dconf commit to git") that brought me to Josh Triplett's Debconf 14 talk about this exact topic. The slides are... a little terse, but I could figure out the gist of it. The key is to change the DCONF_PROFILE environment variable to point to a new config file (say in your .bashrc):

export DCONF_PROFILE=$HOME/.config/dconf/profile

That file (~/.config/dconf/profile) should be created with the following content:

user-db:user
service-db:keyfile/user
  1. The first line is the default: store everything in this huge binary file.

  2. The second is the magic: it stores configuration in a precious text file, in .config/dconf/user.txt specifically.

Then the last step was to migrate config between the two. For that I need a third config file, a DCONF_PROFILE that has only the text database so settings are forcibly written there, say ~/.config/dconf/profile-edit:

service-db:keyfile/user

And then I can migrate my workrave configuration with:

gsettings list-recursively org.workrave | while read schema key val ; do DCONF_PROFILE=~/.config/dconf/profile-edit gsettings set "$schema" "$key" "$val" ; done

Of course, a bunch of those settings are garbage and do not differ from the default. Unfortunately, there doesn't seem to be a way to tell gsettings to only list non-default settings, so I had to do things by hand from there, by comparing my generated config with:

DCONF_PROFILE=/dev/null gsettings list-recursively org.workrave
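Lacking such an option, a short script can do that comparison for you. Here's a hedged Python sketch of the idea (the functions and sample values are mine, assuming the schema key value lines that gsettings list-recursively prints):

```python
def parse_listing(text):
    """Parse `gsettings list-recursively` output into {(schema, key): value}."""
    settings = {}
    for line in text.strip().splitlines():
        schema, key, value = line.split(None, 2)
        settings[(schema, key)] = value
    return settings

def non_default(current, defaults):
    """Keep only the settings whose value differs from the default."""
    return {k: v for k, v in current.items() if defaults.get(k) != v}

# Sample data standing in for the two DCONF_PROFILE runs:
defaults = parse_listing("""
org.workrave.timers.daily-limit snooze 1800
org.workrave.timers.daily-limit limit 14400
""")
current = parse_listing("""
org.workrave.timers.daily-limit snooze 1800
org.workrave.timers.daily-limit limit 25200
""")
changed = non_default(current, defaults)
print(changed)  # {('org.workrave.timers.daily-limit', 'limit'): '25200'}
```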

I finally ended up with the following user.txt, which is now my workrave config:

[org/workrave/timers/daily-limit]
snooze=1800
limit=25200

[org/workrave/timers/micro-pause]
auto-reset=10
snooze=150
limit=900

[org/workrave/timers/rest-break]
auto-reset=600
snooze=1800
limit=10800

[org/workrave/breaks/micro-pause]
max-preludes=0

That's a nice setup: "YOLO" settings end up in the binary database that I don't care about tracking in git, and a precious few snowflakes get tracked in a text file. Triplett also made a command to change settings in the text file, but I don't think I need to bother with that. It's more critical to copy settings between the two, in my experience, as I rarely have this moment: "oh I know exactly the key setting I want to change and i'll write it down". What actually happens is that I'm making a change in a GUI and later realize it should be synchronized over.

(It looks like Triplett does have a tool to do those diffs and transitions, but unfortunately git://joshtriplett.org/git/home doesn't respond at this time so I can't find that source code.)

Now if only Firefox bookmarks were reasonable again...

PyCon: March 2 Update on COVID-19

The coronavirus (also known as COVID-19) is a new virus that causes respiratory illness in people and can spread from person-to-person. Since PyCon US 2020 is scheduled in April, we want to give our community an update on our status and more information about our policy for attendees pertaining to COVID-19.

As of March 2, PyCon 2020 in Pittsburgh, PA is scheduled to happen.

The staff and board directors are actively watching the situation closely, as it continues to change rapidly. We plan to reassess the situation weekly and more frequently as we get closer to the event. This includes checking in with our Pittsburgh team for updates including from vendors and local authorities.

Currently, there have not been any COVID-19 cases in Pennsylvania, and conferences continue to happen at the David L. Lawrence Convention Center. On February 28th, the Pennsylvania Department of Health stated: "For the general American public, who are unlikely to be exposed to this virus at this time, the immediate health risk from COVID-19 is considered low". These evaluations from the CDC and Pennsylvania Department of Health are the basis for our current decision to move forward with PyCon US 2020 as planned.

That said, we understand that the situation varies depending on where attendees live and work. PyCon will refund 100% of registration fees for anyone that has their travel impacted by COVID-19 or has any concerns about traveling, especially if traveling internationally. If at the time of PyCon you feel sick, or are worried that you might have been in contact with people who have been diagnosed with COVID-19, we encourage you to stay home. Your registration will be 100% refunded. If you have any questions about this, please reach out to pycon-reg at python dot org.

Additional resources on the subject:


PyCon will continue to monitor this situation and we plan to internally reassess regularly. We will publish another update on Friday, March 6, and plan to keep you informed of our plans at least each week as we approach PyCon.

Andre Roberge: True constants in Python - part 2, and a challenge

Like the title says, this is the second post on this topic. If you have not done so, you should really read the first one, which is much shorter, before continuing to read here.

The first clue


The first clue is that, rather than executing test.py as the main module, I imported it. Can you think of a situation where this would make a difference?

I'll leave a bit of space below to give you the opportunity to possibly go back to part 1, without reading about the solution below.















PEP 302


Back in 2002, with the adoption of PEP 302, Python enabled programmers to modify what happens when a module is imported; this is known as an import hook.  For example, it is possible to modify the source code in a module prior to it being executed by Python. This is NOT what I have done here - but I have done this for other examples that I will refer to below.

If the only thing required were to modify the source, one could use what is described in PEP 263 and define a custom encoding that would transform the source. In that case, by adding an encoding declaration, it would have been possible to run test.py directly rather than having to import it. I thought of it a while ago but, in order to cover all possible cases, one would pretty much have to write a complete parser for Python that could identify and replace the various assignment statements with print statements so as to show what you saw in part 1. However, this still would not be enough to protect against the reassignment done externally, like I did with

test.UPPERCASE = 1

The actual solution I used required three separate steps. The challenge I will mention at the end is to reduce this to two steps - something that I think is quite possible but that I have not been able to do yet - and to remove one left-over "cheat" which would allow one to redefine a constant by monkeypatching.  I think that this is possible but I have not actually sat down to actually do it. I thought of waiting for a few days to give an added incentive for anyone who would like to try and get the bragging rights of having it done first! ;-)

Step 1

Steps 1 and 2 involve an import hook. They are independent of one another and can be done in either order.

When importing a module, Python roughly does the following:


  1. Find the source code
  2. Create a module object
  3. Execute the source code in the module object's dict.
The module object created by Python comes with a dict that is, in some sense, "read-only": you cannot write code to modify its behaviour, nor replace it by a custom dict. (However, see the challenge.) Still, one can define a custom dict whose various methods (__setitem__, __delitem__, etc.) prevent the reassignment of variables we intend to be constants. In the example I have chosen, these are variables whose names are in UPPERCASE. (Not shown in the example of part 1: I have also added a scan of the code to identify any variable that used the type hint Final and add them automatically to the list of variables intended to be constants.)

Instead of executing the code in the module object's dict, it is executed in this special dict. The content of that dict is then copied into the module object's dict.

Doing this ensures that code run directly in the module is guaranteed to prevent variable reassignement. At least, I have not found a way to cheat from within a module and change the value of variables intended to be a constant.

Step 2

Step 2 is to define a custom class that prevent changes of attributes. This custom class is used to replace the module's own class, something that can be done.

Step 3

Step 3 is to make Python use our import hook. To do so, we must have some code executed before anything else. There are a couple of ways to do this, as described in the Site-specific configuration hook section of the Python documentation. The method I have chosen is one that is easily done on an ad-hoc basis. I created a file named usercustomize.py whose content is the following:

from ideas.examples import constants
constants.add_hook()

This calls my code that sets up an import hook as described above. To have Python execute this code, I set the environment variable PYTHONPATH to be equal to the directory where usercustomize is located. On Windows (which is what I use), this is most easily achieved by navigating to that directory in the terminal and entering the following:

set PYTHONPATH=%CD%

Doing so will ensure that the code in usercustomize.py is executed before any user code.

The challenge


As mentioned in part 1, attempting to modify the value of a constant from outside, as in test.UPPERCASE = 1, is prevented.

This leaves one possible cheat. From an external module, instead of writing

import test
test.UPPERCASE = "new value"

which is prevented, one can use the following cheat

import test
test.__dict__["UPPERCASE"] = "new value"

This is because the module's __dict__ is a "normal" Python dict. 

However, instead of using a module object created by Python, it should be possible to create a custom module object that uses something like the special dict mentioned before. Thus one would not need to change the way that Python executes code in the module's dict.

The challenge is to write code that creates such a module object.   I would not be surprised if there remained some other ways to cheat after doing so, but hopefully none as obvious as the one shown above.

Resources


The code I have written is part of my project named ideas.  The actual code for the constants example is given by this link.  See also the documentation for the project.  Note that token_utils, mentioned in the documentation, has been put in a separate project; I need to update the documentation.

Both ideas and token-utils can be installed from Pypi.org as usual.




Andre Roberge: True constants in Python - part 1

tl;dr: I'm always trying to see if what everyone "knows to be true" is really true...

In many programming languages, you can define constants via some special declaration. For example, in Java you can apparently write something like:

public static final String CONST_NAME = "Name";

and this will result in a value that cannot be changed.  I wrote "apparently" since I do not program in Java and rely on what other people write.

Everyone "knows" that you cannot do the same in Python.  If you want to define constants, according to Python's PEP 8, what you should do is the following
Constants are usually defined on a module level and written in all capital letters with underscores separating words. Examples include MAX_OVERFLOW and TOTAL.
and rely on the fact that everyone will respect this convention.  However, nothing prevents you from redefining the value of these variables later in the same module, or from outside (monkeypatching) when importing the module.

Thus, if I write in a module

TOTAL = 1
# some code
TOTAL = 2

the value of the variable will have changed.

If you are willing to use optional type declarations and either use Python 3.8 with the typing module, or an earlier version of Python together with the third-party typing-extensions package, you can use something like the following:

from typing import Final

TOTAL: Final = 0

and use a tool like mypy that will check to see if the value of TOTAL is changed anywhere, reporting an error if it is.  However, if you do run such an incorrect program (according to mypy), it will still execute properly, and the value of the "constant" will indeed change.

For people that want something a bit more robust, it is often recommended to use some special object (that could live in a separate module) whose attributes cannot change once assigned. However, this does not prevent one from deleting the value of the "constant object" (either by mistake within the module, or by monkeypatching) and reassigning it.
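For reference, that recommendation usually looks something like this sketch (my own illustration), together with the delete-and-reassign hole just mentioned:

```python
class _Const:
    """An object whose attributes can be assigned only once."""

    def __setattr__(self, name, value):
        if name in self.__dict__:
            raise AttributeError(f"cannot rebind constant {name!r}")
        self.__dict__[name] = value

const = _Const()
const.TOTAL = 1
try:
    const.TOTAL = 2          # blocked by __setattr__
except AttributeError as e:
    print(e)

# ...but the protection is shallow:
del const.__dict__["TOTAL"]  # delete the "constant"
const.TOTAL = 2              # and reassign it
print(const.TOTAL)  # 2
```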

Every Python programmer knows that the situation as described above is the final word on the possibility of creating constants in Python ... or is it?

For example, here's a screen capture of an actual module (called test.py)


Notice how the linter in my editor has flagged an apparent error (using UPPERCASE after deleting it.)  And here's the result of importing this module, and then attempting to change the value of the constant.


Can you think of how I might have done this?  (No photoshop, only the normal Python interpreter used.)

In part 2, I explain how I have done this and leave you with a (small) challenge.

DaPythonista: API? It’s not that scary!


There are plenty of services out there that provide a free API waiting to be used from your favorite language.
APIs can also be built on top of any visible data, such as Facebook (which I've covered here), Twitter, or any public database.
In this article, we'll be focusing on the Paypal API service.

I assume you guys have minimal experience with some basic Python and basic web concepts.

Paypal has a massive API, but no intuitive reference for
tracking our daily transactions.
I mean, they do have a REST API for it, but no official Python client 🙂

We're gonna implement a connector to the Paypal REST API and
a parser for the transactions.

Transactions? Huh?

First, we need to talk a bit about the Paypal transactions API.
(We’re gonna use Paypal’s deprecated API in this article)

Paypal’s old API used an NVP method (Name-value pair, basically POST with data).
The official API URL is https://api-3t.paypal.com/nvp and the endpoint we want is "TransactionSearch".
To connect to this API, we need a Paypal developer account, which provides
a user name, a password and an API signature.

Each transaction contains a lot of attributes: a timestamp, payer details, amount, status, etc.
Here we'll parse the data into dictionaries, but the sky is the limit: we can create graphs and calculations from the balances and dates, run the data through a database, and lots of other great things!

Let’s try to retrieve some data!

According to the official Paypal documentation, we need to provide the following parameters:

  • USER // user-id
  • PWD // password
  • SIGNATURE // API signature
  • VERSION // the release version of the API (we’ll use 98.0)
  • STARTDATE // Paypal demands at least a start date in this format: “1980-01-01T00:00:00Z”

Now, to send all this to Paypal’s servers, we’ll use the Requests package.

First, let’s build the data we want to send.
As we use a POST request we need to provide a URL and a data dictionary:

import requests

params = {
'data':
 {
   'VERSION': '98.0',
   'METHOD': 'TransactionSearch', 
   'USER': 'xxx_api1.xxx.com',
   'PWD': 'XXXXXXXXXXXX',
   'SIGNATURE': 'XXXXXXXXXXXX-XXXXXXXXXXXX',
   'STARTDATE': '2020-01-01T00:00:00Z'
 }, 
  'url': 'https://api-3t.paypal.com/nvp',
  'timeout': 300
}

res = requests.post(**params)

GREAT! We’ve got some data!

b'L_TIMESTAMP0=2020%2d03%2d03T03%3a14%3a49Z
&L_TIMESTAMP1=2020%2d03%2d03T01%3a59%3a28Z
&L_TIMESTAMP2=2020%2d03%2d02T23%3a06%3a26Z
&L_TIMESTAMP3=2020%2d03%2d02T05%3a39%3a25Z
&L_TIMESTAMP4=2020%2d03%2d01T11%3a35%3a37Z

The data we get back is a URL-encoded string (content type application/x-www-form-urlencoded).
To handle it, we use urllib.parse.parse_qs, a function which parses it into one big dictionary:

<class 'dict'>:
 {'L_TIMESTAMP0': ['2020-03-03T03:14:49Z'],
'L_TIMESTAMP1': ['2020-03-03T01:59:28Z'],
'L_TIMESTAMP2': ['2020-03-02T23:06:26Z']
.......}

Now, all attributes with the same index belong to one completed transaction.
So, in theory, we want to gather all attributes with the same index into one object:

import re
from urllib.parse import parse_qs

# raw_transactions is the big dictionary produced by parse_qs above
raw_transactions = parse_qs(res.content.decode())

transactions_dict = {}

for t in raw_transactions:
    index = re.findall(r'\d+', t)  # getting the index
    key = str(index[0])

    if key not in transactions_dict:
        transactions_dict[key] = {}

    transaction_property = t[2:len(t) - len(key)]  # strip "L_" prefix and index
    value = raw_transactions[t][0]
    transactions_dict[key].update({transaction_property: value})
    

The results are:

<class 'dict'>: {'0':
 {'TIMESTAMP': '2020-03-03T03:14:49Z',
 'TIMEZONE': 'GMT',
 'TYPE': 'Payment',
 'EMAIL': 'xxxx@xxxx.com',
 'NAME': 'John Doe',
 'TRANSACTIONID': '111111111111',
 'STATUS': 'Completed',
 'AMT': '12.00', 
 'CURRENCYCODE': 'USD',
 'FEEAMT': '-0.83',
 'NETAMT': '11.17'}
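Building on that parsed structure, here's a hedged sketch of one such calculation: summing the net amounts of completed transactions (the sample values below are made up):

```python
# Sample transactions shaped like the parsed structure above,
# using the STATUS and NETAMT field names returned by the NVP API.
transactions = {
    "0": {"STATUS": "Completed", "NETAMT": "11.17"},
    "1": {"STATUS": "Pending", "NETAMT": "5.00"},
    "2": {"STATUS": "Completed", "NETAMT": "3.83"},
}

# Only completed transactions count toward the balance.
total = sum(
    float(t["NETAMT"]) for t in transactions.values()
    if t["STATUS"] == "Completed"
)
print(f"{total:.2f}")  # 15.00
```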

Conclusion:

We’ve learned how to use an API such as Paypal, how to parse this data and use it as we wish.
We can add so many features to this code, such as graphs, calculations, and plenty of other cool stuff!
I hope this article taught you something new, and I am looking forward to your feedback. Please, do tell — was this useful for you?

The full project can be found on Github here.

The post API? It’s not that scary! appeared first on DaPythonista.

Matt Layman: Views On Views

In the previous Understand Django article, I covered URLs and the variety of tools that Django gives us to describe the outside interface to the internet for your project. In this article, we’ll examine the core building block that makes those URLs work: the Django view. What Is A View? A view is a chunk of code that receives an HTTP request and returns an HTTP response. Views describe Django’s entire purpose: to respond to requests made to an application on the internet.

Real Python: How to Implement a Python Stack


Have you heard of stacks and wondered what they are? Do you have a general idea but are wondering how to implement a Python stack? You’ve come to the right place!

In this course, you’ll learn:

  • How to recognize when a stack is a good choice for a data structure
  • How to decide which implementation is best for your program
  • What extra considerations to make about stacks in a threading or multiprocessing environment

This course is for Pythonistas who are comfortable running scripts, know what a list is and how to use it, and are wondering how to implement Python stacks.


