Channel: Planet Python

Doug Hellmann: math — Mathematical Functions — PyMOTW 3

The math module implements many of the IEEE functions that would normally be found in the native platform C libraries for complex mathematical operations using floating point values, including logarithms and trigonometric operations. Read more… This post is part of the Python Module of the Week series for Python 3. See PyMOTW.com for more articles … Continue reading math — Mathematical Functions — PyMOTW 3

pgcli: Sensible Defaults


When I first set out to create pgcli, my goal was to design a Postgresql client that shipped with sensible defaults. It shouldn't require fiddling with config files to enable features.

How did we do on that goal? This is one of those subjective goals that is not so easy to measure. But fortunately, Craig posted a useful blogpost that shows how to configure your psql shell to make it powerful.

I figured I'd use that as a scorecard to see how many of those features ship by default in pgcli.

Prompt

One of the things covered in the blogpost is customizing the prompt to show the server name and the database name.

Pgcli ships with a default prompt of user@host:dbname>. When you're writing a multi-line query, the subsequent lines are indented and filled with dots.

[screenshot: the default pgcli prompt]

Both of these can be overridden via the config file (~/.config/pgcli/config) but you hardly ever have to change the defaults.
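For illustration, overriding the prompt might look something like this (a sketch, not a complete config file; the escape codes follow psql-style conventions, and the available options can vary between pgcli versions):

```ini
# ~/.config/pgcli/config -- a sketch
[main]
# \u = user, \h = host, \d = database
prompt = '\u@\h:\d> '
```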

Null values

By default psql won't show the NULL values in a table; you can force psql to show a placeholder value via \pset null. In pgcli, null values are always shown, displayed as <null>.

[screenshot: null values displayed as <null>]

Once again this can be overridden to any character of your preference via the config file.

Timing

You can time your SQL queries by enabling \timing in psql. This is enabled by default in pgcli: every query is timed and the result is displayed at the bottom of the output.

History

History in pgcli is unlimited. But pgcli has no option to separate the history for different databases.

The blogpost describes a cool feature for storing the history of a session scoped to a database. This is an awesome feature that we might adopt in pgcli in future versions.

Output Formatting

Expanded mode in psql is a way to output the results of a query when the output is too wide to fit on the screen as a table. It can be toggled with \x on or \x off.

The \x auto command in psql will intelligently choose between the table format or the expanded format based on the screen width.

In pgcli, we have the ability to do this but this is not enabled by default. It has to be enabled via the config file (~/.config/pgcli/config) by the user.

I don't think we'll change this behavior.

Keyword Casing

One of the psql options lets you choose either upper case or lower case for keyword completion. So when you type sel and then hit tab, psql will auto-complete it to SELECT if upper case was chosen.

In pgcli we have those explicit 'upper' and 'lower' options. In addition, we have an 'auto' option, which is the default. This chooses the casing based on what the user has typed so far: if you type sel and hit tab, pgcli will suggest select, whereas if you type SEL and hit tab it will suggest SELECT. I'm sure there is a way to achieve this in psql, but this is the option that pgcli ships with.

Conclusion

We have plenty of other config options that ship with default values. You can see a list of all the config values in ~/.config/pgcli/config if you have pgcli installed or you can checkout the pgclirc file for a quick view. Let us know how we did.

"Fredrik Håård's Blaag": Another kind of nomad?


I used to enjoy a lot of the writing about digital nomadry, but the more I've started living the life I'm aiming for, the more what I read seems to describe an adolescent "workation" spent depleting your savings rather than a sustainable career and lifestyle. In the interest of banging my own drum and giving a different perspective on a life of travel, I'd like to share our journey so far.

One of the many views from my office

Planning and preparation

We started planning our mobile lifestyle over ten years ago, when we realized that a house, office jobs and kids weren't our dream, and also that the weeks we spent in our small (8.5 m) sailing boat were when we really felt at home. We did not just save some money to go on an extended cruise; instead, we planned our lives and careers around being able to start a potentially never-ending cruise - Veronica runs Visual Units, a business offering IT support to logistics providers, and I'm a contractor.

Around this time I started spending more time building my personal brand, going to conferences to speak, getting involved in open source, taking care of my LinkedIn profile, and in general spending a bit of extra effort not just to be good at my job, but to be seen as being good at my job. Since I'm not amused by sales, I figured that the best route would be to try to let clients find me rather than the other way around.

In spring, 2013, I resigned from a job and an employer I still liked very much, but which did not and could not provide me with the freedom I wanted. I started my own company (my wife was already self-employed), we sold our apartment, and took a leap into the unknown.

A short leap

Well, we took a short leap. My first contract was for my previous employer, and we did not have a boat at this time, so we rented a cottage outside of Karlskrona, just some ten kilometers from where we used to live. During that winter, I finished up my contracts and we found and bought the boat we wanted - not the biggest, but one we can live on and could afford without taking a loan. In spring 2014 we packed our things in a one-way trailer, returned the key to the cottage, and left for the Swedish west coast, where our new home had just been prepared, masted, and put into the water.

A steady pace

Since then, we've been on the move, more or less. Northern European winters being what they are, we've rented places in winter, mostly coinciding with on-site contracts for me. We've lived most of the winter in Stockholm (twice) and Edinburgh, and we've traveled by car and AirBnB - a few weeks at most - in Germany, Italy, France, England and Scotland.

Chassan, one of our homes this year

But it's the summers we're planning for. Not quite aimlessly drifting, we've slowly made our way from the Swedish west coast up to the very far north of the Baltic Sea. We generally (try to) get up early, walk the dogs, eat breakfast and then set sail. Around noon we'll reach whatever goal we had in mind for the day, have lunch, and then set to work. We don't exactly have a normal work week - hours depend on customers' needs more than on the day of the week - but generally we both work around 20-30 hours a week, although like most self-employed people I know, we sometimes go way over 40 hours for a short while.

When winter comes, we'll put the boat on land, wherever we happen to find ourselves, figure out how to get our car, pack up, and find a place - or more likely a couple of places - to stay until spring. South of France seems likely right now. Or somewhere else entirely. Next winter, we hope to avoid leaving the boat by sailing to the Mediterranean instead. In time, we'll probably buy a slightly bigger boat, that we can stay in even during a Swedish winter.

Money in, money out

Of course - living in Chiang Mai is probably cheaper than our current setup. On the other hand, with no mortgage to pay it's not as expensive as you might think, and with our slow pace of moving around we can get actual, proper work done that pays actual, proper rates. Remote work does not pay as much as, say, on-site work in Stockholm city, but on the other hand our summer rent averages out around 300 EUR a month, all inclusive. Staying in Europe has an extra benefit on the income front - 4G/LTE coverage is commonplace, and with the roaming charges gone, working remotely has never been easier. A mid-range 4G modem and an extra MIMO antenna, and we're set to work almost anywhere.

In the end, we're not trying to live on a shoestring and not trying (hard) to get rich, but rather to balance the amount and type of work we do with the money we need to sustain our lifestyle in the long run.

Right now, I'm working through Toptal on a project for an American customer - and my total sales investment was showing up for the video interview while we were living in a mountain cottage above the Aosta Valley, in the Italian alps. I can take making a few percent less for that kind of freedom.

Family, flock and friends

People seem to assume that living and working as closely as we do - and we're very rarely apart for more than a couple of hours - would bring problems. I can only respond that we would probably not have gotten married if we didn't enjoy spending time together.

Pedro and Gordon, our dogs

Living and working together on 12 square meters does lead to friction, but on the other hand you learn to resolve your conflicts pretty fast when there's nowhere to run and sulk. Besides, it upsets our dogs something fierce when we have an argument, and we don't want to upset the dogs do we?

Speaking of the dogs, taking a long walk with them is an excellent way to get some of that alone time I think everybody needs, with the bonus of getting some exercise and being forced to take a break no matter how deep in work you're buried. The dogs also help us find new friends wherever we go - we've now got friends spread all over Europe, and come winter we have more opportunity than most to visit them.

Future

Right now, we've got no great plans to change our lives - while we've not run out of things we'd like to do, there are very few, if any, things we feel we need to do or change. Our long-term plans (anything more than a month or so) are ever fluid, but that's OK, because with nothing to hold us down, we can go with the flow.

Alvilda, our home, office and transportation

I'm planning to do a few shorter write-ups on specific subjects around remote work, nomadry and freelancing, and if and when I get around to it I will update this post with links as well.

This has turned into a - for me - monster of a blog post, so a huge thank you if you made it this far!

PyCharm: PyCharm 2017.2 EAP 7


The release of PyCharm 2017.2 is only a couple of weeks away. This week we have the seventh early access program (EAP) version ready for you, go to our website to get it now!

New in this version:

  • Many bugs have been fixed
  • Any virtualenvs detected for a certain project will now be visible only for that project
  • For more details, see the release notes.

Please let us know how you like it! Users who actively report about their experiences with the EAP can win prizes in our EAP competition. To participate: just report your findings on YouTrack, and help us improve PyCharm.

To get all EAP builds as soon as we publish them, set your update channel to EAP (go to Help | Check for Updates, click the ‘Updates’ link, and then select ‘Early Access Program’ in the dropdown). If you’d like to keep all your JetBrains tools updated, try JetBrains Toolbox!

-PyCharm Team
The Drive to Develop

Weekly Python Chat: Ranges in Python


Want to loop over consecutive numbers? Backwards? In steps of 3? You want range.

The range function changed quite a bit between Python 2 and Python 3. We're going to discuss the behaviors of range and xrange in Python 2 and compare each to range in Python 3.

If you think xrange in Python 2 is the same as range in Python 3, you're in for a surprise.
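To give a flavor of that difference: Python 3's range is a lazy sequence object with abilities Python 2's xrange never had, such as fast containment tests, slicing, and equality comparison. A brief sketch:

```python
# Python 3's range is a full lazy sequence, not just an iterator factory.
r = range(0, 30, 3)                        # consecutive numbers in steps of 3

print(list(r))                             # [0, 3, 6, 9, 12, 15, 18, 21, 24, 27]
print(18 in r)                             # True -- computed without iterating
print(r[2:5])                              # range(6, 15, 3) -- slicing returns a range
print(list(reversed(range(5))))            # [4, 3, 2, 1, 0] -- looping backwards
print(range(0, 9, 2) == range(0, 10, 2))   # True -- equality compares the values
```

Python 2's xrange supports none of the slicing, reversing, or equality behavior shown here, which is one of the surprises the chat will cover.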

Sign up and ask a range-related question!

NumFOCUS: NumFOCUS Projects at SciPy 2017

SciPy 2017 is a wrap! As you’d expect, we had lots of participation by NumFOCUS sponsored projects. Here’s a collection of links to talks given by our projects:   Tutorials Software Carpentry Scientific Python Course Tutorial by Maxim Belkin (Part 1) (Part 2) The Jupyter Interactive Widget Ecosystem Tutorial by Matt Craig, Sylvain Corlay, & Jason […]

Daniel Bader: How to Check if a File Exists in Python


How to Check if a File Exists in Python

A tutorial on how to find out whether a file (or directory) exists using Python built-ins and functions from the standard library.

Checking if a file exists in Python

The ability to check whether a file exists on disk or not is important for many types of Python programs:

Maybe you want to make sure a data file is available before you try to load it, or maybe you want to prevent overwriting an existing file. The same is true for directories—maybe you need to ensure an output folder is available before your program runs.

In Python, there are several ways to verify a file or directory exists using functions built into the core language and the Python standard library.

In this tutorial you’ll see three different techniques for file existence checks in Python, with code examples and their individual pros and cons.

Let’s take a look!

Option #1: os.path.exists() and os.path.isfile()

The most common way to check for the existence of a file in Python is using the exists() and isfile() methods from the os.path module in the standard library.

These functions are available on Python 2 and 3, and they’re usually the first suggestion that comes up when you consult the Python docs or a search engine on how to solve this problem.

Here’s a demo of how to work with the os.path.exists() function. I’m checking several paths (files and directories) for existence in the example below:

>>> import os.path
>>> os.path.exists('mydirectory/myfile.txt')
True
>>> os.path.exists('does-not-exist.txt')
False
>>> os.path.exists('mydirectory')
True

As you just saw, calling os.path.exists() will return True for files and directories. If you want to ensure that a given path points to a file and not to a directory, you can use the os.path.isfile() function:

>>> import os.path
>>> os.path.isfile('mydirectory/myfile.txt')
True
>>> os.path.isfile('does-not-exist.txt')
False
>>> os.path.isfile('mydirectory')
False

With both functions it’s important to keep in mind that they only check whether a file exists—not whether the program actually has access to it. If verifying access is important, you should consider simply opening the file while watching for an I/O exception (IOError) to be raised.

We’ll come back to this technique in the summary at the end of the tutorial. But before we do that, let’s take a look at another option for doing file existence checks in Python.

Option #2: open() and try...except

You just saw how functions in the os.path module can be used to check for the existence of a file or a folder.

Here’s another straightforward Python algorithm for checking whether a file exists: You simply attempt to open the file with the built-in open() function, like so:

>>> open('does-not-exist.txt')
FileNotFoundError: [Errno 2] No such file or directory: 'does-not-exist.txt'

If the file exists the open call will complete successfully and return a valid file handle. If the file does not exist however, a FileNotFoundError exception will be raised:

“Raised when a file or directory is requested but doesn’t exist. Corresponds to errno ENOENT.” (Source: Python Docs)

This means you can watch for this FileNotFoundError exception type in your own code, and use it to detect whether a file exists or not. Here’s a code example that demonstrates this technique:

try:
    f = open('myfile.txt')
    f.close()
except FileNotFoundError:
    print('File does not exist')

Notice how I’m immediately calling the close() method on the file object to release the underlying file handle. This is generally considered a good practice when working with files in Python:

If you don’t close the file handle explicitly it is difficult to know when exactly it will be closed automatically by the Python runtime. This wastes system resources and can make your programs run less efficiently.

Now, the same “just attempt to open it” technique also works for ensuring a file is both readable and accessible. Instead of watching for FileNotFoundError exceptions you’ll want to look out for any kind of IOError:

try:
    f = open('myfile.txt')
    f.close()
except IOError:
    print('File is not accessible')
else:
    print('File is accessible')

If you frequently use this pattern you can factor it out into a helper function that will allow you to test whether a file exists and is accessible at the same time:

def is_accessible(path, mode='r'):
    """
    Check if the file or directory at `path` can
    be accessed by the program using `mode` open flags.
    """
    try:
        f = open(path, mode)
        f.close()
    except IOError:
        return False
    return True

Alternatively, you can use the os.access() function in the standard library to check whether a file exists and is accessible at the same time. This would be more similar to using the os.path.exists() function for checking if a file exists.
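A brief sketch of os.access() using the standard mode flags (the file names here are hypothetical):

```python
import os

# os.access() checks the path against the process's real uid/gid:
# os.F_OK tests existence, os.R_OK / os.W_OK / os.X_OK test permissions.
print(os.access('.', os.F_OK))                   # True -- current directory exists
print(os.access('.', os.R_OK))                   # True if we can read it
print(os.access('does-not-exist.txt', os.F_OK))  # False
```

Note that os.access() shares the same caveat as the other check-first approaches: the answer can become stale between the check and the actual open.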

Using open() and a try...except clause has some advantages when it comes to file handling in Python. It can help you avoid bugs caused by file existence race conditions:

Imagine a file exists in the instant you run the check, only to get removed a millisecond later. When you actually want to open the file to work with it, it’s gone and your program aborts with an error.

I’ll cover this edge case in some more detail in the summary below. But before we get down another rabbit hole—let’s take a look at one more option for checking if a file or folder exists in Python.

Option #3: pathlib.Path.exists() (Python 3.4+)

Python 3.4 and above include the pathlib module that provides an object-oriented interface for dealing with file system paths. Using this module is much nicer than treating file paths as simple string objects.

It provides abstractions and helper functions for many file system operations, including existence checks and finding out whether a path points to a file or a directory.

To check whether a path exists at all, you can use the Path.exists() method. To find out whether a path points to a file rather than a directory, you’ll want to use Path.is_file().

Here’s a working example for both pathlib.Path methods:

>>> import pathlib
>>> path = pathlib.Path('myfile.txt')
>>> path.exists()
True
>>> path.is_file()
True

As you can tell, this approach is very similar to doing an existence check with functions from the os.path module.

The key difference is that pathlib provides a cleaner object-oriented interface for working with the file system. You’re no longer dealing with plain str objects representing file paths—but instead you’re handling Path objects with relevant methods and attributes on them.

Using pathlib and taking advantage of its object-oriented interface can make your file handling code more readable and more maintainable. I’m not going to lie to you and say this is a panacea. But in some cases it can help you write “better” Python programs.

The pathlib module is also available as a backported third-party module on PyPI that works on Python 2.x and 3.x. You can find it here: pathlib2

Summary: Checking if a File Exists in Python

In this tutorial we compared three different methods for determining whether a file exists in Python. One method also allowed us to check if a file exists and is accessible at the same time.

Of course, with three implementations to choose from you might be wondering:

What’s the preferred way to check if a file exists using Python?

In most cases where you need a file existence check I’d recommend you use the built-in pathlib.Path.exists() method on Python 3.4 and above, or the os.path.exists() function on Python 2.

However, there’s one important caveat to consider:

Keep in mind that just because a file existed when the check ran doesn’t guarantee that it will still be there when you’re ready to open it:

While unlikely under normal circumstances, it’s entirely possible for a file to exist in the instant the existence check runs, only to get deleted immediately afterwards.

To avoid this type of race condition, it helps to not only rely on a “Does this file exist?” check. Instead it’s usually better to simply attempt to carry out the desired operation right away. This is also called an “easier to ask for forgiveness than permission” (EAFP) style that’s usually recommended in Python.

For example, instead of checking first if a file exists before opening it, you’ll want to simply try to open it right away and be prepared to catch a FileNotFoundError exception that tells you the file wasn’t available. This avoids the race condition.
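A minimal sketch of that EAFP approach, using a hypothetical file name and helper:

```python
# EAFP: attempt the operation and handle the failure, rather than
# checking existence first (which leaves a race-condition window open).
def read_config(path):
    try:
        with open(path) as f:   # the with-block closes the file for us
            return f.read()
    except FileNotFoundError:
        return None             # caller decides how to handle a missing file

print(read_config('does-not-exist.txt'))  # None
```

Because the open() call itself is the existence check, there is no moment where the file can disappear between checking and using it.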

So, if you plan on working with a file immediately afterwards, for example by reading its contents or by appending new data to it, I would recommend that you do the existence check via the open() method and exception handling in an EAFP style. This will help you avoid race conditions in your Python file handling code.

Curtis Miller: Let’s Create Our Own Cryptocurrency

Originally posted on cranklin.com:
I’ve been itching to build my own cryptocurrency… and I shall give it an unoriginal & narcissistic name: Cranky Coin. After giving it a lot of thought, I decided to use Python. GIL thread concurrency is sufficient. Mining might suffer, but can be replaced with a C mining module. Most…

Glyph Lefkowitz: Beyond ThunderDock


This weekend I found myself pleased to receive a Kensington SD5000T Thunderbolt 3 Docking Station.

Some of its functionality was a bit of a weird surprise.

The Setup

Due to my ... accretive history with computer purchases, I have 3 things on my desk at home: a USB-C macbook pro, a 27" Thunderbolt iMac, and an older 27" Dell display, which is old enough at this point that I can’t link it to you. Please do not take this to be some kind of totally sweet setup. It would just be somewhat pointlessly expensive to replace this jumble with something nicer. I purchased the dock because I want to have one cable to connect me to power & both displays.

For those not familiar, iMacs of a certain vintage[1] can be jury-rigged to behave as Thunderbolt displays with limited functionality (no access from the guest system to the iMac’s ethernet port, for example), using Target Display Mode, which extends their useful lifespan somewhat. (This machine is still, relatively speaking, a powerhouse, so it’s not quite dead yet; but it’s nice to be able to swap in my laptop and use the big screen.)

The Link-up

On the back of the Thunderbolt dock, there are 2 Thunderbolt 3 ports. I plugged the first one into a Thunderbolt 3 to Thunderbolt 2 adapter which connects to the back of the iMac, and the second one into the Macbook directly. The Dell display plugs into the DisplayPort; I connected my network to the Ethernet port of the dock. My mouse, keyboard, and iPhone were plugged into the USB ports on the dock.

The Problem

I set it up and at first it seemed to be delivering on the “one cable” promise of thunderbolt 3. But then I switched WiFi off to test the speed of the wired network and was surprised to see that it didn’t see the dock’s ethernet port at all. Flipping wifi back on, I looked over at my router’s control panel and noticed that a new device (with the expected manufacturer) was on my network. nmap seemed to indicate that it was... running exactly the network services I expected to see on my iMac. VNCing into the iMac to see what was going on, I popped open the Network system preference pane, and right there alongside all the other devices, was the thunderbolt dock’s ethernet device.

The Punch Line

Despite the miasma of confusion surrounding USB-C and Thunderbolt 3[2], the surprise here is that apparently Thunderbolt is Thunderbolt, and (for this device at least) Thunderbolt devices connected across the same bus can happily drive whatever they’re plugged in to. The Thunderbolt 2 to 3 adapter isn’t just a fancy way of plugging in hard drives and displays with the older connector; as far as I can tell all the functionality of the Thunderbolt interface remains intact as both “host” and “guest”. It’s like having an ethernet switch for your PCI bus.

What this meant is that when I unplugged everything and then carefully plugged in the iMac before the Macbook, it happily lit up the Dell display, and connected to all the USB devices plugged into the USB hub. When I plugged the laptop in, it happily started charging, but since it didn’t “own” the other devices, nothing else connected to it.

Conclusion

This dock works a little bit too well; when I “dock” now I have to carefully plug in the laptop first, give it a moment to grab all the devices so that it “owns” them, then plug in the iMac, then use this handy app to tell the iMac to enter Target Display mode.

On the other hand, this does also mean that I can quickly toggle between “everything is plugged in to the iMac” and “everything is plugged in to the MacBook” just by disconnecting and reconnecting a single cable, which is pretty neat.


  [1] Sadly, not the most recent fancy 5K ones.

  [2] Which are, simultaneously, both the same thing and not the same thing.

Matthew Rocklin: Scikit-Image and Dask Performance


This weekend at the SciPy 2017 sprints I worked alongside Scikit-image developers to investigate parallelizing scikit-image with Dask.

Here is a notebook of our work.

S. Lott: Yet Another Python Problem List

This was a cool thing to see in my Twitter feed:

Dan Bader (@dbader_org)
"Why Python Is Not My Favorite Language"zenhack.net/2016/12/25/why…

More Problems with Python. Here's the short list.

1. Encapsulation (Inheritance, really.)
2. With Statement
3. Decorators
4. Duck Typing (and Documentation)
5. Types

I like these kinds of posts because they surface problems that are way, way out at the fringes of Python. What's important to me is that most of the language is fine, but the syntaxes for a few things are sometimes irksome. Also important to me is that it's almost never the deeper semantics; it seems to be entirely a matter of syntax.

The really big problem is people who take the presence of a list like this as a reason to dismiss Python in its entirety because they found a few blog posts identifying specific enhancements. The idea that "Python must be bad because people are proposing improvements" is maddening. And dismayingly common.

Even in a Python-heavy workplace, there are Java and Node.js people who have opinions shaped by little lists like these. The "semantic whitespace" argument coming from JavaScript people is ludicrous, but there they are: JavaScript has a murky relationship with semi-colons and they're complaining about whitespace. Minifying isn't a virtue. It's a hack. Really.

My point in general is not to say this list is wrong. It's to say that these points are minor. In many cases, I don't disagree that these can be seen as problems. But I don't think they're toweringly important.

1. The body of the first point seems to be more about inheritance and accidentally overriding something that shouldn't have been overridden. Java (and C++) folks like to use private for this. Python lets you read the source. I vote for reading the source.

2. Yep. There are other ways to do this. Clever approach. I still prefer with statements.

3. I'm not sold on the syntax change being super helpful.

4. People write bad documentation about their duck types. Good point. People need to be more clear.

5. Agree. A lot of projects need to add type hints to make them more useful.
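Points 4 and 5 overlap in practice: a type hint can document a duck type precisely where a docstring would have stayed vague. A hypothetical sketch (my example, not from the post):

```python
from typing import Iterable

# The annotation spells out the "duck type" the function expects:
# any iterable of strings, not specifically a list.
def total_length(items: Iterable[str]) -> int:
    """Sum the lengths of the given strings."""
    return sum(len(item) for item in items)

print(total_length(['spam', 'eggs']))  # 8
```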

PyCharm: Webinar: “Teaching Python 3.6 with Games” with Paul Craven, August 2nd


Want to teach Python? Want to use Python 3 type hinting in your applications? Want to write games? In this webinar, we look at all three, together, using the Arcade library for Python and its creator, Paul Craven.

  • Wednesday, August 2nd
  • 17:00 European Time, 11AM Eastern Daylight Time
  • Register here


Paul Craven is the creator of Arcade, a 2d game library, and author of Program Arcade Games with Python and Pygame. He’s also a college professor teaching beginning programming using Python and game-writing.

He has settled on Arcade, Python 3.6 and PyCharm CE in his instruction. Arcade is written for the purpose of teaching, and thus heavily uses Python 3.6 type hinting. In this webinar, Paul will discuss:

  • Teaching programming using Python and game-writing
  • 2d games, and how Arcade was designed to help teach students
  • How Python 3.6 type hinting, combined with an IDE, helps both teaching and writing a framework

Speaking to you

Paul Vincent Craven (@professorcraven) is head of the Computer Science Department at Simpson College in Indianola, Iowa. He has a Ph.D. in computer science from the University of Idaho, and Masters from Missouri S&T. He worked for several years in IT before becoming a full-time professor. He finds that teaching students to program video games is a lot more fun than writing software to track mortgages.

Mike Driscoll: Python: All About Decorators


Decorators can be a bit mind-bending when first encountered and they can also be a bit tricky to debug. But they are a neat way to add functionality to functions and classes. Decorators are also known as a “higher-order function”. What this means is that they can take one or more functions as arguments and return a function as its result. In other words, decorators will take the function they are decorating and extend its behavior while not actually modifying what the function itself does.

There have been two decorators in Python since version 2.2, namely classmethod() and staticmethod(). Then PEP 318 was put together and the decorator syntax was added to make decorating functions and methods possible in Python 2.4. Class decorators were proposed in PEP 3129 to be included in Python 2.6. They appear to work in Python 2.7, but the PEP indicates that they weren’t accepted until Python 3, so I’m not sure what happened there.

Let’s start off by talking about functions in general to get a foundation to work from.


The Humble Function

A function in Python, as in many other programming languages, is just a collection of reusable code. Some programmers will take an almost bash-like approach and just write all their code out in a file with no functions at all; the code just runs from top to bottom. This can lead to a lot of copy-and-paste spaghetti code. Whenever you see two pieces of code that are doing the same thing, they can almost always be put into a function. This will make updating your code easier since you’ll only have one place to update it.

Here’s a basic function:

def doubler(number):
    return number * 2

This function accepts one argument, number. It multiplies it by 2 and returns the result. You can call the function like this:

>>> doubler(5)
10

As you can see, the result will be 10.


Function are Objects Too

In Python, a lot of authors will describe a function as a “first-class object”. When they say this, they mean that a function can be passed around and used as arguments to other functions just as you would with a normal data type such as an integer or string. Let’s look at a few examples so we can get used to the idea:

>>> def doubler(number):
...     return number * 2
...
>>> print(doubler)
<function doubler at 0x7f7886b92f50>
>>> print(doubler(10))
20
>>> doubler.__name__
'doubler'
>>> print(doubler.__doc__)
None
>>> def doubler(number):
...     """Doubles the number passed to it"""
...     return number * 2
...
>>> doubler.__doc__
'Doubles the number passed to it'
>>> dir(doubler)
['__call__', '__class__', '__closure__', '__code__', '__defaults__', '__delattr__', '__dict__', '__doc__', '__format__', '__get__', '__getattribute__', '__globals__', '__hash__', '__init__', '__module__', '__name__', '__new__', '__reduce__', '__reduce_ex__', '__repr__', '__setattr__', '__sizeof__', '__str__', '__subclasshook__', 'func_closure', 'func_code', 'func_defaults', 'func_dict', 'func_doc', 'func_globals', 'func_name']

As you can see, you can create a function and then pass it to Python's print() function or to any other function. You will also note that once a function is defined, it automatically has attributes that we can access. For example, in the example above, we accessed __doc__, which was empty at first. This attribute holds the contents of the function's docstring; since we didn't have a docstring, it returned None. So we redefined the function to add a docstring and accessed __doc__ again to see it. We can also get the function's name via the __name__ attribute. Feel free to check out some of the other attributes shown in the dir() output above.
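Since the examples above only print the function object, here is a short, self-contained sketch of actually passing a function as an argument to another function (the apply_twice() helper is my own illustration, not from the article):

```python
def doubler(number):
    """Doubles the number passed to it"""
    return number * 2


def apply_twice(func, value):
    # func is an ordinary object here; we can call it like any other name
    return func(func(value))


# doubler is passed around just like an integer or string would be
print(apply_twice(doubler, 5))  # doubler(doubler(5)) -> 20
```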


Our First Decorator

Creating a decorator is actually quite easy. As mentioned earlier, all you need to do to create a decorator is to create a function that accepts another function as its argument. Let’s take a look:

>>> def doubler(number):
...     """Doubles the number passed to it"""
...     return number * 2
>>> def info(func):
...     def wrapper(*args):
...         print('Function name: ' + func.__name__)
...         print('Function docstring: ' + str(func.__doc__))
...         return func(*args)
...     return wrapper
>>> my_decorator = info(doubler)
>>> print(my_decorator(2))
Function name: doubler
Function docstring: Doubles the number passed to it
4

You will note that our decorator function, info(), has a function nested inside of it called wrapper(). You can call the nested function whatever you like. The wrapper function accepts the arguments (and optionally the keyword arguments) of the function you are wrapping with your decorator. In this example, we print out the wrapped function's name and docstring, if it exists. Then we call the wrapped function with its arguments and return the result. Lastly, the decorator returns the wrapper function.

To use the decorator, we create a decorator object:

>>> my_decorator = info(doubler)

Then to call the decorator, we call it just like we would a normal function: my_decorator(2).

However this is not the usual method of calling a decorator. Python has a special syntax just for that!


Using Decorator Syntax

Python allows you to call a decorator by using the following syntax: @info. Let’s update our previous example to use proper decorator syntax:

def info(func):
    def wrapper(*args):
        print('Function name: ' + func.__name__)
        print('Function docstring: ' + str(func.__doc__))
        return func(*args)
    return wrapper


@info
def doubler(number):
    """Doubles the number passed to it"""
    return number * 2


print(doubler(4))

Now you can call doubler() itself instead of calling the decorator object. The @info line above the function definition tells Python to automatically wrap (or decorate) doubler with info(), so that calling doubler() calls the wrapped version.
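To make that equivalence explicit, here is a small self-contained sketch of what the @ syntax does behind the scenes: it is shorthand for reassigning the function's name to the decorated version.

```python
def info(func):
    def wrapper(*args):
        print('Function name: ' + func.__name__)
        return func(*args)
    return wrapper


def doubler(number):
    return number * 2


# Writing @info above doubler is equivalent to this manual reassignment:
doubler = info(doubler)

print(doubler(4))  # prints the function name, then 8
```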


Stacked Decorators

You can also stack or chain decorators. What this means is that you can use more than one decorator on a function at the same time! Let’s take a look at a silly example:

def bold(func):
    def wrapper():
        return "<b>" + func() + "</b>"
    return wrapper


def italic(func):
    def wrapper():
        return "<i>" + func() + "</i>"
    return wrapper


@bold
@italic
def formatted_text():
    return 'Python rocks!'


print(formatted_text())

The bold() decorator will wrap the text with your standard bold HTML tags, while the italic() decorator does the same thing but with italic HTML tags. You should try reversing the order of the decorators to see what kind of effect it has. Give it a try before continuing.

Now that you've done that, you will have noticed that Python runs the decorator closest to the function first and then works up the chain. So in the version of the code above, the text will get wrapped in italics first, and then that result will get wrapped in bold tags. If you swap them, the reverse will occur.
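If you want to check your answer, the following self-contained sketch composes the two decorators by hand; the @bold / @italic stack is equivalent to the last assignment here:

```python
def bold(func):
    def wrapper():
        return "<b>" + func() + "</b>"
    return wrapper


def italic(func):
    def wrapper():
        return "<i>" + func() + "</i>"
    return wrapper


def formatted_text():
    return 'Python rocks!'


# Stacking @bold over @italic is the same as composing the calls,
# innermost decorator applied first:
decorated = bold(italic(formatted_text))
print(decorated())  # <b><i>Python rocks!</i></b>
```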


Adding Arguments to Decorators

Adding arguments to decorators is a bit different than you might think. You can't just do something like @my_decorator(3, 'Python'), as the decorator expects to take the function itself as its argument... or can you?

def info(arg1, arg2):
    print('Decorator arg1 = ' + str(arg1))
    print('Decorator arg2 = ' + str(arg2))

    def the_real_decorator(function):

        def wrapper(*args, **kwargs):
            print('Function {} args: {} kwargs: {}'.format(
                function.__name__, str(args), str(kwargs)))
            return function(*args, **kwargs)

        return wrapper

    return the_real_decorator


@info(3, 'Python')
def doubler(number):
    return number * 2


print(doubler(5))

As you can see, we have a function nested in a function nested in a function! How does this work? The function argument doesn’t even seem to be defined anywhere. Let’s remove the decorator and do what we did before when we created the decorator object:

def info(arg1, arg2):
    print('Decorator arg1 = ' + str(arg1))
    print('Decorator arg2 = ' + str(arg2))

    def the_real_decorator(function):

        def wrapper(*args, **kwargs):
            print('Function {} args: {} kwargs: {}'.format(
                function.__name__, str(args), str(kwargs)))
            return function(*args, **kwargs)

        return wrapper

    return the_real_decorator


def doubler(number):
    return number * 2


decorator = info(3, 'Python')(doubler)
print(decorator(5))

This code is the equivalent of the previous code. When you call info(3, 'Python'), it returns the actual decorator function, which we then call by passing it the function, doubler. This gives us the decorated function itself, which we can then call with the original function's arguments. We can break this down further though:

def info(arg1, arg2):
    print('Decorator arg1 = ' + str(arg1))
    print('Decorator arg2 = ' + str(arg2))

    def the_real_decorator(function):

        def wrapper(*args, **kwargs):
            print('Function {} args: {} kwargs: {}'.format(
                function.__name__, str(args), str(kwargs)))
            return function(*args, **kwargs)

        return wrapper

    return the_real_decorator


def doubler(number):
    return number * 2


decorator_function = info(3, 'Python')
print(decorator_function)

actual_decorator = decorator_function(doubler)
print(actual_decorator)

# Call the decorated function
print(actual_decorator(5))

Here we call info() first, which returns its nested function, the_real_decorator(). That is the function to which we pass the function being decorated, doubler. The result is the decorated function itself, so the last line calls the decorated function with its argument.

I also found a neat trick you can do with Python’s functools module that will make creating decorators with arguments a bit shorter:

from functools import partial


def info(func, arg1, arg2):
    print('Decorator arg1 = ' + str(arg1))
    print('Decorator arg2 = ' + str(arg2))

    def wrapper(*args, **kwargs):
        print('Function {} args: {} kwargs: {}'.format(
            func.__name__, str(args), str(kwargs)))
        return func(*args, **kwargs)

    return wrapper


decorator_with_arguments = partial(info, arg1=3, arg2='Py')


@decorator_with_arguments
def doubler(number):
    return number * 2


print(doubler(5))

In this case, you can create a partial function that takes the arguments you are going to pass to your decorator for you. This allows you to pass the function to be decorated AND the arguments to the decorator to the same function. This is actually quite similar to how you can use functools.partial for passing extra arguments to event handlers in wxPython or Tkinter.
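As a quick, GUI-free illustration of that last idea (the on_event() callback below is hypothetical, not a real wxPython or Tkinter API), functools.partial can pre-fill the extra arguments of any callback:

```python
from functools import partial


def on_event(event_name, message):
    # a plain callback; a GUI toolkit would supply only the event argument
    return '{}: {}'.format(event_name, message)


# pre-fill the extra argument, leaving a one-argument callback behind
handler = partial(on_event, message='clicked')
print(handler('button1'))  # button1: clicked
```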


Class Decorators

When you look up the term “class decorator”, you will find a mix of articles. Some talk about creating decorators using a class. Others talk about decorating a class with a function. Let’s start with creating a class that we can use as a decorator:

class decorator_with_arguments:

    def __init__(self, arg1, arg2):
        print('in __init__')
        self.arg1 = arg1
        self.arg2 = arg2
        print('Decorator args: {}, {}'.format(arg1, arg2))

    def __call__(self, f):
        print('in __call__')
        def wrapped(*args, **kwargs):
            print('in wrapped()')
            return f(*args, **kwargs)
        return wrapped


@decorator_with_arguments(3, 'Python')
def doubler(number):
    return number * 2


print(doubler(5))

Here we have a simple class that accepts two arguments. We override the __call__() method, which allows us to pass the function we are decorating to the class. Then in our __call__() method, we just print out where we are in the code and return the wrapped function. This works in much the same way as the examples in the previous section. I personally like this method because we don't have functions nested two levels deep inside another function, although some could argue that the partial example also fixed that issue.

Anyway the other use case that you will commonly find for a class decorator is a type of meta-programming. So let’s say we have the following class:

class MyActualClass:
    def __init__(self):
        print('in MyActualClass __init__()')

    def quad(self, value):
        return value * 4


obj = MyActualClass()
print(obj.quad(4))

That’s pretty simple, right? Now let’s say we want to add special functionality to our class without modifying what it already does. For example, this might be code that we can’t change for backwards-compatibility reasons or some other business requirement. Instead, we can decorate it to extend its functionality. Here’s how we can add a new method, for example:

def decorator(cls):
    class Wrapper(cls):
        def doubler(self, value):
            return value * 2
    return Wrapper


@decorator
class MyActualClass:
    def __init__(self):
        print('in MyActualClass __init__()')

    def quad(self, value):
        return value * 4


obj = MyActualClass()
print(obj.quad(4))
print(obj.doubler(5))

Here we created a decorator function that has a class inside of it. This class uses the class that is passed to it as its parent; in other words, we are creating a subclass. This allows us to add new methods. In this case, we add our doubler() method. Now when you create an instance of the decorated MyActualClass, you will actually end up with the Wrapper subclass version. You can see this if you print the obj variable.


Wrapping Up

Python has a lot of decorator functionality built into the language itself. There are @property, @classmethod, and @staticmethod, which you can use directly. Then there are the functools and contextlib modules, which provide a lot of handy decorators. For example, you can fix decorator obfuscation using functools.wraps or turn a generator function into a context manager via contextlib.contextmanager.
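For example, here is a short sketch of functools.wraps in action; without the @wraps line, the decorated function would report the wrapper's name and a missing docstring instead of its own:

```python
from functools import wraps


def info(func):
    @wraps(func)  # copies __name__, __doc__, etc. from func onto wrapper
    def wrapper(*args, **kwargs):
        return func(*args, **kwargs)
    return wrapper


@info
def doubler(number):
    """Doubles the number passed to it"""
    return number * 2


print(doubler.__name__)  # doubler, not wrapper
print(doubler.__doc__)   # Doubles the number passed to it
```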

A lot of developers use decorators to enhance their code by creating logging decorators, catching exceptions, adding security and so much more. They are worth the time to learn as they can make your code more extensible and even more readable. Decorators also promote code reuse. Give them a try sometime soon!
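As one sketch of the logging use case (the log_calls() decorator below is my own illustration, not from any particular library), a decorator can record every call without touching the decorated function:

```python
import functools
import logging

logging.basicConfig(level=logging.INFO)


def log_calls(func):
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        # log the call and its result, then pass the result through
        logging.info('calling %s with %r %r', func.__name__, args, kwargs)
        result = func(*args, **kwargs)
        logging.info('%s returned %r', func.__name__, result)
        return result
    return wrapper


@log_calls
def add(a, b):
    return a + b


print(add(2, 3))  # logs the call, then prints 5
```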


Related Reading


Python 201: Decorating the main function

Anwesha Das: My Free Software journey, from today to yesteryear and back!


My life can be divided into two major parts.
Life before knowing “Libre Office” and after.
No, I won’t go on about how good the software is (which it is; fairly good).

I was fascinated by the name, the meaning.
Libre, means “free, at liberty.”
The name suggested a philosophy, a vision, an idea to me.

Libre != Gratis

I was introduced to the Free Software world briefly a while back, at FOSS.in 2012, a conference that altered the way I look at my soul-mate forever.

But the concept of open source, free software, and the industry around them did not quite settle clearly in my mind. The past leads us to the present, so I became more interested in its history: the history of the movement to liberate cyberspace. I asked Kushal, who happens to be a free software advocate and a devout follower, to tell me the story and guide me through the path.

We embarked on a wonderfully meandering journey where I was visiting older, simpler times and Kushal was revisiting the movement with me. The era of sharing, the beginning of the complexities and the fight against it.

Our primary source of knowledge was the book, Hackers: Heroes of the Computer Revolution, apart from several old articles, newsletters etc.

I was astonished to know that there exists software that respects freedom; the freedom of end users.
There exists software that believes in cooperation and collective building.
By strict definition, the first one is known as free software and the second one is open source software.
To my utmost surprise, there are people sacrificing their material wants and easy fame to help others, and to preach freedom and openness.

How did it help me?

It made me understand the present state of the world, a lot better.
I now see the words community and collaboration in a different light altogether.
I now understand why people are so passionate about words, phrases or things.
Why these terms make all the difference.

Most importantly, it gave me a refuge. I may not be someone with a coding background, but I can still very much be a part of this community. Coding may be a big part of the technical community, but it is not the only part. It gave me the power to ignore the cold shoulder and the occasional jibe (and laugh at their incomprehension about history) whenever someone treated me like an outsider. It gave me the justification and confidence to quit a fairly established career as a mortgage attorney and start afresh at 30.

Community, collaboration, and freedom have been some of my most used words (apart from “No Py”!) since then.

And now I know why.
And it’s time for me to spread the word.

Coming to the present

July 2017: a tweet and its reply made us relive that journey. It was the discussion around the recent travel ban. A tweet came from @gnome about it, totally appropriate given the primary idea of GNOME. But many could not figure out why such statements were coming from @gnome.
I was taken aback. We were sad, surprised and somewhat angry. People were enjoying the fruits of so many people’s hard work and effort and were yet ignorant of the history and the toil behind it all.
Among many other replies, there was a reply from Miguel de Icaza that seemed to echo our (Kushal’s and mine) thoughts.

We understand why he said that. We thought it’s time for us too, to do our bit, to spread the word.
And so, we decided to do two things -

Firstly, to tell the newer, younger lot about it.
For this, Kushal and I have taken classes at the DGPLUG Summer Training session on:
- the history of hacker ethics
- licensing and how the choices we make affect us, the society, and the world at large
A similar session will follow in the PyLadies Pune August meetup.

Secondly, to write an article on the Hacker Ethic and the Free Software movement, together.
We published it on 12th July, 2017. Here is the link. Please read it (though it is a bit lengthy) and be a part of our journey.

Enthought: Webinar: Python for Scientists & Engineers: A Tour of Enthought’s Professional Training Course


When: Thursday, July 27, 2017, 11-11:45 AM CT (Live webcast, all registrants will receive a recording)

What:  A guided walkthrough and Q&A about Enthought’s technical training course Python for Scientists & Engineers with Enthought’s VP of Training Solutions, Dr. Michael Connell

Who Should Attend: individuals, team leaders, and learning & development coordinators who are looking to better understand the options to increase professional capabilities in Python for scientific and engineering applications

REGISTER  (if you can’t attend we’ll send all registrants a recording)


“Writing software is not my job…I just have to do it every day.”  
-21st Century Scientist or Engineer

Many scientists, engineers, and analysts today find themselves writing a lot of software in their day-to-day work even though that’s not their primary job and they were never formally trained for it. Of course, there is a lot more to writing software for scientific and analytic computing than just knowing which keyword to use and where to put the semicolon.

Software for science, engineering, and analysis has to solve the technical problem it was created to solve, of course, but it also has to be efficient, readable, maintainable, extensible, and usable by other people — including the original author six months later!

It has to be designed to prevent bugs and — because all reasonably complex software contains bugs — it should be designed so as to make the inevitable bugs quickly apparent, easy to diagnose, and easy to fix. In addition, such software often has to interface with legacy code libraries written in other languages like C or C++, and it may benefit from a graphical user interface to substantially streamline repeatable workflows and make the tools available to colleagues and other stakeholders who may not be comfortable working directly with the code for whatever reason.

Enthought’s Python for Scientists and Engineers is designed to accelerate the development of skill and confidence in addressing these kinds of technical challenges using some of Python’s core capabilities and tools, including:

  • The standard Python language
  • Core tools for science, engineering, and analysis, including NumPy (the fast array programming package), Matplotlib (for data visualization), and Pandas (for data analysis); and
  • Tools for crafting well-organized and robust code, debugging, profiling performance, interfacing with other languages like C and C++, and adding graphical user interfaces (GUIs) to your applications.

In this webinar, we give you the key information and insight you need to evaluate whether Enthought’s Python for Scientists and Engineers course is the right solution to take your technical skills to the next level, including:

  • Who will benefit most from the course
  • A guided tour through the course topics
  • What skills you’ll take away from the course and how the instructional design supports that
  • What the experience is like, and why it is different from other training alternatives (with a sneak peek at actual course materials)
  • What previous course attendees say about the course

REGISTER


Presenter: Dr. Michael Connell, VP, Enthought Training Solutions

Ed.D, Education, Harvard University
M.S., Electrical Engineering and Computer Science, MIT


Python for Scientists & Engineers Training: The Quick Start Approach to Turbocharging Your Work

If you are tired of running repeatable processes manually and want to (semi-) automate them to increase your throughput and decrease pilot error, or you want to spend less time debugging code and more time writing clean code in the first place, or you are simply tired of using a multitude of tools and languages for different parts of a task and want to replace them with one comprehensive language, then Enthought’s Python for Scientists and Engineers is definitely for you!

This class has been particularly appealing to people who have been using other tools like MATLAB or even Excel for their computational work and want to start applying their skills using the Python toolset.  And it’s no wonder — Python has been identified as the most popular coding language for five years in a row for good reason.

One reason for its broad popularity is its efficiency and ease-of-use. Many people consider Python more fun to work in than other languages (and we agree!). Another reason for its popularity among scientists, engineers, and analysts in particular is Python’s support for rapid application development and extensive (and growing) open source library of powerful tools for preparing, visualizing, analyzing, and modeling data as well as simulation.

Python is also an extraordinarily comprehensive toolset – it supports everything from interactive analysis to automation to software engineering to web app development within a single language and plays very well with other languages like C/C++ or FORTRAN so you can continue leveraging your existing code libraries written in those other languages.

Many organizations are moving to Python so they can consolidate all of their technical work streams under a single comprehensive toolset. In the first part of this class we’ll give you the fundamentals you need to switch from another language to Python and then we cover the core tools that will enable you to do in Python what you were doing with other tools, only faster and better!

Additional Resources

Upcoming Open Python for Scientists & Engineers Sessions:

Los Alamos, NM, Aug 14-18, 2017
Albuquerque, NM, Sept 11-15, 2017
Washington, DC, Sept 25-29, 2017
Los Alamos, NM, Oct 9-13, 2017
Cambridge, UK, Oct 16-20, 2017
San Diego, CA, Oct 30-Nov 3, 2017
Albuquerque, NM, Nov 13-17, 2017
Los Alamos, NM, Dec 4-8, 2017
Austin, TX, Dec 11-15, 2017

Have a group interested in training? We specialize in group and corporate training. Contact us or call 512.536.1057.

Learn More

Download Enthought’s Machine Learning with Python’s Scikit-Learn Cheat Sheets
Additional Webinars in the Training Series:

Python for Data Science: A Tour of Enthought’s Professional Technical Training Course

Python for Professionals: The Complete Guide to Enthought’s Technical Training Courses

An Exclusive Peek “Under the Hood” of Enthought Training and the Pandas Mastery Workshop

Download Enthought’s Pandas Cheat Sheets

The post Webinar: Python for Scientists & Engineers: A Tour of Enthought’s Professional Training Course appeared first on Enthought Blog.


Python Bytes: #35 How developers change programming languages over time

<p><strong>Brian #1:</strong> <a href="https://medium.com/@PhilipTrauner/python-quirks-comments-324bbf88612c"><strong>Python Quirks</strong></a> <a href="https://medium.com/@PhilipTrauner/python-quirks-comments-324bbf88612c"><strong>: Comments</strong></a></p> <ul> <li>Python developers put comments in their code.</li> </ul> <pre><code># Like this
""" And like this """
"And like this."
["Not usually like this", "but it's possible"]
</code></pre> <ul> <li>Philip Trauner timed all of these.</li> <li>Actual # comments are obviously way faster.</li> <li>He also shows the AST difference.</li> <li>Don’t abuse the language. Unused unreferenced strings are not free.</li> </ul> <p><strong>Michael #2:</strong> <a href="https://docs.python.org/3.6/whatsnew/changelog.html#python-3-6-2"><strong>Python 3.6.2 is out!</strong></a></p> <ul> <li><strong>Security</strong> <ul> <li>bpo-30730: Prevent environment variables injection in subprocess on Windows. Prevent passing other environment variables and command arguments.</li> <li>bpo-30694: Upgrade expat copy from 2.2.0 to 2.2.1 to get fixes of multiple security vulnerabilities including: CVE-2017-9233 (External entity infinite loop DoS), CVE-2016-9063 (Integer overflow, re-fix), CVE-2016-0718 (Fix regression bugs from 2.2.0’s fix to CVE-2016-0718) and CVE-2012-0876 (Counter hash flooding with SipHash). Note: the CVE-2016-5300 (Use os-specific entropy sources like getrandom) doesn’t impact Python, since Python already gets entropy from the OS to set the expat secret using XML_SetHashSalt().</li> <li>bpo-30500: Fix urllib.parse.splithost() to correctly parse fragments. 
For example, splithost('//127.0.0.1#@evil.com/') now correctly returns the 127.0.0.1 host, instead of treating @evil.com as the host in an authentification (login@host).</li> </ul></li> <li><strong>Core and Builtins</strong> <ul> <li>bpo-29104: Fixed parsing backslashes in f-strings.</li> <li>bpo-27945: Fixed various segfaults with dict when input collections are mutated during searching, inserting or comparing. Based on patches by Duane Griffin and Tim Mitchell.</li> <li>bpo-30039: If a KeyboardInterrupt happens when the interpreter is in the middle of resuming a chain of nested ‘yield from’ or ‘await’ calls, it’s now correctly delivered to the innermost frame.</li> <li>Library</li> <li>bpo-30038: Fix race condition between signal delivery and wakeup file descriptor. Patch by Nathaniel Smith.</li> <li>bpo-23894: lib2to3 now recognizes rb'...' and f'...' strings.</li> <li>bpo-24484: Avoid race condition in multiprocessing cleanup (#2159)</li> </ul></li> <li><strong>Windows</strong> <ul> <li>bpo-30687: Locate msbuild.exe on Windows when building rather than vcvarsall.bat</li> <li>bpo-30450: The build process on Windows no longer depends on Subversion, instead pulling external code from GitHub via a Python script. 
If Python 3.6 is not found on the system (via py -3.6), NuGet is used to download a copy of 32-bit Python.</li> </ul></li> <li><strong>Plus about 40 more fixes / changes</strong></li> </ul> <p><strong>Brian #3:</strong> <a href="https://github.com/adriennefriend/imposter-syndrome-disclaimer"><strong>Contributing to Open Source Projects: Imposter Syndrome Disclaimer</strong></a></p> <ul> <li>“How to contribute” often part of OSS projects.</li> <li>Adrienne Lowe of codingwithknives.com has an “Imposter Syndrome Disclaimer” to include in your contributing documentation that’s pretty great.</li> <li>She’s also <a href="https://github.com/adriennefriend/imposter-syndrome-disclaimer/blob/master/examples.md">collecting examples</a> of people using it, or similar.</li> <li>From the disclaimer: </li> </ul> <blockquote> <p>“<em>Imposter syndrome disclaimer</em>: I want your help. No really, I do. There might be a little voice inside that tells you you're not ready; that you need to do one more tutorial, or learn another framework, or write a few more blog posts before you can help me with this project. I assure you, that's not the case. … And you don't just have to write code. You can help out by writing documentation, tests, or even by giving feedback about this work. (And yes, that includes giving feedback about the contribution guidelines.)“</p> </blockquote> <p><strong>Michael #4:</strong> <a href="https://www.technologyreview.com/s/604087/the-dark-secret-at-the-heart-of-ai/"><strong>The Dark Secret at the Heart of AI</strong></a></p> <ul> <li>via MIT Technology Review</li> <li>There’s a big problem with AI: even its creators can’t explain how it works</li> <li>Last year, an experimental vehicle, developed by researchers at the chip maker Nvidia, didn’t look different from other autonomous cars, but it was unlike anything demonstrated by Google, Tesla, or General Motors, and it showed the rising power of artificial intelligence. 
The car didn’t follow a single instruction provided by an engineer or programmer. Instead, it relied entirely on an algorithm that had taught itself to drive by watching a human do it.</li> <li>The result seems to match the responses you’d expect from a human driver. But what if one day it did something unexpected—crashed into a tree, or sat at a green light? </li> <li>As things stand now, it might be difficult to find out why.</li> <li>And you can’t ask it: there is no obvious way to design such a system so that it could always explain why it did what it did.</li> <li>There’s already an argument that being able to interrogate an AI system about how it reached its conclusions is a fundamental legal right</li> <li>We’ve never before built machines that operate in ways their creators don’t understand. How well can we expect to communicate—and get along with—intelligent machines that could be unpredictable and inscrutable</li> </ul> <p><strong>Brian #5:</strong> <a href="http://jamescooke.info/arrange-act-assert-pattern-for-python-developers.html"><strong>Arrange Act Assert pattern for Python developers</strong></a></p> <ul> <li>James Cooke</li> <li>Good introduction to test case structure.</li> <li>Split your tests into setup, action, assertions.</li> <li>Pattern also known by: <ul> <li>Given, When, Then</li> <li>Setup, Test, Teardown</li> <li>Setup, Exercise, Verify, Teardown</li> </ul></li> <li>Also covered in: <ul> <li><a href="http://testandcode.com/10">testandcode.com/10</a></li> <li><a href="http://pythontesting.net/strategy/given-when-then-2/">pythontesting.net/strategy/given-when-then-2</a></li> </ul></li> </ul> <p><strong>Michael #6:</strong> <a href="https://blog.sourced.tech/post/language_migrations/"><strong>Analyzing GitHub, how developers change programming languages over time</strong></a></p> <ul> <li>From source{d}: Building the first AI that understands code</li> <li>Have you ever been struggling with an nth obscure project, thinking : “I could do 
the job with this language but why not switch to another one which would be more enjoyable to work with” ?</li> <li>Derived from <a href="https://erikbern.com/2017/03/15/the-eigenvector-of-why-we-moved-from-language-x-to-language-y.html"><strong>The eigenvector of “Why we moved from language X to language Y”</strong></a><strong>,</strong> <a href="https://github.com/erikbern/eigenstuff"><strong>Erik Bernhardsson</strong></a> <em>*</em>*</li> <li>Dataset available <ul> <li>4.5 Million GitHub users</li> <li>393 different languages</li> <li>10 TB of source code in total</li> </ul></li> <li>I find it nice to visualize developer’s language usage history with a kind of <a href="https://en.wikipedia.org/wiki/Gantt_chart"><strong>Gantt diagram</strong></a>.</li> <li>We did not include Javascript because</li> <li>Most popular languages on GitHub</li> <li>At last! Here is the reward: the stationary distribution of our Markov chain. This probability distribution is independent of the initial distribution. It gives information about the stability of the process of random switching between languages. </li> <li><table> <thead> <tr> <th>Rank</th> <th>Language</th> <th>Popularity, %</th> <th>Source code, %</th> </tr> </thead> <tbody> <tr> <td>1.</td> <td>Python</td> <td>16.1</td> <td>11.3</td> </tr> <tr> <td>2.</td> <td>Java</td> <td>15.3</td> <td>16.6</td> </tr> <tr> <td>3.</td> <td>C</td> <td>9.2</td> <td>17.2</td> </tr> <tr> <td>4.</td> <td>C++</td> <td>9.1</td> <td>12.6</td> </tr> <tr> <td>5.</td> <td>PHP</td> <td>8.5</td> <td>24.4</td> </tr> <tr> <td>6.</td> <td>Ruby</td> <td>8.3</td> <td>2.6</td> </tr> <tr> <td>7.</td> <td>C#</td> <td>6.1</td> <td>6.5</td> </tr> </tbody> </table></li> <li><p>Python (16.1 %) appears to be the most attractive language, followed closely by Java (15.3 %). 
It’s especially interesting since only 11.3 % of all source code on GitHub is written in Python.</p></li> <li>Although there are ten times more lines of code on GitHub in PHP than in Ruby, they have the same stationary distribution.</li> <li>What about sticking to a language ? <ul> <li>Developers coding in one of the 5 most popular languages (Java, C, C++, PHP, Ruby) are most likely to switch to Python with approx. 22% chance on average.</li> <li>Similarly, a Visual Basic developer has more chance (24%) to move to C# while Erik’s is almost sure in this transition with 92% chance.</li> <li>People using numerical and statistical environments such as Fortran (36 %), Matlab (33 %) or R (40 %) are most likely to switch to Python in contrast to Erik’s matrix which predicts C as their future language.</li> </ul></li> </ul>

Talk Python to Me: #121 Microservices in Python

Do you have big, monolithic web applications or services that are hard to manage, hard to change, and hard to scale? Maybe breaking them into microservices would give you many more options to evolve and grow that app. <br/> <br/> This week, we meet up again with Miguel Grinberg to discuss the trade-offs and advantages of microservices.<br/> <br/> Links from the show:<br/> <br/> <div style="font-size: .85em;"><b>Miguel on Twitter</b>: <a href="https://twitter.com/miguelgrinberg" target="_blank">@miguelgrinberg</a><br/> <b>Miguel's blog</b>: <a href="http://blog.miguelgrinberg.com" target="_blank">blog.miguelgrinberg.com</a><br/> <b>Microservices Tutorial at PyCon</b>: <a href="https://www.youtube.com/watch?v=nrzLdMWTRMM" target="_blank">youtube.com/watch?v=nrzLdMWTRMM</a><br/> <b>Flask Web Development (Amazon)</b>: <a href="https://amzn.to/1oVnibk" target="_blank">amzn.to/1oVnibk</a><br/> <b>Flask Web Development (O'Reilly)</b>: <a href="http://shop.oreilly.com/product/0636920031116.do?cmp=af-webplatform-books-videos-product_cj_9781449372620_%25zp" target="_blank">oreilly.com</a><br/></div>

Possibility and Probability: pip and private repositories: vendoring python

Dataquest: Python Cheat Sheet for Data Science


The printable version of this cheat sheet

It’s common when first learning Python for Data Science to have trouble remembering all the syntax that you need. While at Dataquest we advocate getting used to consulting the Python documentation, sometimes it’s nice to have a handy reference, so we’ve put together this cheat sheet to help you out!

If you’re interested in learning Python, we have a free Python Programming: Beginner course which can start you on your data science journey.


Key Basics, Printing and Getting Help

x = 3          Assign 3 to the variable x
print(x)       Print the value of x
type(x)        Return the type of the variable x (in this case, int for integer)
help(x)        Show documentation for the variable x's type (here, the int data type)
help(print)    Show documentation for the print() function

Reading Files

f...

PythonClub - A Brazilian collaborative blog about Python: Peewee - A minimalist Python ORM


Peewee is an ORM designed to create and manage relational database tables through Python objects. According to Wikipedia, an ORM is:

Object-relational mapping (ORM) is a development technique used to reduce the impedance mismatch of object-oriented programming when working with relational databases. Database tables are represented as classes, and the records of each table are represented as instances of the corresponding classes.

What the ORM does, basically, is turn Python classes into database tables, and also let you build queries directly with Python objects instead of SQL.

Peewee is aimed at small and medium-sized projects, standing out for its simplicity compared to better-known ORMs such as SQLAlchemy. An analogy used by the library's author, which I find very apt, is that Peewee is to SQLAlchemy what SQLite is to PostgreSQL.

As for the features it offers, Peewee has native support for SQLite, PostgreSQL and MySQL (although drivers must be installed to use it with PostgreSQL and MySQL), and it supports both Python 2.6+ and Python 3.4+.
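For example, pointing Peewee at a PostgreSQL server instead of SQLite only changes how the database object is constructed. This is a hedged sketch: the database name, credentials, and host below are placeholders, and the psycopg2 driver must be installed first.

```python
import peewee

# Placeholder connection settings -- adjust for your own server
db = peewee.PostgresqlDatabase(
    'my_database',       # database name (hypothetical)
    user='my_user',
    password='secret',
    host='localhost',
    port=5432,
)
```

The rest of the tutorial works the same way: models reference `db` through their `Meta.database` attribute, so swapping the backend does not change the model code.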

In this tutorial we will use SQLite, because it is simple to use and needs no configuration.

Installation

Peewee can be easily installed with the pip package manager:

pip install peewee

Creating the database

Creating the tables is quite simple. First, we pass in the name of our database (the *.db extension indicates a SQLite file).

import peewee

db = peewee.SqliteDatabase('codigo_avulso.db')

Unlike other databases that run behind a server, SQLite creates a *.db file where all our data is stored.

TIP: if you want to see the tables inside the codigo_avulso.db file, install the SQLiteBrowser application. It makes it easy to inspect the created tables and follow along with the tutorial.

 sudo apt-get install sqlitebrowser

As an example, we will create a database to store book titles and their respective authors. Let's start with the class that represents authors.

import peewee

db = peewee.SqliteDatabase('codigo_avulso.db')


class Author(peewee.Model):
    """
    Class representing the Author table
    """
    # The table has a single field, 'name',
    # which stores the author's name
    name = peewee.CharField()

    class Meta:
        # Indicates in which database the 'author' table
        # will be created (required). Here we use the
        # 'codigo_avulso.db' database created earlier.
        database = db

Next, we create the class that represents books. It has a many-to-one relationship with the authors table: each book has a single author, but an author can have several books.

import peewee

db = peewee.SqliteDatabase('codigo_avulso.db')


class Book(peewee.Model):
    """
    Class representing the Book table
    """
    # The table has a 'title' field,
    # which stores the book's title
    title = peewee.CharField()
    # Foreign key to the Author table
    author = peewee.ForeignKeyField(Author)

    class Meta:
        # Indicates in which database the 'book' table
        # will be created (required). Here we use the
        # 'codigo_avulso.db' database created earlier.
        database = db

Now let's gather everything into a single model.py file. As an example, I created a main.py file that uses the classes we just defined.

import peewee

from model import Author, Book

if __name__ == '__main__':
    try:
        Author.create_table()
    except peewee.OperationalError:
        print('Author table already exists!')

    try:
        Book.create_table()
    except peewee.OperationalError:
        print('Book table already exists!')

After running the code, a file named codigo_avulso.db is created in the same directory as our main.py file, containing the Author and Book tables. The directory structure now looks like this:

.
├── codigo_avulso.db
├── main.py
└── model.py

Inserting data into the database

Now let's populate our database with a few authors and their books. This can be done in two ways: with the create method, when we want to insert a single record, or with the insert_many method, when we want to insert several records into the same table at once.

# Insert an author named "H. G. Wells" into the 'Author' table
author_1 = Author.create(name='H. G. Wells')

book_1 = {
    'title': 'A Máquina do Tempo',
    'author': author_1,
}

book_2 = {
    'title': 'Guerra dos Mundos',
    'author': author_1,
}

# Insert an author named "Julio Verne" into the 'Author' table
author_2 = Author.create(name='Julio Verne')

book_3 = {
    'title': 'Volta ao Mundo em 80 Dias',
    'author': author_2,
}

# Note: intentionally attributed to the wrong author;
# we will correct this in the update section below.
book_4 = {
    'title': 'Vinte Mil Leguas Submarinas',
    'author': author_1,
}

books = [book_1, book_2, book_3, book_4]

# Insert the four books into the 'Book' table
Book.insert_many(books).execute()

Querying the database

Peewee provides commands for querying the database, similar to the familiar SELECT. We can query in two ways. If we want the first record matching our search, we can use the get() method.

book = Book.get(Book.title == "Volta ao Mundo em 80 Dias")
book.title

However, if we want more than one record, we use the select method. For example, to query all books written by "H. G. Wells":

books = Book.select().join(Author).where(Author.name == 'H. G. Wells')

# Print how many records match our query
print(books.count())

for book in books:
    print(book.title)

# Result:
# * A Máquina do Tempo
# * Guerra dos Mundos
# * Vinte Mil Leguas Submarinas

We can also use other SQL commands such as limit and group (for more details, see the documentation here).
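For intuition, the select/join/where query above corresponds to plain SQL roughly like the following, shown here with the standard-library sqlite3 module against a throwaway in-memory database. Table and column names mirror the tutorial's models; the SQL Peewee actually emits may differ in detail.

```python
# Roughly the SQL that Peewee's select().join().where() generates,
# executed with the stdlib sqlite3 module on an in-memory database.
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
    CREATE TABLE author (id INTEGER PRIMARY KEY, name TEXT);
    CREATE TABLE book   (id INTEGER PRIMARY KEY, title TEXT,
                         author_id INTEGER REFERENCES author (id));
    INSERT INTO author (id, name) VALUES (1, 'H. G. Wells'), (2, 'Julio Verne');
    INSERT INTO book (title, author_id) VALUES
        ('A Máquina do Tempo', 1),
        ('Guerra dos Mundos', 1),
        ('Volta ao Mundo em 80 Dias', 2);
""")

rows = conn.execute("""
    SELECT book.title FROM book
    JOIN author ON book.author_id = author.id
    WHERE author.name = 'H. G. Wells'
    ORDER BY book.title    -- extras like ORDER BY / LIMIT map to
    LIMIT 10               -- Peewee's .order_by() / .limit()
""").fetchall()
titles = [row[0] for row in rows]
print(titles)  # → ['A Máquina do Tempo', 'Guerra dos Mundos']
```

Seeing the generated SQL spelled out also makes it clear what the ORM is saving you from writing by hand.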

Updating data in the database

Updating data is also quite simple. In the previous example, if we look at the query results for books by "H. G. Wells", we find a book titled "Vinte Mil Léguas Submarinas". If you, dear reader, enjoy science fiction, you know this work was written by "Julio Verne", who happens to be one of the authors registered in our database. So let's correct the book's author.

First, let's fetch the author and book records:

new_author = Author.get(Author.name == 'Julio Verne')
book = Book.get(Book.title == "Vinte Mil Leguas Submarinas")

Now let's change the author and save the change to the database.

# Change the book's author
book.author = new_author
# Save the change to the database
book.save()

Deleting data from the database

Like the previous operations, deleting records from the database is also very convenient. As an example, let's delete the book "Guerra dos Mundos" from our database.

# Fetch the book we want to delete from the database
book = Book.get(Book.title == "Guerra dos Mundos")
# Delete the book from the database
book.delete_instance()

Simple, isn't it?

Conclusion

That's it, folks. This tutorial was a very compact introduction to Peewee. There are many topics I did not cover here, such as primary keys, many-to-many fields, and other features, since they fall outside the scope of this tutorial. If you liked the ORM, I also recommend taking a look at its documentation to get the most out of the tool. Using an ORM saves the developer from spending time writing SQL queries and lets them focus entirely on writing code.

Peewee also has support for the Flask framework, so depending on the size of the project, it can be an interesting alternative to more complex ORMs such as SQLAlchemy.

That's it, folks. Thanks for reading, and see you in the next tutorial!

References


