Channel: Planet Python

John Cook: Physical constants in Python


You can find a large collection of physical constants in scipy.constants. The most frequently used constants are available directly, and hundreds more are in a dictionary physical_constants.

The fine structure constant α is defined in terms of other physical constants:

\alpha = \frac{e^2}{4 \pi \varepsilon_0 \hbar c}

The following code shows that the fine structure constant and the other constants that go into it are available in scipy.constants.

    import scipy.constants as sc

    a = sc.elementary_charge**2
    b = 4 * sc.pi * sc.epsilon_0 * sc.hbar * sc.c
    assert abs(a/b - sc.fine_structure) < 1e-12

Eddington’s constant

In the 1930s Arthur Eddington believed that the number of protons in the observable universe was exactly the Eddington number

N_{\mathrm{Edd}} = \frac{2^{256}}{\alpha}

Since at the time the fine structure constant was thought to be 1/136, this made the number of protons a nice even 136 × 2^256. Later he revised his number when it looked like the fine structure constant was 1/137. According to the Python code above, the current estimate is more like 1/137.036.
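Since Python integers have arbitrary precision, Eddington's number is easy to reproduce exactly without scipy. A quick sketch using his two historical estimates of 1/α:

```python
# Eddington's number N_Edd = 2**256 / alpha, computed exactly with
# Python's arbitrary-precision integers and his historical values of 1/alpha:
n_edd_original = 136 * 2**256   # his original value, 1/alpha = 136
n_edd_revised = 137 * 2**256    # his revised value, 1/alpha = 137

print(n_edd_original)  # an 80-digit integer, about 1.57e79
```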

Eddington was a very accomplished scientist, though he had some ideas that seem odd today. His number is a not a bad estimate, though nobody believes it could be exact.

Related posts

The constants in scipy.constants have come up in a couple of previous blog posts.

The post on Koide’s coincidence shows how to use the physical_constants dictionary, which includes not just the physical constant values but also their units and uncertainty.
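As a quick illustration (assuming scipy is installed), here is the physical_constants entry for the fine structure constant. Each entry is a (value, unit, uncertainty) triple; the unit string is empty because the constant is dimensionless:

```python
import scipy.constants as sc

# physical_constants maps CODATA names to (value, unit, uncertainty) triples.
value, unit, uncertainty = sc.physical_constants["fine-structure constant"]

print(value)       # about 0.0072973, i.e. roughly 1/137.036
print(repr(unit))  # '' because the constant is dimensionless
print(uncertainty)
```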

The post on Benford’s law shows that the leading digits of the constants in scipy.constants follow the logarithmic distribution observed by Frank Benford (and earlier by Simon Newcomb).
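Benford's distribution itself is a one-liner: the probability that the leading digit is d is log10(1 + 1/d). A quick sketch in plain Python:

```python
import math

# Benford's law: P(leading digit = d) = log10(1 + 1/d), for d = 1..9
benford = {d: math.log10(1 + 1 / d) for d in range(1, 10)}

for d, p in benford.items():
    print(d, round(p, 3))
# Digit 1 leads about 30.1% of the time, digit 9 only about 4.6%.
```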


Continuum Analytics Blog: Preparing Your Organization for Implementing an AI Platform


By Victor Ghadban You already know that implementing an enterprise-ready AI enablement platform is key to executing your organization’s AI and machine learning initiatives. But can software so complex really be easy to implement? How can you avoid disruptions to your business? What can you do to prepare? After spending the last 20 years working …

The post Preparing Your Organization for Implementing an AI Platform appeared first on Anaconda.

NumFOCUS: Updates to the NumFOCUS Code of Conduct

Yasoob Khalid: Privacy & Why it Matters


⚠ Long post about privacy. It doesn’t relate to Python at all, but do read it if you care and want to learn more about how your privacy is being violated.

The last couple of weeks have totally changed my viewpoint on privacy. I used to be part of the bandwagon of people who say, “Why should I be worried about privacy when I have nothing to hide and don’t do anything illegal?” You only start taking privacy seriously when yours is breached. Let me reiterate: privacy is a fundamental right and should be granted to everyone.

In my efforts to shift away from the big 3 (Facebook, Microsoft, Google), I have started using a paid email service, ProtonMail. It might sound dumb to pay for something so basic, which so many companies offer for free, but it really is not. There is a saying that is particularly famous in tech circles:

“If you don’t pay for the product, you are the product.”

I am linking some articles below which can give you some context on what is being tracked and how. This is just a teaser, and the reality is bleaker than it sounds. I urge you to read these articles. If you want to discuss them with someone, you are more than welcome to talk to me.


👉The Big Hack: How China Used a Tiny Chip to Infiltrate U.S. Companies

👉Why I am worried about Google

👉Report: Google Tracks What You Buy Offline Using Data from Mastercard

👉Spying on Congress and Israel: NSA Cheerleaders Discover Value of Privacy Only When Their Own Is Violated

👉 Exclusive: WhatsApp Cofounder Brian Acton Gives The Inside Story On #DeleteFacebook And Why He Left $850 Million Behind

The reason it’s so hard to move away from the big 3 is that as soon as you move away, you start feeling like a second-class citizen. However, that’s not as true as it was a couple of years ago. You can find some really good alternatives to popular Google services at this link: https://degooglisons-internet.org/en/alternatives

Privacy-conscious services you can use:

  • Chrome -> Firefox
  • GMail -> Protonmail/Fastmail
  • Google Drive -> Dropbox or better yet NextCloud

I will write another post in the near future about how to start the process of moving away from these services.


🔒 Now, for those who make the same argument against privacy that I used to make, here are some responses:

Edward Snowden: “Arguing that you don’t care about the right to privacy because you have nothing to hide is no different than saying you don’t care about free speech because you have nothing to say.”

“When you say, ‘I have nothing to hide,’ you’re saying, ‘I don’t care about this right.’ You’re saying, ‘I don’t have this right, because I’ve got to the point where I have to justify it.’ The way rights work is, the government has to justify its intrusion into your rights.”

“[P]eople often feel immune from state surveillance because they’ve done nothing wrong,” but an entity or group can distort a person’s image and harm their reputation, or use guilt by association to defame them.

🔗Source


 

At one point I was a huge opponent of the GDPR (the European privacy law) because of how much effort small entrepreneurs have to put in just to be compliant with it. But after I realized how easy it is for big companies to breach people’s privacy, I started supporting it. It might take me extra time to make my projects compliant, but if it means that the big companies will also have a tough time keeping people’s data without their explicit knowledge and consent, I am all in favor of it.

It goes without saying that if you ever want to talk to anyone about this, please let me know and I will talk to you. I don’t claim to know even the tiniest bit about privacy and what it means, but I am trying my best to learn as much as possible so that I can better guide myself and others.

Wallaroo Labs: Wallaroo goes full Apache 2.0

I’m writing today to announce that with the release of Wallaroo 0.5.3, we have switched our licensing over to a pure open source model. What does all this mean for you? Well, if you are a current Wallaroo user, you get all the features you’ve been using, plus no limit on the number of CPUs your application can use. Previously, you had to get a license from us if you were using more than 24 cores to run Wallaroo.

Vinta Software: PyGotham 2018 Talks

Pluggable Libs Through Design Patterns Presenter: Filipe Ximenes Slides: http://bit.ly/pluggable-libs

Codementor: Journey from Non-Technical background to an expert in Data Science

Journey from Non-Technical background to an expert in Data Science

Continuum Analytics Blog: Intake: Parsing Data from Filenames and Paths


By Julia Signell Motivation Do you have data in collections of files, where information is encoded both in the contents and the file/directory names? Perhaps something like '{year}/{month}/{day}/{site}/measurement.csv'? This is a very common problem for which people build custom code all the time. Intake provides a systematic way to declare that information in a concise spec. …

The post Intake: Parsing Data from Filenames and Paths appeared first on Anaconda.


Ian Ozsvald: Keynote at EuroPython 2018 on “Citizen Science”


I’ve just had the privilege of giving my first keynote at EuroPython (and my second keynote this year), speaking on “Citizen Science”. I gave a talk aimed at engineers, showing examples of projects around healthcare and humanitarian topics that use Python to make the world a better place. The main point was “gather your data, draw graphs, start to ask questions” – this is something that anyone can do.

Last day. Morning keynote by @IanOzsvald (sp.) “Citizen Science”. Really cool talk! – @bz_sara

EuroPython crowd for my keynote

In the talk I covered 4 short stories and then gave a live demo of a Jupyter Lab to graph some audience-collected data:

  • Gorjan’s talk on Macedonian awful-air-quality from PyDataAmsterdam 2018
  • My talks on solving Sneeze Diagnosis given at PyDataLondon 2017, ODSC 2017 and elsewhere
  • Anna’s talk on improving baby-delivery healthcare from PyDataWarsaw 2017
  • Dirk’s talk on saving Orangutans with Drones from PyDataAmsterdam 2017
  • Jupyter Lab demo on “guessing my dog’s weight” to crowd-source guesses which we investigate using a Lab

The goal of the live demo was to a) collect data (before and after showing photos of my dog) and b) show some interesting results that come out of graphing the results using histograms so that c) everyone realises that drawing graphs of their own data is possible and perhaps is something they too can try. Whilst having folk estimate my dog’s weight won’t change the world, getting them involved in collecting and thinking about data will, I hope, get more folk engaged outside of the conference.

The slides are here.

One of the audience members took some notes:

Here’s some output. Approximately 440 people participated in the two single-answer surveys. The first (poor-information estimate) is “What’s the weight of my dog in kg when you know nothing about the dog?” and the second (good-information estimate) is “The same, but now you’ve seen 8+ pictures of my dog”.

With poor information folk tended to go for the round numbers (see the spikes at 15, 20, 30, 35, 40). After the photos were shown the variance reduced (the talk used more graphs to show this), which is what I wanted to see. Ada’s actual weight is 17kg so the “wisdom of the crowds” estimate was off, but not terribly so and since this wasn’t a dog-fanciers crowd, that’s hardly surprising!

Before showing the photos the median estimate was 12.85kg (mean 14.78kg) from 448 estimates. The 5% quantile was 4kg, 95% quantile 34kg, so 90% of the estimates had a range of 30kg.

After showing the photos the median estimate was 12kg (mean 12.84kg) from 412 estimates. The 5% quantile was 5kg, 95% quantile 25kg, so 90% of the estimates had a range of 20kg.

There were only a couple of guesses above 80kg before showing the photos, and none after. A large, heavy dog can weigh over 100kg, so a guess that high, before knowing anything about my dog, was feasible.

Around 3% of my audience decided to test my CSV parsing code during my live demo (oh, the wags) with somewhat “tricky” values including “NaN”, “None”, “Null”, “Inf”, “∞”, “-15”, “⁴4”, “1.00E+106”, “99999999999”, “Nana”, “1337” (i.e. leet!), “1-30”, “+[[]]” (???). The “show the raw values in a histogram” cell blew up with this input but the subsequent cells (using a mask to select only a valid positive range) all worked fine. Ah, live demos.

The slides conclude with two sets of links, one of which points the reader at open data sources which could be used in your own explorations. Source code is linked on my github.


Ian applies Data Science as an AI/Data Scientist for companies in ModelInsight and in his Mor Consulting, sign-up for Data Science tutorials in London. He also founded the image and text annotation API Annotate.io, lives in London and is a consumer of fine coffees.

The post Keynote at EuroPython 2018 on “Citizen Science” appeared first on Entrepreneurial Geekiness.

PyBites: Persistent Virtualenv Environment Variables with python-dotenv


I can't count the number of times I've followed a tutorial or guide that's said something along the lines of "Store your API keys in environment variables".

It's easy enough to do with os.getenv, but the thing that drives me crazy is having to either 1) hardcode the environment variables as global variables in my OS or 2) redeclare them every time I start a terminal session.

In this article I'm going to show you (and to document for myself!) how to declare persistent environment variables in Python Virtual Environments with python-dotenv.

Environment Variables in a Virtual Environment

With a UNIX-based OS you'd traditionally declare your environment variables by explicitly defining them at the OS level. Take the variables CLIENT_ID and CLIENT_SECRET as an example:

#export CLIENT_ID="0123450987"
#export CLIENT_SECRET="julianandsilentbobstrikeback"
#
#echo $CLIENT_ID
0123450987
#
#echo $CLIENT_SECRET
julianandsilentbobstrikeback

To do this within your Python Virtual Environment (venv) you'd do the same declarations as above after you've activated your venv. Doing this means your environment variables are within the virtual environment, not the global OS environment.

Read more on Python Virtual Environments here.

This is all well and good except that when you deactivate and reactivate the venv, the environment variables are lost and you'll need to redeclare them.

Enter python-dotenv

python-dotenv is a Python module that lets you specify environment variables in a traditional UNIX-style ".env" ("dot-env") file within your Python project directory. This works in venvs too!

Here's the flow:

  1. Populate the .env file with your environment variables.
  2. Import python-dotenv and call it at the start of your script.
  3. Use whatever "getenv" method you normally use to read environment variables, e.g. os.getenv().

python-dotenv Example

1. Create your .env file

(venv)#vim .env
(venv)#cat .env
CLIENT_ID="1234509876"
CLIENT_SECRET="julianandsilentbobstrikeback"
(venv)#

2. Import and Call python-dotenv

I'm using a file called routes.py as my primary Python script here:

(venv)#cat routes.py
from dotenv import load_dotenv

load_dotenv()

I begin by importing load_dotenv from the dotenv module.

It's then as simple as calling load_dotenv() to make the .env file available to your script as its source of environment variables.

3. Access the Environment Variables

I've continued with routes.py by writing a function that uses os.getenv to pull and print the environment variables I specified in my .env file:

from dotenv import load_dotenv
import os

load_dotenv()

client = os.getenv("CLIENT_ID")
secret = os.getenv("CLIENT_SECRET")

def printenvironment():
    print(f'The client id is: {client}.')
    print(f'The secret id is: {secret}.')

if __name__ == "__main__":
    printenvironment()


Running this script returns the following:

#python routes.py 
The client id is: 1234509876.
The secret id is: julianandsilentbobstrikeback.
#

PERSISTENCE AT LAST!

The original problem was that when you'd deactivate and reactivate the venv you'd lose the environment variables.

Watch the persistence in action!

(venv)#deactivate
#
#
#source venv/bin/activate
(venv)#
(venv)#python routes.py 
The client id is: 1234509876.
The secret id is: julianandsilentbobstrikeback.
#

Okay... I know. Anticlimactic. No! What am I saying?! Super handy and so time saver-y!

.env Example File

Pro-tip: if you're committing to a public repo, make sure .env files are listed in the .gitignore file. You don't want your environment variables being pushed to a public repo!

That said, you'll want to let people know what environment variables to configure for themselves if they're going to clone your repo or use your script.

The nice way to do this is to create an "empty" .env.example file:

#vim .env.example
CLIENT_ID=""
CLIENT_SECRET=""

Conclusion

This is one of those Python things I'll be taking with me to the grave. Across the numerous apps and scripts I've created, managing these env variables has always been a pain.

But no more I say! I'll be using python-dotenv to manage everything.


Keep Calm and Code in Python!

-- Julian

Weekly Python StackOverflow Report: (cxlvi) stackoverflow python report


py.CheckIO: How To Publish A Package On PyPI


Every development has a back-story consisting of a great amount of work, research, failed or simpler prototypes, and people who have influenced it one way or another. The knowledge and skills we get from our predecessors shape our own work. That’s why new programs are based on existing ones, and programmers share their code to inspire and help each other achieve greater things. The Python Package Index (PyPI) is a suitable and very popular Python software repository for doing just that. In this article we want to guide you through the steps to publish your work on PyPI.

PyBites: PyBites Twitter Digest - Issue 31, 2018


Read about how Dropbox migrated to Python 3!

Submitted by @dgjustice

Tutorial on creating GUI apps in Python with PyQt5

Submitted by @clamytoe

Great write up from Trey on Lambda Expressions

A wonderful Test and Code interview on testing and Selenium with our mate Andy!

A Flask newsletter by Import Python!

String Operations covered in typical Real Python fashion - have a read!

NumPy bug in Python 3.7 causing data science issues

Submitted by @clamytoe.

An introduction to using Black for code compliance

How to write C like our very own Erik!

Submitted and written by @Erik.

Cool! List of Python YouTube channels

Python Developer Survey 2018! Fill it out now!

Data Analysis with NumPy and Pandas

Two of my favourite things! Flask and AWS!

Google Colaboratory! This is cool. Also, congrats Jake!

Pretty printing dictionaries with json.dumps() - good tip!


>>> from pybites import Bob, Julian

Keep Calm and Code in Python!

Davy Wybiral: DIY Pumpkin Bluetooth Stereo

It's October, and Halloween is my favorite holiday, so to celebrate I built this Jack-o'-lantern Bluetooth stereo out of a real pumpkin. It sounds great and should be really easy to scare people with (nobody expects the Jack-o'-lantern to talk to them).

Check it out:

Ned Batchelder: Who tests what is here


A long-awaited feature of coverage.py is now available in a rough form: Who Tests What annotates coverage data with the name of the test function that ran the code.

To try it out:

  • Install coverage.py v5.0a3.
  • Add this line literally to the [run] section of your .coveragerc file:

    [run]
    dynamic_context = test_function
  • Run your tests.
  • The .coverage file is now a SQLite database. There is no change to reporting yet, so you will need to do your own querying of the SQLite database to get information out. See below for a description of the database schema.

The database has a few tables:

  • file: maps full file paths to file ids: id, path
  • context: maps contexts (test function names) to context ids: id, context
  • line: the line execution data: file_id, context_id, lineno
  • arc: similar to line, but for branch coverage: file_id, context_id, fromno, tono

It’s not the most convenient, but the information is all there. If you used branch coverage, then the important data is in the “arc” table, and “line” is empty. If you didn’t use branch coverage, then “line” has data and “arc” is empty. For example, using the sqlite3 command-line tool, here’s a query to see which tests ran a particular line:

sqlite> select
   ...> distinct context.context from arc, file, context
   ...> where arc.file_id = file.id
   ...> and arc.context_id = context.id
   ...> and file.path like '%/xmlreport.py'
   ...> and arc.tono = 122;
context
------------------------------------------------------------
XmlPackageStructureTest.test_package_names
OmitIncludeTestsMixin.test_omit
OmitIncludeTestsMixin.test_include
OmitIncludeTestsMixin.test_omit_2
XmlReportTest.test_filename_format_showing_everything
XmlReportTest.test_no_source
OmitIncludeTestsMixin.test_include_as_string
OmitIncludeTestsMixin.test_omit_and_include
XmlReportTest.test_empty_file_is_100_not_0
OmitIncludeTestsMixin.test_omit_as_string
XmlReportTest.test_nonascii_directory
OmitIncludeTestsMixin.test_nothing_specified
XmlReportTest.test_curdir_source
XmlReportTest.test_deep_source
XmlPackageStructureTest.test_package_depth
XmlPackageStructureTest.test_source_prefix
XmlGoldTest.test_a_xml_2
XmlGoldTest.test_a_xml_1
XmlReportTest.test_filename_format_including_module
XmlReportTest.test_reporting_on_nothing
XmlReportTest.test_filename_format_including_filename
ReportingReturnValueTest.test_xml
OmitIncludeTestsMixin.test_include_2
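The same query can also be run from Python with the standard library’s sqlite3 module. Here’s a sketch against a throwaway in-memory database with the schema described above (the file path and test names in the sample rows are invented for illustration):

```python
import sqlite3

# Build a tiny in-memory database with the same tables and columns
# described above; the sample rows are made up for illustration.
con = sqlite3.connect(":memory:")
con.executescript("""
    CREATE TABLE file (id INTEGER PRIMARY KEY, path TEXT);
    CREATE TABLE context (id INTEGER PRIMARY KEY, context TEXT);
    CREATE TABLE arc (file_id INTEGER, context_id INTEGER, fromno INTEGER, tono INTEGER);
    INSERT INTO file VALUES (1, '/src/coverage/xmlreport.py');
    INSERT INTO context VALUES (1, 'XmlReportTest.test_no_source'),
                               (2, 'XmlGoldTest.test_a_xml_1');
    INSERT INTO arc VALUES (1, 1, 121, 122), (1, 2, 121, 122);
""")

# The same query as the sqlite3 command-line session above.
rows = con.execute("""
    SELECT DISTINCT context.context FROM arc, file, context
    WHERE arc.file_id = file.id
      AND arc.context_id = context.id
      AND file.path LIKE '%/xmlreport.py'
      AND arc.tono = 122
""").fetchall()

for (test_name,) in rows:
    print(test_name)
```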

BTW, there are also “static contexts” if you are interested in keeping coverage data from different test runs separate: see Measurement Contexts in the docs for details.

Some things to note and think about:

  • The test function name recorded includes the test class if we can figure it out. Sometimes this isn’t possible. Would it be better to record the filename and line number?
  • Is test_function too fine-grained for some people? Maybe chunking to the test class or even the test file would be enough?
  • Better would be to have test runner plugins that could tell us the test identifier. Anyone want to help with that?
  • What other kinds of dynamic contexts might be useful?
  • What would be good ways to report on this data? How are you navigating the data to get useful information from it?
  • How is the performance?
  • We could have a “coverage extract” command that would be like the opposite of “coverage combine”: it could pull out a subset of the data so a readable report could be made from it.

Please try this out, and let me know how it goes. Thanks.


Codementor: Web scraping using Python and BeautifulSoup

Intro In the era of data science it is common to collect data from websites for analytics purposes. Python is one of the most commonly used programming languages for data science projects. Using...

Jeff Knupp: Extended Absence


For those who have been keeping score, I've been on somewhat of an extended absence, especially since October of last year. Much of it was the result of events in my personal life which I'll not discuss here. Regardless, consider this my "comeback" announcement. I plan to return to writing regularly, hoping to match the pace of 2014 or so, which already seems so long ago. Heck, Writing Idiomatic Python is three months older than my older daughter, and she just started kindergarten a few weeks ago. This blog is older than most startups, but it's been a while since I've been producing (I think) useful or interesting content at a reliable pace. I'm going to change that.

Slightly Less Personal News

Braze

At the end of July of this year I joined Braze (formerly Appboy) as a Senior Engineer in my return to being an IC (Individual Contributor in management-ese). Management was fun but somewhat thrust upon me. I wanted to take a more measured approach.

In Braze, I couldn't have found a better match. Braze counts some of the biggest brands in the world (Gap, Microsoft, Domino's, ABC News, NASCAR, Lyft, Venmo, HBO, and Burger King are a few, to give you a sense of the breadth of industries they work with) as customers. Those brands use Braze's platform to manage their mobile marketing strategies. And though you probably haven't heard of Braze (formerly Appboy, which is actually how the company, for now, refers to itself), they are no small player: #21 on Deloitte's annual Fast 500 (the 500 fastest-growing technology companies in America), a "Leader" in the 2018 Gartner Magic Quadrant survey of mobile marketing platforms (and that's from a field of companies that included Salesforce, IBM, and Oracle), and profiled by publications like Fortune and Inc. They've built an incredible company full of nice, thoughtful people who genuinely care about the product they work on and the services they offer.

So what does it do? And what do I now do? I'm glad you asked.

Braze's product is a B2B SaaS product, which is a fancy way of saying a cloud-hosted, web-based app sold directly to businesses rather than individuals. Companies that purchase the product have marketing departments with more money and employees than most startups. Of course, they have a number of mediums through which to advertise: TV, outdoor ad space, radio, print media, etc. And, oh yeah, digital. You know, the primary way you interact every day with almost any major brand? Suffice it to say that digital marketing is big business.

But how does one go about creating, managing, and implementing a digital marketing strategy? Once a company has determined that it should run an email marketing campaign to announce the newest changes to its product, where does it go to actually write the email, choose the users to receive it, send the emails, and measure and analyze engagement data? And wait, a large brand may have a dozen email-based marketing campaigns going at one time, with new ones starting all the time. And they also have campaigns they want to run for users of their mobile app using push notifications and in-app messages. And web push/browser campaigns for visitors to their website. They need a platform that allows them to do all of this in one place.

A customer can use Braze's platform to create, manage, and actually run (i.e. send the emails, push notifications, etc. to the actual end user's device) all of those campaigns. And what about all that data that gets generated, the kind capable of answering questions like "who bought something (and what did they buy) within 3 days of receiving this notification on their phone?" or "what's the most effective messaging channel to reach my female customers from Canada who are over 35 and have used my mobile app at least three times in the past week"? All of that is collected, analyzed, presented, and ultimately fed back into the product itself, as well as to the company itself.

Adding features to that web application and owning the backend platform, which is responsible for actually sending all those messages and managing the life-cycles of automated marketing campaigns, is the responsibility of my team. And it's all in Ruby on Rails and in JavaScript. No, I'm not joking. I'm working on an RoR app (including frontend development), having never used Ruby, as a guy who's known as a "Python Guy". And I couldn't be happier.

What the engineering team at Braze has been able to accomplish using RoR, mongo, redis, memcached, and sidekiq (a distributed task execution framework along the same lines as Celery for you Python folk, which Braze co-founder and CTO Jon Hyman happens to be one of the main contributors to), is nothing short of remarkable. Aside from a few Go microservices acting as the last mile before sending billions of messages a month and some Java for data processing and integrations, the entire platform is a single Rails app. And they've been able to scale that app in ways that would make John Carmack smile. In terms of sheer amount of storage and compute power used, it's the largest distributed system I've ever worked on (algorithmic trading systems, which I started my career building at Goldman Sachs in 2005, were somewhat distributed back then but more often just sharded instances of the same application that didn't share any data).

But then, it's also 2018.

Microservices developed and deployed at the speed and scale we see today simply weren't feasible until fairly recently. There was a sticky problem: orchestration. Before the orchestration tools we have today existed, the increase in maintenance and support effort to deploy a new service was quite high in most organizations. But almost no effort is required to do so with today's orchestration tools. At whiteboards around the world, engineers are discussing ways to break up their monolithic application into a series of disparate services with focused responsibilities and well-defined interfaces. And Braze is no different.

As an engineer who loves working on distributed systems, there are two really fun times to work on them: when they are first being built or when they've hit an inflection point and need to scale in ways that require rethinking everything. And "scale" here doesn't just mean "process more data". It also means "support concurrent development by dozens of developers" and "decrease our maintenance and support burden". Those are the opportunities I live for, and that's why I'm so excited to be at Braze.

Ever Onward

Yes, yes, I know: "The Python guy is working on a Ruby on Rails app?" To be completely honest, it's only in the past few years that I've used Python professionally. So in terms of my earlier "comeback" announcement: the fact that I'm not writing Python at work doesn't mean I'm not writing it, and I do and will have plenty of topics to discuss on all things Python. Because Python is reaching an inflection point as well, and learning Python concepts and how to think about writing code has never been a hotter (nor more important) topic.

So yeah, my life has changed a lot. And even calling myself "The Python guy" is a stretch at this point. I haven't produced a lot in the last few years. But I miss that. And I aim to change it. For those of you who emailed or tweeted me while the site was down last month, thank you. It's up now and staying up. And you're going to be finding yourself on it more often now.

Time to get to work...

Jaime Buelta: I wrote a Python book!

So, great news: I wrote a book and it’s available! It’s called Python Automation Cookbook, and it’s aimed at people who already know a bit of Python (not necessarily developers only) but would like to use it to automate common tasks like searching files, creating different kinds of documents, adding graphs, sending emails, text messages, …

Mike Driscoll: PyDev of the Week: K Lars Lohn


This week we welcome K Lars Lohn (@2braids) as our PyDev of the Week! He has been a part of the Python community for quite a few years. You can learn a bit more about him over on his blog or by checking out his Github account. Let’s spend some time getting to know him a bit more!

Can you tell us a little about yourself (hobbies, education, etc):

I’m a product of the education system of the State of Montana in the 1980s. I studied Electrical Engineering at Montana State, but switched to Computer Science at the University of Montana. I switched universities to get access to U of M’s VAX-750 running Unix. I graduated with a BS degree in ’83 and then, later, an MS in ’91.

My hobbies include unusual plants, intricate drawing and baroque music. I have seven greenhouses filled with organic veggies, orchids and carnivorous plants. In the last few years I’ve discovered that I can draw well enough to show pieces in art galleries. Finally, while I own an oboe and a family of baroque recorders, I’ve settled on an electronic woodwind instrument, a Yamaha WX-5. Oh yeah, then there are the Harleys: an ’08 FX-STB Night Train and a ’15 Fat Boy Low.

I work for the Mozilla Corporation. My first contributions to Mozilla projects began while working at the OSUOSL in ’04. Later as an employee, I was the lead developer in Socorro, the Python based server side of the Firefox crash reporting system.

Why did you start using Python?

After considering several languages, I chose Python because it offered both OOP and functional paradigms. It’s quick to learn and quick from concept to working code.

Microsoft inadvertently pushed me into Python and open source way back in ’02. At that time, I was running an online nursery specializing in rose bushes. The business ran on custom software I wrote using a Microsoft stack: Windows, C++, Access, SQL Server. One day, I received a letter from the Business Software Alliance threatening a lawsuit over unlicensed Microsoft Windows and MS Office software. I was unable to find any of the “Certificates of Authenticity” that came with the Dell machines. It turned out to be a protection racket: Microsoft, in cahoots with the BSA, trying to shake down small businesses and scare them into paying “settlements” for software for which they already held legitimate licenses. I reported the scheme to the Oregon Attorney General.

Infuriated by their cavalier ethics, I decided that Microsoft had no business having a hand in my revenue stream. After some research in Open Source Software, I chose Python, Postgres and Linux to re-implement the software that ran my business. A month later, Microsoft was excised from both my business and personal life.

I hold a grudge against Microsoft to this day. I still have pursed lips about GitHub.

What other programming languages do you know and which is your favorite?

C++ will always have a place in my heart. I learned OOP in a graduate school course using Smalltalk in ’88. For the final project we had to use some other OOP language. After some research, I chose C++. It was a great choice, as it launched the next era of my career at Rogue Wave Software in the early to mid ’90s. I was the senior developer on the team that created the SQL encapsulation library known as DBTools.h++ (later renamed SourcePro DB). It was essentially SQLAlchemy for C++.

One of my favorite languages from the past is APL, a cryptic exercise in unreadable code. It has its own character set and special keyboard. I was fascinated with its brevity and power. Conway’s Life cellular automaton can be implemented in one line: life←{↑1 ⍵∨.∧3 4=+/,¯1 0 1∘.⊖¯1 0 1∘.⌽⊂⍵} (from Wikipedia)
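For contrast, here is roughly what that one-liner has to say in Python — a minimal sketch of one Life generation on a wrapping (toroidal) grid, included purely to illustrate how much the APL expression packs in; it is not code from the interview:

```python
# One generation of Conway's Life, using the same rule the APL
# one-liner expresses: summing the full 3x3 neighborhood (cell
# included), a cell lives next generation if the sum is 3, or 4
# with the cell itself currently alive.
def life(grid):
    rows, cols = len(grid), len(grid[0])

    def neighborhood(r, c):
        # Sum the 3x3 block centered on (r, c), wrapping at the edges.
        return sum(grid[(r + dr) % rows][(c + dc) % cols]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1))

    return [[1 if neighborhood(r, c) == 3
                  or (neighborhood(r, c) == 4 and grid[r][c])
             else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker" oscillates between a horizontal and a vertical bar.
blinker = [[0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0],
           [0, 1, 1, 1, 0],
           [0, 0, 0, 0, 0],
           [0, 0, 0, 0, 0]]
```

Running `life(blinker)` twice returns the original grid, as the oscillator's period-2 behavior predicts.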

What projects are you working on now?

I’m currently attached to the IoT group in Mozilla’s Emerging Technologies unit. The goal is to promote a standard that encourages the Internet of Things into becoming the Web of Things. There are too many proprietary systems and vertical silos in IoT. The same standards that drive the Open Web could be applied to IoT. Controlling a switch or a thermostat shouldn’t require the cloud unless control from outside the local area network is desired and warranted.

The IoT group created the Things Gateway, an implementation of the Web Things API. It enables devices to be treated as Web apps, allowing control using Web tools: a browser and just about any language that can use a RESTful API and Web Sockets. The Things Gateway also serves as a bridge between non-IP technologies (Z-Wave, Zigbee) and the Web Thing API.
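As a sketch of what that looks like from the client side, here is a minimal property write following the Web Thing API's `/things/<id>/properties/<name>` resource convention. The gateway address, thing id, and token are placeholders, not values from the interview or the Gateway docs:

```python
import json
import urllib.request

GATEWAY = "http://gateway.local:8080"  # hypothetical gateway address
TOKEN = "placeholder-token"            # bearer token issued by the gateway UI

def build_request(thing_id, name, value):
    # PUT /things/<id>/properties/<name> with a JSON body like
    # {"on": true}, per the Web Thing API property convention.
    body = json.dumps({name: value}).encode()
    return urllib.request.Request(
        f"{GATEWAY}/things/{thing_id}/properties/{name}",
        data=body, method="PUT",
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"})

def set_property(thing_id, name, value):
    # Send the request; the gateway echoes the updated property back.
    with urllib.request.urlopen(build_request(thing_id, name, value)) as resp:
        return json.load(resp)

# e.g. set_property("tide-light", "on", True)
```

Because the protocol is just HTTP plus JSON, any language with a socket library can drive a device this way — which is the whole point of treating things as Web apps.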

My work these days is exercising the Web Thing API with Python. I’m taking devices and services and wrapping them in a Web Thing blanket so they can be accessed and controlled as home automation devices. Some examples can be found on my blog: https://www.twobraids.com/search/label/IoT

One of the more interesting recent projects was to make a Tide Light (http://www.twobraids.com/2018/06/things-gateway-restful-api-and-tide.html). Using a Philips Hue bulb, I make the color of the bulb follow real-time tide information: green for low tide, red for high tide. For the low-to-high transition, the bulb fades from green to yellow to orange to red. For the high-to-low cycle, it fades from red to magenta to blue to cyan to green.
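The color math underneath a scheme like that is a linear walk around part of the hue circle. Here is a rough sketch of one way to do it — not the actual Tide Light code, just the described green→red and red→green paths expressed with the standard library's `colorsys`:

```python
import colorsys

def tide_hue(fraction, rising):
    """Map position in the tide cycle (0.0 low/high to 1.0 high/low)
    to a hue in degrees.

    Rising:  green (120) -> yellow -> orange -> red (0).
    Falling: red (360) -> magenta -> blue -> cyan -> green (120).
    """
    if rising:
        return 120.0 * (1.0 - fraction)
    return (360.0 - 240.0 * fraction) % 360.0

def tide_rgb(fraction, rising):
    # Full saturation and brightness; only the hue tracks the tide.
    r, g, b = colorsys.hsv_to_rgb(tide_hue(fraction, rising) / 360.0, 1.0, 1.0)
    return round(r * 255), round(g * 255), round(b * 255)
```

Note the asymmetry matches the description: the rise covers a third of the hue circle, while the fall takes the long way around through magenta and blue.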

The nature of this sort of programming goes hand in hand with the asynchronous programming paradigm. This was my first opportunity to really start to exercise the abilities of Python in that realm. In the early days of asynchronous Python, it was all done with generators and I was skeptical because the intent seemed so obfuscated. In Python 3.6, it has achieved clarity and is really fun to use.
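A taste of that clarity — a minimal, hypothetical sketch (not Socorro or Gateway code) of polling two imaginary sensors concurrently with `async`/`await`:

```python
import asyncio

async def read_sensor(name, delay):
    # Stand-in for an awaitable device read; the sleep simulates I/O latency.
    await asyncio.sleep(delay)
    return f"{name}: ok"

async def poll_all():
    # Both reads are in flight at once, so total wall time is the
    # slowest read rather than the sum -- the win over sequential polling.
    return await asyncio.gather(
        read_sensor("thermostat", 0.01),
        read_sensor("tide-light", 0.02),
    )

# asyncio.run() arrived in 3.7; on 3.6 you would use
# asyncio.get_event_loop().run_until_complete(poll_all()) instead.
results = asyncio.run(poll_all())
```

Compare this with the generator-and-decorator style of early asyncio: the control flow reads top to bottom, and the `await` points mark exactly where the coroutine yields.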

Which Python libraries are your favorite (core or 3rd party)?

The most common modules that I import are functools, itertools and contextlib. Erik Rose’s more-itertools is wonderfully useful (https://more-itertools.readthedocs.io/en/latest/api.html).
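A small illustration of why those three stdlib modules earn their imports — grouping, folding, and resource management in a few lines (my examples, not the interviewee's):

```python
from contextlib import contextmanager
from functools import reduce
from itertools import groupby

# itertools.groupby: run-length encode consecutive characters.
runs = [(key, len(list(group))) for key, group in groupby("aaabbc")]

# functools.reduce: fold a sequence down to a single value.
total = reduce(lambda acc, n: acc + n, [1, 2, 3, 4], 0)

# contextlib.contextmanager: turn a generator into a with-statement helper.
@contextmanager
def tag(name):
    print(f"<{name}>")
    yield
    print(f"</{name}>")

# with tag("p"): print("hello")   -> <p> / hello / </p>
```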

Of course, I have a lot of my own tools that I use repeatedly. configman is prominent in a majority of my code, but I’m not sure I can recommend others using it. While it is richly featured, it is poorly documented.

Is there anything else you’d like to say?

Deep within this field that we call Software Engineering is a hidden Law of Conservation of Complexity. I have serious doubts that complexity can be eliminated; it can only be shifted to somebody else. Want to simplify your complicated server architecture by breaking it into microservices? You’ve just delegated the complexity to the developers one step higher in the stack and the ops people one step lower. It seems we eventually push complexity all the way out to the end users. At that point we declare that it’s not our problem anymore.

Do not fear complexity. If you fear and loathe complexity, then you fear and loathe the world in which we live. Life itself is unfathomably complicated; I choose to make peace with it and embrace it when I can. Complexity underlies life’s ability for graceful recovery, which is always better than graceful failure.

Thanks for doing the interview!

Test and Code: 48: A GUI for pytest


The story of how I came to find a good user interface for running and debugging automated tests is interleaved with a multi-year effort of mine to have a test workflow that works smoothly with product development and actually speeds things up. It’s also interleaved with the origins of the blog pythontesting.net, this podcast, and the pytest book I wrote with Pragmatic.

It’s not a long story. And it has a happy ending. Well. It’s not over. But I’m happy with where we are now. I’m also hoping that this tale of my dedication to, or obsession with, quality and developer efficiency helps you in your own efforts to make your daily workflow better and to extend that to try to increase the efficiency of those you work with.

Sponsored By:

- Python Testing with pytest (http://amzn.to/2E6cYZ9): Simple, Rapid, Effective, and Scalable. The fastest way to learn pytest. From 0 to expert in under 200 pages.
- Patreon Supporters (https://www.patreon.com/testpodcast): Help support the show with as little as $1 per month. Funds help pay for expenses associated with the show.

Support Test and Code: https://www.patreon.com/testpodcast

Links:

- pythontesting.net (http://pythontesting.net/)