Channel: Planet Python

PyCoder’s Weekly: Issue #388 (Oct. 1, 2019)


#388 – OCTOBER 1, 2019
View in Browser »



How to Use Generators and Yield in Python

Learn about generators and yielding in Python. You’ll create generator functions and generator expressions using multiple Python yield statements. You’ll also see how to build data pipelines that take advantage of these Pythonic tools.
REAL PYTHON

Comparison of Python, Julia, Matlab, IDL and Java

“We use simple test cases to compare various high level programming languages. We implement the test cases from an angle of a novice programmer who is not familiar with the optimization techniques available in the languages. The goal is to highlight the strengths and weaknesses of each language but not to claim that one language is better than the others.”
NASA.GOV

Become a Python Guru With PyCharm


PyCharm is the Python IDE for Professional Developers by JetBrains providing a complete set of tools for productive Python, Web and scientific development. Be more productive and save time while PyCharm takes care of the routine →
JETBRAINS sponsor

Regex Performance in Python

“Working with regex, you have to understand what you are doing: the regex engine for Python, the type of statement you are writing, and alternative tools that are available for your purposes. Yes, there are instances when the re package may not be the best tool to use.”
JUN WU

PEG at the Core Developer Sprint

“Every year for the past four years a bunch of Python core developers get together for a week-long sprint at an exotic location. These sprints are sponsored by the PSF as well as by the company hosting the sprint.”
GUIDO VAN ROSSUM

Mypy 0.730 Released

Mypy 0.730 is out, with prettier, colored output and error code support, along with many other fixes and improvements.
MYPY-LANG.BLOGSPOT.COM

Discussions

Python Jobs

Backend Developer (Kfar Saba, Israel)

3DSignals

More Python Jobs >>>

Articles & Tutorials

Preventing SQL Injection Attacks With Python

SQL injection attacks are one of the most common web application security risks. In this step-by-step tutorial, you’ll learn how you can prevent Python SQL injection. You’ll learn how to compose SQL queries with parameters, as well as how to safely execute those queries in your database.
REAL PYTHON

Rectified Adam (RAdam) Optimizer With Keras

Learn how to use Keras and the Rectified Adam optimizer as a drop-in replacement for the standard Adam optimizer, potentially leading to a higher accuracy model (and in fewer epochs).
ADRIAN ROSEBROCK

Python Developers Are in Demand on Vettery


Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY sponsor

Simple Introduction to StringIO and BytesIO in Python

“For some reason IO streams are a totally underused feature that rarely comes up in most code. We all know that memory is faster than disk IO; this is what I use IO streams for.”
DANIEL BEACH • Shared by Daniel Beach

Strings and Character Data in Python

Learn how to use Python’s rich set of operators, functions, and methods for working with strings. You’ll learn how to access and extract portions of strings, and also become familiar with the methods that are available to manipulate and modify string data in Python 3.
REAL PYTHON video

Using iloc and loc for Indexing and Slicing Pandas Dataframes

Learn how to work with Pandas iloc and loc to slice, index, and subset your dataframes, for example by row and columns.
ERIK MARSJA

Projects & Code

pire: Python Interactive Regular Expressions

PIRE is an interactive command-line interface allowing you to edit regexes live and see how your changes match against the input you specify.
GITHUB.COM/JOHANNESTAAS

Events

PyCon Estonia

October 3 to October 4, 2019
PYCON.EE

PyCon Balkan 2019

October 3 to October 6, 2019
PYCONBALKAN.COM

SciPy Latam

October 8 to October 11, 2019
SCIPYLA.ORG

PyCon ZA 2019

October 9 to October 14, 2019
PYCON.ORG

PyConDE & PyData Berlin 2019

October 9 to October 12, 2019
PYCON.ORG

PyTennessee 2020 CFP

March 7 to March 8, 2020 in Nashville, TN
PAPERCALL.IO • Shared by Bill Israel


Happy Pythoning!
This was PyCoder’s Weekly Issue #388.
View in Browser »


[ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]


Catalin George Festila: Python 3.7.4 : Install the protobuf from sources on Fedora distro.

Today I will show you how to build protobuf from sources on the Fedora distro. The Google team introduces it like this: Protocol buffers are Google's language-neutral, platform-neutral, extensible mechanism for serializing structured data – think XML, but smaller, faster, and simpler... This Google project comes with these tutorials. The GitHub project can be found here. To install the compiler,

tryexceptpass: 9 Organizational Test Practices Guaranteed to Lower Quality and Customer Satisfaction

A contribution to freeCodeCamp.org examining test practices opposite to our usual goals.

Python Insider: Python 3.7.5rc1 is now available for testing

Python 3.7.5rc1 is now available for testing. 3.7.5rc1 is the release preview of the next maintenance release of Python 3.7, the latest feature release of Python. Assuming no critical problems are found prior to 2019-10-14, no code changes are planned between now and the final release. This release candidate is intended to give you the opportunity to test the new security and bug fixes in 3.7.5. We strongly encourage you to test your projects and report issues found to bugs.python.org as soon as possible. Please keep in mind that this is a preview release and, thus, its use is not recommended for production environments.

You can find the release files, a link to the changelog, and more information here:

Robin Wilson: Calculating Rayleigh Reflectance using Py6S


A user of Py6S recently contacted me to ask if it was possible to get an output of Rayleigh reflectance from Py6S. Unfortunately this email wasn’t sent to the Py6s Google Group, so I thought I’d write a blog post explaining how to do this, and showing a few outputs (reminder: please post Py6S questions there rather than emailing me directly, then people with questions in the future can find the answers there rather than asking again).

So, first of all, what is Rayleigh reflectance? Well, it’s the reflectance (as measured at the top-of-atmosphere) that is caused by Rayleigh scattering in the atmosphere. This is the wavelength-dependent scattering of light by gas molecules in the atmosphere – and it is an inescapable effect of light passing through the atmosphere.

So, on to how to calculate it in Py6S. Unfortunately the underlying 6S model doesn’t provide Rayleigh reflectance as an output, so we have to do a bit more work to calculate it.

First, let’s import Py6S and set up a few basic parameters:

from Py6S import *

s = SixS()

# Standard altitude settings for the sensor
# and target
s.altitudes.set_sensor_satellite_level()
s.altitudes.set_target_sea_level()

# Wavelength of 0.5 µm (500 nm)
s.wavelength = Wavelength(0.5)

Now, to calculate the reflectance which is entirely due to Rayleigh scattering we need to ‘turn off’ everything else that is going on that could contribute to the reflectance. First, we ‘turn off’ the ground reflectance by setting it to zero, so we won’t have any contribution from the ground reflectance:

s.ground_reflectance = GroundReflectance.HomogeneousLambertian(0)

Then we turn off aerosol scattering:

s.aero_profile = AeroProfile.PredefinedType(AeroProfile.NoAerosols)

and also atmospheric absorption by gases:

s.atmos_profile = AtmosProfile.PredefinedType(AtmosProfile.NoGaseousAbsorption)

We can then run the simulation (using s.run()) and look at the outputs. The best way to do this is to just run:

print(s.outputs.fulltext)

to look at the ‘pretty’ text output that Py6S provides. The value we want is the ‘apparent reflectance’ – which is the reflectance at the top-of-atmosphere. Because we’ve turned off everything else, this will be purely caused by the Rayleigh reflectance.

We can access this value programmatically as s.outputs.apparent_reflectance.
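Putting those steps together, a minimal snippet (just combining the calls described above) looks like this:

s.run()
# Apparent (top-of-atmosphere) reflectance -- purely Rayleigh here, because the
# ground reflectance, aerosols, and gaseous absorption have all been turned off
print(s.outputs.apparent_reflectance)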

So, that’s how to get the Rayleigh reflectance – but there are a few more interesting things to say…

Firstly, we don’t actually have to set the ground reflectance to zero. If we set the ground reflectance to something else – for example:

s.ground_reflectance = GroundReflectance.HomogeneousLambertian(GroundReflectance.GreenVegetation)

and run the simulation, then we will get a different answer for the apparent radiance – because the ground reflectance is now being taken into account – but we will see the value we want as the atmospheric intrinsic reflectance. This is the reflectance that comes directly from the atmosphere (in this case just from Rayleigh scattering, but in normal situations this would include aerosol scattering as well). This can be accessed programmatically as s.outputs.atmospheric_intrinsic_reflectance.

One more thing, just to show that Rayleigh reflectance in Py6S behaves in the manner that we’d expect from what we know of the physics… We can put together a bit of code that will extract the Rayleigh reflectance at various wavelengths and plot a graph – we’d expect an exponentially-decreasing curve, showing high Rayleigh reflectance at low wavelengths, and vice versa.

The code below will do this:

from Py6S import *
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline

s = SixS()

s.altitudes.set_sensor_satellite_level()
s.altitudes.set_target_sea_level()
s.aero_profile = AeroProfile.PredefinedType(AeroProfile.NoAerosols)
s.atmos_profile = AtmosProfile.PredefinedType(AtmosProfile.NoGaseousAbsorption)

wavelengths = np.arange(0.3, 1.0, 0.05)
results = []

for wv in wavelengths:
    s.wavelength = Wavelength(wv)
    s.run()

    results.append({'wavelength': wv,
                   'rayleigh_refl': s.outputs.atmospheric_intrinsic_reflectance})

results = pd.DataFrame(results)

results.plot(x='wavelength', y='rayleigh_refl', style='x-', label='Rayleigh Reflectance', grid=True)
plt.xlabel(r'Wavelength ($\mu m$)')
plt.ylabel('Rayleigh Reflectance (no units)')

This produces the following graph, which shows exactly what the physics predicts:

[Plot: Rayleigh reflectance against wavelength, decreasing steeply from 0.3 to 1.0 µm]

There’s nothing particularly revolutionary in that chunk of code – we’ve just combined the code I demonstrated earlier, and then looped through various wavelengths and run the model for each wavelength.

The way that we’re storing the results from the model deserves a brief explanation, as this is a pattern I use a lot. Each time the model is run, a new dict is appended to a list – and this dict has entries for the various parameters we’re interested in (in this case just wavelength) and the various results we’re interested in (in this case just Rayleigh reflectance). After we’ve finished the loop we can simply pass this list of dicts to pd.DataFrame() and get a nice pandas DataFrame back – ready to display, plot or analyse further.

Real Python: Using the Python zip() Function for Parallel Iteration


Python’s zip() function creates an iterator that will aggregate elements from two or more iterables. You can use the resulting iterator to quickly and consistently solve common programming problems, like creating dictionaries. In this tutorial, you’ll discover the logic behind the Python zip() function and how you can use it to solve real-world problems.

By the end of this tutorial, you’ll learn:

  • How zip() works in both Python 3 and Python 2
  • How to use the Python zip() function for parallel iteration
  • How to create dictionaries on the fly using zip()

Free Bonus: 5 Thoughts On Python Mastery, a free course for Python developers that shows you the roadmap and the mindset you'll need to take your Python skills to the next level.

Understanding the Python zip() Function

zip() is available in the built-in namespace. If you use dir() to inspect __builtins__, then you’ll see zip() at the end of the list:

>>> dir(__builtins__)
['ArithmeticError', 'AssertionError', 'AttributeError', ..., 'zip']

You can see that 'zip' is the last entry in the list of available objects.

According to the official documentation, Python’s zip() function behaves as follows:

Returns an iterator of tuples, where the i-th tuple contains the i-th element from each of the argument sequences or iterables. The iterator stops when the shortest input iterable is exhausted. With a single iterable argument, it returns an iterator of 1-tuples. With no arguments, it returns an empty iterator. (Source)

You’ll unpack this definition throughout the rest of the tutorial. As you work through the code examples, you’ll see that Python zip operations work just like the physical zipper on a bag or pair of jeans. Interlocking pairs of teeth on both sides of the zipper are pulled together to close an opening. In fact, this visual analogy is perfect for understanding zip(), since the function was named after physical zippers!

Using zip() in Python

Python’s zip() function is defined as zip(*iterables). The function takes in iterables as arguments and returns an iterator. This iterator generates a series of tuples containing elements from each iterable. zip() can accept any type of iterable, such as files, lists, tuples, dictionaries, sets, and so on.

Passing n Arguments

If you use zip() with n arguments, then the function will return an iterator that generates tuples of length n. To see this in action, take a look at the following code block:

>>> numbers = [1, 2, 3]
>>> letters = ['a', 'b', 'c']
>>> zipped = zip(numbers, letters)
>>> zipped  # Holds an iterator object
<zip object at 0x7fa4831153c8>
>>> type(zipped)
<class 'zip'>
>>> list(zipped)
[(1, 'a'), (2, 'b'), (3, 'c')]

Here, you use zip(numbers, letters) to create an iterator that produces tuples of the form (x, y). In this case, the x values are taken from numbers and the y values are taken from letters. Notice how the Python zip() function returns an iterator. To retrieve the final list object, you need to use list() to consume the iterator.

If you’re working with sequences like lists, tuples, or strings, then your iterables are guaranteed to be evaluated from left to right. This means that the resulting list of tuples will take the form [(numbers[0], letters[0]), (numbers[1], letters[1]),..., (numbers[n], letters[n])]. However, for other types of iterables (like sets), you might see some weird results:

>>> s1 = {2, 3, 1}
>>> s2 = {'b', 'a', 'c'}
>>> list(zip(s1, s2))
[(1, 'a'), (2, 'c'), (3, 'b')]

In this example, s1 and s2 are set objects, which don’t keep their elements in any particular order. This means that the tuples returned by zip() will have elements that are paired up randomly. If you’re going to use the Python zip() function with unordered iterables like sets, then this is something to keep in mind.

Passing No Arguments

You can call zip() with no arguments as well. In this case, you’ll simply get an empty iterator:

>>> zipped = zip()
>>> zipped
<zip object at 0x7f196294a488>
>>> list(zipped)
[]

Here, you call zip() with no arguments, so your zipped variable holds an empty iterator. If you consume the iterator with list(), then you’ll see an empty list as well.

You could also try to force the empty iterator to yield an element directly. In this case, you’ll get a StopIteration exception:

>>> zipped = zip()
>>> next(zipped)
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
StopIteration

When you call next() on zipped, Python tries to retrieve the next item. However, since zipped holds an empty iterator, there’s nothing to pull out, so Python raises a StopIteration exception.

Passing One Argument

Python’s zip() function can take just one argument as well. The result will be an iterator that yields a series of 1-item tuples:

>>> a = [1, 2, 3]
>>> zipped = zip(a)
>>> list(zipped)
[(1,), (2,), (3,)]

This may not be that useful, but it still works. Perhaps you can find some use cases for this behavior of zip()!

As you can see, you can call the Python zip() function with as many input iterables as you need. The length of the resulting tuples will always equal the number of iterables you pass as arguments. Here’s an example with three iterables:

>>> integers = [1, 2, 3]
>>> letters = ['a', 'b', 'c']
>>> floats = [4.0, 5.0, 6.0]
>>> zipped = zip(integers, letters, floats)  # Three input iterables
>>> list(zipped)
[(1, 'a', 4.0), (2, 'b', 5.0), (3, 'c', 6.0)]

Here, you call the Python zip() function with three iterables, so the resulting tuples have three elements each.

Passing Arguments of Unequal Length

When you’re working with the Python zip() function, it’s important to pay attention to the length of your iterables. It’s possible that the iterables you pass in as arguments aren’t the same length.

In these cases, the number of elements that zip() puts out will be equal to the length of the shortest iterable. The remaining elements in any longer iterables will be totally ignored by zip(), as you can see here:

>>> list(zip(range(5), range(100)))
[(0, 0), (1, 1), (2, 2), (3, 3), (4, 4)]

Since 5 is the length of the first (and shortest) range() object, zip() outputs a list of five tuples. There are still 95 unmatched elements from the second range() object. These are all ignored by zip() since there are no more elements from the first range() object to complete the pairs.

If trailing or unmatched values are important to you, then you can use itertools.zip_longest() instead of zip(). With this function, the missing values will be replaced with whatever you pass to the fillvalue argument (defaults to None). The iteration will continue until the longest iterable is exhausted:

>>> from itertools import zip_longest
>>> numbers = [1, 2, 3]
>>> letters = ['a', 'b', 'c']
>>> longest = range(5)
>>> zipped = zip_longest(numbers, letters, longest, fillvalue='?')
>>> list(zipped)
[(1, 'a', 0), (2, 'b', 1), (3, 'c', 2), ('?', '?', 3), ('?', '?', 4)]

Here, you use itertools.zip_longest() to yield five tuples with elements from letters, numbers, and longest. The iteration only stops when longest is exhausted. The missing elements from numbers and letters are filled with a question mark ?, which is what you specified with fillvalue.

Comparing zip() in Python 3 and 2

Python’s zip() function works differently in both versions of the language. In Python 2, zip() returns a list of tuples. The resulting list is truncated to the length of the shortest input iterable. If you call zip() with no arguments, then you get an empty list in return:

>>> # Python 2
>>> zipped = zip(range(3), 'ABCD')
>>> zipped  # Holds a list object
[(0, 'A'), (1, 'B'), (2, 'C')]
>>> type(zipped)
<type 'list'>
>>> zipped = zip()  # Create an empty list
>>> zipped
[]

In this case, your call to the Python zip() function returns a list of tuples truncated at the value C. When you call zip() with no arguments, you get an empty list.

In Python 3, however, zip() returns an iterator. This object yields tuples on demand and can be traversed only once. The iteration ends with a StopIteration exception once the shortest input iterable is exhausted. If you supply no arguments to zip(), then the function returns an empty iterator:

>>> # Python 3
>>> zipped = zip(range(3), 'ABCD')
>>> zipped  # Holds an iterator
<zip object at 0x7f456ccacbc8>
>>> type(zipped)
<class 'zip'>
>>> list(zipped)
[(0, 'A'), (1, 'B'), (2, 'C')]
>>> zipped = zip()  # Create an empty iterator
>>> zipped
<zip object at 0x7f456cc93ac8>
>>> next(zipped)
Traceback (most recent call last):
  File "<input>", line 1, in <module>
    next(zipped)
StopIteration

Here, your call to zip() returns an iterator. The first iteration is truncated at C, and the second one results in a StopIteration exception. In Python 3, you can also emulate the Python 2 behavior of zip() by wrapping the returned iterator in a call to list(). This will run through the iterator and return a list of tuples.

If you regularly use Python 2, then note that using zip() with long input iterables can unintentionally consume a lot of memory. In these situations, consider using itertools.izip(*iterables) instead. This function creates an iterator that aggregates elements from each of the iterables. It produces the same effect as zip() in Python 3:

>>> # Python 2
>>> from itertools import izip
>>> zipped = izip(range(3), 'ABCD')
>>> zipped
<itertools.izip object at 0x7f3614b3fdd0>
>>> list(zipped)
[(0, 'A'), (1, 'B'), (2, 'C')]

In this example, you call itertools.izip() to create an iterator. When you consume the returned iterator with list(), you get a list of tuples, just as if you were using zip() in Python 3. The iteration stops when the shortest input iterable is exhausted.

If you really need to write code that behaves the same way in both Python 2 and Python 3, then you can use a trick like the following:

try:
    from itertools import izip as zip
except ImportError:
    pass

Here, if izip() is available in itertools, then you’ll know that you’re in Python 2 and izip() will be imported using the alias zip. Otherwise, your program will raise an ImportError and you’ll know that you’re in Python 3. (The pass statement here is just a placeholder.)

With this trick, you can safely use the Python zip() function throughout your code. When run, your program will automatically select and use the correct version.

So far, you’ve covered how Python’s zip() function works and learned about some of its most important features. Now it’s time to roll up your sleeves and start coding real-world examples!

Looping Over Multiple Iterables

Looping over multiple iterables is one of the most common use cases for Python’s zip() function. If you need to iterate through multiple lists, tuples, or any other sequence, then it’s likely that you’ll fall back on zip(). This section will show you how to use zip() to iterate through multiple iterables at the same time.

Traversing Lists in Parallel

Python’s zip() function allows you to iterate in parallel over two or more iterables. Since zip() generates tuples, you can unpack these in the header of a for loop:

>>> letters = ['a', 'b', 'c']
>>> numbers = [0, 1, 2]
>>> for l, n in zip(letters, numbers):
...     print(f'Letter: {l}')
...     print(f'Number: {n}')
...
Letter: a
Number: 0
Letter: b
Number: 1
Letter: c
Number: 2

Here, you iterate through the series of tuples returned by zip() and unpack the elements into l and n. When you combine zip(), for loops, and tuple unpacking, you can get a useful and Pythonic idiom for traversing two or more iterables at once.

You can also iterate through more than two iterables in a single for loop. Consider the following example, which has three input iterables:

>>> letters = ['a', 'b', 'c']
>>> numbers = [0, 1, 2]
>>> operators = ['*', '/', '+']
>>> for l, n, o in zip(letters, numbers, operators):
...     print(f'Letter: {l}')
...     print(f'Number: {n}')
...     print(f'Operator: {o}')
...
Letter: a
Number: 0
Operator: *
Letter: b
Number: 1
Operator: /
Letter: c
Number: 2
Operator: +

In this example, you use zip() with three iterables to create and return an iterator that generates 3-item tuples. This lets you iterate through all three iterables in one go. There’s no restriction on the number of iterables you can use with Python’s zip() function.

Note: If you want to dive deeper into Python for loops, check out Python “for” Loops (Definite Iteration).

Traversing Dictionaries in Parallel

In Python 3.6 and beyond, dictionaries are ordered collections, meaning they keep their elements in the same order in which they were introduced. If you take advantage of this feature, then you can use the Python zip() function to iterate through multiple dictionaries in a safe and coherent way:

>>> dict_one = {'name': 'John', 'last_name': 'Doe', 'job': 'Python Consultant'}
>>> dict_two = {'name': 'Jane', 'last_name': 'Doe', 'job': 'Community Manager'}
>>> for (k1, v1), (k2, v2) in zip(dict_one.items(), dict_two.items()):
...     print(k1, '->', v1)
...     print(k2, '->', v2)
...
name -> John
name -> Jane
last_name -> Doe
last_name -> Doe
job -> Python Consultant
job -> Community Manager

Here, you iterate through dict_one and dict_two in parallel. In this case, zip() generates tuples with the items from both dictionaries. Then, you can unpack each tuple and gain access to the items of both dictionaries at the same time.

Note: If you want to dive deeper into dictionary iteration, check out How to Iterate Through a Dictionary in Python.

Notice that, in the above example, the left-to-right evaluation order is guaranteed. You can also use Python’s zip() function to iterate through sets in parallel. However, you’ll need to consider that, unlike dictionaries in Python 3.6, sets don’t keep their elements in order. If you forget this detail, the final result of your program may not be quite what you want or expect.

Unzipping a Sequence

There’s a question that comes up frequently in forums for new Pythonistas: “If there’s a zip() function, then why is there no unzip() function that does the opposite?”

The reason why there’s no unzip() function in Python is because the opposite of zip() is… well, zip(). Do you recall that the Python zip() function works just like a real zipper? The examples so far have shown you how Python zips things closed. So, how do you unzip Python objects?

Say you have a list of tuples and want to separate the elements of each tuple into independent sequences. To do this, you can use zip() along with the unpacking operator *, like so:

>>> pairs = [(1, 'a'), (2, 'b'), (3, 'c'), (4, 'd')]
>>> numbers, letters = zip(*pairs)
>>> numbers
(1, 2, 3, 4)
>>> letters
('a', 'b', 'c', 'd')

Here, you have a list of tuples containing some kind of mixed data. Then, you use the unpacking operator * to unzip the data, creating two different tuples (numbers and letters).

Sorting in Parallel

Sorting is a common operation in programming. Suppose you want to combine two lists and sort them at the same time. To do this, you can use zip() along with .sort() as follows:

>>> letters = ['b', 'a', 'd', 'c']
>>> numbers = [2, 4, 3, 1]
>>> data1 = list(zip(letters, numbers))
>>> data1
[('b', 2), ('a', 4), ('d', 3), ('c', 1)]
>>> data1.sort()  # Sort by letters
>>> data1
[('a', 4), ('b', 2), ('c', 1), ('d', 3)]
>>> data2 = list(zip(numbers, letters))
>>> data2
[(2, 'b'), (4, 'a'), (3, 'd'), (1, 'c')]
>>> data2.sort()  # Sort by numbers
>>> data2
[(1, 'c'), (2, 'b'), (3, 'd'), (4, 'a')]

In this example, you first combine two lists with zip() and sort them. Notice how data1 is sorted by letters and data2 is sorted by numbers.

You can also use sorted() and zip() together to achieve a similar result:

>>> letters = ['b', 'a', 'd', 'c']
>>> numbers = [2, 4, 3, 1]
>>> data = sorted(zip(letters, numbers))  # Sort by letters
>>> data
[('a', 4), ('b', 2), ('c', 1), ('d', 3)]

In this case, sorted() runs through the iterator generated by zip() and sorts the items by letters, all in one go. This approach can be a little bit faster since you’ll need only two function calls: zip() and sorted().

With sorted(), you’re also writing a more general piece of code. This will allow you to sort any kind of sequence, not just lists.
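As a small extension of the same example (not from the original tutorial), you can also pass a key function to sorted() to order the zipped pairs by their second element instead of the first:

>>> letters = ['b', 'a', 'd', 'c']
>>> numbers = [2, 4, 3, 1]
>>> sorted(zip(letters, numbers), key=lambda pair: pair[1])  # Sort by numbers
[('c', 1), ('b', 2), ('d', 3), ('a', 4)]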

Calculating in Pairs

You can use the Python zip() function to make some quick calculations. Suppose you have the following data in a spreadsheet:

Element/Month      January      February     March
Total Sales        52,000.00    51,000.00    48,000.00
Production Cost    46,800.00    45,900.00    43,200.00

You’re going to use this data to calculate your monthly profit. zip() can provide you with a fast way to make the calculations:

>>> total_sales = [52000.00, 51000.00, 48000.00]
>>> prod_cost = [46800.00, 45900.00, 43200.00]
>>> for sales, costs in zip(total_sales, prod_cost):
...     profit = sales - costs
...     print(f'Total profit: {profit}')
...
Total profit: 5200.0
Total profit: 5100.0
Total profit: 4800.0

Here, you calculate the profit for each month by subtracting costs from sales. Python’s zip() function combines the right pairs of data to make the calculations. You can generalize this logic to make any kind of complex calculation with the pairs returned by zip().
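For example, building on the same data, you could compute each month's profit margin as a percentage in the same loop (this small extension is not part of the original tutorial):

>>> for sales, costs in zip(total_sales, prod_cost):
...     margin = (sales - costs) / sales * 100
...     print(f'Profit margin: {margin:.1f}%')
...
Profit margin: 10.0%
Profit margin: 10.0%
Profit margin: 10.0%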

Building Dictionaries

Python’s dictionaries are a very useful data structure. Sometimes, you might need to build a dictionary from two different but closely related sequences. A convenient way to achieve this is to use dict() and zip() together. For example, suppose you retrieved a person’s data from a form or a database. Now you have the following lists of data:

>>> fields = ['name', 'last_name', 'age', 'job']
>>> values = ['John', 'Doe', '45', 'Python Developer']

With this data, you need to create a dictionary for further processing. In this case, you can use dict() along with zip() as follows:

>>> a_dict = dict(zip(fields, values))
>>> a_dict
{'name': 'John', 'last_name': 'Doe', 'age': '45', 'job': 'Python Developer'}

Here, you create a dictionary that combines the two lists. zip(fields, values) returns an iterator that generates 2-item tuples. If you call dict() on that iterator, then you’ll be building the dictionary you need. The elements of fields become the dictionary’s keys, and the elements of values represent the values in the dictionary.

You can also update an existing dictionary by combining zip() with dict.update(). Suppose that John changes his job and you need to update the dictionary. You can do something like the following:

>>> new_job = ['Python Consultant']
>>> field = ['job']
>>> a_dict.update(zip(field, new_job))
>>> a_dict
{'name': 'John', 'last_name': 'Doe', 'age': '45', 'job': 'Python Consultant'}

Here, dict.update() updates the dictionary with the key-value tuple you created using Python’s zip() function. With this technique, you can easily overwrite the value of job.

Conclusion

In this tutorial, you’ve learned how to use Python’s zip() function. zip() can receive multiple iterables as input. It returns an iterator that can generate tuples with paired elements from each argument. The resulting iterator can be quite useful when you need to process multiple iterables in a single loop and perform some actions on their items at the same time.

Now you can:

  • Use the zip() function in both Python 3 and Python 2
  • Loop over multiple iterables and perform different actions on their items in parallel
  • Create and update dictionaries on the fly by zipping two input iterables together

You’ve also coded a few examples that you can use as a starting point for implementing your own solutions using Python’s zip() function. Feel free to modify these examples as you explore zip() in depth!


[ Improve Your Python With 🐍 Python Tricks 💌 – Get a short & sweet Python Trick delivered to your inbox every couple of days. >> Click here to learn more and see examples ]

Catalin George Festila: Python 3.7.4 : Using the paramiko package.

Today I tested the paramiko package. First, I installed and checked the version of this package.

[mythcat@desk my_network_tools]$ pip3 install paramiko --user
Collecting paramiko
...
  Running setup.py install for pycparser ... done
Successfully installed asn1crypto-0.24.0 bcrypt-3.1.7 cffi-1.12.3 cryptography-2.7 paramiko-2.6.0 pycparser-2.19 pynacl-1.3.0
[mythcat@desk my_network_tools]$ python3
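The feed excerpt cuts off here, so as a rough idea of where this is heading, below is a minimal sketch of typical paramiko usage: opening an SSH connection and running a command. The host name and credentials are placeholders, not values from the original post.

import paramiko

client = paramiko.SSHClient()
# Accept unknown host keys automatically (fine for a quick test, not for production)
client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
client.connect('example.com', username='mythcat', password='secret')  # placeholder host/credentials

stdin, stdout, stderr = client.exec_command('uname -a')
print(stdout.read().decode())
client.close()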

Rene Dudfield: Using PostgreSQL as a cache?

In the article on his blog, Peter asks "How much faster is Redis at storing a blob of JSON compared to PostgreSQL?". Answer: PostgreSQL was 14x slower.

Seems about right. Usually Redis is about 4x faster for a simple query like that compared to using PostgreSQL as a cache in my experience. It's why so many people use Redis as a cache. But I'd suggest PostgreSQL is good enough to act as a cache for many people.

Django is pretty slow at fetching from PostgreSQL compared to other python options, so this could explain part of the 14x VS 4x difference.

Note that Django should be adding an index because of the ForeignKey. However, it's possible it isn't being used, or the table may need its statistics analyzed again. Also note that Django does not have built-in support for prepared statements, and without prepared statements you don't benefit from PostgreSQL's cached query plans. Prepared statements can often give a 50% speedup, because the query doesn't have to be parsed and planned from scratch each time (and the cached plan can actually be reused).

PG read optimization tips for readers who may be interested.
  1. Run VACUUM ANALYZE, or have auto vacuuming on.
  2. Check the query plan with EXPLAIN to confirm it is using an index (.explain(verbose=True) on the query in Django). Paste your explain result into https://explain.depesz.com/ if you can not understand it.
  3. Tune the PostgreSQL config. The default Homebrew PostgreSQL on Mac isn't really configured for speed, for example. The easiest approach is to use a tool like pgtune (https://pgtune.leopard.in.ua/) and plug in your workload numbers. Make your changes and try the EXPLAIN again (or run your benchmarks). Make sure to VACUUM ANALYZE between changes for best effect. This is a very application- and OS-dependent task.
  4. Consider table layout, column statistics optimizations, and rewriting the table in the order of the index with CLUSTER. REINDEX may also be needed for tables used as caches where they are changed very often.
  5. 'Index only scan' can be used where all the data is in the index, so the row doesn't need to be looked at. Not applicable in this case probably, since indexing on the JSON data probably wouldn't be a good idea. If your cache hit ratio is very low (mostly you are doing network lookups) then it could still be a performance increase.
  6. PREPARE statements. In Django there are external packages available for this (see the sketch after this list).
  7. If you are already using a postgresql connection pool like Pgpool-II, then you can use a query cache there pretty easily. See "in memory query cache".
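As a rough illustration of point 6, you can also issue PREPARE/EXECUTE yourself through Django's raw database cursor. This is only a sketch: the table name and query are made up, and prepared statements live per connection, so they pair best with persistent connections or a pooler.

from django.db import connection

with connection.cursor() as cursor:
    # Parse and plan the query once per connection...
    cursor.execute("PREPARE get_blob (int) AS SELECT data FROM cache_table WHERE id = $1")
    # ...then reuse the prepared plan for every lookup.
    cursor.execute("EXECUTE get_blob(%s)", [42])
    row = cursor.fetchone()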

I wouldn't be surprised if these could take it down to 4x-10x with the Django ORM compared to Redis.

Of course you should probably just cache the views at the CDN/web proxy level, or even at the Django view or template level. So you probably won't even hit the Django app most times.

Matt Layman: Bring in the WhiteNoise, Bring in Da Funk - Building SaaS #34

In this episode, we added WhiteNoise to the app as a tool for handling static assets. This lets us move away from depending on Nginx for the task and gives us shiny new features like Brotli support. We installed WhiteNoise into the requirements.in file and used pip-tools to generate a new requirements.txt (whitenoise[brotli]==4.1.4). Once WhiteNoise was installed, it needed two primary settings changes: add a new middleware, and change the STATICFILES_STORAGE setting.
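The stream notes above are truncated by the feed. As a sketch (based on the WhiteNoise 4.x documentation rather than the episode itself), the two settings changes in a Django settings.py usually look like this:

# settings.py -- only the WhiteNoise-related parts
MIDDLEWARE = [
    'django.middleware.security.SecurityMiddleware',
    'whitenoise.middleware.WhiteNoiseMiddleware',  # placed right after SecurityMiddleware
    # ... the rest of the middleware stack ...
]

# Serve compressed (Brotli/gzip) and hash-named static files
STATICFILES_STORAGE = 'whitenoise.storage.CompressedManifestStaticFilesStorage'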

Codementor: Node.js VS Python: Which is Better?

Node.js and Python are two of the most widely used programming tools today. It is tough to choose which of them is better. Let's try.

Stack Abuse: File Management with AWS S3, Python, and Flask


Introduction

Data is one of the key driving factors of technology growth. As technology advances, data becomes ever more central to the tools we build, and so do the questions of how to collect, store, secure, and distribute it.

This growth in data has led to increased use of cloud architecture to store and manage data while minimizing the hassle of maintaining consistency and accuracy. As consumers of technology, we generate and consume data constantly, and this has created the need for elaborate systems to help us manage it.

The cloud architecture gives us the ability to upload and download files from multiple devices as long as we are connected to the internet. And that is part of what AWS helps us achieve through S3 buckets.

What is S3?

Amazon Simple Storage Service (S3) is an offering by Amazon Web Services (AWS) that allows users to store data in the form of objects. It is designed to cater to all kinds of users, from enterprises to small organizations or personal projects.

S3 can be used to store data ranging from images, video, and audio all the way up to backups, or website static data, among others.

An S3 bucket is a named storage resource used to store data on AWS. It is akin to a folder that is used to store data on AWS. Buckets have unique names and based on the tier and pricing, users receive different levels of redundancy and accessibility at different prices.

Access privileges to S3 Buckets can also be specified through the AWS Console, the AWS CLI tool, or through provided APIs and libraries.

What is Boto3?

Boto3 is a software development kit (SDK) provided by AWS to facilitate interaction with the S3 APIs and other services such as Elastic Compute Cloud (EC2). Using Boto3, we can list all of our S3 buckets, create EC2 instances, or control any number of AWS resources.
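For instance, once credentials are configured, listing every bucket in the account with Boto3 takes only a few lines (a small illustrative snippet, not from the original article):

import boto3

s3 = boto3.client('s3')
# Print the name of every S3 bucket in the account
for bucket in s3.list_buckets()['Buckets']:
    print(bucket['Name'])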

Why use S3?

We can always provision our own servers to store our data and make it accessible from a range of devices over the internet, so why should we use AWS's S3? There are several scenarios where it comes in handy.

First, AWS S3 eliminates all the work and costs involved in building and maintaining servers that store our data. We do not have to worry about acquiring the hardware to host our data or the personnel required to maintain the infrastructure. Instead, we can focus solely on our code and ensuring our services are in the best condition.

By using S3, we get to tap into the impressive performance, availability, and scalability capabilities of AWS. Our code will be able to scale effectively and perform under heavy loads and be highly available to our end users. We get to achieve this without having to build or manage the infrastructure behind it.

AWS offers tools to help us with analytics and audit, as well as management and reports on our data. We can view and analyze how the data in our buckets is accessed or even replicate the data into other regions to enhance the access of the data by the end-users. Our data is also encrypted and securely stored so that it is secure at all times.

Through AWS Lambda we can also react to data being uploaded to or downloaded from our S3 buckets and respond to users with configured alerts or reports for a more personalized and instant experience.

Setting Up AWS

To get started with S3, we need to set up an account on AWS or log in to an existing one.

We will also need to set up the AWS CLI tool to be able to interact with our resources from the command line, which is available for Mac, Linux, and Windows.

We can install it by running:

$ pip install awscli

Once the CLI tool is set up, we can generate our credentials under our profile dropdown and use them to configure our CLI tool as follows:

$ aws configure

This command will give us prompts to provide our Access Key ID, Secret Access Key, default regions, and output formats. More details about configuring the AWS CLI tool can be found here.

Our Application - FlaskDrive

Setup

Let's build a Flask application that allows users to upload and download files to and from our S3 buckets, as hosted on AWS.

We will use the Boto3 SDK to facilitate these operations and build out a simple front-end to allow users to upload and view the files as hosted online.

It is advisable to use a virtual environment when working on Python projects, and for this one we will use the Pipenv tool to create and manage our environment. Once set up, we create and activate our environment with Python3 as follows:

$ pipenv install --three
$ pipenv shell

We now need to install Boto3 and Flask, which are required to build our FlaskDrive application, as follows:

$ pipenv install flask
$ pipenv install boto3

Implementation

After setting up, we need to create the buckets to store our data and we can achieve that by heading over to the AWS console and choosing S3 in the Services menu.

After creating a bucket, we can use the CLI tool to view the buckets we have available:

$ aws s3api list-buckets
{
    "Owner": {
        "DisplayName": "robley",
        "ID": "##########################################"
    },
    "Buckets": [
        {
            "CreationDate": "2019-09-25T10:33:40.000Z",
            "Name": "flaskdrive"
        }
    ]
}

We will now create the functions to upload, download, and list files on our S3 buckets using the Boto3 SDK, starting off with the upload_file function:

def upload_file(file_name, bucket):
    """
    Function to upload a file to an S3 bucket
    """
    object_name = file_name
    s3_client = boto3.client('s3')
    response = s3_client.upload_file(file_name, bucket, object_name)

    return response

The upload_file function takes in a file and the bucket name and uploads the given file to our S3 bucket on AWS.

def download_file(file_name, bucket):
    """
    Function to download a given file from an S3 bucket
    """
    s3 = boto3.resource('s3')
    output = f"downloads/{file_name}"
    s3.Bucket(bucket).download_file(file_name, output)

    return output

The download_file function takes in a file name and a bucket and downloads it to a folder that we specify.

def list_files(bucket):
    """
    Function to list files in a given S3 bucket
    """
    s3 = boto3.client('s3')
    contents = []
    for item in s3.list_objects(Bucket=bucket)['Contents']:
        contents.append(item)

    return contents

The function list_files is used to retrieve the files in our S3 bucket and list their names. We will use these names to download the files from our S3 buckets.

With our S3 interaction file in place, we can build our Flask application to provide the web-based interface for interaction. The application will be a simple single-file Flask application for demonstration purposes with the following structure:

.
├── Pipfile       # stores our application requirements
├── __init__.py
├── app.py        # our main Flask application
├── downloads     # folder to store our downloaded files
├── s3_demo.py    # S3 interaction code
├── templates
│   └── storage.html
└── uploads       # folder to store the uploaded files

The core functionality of our Flask application will reside in the app.py file:

import os
from flask import Flask, render_template, request, redirect, send_file
from s3_demo import list_files, download_file, upload_file

app = Flask(__name__)
UPLOAD_FOLDER = "uploads"
BUCKET = "flaskdrive"

@app.route('/')
def entry_point():
    return 'Hello World!'

@app.route("/storage")
def storage():
    contents = list_files("flaskdrive")
    return render_template('storage.html', contents=contents)

@app.route("/upload", methods=['POST'])
def upload():
    if request.method == "POST":
        f = request.files['file']
        f.save(os.path.join(UPLOAD_FOLDER, f.filename))
        upload_file(f"uploads/{f.filename}", BUCKET)

        return redirect("/storage")

@app.route("/download/<filename>", methods=['GET'])
def download(filename):
    if request.method == 'GET':
        output = download_file(filename, BUCKET)

        return send_file(output, as_attachment=True)

if __name__ == '__main__':
    app.run(debug=True)

This is a simple Flask application with 4 endpoints:

  • The /storage endpoint will be the landing page where we will display the current files in our S3 bucket for download, and also an input for users to upload a file to our S3 bucket,
  • The /upload endpoint will be used to receive a file and then call the upload_file() method that uploads a file to an S3 bucket
  • The /download endpoint will receive a file name and use the download_file() method to download the file to the user's device

And finally, our HTML template will be as simple as:

<!DOCTYPE html>
<html>
  <head>
    <title>FlaskDrive</title>
  </head>
  <body>
    <div class="content">
        <h3>Flask Drive: S3 Flask Demo</h3>
        <p>Welcome to this AWS S3 Demo</p>
        <div>
          <h3>Upload your file here:</h3>
          <form method="POST" action="/upload" enctype=multipart/form-data>
            <input type=file name=file>
            <input type=submit value=Upload>
          </form>
        </div>
        <div>
          <h3>These are your uploaded files:</h3>
          <p>Click on the filename to download it.</p>
          <ul>
            {% for item in contents %}
              <li>
                <a href="/download/{{ item.Key }}"> {{ item.Key }} </a>
              </li>
            {% endfor %}
          </ul>
        </div>
    </div>
  </body>
</html>

With our code and folders set up, we start our application with:

$ python app.py

When we navigate to http://localhost:5000/storage we are welcomed by the following landing page:

[Screenshot: FlaskDrive landing page]

Let us now upload a file using the input field and this is the output:

[Screenshot: FlaskDrive landing page after uploading a file]

We can confirm the upload by checking our S3 dashboard, and we can find our image there:

[Screenshot: the uploaded file shown in the S3 dashboard]

Our file has been successfully uploaded from our machine to AWS's S3 Storage.

On our FlaskDrive landing page, we can download the file by simply clicking on the file name, after which we get a prompt to save the file on our machine.

Conclusion

In this post, we have created a Flask application that stores files on AWS's S3 and allows us to download the same files from our application. We used the Boto3 library alongside the AWS CLI tool to handle the interaction between our application and AWS.

We have eliminated the need to run our own servers to store our files and have tapped into Amazon's infrastructure to handle it for us through the AWS Simple Storage Service. It took us only a short time to develop, deploy, and make our application available to end users, and we can now enhance it to add permissions, among other features.

The source code for this project is available here on Github.

Andrew Dalke: mmpdb crowdfunding consortium


How can we raise money to fund open source software development in cheminformatics? It's a hard question. Asking for donations doesn't work– companies might not even have a mechanism to make donations. Consultant-based funding doesn't work that well either, because the cost of developing a general-purpose tool is several times more expensive than developing a tool which only meets the specialized needs of one client, and few clients are willing to subsidize the rest of the field. Proprietary software development solves the problem by getting many people to pay for the same product. Can we learn from the success of proprietary software to get the funds which would certainly be useful in improving open source software?

I have started the mmpdb crowdfunding consortium to see if crowdfunding can be used to fund further development of the matched molecular pair program mmpdb. The deadline to join is 1 February 2020 – join now!

Background

mmpdb is an open source success story. It started as the mmpa program developed by Jameed Hussain and Ceara Rea. Their employer, GSK, contributed it to the RDKit project. There was no more GSK funding, but others could study and improve the code.

Roche then funded me, Christian Kramer, and Jérôme Hert to add several improvements:

  • better support for symmetry, which results in fully canonical pair descriptions
  • support for chirality, including matching chiral with prochiral structures
  • can include the chemical environment when finding pairs
  • generate property change statistics for each pair, environment, and property type
  • parallelized fragmentation
  • fragmentation can re-use fragmentations from a previous run
  • performance speedups during indexing
  • pair, environment, and property statistics are stored in a SQLite database
  • analysis tools to propose possible transforms to an input structure, or to predict property shifts between two structures
The final code was also contributed to the RDKit project.

Now what?

Mmpdb is popular. Several people at the 2019 RDKit User Group meeting in Hamburg presented work which used it or at least referenced it.

But who supports it? Who adds features? There is no more funding from GSK or Roche, so all we have is precious and scarce volunteer time. Others might fund their own developers to improve mmpdb, but the code is pretty complicated and it will take a while for new developers to get up to speed.

Sustainability

There is a long and ongoing discussion about how to fund open source projects. I won't even attempt to summarize them here, though I will point to Roads and Bridges: The Unseen Labor Behind Our Digital Infrastructure as one starting point.

My question is, are mmpdb users willing to fund its further development? If not, the project is not sustainable. I believe they are willing; the problem is that it's hard to justify paying money for software anyone can download for free.

Crowdfunding consortium

I previously tried to develop chemfp as a purely open source commercial product. When customers bought the product, they got the software under the MIT license. This ended up being difficult, for reasons I'll likely blog about later. I now also offer chemfp with proprietary licensing, at a cheaper price.

With mmpdb, I am trying crowdfunding, along the lines of Kickstarter. The basic goals are:

  • Postgres support
  • new command-line option ("proprulecat") to export property tables as CSV
Everyone who joins will get these two features, under the existing 3-clause BSD license.

Beyond that are stretch goals. The one many people want is to store the chemical environment in the database as a fragment SMILES, rather than a hex-encoded SHA256 hash of the rooted Morgan fingerprints.

As more people sign up, I'll develop mmpdb further. Many of the stretch goals are related to documentation and testing. Mmpdb was developed as a research project, and needs those sorts of infrastructure improvements to allow future growth.

If enough people join, there will definitely be future crowdfunding efforts, perhaps a web interface, or support for categorical statistics, or other features people have asked me about.

I don't think people will pay for features that are available for free, so these changes will not be made available to the public until specific funding goals are reached.

How do you explain crowdfunding to accounting?

Don't. (Unless you really want to.) Tell them you are going to purchase a new version of mmpdb with Postgres and "proprulecat" support. You will receive these within two weeks of sending me – that is, my Sweden-based software company – a purchase order.

In addition, the purchase includes membership in the mmpdb consortium. As more people join and additional funding goals are met, I will continue to improve mmpdb, and you will get those improvements as part of your membership.

Join now!

PyCharm: 2019.3 EAP 4


This week’s Early Access Program (EAP) for PyCharm 2019.3 is available now! Download it from our website.

New for this version

Test templates for pytest support

We added support for creating tests from pytest templates. Now you can create and edit test files using pytest-specific templates.

To create a test using these templates, first set pytest as the default test runner (Settings/Preferences | Tools | Python Integrated Tools, then select pytest under the Default test runner option). Then open the context menu from the method declaration you wish to create a test for, click Go To | Test, and select Create New Test. A dialog will open so you can configure your test file accordingly. Once you click OK in this dialog, PyCharm will generate a file with the appropriate test method.

[Screenshot: creating a test from a pytest template in PyCharm]

Fixed in this version

  • The “Go to Declaration”/“Go to Implementations” behavior was corrected so that these actions properly lead to library implementations and not to other files.
  • We fixed an error that caused imports to be inserted before module-level dunder names. Now, in compliance with PEP 8, imports are placed after dunders.
  • We solved an issue that prevented the quick fix for installing missing packages from being used when the interpreter is switched through the Status Bar.
  • We also solved some issues that prevented an interpreter from being removed through the project settings or changed using the interpreter widget.
  • For more details on what’s new in this version, see the release notes.

Interested?

Download this EAP from our website. Alternatively, you can use the JetBrains Toolbox App to stay up to date throughout the entire EAP.

If you’re on Ubuntu 16.04 or later, you can use snap to get PyCharm EAP, and stay up to date. You can find the installation instructions on our website.

Vinta Software: PyGotham 2019: Talking Python in NY!

We are arriving in New York! Part of our team is on their way to PyGotham 2019, the biggest event of the Python community in New York. The experience last year was amazing, so we decided to come back. We are also sponsoring it this year, so if you are going to the event make sure to stop by our booth; we are bringing lots of cool swag and some br

Codementor: Django vs Ruby on Rails: Web Frameworks Comparison

There are more than 90 web development frameworks out there. No wonder it’s hard to choose the one that’ll suit your project best. Still, there are at least two major frameworks that are widely...

Stack Abuse: Solving Systems of Linear Equations with Python's Numpy


The Numpy library can be used to perform a variety of mathematical/scientific operations such as matrix cross and dot products, finding sine and cosine values, Fourier transform and shape manipulation, etc. The word Numpy is short-hand notation for "Numerical Python".

In this article, you will see how to solve a system of linear equations using Python's Numpy library.

What is a System of Linear Equations?

Wikipedia defines a system of linear equations as:

In mathematics, a system of linear equations (or linear system) is a collection of two or more linear equations involving the same set of variables.

The ultimate goal of solving a system of linear equations is to find the values of the unknown variables. Here is an example of a system of linear equations with two unknown variables, x and y:

Equation 1:

4x  + 3y = 20
-5x + 9y = 26

To solve the above system of linear equations, we need to find the values of the x and y variables. There are multiple ways to solve such a system, such as Elimination of Variables, Cramer's Rule, Row Reduction Technique, and the Matrix Solution. In this article we will cover the matrix solution.

In the matrix solution, the system of linear equations to be solved is represented in the form of matrix AX = B. For instance, we can represent Equation 1 in the form of a matrix as follows:

A = [[ 4   3]
     [-5   9]]

X = [[x]
     [y]]

B = [[20]
     [26]]

To find the value of x and y variables in Equation 1, we need to find the values in the matrix X. To do so, we can take the dot product of the inverse of matrix A, and the matrix B as shown below:

X = inverse(A).B
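The reasoning is a one-line derivation: multiply both sides of the matrix equation by the inverse of A, and use the fact that A^{-1}A is the identity matrix:

$$ AX = B \;\Longrightarrow\; A^{-1}AX = A^{-1}B \;\Longrightarrow\; X = A^{-1}B $$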

If you are not familiar with how to find the inverse of a matrix, take a look at this link to understand how to manually find the inverse of a matrix. To understand the matrix dot product, check out this article.

Solving a System of Linear Equations with Numpy

From the previous section, we know that to solve a system of linear equations, we need to perform two operations: matrix inversion and a matrix dot product. The Numpy library from Python supports both the operations. If you have not already installed the Numpy library, you can do with the following pip command:

$ pip install numpy

Let's now see how to solve a system of linear equations with the Numpy library.

Using the inv() and dot() Methods

First, we will find the inverse of the matrix A that we defined in the previous section.

Let's first create the matrix A in Python. To create a matrix, the array method of the Numpy module can be used. A matrix can be considered as a list of lists where each list represents a row.

In the following script we create a list named m_list, which further contains two lists: [4,3] and [-5,9]. These lists are the two rows in the matrix A. To create the matrix A with Numpy, the m_list is passed to the array method as shown below:

import numpy as np

m_list = [[4, 3], [-5, 9]]
A = np.array(m_list)

To find the inverse of a matrix, the matrix is passed to the linalg.inv() method of the Numpy module:

inv_A = np.linalg.inv(A)

print(inv_A)

The next step is to find the dot product between the inverse of matrix A, and the matrix B. It is important to mention that matrix dot product is only possible between the matrices if the inner dimensions of the matrices are equal i.e. the number of columns of the left matrix must match the number of rows in the right matrix.

To find the dot product with the Numpy library, the dot() method of the array (or the np.dot() function) is used. The following script finds the dot product between the inverse of matrix A and the matrix B, which is the solution of Equation 1.

B = np.array([20, 26])
X = np.linalg.inv(A).dot(B)

print(X)

Output:

[2. 4.]

Here, 2 and 4 are the respective values for the unknowns x and y in Equation 1. To verify, if you plug 2 in place of the unknown x and 4 in the place of the unknown y in equation 4x + 3y, you will see that the result will be 20.

Let's now solve a system of three linear equations, as shown below:

4x + 3y + 2z = 25
-2x + 2y + 3z = -10
3x - 5y + 2z = -4

The above system, which we will call Equation 2, can be solved using the Numpy library as follows:

A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]])
B = np.array([25, -10, -4])
X = np.linalg.inv(A).dot(B)

print(X)

In the script above, the linalg.inv() and dot() methods are chained together. The variable X contains the solution for Equation 2 and is printed as follows:

[ 5.  3. -2.]

The values for the unknowns x, y, and z are 5, 3, and -2, respectively. You can plug these values into Equation 2 and verify their correctness.
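As a quick programmatic check (a small addition, not in the original article), you can multiply A by the computed solution and confirm that it reproduces B:

# A, B, and X are the arrays defined in the script above
print(np.allclose(A.dot(X), B))  # prints True if the solution satisfies Equation 2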

Using the solve() Method

In the previous two examples, we used the linalg.inv() and dot() methods to find the solution of the system of equations. However, the Numpy library also contains the linalg.solve() method, which can be used to directly find the solution of a system of linear equations:

A = np.array([[4, 3, 2], [-2, 2, 3], [3, -5, 2]])
B = np.array([25, -10, -4])
X2 = np.linalg.solve(A,B)

print(X2)

Output:

[ 5.  3. -2.]

You can see that the output is the same as before.

A Real-World Example

Let's see how a system of linear equation can be used to solve real-world problems.

Suppose a fruit seller sold 20 mangoes and 10 oranges in one day for a total of $350. The next day he sold 17 mangoes and 22 oranges for $500. If the prices of the fruits remained unchanged on both days, what was the price of one mango and one orange?

This problem can be easily solved with a system of two linear equations.

Let's say the price of one mango is x and the price of one orange is y. The above problem can be converted like this:

20x + 10y = 350
17x + 22y = 500

The solution for the above system of equations is shown here:

A = np.array([[20, 10], [17, 22]])
B = np.array([350, 500])
X = np.linalg.solve(A,B)

print(X)

And here is the output:

[10. 15.]

The output shows that the price of one mango is $10 and the price of one orange is $15.

Conclusion

This article explained how to solve a system of linear equations using Python's Numpy library. You can either chain the linalg.inv() and dot() methods to solve a system of linear equations, or you can simply use the solve() method. The solve() method is the preferred way.

Codementor: Getting to Know Go, Python, and Benchmarks

This article was written by Vadym Zakovinko (Solution Architect) for Django Stars (https://djangostars.com). Hello, my name is Vadym, and this is my story about how I started learning Go, what it...

Codementor: A concise resource repository for machine learning!

A concise repository of machine learning bookmarks.

PyBites: Code Challenge 64 - PyCon ES 2019 Marvel Challenge


There is an immense amount to be learned simply by tinkering with things. - Henry Ford

Hey Pythonistas,

This weekend is PyCon ES, and in the unlikely event you get bored, you can always do some coding with PyBites. Two more good reasons to do so:

  1. there are prizes / give aways,
  2. your PRs count towards Hacktoberfest (t-shirt). Fire up your editors and let's get coding!

The Challenge

Most of this challenge is open-ended. We really want to give you creative powers. Here is what we are going to do:

  1. Create an account on https://developer.marvel.com. Upon confirming your email you should get an API key.

  2. Write code to successfully make requests to the API, check out the docs and the boilerplate code provided in the challenge directory (as usual make your virtual env and install requirements / requests for starters!)

  3. To be good citizens, make a function to download the main 6 endpoints: characters, comics, creators, events, series, and stories. Save the JSON outputs in a data folder (a rough sketch is shown after this list).

  4. Now the fun part, where we set you totally free: look through the data and tell us / our community a story. Make stunning data visualizations and share them on our Slack, in our new #marvel channel.

  5. PR your work on our platform before Friday 11th of Oct. 2019, 23.59 AoE (remember, this also counts towards that Hacktoberfest t-shirt!). The 3 best submissions win one of our prizes:
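For step 3, a rough starting point could look like the sketch below. The ts/apikey/hash authentication scheme follows the Marvel API docs; the environment variable names, file layout, and function names are our own assumptions:

import hashlib
import json
import os
import time

import requests

BASE_URL = 'https://gateway.marvel.com/v1/public'
ENDPOINTS = ('characters', 'comics', 'creators', 'events', 'series', 'stories')

def _auth_params():
    """Build the ts/apikey/hash query parameters the Marvel API expects."""
    public_key = os.environ['MARVEL_PUBLIC_KEY']    # assumed environment variable names
    private_key = os.environ['MARVEL_PRIVATE_KEY']
    ts = str(int(time.time()))
    digest = hashlib.md5((ts + private_key + public_key).encode()).hexdigest()
    return {'ts': ts, 'apikey': public_key, 'hash': digest}

def download_endpoint(endpoint, data_dir='data'):
    """Fetch one endpoint and save the raw JSON as data/<endpoint>.json."""
    resp = requests.get(f'{BASE_URL}/{endpoint}', params=_auth_params())
    resp.raise_for_status()
    os.makedirs(data_dir, exist_ok=True)
    with open(os.path.join(data_dir, f'{endpoint}.json'), 'w') as f:
        json.dump(resp.json(), f, indent=2)

if __name__ == '__main__':
    for endpoint in ENDPOINTS:
        download_endpoint(endpoint)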

Good luck and impress your fellow Pythonistas! Ideas for future challenges? Use GH Issues.


Get serious, take your Python to the next level ...

At PyBites we're all about creating Python ninjas through challenges and real-world exercises. Read more about our story.

We are happy and proud to share that we now hear monthly stories from our users that they're landing new Python jobs. For many this is a dream come true, especially as they're often landing roles with significantly higher salaries!

Our 200 Bites of Py exercises are geared toward instilling the habit of coding frequently, if not daily, which will dramatically improve your Python and problem-solving skills. This is THE number one skillset necessary to become a linchpin in the industry, and it will enable you to crush it wherever code needs to be written.

Take our free trial and let us know on Slack how it helps you improve your Python!


>>> from pybites import Bob, Julian
Keep Calm and Code in Python!

Talk Python to Me: #232 Become a robot developer with Python

When you think about the types of jobs you get as a Python developer, you probably weigh the differences between data science and web development.