
Mike Driscoll: Letting Users Change a wx.ComboBox’s Contents in wxPython


This week I came across someone who was wondering if there was a way to allow the user to edit the contents of a wx.ComboBox. By editing the contents, I mean change the names of the pre-existing choices that the ComboBox contains, not adding new items to the widget.

While editing the contents of the selected item in a ComboBox works out of the box, the widget will not save those edits automatically. So if you edit something and then choose a different option in the ComboBox, the edited item will revert back to whatever it was previously and your changes will be lost.

Let’s find out how you can create a ComboBox that allows this functionality!



Changing a ComboBox


The first step when trying something new out is to write some code. You’ll need to create an instance of wx.ComboBox and pass it a list of choices as well as set the default choice. Of course, you cannot create a single widget in isolation. The widget must be inside of a parent widget. In wxPython, you almost always want the parent to be a wx.Panel that is inside of a wx.Frame.

Let’s write some code and see how this all lays out:

import wx

class MainPanel(wx.Panel):

    def __init__(self, parent):
        super().__init__(parent)

        self.cb_value = 'One'

        self.combo_contents = ['One', 'Two', 'Three']
        self.cb = wx.ComboBox(self, choices=self.combo_contents,
                              value=self.cb_value, size=(100, -1))

        self.cb.Bind(wx.EVT_TEXT, self.on_text_change)
        self.cb.Bind(wx.EVT_COMBOBOX, self.on_selection)

    def on_text_change(self, event):
        current_value = self.cb.GetValue()
        if current_value != self.cb_value and current_value not in self.combo_contents:
            # Value has been edited
            index = self.combo_contents.index(self.cb_value)
            self.combo_contents.pop(index)
            self.combo_contents.insert(index, current_value)
            self.cb.SetItems(self.combo_contents)
            self.cb.SetValue(current_value)
            self.cb_value = current_value
            
    def on_selection(self, event):
        self.cb_value = self.cb.GetValue()

class MainFrame(wx.Frame):

    def __init__(self):
        super().__init__(None, title='ComboBox Changing Demo')
        panel = MainPanel(self)
        self.Show()


if __name__ == "__main__":
    app = wx.App(False)
    frame = MainFrame()
    app.MainLoop()

The main part of the code that you are interested in is inside the MainPanel class. Here you create the widget, set its choices list and a couple of other parameters. Next you will need to bind the ComboBox to two events:

  • wx.EVT_TEXT – fired for text change events
  • wx.EVT_COMBOBOX – fired for item selection change events

The first event, wx.EVT_TEXT, is fired when you change the text in the widget by typing and it also fires when you change the selection. The other event only fires when you change selections. The wx.EVT_TEXT event fires first, so it has precedence over wx.EVT_COMBOBOX.

When you change the text, on_text_change() is called. Here you will check if the current value of the ComboBox matches the value that you expect it to be. You also check to see if the current value matches the choice list that is currently set. This allows you to see if the user has changed the text. If they have, then you want to grab the index of the currently selected item in your choice list.

Then you use the list’s pop() method to remove the old string and the insert() method to add the new string in its place. Now you need to call the widget’s SetItems() method to update its choices list. Then you set its value to the new string and update the cb_value instance variable so you can check if it changes again later.

The on_selection() method is short and sweet. All it does is update cb_value to whatever the current selection is.

Give the code a try and see how it works!


Wrapping Up

Adding the ability to allow the user to update the wx.ComboBox‘s contents isn’t especially hard. You could even subclass wx.ComboBox and create a version where it does that for you all the time. Another enhancement that might be fun to add is to have the widget load its choices from a config file or a JSON file. Then you could update on_text_change() to save your changes to disk and then your application could save the choices and reload them the next time you start your application.
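
As a rough illustration of that subclassing idea, here is an untested sketch that reuses the same event logic as the example above. The class name EditableComboBox is my own invention, not part of wxPython:

import wx

class EditableComboBox(wx.ComboBox):
    """Hypothetical wx.ComboBox subclass that keeps in-place edits."""

    def __init__(self, *args, **kwargs):
        super().__init__(*args, **kwargs)
        self._last_value = self.GetValue()
        self.Bind(wx.EVT_TEXT, self._on_text_change)
        self.Bind(wx.EVT_COMBOBOX, self._on_selection)

    def _on_text_change(self, event):
        current = self.GetValue()
        items = self.GetItems()
        if (current != self._last_value and current not in items
                and self._last_value in items):
            # Replace the edited choice and keep the widget in sync
            items[items.index(self._last_value)] = current
            self.SetItems(items)
            self.SetValue(current)
            self._last_value = current
        event.Skip()

    def _on_selection(self, event):
        self._last_value = self.GetValue()
        event.Skip()

With something like this, any panel could drop in EditableComboBox wherever it currently creates a wx.ComboBox and get edit persistence for free.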

Have fun and happy coding!

The post Letting Users Change a wx.ComboBox’s Contents in wxPython appeared first on The Mouse Vs. The Python.


Fabio Zadrozny: PyDev 7.5.0 Released (Python 3.8 and Cython)

PyDev 7.5.0 is now available for download.

The major changes in this release are Python 3.8 support and improved Cython parsing.

Python 3.8 support should've shipped in 7.4.0, but because of an oversight on my part during the build it didn't, so this release fixes that.

As for the Cython AST, Cython is now parsed using Cython itself (so it needs to be installed and available in the default interpreter for PyDev to be able to parse it). The major issue right now is that the parser is not fault tolerant (this means that for code-completion and code-analysis to kick in, the code needs to be syntax-correct, which is a problem when completing, for instance, variables right after a dot).

Fixing that in Cython seems to be trivial (https://github.com/cython/cython/issues/3303), but I'm still waiting for a signal that it's ok to add that support to make Cython parsing fault-tolerant.

Enjoy!

Python Data: Python Data Weekly Roundup – Jan 10 2020


In this week’s Python Data Weekly Roundup:

A Comprehensive Learning Path to Understand and Master NLP in 2020

If you’re looking to learn more about Natural Language Processing (NLP) in 2020, this is a very good article describing a good learning path to take including links to articles, courses, videos and more to get you started down the road of becoming proficient with the tools and methods of NLP.

The Best of Both Worlds: Forecasting US Equity Market Returns using a Hybrid Machine Learning – Time Series Approach

Abstract:

Predicting long-term equity market returns is of great importance for investors to strategically allocate their assets. We apply machine learning methods to forecast 10-year-ahead U.S. stock returns and compare the results to traditional Shiller regression-based forecasts more commonly used in the asset-management industry. Machine-learning forecasts have similar forecast errors to a traditional return forecast model based on lagged CAPE ratios. However, machine-learning forecasts have higher forecast errors than the regression-based, two-step approach of Davis et al [2018] that forecasts the CAPE ratio based on macroeconomic variables and then imputes stock returns. When we combine our two-step approach with machine learning to forecast CAPE ratios (a hybrid ML-VAR approach), U.S. stock return forecasts are statistically and economically more accurate than all other approaches. We discuss why and conclude with some best practices for both data scientists and economists in making real-world investment return forecasts.

Source: Improving U.S. stock return forecasts: A “fair-value” CAPE approach

Building machine learning workflows with AWS Data Exchange and Amazon SageMaker

This article describes how to use AWS SageMaker and AWS Data Exchange to build a machine learning model and machine learning workflows. What I found interesting is the ability to use AWS Data Exchange to find a large number of different types of data.

Tutorial: Python Regex (Regular Expressions) for Data Scientists

I hate regex. Of course I love the functionality and capabilities of using regex, but I loathe my inability to come up with my own regex ‘formulas’. I *always* have to go out on the web to search for how to do what I’m trying to do. This article doesn’t solve that problem for me, but it does provide a refresher on regex patterns and a reminder of why regex is important.
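
For a quick taste of what such a refresher covers, here is a small illustrative snippet of my own (not taken from the linked tutorial):

import re

log = "2020-01-10 ERROR disk full; 2020-01-11 INFO ok"

# Find every ISO-style date in the string
dates = re.findall(r"\d{4}-\d{2}-\d{2}", log)
print(dates)  # ['2020-01-10', '2020-01-11']

# Named groups make the match self-documenting
match = re.search(r"(?P<level>ERROR|INFO)\s+(?P<msg>[\w ]+)", log)
if match:
    print(match.group("level"), match.group("msg"))  # ERROR disk full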


That’s it for this week’s Python Data Weekly Roundup. Subscribe to our newsletter to receive this weekly roundup in your email.

 

The post Python Data Weekly Roundup – Jan 10 2020 appeared first on Python Data.

Peter Bengtsson: How to have default/initial values in a Django form that is bound and rendered


Django's Form framework is excellent. It's intuitive and versatile and, best of all, easy to use. However, one little thing that is not so intuitive is how to render a bound form with default/initial values when the form is never rendered unbound.

If you do this in Django:

class MyForm(forms.Form):
    name = forms.CharField(required=False)


def view(request):
    form = MyForm(initial={'name': 'Peter'})
    return render(request, 'page.html', {'form': form})

# Imagine, in 'page.html', that it does this:
#   <label>Name:</label>
#   {{ form.name }}

...it will render out this:

<label>Name:</label> <input type="text" name="name" value="Peter">

The whole initial trick is something you can set on the whole form or individual fields. But it's only used in UN-bound forms when rendered.

If you change your view function to this:

def view(request):
    form = MyForm(request.GET, initial={'name': 'Peter'})  # data passed!
    if form.is_valid():  # makes it bound!
        print(form.cleaned_data['name'])
    return render(request, 'page.html', {'form': form})

Now, the form is bound and the initial stuff is essentially ignored, because name is not present in request.GET. And if it was present, but an empty string, it wouldn't be able to benefit from the default value.

My solution

I tried many suggestions and tricks (based on rapid Stackoverflow searching) and nothing worked.

I knew one thing: Only the view should know the actual initial values.

Here's what works:

import copy


class MyForm(forms.Form):
    name = forms.CharField(required=False)

    def __init__(self, data, **kwargs):
        data = copy.copy(data)
        for key, value in kwargs.get("initial", {}).items():
            data[key] = data.get(key, value)
        super().__init__(data, **kwargs)

Now, suppose you don't have ?name=something in request.GET; the line print(form.cleaned_data['name']) will print Peter and the rendered form will look like this:

<label>Name:</label> <input type="text" name="name" value="Peter">

And, as expected, if you have ?name=Ashley in request.GET it will print Ashley and produce this rendered HTML too:

<label>Name:</label> <input type="text" name="name" value="Ashley">

Peter Hoffmann: Azure Data Lake Storage Gen 2 with Python


Microsoft has released a beta version of the python client azure-storage-file-datalake for the Azure Data Lake Storage Gen 2 service.

The service offers blob storage capabilities with filesystem semantics, atomic operations, and a hierarchical namespace. Azure Data Lake Storage Gen 2 is built on top of Azure Blob Storage and shares the same scaling and pricing structure (only transaction costs are a little bit higher). Multi-protocol access allows you to use data created with azure blob storage APIs in the data lake and vice versa. This enables a smooth migration path if you already use the blob storage with tools like kartothek and simplekv to store your datasets in parquet. Naming terminology differs a little bit: what is called a container in the blob storage APIs is now a file system in the adls context.

pip install azure-storage-file-datalake --pre

The entry point into the Azure Datalake is the DataLakeServiceClient which interacts with the service on a storage account level. It can be authenticated with the account and storage key, SAS tokens or a service principal. A storage account can have many file systems (aka blob containers) to store data isolated from each other.

import os

from azure.storage.filedatalake import DataLakeServiceClient
from azure.core.exceptions import ResourceExistsError

account_name = os.getenv("STORAGE_ACCOUNT_NAME")
credential = os.getenv("STORAGE_ACCOUNT_KEY")
account_url = "https://{}.dfs.core.windows.net/".format(account_name)
datalake_service = DataLakeServiceClient(account_url=account_url, credential=credential)

file_system = "testfs"
try:
    filesystem_client = datalake_service.create_file_system(file_system=file_system)
except ResourceExistsError:
    filesystem_client = datalake_service.get_file_system_client(file_system)

The FileSystemClient represents interactions with the directories and files within it. So let's create some data in the storage.

dir_client = filesystem_client.get_directory_client("incoming")
dir_client.create_directory()

data = """name,population
Berlin, 3406000
Munich, 1275000
"""

file_client = dir_client.create_file("cities.txt")
file_client.append_data(data, 0, len(data))
file_client.flush_data(len(data))

>>> [(i.name, i.is_directory) for i in filesystem_client.get_paths("")]
[('incoming', True), ('incoming/cities.txt', False)]

If the FileClient is created from a DirectoryClient it inherits the path of the directory, but you can also instantiate it directly from the FileSystemClient with an absolute path:

>>> file_client = filesystem_client.get_file_client('incoming/cities.txt')
>>> file_client.read_file()
b'name,population\nBerlin, 3406000\nMunich, 1275000\n'

These interactions with the azure data lake do not differ much from the existing blob storage API, and the data lake client also uses the azure blob storage client behind the scenes.

What differs and is much more interesting is the hierarchical namespace support in azure datalake gen2. The convention of using slashes in the name/key of the objects/files has already been used to organize the content in the blob storage into a hierarchy. With prefix scans over the keys it has also been possible to get the contents of a folder. What has been missing in the azure blob storage API is a way to work on directories with atomic operations.

A typical use case are data pipelines where the data is partitioned over multiple files using a hive like partitioning scheme:

incoming/date=2019-01-01/part1.parquet
incoming/date=2019-01-01/part2.parquet
incoming/date=2019-01-01/part3.parquet
incoming/date=2019-01-02/part1.parquet
incoming/date=2019-01-02/part2.parquet
...

If you work with large datasets with thousands of files, moving a daily subset of the data to a processed state would have involved looping over the files in the azure blob API and moving each file individually. This is not only inconvenient and rather slow, but it also lacks the characteristics of an atomic operation.

With the new azure data lake API it is now easily possible to do this in one operation:

directory_client = filesystem_client.get_directory_client('processed')
directory_client.create_directory()

directory_client = filesystem_client.get_directory_client('incoming/date=2019-01-01')
directory_client.rename_directory('testfs/processed/date=2019-01-01')

[(i.name, i.is_directory) for i in filesystem_client.get_paths("")]

[('incoming', True),
 ('incoming/date=2019-01-02', True),
 ('incoming/date=2019-01-02/part1.parquet', False),
 ('incoming/date=2019-01-02/part2.parquet', False),
 ('processed', True),
 ('processed/date=2019-01-01', True),
 ('processed/date=2019-01-01/part1.parquet', False),
 ('processed/date=2019-01-01/part2.parquet', False),
 ('processed/date=2019-01-01/part3.parquet', False)]

Deleting directories and the files within them is also supported as an atomic operation:

directory_client = filesystem_client.get_directory_client('incoming/date=2019-01-02')
directory_client.delete_directory()

So especially the hierarchical namespace support and atomic operations make the new azure datalake API interesting for distributed data pipelines. Extra security features like POSIX permissions on individual directories and files are also notable.

Catalin George Festila: Python 3.7.5 : About asterisk operators in Python.

The asterisk, known as the star operator, is used in Python with more than one meaning attached to it. Today I will show you some simple examples of how it can be used. Let's start with these issues. You can merge two or more dictionaries by unpacking them into a new one:

>>> a = {'u': 1}
>>> b = {'v': 2}
>>> ab = {**a, **b, 'c': 'd'}
>>> ab
{'u': 1, 'v': 2, 'c': 'd'}

Create multiple assignments:

>>> *x,
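
The feed truncates the post at that point. As an illustration of the other star-operator uses the post appears to be heading toward (my examples, not the author's):

# Extended iterable unpacking
first, *rest = [1, 2, 3, 4]
print(first, rest)  # 1 [2, 3, 4]

# Unpacking into a new list literal
merged = [*rest, *range(2)]
print(merged)  # [2, 3, 4, 0, 1]

# *args / **kwargs in a function signature
def report(title, *args, **kwargs):
    print(title, args, kwargs)

report("totals", 1, 2, unit="kg")  # totals (1, 2) {'unit': 'kg'}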

Weekly Python StackOverflow Report: (ccx) stackoverflow python report


Ned Batchelder: Bug #915: please help!


I just released coverage.py 5.0.3, with two bug fixes. There was another bug I really wanted to fix, but it has stumped me. I’m hoping someone can figure it out.

Bug #915 describes a disk I/O failure. Thanks to some help from Travis support, Chris Caron has provided instructions for reproducing it in Docker, and they work: I can generate disk I/O errors at will. What I can’t figure out is what coverage.py is doing wrong that causes the errors.

To reproduce it, start a Travis-based docker image:

cid=$(docker run -dti --privileged=true --entrypoint=/sbin/init -v /sys/fs/cgroup:/sys/fs/cgroup:ro travisci/ci-sardonyx:packer-1542104228-d128723)
docker exec -it $cid /bin/bash

Then in the container, run these commands:

su - travis
git clone --depth=1 --branch=nedbat/debug-915 https://github.com/nedbat/apprise-api.git
cd apprise-api
source ~/virtualenv/python3.6/bin/activate
pip install tox
tox -e bad,good

This will run two tox environments, called good and bad. Bad will fail with a disk I/O error, good will succeed. The difference is that bad uses the pytest-cov plugin, good does not. Two detailed debug logs will be created: debug-good.txt and debug-bad.txt. They show what operations were executed in the SqliteDb class in coverage.py.

The Big Questions: Why does bad fail? What is it doing at the SQLite level that causes the failure? And most importantly, what can I change in coverage.py to prevent the failure?

Some observations and questions:

  • If I change the last line of the steps to “tox -e good,bad” (that is, run the environments in the other order) then the error doesn’t happen. I don’t understand why that would make a difference.
  • I’ve tried adding time.sleep’s to try to slow the pace of database access, but maybe in not enough places? And if this fixes it, what’s the right way to productize that change?
  • I’ve tried using the detailed debug log to create a small Python program that in theory accesses the SQLite database in exactly the same way, but I haven’t managed to create the error that way. What aspect of access am I overlooking?

If you come up with answers to any of these questions, I will reward you somehow. I am also eager to chat if that would help you solve the mysteries. I can be reached on email, Twitter, as nedbat on IRC, or in Slack. Please get in touch if you have any ideas. Thanks.


Jaime Buelta: Python Automation Cookbook

So, great news: I wrote a book and it's available! It's called Python Automation Cookbook, and it's aimed at people who already know a bit of Python (not necessarily developers only) but would like to use it to automate common tasks like searching files, creating different kinds of documents, adding graphs, sending emails, text messages, … Continue reading Python Automation Cookbook

Jaime Buelta: Hands-On Docker for Microservices with Python Book

Last year I published a book, and I liked the experience, so I wrote another! The book is called Hands-On Docker for Microservices with Python, and it goes through the different steps to move from a Monolith Architecture towards a Microservices one. It is written from a very practical standpoint, and aims to cover … Continue reading Hands-On Docker for Microservices with Python Book

Jaime Buelta: ffind v1.2.0 released!

The new version of ffind, v1.2.0, is available on GitHub and PyPI. This version includes the ability to configure defaults by environment variables and to force case insensitivity in searches. You can upgrade with pip install ffind --upgrade. This will be the last version to support Python 2.6. Happy searching!

Codementor: Python for Beginners: Making Your First Socket Program (Client & Server Communication)

How to send a text file between client and server: Python simple example and source code download. See the video for more info!

Mike C. Fletcher: Started work on getting py-spy/speedscope in RunSnakeRun


So having finally written down the thoughts on a carbon tax, that kept distracting me from actually working on Open Source, I finally got a bit of work done on Open Source on the last night of the vacation.

What I started work on was getting a sampling profiler format supported, and for that I chose py-spy, particularly its speedscope export format. The work is still early days, but it does seem to work in my initial test cases.

At the moment I'm only supporting the "sampled" mode (vs the evented mode, which is closer to coldshot) for the format. I haven't implemented the module/location tree-view yet. More annoyingly, the sample format doesn't include start-of-function information, so there's no differentiation between two functions with the same name in the same file when separating out the results. The results are also a bit confusing when you're used to cProfile style, as the boxes are stack-line based, so you'll see separate boxes for funcname:32 and funcname:34 children next to each other even though it's the same child function involved. That's confusing enough that I'll likely group children that are calls to the same function (regardless of which line in the function they were in during the sample) into the same box.

The speedscope format would also make it pretty easy to do per-line heat-maps in the file, and obviously (given it's what speedscope normally does), a flame-graph would be a reasonable display as well. Anyway, when I have some more vacation time I can look into further work on it.

Mike Driscoll: PyDev of the Week: Tyler Reddy


This week we welcome Tyler Reddy (@Tyler_Reddy) as our PyDev of the Week! Tyler is a core developer of Scipy and Numpy. He has also worked on the MDAnalysis library, which is for particle physics simulation analysis. If you’re interested in seeing some of his contributions, you can check out his Github profile. Let’s spend some time getting to know Tyler better!

Tyler Reddy

Can you tell us a little about yourself (hobbies, education, etc):

I grew up in Dartmouth, Nova Scotia, Canada and stayed there until my late twenties. My Bachelor and PhD degrees were both in biochemistry, focused on structural biology. I did travel a lot for chess, winning a few notable tournaments in my early teen years and achieving a master rating in Canada by my late teens. Dartmouth is also known as the “City of Lakes,” and I grew up paddling on the nearby Lake Banook. In the cold Canadian Winter the lake would freeze over and training would switch to a routine including distance running—this is where my biggest “hobby” really took off. I still run about 11 miles daily in the early morning.

I did an almost six year post-doc in Oxford, United Kingdom. I had started to realize during my PhD that my skill set was better suited to computational work than work on the lab bench. Formally, I was still a biologist while at Oxford, but it was becoming clear that my contributions were starting to look a lot more like applied computer science and computational geometry in particular. I was recruited to Los Alamos National Laboratory to work on viruses (the kind that make a person, not computer, sick), but ultimately my job has evolved into applied computer scientist here, and nothing beats distance running in beautiful Santa Fe, NM.

Why did you start using Python?

I think it started during my PhD with Jan Rainey in Canada. He was pretty good about letting me explore ways to use programming to make research processes more efficient, even when I might have been better off in the short term by “just doing the science.” Eventually my curiosity grew to the point where I just read one of the editions of Mark Lutz’s “Learning Python” from cover to cover. I very rarely used the terminal to test things out while reading the book—I just kept going through chapters feverishly—I suppose Python is pretty readable! I still prefer reading books to random experimenting when approaching new problems/languages, though I don’t always have the time/luxury to do so. I remember reading Peter Seibel’s “Coders at Work,” and making a list of all the books the famous programmers interviewed there were talking about.

What other programming languages do you know and which is your favorite?

During my second postdoc at Los Alamos I read Stephen Kochan's "Programming in C." For that book I did basically do every single exercise in the terminal as I read it—I found that far more necessary with C than Python to get the ideas to stick. I had made an earlier attempt at reading the classic "The C Programming Language" book by K&R and found it rather hard to learn from! I thought I was doing something wrong since it was described as a classic in "Coders at Work," I think. I'll probably never go back to that book now, but I certainly get a lot of mileage out of my C knowledge these days.

I did a sabbatical at UC Berkeley with Stéfan van der Walt and the NumPy core team, working on open source full time for a year. NumPy is written in C under the hood, so it was essential I could at least read the source. A lot of the algorithm implementations in SciPy that I review or write are written in the hybrid Cython (C/Python) language to speed up the inner loops, etc.

I’ve also written a fair bit of tcl, and I write a lot of CMake code these days at work.

Python easily wins out as my favorite language, but C isn't too far behind. I have to agree with the high-profile authors in "Coders at Work" who described C as "beautiful" (or similar) and C++ as, well, something else. Indeed, the NumPy team wrote a custom type templating language in C, processed by Python, instead of using C++. That said, Bjarne did visit UC Berkeley while I was there and it sounds like C++ may be taking a few more ideas from the Python world in the future!

What projects are you working on now?

I’m the release manager for SciPy, which has been my main long-term open source project focus in recent years. I’ve been trying really hard to improve the computational geometry algorithms available in SciPy—both in terms of adding new ones from the recent mathematics literature and improving the ones we already have.

A lot of my time goes into code review now though. I don't mind—that's kind of how it works—if I'm going to expect the other core devs and community to review my code and help me get over the finish line I should be ready to do the same for them. Indeed, as funding is now starting to show up a bit more for some OSS projects we're quickly realizing that just dumping a bunch of new code on the core team/community will quickly cause a problem—review bandwidth is really important.

I’ve had a few rejected proposals for funding for computational geometry work in scipy.spatial, but I will keep trying! We recently wrote a paper for SciPy, which was a lot of work with such a big group/history/body of code, but probably worth it in the end.

I also try to stay involved in NumPy code review, especially for infrastructure-related changes (wheels, CI testing, etc.) and some interest I have in datetime code.

My open source journey started with the MDAnalysis library for particle physics simulation analysis. I try to help out there too, but just keeping up with the emails/notifications for 3+ OSS projects is extremely hard in mostly free time. I try to track notifications/stay somewhat involved in what is going on with OpenBLAS and asv as well, though it feels like I’m failing to keep up most of the time!

Which Python libraries are your favorite (core or 3rd party)?

I think hypothesis is probably underrated—some libraries are hesitant to incorporate it into their testing frameworks, but I think the property-based testing has real potential to catch scenarios humans would have a hard time anticipating, or at least that would take a long time to properly plan for. I find that hypothesis almost always adds a few useful test cases I hadn’t thought of that will require special error handling, for example.

Coverage.py is pretty important for showing line coverage, but I wish the broader CI testing ecosystem had more robust/diverse options for displaying coverage data and aggregating results from Python and compiled language source code. A number of the larger projects I work on have issues with reliability of codecov. The Azure Pipelines service has an initial coverage offering—we’ll see if that really takes off. It will be neat if we can soon mouse over a line of tested code and see the name of the test that covers it. I think I saw somewhere that this will perhaps soon be possible.

How did you get involved with SciPy?

My first substantial contribution was the implementation of Spherical Voronoi diagram calculation in scipy.spatial.SphericalVoronoi. I was working on physics simulations of spherical influenza viruses at the time, and wanted a reliable way to determine the amount of surface area that molecules were occupying. I was fortunate that my postdoc supervisor at the time, Mark Sansom at Oxford, allowed me to explore my interest in computational geometry algorithms like that. I gave a talk at what I believe was the second annual PyData London conference about the algorithm implementation, which was still incomplete at the time, and received some really helpful feedback from two expert computational geometers—one was an academic, the other was loosely associated with the CGAL team.

I really enjoyed the process of working with the SciPy team—I remember the first person to ever review my code there was CJ Carey, a computer scientist who is now working at Google. I was pretty intimidated, but they were quite welcoming and I was probably a little too excited when Ralf Gommers, the chair of the steering council, invited me to join the core team. I’ve been hooked ever since!

What are the pros and cons of using SciPy?

You can usually depend on SciPy to have a pretty stable API over time—we generally take changes in behavior quite seriously. A break in backwards compatibility would normally require a long deprecation cycle. The quality/robustness expected for algorithms implemented in SciPy is generally quite high and the library is well-tested, so it is usually best to use SciPy if an algorithm is already available in it. The documentation is of reasonably high quality and constantly improving, and many common questions are answered on, e.g., Stack Overflow.

If you want to play with experimental algorithms or advocate for a rapid change in behavior, SciPy may not be your first choice. Early adoption of immature technologies is usually not likely to happen. Stability and reliability are important at the base of the Python Scientific Computing ecosystem.

How will SciPy / NumPy be changing in the future?

The amount of activity/progress happening for these two projects is pretty staggering. The official response is usually to take a look at the roadmaps for NumPy and SciPy.

A few things that stand out off the top of my head: improving support for using different backends to perform calculations with NumPy and SciPy (for example, using GPUs or distributed infrastructure), and making it easier to use custom dtypes. You might want to speed up code with Cython or Numba or Pythran and some thought may be required for NumPy and SciPy to remain well-suited for each of those.

I think I’m starting to see indications that binary wheels will eventually become available for PowerPC and ARM architectures, but my impression was that there were still some challenges there.

I think you’ll probably see better published papers/citation targets for these two projects in the future as well. With all the efforts underway to get grants to fund these projects I think we’ll continue to see periods where there will be funded developers driving things forward more quickly, as has happened with the grant for NumPy at UC BIDS.

Thanks for doing the interview, Tyler!

The post PyDev of the Week: Tyler Reddy appeared first on The Mouse Vs. The Python.

IslandT: Small python application which will remove duplicate files from the windows 10 os


I am glad to inform you all that the remove-duplicate-files project written with Python has finally been completed, and it has now been uploaded to GitHub for you all to enjoy. This is free software and it will always remain free. Although I would really love to create a Linux version of this software, I do not have a Linux computer, so at the moment this software is for Windows users only. I have packaged this software so that you can just download the setup.exe file, install the program, and start it up to search and destroy the duplicate files inside your computer. Here are the steps you need to follow to search and destroy the duplicate files:

  • Open up the application on your Windows desktop.
  • Click on the remove file button to select a file or use shift-left click and select many files at once.
  • Select any other folder which you want this program to search and destroy all the duplicate files from, then sit back and enjoy.
  • This program will search all the files within that selected folder but make sure you are not selecting the same folder which you have selected the original file from.
  • You can select multiple files at once, and you can also select further files whose duplicates you want to remove after the first batch, but it is not advisable to queue up many selection actions at once if this program is running on a slower computer.

All right, now let us look at the images below for the step by step tutorial on how to use this python application.

Click on the remove button below to select a file or files
Click shift and select all the files that you wish to search and destroy their duplicate versions from and click the open button
Next is to select the folder which you wish to remove all the duplicate files from then click on the select folder button
That is it, now just sit back and enjoy while watching the program doing its job

If you spot any bug in this application, you can leave a comment below this post and I will fix it as fast as possible. If you are not sure how to use this program, you can create a folder, copy and paste some files from another folder into this new folder, and practice deleting the duplicate files following the steps I have shown you earlier. This program is not perfect, and thus your feedback is very important for me to improve its quality.

This program is only for Windows users, and you may need to have Python installed on your desktop before you can use it. If you intend to compile and run the program by yourself, you can download the entire package, open up the Windows command prompt, and type 'python path/to/Multitas.py' to start the application on your Windows laptop; this application runs well with no problem at all! The latest version (download the setup.exe file): download the setup.exe file to your Windows laptop, then install the application by following the setup instructions. In the future, all the latest updates will come with a setup file with the version number on it, for example setup1.exe, setup2.exe, and so on.
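
The post ships an installer rather than walking through the detection logic, but the core idea of matching duplicates by content can be sketched roughly like this (my illustration, not the actual Multitas.py code):

import hashlib
import os

def file_digest(path, chunk_size=65536):
    # Hash a file's contents in chunks so large files don't exhaust memory
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        while True:
            block = handle.read(chunk_size)
            if not block:
                break
            digest.update(block)
    return digest.hexdigest()

def find_duplicates(originals, folder):
    # Yield every file under folder whose contents match one of the originals
    wanted = {file_digest(path) for path in originals}
    for root, _dirs, files in os.walk(folder):
        for name in files:
            candidate = os.path.join(root, name)
            if candidate not in originals and file_digest(candidate) in wanted:
                yield candidate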

What is new in this latest version: the move file feature has been included, so you can now move a file from one folder to another by clicking on the move button, then selecting a file, and then selecting the folder which you want to move that file into!


The application has been uploaded to GitHub and you can now download the setup.exe file through this link.  After you have downloaded the file, click on the setup.exe file to install the program and use it.


Codementor: Top 3 Best Python Books You Should Read in 2019

These 3 best Python books cover the Python programming language. They contain quality content on Python 3, data science, and machine learning techniques used in Python. Python is a widely used…

Codementor: 5 Best Text Editors for Programmers

The 5 Best Text Editors for Programmers. 1. Atom text editor 2. Vim text editor 3. VS Code text editor 4. Notepad++ text editor 5. Sublime text editor. It is essential for Software Developers and…

Ned Batchelder: Bug #915: solved!


Yesterday I pleaded, Bug #915: please help! It got posted to Hacker News, where Robert Xiao (nneonneo) did some impressive debugging and found the answer.

The user’s code used mocks to simulate an OSError when trying to make temporary files (source):

with patch('tempfile._TemporaryFileWrapper') as mock_ntf:
    mock_ntf.side_effect = OSError()

Inside tempfile.NamedTemporaryFile, the error handling misses the possibility that _TemporaryFileWrapper will fail (source):

(fd, name) = _mkstemp_inner(dir, prefix, suffix, flags, output_type)
try:
    file = _io.open(fd, mode, buffering=buffering,
                    newline=newline, encoding=encoding, errors=errors)

    return _TemporaryFileWrapper(file, name, delete)
except BaseException:
    _os.unlink(name)
    _os.close(fd)
    raise

If _TemporaryFileWrapper fails, the file descriptor fd is closed, but the file object referencing it still exists. Eventually, it will be garbage collected, and the file descriptor it references will be closed again.

But file descriptors are just small integers which will be reused. The failure in bug 915 is that the file descriptor did get reused, by SQLite. When the garbage collector eventually reclaimed the file object leaked by NamedTemporaryFile, it closed a file descriptor that SQLite was using. Boom.
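
Here is a minimal sketch of that failure mode, independent of coverage.py and SQLite:

import os
import tempfile

fd, path = tempfile.mkstemp()
os.close(fd)  # first close: fine

fd2, path2 = tempfile.mkstemp()  # the OS recycles small integers, so often fd2 == fd

if fd2 == fd:
    os.close(fd)  # stale second close silently closes the *new* file
    try:
        os.write(fd2, b"data")  # the real owner of fd2 now fails
    except OSError as exc:
        print("victim failed:", exc)
else:
    os.close(fd2)

os.unlink(path)
os.unlink(path2)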

There are two improvements to be made here. First, the user code should be mocking public functions, not internal details of the Python stdlib. In fact, the variable is already named mock_ntf as if it had been a mock of NamedTemporaryFile at some point.

NamedTemporaryFile would be a better mock because that is the function being used by the user’s code. Mocking _TemporaryFileWrapper is relying on an internal detail of the standard library.
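
In other words, the test could patch the public name instead, along these lines (a sketch; the details of the user's test are assumed):

from unittest.mock import patch
import tempfile

with patch("tempfile.NamedTemporaryFile", side_effect=OSError("simulated failure")):
    try:
        tempfile.NamedTemporaryFile()  # now fails without leaking a file object
    except OSError:
        pass  # exercise the error-handling path under test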

The other improvement is to close the leak in NamedTemporaryFile. That request is now bpo39318. As it happens, the leak had also been reported as bpo21058 and bpo26385.

Lessons learned:

  • Hacker News can be helpful, in spite of the tangents about shell redirection, authorship attribution, and GitHub monoculture.
  • There are always people more skilled at debugging. I had no idea you could script gdb.
  • Error handling is hard to get right. Edge cases can be really subtle. Bugs can linger for years.

I named Robert Xiao at the top, but lots of people chipped in effort to help get to the bottom of this. ikanobori posted it to Hacker News in the first place. Chris Caron reported the original #915 and stuck with the process as it dragged on. Thanks everybody.

Reuven Lerner: Last chance for Weekly Python Exercise A1


This is a final reminder that in a few hours, registration will close for Weekly Python Exercise A1: Data structures for beginners.

Again and again, WPE participants have said that Weekly Python Exercise was the boost they needed to become more familiar with Python.

Now, if Python fluency is your goal, then that’s great.  But for most people, Python fluency isn’t the goal — it’s a means to an end.  And to what end?

  • With more fluent Python, you can spend time doing your job, rather than searching Stack Overflow and Google.
  • With more fluent Python, you’ll write tighter, more readable, code.
  • With more fluent Python, you’ll be able to solve bigger and more complex problems than before.
  • With more fluent Python, you’ll be able to interview for — and get — more senior Python development positions.

The $100 you spend for this 15-week course will more than pay for itself in future earnings.  But if you find that the price is out of reach, remember that I give discounts to students, seniors/pensioners/retirees, and anyone living outside of the world’s 30 richest countries.  If this applies to you, then just e-mail me, and I’ll gladly give you the appropriate coupon code.

But don’t delay, because the first exercise will soon be going out to subscribers!  And I won’t be offering WPE A1 again until 2021.

Click here to join Weekly Python Exercise A1: Data structures for beginners

The post Last chance for Weekly Python Exercise A1 appeared first on Reuven Lerner.

Abhijeet Pal: Python Program To Display Characters From A to Z


Problem Definition

Create a Python program to display all alphabets from A to Z.

Solution

This article will go through two pythonic ways to generate alphabets.

Using the string module

Python's built-in string module comes with a number of useful string functions; one of them is string.ascii_lowercase.

Program

import string

for i in string.ascii_lowercase:
    print(i, end=" ")

Output

a b c d e f g h i j k l m n o p q r s t u v w x y z

The string.ascii_lowercase constant holds all lowercase alphabets as a single string, abcdefghijklmnopqrstuvwxyz, so the program is simply running a for loop over the string's characters and printing them. Similarly for uppercase A to Z letters.

Program

import string

for i in string.ascii_uppercase:
    print(i, end=" ")

Output

A B C D E F G H I J K L M N O P Q R S T U V W X Y Z

Using the chr() Function

The chr() function in Python returns a Unicode character for the provided ASCII value, hence chr(97) returns “a”. To learn more about chr() …
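
The excerpt is truncated mid-sentence, but since chr(65) is "A" and chr(90) is "Z", a chr()-based version presumably resembles this sketch:

# Build A to Z from character codes instead of the string module
for code in range(ord("A"), ord("Z") + 1):
    print(chr(code), end=" ")  # prints A B C ... Z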

The post Python Program To Display Characters From A to Z appeared first on Django Central.
