
Stefan Scherfke: The macOS Dark Mode, your Terminal and Vim


The new Dark Mode in macOS Mojave is a nice addition and is – especially in the night hours – more pleasing to your eyes than the light mode.

MacOS light mode with a light Terminal profile and a light Vim theme.

However, enabling Dark Mode will not change the Terminal profile, which is a little bit annoying – especially if your color theme has a light and a dark variant (like the infamous Solarized, Snow, One, or my own Rasta theme).

MacOS dark mode with a light Terminal profile and a light Vim theme.

If you change your Terminal profile to something dark, Vim still doesn’t look right because it uses its own mechanism for light/dark backgrounds (see :help 'background' for details) and doesn’t know about the changes you made to the Terminal profile.

MacOS dark mode with a dark Terminal profile and a light Vim theme.

If you execute :set background=dark in Vim (and if your color scheme supports it), Vim now looks nice and dark, too.

MacOS dark mode with a dark Terminal profile and a dark Vim theme.

However, on the next day, the fun begins again when you want to switch everything back to light mode …

Wouldn’t it be nice if this could all be accomplished with a single command?

There are tools that help you switch to/from macOS Dark Mode (e.g., NightOwl or Shifty), but they can’t change your Terminal profile or notify Vim.

As it turns out, it’s not too hard to implement a little program that does exactly this:

  • You can use the defaults command to get the current macOS Dark Mode setting:

    $ defaults read -g AppleInterfaceStyle
    Dark
  • You can use AppleScript (oh, how I love this language …) to set Dark Mode and update the Terminal profile:

    # Set Dark Mode
    tell application "System Events"
        tell appearance preferences
            set dark mode to true # Can be one of: true, false, not dark
        end tell
    end tell

    # Update default settings (for new windows/tabs)
    tell application "Terminal"
        set default settings to settings set "Rasta"
    end tell

    # Update settings for existing windows/tabs
    tell application "Terminal"
        set current settings of tabs of windows to settings set "Rasta" # Theme name
    end tell
  • You can wrap both things with a Python script:

    # toggle-macos-dark-mode.py
    import subprocess

    OSASCRIPT = """
    tell application "System Events"
        tell appearance preferences
            set dark mode to {mode}
        end tell
    end tell
    tell application "Terminal"
        set default settings to settings set "{theme}"
    end tell
    tell application "Terminal"
        set current settings of tabs of windows to settings set "{theme}"
    end tell
    """

    TERMINAL_THEMES = {
        False: 'Rasta light',
        True: 'Rasta',
    }

    def is_dark_mode() -> bool:
        """Return the current Dark Mode status."""
        result = subprocess.run(
            ['defaults', 'read', '-g', 'AppleInterfaceStyle'],
            text=True,
            capture_output=True,
        )
        return result.returncode == 0 and result.stdout.strip() == 'Dark'

    def set_interface_style(dark: bool):
        """Enable/disable dark mode."""
        mode = 'true' if dark else 'false'  # mode can be {true, false, not dark}
        script = OSASCRIPT.format(mode=mode, theme=TERMINAL_THEMES[dark])
        result = subprocess.run(
            ['osascript', '-e', script],
            text=True,
            capture_output=True,
        )
        assert result.returncode == 0, result

    if __name__ == '__main__':
        set_interface_style(not is_dark_mode())
  • You can use the timer_start() function (introduced in Vim 8 and also available in Neovim) to regularly check the current Dark Mode setting. Put this into your Vim config:

    function! SetBackgroundMode(...)
        let s:new_bg = "light"
        if $TERM_PROGRAM ==? "Apple_Terminal"
            let s:mode = systemlist("defaults read -g AppleInterfaceStyle")[0]
            if s:mode ==? "dark"
                let s:new_bg = "dark"
            else
                let s:new_bg = "light"
            endif
        else
            " This is for Linux where I use an environment variable for this:
            if $VIM_BACKGROUND ==? "dark"
                let s:new_bg = "dark"
            else
                let s:new_bg = "light"
            endif
        endif
        if &background !=? s:new_bg
            let &background = s:new_bg
        endif
    endfunction
    call SetBackgroundMode()
    call timer_start(3000, "SetBackgroundMode", {"repeat": -1})
  • You can create an Automator action that runs the Python script and that can be activated with a global shortcut. I use ⌥⌘D (you need to deactivate this shortcut for showing/hiding the Dock first). This is the AppleScript I used:

    do shell script "/usr/local/bin/python3 ~/toggle-macos-dark-mode.py"
    Yo dawg, I heard you like AppleScript … so I wrote some AppleScript that wraps your Python that wraps your AppleScript

The drawback of this method is that the current application (at the time you press ⌥⌘D) is used as the “source” of the action, so you get two dialogs asking you to give that app permission to remote control the System Settings and Terminal.

A better solution would be if the authors of NightOwl and Shifty integrated this into their tools. I’m going to contact them and see what happens. :-)


Test and Code: 50: Flaky Tests and How to Deal with Them


Anthony Shaw joins Brian to discuss flaky tests and flaky test suites.

  • What are flaky tests?
  • Is it the same as fragile tests?
  • Why are they bad?
  • How do we deal with them?
  • What causes flakiness?
  • How can we fix them?
  • How can we avoid them?
  • Proactively rooting out flakiness
  • Test design
  • GUI tests
  • Sharing solutions

Special Guest: Anthony Shaw.

Sponsored By:

  • DigitalOcean: Get started with a free $100 credit toward your first project on DigitalOcean and experience everything the platform has to offer, such as cloud firewalls, real-time monitoring and alerts, global datacenters, object storage, and the best support anywhere. Claim your credit today at do.co/testandcode.

Support Test and Code: https://www.patreon.com/testpodcast

Links:

  • Dropbox article on flaky tests: https://blogs.dropbox.com/tech/2018/05/how-were-winning-the-battle-against-flaky-tests/
  • Microsoft article on flaky tests: https://blogs.msdn.microsoft.com/bharry/2017/06/28/testing-in-a-cloud-delivery-cadence/
  • pytest-rerunfailures, a pytest plugin that re-runs failed tests up to -n times to eliminate flaky failures: https://github.com/pytest-dev/pytest-rerunfailures
  • pytest-randomly, a pytest plugin to randomly order tests and control random.seed: https://github.com/pytest-dev/pytest-randomly
  • pytest-random-order, a pytest plugin to randomise the order of tests with some control over the randomness: https://github.com/jbasko/pytest-random-order
  • math.isclose(): https://docs.python.org/3/library/math.html#math.isclose
  • numpy.isclose(): https://docs.scipy.org/doc/numpy-1.15.0/reference/generated/numpy.isclose.html
  • pytest.approx(): https://docs.pytest.org/en/latest/reference.html#pytest-approx
  • Anthony’s testing article on RealPython: https://realpython.com/python-testing/
  • Ghost Inspector: https://ghostinspector.com/
  • wily, a Python application for tracking and reporting on timing and complexity in tests: https://github.com/tonybaloney/wily

Stack Abuse: Python GUI Development with Tkinter: Part 2


This is the second installment of our multi-part series on developing GUIs in Python using Tkinter. Check out the links below for the other parts to this series:

Introduction

In the first part of the StackAbuse Tkinter tutorial series, we learned how to quickly build simple graphical interfaces using Python. The article explained how to create several different widgets and position them on the screen using two different methods offered by Tkinter – but still, we barely scratched the surface of the module's capabilities.

Get ready for the second part of our tutorial, where we'll discover how to modify the appearance of our graphical interface during our program's runtime, how to cleverly connect the interface with the rest of our code, and how to easily get text input from our users.

Advanced Grid Options

In the last article, we got to know the grid() method that lets us orient widgets in rows and columns, which allows for much more ordered results than using the pack() method. Traditional grids have their disadvantages though, which can be illustrated by the following example:

import tkinter

root = tkinter.Tk()

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=1, row=0, sticky="nsew")  
frame3.grid(column=0, row=1, sticky="nsew")

label1 = tkinter.Label(frame1, text="Simple label")  
button1 = tkinter.Button(frame2, text="Simple button")  
button2 = tkinter.Button(frame3, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')

root.mainloop()  

Output:

The code above should be easily understandable for you if you went through the first part of our Tkinter tutorial, but let's do a quick recap anyway. In line 3, we create our main root window. In lines 5-7 we create three frames: we define that the root is their parent widget and that their edges will be given a subtle 3D effect. In lines 9-11 the frames are distributed inside the window using the grid() method. We indicate the grid cells that are to be occupied by each widget and we use the sticky option to stretch them horizontally and vertically.

In lines 13-15 we create three simple widgets: a label, a button that does nothing, and another button that closes (destroys) the main window – one widget per frame. Then, in lines 17-19 we use the pack() method to place the widgets inside their respective parent frames.

As you can see, three widgets distributed over two rows and two columns do not generate an aesthetically pleasing outcome. Even though frame3 has its entire row for itself, and the sticky option makes it stretch horizontally, it can only stretch within its individual grid cell's boundaries. The moment we look at the window we instinctively know that the frame containing button2 should span two columns – especially considering the important function that the button executes.

Well, luckily, the creators of the grid() method predicted this kind of scenario and offer a columnspan option. After applying a tiny modification to line 11:

import tkinter

root = tkinter.Tk()

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=1, row=0, sticky="nsew")  
frame3.grid(column=0, row=1, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label")  
button1 = tkinter.Button(frame2, text="Simple button")  
button2 = tkinter.Button(frame3, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')

root.mainloop()  

We can make our frame3 stretch all the way across the entire width of our window.

Output:

The place() Method

Usually, when building nice and ordered Tkinter-based interfaces, the pack() and grid() methods should satisfy all your needs. Still, the package offers one more geometry manager – the place() method.

The place() method is based on the simplest principles out of all three of Tkinter's geometry managers. Using place() you can explicitly specify your widget's position inside the window, either by directly providing its exact coordinates, or making its position relative to the window's size. Take a look at the following example:

import tkinter

root = tkinter.Tk()

root.minsize(width=300, height=300)  
root.maxsize(width=300, height=300)

button1 = tkinter.Button(root, text="B")  
button1.place(x=30, y=30, anchor="center")

root.mainloop()  

Output:

In lines 5 and 6 we specify that we want the dimensions of our window to be exactly 300 by 300 pixels. In line 8 we create a button. Finally, in line 9, we use the place() method to place the button inside our root window.

We provide three values. Using the x and y parameters, we define exact coordinates of the button inside the window. The third option, anchor, lets us define which part of the widget will end up at the (x,y) point. In this case, we want it to be the central pixel of our widget. Similarly to the sticky option of grid(), we can use different combinations of n, s, e and w to anchor the widget by its edges or corners.

The place() method doesn’t care if we make a mistake here. If the coordinates happen to point to a place outside our window’s boundaries, the button simply won’t be displayed. A safer way of using this geometry manager is to use coordinates relative to the window’s size.

import tkinter

root = tkinter.Tk()

root.minsize(width=300, height=300)  
root.maxsize(width=300, height=300)

button1 = tkinter.Button(root, text="B")  
button1.place(relx=0.5, rely=0.5, anchor="center")

root.mainloop()  

Output

In the example above, we modified line 9. Instead of absolute x and y coordinates, we now use relative coordinates. By setting relx and rely to 0.5, we make sure that regardless of the window's size, our button will be placed at its center.

Okay, there's one more thing about the place() method that you'll probably find interesting. Let's now combine examples 2 and 4 from this tutorial:

import tkinter

root = tkinter.Tk()

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=1, row=0, sticky="nsew")  
frame3.grid(column=0, row=1, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label")  
button1 = tkinter.Button(frame2, text="Simple button")  
button2 = tkinter.Button(frame3, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')

button1 = tkinter.Button(root, text="B")  
button1.place(relx=0.5, rely=0.5, anchor="center")

root.mainloop()  

Output:

In the example above we just took the code from example 2 and then, in lines 21 and 22, we created and placed our small button from example 4 inside the same window. You might be surprised that this code does not cause an exception, even though we clearly mix grid() and place() methods in the root window. Well, because of the simple and absolute nature of place(), you can actually mingle it with pack() and grid(). But only if you really have to.

The result, in this case, is obviously pretty ugly. If the centered button were bigger, it would affect the usability of the interface. Oh, and as an exercise, you can try moving lines 21 and 22 above the definitions of the frames and see what happens.

It is usually not a good idea to use place() in your interfaces. Especially in larger GUIs, setting (even relative) coordinates for every single widget is just a lot of work and your window can become messy very quickly – either if your user decides to resize the window, or especially if you decide to add more content to it.

Configuring the Widgets

The appearance of our widgets can be changed while the program is running. Most of the cosmetic aspects of the elements of our windows can be modified in our code with the help of the configure option. Let's take a look at the following example:

import tkinter

root = tkinter.Tk()

def color_label():  
    label1.configure(text="Changed label", bg="green", fg="white")

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=1, row=0, sticky="nsew")  
frame3.grid(column=0, row=1, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label")  
button1 = tkinter.Button(frame2, text="Configure button", command=color_label)  
button2 = tkinter.Button(frame3, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')

root.mainloop()  

Output:

In lines 5 and 6 we added a simple definition of a new function. Our new color_label() function configures the state of label1. The options that the configure() method takes are the same options that we use when we create new widget objects and define initial visual aspects of their appearance.

In this case, pressing the freshly renamed "Configure button" changes the text, background color (bg), and foreground color (fg – in this case it is the color of the text) of our already-existing label1.

Now, let's say we add another button to our interface that we want to be used in order to color other widgets in a similar manner. At this point, the color_label() function is able to modify just one specific widget displayed in our interface. In order to modify multiple widgets, this solution would require us to define as many identical functions as the total number of widgets we'd like to modify. This would be possible, but obviously a very poor solution. There are, of course, ways to reach that goal in a more elegant way. Let's expand our example a little bit.

import tkinter

root = tkinter.Tk()

def color_label():  
    label1.configure(text="Changed label", bg="green", fg="white")

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame4 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame5 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=0, row=1, sticky="nsew")  
frame3.grid(column=1, row=0, sticky="nsew")  
frame4.grid(column=1, row=1, sticky="nsew")  
frame5.grid(column=0, row=2, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label 1")  
label2 = tkinter.Label(frame2, text="Simple label 2")  
button1 = tkinter.Button(frame3, text="Configure button 1", command=color_label)  
button2 = tkinter.Button(frame4, text="Configure button 2", command=color_label)

button3 = tkinter.Button(frame5, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
label2.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')  
button3.pack(fill='x')

root.mainloop()  

Output:

Okay, so now we have two labels and three buttons. Let's say we want "Configure button 1" to configure "Simple label 1" and "Configure button 2" to configure "Simple label 2" in the exact same way. Of course, the code above doesn't work this way – both buttons execute the color_label() function, which still only modifies one of the labels.

Probably the first solution that comes to your mind is modifying the color_label() function so that it takes a widget object as an argument and configures it. Then we could modify the button definition so that each of them passes its individual label in the command option:

# ...

def color_label(any_label):  
    any_label.configure(text="Changed label", bg="green", fg="white")

# ...

button1 = tkinter.Button(frame3, text="Configure button 1", command=color_label(label1))  
button2 = tkinter.Button(frame4, text="Configure button 2", command=color_label(label2))

# ...

Unfortunately, when we run this code, the color_label() function is executed the moment the buttons are created, which is not the desired outcome.
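To see why, note that Python evaluates `color_label(label1)` at the moment the `Button(...)` constructor runs, and passes its return value (`None`) as the `command`. Here is a minimal sketch of that behaviour; the `make_button()` helper is a hypothetical stand-in for the widget constructor so the snippet runs without a display:

```python
calls = []

def color_label(any_label):
    # Records the call; stands in for the real configure() call.
    calls.append(any_label)

# A plain stand-in for the tkinter.Button constructor: it only stores
# whatever it receives as `command`.
def make_button(command=None):
    return {"command": command}

# The buggy pattern: color_label("label1") is *called* right here,
# during construction, and its return value (None) becomes `command`.
button = make_button(command=color_label("label1"))

print(calls)              # ['label1'] -- already executed!
print(button["command"])  # None -- nothing left for a click to run
```

The click handler therefore ends up being `None`, while the configuration has already happened once, at startup.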

So how do we make it work properly?

Passing Arguments via Lambda Expressions

Lambda expressions offer a special syntax to create so-called anonymous functions, defined in a single line. Going into details about how lambdas work and when they are usually utilized is not the goal of this tutorial, so let's focus on our case, in which lambda expressions definitely come in handy.

import tkinter

root = tkinter.Tk()

def color_label(any_label):  
    any_label.configure(text="Changed label", bg="green", fg="white")

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame4 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame5 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=0, row=1, sticky="nsew")  
frame3.grid(column=1, row=0, sticky="nsew")  
frame4.grid(column=1, row=1, sticky="nsew")  
frame5.grid(column=0, row=2, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label 1")  
label2 = tkinter.Label(frame2, text="Simple label 2")  
button1 = tkinter.Button(frame3, text="Configure button 1", command=lambda: color_label(label1))  
button2 = tkinter.Button(frame4, text="Configure button 2", command=lambda: color_label(label2))

button3 = tkinter.Button(frame5, text="Apply and close", command=root.destroy)

label1.pack(fill='x')  
label2.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')  
button3.pack(fill='x')

root.mainloop()  

Output:

We modified the color_label() function the same way as we did in the previous shortened example. We made it accept an argument, which in this case can be any label (other widgets with text would work as well) and configured it by changing its text, text color, and background color.

The interesting part is lines 22 and 23. Here, we actually define two new lambda functions that pass different arguments to the color_label() function and execute it. This way, we avoid invoking the color_label() function the moment the buttons are initialized.
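As a side note, the standard library's `functools.partial` achieves the same deferred call as a lambda: it freezes a function together with its arguments into a new callable that Tkinter can invoke later. A small sketch of the idea, using a plain function and hypothetical string labels so it runs without a display:

```python
from functools import partial

def color_label(any_label):
    # Stand-in for the configure() call from the example above.
    return f"configured {any_label}"

# Like the lambdas, partial() stores the argument now but defers the call:
command1 = partial(color_label, "label 1")
command2 = partial(color_label, "label 2")

# Nothing has run yet; Tkinter would invoke these on a button click:
print(command1())  # configured label 1
print(command2())  # configured label 2
```

In the example above, `command=lambda: color_label(label1)` could therefore also be written as `command=partial(color_label, label1)`.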

Getting User Input

We're getting closer to the end of the second article of our Tkinter tutorial series, so at this point, it would be good to show you a way of getting input from your program's user. To do so, the Entry widget can be useful. Look at the following script:

import tkinter

root = tkinter.Tk()

def color_label(any_label, user_input):  
    any_label.configure(text=user_input, bg="green", fg="white")

frame1 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame2 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame3 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame4 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame5 = tkinter.Frame(root, borderwidth=2, relief='ridge')  
frame6 = tkinter.Frame(root, borderwidth=2, relief='ridge')

frame1.grid(column=0, row=0, sticky="nsew")  
frame2.grid(column=0, row=1, sticky="nsew")  
frame3.grid(column=1, row=0, sticky="nsew")  
frame4.grid(column=1, row=1, sticky="nsew")  
frame5.grid(column=0, row=2, sticky="nsew", columnspan=2)  
frame6.grid(column=0, row=3, sticky="nsew", columnspan=2)

label1 = tkinter.Label(frame1, text="Simple label 1")  
label2 = tkinter.Label(frame2, text="Simple label 2")  
button1 = tkinter.Button(frame3, text="Configure button 1", command=lambda: color_label(label1, entry.get()))  
button2 = tkinter.Button(frame4, text="Configure button 2", command=lambda: color_label(label2, entry.get()))

button3 = tkinter.Button(frame5, text="Apply and close", command=root.destroy)

entry = tkinter.Entry(frame6)

label1.pack(fill='x')  
label2.pack(fill='x')  
button1.pack(fill='x')  
button2.pack(fill='x')  
button3.pack(fill='x')  
entry.pack(fill='x')

root.mainloop()  

Output:

Take a look at lines 5 and 6. As you can see, the color_label() function accepts a new argument now. This argument – a string – is then used to modify the configured label's text parameter. Additionally, in line 29 we create a new Entry widget (and in line 36 we pack it inside a new frame created in line 13).

In lines 24 and 25, we can see that each of our lambda functions also passes one additional argument. The get() method of the Entry class returns the string that the user typed into the entry field. So, as you probably already suspect, after clicking the "configure" buttons, the text of the labels assigned to them is changed to whatever the user typed into our new entry field.

Conclusion

I hope this part of the tutorial filled some gaps in your understanding of the Tkinter module. Although some advanced features of Tkinter might seem a bit tricky at first, the general philosophy of building interfaces using the most popular GUI package for Python is very simple and intuitive.

Stay tuned for the last part of our Tkinter basics tutorial, where we'll discover some very clever shortcuts that let us create complex user interfaces with very limited code.

Techiediaries - Django: Angular 7 Tutorial: Introducing Angular for Python Developers


Angular 7 is out and we'll use it to continue with our front-end tutorial series designed for Python developers.

This tutorial is part of an ongoing series for teaching Angular to Python developers. That said, it can also be followed by front-end developers who don't use Python as their back-end language.

Before diving into the practical steps of developing a full-stack Python & Angular 7 web application, let's first learn the basics of this front-end framework and how to get started using it.

In the previous tutorial, we learned how to integrate Angular with Django. This tutorial (now updated to Angular 7) is dedicated to teaching you how to get started with v7.

You can also learn how to consume a Django RESTful API from an Angular interface in this tutorial, which used v6.

Throughout this beginner's series, you'll learn how to use v7 to build client-side web applications for mobile and desktop with a Django back-end.

v7 was just released and has many new features under the hood, particularly regarding the Angular CLI 7 tool-chain. One amazing feature you'll love is CLI Prompts, which allow you to interactively choose the libraries you want to include in your project, such as routing. In this tutorial series we'll also learn how to upgrade our previously built project from v6 to v7.

Throughout this tutorial series, we’ll learn:

  • how to build full-stack web applications with a REST API and JWT authentication,
  • how to use the Angular CLI 7 to quickly create a front-end project and generate components, pipes, directives, and services,
  • routing using the Angular router,
  • forms – dynamic and template-based,
  • consuming REST APIs using HttpClient and RxJS 6 Observables,
  • how to use Angular Material to build professional-grade UIs,
  • and, if you still want to use Bootstrap, how to integrate Bootstrap 4 with Angular.

This first tutorial is an in-depth introduction to Angular aimed at new developers who have little experience with JavaScript client-side frameworks and want to learn the essential concepts of Angular.

What's a Framework and Why Use One?

A JavaScript or client-side framework is an abstraction that provides developers with a set of tools to easily and efficiently develop front-end web applications. Most frameworks dictate many aspects of your web projects like directory structure and configuration files and different tools that can be used for adding essential functionalities like testing.

A client-side framework is built on top of a client-side programming language to abstract away the low-level APIs of the language and the browser, making developers more productive. In fact, there is only one client-side language, JavaScript – the lingua franca of the web and the only language that web browsers understand – but there are also more sophisticated and modern programming languages that compile to JavaScript, such as TypeScript and CoffeeScript, which means they can also be the base of a client-side framework.

Frameworks are all the rage nowadays and most serious JS developers use a framework for building front-end apps and interfaces instead of using plain JavaScript or jQuery.

Most JavaScript frameworks are said to be opinionated, which means their creators enforce their own philosophy of how web projects should be configured and organized. This also means developers have to learn the abstractions and concepts provided by the framework, besides learning the base programming language.

Frameworks provide abstractions for working with many aspects of development, for example DOM manipulation and Ajax/HTTP. If a technology deals with only one aspect, it's usually called a library. For example, popular libraries like React and Vue.js deal only with the view, or UI, of an application by using a virtual DOM and diffing it against the real DOM, which provides better performance.

Nowadays, powerful and modern JavaScript frameworks have emerged and taken the web by storm. Instead of websites with poorly structured JS or jQuery code, we now have complete web apps built with best practices and a clear code structure, featuring complex and rich UIs. These modern client-side web apps use JavaScript heavily, which impacts performance and, as a result, the user experience; so even though web browsers have become more powerful, we still need to follow best practices and use battle-tested tools and patterns, which client-side frameworks try to help with.

Introducing Angular

AngularJS was the most popular client-side framework among JavaScript developers for many years. Google introduced AngularJS in 2012. It's based on a variation of the very popular Model-View-Controller pattern which is called Model-View-*.

The AngularJS framework was built on top of JavaScript with the aim of decoupling the business logic of an application from the low-level DOM manipulation and creating dynamic websites. Developers could use it either to create full-fledged SPAs and rich web applications or simply to control a portion of a web page, which makes it suitable for different scenarios and developer requirements.

Data Binding

Among the powerful concepts introduced by AngularJS is data binding, which enables the view to be updated automatically whenever the data (the model layer) changes, and vice versa.

Directives

The concept of directives was also introduced by AngularJS; it allows developers to create their own custom HTML tags.

Dependency Injection

The other introduced concept is Dependency Injection, which allows developers to inject services (singletons that encapsulate a unique, reusable piece of functionality within an application) into other components, encouraging code reuse.
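The pattern can be sketched in plain TypeScript (an illustrative sketch with a hand-rolled "injector"; Angular's real injector does this wiring for you):

```typescript
// Minimal dependency injection sketch: the component does not create
// its own Logger; one is injected through the constructor, so the same
// singleton instance can be shared (and swapped out in tests).
class Logger {
  messages: string[] = [];
  log(msg: string): void {
    this.messages.push(msg);
  }
}

class AccountComponent {
  constructor(private logger: Logger) {}
  save(): void {
    this.logger.log("account saved");
  }
}

// A trivial "injector": one shared Logger instance for every consumer.
const logger = new Logger();
const a = new AccountComponent(logger);
const b = new AccountComponent(logger);
a.save();
b.save();
console.log(logger.messages.length); // both components wrote to the same service
```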

Angular Features

In the beginning there was AngularJS, which took the web by storm. It became very popular among client-side JavaScript developers and supercharged them with a set of best patterns from the software development world, like the popular MVC (Model-View-Controller) architectural pattern and Dependency Injection, along with factories, services, modules, etc. This made structuring large JavaScript apps much easier than before. Google then continued to innovate by creating Angular 2, the next version of Angular, which was completely re-written from scratch in TypeScript instead of JavaScript. This opened the door to a new set of features, since TypeScript is a statically typed language with strong types and OOP (Object Oriented Programming) concepts similar to popular languages like Java. The Angular team has since concentrated on improving Angular by releasing a new version every six months and following semantic versioning, starting with v4 (v3 was skipped), then v5, v6 and v7. Each version introduced many new features, including performance improvements and new tooling. Let's briefly see the new features that came with each version:

Angular 7 New Features

In this section we'll see a subset of the features of v7; you can refer to this article for more details.

The CLI Prompts

Angular 7 introduced a much-needed feature that enables the CLI to run commands like ng new or ng add interactively. For example, if you want to create a project using the ng new command, the CLI will ask you if you would like to add routing. If your answer is yes, it will install the required dependencies, set up a routing module, import it into the main module automatically, and add a router outlet in the main component. The CLI will also ask you about the format you want to use for stylesheets and give you options (CSS, Sass, SCSS, etc.) to choose from.

CLI prompts can also be customized using the angular.json file. Not just that, they can also be used with Angular Schematics to let developers prompt users when installing their libraries, which can be done by using the x-prompt key inside a Schematics collection.

Using CLI Budgets by Default

With Angular 7, new projects default to using Budgets in the CLI, which will warn developers when the initial bundle exceeds 2MB in size and throw an error when it exceeds 5MB. These limits can be easily changed from the angular.json file.

Virtual Scrolling

Virtual scrolling is a strategy mostly used in mobile UI libraries that allows developers to maintain performance while scrolling through a large list of items.

Now the Angular 7 Material CDK has added support for virtual scrolling. You simply need to use the <cdk-virtual-scroll-viewport> component to work with large lists of items. This works by rendering only the items that actually fit in the visible part of the app's UI.
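The core idea behind virtual scrolling can be sketched independently of the CDK (illustrative only; the real <cdk-virtual-scroll-viewport> handles all of this for you):

```typescript
// Given a fixed item height, compute which slice of a large list is
// actually visible and therefore worth rendering.
function visibleRange(
  scrollTop: number,
  viewportHeight: number,
  itemHeight: number,
  totalItems: number
): { start: number; end: number } {
  const start = Math.floor(scrollTop / itemHeight);
  const end = Math.min(
    totalItems,
    Math.ceil((scrollTop + viewportHeight) / itemHeight)
  );
  return { start, end };
}

// 10,000 items, 20px tall, in a 400px viewport scrolled to 1000px:
// only items 50..70 need to exist in the DOM.
console.log(visibleRange(1000, 400, 20, 10000)); // { start: 50, end: 70 }
```

As the user scrolls, the viewport recomputes this window and recycles the DOM nodes outside it.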

You can read more information from Angular Material docs.

Support for Drag and Drop

With Angular 7, you can use drag and drop without resorting to any external libraries, as the support is built right into the CDK.

You can read more about drag and drop.

Angular 6 New Features

Angular 6 brought a set of new features and additions. Most of the work in this version went into the tool-chain and the CLI.

Let's briefly go over the most important ones:

ng add and ng update Commands

The Angular CLI v6 introduced two useful commands:

  • ng add: This new command allows you to quickly add or install new libraries, including adding the required configuration for you behind the scenes. Popular libraries such as Angular Material or ng-bootstrap can now be added on the fly without adding any settings manually on your part. For example, to add Bootstrap to your project you only need to issue the following command:
$ ng add @ng-bootstrap/schematics

You can also add support for new libraries by using Angular 6 Schematics to create schematics for them.

  • Using ng update, it's easier than before to update your Angular 4|5 projects to Angular 6. You can also use Schematics to make it easy to integrate third-party libraries with ng update.

New Configuration File: angular.json instead of .angular-cli.json

With Angular 6, the CLI configuration file .angular-cli.json was renamed to angular.json. The overall structure of angular.json has also changed.

The Angular CLI 6 now generates a workspace, which can include multiple apps, among them one default app. So you can have multiple apps per project, and you can also add libraries as part of the project (ng g library my-lib).

Schematics

Schematics is a powerful workflow tool for Angular. It can be used to apply transforms to your project, such as creating new components, updating old code automatically etc. This will allow you to build frameworks on top of your project which can boost your productivity like never before.

Ivy: The New Renderer

The Angular team has re-written the Angular renderer. It's code-named Ivy. This new renderer will allow you to produce smaller bundles, comparable to Preact for example. Ivy has experimental support in Angular 6 and can be enabled with a configuration option.

Angular Elements

With Angular 6 Elements, we can develop standard web components or custom elements that can be used natively in modern web browsers with other Angular projects and also with any other framework such as React or Vue or even with plain vanilla JavaScript.

Support for TypeScript 2.7

Angular 6 depends on TypeScript 2.7.

Support for RxJS 6

Angular 6 has support for RxJS 6. RxJS 6 brings new changes and features, such as new import paths and tree-shakability, resulting in even smaller Angular bundles.
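The pipeable-operator style that makes RxJS 6 tree-shakable can be illustrated with a plain TypeScript sketch over arrays (this is not the real RxJS API, just the shape of the idea):

```typescript
// Each operator is a standalone function that transforms a sequence;
// because operators are plain imports rather than prototype patches,
// a bundler can drop the ones you never use (tree shaking).
type Operator<A, B> = (source: A[]) => B[];

const map = <A, B>(fn: (a: A) => B): Operator<A, B> =>
  (source) => source.map(fn);

const filter = <A>(fn: (a: A) => boolean): Operator<A, A> =>
  (source) => source.filter(fn);

function pipe<A>(source: A[], ...ops: Operator<any, any>[]): any[] {
  // Apply each operator to the output of the previous one.
  return ops.reduce((acc, op) => op(acc), source as any[]);
}

// Same shape as rxjs: of(1, 2, 3, 4).pipe(filter(...), map(...))
const result = pipe(
  [1, 2, 3, 4],
  filter((n: number) => n % 2 === 0),
  map((n: number) => n * 10)
);
console.log(result); // the even numbers, each multiplied by ten
```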

Angular 5 Features

Angular 5, code-named pentagonal-donut, brought new features and internal changes which make Angular applications faster and smaller. In this section we will go over the most important changes, with instructions on how to upgrade your existing Angular 2+ project to the latest version.

  • As of Angular 5.0.0, the build optimizer is enabled by default which applies a series of optimizations to builds.

  • The Angular team has also improved the compiler, which now makes rebuilds faster (especially for production and AOT builds) thanks to incremental compilation.

  • The Angular team has added many features to Angular decorators.

  • Developers can now ship faster and smaller bundles by removing white spaces.

  • The Angular compiler now supports TypeScript 2.3 Transforms, a new feature that enables you to hook into the standard TypeScript compilation pipeline. Make sure to use the --aot switch to enable this feature:

$ ng serve --aot
  • You can now get rid of white spaces in a template's code by setting preserveWhitespaces to false in the component's decorator. You can also turn this on globally by setting "preserveWhitespaces": false under angularCompilerOptions in tsconfig.json. This will help reduce your app's final bundle size.

  • You can now use lambdas inside an Angular component's decorator.

  • New improved number, date, and currency pipes that have better standardization support across browsers without i18n polyfills.

  • The old HTTP module is now deprecated in favor of HttpClient, which was introduced in Angular 4.3.

Angular 4 Features

Angular 4 came with many improvements and new features such as:

  • Size and performance: Angular 4 applications are smaller by hundreds of kilobytes, thanks to improvements to the View Engine which have reduced the size of generated component code by around 60%.

  • Animations are no longer part of Angular core, which means apps that don't use them don't need to include the extra code in their final bundles. You can use animations in your apps through BrowserAnimationsModule, which can be imported from @angular/platform-browser/animations.

  • Improved *ngIf and *ngFor: *ngIf now supports the else syntax; for example, it's now easy to write templates this way:

<div *ngIf="ready; else loading"><p>Hello Angular 4</p></div>
<ng-template #loading>Still loading</ng-template>

If the ready variable is false, Angular will show the loading template.

You can also assign and use local variables inside both *ngIf and *ngFor expressions, for example:

<div *ngFor="let el of list as users">
  {{ el }}
</div>
  • The adoption of Angular Universal as part of Angular: the Angular team has adopted Angular Universal, a community-driven project that allows developers to use server-side rendering for Angular apps. It's available from @angular/platform-server.

jQuery vs. Angular

jQuery is a library that sits on top of vanilla JavaScript and provides a rich set of features that can be easily learned. It can be used across all web browsers to manipulate the DOM.

jQuery was the most popular front-end library for many years, and nowadays it still powers the front-end of many websites.

One of the reasons jQuery became popular is the difficulty of manipulating the DOM in the browser. jQuery came with an easy-to-use API that works across all the popular browsers, without you worrying about your website not working in some browser.

Nowadays, browsers and JavaScript are more mature, browser compatibility issues are nicely addressed by API standards, and the front-end ecosystem has become more vibrant, with sophisticated tools, frameworks and libraries like Webpack, Angular, React, Vue.js and Axios (or the standard Fetch API for doing HTTP), etc.

jQuery is now often used by developers who are unaware of what vanilla JavaScript can do nowadays, and of the new browser APIs that can replace most of the commonly used jQuery APIs.

Modern frameworks like Angular, React or Vue.js share a common general philosophy which is abstracting all direct operations with the DOM and using a component-based architecture.

Here is a list of differences between jQuery and Angular:

  • jQuery is primarily a DOM manipulation library; Angular is a complete platform for creating client-side mobile and web apps.
  • jQuery is mostly used to add interactivity to web pages; Angular is used to create full-fledged SPAs with advanced features such as routing.
  • jQuery does not offer advanced patterns like components, directives, pipes and two-way binding; Angular is all about a component-based architecture with features like routing, dependency injection etc.
  • jQuery can become very difficult to maintain when the project grows, but in the case of Angular, different tools, such as feature modules, are introduced to make working with large projects easier.

Why Would You Use Angular

Angular is an open-source and TypeScript-based platform for building client-side web applications as Single Page Applications. Angular provides features such as declarative templates, dependency injection and best patterns to solve everyday development problems.

But precisely, why Angular? Because:

  • It provides support for most platforms and web browsers: web, mobile, and desktop.

  • It's powerful and modern, with a complete ecosystem.

  • It can be used to develop native mobile apps with frameworks such as NativeScript and Ionic.

  • It's convenient and can be used with Electron to develop native desktop apps, etc.

  • Angular provides you with the tools and also with powerful software design patterns to easily manage your project.

  • It's using TypeScript instead of plain JavaScript, a strongly typed and OOP-based language created by Microsoft which provides features like strong types, classes, interfaces and even decorators etc.

  • It's batteries-included which means you don't have to look for a separate tool for different tasks. With Angular, you have built-in libraries for routing, forms and HTTP calls etc. You have templates with powerful directives and pipes. You can use the forms APIs to easily create, manipulate and validate forms.

  • Angular uses RxJS, a powerful reactive library for working with Observables.

  • Angular is a component-based framework, which means decoupled and re-usable components are the basic building blocks of your application.

  • In Angular, DOM manipulation is abstracted away behind a set of powerful APIs.

  • Angular is a powerful framework that can be also used to build PWAs (Progressive Web Apps).

Getting Started with Angular 7

Now let's see how we can start using the latest Angular 7 version.

Prior knowledge of Angular is not required for this tutorial series but you'll need to have a few requirements:

  • Prior working experience or understanding of HTML and CSS.
  • Familiarity with TypeScript/JavaScript.

Updating to Angular 7 from v6

In case you have started a project with Angular v6, you can update it to Angular 7 instead of creating a new one from scratch. This can be done in a few steps. Please refer to this tutorial on how to update existing Angular CLI projects for the full list of instructions. In fact, thanks to the amazing work done in v6, it's now easier than ever to upgrade to the latest version.

Using GitHub Repository To Generate a Project

You can clone a quick-start Angular project from GitHub to generate a new project.

You need to have Git installed on your system then run the following:


git clone https://github.com/angular/quickstart my-v7-project
cd my-v7-project
npm install
npm start

You can find more information here.

In this tutorial, we’ll use the Angular CLI v7 to generate our Angular 7 front-end project. It’s also the recommended way by the Angular team.

Generating a New Angular 7 Project with Angular CLI v7

Developers can use different ways to start a new project, such as:

  • Installing Angular 7 by hand in a new project generated with npm init,
  • Installing and using the CLI v7 to generate a new project,
  • Upgrading from an existing Angular 6 project or any previous version (refer to the sections above for more information).

The best way, though, is using the Angular CLI, which is recommended by the Angular team. A project generated via the CLI has many features and tools built in, like testing for example, which makes it easy to start developing enterprise-grade apps in no time, without dealing with complex configurations and tools like Webpack.

Requirements

This tutorial has a few requirements. The Angular CLI depends on Node.js, so you need to have Node and NPM — Node 8.9 or higher, together with NPM 5.5.1 — installed on your development machine. The easiest way is to go to the official Node.js website and get the appropriate installer for your operating system.


For Ubuntu 16.04 users I recommend following this tutorial to successfully install Node.js and NPM on your Ubuntu machine.

Now, to make sure you have Node.js installed, open a terminal and run the following command:

node -v

You should get the version of the installed Node.js 8.9+ platform.

Angular 7 tutorial - node version

Node version ~8.9+

Installing Angular CLI 7

The Angular CLI is a powerful command line utility built by the Angular team to make it easy for developers to generate Angular projects without dealing with complex Webpack configurations or any other tool. It provides a fully-featured tool for working with your project, from generating constructs such as components, pipes and services, to serving and building production-ready bundles, etc.

To use the Angular CLI, you first need to install it via the npm package manager. Head over to your terminal and enter the following command:

$ npm install -g @angular/cli

Depending on your npm configuration, you may need to add sudo to install global packages.

A Primer on Angular CLI 7

After installing Angular CLI 7, you can run many commands. Let’s start by checking the version of the installed CLI:

$ ng version

You should get a similar output:

Angular 7 tutorial - CLI version

Angular CLI version ~ ng version

A second command that you might need to run is the help command:

$ ng help

This gives you complete usage help.

Angular 7 tutorial - CLI help

Angular CLI Usage ~ ng help

The CLI provides the following commands:

  • add: Adds support for an external library to your project.

  • build (b): Compiles an Angular app into an output directory named dist/ at the given output path. Must be executed from within a workspace directory.

  • config: Retrieves or sets Angular configuration values.

  • doc (d): Opens the official Angular documentation (angular.io) in a browser, and searches for a given keyword.

  • e2e (e): Builds and serves an Angular app, then runs end-to-end tests using Protractor.

  • generate (g): Generates and/or modifies files based on a schematic.

  • help: Lists available commands and their short descriptions.

  • lint (l): Runs linting tools on Angular app code in a given project folder.

  • new (n): Creates a new workspace and an initial Angular app.

  • run: Runs a custom target defined in your project.

  • serve (s): Builds and serves your app, rebuilding on file changes.

  • test (t): Runs unit tests in a project.

  • update: Updates your application and its dependencies. See https://update.angular.io/

  • version (v): Outputs Angular CLI version.

  • xi18n: Extracts i18n messages from source code.

Angular CLI 7 — Generating a New Project from Scratch

You can use Angular CLI 7 to quickly generate your Angular 7 project by running the following command in your terminal:

$ ng new frontend

frontend is the name of the project. You can — obviously — choose any valid name for your project. Since we'll create a full-stack application, I'm using frontend as the name of the front-end application.

As mentioned earlier, the CLI v7 will ask you Would you like to add Angular routing?, which you can answer with y (yes) or N (no, the default option). It will also ask you about the stylesheet format you want to use (such as CSS). Choose your options and hit Enter to continue.

Angular 7 project structure

After that, you'll have your project created, with a directory structure and a bunch of configuration and code files, mostly in TypeScript and JSON formats. Let's see the role of each file:

  • /e2e/: This folder contains end-to-end (simulating user behavior) tests of the website.
  • /node_modules/: All 3rd party libraries are installed to this folder using npm install.
  • /src/: It contains the source code of the application. Most work will be done here.
    • /app/: It contains modules and components.
    • /assets/: It contains static assets like images, icons and styles etc.
    • /environments/: It contains environment (production and development) specific configuration files.
    • browserslist: Needed by autoprefixer for CSS support.
    • favicon.ico: The favicon.
    • index.html: The main HTML file.
    • karma.conf.js: The configuration file for Karma (a testing tool)
    • main.ts: The main starting file from where the AppModule is bootstrapped.
    • polyfills.ts: Polyfills needed by Angular.
    • styles.css: The global stylesheet file for the project.
    • test.ts: This is a configuration file for Karma
    • tsconfig.*.json: The configuration files for TypeScript.
  • angular.json: It contains the configurations for the CLI.
  • package.json: It contains basic information of the project (name, description and dependencies etc.)
  • README.md: A Markdown file that contains a description of the project.
  • tsconfig.json: The configuration file for TypeScript.
  • tslint.json: The configuration file for TSlint (a static analysis tool)

Angular CLI 7 — Serving your Project with a Development Server

Angular CLI provides a complete tool-chain for developing front-end apps on your local machine. As such, you don't need to install a local server to serve your project — you can simply use ng serve from your terminal to serve your project locally. First navigate inside your project's folder and run the following commands:

$ cd frontend
$ ng serve

You can now navigate to the http://localhost:4200/ address to start playing with your front-end application. The page will automatically live-reload if you change any source file.

You can also use a host address and port other than the default ones by providing new options. For example:

$ ng serve --host 0.0.0.0 --port 8080

Angular CLI 7 — Generating Components, Directives, Pipes, Services and Modules

To boost your productivity, Angular CLI provides a generate command to quickly generate basic Angular constructs such as components, directives, pipes, services and modules. For example, to generate a component, run:

$ ng generate component account-list

account-list is the name of the component. You can also use just g instead of generate. The Angular CLI will automatically add references to components, directives and pipes in app.module.ts.

If you want to add your component, directive or pipe to another module — other than the main application module, i.e. app.module.ts — for example to a feature module, you can simply prefix the name of the component with the module name and a slash, like a path:

$ ng g component account-module/account-list

account-module is the name of an existing module.

What is TypeScript

TypeScript is a strongly-typed superset of JavaScript developed by Microsoft. This means three things:

  • TS provides more features to the original JavaScript language.
  • TS doesn't get in the way if you still want to write plain JavaScript.
  • TypeScript also integrates well with the most used JavaScript libraries.

TypeScript is not the first attempt to create a superset of JavaScript, but it's by far the most successful one. It provides powerful OOP (Object Oriented Programming) features like inheritance, interfaces and classes, a declarative style, static typing and modules. Many of these features already exist in JavaScript, but they are different, since JS follows prototype-based OOP, not class-based OOP.

These TS features make it easy for developers to create complex and large JavaScript apps that are easier to maintain and debug.
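A small example of the features mentioned above — static types, interfaces, classes and inheritance (the names here are invented for illustration):

```typescript
// An interface describes a shape that classes can promise to satisfy.
interface Named {
  name: string;
}

class Animal implements Named {
  constructor(public name: string) {}
  speak(): string {
    return `${this.name} makes a sound`;
  }
}

class Dog extends Animal {
  // Class-based inheritance: override with more specific behavior.
  speak(): string {
    return `${this.name} barks`;
  }
}

// The interface lets both types mix in one statically-typed list.
const pets: Named[] = [new Animal("Cat"), new Dog("Rex")];
console.log(pets.map((p) => p.name));
console.log(new Dog("Rex").speak()); // Rex barks

// The compiler catches type errors before the code ever runs, e.g.:
// new Dog(42); // error: number is not assignable to string
```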

TypeScript is supported by two big companies in the software world; Microsoft, obviously because it's the creator but also by Google as it was used to develop Angular from v2 up to Angular 7 (the current version). It's also the official language and the recommended language to build Angular 2+ apps.

TypeScript is a compiled language, which means we'll need to transpile it into JavaScript to be able to run it in web browsers, which only understand one programming language: JavaScript. Fortunately, the TS transpiler integrates well with the majority of build systems and bundlers.

You can install the TypeScript compiler using npm, and then invoke it by running the tsc source-file.ts command from your terminal. This will generate a source-file.js JavaScript file with the same name. You can control many aspects of the compilation process using a tsconfig.json configuration file, e.g. specify the module system to compile to and where to output the compiled file(s).

For large projects, you'll want more advanced tools: task runners like Gulp and Grunt, and code bundlers like the popular Webpack.

You can use grunt-typescript and gulp-typescript plugins for integrating TypeScript with Gulp and Grunt which will allow you to pass the compiler options from your task runners.

For Webpack, you can use the loader to work with TypeScript.

More often than not, you'll need to use external JavaScript libraries in your project, and you'll also need their type definitions.

Type definitions are files that end with the .d.ts extension — They allow us to use TypeScript interfaces created by other developers for different JavaScript libraries to integrate seamlessly with TypeScript. These definitions are available from the DefinitelyTyped registry, from where we can install them.

To install them you can use the Typings tool. It has its own configuration file, called typings.json, where you need to configure the paths for type definitions. (Note that since TypeScript 2.0, type definitions can also simply be installed with npm from the @types scope.)

Angular vs. React vs. Vue

Angular, React and Vue are nowadays the most popular frameworks for front-end web development. That is one thing they have in common, but they also have many differences.

The first difference is that Angular is a complete platform for building front-end web apps, while React and Vue.js are libraries that only deal with the view layer of a front-end web application.

Now let's see some statistics about them:

  • Angular has 57 developers on their team, while Vue has 25 developers. For React, the number of developers on the team is unknown.
  • On GitHub, Angular has more than 40k stars and 755 contributors, React has more than 113k stars and 1,251 contributors, and Vue has more than 117k stars and 215 contributors.

This is a GitHub stars history for Angular vs. React and Vue, from timqian:

Angular 7 vs React vs Vue

npm trends is a website that displays the number of downloads for npm packages and compares them. This is the graph for Angular vs. React and Vue: Angular vs React vs Vue

Angular 7 Concepts

Angular is a component-based framework with many new concepts that encourage the DRY and separation-of-concerns principles. In this section, we'll briefly explain the most commonly used concepts in Angular.

Components

Components are the basic building blocks of an Angular 7 application. A component controls a part of the app's UI. It's encapsulated and reusable.

You can create a component by creating a TypeScript class and decorating it with the @Component decorator, available from the Angular core package (@angular/core).

A component's view is built using a unique HTML template associated with the component's class and also a stylesheet file that's used to style the HTML view.

This is an example of an Angular component:

import { Component } from '@angular/core';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent {
  title = 'angular7-router-demo';
}

We start by importing Component from the Angular Core package and we use it to decorate a TypeScript class.

The @Component decorator takes some meta information about the component:

  • selector: It's used to call the component from an HTML template, e.g. <app-component></app-component>, just like any other HTML tag.
  • templateUrl: It's used to specify the relative path to an HTML file that will be used as the component's template.
  • styleUrls: It's an array that specifies one or more stylesheets that can be used to style the component's view.

An Angular component has a life-cycle, from its creation to its destruction. There are many life-cycle events that you can hook into to execute code at those points.

Services

Angular services are singleton TypeScript classes that have only one instance throughout the app's lifetime. They provide methods that maintain data from the start of the application to its end.

A service is used to encapsulate business logic that would otherwise be repeated in many areas of your code. This helps developers follow the DRY (Don't Repeat Yourself) principle.

A service can be called by components and even by other services in the app. It's injected into a component's constructor via Dependency Injection.

Services are used to achieve DRY and separation of concerns in an Angular application. Along with components, they help break the application into re-usable and maintainable pieces of code that can be separated and even used in other apps.

Let's suppose that your application has many components that need to fetch data from a remote HTTP resource.

If you make an HTTP call to fetch the remote resource directly in each component, then each component repeats similar code for getting the same resource. Instead, you can use a service that encapsulates the part of the code that deals only with fetching remote resources (the server address and the specific resource to fetch can be passed as parameters to a service method). Then we can simply inject the service wherever we want to call the fetching logic. This is what's called separation of concerns: components are not responsible for doing specific tasks (in our case, fetching data); instead, a service does the task and passes the data back to the components.
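The idea can be sketched in plain TypeScript (illustrative only: the class names are invented, the "injection" is done by hand, and a fake fetcher stands in for a real HTTP client so the sketch stays self-contained):

```typescript
// The fetching logic lives in ONE service instead of being repeated
// in every component that needs the data.
type Fetcher = (url: string) => string;

class ApiService {
  constructor(private baseUrl: string, private fetch: Fetcher) {}
  getResource(path: string): string {
    // All components share this single piece of fetching logic.
    return this.fetch(`${this.baseUrl}/${path}`);
  }
}

class UserListComponent {
  constructor(private api: ApiService) {}
  load(): string {
    return this.api.getResource("users");
  }
}

class OrderListComponent {
  constructor(private api: ApiService) {}
  load(): string {
    return this.api.getResource("orders");
  }
}

// A fake fetcher stands in for a real HTTP client here.
const fakeFetch: Fetcher = (url) => `data from ${url}`;
const api = new ApiService("https://example.com/api", fakeFetch);
console.log(new UserListComponent(api).load());
console.log(new OrderListComponent(api).load());
```

Neither component knows how the data is fetched; swapping the fetcher (e.g. for a mock in tests) requires no component changes.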

Angular 7 Libraries

Angular 7 provides the same libraries as the previous versions. Let's see the most important ones:

HttpClient

Angular has its own powerful HTTP client that can be used to make calls to remote API servers, so you don't need to use external libraries like Axios, or even the standard Fetch API. In fact, HttpClient is based on the XMLHttpRequest interface, available in all major browsers.

HttpClient is an Angular service that's available from the @angular/common/http package.

Angular Router

The Angular router is a powerful client-side routing library that allows you to build SPAs or Single Page Apps. It provides advanced features such as multiple router outlets, auxiliary paths and nested routing.

Angular 7 didn't add many features to the router, except for some warnings that notify you when routing is activated outside the Angular zone.

Angular Forms

Angular provides developers with powerful APIs for creating and working with forms, and two approaches that you can choose from when dealing with forms: template-based forms, and model-based or reactive forms.

Again, Angular 7 didn't add any features to the forms APIs.

Angular Material

Angular Material is a modern UI library based on Google's Material Design spec, which provides common internationalized and themeable UI components that work across the web, mobile and desktop. It's built by the Angular team and integrates well with the Angular ecosystem.

In Angular 7 (and v6), you can use the CLI's ng add command to quickly add the required dependencies and configure Material in your project:

ng add @angular/material

Angular 7 added new features to this library, including drag-and-drop support, so you don't need an external library anymore, and also virtual scrolling, which allows you to efficiently scroll large sets of items without performance issues, particularly on mobile devices.

Conclusion

Thanks to Angular CLI 7, you can get started with Angular v7 by generating a new project quickly with a variety of flags to customize and control the generation process.

As a recap, we have seen different ways to create a new Angular 7 project.

We have also seen the new features of all Angular versions up until v7 such as ng add, ng update, Angular Schematics, Angular Elements, CLI Prompts, CLI Budgets and the minimal CLI flag etc.

We’ve generated a new Angular 7 project and seen the different CLI commands to serve, build and work with our project.

In the next tutorial, we're going to start learning about the fundamentals of Angular 7 starting with components.

We'll also use the acquired knowledge to develop a front-end application for a RESTful back-end created with Python and Django — the well-known framework for perfectionists with deadlines.

William Minchin: CName Plugin 1.2.1 for Pelican Released


CName is a plugin for Pelican, a static site generator written in Python.

CName creates a CNAME file in the root of your output directory. This is useful when you are publishing your site to GitHub Pages on a custom domain.

Updates

Writing these posts about new releases is often a little funny because the changes made are often so small that they don't really feel worthy of their own post, but collectively, they start adding up. So this post actually covers five releases combined: 1.0.3, 1.0.4, 1.1.0, 1.2.0, 1.2.1.

  • v1.0.3 was updated to include the Framework :: Pelican :: Plugins classifier that had been added to PyPI. Sadly, there hasn’t been much uptake of the classifier: I have 6 of the 7 packages listed. But maybe I shouldn’t be so surprised: Pelican plugins have traditionally been distributed via a large shared repository rather than via PyPI.
  • v1.0.4 was released to make sure the license file was included in the distribution uploaded to PyPI.
  • v1.1.0 was me reworking the release process to be based on my minchin.releaser. I was already using an earlier version of the scripts, but I find it helpful to have my release process standardized and semi-automated. This in turn is part of the reason there have been so many small releases: it’s easy to do. So easy, that these last three releases were all pushed out today!
  • v1.2.0 added support for protocol-less SITEURL settings. I’ve been in the process (for some time now) of moving my site to HTTPS. However, sometimes a site will be available on both HTTP and HTTPS, and so to serve the same files, you can specify links without a protocol using just the double slashes: i.e. SITEURL = "//minchin.ca". Because of this, the main part of my site (https://minchin.ca) is now available on both HTTP and HTTPS.
  • v1.2.1 limits some of the internal text replacements to try and avoid future bugs.
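The protocol-relative URL form described for v1.2.0 can be sketched with the standard library; the example page URLs below are just illustrations:

```python
from urllib.parse import urljoin

# A protocol-less URL (scheme-relative, starting with //) inherits the
# scheme of the page it is resolved against, so a single set of links
# works whether the site is served over HTTP or HTTPS.
print(urljoin("https://example.org/blog/", "//minchin.ca/feed"))  # https://minchin.ca/feed
print(urljoin("http://example.org/blog/", "//minchin.ca/feed"))   # http://minchin.ca/feed
```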

You should be able to update through pip without any issues:

pip install --upgrade minchin.pelican.plugins.cname

The code for this project is available on GitHub. Contributions are welcome!

Python Piedmont Triad User Group: PYPTUG Monthly Meeting (October): Altair, Ansible and more


Details

Come join PYPTUG at our next monthly meeting (October 30th 2018) to learn more about the Python programming language, modules and tools. Python is the language to learn if you've never programmed before, and at the other end, it is also a tool that no expert would do without.

Main talk:     Altair
presented by Martin DeWitt

bio:
Originally from Winston-Salem, Martin DeWitt is a former assistant professor of physics, who frequently used Python and IPython notebooks to teach both introductory and upper-level physics courses and labs. He is currently transitioning to a career in data science.

Abstract:
Altair is a statistical visualization Python library designed to facilitate the exploration of data by making it easy to generate interactive web-based visualizations. Using Pandas dataframes as data sources, Altair's API provides functionality to transform data (bin, sort, filter, and aggregate) and produce common graphs including histograms, line charts, scatter plots, and heatmaps. Graphs can be made interactive with features like panning, zooming, and filtering by mouse pointer selections. Most of the aspects of generating the visualizations -- axes, scales, legends, and interactive features -- are handled automatically, only requiring the user to employ Altair's concise declarative syntax to specify the connections between data columns in the dataframe and the various properties of the graph (axes, color, size, etc). With very few lines of code, you can generate rich, interactive, and portable web-based graphs.

In this presentation, I will first briefly introduce importing and viewing data using Pandas. I will then demonstrate some of the features of Altair for transforming data and creating both static and interactive graphs. We will work through a number of examples, step-by-step, from importing the data to a finalized graph. I intend for you to code along with me, so please be sure to bring your laptop.

For those who are interested in how the magic happens, Altair is based on Vega/Vega-Lite, which is a high-level grammar for producing interactive visualizations. "With Vega, visualizations are described in JSON, and generate interactive views using either HTML5 Canvas or SVG."(http://vega.github.io/) Altair works by taking specifications from the user through Python objects, generating the proper JSON code, and then using Vega to add a Canvas or SVG-based visualization to a web page. There are also renderers that allow Vega visualizations to be displayed in IPython and Jupyter notebooks.
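To make the dataframe-as-data-source idea concrete, here is a minimal sketch using a made-up dataset; the aggregation shown is the plain-pandas counterpart of the bin/filter/aggregate transforms Altair exposes (only pandas is assumed here, not Altair itself):

```python
import pandas as pd

# Hypothetical readings of the kind you would hand to alt.Chart(df)
df = pd.DataFrame({
    "city": ["Austin", "Austin", "Boston", "Boston"],
    "temp": [62, 65, 30, 33],
})

# Aggregate before plotting: mean temperature per city
means = df.groupby("city")["temp"].mean()
print(means.to_dict())  # {'Austin': 63.5, 'Boston': 31.5}
```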

Lightning talks!


We will have some time for extemporaneous "lightning talks" of 5-10 minutes' duration. If you'd like to give one, some talk suggestions were provided here if you are looking for inspiration. Or talk about a project you are working on.

When:

Tuesday, October 30th 2018
Meeting starts at 6:00PM

Where:

Wake Forest University, close to Polo Rd and University Parkway:
Manchester Hall
room: Manchester 241
Wake Forest University, Winston-Salem, NC 27109
And speaking of parking:  Parking after 5pm is on a first-come, first-serve basis.  The official parking policy is:
"Visitors can park in any general parking lot on campus. Visitors should avoid reserved spaces, faculty/staff lots, fire lanes or other restricted area on campus. Frequent visitors should contact Parking and Transportation to register for a parking permit."

Mailing List:

Don't forget to sign up to our user group mailing list:
It is the only step required to become a PYPTUG member.

Catalin George Festila: Python Qt5 - MP3 player example.

This tutorial with PyQt5 will allow us to play an MP3 file using QtMultimedia.
I used a test.mp3 file in the same folder with my python script.
This is the source script:
import sys

from PyQt5 import QtCore, QtWidgets, QtMultimedia

if __name__ == '__main__':
    app = QtWidgets.QApplication(sys.argv)
    # Resolve the MP3 file relative to the current working directory
    filename = 'test.mp3'
    fullpath = QtCore.QDir.current().absoluteFilePath(filename)
    media = QtCore.QUrl.fromLocalFile(fullpath)
    content = QtMultimedia.QMediaContent(media)
    player = QtMultimedia.QMediaPlayer()
    player.setMedia(content)
    player.play()
    sys.exit(app.exec_())

Python Software Foundation: Python Software Foundation Fellow Members for Q3 2018

We are happy to announce our 2018 3rd Quarter Python Software Foundation Fellow Members: 

Stefan Behnel

Blog, Github
Andrew Godwin
Website, Twitter
David Markey
Twitter
Eduardo Mendes
Github, Twitter, LinkedIn
Claudiu Popa
Github

Congratulations! Thank you for your continued contributions. We have added you to our Fellow roster online.

The above members have contributed to the Python ecosystem by maintaining popular libraries/tools, organizing Python events, hosting Python meet ups, teaching via YouTube videos, contributing to CPython, and overall being great mentors in our community. Each of them continues to help make Python more accessible around the world. To learn more about the new Fellow members, check out their links above.

If you would like to nominate someone to be a PSF Fellow, please send a description of their Python accomplishments and their email address to psf-fellow at python.org. Here is the nomination review schedule for 2018:

  • Q4: October to the end of December (01/10 - 31/12). The cut-off for quarter four will be November 20. New Fellows will be announced before December 31.

We are looking for a few more voting members to join the Work Group to help review nominations. If you are a PSF Fellow and would like to join, please write to psf-fellow at python.org.

Python Celery - Weekly Celery Tutorials and How-tos: Celery Execution Pools: What is it all about?


Have you ever asked yourself what happens when you start a Celery worker? Ok, it might not have been on your mind. But you might have come across things like execution pool, concurrency settings, prefork, gevent, eventlet and solo. So, what is it all about? How does it all fit together? And how is it related to the mechanics of a Celery worker?

The Celery worker

When you start a Celery worker on the command line via celery --app=..., you just start a supervisor process. The Celery worker itself does not process any tasks. It spawns child processes (or threads) and deals with all the book keeping stuff. The child processes (or threads) execute the actual tasks. These child processes (or threads) are also known as the execution pool.

The size of the execution pool determines the number of tasks your Celery worker can process. The more processes (or threads) the worker spawns, the more tasks it can process concurrently. If you need to process as many tasks as quickly as possible, you need a bigger execution pool. At least, that is the idea.

In reality, it is more complicated. The answer to the question of how big your execution pool should be depends on whether you use processes or threads. And the answer to the question of whether you should use processes or threads depends on what your tasks actually do.

The --pool option

You can choose between processes or threads, using the --pool command line argument. Use a gevent execution pool, spawning 100 green threads (you need to pip-install gevent):

celery worker --app=worker.app --pool=gevent --concurrency=100

Don’t worry too much about the details for now (why are threads green?). We will go into more details if you carry on reading. Celery supports four execution pool implementations:

  • prefork
  • solo
  • eventlet
  • gevent

The --pool command line argument is optional. If not specified, Celery defaults to the prefork execution pool.

Prefork

The prefork pool implementation is based on Python’s multiprocessing package. It allows your Celery worker to side-step Python’s Global Interpreter Lock and fully leverage multiple processors on a given machine.

You want to use the prefork pool if your tasks are CPU bound. A task is CPU bound, if it spends the majority of its time using the CPU (crunching numbers). Your task could only go faster if your CPU were faster.

The number of available cores limits the number of concurrent processes. It only makes sense to run as many CPU bound tasks in parallel as there are CPUs available. Which is why Celery defaults to the number of CPUs available on the machine if the --concurrency argument is not set. Start a worker using the prefork pool, using as many processes as there are CPUs available:

celery worker --app=worker.app

Solo

The solo pool is a bit of a special execution pool. Strictly speaking, the solo pool is neither threaded nor process-based. And more strictly speaking, the solo pool is not even a pool as it is always solo. And even more strictly speaking, the solo pool contradicts the principle that the worker itself does not process any tasks.

The solo pool runs inside the worker process. It runs inline which means there is no bookkeeping overhead. Which makes the solo worker fast. But it also blocks the worker while it executes tasks. Which has some implications when remote-controlling workers.

celery worker --app=worker.app --pool=solo

The solo pool is an interesting option when running CPU intensive tasks in a microservices environment. In a Docker Swarm or Kubernetes context, managing the worker pool size can be easier than managing multiple execution pools. Instead of managing the execution pool size per worker(s) you manage the total number of workers. 

Eventlet and gevent

Let’s say you need to execute thousands of HTTP GET requests to fetch data from external REST APIs. The time it takes to complete a single GET request depends almost entirely on the time it takes the server to handle that request. Most of the time, your tasks wait for the server to send the response, not using any CPU.

The bottleneck for this kind of task is not the CPU. The bottleneck is waiting for an Input/Output operation to finish. This is an Input/Output-bound task (I/O bound). The time the task takes to complete is determined by the time spent waiting for an input/output operation to finish.

If you run a single process execution pool, you can only handle one request at a time. It takes a long time to complete those thousands of GET requests. So you spawn more processes. But there is a tipping point where adding more processes to the execution pool has a negative impact on performance. The overhead of managing the process pool becomes more expensive than the marginal gain for an additional process.

In this scenario, spawning hundreds (or even thousands) of threads is a much more efficient way to increase capacity for I/O-bound tasks. Celery supports two thread-based execution pools: eventlet and gevent. Here, the execution pool runs in the same process as the Celery worker itself. To be precise, both eventlet and gevent use greenlets and not threads. 

Greenlets - also known as green threads, cooperative threads or coroutines - give you threads, but without using threads. Threads are managed by the operating system kernel. The operating system uses a general-purpose scheduler to switch between threads. This general-purpose scheduler is not always very efficient.

Greenlets emulate multi-threaded environments without relying on any native operating system capabilities. Greenlets are managed in application space and not in kernel space. There is no scheduler pre-emptively switching between your threads at any given moment. Instead your greenlets voluntarily or explicitly give up control to one another at specified points in your code.

This makes greenlets excel at running a huge number of non-blocking tasks. Your application can schedule things much more efficiently. For a large number of tasks this can be a lot more scalable than letting the operating system interrupt and awaken threads arbitrarily.
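The cooperative model can be illustrated with the standard library's asyncio, which uses the same explicit-yield idea as greenlets (this is an analogy only, not how Celery's gevent/eventlet pools are actually implemented):

```python
import asyncio

# Each coroutine gives up control only at an explicit await point,
# much like a greenlet yielding during an I/O wait; no OS-level
# thread switching is involved.
async def fake_request(i, results):
    await asyncio.sleep(0)  # explicit yield back to the scheduler
    results.append(i)

async def main():
    results = []
    # Hundreds of these run concurrently inside a single OS thread.
    await asyncio.gather(*(fake_request(i, results) for i in range(100)))
    return results

print(len(asyncio.run(main())))  # 100
```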

For us, the benefit of using a gevent or eventlet pool is that our Celery worker can do more work than it could before. This means we do not need as much RAM to scale up. This optimises the utilisation of our workers.

Start a Celery worker using a gevent execution pool with 500 worker threads (you need to pip-install gevent):

celery worker --app=worker.app --pool=gevent --concurrency=500

Start a Celery worker using a eventlet execution pool with 500 worker threads (you need to pip-install eventlet):

celery worker --app=worker.app --pool=eventlet --concurrency=500

Both pool options are based on the same concept: spawn a greenlet pool. The difference is that --pool=gevent uses the gevent greenlet pool (gevent.pool.Pool), whereas --pool=eventlet uses the eventlet greenlet pool (eventlet.GreenPool).

gevent and eventlet are both packages that you need to pip-install yourself. There are implementation differences between the eventlet and gevent packages. Depending on your circumstances, one can perform better than the other. It is worthwhile trying out both.

The --concurrency option

To choose the best execution pool, you need to understand whether your tasks are CPU- or I/O-bound. CPU-bound tasks are best executed by a prefork execution pool. I/O bound tasks are best executed by a gevent/eventlet execution pool.

The only question that remains is: how many worker processes/threads should you start? The --concurrency command line argument determines the number of processes/threads:

celery worker --app=worker.app --concurrency=2

This starts a worker with a prefork execution pool which is made up of two processes. For prefork pools, the number of processes should not exceed the number of CPUs. 

Spawn a Greenlet based execution pool with 500 worker threads:

celery worker --app=worker.app --pool=gevent --concurrency=500

If the --concurrency argument is not set, Celery always defaults to the number of CPUs, whatever the execution pool.

This makes most sense for the prefork execution pool. But you have to take it with a grain of salt. If there are many other processes on the machine, running your Celery worker with as many processes as CPUs available might not be the best idea.
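That default is, in effect, the machine's CPU count, which you can inspect with the standard library:

```python
import os

# The prefork pool's default size when --concurrency is not given:
# one worker process per CPU reported by the operating system.
print(os.cpu_count())
```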

Using the default concurrency setting for a gevent/eventlet pool is almost outright stupid. The number of green threads it makes sense for you to run is unrelated to the number of CPUs you have at your disposal.

Another special case is the solo pool. Even though you can provide the --concurrency command line argument, it is meaningless for this execution pool.

For these reasons, it is always a good idea to set the --concurrency command line argument.

Conclusion

Celery supports two concepts for spawning its execution pool: Prefork and Greenlets. Prefork is based on multiprocessing and is the best choice for tasks which make heavy use of CPU resources. Prefork pool sizes are roughly in line with the number of available CPUs on the machine.

Tasks that perform Input/Output operations should run in a greenlet-based execution pool. Greenlets behave like threads, but are much more lightweight and efficient. Greenlet pools can scale to hundreds or even thousands of tasks.

What can you do if you have a mix of CPU and I/O bound tasks? Set up two queues with one worker processing each queue. One queue/worker with a prefork execution pool for CPU heavy tasks. And another queue/worker with a gevent or eventlet execution pool for I/O tasks. And don’t forget to route your tasks to the correct queue.

Not Invented Here: Introducing nti.fakestatsd


Lately at NextThought we've been much more focused on using application-level metrics to proactively monitor and understand the run-time characteristics of our applications. Much of the open source stack we are built on top of is already instrumented with the great perfmetrics library. Because of this, when it was time to expand the metrics we collected, perfmetrics was the obvious choice. However, we quickly ran into a problem: how should we test that the metrics we generated were actually emitted as the StatsD metrics we expected?

We needed a perfmetrics compatible fake StatsD client that we could drop in during testing. Ultimately we wanted something that was to perfmetrics as fakeredis is to redis-py. We couldn't find what we were looking for on PyPI or GitHub, so we wrote our own.

Today we are excited to introduce nti.fakestatsd, a testing client for verifying StatsD metrics emitted by perfmetrics.

It's easy to create a new client for use in testing:

>>> from nti.fakestatsd import FakeStatsDClient
>>> test_client = FakeStatsDClient()

This client exposes the same public interface as perfmetrics.statsd.StatsdClient. For example we can increment counters, set gauges, etc:

>>> test_client.incr('request_c')
>>> test_client.gauge('active_sessions', 320)

Unlike perfmetrics.statsd.StatsdClient, FakeStatsDClient simply tracks the statsd packets that would be sent. This information is exposed on our test_client both as the raw statsd packet, and for convenience this information is also parsed and exposed as Metric objects. For complete details see FakeStatsDClient and Metric.

>>> test_client.packets
['request_c:1|c', 'active_sessions:320|g']
>>> test_client.metrics
[<nti.fakestatsd.metric.Metric object at ...>, <nti.fakestatsd.metric.Metric object at ...>]

For validating metrics we provide a set of hamcrest matchers for use in test assertions:

>>> from hamcrest import assert_that
>>> from hamcrest import contains
>>> from nti.fakestatsd.matchers import is_metric
>>> from nti.fakestatsd.matchers import is_gauge
>>> assert_that(test_client,
...             contains(is_metric('c', 'request_c', '1'),
...                      is_gauge('active_sessions', '320')))
>>> assert_that(test_client,
...             contains(is_gauge('request_c', '1'),
...                      is_gauge('active_sessions', '320')))
Traceback (most recent call last):
...
AssertionError:
Expected: a sequence containing [Metric of form <request_c:1|g>, Metric of form <active_sessions:320|g>]
     but: item 0: was <request_c:1|c>

As with all our open-source projects we encourage you to check it out on github and, of course, Pull Requests are always welcome.

Ned Batchelder: Why warnings is mysterious


I recently went through a process I’ve done many times before: I tried to configure the Python warnings module, and was mystified. I set the PYTHONWARNINGS environment variable, and it doesn’t do what it seems like it should. I read the docs again, I think I understand what they are telling me, but the code isn’t doing what it seems like it should be doing. Frustrating.

I had some time today to dig into it, and now I understand better. The docs are misleading and/or incomplete. The module is not designed for maximum utility. Let me explain.

Here is what the docs tell you: PYTHONWARNINGS (or the -W command-line option) is a comma-separated list of filters. Each filter is a 5-tuple of fields separated by colons. The third and fourth fields are of interest here: they are category and module.

Let’s start with module: it’s the module that caused the warning. The docs say it is a regular expression. This is false! Internally, this string is used as part of a regex match operation, but first it is escaped, so if you include an asterisk in your setting, you will be trying to match module names that have a literal asterisk in them, which is impossible.

OK, so the module string is a literal string, not a regex, but the escaped string is being used as part of re.match, so it should be possible to suppress warning from an entire package (like backports.*) just by specifying “backports”, right? Nope! After being escaped, a $ is added to the end, so your literal string must be an exact match on the entire module name. Sigh.
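The two behaviors described above, escaping plus an anchored match, can be reproduced in a few lines (a sketch of the matching semantics as described, not the actual CPython code):

```python
import re

# The module field from PYTHONWARNINGS/-W is escaped (so it is treated
# as a literal string, not a regex) and anchored with a trailing $, so
# it must match the entire module name exactly.
field = "backports"
pattern = re.compile(re.escape(field) + "$")

print(bool(pattern.match("backports")))      # True: exact full-name match
print(bool(pattern.match("backports.ssl")))  # False: subpackages do not match
print(bool(pattern.match("mybackports")))    # False: match() anchors at the start
```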

Just to add to the confusion, the docs have long included this example, which isn’t even a sensible regex, never mind that regexes aren’t usable here:

error:::mymodule[.*]    # Convert warnings to errors in "mymodule"
                        # and any subpackages of "mymodule"

These concerns about the regex behavior are the topic of bpo 34624, BTW.

On to category: this is actually the class of warning exception being used, so you can say (for example) DeprecationWarning here. In my case, I wanted to suppress the deprecation warnings that pytest raises. Pytest helpfully uses a base class, pytest.PytestDeprecationWarning, so I used that as the category. But this causes an error message at startup:

Invalid -W option ignored: invalid module name: ‘pytest’

Huh? pytest is an importable module! Nope: these category names are imported early in the startup sequence, before your real sys.path is built, so you cannot name third-party modules here...!

There are probably other things about warnings that confuse people. These are the ones I uncovered today after a long rage-fueled debugging session. An important developer skill is an irrational belief that things can be understood, and be made to make sense. Another is knowing when to give up and just accept the confusion. This morning I fully embraced the first.

In my case, I was trying to suppress warnings reported while running tests with pytest. Pytest has its own setting for warnings filters, and it uses its own copy of the warnings.py code for reading them, so that the regexes are not escaped! This is very useful, but could also add to the mystery, since the pytest docs don’t mention the difference.

And since pytest interprets its settings after sys.path has been configured, I can use third-party warning categories there. So this works perfectly:

[pytest]
filterwarnings =
    ignore:::backports
    ignore::pytest.PytestDeprecationWarning

It’s very satisfying to have some mysteries solved.

BangPypers: Talks - October, 2018


For October 2018, the session topic was “Data Science”. The venue was GO-JEK, Domlur, and we had 3 speakers. All the sessions ran for around 40 minutes.

The first talk by Usha was on “Introduction to Probabilistic Graphical Models”. The talk started with a video by Judea Pearl, following which she gave an introduction to graphical models. Further into the talk, she discussed the traditional methods and how graphical models came into the light. The talk covered the basics of probability - specifically Bayesian probability. Given the complexity of the subject, this will require a follow-up talk, but the key takeaway was the importance of a basic understanding of probability to understand the topic better. The relevant Python package is pgmpy.

The second talk was by Abdul - an introduction to NLP basics and using them with spaCy. He went on to demo spaCy using a dataset from Kaggle comprising the Twitter activity around the JustDoIt (Nike) hashtag. Kudos to Abdul on giving an energetic talk.

After a small networking break, we started off with the next session, where Divya talked about language models using Python. She started the talk from scratch, discussing the language models required in NLP as knowledge bases or referential banks. For example, when a sentence is to be analyzed, it can be parsed and understood based on the rules and guidelines set by the language model.

The links to the resources will be updated as soon as possible.

Pics from the Meetup


Weekly Python StackOverflow Report: (cxlix) stackoverflow python report


PyBites: PyBites Twitter Digest - Issue 34, 2018


Python 3.6.7 and 3.7.1 have been released!

Read about how Erik from our PyBites Community started his own North Austin Python Meetup

The next step in virtual AI assistants! Some super realistic graphics phew!

Python Text classification with Keras

Responder v1.1.0 released

PyQt5 Tutorial

Consider sponsoring the PSF! Python wouldn't be where it is today if not for them!

A custom, small library for creating Word Clouds within Jupyter notebook

Test and Code Podcast Episode 50!

Comparing Regex in Perl, Python and Emacs

A guide on the Python secrets module!

A run in with Python Warnings

A good reminder on scopes!

Getting things done in Trello with Python, Flask and Twilio SMS


>>> from pybites import Bob, Julian
Keep Calm and Code in Python!

Kushal Das: Fedora 29 on Qubes OS


I spent most of my life using Fedora as my primary operating system on my desktop/laptops. I use CentOS on my servers, sometimes even Fedora, and a few special cases, I use *BSD systems.

But for the last year, I have been running Qubes OS as my primary operating system on my laptop. That enables me to keep using Fedora in the AppVMs as I want, and I can also have different work VMs in Debian/Ubuntu or even Windows as required. Moving to a newer version of Fedora is just a matter of installing the new template and rebooting any AppVM with the newest template.

Fedora 29 will be released on 30th October, and the Qubes team has already built a template for it and pushed it to the testing repository. You can install it with the following command.

$ sudo qubes-dom0-update qubes-template-fedora-29 --enablerepo=qubes-template-itl-testing

After this, I just installed all the required packages and set up the template as I want using my Qubes Ansible project. It took only a few minutes to move all of my development-related VMs to Fedora 29, and this still keeps the option open to go back to Fedora 28 the moment I want. This is one of the beauties of Qubes OS, and of course there are the usual security benefits too.

If you are a software developer using Linux who also cares about security practices, give Qubes OS a try. It also has a very active and helpful user community. I am sure it will not disappoint you.


Zato Blog: zato-apitest 1.12 - API testing for humans


Version 1.12 of zato-apitest has just been released. This version simplifies installation requirements and adds compatibility with PostgreSQL 10+ databases.

zato-apitest is an API testing tool designed from the ground up with convenience and ease of use in mind. It supports REST, SQL, Cassandra and Zato-based APIs with tests written in plain English.

There is no need for manual programming, though if required, it is easy to extend it in Python.

It ships with a built-in demo; right after the installation, run apitest demo and a sample test case will be set up and run against a test server, as below:

$ sudo pip install zato-apitest
$ apitest demo

Screenshots

The tool is part of the Zato API and backend server platform but can be used standalone with or without Zato services. More information, including documentation and usage examples can be found here.

Jaime Buelta: Package and deploy a Python module in PyPI with Poetry, tox and Travis

Podcast.__init__: Bringing Python To The Spanish Language Community with Maricela Sanchez

The Python Community is large and growing, however a majority of articles, books, and presentations are still in English. To increase the accessibility for Spanish language speakers, Maricela Sanchez helped to create the Charlas track at PyCon US, and is an organizer for Python Day Mexico. In this episode she shares her motivations for getting involved in community building, her experiences working on Python Day Mexico and PyCon Charlas, and the lessons that she has learned in the process.


Preface

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app you’ll need somewhere to deploy it, so check out Linode. With private networking, shared block storage, node balancers, and a 40Gbit network, all controlled by a brand new API you’ve got everything you need to scale up. Go to podcastinit.com/linode to get a $20 credit and launch a new server in under a minute.
  • Visit the site to subscribe to the show, sign up for the newsletter, and read the show notes. And if you have any questions, comments, or suggestions I would love to hear them. You can reach me on Twitter at @Podcast__init__ or email hosts@podcastinit.com.
  • To help other people find the show please leave a review on iTunes, or Google Play Music, tell your friends and co-workers, and share it on social media.
  • Join the community in the new Zulip chat workspace at podcastinit.com/chat
  • Your host as usual is Tobias Macey and today I’m interviewing Maricela Sanchez Miranda about her work in organizing PyCon Charlas, the Spanish language track at PyCon US, as well as Python Day Mexico

Interview

  • Introductions
  • How did you get introduced to Python?
  • Can you briefly describe PyCon Charlas and Python Day Mexico?
    • What has been your motivation for getting involved with organizing these community events?
  • What do you find to be the unique characteristics of the Python community in Mexico?
  • What kind of feedback have you gotten from the Charlas track at PyCon?
  • What are your goals for fostering these Spanish language events?
  • What are some of the lessons that you have learned from PyCon Charlas that were useful in organizing Python Day Mexico?
  • What have been the most challenging or complicated aspects of organizing Python Day Mexico?
    • How many attendees do you anticipate? How has that affected your planning and preparation?
  • Are there any aspects of the geography, infrastructure, or culture of Mexico that you have found to be either beneficial or challenging for organizing a conference?
  • Do you anticipate PyCon Charlas and Python Day Mexico becoming annual events?
  • What is your advice for anyone who is interested in organizing a conference in their own region or language?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Mike Driscoll: PyDev of the Week: Anthony Sottile


This week we welcome Anthony Sottile (@codewithanthony) as our PyDev of the Week! Anthony is one of the maintainers for the tox and pytest packages. He is also on the “deadsnakes” PPA team, which backports Python for certain EOL Linux distros. While you can discover a little about Anthony on his website, you will probably learn more from his Github profile.

Can you tell us a little about yourself (hobbies, education, etc):

From a young age, I was always fascinated with computers. Some of my earliest programs were written to simplify (read: _cheat_ on) homework. A Word document containing a rudimentary quadratic-formula-solving GUI written in Visual Basic was copied to quite a few floppy disks. I eventually switched to web development as it was a much more accessible distribution mechanism.

I attended the University of Michigan (go blue!) originally studying biochemistry. I wanted to change the world through medicine and research but two years into the program I decided to switch to my stronger passion. And after an intense scramble to squeeze a four year program into two years I graduated with a computer science degree!

Most of my personal time is spent biking (which I meticulously track in a fancy spreadsheet — 4600 miles logged this year). Some of my other passions are hiking, cooking, running, ski racing, playing violin, and writing a bit of poetry (is this a dating site?). And of course I spend a significant portion of time building and contributing to open source software 🙂

My crowning achievement is becoming a Pokémon master — not only completing a living Pokédex but enduring _extreme_ tedium by capturing every single possible shiny Pokémon legitimately. It’s all been downhill since then 🤣.

Why did you start using Python?

The first time I was exposed to python was through employment at Yelp. Though I was hired as a JavaScript frontend developer, I quickly delved into full stack development as the curiosity got the best of me. Eventually, I built a web infrastructure and developer tooling team. Along the way, my completionist nature brought me to many corners of the language including metaprogramming, packaging, python 3 porting, RPython (pypy), C extensions and more! Being my current poison of choice, I decided I better know how it works!

What other programming languages do you know and which is your favorite?

At one point or another I would have considered myself an expert at JavaScript, C#, Java, C++, and C (and dabbled in many others). One thing that I’ve valued out of a programming language is the ability to have static guarantees _before_ runtime (usually in the form of a type checker). As such, I’m excited about the improvements in the type-checking space with python’s gradual typing approach! My favorite language (by syntax and features) that I’ve worked with is C# (though maybe that’s just due to how good Visual Studio is).
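
The gradual typing he mentions can be sketched in a few lines — annotations are optional hints that external tools such as mypy check statically, while the interpreter itself ignores them at runtime (the function and values here are purely illustrative):

```python
# Gradual typing sketch: the annotations below are checked by tools
# like mypy before runtime, but Python itself does not enforce them.
def average(values: list[float]) -> float:
    return sum(values) / len(values)

# Unannotated call sites interoperate freely with annotated code.
print(average([1.0, 2.0, 3.0]))  # 2.0
```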

An honorable mention here is go. While there are a few things that I think could be better (tabs, packaging, generics) — go has one killer feature that I wish every language had: code rewrite as a first class citizen. Not only is `go fmt` a testament to the success of this principle (there’s only one way to format your code!) but it’s easy and encouraged to write tools which read and manipulate the code.

What projects are you working on now?

My passion project is [pre-commit](https://pre-commit.com) — a multi-language framework for managing git hooks (mostly linters and fixers). Along with maintaining the framework, I’ve built a few of my own fixers. Recently I’ve also been spending some time helping maintain tox and pytest. One of my newer projects, all-repos, is a tool for managing “microliths” at scale, making searching and applying sweeping changes across repositories easy.
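
For readers who haven’t used pre-commit, a minimal `.pre-commit-config.yaml` looks roughly like this (the pinned `rev` is illustrative — check the hook repository for a current tag):

```yaml
repos:
  - repo: https://github.com/pre-commit/pre-commit-hooks
    rev: v4.4.0   # illustrative pin; use a current tag
    hooks:
      - id: trailing-whitespace
      - id: end-of-file-fixer
```

Running `pre-commit install` then wires the configured hooks into `.git/hooks`, so the linters and fixers run automatically on every commit.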

Which Python libraries are your favorite (core or 3rd party)?

This was actually a quite interesting question for me — not wanting to take the science out of it I turned to [all-repos](https://github.com/asottile/all-repos) to try and find the answer! Ok so the results aren’t exactly the most interesting (well of course you import `os` a lot!) but let’s dive into some outliers and my favorites. First on the list is pytest — of course it’s first, I think testing is incredibly important and what better tool to use than pytest! Of the standard library my favorite by far is `argparse` and since I tend to write lots of command line utilities it gets used _a lot_. Some honorable mentions in the favorites department are `sqlite3` (great for prototyping and surprisingly performant), `collections` (for `namedtuple`), `contextlib` (for `contextmanager`), and `typing` (which I’ve recently been getting into). A few that I use disproportionately more than most are `ast` and `tokenize` — I really like static analysis and tend to write a bunch of tools for it.
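
In the spirit of the `ast`-based static analysis described here, a minimal sketch — the sample source being analyzed is made up for illustration:

```python
import ast

# Toy module to analyze; a real tool would read this from a file.
source = """
import os

def main():
    pass

def helper(x):
    return x * 2
"""

# Walk the syntax tree and collect every function definition's name.
tree = ast.parse(source)
functions = [node.name for node in ast.walk(tree)
             if isinstance(node, ast.FunctionDef)]
print(functions)  # ['main', 'helper']
```

The same pattern — parse, walk, inspect nodes — underlies many linters and fixers.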

How did you become a part of the “deadsnakes” team (which I really appreciate, by the way)?

Perhaps my favorite part of open source is if you offer to help, people are usually receptive. I originally started working on debian packaging while backporting various packages to end-of-lifed ubuntu lucid while working at Yelp. Just before python 3.5 released, lucid reached the end of its support cycle and launchpad (rightfully so!) disabled PPA uploads for lucid (including deadsnakes). Developers (including myself) wanted to use some of the new features such as `subprocess.run`, `async` / `await`, and unpacking generalizations. At the time, an upgrade to a not-end-of-lifed distribution was still 2 to 3 years out. I was able to successfully backport 3.5 and learned a ton in the process (undoing multiarch, adjusting dependencies, patching tests, etc.). When 3.6 was released, the deadsnakes ppa homepage held a message that support was being discontinued. I offered to help with maintainership, and with a little coaching had working sources for 3.6 and the rest is history!
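
Two of the Python 3.5 features he lists are easy to demo — a hedged sketch, with the subprocess command chosen purely as a portable stand-in:

```python
import subprocess
import sys

# subprocess.run (new in 3.5): the single high-level call that replaced
# most uses of check_call/check_output.
result = subprocess.run(
    [sys.executable, "-c", "print('hello')"],
    stdout=subprocess.PIPE, universal_newlines=True,
)
print(result.stdout.strip())  # hello

# PEP 448 unpacking generalizations (also 3.5): merge mappings inline.
defaults = {"timeout": 30}
overrides = {"timeout": 60, "retries": 2}
merged = {**defaults, **overrides}
print(merged)  # {'timeout': 60, 'retries': 2}
```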

What challenges have you faced as a part of that team?

To be honest, most of the responsibility and maintenance of deadsnakes is pretty easy and straightforward once you learn the tooling (or [automate the tooling](https://github.com/deadsnakes/runbooks)). Most of the ease-of-maintenance comes from having three relatively-high-quality upstreams: debian, ubuntu, and of course cpython. Most of the maintenance work comes from new releases (a new LTS results in a rebuild of all packages and adjusting for two years of distribution changes!).

Thanks for doing the interview, Anthony!

Made With Mu: Python on Hardware Vlog, from Adafruit.


If you’re a mutineering mutant into playful hacking of embedded hardware with CircuitPython or MicroPython then check out the new video newsletter from Adafruit.


It’s a quick summary of all that’s been happening in the world of embedded Python. Fun facts revealed in the show include such nuggets as the number of pages of projects, tutorials and how-tos about CircuitPython on the Adafruit website as well as the number of drivers for different devices compatible with CircuitPython. The numbers quoted show the growing momentum behind Python on embedded devices. This is good news!

It was also great to see two Mu contributors (Radomir and Josh) have their work featured. Radomir’s inventive hardware hacking is a thing of wonder, while Josh’s work on the EduBlocks project is, quite simply, inspiring stuff (especially when you consider he’s only 14 years old!).

Finally, Limor, Phil and the wider CircuitPython team at Adafruit deserve kudos and all sorts of accolades for their amazing work building such a welcoming, dynamic and creative community around CircuitPython. As the project tag-line says, “Code + Community = CircuitPython”. Thank you for all the work you do!
