
Real Python: Logistic Regression in Python


As the amount of available data, the strength of computing power, and the number of algorithmic improvements continue to rise, so does the importance of data science and machine learning. Classification is among the most important areas of machine learning, and logistic regression is one of its basic methods. By the end of this tutorial, you’ll have learned about classification in general and the fundamentals of logistic regression in particular, as well as how to implement logistic regression in Python.

In this tutorial, you’ll learn:

  • What logistic regression is
  • What logistic regression is used for
  • How logistic regression works
  • How to implement logistic regression in Python, step by step

Free Bonus: Click here to get access to a free NumPy Resources Guide that points you to the best tutorials, videos, and books for improving your NumPy skills.

Classification

Classification is a very important area of supervised machine learning. A large number of important machine learning problems fall within this area. There are many classification methods, and logistic regression is one of them.

What Is Classification?

Supervised machine learning algorithms define models that capture relationships among data. Classification is an area of supervised machine learning that tries to predict which class or category some entity belongs to, based on its features.

For example, you might analyze the employees of some company and try to establish the dependencies among features or variables such as the level of education, the number of years in a current position, age, salary, the odds of being promoted, and so on. The set of data related to a single employee is one observation. The features or variables can take one of two forms:

  1. Independent variables, also called inputs or predictors, don’t depend on other features of interest (or at least you assume so for the purpose of the analysis).
  2. Dependent variables, also called outputs or responses, depend on the independent variables.

In the above example where you’re analyzing employees, you might presume the level of education, time in a current position, and age as being mutually independent, and consider them as the inputs. The salary and the odds for promotion could be the outputs that depend on the inputs.

Note: Supervised machine learning algorithms analyze a number of observations and try to mathematically express the dependence between the inputs and outputs. These mathematical representations of dependencies are the models.

The nature of the dependent variables differentiates regression and classification problems. Regression problems have continuous and usually unbounded outputs. An example is when you’re estimating the salary as a function of experience and education level. On the other hand, classification problems have discrete and finite outputs called classes or categories. For example, predicting if an employee is going to be promoted or not (true or false) is a classification problem.

There are two main types of classification problems:

  1. Binary or binomial classification: exactly two classes to choose between (usually 0 and 1, true and false, or positive and negative)
  2. Multiclass or multinomial classification: three or more classes of the outputs to choose from

If there’s only one input variable, then it’s usually denoted with 𝑥. For more than one input, you’ll commonly see the vector notation 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of the predictors (or independent features). The output variable is often denoted with 𝑦 and takes the values 0 or 1.

When Do You Need Classification?

You can apply classification in many fields of science and technology. For example, text classification algorithms are used to separate legitimate and spam emails, as well as positive and negative comments. You can check out Practical Text Classification With Python and Keras to get some insight into this topic. Other examples involve medical applications, biological classification, credit scoring, and more.

Image recognition tasks are often represented as classification problems. For example, you might ask if an image is depicting a human face or not, or if it’s a mouse or an elephant, or which digit from zero to nine it represents, and so on. To learn more about this, check out Traditional Face Detection With Python and Face Recognition with Python, in Under 25 Lines of Code.

Logistic Regression Overview

Logistic regression is a fundamental classification technique. It belongs to the group of linear classifiers and is somewhat similar to polynomial and linear regression. Logistic regression is fast and relatively uncomplicated, and it’s convenient for you to interpret the results. Although it’s essentially a method for binary classification, it can also be applied to multiclass problems.

Math Prerequisites

You’ll need an understanding of the sigmoid function and the natural logarithm function to understand what logistic regression is and how it works.

This image shows the sigmoid function (or S-shaped curve) of some variable 𝑥:

Sigmoid Function

The sigmoid function has values very close to either 0 or 1 across most of its domain. This fact makes it suitable for application in classification methods.

This image depicts the natural logarithm log(𝑥) of some variable 𝑥, for values of 𝑥 between 0 and 1:

Natural Logarithm

As 𝑥 approaches zero, the natural logarithm of 𝑥 drops towards negative infinity. When 𝑥 = 1, log(𝑥) is 0. The opposite is true for log(1 − 𝑥).

Note that you’ll often find the natural logarithm denoted with ln instead of log. In Python, math.log(x) and numpy.log(x) represent the natural logarithm of x, so you’ll follow this notation in this tutorial.

Problem Formulation

In this tutorial, you’ll see an explanation for the common case of logistic regression applied to binary classification. When you’re implementing the logistic regression of some dependent variable 𝑦 on the set of independent variables 𝐱 = (𝑥₁, …, 𝑥ᵣ), where 𝑟 is the number of predictors (or inputs), you start with the known values of the predictors 𝐱ᵢ and the corresponding actual response (or output) 𝑦ᵢ for each observation 𝑖 = 1, …, 𝑛.

Your goal is to find the logistic regression function 𝑝(𝐱) such that the predicted responses 𝑝(𝐱ᵢ) are as close as possible to the actual response 𝑦ᵢ for each observation 𝑖 = 1, …, 𝑛. Remember that the actual response can be only 0 or 1 in binary classification problems! This means that each 𝑝(𝐱ᵢ) should be close to either 0 or 1. That’s why it’s convenient to use the sigmoid function.

Once you have the logistic regression function 𝑝(𝐱), you can use it to predict the outputs for new and unseen inputs, assuming that the underlying mathematical dependence is unchanged.

Methodology

Logistic regression is a linear classifier, so you’ll use a linear function 𝑓(𝐱) = 𝑏₀ + 𝑏₁𝑥₁ + ⋯ + 𝑏ᵣ𝑥ᵣ, also called the logit. The variables 𝑏₀, 𝑏₁, …, 𝑏ᵣ are the estimators of the regression coefficients, which are also called the predicted weights or just coefficients.

The logistic regression function 𝑝(𝐱) is the sigmoid function of 𝑓(𝐱): 𝑝(𝐱) = 1 / (1 + exp(−𝑓(𝐱))). As such, it’s often close to either 0 or 1. The function 𝑝(𝐱) is often interpreted as the predicted probability that the output for a given 𝐱 is equal to 1. Therefore, 1 − 𝑝(𝐱) is the probability that the output is 0.

Logistic regression determines the best predicted weights 𝑏₀, 𝑏₁, …, 𝑏ᵣ such that the function 𝑝(𝐱) is as close as possible to all actual responses 𝑦ᵢ, 𝑖 = 1, …, 𝑛, where 𝑛 is the number of observations. The process of calculating the best weights using available observations is called model training or fitting.

To get the best weights, you usually maximize the log-likelihood function (LLF) for all observations 𝑖 = 1, …, 𝑛. This method is called the maximum likelihood estimation and is represented by the equation LLF = Σᵢ(𝑦ᵢ log(𝑝(𝐱ᵢ)) + (1 − 𝑦ᵢ) log(1 − 𝑝(𝐱ᵢ))).
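To make the formula concrete, here’s a minimal NumPy sketch of the LLF. The values in y_actual and p_predicted are made up for this illustration and aren’t part of the tutorial’s data:

import numpy as np

def log_likelihood(y, p):
    # LLF = Σᵢ(yᵢ log(p(xᵢ)) + (1 − yᵢ) log(1 − p(xᵢ)))
    y, p = np.asarray(y), np.asarray(p)
    return np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

y_actual = np.array([0, 1, 1])           # hypothetical actual responses
p_predicted = np.array([0.1, 0.8, 0.6])  # hypothetical predicted probabilities
print(log_likelihood(y_actual, p_predicted))  # about -0.84; values closer to 0 are better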

When 𝑦ᵢ = 0, the LLF for the corresponding observation is equal to log(1 − 𝑝(𝐱ᵢ)). If 𝑝(𝐱ᵢ) is close to 𝑦ᵢ = 0, then log(1 − 𝑝(𝐱ᵢ)) is close to 0. This is the result you want. If 𝑝(𝐱ᵢ) is far from 0, then log(1 − 𝑝(𝐱ᵢ)) drops significantly. You don’t want that result because your goal is to obtain the maximum LLF. Similarly, when 𝑦ᵢ = 1, the LLF for that observation is 𝑦ᵢ log(𝑝(𝐱ᵢ)). If 𝑝(𝐱ᵢ) is close to 𝑦ᵢ = 1, then log(𝑝(𝐱ᵢ)) is close to 0. If 𝑝(𝐱ᵢ) is far from 1, then log(𝑝(𝐱ᵢ)) is a large negative number.

There are several mathematical approaches that will calculate the best weights that correspond to the maximum LLF, but that’s beyond the scope of this tutorial. For now, you can leave these details to the logistic regression Python libraries you’ll learn to use here!

Once you determine the best weights that define the function 𝑝(𝐱), you can get the predicted outputs 𝑝(𝐱ᵢ) for any given input 𝐱ᵢ. For each observation 𝑖 = 1, …, 𝑛, the predicted output is 1 if 𝑝(𝐱ᵢ) > 0.5 and 0 otherwise. The threshold doesn’t have to be 0.5, but it usually is. You might define a lower or higher value if that’s more convenient for your situation.

There’s one more important relationship between 𝑝(𝐱) and 𝑓(𝐱), which is that log(𝑝(𝐱) / (1 − 𝑝(𝐱))) = 𝑓(𝐱). This equality explains why 𝑓(𝐱) is the logit. It implies that 𝑝(𝐱) = 0.5 when 𝑓(𝐱) = 0 and that the predicted output is 1 if 𝑓(𝐱) > 0 and 0 otherwise.
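You can check both of these relationships with a few lines of NumPy. The weights below are arbitrary values chosen only for this sketch:

import numpy as np

b0, b1 = -1.0, 0.5             # hypothetical weights
x = np.array([0.0, 1.0, 4.0])

f = b0 + b1 * x                # the logit f(x)
p = 1 / (1 + np.exp(-f))       # the sigmoid of the logit, p(x)

print(np.log(p / (1 - p)))     # recovers f(x), up to floating-point error
print((p > 0.5).astype(int))   # predicted outputs with the 0.5 threshold: [0 0 1]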

Classification Performance

Binary classification has four possible types of results:

  1. True negatives: correctly predicted negatives (zeros)
  2. True positives: correctly predicted positives (ones)
  3. False negatives: incorrectly predicted negatives (zeros)
  4. False positives: incorrectly predicted positives (ones)

You usually evaluate the performance of your classifier by comparing the actual and predicted outputs and counting the correct and incorrect predictions.

The most straightforward indicator of classification accuracy is the ratio of the number of correct predictions to the total number of predictions (or observations). Other indicators of binary classifiers include the following:

  • The positive predictive value is the ratio of the number of true positives to the sum of the numbers of true and false positives.
  • The negative predictive value is the ratio of the number of true negatives to the sum of the numbers of true and false negatives.
  • The sensitivity (also known as recall or true positive rate) is the ratio of the number of true positives to the number of actual positives.
  • The specificity (or true negative rate) is the ratio of the number of true negatives to the number of actual negatives.

The most suitable indicator depends on the problem of interest. In this tutorial, you’ll use the most straightforward form of classification accuracy.
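If you ever need the other indicators, they’re all simple ratios of the four result types. Here’s a small sketch with hypothetical counts, not tied to any dataset in this tutorial:

tp, tn, fp, fn = 6, 3, 1, 0  # hypothetical counts of true/false positives/negatives

accuracy = (tp + tn) / (tp + tn + fp + fn)  # 0.9
positive_predictive_value = tp / (tp + fp)  # 6 / 7, about 0.86
negative_predictive_value = tn / (tn + fn)  # 3 / 3 = 1.0
sensitivity = tp / (tp + fn)                # 6 / 6 = 1.0
specificity = tn / (tn + fp)                # 3 / 4 = 0.75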

Single-Variate Logistic Regression

Single-variate logistic regression is the most straightforward case of logistic regression. There is only one independent variable (or feature), which is 𝐱 = 𝑥. This figure illustrates single-variate logistic regression:

1D Logistic Regression

Here, you have a given set of input-output (or 𝑥-𝑦) pairs, represented by green circles. These are your observations. Remember that 𝑦 can only be 0 or 1. For example, the leftmost green circle has the input 𝑥 = 0 and the actual output 𝑦 = 0. The rightmost observation has 𝑥 = 9 and 𝑦 = 1.

Logistic regression finds the weights 𝑏₀ and 𝑏₁ that correspond to the maximum LLF. These weights define the logit 𝑓(𝑥) = 𝑏₀ + 𝑏₁𝑥, which is the dashed black line. They also define the predicted probability 𝑝(𝑥) = 1 / (1 + exp(−𝑓(𝑥))), shown here as the full black line. In this case, the threshold 𝑝(𝑥) = 0.5 and 𝑓(𝑥) = 0 corresponds to the value of 𝑥 slightly higher than 3. This value is the limit between the inputs with the predicted outputs of 0 and 1.

Multi-Variate Logistic Regression

Multi-variate logistic regression has more than one input variable. This figure shows the classification with two independent variables, 𝑥₁ and 𝑥₂:

2D Logistic Regression

The graph is different from the single-variate graph because both axes represent the inputs. The outputs also differ in color. The white circles show the observations classified as zeros, while the green circles are those classified as ones.

Logistic regression determines the weights 𝑏₀, 𝑏₁, and 𝑏₂ that maximize the LLF. Once you have 𝑏₀, 𝑏₁, and 𝑏₂, you can get:

  • The logit 𝑓(𝑥₁, 𝑥₂) = 𝑏₀ + 𝑏₁𝑥₁ + 𝑏₂𝑥₂
  • The probabilities 𝑝(𝑥₁, 𝑥₂) = 1 / (1 + exp(−𝑓(𝑥₁, 𝑥₂)))

The dash-dotted black line linearly separates the two classes. This line corresponds to 𝑝(𝑥₁, 𝑥₂) = 0.5 and 𝑓(𝑥₁, 𝑥₂) = 0.

Regularization

Overfitting is one of the most serious kinds of problems related to machine learning. It occurs when a model learns the training data too well. The model then learns not only the relationships among data but also the noise in the dataset. Overfitted models tend to have good performance with the data used to fit them (the training data), but they behave poorly with unseen data (or test data, which is data not used to fit the model).

Overfitting usually occurs with complex models. Regularization normally tries to reduce or penalize the complexity of the model. Regularization techniques applied with logistic regression mostly tend to penalize large coefficients 𝑏₀, 𝑏₁, …, 𝑏ᵣ:

  • L1 regularization penalizes the LLF with the scaled sum of the absolute values of the weights: |𝑏₀|+|𝑏₁|+⋯+|𝑏ᵣ|.
  • L2 regularization penalizes the LLF with the scaled sum of the squares of the weights: 𝑏₀²+𝑏₁²+⋯+𝑏ᵣ².
  • Elastic-net regularization is a linear combination of L1 and L2 regularization.

Regularization can significantly improve model performance on unseen data.
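As a quick illustration of the penalty terms themselves, here’s a short NumPy sketch with arbitrary weights (the scaling factor that multiplies these sums is omitted):

import numpy as np

b = np.array([-1.5, 0.8, 2.0])  # hypothetical weights b₀, b₁, b₂

l1_penalty = np.sum(np.abs(b))  # |b₀| + |b₁| + |b₂| = 4.3
l2_penalty = np.sum(b ** 2)     # b₀² + b₁² + b₂² = 6.89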

Logistic Regression in Python

Now that you understand the fundamentals, you’re ready to apply the appropriate packages as well as their functions and classes to perform logistic regression in Python. In this section, you’ll see the following:

  • A summary of Python packages for logistic regression (NumPy, scikit-learn, StatsModels, and Matplotlib)
  • Two illustrative examples of logistic regression solved with scikit-learn
  • One conceptual example solved with StatsModels
  • One real-world example of classifying handwritten digits

Let’s start implementing logistic regression in Python!

Logistic Regression Python Packages

There are several packages you’ll need for logistic regression in Python. All of them are free and open-source, with lots of available resources. First, you’ll need NumPy, which is a fundamental package for scientific and numerical computing in Python. NumPy is useful and popular because it enables high-performance operations on single- and multi-dimensional arrays.

NumPy has many useful array routines. It allows you to write elegant and compact code, and it works well with many Python packages. If you want to learn NumPy, then you can start with the official user guide. The NumPy Reference also provides comprehensive documentation on its functions, classes, and methods.

Note: To learn more about NumPy performance and the other benefits it can offer, check out Pure Python vs NumPy vs TensorFlow Performance Comparison and Look Ma, No For-Loops: Array Programming With NumPy.

Another Python package you’ll use is scikit-learn. This is one of the most popular data science and machine learning libraries. You can use scikit-learn to perform various functions:

  • Preprocess data
  • Reduce the dimensionality of problems
  • Validate models
  • Select the most appropriate model
  • Solve regression and classification problems
  • Implement cluster analysis

You’ll find useful information on the official scikit-learn website, where you might want to read about generalized linear models and logistic regression implementation. If you need functionality that scikit-learn can’t offer, then you might find StatsModels useful. It’s a powerful Python library for statistical analysis. You can find more information on the official website.

Finally, you’ll use Matplotlib to visualize the results of your classification. This is a Python library that’s comprehensive and widely used for high-quality plotting. For additional information, you can check the official website and user guide. There are several resources for learning Matplotlib you might find useful, like the official tutorials, the Anatomy of Matplotlib, and Python Plotting With Matplotlib (Guide).

Logistic Regression in Python With scikit-learn: Example 1

The first example is related to a single-variate binary classification problem. This is the most straightforward kind of classification problem. There are several general steps you’ll take when you’re preparing your classification models:

  1. Import packages, functions, and classes
  2. Get data to work with and, if appropriate, transform it
  3. Create a classification model and train (or fit) it with your existing data
  4. Evaluate your model to see if its performance is satisfactory

A sufficiently good model that you define can be used to make further predictions related to new, unseen data. The above procedure is the same for classification and regression.

Step 1: Import Packages, Functions, and Classes

First, you have to import Matplotlib for visualization and NumPy for array operations. You’ll also need LogisticRegression, classification_report(), and confusion_matrix() from scikit-learn:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix

Now you’ve imported everything you need for logistic regression in Python with scikit-learn!

Step 2: Get Data

In practice, you’ll usually have some data to work with. For the purpose of this example, let’s just create arrays for the input (𝑥) and output (𝑦) values:

x = np.arange(10).reshape(-1, 1)
y = np.array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

The input and output should be NumPy arrays (instances of the class numpy.ndarray) or similar objects. numpy.arange() creates an array of consecutive, equally-spaced values within a given range. For more information on this function, check the official documentation or NumPy arange(): How to Use np.arange().

The array x is required to be two-dimensional. It should have one column for each input, and the number of rows should be equal to the number of observations. To make x two-dimensional, you apply .reshape() with the arguments -1 to get as many rows as needed and 1 to get one column. For more information on .reshape(), you can check out the official documentation. Here’s how x and y look now:

>>> x
array([[0],
       [1],
       [2],
       [3],
       [4],
       [5],
       [6],
       [7],
       [8],
       [9]])
>>> y
array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

x has two dimensions:

  1. One column for a single input
  2. Ten rows, each corresponding to one observation

y is one-dimensional with ten items. Again, each item corresponds to one observation. It contains only zeros and ones since this is a binary classification problem.

Step 3: Create a Model and Train It

Once you have the input and output prepared, you can create and define your classification model. You’re going to represent it with an instance of the class LogisticRegression:

model = LogisticRegression(solver='liblinear', random_state=0)

The above statement creates an instance of LogisticRegression and binds its reference to the variable model. LogisticRegression has several optional parameters that define the behavior of the model and approach:

  • penalty is a string ('l2' by default) that decides whether there is regularization and which approach to use. Other options are 'l1', 'elasticnet', and 'none'.

  • dual is a Boolean (False by default) that decides whether to use primal (when False) or dual formulation (when True).

  • tol is a floating-point number (0.0001 by default) that defines the tolerance for stopping the procedure.

  • C is a positive floating-point number (1.0 by default) that defines the relative strength of regularization. Smaller values indicate stronger regularization.

  • fit_intercept is a Boolean (True by default) that decides whether to calculate the intercept 𝑏₀ (when True) or consider it equal to zero (when False).

  • intercept_scaling is a floating-point number (1.0 by default) that defines the scaling of the intercept 𝑏₀.

  • class_weight is a dictionary, 'balanced', or None (default) that defines the weights related to each class. When None, all classes have the weight one.

  • random_state is an integer, an instance of numpy.RandomState, or None (default) that defines what pseudo-random number generator to use.

  • solver is a string ('liblinear' by default) that decides what solver to use for fitting the model. Other options are 'newton-cg', 'lbfgs', 'sag', and 'saga'.

  • max_iter is an integer (100 by default) that defines the maximum number of iterations by the solver during model fitting.

  • multi_class is a string ('ovr' by default) that decides the approach to use for handling multiple classes. Other options are 'multinomial' and 'auto'.

  • verbose is a non-negative integer (0 by default) that defines the verbosity for the 'liblinear' and 'lbfgs' solvers.

  • warm_start is a Boolean (False by default) that decides whether to reuse the previously obtained solution.

  • n_jobs is an integer or None (default) that defines the number of parallel processes to use. None usually means to use one core, while -1 means to use all available cores.

  • l1_ratio is either a floating-point number between zero and one or None (default). It defines the relative importance of the L1 part in the elastic-net regularization.

You should carefully match the solver and regularization method for several reasons:

  • 'liblinear' solver doesn’t work without regularization.
  • 'newton-cg', 'sag', and 'lbfgs' don’t support L1 regularization, so among these solvers only 'liblinear' and 'saga' can apply it.
  • 'saga' is the only solver that supports elastic-net regularization.
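For illustration, here’s a sketch of configurations consistent with these constraints. The parameter names are those of LogisticRegression described above; the specific combinations are just examples:

# L1 regularization works with the 'liblinear' and 'saga' solvers
model_l1 = LogisticRegression(penalty='l1', solver='liblinear')

# L2 regularization (the default) works with any of the listed solvers
model_l2 = LogisticRegression(penalty='l2', solver='lbfgs')

# Elastic-net regularization requires the 'saga' solver and an l1_ratio
model_en = LogisticRegression(penalty='elasticnet', solver='saga', l1_ratio=0.5)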

Once the model is created, you need to fit (or train) it. Model fitting is the process of determining the coefficients 𝑏₀, 𝑏₁, …, 𝑏ᵣ that correspond to the best value of the cost function. You fit the model with .fit():

model.fit(x, y)

.fit() takes x, y, and possibly observation-related weights. Then it fits the model and returns the model instance itself:

LogisticRegression(C=1.0, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='warn', n_jobs=None, penalty='l2',
                   random_state=0, solver='liblinear', tol=0.0001, verbose=0,
                   warm_start=False)

This is the obtained string representation of the fitted model.

You can use the fact that .fit() returns the model instance and chain the last two statements. They are equivalent to the following line of code:

model = LogisticRegression(solver='liblinear', random_state=0).fit(x, y)

At this point, you have the classification model defined.

You can quickly get the attributes of your model. For example, the attribute .classes_ represents the array of distinct values that y takes:

>>> model.classes_
array([0, 1])

This is the example of binary classification, and y can be 0 or 1, as indicated above.

You can also get the value of the slope 𝑏₁ and the intercept 𝑏₀ of the linear function 𝑓 like so:

>>> model.intercept_
array([-1.04608067])
>>> model.coef_
array([[0.51491375]])

As you can see, 𝑏₀ is given inside a one-dimensional array, while 𝑏₁ is inside a two-dimensional array. You use the attributes .intercept_ and .coef_ to get these results.

Step 4: Evaluate the Model

Once a model is defined, you can check its performance with .predict_proba(), which returns the matrix of probabilities that the predicted output is equal to zero or one:

>>> model.predict_proba(x)
array([[0.74002157, 0.25997843],
       [0.62975524, 0.37024476],
       [0.5040632 , 0.4959368 ],
       [0.37785549, 0.62214451],
       [0.26628093, 0.73371907],
       [0.17821501, 0.82178499],
       [0.11472079, 0.88527921],
       [0.07186982, 0.92813018],
       [0.04422513, 0.95577487],
       [0.02690569, 0.97309431]])

In the matrix above, each row corresponds to a single observation. The first column is the probability of the predicted output being zero, that is 1 - 𝑝(𝑥). The second column is the probability that the output is one, or 𝑝(𝑥).

You can get the actual predictions, based on the probability matrix and the values of 𝑝(𝑥), with .predict():

>>> model.predict(x)
array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])

This function returns the predicted output values as a one-dimensional array.

The figure below illustrates the input, output, and classification results:

Result of Logistic Regression

The green circles represent the actual responses as well as the correct predictions. The red × shows the incorrect prediction. The full black line is the estimated logistic regression line 𝑝(𝑥). The grey squares are the points on this line that correspond to 𝑥 and the values in the second column of the probability matrix. The black dashed line is the logit 𝑓(𝑥).

The value of 𝑥 slightly above 2 corresponds to the threshold 𝑝(𝑥)=0.5, which is 𝑓(𝑥)=0. This value of 𝑥 is the boundary between the points that are classified as zeros and those predicted as ones.
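You can compute this boundary directly from the intercept and coefficient shown earlier. Here’s a small sketch that reuses the fitted model:

# The boundary is where f(x) = b₀ + b₁x = 0, that is x = −b₀ / b₁
x_boundary = -model.intercept_[0] / model.coef_[0][0]
print(x_boundary)  # approximately 2.03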

For example, the first point has input 𝑥=0, actual output 𝑦=0, probability 𝑝=0.26, and a predicted value of 0. The second point has 𝑥=1, 𝑦=0, 𝑝=0.37, and a prediction of 0. Only the fourth point has the actual output 𝑦=0 and the probability higher than 0.5 (at 𝑝=0.62), so it’s wrongly classified as 1. All other values are predicted correctly.

When you have nine out of ten observations classified correctly, the accuracy of your model is equal to 9/10=0.9, which you can obtain with .score():

>>> model.score(x, y)
0.9

.score() takes the input and output as arguments and returns the ratio of the number of correct predictions to the number of observations.
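Under the hood, this is just the fraction of matching predictions, which you can also compute yourself with a one-line sketch:

# Accuracy computed by hand: the share of predictions that match the actual outputs
print(np.mean(model.predict(x) == y))  # 0.9, the same value returned by model.score(x, y)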

You can get more information on the accuracy of the model with a confusion matrix. In the case of binary classification, the confusion matrix shows the numbers of the following:

  • True negatives in the upper-left position
  • False negatives in the lower-left position
  • False positives in the upper-right position
  • True positives in the lower-right position

To create the confusion matrix, you can use confusion_matrix() and provide the actual and predicted outputs as the arguments:

>>> confusion_matrix(y, model.predict(x))
array([[3, 1],
       [0, 6]])

The obtained matrix shows the following:

  • Three true negative predictions: The first three observations are zeros predicted correctly.
  • No false negative predictions: These are the ones wrongly predicted as zeros.
  • One false positive prediction: The fourth observation is a zero that was wrongly predicted as one.
  • Six true positive predictions: The last six observations are ones predicted correctly.

It’s often useful to visualize the confusion matrix. You can do that with .imshow() from Matplotlib, which accepts the confusion matrix as the argument:

cm = confusion_matrix(y, model.predict(x))

fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(cm)
ax.grid(False)
ax.xaxis.set(ticks=(0, 1), ticklabels=('Predicted 0s', 'Predicted 1s'))
ax.yaxis.set(ticks=(0, 1), ticklabels=('Actual 0s', 'Actual 1s'))
ax.set_ylim(1.5, -0.5)
for i in range(2):
    for j in range(2):
        ax.text(j, i, cm[i, j], ha='center', va='center', color='red')
plt.show()

The code above creates a heatmap that represents the confusion matrix:

Classification Confusion Matrix

In this figure, different colors represent different numbers and similar colors represent similar numbers. Heatmaps are a nice and convenient way to represent a matrix. To learn more about them, check out the Matplotlib documentation on Creating Annotated Heatmaps and .imshow().

You can get a more comprehensive report on the classification with classification_report():

>>> print(classification_report(y, model.predict(x)))
              precision    recall  f1-score   support

           0       1.00      0.75      0.86         4
           1       0.86      1.00      0.92         6

    accuracy                           0.90        10
   macro avg       0.93      0.88      0.89        10
weighted avg       0.91      0.90      0.90        10

This function also takes the actual and predicted outputs as arguments. It returns a report on the classification as a dictionary if you provide output_dict=True or a string otherwise.
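For example, here’s a hedged sketch of working with the dictionary form. The keys shown ('accuracy', the class label '1', and 'precision') follow classification_report()’s usual structure:

report_dict = classification_report(y, model.predict(x), output_dict=True)
print(report_dict['accuracy'])        # 0.9
print(report_dict['1']['precision'])  # precision for class 1, about 0.86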

Note: It’s usually better to evaluate your model with the data you didn’t use for training. That’s how you avoid bias and detect overfitting. You’ll see an example later in this tutorial.

For more information on LogisticRegression, check out the official documentation. In addition, scikit-learn offers a similar class LogisticRegressionCV, which is more suitable for cross-validation. You can also check out the official documentation to learn more about classification reports and confusion matrices.

Improve the Model

You can improve your model by setting different parameters. For example, let’s work with the regularization strength C equal to 10.0, instead of the default value of 1.0:

model = LogisticRegression(solver='liblinear', C=10.0, random_state=0)
model.fit(x, y)

Now you have another model with different parameters. It’s also going to have a different probability matrix and a different set of coefficients and predictions:

>>> model.intercept_
array([-3.51335372])
>>> model.coef_
array([[1.12066084]])
>>> model.predict_proba(x)
array([[0.97106534, 0.02893466],
       [0.9162684 , 0.0837316 ],
       [0.7810904 , 0.2189096 ],
       [0.53777071, 0.46222929],
       [0.27502212, 0.72497788],
       [0.11007743, 0.88992257],
       [0.03876835, 0.96123165],
       [0.01298011, 0.98701989],
       [0.0042697 , 0.9957303 ],
       [0.00139621, 0.99860379]])
>>> model.predict(x)
array([0, 0, 0, 0, 1, 1, 1, 1, 1, 1])

As you can see, the absolute values of the intercept 𝑏₀ and the coefficient 𝑏₁ are larger. This is the case because the larger value of C means weaker regularization, or weaker penalization related to high values of 𝑏₀ and 𝑏₁.

Different values of 𝑏₀ and 𝑏₁ imply a change of the logit 𝑓(𝑥), different values of the probabilities 𝑝(𝑥), a different shape of the regression line, and possibly changes in other predicted outputs and classification performance. The boundary value of 𝑥 for which 𝑝(𝑥)=0.5 and 𝑓(𝑥)=0 is higher now. It’s above 3. In this case, you obtain all true predictions, as shown by the accuracy, confusion matrix, and classification report:

>>> model.score(x, y)
1.0
>>> confusion_matrix(y, model.predict(x))
array([[4, 0],
       [0, 6]])
>>> print(classification_report(y, model.predict(x)))
              precision    recall  f1-score   support

           0       1.00      1.00      1.00         4
           1       1.00      1.00      1.00         6

    accuracy                           1.00        10
   macro avg       1.00      1.00      1.00        10
weighted avg       1.00      1.00      1.00        10

The score (or accuracy) of 1 and the zeros in the lower-left and upper-right fields of the confusion matrix indicate that the actual and predicted outputs are the same. That’s also shown with the figure below:

Result of Logistic Regression

This figure illustrates that the estimated regression line now has a different shape and that the fourth point is correctly classified as 0. There isn’t a red ×, so there is no wrong prediction.

Logistic Regression in Python With scikit-learn: Example 2

Let’s solve another classification problem. It’s similar to the previous one, except that the output differs in the second value. The code is similar to the previous case:

# Step 1: Import packages, functions, and classes
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix

# Step 2: Get data
x = np.arange(10).reshape(-1, 1)
y = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1])

# Step 3: Create a model and train it
model = LogisticRegression(solver='liblinear', C=10.0, random_state=0)
model.fit(x, y)

# Step 4: Evaluate the model
p_pred = model.predict_proba(x)
y_pred = model.predict(x)
score_ = model.score(x, y)
conf_m = confusion_matrix(y, y_pred)
report = classification_report(y, y_pred)

This classification code sample generates the following results:

>>> print('x:', x, sep='\n')
x:
[[0]
 [1]
 [2]
 [3]
 [4]
 [5]
 [6]
 [7]
 [8]
 [9]]
>>> print('y:', y, sep='\n', end='\n\n')
y:
[0 1 0 0 1 1 1 1 1 1]

>>> print('intercept:', model.intercept_)
intercept: [-1.51632619]
>>> print('coef:', model.coef_, end='\n\n')
coef: [[0.703457]]

>>> print('p_pred:', p_pred, sep='\n', end='\n\n')
p_pred:
[[0.81999686 0.18000314]
 [0.69272057 0.30727943]
 [0.52732579 0.47267421]
 [0.35570732 0.64429268]
 [0.21458576 0.78541424]
 [0.11910229 0.88089771]
 [0.06271329 0.93728671]
 [0.03205032 0.96794968]
 [0.0161218  0.9838782 ]
 [0.00804372 0.99195628]]

>>> print('y_pred:', y_pred, end='\n\n')
y_pred: [0 0 0 1 1 1 1 1 1 1]

>>> print('score_:', score_, end='\n\n')
score_: 0.8

>>> print('conf_m:', conf_m, sep='\n', end='\n\n')
conf_m:
[[2 1]
 [1 6]]

>>> print('report:', report, sep='\n')
report:
              precision    recall  f1-score   support

           0       0.67      0.67      0.67         3
           1       0.86      0.86      0.86         7

    accuracy                           0.80        10
   macro avg       0.76      0.76      0.76        10
weighted avg       0.80      0.80      0.80        10

In this case, the score (or accuracy) is 0.8. There are two observations classified incorrectly. One of them is a false negative, while the other is a false positive.

The figure below illustrates this example with eight correct and two incorrect predictions:

Result of Logistic Regression

This figure reveals one important characteristic of this example. Unlike the previous one, this problem is not linearly separable. That means you can’t find a value of 𝑥 and draw a straight line to separate the observations with 𝑦=0 and those with 𝑦=1. There is no such line. Keep in mind that logistic regression is essentially a linear classifier, so you theoretically can’t make a logistic regression model with an accuracy of 1 in this case.

Logistic Regression in Python With StatsModels: Example

You can also implement logistic regression in Python with the StatsModels package. Typically, you want this when you need more statistical details related to models and results. The procedure is similar to that of scikit-learn.

Step 1: Import Packages

All you need to import is NumPy and statsmodels.api:

import numpy as np
import statsmodels.api as sm

Now you have the packages you need.

Step 2: Get Data

You can get the inputs and output the same way as you did with scikit-learn. However, StatsModels doesn’t take the intercept 𝑏₀ into account, and you need to include the additional column of ones in x. You do that with add_constant():

x = np.arange(10).reshape(-1, 1)
y = np.array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1])
x = sm.add_constant(x)

add_constant() takes the array x as the argument and returns a new array with the additional column of ones. This is how x and y look:

>>> x
array([[1., 0.],
       [1., 1.],
       [1., 2.],
       [1., 3.],
       [1., 4.],
       [1., 5.],
       [1., 6.],
       [1., 7.],
       [1., 8.],
       [1., 9.]])
>>> y
array([0, 1, 0, 0, 1, 1, 1, 1, 1, 1])

This is your data. The first column of x corresponds to the intercept 𝑏₀. The second column contains the original values of x.

Step 3: Create a Model and Train It

Your logistic regression model is going to be an instance of the class statsmodels.discrete.discrete_model.Logit. This is how you can create one:

>>> model = sm.Logit(y, x)

Note that the first argument here is y, followed by x.

Now, you’ve created your model and you should fit it with the existing data. You do that with .fit() or, if you want to apply L1 regularization, with .fit_regularized():

>>> result = model.fit(method='newton')
Optimization terminated successfully.
         Current function value: 0.350471
         Iterations 7

The model is now ready, and the variable result holds useful data. For example, you can obtain the values of 𝑏₀ and 𝑏₁ with .params:

>>> result.params
array([-1.972805  ,  0.82240094])

The first element of the obtained array is the intercept 𝑏₀, while the second is the slope 𝑏₁. For more information, you can look at the official documentation on Logit, as well as .fit() and .fit_regularized().
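If you want to try the regularized fit mentioned above, a minimal sketch might look like the following. method='l1' and alpha are parameters of .fit_regularized(); the alpha value here is an arbitrary choice for illustration:

result_l1 = model.fit_regularized(method='l1', alpha=1.0)  # L1-penalized fit
print(result_l1.params)  # regularized estimates of b₀ and b₁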

Step 4: Evaluate the Model

You can use result to obtain the probabilities of the predicted outputs being equal to one:

>>> result.predict(x)
array([0.12208792, 0.24041529, 0.41872657, 0.62114189, 0.78864861,
       0.89465521, 0.95080891, 0.97777369, 0.99011108, 0.99563083])

These probabilities are calculated with .predict(). You can use their values to get the actual predicted outputs:

>>> (result.predict(x) >= 0.5).astype(int)
array([0, 0, 0, 1, 1, 1, 1, 1, 1, 1])

The obtained array contains the predicted output values. As you can see, 𝑏₀, 𝑏₁, and the probabilities obtained with scikit-learn and StatsModels are different. This is the consequence of applying different iterative and approximate procedures and parameters. However, in this case, you obtain the same predicted outputs as when you used scikit-learn.

You can obtain the confusion matrix with .pred_table():

>>> result.pred_table()
array([[2., 1.],
       [1., 6.]])

This example is the same as when you used scikit-learn because the predicted outputs are equal. The confusion matrices you obtained with StatsModels and scikit-learn differ only in the types of their elements (floating-point numbers and integers).

.summary() and .summary2() get output data that you might find useful in some circumstances:

>>> result.summary()
<class 'statsmodels.iolib.summary.Summary'>
"""
                           Logit Regression Results
==============================================================================
Dep. Variable:                      y   No. Observations:                   10
Model:                          Logit   Df Residuals:                        8
Method:                           MLE   Df Model:                            1
Date:                Sun, 23 Jun 2019   Pseudo R-squ.:                  0.4263
Time:                        21:43:49   Log-Likelihood:                -3.5047
converged:                       True   LL-Null:                       -6.1086
                                        LLR p-value:                   0.02248
==============================================================================
                 coef    std err          z      P>|z|      [0.025      0.975]
------------------------------------------------------------------------------
const         -1.9728      1.737     -1.136      0.256      -5.377       1.431
x1             0.8224      0.528      1.557      0.119      -0.213       1.858
==============================================================================
"""
>>> result.summary2()
<class 'statsmodels.iolib.summary2.Summary'>
"""
                        Results: Logit
===============================================================
Model:              Logit            Pseudo R-squared: 0.426
Dependent Variable: y                AIC:              11.0094
Date:               2019-06-23 21:43 BIC:              11.6146
No. Observations:   10               Log-Likelihood:   -3.5047
Df Model:           1                LL-Null:          -6.1086
Df Residuals:       8                LLR p-value:      0.022485
Converged:          1.0000           Scale:            1.0000
No. Iterations:     7.0000
-----------------------------------------------------------------
          Coef.    Std.Err.      z      P>|z|     [0.025   0.975]
-----------------------------------------------------------------
const    -1.9728     1.7366   -1.1360   0.2560   -5.3765   1.4309
x1        0.8224     0.5281    1.5572   0.1194   -0.2127   1.8575
===============================================================
"""

These are detailed reports with values that you can obtain with appropriate methods and attributes. For more information, check out the official documentation related to LogitResults.

Logistic Regression in Python: Handwriting Recognition

The previous examples illustrated the implementation of logistic regression in Python, as well as some details related to this method. The next example will show you how to use logistic regression to solve a real-world classification problem. The approach is very similar to what you’ve already seen, but with a larger dataset and several additional concerns.

This example is about image recognition. To be more precise, you’ll work on the recognition of handwritten digits. You’ll use a dataset with 1797 observations, each of which is an image of one handwritten digit. Each image has 64 pixels: it’s 8 px wide and 8 px high.

Note: To learn more about this dataset, check the official documentation.

The inputs (𝐱) are vectors with 64 dimensions or values. Each input vector describes one image. Each of the 64 values represents one pixel of the image. The input values are integers between 0 and 16, depending on the shade of gray for the corresponding pixel. The output (𝑦) for each observation is an integer between 0 and 9, consistent with the digit on the image. There are ten classes in total, one for each digit.

Step 1: Import Packages

You’ll need to import Matplotlib, NumPy, and several functions and classes from scikit-learn:

import matplotlib.pyplot as plt
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler

That’s it! You have all the functionality you need to perform classification.

Step 2a: Get Data

You can grab the dataset directly from scikit-learn with load_digits(). It returns a tuple of the inputs and output:

x, y = load_digits(return_X_y=True)

Now you have the data. This is how x and y look:

>>> x
array([[ 0.,  0.,  5., ...,  0.,  0.,  0.],
       [ 0.,  0.,  0., ..., 10.,  0.,  0.],
       [ 0.,  0.,  0., ..., 16.,  9.,  0.],
       ...,
       [ 0.,  0.,  1., ...,  6.,  0.,  0.],
       [ 0.,  0.,  2., ..., 12.,  0.,  0.],
       [ 0.,  0., 10., ..., 12.,  1.,  0.]])
>>> y
array([0, 1, 2, ..., 8, 9, 8])

That’s your data to work with. x is a multi-dimensional array with 1797 rows and 64 columns. It contains integers from 0 to 16. y is a one-dimensional array with 1797 integers between 0 and 9.

Step 2b: Split Data

It’s a good and widely-adopted practice to split the dataset you’re working with into two subsets. These are the training set and the test set. This split is usually performed randomly. You should use the training set to fit your model. Once the model is fitted, you evaluate its performance with the test set. It’s important not to use the test set in the process of fitting the model. This approach enables an unbiased evaluation of the model.

One way to split your dataset into training and test sets is to apply train_test_split():

x_train, x_test, y_train, y_test = \
    train_test_split(x, y, test_size=0.2, random_state=0)

train_test_split() accepts x and y. It also takes test_size, which determines the size of the test set, and random_state to define the state of the pseudo-random number generator, as well as other optional arguments. This function returns a list with four arrays:

  1. x_train: the part of x used to fit the model
  2. x_test: the part of x used to evaluate the model
  3. y_train: the part of y that corresponds to x_train
  4. y_test: the part of y that corresponds to x_test

Once your data is split, you can forget about x_test and y_test until you define your model.

Step 2c: Scale Data

Standardization is the process of transforming data in a way such that the mean of each column becomes equal to zero, and the standard deviation of each column is one. This way, you obtain the same scale for all columns. Take the following steps to standardize your data:

  1. Calculate the mean and standard deviation for each column.
  2. Subtract the corresponding mean from each element.
  3. Divide the obtained difference by the corresponding standard deviation.

It’s a good practice to standardize the input data that you use for logistic regression, although in many cases it’s not necessary. Standardization might improve the performance of your algorithm. It helps if you need to compare and interpret the weights. It’s important when you apply penalization because the algorithm is actually penalizing against the large values of the weights.
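To see what these steps do, here’s a minimal NumPy sketch on a tiny, made-up array; the StandardScaler class used below performs the same transformation for you:

import numpy as np

data = np.array([[1.0, 10.0],
                 [2.0, 20.0],
                 [3.0, 30.0]])  # hypothetical two-column input

standardized = (data - data.mean(axis=0)) / data.std(axis=0)
print(standardized.mean(axis=0))  # column means are now (approximately) 0
print(standardized.std(axis=0))   # column standard deviations are now (approximately) 1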

You can standardize your inputs by creating an instance of StandardScaler and calling .fit_transform() on it:

scaler = StandardScaler()
x_train = scaler.fit_transform(x_train)

.fit_transform() fits the instance of StandardScaler to the array passed as the argument, transforms this array, and returns the new, standardized array. Now, x_train is a standardized input array.

Step 3: Create a Model and Train It

This step is very similar to the previous examples. The only difference is that you use x_train and y_train subsets to fit the model. Again, you should create an instance of LogisticRegression and call .fit() on it:

model = LogisticRegression(solver='liblinear', C=0.05, multi_class='ovr',
                           random_state=0)
model.fit(x_train, y_train)

When you’re working with problems with more than two classes, you should specify the multi_class parameter of LogisticRegression. It determines how to solve the problem:

  • 'ovr' says to make the binary fit for each class.
  • 'multinomial' says to apply the multinomial loss fit.

The last statement yields the following output since .fit() returns the model itself:

LogisticRegression(C=0.05, class_weight=None, dual=False, fit_intercept=True,
                   intercept_scaling=1, l1_ratio=None, max_iter=100,
                   multi_class='ovr', n_jobs=None, penalty='l2',
                   random_state=0, solver='liblinear', tol=0.0001, verbose=0,
                   warm_start=False)

These are the parameters of your model. It’s now defined and ready for the next step.
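If you ever need the multinomial approach instead of 'ovr', here’s a hedged sketch of how the configuration might look. Note that 'liblinear' doesn’t support the multinomial option, so the sketch switches to 'lbfgs':

model_multinomial = LogisticRegression(multi_class='multinomial', solver='lbfgs',
                                       C=0.05, random_state=0)
model_multinomial.fit(x_train, y_train)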

Step 4: Evaluate the Model

You should evaluate your model similar to what you did in the previous examples, with the difference that you’ll mostly use x_test and y_test, which are the subsets not applied for training. If you’ve decided to standardize x_train, then the obtained model relies on the scaled data, so x_test should be scaled as well with the same instance of StandardScaler:

x_test = scaler.transform(x_test)

That’s how you obtain a new, properly-scaled x_test. In this case, you use .transform(), which only transforms the argument, without fitting the scaler.

You can obtain the predicted outputs with .predict():

y_pred = model.predict(x_test)

The variable y_pred is now bound to an array of the predicted outputs. Note that you use x_test as the argument here.

You can obtain the accuracy with .score():

>>> model.score(x_train, y_train)
0.964509394572025
>>> model.score(x_test, y_test)
0.9416666666666667

Actually, you can get two values of the accuracy, one obtained with the training set and the other with the test set. It might be a good idea to compare the two, since a training set accuracy that’s much higher might indicate overfitting. The test set accuracy is more relevant for evaluating the performance on unseen data since it isn’t biased.

You can get the confusion matrix with confusion_matrix():

>>> confusion_matrix(y_test, y_pred)
array([[27,  0,  0,  0,  0,  0,  0,  0,  0,  0],
       [ 0, 32,  0,  0,  0,  0,  1,  0,  1,  1],
       [ 1,  1, 33,  1,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  1, 28,  0,  0,  0,  0,  0,  0],
       [ 0,  0,  0,  0, 29,  0,  0,  1,  0,  0],
       [ 0,  0,  0,  0,  0, 39,  0,  0,  0,  1],
       [ 0,  1,  0,  0,  0,  0, 43,  0,  0,  0],
       [ 0,  0,  0,  0,  0,  0,  0, 39,  0,  0],
       [ 0,  2,  1,  2,  0,  0,  0,  1, 33,  0],
       [ 0,  0,  0,  1,  0,  1,  0,  2,  1, 36]])

The obtained confusion matrix is large. In this case, it has 100 numbers. This is a situation when it might be really useful to visualize it:

cm = confusion_matrix(y_test, y_pred)

font_size = 14  # font size for the axis labels; define it before use
fig, ax = plt.subplots(figsize=(8, 8))
ax.imshow(cm)
ax.grid(False)
ax.set_xlabel('Predicted outputs', fontsize=font_size, color='black')
ax.set_ylabel('Actual outputs', fontsize=font_size, color='black')
ax.xaxis.set(ticks=range(10))
ax.yaxis.set(ticks=range(10))
ax.set_ylim(9.5, -0.5)
for i in range(10):
    for j in range(10):
        ax.text(j, i, cm[i, j], ha='center', va='center', color='white')
plt.show()

The code above produces the following figure of the confusion matrix:

Classification Confusion Matrix

This is a heatmap that illustrates the confusion matrix with numbers and colors. You can see that the shades of purple represent small numbers (like 0, 1, or 2), while green and yellow show much larger numbers (27 and above).

The numbers on the main diagonal (27, 32, …, 36) show the number of correct predictions from the test set. For example, there are 27 images with zero, 32 images of one, and so on that are correctly classified. Other numbers correspond to the incorrect predictions. For example, the number 1 in the third row and the first column shows that there is one image with the number 2 incorrectly classified as 0.

Finally, you can get the report on classification as a string or dictionary with classification_report():

>>> print(classification_report(y_test, y_pred))
              precision    recall  f1-score   support

           0       0.96      1.00      0.98        27
           1       0.89      0.91      0.90        35
           2       0.94      0.92      0.93        36
           3       0.88      0.97      0.92        29
           4       1.00      0.97      0.98        30
           5       0.97      0.97      0.97        40
           6       0.98      0.98      0.98        44
           7       0.91      1.00      0.95        39
           8       0.94      0.85      0.89        39
           9       0.95      0.88      0.91        41

    accuracy                           0.94       360
   macro avg       0.94      0.94      0.94       360
weighted avg       0.94      0.94      0.94       360

This report shows additional information, like the support and precision of classifying each digit.

Beyond Logistic Regression in Python

Logistic regression is a fundamental classification technique. It’s a relatively uncomplicated linear classifier. Despite its simplicity and popularity, there are cases (especially with highly complex models) where logistic regression doesn’t work well. In such circumstances, you can use other classification techniques:

  • k-Nearest Neighbors
  • Naive Bayes classifiers
  • Support Vector Machines
  • Decision Trees
  • Random Forests
  • Neural Networks

Fortunately, there are several comprehensive Python libraries for machine learning that implement these techniques. For example, the package you’ve seen in action here, scikit-learn, implements all of the above-mentioned techniques, with the exception of neural networks.

For all these techniques, scikit-learn offers suitable classes with methods like model.fit(), model.predict_proba(), model.predict(), model.score(), and so on. You can combine them with train_test_split(), confusion_matrix(), classification_report(), and others.
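For example, here’s a hedged sketch of swapping in a different classifier while keeping the same workflow; KNeighborsClassifier is scikit-learn’s k-nearest neighbors implementation, and the calls mirror the ones you used for logistic regression:

from sklearn.neighbors import KNeighborsClassifier

knn = KNeighborsClassifier(n_neighbors=5)  # k-nearest neighbors with k=5
knn.fit(x_train, y_train)
print(knn.score(x_test, y_test))  # accuracy on the test set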

Neural networks (including deep neural networks) have become very popular for classification problems. Libraries like TensorFlow, PyTorch, or Keras offer suitable, performant, and powerful support for these kinds of models.

Conclusion

You now know what logistic regression is and how you can implement it for classification with Python. You’ve used many open-source packages, including NumPy, to work with arrays and Matplotlib to visualize the results. You also used both scikit-learn and StatsModels to create, fit, evaluate, and apply models.

Generally, logistic regression in Python has a straightforward and user-friendly implementation. It usually consists of these steps:

  1. Import packages, functions, and classes
  2. Get data to work with and, if appropriate, transform it
  3. Create a classification model and train (or fit) it with existing data
  4. Evaluate your model to see if its performance is satisfactory
  5. Apply your model to make predictions

You’ve come a long way in understanding one of the most important areas of machine learning! If you have questions or comments, then please put them in the comments section below.




Podcast.__init__: Using Deliberate Practice To Level Up Your Python


Summary

An effective strategy for teaching and learning is to rely on well structured exercises and collaboration for practicing the material. In this episode long time Python trainer Reuven Lerner reflects on the lessons that he has learned in the 5 years since his first appearance on the show, how his teaching has evolved, and the ways that he has incorporated more hands-on experiences into his lessons. This was a great conversation about the benefits of being deliberate in your approach to ongoing education in the field of technology, as well as having some helpful references for ways to keep your own skills sharp.

Announcements

  • Hello and welcome to Podcast.__init__, the podcast about Python and the people who make it great.
  • When you’re ready to launch your next app or want to try a project you hear about on the show, you’ll need somewhere to deploy it, so take a look at our friends over at Linode. With 200 Gbit/s private networking, scalable shared block storage, node balancers, and a 40 Gbit/s public network, all controlled by a brand new API you’ve got everything you need to scale up. And for your tasks that need fast computation, such as training machine learning models, they just launched dedicated CPU instances. Go to pythonpodcast.com/linode to get a $20 credit and launch a new server in under a minute. And don’t forget to thank them for their continued support of this show!
  • You listen to this show to learn and stay up to date with the ways that Python is being used, including the latest in machine learning and data analysis. For even more opportunities to meet, listen, and learn from your peers you don’t want to miss out on this year’s conference season. We have partnered with organizations such as O’Reilly Media, Corinium Global Intelligence, ODSC, and Data Council. Upcoming events include the Software Architecture Conference in NYC, Strata Data in San Jose, and PyCon US in Pittsburgh. Go to pythonpodcast.com/conferences to learn more about these and other events, and take advantage of our partner discounts to save money when you register today.
  • Your host as usual is Tobias Macey and today I’m pleased to welcome back Reuven Lerner to talk about the benefits of deliberate practice for learning and improving programming skills

Interview

  • Introductions

  • How did you get introduced to Python?

  • In your first appearance on the show back in episode 2 we talked about your experience as a Python trainer. How has your teaching style evolved in the past 5 years?

    • How has the focus and scope of your training changed in that time period?
  • What have you found to be some of the most helpful and effective tactics in your training?

  • From the learner perspective, what are some strategies that you recommend for retaining information, particularly in the context of gaining technical knowledge?

  • In-person training vs. real-time online training vs. recorded videos, advantages and disadvantages of each.

  • Blended learning, in which we combine aspects of the above

    • Beyond in-person training, what are your preferred methods for learning and maintaining new skills?
  • What is deliberate practice and how does it differ from the habits that many of us might default to?

    • What are some of the resources that you provide for students of your trainings for practicing?
    • What are some of the outside resources which you have found most useful or effective?

Keep In Touch

Picks

Links

The intro and outro music is from Requiem for a Fish The Freak Fandango Orchestra / CC BY-SA

Wingware News: Wing Python IDE 7.2 Release Candidate 1 - January 14, 2020


Wing 7.2 adds auto-formatting with Black and YAPF, expands support for virtualenv, adds support for Anaconda environments, explicitly supports debugging modules launched with python -m, simplifies manually configured remote debugging, and fixes a number of usability issues.


Wing 7.2 Screen Shot

Download Wing 7.2 Now: Wing Pro | Wing Personal | Wing 101 | Compare Products


What's New in Wing 7.2


Auto-Reformatting with Black and YAPF (Wing Pro)

Wing 7.2 adds support for Black and YAPF for code reformatting, in addition to the previously available built-in autopep8 reformatting. To use Black or YAPF, they must first be installed into your Python with pip, conda, or other package manager. Reformatting options are available from the Source > Reformatting menu group, and automatic reformatting may be configured in the Editor > Auto-reformatting preferences group.

For details, see Auto-Reformatting in the Source Code Editor chapter of the Wing Manual found in Wing's Help menu.

Improved Support for Virtualenv

Wing 7.2 improves support for virtualenv by allowing the command that activates the environment to be entered in the Python Executable in Project Properties, Launch Configurations, and when creating new projects. The New Project dialog now also includes the option to create a new virtualenv along with the new project, optionally specifying packages to install.

For details, see Using Wing with Virtualenv under the How-Tos found in Wing's Help menu.

Support for Anaconda Environments

Similarly, Wing 7.2 adds support for Anaconda environments, so the conda activate command can be entered when configuring the Python Executable, and the New Project dialog supports using an existing Anaconda environment or creating a new one along with the project.

For details, see Using Wing with Anaconda under the How-Tos found in Wing's Help menu.

And More

Wing 7.2 also adds explicit support for debugging modules with python -m, simplifies manual configuration of remote debugging, allows using a command line for the configured Python Executable, and fixes a number of usability issues.

For details see the change log.

For a complete list of new features in Wing 7, see What's New in Wing 7.


Try Wing 7.2 Now!


Wing 7.2 is an exciting new step for Wingware's Python IDE product line. Find out how Wing 7.2 can turbocharge your Python development by trying it today.

Downloads: Wing Pro | Wing Personal | Wing 101 | Compare Products

See Upgrading for details on upgrading from Wing 6 and earlier, and Migrating from Older Versions for a list of compatibility notes.

Python Diary: Creating a transparently encrypted field in Django


This is officially PythonDiary's first Python 3 article! Python 2 is now officially dead, so there are fewer reasons to make it a major focus going forward.

In some situations you may want data that is otherwise visible on the Django site, or through the Django admin, to be transparently encrypted in the database. This can be very useful if, for example, you use an untrusted database that is not managed by you, where a database administrator could dump the data or inspect the stored schemas. This is common with managed databases that are maintained by a hosting provider or shared with other tenants. In this day and age, with many database breaches from large vendors appearing in the news, you can never be 100% sure that the data you save into your database will never be leaked.

Django supports custom fields on your database models, and the various CRUD and model services Django provides will use these fields without extra work, making it possible to create a globally, transparently encrypted field. Let's start with the custom Django field itself to explain how that works.

from django.db.models.fields import CharField

import cipher


class EnField(CharField):
    def from_db_value(self, value, expression, connection):
        """ Decrypt the data for display in Django as normal. """
        return cipher.decrypt(value)

    def get_prep_value(self, value):
        """ Encrypt the data when saving it into the database. """
        return cipher.encrypt(value)

As you can see, it is really straightforward to extend an existing field, such as CharField. I chose CharField in this example because it renders easily everywhere in the Django framework, which makes it the most straightforward field to test this concept with, and the simplest when playing with a model in the Python shell. You may also wish to use the base64 module to encode the ciphertext, but most databases should allow binary data to be stored in a VARCHAR, and you may opt to use a binary field as well. Next, let's see how all the magic works in the cipher module.

from Crypto.Cipher import AES
from django.conf import settings
import hashlib, random


def __random5():
    """ Generate a random sequence of 5 bytes for use in a SHA512 hash. """
    return bytes(''.join(map(chr, random.sample(range(255), 5))), 'utf-8')


def __fill():
    """ This is used to generate filler data to pad our plain text before encryption. """
    return hashlib.sha512(__random5()).digest()


def __cipher():
    """ A simple constructor we can call from both our encrypt and decrypt functions. """
    # Key is generated by our SECRET_KEY in Django.
    key = hashlib.sha256(bytes(settings.SECRET_KEY, 'utf-8')).digest()
    # Here you should perhaps use MODE_CBC, and add an initialization vector for
    # additional security.  ECB is the default, and isn't very secure.
    return AES.new(key)


def encrypt(data):
    """ The entrypoint for encrypting our field. """
    # This is used to generate filler so we can satisfy the block size of AES.
    # It is best to pad with random data, rather than to pad with, say, nulls.
    FILL = __fill() + __fill() + __fill()
    return __cipher().encrypt(bytes(data, 'utf-8') + b'|' + FILL[len(data) + 1:])


def decrypt(data):
    """ Entrypoint for decryption """
    return __cipher().decrypt(data).split(b'|')[0].decode('utf-8')

Pretty neat, huh? Feel free to change how the filler is generated, the cipher being used, and of course the options passed to the cipher to further customize this solution. I have tested this using Python 3.5 and Django 2.2.9, although it should work on future versions of both Python and Django. For obvious reasons, you should be using caching on your Django site if you plan on displaying these fields on your front-end. The best part about doing this as a field, rather than placing this code in your model, in a signal, or in a form, is that it works 100% in the Django admin and any other place you may reference this field within your Django codebase. This is an interesting example of when to use a custom model field in Django, rather than adding the logic to the model or the form.
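For context, here is a minimal sketch of how such a field might be used in a model. The Secret model, its field names, and the fields.py module path are hypothetical, not part of the original article:

from django.db import models

from .fields import EnField  # assuming EnField was saved in a fields.py module


class Secret(models.Model):
    name = models.CharField(max_length=100)
    # Stored encrypted in the database, but reads and writes like a normal CharField.
    token = EnField(max_length=255)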

Techiediaries - Django: Django 3 Tutorial & CRUD Example with MySQL and Bootstrap


Django 3 is released with full async support! In this tutorial, we'll see by example how to create a CRUD application from scratch and step by step. We'll see how to configure a MySQL database, enable the admin interface, and create the django views.

We'll be using Bootstrap 4 for styling.

You'll learn how to:

  • Implement CRUD operations,
  • Configure and access a MySQL database,
  • Create django views, templates and urls,
  • Style the UI with Bootstrap 4

Django 3 Features

Django 3 comes with many new features such as:

  • MariaDB support: Django now officially supports MariaDB 10.1+. You can use MariaDB via the MySQL backend,
  • ASGI support for async programming,
  • Django 3.0 provides support for running as an ASGI application, making Django fully async-capable
  • Exclusion constraints on PostgreSQL: Django 3.0 adds a new ExclusionConstraint class which adds exclusion constraints on PostgreSQL, etc.

Prerequisites

Let's start with the prerequisites for this tutorial. In order to follow the tutorial step by step, you'll need a few requirements, such as:

  • Basic knowledge of Python,
  • Working knowledge of Django (django-admin.py and manage.py),
  • A recent version of Python 3 installed on your system (Django 3 requires Python 3.6 or later),
  • MySQL database installed on your system.

We will be using pip and venv which are bundled as modules in recent versions of Python so you don't actually need to install them unless you are working with old versions.

If you are ready, let's get started!

Django 3 Tutorial, Step 1 - Creating a MySQL Database

In this step, we'll create a mysql database for storing our application data.

Open a new command-line interface and run the mysql client as follows:

$ mysql -u root -p

You'll be prompted for your MySQL password, enter it and press Enter.

Next, create a database using the following SQL statement:

mysql> create database mydb;

We now have an empty mysql database!

Django 3 Tutorial, Step 2 - Initializing a New Virtual Environment

In this step, we'll initialize a new virtual environment for installing our project packages in separation of the system-wide packages.

Head back to your command-line interface and run the following command:

$ python3 -m venv .env

Next, activate your virtual environment using the following command:

$ source .env/bin/activate

At this point of our tutorial, we have a mysql database for persisting data and a virtual environment for installing the project packages.

Django 3 Tutorial, Step 3 - Installing Django and MySQL Client

In this step, we'll install django and mysql client from PyPI using pip in our activated virtual environment.

Head back to your command-line interface and run the following command to install the django package:

$ pip install django

At the time of writing this tutorial, django-3.0.2 is installed.

You will also need to install the mysql client for Python using pip:

$ pip install mysqlclient

Django 3 Tutorial, Step 4 - Initializing a New Project

In this step, we'll initialize a new django project using the django-admin.

Head back to your command-line interface and run the following command:

$ django-admin startproject djangoCrudExample

Next, open the settings.py file and update the database settings to configure the mydb database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'root',
        'PASSWORD': '<YOUR_DB_PASSWORD>',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

Next, migrate the database using the following commands:

$ cd djangoCrudExample
$ python3 manage.py migrate

You'll get a similar output:

Operations to perform:
  Apply all migrations: admin, auth, contenttypes, sessions
Running migrations:
  Applying contenttypes.0001_initial... OK
  Applying auth.0001_initial... OK
  Applying admin.0001_initial... OK
  Applying admin.0002_logentry_remove_auto_add... OK
  Applying admin.0003_logentry_add_action_flag_choices... OK
  Applying contenttypes.0002_remove_content_type_name... OK
  Applying auth.0002_alter_permission_name_max_length... OK
  Applying auth.0003_alter_user_email_max_length... OK
  Applying auth.0004_alter_user_username_opts... OK
  Applying auth.0005_alter_user_last_login_null... OK
  Applying auth.0006_require_contenttypes_0002... OK
  Applying auth.0007_alter_validators_add_error_messages... OK
  Applying auth.0008_alter_user_username_max_length... OK
  Applying auth.0009_alter_user_last_name_max_length... OK
  Applying auth.0010_alter_group_name_max_length... OK
  Applying auth.0011_update_proxy_permissions... OK
  Applying sessions.0001_initial... OK

This simply applies a set of builtin django migrations to create the database tables necessary for the working of django.

Django 3 Tutorial, Step 5 - Installing django-widget-tweaks

In this step, we'll install django-widget-tweaks in our virtual environment. Head back to your command-line interface and run the following command:

$ pip install django-widget-tweaks

Next, open the settings.py file and add the application to the installed apps:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'widget_tweaks'
]

Django 3 Tutorial, Step 6 - Creating an Admin User

In this step, we'll create an admin user that will allow us to access the admin interface of our app using the following command:

$ python manage.py createsuperuser

Provide the desired username, email and password when prompted:

Username (leave blank to use 'ahmed'): 
Email address: ahmed@gmail.com
Password: 
Password (again): 
Superuser created successfully.

Django 3 Tutorial, Step 7 - Creating a Django Application

In this step, we'll create a django application.

Head back to your command-line interface, and run the following command:

$ python manage.py startapp crudapp

Next, you need to add it in the settings.py file as follows:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'widget_tweaks',
    'crudapp'
]

Django 3 Tutorial, Step 8 - Creating the Model(s)

In this step, we'll create the database model for storing contacts.

Open the crudapp/models.py file and add the following code:

from django.db import models


class Contact(models.Model):
    firstName = models.CharField("First name", max_length=255, blank=True, null=True)
    lastName = models.CharField("Last name", max_length=255, blank=True, null=True)
    email = models.EmailField()
    phone = models.CharField(max_length=20, blank=True, null=True)
    address = models.TextField(blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)

    def __str__(self):
        return self.firstName

After creating this model, you need to create migrations using the following command:

$ python manage.py makemigrations

You should get a similar output:

  crudapp/migrations/0001_initial.py
    - Create model Contact

Next, you need to migrate your database using the following command:

$ python manage.py migrate

You should get a similar output:

  Applying crudapp.0001_initial... OK

Django 3 Tutorial, Step 9 - Creating a Form

In this step, we'll create a form for creating a contact.

In the crudapp folder, create a forms.py file and add the following code:

from django import forms
from .models import Contact


class ContactForm(forms.ModelForm):
    class Meta:
        model = Contact
        fields = "__all__"

We import the Contact model from the models.py file. We then create a class called ContactForm, subclassing Django's ModelForm from the django.forms package and specifying the model we want to use. We also specify that we will be using all fields of the Contact model. This will make it possible for us to display those fields in our templates.
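If you only want to expose a subset of the model, the Meta.fields option also accepts an explicit list instead of "__all__"; a minimal sketch (this variant form is not part of the tutorial):

from django import forms
from .models import Contact


class ContactShortForm(forms.ModelForm):
    class Meta:
        model = Contact
        # Only these fields will be rendered and validated by the form.
        fields = ["firstName", "lastName", "email"]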

Django 3 Tutorial, Step 10 - Creating the Views

In this step, we'll create the views for performing the CRUD operations.

Open the crudapp/views.py file and add:

from django.shortcuts import render, redirect, get_object_or_404
from .models import Contact
from .forms import ContactForm
from django.views.generic import ListView, DetailView

Next, add:

class IndexView(ListView):
    template_name = 'crudapp/index.html'
    context_object_name = 'contact_list'

    def get_queryset(self):
        return Contact.objects.all()


class ContactDetailView(DetailView):
    model = Contact
    template_name = 'crudapp/contact-detail.html'

Next, add:

def create(request):
    if request.method == 'POST':
        form = ContactForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('index')
    form = ContactForm()
    return render(request, 'crudapp/create.html', {'form': form})


def edit(request, pk, template_name='crudapp/edit.html'):
    contact = get_object_or_404(Contact, pk=pk)
    form = ContactForm(request.POST or None, instance=contact)
    if form.is_valid():
        form.save()
        return redirect('index')
    return render(request, template_name, {'form': form})


def delete(request, pk, template_name='crudapp/confirm_delete.html'):
    contact = get_object_or_404(Contact, pk=pk)
    if request.method == 'POST':
        contact.delete()
        return redirect('index')
    return render(request, template_name, {'object': contact})

Django 3 Tutorial, Step 11 - Creating Templates

Open the settings.py file and add os.path.join(BASE_DIR, 'templates') to the TEMPLATES array:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

This will tell django to look for the templates in the templates folder.

Next, inside the crudapp folder create a templates folder:

$ mkdir templates

Next, inside the templates folder, create the following files:

  • base.html
  • confirm_delete.html
  • edit.html
  • index.html
  • create.html
  • contact-detail.html

You can create these folders and files by running the following commands from the root of your project:

$ mkdir templates
$ cd templates
$ mkdir crudapp
$ touch crudapp/base.html
$ touch crudapp/confirm_delete.html
$ touch crudapp/edit.html
$ touch crudapp/index.html
$ touch crudapp/create.html
$ touch crudapp/contact-detail.html

Open the crudapp/templates/base.html file and add:

<!DOCTYPE html>
<html>
<head>
    <title>Django 3 CRUD Example</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/css/bootstrap.min.css">
</head>
<body>
{% block content %}
{% endblock %}
<script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
<script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/js/bootstrap.min.js"></script>
</body>
</html>

Next, open the crudapp/templates/index.html file and add:

{% extends 'crudapp/base.html' %}
{% block content %}
<divclass="container-fluid"><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10"><h3class="round3"style="text-align:center;">Contacts</h3></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div><divclass="row"><divclass="col-md-10 col-xs-10 col-sm-10"></div><divclass="col-md-2 col-xs-1 col-sm-1"><br/><ahref="{% url 'create' %}"><buttontype="button"class="btn btn-success"><spanclass="glyphicon glyphicon-plus"></span></button></a></div></div><br/>
    {% for contact in contact_list %}
    <divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-7 col-xs-7 col-sm-7"><ulclass="list-group"><liclass="list-group-item "><ahref="{% url 'detail' contact.pk %}"> {{ contact.firstName }} {{contact.lastName}} </a><spanclass="badge"></span></li></ul><br></div><divclass="col-md-1 col-xs-1 col-sm-1"><ahref="{% url 'detail' contact.pk %}"><buttontype="button"class="btn btn-info"><spanclass="glyphicon glyphicon-open"></span></button></a></div><divclass="col-md-1"><ahref="{% url 'edit' contact.pk %}"><buttontype="button"class="btn btn-info"><spanclass="glyphicon glyphicon-pencil"></span></button></a></div><divclass="col-md-1"><ahref="{% url 'delete' contact.pk %}"><buttontype="button"class="btn btn-danger"><spanclass="glyphicon glyphicon-trash"></span></button></a></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div>
    {% endfor %}
</div>
{% endblock %}

Next, open the crudapp/templates/create.html file and add:

{% load widget_tweaks %}
<!DOCTYPE html>
<html>
<head>
    <title>Posts</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <style type="text/css"></style>
</head>
<body>
    <div class="container-fluid">
        <div class="row">
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
            <div class="col-md-10 col-xs-10 col-sm-10">
                <br/>
                <h6 style="text-align:center;"><font color="red"> All fields are required</font></h6>
            </div>
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
        </div>
        <div class="row">
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
            <div class="col-md-10 col-xs-10 col-sm-10">
                <form method="post" novalidate>
                    {% csrf_token %}
                    {% for hidden_field in form.hidden_fields %}
                    {{ hidden_field }}
                    {% endfor %}
                    {% for field in form.visible_fields %}
                    <divclass="form-group">
                        {{ field.label_tag }}
                        {% render_field field class="form-control" %}
                        {% if field.help_text %}
                        <smallclass="form-text text-muted">{{ field.help_text }}</small>
                        {% endif %}
                    </div>
                    {% endfor %}
                    <buttontype="submit"class="btn btn-primary">post</button></form><br></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div></div><script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js"integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49"crossorigin="anonymous"></script><script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js"integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy"crossorigin="anonymous"></script></body></html>

Next, open the crudapp/templates/edit.html file and add:

{% load widget_tweaks %}
<!DOCTYPE html>
<html>
<head>
    <title>Edit Contact</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css" integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO" crossorigin="anonymous">
    <style type="text/css"></style>
</head>
<body>
    <div class="container-fluid">
        <div class="row">
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
            <div class="col-md-10 col-xs-10 col-sm-10">
                <br/>
                <h6 style="text-align:center;"><font color="red"> All fields are required</font></h6>
            </div>
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
        </div>
        <div class="row">
            <div class="col-md-1 col-xs-1 col-sm-1"></div>
            <div class="col-md-10 col-xs-10 col-sm-10">
                <form method="post" novalidate>
                {% csrf_token %}
                {% for hidden_field in form.hidden_fields %}
                {{ hidden_field }}
                {% endfor %}
                {% for field in form.visible_fields %}
                <divclass="form-group">
                    {{ field.label_tag }}
                    {% render_field field class="form-control" %}
                    {% if field.help_text %}
                    <smallclass="form-text text-muted">{{ field.help_text }}</small>
                    {% endif %}
                </div>
                {% endfor %}
                <buttontype="submit"class="btn btn-primary">submit</button></form><br></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div></div><script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js"integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49"crossorigin="anonymous"></script><script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js"integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy"crossorigin="anonymous"></script></body></html>

Next, open the crudapp/templates/confirm_delete.html file and add:

{% extends 'crudapp/base.html' %}
{% block content %}
<divclass="container"><divclass="row"></div><br/><divclass="row"><divclass="col-md-2 col-xs-2 col-sm-2"></div><divclass="col-md-10 col-xs-10 col-sm-10"><formmethod="post">
                {% csrf_token %}
                <divclass="form-row"><divclass="alert alert-warning">
                        Are you sure you want to delete {{ object }}?
                    </div></div><buttontype="submit"class="btn btn-danger"><spanclass="glyphicon glyphicon-trash"></span></button></form></div></div></div>
{% endblock %}

Django 3 Tutorial, Step 12 - Creating URLs

In this step, we'll create the urls to access our CRUD views.

Go to the urls.py file and update it as follows:

from django.contrib import admin
from django.urls import path
from crudapp import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('contacts/', views.IndexView.as_view(), name='index'),
    path('contacts/<int:pk>/', views.ContactDetailView.as_view(), name='detail'),
    path('contacts/edit/<int:pk>/', views.edit, name='edit'),
    path('contacts/create/', views.create, name='create'),
    path('contacts/delete/<int:pk>/', views.delete, name='delete'),
]

Django 3 Tutorial, Step 13 - Running the Local Development Server

In this step, we'll run the local development server for playing with our app without deploying it to the web.

Head back to your command-line interface and run the following command:

$ python manage.py runserver

Next, go to the http://localhost:8000/ address with a web browser.

Conclusion

In this django 3 tutorial, we have initialized a new django project, created and migrated a MySQL database, and built a simple CRUD interface.

IslandT: Return the word with the longest length within a string using Python


Simple challenge – eliminate all bugs from the supplied code so that the code runs and outputs the expected value. The output should be the length of the longest word, as a number. There will only be one ‘longest’ word.

Above is a question from CodeWars; we will create the Python function below to perform the task.

def find_longest(string):
    
    longest_list = string.split(' ')
    longest = len(longest_list.pop(0))
    for n in longest_list:
        if len(n) > longest:
            longest = len(n)
    return longest
  • First, the function above splits the string into a list of words.
  • Then it uses the length of the first word as the starting value and compares it to the remaining words within a for loop.
  • If the length of any word in the list is longer than the current value, that larger length is assigned to the ‘longest’ variable, replacing the previous value.
  • Finally, the longest length is returned (see the alternative sketch below).
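As a point of comparison, the same result can be obtained in one step with the built-in max() function and a key function; this is an alternative sketch, not part of the original solution:

def find_longest(string):
    # max() with key=len returns the longest word; len() of that word is the answer.
    return len(max(string.split(' '), key=len))

print(find_longest("eliminate all bugs from the supplied code"))  # 9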

This will be the last time we solve a CodeWars challenge, as from now on we will concentrate on creating a project in Python. My next project is a video editing program written in Python, so stay tuned!

Kushal Das: Creating password input widget in PyQt


One of the most common parts of writing any desktop tool that takes password input is having a widget that can show/hide the password text. In Qt, we can add a QAction to a QLineEdit to do this. The only thing to remember is that the icons for the QAction must be square in aspect ratio; otherwise, they look super bad.

The following code creates such a password input, and you can see it working at the GIF at the end of the blog post. I wrote this for the SecureDrop client project.

from PyQt5.QtWidgets import QLineEdit

# Note: load_icon() below is a helper from the SecureDrop client codebase that
# returns a QIcon for a bundled SVG file; it is not part of Qt itself.


class PasswordEdit(QLineEdit):
    """
    A LineEdit with icons to show/hide password entries
    """
    CSS = '''QLineEdit {
        border-radius: 0px;
        height: 30px;
        margin: 0px 0px 0px 0px;
    }
    '''

    def __init__(self, parent):
        self.parent = parent
        super().__init__(self.parent)

        # Set styles
        self.setStyleSheet(self.CSS)

        self.visibleIcon = load_icon("eye_visible.svg")
        self.hiddenIcon = load_icon("eye_hidden.svg")

        self.setEchoMode(QLineEdit.Password)
        self.togglepasswordAction = self.addAction(self.visibleIcon, QLineEdit.TrailingPosition)
        self.togglepasswordAction.triggered.connect(self.on_toggle_password_Action)
        self.password_shown = False

    def on_toggle_password_Action(self):
        if not self.password_shown:
            self.setEchoMode(QLineEdit.Normal)
            self.password_shown = True
            self.togglepasswordAction.setIcon(self.hiddenIcon)
        else:
            self.setEchoMode(QLineEdit.Password)
            self.password_shown = False
            self.togglepasswordAction.setIcon(self.visibleIcon)
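For completeness, here is a minimal usage sketch (not from the original post) that places the widget in a bare window; it assumes PyQt5 and that the two SVG icons plus the load_icon() helper are available on your side:

from PyQt5.QtWidgets import QApplication, QVBoxLayout, QWidget

app = QApplication([])

window = QWidget()
layout = QVBoxLayout(window)
layout.addWidget(PasswordEdit(window))  # the widget defined above

window.show()
app.exec_()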

Mike Driscoll: Getting Jenkins Jobs by Build State with Python


I have been working with Python and Jenkins a lot lately and recently needed to find a way to check the job’s status at the build level. I discovered the jenkinsapi package and played around with it to see if it would give me the ability to drill down to the build and resultset level within Jenkins.

In the builds that I run, there are X number of sub-jobs. Each of these sub-jobs can pass or fail. If one of them fails, the entire build is marked with the color yellow and tagged as “UNSTABLE”, which is failed in my book. I want a way to track which of these sub-jobs is failing and how often over a time period. Some of these jobs can be unstable because they access network resources, while others may have been broken by a recent commit to the code base.

I eventually came up with some code that helps me figure out some of this information. But before you can dive into the code, you will need to install a package.


Installing the Prerequisites

The jenkinsapi package is easy to install because it is pip-compatible. You can install it to your main Python installation or in a Python virtual environment by using the following command:

pip install jenkinsapi

You will also need to install requests, which is also pip-compatible:

pip install requests

These are the only packages you need. Now you can move on to the next section!

Querying Jenkins

The first step that you will want to accomplish is getting the jobs by their status. The standard statuses that you will find in Jenkins are SUCCESS, UNSTABLE or ABORTED.

Let’s write some code to find only the UNSTABLE jobs:

from jenkinsapi.jenkins import Jenkins
from jenkinsapi.custom_exceptions import NoBuildData
from requests import ConnectionError

def get_job_by_build_state(url, view_name, state='SUCCESS'):
    server = Jenkins(url)
    view_url = f'{url}/view/{view_name}/'
    view = server.get_view_by_url(view_url)
    jobs = view.get_job_dict()

    jobs_by_state = []

    for job in jobs:
        job_url = f'{url}/{job}'
        j = server.get_job(job)
        try:
            build = j.get_last_completed_build()
            status = build.get_status()
            if status == state:
                jobs_by_state.append(job)
        except NoBuildData:
            continue
        except ConnectionError:
            pass

    return jobs_by_state

if __name__ == '__main__':
    jobs = get_job_by_build_state(url='http://myJenkins:8080', view_name='VIEW_NAME',
                                  state='UNSTABLE')

Here you create an instance of Jenkins and assign it to server. Then you use get_view_by_url() to get the specified view name. The view is basically a set of associated jobs that you have set up. For example, you might create a group of jobs that does dev/ops type things and put them into a Utils view.

Once you have the view object, you can use get_job_dict() to get a dictionary of all the jobs in that view. Now that you have the dictionary, you can loop over it and get the individual jobs inside the view. You can get each job by calling the Jenkins object's get_job() method. Now that you have the job object, you can finally drill down to the build itself.

To prevent errors, I found that you should use get_last_completed_build() to get the last completed build. This is for the best because if you use get_build() and the build hasn't finished, the build object may not have the contents that you expect in it. Now that you have the build, you can use get_status() to get its status and compare it to the one that you passed in. If they match, then you add that job to jobs_by_state, which is a Python list.

You also catch a couple of errors that can happen. You probably won’t see NoBuildData unless the job was aborted or something really odd happens on your server. The ConnectionError exception happens when you try to connect to a URL that doesn’t exist or is offline.

At this point you should now have a list of the jobs filtered to the status that you asked for.

If you’d like to drill down further into sub-jobs within the job, then you need to call the build’s has_resultset() method to verify that there are results to inspect. Then you can do something like this:

resultset = build.get_resultset()
for item in resultset.items():
    # do something here

The resultset that is returned varies quite a bit depending on the job type, so you will need to parse the item tuple yourself to see if it contains the information you need.
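As a rough illustration of that parsing step, the sketch below collects the names of failing sub-jobs. The exact shape of each item and the status strings depend on the job type, so treat the attribute access here as an assumption to verify against your own resultsets:

def get_failed_subjobs(build):
    # Sketch: return the names of sub-jobs whose status looks like a failure.
    failed = []
    if not build.has_resultset():
        return failed
    for name, result in build.get_resultset().items():
        # Assumption: each result object exposes a status string such as
        # 'PASSED', 'FAILED', or 'REGRESSION'; inspect yours in a debugger first.
        if getattr(result, 'status', None) in ('FAILED', 'REGRESSION'):
            failed.append(name)
    return failed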


Wrapping Up

At this point, you should have enough information to start digging around in Jenkins' internals to get the information you need. I have used a variation of this script to extract information on failed builds, which helps me discover jobs that fail repeatedly sooner than I would have otherwise. The documentation for jenkinsapi is unfortunately not very detailed, so you will be spending a lot of time in the debugger trying to figure out how it works. However, it works pretty well overall once you figure it out.

The post Getting Jenkins Jobs by Build State with Python appeared first on The Mouse Vs. The Python.


Abhijeet Pal: Python Program To Reverse a Number


Problem Definition

Create a Python program to reverse a number in Python.

Solution

This article will show multiple solutions for reversing a number in Python.

Reversing a number mathematically

The algorithm below is used to reverse a number mathematically, with a time complexity of O(log n) where n is the input number.

Algorithm

Input: num
(1) Initialize rev_num = 0
(2) Loop while num > 0
    (a) Multiply rev_num by 10 and add the remainder of num divided by 10 to rev_num
    (b) Divide num by 10
(3) Return rev_num

Program

num = 12345
rev_num = 0
while num != 0:
    rev_num = rev_num * 10
    rev_num = rev_num + (num % 10)
    num = num // 10
print(rev_num)

Output

54321

Using the reversed() method

Python’s built-in reversed() method returns an iterator that accesses the given sequence in reverse order.

Program

# input
num = 1234
rev_iterator = reversed(str(num))
rev_num = "".join(rev_iterator)
print(rev_num)

Output

4321

Note that the reversed() method doesn’t accept an integer as a parameter, therefore the number is converted to a string. Since reversed() returns an iterator we need to join it …
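A third common approach, not covered in the excerpt above, is string slicing with a negative step; a minimal sketch:

num = 12345
rev_num = int(str(num)[::-1])  # reverse the digits as a string, then convert back to int
print(rev_num)  # 54321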

The post Python Program To Reverse a Number appeared first on Django Central.

Abhijeet Pal: Python Program to Calculate Power of a Number


Problem Definition

Create a Python program to take two numbers from the user, one being the base number and the other the exponent, then calculate the power.

Program

import math

base_number = float(input("Enter the base number"))
exponent = float(input("Enter the exponent"))
power = math.pow(base_number, exponent)
print("Power is =", power)

Output

Enter the base number2
Enter the exponent4
Power is = 16.0

The built-in math module provides a number of functions for mathematical operations. The pow() method takes a base number and exponent as parameters and returns the power. Since in Python there is always more than one way of achieving things, calculating the power with the exponentiation operator is also possible. The exponentiation operator x**y evaluates to the power.

Program

base_number = int(input("Enter the base number"))
exponent = int(input("Enter the exponent"))
power = base_number ** exponent
print("Result is =", power)

Output

Enter the base number2
Enter the exponent5
Result is = 32
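For completeness, Python also has a built-in pow() function that needs no import and, unlike math.pow(), keeps integer inputs as integers; a minimal sketch, not part of the original excerpt:

base_number = 2
exponent = 5
print(pow(base_number, exponent))  # 32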

The post Python Program to Calculate Power of a Number appeared first on Django Central.

Abhijeet Pal: Python Program to Find the Factors of a Number


A factor of a number is a whole number which exactly divides the number without leaving any remainder. For example, 3 is a factor of 9 because 3 divides 9 evenly leaving no remainder.

Problem

Create a Python program to find all the factors of a number.

Algorithm

Step 1: Take a number
Step 2: Loop over every number from 1 to the given number
Step 3: If the loop iterator evenly divides the provided number, i.e. number % i == 0, print it.

Program

number = 69
print("The factors of {} are,".format(number))
for i in range(1, number + 1):
    if number % i == 0:
        print(i)

Output

The factors of 69 are,
1
3
23
69

Print factors of a user-provided number

number = int(input("Enter a number "))
print("The factors of {} are,".format(number))
for i in range(1, number + 1):
    if number % i == 0:
        print(i)

Output

Enter a number 469
The factors of 469 are,
1
7
67
469
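Because the loop above tries every number up to n, a common refinement (not covered in the article excerpt) is to stop at the square root of n and record both divisors of each pair; a minimal sketch, assuming Python 3.8+ for math.isqrt():

import math

def factors(n):
    result = set()
    for i in range(1, math.isqrt(n) + 1):
        if n % i == 0:
            result.add(i)        # the smaller divisor
            result.add(n // i)   # its paired larger divisor
    return sorted(result)

print(factors(469))  # [1, 7, 67, 469]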

The post Python Program to Find the Factors of a Number appeared first on Django Central.

Abhijeet Pal: Python Programs to Create Pyramid and Patterns


In this article, we will go over different ways to generate pyramids and patterns in Python.

Half pyramid of asterisks

def half_pyramid(rows):
    for i in range(rows):
        print('*' * (i+1))

half_pyramid(6)

Output

*
**
***
****
*****
******

An alternate way to generate a half pyramid using nested loops in Python.

Program

def half_pyramid(rows):
    for i in range(rows):
        for j in range(i+1):
            print("*", end="")
        print("")

half_pyramid(6)

Output

*
**
***
****
*****
******

Half pyramid of X’s

def half_pyramid(rows):
    for i in range(rows):
        print('X' * (i+1))

half_pyramid(6)

Output

X
XX
XXX
XXXX
XXXXX
XXXXXX

Half pyramid of numbers

def half_pyramid(rows):
    for i in range(rows):
        for j in range(i + 1):
            print(j + 1, end="")
        print("")

half_pyramid(5)

Output

1
12
123
1234
12345

Generating a full pyramid of asterisks

def full_pyramid(rows):
    for i in range(rows):
        print(' '*(rows-i-1) + '*'*(2*i+1))

full_pyramid(6)

Output

     *
    ***
   *****
  *******
 *********
***********

Full pyramid of X’s

def full_pyramid(rows):
    for i in range(rows):
        print(' '*(rows-i-1) + 'X'*(2*i+1))

full_pyramid(6)

Output

     X
    XXX
   XXXXX
  XXXXXXX
 XXXXXXXXX
XXXXXXXXXXX

Reversed pyramid

def inverted_pyramid(rows):
    for i in reversed(range(rows)):
        print(' '*(rows-i-1) + '*'*(2*i+1))

inverted_pyramid(6)

Output …

The post Python Programs to Create Pyramid and Patterns appeared first on Django Central.

Real Python: Supercharge Your Classes With Python super()


While Python isn’t purely an object-oriented language, it’s flexible enough and powerful enough to allow you to build your applications using the object-oriented paradigm. One of the ways in which Python achieves this is by supporting inheritance, which it does with super().

By the end of this course, you’ll be able to:

  • Compose a class
  • Use super() to access parent methods
  • Understand single and multiple inheritance


Codementor: Quick Dive into Selenium with python

Dive into the world of browser automation with python.

Trey Hunner: Passing a function as an argument to another function in Python


One of the more hair-raising facts we learn in my introductory Python trainings is that you can pass functions into other functions. You can pass functions around because in Python, functions are objects.

You likely don’t need to know about this in your first week of using Python, but as you dive deeper into Python you’ll find that it can be quite convenient to understand how to pass a function into another function.

This is part 1 of what I expect to be a series on the various properties of “function objects”. This article focuses on what a new Python programmer should know and appreciate about the object-nature of Python’s functions.

    Functions can be referenced

    If you try to use a function without putting parentheses after it, Python won’t complain, but it also won’t do anything useful:

    >>> def greet():
    ...     print("Hello world!")
    ...
    >>> greet
    <function greet at 0x7ff246c6d9d0>

    This applies to methods as well (methods are functions which live on objects):

    >>> numbers = [1, 2, 3]
    >>> numbers.pop
    <built-in method pop of list object at 0x7ff246c76a80>

    Python is allowing us to refer to these function objects, the same way we might refer to a string, a number, or a range object:

    >>> "hello"
    'hello'
    >>> 2.5
    2.5
    >>> range(10)
    range(0, 10)

    Since we can refer to functions like any other object, we can point a variable to a function:

    >>> numbers = [2, 1, 3, 4, 7, 11, 18, 29]
    >>> gimme = numbers.pop

    That gimme variable now points to the pop method on our numbers list. So if we call gimme, it’ll do the same thing that calling numbers.pop would have done:

    >>> gimme()
    29
    >>> numbers
    [2, 1, 3, 4, 7, 11, 18]
    >>> gimme(0)
    2
    >>> numbers
    [1, 3, 4, 7, 11, 18]
    >>> gimme()
    18

    Note that we didn’t make a new function. We’ve just pointed the gimme variable name to the numbers.pop function:

    >>> gimme
    <built-in method pop of list object at 0x7ff246c76bc0>
    >>> numbers.pop
    <built-in method pop of list object at 0x7ff246c76bc0>

    You can even store functions inside data structures and then reference them later:

    >>> def square(n): return n**2
    ...
    >>> def cube(n): return n**3
    ...
    >>> operations = [square, cube]
    >>> numbers = [2, 1, 3, 4, 7, 11, 18, 29]
    >>> for i, n in enumerate(numbers):
    ...     action = operations[i % 2]
    ...     print(f"{action.__name__}({n}):", action(n))
    ...
    square(2): 4
    cube(1): 1
    square(3): 9
    cube(4): 64
    square(7): 49
    cube(11): 1331
    square(18): 324
    cube(29): 24389

    It’s not very common to take a function and give it another name or to store it inside a data structure, but Python allows us to do these things because functions can be passed around, just like any other object.

    Functions can be passed into other functions

    Functions, like any other object, can be passed as an argument to another function.

    For example we could define a function:

    >>> def greet(name="world"):
    ...     """Greet a person (or the whole world by default)."""
    ...     print(f"Hello {name}!")
    ...
    >>> greet("Trey")
    Hello Trey!

    And then pass it into the built-in help function to see what it does:

    >>> help(greet)
    Help on function greet in module __main__:

    greet(name='world')
        Greet a person (or the whole world by default).

    And we can pass the function into itself (yes this is weird), which converts it to a string here:

    >>> greet(greet)
    Hello <function greet at 0x7f93416be8b0>!

    There are actually quite a few functions built-in to Python that are specifically meant to accept other functions as arguments.

    The built-in filter function accepts two things as an argument: a function and an iterable.

    >>> help(filter)
     |  filter(function or None, iterable) --> filter object
     |
     |  Return an iterator yielding those items of iterable for which function(item)
     |  is true. If function is None, return the items that are true.

    The given iterable (list, tuple, string, etc.) is looped over and the given function is called on each item in that iterable: whenever the function returns True (or another truthy value) the item is included in the filter output.

    So if we pass filter an is_odd function (which returns True when given an odd number) and a list of numbers, we’ll get back all of the numbers we gave it which are odd.

    >>> numbers = [2, 1, 3, 4, 7, 11, 18, 29]
    >>> def is_odd(n): return n % 2 == 1
    ...
    >>> filter(is_odd, numbers)
    <filter object at 0x7ff246c8dc40>
    >>> list(filter(is_odd, numbers))
    [1, 3, 7, 11, 29]

    The object returned from filter is a lazy iterator so we needed to convert it to a list to actually see its output.

    Since functions can be passed into functions, that also means that functions can accept another function as an argument. The filter function assumes its first argument is a function. You can think of the filter function as pretty much the same as this function:

    def filter(predicate, iterable):
        return (
            item
            for item in iterable
            if predicate(item)
        )

    This function expects the predicate argument to be a function (technically it could be any callable). When we call that function (with predicate(item)), we pass a single argument to it and then check the truthiness of its return value.

    Lambda functions are an example of this

    A lambda expression is a special syntax in Python for creating an anonymous function. When you evaluate a lambda expression the object you get back is called a lambda function.

    >>> is_odd = lambda n: n % 2 == 1
    >>> is_odd(3)
    True
    >>> is_odd(4)
    False

    Lambda functions are pretty much just like regular Python functions, with a few caveats.

    Unlike other functions, lambda functions don’t have a name (their name shows up as <lambda>). They also can’t have docstrings and they can only contain a single Python expression.

    >>> add = lambda x, y: x + y
    >>> add(2, 3)
    5
    >>> add
    <function <lambda> at 0x7ff244852f70>
    >>> add.__doc__

    You can think of a lambda expression as a shortcut for making a function which will evaluate a single Python expression and return the result of that expression.

    So defining a lambda expression doesn’t actually evaluate that expression: it returns a function that can evaluate that expression later.

    >>> greet = lambda name="world": print(f"Hello {name}")
    >>> greet("Trey")
    Hello Trey
    >>> greet()
    Hello world

    I’d like to note that all three of the above examples of lambda are poor examples. If you want a variable name to point to a function object that you can use later, you should use def to define a function: that’s the usual way to define a function.

    >>> def is_odd(n): return n % 2 == 1
    ...
    >>> def add(x, y): return x + y
    ...
    >>> def greet(name="world"): print(f"Hello {name}")
    ...

    Lambda expressions are for when we’d like to define a function and pass it into another function immediately.

    For example here we’re using filter to get even numbers, but we’re using a lambda expression so we don’t have to define an is_even function before we use it:

    >>> numbers
    [2, 1, 3, 4, 7, 11, 18, 29]
    >>> list(filter(lambda n: n % 2 == 0, numbers))
    [2, 4, 18]

    This is the most appropriate use of lambda expressions: passing a function into another function while defining that passed function all on one line of code.

    As I’ve written about in Overusing lambda expressions, I’m not a fan of Python’s lambda expression syntax. Whether or not you like this syntax, you should know that this syntax is just a shortcut for creating a function.

    Whenever you see lambda expressions, keep in mind that:

    1. A lambda expression is a special syntax for creating a function and passing it to another function all on one line of code
    2. Lambda functions are just like all other function objects: neither is more special than the other and both can be passed around

    All functions in Python can be passed as an argument to another function (that just happens to be the sole purpose of lambda functions).

    A common example: key functions

    Besides the built-in filter function, where will you ever see a function passed into another function? Probably the most common place you’ll see this in Python itself is with a key function.

    It’s a common convention for functions which accept an iterable-to-be-sorted/ordered to also accept a named argument called key. This key argument should be a function or another callable.

    The sorted, min, and max functions all follow this convention of accepting a key function:

    >>> fruits = ['kumquat', 'Cherimoya', 'Loquat', 'longan', 'jujube']
    >>> def normalize_case(s): return s.casefold()
    ...
    >>> sorted(fruits, key=normalize_case)
    ['Cherimoya', 'jujube', 'kumquat', 'longan', 'Loquat']
    >>> min(fruits, key=normalize_case)
    'Cherimoya'
    >>> max(fruits, key=normalize_case)
    'Loquat'

    That key function is called for each value in the given iterable and the return value is used to order/sort each of the iterable items. You can think of this key function as computing a comparison key for each item in the iterable.

    In the above example our comparison key returns a lowercased string, so each string is compared by its lowercased version (which results in a case-insensitive ordering).

    We used a normalize_case function to do this, but the same thing could be done using str.casefold:

    >>> fruits = ['kumquat', 'Cherimoya', 'Loquat', 'longan', 'jujube']
    >>> sorted(fruits, key=str.casefold)
    ['Cherimoya', 'jujube', 'kumquat', 'longan', 'Loquat']

    Note: That str.casefold trick is a bit odd if you aren’t familiar with how classes work. Classes store the unbound methods that will accept an instance of that class when called. We normally type my_string.casefold() but str.casefold(my_string) is what Python translates that to. That’s a story for another time.

    Here we’re finding the string with the most letters in it:

    >>> max(fruits, key=len)
    'Cherimoya'

    If there are multiple maximums or minimums, the earliest one wins (that’s how min/max work):

    >>> fruits = ['kumquat', 'Cherimoya', 'Loquat', 'longan', 'jujube']
    >>> min(fruits, key=len)
    'Loquat'
    >>> sorted(fruits, key=len)
    ['Loquat', 'longan', 'jujube', 'kumquat', 'Cherimoya']

    Here’s a function which will return a 2-item tuple containing the length of a given string and the case-normalized version of that string:

    def length_and_alphabetical(string):
        """Return sort key: length first, then case-normalized string."""
        return (len(string), string.casefold())

    We could pass this length_and_alphabetical function as the key argument to sorted to sort our strings by their length first and then by their case-normalized representation:

    >>> fruits = ['kumquat', 'Cherimoya', 'Loquat', 'longan', 'jujube']
    >>> fruits_by_length = sorted(fruits, key=length_and_alphabetical)
    >>> fruits_by_length
    ['jujube', 'longan', 'Loquat', 'kumquat', 'Cherimoya']

    This relies on the fact that Python’s ordering operators do deep comparisons.
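    For example, 2-item tuples are ordered by their first items and fall back to the second items on a tie; a quick illustration (not from the original article):

    >>> (6, 'zzz') < (7, 'aaa')   # first items differ, so they decide the ordering
    True
    >>> (6, 'longan') < (6, 'loquat')   # tie on length, so the strings are compared
    True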

    Other examples of passing a function as an argument

    The key argument accepted by sorted, min, and max is just one common example of passing functions into functions.

    Two more function-accepting Python built-ins are map and filter.

    We’ve already seen that filter will filter our list based on a given function’s return value.

    >>> numbers
    [2, 1, 3, 4, 7, 11, 18, 29]
    >>> def is_odd(n): return n % 2 == 1
    ...
    >>> list(filter(is_odd, numbers))
    [1, 3, 7, 11, 29]

    The map function will call the given function on each item in the given iterable and use the result of that function call as the new item:

    >>> list(map(is_odd, numbers))
    [False, True, True, False, True, True, False, True]

    For example here we’re converting numbers to strings and squaring numbers:

    >>> list(map(str, numbers))
    ['2', '1', '3', '4', '7', '11', '18', '29']
    >>> list(map(lambda n: n**2, numbers))
    [4, 1, 9, 16, 49, 121, 324, 841]

    Note: as I noted in my article on overusing lambda, I personally prefer to use generator expressions instead of the map and filter functions.

    Similar to map and filter, there are also takewhile and dropwhile from the itertools module. The first one is like filter except it stops once it finds a value for which the predicate function is false. The second one does the opposite: it only includes values after the predicate function has become false.

    >>> from itertools import takewhile, dropwhile
    >>> colors = ['red', 'green', 'orange', 'purple', 'pink', 'blue']
    >>> def short_length(word): return len(word) < 6
    ...
    >>> list(takewhile(short_length, colors))
    ['red', 'green']
    >>> list(dropwhile(short_length, colors))
    ['orange', 'purple', 'pink', 'blue']

    And there’s functools.reduce and itertools.accumulate, which both call a 2-argument function to accumulate values as they loop:

    >>> from functools import reduce
    >>> from itertools import accumulate
    >>> numbers = [2, 1, 3, 4, 7]
    >>> def product(x, y): return x * y
    ...
    >>> reduce(product, numbers)
    168
    >>> list(accumulate(numbers, product))
    [2, 2, 6, 24, 168]

    The defaultdict class in the collections module is another example. The defaultdict class creates dictionary-like objects which will never raise a KeyError when a missing key is accessed, but will instead add a new value to the dictionary automatically.

    >>> from collections import defaultdict
    >>> counts = defaultdict(int)
    >>> counts['jujubes']
    0
    >>> counts
    defaultdict(<class 'int'>, {'jujubes': 0})

    This defaultdict class accepts a callable (function or class) that will be called to create a default value whenever a missing key is accessed.

    The above code worked because int returns 0 when called with no arguments:

    >>> int()
    0

    Here the default value is list, which returns a new list when called with no arguments.

    >>> things_by_color = defaultdict(list)
    >>> things_by_color['purple'].append('socks')
    >>> things_by_color['purple'].append('shoes')
    >>> things_by_color
    defaultdict(<class 'list'>, {'purple': ['socks', 'shoes']})

    The partial function in the functools module is another example. partial accepts a function and any number of arguments and returns a new function (technically it returns a callable object).

    Here’s an example of partial used to “bind” the sep keyword argument to the print function:

    >>> from functools import partial
    >>> print_each = partial(print, sep='\n')

    The print_each function returned now does the same thing as if print was called with sep='\n':

    >>> print(1, 2, 3)
    1 2 3
    >>> print(1, 2, 3, sep='\n')
    1
    2
    3
    >>> print_each(1, 2, 3)
    1
    2
    3

    You’ll also find functions-that-accept-functions in third-party libraries, like in Django, and in numpy. Anytime you see a class or a function with documentation stating that one of its arguments should be a callable or a callable object, that means “you could pass in a function here”.
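    As one concrete illustration from outside the standard library, numpy's fromfunction builds an array by calling the function you pass in with each coordinate; a small sketch, assuming numpy is installed (this example is not from the original article):

    >>> import numpy as np
    >>> np.fromfunction(lambda i, j: i + j, (3, 3))
    array([[0., 1., 2.],
           [1., 2., 3.],
           [2., 3., 4.]])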

    A topic I’m skipping over: nested functions

    Python also supports nested functions (functions defined inside of other functions). Nested functions power Python’s decorator syntax.

    I’m not going to discuss nested functions in this article because nested functions warrant exploration of non-local variables, closures, and other weird corners of Python that you don’t need to know when you’re first getting started with treating functions as objects.

    I plan to write a follow-up article on this topic and link to it here later. In the meantime, if you’re interested in nested functions in Python, a search for higher order functions in Python may be helpful.

    Treating functions as objects is normal

    Python has first-class functions, which means:

    1. You can assign functions to variables
    2. You can store functions in lists, dictionaries, or other data structures
    3. You can pass functions into other functions
    4. You can write functions that return functions

    It might seem odd to treat functions as objects, but it’s not that unusual in Python. By my count, about 15% of the Python built-ins are meant to accept functions as arguments (min, max, sorted, map, filter, iter, property, classmethod, staticmethod, callable).

    The most important uses of Python’s first-class functions are:

    1. Passing a key function to the built-in sorted, min, and max functions
    2. Passing functions into looping helpers like filter and itertools.dropwhile
    3. Passing a “default-value generating factory function” to classes like defaultdict
    4. “Partially-evaluating” functions by passing them into functools.partial

    This topic goes much deeper than what I’ve discussed here, but until you find yourself writing decorator functions, you probably don’t need to explore this topic any further.


    Jaime Buelta: Interviewed about microservices

    I got interviewed about microservices and talked a bit about my last book, Hands-on Docker for Microservices with Python. It was an interesting view on what the most important areas of microservices are and when migrating from a monolith architecture is a good idea. We also talked about related tools like Python, Docker or Kubernetes. Check… Read More

    Will Kahn-Greene: Switching from pyup to dependabot


    Switching from pyup to dependabot

    I maintain a bunch of Python-based projects including some major projects like Crash Stats, Mozilla Symbols Server, and Mozilla Location Services. In order to keep up with dependency updates, we used pyup to monitor dependencies in those projects and create GitHub pull requests for updates.

    pyup was pretty nice. It would create a single pull request with many dependency updates in it. I could then review the details, wait for CI to test everything, make adjustments as necessary, and then land the pull request and go do other things.

    Starting in October of 2019, pyup stopped doing monthly updates. A co-worker of mine tried to contact them to no avail. I don't know what happened. I got tired of waiting for it to start working again.

    Since my projects are all on GitHub, we had already switched to GitHub security alerts. Given that, I decided it was time to switch from pyup to dependabot (also owned by GitHub).

    Switching from pyup to dependabot

    I had to do a bunch of projects, so I ended up with a process along these lines:

    1. Remove projects from pyup.

      All my projects are either in mozilla or mozilla-services organizations on GitHub.

      We had a separate service account configure pyup, so I'm not able to make changes to pyup myself.

      I had to ask Greg to remove my projects from pyup.

      I wouldn't suggest proceeding until your project has been removed from pyup. Otherwise, it's possible you'll get PRs from pyup and dependabot for the same updates.

    2. Add dependabot configuration to repo.

      Then I added the required dependabot configuration to my repository and removed the pyup configuration.

      I used these resources:

      I created a pull request with these changes, reviewed it, and landed it.

    3. Enable dependabot.

      For some reason, I couldn't enable dependabot for my projects. I had to ask Greg who I think asked Hal to enable dependabot for my projects.

      Once this was done, then dependabot created a plethora of pull requests.

    While there are Mozilla-specific bits in here, it's probably generally helpful.

    Dealing with incoming pull requests

    dependabot isn't as nice as pyup was. It can only update one dependency per PR. That stinks for a bunch of reasons:

    1. working through 30 PRs is extremely time consuming

    2. every time you finish up work on one PR, it triggers dependabot to update the others and that triggers email notifications, CI builds, and a bunch of spam and resource usage

    3. dependencies often depend on each other and need to get updated as a group

    Since we hadn't been keeping up with Python dependencies, we ended up with between 20 and 60 pull requests to deal with per repository.

    For Antenna, I rebased each PR, reviewed it, and merged it by hand. That took a day to do. It sucked. I can't imagine doing this four times every month.

    While working on PRs for Socorro, I hit a case where I needed to update multiple dependencies at the same time. I decided to write a tool that combined pull requests.

Thus was born paul-mclendahand, a tool for combining pull requests. Using it, I worked through 20 pull requests for Tecken in about an hour. This saves me tons of time!

    My process goes like this:

    1. create a new branch on my laptop based off of master

    2. list all open pull requests by running pmac listprs

    3. make a list of pull requests to combine into it

    4. for each pull request, I:

      1. run pmac add PR

      2. resolve any cherry-pick conflicts

      3. (optional) rebuild my project and run tests

    5. push the new branch to GitHub

    6. create a pull request

    7. run pmac prmsg and copy-and-paste the output as the pull request description

    I can then review the pull request. It has links to the other pull requests and the data that dependabot puts together for each update. I can rebase, add additional commits, etc.

    When I'm done, I merge it and that's it!

    paul-mclendahand v1.0.0

    I released paul-mclendahand 1.0.0!

    Install it with pipx:

    pipx install paul-mclendahand

    Install it with pip:

    pip install paul-mclendahand

    It doesn't just combine pull requests from dependabot--it's general and can work on any pull requests.

    If you find any issues, please report them in the issue tracker.

    I hope this helps you!

    Python Anywhere: The PythonAnywhere newsletter, January 2020


    So, we have managed to break another record for our longest period ever between two monthly newsletters. It has been sixteen busy months between September 2018 and now, so we have made 2019 an official Year Without a Newsletter.

    Happy New Year, and a warm welcome to the January 2020 PythonAnywhere newsletter. Hooray! Here is what has happened since our last one.

    Python 3.8 now available

    We recently added support for Python 3.8. If you signed up after 4 December 2019, you'll have it available on your account -- you can use it just like any other Python version.

    If you signed up before then, it's a little more complicated, but we can update your account to provide it -- more information here.

    Always-on tasks

    Always-on tasks are a feature we rolled out for paid accounts in our October 2018 update. Essentially, you specify a program and we keep it running for you all the time. If it exits for any reason, we'll automatically restart it -- even in extreme circumstances, for instance if the server that it's running on has a hardware failure, it will fail over to another working machine quickly.

    Let's Encrypt certificates with automatic renewal

    You can now get a free, automated HTTPS certificate for your custom domain using Let's Encrypt. Previously, there was all sorts of tedious manual mucking around with dehydrated to get that free cert. And you won't need to remember to renew the certificate anymore either! (or as was the case for some of our users, setting up a scheduled task to auto-renew your certificate!) Now this will all happen behind the scenes automatically.

    Deployment of eu.pythonanywhere.com and migration option

    Back in February 2019, we announced eu.pythonanywhere.com. It's a completely separate version of our site, with all of the computers and storage hosted in Frankfurt, rather than in the US for www.pythonanywhere.com.

    In November 2019 we made a migration system available to our users. It allows us to move accounts from the US system to the EU one with minimal downtime. If you have an account on www.pythonanywhere.com and would like it to be moved to eu.pythonanywhere.com, just let us know via email (support@pythonanywhere.com).

    Tutorials

    We published some tutorials and HOW-TOs:

    PythonAnywhere metrics

    As in the last newsletter we wanted to share some of the metrics:

    • Web requests: we're processing on average about 375 hits/second through our systems (across all websites) with spikes at busy times of up to 450/second.
    • That's across about 49,000 websites. Of course the number of hits sites get is spread over a long tail distribution -- some of those sites are ones that people set up as part of tutorials, so they only get hits from their owners, while on the other hand the busiest websites might be processing 40 hits/second at their peak times
    • There are over 9,000 scheduled and always-on tasks.
    • Our live system currently comprises 69 separate machines on Amazon AWS in the US cluster and 17 in the EU one.

    New modules

    Although you can install Python packages on PythonAnywhere yourself, we like to make sure that we have plenty of batteries included.

    Everything got updated for the new system image that provides access to Python 3.8, so if you're using that image, you should have the most recent (or at least a very recent) version of everything :-)

    New whitelisted sites

    Paying PythonAnywhere customers get unrestricted Internet access, but if you're a free PythonAnywhere user, you may have hit problems when writing code that tries to access sites elsewhere on the Internet. We have to restrict you to sites on a whitelist to stop hackers from creating dummy accounts to hide their identities when breaking into other people's websites.

    But we really do encourage you to suggest new sites that should be on the whitelist. Our rule is, if it's got an official public API, which means that the site's owners are encouraging automated access to their server, then we'll whitelist it. Just drop us a line with a link to the API docs.

    We keep adding new sites to the list every day.

    And now for something completely different

You might not have noticed, but in August 2019 a Florida man hand-captured a Burmese python measuring 17 feet, 9 inches.

    PyCoder’s Weekly: Issue #403 (Jan. 14, 2020)


    #403 – JANUARY 14, 2020
    View in Browser »



    A coverage.py Debugging Story

    Ned was getting reports for a mysterious disk I/O bug in the latest coverage.py release and asked the community for help. Read the crowd-sourced diagnosis on Hacker News and Ned’s follow-up post next. What a journey…
    NED BATCHELDER

    The “No Code” Delusion

“2020 is going to be the year of ‘no code’: the movement that says you can write business logic and even entire applications without having the training of a software developer. I empathise with people doing this, and I think some of the ‘no code’ tools are great. But I also think it’s wrong at heart.”
ALEX HUDSON • opinion

    Python Developers Are in Demand on Vettery


    Vettery is an online hiring marketplace that’s changing the way people hire and get hired. Ready for a bold career move? Make a free profile, name your salary, and connect with hiring managers from top employers today →
VETTERY • sponsor

    How Python Implements Super Long Integers?

    “Python must be doing something beautiful internally to support super long integers and today we find out what’s under the hood. The article goes in-depth to explain design, storage, and operations on super long integers as implemented by Python.”
    ARPIT BHAYANI
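As a quick illustration of the behaviour the article digs into (this snippet is just a demonstration, not taken from the article), Python integers simply keep growing as needed:

import sys

small = 2 ** 10
huge = 2 ** 1000          # far beyond any machine word size

print(huge % 7)           # arithmetic still works on arbitrarily large values
print(len(str(huge)))     # 302 decimal digits
print(sys.getsizeof(small), sys.getsizeof(huge))  # on CPython, storage grows with magnitude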

    Python GUI Programming Learning Path

    Does your Python program need a Graphical User Interface (GUI)? With this free learning path you’ll develop your Python GUI programming skills from scratch. Covers Tkinter, PyQt, wxPython, and Kivy.
    REAL PYTHON

    Mercurial’s Journey to and Reflections on Python 3

    Lessons learned from Mercurial’s Python 3 porting effort and a more opinionated commentary of the transition to Python 3 and the Python language ecosystem as a whole. A great read about the mechanics of porting a large Python project to Python 3.
    GREGORY SZORC

    Supercharge Your Python OOP Code With super()

    How to leverage single and multiple inheritance in your object-oriented Python code using the built-in super() function.
REAL PYTHON • video

    What I Learned Going From Prison to Python

    How open source programming can offer opportunities after incarceration.
    SHADEED WALLACE-STEPTER

    Discussions

    Python Jobs

    Python Web Developer (Remote)

    Premiere Digital

    Python Tutorial Editor (Remote)

    Real Python

    Software Engineer (Bristol, UK)

    Envelop Risk

    Database Administrator (PostgreSQL & Python) (Remote)

    CyberCoders

    More Python Jobs >>>

    Articles & Tutorials

    Logistic Regression in Python

    In this step-by-step tutorial, you’ll get started with logistic regression in Python. Classification is one of the most important areas of machine learning, and logistic regression is one of its basic methods. You’ll learn how to create, evaluate, and apply a model to make predictions.
    REAL PYTHON

    Redis Server-Assisted Client-Side Caching in Python

    Server-assisted client-side caching is a new capability added in Redis version 6. It is intended to assist the management of a local cache by having the server send invalidation notifications. The server tracks the keys accessed by a client and notifies the client when these change.
    ITAMAR HABER

    Become a Python Guru With PyCharm


    PyCharm is the Python IDE for Professional Developers by JetBrains providing a complete set of tools for productive Python, Web and scientific development. Be more productive and save time while PyCharm takes care of the routine →
JETBRAINS • sponsor

    Exploring HTTPS With Python

    In this tutorial, you’ll gain a working knowledge of the various factors that combine to keep communications over the Internet safe. You’ll see concrete examples of how to keep information secure and use cryptography to build your own Python HTTPS application.
    REAL PYTHON
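As a taste of the territory the tutorial covers, the standard library’s ssl module is enough to open a TLS connection and inspect the negotiated protocol and certificate (the host below is just an example):

import socket
import ssl

hostname = "www.python.org"   # example host; any HTTPS site works
context = ssl.create_default_context()

with socket.create_connection((hostname, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=hostname) as tls_sock:
        print(tls_sock.version())                 # e.g. 'TLSv1.3'
        print(tls_sock.getpeercert()["subject"])  # who the certificate was issued to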

    Embedding Bokeh in a Script

    “I really wanted to have a self-contained script that would launch Bokeh as part of its operation, rather than remembering which command line options I needed to specify.”
    JIM ANDERSON

    Developing and Testing an Asynchronous API With FastAPI and Pytest

    This tutorial looks at how to develop and test an asynchronous API with FastAPI, Postgres, Pytest, and Docker using Test-Driven Development (TDD).
TESTDRIVEN.IO • Shared by Michael Herman
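For a flavour of the stack, a minimal async endpoint plus a test might look like this single-file sketch (the route and payload are made up; the tutorial itself splits app and tests into modules and adds Postgres and Docker on top):

from fastapi import FastAPI
from fastapi.testclient import TestClient

app = FastAPI()

@app.get("/ping")
async def ping():
    # An async route handler; FastAPI runs it on the event loop
    return {"ping": "pong"}

# In a real project this would live in a separate test module, run by pytest
client = TestClient(app)

def test_ping():
    response = client.get("/ping")
    assert response.status_code == 200
    assert response.json() == {"ping": "pong"}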

    Running Python in the Linux Kernel

    “This article will talk about a cool project I’ve worked on recently — a full Python interpreter running inside the Linux kernel”
    YONATAN GOLDSCHMIDT

    Publish a Static Website in a Day With MkDocs and Netlify

    A Pythonista’s (almost) no-code solution to building a website with the Python-based MkDocs static site generator.
SEAN STEWART • Shared by Sean Stewart

    Quickly & Easily Convert HTML Documents to PDF With the PDFShift API

    Stop worrying about missing CSS3 features, library updates, or badly-rendered documents, and start focusing on what matters. Convert your HTML documents to PDF via a simple POST HTTP request to PDFShift’s high-fidelity, up-to-date, and very fast API.
PDFSHIFT • sponsor

    From Browser to Django

    What happens from when a browser makes a request to how Django receives the request and sends back a response.
MATT LAYMAN • Shared by Matt Layman

    Projects & Code

    Events

    MadPUG

    January 16, 2020
    MEETUP.COM

    BangPypers

    January 18, 2020
    MEETUP.COM


    Happy Pythoning!
    This was PyCoder’s Weekly Issue #403.
    View in Browser »


    [ Subscribe to 🐍 PyCoder’s Weekly 💌 – Get the best Python news, articles, and tutorials delivered to your inbox once a week >> Click here to learn more ]

    Ahmed Bouchefra: Django 3 Tutorial & CRUD Example with MySQL and Bootstrap


    Django 3 is released with full async support! In this tutorial, we’ll see by example how to create a CRUD application from scratch and step by step. We’ll see how to configure a MySQL database, enable the admin interface, and create the django views.

    We’ll be using Bootstrap 4 for styling.

    You’ll learn how to:

    • Implement CRUD operations,
    • Configure and access a MySQL database,
    • Create django views, templates and urls,
    • Style the UI with Bootstrap 4

    Django 3 Features

    Django 3 comes with many new features such as:

    • MariaDB support: Django now officially supports MariaDB 10.1+. You can use MariaDB via the MySQL backend,
    • ASGI support for async programming,
    • Django 3.0 provides support for running as an ASGI application, making Django fully async-capable
    • Exclusion constraints on PostgreSQL: Django 3.0 adds a new ExclusionConstraint class which adds exclusion constraints on PostgreSQL, etc.

    Prerequisites

    Let’s start with the prerequisites for this tutorial. In order to follow the tutorial step by step, you’ll need a few requirements, such as:

    • Basic knowledge of Python,
    • Working knowledge of Django (django-admin.py and manage.py),
    • A recent version of Python 3 installed on your system (Django 3 requires Python 3.6 or later),
    • MySQL database installed on your system.

    We will be using pip and venv which are bundled as modules in recent versions of Python so you don’t actually need to install them unless you are working with old versions.

If you are ready, let’s get started!

    Django 3 Tutorial, Step 1 - Creating a MySQL Database

    In this step, we’ll create a mysql database for storing our application data.

    Open a new command-line interface and run the mysql client as follows:

    $ mysql -u root -p

    You’ll be prompted for your MySQL password, enter it and press Enter.

    Next, create a database using the following SQL statement:

    mysql> create database mydb;

    We now have an empty mysql database!

    Django 3 Tutorial, Step 2 - Initializing a New Virtual Environment

    In this step, we’ll initialize a new virtual environment for installing our project packages in separation of the system-wide packages.

    Head back to your command-line interface and run the following command:

    $ python3 -m venv .env
    

    Next, activate your virtual environment using the following command:

    $ source .env/bin/activate
    

At this point of our tutorial, we’ve created a MySQL database for persisting data and a virtual environment for installing the project packages.

    Django 3 Tutorial, Step 3 - Installing Django and MySQL Client

    In this step, we’ll install django and mysql client from PyPI using pip in our activated virtual environment.

    Head back to your command-line interface and run the following command to install the django package:

    $ pip install django
    

    At the time of writing this tutorial, django-3.0.2 is installed.

    You will also need to install the mysql client for Python using pip:

    $ pip install mysqlclient
    

    Django 3 Tutorial, Step 4 - Initializing a New Project

    In this step, we’ll initialize a new django project using the django-admin.

    Head back to your command-line interface and run the following command:

    $ django-admin startproject djangoCrudExample
    

    Next, open the settings.py file and update the database settings to configure the mydb database:

DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.mysql',
        'NAME': 'mydb',
        'USER': 'root',
        'PASSWORD': '<YOUR_DB_PASSWORD>',
        'HOST': 'localhost',
        'PORT': '3306',
    }
}

    Next, migrate the database using the following commands:

    $ cd djangoCrudExample
    $ python3 manage.py migrate
    

    You’ll get a similar output:

    Operations to perform:
      Apply all migrations: admin, auth, contenttypes, sessions
    Running migrations:
      Applying contenttypes.0001_initial... OK
      Applying auth.0001_initial... OK
      Applying admin.0001_initial... OK
      Applying admin.0002_logentry_remove_auto_add... OK
      Applying admin.0003_logentry_add_action_flag_choices... OK
      Applying contenttypes.0002_remove_content_type_name... OK
      Applying auth.0002_alter_permission_name_max_length... OK
      Applying auth.0003_alter_user_email_max_length... OK
      Applying auth.0004_alter_user_username_opts... OK
      Applying auth.0005_alter_user_last_login_null... OK
      Applying auth.0006_require_contenttypes_0002... OK
      Applying auth.0007_alter_validators_add_error_messages... OK
      Applying auth.0008_alter_user_username_max_length... OK
      Applying auth.0009_alter_user_last_name_max_length... OK
      Applying auth.0010_alter_group_name_max_length... OK
      Applying auth.0011_update_proxy_permissions... OK
      Applying sessions.0001_initial... OK
    

This simply applies a set of built-in Django migrations to create the database tables necessary for Django to work.

    Django 3 Tutorial, Step 5 - Installing django-widget-tweaks

    In this step, we’ll install django-widget-tweaks in our virtual environment. Head back to your command-line interface and run the following command:

$ pip install django-widget-tweaks
    

    Next, open the settings.py file and add the application to the installed apps:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'widget_tweaks',
]

    Django 3 Tutorial, Step 6 - Creating an Admin User

    In this step, we’ll create an admin user that will allow us to access the admin interface of our app using the following command:

    $ python manage.py createsuperuser
    

    Provide the desired username, email and password when prompted:

    Username (leave blank to use 'ahmed'): 
    Email address: ahmed@gmail.com
    Password: 
    Password (again): 
    Superuser created successfully.
    

    Django 3 Tutorial, Step 7 - Creating a Django Application

    In this step, we’ll create a django application.

    Head back to your command-line interface, and run the following command:

    $ python manage.py startapp crudapp
    

    Next, you need to add it in the settings.py file as follows:

INSTALLED_APPS = [
    'django.contrib.admin',
    'django.contrib.auth',
    'django.contrib.contenttypes',
    'django.contrib.sessions',
    'django.contrib.messages',
    'django.contrib.staticfiles',
    'widget_tweaks',
    'crudapp',
]

    Django 3 Tutorial, Step 8 - Creating the Model(s)

In this step, we’ll create the database model for storing contacts.

    Open the crudapp/models.py file and add the following code:

from django.db import models


class Contact(models.Model):
    firstName = models.CharField("First name", max_length=255, blank=True, null=True)
    lastName = models.CharField("Last name", max_length=255, blank=True, null=True)
    email = models.EmailField()
    phone = models.CharField(max_length=20, blank=True, null=True)
    address = models.TextField(blank=True, null=True)
    description = models.TextField(blank=True, null=True)
    createdAt = models.DateTimeField("Created At", auto_now_add=True)

    def __str__(self):
        return self.firstName

After creating this model, you need to create migrations using the following command:

    $ python manage.py makemigrations
    
    

    You should get a similar output:

      crudapp/migrations/0001_initial.py
        - Create model Contact
    
    

    Next, you need to migrate your database using the following command:

    $ python manage.py migrate
    

    You should get a similar output:

      Applying crudapp.0001_initial... OK
    

    Django 3 Tutorial, Step 9 - Creating a Form

    In this step, we’ll create a form for creating a contact.

    In the crudapp folder, create a forms.py file and add the following code:

from django import forms
from .models import Contact


class ContactForm(forms.ModelForm):
    class Meta:
        model = Contact
        fields = "__all__"

We import the Contact model from the models.py file. Then we create a ContactForm class, subclassing Django’s ModelForm from the django.forms package and specifying the model we want to use. We also specify that the form should use all of the fields of the Contact model, which makes it possible to display those fields in our templates.
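To get a feel for what the form gives us, you can try it out in the Django shell (python manage.py shell); the field values below are made up:

from crudapp.forms import ContactForm

# Bind the form to some example POST-style data
form = ContactForm(data={
    "firstName": "Jane",
    "lastName": "Doe",
    "email": "jane@example.com",
    "phone": "555-0100",
    "address": "1 Example Street",
    "description": "A test contact",
})

if form.is_valid():
    contact = form.save()            # creates and returns a Contact instance
    print(contact.pk, contact.firstName)
else:
    print(form.errors)               # field-by-field validation messages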

    Django 3 Tutorial, Step 10 - Creating the Views

    In this step, we’ll create the views for performing the CRUD operations.

    Open the crudapp/views.py file and add:

from django.shortcuts import render, redirect, get_object_or_404
from .models import Contact
from .forms import ContactForm
from django.views.generic import ListView, DetailView

    Next, add:

class IndexView(ListView):
    template_name = 'crudapp/index.html'
    context_object_name = 'contact_list'

    def get_queryset(self):
        return Contact.objects.all()


class ContactDetailView(DetailView):
    model = Contact
    template_name = 'crudapp/contact-detail.html'

    Next, add:

def create(request):
    if request.method == 'POST':
        form = ContactForm(request.POST)
        if form.is_valid():
            form.save()
            return redirect('index')
    form = ContactForm()
    return render(request, 'crudapp/create.html', {'form': form})


def edit(request, pk, template_name='crudapp/edit.html'):
    contact = get_object_or_404(Contact, pk=pk)
    form = ContactForm(request.POST or None, instance=contact)
    if form.is_valid():
        form.save()
        return redirect('index')
    return render(request, template_name, {'form': form})


def delete(request, pk, template_name='crudapp/confirm_delete.html'):
    contact = get_object_or_404(Contact, pk=pk)
    if request.method == 'POST':
        contact.delete()
        return redirect('index')
    return render(request, template_name, {'object': contact})

    Django 3 Tutorial, Step 11 - Creating Templates

    Open the settings.py file and add os.path.join(BASE_DIR, 'templates') to the TEMPLATES array:

TEMPLATES = [
    {
        'BACKEND': 'django.template.backends.django.DjangoTemplates',
        'DIRS': [os.path.join(BASE_DIR, 'templates')],
        'APP_DIRS': True,
        'OPTIONS': {
            'context_processors': [
                'django.template.context_processors.debug',
                'django.template.context_processors.request',
                'django.contrib.auth.context_processors.auth',
                'django.contrib.messages.context_processors.messages',
            ],
        },
    },
]

    This will tell django to look for the templates in the templates folder.

Next, in the root of your project (next to manage.py), create a templates folder containing a crudapp subfolder, and inside that subfolder create the following files:

    • base.html
    • confirm_delete.html
    • edit.html
    • index.html
    • create.html
    • contact-detail.html

You can create them by running the following commands from the root of your project:

    $ mkdir templates
    $ cd templates
    $ mkdir crudapp
    $ touch crudapp/base.html
    $ touch crudapp/confirm_delete.html
    $ touch crudapp/edit.html
    $ touch crudapp/index.html
    $ touch crudapp/create.html
    $ touch crudapp/contact-detail.html
    

Open the templates/crudapp/base.html file and add:

<!DOCTYPE html>
<html>
<head>
    <title>Django 3 CRUD Example</title>
    <meta charset="utf-8">
    <meta name="viewport" content="width=device-width, initial-scale=1">
    <link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/css/bootstrap.min.css">
</head>
<body>
    {% block content %}
    {% endblock %}
    <script src="https://code.jquery.com/jquery-3.3.1.slim.min.js" integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo" crossorigin="anonymous"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js" integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49" crossorigin="anonymous"></script>
    <script src="https://ajax.googleapis.com/ajax/libs/jquery/3.3.1/jquery.min.js"></script>
    <script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.0/js/bootstrap.min.js"></script>
</body>
</html>

Next, open the templates/crudapp/index.html file and add:

    {% extends 'crudapp/base.html' %}
    {% block content %}
    <divclass="container-fluid"><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10"><h3class="round3"style="text-align:center;">Contacts</h3></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div><divclass="row"><divclass="col-md-10 col-xs-10 col-sm-10"></div><divclass="col-md-2 col-xs-1 col-sm-1"><br/><ahref="{% url 'create' %}"><buttontype="button"class="btn btn-success"><spanclass="glyphicon glyphicon-plus"></span></button></a></div></div><br/>
        {% for contact in contact_list %}
        <divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-7 col-xs-7 col-sm-7"><ulclass="list-group"><liclass="list-group-item "><ahref="{% url 'detail' contact.pk %}"> {{ contact.firstName }} {{contact.lastName}} </a><spanclass="badge"></span></li></ul><br></div><divclass="col-md-1 col-xs-1 col-sm-1"><ahref="{% url 'detail' contact.pk %}"><buttontype="button"class="btn btn-info"><spanclass="glyphicon glyphicon-open"></span></button></a></div><divclass="col-md-1"><ahref="{% url 'edit' contact.pk %}"><buttontype="button"class="btn btn-info"><spanclass="glyphicon glyphicon-pencil"></span></button></a></div><divclass="col-md-1"><ahref="{% url 'delete' contact.pk %}"><buttontype="button"class="btn btn-danger"><spanclass="glyphicon glyphicon-trash"></span></button></a></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div>
        {% endfor %}
    </div>
    {% endblock %}
    

Next, open the templates/crudapp/create.html file and add:

    {% load widget_tweaks %}
    <!DOCTYPE html><html><head><title>Posts</title><metacharset="utf-8"><metaname="viewport"content="width=device-width, initial-scale=1"><linkrel="stylesheet"href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css"integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO"crossorigin="anonymous"><style type="text/css"><style></style></style></head><body><divclass="container-fluid"><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10 "><br/><h6style="text-align:center;"><fontcolor="red"> All fields are required</font></h6></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10"><formmethod="post"novalidate>
                        {% csrf_token %}
                        {% for hidden_field in form.hidden_fields %}
                        {{ hidden_field }}
                        {% endfor %}
                        {% for field in form.visible_fields %}
                        <divclass="form-group">
                            {{ field.label_tag }}
                            {% render_field field class="form-control" %}
                            {% if field.help_text %}
                            <smallclass="form-text text-muted">{{ field.help_text }}</small>
                            {% endif %}
                        </div>
                        {% endfor %}
                        <buttontype="submit"class="btn btn-primary">post</button></form><br></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div></div><script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js"integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49"crossorigin="anonymous"></script><script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js"integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy"crossorigin="anonymous"></script></body></html>

Next, open the templates/crudapp/edit.html file and add:

    {% load widget_tweaks %}
    <!DOCTYPE html><html><head><title>Edit Contact</title><metacharset="utf-8"><metaname="viewport"content="width=device-width, initial-scale=1"><linkrel="stylesheet"href="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/css/bootstrap.min.css"integrity="sha384-MCw98/SFnGE8fJT3GXwEOngsV7Zt27NXFoaoApmYm81iuXoPkFOJwJ8ERdknLPMO"crossorigin="anonymous"><style type="text/css"><style></style></style></head><body><divclass="container-fluid"><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10 "><br/><h6style="text-align:center;"><fontcolor="red"> All fields are required</font></h6></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div><divclass="row"><divclass="col-md-1 col-xs-1 col-sm-1"></div><divclass="col-md-10 col-xs-10 col-sm-10"><formmethod="post"novalidate>
                    {% csrf_token %}
                    {% for hidden_field in form.hidden_fields %}
                    {{ hidden_field }}
                    {% endfor %}
                    {% for field in form.visible_fields %}
                    <divclass="form-group">
                        {{ field.label_tag }}
                        {% render_field field class="form-control" %}
                        {% if field.help_text %}
                        <smallclass="form-text text-muted">{{ field.help_text }}</small>
                        {% endif %}
                    </div>
                    {% endfor %}
                    <buttontype="submit"class="btn btn-primary">submit</button></form><br></div><divclass="col-md-1 col-xs-1 col-sm-1"></div></div></div><script src="https://code.jquery.com/jquery-3.3.1.slim.min.js"integrity="sha384-q8i/X+965DzO0rT7abK41JStQIAqVgRVzpbzo5smXKp4YfRvH+8abtTE1Pi6jizo"crossorigin="anonymous"></script><script src="https://cdnjs.cloudflare.com/ajax/libs/popper.js/1.14.3/umd/popper.min.js"integrity="sha384-ZMP7rVo3mIykV+2+9J3UJ46jBk0WLaUAdn689aCwoqbBJiSnjAK/l8WvCWPIPm49"crossorigin="anonymous"></script><script src="https://stackpath.bootstrapcdn.com/bootstrap/4.1.3/js/bootstrap.min.js"integrity="sha384-ChfqqxuZUCnJSK3+MXmPNIyE6ZbWh2IMqE241rYiqJxyMiZ6OW/JmZQ5stwEULTy"crossorigin="anonymous"></script></body></html>

Next, open the templates/crudapp/confirm_delete.html file and add:

    {% extends 'crudapp/base.html' %}
    {% block content %}
<div class="container">
    <div class="row"></div><br/>
    <div class="row">
        <div class="col-md-2 col-xs-2 col-sm-2"></div>
        <div class="col-md-10 col-xs-10 col-sm-10">
            <form method="post">
                {% csrf_token %}
                <div class="form-row">
                    <div class="alert alert-warning">
                        Are you sure you want to delete {{ object }}?
                    </div>
                </div>
                <button type="submit" class="btn btn-danger"><span class="glyphicon glyphicon-trash"></span></button>
            </form>
        </div>
    </div>
</div>
    {% endblock %}
    

    Django 3 Tutorial, Step 12 - Creating URLs

    In this step, we’ll create the urls to access our CRUD views.

    Go to the urls.py file and update it as follows:

from django.contrib import admin
from django.urls import path
from crudapp import views

urlpatterns = [
    path('admin/', admin.site.urls),
    path('contacts/', views.IndexView.as_view(), name='index'),
    path('contacts/<int:pk>/', views.ContactDetailView.as_view(), name='detail'),
    path('contacts/edit/<int:pk>/', views.edit, name='edit'),
    path('contacts/create/', views.create, name='create'),
    path('contacts/delete/<int:pk>/', views.delete, name='delete'),
]

Django 3 Tutorial, Step 13 - Running the Local Development Server

    In this step, we’ll run the local development server for playing with our app without deploying it to the web.

    Head back to your command-line interface and run the following command:

    $ python manage.py runserver
    

Next, go to the http://localhost:8000/contacts/ address with a web browser to see the contacts index page.

    Conclusion

    In this django 3 tutorial, we have initialized a new django project, created and migrated a MySQL database, and built a simple CRUD interface.
