Quantcast
Channel: Planet Python
Viewing all articles
Browse latest Browse all 22462

Codementor: Spark & Python: MLlib Basic Statistics & Exploratory Data Analysis

$
0
0

Instructions

My Spark & Python series of tutorials can be examined individually, although there is a more or less linear ‘story’ when followed in sequence. By using the same dataset they try to solve a related set of tasks with it.

It is not the only one but, a good way of following these Spark tutorials is by first cloning the GitHub repo, and then starting your own IPython notebook in pySpark mode. For example, if we have a standalone Spark installation running in our localhost with a maximum of 6Gb per node assigned to IPython:

MASTER="spark://127.0.0.1:7077" SPARK_EXECUTOR_MEMORY="6G" IPYTHON_OPTS="notebook --pylab inline" ~/spark-1.3.1-bin-hadoop2.6/bin/pyspark

Notice that the path to the pyspark command will depend on your specific installation. So as a requirement, you need to have Spark installed in the same machine you are going to start the IPython notebook server.

For more Spark options see here. In general it works the rule of passign options described in the form spark.executor.memory as SPARK_EXECUTOR_MEMORY when calling IPython/pySpark.

Datasets

We will be using datasets from the KDD Cup 1999.

References

The reference book for these and other Spark related topics is Learning Spark by Holden Karau, Andy Konwinski, Patrick Wendell, and Matei Zaharia.

The KDD Cup 1999 competition dataset is described in detail here.

Introduction

So far we have used different map and aggregation functions, on simple and key/value pair RDD’s, in order to get simple statistics that help us understand our datasets. In this notebook we will introduce Spark’s machine learning library MLlib through its basic statistics functionality in order to better understand our dataset. We will use the reduced 10-percent KDD Cup 1999 datasets through the notebook.

Getting the data and creating the RDD

As we did in our first notebook, we will use the reduced dataset (10 percent) provided for the KDD Cup 1999, containing nearly half million network interactions. The file is provided as a Gzip file that we will download locally.

import urllib
f = urllib.urlretrieve ("http://kdd.ics.uci.edu/databases/kddcup99/kddcup.data_10_percent.gz", "kddcup.data_10_percent.gz")


data_file = "./kddcup.data_10_percent.gz"
raw_data = sc.textFile(data_file)

Local vectors

A local vector is often used as a base type for RDDs in Spark MLlib. A local vector has integer-typed and 0-based indices and double-typed values, stored on a single machine. MLlib supports two types of local vectors: dense and sparse. A dense vector is backed by a double array representing its entry values, while a sparse vector is backed by two parallel arrays: indices and values.

For dense vectors, MLlib uses either Python lists or the NumPyarray type. The later is recommended, so you can simply pass NumPy arrays around.

For sparse vectors, users can construct a SparseVector object from MLlib or pass SciPyscipy.sparse column vectors if SciPy is available in their environment. The easiest way to create sparse vectors is to use the factory methods implemented in Vectors.

An RDD of dense vectors

Let’s represent each network interaction in our dataset as a dense vector. For that we will use the NumPyarray type.

import numpy as np

def parse_interaction(line):
    line_split = line.split(",")
    # keep just numeric and logical values
    symbolic_indexes = [1,2,3,41]
    clean_line_split = [item for i,item in enumerate(line_split) if i not in symbolic_indexes]
    return np.array([float(x) for x in clean_line_split])

vector_data = raw_data.map(parse_interaction)

Summary statistics

Spark’s MLlib provides column summary statistics for RDD[Vector] through the function colStats available in Statistics. The method returns an instance of MultivariateStatisticalSummary, which contains the column-wise max, min, mean, variance, and number of nonzeros, as well as the total count.

from pyspark.mllib.stat import Statistics 
from math import sqrt 


# Compute column summary statistics.
summary = Statistics.colStats(vector_data)

print "Duration Statistics:"
print " Mean: {}".format(round(summary.mean()[0],3))
print " St. deviation: {}".format(round(sqrt(summary.variance()[0]),3))
print " Max value: {}".format(round(summary.max()[0],3))
print " Min value: {}".format(round(summary.min()[0],3))
print " Total value count: {}".format(summary.count())
print " Number of non-zero values: {}".format(summary.numNonzeros()[0])
Duration Statistics:  
Mean: 47.979  
St. deviation: 707.746  
Max value: 58329.0  
Min value: 0.0  
Total value count: 494021  
Number of non-zero values: 12350.0

Summary statistics by label

The interesting part of summary statistics, in our case, comes from being able to obtain them by the type of network attack or ‘label’ in our dataset. By doing so we will be able to better characterise our dataset dependent variable in terms of the independent variables range of values.

If we want to do such a thing we could filter our RDD containing labels as keys and vectors as values. For that we just need to adapt our parse_interaction function to return a tuple with both elements.

def parse_interaction_with_key(line):
    line_split = line.split(",")
    # keep just numeric and logical values
    symbolic_indexes = [1,2,3,41]
    clean_line_split = [item for i,item in enumerate(line_split) if i not in symbolic_indexes]
    return (line_split[41], np.array([float(x) for x in clean_line_split]))

label_vector_data = raw_data.map(parse_interaction_with_key)

The next step is not very sofisticated. We use filter on the RDD to leave out other labels but the one we want to gather statistics from.

normal_label_data = label_vector_data.filter(lambda x: x[0]=="normal.")

Now we can use the new RDD to call colStats on the values.

normal_summary = Statistics.colStats(normal_label_data.values())

And collect the results as we did before.

print "Duration Statistics for label: {}".format("normal")
print " Mean: {}".format(normal_summary.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(normal_summary.variance()[0]),3))
print " Max value: {}".format(round(normal_summary.max()[0],3))
print " Min value: {}".format(round(normal_summary.min()[0],3))
print " Total value count: {}".format(normal_summary.count())
print " Number of non-zero values: {}".format(normal_summary.numNonzeros()[0])
Duration Statistics for label: normal  
Mean: 216.657322313  
St. deviation: 1359.213  
Max value: 58329.0  
Min value: 0.0  
Total value count: 97278  
Number of non-zero values: 11690.0

Instead of working with a key/value pair we could have just filter our raw data split using the label in column 41. Then we can parse the results as we did before. This will work as well. However having our data organised as key/value pairs will open the door to better manipulations. Since values() is a transformation on an RDD, and not an action, we don’t perform any computation until we call colStats anyway.

But lets wrap this within a function so we can reuse it with any label.

def summary_by_label(raw_data, label):
    label_vector_data = raw_data.map(parse_interaction_with_key).filter(lambda x: x[0]==label)
    return Statistics.colStats(label_vector_data.values())

Let’s give it a try with the “normal.” label again.

normal_sum = summary_by_label(raw_data, "normal.")

print "Duration Statistics for label: {}".format("normal")
print " Mean: {}".format(normal_sum.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(normal_sum.variance()[0]),3))
print " Max value: {}".format(round(normal_sum.max()[0],3))
print " Min value: {}".format(round(normal_sum.min()[0],3))
print " Total value count: {}".format(normal_sum.count())
print " Number of non-zero values: {}".format(normal_sum.numNonzeros()[0])
Duration Statistics for label: normal  
Mean: 216.657322313  
St. deviation: 1359.213  
Max value: 58329.0  
Min value: 0.0  
Total value count: 97278  
Number of non-zero values: 11690.0

Let’s try now with some network attack. We have all of them listed here.

guess_passwd_summary = summary_by_label(raw_data, "guess_passwd.")

print "Duration Statistics for label: {}".format("guess_password")
print " Mean: {}".format(guess_passwd_summary.mean()[0],3)
print " St. deviation: {}".format(round(sqrt(guess_passwd_summary.variance()[0]),3))
print " Max value: {}".format(round(guess_passwd_summary.max()[0],3))
print " Min value: {}".format(round(guess_passwd_summary.min()[0],3))
print " Total value count: {}".format(guess_passwd_summary.count())
print " Number of non-zero values: {}".format(guess_passwd_summary.numNonzeros()[0])
Duration Statistics for label: guess_password  
Mean: 2.71698113208  
St. deviation: 11.88  
Max value: 60.0  
Min value: 0.0  
Total value count: 53  
Number of non-zero values: 4.0

We can see that this type of attack is shorter in duration than a normal interaction. We could build a table with duration statistics for each type of interaction in our dataset. First we need to get a list of labels as described in the first line here.

label_list = ["back.","buffer_overflow.","ftp_write.",
              "guess_passwd.","imap.","ipsweep.",
              "land.","loadmodule.","multihop.",
              "neptune.","nmap.","normal.","perl.",
              "phf.","pod.","portsweep.",
              "rootkit.","satan.","smurf.","spy.",
              "teardrop.","warezclient.",
              "warezmaster."]

Then we get a list of statistics for each label.

stats_by_label = [(label, summary_by_label(raw_data, label)) for label in label_list]

Now we get the duration column, first in our dataset (i.e. index 0).

duration_by_label = [ 
    (stat[0], 
     np.array([
         float(stat[1].mean()[0]), 
         float(sqrt(stat[1].variance()[0])), 
         float(stat[1].min()[0]), 
         float(stat[1].max()[0]), 
         int(stat[1].count())])) 
    for stat in stats_by_label]

That we can put into a Pandas data frame.

import pandas as pd
pd.set_option('display.max_columns', 50)

stats_by_label_df = pd.DataFrame.from_items(duration_by_label, columns=["Mean", "Std Dev", "Min", "Max", "Count"], orient='index')

And print it.

print "Duration statistics, by label"
stats_by_label_df

Duration statistics, by label

LabelMeanStd DevMinMaxCount
back.0.1289151.1100620142203
buffer_overflow.91.70000097.514685032130
ftp_write.32.37500047.44903301348
guess_passwd.2.71698111.87981106053
imap.6.00000014.17424004112
ipsweep.0.0344830.438439071247
land.0.0000000.0000000021
loadmodule.36.22222241.40886901039
multihop.184.000000253.85100607187
neptune.0.0000000.00000000107201
nmap.0.0000000.00000000231
normal.216.6573221359.21346905832997278
perl.41.33333314.84362925543
phf.4.5000005.7445630124
pod.0.0000000.00000000264
portsweep.1915.2990387285.1251590424481040
rootkit.100.800000216.185003070810
satan.0.0402770.5224330111589
smurf.0.0000000.00000000280790
spy.318.00000026.8700582993372
teardrop.0.0000000.00000000979
warezclient.615.2578432207.6949660151681020
warezmaster.15.05000033.385271015620

In order to reuse this code and get a dataframe from any variable in our dataset we will define a function.

def get_variable_stats_df(stats_by_label, column_i):
    column_stats_by_label = [
        (stat[0], 
         np.array([
             float(stat[1].mean()[column_i]), 
             float(sqrt(stat[1].variance()[column_i])), 
             float(stat[1].min()[column_i]), 
             float(stat[1].max()[column_i]), 
             int(stat[1].count())])) 
        for stat in stats_by_label
    ]
    return pd.DataFrame.from_items(
        column_stats_by_label, 
        columns=["Mean", "Std Dev", "Min", "Max", "Count"], 
        orient='index')

Let’s try for duration again.

get_variable_stats_df(stats_by_label,0)
LabelMeanStd DevMinMaxCount
back.0.1289151.1100620142203
buffer_overflow.91.70000097.514685032130
ftp_write.32.37500047.44903301348
guess_passwd.2.71698111.87981106053
imap.6.00000014.17424004112
ipsweep.0.0344830.438439071247
land.0.0000000.0000000021
loadmodule.36.22222241.40886901039
multihop.184.000000253.85100607187
neptune.0.0000000.00000000107201
nmap.0.0000000.00000000231
normal.216.6573221359.21346905832997278
perl.41.33333314.84362925543
phf.4.5000005.7445630124
pod.0.0000000.00000000264
portsweep.1915.2990387285.1251590424481040
rootkit.100.800000216.185003070810
satan.0.0402770.5224330111589
smurf.0.0000000.00000000280790
spy.318.00000026.8700582993372
teardrop.0.0000000.00000000979
warezclient.615.2578432207.6949660151681020
warezmaster.15.05000033.385271015620

Now for the next numeric column in the dataset, src_bytes.

print "src_bytes statistics, by label"
get_variable_stats_df(stats_by_label,1)
src_bytes statistics, by label
LabelMeanStd DevMinMaxCount
back.54156.3558783159.36023213140545402203
buffer_overflow.1400.4333331337.1326160627430
ftp_write.220.750000267.74761606768
guess_passwd.125.3396233.03786010412653
imap.347.583333629.9260360149212
ipsweep.10.0834005.2316580181247
land.0.0000000.0000000021
loadmodule.151.888889127.74529803029
multihop.435.142857540.960389014127
neptune.0.0000000.00000000107201
nmap.24.11688359.4198710207231
normal.1157.04752434226.1247180219461997278
perl.265.6666674.9328832602693
phf.51.0000000.00000051514
pod.1462.651515125.0980445641480264
portsweep.666707.43653821500665.86670006933756401040
rootkit.294.700000538.5781800172710
satan.1.33731942.946200017101589
smurf.935.772300200.0223865201032280790
spy.174.50000088.3883481122372
teardrop.28.0000000.0000002828979
warezclient.300219.5627451200905.2431303051356781020
warezmaster.49.300000212.155132095020

And so on. By reusing the summary_by_label and get_variable_stats_df functions we can perform some exploratory data analysis in large datasets with Spark.

Correlations

Spark’s MLlib supports Pearson’s and Spearman’s to calculate pairwise correlation methods among many series. Both of them are provided by the corr method in the Statistics package.

We have two options as input. Either two RDD[Double]s or an RDD[Vector]. In the first case the output will be a Double value, while in the second a whole correlation Matrix. Due to the nature of our data, we will obtain the second.

from pyspark.mllib.stat import Statistics 
correlation_matrix = Statistics.corr(vector_data, method="spearman")

Once we have the correlations ready, we can start inspecting their values.

import pandas as pd
pd.set_option('display.max_columns', 50)

col_names = ["duration","src_bytes","dst_bytes",
             "land","wrong_fragment",
             "urgent","hot","num_failed_logins",
             "logged_in","num_compromised",
             "root_shell","su_attempted",
             "num_root","num_file_creations",
             "num_shells","num_access_files",
             "num_outbound_cmds",
             "is_hot_login","is_guest_login","count",
             "srv_count","serror_rate",
             "srv_serror_rate","rerror_rate",
             "srv_rerror_rate","same_srv_rate",
             "diff_srv_rate","srv_diff_host_rate",
             "dst_host_count","dst_host_srv_count",
             "dst_host_same_srv_rate","dst_host_diff_srv_rate",
             "dst_host_same_src_port_rate",
             "dst_host_srv_diff_host_rate","dst_host_serror_rate",
             "dst_host_srv_serror_rate",
             "dst_host_rerror_rate","dst_host_srv_rerror_rate"]

corr_df = pd.DataFrame(
                    correlation_matrix, 
                    index=col_names, 
                    columns=col_names)

corr_df
.durationsrc_bytesdst_byteslandwrong_fragmenturgenthotnum_failed_loginslogged_innum_compromisedroot_shellsu_attemptednum_rootnum_file_creationsnum_shellsnum_access_filesnum_outbound_cmdsis_hot_loginis_guest_logincountsrv_countserror_ratesrv_serror_ratererror_ratesrv_rerror_ratesame_srv_ratediff_srv_ratesrv_diff_host_ratedst_host_countdst_host_srv_countdst_host_same_srv_ratedst_host_diff_srv_ratedst_host_same_src_port_ratedst_host_srv_diff_host_ratedst_host_serror_ratedst_host_srv_serror_ratedst_host_rerror_ratedst_host_srv_rerror_rate
duration1.0000000.0141960.299189-0.001068-0.0080250.0178830.1086390.0143630.1595640.0106870.0404250.0260150.0134010.0610990.0086320.019407-0.000019-0.0000100.205606-0.259032-0.250139-0.074211-0.073663-0.025936-0.0264200.062291-0.0508750.123621-0.161107-0.217167-0.2119790.231644-0.0652020.100692-0.056753-0.057298-0.007759-0.013891
src_bytes0.0141961.000000-0.167931-0.009404-0.0193580.0000940.113920-0.008396-0.0897020.1185620.0030670.002282-0.0020500.0277100.014403-0.0014970.0000100.0000190.0275110.6662300.722609-0.657460-0.652391-0.342180-0.3329770.744046-0.739988-0.1040420.1303770.7419790.729151-0.7129650.815039-0.140231-0.645920-0.641792-0.297338-0.300581
dst_bytes0.299189-0.1679311.000000-0.003040-0.0226590.0072340.1931560.0219520.8821850.1697720.0260540.012192-0.0038840.034154-0.0000540.065776-0.0000310.0000410.085947-0.639157-0.497683-0.205848-0.198715-0.100958-0.0813070.229677-0.2225720.521003-0.6119720.0241240.055033-0.035073-0.3961950.578557-0.167047-0.158378-0.0030420.001621
land-0.001068-0.009404-0.0030401.000000-0.000333-0.000065-0.000539-0.000076-0.002785-0.000447-0.000093-0.000049-0.000230-0.000150-0.000076-0.000211-0.0028810.002089-0.000250-0.010939-0.0101280.0141600.014342-0.000451-0.0016900.002153-0.0018460.020678-0.019923-0.0123410.002576-0.0018030.0042650.0161710.0135660.0122650.000389-0.001816
wrong_fragment-0.008025-0.019358-0.022659-0.0003331.000000-0.000150-0.004042-0.000568-0.020911-0.003370-0.000528-0.000248-0.001727-0.001160-0.000507-0.001519-0.0001470.000441-0.001869-0.057711-0.029117-0.008849-0.0233820.000430-0.0126760.010218-0.0093860.012117-0.029149-0.058225-0.0495600.055542-0.0154490.0073060.010387-0.0241170.046656-0.013666
urgent0.0178830.0000940.007234-0.000065-0.0001501.0000000.0085940.0630090.0068210.0317650.0674370.0000200.0619940.061383-0.0000660.0233800.0128790.005162-0.000100-0.004778-0.004799-0.001338-0.001327-0.000705-0.0007260.001521-0.001522-0.000788-0.005894-0.005698-0.0040780.005208-0.001939-0.000976-0.001381-0.001370-0.000786-0.000782
hot0.1086390.1139200.193156-0.000539-0.0040420.0085941.0000000.1125600.1891260.8115290.101983-0.0004000.0030960.0286940.0091460.004224-0.000393-0.0002480.463706-0.120847-0.114735-0.035487-0.0349340.0134680.0520030.041342-0.0405550.032141-0.074178-0.0179600.018783-0.017198-0.086998-0.014141-0.004706-0.0107210.1990190.189142
num_failed_logins0.014363-0.0083960.021952-0.000076-0.0005680.0630090.1125601.000000-0.0021900.0046190.0168950.0727480.0100600.015211-0.0000930.0055810.003431-0.001560-0.000428-0.018024-0.018027-0.003674-0.0040270.0353240.0348760.005716-0.005538-0.003096-0.028369-0.0150920.003004-0.002960-0.006617-0.0025880.0147130.0149140.0323950.032151
logged_in0.159564-0.0897020.882185-0.002785-0.0209110.0068210.189126-0.0021901.0000000.1611900.0252930.0118130.0825330.0555300.0243540.0726980.0000790.0001270.089318-0.578287-0.438947-0.187114-0.180122-0.091962-0.0722870.216969-0.2140190.503807-0.6827210.0803520.114526-0.093565-0.3595060.659078-0.143283-0.1324740.0072360.012979
num_compromised0.0106870.1185620.169772-0.000447-0.0033700.0317650.8115290.0046190.1611901.0000000.0855580.0489850.0285570.0312230.0112560.0069770.001048-0.000438-0.002504-0.097212-0.091154-0.030516-0.0302640.0085730.0540060.035253-0.0349530.036497-0.0416150.0034650.038980-0.039091-0.078843-0.020979-0.005019-0.0045040.2141150.217858
root_shell0.0404250.0030670.026054-0.000093-0.0005280.0674370.1019830.0168950.0252930.0855581.0000000.2334860.0945120.1406500.1320560.0693530.011462-0.006602-0.000405-0.016409-0.015174-0.004952-0.004923-0.001104-0.0011430.004946-0.0045530.002286-0.021367-0.0119060.000515-0.000916-0.0046170.008631-0.003498-0.0030320.0027630.002151
su_attempted0.0260150.0022820.012192-0.000049-0.0002480.000020-0.0004000.0727480.0118130.0489850.2334861.0000000.1193260.0531100.0404870.081272-0.0188960.012927-0.000219-0.008279-0.008225-0.002318-0.002295-0.001227-0.0012530.002634-0.0026490.000348-0.006697-0.006288-0.0057380.006687-0.0050200.0010520.0019740.0028930.0031730.001731
num_root0.013401-0.002050-0.003884-0.000230-0.0017270.0619940.0030960.0100600.0825330.0285570.0945120.1193261.0000000.0475210.0344050.0145130.001524-0.002585-0.001281-0.054721-0.053530-0.016031-0.015936-0.008610-0.0087080.013881-0.0113370.006316-0.078717-0.038689-0.0389350.047414-0.0159680.061030-0.008457-0.007096-0.000421-0.005012
num_file_creations0.0610990.0277100.034154-0.000150-0.0011600.0613830.0286940.0152110.0555300.0312230.1406500.0531100.0475211.0000000.0686600.031042-0.004081-0.0016640.013242-0.036467-0.034598-0.009703-0.010390-0.005069-0.0047750.009784-0.0087110.014412-0.049529-0.026890-0.0217310.027092-0.0150180.030590-0.002257-0.0042950.000626-0.001096
num_shells0.0086320.014403-0.000054-0.000076-0.000507-0.0000660.009146-0.0000930.0243540.0112560.1320560.0404870.0344050.0686601.0000000.019438-0.002592-0.006631-0.000405-0.013938-0.011784-0.004343-0.004740-0.002541-0.0025720.004282-0.0037430.001096-0.021200-0.012017-0.0099620.010761-0.0035210.015882-0.001588-0.002357-0.000617-0.002020
num_access_files0.019407-0.0014970.065776-0.000211-0.0015190.0233800.0042240.0055810.0726980.0069770.0693530.0812720.0145130.0310420.0194381.000000-0.001597-0.0028500.002466-0.045282-0.040497-0.013945-0.013572-0.0075810.0018740.015499-0.0151120.024266-0.023865-0.023657-0.0213580.026703-0.0332880.011765-0.011197-0.011487-0.004743-0.004552
num_outbound_cmds-0.0000190.000010-0.000031-0.002881-0.0001470.012879-0.0003930.0034310.0000790.0010480.011462-0.0188960.001524-0.004081-0.002592-0.0015971.0000000.8228900.000924-0.0000760.0001000.0001670.0002090.0005360.0003460.0002080.000328-0.000141-0.000424-0.000280-0.000503-0.000181-0.0004550.000288-0.000011-0.000372-0.000823-0.001038
is_hot_login-0.0000100.0000190.0000410.0020890.0004410.005162-0.000248-0.0015600.000127-0.000438-0.0066020.012927-0.002585-0.001664-0.006631-0.0028500.8228901.0000000.0015120.0000360.0000640.000102-0.000302-0.0005500.000457-0.000159-0.000235-0.000360-0.0001060.0002060.000229-0.0000040.0002830.000538-0.000076-0.000007-0.000435-0.000529
is_guest_login0.2056060.0275110.085947-0.000250-0.001869-0.0001000.463706-0.0004280.089318-0.002504-0.000405-0.000219-0.0012810.013242-0.0004050.0024660.0009240.0015121.000000-0.062340-0.062713-0.017343-0.017240-0.008867-0.0091930.018042-0.017000-0.008878-0.055453-0.044366-0.0417490.044640-0.038092-0.012578-0.001066-0.0168850.025282-0.004292
count-0.2590320.666230-0.639157-0.010939-0.057711-0.004778-0.120847-0.018024-0.578287-0.097212-0.016409-0.008279-0.054721-0.036467-0.013938-0.045282-0.0000760.000036-0.0623401.0000000.950587-0.303538-0.308923-0.213824-0.2213520.346718-0.361737-0.3840100.5474430.5869790.539698-0.5468690.776906-0.496554-0.331571-0.335290-0.261194-0.256176
srv_count-0.2501390.722609-0.497683-0.010128-0.029117-0.004799-0.114735-0.018027-0.438947-0.091154-0.015174-0.008225-0.053530-0.034598-0.011784-0.0404970.0001000.000064-0.0627130.9505871.000000-0.428185-0.421424-0.281468-0.2840340.517227-0.511998-0.2390570.4426110.7207460.681955-0.6739160.812280-0.391712-0.449096-0.442823-0.313442-0.308132
serror_rate-0.074211-0.657460-0.2058480.014160-0.008849-0.001338-0.035487-0.003674-0.187114-0.030516-0.004952-0.002318-0.016031-0.009703-0.004343-0.0139450.0001670.000102-0.017343-0.303538-0.4281851.0000000.990888-0.091157-0.095285-0.8519150.828012-0.1214890.165350-0.724317-0.7457450.719708-0.650336-0.1535680.9739470.965663-0.103198-0.105434
srv_serror_rate-0.073663-0.652391-0.1987150.014342-0.023382-0.001327-0.034934-0.004027-0.180122-0.030264-0.004923-0.002295-0.015936-0.010390-0.004740-0.0135720.000209-0.000302-0.017240-0.308923-0.4214240.9908881.000000-0.110664-0.115286-0.8393150.815305-0.1122220.160322-0.713313-0.7343340.707753-0.646256-0.1480720.9672140.970617-0.122630-0.124656
rerror_rate-0.025936-0.342180-0.100958-0.0004510.000430-0.0007050.0134680.035324-0.0919620.008573-0.001104-0.001227-0.008610-0.005069-0.002541-0.0075810.000536-0.000550-0.008867-0.213824-0.281468-0.091157-0.1106641.0000000.978813-0.3279860.345571-0.017902-0.067857-0.330391-0.3031260.308722-0.2784650.073061-0.094076-0.1106460.9102250.911622
srv_rerror_rate-0.026420-0.332977-0.081307-0.001690-0.012676-0.0007260.0520030.034876-0.0722870.054006-0.001143-0.001253-0.008708-0.004775-0.0025720.0018740.0003460.000457-0.009193-0.221352-0.284034-0.095285-0.1152860.9788131.000000-0.3165680.3334390.011285-0.072595-0.323032-0.2943280.300186-0.2822390.075178-0.096146-0.1143410.9045910.914904
same_srv_rate0.0622910.7440460.2296770.0021530.0102180.0015210.0413420.0057160.2169690.0352530.0049460.0026340.0138810.0097840.0042820.0154990.000208-0.0001590.0180420.3467180.517227-0.851915-0.839315-0.327986-0.3165681.000000-0.9821090.140660-0.1901210.8487540.873551-0.8445370.7328410.179040-0.830067-0.819335-0.282487-0.282913
diff_srv_rate-0.050875-0.739988-0.222572-0.001846-0.009386-0.001522-0.040555-0.005538-0.214019-0.034953-0.004553-0.002649-0.011337-0.008711-0.003743-0.0151120.000328-0.000235-0.017000-0.361737-0.5119980.8280120.8153050.3455710.333439-0.9821091.000000-0.1382930.185942-0.844028-0.8685800.850911-0.727031-0.1769300.8072050.7958440.2990410.298904
srv_diff_host_rate0.123621-0.1040420.5210030.0206780.012117-0.0007880.032141-0.0030960.5038070.0364970.0022860.0003480.0063160.0144120.0010960.024266-0.000141-0.000360-0.008878-0.384010-0.239057-0.121489-0.112222-0.0179020.0112850.140660-0.1382931.000000-0.4450510.0350100.068648-0.050472-0.2227070.433173-0.097973-0.0926610.0225850.024722
dst_host_count-0.1611070.130377-0.611972-0.019923-0.029149-0.005894-0.074178-0.028369-0.682721-0.041615-0.021367-0.006697-0.078717-0.049529-0.021200-0.023865-0.000424-0.000106-0.0554530.5474430.4426110.1653500.160322-0.067857-0.072595-0.1901210.185942-0.4450511.0000000.022731-0.0704480.0443380.189876-0.9188940.1238810.113845-0.125142-0.125273
dst_host_srv_count-0.2171670.7419790.024124-0.012341-0.058225-0.005698-0.017960-0.0150920.0803520.003465-0.011906-0.006288-0.038689-0.026890-0.012017-0.023657-0.0002800.000206-0.0443660.5869790.720746-0.724317-0.713313-0.330391-0.3230320.848754-0.8440280.0350100.0227311.0000000.970072-0.9551780.7694810.043668-0.722607-0.708392-0.312040-0.300787
dst_host_same_srv_rate-0.2119790.7291510.0550330.002576-0.049560-0.0040780.0187830.0030040.1145260.0389800.000515-0.005738-0.038935-0.021731-0.009962-0.021358-0.0005030.000229-0.0417490.5396980.681955-0.745745-0.734334-0.303126-0.2943280.873551-0.8685800.068648-0.0704480.9700721.000000-0.9802450.7711580.107926-0.742045-0.725272-0.278068-0.264383
dst_host_diff_srv_rate0.231644-0.712965-0.035073-0.0018030.0555420.005208-0.017198-0.002960-0.093565-0.039091-0.0009160.0066870.0474140.0270920.0107610.026703-0.000181-0.0000040.044640-0.546869-0.6739160.7197080.7077530.3087220.300186-0.8445370.850911-0.0504720.044338-0.955178-0.9802451.000000-0.766402-0.0886650.7192750.7011490.2874760.271067
dst_host_same_src_port_rate-0.0652020.815039-0.3961950.004265-0.015449-0.001939-0.086998-0.006617-0.359506-0.078843-0.004617-0.005020-0.015968-0.015018-0.003521-0.033288-0.0004550.000283-0.0380920.7769060.812280-0.650336-0.646256-0.278465-0.2822390.732841-0.727031-0.2227070.1898760.7694810.771158-0.7664021.000000-0.175310-0.658737-0.652636-0.299273-0.297100
dst_host_srv_diff_host_rate0.100692-0.1402310.5785570.0161710.007306-0.000976-0.014141-0.0025880.659078-0.0209790.0086310.0010520.0610300.0305900.0158820.0117650.0002880.000538-0.012578-0.496554-0.391712-0.153568-0.1480720.0730610.0751780.179040-0.1769300.433173-0.9188940.0436680.107926-0.088665-0.1753101.000000-0.118697-0.1037150.1149710.120767
dst_host_serror_rate-0.056753-0.645920-0.1670470.0135660.010387-0.001381-0.0047060.014713-0.143283-0.005019-0.0034980.001974-0.008457-0.002257-0.001588-0.011197-0.000011-0.000076-0.001066-0.331571-0.4490960.9739470.967214-0.094076-0.096146-0.8300670.807205-0.0979730.123881-0.722607-0.7420450.719275-0.658737-0.1186971.0000000.968015-0.087531-0.096899
dst_host_srv_serror_rate-0.057298-0.641792-0.1583780.012265-0.024117-0.001370-0.0107210.014914-0.132474-0.004504-0.0030320.002893-0.007096-0.004295-0.002357-0.011487-0.000372-0.000007-0.016885-0.335290-0.4428230.9656630.970617-0.110646-0.114341-0.8193350.795844-0.0926610.113845-0.708392-0.7252720.701149-0.652636-0.1037150.9680151.000000-0.111578-0.110532
dst_host_rerror_rate-0.007759-0.297338-0.0030420.0003890.046656-0.0007860.1990190.0323950.0072360.2141150.0027630.003173-0.0004210.000626-0.000617-0.004743-0.000823-0.0004350.025282-0.261194-0.313442-0.103198-0.1226300.9102250.904591-0.2824870.2990410.022585-0.125142-0.312040-0.2780680.287476-0.2992730.114971-0.087531-0.1115781.0000000.950964
dst_host_srv_rerror_rate-0.013891-0.3005810.001621-0.001816-0.013666-0.0007820.1891420.0321510.0129790.2178580.0021510.001731-0.005012-0.001096-0.002020-0.004552-0.001038-0.000529-0.004292-0.256176-0.308132-0.105434-0.1246560.9116220.914904-0.2829130.2989040.024722-0.125273-0.300787-0.2643830.271067-0.2971000.120767-0.096899-0.1105320.9509641.000000

We have used a PandasDataFrame here to render the correlation matrix in a more comprehensive way. Now we want those variables that are highly correlated. For that we do a bit of dataframe manipulation.

# get a boolean dataframe where true means that 

# a pair of variables is highly correlated
highly_correlated_df = (abs(corr_df) > .8) & (corr_df < 1.0)


# get the names of the variables so we can use 

# them to slice the dataframe
correlated_vars_index = (highly_correlated_df==True).any()
correlated_var_names = correlated_vars_index[correlated_vars_index==True].index


# slice it
highly_correlated_df.loc[correlated_var_names,correlated_var_names]
.src_bytesdst_byteshotlogged_innum_compromisednum_outbound_cmdsis_hot_logincountsrv_countserror_ratesrv_serror_ratererror_ratesrv_rerror_ratesame_srv_ratediff_srv_ratedst_host_countdst_host_srv_countdst_host_same_srv_ratedst_host_diff_srv_ratedst_host_same_src_port_ratedst_host_srv_diff_host_ratedst_host_serror_ratedst_host_srv_serror_ratedst_host_rerror_ratedst_host_srv_rerror_rate
src_bytesFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalse
dst_bytesFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
hotFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
logged_inFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
num_compromisedFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
num_outbound_cmdsFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
is_hot_loginFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
countFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
srv_countFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalse
serror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseTrueTrueFalseFalseFalseFalseFalseFalseTrueTrueFalseFalse
srv_serror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseTrueTrueFalseFalseFalseFalseFalseFalseTrueTrueFalseFalse
rerror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrue
srv_rerror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrue
same_srv_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseFalseTrueFalseTrueTrueTrueFalseFalseTrueTrueFalseFalse
diff_srv_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseTrueFalseFalseTrueTrueTrueFalseFalseTrueFalseFalseFalse
dst_host_countFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalse
dst_host_srv_countFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseTrueTrueFalseFalseFalseFalseFalseFalse
dst_host_same_srv_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseTrueFalseTrueFalseFalseFalseFalseFalseFalse
dst_host_diff_srv_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseTrueTrueFalseFalseFalseFalseFalseFalseFalse
dst_host_same_src_port_rateTrueFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalse
dst_host_srv_diff_host_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseFalseFalse
dst_host_serror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseTrueTrueFalseFalseFalseFalseFalseFalseFalseTrueFalseFalse
dst_host_srv_serror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseTrueFalseFalseFalseFalseFalseFalseFalseTrueFalseFalseFalse
dst_host_rerror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrue
dst_host_srv_rerror_rateFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueTrueFalseFalseFalseFalseFalseFalseFalseFalseFalseFalseTrueFalse

Conclusions

and possible model selection hints

The previous dataframe showed us which variables are highly correlated. We have kept just those variables with at least one strong correlation. We can use as we please, but a good way could be to do some model selection. That is, if we have a group of variables that are highly correlated, we can keep just one of them to represent the group under the assumption that they convey similar information as predictors. Reducing the number of variables will not improve our model accuracy, but it will make it easier to understand and also more efficient to compute.

For example, from the description of the KDD Cup 99 task we know that the variable dst_host_same_src_port_rate references the percentage of the last 100 connections to the same port, for the same destination host. In our correlation matrix (and auxiliar dataframes) we find that this one is highly and positively correlated to src_bytes and srv_count. The former is the number of bytes sent form source to destination. The later is the number of connections to the same service as the current connection in the past 2 seconds. We might decide not to include dst_host_same_src_port_rate in our model if we include the other two, as a way to reduce the number of variables and later one better interpret our models.

Later on, in those notebooks dedicated to build predictive models, we will make use of this information to build more interpretable models.


Viewing all articles
Browse latest Browse all 22462

Trending Articles



<script src="https://jsc.adskeeper.com/r/s/rssing.com.1596347.js" async> </script>