Channel: Planet Python

John Ludhi/nbshare.io: Decision Tree Regression With Hyper Parameter Tuning In Python


Decision Tree Regression With Hyper Parameter Tuning

In this post, we will walk through building a Decision Tree regression model with hyperparameter tuning. We will use air quality data. Here is the link to the data.

In [1]:
import pandas as pd
import numpy as np
In [2]:
# Reading our csv data
combine_data = pd.read_csv('data/Real_combine.csv')
combine_data.head(5)
Out[2]:
   Unnamed: 0     T    TM    Tm     SLP     H   VV    V    VM      PM 2.5
0           1  26.7  33.0  20.0  1012.4  60.0  5.1  4.4  13.0  284.795833
1           3  29.1  35.0  20.5  1011.9  49.0  5.8  5.2  14.8  219.720833
2           5  28.4  36.0  21.0  1011.3  46.0  5.3  5.7  11.1  182.187500
3           7  25.9  32.0  20.0  1011.8  56.0  6.1  6.9  11.1  154.037500
4           9  24.8  31.1  20.6  1013.6  58.0  4.8  8.3  11.1  223.208333

T == Average Temperature (°C)

TM == Maximum temperature (°C)

Tm == Minimum temperature (°C)

SLP == Atmospheric pressure at sea level (hPa)

H == Average relative humidity (%)

VV == Average visibility (Km)

V == Average wind speed (Km/h)

VM == Maximum sustained wind speed (Km/h)

PM 2.5 == Fine particulate matter (PM 2.5), an air pollutant that is a health concern when its levels in the air are high

Data Cleaning

Let us first drop the unwanted columns.

In [3]:
combine_data.drop(['Unnamed: 0'],axis=1,inplace=True)

Data Analysis

In [4]:
combine_data.head(2)
Out[4]:
      T    TM    Tm     SLP     H   VV    V    VM      PM 2.5
0  26.7  33.0  20.0  1012.4  60.0  5.1  4.4  13.0  284.795833
1  29.1  35.0  20.5  1011.9  49.0  5.8  5.2  14.8  219.720833
In [5]:
# combine data top 5 rows
combine_data.head()
Out[5]:
      T    TM    Tm     SLP     H   VV    V    VM      PM 2.5
0  26.7  33.0  20.0  1012.4  60.0  5.1  4.4  13.0  284.795833
1  29.1  35.0  20.5  1011.9  49.0  5.8  5.2  14.8  219.720833
2  28.4  36.0  21.0  1011.3  46.0  5.3  5.7  11.1  182.187500
3  25.9  32.0  20.0  1011.8  56.0  6.1  6.9  11.1  154.037500
4  24.8  31.1  20.6  1013.6  58.0  4.8  8.3  11.1  223.208333
In [6]:
# combine data bottom 5 rows
combine_data.tail()
Out[6]:
        T    TM    Tm     SLP     H   VV    V    VM      PM 2.5
638  28.5  33.4  20.9  1012.6  59.0  5.3  6.3  14.8  185.500000
639  24.9  33.2  14.8  1011.5  48.0  4.2  4.6  13.0  166.875000
640  26.4  32.0  20.9  1011.2  70.0  3.9  6.7   9.4  200.333333
641  20.8  25.0  14.5  1016.8  78.0  4.7  5.9  11.1  349.291667
642  23.3  28.0  14.9  1014.0  71.0  4.5  3.0   9.4  310.250000

Let us print the summary statistics using the describe() function.

In [7]:
# To get statistical data
combine_data.describe()
Out[7]:
                T          TM          Tm          SLP           H          VV           V          VM      PM 2.5
count  643.000000  643.000000  643.000000   643.000000  643.000000  643.000000  643.000000  643.000000  643.000000
mean    27.609953   33.974028   20.669207  1009.030327   51.716952    5.057698    7.686936   16.139036  111.378895
std      3.816030    4.189773    4.314514     4.705001   16.665038    0.727143    3.973736    6.915630   82.144946
min     18.900000   22.000000    9.000000   998.000000   15.000000    2.300000    1.100000    5.400000    0.000000
25%     24.900000   31.000000   17.950000  1005.100000   38.000000    4.700000    5.000000   11.100000   46.916667
50%     27.000000   33.000000   21.400000  1009.400000   51.000000    5.000000    6.900000   14.800000   89.875000
75%     29.800000   37.000000   23.700000  1013.100000   64.000000    5.500000    9.400000   18.300000  159.854167
max     37.700000   45.000000   31.200000  1019.200000   95.000000    7.700000   25.600000   77.800000  404.500000

Let us check if there are any null values in our data.

In [8]:
combine_data.isnull().sum()
Out[8]:
T         0
TM        0
Tm        0
SLP       0
H         0
VV        0
V         0
VM        0
PM 2.5    0
dtype: int64

We can also visualize null values with seaborn. From the heatmap, it is clear that there are no null values.

In [9]:
import seaborn as sns
sns.heatmap(combine_data.isnull(), yticklabels=False)
Out[9]:
<AxesSubplot:>

Let us check outliers in our data using seaborn boxplot.

In [10]:
# To check outliers
import matplotlib.pyplot as plt
a4_dims = (11.7, 8.27)
fig, ax = plt.subplots(figsize=a4_dims)
g = sns.boxplot(data=combine_data, linewidth=2.5, ax=ax)
g.set_yscale("log")

From the plot, we can see that there are a few outliers present in the columns Tm, VV, V, VM, and PM 2.5.
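The boxplot flags points beyond the whiskers; the same rule (1.5 × IQR beyond the quartiles) can also be counted per column in code. A minimal sketch on a tiny synthetic frame (with the real data, `combine_data` would be passed instead of the hypothetical `demo`):

```python
import pandas as pd

def iqr_outlier_counts(df):
    """Count values outside [Q1 - 1.5*IQR, Q3 + 1.5*IQR] per column."""
    q1, q3 = df.quantile(0.25), df.quantile(0.75)
    iqr = q3 - q1
    mask = (df < q1 - 1.5 * iqr) | (df > q3 + 1.5 * iqr)
    return mask.sum()

# Tiny synthetic example: one obvious outlier in column 'a'
demo = pd.DataFrame({"a": [1, 2, 3, 2, 100], "b": [5, 6, 5, 6, 5]})
print(iqr_outlier_counts(demo))
```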

We can also do a seaborn pairplot for multivariate analysis. Using multivariate analysis, we can find the relationship between any two variables. Since the plot is so big, I am skipping the pairplot here, but the command to draw it is shown below.

In [11]:
sns.pairplot(combine_data)

We can also check the correlation between dependent and independent features using the dataframe.corr() function. The correlation can be computed using the 'pearson', 'kendall', or 'spearman' method. By default, corr() uses 'pearson'.

In [12]:
combine_data.corr()
Out[12]:
               T        TM        Tm       SLP         H        VV         V        VM    PM 2.5
T       1.000000  0.920752  0.786809 -0.516597 -0.477952  0.572818  0.160582  0.192456 -0.441826
TM      0.920752  1.000000  0.598095 -0.342692 -0.626362  0.560743 -0.002735  0.074952 -0.316378
Tm      0.786809  0.598095  1.000000 -0.735621  0.058105  0.296954  0.439133  0.377274 -0.591487
SLP    -0.516597 -0.342692 -0.735621  1.000000 -0.250364 -0.187913 -0.610149 -0.506489  0.585046
H      -0.477952 -0.626362  0.058105 -0.250364  1.000000 -0.565165  0.236208  0.145866 -0.153904
VV      0.572818  0.560743  0.296954 -0.187913 -0.565165  1.000000  0.034476  0.081239 -0.147582
V       0.160582 -0.002735  0.439133 -0.610149  0.236208  0.034476  1.000000  0.747435 -0.378281
VM      0.192456  0.074952  0.377274 -0.506489  0.145866  0.081239  0.747435  1.000000 -0.319558
PM 2.5 -0.441826 -0.316378 -0.591487  0.585046 -0.153904 -0.147582 -0.378281 -0.319558  1.000000

If we observe the correlation table above, it is clear that SLP is the only feature positively correlated with 'PM 2.5'. Correlation tells us how the other features behave as 'PM 2.5' increases: a negative correlation means that when one variable increases, the other decreases.
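To read these relationships at a glance, one can pull the target column out of the correlation matrix and sort it; with the real data this would be `combine_data.corr()['PM 2.5'].sort_values()`. A sketch on a tiny synthetic frame:

```python
import pandas as pd

# Small synthetic frame: SLP rises with PM 2.5, T falls with it
df = pd.DataFrame({
    "SLP":    [1010.0, 1012.0, 1014.0, 1016.0],
    "T":      [30.0, 28.0, 26.0, 24.0],
    "PM 2.5": [100.0, 150.0, 200.0, 250.0],
})

# Correlation of each feature with the target, sorted ascending
target_corr = df.corr()["PM 2.5"].sort_values()
print(target_corr)
```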

We can also visualize the correlation using a seaborn heatmap.

In [13]:
relation = combine_data.corr()
relation_index = relation.index
In [14]:
relation_index
Out[14]:
Index(['T', 'TM', 'Tm', 'SLP', 'H', 'VV', 'V', 'VM', 'PM 2.5'], dtype='object')
In [15]:
sns.heatmap(combine_data[relation_index].corr(),annot=True)
Out[15]:
<AxesSubplot:>

Up to now, we have only done exploratory data analysis. In the next section, we will do feature selection.

Feature Selection

In [16]:
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_squared_error as mse

Splitting the data into train and test data sets.

In [17]:
X_train, X_test, y_train, y_test = train_test_split(combine_data.iloc[:, :-1], combine_data.iloc[:, -1], test_size=0.3, random_state=0)
In [18]:
# size of train data set
X_train.shape
Out[18]:
(450, 8)
In [19]:
# size of test data set
X_test.shape
Out[19]:
(193, 8)

Feature selection by ExtraTreesRegressor (model based). ExtraTreesRegressor helps us find the features that are most important.

In [20]:
# Feature selection by ExtraTreesRegressor (model based)
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score as acc
In [21]:
X_train, X_test, y_train, y_test = train_test_split(combine_data.iloc[:, :-1], combine_data.iloc[:, -1], test_size=0.3, random_state=0)
In [22]:
reg=ExtraTreesRegressor()
In [23]:
reg.fit(X_train,y_train)
Out[23]:
ExtraTreesRegressor()
Let us print the feature importances.
In [24]:
reg.feature_importances_
Out[24]:
array([0.17525632, 0.09237557, 0.21175783, 0.22835392, 0.0863817 ,
       0.05711284, 0.07977977, 0.06898204])
In [25]:
feat_importances = pd.Series(reg.feature_importances_, index=X_train.columns)
feat_importances.nlargest(5).plot(kind='barh')
plt.show()

Based on the plot above, we can select the features that will be most important for our prediction model.
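Picking the top-k features from the importance plot can be done directly from the Series built above. A sketch with hardcoded importances (rounded from the `reg.feature_importances_` output above):

```python
import pandas as pd

# Importances rounded from reg.feature_importances_ above
importances = pd.Series(
    [0.175, 0.092, 0.212, 0.228, 0.086, 0.057, 0.080, 0.069],
    index=["T", "TM", "Tm", "SLP", "H", "VV", "V", "VM"],
)

# Keep the 5 most important features; X_train[top_features]
# would then restrict the design matrix to them
top_features = importances.nlargest(5).index.tolist()
print(top_features)
```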

Note that, unlike distance-based models, decision trees are not sensitive to the scale of features, so feature normalization is not strictly required before training.
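For models that are sensitive to feature scale (e.g. linear models or k-NN), a minimal scaling sketch with scikit-learn's StandardScaler, fitting on the training data only (`X_train_demo`/`X_test_demo` are hypothetical stand-ins for our split):

```python
import numpy as np
from sklearn.preprocessing import StandardScaler

# Hypothetical train/test matrices standing in for X_train / X_test
X_train_demo = np.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])
X_test_demo = np.array([[2.0, 200.0]])

# Fit on the training data only, then apply the same transform to both
scaler = StandardScaler().fit(X_train_demo)
X_train_scaled = scaler.transform(X_train_demo)
X_test_scaled = scaler.transform(X_test_demo)
print(X_train_scaled.mean(axis=0))  # each column is centred near 0
```

Fitting the scaler on the full data before splitting would leak test-set statistics into training.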

Decision Tree Model Training

In [26]:
# Training model with all features
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(combine_data.iloc[:, :-1], combine_data.iloc[:, -1], test_size=0.3, random_state=0)
In [27]:
X_train
Out[27]:
        T    TM    Tm     SLP     H   VV     V    VM
334  28.9  36.0  15.0  1009.2  21.0  5.3   4.8  11.1
46   32.8  39.0  26.0  1006.6  41.0  5.6   7.0  77.8
246  30.3  37.0  24.2  1003.7  38.0  4.7  21.9  29.4
395  28.4  36.6  23.0  1003.1  63.0  4.7  10.7  18.3
516  26.9  31.0  22.9  1003.0  76.0  4.0   7.8  16.5
..    ...   ...   ...     ...   ...  ...   ...   ...
9    23.7  30.4  17.0  1015.8  46.0  5.1   5.2  14.8
359  33.6  40.0  25.0  1006.9  36.0  5.8   6.1  11.1
192  24.9  30.4  19.0  1008.9  57.0  4.8   4.6   9.4
629  26.1  29.0  22.4  1001.2  87.0  5.0  14.1  22.2
559  23.8  30.2  17.9  1010.6  55.0  4.5   3.7   7.6

450 rows × 8 columns

In [28]:
X_test
Out[28]:
        T    TM    Tm     SLP     H   VV     V    VM
637  28.4  33.5  20.9  1013.1  63.0  5.3   6.1  66.5
165  20.7  30.1   9.0  1010.5  35.0  4.5   4.6  14.8
467  26.7  33.5  21.0  1010.9  37.0  5.1   5.7  11.1
311  26.0  31.0  20.4  1011.5  63.0  4.8   3.9   9.4
432  26.4  30.9  22.6  1010.0  75.0  4.2   7.6  16.5
..    ...   ...   ...     ...   ...  ...   ...   ...
249  27.2  32.3  22.0  1003.7  55.0  4.8  20.0  29.4
89   29.7  34.0  22.6  1003.8  56.0  5.5  13.5  27.8
293  22.3  30.3  11.4  1012.6  37.0  5.1   7.2  20.6
441  27.1  33.0  20.0  1010.7  49.0  4.2   6.1  18.3
478  25.6  32.0  19.0  1012.1  59.0  3.9   6.1  11.1

193 rows × 8 columns

In [29]:
from sklearn.tree import DecisionTreeRegressor

Let us create a Decision Tree regression model.

In [30]:
reg_decision_model=DecisionTreeRegressor()
In [31]:
# fit independent variables to the dependent variable
reg_decision_model.fit(X_train, y_train)
Out[31]:
DecisionTreeRegressor()
In [32]:
reg_decision_model.score(X_train,y_train)
Out[32]:
1.0
In [33]:
reg_decision_model.score(X_test,y_test)
Out[33]:
0.05768194549539718

We got a 100% score on the training data.

On the test data we got only a 5.7% score, because we did not provide any tuning parameters while initializing the tree. As a result, the algorithm split the training data all the way down to the leaf nodes, the depth of the tree increased, and our model overfit.

That is why we get a high score on the training data and a low score on the test data.

To solve this problem, we will use hyperparameter tuning.

We can use GridSearchCV or RandomizedSearchCV for hyperparameter tuning.
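GridSearchCV tries every combination in the grid, while RandomizedSearchCV samples only a fixed number of settings, which scales much better on large grids. A minimal sketch on synthetic data (the parameter ranges here are illustrative, not the ones used below):

```python
import numpy as np
from sklearn.model_selection import RandomizedSearchCV
from sklearn.tree import DecisionTreeRegressor

# Synthetic regression data
rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = 10 * X[:, 0] + rng.rand(100)

# Illustrative grid: 5 * 4 = 20 combinations in total
param_dist = {
    "max_depth": [2, 3, 5, 7, None],
    "min_samples_leaf": [1, 2, 5, 10],
}

# Sample only 10 of the 20 settings instead of trying them all
search = RandomizedSearchCV(
    DecisionTreeRegressor(random_state=0),
    param_distributions=param_dist,
    n_iter=10,
    scoring="neg_mean_squared_error",
    cv=3,
    random_state=0,
)
search.fit(X, y)
print(search.best_params_)
```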

Decision Tree Model Evaluation

In [34]:
prediction=reg_decision_model.predict(X_test)

Let us draw a distribution plot of the difference between our labeled y and predicted y values.

In [35]:
# checking difference between labeled y and predicted y
sns.distplot(y_test - prediction)
/home/abhiphull/anaconda3/envs/condapy36/lib/python3.6/site-packages/seaborn/distributions.py:2557: FutureWarning: `distplot` is a deprecated function and will be removed in a future version. Please adapt your code to use either `displot` (a figure-level function with similar flexibility) or `histplot` (an axes-level function for histograms).
  warnings.warn(msg, FutureWarning)
Out[35]:
<AxesSubplot:xlabel='PM 2.5', ylabel='Density'>

We are getting a nearly bell-shaped curve. Does that mean our model is working well? No, we can't draw that conclusion. A good bell curve only tells us that the range of the predicted values is within the same range as our original data values.
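Note also that `distplot` is deprecated (see the warning above); newer seaborn versions use `sns.histplot(y_test - prediction, kde=True)` instead. As a sanity check, the residuals' mean and spread can also be inspected directly; a sketch on synthetic residuals:

```python
import numpy as np

# Synthetic residuals standing in for y_test - prediction
rng = np.random.RandomState(0)
residuals = rng.normal(loc=0.0, scale=50.0, size=200)

# A centred distribution only says errors are balanced around zero;
# the spread (std) is what actually bounds the typical error
print("mean:", residuals.mean())
print("std:", residuals.std())
```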

Let us check the predicted y against the labeled y using a scatter plot.
In [36]:
plt.scatter(y_test,prediction)
Out[36]:
<matplotlib.collections.PathCollection at 0x7fa05aeb0320>

Hyper Parameter tuning

In [37]:
# Hyper parameters range initialization for tuning
parameters = {"splitter": ["best", "random"],
              "max_depth": [1, 3, 5, 7, 9, 11, 12],
              "min_samples_leaf": [1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
              "min_weight_fraction_leaf": [0.1, 0.2, 0.3, 0.4, 0.5, 0.6, 0.7, 0.8, 0.9],
              "max_features": ["auto", "log2", "sqrt", None],
              "max_leaf_nodes": [None, 10, 20, 30, 40, 50, 60, 70, 80, 90]}

Above we initialized the hyperparameter grid. We will use GridSearch to find the best parameters for our decision tree model.

In [38]:
# calculating different regression metrics
from sklearn.model_selection import GridSearchCV
In [39]:
tuning_model = GridSearchCV(reg_decision_model, param_grid=parameters, scoring='neg_mean_squared_error', cv=3, verbose=3)
In [40]:
# function for calculating how much time hyperparameter tuning takes
def timer(start_time=None):
    if not start_time:
        start_time = datetime.now()
        return start_time
    elif start_time:
        thour, temp_sec = divmod((datetime.now() - start_time).total_seconds(), 3600)
        tmin, tsec = divmod(temp_sec, 60)
        # print(thour, ":", tmin, ':', round(tsec, 2))
In [41]:
X=combine_data.iloc[:,:-1]
In [42]:
y=combine_data.iloc[:,-1]
In [43]:
%%capture
from datetime import datetime

start_time=timer(None)

tuning_model.fit(X,y)

timer(start_time)

Hyperparameter tuning took around 17 minutes. It might vary depending upon your machine.

In [44]:
# best hyperparameters
tuning_model.best_params_
Out[44]:
{'max_depth': 5,
 'max_features': 'auto',
 'max_leaf_nodes': 40,
 'min_samples_leaf': 2,
 'min_weight_fraction_leaf': 0.1,
 'splitter': 'random'}
In [45]:
# best model score
tuning_model.best_score_
Out[45]:
-3786.5642998048047
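The score is negative because `neg_mean_squared_error` negates the MSE so that GridSearchCV can always maximize its score; flipping the sign and taking the square root recovers a cross-validated RMSE in PM 2.5 units. A quick check with the value printed above:

```python
import math

best_score = -3786.5642998048047  # tuning_model.best_score_ from above
cv_mse = -best_score              # undo the scorer's negation
cv_rmse = math.sqrt(cv_mse)
print(round(cv_rmse, 2))          # cross-validated RMSE in PM 2.5 units
```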

Training Decision Tree With Best Hyperparameters

In [46]:
tuned_hyper_model = DecisionTreeRegressor(max_depth=5, max_features='auto', max_leaf_nodes=50, min_samples_leaf=2, min_weight_fraction_leaf=0.1, splitter='random')
In [47]:
# fitting model
tuned_hyper_model.fit(X_train, y_train)
Out[47]:
DecisionTreeRegressor(max_depth=5, max_features='auto', max_leaf_nodes=50,
                      min_samples_leaf=2, min_weight_fraction_leaf=0.1,
                      splitter='random')
In [48]:
# prediction
tuned_pred = tuned_hyper_model.predict(X_test)
In [49]:
plt.scatter(y_test,tuned_pred)
Out[49]:
<matplotlib.collections.PathCollection at 0x7fa05ac52c50>

OK, the above scatter plot looks a lot better.

Let us now compare the error rate of our hyperparameter-tuned model with that of our original model, which was trained without tuned parameters.

In [50]:
# With hyperparameter tuning
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, tuned_pred))
print('MSE:', metrics.mean_squared_error(y_test, tuned_pred))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, tuned_pred)))
MAE: 48.814175526595086
MSE: 4155.120637935324
RMSE: 64.46022523956401
In [51]:
# without hyperparameter tuning
from sklearn import metrics
print('MAE:', metrics.mean_absolute_error(y_test, prediction))
print('MSE:', metrics.mean_squared_error(y_test, prediction))
print('RMSE:', np.sqrt(metrics.mean_squared_error(y_test, prediction)))
MAE: 59.15023747989637
MSE: 6426.809819039633
RMSE: 80.16738625550688

Conclusion

If we observe the metrics above for both models, the hyperparameter-tuned model gives noticeably better values (MSE 4155) compared to the model without hyperparameter tuning (MSE 6426).

