Dataset columns: markdown (string, length 0 to 1.02M) · code (string, length 0 to 832k) · output (string, length 0 to 1.02M) · license (string, length 3 to 36) · path (string, length 6 to 265) · repo_name (string, length 6 to 127)
Custom sorting of plot series
import pandas as pd import numpy as np from pandas.api.types import CategoricalDtype from plotnine import * from plotnine.data import mpg %matplotlib inline
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
Bar plot of manufacturer - Default Output
(ggplot(mpg) + aes(x='manufacturer') + geom_bar(size=20) + coord_flip() + labs(y='Count', x='Manufacturer', title='Number of Cars by Make') )
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
Bar plot of manufacturer - Ordered by count (Categorical). By default the discrete values along the axis are ordered alphabetically. If we want a specific ordering we use a pandas.Categorical variable with categories ordered to our preference.
# Determine order and create a categorical type # Note that value_counts() is already sorted manufacturer_list = mpg['manufacturer'].value_counts().index.tolist() manufacturer_cat = pd.Categorical(mpg['manufacturer'], categories=manufacturer_list) # assign to a new column in the DataFrame mpg = mpg.assign(manufacturer_cat = manufacturer_cat) (ggplot(mpg) + aes(x='manufacturer_cat') + geom_bar(size=20) + coord_flip() + labs(y='Count', x='Manufacturer', title='Number of Cars by Make') )
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
We could also modify the **existing manufacturer category** to set it as ordered instead of having to create a new CategoricalDtype and apply that to the data.
mpg = mpg.assign(manufacturer_cat = mpg['manufacturer'].cat.reorder_categories(manufacturer_list))
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
Bar plot of manufacturer - Ordered by count (limits). Another method to quickly reorder a discrete axis without changing the data is to change its limits.
# Determine order and create a categorical type # Note that value_counts() is already sorted manufacturer_list = mpg['manufacturer'].value_counts().index.tolist() (ggplot(mpg) + aes(x='manufacturer_cat') + geom_bar(size=20) + scale_x_discrete(limits=manufacturer_list) + coord_flip() + labs(y='Count', x='Manufacturer', title='Number of Cars by Make') )
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
You can 'flip' an axis (independent of limits) by reversing the order of the limits.
# Determine order and create a categorical type # Note that value_counts() is already sorted manufacturer_list = mpg['manufacturer'].value_counts().index.tolist()[::-1] (ggplot(mpg) + aes(x='manufacturer_cat') + geom_bar(size=20) + scale_x_discrete(limits=manufacturer_list) + coord_flip() + labs(y='Count', x='Manufacturer', title='Number of Cars by Make') )
_____no_output_____
MIT
demo_plot/plotnine-examples/tutorials/miscellaneous-order-plot-series.ipynb
hermanzhaozzzz/DataScienceScripts
Breast Cancer Wisconsin (Diagnostic) Prediction. *Predict whether the cancer is benign or malignant.* Features are computed from a digitized image of a fine needle aspirate (FNA) of a breast mass. They describe characteristics of the cell nuclei present in the image. The separating plane in the 3-dimensional space is that described in: [K. P. Bennett and O. L. Mangasarian: "Robust Linear Programming Discrimination of Two Linearly Inseparable Sets", Optimization Methods and Software 1, 1992, 23-34]. This database is also available through the UW CS ftp server: ftp ftp.cs.wisc.edu, cd math-prog/cpo-dataset/machine-learn/WDBC/. It can also be found on the UCI Machine Learning Repository: https://archive.ics.uci.edu/ml/datasets/Breast+Cancer+Wisconsin+%28Diagnostic%29 **Attribute Information:** 1) ID number 2) Diagnosis (M = malignant, B = benign) 3-32) **Ten real-valued features are computed for each cell nucleus:** a) radius (mean of distances from center to points on the perimeter) b) texture (standard deviation of gray-scale values) c) perimeter d) area e) smoothness (local variation in radius lengths) f) compactness (perimeter^2 / area - 1.0) g) concavity (severity of concave portions of the contour) h) concave points (number of concave portions of the contour) i) symmetry j) fractal dimension ("coastline approximation" - 1). *The mean, standard error and "worst" or largest (mean of the three largest values) of these features were computed for each image, resulting in 30 features. For instance, field 3 is Mean Radius, field 13 is Radius SE, field 23 is Worst Radius.* All feature values are recorded with four significant digits. Missing attribute values: none. Class distribution: 357 benign, 212 malignant
import numpy as np import matplotlib.pyplot as plt import pandas as pd import seaborn as sns # used for interactive plotting from sklearn.linear_model import LogisticRegression # to apply logistic regression from sklearn.model_selection import train_test_split # to split the data into two parts from sklearn.model_selection import KFold # used for cross validation from sklearn.model_selection import GridSearchCV # for tuning parameters from sklearn.ensemble import RandomForestClassifier # for the random forest classifier from sklearn.naive_bayes import GaussianNB from sklearn.neighbors import KNeighborsClassifier from sklearn.tree import DecisionTreeClassifier from sklearn import svm # for Support Vector Machine from sklearn import metrics # to check the error and accuracy of the model %matplotlib inline import warnings warnings.filterwarnings("ignore")
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Data Cleaning
# df is the dataframe variable; here I import the dataset into it df=pd.read_csv("cancer.csv") # show the top 5 data rows df.head() # delete the useless columns df.drop(['id'], axis=1, inplace=True) df.drop(['Unnamed: 32'], axis=1, inplace=True) df.info() # let's now start with features_mean # as you know, our diagnosis column is an object type, so we map it to an integer value df['diagnosis']=df['diagnosis'].map({'M':1,'B':0}) y=df['diagnosis'] y.head() # describe all the statistics of our data df.describe()
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Data Analysis
# plotting the diagnosis result sns.countplot(df['diagnosis'],label="Count") feat_mean= list(df.columns[1:11]) feat_se= list(df.columns[11:21]) feat_worst=list(df.columns[21:31]) corr = df[feat_mean].corr() # .corr() is used to find correlation plt.figure(figsize=(14,14)) sns.heatmap(corr, cbar = True, square = True, annot=True, fmt= '.2f',annot_kws={'size': 15}, xticklabels= feat_mean, yticklabels= feat_mean, cmap= 'coolwarm') # for more on heatmap you can visit Link(http://seaborn.pydata.org/generated/seaborn.heatmap.html) #Box Plot for the feature texture_mean sns.boxplot(x = 'diagnosis', y ='texture_mean', data = df) plt.show() #Box Plot for the feature perimeter_mean sns.boxplot(x = 'diagnosis',y = 'perimeter_mean', data = df) plt.show() #Box Plot for the feature smoothness_mean sns.boxplot(x = 'diagnosis', y = 'smoothness_mean', data = df) plt.show() #Box Plot for the feature compactness_mean sns.boxplot(x = 'diagnosis', y = 'compactness_mean', data = df) plt.show() #Box Plot for the feature symmetry_mean sns.boxplot(x = 'diagnosis', y = 'symmetry_mean', data = df) plt.show() #Violin Plots for texture_mean sns.violinplot(x = 'diagnosis', y ='texture_mean', data = df, size = 8) plt.show() #Violin Plots for compactness_mean sns.violinplot(x = 'diagnosis', y = 'compactness_mean', data = df, size = 8) plt.show() #Histogram of symmetry_worst sns.FacetGrid(df, hue = "diagnosis", size=5).map(sns.distplot, "symmetry_worst").add_legend(); plt.show(); #taking the main parameters in a single variable main_pred_var = ['texture_mean','perimeter_mean','smoothness_mean','compactness_mean','symmetry_mean']
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Splitting the Dataset into Two Parts
# splitting the dataset into two parts train_set,test_set=train_test_split(df, test_size=0.2) # printing the data shapes print(train_set.shape) print(test_set.shape)
(455, 31) (114, 31)
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
**Training Set :**
x_train=train_set[main_pred_var] y_train=train_set.diagnosis print(y_train.shape) print(x_train.shape)
(455,) (455, 5)
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
**Test Set :**
x_test=test_set[main_pred_var] y_test=test_set.diagnosis print(y_test.shape) print(x_test.shape)
(114,) (114, 5)
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Various Algorithms. Now I will train on this breast cancer dataset using various algorithms and see how each of them behaves with respect to one another (Random Forest, SVM, and others). RandomForest Algorithm
# assign the classifier to the algo_one variable algo_one=RandomForestClassifier() algo_one.fit(x_train,y_train) # predict on the unseen test set prediction = algo_one.predict(x_test) metrics.accuracy_score(prediction,y_test)
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Support Vector Machine Algorithm (SVM)
algo_two=svm.SVC() algo_two.fit(x_train,y_train) # predict on the unseen test set prediction = algo_two.predict(x_test) metrics.accuracy_score(prediction,y_test)
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Decision Tree Classifier Algorithm
algo_three=DecisionTreeClassifier() algo_three.fit(x_train,y_train) # predict on the unseen test set prediction = algo_three.predict(x_test) metrics.accuracy_score(prediction,y_test)
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
K-Nearest Neighbors Classifier Algorithm
algo_four=KNeighborsClassifier() algo_four.fit(x_train,y_train) # predict on the unseen test set prediction = algo_four.predict(x_test) metrics.accuracy_score(prediction,y_test)
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
GaussianNB Algorithm
algo_five=GaussianNB() algo_five.fit(x_train,y_train) prediction = algo_five.predict(x_test) metrics.accuracy_score(prediction,y_test)
_____no_output_____
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Tuning Parameters using Grid Search CV. Let's start with the Random Forest Classifier. Tuning the parameters means finding the best parameters to use for prediction; there are many parameters needed to model a machine learning algorithm such as RandomForestClassifier.
pred_var = ['radius_mean','perimeter_mean','area_mean','compactness_mean','concave points_mean'] # creating new variables x_grid= df[pred_var] y_grid= df["diagnosis"] # let's make a function for Grid Search CV def Classification_model_gridsearchCV(model,param_grid,x_grid,y_grid): clf = GridSearchCV(model,param_grid,cv=10,scoring="accuracy") # this is how we use GridSearchCV: we give it our model # and the parameters we want to tune # cv is for cross validation # scoring tells it how to score the classifier # note: this fits on the global x_train/y_train rather than on the x_grid/y_grid arguments clf.fit(x_train,y_train) print("The best parameter found on development set is :") # this will give us the best parameters to use print(clf.best_params_) print("The best estimator is ") print(clf.best_estimator_) print("The best score is ") # this is the best score that we can achieve using these parameters print(clf.best_score_) param_grid = {'max_features': ['auto', 'sqrt', 'log2'], 'min_samples_split': [2,3,4,5,6,7,8,9,10], 'min_samples_leaf':[2,3,4,5,6,7,8,9,10] } # GridSearchCV will take all combinations of these parameters, apply them to the model # and then find the best parameters for the model model= RandomForestClassifier() Classification_model_gridsearchCV(model,param_grid,x_grid,y_grid)
The best parameter found on development set is : {'max_features': 'log2', 'min_samples_leaf': 2, 'min_samples_split': 6} The best estimator is RandomForestClassifier(bootstrap=True, class_weight=None, criterion='gini', max_depth=None, max_features='log2', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, min_samples_leaf=2, min_samples_split=6, min_weight_fraction_leaf=0.0, n_estimators=10, n_jobs=1, oob_score=False, random_state=None, verbose=0, warm_start=False) The best score is 0.9384615384615385
MIT
Breast Cancer .ipynb
suvhradipghosh07/Breast-Cancer-prediction-using-Machine-Learning-Various-Algorithms
Simple gradient computation - z = 2x^2+3
# First, import PyTorch. import torch # Initialize x as a tensor with the values [2.0, 3.0] and turn gradient tracking on. # z = 2x^2+3 x = torch.tensor(data=[2.0,3.0],requires_grad=True) y = x**2 z = 2*y +3 # https://pytorch.org/docs/stable/autograd.html?highlight=backward#torch.autograd.backward # Specify the target values. target = torch.tensor([3.0,4.0]) # Compute the absolute difference between z and the target. # backward operates on scalar values, so the length-2 tensor loss is reduced to a single number via torch.sum. loss = torch.sum(torch.abs(z-target)) # Then apply backward to the now-scalar loss. loss.backward() # Here the gradients of y and z come out as None because, among x, y and z, only x is a leaf node. print(x.grad, y.grad, z.grad)
tensor([ 8., 12.]) None None
MIT
Linear Regression Analysis/Calculate Gradients.ipynb
TaehoLi/Pytorch-secondstep
Step 1. Define the function to integrate.
import numpy as np def f(x): return np.log(3+x)
_____no_output_____
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
Step 2. Define functions for the different methods.
# Simple Trapezoid def simpleTrapezoid(a, b): return (b-a) / 2 * (f(a) + f(b)) def simpleTrapezoidError(realIntegrate, a, b): return (abs(realIntegrate - simpleTrapezoid(a, b)) / realIntegrate ) * 100 # compound trapezoid def compoundTrapezoid(a, b, x): sum = 0 for i in range(1, n): sum += 2 * f(x[i]) return (b-a) / (2*n) * (f(a) + sum + f(b)) def compoundTrapezoidError(realIntegrate, a, b, x): return (abs(realIntegrate - compoundTrapezoid(a, b, x)) / realIntegrate ) * 100 # simple Simpson 1/3 def simpleSimpson1_3(x): return (x[2] - x[0]) / 6 * (f(x[0]) + 4 * f(x[1]) + f(x[2])) def simpleSimpson1_3Error(realIntegrate, x): return (abs(realIntegrate - simpleSimpson1_3(x)) / realIntegrate) # compound Simpson 1/3 def compoundSimpson1_3(a, b, x): sum = 0 for i in range(1, n, 2): sum += 4 * f(x[i]) sum_2 = 0 for i in range(2, n-1, 2): sum_2 += 2 * f(x[i]) return (b-a)/(3*n) * (f(a) + sum + sum_2 + f(b)) def compoundSimpson1_3Error(realIntegrate, a, b, x): return (abs(realIntegrate - compoundSimpson1_3(a, b, x)) / realIntegrate) # simple Simpson 3/8 def simpleSimpson1_8(x): return (x[3] - x[0]) / 8 * (f(x[0]) + 3 *f(x[1]) +3 * f(x[2]) + f(x[3])) def simpleSimpson1_8Error(realIntegrate, x): return (abs(realIntegrate - simpleSimpson1_8(x)) / realIntegrate) # compound Simpson 3/8 def compoundSimpsin3_8(a,b,x): sum = 0 m = int(n/3) for i in range(0, m): sum += 3 * f(x[3*i+1]) + 3*f(x[3*i+2]) for i in range(0, m-1): sum += 2 * f(x[3*i+3]) return (3/8)*((b-a)/n) * (f(a) + sum + f(b)) def compoundSimpsin1_8Error(realIntegrate, a, b, x): return (abs(realIntegrate - compoundSimpsin3_8(a, b, x)) / realIntegrate) * 100
_____no_output_____
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
Step 3. Define entry values
# code proof a = 0.1 b = 3.1 realIntegrate = -5.82773
_____no_output_____
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
a. Proof of the simple rules
# simple trapezoid print(f"The integral approximated by the simple trapezoid rule is: {simpleTrapezoid(a, b)} ; error (%): {simpleTrapezoidError(realIntegrate, a, b)} ") #simple Simpson 1/3 n= 2 x = np.linspace(a, b, n+1) print(f"The integral approximated by the simple Simpson 1/3 rule is: {simpleSimpson1_3(x)} : error (%): {simpleSimpson1_3Error(realIntegrate, x)} ") # simple Simpson 3/8 n = 3 x = np.linspace(a, b, n+1) print(f"The integral approximated by the simple Simpson 3/8 rule is: {simpleSimpson1_8(x)} ; error (%): {simpleSimpson1_8Error(realIntegrate, x)}")
The integral approximated by the simple trapezoid rule is: 8.819072648011097 ; error (%): -251.32946529799932 The integral approximated by the simple Simpson 1/3 rule is: 4.521958048325281 : error (%): -1.7759381523037754 The integral approximated by the simple Simpson 3/8 rule is: 4.522640033621997 ; error (%): -1.7760551764790058
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
b. Proof of the compound rules
# compound trapezoid n= 18 # number of segments x = np.linspace(a, b, n+1) print(f"The integral approximated by the compound trapezoid rule is: {compoundTrapezoid(a, b, x)} ; error (%): {compoundTrapezoidError(realIntegrate, a, b, x)} ") #compound Simpson 1/3 n= 18 # even number x = np.linspace(a, b, n+1) print(f"The integral approximated by the compound Simpson 1/3 rule is: {compoundSimpson1_3(a, b, x)} : error (%): {compoundSimpson1_3Error(realIntegrate, a, b, x)} ") # compound Simpson 3/8 n = 18 # integer:: n%3 = 0 x = np.linspace(a, b, n+1) print(f"The integral approximated by the compound Simpson 3/8 rule is: {compoundSimpsin3_8(a,b,x)} ; error (%): {compoundSimpsin1_8Error(realIntegrate, a, b, x)}")
The integral approximated by the compound trapezoid rule is: 4.522847784399211 ; error (%): -177.6090825141043 The integral approximated by the compound Simpson 1/3 rule is: 4.523214709695509 : error (%): -1.776153787099867 The integral approximated by the compound Simpson 3/8 rule is: 4.523214401106994 ; error (%): -177.61537341481147
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
Step 5: Gauss quadrature with 2 and 3 points
# 2 points xd_0 = -0.577350269 xd_1 = 0.577350269 C0 = 1 C1 = 1 dx = ( b- a ) / 2 x0 = ((b+a) + (b-a) * xd_0 )/ 2 x1 = ((b+a) + (b-a) * xd_1)/ 2 F0 = f(x0) * dx F1 = f(x1) * dx integralGaussQuadrature = C0 *F0 + C1 * F1 integralGaussQuadratureError = (abs(realIntegrate - integralGaussQuadrature) / realIntegrate) * 100 print(f"The integral approximated by the Gauss quadrature method with 2 points is: {integralGaussQuadrature} ; and the error (%): {integralGaussQuadratureError}") # 3 points xd_0 = -0.774596669 xd_1 = 0 xd_2 = 0.774596669 C0 = 0.55555555 C1 = 0.88888888 C2 = 0.55555555 dx = ( b- a ) / 2 x0 = ((b+a) + (b-a) * xd_0 )/ 2 x1 = ((b+a) + (b-a) * xd_1)/ 2 x2 = ((b+a) + (b-a) * xd_2)/ 2 F0 = f(x0) * dx F1 = f(x1) * dx F2 = f(x2) * dx integralGaussQuadrature_3 = C0 *F0 + C1 * F1 + C2 * F2 integralGaussQuadratureError_3 = (abs(realIntegrate - integralGaussQuadrature_3) / realIntegrate) * 100 print(f"The integral approximated by the Gauss quadrature method with 3 points is: {integralGaussQuadrature_3} ; and the error (%): {integralGaussQuadratureError_3}")
The integral approximated by the Gauss quadrature method with 2 points is: 4.5240374652688375 ; and the error (%): -177.62949665253603 The integral approximated by the Gauss quadrature method with 3 points is: 4.52323074338855 ; and the error (%): -177.6156538375757
MIT
integration.ipynb
henrymorenoespitia/numerical_methods_and_analysis
Example with real audio recordings. The iterations are dropped in contrast to the offline version. To use past observations, the correlation matrix and the correlation vector are calculated recursively with a decaying window. $\alpha$ is the decay factor. Setup
channels = 8 sampling_rate = 16000 delay = 3 alpha=0.9999 taps = 10 frequency_bins = stft_options['size'] // 2 + 1
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Audio data
file_template = 'AMI_WSJ20-Array1-{}_T10c0201.wav' signal_list = [ sf.read(str(project_root / 'data' / file_template.format(d + 1)))[0] for d in range(channels) ] y = np.stack(signal_list, axis=0) IPython.display.Audio(y[0], rate=sampling_rate)
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Online buffer. For simplicity the STFT is performed before providing the frames. Shape: (frames, frequency bins, channels). frames: K + delay + 1
Y = stft(y, **stft_options).transpose(1, 2, 0) T, _, _ = Y.shape def aquire_framebuffer(): buffer = list(Y[:taps+delay+1, :, :]) for t in range(taps+delay+1, T): yield np.array(buffer) buffer.append(Y[t, :, :]) buffer.pop(0)
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Non-iterative frame online approach. A frame online example requires that certain state variables are kept from frame to frame: the inverse correlation matrix $\text{R}_{t, f}^{-1}$, which is stored in Q and initialized with an identity matrix, as well as the filter coefficient matrix, which is stored in G and initialized with zeros. Again for simplicity the ISTFT is applied afterwards.
Z_list = [] Q = np.stack([np.identity(channels * taps) for a in range(frequency_bins)]) G = np.zeros((frequency_bins, channels * taps, channels)) for Y_step in tqdm(aquire_framebuffer()): Z, Q, G = online_wpe_step( Y_step, get_power_online(Y_step.transpose(1, 2, 0)), Q, G, alpha=alpha, taps=taps, delay=delay ) Z_list.append(Z) Z_stacked = np.stack(Z_list) z = istft(np.asarray(Z_stacked).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift']) IPython.display.Audio(z[0], rate=sampling_rate)
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Frame online WPE in class fashion: the OnlineWPE class holds the correlation matrix and the coefficient matrix.
Z_list = [] online_wpe = OnlineWPE( taps=taps, delay=delay, alpha=alpha ) for Y_step in tqdm(aquire_framebuffer()): Z_list.append(online_wpe.step_frame(Y_step)) Z = np.stack(Z_list) z = istft(np.asarray(Z).transpose(2, 0, 1), size=stft_options['size'], shift=stft_options['shift']) IPython.display.Audio(z[0], rate=sampling_rate)
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
Power spectrum. Before and after applying WPE.
fig, [ax1, ax2] = plt.subplots(1, 2, figsize=(20, 8)) im1 = ax1.imshow(20 * np.log10(np.abs(Y[200:400, :, 0])).T, origin='lower') ax1.set_xlabel('') _ = ax1.set_title('reverberated') im2 = ax2.imshow(20 * np.log10(np.abs(Z_stacked[200:400, :, 0])).T, origin='lower') _ = ax2.set_title('dereverberated') cb = fig.colorbar(im1)
_____no_output_____
MIT
examples/WPE_Numpy_online.ipynb
mdeegen/nara_wpe
_Lambda School Data Science_ Scrape and process dataObjectives- scrape and parse web pages- use list comprehensions- select rows and columns with pandasLinks- [Automate the Boring Stuff with Python, Chapter 11](https://automatetheboringstuff.com/chapter11/) - Requests - Beautiful Soup- [Python List Comprehensions: Explained Visually](https://treyhunner.com/2015/12/python-list-comprehensions-now-in-color/)- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf) - Subset Observations (Rows) - Subset Variables (Columns)- Python Data Science Handbook - [Chapter 3.1](https://jakevdp.github.io/PythonDataScienceHandbook/03.01-introducing-pandas-objects.html), Introducing Pandas Objects - [Chapter 3.2](https://jakevdp.github.io/PythonDataScienceHandbook/03.02-data-indexing-and-selection.html), Data Indexing and Selection Scrape the titles of PyCon 2018 talks
url = 'https://us.pycon.org/2018/schedule/talks/list/'
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
5 ways to look at long titles. Let's define a long title as one greater than 80 characters. 1. For Loop
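A minimal sketch of the for-loop approach; it assumes the scraped talk titles live in a Python list named `titles` (a hypothetical name, since the scraping cell is left blank above):

```python
# Collect every title longer than 80 characters with a plain for loop.
long_titles = []
for title in titles:
    if len(title) > 80:
        long_titles.append(title)

print(len(long_titles))
```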
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
2. List Comprehension
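The same filter written as a list comprehension, again assuming the hypothetical `titles` list from the sketch above:

```python
# One line replaces the loop-and-append pattern.
long_titles = [title for title in titles if len(title) > 80]
print(len(long_titles))
```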
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
3. Filter with named function
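A possible version with a named predicate passed to `filter`, under the same assumption about `titles`:

```python
def is_long(title):
    """Return True when a title exceeds 80 characters."""
    return len(title) > 80

long_titles = list(filter(is_long, titles))
print(len(long_titles))
```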
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
4. Filter with anonymous function
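The same idea with an anonymous (lambda) function instead of a named one:

```python
# The lambda plays the role of is_long from the previous sketch.
long_titles = list(filter(lambda title: len(title) > 80, titles))
print(len(long_titles))
```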
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
5. Pandas. pandas documentation: [Working with Text Data](https://pandas.pydata.org/pandas-docs/stable/text.html)
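A pandas sketch, assuming the hypothetical `titles` list is loaded into a DataFrame column named 'title':

```python
import pandas as pd

df = pd.DataFrame({'title': titles})
# Vectorized string length, then a boolean mask for long titles.
long_mask = df['title'].str.len() > 80
df[long_mask]
```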
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Make new dataframe columns. pandas documentation: [apply](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.apply.html)
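One way the subsections below (title length, long title, first letter) might be filled in, continuing the hypothetical `df` from the pandas sketch above:

```python
# Derive each new column from the existing 'title' column.
df['title length'] = df['title'].apply(len)   # character count per title
df['long title'] = df['title length'] > 80    # boolean flag for long titles
df['first letter'] = df['title'].str[0]       # first character of each title
df.head()
```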
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
title length
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
long title
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
first letter
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
word count. Using [`textstat`](https://github.com/shivam5992/textstat)
#!pip install textstat
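After installing, a possible word-count column using textstat's `lexicon_count`, continuing the hypothetical `df` from the earlier sketches:

```python
import textstat

# lexicon_count returns the number of words in a string.
df['word count'] = df['title'].apply(textstat.lexicon_count)
df.head()
```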
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Rename column: `title length` --> `title character count`. pandas documentation: [rename](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.rename.html)
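A sketch of the rename step, assuming the columns created in the earlier sketches:

```python
# Map the old column name to the new one.
df = df.rename(columns={'title length': 'title character count'})
df.columns
```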
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Analyze the dataframe. Describe. pandas documentation: [describe](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.describe.html)
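A minimal sketch of the describe step on the hypothetical `df`:

```python
# include='all' adds the non-numeric columns (e.g. title, first letter) to the summary.
df.describe(include='all')
```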
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Sort values. pandas documentation: [sort_values](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.sort_values.html). Five shortest titles, by character count
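A possible answer, assuming the 'title character count' column from the rename sketch above:

```python
# Sort ascending by character count and keep the first five rows.
df.sort_values(by='title character count').head(5)
```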
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Titles sorted reverse alphabetically
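A sketch of the reverse-alphabetical sort on the same hypothetical `df`:

```python
# ascending=False reverses the default alphabetical order.
df.sort_values(by='title', ascending=False).head()
```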
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Get value counts. pandas documentation: [value_counts](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.value_counts.html). Frequency counts of first letters
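A possible way to get the counts, assuming the 'first letter' column sketched earlier:

```python
# How many titles start with each letter.
df['first letter'].value_counts()
```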
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Percentage of talks with long titles
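A sketch using the boolean 'long title' column from the earlier sketch:

```python
# normalize=True gives proportions; multiply by 100 for percentages.
df['long title'].value_counts(normalize=True) * 100
```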
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Plot. pandas documentation: [Visualization](https://pandas.pydata.org/pandas-docs/stable/visualization.html). Top 5 most frequent first letters
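A possible bar chart of the five most common first letters, continuing the hypothetical `df`:

```python
import matplotlib.pyplot as plt

df['first letter'].value_counts().head(5).plot(kind='bar')
plt.show()
```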
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Histogram of title lengths, in characters
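A sketch of the histogram, assuming the 'title character count' column:

```python
import matplotlib.pyplot as plt

df['title character count'].plot(kind='hist', title='Title lengths (characters)')
plt.show()
```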
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Assignment. **Scrape** the talk descriptions. Hint: `soup.select('.presentation-description')`. **Make** new columns in the dataframe: - description - description character count - description word count - description grade level (use [this `textstat` function](https://github.com/shivam5992/textstat#the-flesch-kincaid-grade-level) to get the Flesch-Kincaid grade level). **Describe** all the dataframe's columns. What's the average description word count? The minimum? The maximum? **Answer** these questions: - Which descriptions could fit in a tweet? - What's the distribution of grade levels? Plot a histogram.
import bs4 import requests import numpy as np import pandas as pd import matplotlib.pyplot as plt result = requests.get(url) soup = bs4.BeautifulSoup(result.text) descriptions = [tag.text.strip() for tag in soup.select('.presentation-description')] print (len(descriptions)) print (descriptions) df = pd.DataFrame({'description': descriptions}) df['char count'] = df.description.apply(len) df.head() import textstat df['descr. word count'] = df['description'].apply(textstat.lexicon_count) df.head() df['grade level'] = df['description'].apply(textstat.flesch_kincaid_grade) df.head() df.describe() df.describe(exclude=np.number) df['tweetable'] = df['char count']<=280 df[df['tweetable'] == True] plt.hist(df['grade level']) plt.title('Histogram of Description Grade Levels') plt.show();
_____no_output_____
MIT
module1-scrape-and-process-data/LS_DS_121_Scrape_and_process_data.ipynb
wjarvis2/DS-Unit-1-Sprint-2-Data-Wrangling
Specifications. These parameters need to be specified prior to running the predictive GAN.
# modes gp = False sn = True # steps train = False fixed_input = True eph = '9999' # the model to read if not training # paramters DATASET = 'sine' # sine, moon, 2spirals, circle, helix suffix = '_sn' # suffix of output folder LATENT_DIM = 2 # latent space dimension DIM = 512 # 512 Model dimensionality INPUT_DIM = 2 # input dimension LAMBDA = 0.1 # smaller lambda seems to help for toy tasks specifically DROPOUT_RATE = 0.1 # rate of dropout lr = 1e-4 # learning rate for the optimizer CRITIC_ITERS = 5 # how many critic iterations per generator iteration BATCH_SIZE = 256 # batch size ITERS = 30000 # 100000, how many generator iterations to train for log_interval = 1000 # how frequent to write to log and save models use_cuda = False plot_3d = (DATASET == 'helix') TMP_PATH = 'tmp/' + DATASET + suffix + '/' if not os.path.isdir(TMP_PATH): os.makedirs(TMP_PATH)
_____no_output_____
MIT
gan_toy_example.ipynb
acse-wx319/gans-on-multimodal-data
Make generator and discriminator. Initialize the generator and discriminator objects. The architectures have been declared in lib.models.
netG = Generator(LATENT_DIM, DIM, DROPOUT_RATE, INPUT_DIM) if sn: netD = DiscriminatorSN(DIM, INPUT_DIM) else: netD = Discriminator(DIM, INPUT_DIM) netG.apply(weights_init) netD.apply(weights_init)
_____no_output_____
MIT
gan_toy_example.ipynb
acse-wx319/gans-on-multimodal-data
Train or load model. If in training mode, a WGAN with either SN or GP will be trained on the specified type of synthetic data (sine, circle, half-moon, helix or double-spirals). A log file will be created with all specifications to keep track of the runs. Loss will be plotted against the number of epochs. Randomly generated samples will also be plotted. The frequency at which to save the plots is specified by the parameter 'log_interval'. If not training, the pre-trained models saved in the tmp path will be loaded. Use 'eph' to specify from which epoch to load the models.
if train: # start writing log f = open(TMP_PATH + "log.txt", "w") # print specifications f.write('gradient penalty: ' + str(gp)) f.write('\n spectral normalization: ' + str(sn)) f.write('\n datasest: ' + DATASET) f.write('\n hidden layer dimension: ' + str(DIM)) f.write('\n latent space dimension: ' + str(LATENT_DIM)) f.write('\n gradient penalty lambda: ' + str(LAMBDA)) f.write('\n dropout rate: ' + str(DROPOUT_RATE)) f.write('\n critic iterations per generator iteration: ' + str(CRITIC_ITERS)) f.write('\n batch size: ' + str(BATCH_SIZE)) f.write('\n total iterations: ' + str(ITERS)) f.write('\n') # print model structures f.write(str(netG)) f.write(str(netD)) f.write('\n') # option of using GPU if use_cuda: netD = netD.cuda() netG = netG.cuda() # declare optimizers for generator and discriminator optimizerD = optim.Adam(netD.parameters(), lr=lr, betas=(0.5, 0.9)) optimizerG = optim.Adam(netG.parameters(), lr=lr, betas=(0.5, 0.9)) # helper tensors for backpropogation one = torch.FloatTensor([1]) mone = one * -1 if use_cuda: one = one.cuda() mone = mone.cuda() # make synthetic data data = make_data_iterator(DATASET, BATCH_SIZE) # record loss and wasserstein-1 estimate losses = [] wass_dist = [] # start timing start = timeit.default_timer() # start training for iteration in range(ITERS): ############################ # (1) Update D network ########################### for iter_d in range(CRITIC_ITERS): _data = next(data).float() if use_cuda: _data = _data.cuda() netD.zero_grad() # train with real D_real = netD(_data) D_real = D_real.mean().unsqueeze(0) D_real.backward(mone) # train with fake noise = torch.randn(BATCH_SIZE, LATENT_DIM) if use_cuda: noise = noise.cuda() fake = netG(noise) D_fake = netD(fake.detach()) D_fake = D_fake.mean().unsqueeze(0) D_fake.backward(one) # train with gradient penalty if gp: gradient_penalty = calc_gradient_penalty(netD, _data, fake, BATCH_SIZE, LAMBDA, use_cuda) gradient_penalty.backward() if gp: D_cost = abs(D_fake - D_real) + gradient_penalty else: D_cost = abs(D_fake - D_real) Wasserstein_D = abs(D_real - D_fake) optimizerD.step() ############################ # (2) Update G network ############################ netG.zero_grad() _data = next(data).float() if use_cuda: _data = _data.cuda() noise = torch.randn(BATCH_SIZE, LATENT_DIM) if use_cuda: noise = noise.cuda() fake = netG(noise) G = netD(fake) G = G.mean().unsqueeze(0) G.backward(mone) G_cost = -G optimizerG.step() losses.append([G_cost.cpu().item(), D_cost.cpu().item()]) wass_dist.append(Wasserstein_D.cpu().item()) if iteration % log_interval == log_interval - 1: # save discriminator model torch.save(netD.state_dict(), TMP_PATH + 'disc_model' + str(iteration) + '.pth') # save generator model torch.save(netG.state_dict(), TMP_PATH + 'gen_model' + str(iteration) + '.pth') # report iteration number f.write('Iteration ' + str(iteration) + '\n') # report time stop = timeit.default_timer() f.write(' Time spent: ' + str(stop - start) + '\n') # report loss f.write(' Generator loss: ' + str(G_cost.cpu().item()) + '\n') f.write(' Discriminator loss: ' + str(D_cost.cpu().item()) + '\n') f.write(' Wasserstein distance: ' + str(Wasserstein_D.cpu().item()) + '\n') # save frame plot noise = torch.randn(BATCH_SIZE, LATENT_DIM) if use_cuda: noise = noise.cuda() plot_data(_data.cpu().numpy(), netG(noise).cpu().data.numpy(), str(iteration), TMP_PATH, plot_3d=plot_3d) # save loss plot fig, ax = plt.subplots(1, 1, figsize=[10, 5]) ax.plot(losses) ax.legend(['Generator', 'Discriminator']) plt.title('Generator Loss v.s 
Discriminator Loss') ax.grid() plt.savefig(TMP_PATH + 'loss_trend' + str(iteration) + '.png') # save wassertein loss plot fig, ax = plt.subplots(1, 1, figsize=[10, 5]) ax.plot(wass_dist) plt.title('Wassertein Distance') ax.grid() plt.savefig(TMP_PATH + 'wass_dist' + str(iteration) + '.png') # close log file f.close() else: # if not training, load pre-trained models from local files netG.load_state_dict(torch.load(TMP_PATH + 'gen_model' + eph + '.pth')) netD.load_state_dict(torch.load(TMP_PATH + 'disc_model' + eph + '.pth'))
_____no_output_____
MIT
gan_toy_example.ipynb
acse-wx319/gans-on-multimodal-data
Prediction. For a list of x, use the trained GAN to make multiple predictions for y. For each x, many predictions will be made. A subset is taken depending on the similarity between the generated x and the specified x.
if fixed_input: data = make_data_iterator(DATASET, BATCH_SIZE) # sine data preds = None for x in np.linspace(-4., 4., 17): print(x) out = predict_fixed(netG, x, 80, 8, INPUT_DIM, LATENT_DIM, use_cuda) if preds is None: preds = out else: preds = torch.cat((preds, out)) # plt.scatter(preds[:, 0], preds[:, 1]) # plt.show() true_dist = next(data) fig, ax = plt.subplots(1, 1, figsize=(6, 4)) plt.scatter(true_dist[:, 0], true_dist[:, 1], c='orange', label='Real data') plt.scatter(preds[:, 0], preds[:, 1], c='blue', label='Predictions') plt.savefig(TMP_PATH + 'fixed_input' + eph + '.jpg')
-4.0 -3.5 -3.0 -2.5 -2.0 -1.5 -1.0 -0.5 0.0 0.5 1.0 1.5 2.0 2.5 3.0 3.5 4.0
MIT
gan_toy_example.ipynb
acse-wx319/gans-on-multimodal-data
Application Programming Interface (API). An API lets two pieces of software talk to each other. Just like a function, you don't have to know how the API works, only its inputs and outputs. An essential type of API is a REST API, which allows you to access resources via the internet. In this lab, we will review the Pandas library in the context of an API, and we will also review a basic REST API. Table of Contents: Pandas is an API; REST APIs Basics; Quiz on Tuples. Estimated Time Needed: 15 min
!pip install nba_api
Collecting nba_api [?25l Downloading https://files.pythonhosted.org/packages/fd/94/ee060255b91d945297ebc2fe9a8672aee07ce83b553eef1c5ac5b974995a/nba_api-1.1.8-py3-none-any.whl (217kB)  |β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 225kB 2.7MB/s [?25hRequirement already satisfied: requests in /usr/local/lib/python3.6/dist-packages (from nba_api) (2.23.0) Requirement already satisfied: certifi>=2017.4.17 in /usr/local/lib/python3.6/dist-packages (from requests->nba_api) (2020.6.20) Requirement already satisfied: urllib3!=1.25.0,!=1.25.1,<1.26,>=1.21.1 in /usr/local/lib/python3.6/dist-packages (from requests->nba_api) (1.24.3) Requirement already satisfied: idna<3,>=2.5 in /usr/local/lib/python3.6/dist-packages (from requests->nba_api) (2.10) Requirement already satisfied: chardet<4,>=3.0.2 in /usr/local/lib/python3.6/dist-packages (from requests->nba_api) (3.0.4) Installing collected packages: nba-api Successfully installed nba-api-1.1.8
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
Pandas is an API. You will use this function in the lab:
def one_dict(list_dict): keys=list_dict[0].keys() out_dict={key:[] for key in keys} for dict_ in list_dict: for key, value in dict_.items(): out_dict[key].append(value) return out_dict
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
Pandas is an API. Pandas is actually a set of software components, much of which is not even written in Python.
import pandas as pd import matplotlib.pyplot as plt
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
You create a dictionary; this is just data.
dict_={'a':[11,21,31],'b':[12,22,32]}
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
When you create a Pandas object with the DataFrame constructor, in API lingo this is an "instance". The data in the dictionary is passed along to the pandas API. You then use the dataframe to communicate with the API.
df=pd.DataFrame(dict_) type(df)
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
When you call the method head, the dataframe communicates with the API, displaying the first few rows of the dataframe.
df.head()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
When you call the method mean, the API will calculate the mean and return the value.
df.mean()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
REST APIs. REST APIs function by sending a request; the request is communicated via an HTTP message. The HTTP message usually contains a JSON file with instructions for what operation we would like the service or resource to perform. In a similar manner, the API returns a response via an HTTP message, and this response is usually contained within a JSON file. In this lab, we will use the NBA API to determine how well the Golden State Warriors performed against the Toronto Raptors. We will use the API to determine the number of points the Golden State Warriors won or lost by for each game. So if the value is three, the Golden State Warriors won by three points. Similarly, if the Golden State Warriors lost by two points the result will be negative two. The API is relatively simple and handles a lot of the details such as endpoints and authentication. In the nba api, making a request for a specific team is quite simple: we don't require a JSON, all we require is an id. This information is stored locally in the API, so we import the module teams.
from nba_api.stats.static import teams import matplotlib.pyplot as plt #https://pypi.org/project/nba-api/
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
The method get_teams() returns a list of dictionaries; the dictionary key id has a unique identifier for each team as its value.
nba_teams = teams.get_teams()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
The dictionary key id has a unique identifier for each team as its value. Let's look at the first three elements of the list:
nba_teams[0:3]
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
To make things easier, we can convert the list of dictionaries to a table. First, we use the function one_dict to create a dictionary. We use the common keys for each team as the keys; the value is a list, and each element of the list corresponds to the values for each team. We then convert the dictionary to a dataframe, where each row contains the information for a different team.
dict_nba_team=one_dict(nba_teams) df_teams=pd.DataFrame(dict_nba_team) df_teams.head()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We will use the team's nickname to find the unique id; we can see the row that contains the Warriors by using the column nickname as follows:
df_warriors=df_teams[df_teams['nickname']=='Warriors'] df_warriors
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We can use the following line of code to access the value in the id column of the dataframe:
id_warriors=df_warriors[['id']].values[0][0] #we now have an integer that can be used to request the Warriors information id_warriors
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
The function "League Game Finder " will make an API call, its in the module stats.endpoints
from nba_api.stats.endpoints import leaguegamefinder
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
The parameter team_id_nullable is the unique ID for the Warriors. Under the hood, the NBA API is making an HTTP request. The information requested is provided and transmitted via an HTTP response, which is assigned to the object gamefinder.
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP, # the following code is commented out; you can run it in Jupyter on your own computer. # gamefinder = leaguegamefinder.LeagueGameFinder(team_id_nullable=id_warriors)
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We can see the JSON file by running the following line of code.
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP, # the following code is commented out; you can run it in Jupyter on your own computer. # gamefinder.get_json()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
The gamefinder object has a method get_data_frames() that returns a dataframe. If we view the dataframe, we can see it contains information about all the games the Warriors played. The PLUS_MINUS column contains information on the score: if the value is negative, the Warriors lost by that many points; if the value is positive, the Warriors won by that many points. The column MATCHUP has the team the Warriors were playing; GSW stands for Golden State and TOR means Toronto Raptors; vs signifies it was a home game and the @ symbol means an away game.
# Since https://stats.nba.com does not allow API calls from Cloud IPs and Skills Network Labs uses a Cloud IP, # the following code is commented out; you can run it in Jupyter on your own computer. # games = gamefinder.get_data_frames()[0] # games.head()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
You can download the dataframe from the API call for Golden State and run the rest of the notebook.
! wget https://s3-api.us-geo.objectstorage.softlayer.net/cf-courses-data/CognitiveClass/PY0101EN/Chapter%205/Labs/Golden_State.pkl file_name = "Golden_State.pkl" games = pd.read_pickle(file_name) games.head()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We can create two dataframes, one for the games that the Warriors faced the raptors at home and the second for away games.
games_home=games [games ['MATCHUP']=='GSW vs. TOR'] games_away=games [games ['MATCHUP']=='GSW @ TOR']
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We can calculate the mean for the column PLUS_MINUS for the dataframes games_home and games_away:
games_home.mean()['PLUS_MINUS'] games_away.mean()['PLUS_MINUS']
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
We can plot the PLUS_MINUS column for the dataframes games_home and games_away. We see the Warriors played better at home.
fig, ax = plt.subplots() games_away.plot(x='GAME_DATE',y='PLUS_MINUS', ax=ax) games_home.plot(x='GAME_DATE',y='PLUS_MINUS', ax=ax) ax.legend(["away", "home"]) plt.show()
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
About the Authors: [Joseph Santarcangelo](https://www.linkedin.com/in/joseph-s-50398b136/) has a PhD in Electrical Engineering; his research focused on using machine learning, signal processing, and computer vision to determine how videos impact human cognition. Joseph has been working for IBM since he completed his PhD. Copyright &copy; 2017 [cognitiveclass.ai](https://cognitiveclass.ai). This notebook and its source code are released under the terms of the [MIT License](https://cognitiveclass.ai).
_____no_output_____
MIT
Python_For_DSandAI_5_1_Intro_API.ipynb
ornob39/Python_For_DataScience_AI-IBM-
Machine Learning Engineer Nanodegree Unsupervised Learning Project: Creating Customer Segments Welcome to the third project of the Machine Learning Engineer Nanodegree! In this notebook, some template code has already been provided for you, and it will be your job to implement the additional functionality necessary to successfully complete this project. Sections that begin with **'Implementation'** in the header indicate that the following block of code will require additional functionality which you must provide. Instructions will be provided for each section and the specifics of the implementation are marked in the code block with a `'TODO'` statement. Please be sure to read the instructions carefully!In addition to implementing code, there will be questions that you must answer which relate to the project and your implementation. Each section where you will answer a question is preceded by a **'Question X'** header. Carefully read each question and provide thorough answers in the following text boxes that begin with **'Answer:'**. Your project submission will be evaluated based on your answers to each of the questions and the implementation you provide. >**Note:** Code and Markdown cells can be executed using the **Shift + Enter** keyboard shortcut. In addition, Markdown cells can be edited by typically double-clicking the cell to enter edit mode. Getting StartedIn this project, you will analyze a dataset containing data on various customers' annual spending amounts (reported in *monetary units*) of diverse product categories for internal structure. One goal of this project is to best describe the variation in the different types of customers that a wholesale distributor interacts with. Doing so would equip the distributor with insight into how to best structure their delivery service to meet the needs of each customer.The dataset for this project can be found on the [UCI Machine Learning Repository](https://archive.ics.uci.edu/ml/datasets/Wholesale+customers). For the purposes of this project, the features `'Channel'` and `'Region'` will be excluded in the analysis β€” with focus instead on the six product categories recorded for customers.Run the code block below to load the wholesale customers dataset, along with a few of the necessary Python libraries required for this project. You will know the dataset loaded successfully if the size of the dataset is reported.
# Import libraries necessary for this project import numpy as np import pandas as pd from IPython.display import display # Allows the use of display() for DataFrames from scipy.stats import skew # Import supplementary visualizations code visuals.py import visuals as vs # Pretty display for notebooks %matplotlib inline # Load the wholesale customers dataset try: data = pd.read_csv("customers.csv") data.drop(['Region', 'Channel'], axis = 1, inplace = True) print "Wholesale customers dataset has {} samples with {} features each.".format(*data.shape) except: print "Dataset could not be loaded. Is the dataset missing?" display(data.head(4)) print data.skew()
Wholesale customers dataset has 440 samples with 6 features each.
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Data ExplorationIn this section, you will begin exploring the data through visualizations and code to understand how each feature is related to the others. You will observe a statistical description of the dataset, consider the relevance of each feature, and select a few sample data points from the dataset which you will track through the course of this project.Run the code block below to observe a statistical description of the dataset. Note that the dataset is composed of six important product categories: **'Fresh'**, **'Milk'**, **'Grocery'**, **'Frozen'**, **'Detergents_Paper'**, and **'Delicatessen'**. Consider what each category represents in terms of products you could purchase.
# Display a description of the dataset display(data.describe()) # By looking at the statistics, the mean and the 50% value (i.e. the median) are far apart for every column, so the skew is checked. Apply log to every column # or use a Box-Cox test
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Selecting SamplesTo get a better understanding of the customers and how their data will transform through the analysis, it would be best to select a few sample data points and explore them in more detail. In the code block below, add **three** indices of your choice to the `indices` list which will represent the customers to track. It is suggested to try different sets of samples until you obtain customers that vary significantly from one another.
import seaborn as sns # TODO: Select three indices of your choice you wish to sample from the dataset indices = [23, 57,234] # Create a DataFrame of the chosen samples samples = pd.DataFrame(data.loc[indices], columns = data.keys()).reset_index(drop = True) print "Chosen samples of wholesale customers dataset:" display(samples) sns.heatmap((samples-data.mean())/data.std(ddof=0), annot=True, cbar=False, square=True)
Chosen samples of wholesale customers dataset:
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 1. Consider the total purchase cost of each product category and the statistical description of the dataset above for your sample customers. *What kind of establishment (customer) could each of the three samples you've chosen represent?* **Hint:** Examples of establishments include places like markets, cafes, and retailers, among many others. Avoid using names for establishments, such as saying *"McDonalds"* when describing a sample customer as a restaurant. **Answer:** I have considered 3 categories to divide the samples. 1) Supermarket: it has extremely high values of Fresh, Milk, Grocery and Frozen, which surely implies a big store. The spending on Fresh, Grocery and Delicatessen is much higher than the mean spending on each item. Hence, a supermarket. 2) Grocery store: the first sample shows high consumption of Grocery, which could be an outlet near human settlements; a convenience store would be a better name. The statistics show that spending on Grocery, Delicatessen and Milk is much higher than the mean of each. Hence, this can simply be assumed to be a grocery store since the items relevant to groceries have high spending. 3) Hotel: the consumption of all the given items is much lower than for the supermarket. It represents a cafe-type hotel since Detergents_Paper, Fresh and Milk have higher spending than the mean of each, according to the statistics.
'''# TODO: Make a copy of the DataFrame, using the 'drop' function to drop the given feature new_data = None # TODO: Split the data into training and testing sets using the given feature as the target X_train, X_test, y_train, y_test = (None, None, None, None) # TODO: Create a decision tree regressor and fit it to the training set regressor = None # TODO: Report the score of the prediction using the testing set score = None ''' from scipy.stats import skew from sklearn.tree import DecisionTreeRegressor from sklearn.cross_validation import train_test_split from sklearn.metrics import accuracy_score scores = [] names = data.columns.values for i, name in enumerate(data.columns.values): x = data.drop(name,axis=1).values y = data[name].values #print (x,y) X_train, X_test, y_train, y_test = train_test_split(x,y,test_size=0.25, random_state =10) #print (len(X_train), len(X_test),len(y_train),len(y_test)) clf = DecisionTreeRegressor(random_state =10) clf.fit(X_train, y_train) scores.append(clf.score(X_test,y_test)) #print scores df = pd.DataFrame(np.array([scores,names]).T, columns = ['score','feature'])#take transpose of scores and names for vertical display(df) #print data.skew()
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 2. *Which feature did you attempt to predict? What was the reported prediction score? Is this feature necessary for identifying customers' spending habits?* **Hint:** The coefficient of determination, `R^2`, is scored between 0 and 1, with 1 being a perfect fit. A negative `R^2` implies the model fails to fit the data. **Answer:** A higher R^2 score implies that the label (dependent variable) is easily predicted from the other features, which means that feature is unnecessary for identifying customers' spending habits. However, the features with lower R^2 scores, like Delicatessen, Fresh and Milk, are necessary for the learning algorithm. Also, the features with high R^2 values, like Grocery and Frozen, are not needed.
# Produce a scatter matrix for each pair of features in the data pd.scatter_matrix(data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); import seaborn as sns sns.heatmap(data.corr(), annot=True)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
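To quantify what the heatmap suggests, here is a small added sketch (not part of the original notebook) that ranks the most strongly correlated feature pairs from the same `data` DataFrame.

# Absolute Pearson correlations, upper triangle only, sorted strongest first
corr = data.corr().abs()
upper = corr.where(np.triu(np.ones(corr.shape), k=1).astype(bool))
print(upper.stack().sort_values(ascending=False).head(5))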
Question 3*Are there any pairs of features which exhibit some degree of correlation? Does this confirm or deny your suspicions about the relevance of the feature you attempted to predict? How is the data for those features distributed?* **Hint:** Is the data normally distributed? Where do most of the data points lie? **Answer:** In the scatter matrix, a correlated pair shows points clustered tightly around a roughly linear trend. Detergents_Paper and Grocery, Milk and Grocery, and Detergents_Paper and Milk exhibit strong positive correlation, which confirms the earlier suspicion that these features carry largely redundant information. The features are not normally distributed; they are heavily right-skewed (roughly log-normal), with most data points concentrated at low spending values. Reviewer note: reading correlation off the scatter matrix alone is somewhat subjective, so the correlation heatmap and the ranked correlation pairs above serve as a more direct check. Data PreprocessingIn this section, you will preprocess the data to create a better representation of customers by performing a scaling on the data and detecting (and optionally removing) outliers. Preprocessing data is often times a critical step in assuring that results you obtain from your analysis are significant and meaningful. Implementation: Feature ScalingIf data is not normally distributed, especially if the mean and median vary significantly (indicating a large skew), it is most [often appropriate](http://econbrowser.com/archives/2014/02/use-of-logarithms-in-economics) to apply a non-linear scaling - particularly for financial data. One way to achieve this scaling is by using a [Box-Cox test](http://scipy.github.io/devdocs/generated/scipy.stats.boxcox.html), which calculates the best power transformation of the data that reduces skewness. A simpler approach which can work in most cases would be applying the natural logarithm.In the code block below, you will need to implement the following: - Assign a copy of the data to `log_data` after applying logarithmic scaling. Use the `np.log` function for this. - Assign a copy of the sample data to `log_samples` after applying logarithmic scaling. Again, use `np.log`.
# TODO: Scale the data using the natural logarithm log_data = np.log(data) # TODO: Scale the sample data using the natural logarithm log_samples = np.log(samples) # Produce a scatter matrix for each pair of newly-transformed features pd.scatter_matrix(log_data, alpha = 0.3, figsize = (14,8), diagonal = 'kde'); log_data.skew() # Skew values between -1 and 1 indicate a roughly symmetric distribution; after the log transform, Fresh and Delicatessen come close to that range. # A Box-Cox alternative to the natural logarithm is sketched after this cell.
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
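The following is only a sketch of the Box-Cox alternative mentioned above, using `scipy.stats.boxcox`; it is an addition, not part of the original submission. It assumes every value in `data` is strictly positive (which should hold for this dataset, since Box-Cox is undefined at zero or below) and fits a separate power parameter lambda per feature, unlike the single log transform applied to all features.

from scipy.stats import boxcox

boxcox_data = pd.DataFrame(index=data.index)
for feature in data.columns:
    # boxcox returns the transformed values and the fitted lambda
    transformed, lmbda = boxcox(data[feature])
    boxcox_data[feature] = transformed
    print("{:<18} fitted lambda: {: .3f}".format(feature, lmbda))

# Compare the resulting skew with the log-transformed data above
print(boxcox_data.skew())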
ObservationAfter applying a natural logarithm scaling to the data, the distribution of each feature should appear much more normal. For any pairs of features you may have identified earlier as being correlated, observe here whether that correlation is still present (and whether it is now stronger or weaker than before).Run the code below to see how the sample data has changed after having the natural logarithm applied to it.
# Display the log-transformed sample data display(log_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Outlier DetectionDetecting outliers in the data is extremely important in the data preprocessing step of any analysis. The presence of outliers can often skew results which take into consideration these data points. There are many "rules of thumb" for what constitutes an outlier in a dataset. Here, we will use [Tukey's Method for identifying outliers](http://datapigtechnologies.com/blog/index.php/highlighting-outliers-in-your-data-with-the-tukey-method/): An *outlier step* is calculated as 1.5 times the interquartile range (IQR). A data point with a feature that is beyond an outlier step outside of the IQR for that feature is considered abnormal.In the code block below, you will need to implement the following: - Assign the value of the 25th percentile for the given feature to `Q1`. Use `np.percentile` for this. - Assign the value of the 75th percentile for the given feature to `Q3`. Again, use `np.percentile`. - Assign the calculation of an outlier step for the given feature to `step`. - Optionally remove data points from the dataset by adding indices to the `outliers` list.**NOTE:** If you choose to remove any outliers, ensure that the sample data does not contain any of these points! Once you have performed this implementation, the dataset will be stored in the variable `good_data`. A more compact tally of the outlier indices using `collections.Counter` is sketched after the cell below.
# For each feature find the data points with extreme high or low values outindex = {} for feature in log_data.keys(): # TODO: Calculate Q1 (25th percentile of the data) for the given feature Q1 = np.percentile(log_data[feature],25) # TODO: Calculate Q3 (75th percentile of the data) for the given feature Q3 = np.percentile(log_data[feature], 75) # TODO: Use the interquartile range to calculate an outlier step (1.5 times the interquartile range) step = 1.5*float((Q3-Q1)) # Display the outliers print "Data points considered outliers for the feature '{}':".format(feature) display(log_data[~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))]) for i,r in log_data[feature].iteritems(): if not((r >= Q1 -step) & (r <= Q3 + step)): if i not in outindex: outindex[i]=1 else: outindex[i]=outindex[i]+1 outliers = [] # OPTIONAL: Select the indices for data points you wish to remove for i in outindex: if outindex[i]>=2: outliers.append(i) print outliers # Remove the outliers, if any were specified good_data = log_data.drop(log_data.index[outliers]).reset_index(drop = True)
Data points considered outliers for the feature 'Fresh':
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
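As referenced above, this is a more compact supplementary sketch (added here, not from the original notebook) that tallies outlier indices across features with `collections.Counter`; it should reproduce the same `outliers` list built in the cell above.

from collections import Counter

outlier_counts = Counter()
for feature in log_data.keys():
    Q1, Q3 = np.percentile(log_data[feature], [25, 75])
    step = 1.5 * (Q3 - Q1)
    # Indices falling outside [Q1 - step, Q3 + step] for this feature
    mask = ~((log_data[feature] >= Q1 - step) & (log_data[feature] <= Q3 + step))
    outlier_counts.update(log_data[mask].index)

# Keep only the points flagged as outliers for two or more features
print(sorted(idx for idx, count in outlier_counts.items() if count >= 2))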
Question 4*Are there any data points considered outliers for more than one feature based on the definition above? Should these data points be removed from the dataset? If any data points were added to the `outliers` list to be removed, explain why.* **Answer:** Yes. The data points with indices [128, 154, 65, 66, 75] are flagged as outliers for more than one feature, and these are the ones added to the `outliers` list and removed. Points that are extreme in only a single feature are kept, since dropping them could discard useful information, whereas points flagged across multiple features are far more likely to be genuine anomalies that would distort the clustering. Feature TransformationIn this section you will use principal component analysis (PCA) to draw conclusions about the underlying structure of the wholesale customer data. Since using PCA on a dataset calculates the dimensions which best maximize variance, we will find which compound combinations of features best describe customers. Implementation: PCANow that the data has been scaled to a more normal distribution and has had any necessary outliers removed, we can now apply PCA to the `good_data` to discover which dimensions about the data best maximize the variance of features involved. In addition to finding these dimensions, PCA will also report the *explained variance ratio* of each dimension - how much variance within the data is explained by that dimension alone. Note that a component (dimension) from PCA can be considered a new "feature" of the space, however it is a composition of the original features present in the data.In the code block below, you will need to implement the following: - Import `sklearn.decomposition.PCA` and assign the results of fitting PCA in six dimensions with `good_data` to `pca`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
from sklearn.decomposition import PCA # TODO: Apply PCA by fitting the good data with the same number of dimensions as features pca = PCA(n_components = 6) pca.fit(good_data) # TODO: Transform log_samples using the PCA fit above pca_samples = pca.transform(log_samples) # Generate PCA results plot pca_results = vs.pca_results(good_data, pca)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
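A short added sketch to read off the cumulative explained variance directly from the six-component `pca` fit above, which is what the next question asks about; this is supplementary and assumes the variable names from the preceding cell.

# Running total of explained variance across the six principal components
cumulative = np.cumsum(pca.explained_variance_ratio_)
for i, total in enumerate(cumulative, start=1):
    print("First {} component(s): {:.4f} of total variance".format(i, total))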
Question 5*How much variance in the data is explained* ***in total*** *by the first and second principal component? What about the first four principal components? Using the visualization provided above, discuss what the first four dimensions best represent in terms of customer spending.* **Hint:** A positive increase in a specific dimension corresponds with an *increase* of the *positive-weighted* features and a *decrease* of the *negative-weighted* features. The rate of increase or decrease is based on the individual feature weights. **Answer:** 1) The first and second principal components together explain about 0.7068 of the variance in the data. 2) The first four principal components explain about 0.9311 of the total variance. 3) Dimension 1 has large positive weights for Milk, Grocery and Detergents_Paper, a small positive weight for Delicatessen, and small negative weights for Fresh and Frozen, so it mainly represents spending on household and retail goods. Dimension 2 has large positive weights for Fresh, Frozen and Delicatessen and small positive weights for Milk, Grocery and Detergents_Paper, so it mainly represents spending on food items. Dimension 3 has large positive weights for Frozen and Delicatessen and large negative weights for Fresh and Detergents_Paper. Dimension 4 has large positive weights for Frozen and Detergents_Paper and large negative weights for Fresh and Delicatessen. ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it in six dimensions. Observe the numerical value for the first four dimensions of the sample points. Consider if this is consistent with your initial interpretation of the sample points.
# Display sample log-data after having a PCA transformation applied display(pd.DataFrame(np.round(pca_samples, 4), columns = pca_results.index.values))
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Implementation: Dimensionality ReductionWhen using principal component analysis, one of the main goals is to reduce the dimensionality of the data - in effect, reducing the complexity of the problem. Dimensionality reduction comes at a cost: Fewer dimensions used implies less of the total variance in the data is being explained. Because of this, the *cumulative explained variance ratio* is extremely important for knowing how many dimensions are necessary for the problem. Additionally, if a significant amount of variance is explained by only two or three dimensions, the reduced data can be visualized afterwards.In the code block below, you will need to implement the following: - Assign the results of fitting PCA in two dimensions with `good_data` to `pca`. - Apply a PCA transformation of `good_data` using `pca.transform`, and assign the results to `reduced_data`. - Apply a PCA transformation of `log_samples` using `pca.transform`, and assign the results to `pca_samples`.
# TODO: Apply PCA by fitting the good data with only two dimensions pca = PCA(n_components =2) pca.fit(good_data) # TODO: Transform the good data using the PCA fit above reduced_data = pca.transform(good_data) # TODO: Transform log_samples using the PCA fit above pca_samples = pca.transform(log_samples) # Create a DataFrame for the reduced data reduced_data = pd.DataFrame(reduced_data, columns = ['Dimension 1', 'Dimension 2']) display(reduced_data)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
ObservationRun the code below to see how the log-transformed sample data has changed after having a PCA transformation applied to it using only two dimensions. Observe how the values for the first two dimensions remain unchanged when compared to a PCA transformation in six dimensions.
# Display sample log-data after applying PCA transformation in two dimensions display(pd.DataFrame(np.round(pca_samples, 4), columns = ['Dimension 1', 'Dimension 2']))
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Visualizing a BiplotA biplot is a scatterplot where each data point is represented by its scores along the principal components. The axes are the principal components (in this case `Dimension 1` and `Dimension 2`). In addition, the biplot shows the projection of the original features along the components. A biplot can help us interpret the reduced dimensions of the data, and discover relationships between the principal components and original features.Run the code cell below to produce a biplot of the reduced-dimension data.
# Create a biplot vs.biplot(good_data, reduced_data, pca)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
ObservationOnce we have the original feature projections (in red), it is easier to interpret the relative position of each data point in the scatterplot. For instance, a point in the lower right corner of the figure will likely correspond to a customer that spends a lot on `'Milk'`, `'Grocery'` and `'Detergents_Paper'`, but not so much on the other product categories. From the biplot, which of the original features are most strongly correlated with the first component? What about those that are associated with the second component? Do these observations agree with the pca_results plot you obtained earlier? ClusteringIn this section, you will choose to use either a K-Means clustering algorithm or a Gaussian Mixture Model clustering algorithm to identify the various customer segments hidden in the data. You will then recover specific data points from the clusters to understand their significance by transforming them back into their original dimension and scale. Question 6*What are the advantages to using a K-Means clustering algorithm? What are the advantages to using a Gaussian Mixture Model clustering algorithm? Given your observations about the wholesale customer data so far, which of the two algorithms will you use and why?* **Answer:** 1) K-Means is a hard-clustering algorithm: each point is assigned to exactly one cluster. It is simple to implement, minimizes the within-cluster sum of squared Euclidean distances to the cluster means, and is guaranteed to converge (to a local optimum). 2) A Gaussian Mixture Model performs soft clustering: each point receives a probability of belonging to each component, and each component has its own covariance, so overlapping clusters can be modelled. 3) Since the reduced customer data may well contain overlapping segments rather than cleanly separated groups, soft clustering is the better fit here, so GMM is used (a K-Means run on the same reduced data is sketched after the clustering cell below for comparison). Implementation: Creating ClustersDepending on the problem, the number of clusters that you expect to be in the data may already be known. When the number of clusters is not known *a priori*, there is no guarantee that a given number of clusters best segments the data, since it is unclear what structure exists in the data - if any. However, we can quantify the "goodness" of a clustering by calculating each data point's *silhouette coefficient*. The [silhouette coefficient](http://scikit-learn.org/stable/modules/generated/sklearn.metrics.silhouette_score.html) for a data point measures how similar it is to its assigned cluster from -1 (dissimilar) to 1 (similar). Calculating the *mean* silhouette coefficient provides for a simple scoring method of a given clustering.In the code block below, you will need to implement the following: - Fit a clustering algorithm to the `reduced_data` and assign it to `clusterer`. - Predict the cluster for each data point in `reduced_data` using `clusterer.predict` and assign them to `preds`. - Find the cluster centers using the algorithm's respective attribute and assign them to `centers`. - Predict the cluster for each sample data point in `pca_samples` and assign them `sample_preds`. - Import `sklearn.metrics.silhouette_score` and calculate the silhouette score of `reduced_data` against `preds`. - Assign the silhouette score to `score` and print the result.
from sklearn.metrics import silhouette_score from sklearn.mixture import GMM scorer = {}#for n sample points 2 to n-1 clusters can be created. for i in range(2,10): clusterer = GMM(n_components = i) clusterer.fit(reduced_data) pred = clusterer.predict(reduced_data) score = silhouette_score(reduced_data, pred) scorer[i]=score print (scorer) optimal_components = 2 # TODO: Apply your clustering algorithm of choice to the reduced data clusterer = GMM(n_components=optimal_components).fit(reduced_data) # TODO: Predict the cluster for each data point preds = clusterer.predict(reduced_data) # TODO: Find the cluster centers centers = clusterer.means_ # TODO: Predict the cluster for each transformed sample data point sample_preds = clusterer.predict(pca_samples) # TODO: Calculate the mean silhouette coefficient for the number of clusters chosen score = silhouette_score(reduced_data, preds) print 'Best number of clusters : %s and score : %s'%(str(optimal_components), str(score)) print '--------------------------------------------------------------------------------------------------------------------------'
C:\Users\admin\Anaconda2\lib\site-packages\sklearn\utils\deprecation.py:52: DeprecationWarning: Class GMM is deprecated; The class GMM is deprecated in 0.18 and will be removed in 0.20. Use class GaussianMixture instead. warnings.warn(msg, category=DeprecationWarning) [... repeated sklearn GMM DeprecationWarning output trimmed ...]
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 7*Report the silhouette score for several cluster numbers you tried. Of these, which number of clusters has the best silhouette score?* **Answer:** The dictionary {2: 0.41181886438624482, 3: 0.37616616509083634, 4: 0.34168407828470648, 5: 0.28001985722335737, 6: 0.26923051036000389, 7: 0.32398601556485884, 8: 0.30410685766208839, 9: 0.27229645992822205} maps each number of clusters tried to its silhouette score. The best score, 0.41181886438624482, is obtained with 2 clusters (a minimal sketch of how such scores can be computed follows the cluster visualization below). Cluster Visualization Once you've chosen the optimal number of clusters for your clustering algorithm using the scoring metric above, you can now visualize the results by executing the code block below. Note that, for experimentation purposes, you are welcome to adjust the number of clusters for your clustering algorithm to see various visualizations. The final visualization provided should, however, correspond with the optimal number of clusters.
# Display the results of the clustering from implementation vs.cluster_results(reduced_data, preds, centers, pca_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
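Referring back to Question 7: the silhouette scores in that dictionary can be produced with a loop like the one below. This is only a minimal sketch, not the notebook's original cell; it assumes the PCA-reduced customer data is available as `reduced_data`, and it uses `GaussianMixture` (the replacement suggested by the deprecation warnings above) together with `silhouette_score`. Variable names are illustrative.

# Minimal sketch: silhouette scores for several numbers of clusters.
# Assumes `reduced_data` (the PCA-reduced customer data) is defined above.
from sklearn.mixture import GaussianMixture
from sklearn.metrics import silhouette_score

scores = {}
for n_clusters in range(2, 10):
    clusterer = GaussianMixture(n_components=n_clusters, random_state=42)
    clusterer.fit(reduced_data)
    preds_n = clusterer.predict(reduced_data)
    scores[n_clusters] = silhouette_score(reduced_data, preds_n)

best_n = max(scores, key=scores.get)
print("Best number of clusters: {} (score = {:.4f})".format(best_n, scores[best_n]))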
Implementation: Data RecoveryEach cluster present in the visualization above has a central point. These centers (or means) are not specifically data points from the data, but rather the *averages* of all the data points predicted in the respective clusters. For the problem of creating customer segments, a cluster's center point corresponds to *the average customer of that segment*. Since the data is currently reduced in dimension and scaled by a logarithm, we can recover the representative customer spending from these data points by applying the inverse transformations.In the code block below, you will need to implement the following: - Apply the inverse transform to `centers` using `pca.inverse_transform` and assign the new centers to `log_centers`. - Apply the inverse function of `np.log` to `log_centers` using `np.exp` and assign the true centers to `true_centers`.
# TODO: Inverse transform the centers log_centers = pca.inverse_transform(centers) # TODO: Exponentiate the centers true_centers = np.exp(log_centers) # Display the true centers segments = ['Segment {}'.format(i) for i in range(0,len(centers))] true_centers = pd.DataFrame(np.round(true_centers), columns = data.keys()) true_centers.index = segments display(true_centers) data.describe()
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
Question 8 Consider the total purchase cost of each product category for the representative data points above, and reference the statistical description of the dataset at the beginning of this project. *What set of establishments could each of the customer segments represent?* **Hint:** A customer who is assigned to `'Cluster X'` should best identify with the establishments represented by the feature set of `'Segment X'`. **Answer:** Looking at the statistics for segment 0, spending on Fresh is above the median while all other categories are below the median, so segment 0 could represent a fresh-produce market or corner store. For segment 1, all categories except Fresh are close to or above the median, so it more likely represents a convenience store. Question 9 *For each sample point, which customer segment from* ***Question 8*** *best represents it? Are the predictions for each sample point consistent with this?* Run the code block below to find which cluster each sample point is predicted to belong to.
# Display the predictions for i, pred in enumerate(sample_preds): print "Sample point", i, "predicted to be in Cluster", pred display(samples)
Sample point 0 predicted to be in Cluster 0 Sample point 1 predicted to be in Cluster 0 Sample point 2 predicted to be in Cluster 1
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
**Answer:** The third sample point is assigned consistently by both my interpretation and the GMM clustering. The first two are not exactly misclassified, since I had placed them into the two categories supermarket and convenience store, both of which can also be grouped under the broader label of retailer. Conclusion In this final section, you will investigate ways that you can make use of the clustered data. First, you will consider how the different groups of customers, the ***customer segments***, may be affected differently by a specific delivery scheme. Next, you will consider how giving a label to each customer (which *segment* that customer belongs to) can provide for additional features about the customer data. Finally, you will compare the ***customer segments*** to a hidden variable present in the data, to see whether the clustering identified certain relationships. Question 10 Companies will often run [A/B tests](https://en.wikipedia.org/wiki/A/B_testing) when making small changes to their products or services to determine whether making that change will affect its customers positively or negatively. The wholesale distributor is considering changing its delivery service from currently 5 days a week to 3 days a week. However, the distributor will only make this change in delivery service for customers that react positively. *How can the wholesale distributor use the customer segments to determine which customers, if any, would react positively to the change in delivery service?* **Hint:** Can we assume the change affects all customers equally? How can we determine which group of customers it affects the most? **Answer:** Select a small random subset of customers from each cluster. The distributor can then apply the new delivery schedule to these customers and measure their satisfaction on a scale from 0 to 1. Care should be taken that each chosen subset is large enough to be statistically meaningful. If the sampled customers in a segment react positively, the new delivery service can be extended to a larger random sample of that segment and the results cross-validated. The process is repeated until the market goals are achieved. Question 11 Additional structure is derived from originally unlabeled data when using clustering techniques. Since each customer has a ***customer segment*** it best identifies with (depending on the clustering algorithm applied), we can consider *'customer segment'* as an **engineered feature** for the data. Assume the wholesale distributor recently acquired ten new customers and each provided estimates for anticipated annual spending of each product category. Knowing these estimates, the wholesale distributor wants to classify each new customer to a ***customer segment*** to determine the most appropriate delivery service. *How can the wholesale distributor label the new customers using only their estimated product spending and the* ***customer segment*** *data?* **Hint:** A supervised learner could be used to train on the original customers. What would be the target variable? **Answer:** A supervised learning model can be trained to classify customers into the customer segments (e.g. 0: grocer, 1: supermarket, and so on). The target variable would be the segment label generated by the clustering, and the features would be Fresh, Milk, Grocery, Frozen, Detergents_Paper, and Delicatessen. Each new customer's estimated spending can then be fed to the trained classifier to predict its segment; a minimal sketch of this idea appears after the channel visualization below. Visualizing Underlying Distributions At the beginning of this project, it was discussed that the `'Channel'` and `'Region'` features would be excluded from the dataset so that the customer product categories were emphasized in the analysis.
By reintroducing the `'Channel'` feature to the dataset, an interesting structure emerges when considering the same PCA dimensionality reduction applied earlier to the original dataset. Run the code block below to see how each data point is labeled either `'HoReCa'` (Hotel/Restaurant/Cafe) or `'Retail'` in the reduced space. In addition, you will find the sample points are circled in the plot, which will identify their labeling.
# Display the clustering results based on 'Channel' data vs.channel_results(reduced_data, outliers, pca_samples)
_____no_output_____
MIT
customer_segments/customer_segments.ipynb
nk101/Machine-Learning-ND
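Following up on Question 11: below is a minimal sketch of how the engineered segment labels could be used to label new customers with a supervised learner. This is an illustration, not part of the original project. It assumes `reduced_data`, the fitted `pca` object, and the cluster assignments `preds` from the cells above; the choice of a random forest and the spending figures for the new customer are hypothetical.

import numpy as np
from sklearn.ensemble import RandomForestClassifier

# Train a supervised learner with the cluster assignments as the target variable.
clf = RandomForestClassifier(n_estimators=100, random_state=42)
clf.fit(reduced_data, preds)

# Hypothetical annual spending estimates for one new customer
# (one value per product category, in the same column order as `data`).
new_customer = np.array([[12000.0, 3000.0, 4000.0, 2500.0, 500.0, 1000.0]])

# Apply the same preprocessing used for the training data: log-scale, then PCA.
new_customer_reduced = pca.transform(np.log(new_customer))

print("Predicted customer segment: {}".format(clf.predict(new_customer_reduced)[0]))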
SSD Evaluation Tutorial This is a brief tutorial that explains how to compute the average precisions for any trained SSD model using the `Evaluator` class. The `Evaluator` computes the average precisions according to the Pascal VOC pre-2010 or post-2010 detection evaluation algorithms. You can find details about these computation methods [here](http://host.robots.ox.ac.uk/pascal/VOC/voc2012/htmldoc/devkit_doc.html#sec:ap). As an example we'll evaluate an SSD300 on the Pascal VOC 2007 `test` dataset, but note that the `Evaluator` works for any SSD model and any dataset that is compatible with the `DataGenerator`. If you would like to run the evaluation on a different model and/or dataset, the procedure is analogous to what is shown below; you just have to build the appropriate model and load the relevant dataset. Note: In case you would like to evaluate a model on MS COCO, I would recommend following the [MS COCO evaluation notebook](https://github.com/pierluigiferrari/ssd_keras/blob/master/ssd300_evaluation_COCO.ipynb) instead, because it can produce the results format required by the MS COCO evaluation server and uses the official MS COCO evaluation code, which computes the mAP slightly differently from the Pascal VOC method. Note: In case you want to evaluate any of the provided trained models, make sure that you build the respective model with the correct set of scaling factors to reproduce the official results. The models that were trained on MS COCO and fine-tuned on Pascal VOC require the MS COCO scaling factors, not the Pascal VOC scaling factors.
from keras import backend as K from tensorflow.keras.models import load_model from tensorflow.keras.optimizers import Adam from matplotlib.pyplot import imread import numpy as np from matplotlib import pyplot as plt from models.keras_ssd300 import ssd_300 from keras_loss_function.keras_ssd_loss import SSDLoss from keras_layers.keras_layer_AnchorBoxes import AnchorBoxes from keras_layers.keras_layer_DecodeDetections import DecodeDetections from keras_layers.keras_layer_DecodeDetectionsFast import DecodeDetectionsFast from keras_layers.keras_layer_L2Normalization import L2Normalization from data_generator.object_detection_2d_data_generator import DataGenerator from eval_utils.average_precision_evaluator import Evaluator %matplotlib inline # Set a few configuration parameters. img_height = 300 img_width = 300 n_classes = 20 model_mode = 'inference'
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
1. Load a trained SSD Either load a trained model or build a model and load trained weights into it. Since the HDF5 files I'm providing contain only the weights for the various SSD versions, not the complete models, you'll have to go with the latter option when using this implementation for the first time. You can then of course save the model and next time load the full model directly, without having to build it. You can find the download links to all the trained model weights in the README. 1.1. Build the model and load trained weights into it
# 1: Build the Keras model K.clear_session() # Clear previous models from memory. model = ssd_300(image_size=(img_height, img_width, 3), n_classes=n_classes, mode=model_mode, l2_regularization=0.0005, scales=[0.1, 0.2, 0.37, 0.54, 0.71, 0.88, 1.05], # The scales for MS COCO [0.07, 0.15, 0.33, 0.51, 0.69, 0.87, 1.05] aspect_ratios_per_layer=[[1.0, 2.0, 0.5], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5, 3.0, 1.0/3.0], [1.0, 2.0, 0.5], [1.0, 2.0, 0.5]], two_boxes_for_ar1=True, steps=[8, 16, 32, 64, 100, 300], offsets=[0.5, 0.5, 0.5, 0.5, 0.5, 0.5], clip_boxes=False, variances=[0.1, 0.1, 0.2, 0.2], normalize_coords=True, subtract_mean=[123, 117, 104], swap_channels=[2, 1, 0], confidence_thresh=0.01, iou_threshold=0.45, top_k=200, nms_max_output_size=400) # 2: Load the trained weights into the model. # TODO: Set the path of the trained weights. weights_path = 'path/to/trained/weights/VGG_VOC0712_SSD_300x300_ft_iter_120000.h5' model.load_weights(weights_path, by_name=True) # 3: Compile the model so that Keras won't complain the next time you load it. adam = Adam(lr=0.001, beta_1=0.9, beta_2=0.999, epsilon=1e-08, decay=0.0) ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) model.compile(optimizer=adam, loss=ssd_loss.compute_loss)
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
Or 1.2. Load a trained model We set `model_mode` to 'inference' above, so the evaluator expects that you load a model that was built in 'inference' mode. If you're loading a model that was built in 'training' mode, change the `model_mode` parameter accordingly.
# TODO: Set the path to the `.h5` file of the model to be loaded. model_path = 'path/to/trained/model.h5' # We need to create an SSDLoss object in order to pass that to the model loader. ssd_loss = SSDLoss(neg_pos_ratio=3, alpha=1.0) K.clear_session() # Clear previous models from memory. model = load_model(model_path, custom_objects={'AnchorBoxes': AnchorBoxes, 'L2Normalization': L2Normalization, 'DecodeDetections': DecodeDetections, 'compute_loss': ssd_loss.compute_loss})
_____no_output_____
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
2. Create a data generator for the evaluation dataset Instantiate a `DataGenerator` that will serve the evaluation dataset during the prediction phase.
dataset = DataGenerator() # TODO: Set the paths to the dataset here. Pascal_VOC_dataset_images_dir = '../../datasets/VOCdevkit/VOC2007/JPEGImages/' Pascal_VOC_dataset_annotations_dir = '../../datasets/VOCdevkit/VOC2007/Annotations/' Pascal_VOC_dataset_image_set_filename = '../../datasets/VOCdevkit/VOC2007/ImageSets/Main/test.txt' # The XML parser needs to know what object class names to look for and in which order to map them to integers. classes = ['background', 'aeroplane', 'bicycle', 'bird', 'boat', 'bottle', 'bus', 'car', 'cat', 'chair', 'cow', 'diningtable', 'dog', 'horse', 'motorbike', 'person', 'pottedplant', 'sheep', 'sofa', 'train', 'tvmonitor'] dataset.parse_xml(images_dirs=[Pascal_VOC_dataset_images_dir], image_set_filenames=[Pascal_VOC_dataset_image_set_filename], annotations_dirs=[Pascal_VOC_dataset_annotations_dir], classes=classes, include_classes='all', exclude_truncated=False, exclude_difficult=False, ret=False)
test.txt: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4952/4952 [00:13<00:00, 373.84it/s]
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
3. Run the evaluation Now that we have instantiated a model and a data generator to serve the dataset, we can set up the evaluator and run the evaluation.The evaluator is quite flexible: It can compute the average precisions according to the Pascal VOC pre-2010 algorithm, which samples 11 equidistant points of the precision-recall curves, or according to the Pascal VOC post-2010 algorithm, which integrates numerically over the entire precision-recall curves instead of sampling a few individual points. You could also change the number of sampled recall points or the required IoU overlap for a prediction to be considered a true positive, among other things. Check out the `Evaluator`'s documentation for details on all the arguments.In its default settings, the evaluator's algorithm is identical to the official Pascal VOC pre-2010 Matlab detection evaluation algorithm, so you don't really need to tweak anything unless you want to.The evaluator roughly performs the following steps: It runs predictions over the entire given dataset, then it matches these predictions to the ground truth boxes, then it computes the precision-recall curves for each class, then it samples 11 equidistant points from these precision-recall curves to compute the average precision for each class, and finally it computes the mean average precision over all classes.
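To make the pre-2010 sampling method concrete, here is a small self-contained sketch of the 11-point interpolated average precision described above. It is not the `Evaluator`'s actual implementation, just an illustration on a toy precision-recall curve.

import numpy as np

def eleven_point_average_precision(precisions, recalls):
    # Pascal VOC pre-2010 style AP: for each of the 11 equidistant recall levels
    # 0.0, 0.1, ..., 1.0, take the maximum precision over all points whose recall
    # is >= that level, then average these 11 interpolated precisions.
    precisions = np.asarray(precisions)
    recalls = np.asarray(recalls)
    interpolated = []
    for recall_level in np.linspace(0.0, 1.0, 11):
        mask = recalls >= recall_level
        interpolated.append(precisions[mask].max() if mask.any() else 0.0)
    return np.mean(interpolated)

# Toy precision-recall curve, just to illustrate the computation.
toy_precisions = [1.0, 1.0, 0.67, 0.75, 0.6, 0.67, 0.57, 0.5, 0.44, 0.5]
toy_recalls = [0.2, 0.4, 0.4, 0.6, 0.6, 0.8, 0.8, 0.8, 0.8, 1.0]
print("11-point AP on the toy curve: {:.4f}".format(
    eleven_point_average_precision(toy_precisions, toy_recalls)))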
evaluator = Evaluator(model=model, n_classes=n_classes, data_generator=dataset, model_mode=model_mode) results = evaluator(img_height=img_height, img_width=img_width, batch_size=8, data_generator_mode='resize', round_confidences=False, matching_iou_threshold=0.5, border_pixels='include', sorting_algorithm='quicksort', average_precision_mode='sample', num_recall_points=11, ignore_neutral_boxes=True, return_precisions=True, return_recalls=True, return_average_precisions=True, verbose=True) mean_average_precision, average_precisions, precisions, recalls = results
Number of images in the evaluation dataset: 4952 Producing predictions batch-wise: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 619/619 [02:17<00:00, 4.50it/s] Matching predictions to ground truth, class 1/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 7902/7902 [00:00<00:00, 19253.00it/s] Matching predictions to ground truth, class 2/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4276/4276 [00:00<00:00, 23249.07it/s] Matching predictions to ground truth, class 3/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 19126/19126 [00:00<00:00, 28311.89it/s] Matching predictions to ground truth, class 4/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 25291/25291 [00:01<00:00, 21126.87it/s] Matching predictions to ground truth, class 5/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 33520/33520 [00:00<00:00, 34410.41it/s] Matching predictions to ground truth, class 6/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4395/4395 [00:00<00:00, 20824.68it/s] Matching predictions to ground truth, class 7/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 41833/41833 [00:01<00:00, 20956.01it/s] Matching predictions to ground truth, class 8/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2740/2740 [00:00<00:00, 24270.08it/s] Matching predictions to ground truth, class 9/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 91992/91992 [00:03<00:00, 25723.87it/s] Matching predictions to ground truth, class 10/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4085/4085 [00:00<00:00, 23969.80it/s] Matching predictions to ground truth, class 11/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 6912/6912 [00:00<00:00, 26573.85it/s] Matching predictions to ground truth, class 12/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4294/4294 [00:00<00:00, 24942.89it/s] Matching predictions to ground truth, class 13/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 2779/2779 [00:00<00:00, 20814.98it/s] Matching predictions to ground truth, class 14/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 3003/3003 [00:00<00:00, 17807.53it/s] Matching predictions to ground truth, class 15/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 183522/183522 [00:09<00:00, 19243.38it/s] Matching predictions to ground truth, class 16/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 35198/35198 [00:01<00:00, 21565.75it/s] Matching predictions to ground truth, class 17/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10535/10535 [00:00<00:00, 19680.06it/s] Matching predictions to ground truth, class 18/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 4371/4371 [00:00<00:00, 11523.11it/s] Matching predictions to ground truth, class 19/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 5768/5768 [00:00<00:00, 9747.21it/s] Matching predictions to ground truth, class 20/20.: 100%|β–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆβ–ˆ| 10860/10860 [00:00<00:00, 13970.50it/s] Computing precisions and recalls, class 1/20 Computing precisions and recalls, class 2/20 Computing precisions and recalls, class 3/20 Computing precisions and recalls, class 4/20 Computing precisions and recalls, class 5/20 Computing precisions and recalls, class 6/20 Computing precisions and recalls, class 7/20 Computing precisions and recalls, class 8/20 Computing precisions and recalls, class 9/20 Computing precisions and recalls, class 10/20 Computing precisions and recalls, class 11/20 Computing precisions and recalls, class 12/20 Computing precisions and recalls, class 13/20 Computing precisions and recalls, class 14/20 Computing precisions and recalls, class 15/20 Computing precisions and recalls, class 16/20 Computing precisions and recalls, class 17/20 Computing precisions and recalls, class 18/20 Computing precisions and recalls, 
class 19/20 Computing precisions and recalls, class 20/20 Computing average precision, class 1/20 Computing average precision, class 2/20 Computing average precision, class 3/20 Computing average precision, class 4/20 Computing average precision, class 5/20 Computing average precision, class 6/20 Computing average precision, class 7/20 Computing average precision, class 8/20 Computing average precision, class 9/20 Computing average precision, class 10/20 Computing average precision, class 11/20 Computing average precision, class 12/20 Computing average precision, class 13/20 Computing average precision, class 14/20 Computing average precision, class 15/20 Computing average precision, class 16/20 Computing average precision, class 17/20 Computing average precision, class 18/20 Computing average precision, class 19/20 Computing average precision, class 20/20
Apache-2.0
ssd300_evaluation.ipynb
rogeryan/ssd_keras
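A natural follow-up is to report the per-class and mean average precisions returned above. The cell below is a minimal sketch of such a report; it assumes `average_precisions` is indexed by class id (with index 0 reserved for the background class), matching the `classes` list defined earlier.

# Print the average precision for each class and the mean over all classes.
for i in range(1, len(average_precisions)):
    print("{:<14}{:<6}{}".format(classes[i], 'AP', round(average_precisions[i], 3)))
print("{:<14}{:<6}{}".format('', 'mAP', round(mean_average_precision, 3)))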