markdown (string, 0-1.02M chars) | code (string, 0-832k chars) | output (string, 0-1.02M chars) | license (string, 3-36 chars) | path (string, 6-265 chars) | repo_name (string, 6-127 chars)
---|---|---|---|---|---|
Check the input and output dimensions. As a check that your model is working as expected, test out how it responds to input data. | # test that dimensions are as expected
test_rnn = RNN(input_size=1, output_size=1, hidden_dim=10, n_layers=2)
# generate evenly spaced, test data pts
time_steps = np.linspace(0, np.pi, seq_length)
data = np.sin(time_steps)
data.resize((seq_length, 1))
test_input = torch.Tensor(data).unsqueeze(0) # give it a batch_size of 1 as first dimension
print('Input size: ', test_input.size())
# test out rnn sizes
test_out, test_h = test_rnn(test_input, None)
print('Output size: ', test_out.size())
print('Hidden state size: ', test_h.size()) | Input size: torch.Size([1, 20, 1])
Output size: torch.Size([20, 1])
Hidden state size: torch.Size([2, 1, 10])
| MIT | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch |
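The cell above instantiates an `RNN` class and uses a `seq_length` value that are defined in earlier cells not included in this excerpt. A minimal sketch consistent with the printed tensor sizes and model summary might look like the block below; treat it as an assumption rather than the notebook's exact definition.

```python
# Hypothetical reconstruction of the pieces the excerpt assumes (not shown in this dump):
# the RNN module and seq_length. Layer sizes follow the printed model summary; the forward
# pass shape handling is an assumption.
import numpy as np
import torch
from torch import nn
from matplotlib import pyplot as plt

seq_length = 20  # matches the printed input size of torch.Size([1, 20, 1])

class RNN(nn.Module):
    def __init__(self, input_size, output_size, hidden_dim, n_layers):
        super(RNN, self).__init__()
        self.hidden_dim = hidden_dim
        # batch_first=True -> tensors are shaped (batch, seq_len, features)
        self.rnn = nn.RNN(input_size, hidden_dim, n_layers, batch_first=True)
        self.fc = nn.Linear(hidden_dim, output_size)

    def forward(self, x, hidden):
        r_out, hidden = self.rnn(x, hidden)
        # flatten RNN outputs to (batch*seq_len, hidden_dim) before the final layer
        r_out = r_out.contiguous().view(-1, self.hidden_dim)
        return self.fc(r_out), hidden
```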
--- Training the RNN. Next, we'll instantiate an RNN with some specified hyperparameters. Then we'll train it over a series of steps and see how it performs. | # decide on hyperparameters
input_size=1
output_size=1
hidden_dim=32
n_layers=1
# instantiate an RNN
rnn = RNN(input_size, output_size, hidden_dim, n_layers)
print(rnn) | RNN(
(rnn): RNN(1, 32, batch_first=True)
(fc): Linear(in_features=32, out_features=1, bias=True)
)
| MIT | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch |
Loss and Optimization. This is a regression problem: can we train an RNN to accurately predict the next data point, given a current data point? * The data points are coordinate values, so to compare a predicted and ground-truth point, we'll use a regression loss: the mean squared error. * It's typical to use an Adam optimizer for recurrent models. | # MSE loss and Adam optimizer with a learning rate of 0.01
criterion = nn.MSELoss()
optimizer = torch.optim.Adam(rnn.parameters(), lr=0.01) | _____no_output_____ | MIT | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch |
Defining the training function. This function takes in an rnn, a number of steps to train for, and returns a trained rnn. This function is also responsible for displaying the loss and the predictions every so often. Hidden State: Pay close attention to the hidden state here: * Before looping over a batch of training data, the hidden state is initialized. * After a new hidden state is generated by the rnn, we get the latest hidden state and use that as input to the rnn for the following steps. | # train the RNN
def train(rnn, n_steps, print_every):
# initialize the hidden state
hidden = None
for batch_i, step in enumerate(range(n_steps)):
# defining the training data
time_steps = np.linspace(step * np.pi, (step+1)*np.pi, seq_length + 1)
data = np.sin(time_steps)
data.resize((seq_length + 1, 1)) # input_size=1
x = data[:-1]
y = data[1:]
# convert data into Tensors
x_tensor = torch.Tensor(x).unsqueeze(0) # unsqueeze gives a 1, batch_size dimension
y_tensor = torch.Tensor(y)
# outputs from the rnn
prediction, hidden = rnn(x_tensor, hidden)
## Representing Memory ##
# make a new variable for hidden and detach the hidden state from its history
# this way, we don't backpropagate through the entire history
hidden = hidden.data
# calculate the loss
loss = criterion(prediction, y_tensor)
# zero gradients
optimizer.zero_grad()
# perform backprop and update weights
loss.backward()
optimizer.step()
# display loss and predictions
if batch_i%print_every == 0:
print('Loss: ', loss.item())
plt.plot(time_steps[1:], x, 'r.') # input
plt.plot(time_steps[1:], prediction.data.numpy().flatten(), 'b.') # predictions
plt.show()
return rnn
# train the rnn and monitor results
n_steps = 75
print_every = 15
trained_rnn = train(rnn, n_steps, print_every) | C:\Users\johnj\miniconda3\lib\site-packages\torch\autograd\__init__.py:145: UserWarning: CUDA initialization: CUDA driver initialization failed, you might not have a CUDA gpu. (Triggered internally at ..\c10\cuda\CUDAFunctions.cpp:109.)
Variable._execution_engine.run_backward(
| MIT | recurrent-neural-networks/time-series/Simple_RNN.ipynb | johnsonjoseph37/deep-learning-v2-pytorch |
Suppose we observe a star in the sky and measure its photon flux. We assume the flux is constant in time and equal to $F_{\mathtt{true}}$. We take $N$ observations, measuring the flux $F_i$ and the error $e_i$. The detection of a photon is an independent event that follows a Poisson distribution. From the variance of the Poisson distribution we compute the error $e_i=\sqrt{F_i}$ | N=100
F_true=1000.
F=np.random.poisson(F_true*np.ones(N))
e=np.sqrt(F)
plt.errorbar(np.arange(N),F,yerr=e, fmt='ok', ecolor='gray', alpha=0.5)
plt.hlines(np.mean(F),0,N,linestyles='--')
plt.hlines(F_true,0,N)
print(np.mean(F), np.mean(F)-F_true, np.std(F))
ax=seaborn.distplot(F,bins=N//3)
xx=np.linspace(F.min(),F.max())
gaus=np.exp(-0.5*((xx-F_true)/np.std(F))**2)/np.sqrt(2.*np.pi*np.std(F)**2)
ax.plot(xx,gaus) | _____no_output_____ | MIT | Untitled.ipynb | Mixpap/astrostatistics |
Our first approach is through maximizing the likelihood. Given the data $D_i=(F_i,e_i)$ we can compute the probability of having observed them given the true value $F_{\mathtt{true}}$, assuming the errors are Gaussian:$$P(D_i|F_{\mathtt{true}})=\frac{1}{\sqrt{2\pi e_i^2}}e^{-\frac{(F_i-F_{\mathtt{true}})^2}{2e_i^2}}$$We define the likelihood function as the product of the probabilities of every point:$$L(D|F_{\mathtt{true}})=\prod _{i=1}^N P(D_i|F_{\mathtt{true}})$$Because the value of the likelihood function can become very small, it is easier to work with its logarithm:$$\log L = -\frac{1}{2} \sum _{i=1}^N \big[ \log(2\pi e_i^2) + \frac{(F_i-F_\mathtt{true})^2}{e_i^2} \big]$$ | #xx=np.linspace(0,10,5000)
xx=np.ones(1000)
#seaborn.distplot(np.random.poisson(xx),kde=False)
plt.hist(np.random.poisson(xx))
w = 1. / e ** 2
print("""
F_true = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, (w * F).sum() / w.sum(), w.sum() ** -0.5, N))
np.sum(((F-F.mean())/F.std())**2)/(N-1)
def log_prior(theta):
return 1 # flat prior
def log_likelihood(theta, F, e):
return -0.5 * np.sum(np.log(2 * np.pi * e ** 2)
+ (F - theta[0]) ** 2 / e ** 2)
def log_posterior(theta, F, e):
return log_prior(theta) + log_likelihood(theta, F, e)
ndim = 1 # number of parameters in the model
nwalkers = 100 # number of MCMC walkers
nburn = 1000 # "burn-in" period to let chains stabilize
nsteps = 5000 # number of MCMC steps to take
# we'll start at random locations between 0 and 2000
starting_guesses = 20 * np.random.rand(nwalkers, ndim)
import emcee
sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=[F, e])
sampler.run_mcmc(starting_guesses, nsteps)
sample = sampler.chain # shape = (nwalkers, nsteps, ndim)
sample = sampler.chain[:, nburn:, :].ravel() # discard burn-in points
sampler.chain[0]
# plot a histogram of the sample
plt.hist(sample, bins=50, histtype="stepfilled", alpha=0.3, normed=True)
# plot a best-fit Gaussian
F_fit = np.linspace(F.min(),F.max(),500)
pdf = stats.norm(np.mean(sample), np.std(sample)).pdf(F_fit)
#plt.plot(F_fit, pdf, '-k')
plt.xlabel("F"); plt.ylabel("P(F)")
print("""
F_true = {0}
F_est = {1:.0f} +/- {2:.0f} (based on {3} measurements)
""".format(F_true, np.mean(sample), np.std(sample), N)) |
F_true = 1000.0
F_est = 997 +/- 3 (based on 100 measurements)
| MIT | Untitled.ipynb | Mixpap/astrostatistics |
Load MNIST Data | # MNIST dataset downloaded from Kaggle :
#https://www.kaggle.com/c/digit-recognizer/data
# Functions to read and show images.
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
d0 = pd.read_csv('./mnist_train.csv')
print(d0.head(5)) # print first five rows of d0.
# save the labels into a variable l.
l = d0['label']
# Drop the label feature and store the pixel data in d.
d = d0.drop("label",axis=1)
print(d.shape)
print(l.shape)
# display or plot a number.
plt.figure(figsize=(7,7))
idx = 1
grid_data = d.iloc[idx].to_numpy().reshape(28,28) # reshape from 1d to 2d pixel array
plt.imshow(grid_data, interpolation = "none", cmap = "gray")
plt.show()
print(l[idx]) | _____no_output_____ | Apache-2.0 | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml |
2D Visualization using PCA | # Pick first 15K data-points to work on for time-effeciency.
#Exercise: Perform the same analysis on all 42K data-points.
labels = l.head(15000)
data = d.head(15000)
print("the shape of sample data = ", data.shape)
# Data-preprocessing: Standardizing the data
from sklearn.preprocessing import StandardScaler
standardized_data = StandardScaler().fit_transform(data)
print(standardized_data.shape)
#find the co-variance matrix which is : A^T * A
sample_data = standardized_data
# matrix multiplication using numpy
covar_matrix = np.matmul(sample_data.T , sample_data)
print ( "The shape of variance matrix = ", covar_matrix.shape)
# finding the top two eigen-values and corresponding eigen-vectors
# for projecting onto a 2-Dim space.
from scipy.linalg import eigh
# the parameter 'eigvals' is defined (low value to high value)
# eigh function will return the eigen values in ascending order
# this code generates only the top 2 (782 and 783) eigenvalues.
values, vectors = eigh(covar_matrix, eigvals=(782,783))
print("Shape of eigen vectors = ",vectors.shape)
# converting the eigen vectors into (2,d) shape for ease of further computations
vectors = vectors.T
print("Updated shape of eigen vectors = ",vectors.shape)
# here the vectors[1] represent the eigen vector corresponding 1st principal eigen vector
# here the vectors[0] represent the eigen vector corresponding 2nd principal eigen vector
# projecting the original data sample on the plane
#formed by two principal eigen vectors by vector-vector multiplication.
import matplotlib.pyplot as plt
new_coordinates = np.matmul(vectors, sample_data.T)
print (" resultant new data points' shape ", vectors.shape, "X", sample_data.T.shape," = ", new_coordinates.shape)
import pandas as pd
# appending label to the 2d projected data
new_coordinates = np.vstack((new_coordinates, labels)).T
# creating a new data frame for plotting the labeled points.
dataframe = pd.DataFrame(data=new_coordinates, columns=("1st_principal", "2nd_principal", "label"))
print(dataframe.head())
# plotting the 2d data points with seaborn
import seaborn as sn
sn.FacetGrid(dataframe, hue="label", size=6).map(plt.scatter, '1st_principal', '2nd_principal').add_legend()
plt.show() | _____no_output_____ | Apache-2.0 | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml |
PCA using Scikit-Learn | # initializing the pca
from sklearn import decomposition
pca = decomposition.PCA()
# configuring the parameters
# the number of components = 2
pca.n_components = 2
pca_data = pca.fit_transform(sample_data)
# pca_reduced will contain the 2-d projections of the sample data
print("shape of pca_reduced.shape = ", pca_data.shape)
# attaching the label for each 2-d data point
pca_data = np.vstack((pca_data.T, labels)).T
# creating a new data frame which helps us in plotting the result data
pca_df = pd.DataFrame(data=pca_data, columns=("1st_principal", "2nd_principal", "label"))
sn.FacetGrid(pca_df, hue="label", size=6).map(plt.scatter, '1st_principal', '2nd_principal').add_legend()
plt.show() | _____no_output_____ | Apache-2.0 | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml |
PCA for dimensionality reduction (not for visualization) | # PCA for dimensionality reduction (non-visualization)
pca.n_components = 784
pca_data = pca.fit_transform(sample_data)
percentage_var_explained = pca.explained_variance_ / np.sum(pca.explained_variance_);
cum_var_explained = np.cumsum(percentage_var_explained)
# Plot the PCA spectrum
plt.figure(1, figsize=(6, 4))
plt.clf()
plt.plot(cum_var_explained, linewidth=2)
plt.axis('tight')
plt.grid()
plt.xlabel('n_components')
plt.ylabel('Cumulative_explained_variance')
plt.show()
# If we take 200-dimensions, approx. 90% of variance is explained. | _____no_output_____ | Apache-2.0 | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml |
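As a small follow-up (not part of the original notebook), the `cum_var_explained` array computed above can also be used to find how many components are needed to reach roughly 90% explained variance:

```python
import numpy as np

# First index where the cumulative explained variance crosses 0.90
# (assumes cum_var_explained from the cell above is still in scope).
n_components_90 = int(np.argmax(cum_var_explained >= 0.90) + 1)
print("Components needed for ~90% variance:", n_components_90)
```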
t-SNE using Scikit-Learn | # TSNE
from sklearn.manifold import TSNE
# Picking the top 1000 points as TSNE takes a lot of time for 15K points
data_1000 = standardized_data[0:1000,:]
labels_1000 = labels[0:1000]
model = TSNE(n_components=2, random_state=0)
# configuring the parameters
# the number of components = 2
# default perplexity = 30
# default learning rate = 200
# default Maximum number of iterations for the optimization = 1000
tsne_data = model.fit_transform(data_1000)
# creating a new data frame which helps us in plotting the result data
tsne_data = np.vstack((tsne_data.T, labels_1000)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))
# Plotting the result of tsne
sn.FacetGrid(tsne_df, hue="label", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.show()
model = TSNE(n_components=2, random_state=0, perplexity=50)
tsne_data = model.fit_transform(data_1000)
# creating a new data frame which helps us in plotting the result data
tsne_data = np.vstack((tsne_data.T, labels_1000)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))
# Plotting the result of tsne
sn.FacetGrid(tsne_df, hue="label", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.title('With perplexity = 50')
plt.show()
model = TSNE(n_components=2, random_state=0, perplexity=50, n_iter=5000)
tsne_data = model.fit_transform(data_1000)
# creating a new data frame which helps us in plotting the result data
tsne_data = np.vstack((tsne_data.T, labels_1000)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))
# Plotting the result of tsne
sn.FacetGrid(tsne_df, hue="label", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.title('With perplexity = 50, n_iter=5000')
plt.show()
model = TSNE(n_components=2, random_state=0, perplexity=2)
tsne_data = model.fit_transform(data_1000)
# creating a new data frame which helps us in plotting the result data
tsne_data = np.vstack((tsne_data.T, labels_1000)).T
tsne_df = pd.DataFrame(data=tsne_data, columns=("Dim_1", "Dim_2", "label"))
# Plotting the result of tsne
sn.FacetGrid(tsne_df, hue="label", size=6).map(plt.scatter, 'Dim_1', 'Dim_2').add_legend()
plt.title('With perplexity = 2')
plt.show()
#Exercise: Run the same analysis using 42K points with various
#values of perplexity and iterations.
# If you use all of the points, you can expect plots like this blog below:
# http://colah.github.io/posts/2014-10-Visualizing-MNIST/ | _____no_output_____ | Apache-2.0 | 2.assign_amzn_fine_food_review_tsne/pca_tsne_mnist.ipynb | be-shekhar/learning-ml |
APIs and data Catherine Devlin (@catherinedevlin)Innovation Specialist, 18FOakwood High School, Feb 16 2017 Who am I?(hint: not Jean Valjean) Cool things I've done- Chemical engineer in college- Oops, became a programmer- Created IPython `%sql` magic [Dayton Dynamic Languages](http://d8ndl.org) PyOhio [18F](18f.gsa.gov) <img src="https://18f.gsa.gov/assets/img/logos/18f-logo.svg" alt="18F logo" width="30%" />"Digital startup" within the Federal governmentIt's like college!Much of what I'm teaching you I've learned the last couple years... some of it last week Federal Election Commission [Old site](http://www.fec.gov/)[New site](https://beta.fec.gov/)User research & best practicesLet's look up our Representative API  Webpage vs. API FEC API https://api.open.fec.gov/developers/Every API works differently.Let's find the committee ID for our Congressional representative.C00373001 https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20 - Knife- Cheese grater- Vegetable peeler- Apple corer- Food processor  `requests` libraryFirst, we install. That's like buying it. | !pip install requests | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
Then, we import. That's like getting it out of the cupboard. | import requests | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
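The cells below wrap their HTTP calls in `offline.use_cassette(...)`, but the `offline` object itself is defined in a cell that isn't shown in this excerpt. Assuming it is a vcrpy recorder used to replay saved responses (an assumption, not confirmed by the notebook), a minimal setup sketch would be:

```python
# Hypothetical setup for the `offline` recorder used below (vcrpy assumed).
import vcr

offline = vcr.VCR()  # e.g. vcr.VCR(cassette_library_dir='.') to control where cassettes live
```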
Oakwood High School | with offline.use_cassette('offline.vcr'):
response = requests.get('http://ohs.oakwoodschools.org/pages/Oakwood_High_School')
response.ok
response.status_code
print(response.text) | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
We have backed our semi up to the front door.OK, back to checking out politicians. | url = 'https://api.open.fec.gov/v1/committee/C00373001/totals/?page=1&api_key=DEMO_KEY&sort=-cycle&per_page=20'
with offline.use_cassette('offline.vcr'):
response = requests.get(url)
response.ok
response.status_code
response.json()
response.json()['results']
results = response.json()['results']
results[0]['cycle']
results[0]['disbursements']
for result in results:
print(result['cycle'])
for result in results:
year = result['cycle']
spent = result['disbursements']
print('year: {}\t spent:{}'.format(year, spent))
| _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
[Pandas](http://pandas.pydata.org/) | !pip install pandas
import pandas as pd
data = pd.DataFrame(response.json()['results'])
data
data = data.set_index('cycle')
data
data['disbursements']
data[data['disbursements'] < 1000000 ] | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
[Bokeh](http://bokeh.pydata.org/en/latest/) | !pip install bokeh
from bokeh.charts import Bar, show, output_notebook
by_year = Bar(data, values='disbursements')
output_notebook()
show(by_year) | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
Playtime[so many options](http://bokeh.pydata.org/en/latest/docs/user_guide/charts.html)- Which column to map?- Colors or styles?- Scatter- Better y-axis label?- Some other candidate committee? - Portman C00458463, Brown C00264697- Filter it Where's it coming from?https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016 | url = 'https://api.open.fec.gov/v1/committee/C00373001/schedules/schedule_a/by_state/?per_page=20&api_key=DEMO_KEY&page=1&cycle=2016'
with offline.use_cassette('offline.vcr'):
response = requests.get(url)
results = response.json()['results']
data = pd.DataFrame(results)
data
data = data.set_index('state')
by_state = Bar(data, values='total')
show(by_state) | _____no_output_____ | CC0-1.0 | presentation_vcr.ipynb | catherinedevlin/code-org-apis-data |
[Diabetes dataset](https://scikit-learn.org/stable/datasets/toy_dataset.html#diabetes-dataset)---------------- | import pandas as pd
from sklearn import datasets
diabetes = datasets.load_diabetes()
print(diabetes['DESCR'])
# Convert the data to a pandas dataframe
df = pd.DataFrame(diabetes.data, columns=diabetes.feature_names)
df['diabetes'] = diabetes.target
df.head() | _____no_output_____ | MIT | ejercicios/reg-toy-diabetes.ipynb | joseluisGA/videojuegos |
Random Forest. Application of a random forest to a poker hand. ***Dataset:*** https://archive.ics.uci.edu/ml/datasets/Poker+Hand ***Presentation:*** https://docs.google.com/presentation/d/1zFS4cTf9xwvcVPiCOA-sV_RFx_UeoNX2dTthHkY9Am4/edit | import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
from sklearn.neural_network import MLPClassifier
from sklearn.utils import column_or_1d
from sklearn.linear_model import LogisticRegression
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.preprocessing import OneHotEncoder
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import confusion_matrix
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import cross_val_score
import seaborn as sn
import timeit
from format import format_poker_data
train_data = pd.read_csv('train.csv')
test_data = pd.read_csv('test.csv')
X_train, y_train = np.split(train_data,[-1],axis=1)
test_data = test_data.dropna()
X_test, y_test = np.split(test_data,[-1],axis=1)
start_time = timeit.default_timer()
X_train , equal_suit_train = format_poker_data(X_train)
elapsed = timeit.default_timer() - start_time
print(str(elapsed)+" ns")
X_test , equal_suit_test = format_poker_data(X_test)
rf = RandomForestClassifier(n_estimators=50,random_state=42)
rf2 = RandomForestClassifier(n_estimators=50,random_state=42)
y_train = column_or_1d(y_train)
y_test = column_or_1d(y_test)
rf.fit(X_train,y_train)
rf.score(X_train,y_train)
rf.score(X_test,y_test)
n_data_train = pd.DataFrame()
n_data_train['predict'] = rf.predict(X_train)
n_data_train['is_the_same'] = equal_suit_train
n_data_train.shape
n_data_test = pd.DataFrame()
n_data_test['predict'] = rf.predict(X_test)
n_data_test['is_the_same'] = equal_suit_test
n_data_train.head()
n_data_train = pd.get_dummies(n_data_train,columns=['predict']).astype('bool')
n_data_test = pd.get_dummies(n_data_test,columns=['predict']).astype('bool')
rf2.fit(n_data_train,y_train)
rf2.score(n_data_train,y_train)
rf2.score(n_data_test,y_test)
#Confusion Matrix for Test Data
conf_array_test = confusion_matrix(y_test,rf2.predict(n_data_test))
conf_array_test = conf_array_test / conf_array_test.astype(np.float).sum(axis=1)
df_class_test = pd.DataFrame(conf_array_test, range(10),range(10))
sn.set(font_scale=0.7)#for label size
sn.heatmap(df_class_test,annot=True)# font size
| _____no_output_____ | MIT | RandomForest.ipynb | AM-2018-2-dusteam/ML-poker |
Description This task is to do an exploratory data analysis on the balance-scale dataset Data Set Information This data set was generated to model psychological experimental results. Each example is classified as having the balance scale tip to the right, tip to the left, or be balanced. The attributes are the left weight, the left distance, the right weight, and the right distance. The correct way to find the class is the greater of (left-distance * left-weight) and (right-distance * right-weight). If they are equal, it is balanced. Attribute Information: 1. Class Name: 3 (L, B, R) 2. Left-Weight: 5 (1, 2, 3, 4, 5) 3. Left-Distance: 5 (1, 2, 3, 4, 5) 4. Right-Weight: 5 (1, 2, 3, 4, 5) 5. Right-Distance: 5 (1, 2, 3, 4, 5) | #importing libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
#reading the data
data=pd.read_csv('balance-scale.data')
#shape of the data
data.shape
#first five rows of the data
data.head()
#Generating the x values
x=data.drop(['Class'],axis=1)
x.head()
#Generating the y values
y=data['Class']
y.head()
#Checking for any null data in x
x.isnull().any()
#Checking for any null data in y
y.isnull().any()
#Adding left and right torque as a new data frame
x1=pd.DataFrame()
x1['LT']=x['LW']*x['LD']
x1['RT']=x['RW']*x['RD']
x1.head()
#Converting the results of the "Class" attribute, i.e., Balanced(B), Left(L) and Right(R), to numerical values for computation in sklearn
y=y.map(dict(B=0,L=1,R=2))
y.head() | _____no_output_____ | MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
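As a quick sanity check of the class rule described above (the weights and distances here are made-up values, not rows from the dataset), the torque comparison can be written directly:

```python
# Hypothetical example: left weight/distance vs. right weight/distance.
LW, LD, RW, RD = 2, 3, 1, 4
left_torque, right_torque = LW * LD, RW * RD   # 6 vs. 4

if left_torque > right_torque:
    print('L')   # scale tips left
elif right_torque > left_torque:
    print('R')   # scale tips right
else:
    print('B')   # balanced
```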
Using the Weight and Distance parameters. Splitting the data set in a 70:30 ratio using the built-in 'train_test_split' function in sklearn to get a better idea of the accuracy of the model. | from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(x,y,stratify=y, test_size=0.3, random_state=2)
X_train.describe()
#Importing decision tree classifier and creating it's object
from sklearn.tree import DecisionTreeClassifier
clf= DecisionTreeClassifier()
clf.fit(X_train,y_train)
y_pred=clf.predict(X_test)
from sklearn.metrics import accuracy_score
accuracy_score(y_test,y_pred) | _____no_output_____ | MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
We observe that the accuracy score is pretty low. Thus, we need to find optimal parameters to get the best accuracy. We do that by using GridSearchCV. | #Using GridSearchCV to find the maximum optimal depth
from sklearn.model_selection import GridSearchCV
tree_para={"criterion":["gini","entropy"], "max_depth":[3,4,5,6,7,8,9,10,11,12]}
dt_model_grid= GridSearchCV(DecisionTreeClassifier(random_state=3),tree_para, cv=10)
dt_model_grid.fit(X_train,y_train)
# To print the optimum parameters computed by GridSearchCV required for best accuracy score
dt_model=dt_model_grid.best_estimator_
print(dt_model)
#To find the best accuracy score for all possible combinations of parameters provided
dt_model_grid.best_score_
dt_model_grid.best_params_
#Scoring the model
from sklearn.metrics import classification_report
y_pred1=dt_model.predict(X_test)
print(classification_report(y_test,y_pred1,target_names=["Balanced","Left","Right"]))
from sklearn import tree
!pip install graphviz
#Plotting the Tree
from sklearn.tree import export_graphviz
export_graphviz(
dt_model,
out_file=("model1.dot"),
feature_names=["Left Weight","Left Distance","Right Weight","Right Distance"],
class_names=["Balanced","Left","Right"],
filled=True)
#Run this to print png
# !dot -Tpng model1.dot -o model1.png
| _____no_output_____ | MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
Using the created Torque | dt_model2 = DecisionTreeClassifier(random_state=31)
X_train, X_test, y_train, y_test= train_test_split(x1,y, stratify=y, test_size=0.3, random_state=8)
X_train.head()
X_train.shape
dt_model2.fit(X_train, y_train)
y_pred2= dt_model2.predict(X_test)
print(classification_report(y_test, y_pred2, target_names=["Balanced","Left","Right"]))
#Plotting the Tree
from sklearn.tree import export_graphviz
export_graphviz(
dt_model2,
out_file=("model2.dot"),
feature_names=["Left Torque", "Right Torque"],
class_names=["Balanced","Left","Right"],
filled=True)
# run this to make png
# dot -Tpng model2.dot -o model2.png | _____no_output_____ | MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
Increasing the optimization After observing the trees, we conclude that differences are not being taken into account. Hence, we add the differences attribute to try and increase the accuracy. | x1['Diff']= x1['LT']- x1['RT']
x1.head()
X_train, X_test, y_train, y_test =train_test_split(x1,y, stratify=y, test_size=0.3,random_state=40)
dt_model3= DecisionTreeClassifier(random_state=40)
dt_model3.fit(X_train, y_train)
#Create Classification Report
y_pred3= dt_model3.predict(X_test)
print(classification_report(y_test, y_pred3, target_names=["Balanced", "Left", "Right"]))
#Plotting the tree
from sklearn.tree import export_graphviz
export_graphviz(
dt_model3,
out_file=("model3.dot"),
feature_names=["Left Torque","Right Torque","Difference"],
class_names=["Balanced","Left","Right"],
filled=True)
# run this to make png
# dot -Tpng model3.dot -o model3.png
from sklearn.metrics import accuracy_score
accuracy_score(y_pred3,y_test) | _____no_output_____ | MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
Final Conclusion The model returns a perfect accuracy score as desired. | !pip install seaborn
| Collecting seaborn
Downloading seaborn-0.11.2-py3-none-any.whl (292 kB)
Requirement already satisfied: numpy>=1.15 in c:\python39\lib\site-packages (from seaborn) (1.21.2)
Requirement already satisfied: scipy>=1.0 in c:\python39\lib\site-packages (from seaborn) (1.7.1)
Requirement already satisfied: matplotlib>=2.2 in c:\python39\lib\site-packages (from seaborn) (3.4.3)
Requirement already satisfied: pandas>=0.23 in c:\python39\lib\site-packages (from seaborn) (1.3.3)
Requirement already satisfied: pyparsing>=2.2.1 in c:\python39\lib\site-packages (from matplotlib>=2.2->seaborn) (2.4.7)
Requirement already satisfied: cycler>=0.10 in c:\python39\lib\site-packages (from matplotlib>=2.2->seaborn) (0.10.0)
Requirement already satisfied: pillow>=6.2.0 in c:\python39\lib\site-packages (from matplotlib>=2.2->seaborn) (8.3.2)
Requirement already satisfied: python-dateutil>=2.7 in c:\python39\lib\site-packages (from matplotlib>=2.2->seaborn) (2.8.2)
Requirement already satisfied: kiwisolver>=1.0.1 in c:\python39\lib\site-packages (from matplotlib>=2.2->seaborn) (1.3.2)
Requirement already satisfied: six in c:\python39\lib\site-packages (from cycler>=0.10->matplotlib>=2.2->seaborn) (1.16.0)
Requirement already satisfied: pytz>=2017.3 in c:\python39\lib\site-packages (from pandas>=0.23->seaborn) (2021.1)
Installing collected packages: seaborn
Successfully installed seaborn-0.11.2
| MIT | AnushkaProject/Balance Scale Decision Tree.ipynb | Sakshat682/BalanceDataProject |
3.10 Concise Implementation of the Multilayer Perceptron | import torch
from torch import nn
from torch.nn import init
import numpy as np
import sys
sys.path.append("..")
import d2lzh_pytorch as d2l
print(torch.__version__) | 0.4.1
| Apache-2.0 | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch |
3.10.1 Defining the Model | num_inputs, num_outputs, num_hiddens = 784, 10, 256
net = nn.Sequential(
d2l.FlattenLayer(),
nn.Linear(num_inputs, num_hiddens),
nn.ReLU(),
nn.Linear(num_hiddens, num_outputs),
)
for params in net.parameters():
init.normal_(params, mean=0, std=0.01) | _____no_output_____ | Apache-2.0 | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch |
3.10.2 Reading the Data and Training the Model | batch_size = 256
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
loss = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(net.parameters(), lr=0.5)
num_epochs = 5
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, batch_size, None, None, optimizer) | epoch 1, loss 0.0031, train acc 0.703, test acc 0.757
epoch 2, loss 0.0019, train acc 0.824, test acc 0.822
epoch 3, loss 0.0016, train acc 0.845, test acc 0.825
epoch 4, loss 0.0015, train acc 0.855, test acc 0.811
epoch 5, loss 0.0014, train acc 0.865, test acc 0.846
| Apache-2.0 | code/chapter03_DL-basics/3.10_mlp-pytorch.ipynb | fizzyelf-es/Dive-into-DL-PyTorch |
Summarizing Emails using Machine Learning: Data Wrangling. Table of Contents: 1. Imports & Initialization 2. Data Input A. Enron Email Dataset B. BC3 Corpus 3. Preprocessing A. Data Cleaning B. Sentence Cleaning C. Tokenizing 4. Store Data A. Locally as pickle B. Into database 5. Data Exploration A. Enron Emails B. BC3 Corpus The goal of this notebook is to clean both the Enron Email and BC3 Corpus data sets to perform email text summarization. The BC3 Corpus contains human summarizations that can be used to calculate ROUGE metrics to better understand how accurate the summarizations are. The Enron dataset is far more comprehensive, but lacks summaries to test against. You can find the text summarization notebook that uses the preprocessed data [here.](https://github.com/dailykirt/ML_Enron_email_summary/blob/master/notebooks/Text_rank_summarization.ipynb) A visual summary of the preprocessing steps is in the figure below. 1. Imports & Initialization | import sys
from os import listdir
from os.path import isfile, join
import configparser
from sqlalchemy import create_engine
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import email
import mailparser
import xml.etree.ElementTree as ET
from talon.signature.bruteforce import extract_signature
from nltk.tokenize import word_tokenize, sent_tokenize
from nltk.corpus import stopwords
import re
import dask.dataframe as dd
from distributed import Client
import multiprocessing as mp
#Set local location of emails.
mail_dir = '../data/maildir/'
#mail_dir = '../data/testdir/' | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
2. Data Input A. Enron Email Dataset. The raw Enron email dataset contains a maildir directory with folders separated by employee, which contain the emails. The following processes the raw text of each email into a dask dataframe with the following columns: Employee: The username of the email owner. Body: Cleaned body of the email. Subject: The title of the email. From: The original sender of the email. Message-ID: Used to remove duplicate emails, as each email has a unique ID. Chain: The parsed out email chain from an email that was forwarded. Signature: The extracted signature from the body. Date: Time the email was sent. All of the Enron emails were sent using the Multipurpose Internet Mail Extensions 1.0 (MIME) format. Keeping this in mind helps find the correct libraries and methods to clean the emails in a standardized fashion. | def process_email(index):
'''
This function splits a raw email into constituent parts that can be used as features.
'''
email_path = index[0]
employee = index[1]
folder = index[2]
mail = mailparser.parse_from_file(email_path)
full_body = email.message_from_string(mail.body)
#Only retrieve the body of the email.
if full_body.is_multipart():
return
else:
mail_body = full_body.get_payload()
split_body = clean_body(mail_body)
headers = mail.headers
#Reformating date to be more pandas readable
date_time = process_date(headers.get('Date'))
email_dict = {
"employee" : employee,
"email_folder": folder,
"message_id": headers.get('Message-ID'),
"date" : date_time,
"from" : headers.get('From'),
"subject": headers.get('Subject'),
"body" : split_body['body'],
"chain" : split_body['chain'],
"signature": split_body['signature'],
"full_email_path" : email_path #for debug purposes.
}
#Append row to dataframe.
return email_dict
def clean_body(mail_body):
'''
This extracts both the email signature, and the forwarding email chain if it exists.
'''
delimiters = ["-----Original Message-----","To:","From"]
#Trying to split string by biggest delimiter.
old_len = sys.maxsize
for delimiter in delimiters:
split_body = mail_body.split(delimiter,1)
new_len = len(split_body[0])
if new_len <= old_len:
old_len = new_len
final_split = split_body
#Then pull chain message
if (len(final_split) == 1):
mail_chain = None
else:
mail_chain = final_split[1]
#The following uses Talon to try to get a clean body, and seperate out the rest of the email.
clean_body, sig = extract_signature(final_split[0])
return {'body': clean_body, 'chain' : mail_chain, 'signature': sig}
def process_date(date_time):
'''
Converts the MIME date format to a more pandas friendly type.
'''
try:
date_time = email.utils.format_datetime(email.utils.parsedate_to_datetime(date_time))
except:
date_time = None
return date_time
def generate_email_paths(mail_dir):
'''
Given a mail directory, this will generate the file paths to each email in each inbox.
'''
mailboxes = listdir(mail_dir)
for mailbox in mailboxes:
inbox = listdir(mail_dir + mailbox)
for folder in inbox:
path = mail_dir + mailbox + "/" + folder
emails = listdir(path)
for single_email in emails:
full_path = path + "/" + single_email
if isfile(full_path): #Skip directories.
yield (full_path, mailbox, folder)
#Use multiprocessing to speed up initial data load and processing. Also helps partition DASK dataframe.
try:
cpus = mp.cpu_count()
except NotImplementedError:
cpus = 2
pool = mp.Pool(processes=cpus)
print("CPUS: " + str(cpus))
indexes = generate_email_paths(mail_dir)
enron_email_df = pool.map(process_email,indexes)
#Remove Nones from the list
enron_email_df = [i for i in enron_email_df if i]
enron_email_df = pd.DataFrame(enron_email_df)
enron_email_df.describe() | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
B. BC3 Corpus. This dataset is split into two XML files. One contains the original emails split line by line, and the other contains the summarizations created by the annotators. Each email may have several summarizations from different annotators, and summarizations may also span several emails. This will create a data frame for both XML files, then join them together using the thread number in combination with the email number for a single final dataframe. The first dataframe will contain the wrangled original emails with the following information: Listno: Thread identifier. Email_num: Email in thread sequence. From: The original sender of the email. To: The recipient of the email. Received: Time the email was received. Subject: Title of the email. Body: Original body. | def parse_bc3_emails(root):
'''
This adds every BC3 email to a newly created dataframe.
'''
BC3_email_list = []
#The emails are seperated by threads.
for thread in root:
email_num = 0
#Iterate through the thread elements <name, listno, Doc>
for thread_element in thread:
#Getting the listno allows us to link the summaries to the correct emails
if thread_element.tag == "listno":
listno = thread_element.text
#Each Doc element is a single email
if thread_element.tag == "DOC":
email_num += 1
email_metadata = []
for email_attribute in thread_element:
#If the email_attri is text, then each child contains a line from the body of the email
if email_attribute.tag == "Text":
email_body = ""
for sentence in email_attribute:
email_body += sentence.text
else:
#The attributes of the Email <Recieved, From, To, Subject, Text> appends in this order.
email_metadata.append(email_attribute.text)
#Use same enron cleaning methods on the body of the email
split_body = clean_body(email_body)
email_dict = {
"listno" : listno,
"date" : process_date(email_metadata[0]),
"from" : email_metadata[1],
"to" : email_metadata[2],
"subject" : email_metadata[3],
"body" : split_body['body'],
"email_num": email_num
}
BC3_email_list.append(email_dict)
return pd.DataFrame(BC3_email_list)
#load BC3 Email Corpus. Much smaller dataset has no need for parallel processing.
parsedXML = ET.parse( "../data/BC3_Email_Corpus/corpus.xml" )
root = parsedXML.getroot()
#Clean up BC3 emails the same way as the Enron emails.
bc3_email_df = parse_bc3_emails(root)
bc3_email_df.info()
bc3_email_df.head(3) | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
The second dataframe contains the summarizations of each email: Annotator: Person who created the summarization. Email_num: Email in thread sequence. Listno: Thread identifier. Summary: Human summarization of the email. | def parse_bc3_summaries(root):
'''
This parses every BC3 Human summary that is contained in the dataset.
'''
BC3_summary_list = []
for thread in root:
#Iterate through the thread elements <listno, name, annotation>
for thread_element in thread:
if thread_element.tag == "listno":
listno = thread_element.text
#Each Doc element is a single email
if thread_element.tag == "annotation":
for annotation in thread_element:
#If the email_attri is summary, then each child contains a summarization line
if annotation.tag == "summary":
summary_dict = {}
for summary in annotation:
#Generate the set of emails the summary sentence belongs to (often a single email)
email_nums = summary.attrib['link'].split(',')
s = set()
for num in email_nums:
s.add(num.split('.')[0].strip())
#Remove empty strings, since they summarize whole threads instead of emails.
s = [x for x in set(s) if x]
for email_num in s:
if email_num in summary_dict:
summary_dict[email_num] += ' ' + summary.text
else:
summary_dict[email_num] = summary.text
#get annotator description
elif annotation.tag == "desc":
annotator = annotation.text
#For each email summarizaiton create an entry
for email_num, summary in summary_dict.items():
email_dict = {
"listno" : listno,
"annotator" : annotator,
"email_num" : email_num,
"summary" : summary
}
BC3_summary_list.append(email_dict)
return pd.DataFrame(BC3_summary_list)
#Load summaries and process
parsedXML = ET.parse( "../data/BC3_Email_Corpus/annotation.xml" )
root = parsedXML.getroot()
bc3_summary_df = parse_bc3_summaries(root)
bc3_summary_df['email_num'] = bc3_summary_df['email_num'].astype(int)
bc3_summary_df.info()
#merge the dataframes together
bc3_df = pd.merge(bc3_email_df,
bc3_summary_df[['annotator', 'email_num', 'listno', 'summary']],
on=['email_num', 'listno'])
bc3_df.head() | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
3. Preprocessing A. Data Cleaning | #Convert date to pandas datetime.
enron_email_df['date'] = pd.to_datetime(enron_email_df['date'], utc=True)
bc3_df['date'] = pd.to_datetime(bc3_df.date, utc=True)
#Look at the timeframe
start_date = str(enron_email_df.date.min())
end_date = str(enron_email_df.date.max())
print("Start Date: " + start_date)
print("End Date: " + end_date) | Start Date: 1980-01-01 00:00:00+00:00
End Date: 2024-05-26 10:49:57+00:00
| MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
Since the Enron data was collected in May 2002 according to wikipedia its a bit strange to see emails past that date. Reading some of the emails seem to suggest it's mostly spam. | enron_email_df[(enron_email_df.date > '2003-01-01')].head()
#Quick look at emails before 1999,
enron_email_df[(enron_email_df.date < '1999-01-01')].date.value_counts().head()
enron_email_df[(enron_email_df.date == '1980-01-01')].head() | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
There seems to be a glut of emails dated exactly on 1980-01-01. The emails seem legitimate, but these should be droped since without the true date we won't be able to figure out where the email fits in the context of a batch of summaries. Keep emails between Jan 1st 1999 and June 1st 2002. | enron_email_df = enron_email_df[(enron_email_df.date > '1998-01-01') & (enron_email_df.date < '2002-06-01')] | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
B. Sentence Cleaning The raw enron email Corpus tends to have a large amount of unneeded characters that can interfere with tokenizaiton. It's best to do a bit more cleaning. | def clean_email_df(df):
'''
These remove symbols and character patterns that don't aid in producing a good summary.
'''
#Removing strings related to attatchments and certain non numerical characters.
patterns = ["\[IMAGE\]","-", "_", "\*", "+","\".\""]
for pattern in patterns:
df['body'] = pd.Series(df['body']).str.replace(pattern, "")
#Remove multiple spaces.
df['body'] = df['body'].replace('\s+', ' ', regex=True)
#Blanks are replaced with NaN in the whole dataframe. Then rows with a 'NaN' in the body will be dropped.
df = df.replace('',np.NaN)
df = df.dropna(subset=['body'])
#Remove all Duplicate emails
#df = df.drop_duplicates(subset='body')
return df
#Apply clean to both datasets.
enron_email_df = clean_email_df(enron_email_df)
bc3_df = clean_email_df(bc3_df) | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
C. Tokenizing It's important to split up sentences into it's constituent parts for the ML algorithim that will be used for text summarization. This will aid in further processing like removing extra whitespace. We can also remove stopwords, which are very commonly used words that don't provide additional sentence meaning like 'and' 'or' and 'the'. This will be applied to both the Enron and BC3 datasets. | def remove_stopwords(sen):
'''
This function removes stopwords
'''
stop_words = stopwords.words('english')
sen_new = " ".join([i for i in sen if i not in stop_words])
return sen_new
def tokenize_email(text):
'''
This function splits up the body into sentence tokens and removes stop words.
'''
clean_sentences = sent_tokenize(text, language='english')
#removing punctuation, numbers and special characters. Then lowercasing.
clean_sentences = [re.sub('[^a-zA-Z ]', '',s) for s in clean_sentences]
clean_sentences = [s.lower() for s in clean_sentences]
clean_sentences = [remove_stopwords(r.split()) for r in clean_sentences]
return clean_sentences | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
Starting with the Enron dataset. | #This tokenizing will be the extracted sentences that may be chosen to form the email summaries.
enron_email_df['extractive_sentences'] = enron_email_df['body'].apply(sent_tokenize)
#Splitting the text in emails into cleaned sentences
enron_email_df['tokenized_body'] = enron_email_df['body'].apply(tokenize_email)
#Tokenizing the bodies might have revealed more duplicate emails that should be droped.
enron_email_df = enron_email_df.loc[enron_email_df.astype(str).drop_duplicates(subset='tokenized_body').index] | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
Now working on the BC3 Dataset. | bc3_df['extractive_sentences'] = bc3_df['body'].apply(sent_tokenize)
bc3_df['tokenized_body'] = bc3_df['body'].apply(tokenize_email)
#bc3_email_df = bc3_email_df.loc[bc3_email_df.astype(str).drop_duplicates(subset='tokenized_body').index] | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
Store Data. A. Locally as pickle. After all the preprocessing is finished it's best to store the data so it can be quickly and easily retrieved by other software. Pickles are best used if you are working locally and want a simple way to store and load data. You can also use a cloud database that can be accessed by other production services such as Heroku to retrieve the data. In this case, I load the data up into an AWS Postgres database. | #Local locations for pickle files.
ENRON_PICKLE_LOC = "../data/dataframes/wrangled_enron_full_df.pkl"
BC3_PICKLE_LOC = "../data/dataframes/wrangled_BC3_df.pkl"
#Store dataframes to disk
enron_email_df.to_pickle(ENRON_PICKLE_LOC)
bc3_df.head()
bc3_df.to_pickle(BC3_PICKLE_LOC) | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
B. Into database I used a Postgres database with the DB configurations stored in a config_notebook.ini file. This allows me to easily switch between local and AWS configurations. | #Configure postgres database
config = configparser.ConfigParser()
config.read('config_notebook.ini')
#database_config = 'LOCAL_POSTGRES'
database_config = 'AWS_POSTGRES'
POSTGRES_ADDRESS = config[database_config]['POSTGRES_ADDRESS']
POSTGRES_USERNAME = config[database_config]['POSTGRES_USERNAME']
POSTGRES_PASSWORD = config[database_config]['POSTGRES_PASSWORD']
POSTGRES_DBNAME = config[database_config]['POSTGRES_DBNAME']
#now create database connection
postgres_str = ('postgresql+psycopg2://{username}:{password}@{ipaddress}/{dbname}'
.format(username=POSTGRES_USERNAME,
password=POSTGRES_PASSWORD,
ipaddress=POSTGRES_ADDRESS,
dbname=POSTGRES_DBNAME))
cnx = create_engine(postgres_str)
#Store data.
enron_email_df.to_sql('full_enron_emails', cnx) | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
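To confirm the write (and to illustrate how another service could retrieve the data later), here is a short read-back sketch using the same engine; the query is illustrative and assumes the `full_enron_emails` table created above:

```python
import pandas as pd

# Pull a few rows back out of the table we just wrote to verify the upload.
check_df = pd.read_sql("SELECT employee, subject, date FROM full_enron_emails LIMIT 5", cnx)
print(check_df)
```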
5. Data Exploration Exploring the dataset can go a long way to building more accurate machine learning models and spotting any possible issues with the dataset. Since the Enron dataset is quite large, we can speed up some of our computations by using Dask. While not strictly necessary, iterating on this dataset should be much faster. A. Enron Emails | client = Client(processes = True)
client.cluster
#Make into dask dataframe.
enron_email_df = dd.from_pandas(enron_email_df, npartitions=cpus)
enron_email_df.columns
#Used to create a describe summary of the dataset. Ignoring tokenized columns.
enron_email_df[['body', 'chain', 'date', 'email_folder', 'employee', 'from', 'full_email_path', 'message_id', 'signature', 'subject']].describe().compute()
#Get word frequencies from tokenized word lists
def get_word_freq(df):
freq_words=dict()
for tokens in df.tokenized_words.compute():
for token in tokens:
if token in freq_words:
freq_words[token] += 1
else:
freq_words[token] = 1
return freq_words
def tokenize_word(sentences):
tokens = []
for sentence in sentences:
tokens = word_tokenize(sentence)
return tokens
#Tokenize the sentences
enron_email_df['tokenized_words'] = enron_email_df['tokenized_body'].apply(tokenize_word).compute()
#Creating word dictionary to understand word frequencies.
freq_words = get_word_freq(enron_email_df)
print('Unique words: {:,}'.format(len(freq_words)))
word_data = []
#Sort dictionary by highest word frequency.
for key, value in sorted(freq_words.items(), key=lambda item: item[1], reverse=True):
word_data.append([key, freq_words[key]])
#Prepare to plot bar graph of top words.
#Create dataframe with Word and Frequency, then sort in Descending order.
freq_words_df = pd.DataFrame.from_dict(freq_words, orient='index').reset_index()
freq_words_df = freq_words_df.rename(columns={"index": "Word", 0: "Frequency"})
freq_words_df = freq_words_df.sort_values(by=['Frequency'],ascending = False)
freq_words_df.reset_index(drop = True, inplace=True)
freq_words_df.head(30).plot(x='Word', kind='bar', figsize=(20,10)) | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
B. BC3 Corpus | bc3_df.head()
bc3_df['to'].value_counts().head() | _____no_output_____ | MIT | notebooks/Process_Emails.ipynb | dailykirt/ML_Enron_email_summary |
Compass heading | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('Heading [degree]', fontsize=16)
ax1.plot(standardized_time, compass_heading, label='compass heading')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
# Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('ground_speed_x [m/s]', fontsize=16)
ax1.plot(standardized_time, speed, label='ground_speed_x', color='m')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
# Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('angular_z [rad/s]', fontsize=16)
ax1.plot(standardized_time, angular_z, label='angular_z', color='r')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show() | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
Temperature | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('Temperature [degree]', fontsize=16)
ax1.plot(standardized_time, temp, label='temp', color='k')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
print("Standard Deviation of the temp is % s " %(statistics.stdev(temp)))
print("Mean of the temp is % s " %(statistics.mean(temp))) | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
PH | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('PH', fontsize=16)
ax1.plot(standardized_time, PH, label='PH', color='r')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
print("Standard Deviation of the PH is % s " %(statistics.stdev(PH)))
print("Mean of the PH is % s " %(statistics.mean(PH))) | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
Conductivity | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('Conductivity [ms]', fontsize=16)
ax1.plot(standardized_time, cond, label='conductivity', color='b')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
print("Standard Deviation of the conductivity is % s " %(statistics.stdev(cond)))
print("Mean of the conductivity is % s " %(statistics.mean(cond))) | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
Chlorophyll | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('chlorophyll [RFU]', fontsize=16)
ax1.plot(standardized_time, chlorophyll, label='chlorophyll', color='g')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
print("Standard Deviation of the chlorophyll is % s " %(statistics.stdev(chlorophyll)))
print("Mean of the chlorophyll is % s " %(statistics.mean(chlorophyll))) | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
ODO | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('ODO [%sat]', fontsize=16)
ax1.plot(standardized_time, ODO, label='ODO', color='m')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show()
print("Standard Deviation of the DO is % s " %(statistics.stdev(ODO)))
print("Mean of the DO is % s " %(statistics.mean(ODO))) | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
Sonar depth | # Figure initialization
fig, ax1 = plt.subplots()
ax1.set_xlabel('Time [sec]', fontsize=16)
ax1.set_ylabel('sonar [m]', fontsize=16)
ax1.plot(standardized_time, sonar, label='sonar', color='c')
ax1.legend()
for wp in standardized_time2:
plt.axvline(x=wp, color='gray', linestyle='--')
plt.show() | _____no_output_____ | MIT | Jupyter_notebook/ISER2021/Path 1/.ipynb_checkpoints/20200626-Sunapee-manualvisit-checkpoint.ipynb | dartmouthrobotics/epscor_asv_data_analysis |
Classification Binary classification Stochastic gradient descent (SGD) | from sklearn.linear_model import SGDClassifier | _____no_output_____ | Apache-2.0 | cheat-sheets/ml/classification/algorithms.ipynb | AElOuassouli/reading-notes |
QSVM multiclass classificationA [multiclass extension](https://qiskit.org/documentation/apidoc/qiskit.aqua.components.multiclass_extensions.html) works in conjunction with an underlying binary (two class) classifier to provide classification where the number of classes is greater than two.Currently the following multiclass extensions are supported:* OneAgainstRest* AllPairs* ErrorCorrectingCodeThese use different techniques to group the data from the binary classification to achieve the final multiclass classification. | import numpy as np
from qiskit import BasicAer
from qiskit.circuit.library import ZZFeatureMap
from qiskit.utils import QuantumInstance, algorithm_globals
from qiskit_machine_learning.algorithms import QSVM
from qiskit_machine_learning.multiclass_extensions import AllPairs
from qiskit_machine_learning.utils.dataset_helper import get_feature_dimension | _____no_output_____ | Apache-2.0 | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning |
We want a dataset with more than two classes, so here we choose the `Wine` dataset that has 3 classes. | from qiskit_machine_learning.datasets import wine
n = 2 # dimension of each data point
sample_Total, training_input, test_input, class_labels = wine(training_size=24,
test_size=6, n=n, plot_data=True)
temp = [test_input[k] for k in test_input]
total_array = np.concatenate(temp) | _____no_output_____ | Apache-2.0 | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning |
To use a multiclass extension, an instance of it simply needs to be supplied on QSVM creation using the `multiclass_extension` parameter. Although `AllPairs()` is used in the example below, the following multiclass extensions would also work: OneAgainstRest() ErrorCorrectingCode(code_size=5) | algorithm_globals.random_seed = 10598
backend = BasicAer.get_backend('qasm_simulator')
feature_map = ZZFeatureMap(feature_dimension=get_feature_dimension(training_input),
reps=2, entanglement='linear')
svm = QSVM(feature_map, training_input, test_input, total_array,
multiclass_extension=AllPairs())
quantum_instance = QuantumInstance(backend, shots=1024,
seed_simulator=algorithm_globals.random_seed,
seed_transpiler=algorithm_globals.random_seed)
result = svm.run(quantum_instance)
for k,v in result.items():
print(f'{k} : {v}')
import qiskit.tools.jupyter
%qiskit_version_table
%qiskit_copyright | _____no_output_____ | Apache-2.0 | tutorials/02_qsvm_multiclass.ipynb | gabrieleagl/qiskit-machine-learning |
Building Simple Neural Networks. In this section you will: * Import the MNIST dataset from Keras. * Format the data so it can be used by a Sequential model with Dense layers. * Split the dataset into training and test sections. * Build a simple neural network using the Keras Sequential model and Dense layers. * Train that model. * Evaluate the performance of that model. While we are accomplishing these tasks, we will also stop to discuss important concepts: * Splitting data into test and training sets. * Training rounds, batch size, and epochs. * Validation data vs test data. * Examining results. Importing and Formatting the Data. Keras has several built-in datasets that are already well formatted and properly cleaned. These datasets are an invaluable learning resource. Collecting and processing datasets is a serious undertaking, and deep learning tactics perform poorly without large high quality datasets. We will be leveraging the [Keras built in datasets](https://keras.io/datasets/) extensively, and you may wish to explore them further on your own. In this exercise, we will be focused on the MNIST dataset, which is a set of 70,000 images of handwritten digits each labeled with the value of the written digit. Additionally, the images have been split into training and test sets. | # For drawing the MNIST digits as well as plots to help us evaluate performance we
# will make extensive use of matplotlib
from matplotlib import pyplot as plt
# All of the Keras datasets are in keras.datasets
from keras.datasets import mnist
# Keras has already split the data into training and test data
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# Training images is a list of 60,000 2D lists.
# Each 2D list is 28 by 28—the size of the MNIST pixel data.
# Each item in the 2D array is an integer from 0 to 255 representing its grayscale
# intensity where 0 means white, 255 means black.
print(len(training_images), training_images[0].shape)
# training_labels are a value between 0 and 9 indicating which digit is represented.
# The first item in the training data is a 5
print(len(training_labels), training_labels[0])
# Let's visualize the first 100 images from the dataset
for i in range(100):
ax = plt.subplot(10, 10, i+1)
ax.axis('off')
plt.imshow(training_images[i], cmap='Greys')
| _____no_output_____ | Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
Problems With This Data. There are (at least) two problems with this data as it is currently formatted; what do you think they are? 1. The input data is formatted as a 2D array, but our deep neural network needs the data as a 1D vector. * This is because of how deep neural networks are constructed; it is simply not possible to send anything but a vector as input. * These vectors can represent anything, but from the computer's perspective they must be a 1D vector.2. Our labels are numbers, but we're not performing regression. We need to use a 1-hot vector encoding for our labels. * This is important because if we used the number values we would be training our network to treat these values as continuous. * If the digit is supposed to be a 2, guessing 1 and guessing 9 are both equally wrong. * Training the network with numbers would imply that a prediction of 1 is "less wrong" than a prediction of 9, when in fact both are equally wrong. Fixing the data format. Luckily, this is a common problem and we can use two methods to fix the data: `numpy.reshape` and `keras.utils.to_categorical`. This is necessary because of how deep neural networks process data; there is no way to send 2D data to a `Sequential` model made of `Dense` layers. | from keras.utils import to_categorical
# Preparing the dataset
# Setup train and test splits
(training_images, training_labels), (test_images, test_labels) = mnist.load_data()
# 28 x 28 = 784, because that's the dimensions of the MNIST data.
image_size = 784
# Reshaping the training_images and test_images to lists of vectors with length 784
# instead of lists of 2D arrays. Same for the test_images
training_data = training_images.reshape(training_images.shape[0], image_size)
test_data = test_images.reshape(test_images.shape[0], image_size)
# [
# [1,2,3]
# [4,5,6]
# ]
# => [1,2,3,4,5,6]
# Just showing the changes...
print("training data: ", training_images.shape, " ==> ", training_data.shape)
print("test data: ", test_images.shape, " ==> ", test_data.shape)
# Create 1-hot encoded vectors using to_categorical
num_classes = 10 # Because it's how many digits we have (0-9)
# to_categorical takes a list of integers (our labels) and makes them into 1-hot vectors
training_labels = to_categorical(training_labels, num_classes)
test_labels = to_categorical(test_labels, num_classes)
# Recall that before this transformation, training_labels[0] was the value 5. Look now:
print(training_labels[0]) | [0. 0. 0. 0. 0. 1. 0. 0. 0. 0.]
| Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
Building a Deep Neural Network. Now that we've prepared our data, it's time to build a simple neural network. To start we'll make a deep network with 3 layers—the input layer, a single hidden layer, and the output layer. In a deep neural network all the layers are 1 dimensional. The input layer has to match the shape of our input data, meaning it must have 784 nodes. Similarly, the output layer must match our labels, meaning it must have 10 nodes. We can choose the number of nodes in our hidden layer; I've chosen 32 arbitrarily. | from keras.models import Sequential
from keras.layers import Dense
# Sequential models are a series of layers applied linearly.
model = Sequential()
# The first layer must specify its input_shape.
# This is how the first two layers are added, the input layer and the hidden layer.
model.add(Dense(units=32, activation='sigmoid', input_shape=(image_size,)))
# This is how the output layer gets added, the 'softmax' activation function ensures
# that the sum of the values in the output nodes is 1. Softmax is very
# common in classification networks.
model.add(Dense(units=num_classes, activation='softmax'))
# This function provides useful text data for our network
model.summary() | Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
dense_1 (Dense) (None, 32) 25120
_________________________________________________________________
dense_2 (Dense) (None, 10) 330
=================================================================
Total params: 25,450
Trainable params: 25,450
Non-trainable params: 0
_________________________________________________________________
| Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
Compiling and Training a Model. Our model must be compiled and trained before it can make useful predictions. Models are trained with the training data and training labels. During this process Keras will use an optimizer, a loss function, and metrics of our choosing to repeatedly make predictions and receive corrections. The loss function is used to train the model; the metrics are only used for human evaluation of the model during and after training. Training happens in a series of epochs, each of which is divided into a series of rounds. Each round the network will receive `batch_size` samples from the training data, make predictions, and receive one correction based on the errors in those predictions. In a single epoch, the model will look at every item in the training set __exactly once__, which means individual data points are sampled from the training data without replacement during each round of each epoch. During training, the training data itself will be broken into two parts according to the `validation_split` parameter. The proportion that you specify will be left out of the training process and used to evaluate the accuracy of the model. This is done to preserve the test data, while still having a set of data left out in order to test against — and hopefully prevent — overfitting. At the end of each epoch, predictions will be made for all the items in the validation set, but those predictions won't adjust the weights in the model. If we also add an early-stopping callback, training can stop early when validation accuracy stops improving, even if accuracy on the training set is still improving. | # sgd stands for stochastic gradient descent.
# categorical_crossentropy is a common loss function used for categorical classification.
# accuracy is the percent of predictions that were correct.
model.compile(optimizer="sgd", loss='categorical_crossentropy', metrics=['accuracy'])
# The network will make predictions for 128 flattened images per correction.
# It will make a prediction on each item in the training set 5 times (5 epochs)
# And 10% of the data will be used as validation data.
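# A quick sanity check on the arithmetic described above (illustrative only):
# validation_split=.1 leaves 60000 * 0.9 = 54000 samples for training, and with
# batch_size=128 that means ceil(54000 / 128) = 422 rounds (weight updates) per epoch.
import math
print('rounds per epoch:', math.ceil(54000 / 128))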
history = model.fit(training_data, training_labels, batch_size=128, epochs=5, verbose=True, validation_split=.1) | Train on 54000 samples, validate on 6000 samples
Epoch 1/5
54000/54000 [==============================] - 1s 17us/step - loss: 1.3324 - accuracy: 0.6583 - val_loss: 0.8772 - val_accuracy: 0.8407
Epoch 2/5
54000/54000 [==============================] - 1s 13us/step - loss: 0.7999 - accuracy: 0.8356 - val_loss: 0.6273 - val_accuracy: 0.8850
Epoch 3/5
54000/54000 [==============================] - 1s 12us/step - loss: 0.6350 - accuracy: 0.8643 - val_loss: 0.5207 - val_accuracy: 0.8940
Epoch 4/5
54000/54000 [==============================] - 1s 11us/step - loss: 0.5499 - accuracy: 0.8752 - val_loss: 0.4532 - val_accuracy: 0.9040
Epoch 5/5
54000/54000 [==============================] - 1s 11us/step - loss: 0.4950 - accuracy: 0.8837 - val_loss: 0.4233 - val_accuracy: 0.9045
| Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
Evaluating Our Model. Now that we've trained our model, we want to evaluate its performance. We're using the "test data" here, although in a serious experiment we would likely not have done nearly enough work to warrant the application of the test data. Instead, we would rely on the validation metrics as a proxy for our test results until we had models that we believe would perform well. Once we evaluate our model on the test data, any subsequent changes we make would be based on what we learned from the test data. Meaning, we would have functionally incorporated information from the test set into our training procedure, which could bias and even invalidate the results of our research. In a non-research setting the real test might be more like putting this feature into production. Nevertheless, it is always wise to create a test set that is not used as an evaluative measure until the very end of an experimental lifecycle. That is, once you have a model that you believe __should__ generalize well to unseen data, you should test it on the test data to test that hypothesis. If your model performs poorly on the test data, you'll have to reevaluate your model, training data, and procedure. | loss, accuracy = model.evaluate(test_data, test_labels, verbose=True)
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['training', 'validation'], loc='best')
plt.show()
print(f'Test loss: {loss:.3}')
print(f'Test accuracy: {accuracy:.3}') | 10000/10000 [==============================] - 0s 15us/step
| Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
How Did Our Network Do? * Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy?* Our model was more accurate on the validation data than it was on the training data. * Is this okay? Why or why not? * What if our model had been more accurate on the training data than the validation data?* Did our model get better during each epoch? * If not: why might that be the case? * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss? Answers:* Why do we only have one value for test loss and test accuracy, but a chart over time for training and validation loss and accuracy? * __Because we only evaluate the test data once at the very end, but we evaluate training and validation scores once per epoch.__* Our model was more accurate on the validation data than it was on the training data. * Is this okay? Why or why not? * __Yes, this is okay, and even good. When our validation scores are better than our training scores, it's a sign that we are probably not overfitting.__ * What if our model had been more accurate on the training data than the validation data? * __This would concern us, because it would suggest we are probably overfitting.__* Did our model get better during each epoch? * If not: why might that be the case? * __Optimizers rely on the gradient to update our weights, but the 'function' we are optimizing (our neural network) is not a ground truth. A single batch, and even a complete epoch, may very well result in an adjustment that hurts overall performance.__ * If so: should we always expect this, where each epoch strictly improves training/validation accuracy/loss? * __Not at all, see the above answer.__ Look at Specific Results. Often, it can be illuminating to view specific results, both when the model is correct and when the model is wrong. Let's look at the images and our model's predictions for the first 16 samples in the test set. | from numpy import argmax
# Predicting once, then we can use these repeatedly in the next cell without recomputing the predictions.
predictions = model.predict(test_data)
# For pagination & style in second cell
page = 0
fontdict = {'color': 'black'}
# Repeatedly running this cell will page through the predictions
for i in range(16):
ax = plt.subplot(4, 4, i+1)
ax.axis('off')
plt.imshow(test_images[i + page], cmap='Greys')
prediction = argmax(predictions[i + page])
true_value = argmax(test_labels[i + page])
fontdict['color'] = 'black' if prediction == true_value else 'red'
plt.title("{}, {}".format(prediction, true_value), fontdict=fontdict)
page += 16
plt.tight_layout()
plt.show() | _____no_output_____ | Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
Will A Different Network Perform Better? Given what you know so far, use Keras to build and train another sequential model that you think will perform __better__ than the network we just built and trained. Then evaluate that model and compare its performance to our model. Remember to look at accuracy and loss for training and validation data over time, as well as test accuracy and loss. | # Your code here...
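# One possible approach (a sketch, not the only answer): a wider network with an extra
# hidden layer, 'relu' activations, and the 'adam' optimizer. The layer sizes (128 and 64)
# and the epoch count are arbitrary choices for illustration.
better_model = Sequential()
better_model.add(Dense(units=128, activation='relu', input_shape=(image_size,)))
better_model.add(Dense(units=64, activation='relu'))
better_model.add(Dense(units=num_classes, activation='softmax'))
better_model.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['accuracy'])
better_history = better_model.fit(training_data, training_labels, batch_size=128, epochs=5,
                                  verbose=True, validation_split=.1)
better_loss, better_accuracy = better_model.evaluate(test_data, test_labels, verbose=True)
print(f'Test loss: {better_loss:.3}')
print(f'Test accuracy: {better_accuracy:.3}')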
| _____no_output_____ | Unlicense | 01-intro-to-deep-learning/02-building-simple-neural-networks.ipynb | rekil156/intro-to-deep-learning |
> The email portion of this campaign was actually run as an A/B test. Half the emails sent out were generic upsells to your product while the other half contained personalized messaging around the users’ usage of the site. This describes the setup of the A/B test experiment. | import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
# export
'''Calculate conversion rates and related metrics.'''
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
def conversion_rate(dataframe, column_names, converted = 'converted', id_name = 'user_id'):
'''Calculate conversion rate.
Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas
    Parameters
---------
dataframe: pandas.DataFrame
column_names: str
        The column(s) chosen to partition groups to
calculate conversion rate.
converted: str
The column with True and False to determine
whether users are converted.
id_name: str
        The column containing user_id.
Returns
-------
conversion_rate: conversion rate'''
# Total number of converted users
column_conv = dataframe[dataframe[converted] == True] \
.groupby(column_names)[id_name] \
.nunique()
# Total number users
column_total = dataframe \
.groupby(column_names)[id_name] \
.nunique()
# Conversion rate
conversion_rate = column_conv/column_total
# Fill missing values with 0
conversion_rate = conversion_rate.fillna(0)
return conversion_rate
marketing = pd.read_csv("data/marketing.csv",
parse_dates = ['date_served', 'date_subscribed', 'date_canceled'])
# Subset the DataFrame
email = marketing[marketing.marketing_channel == 'Email']
# Group the email DataFrame by variant
alloc = email.groupby(['variant']).user_id.nunique()
# Plot a bar chart of the test allocation
alloc.plot(kind = 'bar')
plt.title('Personalization test allocation')
plt.ylabel('# participants')
plt.show() | _____no_output_____ | MIT | 01-demo1.ipynb | JiaxiangBU/conversion_metrics |
Not much difference between the two groups. | # Group marketing by user_id and variant
subscribers = email.groupby(['user_id',
'variant'])['converted'].max()
subscribers_df = pd.DataFrame(subscribers.unstack(level=1))
# Drop missing values from the control column
control = subscribers_df['control'].dropna()
# Drop missing values from the personalization column
personalization = subscribers_df['personalization'].dropna()
print('Control conversion rate:', np.mean(control))
print('Personalization conversion rate:', np.mean(personalization)) | Control conversion rate: 0.2814814814814815
Personalization conversion rate: 0.3908450704225352
| MIT | 01-demo1.ipynb | JiaxiangBU/conversion_metrics |
I find this Python formulation a bit convoluted. $$\begin{array}{l}{\text { Calculating lift: }} \\ {\qquad \frac{\text { Treatment conversion rate - Control conversion rate }}{\text { Control conversion rate }}}\end{array}$$ Note that lift here is a comparison of conversion rates, so it can exceed 100%. | # export
def lift(a,b, sig = 2):
'''Calculate lift statistic for an AB test.
Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas
    Parameters
---------
a: float.
control group.
b: float.
test group.
sig: integer.
default 2.
Returns
-------
lift: lift statistic'''
    # Calculate the mean of a and b
a_mean = np.mean(a)
b_mean = np.mean(b)
# Calculate the lift using a_mean and b_mean
lift = b_mean/a_mean - 1
return str(round(lift*100, sig)) + '%'
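# Quick illustrative check of the formula above (made-up rates, not campaign data):
# a control conversion rate of 0.2 and a treatment rate of 0.5 give a lift of 150%.
print(lift(np.array([0.2]), np.array([0.5])))  # -> 150.0%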
lift(control, personalization, sig = 3) | _____no_output_____ | MIT | 01-demo1.ipynb | JiaxiangBU/conversion_metrics |
Check whether the lift is statistically significant. | # export
from scipy import stats
def lift_sig(a,b):
'''Calculate lift statistical significance for an AB test.
Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas
    Parameters
    ---------
    a: array-like.
        control group conversions.
    b: array-like.
        test group conversions.
    Returns
    -------
    (t_value, p_value): t statistic and p value of the two-sample t-test'''
output = stats.ttest_ind(a,b)
t_value, p_value = output.statistic,output.pvalue
print('The t value of the two variables is %.3f with p value %.3f' % (t_value, p_value))
return (t_value, p_value)
t_value, p_value = lift_sig(control,personalization ) | The t value of the two variables is -0.577 with p value 0.580
| MIT | 01-demo1.ipynb | JiaxiangBU/conversion_metrics |
> In the next lesson, you will explore whether that holds up across all demographics. This is a mature way to think about an A/B test: a good overall result does not mean every group performs well. | # export
def ab_test(df, segment, id_name = 'user_id', test_column = 'variant', converted = 'converted'):
'''Calculate lift statistic by segmentation.
Cite https://www.datacamp.com/courses/analyzing-marketing-campaigns-with-pandas
    Parameters
---------
df: pandas.DataFrame.
segment: str.
group column.
id_name: user_id
test_column: str
        The column identifying test or control groups.
converted: logical.
Whether converted or not.
Returns
-------
    None. Prints the lift and its significance for each segment.'''
    # Loop over each subsegment of the chosen segment column in df
    for subsegment in np.unique(df[segment].values):
print('Group - %s: ' % subsegment)
df1 = df[df[segment] == subsegment]
df2 = df1.groupby([id_name, test_column])[converted].max()
df2 = pd.DataFrame(df2.unstack(level=1))
ctrl = df2.iloc[:,0].dropna()
test = df2.iloc[:,1].dropna()
# information
print('lift:', lift(ctrl, test))
lift_sig(ctrl, test)
df = marketing[marketing['marketing_channel'] == 'Email']
ab_test(df, segment='language_displayed', id_name='user_id', test_column='variant', converted='converted')
df.head() | _____no_output_____ | MIT | 01-demo1.ipynb | JiaxiangBU/conversion_metrics |
Ran the next few blocks for my Colab configuration; they can be ignored. | from google.colab import drive
drive.mount('/content/gdrive')
!wget https://d17h27t6h515a5.cloudfront.net/topher/2016/December/584f6edd_data/data.zip
import shutil
shutil.move("/content/data.zip", "/content/gdrive/My Drive/udacity-behavioural-cloning/")
os.chdir('/content/gdrive/My Drive/udacity-behavioural-cloning/')
with zipfile.ZipFile('data.zip') as f:
f.extractall()
os.chdir('/content/gdrive/My Drive/udacity-behavioural-cloning/data/') | _____no_output_____ | MIT | behavioral-cloning/model.ipynb | KOKSANG/Self-Driving-Car |
Training code starts here | df = pd.read_csv('driving_log.csv')
# Visualizing original distribution
plt.figure(figsize=(15, 3))
hist, bins = np.histogram(df.steering.values, bins=50)
plt.hist(df.steering.values, bins=bins)
plt.title('Steering Distribution Plot')
plt.xlabel('Steering')
plt.ylabel('Count')
plt.show()
# create grayscale image
def grayscale(img):
return cv2.cvtColor(img, cv2.COLOR_RGB2GRAY)
# normalize image to zero mean
def normalize(img):
mean = np.mean(img)
std = np.std(img)
return (img-mean)/std
# preprocess with grayscale and normalization
def preprocess(img):
return normalize(grayscale(img))
# augment image, left right flip for now
def augment(image, randn):
return np.flip(image, axis=randn%2).astype(np.uint8)
# yeo-johnson bias
def yeo_johnson_bias(steering):
if steering >= 0:
return np.log(steering + 1)
elif steering < 0:
return -np.log(-steering + 1)
# To separate center, left and right
df_center = pd.concat([df.center, df.steering], axis=1).rename(index=str, columns={'center': 'img'})
df_left = pd.concat([df.left, df.steering], axis=1).rename(index=str, columns={'left': 'img'})
df_right = pd.concat([df.right, df.steering], axis=1).rename(index=str, columns={'right': 'img'})
df_center.head()
# Adjusting the steering value 0 for left and right
for k, v in df_left.iterrows():
if v.steering == 0:
df_left.loc[k, 'steering'] = df_left.loc[k, 'steering'] + random.uniform(0.2, 0.5)
for k, v in df_right.iterrows():
if v.steering == 0:
df_right.loc[k, 'steering'] = df_right.loc[k, 'steering'] + random.uniform(-0.2, -0.5)
new_df = pd.concat([df_center, df_left, df_right], axis=0, ignore_index=True, sort=False)
new_df.tail()
new_df.to_csv('adjusted_log.csv', index=False, encoding='utf-8')
df = pd.read_csv('adjusted_log.csv')
# Visualizing adjusted distribution
plt.figure(figsize=(15, 3))
hist, bins = np.histogram(df.steering.values, bins=50)
plt.hist(df.steering.values, bins=bins)
plt.title('Steering Distribution Plot')
plt.xlabel('Steering')
plt.ylabel('Count')
plt.show()
df.plot(figsize=(15, 3))
df.shape
# Grouping all images and steering together, to do a train test splitting
images = df.img.tolist()
steering = df.steering.tolist()
img_list = []
for img, angle in zip(images, steering):
row = [img, angle]
img_list.append(row)
train_samples, validation_samples = train_test_split(img_list, test_size=0.2)
# Data generator
def generator(samples, batch_size=32):
cwd = os.getcwd()
num_samples = len(samples)
while True: # Loop forever so the generator never terminates
shuffle(samples)
for offset in range(0, num_samples, batch_size):
batch_samples = samples[offset:offset+batch_size]
images = []
angles = []
for batch_sample in batch_samples:
name = os.path.join(cwd, batch_sample[0].strip())
try:
# normalizing image
image = normalize(mpimg.imread(name))
# reshaping image into its rgb form
image = np.reshape(image, (image.shape[0], image.shape[1], 3))
steering = float(batch_sample[1])
images.append(image)
angles.append(steering)
# if image not found, skip the image
except FileNotFoundError as msg:
print(msg)
continue
            # trim image to only see section with road
X_train = np.array(images)
y_train = np.array(angles)
yield shuffle(X_train, y_train)
# Set our batch size
batch_size = 32
# compile and train the model using the generator function
train_generator = generator(train_samples, batch_size=batch_size)
validation_generator = generator(validation_samples, batch_size=batch_size)
### PART 3: TRAINING ###
# Training Architecture: inspired by NVIDIA architecture #
INPUT_SHAPE = (160, 320, 3)
model = Sequential()
model.add(Cropping2D(cropping=((70,25), (0, 0)), input_shape=INPUT_SHAPE))
model.add(Conv2D(filters=24, kernel_size=5, strides=(2, 2), activation='relu'))
model.add(BatchNormalization())
model.add(Conv2D(filters=36, kernel_size=5, strides=(2, 2), activation='relu'))
#model.add(BatchNormalization())
model.add(Conv2D(filters=48, kernel_size=5, strides=(2, 2), activation='relu'))
#model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=3, strides=(1, 1), activation='relu'))
#model.add(BatchNormalization())
model.add(Conv2D(filters=64, kernel_size=3, strides=(1, 1), activation='relu'))
model.add(Flatten())
model.add(Dense(1164, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(100, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(50, activation='relu'))
model.add(Dropout(rate=0.5))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
adam = Adam(lr = 0.0001)
model.compile(optimizer= adam, loss='mse', metrics=['accuracy'])
model.summary()
history = model.fit_generator(generator=train_generator, steps_per_epoch=math.ceil(len(train_samples)/ batch_size), \
epochs=15, verbose=1, validation_data=validation_generator, \
validation_steps=math.ceil(len(validation_samples)/ batch_size), use_multiprocessing=False)
print('Done Training')
###Saving Model and Weights###
model_json = model.to_json()
with open("model5.json", "w") as json_file:
json_file.write(model_json)
model.save('model5.h5')
model.save_weights("model_weights5.h5")
print("Saved model to disk")
### print the keys contained in the history object
print(history.history.keys())
### plot the training and validation loss for each epoch
plt.figure(figsize=(15, 3))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model mean squared error loss')
plt.ylabel('mean squared error loss')
plt.xlabel('epoch')
plt.legend(['training set', 'validation set'], loc='upper right')
plt.show()
| _____no_output_____ | MIT | behavioral-cloning/model.ipynb | KOKSANG/Self-Driving-Car |
entities-search-engine loading. SPARQL query to `{"type": [values]}` | import sys
sys.path.append("..")
from heritageconnector.config import config
from heritageconnector.utils.sparql import get_sparql_results
from heritageconnector.utils.wikidata import url_to_qid
import json
import time
from tqdm import tqdm
endpoint = config.WIKIDATA_SPARQL_ENDPOINT | _____no_output_____ | MIT | experiments/entities-search-engine/1. load data from sparql.ipynb | TheScienceMuseum/heritage-connector |
humans sample | limit = 10000
query = f"""
SELECT ?item WHERE {{
?item wdt:P31 wd:Q5.
}} LIMIT {limit}
"""
res = get_sparql_results(endpoint, query)
data = {
"humans": [url_to_qid(x['item']['value']) for x in res['results']['bindings']]
}
with open("./entities-search-engine/data/humans_sample.json", 'w') as f:
json.dump(data, f) | _____no_output_____ | MIT | experiments/entities-search-engine/1. load data from sparql.ipynb | TheScienceMuseum/heritage-connector |
humans sample: paginated. Got a 500 timeout error nearly all of the way through; it looked like it was going to take around 1h20m. *Better to do with a dump?* | # there are 8,011,382 humans in Wikidata so this should take 161 iterations
total_humans = 8011382
pagesize = 40000
reslen = pagesize
paged_json = []
i = 0
start = time.time()
pbar = tqdm(total=total_humans)
while reslen == pagesize:
query = f"""
SELECT ?item WHERE {{
?item wdt:P31 wd:Q5.
}} LIMIT {pagesize} OFFSET {i*pagesize}
"""
res = get_sparql_results(endpoint, query)['results']['bindings']
reslen = len(res)
paged_json.append(
{ "humans": [url_to_qid(x['item']['value']) for x in res] }
)
# print total number so far
#print(i+1, (i+1)*pagesize)
i+=1
pbar.update(pagesize)
end = time.time()
pbar.close()
print(f"COMPLETED: {round(end-start, 2)} seconds")
# with open("./entities-search-engine/data/humans_sample.json", 'w') as f:
# json.dump(data, f)
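# Note for future runs: one way to be more robust to the 500 timeout hit above is to retry
# each page a few times with a pause before giving up. This is only a sketch; max_tries and
# pause are arbitrary, and the loop above would need to call this wrapper instead of
# get_sparql_results directly.
def get_sparql_results_with_retry(endpoint, query, max_tries=3, pause=10):
    for attempt in range(max_tries):
        try:
            return get_sparql_results(endpoint, query)
        except Exception:
            if attempt == max_tries - 1:
                raise
            time.sleep(pause)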
for idx, item in tqdm(enumerate(paged_json)):
with open(f"./entities-search-engine/data/humans/humans_{idx}.json", 'w') as f:
json.dump(item, f) |
0it [00:00, ?it/s][A
178it [00:06, 26.39it/s][A
| MIT | experiments/entities-search-engine/1. load data from sparql.ipynb | TheScienceMuseum/heritage-connector |
By now basically everyone ([here](http://datacolada.org/2014/06/04/23-ceiling-effects-and-replications/?utm_source=feedburner&utm_medium=feed&utm_campaign=Feed%3A+DataColada+%28Data+Colada+Feed%29), [here](http://yorl.tumblr.com/post/87428392426/ceiling-effects), [here](http://www.talyarkoni.org/blog/2014/06/01/there-is-no-ceiling-effect-in-johnson-cheung-donnellan-2014/), [here](http://pigee.wordpress.com/2014/05/24/additional-reflections-on-ceiling-effects-in-recent-replication-research/) and [here](http://www.nicebread.de/reanalyzing-the-schnalljohnson-cleanliness-data-sets-new-insights-from-bayesian-and-robust-approaches/), and there is likely even more out there) who writes a blog and knows how to do a statistical analysis has analysed data from a recent replication study and from the original study (data repository is here). The study consists of two experiments. Let's focus on Experiment 1 here. The experiment consists of a treatment and a control group. Performance is measured by six Likert-scale items. The scale has 9 levels. All responses are averaged together and we obtain a single composite score for each group. We are interested in whether the treatment works, which would show up as a positive difference between the score of the treatment and the control group. The replication study did the same with more subjects. Let's perform the original analysis to see the results and why this dataset is so "popular". | %pylab inline
import pystan
from matustools.matusplotlib import *
from scipy import stats
il=['dog','trolley','wallet','plane','resume','kitten','mean score','median score']
D=np.loadtxt('schnallstudy1.csv',delimiter=',')
D[:,1]=1-D[:,1]
Dtemp=np.zeros((D.shape[0],D.shape[1]+1))
Dtemp[:,:-1]=D
Dtemp[:,-1]=np.median(D[:,2:-2],axis=1)
D=Dtemp
DS=D[D[:,0]==0,1:]
DR=D[D[:,0]==1,1:]
DS.shape
def plotCIttest1(y,x=0,alpha=0.05):
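    # Plot the mean of y at horizontal position x, with a (1 - alpha) t-based confidence interval and a 50% interval.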
m=y.mean();df=y.size-1
se=y.std()/y.size**0.5
cil=stats.t.ppf(alpha/2.,df)*se
cii=stats.t.ppf(0.25,df)*se
out=[m,m-cil,m+cil,m-cii,m+cii]
_errorbar(out,x=x,clr='k')
return out
def plotCIttest2(y1,y2,x=0,alpha=0.05):
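    # Plot the difference in means (y2 - y1) at position x, with a two-sample t-based confidence interval and a 50% interval.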
n1=float(y1.size);n2=float(y2.size);
v1=y1.var();v2=y2.var()
m=y2.mean()-y1.mean()
s12=(((n1-1)*v1+(n2-1)*v2)/(n1+n2-2))**0.5
se=s12*(1/n1+1/n2)**0.5
df= (v1/n1+v2/n2)**2 / ( (v1/n1)**2/(n1-1)+(v2/n2)**2/(n2-1))
cil=stats.t.ppf(alpha/2.,df)*se
cii=stats.t.ppf(0.25,df)*se
out=[m,m-cil,m+cil,m-cii,m+cii]
_errorbar(out,x=x)
return out
plt.figure(figsize=(4,3))
dts=[DS[DS[:,0]==0,-2],DS[DS[:,0]==1,-2],
DR[DR[:,0]==0,-2],DR[DR[:,0]==1,-2]]
for k in range(len(dts)):
plotCIttest1(dts[k],x=k)
plt.grid(False,axis='x')
ax=plt.gca()
ax.set_xticks(range(len(dts)))
ax.set_xticklabels(['OC','OT','RC','RT'])
plt.xlim([-0.5,len(dts)-0.5])
plt.figure(figsize=(4,3))
plotCIttest2(dts[0],dts[1],x=0,alpha=0.1)
plotCIttest2(dts[2],dts[3],x=1,alpha=0.1)
ax=plt.gca()
ax.set_xticks([0,1])
ax.set_xticklabels(['OT-OC','RT-RC'])
plt.grid(False,axis='x')
plt.xlim([-0.5,1.5]); | /usr/local/lib/python2.7/dist-packages/matplotlib-1.3.1-py2.7-linux-i686.egg/matplotlib/font_manager.py:1236: UserWarning: findfont: Font family ['Arial'] not found. Falling back to Bitstream Vera Sans
(prop.get_family(), self.defaultFamily[fontext]))
/usr/local/lib/python2.7/dist-packages/matplotlib-1.3.1-py2.7-linux-i686.egg/matplotlib/figure.py:1595: UserWarning: This figure includes Axes that are not compatible with tight_layout, so its results might be incorrect.
warnings.warn("This figure includes Axes that are not "
| MIT | _ipynb/SchnallSupplement.ipynb | simkovic/simkovic.github.io |
Legend: OC - original study, control group; OT - original study, treatment group; RC - replication study, control group; RT - replication study, treatment group. In the original study the difference between the treatment and control is significantly greater than zero. In the replication, it is not. However, the ratings in the replication are higher overall. The author of the original study therefore raised a concern that no difference was obtained in the replication because of ceiling effects. How do we show that there are ceiling effects in the replication? The authors and bloggers presented various arguments in support of some conclusion (mostly that there are no ceiling effects). Ultimately, ceiling effects are a matter of degree, and since no one knows how to quantify them the whole discussion of the replication's validity is heading into an inferential limbo. My point here is that if the analysis computed the proper effect size - the causal effect size - we would avoid these kinds of arguments and discussions. | def plotComparison(A,B,stan=False):
plt.figure(figsize=(8,16))
cl=['control','treatment']
x=np.arange(11)-0.5
if not stan:assert A.shape[1]==B.shape[1]
for i in range(A.shape[1]-1):
for cond in range(2):
plt.subplot(A.shape[1]-1,2,2*i+cond+1)
a=np.histogram(A[A[:,0]==cond,1+i],bins=x, normed=True)
plt.barh(x[:-1],-a[0],ec='w',height=1)
if stan: a=[B[:,i,cond]]
else: a=np.histogram(B[B[:,0]==cond,1+i],bins=x, normed=True)
plt.barh(x[:-1],a[0],ec='w',fc='g',height=1)
#plt.hist(DS[:,2+i],bins=np.arange(11)-0.5,normed=True,rwidth=0.5)
plt.xlim([-0.7,0.7]);plt.gca().set_yticks(range(10))
plt.ylim([-1,10]);#plt.grid(b=False,axis='y')
if not i: plt.title('condition: '+cl[cond])
if not cond: plt.ylabel(il[i],size=12)
if not i and not cond: plt.legend(['original','replication'],loc=4);
plotComparison(DS,DR)
model='''
data {
int<lower=2> K;
int<lower=0> N;
int<lower=1> M;
int<lower=1,upper=K> y[N,M];
int x[N];
}
parameters {
real beta[M];
ordered[K-1] c[M];
}
transformed parameters{
real pt[M,K-1]; real pc[M,K-1];
for (m in 1:M){
for (k in 1:(K-1)){
pt[m,k] <- inv_logit(beta[m]-c[m][k]);
pc[m,k] <- inv_logit(-c[m][k]);
}}}
model {
for (m in 1:M){
for (k in 1:(K-1)) c[m][k]~ uniform(-100,100);
for (n in 1:N) y[n,m] ~ ordered_logistic(x[n] * beta[m], c[m]);
}}
'''
sm1=pystan.StanModel(model_code=model)
dat = {'y':np.int32(DS[:,1:7])+1,'x':np.int32(DS[:,0]),'N':DS.shape[0] ,'K':10,'M':6}
fit = sm1.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print fit
pt=fit.extract()['pt']
pc=fit.extract()['pc']
DP=np.zeros((pt.shape[2]+2,pt.shape[1],2))
DP[0,:,:]=1
DP[1:-1,:,:]=np.array([pc,pt]).T.mean(2)
DP=-np.diff(DP,axis=0)
plotComparison(DS[:,:7],DP,stan=True)
model='''
data {
int<lower=2> K;
int<lower=0> N;
int<lower=1> M;
int<lower=1,upper=K> y[N,M];
int x[N];
}
parameters {
real beta;
ordered[K-1] c[M];
}
transformed parameters{
real pt[M,K-1]; real pc[M,K-1];
for (m in 1:M){
for (k in 1:(K-1)){
pt[m,k] <- inv_logit(beta-c[m][k]);
pc[m,k] <- inv_logit(-c[m][k]);
}}}
model {
for (m in 1:M){
for (k in 1:(K-1)) c[m][k]~ uniform(-100,100);
for (n in 1:N) y[n,m] ~ ordered_logistic(x[n] * beta, c[m]);
}}
'''
sm2=pystan.StanModel(model_code=model)
dat = {'y':np.int32(DS[:,1:7])+1,'x':np.int32(DS[:,0]),'N':DS.shape[0] ,'K':10,'M':6}
fit2 = sm2.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print fit2
saveStanFit(fit2,'fit2')
w=loadStanFit('fit2')
pt=w['pt']
pc=w['pc']
DP=np.zeros((pt.shape[2]+2,pt.shape[1],2))
DP[0,:,:]=1
DP[1:-1,:,:]=np.array([pc,pt]).T.mean(2)
DP=-np.diff(DP,axis=0)
plotComparison(DS[:,:7],DP,stan=True)
model='''
data {
int<lower=2> K;
int<lower=0> N;
int<lower=1> M;
int<lower=1,upper=K> y[N,M];
int x[N];
}
parameters {
// real mb; real<lower=0,upper=100> sb[2];
vector[2*M-1] bbeta;
ordered[K-1] c;
}
transformed parameters{
real pt[M,K-1]; real pc[M,K-1];
vector[M] beta[2];
for (m in 1:M){
if (m==1){beta[1][m]<-0.0; beta[2][m]<-bbeta[2*M-1];}
else{beta[1][m]<-bbeta[2*(m-1)-1]; beta[2][m]<-bbeta[2*(m-1)];}
for (k in 1:(K-1)){
pt[m,k] <- inv_logit(beta[2][m]-c[k]);
pc[m,k] <- inv_logit(beta[1][m]-c[k]);
}}}
model {
for (k in 1:(K-1)) c[k]~ uniform(-100,100);
//beta[1]~normal(0.0,sb[1]);
//beta[2]~normal(mb,sb[2]);
for (m in 1:M){
for (n in 1:N) y[n,m] ~ ordered_logistic(beta[x[n]+1][m], c);
}}
'''
sm3=pystan.StanModel(model_code=model)
dat = {'y':np.int32(DS[:,1:7])+1,'x':np.int32(DS[:,0]),'N':DS.shape[0] ,'K':10,'M':6}
fit3 = sm3.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
#print fit3
saveStanFit(fit3,'fit3')
w=loadStanFit('fit3')
pt=w['pt']
pc=w['pc']
DP=np.zeros((pt.shape[2]+2,pt.shape[1],2))
DP[0,:,:]=1
DP[1:-1,:,:]=np.array([pc,pt]).T.mean(2)
DP=-np.diff(DP,axis=0)
plotComparison(DS[:,:7],DP,stan=True)
model='''
data {
int<lower=2> K;
int<lower=0> N;
int<lower=1> M;
int<lower=1,upper=K> y[N,M];
int x[N];
}
parameters {
// real mb; real<lower=0,upper=100> sb[2];
vector[M-1] bbeta;
real delt;
ordered[K-1] c;
}
transformed parameters{
real pt[M,K-1]; real pc[M,K-1];
vector[M] beta;
for (m in 1:M){
if (m==1) beta[m]<-0.0;
else beta[m]<-bbeta[m-1];
for (k in 1:(K-1)){
pt[m,k] <- inv_logit(beta[m]+delt-c[k]);
pc[m,k] <- inv_logit(beta[m]-c[k]);
}}}
model {
for (k in 1:(K-1)) c[k]~ uniform(-100,100);
for (m in 1:M){
for (n in 1:N) y[n,m] ~ ordered_logistic(beta[m]+delt*x[n], c);
}}
'''
sm4=pystan.StanModel(model_code=model)
dat = {'y':np.int32(DS[:,1:7])+1,'x':np.int32(DS[:,0]),'N':DS.shape[0] ,'K':10,'M':6}
fit4 = sm4.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print pystan.misc._print_stanfit(fit4,pars=['delt','bbeta','c'],digits_summary=2)
saveStanFit(fit4,'fit4')
w=loadStanFit('fit4')
pt=w['pt']
pc=w['pc']
DP=np.zeros((pt.shape[2]+2,pt.shape[1],2))
DP[0,:,:]=1
DP[1:-1,:,:]=np.array([pc,pt]).T.mean(2)
DP=-np.diff(DP,axis=0)
plotComparison(DS[:,:7],DP,stan=True)
pystanErrorbar(w,keys=['beta','c','delt'])
dat = {'y':np.int32(DR[:,1:7])+1,'x':np.int32(DR[:,0]),'N':DR.shape[0] ,'K':10,'M':6}
fit5 = sm4.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print pystan.misc._print_stanfit(fit4,pars=['delt','bbeta','c'],digits_summary=2)
saveStanFit(fit5,'fit5')
w=loadStanFit('fit5')
pt=w['pt']
pc=w['pc']
DP=np.zeros((pt.shape[2]+2,pt.shape[1],2))
DP[0,:,:]=1
DP[1:-1,:,:]=np.array([pc,pt]).T.mean(2)
DP=-np.diff(DP,axis=0)
plotComparison(DR[:,:7],DP,stan=True)
pystanErrorbar(w,keys=['beta','c','delt'])
model='''
data {
int<lower=2> K;
int<lower=0> N;
int<lower=1> M;
int<lower=1,upper=K> y[N,M];
int x[N,2];
}
parameters {
// real mb; real<lower=0,upper=100> sb[2];
vector[M-1] bbeta;
real dd[3];
ordered[K-1] c;
}
transformed parameters{
//real pt[M,K-1]; real pc[M,K-1];
vector[M] beta;
for (m in 1:M){
if (m==1) beta[m]<-0.0;
else beta[m]<-bbeta[m-1];
//for (k in 1:(K-1)){
// pt[m,k] <- inv_logit(beta[m]+delt-c[k]);
// pc[m,k] <- inv_logit(beta[m]-c[k]);}
}}
model {
for (k in 1:(K-1)) c[k]~ uniform(-100,100);
for (m in 1:M){
for (n in 1:N) y[n,m] ~ ordered_logistic(beta[m]
+dd[2]*x[n,1]*(1-x[n,2]) // rep + control
+dd[1]*x[n,2]*(1-x[n,1]) // orig + treat
+dd[3]*x[n,1]*x[n,2], c); // rep + treat
}}
'''
sm5=pystan.StanModel(model_code=model)
dat = {'y':np.int32(D[:,2:8])+1,'x':np.int32(D[:,[0,1]]),'N':D.shape[0] ,'K':10,'M':6}
fit6 = sm5.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print pystan.misc._print_stanfit(fit6,pars=['dd','bbeta','c'],digits_summary=2)
saveStanFit(fit6,'fit6')
w=loadStanFit('fit6')
pystanErrorbar(w,keys=['beta','c','dd'])
plt.figure(figsize=(10,4))
c=w['c']
b=w['beta']
d=w['dd']
errorbar(c,x=np.linspace(6.5,8,9))
ax=plt.gca()
plt.plot([-1,100],[0,0],'k',lw=2)
ax.set_yticks(np.median(c,axis=0))
ax.set_yticklabels(np.arange(1,10)+0.5)
plt.grid(b=False,axis='x')
errorbar(b[:,::-1],x=np.arange(9,15),clr='g')
errorbar(d,x=np.arange(15,18),clr='r')
plt.xlim([6,17.5])
ax.set_xticks(range(9,18))
ax.set_xticklabels(il[:6][::-1]+['OT','RC','RT'])
for i in range(d.shape[1]): printCI(d[:,i])
printCI(d[:,2]-d[:,1])
c
def ordinalLogitRvs(beta, c,n,size=1):
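    # Returns the rounded expected category counts (out of n) for an ordered-logit model with location beta and cutpoints c.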
assert np.all(np.diff(c)>0) # c must be strictly increasing
def invLogit(x): return 1/(1+np.exp(-x))
p=[1]+list(invLogit(beta-c))+[0]
p=-np.diff(p)
#return np.random.multinomial(n,p,size)
return np.int32(np.round(p*n))
def reformatData(dat):
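    # Expands a vector of per-category counts into a vector of category indices (one entry per observation).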
out=[]
for k in range(dat.size):
out.extend([k]*dat[k])
return np.array(out)
b=np.linspace(-10,7,21)
d=np.median(w['dd'][:,0])
c=np.median(w['c'],axis=0)
S=[];P=[]
for bb in b:
S.append([np.squeeze(ordinalLogitRvs(bb,c,100)),
np.squeeze(ordinalLogitRvs(bb+d,c,100))])
P.append([reformatData(S[-1][0]),reformatData(S[-1][1])])
model='''
data {
int<lower=2> K;
int<lower=0> y1[K];
int<lower=0> y2[K];
}
parameters {
real<lower=-1000,upper=1000> d;
ordered[K-1] c;
}
model {
for (k in 1:(K-1)) c[k]~ uniform(-200,200);
for (k in 1:K){
for (n in 1:y1[k]) k~ ordered_logistic(0.0,c);
for (n in 1:y2[k]) k~ ordered_logistic(d ,c);
}}
'''
sm9=pystan.StanModel(model_code=model)
#(S[k][0]!=0).sum()+1
for k in range(21):
i1=np.nonzero(S[k][0]!=0)[0]
i2=np.nonzero(S[k][1]!=0)[0]
if max((S[k][0]!=0).sum(),(S[k][1]!=0).sum())<9:
s= max(min(i1[0],i2[0])-1,0)
e= min(max(i1[-1],i2[-1])+1,10)
S[k][0]=S[k][0][s:e+1]
S[k][1]=S[k][1][s:e+1]
S[0][0].size
ds=[];cs=[]
for k in range(len(S)):
dat = {'y1':S[k][0],'y2':S[k][1],'K':S[k][0].size}
fit = sm9.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
print fit
saveStanFit(fit,'dc%d'%k)
for k in range(21):
i1=np.nonzero(S[k][0]!=0)[0]
i2=np.nonzero(S[k][1]!=0)[0]
if max((S[k][0]!=0).sum(),(S[k][1]!=0).sum())<9:
s= min(i1[0],i2[0])
e= max(i1[-1],i2[-1])
S[k][0]=S[k][0][s:e+1]
S[k][1]=S[k][1][s:e+1]
ds=[];cs=[]
for k in range(len(S)):
if S[k][0].size==1: continue
dat = {'y1':S[k][0],'y2':S[k][1],'K':S[k][0].size}
fit = sm9.sampling(data=dat,iter=1000, chains=4,thin=2,warmup=500,njobs=2,seed=4)
#print fit
saveStanFit(fit,'dd%d'%k)
ds=[];xs=[]
for k in range(b.size):
try:
f=loadStanFit('dd%d'%k)['d']
xs.append(b[k])
ds.append(f)
except:pass
ds=np.array(ds);xs=np.array(xs)
ds.shape
d1=np.median(w['dd'][:,0])
d2=DS[DS[:,0]==1,-2].mean()-DS[DS[:,0]==0,-2].mean()
plt.figure(figsize=(8,4))
plt.plot([-10,5],[d1,d1],'r',alpha=0.5)
res1=errorbar(ds.T,x=xs-0.1)
ax1=plt.gca()
plt.ylim([-2,2])
plt.xlim([-10,5])
plt.grid(b=False,axis='x')
ax2 = ax1.twinx()
res2=np.zeros((b.size,5))
for k in range(b.size):
res2[k,:]=plotCIttest2(y1=P[k][0],y2=P[k][1],x=b[k]+0.1)
plt.ylim([-2/d1*d2,2/d1*d2])
plt.xlim([-10,5])
plt.grid(b=False,axis='y')
plt.plot(np.median(w['beta'],axis=0),[-0.9]*6,'ob')
plt.plot(np.median(w['beta']+np.atleast_2d(w['dd'][:,1]).T,axis=0),[-1.1]*6,'og')
d1=np.median(w['dd'][:,0])
d2=DS[DS[:,0]==1,-2].mean()-DS[DS[:,0]==0,-2].mean()
plt.figure(figsize=(8,4))
ax1=plt.gca()
plt.plot([-10,5],[d1,d1],'r',alpha=0.5)
temp=[list(xs)+list(xs)[::-1],list(res1[:,1])+list(res1[:,2])[::-1]]
ax1.add_patch(plt.Polygon(xy=np.array(temp).T,alpha=0.2,fc='k',ec='k'))
plt.plot(xs,res1[:,0],'k')
plt.ylim([-1.5,2])
plt.xlim([-10,5])
plt.grid(b=False,axis='x')
plt.legend(['True ES','Estimate Ordinal Logit'],loc=8)
plt.ylabel('Estimate Ordinal Logit')
ax2 = ax1.twinx()
temp=[list(b)+list(b)[::-1],list(res2[:,1])+list(res2[:,2])[::-1]]
for t in range(len(temp[0]))[::-1]:
if np.isnan(temp[1][t]):
temp[0].pop(t);temp[1].pop(t)
ax2.add_patch(plt.Polygon(xy=np.array(temp).T,alpha=0.2,fc='m',ec='m'))
plt.plot(b,res2[:,0],'m')
plt.ylim([-1.5/d1*d2,2/d1*d2])
plt.xlim([-10,5])
plt.grid(b=False,axis='y')
plt.plot(np.median(w['beta'],axis=0),[-0.3]*6,'ob')
plt.plot(np.median(w['beta']+np.atleast_2d(w['dd'][:,1]).T,axis=0),[-0.5]*6,'og')
plt.legend(['Estimate T-C','Item Difficulty Orignal Study','Item Difficulty Replication'],loc=4)
plt.ylabel('Estimate T - C',color='m')
for tl in ax2.get_yticklabels():tl.set_color('m') | _____no_output_____ | MIT | _ipynb/SchnallSupplement.ipynb | simkovic/simkovic.github.io |
!pip3 install xgboost > /dev/null
import pandas as pd
import numpy as np
import io
import gc
import time
from pprint import pprint
# import PIL.Image as Image
# import matplotlib.pylab as plt
from datetime import date
# import tensorflow as tf
# import tensorflow_hub as hub
# settings
import warnings
warnings.filterwarnings("ignore")
gc.enable()
# Calculating Precision, Recall and f1-score
def model_score(actual_value,predicted_values):
from sklearn.metrics import confusion_matrix
from sklearn.metrics import accuracy_score
from sklearn.metrics import classification_report
from sklearn.metrics import recall_score
actual = actual_value
predicted = predicted_values
results = confusion_matrix(actual, predicted)
print('Confusion Matrix :')
print(results)
print('Accuracy Score :',accuracy_score(actual, predicted))
print('Report : ')
print(classification_report(actual, predicted))
print('Recall Score : ')
print(recall_score(actual, predicted))
# connect to google drive
from google.colab import drive
drive.mount('/content/drive')
gDrivePath = '/content/drive/MyDrive/Datasets/Hackerearth_vehicle_insurance_claim/dataset/'
gDriveTrainFinal = gDrivePath + 'final_datasets/train_final.csv'
gDriveTestFinal = gDrivePath + 'final_datasets/test_final.csv'
df_train = pd.read_csv(gDriveTrainFinal)
df_test = pd.read_csv(gDriveTestFinal)
df_train.head()
df_train.sample(n = 10)
df_train.drop(['image_name'], axis=1, inplace=True)
df_test.drop(['image_name'], axis=1, inplace=True)
df_train[['Insurance_company', 'Cost_of_vehicle', 'Min_coverage', 'Max_coverage', 'Condition', 'Amount']].isna().any() | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
|
Removing NaN in target variable | # select rows where amount is not NaN
df_train = df_train[df_train['Amount'].notna()]
df_train[df_train['Amount'].isna()].shape
# delete rows where Amount < 0
df_train = df_train[df_train['Amount'] >= 0]
df_train[['Cost_of_vehicle', 'Min_coverage', 'Max_coverage', 'Amount']].describe()
selected_columns = ['Cost_of_vehicle', 'Min_coverage', 'Max_coverage']
# replacing nan values with median
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values = np.nan, strategy ='median')
imputer = imputer.fit(df_train[selected_columns])
# Imputing the data
df_train[selected_columns] = imputer.transform(df_train[selected_columns])
df_test[selected_columns] = imputer.transform(df_test[selected_columns])
df_train[['Insurance_company', 'Cost_of_vehicle', 'Min_coverage', 'Max_coverage', 'Condition', 'Amount']].isna().any() | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Checking if the dataset is balanced/imbalanced - Condition | # python check if dataset is imbalanced : https://www.kaggle.com/rafjaa/resampling-strategies-for-imbalanced-datasets
target_count = df_train['Condition'].value_counts()
print('Class 0 (No):', target_count[0])
print('Class 1 (Yes):', target_count[1])
print('Proportion:', round(target_count[0] / target_count[1], 2), ': 1')
target_count.plot(kind='bar', title='Condition') | Class 0 (No): 99
Class 1 (Yes): 1288
Proportion: 0.08 : 1
| MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Splitting Data into train-cv | classification_labels = df_train['Condition'].values
# for regresion delete rows where Condition = 0
df_train_regression = df_train[df_train['Condition'] == 1]
regression_labels = df_train_regression['Amount'].values
######
df_train_regression.drop(['Condition','Amount'], axis=1, inplace=True)
df_train.drop(['Condition','Amount'], axis=1, inplace=True)
df_test.drop(['Condition','Amount'], axis=1, inplace=True, errors='ignore')
# classification split
from sklearn.model_selection import train_test_split
X_train, X_cv, y_train, y_cv = train_test_split(df_train, classification_labels, test_size=0.1) | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Over Sampling using SMOTE | # https://machinelearningmastery.com/smote-oversampling-for-imbalanced-classification/
from imblearn.over_sampling import SMOTE
smote_overSampling = SMOTE()
X_train,y_train = smote_overSampling.fit_resample(X_train,y_train)
unique, counts = np.unique(y_train, return_counts=True)
dict(zip(unique, counts)) | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Scaling data | from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
X_train_scaled = scaler.fit_transform(X_train)
X_cv_scaled = scaler.transform(X_cv)
X_test_scaled = scaler.transform(df_test)
X_train_scaled | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Modelling & Cross-Validation Classification | %%time
# Train multiple models : https://www.kaggle.com/tflare/testing-multiple-models-with-scikit-learn-0-79425
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC, LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier
from sklearn.ensemble import AdaBoostClassifier
from sklearn.ensemble import BaggingClassifier
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegressionCV
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
models = []
LogisticRegression = LogisticRegression(n_jobs=-1)
LinearSVC = LinearSVC()
KNeighbors = KNeighborsClassifier(n_jobs=-1)
DecisionTree = DecisionTreeClassifier()
RandomForest = RandomForestClassifier()
AdaBoost = AdaBoostClassifier()
Bagging = BaggingClassifier()
ExtraTrees = ExtraTreesClassifier()
GradientBoosting = GradientBoostingClassifier()
LogisticRegressionCV = LogisticRegressionCV(n_jobs=-1)
XGBClassifier = XGBClassifier(nthread=-1)
# models.append(("LogisticRegression",LogisticRegression))
# models.append(("LinearSVC", LinearSVC))
# models.append(("KNeighbors", KNeighbors))
# models.append(("DecisionTree", DecisionTree))
# models.append(("RandomForest", RandomForest))
models.append(("AdaBoost", AdaBoost))
# models.append(("Bagging", Bagging))
# models.append(("ExtraTrees", ExtraTrees))
# models.append(("GradientBoosting", GradientBoosting))
# models.append(("LogisticRegressionCV", LogisticRegressionCV))
# models.append(("XGBClassifier", XGBClassifier))
# metric_names = ['f1', 'average_precision', 'accuracy', 'precision', 'recall']
metric_names = ['f1']
results = []
names = []
nested_dict = {}
for name,model in models:
nested_dict[name] = {}
for metric in metric_names:
print("\nRunning : {}, with metric : {}".format(name, metric))
score = cross_val_score(model, X_train_scaled, y_train, n_jobs=-1, scoring=metric, cv=5)
nested_dict[name][metric] = score.mean()
import json
print(json.dumps(nested_dict, sort_keys=True, indent=4)) | {
"AdaBoost": {
"f1": 0.9991397849462367
}
}
| MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Regression | X_train_regression, X_cv_regression, y_train_regression, y_cv_regression = train_test_split(df_train_regression, regression_labels, test_size=0.1)
scaler = StandardScaler()
X_train_scaled_regression = scaler.fit_transform(X_train_regression)
X_cv_scaled_regression = scaler.transform(X_cv_regression)
X_test_scaled_regression = scaler.transform(df_test)
X_train_scaled_regression
%%time
from sklearn.linear_model import LinearRegression, SGDRegressor
from sklearn.svm import SVR, LinearSVR
from sklearn.neighbors import KNeighborsRegressor
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.ensemble import AdaBoostRegressor
from sklearn.ensemble import BaggingRegressor
from sklearn.ensemble import ExtraTreesRegressor
from sklearn.ensemble import GradientBoostingRegressor
from xgboost import XGBClassifier
from sklearn.model_selection import cross_val_score
models = []
LinearReg = LinearRegression(n_jobs=-1)
SGDReg = SGDRegressor()
SVReg = SVR()
LinearSVReg = LinearSVR()
KNeighborsReg = KNeighborsRegressor(n_jobs=-1)
DecisionTreeReg = DecisionTreeRegressor()
RandomForestReg = RandomForestRegressor(n_jobs=-1)
AdaBoostReg = AdaBoostRegressor()
BaggingReg = BaggingRegressor(n_jobs=-1)
ExtraTreesReg = ExtraTreesRegressor(n_jobs=-1)
GradientBoostingReg = GradientBoostingRegressor()
# XGBReg = XGBRegressor(nthread=-1)
# models.append(("LinearRegression",LinearReg))
# models.append(("SGDRegressor",SGDReg))
# models.append(("SVR", SVReg))
# models.append(("LinearSVR", LinearSVReg))
# models.append(("KNeighborsRegressor", KNeighborsReg))
# models.append(("DecisionTreeRegressor", DecisionTreeReg))
# models.append(("RandomForestRegressor", RandomForestReg))
# models.append(("AdaBoostRegressor", AdaBoostReg))
# models.append(("BaggingRegressor", BaggingReg))
models.append(("ExtraTreesRegressor", ExtraTreesReg))
# models.append(("GradientBoostingRegressor", GradientBoostingReg))
# models.append(("XGBReg", XGBRegressor))
# metric_names = ['f1', 'average_precision', 'accuracy', 'precision', 'recall']
metric_names = ['r2']
results = []
names = []
nested_dict = {}
# for name,model in models:
# nested_dict[name] = {}
# for metric in metric_names:
# print("\nRunning : {}, with metric : {}".format(name, metric))
# score = cross_val_score(model, X_train_scaled_regression, y_train_regression, n_jobs=-1, scoring=metric, cv=5)
# nested_dict[name][metric] = score.mean()
# import json
# print(json.dumps(nested_dict, sort_keys=True, indent=4))
# # Hyperparameter tuning ExtraTreesRegressor
# # ExtraTreesRegressor(bootstrap=True, criterion='mae',n_estimators=100, warm_start=True,
# # max_depth=None, max_features='auto', max_leaf_nodes=None,
# # max_samples=None, min_impurity_decrease=0.0,
# # min_impurity_split=None, min_samples_leaf=1,
# # min_samples_split=2, min_weight_fraction_leaf=0.0,
# # n_jobs=-1, oob_score=False,
# # random_state=None, verbose=0)
from sklearn.model_selection import GridSearchCV, RandomizedSearchCV
model = ExtraTreesRegressor(n_jobs=-1, bootstrap=True, criterion='mae', warm_start=True, max_depth=9, max_features='auto')
param_grid = {
# 'n_estimators': np.arange(100, 3000, 100, dtype=int),
# 'criterion': ['mse', 'mae'],
# 'max_depth': np.arange(5, 16, 1, dtype=int),
# 'bootstrap': [True, False],
# 'max_features': ['auto', 'sqrt', 'log2'],
# 'max_features': np.arange(100, 1540, 20, dtype=int),
# 'warm_start': [True, False],
}
gsc = GridSearchCV(estimator=model, param_grid=param_grid, scoring='r2', cv=5, n_jobs=-1, verbose=1000)
grid_result = gsc.fit(X_train_scaled_regression, y_train_regression)
# n_iter_search = 100
# random_search = RandomizedSearchCV(estimator=model, param_distributions=param_grid, n_iter=n_iter_search, scoring='r2', cv=3, n_jobs=-1, verbose=500)
# random_search.fit(X_train, y_train)
print("Best: %f using %s" % (grid_result.best_score_, grid_result.best_params_)) | Best: 0.062463 using {}
| MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Predicting on CV data | classification_alg = AdaBoost
# regression_alg = ExtraTreesReg
# hypertuned model
regression_alg = gsc
classification_alg.fit(X_train_scaled, y_train)
regression_alg.fit(X_train_scaled_regression, y_train_regression)
# predictions_class = classification_alg.predict(X_cv)
# pprint(classification_alg.get_params())
# model_score(y_cv,predictions) | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Predicting on test Data | trained_classifier = classification_alg
trained_regressor = regression_alg
predictions_trained_classifier_test = trained_classifier.predict(X_test_scaled)
predictions_trained_regressor_test = trained_regressor.predict(X_test_scaled_regression)
read = pd.read_csv(gDrivePath + 'test.csv')
submission = pd.DataFrame({
"Image_path": read["Image_path"],
"Condition": predictions_trained_classifier_test,
"Amount": predictions_trained_regressor_test
})
submission.head()
submission.loc[submission.Condition == 0, 'Amount'] = 0  # .loc avoids pandas chained-assignment warnings
submission[submission['Condition'] == 0].sample(n = 10)
submission.Amount = submission.Amount.round()
submission.head()
submission.to_csv('./submission.csv', index=False) | _____no_output_____ | MIT | Hackerearth-Predict_condition_and_insurance_amount/train_models.ipynb | chiranjeet14/ML_Projects |
Build a Traffic Sign Recognition Classifier Deep Learning Some improvements are made :- [x] Adding a convolution with a kernel the same size as the previous layer's feature map, to get a 1x1 layer- [x] Activation function uses 'ReLU' instead of 'tanh'- [x] Adaptive learning rate, decayed along the training phase- [x] Enhanced training dataset Load and Visualize the Enhanced training datasetStarting from the standard German Traffic Signs dataset, we add some 'generalized' signs to cover cases where the classifier cannot interpret the small figures inside a sign well. Also, in our enhanced training dataset each figure is taken from a standard library - not from road images - so they are very clear and in high definition. *Enhanced traffic signs ↓* | # load enhanced traffic signs
import os
import cv2
import matplotlib.pyplot as plot
import numpy
dir_enhancedsign = 'figures/enhanced_training_dataset2'  # forward slash keeps the path portable (the original backslash only works on Windows)
files_enhancedsign = [os.path.join(dir_enhancedsign, f) for f in os.listdir(dir_enhancedsign)]
# read & resize (32,32) images in enhanced dataset
images_enhancedsign = numpy.array([cv2.cvtColor(cv2.resize(cv2.imread(f), (32,32), interpolation = cv2.INTER_AREA), cv2.COLOR_BGR2RGB) for f in files_enhancedsign])
# plot new test images
fig, axes = plot.subplots(7, 8)
plot.suptitle('Enhanced training dataset')
for i, ax in enumerate(axes.ravel()):
if i < 50:
ax.imshow(images_enhancedsign[i])
# ax.set_title('{}'.format(i))
plot.setp(ax.get_xticklabels(), visible=False)
plot.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([]), ax.set_yticks([])
ax.axis('off')
plot.draw()
fig.savefig('figures/' + 'enhancedsign' + '.jpg', dpi=700)
print("Image Shape : {}".format(images_enhancedsign[0].shape))
print()
print("Enhanced Training Dataset : {} samples".format(len(images_enhancedsign)))
# classes of enhanced dataset are taken from their filenames
import re
regex = re.compile(r'\d+')
y_enhancedsign = [int(regex.findall(f)[0]) for f in os.listdir(dir_enhancedsign)]
print(y_enhancedsign) | [0, 1, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 2, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 3, 30, 31, 32, 33, 34, 35, 36, 37, 38, 39, 4, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 5, 6, 7, 8, 9]
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
*Enhanced German traffic signs dataset ↓* **We now have 50 classes in total with the new enhanced training dataset :** | n_classes_enhanced = len(numpy.unique(y_enhancedsign))
print('n_classes enhanced : {}'.format(n_classes_enhanced)) | n_classes enhanced : 50
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Load and Visualize the standard German Traffic Signs Dataset | # Load pickled data
import pickle
import numpy
# TODO: Fill this in based on where you saved the training and testing data
training_file = 'traffic-signs-data/train.p'
validation_file = 'traffic-signs-data/valid.p'
testing_file = 'traffic-signs-data/test.p'
with open(training_file, mode='rb') as f:
train = pickle.load(f)
with open(validation_file, mode='rb') as f:
valid = pickle.load(f)
with open(testing_file, mode='rb') as f:
test = pickle.load(f)
X_train, y_train = train['features'], train['labels'] # training dataset
X_valid, y_valid = valid['features'], valid['labels'] # validation dataset used in training phase
X_test, y_test = test['features'], test['labels'] # test dataset
n_classes_standard = len(numpy.unique(y_train))
assert(len(X_train) == len(y_train))
assert(len(X_valid) == len(y_valid))
assert(len(X_test) == len(y_test))
print("Image Shape : {}".format(X_train[0].shape))
print()
print("Training Set : {} samples".format(len(X_train)))
print("Validation Set : {} samples".format(len(X_valid)))
print("Test Set : {} samples".format(len(X_test)))
print('n_classes standard : {}'.format(n_classes_standard))
n_classes = n_classes_enhanced | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Implementation of LeNet>http://yann.lecun.com/exdb/publis/pdf/sermanet-ijcnn-11.pdf Above is the article by Pierre Sermanet and Yann LeCun (IJCNN 2011) that we can follow to build LeNet-style convolutional networks with good accuracy, even for beginners in deep learning. It's really exciting to see that many years of work can now be implemented in just 9 lines of code thanks to the Keras high-level API (a low-level implementation with TensorFlow 1 is roughly 20 lines of code).>Here is also an interesting Medium article : https://medium.com/@mgazar/lenet-5-in-9-lines-of-code-using-keras-ac99294c8086 | ### Import tensorflow and keras
import tensorflow as tf
from tensorflow import keras
print ("TensorFlow version: " + tf.__version__) | TensorFlow version: 2.1.0
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
2-stage ConvNet architecture by Pierre Sermanet and Yann LeCunWe will implement the 2-stage ConvNet architecture by Pierre Sermanet and Yann LeCun, which is not sequential. Keras provides the keras.Sequential() API for sequential architectures, but it cannot handle models with non-linear topology, shared layers or multiple inputs/outputs, so we use the Keras functional API instead; choosing this 2-stage architecture is also a good challenge.>Source: "Traffic Sign Recognition with Multi-Scale Convolutional Networks" by `Pierre Sermanet` and `Yann LeCun`In this architecture, the 1st stage's output is fed forward to the classifier (which can be considered a 3rd stage). | #LeNet model
inputs = keras.Input(shape=(32,32,3), name='image_in')
#0 stage :conversion from normalized RGB [0..1] to HSV
layer_HSV = tf.image.rgb_to_hsv(inputs)
#1st stage ___________________________________________________________
#Convolution with ReLU activation
layer1_conv = keras.layers.Conv2D(256, kernel_size=(5,5), strides=1, activation='relu', padding='valid')(layer_HSV)
#Max Pooling
layer1_maxpool = keras.layers.MaxPooling2D(pool_size=(2,2), strides=2, padding='valid')(layer1_conv)
#Conv 1x1
layer1_conv1x1 = keras.layers.Conv2D(256, kernel_size=(14,14), strides=1, activation='relu', padding='valid')(layer1_maxpool)
#2nd stage ___________________________________________________________
#Convolution with ReLU activation
layer2_conv = keras.layers.Conv2D(64, kernel_size=(5,5), strides=1, activation='relu', padding='valid')(layer1_maxpool)
#MaxPooling 2D
layer2_maxpool = keras.layers.MaxPooling2D(pool_size=(2,2), strides=2, padding='valid')(layer2_conv)
#Conv 1x1
layer2_conv1x1 = keras.layers.Conv2D(512, kernel_size=(5,5), strides=1, activation='relu', padding='valid')(layer2_maxpool)
#3rd stage | Classifier ______________________________________________
#Flatten and concatenate the outputs of stage 1 and stage 2
layer3_flatten_1 = keras.layers.Flatten()(layer1_conv1x1)
layer3_flatten_2 = keras.layers.Flatten()(layer2_conv1x1)
layer3_concat = keras.layers.Concatenate()([layer3_flatten_1, layer3_flatten_2])
#Dense (fully-connected)
layer3_dense_1 = keras.layers.Dense(units=129, activation='relu', kernel_initializer="he_normal")(layer3_concat)
# layer3_dense_2 = keras.layers.Dense(units=129, activation='relu', kernel_initializer="he_normal")(layer3_dense_1)
#Dense (fully-connected) | logits for n_classes categories (50 with the enhanced dataset)
outputs = keras.layers.Dense(units=n_classes)(layer3_dense_1)
LeNet_Model = keras.Model(inputs, outputs, name="LeNet_Model_improved")
#Plot model architecture
LeNet_Model.summary()
keras.utils.plot_model(LeNet_Model, "figures/LeNet_improved_HLS.png", show_shapes=True) | Model: "LeNet_Model_improved"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
image_in (InputLayer) [(None, 32, 32, 3)] 0
__________________________________________________________________________________________________
tf_op_layer_RGBToHSV (TensorFlo [(None, 32, 32, 3)] 0 image_in[0][0]
__________________________________________________________________________________________________
conv2d (Conv2D) (None, 28, 28, 256) 19456 tf_op_layer_RGBToHSV[0][0]
__________________________________________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 14, 14, 256) 0 conv2d[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 10, 10, 64) 409664 max_pooling2d[0][0]
__________________________________________________________________________________________________
max_pooling2d_1 (MaxPooling2D) (None, 5, 5, 64) 0 conv2d_2[0][0]
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 1, 1, 256) 12845312 max_pooling2d[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 1, 1, 512) 819712 max_pooling2d_1[0][0]
__________________________________________________________________________________________________
flatten (Flatten) (None, 256) 0 conv2d_1[0][0]
__________________________________________________________________________________________________
flatten_1 (Flatten) (None, 512) 0 conv2d_3[0][0]
__________________________________________________________________________________________________
concatenate (Concatenate) (None, 768) 0 flatten[0][0]
flatten_1[0][0]
__________________________________________________________________________________________________
dense (Dense) (None, 129) 99201 concatenate[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 50) 6500 dense[0][0]
==================================================================================================
Total params: 14,199,845
Trainable params: 14,199,845
Non-trainable params: 0
__________________________________________________________________________________________________
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Input preprocessing Color-SpacePierre Sermanet and Yann LeCun used the YUV color space, with most of the processing on the Y-channel (Y stands for brightness, U and V for chrominance). NormalizationEach channel of an image is on the uint8 scale (0-255); we normalize each channel to 0-1. Generally, we normalize data so that values are centered in a small range such as -1 to 1, to limit numerical errors accumulated over many matrix operations: in a product like 255x255x255x255xk, a small error in k is amplified enormously. | import cv2
def input_normalization(X_in):
X = numpy.float32(X_in/255.0)
return X
# normalization of dataset
# enhanced training dataset is added
X_train_norm = input_normalization(X_train)
X_valid_norm = input_normalization(X_valid)
X_enhancedtrain_norm = input_normalization(images_enhancedsign)
# one-hot matrix
y_train_onehot = keras.utils.to_categorical(y_train, n_classes)
y_valid_onehot = keras.utils.to_categorical(y_valid, n_classes)
y_enhanced_onehot = keras.utils.to_categorical(y_enhancedsign, n_classes)
print(X_train_norm.shape)
print('{0:.4g}'.format(numpy.max(X_train_norm)))
print('{0:.3g}'.format(numpy.min(X_train_norm)))
print(X_enhancedtrain_norm.shape)
print('{0:.4g}'.format(numpy.max(X_enhancedtrain_norm)))
print('{0:.3g}'.format(numpy.min(X_enhancedtrain_norm))) | (50, 32, 32, 3)
1
0
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
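The paragraph above mentions that Sermanet and LeCun worked in YUV, while the model converts to HSV inside the graph. A minimal sketch of the YUV alternative (not in the original notebook; 'inputs' is the keras.Input defined in the model cell):
# sketch: RGB-to-YUV conversion, closer to the Sermanet & LeCun paper, as an alternative to the HSV branch
layer_YUV = tf.image.rgb_to_yuv(inputs)  # expects float RGB in [0, 1], like rgb_to_hsv above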
Training Pipeline_Optimizer : we use an Adam-family optimizer (Nadam in the code below), which converges better than plain SGD (Stochastic Gradient Descent) _Loss function : categorical cross-entropy _Metrics : accuracy *A learning rate of 0.001 works well with our network; it's better to start with a small learning rate. | rate = 0.001
LeNet_Model.compile(
optimizer=keras.optimizers.Nadam(learning_rate = rate, beta_1=0.9, beta_2=0.999, epsilon=1e-07),
loss=keras.losses.CategoricalCrossentropy(from_logits=True),
metrics=["accuracy"]) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
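The improvements list at the top of this notebook mentions a decayed learning rate, while the compile call above uses a fixed rate. A minimal sketch of a decayed schedule is below; the decay_steps and decay_rate values are illustrative assumptions, not taken from the original notebook.
# hypothetical variant: let the learning rate decay during training
lr_schedule = keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=0.001,  # same starting rate as 'rate' above
    decay_steps=1000,             # assumed: decay every 1000 optimizer steps
    decay_rate=0.9)               # assumed: multiply the learning rate by 0.9 each time
# Adam accepts a schedule object directly in place of a float learning rate
optimizer_decayed = keras.optimizers.Adam(learning_rate=lr_schedule)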
Real-time data augmentation | from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen_enhanced = ImageDataGenerator(
rotation_range=30.0,
zoom_range=0.5,
width_shift_range=0.5,
height_shift_range=0.5,
featurewise_center=True,
featurewise_std_normalization=True,
horizontal_flip=False)
datagen_enhanced.fit(X_enhancedtrain_norm)
from tensorflow.keras.preprocessing.image import ImageDataGenerator
datagen = ImageDataGenerator(
rotation_range=15.0,
zoom_range=0.2,
width_shift_range=0.1,
height_shift_range=0.1,
featurewise_center=False,
featurewise_std_normalization=False,
horizontal_flip=False)
datagen.fit(X_train_norm) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
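To sanity-check the augmentation settings, one can preview a generated batch (a sketch, not from the original notebook):
# sketch: draw one augmented batch and inspect its shape and value range
x_batch, y_batch = next(datagen.flow(X_train_norm, y_train_onehot, batch_size=4))
print(x_batch.shape, x_batch.min(), x_batch.max())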
Train the Model on standard training dataset | EPOCHS = 30
BATCH_SIZE = 32
STEPS_PER_EPOCH = int(len(X_train_norm)/BATCH_SIZE)
history_standard_HLS = LeNet_Model.fit(
datagen.flow(X_train_norm, y_train_onehot, batch_size=BATCH_SIZE,shuffle=True),
validation_data=(X_valid_norm, y_valid_onehot),
shuffle=True,
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS) | WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 1087 steps, validate on 4410 samples
Epoch 1/30
1087/1087 [==============================] - 310s 285ms/step - loss: 1.5436 - accuracy: 0.5221 - val_loss: 1.1918 - val_accuracy: 0.6120
Epoch 2/30
1087/1087 [==============================] - 305s 281ms/step - loss: 0.5978 - accuracy: 0.7973 - val_loss: 0.8420 - val_accuracy: 0.7415
Epoch 3/30
1087/1087 [==============================] - 304s 279ms/step - loss: 0.3351 - accuracy: 0.8877 - val_loss: 0.8672 - val_accuracy: 0.7873
Epoch 4/30
1087/1087 [==============================] - 304s 279ms/step - loss: 0.2341 - accuracy: 0.9236 - val_loss: 0.7044 - val_accuracy: 0.8091
Epoch 5/30
1087/1087 [==============================] - 304s 279ms/step - loss: 0.1861 - accuracy: 0.9395 - val_loss: 0.6862 - val_accuracy: 0.8245
Epoch 6/30
1087/1087 [==============================] - 304s 279ms/step - loss: 0.1643 - accuracy: 0.9459 - val_loss: 0.6359 - val_accuracy: 0.8508
Epoch 7/30
1087/1087 [==============================] - 308s 284ms/step - loss: 0.1401 - accuracy: 0.9541 - val_loss: 0.7359 - val_accuracy: 0.8365
Epoch 8/30
1087/1087 [==============================] - 316s 290ms/step - loss: 0.1286 - accuracy: 0.9591 - val_loss: 0.7600 - val_accuracy: 0.8488
Epoch 9/30
1087/1087 [==============================] - 307s 282ms/step - loss: 0.1214 - accuracy: 0.9617 - val_loss: 0.8064 - val_accuracy: 0.8442
Epoch 10/30
1087/1087 [==============================] - 306s 282ms/step - loss: 0.1143 - accuracy: 0.9632 - val_loss: 0.7421 - val_accuracy: 0.8528
Epoch 11/30
1087/1087 [==============================] - 306s 281ms/step - loss: 0.1024 - accuracy: 0.9677 - val_loss: 0.7401 - val_accuracy: 0.8676
Epoch 12/30
1087/1087 [==============================] - 305s 281ms/step - loss: 0.0954 - accuracy: 0.9702 - val_loss: 0.6620 - val_accuracy: 0.8762
Epoch 13/30
1087/1087 [==============================] - 306s 281ms/step - loss: 0.1052 - accuracy: 0.9678 - val_loss: 0.7756 - val_accuracy: 0.8617
Epoch 14/30
1087/1087 [==============================] - 307s 282ms/step - loss: 0.0925 - accuracy: 0.9714 - val_loss: 0.6852 - val_accuracy: 0.8624
Epoch 15/30
1087/1087 [==============================] - 312s 287ms/step - loss: 0.0891 - accuracy: 0.9716 - val_loss: 0.9627 - val_accuracy: 0.8481
Epoch 16/30
1087/1087 [==============================] - 310s 285ms/step - loss: 0.0932 - accuracy: 0.9721 - val_loss: 0.7544 - val_accuracy: 0.8637
Epoch 17/30
1087/1087 [==============================] - 310s 285ms/step - loss: 0.0862 - accuracy: 0.9751 - val_loss: 0.6120 - val_accuracy: 0.8789
Epoch 18/30
1087/1087 [==============================] - 306s 281ms/step - loss: 0.0776 - accuracy: 0.9765 - val_loss: 0.8133 - val_accuracy: 0.8442
Epoch 19/30
1087/1087 [==============================] - 305s 280ms/step - loss: 0.0774 - accuracy: 0.9771 - val_loss: 0.6836 - val_accuracy: 0.8807
Epoch 20/30
1087/1087 [==============================] - 304s 280ms/step - loss: 0.0822 - accuracy: 0.9755 - val_loss: 0.8578 - val_accuracy: 0.8689
Epoch 21/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0810 - accuracy: 0.9757 - val_loss: 0.7974 - val_accuracy: 0.8785
Epoch 22/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0689 - accuracy: 0.9792 - val_loss: 0.8230 - val_accuracy: 0.8522
Epoch 23/30
1087/1087 [==============================] - 305s 280ms/step - loss: 0.0785 - accuracy: 0.9757 - val_loss: 0.6689 - val_accuracy: 0.8916
Epoch 24/30
1087/1087 [==============================] - 304s 280ms/step - loss: 0.0722 - accuracy: 0.9783 - val_loss: 0.9752 - val_accuracy: 0.8628
Epoch 25/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0693 - accuracy: 0.9793 - val_loss: 1.0134 - val_accuracy: 0.8583
Epoch 26/30
1087/1087 [==============================] - 304s 279ms/step - loss: 0.0753 - accuracy: 0.9786 - val_loss: 0.6858 - val_accuracy: 0.8943
Epoch 27/30
1087/1087 [==============================] - 304s 280ms/step - loss: 0.0661 - accuracy: 0.9811 - val_loss: 1.0159 - val_accuracy: 0.8762
Epoch 28/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0700 - accuracy: 0.9795 - val_loss: 1.0779 - val_accuracy: 0.8639
Epoch 29/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0731 - accuracy: 0.9785 - val_loss: 0.8449 - val_accuracy: 0.8796
Epoch 30/30
1087/1087 [==============================] - 303s 279ms/step - loss: 0.0627 - accuracy: 0.9823 - val_loss: 1.0047 - val_accuracy: 0.8710
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
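The training log above shows validation loss drifting upward in later epochs while training accuracy keeps climbing, a sign of overfitting; a hedged sketch of adding early stopping (not in the original notebook, the patience value is an assumption):
# sketch: stop training when validation loss stops improving and keep the best weights
early_stop = keras.callbacks.EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
# pass callbacks=[early_stop] to the LeNet_Model.fit(...) call above to enable it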
on enhanced training dataset | EPOCHS = 30
BATCH_SIZE = 1
STEPS_PER_EPOCH = int(len(X_enhancedtrain_norm)/BATCH_SIZE)
history_enhanced_HLS = LeNet_Model.fit(
datagen_enhanced.flow(X_enhancedtrain_norm, y_enhanced_onehot, batch_size=BATCH_SIZE,shuffle=True),
shuffle=True, #validation_data=(X_valid_norm, y_valid_onehot),
steps_per_epoch=STEPS_PER_EPOCH,
epochs=EPOCHS)
LeNet_Model.save("LeNet_enhanced_trainingdataset_HLS.h5") | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Evaluate the ModelWe will use the test dataset to evaluate classification accuracy. | #Normalize test dataset
X_test_norm = input_normalization(X_test)
#One-hot matrix
y_test_onehot = keras.utils.to_categorical(y_test, n_classes)
#Load saved model
reconstructed_LeNet_Model = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5")
#Evaluate and display the prediction
result = reconstructed_LeNet_Model.evaluate(X_test_norm,y_test_onehot)
dict(zip(reconstructed_LeNet_Model.metrics_names, result))
pickle.dump(history_enhanced_HLS.history, open( "history_LeNet_enhanced_trainingdataset_enhanced_HLS.p", "wb" ))
pickle.dump(history_standard_HLS.history, open( "history_LeNet_enhanced_trainingdataset_standard_HLS.p", "wb" ))
with open("history_LeNet_enhanced_trainingdataset_standard_HLS.p", mode='rb') as f:
history_ = pickle.load(f)
import matplotlib.pyplot as plt
# Plot training error.
print('\nPlot of training error over 30 epochs:')
fig = plt.figure()
plt.title('Training error')
plt.ylabel('Cost')
plt.xlabel('epoch')
plt.plot(history_['loss'])
plt.plot(history_['val_loss'])
# plt.plot(history.history['loss'])
# plt.plot(history.history['val_loss'])
plt.legend(['train loss', 'val loss'], loc='upper right')
plt.grid()
plt.show()
fig.savefig('figures/Training_loss_LeNet_enhanced_trainingdataset_standard_HLS.png', dpi=500)
# Plot training error.
print('\nPlot of training accuracy over 30 epochs:')
fig = plt.figure()
plt.title('Training accuracy')
plt.ylabel('Accuracy')
plt.ylim([0.4, 1])
plt.xlabel('epoch')
plt.plot(history_['accuracy'])
plt.plot(history_['val_accuracy'])
plt.legend(['training_accuracy', 'validation_accuracy'], loc='lower right')
plt.grid()
plt.show()
fig.savefig('figures/Training_accuracy_LeNet_enhanced_trainingdataset_HLS.png', dpi=500) |
Plot of training accuracy over 30 epochs:
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
Prediction of test dataset with trained modelWe will use the test dataset to test trained model's prediction of instances that it has never seen during training. | print("Test Set : {} samples".format(len(X_test)))
print('n_classes : {}'.format(n_classes))
X_test.shape
#Normalize test dataset
X_test_norm = input_normalization(X_test)
#One-hot matrix
y_test_onehot = keras.utils.to_categorical(y_test, n_classes)
#Load saved model
reconstructed = keras.models.load_model("LeNet_enhanced_trainingdataset_HLS.h5")
#Evaluate and display the prediction
prediction_performance = reconstructed.evaluate(X_test_norm,y_test_onehot)
dict(zip(reconstructed.metrics_names, prediction_performance))
import matplotlib.pyplot as plot
%matplotlib inline
rows, cols = 4, 12
fig, axes = plot.subplots(rows, cols)
for idx, ax in enumerate(axes.ravel()):
if idx < n_classes_standard :
X_test_of_class = X_test[y_test == idx]
#X_train_0 = X_train_of_class[numpy.random.randint(len(X_train_of_class))]
X_test_0 = X_test_of_class[0]
ax.imshow(X_test_0)
ax.axis('off')
ax.set_title('{:02d}'.format(idx))
plot.setp(ax.get_xticklabels(), visible=False)
plot.setp(ax.get_yticklabels(), visible=False)
else:
ax.axis('off')
#
plot.draw()
fig.savefig('figures/' + 'test_representative' + '.jpg', dpi=700)
#### Prediction for all instances inside the test dataset
y_pred_proba = reconstructed.predict(X_test_norm)
y_pred_class = y_pred_proba.argmax(axis=-1)
### Showing prediction results for 10 first instances
for i, pred in enumerate(y_pred_class):
if i <= 10:
print('Image {} - Target = {}, Predicted = {}'.format(i, y_test[i], pred))
else:
break | Image 0 - Target = 16, Predicted = 6
Image 1 - Target = 1, Predicted = 6
Image 2 - Target = 38, Predicted = 6
Image 3 - Target = 33, Predicted = 6
Image 4 - Target = 11, Predicted = 6
Image 5 - Target = 38, Predicted = 6
Image 6 - Target = 18, Predicted = 6
Image 7 - Target = 12, Predicted = 6
Image 8 - Target = 25, Predicted = 6
Image 9 - Target = 35, Predicted = 6
Image 10 - Target = 12, Predicted = 6
| MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
We will display a confusion matrix on test dataset to figure out our error-rate. `X_test_norm` : test dataset `y_test` : test dataset ground truth labels `y_pred_class` : prediction labels on test dataset | confusion_matrix = numpy.zeros([n_classes, n_classes]) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
confusion_matrix`column` : test dataset ground truth labels `row` : prediction labels on test dataset `diagonal` : incremented when prediction matches ground truth label | for ij in range(len(X_test_norm)):
if y_test[ij] == y_pred_class[ij]:
confusion_matrix[y_test[ij],y_test[ij]] += 1
else:
confusion_matrix[y_pred_class[ij],y_test[ij]] -= 1
column_label = [' L % d' % x for x in range(n_classes)]
row_label = [' P % d' % x for x in range(n_classes)]
# Plot class representatives
import matplotlib.pyplot as plot
%matplotlib inline
rows, cols = 1, 43
fig, axes = plot.subplots(rows, cols)
for idx, ax in enumerate(axes.ravel()):
if idx < n_classes :
X_test_of_class = X_test[y_test == idx]
X_test_0 = X_test_of_class[0]
ax.imshow(X_test_0)
plot.setp(ax.get_xticklabels(), visible=False)
plot.setp(ax.get_yticklabels(), visible=False)
# plot.tick_params(axis='both', which='both', bottom='off', top='off',
# labelbottom='off', right='off', left='off', labelleft='off')
ax.axis('off')
plot.draw()
fig.savefig('figures/' + 'label_groundtruth' + '.jpg', dpi=3500)
numpy.savetxt("confusion_matrix_LeNet_enhanced_trainingdataset_HLS.csv", confusion_matrix, delimiter=";") | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
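As a cross-check (a sketch, not part of the original notebook): scikit-learn's standard count-based confusion matrix can be compared with the custom +1/-1 bookkeeping above; note that sklearn uses rows = true labels and columns = predictions, the transpose of the convention used here.
# optional cross-check with scikit-learn (assumes scikit-learn is installed)
from sklearn.metrics import confusion_matrix as sk_confusion_matrix
cm_counts = sk_confusion_matrix(y_test, y_pred_class, labels=numpy.arange(n_classes))
numpy.savetxt("confusion_matrix_sklearn.csv", cm_counts, delimiter=";")  # hypothetical file name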
Thanks to the confusion matrix, we can identify where to improve : -[x] training dataset -[x] real-time data augmentation -[x] preprocessing *Extract of confusion matrix of classification on test dataset ↓* Prediction of new instances with trained modelWe will use new images (French traffic signs) to test the trained model's predictions on instances it has never seen during training. I didn't use a 'softmax' activation in the last layer of the LeNet architecture, so the output predictions are logits; to get confidence levels we can apply the softmax function to the logits (a short sketch of this follows the image-loading cell below). | # load french traffic signs
import os
import cv2
import matplotlib.pyplot as plot
import numpy
dir_frenchsign = 'french_traffic-signs-data'
images_frenchsign = [os.path.join(dir_frenchsign, f) for f in os.listdir(dir_frenchsign)]
images_frenchsign = numpy.array([cv2.cvtColor(cv2.imread(f), cv2.COLOR_BGR2RGB) for f in images_frenchsign])
# plot new test images
fig, axes = plot.subplots(3, int(len(images_frenchsign)/3))
plot.title('French traffic signs')
for i, ax in enumerate(axes.ravel()):
ax.imshow(images_frenchsign[i])
ax.set_title('{}'.format(i))
plot.setp(ax.get_xticklabels(), visible=False)
plot.setp(ax.get_yticklabels(), visible=False)
ax.set_xticks([]), ax.set_yticks([])
ax.axis('off')
plot.draw()
fig.savefig('figures/' + 'french_sign' + '.jpg', dpi=700) | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
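A minimal sketch (not in the original notebook) of turning logits into confidence scores with softmax, shown on a few already-normalized test images since the French images are not resized yet at this point:
# sketch: convert logits to per-class confidence with softmax (reconstructed, X_test_norm and tf are defined earlier)
logits_sample = reconstructed.predict(X_test_norm[:5])
probs_sample = tf.nn.softmax(logits_sample).numpy()
print('predicted classes :', probs_sample.argmax(axis=-1))
print('confidence levels :', probs_sample.max(axis=-1))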
*Enhanced German traffic signs dataset ↓* | # manually label for these new images
y_frenchsign = [13, 31, 29, 24, 26, 27, 33, 17, 15, 34, 12, 2, 2, 4, 2]
n_classes = n_classes_enhanced
# when a sign is not present in our training dataset, we try to find a sufficiently 'similar' sign to label it.
# image 2  : class 29 differs slightly from the German sign
# image 3  : class 24, the 'double-sens' (two-way traffic) sign does not exist in the German dataset
# image 5  : class 27 differs slightly
# image 6  : class 33 does not exist
# image 7  : class 17, the 'halte-péage' (toll) sign does not exist
# image 8  : class 15, the 3.5t limit sign does not exist
# image 9  : class 15, the no-left-turn sign does not exist
# image 12 : class 2, the end-of-50km/h-speed-limit sign does not exist
# image 14 : class 2, 90kmh speed-limit not existed | _____no_output_____ | MIT | traffic_sign_classifier_LeNet_enhanced_trainingdataset_HLS.ipynb | nguyenrobot/Traffic-Sign-Recognition-with-Keras-Tensorflow |
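The French images above are loaded at their native resolution; before feeding them to the model they need the same 32x32 resize and normalization as the training data. A minimal sketch of that step and of the resulting predictions (not from the original notebook):
# sketch: resize the French signs to 32x32, normalize, and predict classes with softmax confidence
images_frenchsign_32 = numpy.array([cv2.resize(img, (32, 32), interpolation=cv2.INTER_AREA) for img in images_frenchsign])
logits_french = reconstructed.predict(input_normalization(images_frenchsign_32))
probs_french = tf.nn.softmax(logits_french).numpy()
print('predicted classes :', probs_french.argmax(axis=-1))
print('ground truth      :', y_frenchsign)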