path (string, length 7–265) | concatenated_notebook (string, length 46–17M)
---|---|
ds/kb/courses/deep-learning-v2-pytorch/intro-neural-networks/student-admissions/StudentAdmissions.ipynb | ###Markdown
Predicting Student Admissions with Neural NetworksIn this notebook, we predict student admissions to graduate school at UCLA based on three pieces of data:- GRE Scores (Test)- GPA Scores (Grades)- Class rank (1-4)The dataset originally came from here: http://www.ats.ucla.edu/ Loading the dataTo load the data and format it nicely, we will use two very useful packages called Pandas and Numpy. You can read the documentation here:- https://pandas.pydata.org/pandas-docs/stable/- https://docs.scipy.org/
###Code
# Import pandas and numpy
import pandas as pd
import numpy as np
# Reading the csv file into a pandas DataFrame
data = pd.read_csv('student_data.csv')
# Printing out the first 10 rows of our data
data.head(10)
###Output
_____no_output_____
###Markdown
Plotting the dataFirst let's make a plot of our data to see how it looks. In order to have a 2D plot, let's ignore the rank.
###Code
# Importing matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
# Function to help us plot
def plot_points(data):
X = np.array(data[["gre","gpa"]])
y = np.array(data["admit"])
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'red', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'cyan', edgecolor = 'k')
plt.xlabel('Test (GRE)')
plt.ylabel('Grades (GPA)')
# Plotting the points
plot_points(data)
plt.show()
###Output
_____no_output_____
###Markdown
Roughly, it looks like the students with high scores in the grades and test passed, while the ones with low scores didn't, but the data is not as nicely separable as we hoped it would be. Maybe it would help to take the rank into account? Let's make 4 plots, one for each rank.
###Code
# Separating the ranks
data_rank1 = data[data["rank"]==1]
data_rank2 = data[data["rank"]==2]
data_rank3 = data[data["rank"]==3]
data_rank4 = data[data["rank"]==4]
# Plotting the graphs
plot_points(data_rank1)
plt.title("Rank 1")
plt.show()
plot_points(data_rank2)
plt.title("Rank 2")
plt.show()
plot_points(data_rank3)
plt.title("Rank 3")
plt.show()
plot_points(data_rank4)
plt.title("Rank 4")
plt.show()
###Output
_____no_output_____
###Markdown
This looks more promising, as it seems that the lower the rank, the higher the acceptance rate. Let's use the rank as one of our inputs. In order to do this, we should one-hot encode it. TODO: One-hot encoding the rankUse the `get_dummies` function in pandas in order to one-hot encode the data.Hint: To drop a column, it's suggested that you use `one_hot_data`[.drop( )](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.drop.html).
###Code
# TODO: Make dummy variables for rank and concat existing columns
one_hot_data = pd.get_dummies(data, columns=["rank"])
# TODO: Drop the previous rank column
# Note: get_dummies(..., columns=["rank"]) already removes the original 'rank' column,
# so no separate drop is needed here.
# one_hot_data = one_hot_data.drop(columns=["rank"])
# Print the first 10 rows of our data
one_hot_data.head()
###Output
_____no_output_____
###Markdown
TODO: Scaling the dataThe next step is to scale the data. We notice that the range for grades is 1.0-4.0, whereas the range for test scores is roughly 200-800, which is much larger. This means our features are on very different scales, which makes it hard for a neural network to handle. Let's fit our two features into a range of 0-1, by dividing the grades by 4.0, and the test score by 800.
###Code
# Making a copy of our data
processed_data = one_hot_data[:]
# TODO: Scale the columns
processed_data["gpa"] = processed_data["gpa"] / 4
processed_data["gre"] = processed_data["gre"] / 800
# Printing the first 10 rows of our processed data
processed_data.head(10)
###Output
_____no_output_____
###Markdown
Splitting the data into Training and Testing In order to test our algorithm, we'll split the data into a Training and a Testing set. The size of the testing set will be 10% of the total data.
###Code
sample = np.random.choice(processed_data.index, size=int(len(processed_data)*0.9), replace=False)
train_data, test_data = processed_data.iloc[sample], processed_data.drop(sample)
print("Number of training samples is", len(train_data))
print("Number of testing samples is", len(test_data))
print(train_data[:10])
print(test_data[:10])
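# An equivalent split could also be done with scikit-learn; shown only as a commented
# sketch, while the notebook keeps the numpy-based split above:
# from sklearn.model_selection import train_test_split
# train_data, test_data = train_test_split(processed_data, test_size=0.1)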
###Output
Number of training samples is 360
Number of testing samples is 40
admit gre gpa rank_1 rank_2 rank_3 rank_4
285 0 0.750 0.8275 0 0 0 1
186 0 0.700 0.9025 0 0 1 0
180 0 0.775 0.9450 0 0 1 0
191 0 1.000 0.8850 0 0 1 0
167 0 0.900 0.9425 0 0 1 0
387 0 0.725 0.8400 0 1 0 0
351 0 0.775 0.8575 0 0 1 0
121 1 0.600 0.6675 0 1 0 0
326 0 0.850 0.8275 0 1 0 0
283 0 0.650 0.7750 0 0 0 1
admit gre gpa rank_1 rank_2 rank_3 rank_4
12 1 0.950 1.0000 1 0 0 0
47 0 0.625 0.7425 0 0 0 1
56 0 0.700 0.7975 0 0 1 0
68 0 0.725 0.9225 1 0 0 0
70 0 0.800 1.0000 0 0 1 0
90 0 0.875 0.9575 0 1 0 0
124 0 0.900 0.9700 0 0 1 0
137 0 0.875 1.0000 0 0 1 0
148 1 0.600 0.7275 1 0 0 0
161 0 0.800 0.8750 0 1 0 0
###Markdown
Splitting the data into features and targets (labels)Now, as a final step before the training, we'll split the data into features (X) and targets (y).
###Code
features = train_data.drop('admit', axis=1)
targets = train_data['admit']
features_test = test_data.drop('admit', axis=1)
targets_test = test_data['admit']
print(features[:10])
print(targets[:10])
###Output
gre gpa rank_1 rank_2 rank_3 rank_4
285 0.750 0.8275 0 0 0 1
186 0.700 0.9025 0 0 1 0
180 0.775 0.9450 0 0 1 0
191 1.000 0.8850 0 0 1 0
167 0.900 0.9425 0 0 1 0
387 0.725 0.8400 0 1 0 0
351 0.775 0.8575 0 0 1 0
121 0.600 0.6675 0 1 0 0
326 0.850 0.8275 0 1 0 0
283 0.650 0.7750 0 0 0 1
285 0
186 0
180 0
191 0
167 0
387 0
351 0
121 1
326 0
283 0
Name: admit, dtype: int64
###Markdown
Training the 2-layer Neural NetworkThe following function trains the 2-layer neural network. First, we'll write some helper functions.
###Code
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
def sigmoid_prime(x):
return sigmoid(x) * (1-sigmoid(x))
def error_formula(y, output):
return - y*np.log(output) - (1 - y) * np.log(1-output)
###Output
_____no_output_____
###Markdown
TODO: Backpropagate the errorNow it's your turn to shine. Write the error term. Remember that this is given by the equation $$ (y-\hat{y}) \sigma'(x) $$
###Code
# TODO: Write the error term formula
def error_term_formula(x, y, output):
return (y - output) * sigmoid_prime(x)
## Alternative solution ##
# you could also *only* use y and the output
# and calculate sigmoid_prime directly from the activated output!
# below is an equally valid solution (it doesn't utilize x)
def error_term_formula(x, y, output):
return (y-output) * output * (1 - output)
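# Note on the two definitions above: the second one (which overrides the first) uses the
# sigmoid identity sigmoid'(h) = sigmoid(h) * (1 - sigmoid(h)) = output * (1 - output),
# where h = np.dot(x, weights) is the value the sigmoid is applied to in train_nn below,
# so the derivative term in the error-term equation can be computed directly from the
# activated output without recomputing the sigmoid.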
# Neural Network hyperparameters
epochs = 1000
learnrate = 0.5
# Training function
def train_nn(features, targets, epochs, learnrate):
    # Use the same seed to make debugging easier
np.random.seed(92)
n_records, n_features = features.shape
last_loss = None
# Initialize weights
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features.values, targets):
# Loop through all records, x is the input, y is the target
# Activation of the output unit
# Notice we multiply the inputs and the weights here
# rather than storing h as a separate variable
output = sigmoid(np.dot(x, weights))
            # The error, computed with the cross-entropy error formula
error = error_formula(y, output)
# The error term
error_term = error_term_formula(x, y, output)
# The gradient descent step, the error times the gradient times the inputs
del_w += error_term * x
# Update the weights here. The learning rate times the
# change in weights, divided by the number of records to average
weights += learnrate * del_w / n_records
# Printing out the mean square error on the training set
if e % (epochs / 10) == 0:
out = sigmoid(np.dot(features, weights))
loss = np.mean((out - targets) ** 2)
print("Epoch:", e)
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
print("=========")
print("Finished training!")
return weights
weights = train_nn(features, targets, epochs, learnrate)
###Output
Epoch: 0
Train loss: 0.23661253786889236
=========
Epoch: 100
Train loss: 0.21368242400220366
=========
Epoch: 200
Train loss: 0.20873343981515768
=========
Epoch: 300
Train loss: 0.20722500667278487
=========
Epoch: 400
Train loss: 0.206651045371236
=========
Epoch: 500
Train loss: 0.20637355389328985
=========
Epoch: 600
Train loss: 0.20620507892978857
=========
Epoch: 700
Train loss: 0.2060815927241979
=========
Epoch: 800
Train loss: 0.2059783951151709
=========
Epoch: 900
Train loss: 0.20588510773012408
=========
Finished training!
###Markdown
Calculating the Accuracy on the Test Data
###Code
# Calculate accuracy on test data
test_out = sigmoid(np.dot(features_test, weights))
predictions = test_out > 0.5
accuracy = np.mean(predictions == targets_test)
print("Prediction accuracy: {:.3f}".format(accuracy))
###Output
Prediction accuracy: 0.775
|
data_visualization_collection/bike_trend_per_day.ipynb | ###Markdown
Load trip data
###Code
%%time
# Imports used in this and later cells of this notebook
import os
import datetime
import numpy as np
import pandas as pd
from IPython.display import display
def _convert_to_dateobject(x):
return datetime.datetime.strptime(x, "%Y-%m-%d %H:%M:%S")
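# Note: a vectorized alternative (a sketch, assuming the timestamp format is consistent)
# would be pd.to_datetime(trip_df['start_time'], format="%Y-%m-%d %H:%M:%S"),
# which avoids the row-wise apply used below.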
if not os.path.exists('../data/raw_trip_datetime_2018_Q3.pk'):
trip_df = pd.read_csv('../data/Divvy_Trips_2018_Q3.csv')
trip_df['start_time_dtoj'] = trip_df.apply(lambda row: _convert_to_dateobject(row.start_time), axis=1)
trip_df['end_time_dtoj'] = trip_df.apply(lambda row: _convert_to_dateobject(row.end_time), axis=1)
trip_df.to_pickle('../data/raw_trip_datetime_2018_Q3.pk')
else:
trip_df = pd.read_pickle('../data/raw_trip_datetime_2018_Q3.pk')
trip_df.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 1513570 entries, 0 to 1513569
Data columns (total 14 columns):
trip_id 1513570 non-null int64
start_time 1513570 non-null object
end_time 1513570 non-null object
bikeid 1513570 non-null int64
tripduration 1513570 non-null object
from_station_id 1513570 non-null int64
from_station_name 1513570 non-null object
to_station_id 1513570 non-null int64
to_station_name 1513570 non-null object
usertype 1513570 non-null object
gender 1218574 non-null object
birthyear 1221990 non-null float64
start_time_dtoj 1513570 non-null datetime64[ns]
end_time_dtoj 1513570 non-null datetime64[ns]
dtypes: datetime64[ns](2), float64(1), int64(4), object(7)
memory usage: 161.7+ MB
###Markdown
Load station info
###Code
%%time
# Load from preprocessed data
sd = pd.read_csv('../data/Divvy_Stations_2017_Q3Q4.csv')
sd.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 585 entries, 0 to 584
Data columns (total 8 columns):
id 585 non-null int64
name 585 non-null object
city 585 non-null object
latitude 585 non-null float64
longitude 585 non-null float64
dpcapacity 585 non-null int64
online_date 585 non-null object
Unnamed: 7 0 non-null float64
dtypes: float64(3), int64(2), object(3)
memory usage: 36.6+ KB
###Markdown
Select station and date
###Code
# Select a date
DAY_RANDOM_FLAG = False
if DAY_RANDOM_FLAG:
dd = np.random.choice(range(1, 32), 1)[0]
mm = np.random.choice([7, 8, 9], 1)[0]
if dd == 31 and mm == 9:
dd = np.random.choice(range(1, 31), 1)[0]
else:
dd = 2
mm = 7
print('Month: {}, day: {}'.format(mm, dd))
# Get the heavy demand station list for this day
SHOW_hot_station_list = True
def _get_hd_stn_lst(df):
top_station_df = df[
(df.start_time_dtoj.dt.day == dd) &
(df.start_time_dtoj.dt.month == mm)
].groupby(['from_station_id'])[['trip_id']]\
.count().sort_values(by='trip_id', ascending=False)\
.reset_index().head(100)
if SHOW_hot_station_list:
display(top_station_df)
return top_station_df
STN_RANDOM_FLAG = True
if STN_RANDOM_FLAG:
top_station_df = _get_hd_stn_lst(trip_df)
st_id = np.random.choice(top_station_df.from_station_id.unique(), 1)[0]
else:
st_id = 100
print('Station id: {}'.format(st_id))
###Output
_____no_output_____
###Markdown
Trip collection for a single day and a single location
###Code
## Helper functions
# Get net change for each trip
def _get_net(row):
if row.incoming:
return 1
elif row.outgoing:
return -1
else:
return 0
# Get exact time for bike rental/return for this station and then sort
def _get_time(row):
if row.incoming:
return row.end_time_dtoj
elif row.outgoing:
return row.start_time_dtoj
else:
return
# Filter trip day that meet this condition
def get_trip_trips(trip_df, dd, mm, st_id):
daily_trip_details = trip_df[
(trip_df.start_time_dtoj.dt.day == dd) &
(trip_df.start_time_dtoj.dt.month == mm) &
(
(trip_df.from_station_id == st_id) |
(trip_df.to_station_id == st_id)
)
][['trip_id', 'tripduration', 'from_station_id', 'to_station_id',
'usertype', 'gender', 'birthyear', 'start_time_dtoj', 'end_time_dtoj']]
# Check if incoming or outgoing
daily_trip_details['outgoing'] = daily_trip_details.from_station_id == st_id
daily_trip_details['incoming'] = daily_trip_details.to_station_id == st_id
daily_trip_details['net'] = daily_trip_details.apply(lambda x: _get_net(x), axis=1)
daily_trip_details['time'] = daily_trip_details.apply(lambda x: _get_time(x), axis=1)
daily_trip_details.sort_values(by='time', inplace=True)
daily_trip_details['in_cum'] = -daily_trip_details['incoming'].cumsum()
daily_trip_details['out_cum'] = daily_trip_details['outgoing'].cumsum()
daily_trip_details['net_cum'] = -daily_trip_details['net'].cumsum()
station_info = sd[sd.id == st_id]
return daily_trip_details, station_info
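# Example usage (mirrors the call made further below, for day 2 of month 7 and station 192):
# daily_trip_details, station_info = get_trip_trips(trip_df, 2, 7, 192)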
###Output
_____no_output_____
###Markdown
Plot as function of time[Great example of make multi-type subplots](https://plot.ly/~empet/15130/mixed-2d-and-3d-subplots-forum//)
###Code
from plotly.offline import download_plotlyjs, init_notebook_mode, plot, iplot
from ipywidgets import widgets
import plotly.graph_objs as go
from plotly import tools
init_notebook_mode(connected=True)
import plotly
plotly.__version__
## Global setup for plotting
# API key
mapbox_access_token = 'pk.eyJ1IjoibWVydnluMTUyIiwiYSI6ImNqeHpkNWZmdjAxczUzY29hbHVoandyMnUifQ.D-botzm1Hr6Gjs8jqwD5VA'
# style dict
style_dict = {}
style_dict['color'] = {'in':'red', 'out':'orange', 'net':'blue'}
style_dict['name'] = {'in':'Return', 'out':'Rental', 'net':'Net Demand'}
## Function to get trip trend data
def _get_trip_trend_date(df, conf, vis=False):
return go.Scatter(
x=df.time,
y=df[conf+'_cum'],
name = style_dict['name'][conf],
mode = 'lines+markers',
line = dict(
color = style_dict['color'][conf],
),
visible=vis
)
## Function to get map data
def _get_map_data(df, vis=False):
return go.Scattermapbox(
lat=[float(df.latitude)],
lon=[float(df.longitude)],
text=["<b>Station id</b>: {}\
<br> <b>Latitude</b>: {:.4f} \
<br> <b>Longitude</b>: {:.4f} \
<br> <b>Station capacity</b>: {}\
<br> <b>Station name</b>: <br> {}\
".format(
int(df.id),
float(df.latitude),
float(df.longitude),
int(df.dpcapacity),
str(df.name.to_string()),
)],
mode='markers',
hoverinfo = 'text',
marker=go.scattermapbox.Marker(
size=10,
color='yellow',
),
subplot='mapbox',
visible=vis,
name='Station '+str(int(df.id)),
showlegend=False,
)
## Style of mixed plot
layout = {
'title': {
'text': 'Trend of bike rental and net demand',
'font': dict(
family='Droid Serif, serif',
size=30,
color='Black'
),
},
'yaxis': {
'zeroline': False,
# 'showgrid': True,
'title': "Number of bikes",
'titlefont': dict(
family='Arial, sans-serif',
size=25,
color='grey'
),
# 'range': [-100, 420],
'domain': [0, 0.95],
'tickangle': -45,
'tickfont': dict(
family='Old Standard TT, serif',
size=14,
color='black'
),
},
'xaxis': {
'zeroline': True,
# 'showgrid': True,
'domain': [0., 0.99],
'tickangle': 0,
'tickfont': dict(
family='Old Standard TT, serif',
size=14,
color='black'
),
},
'mapbox': go.layout.Mapbox(
accesstoken=mapbox_access_token,
bearing=0,
domain={'x': [0.05, 0.4], 'y': [0.55, 1]},
center=go.layout.mapbox.Center(
lat=41.89,
lon=-87.625
),
pitch=60,
zoom=11.5,
style='dark',
# style='mapbox://styles/mervyn152/cjy2i8m1y1s5s1cnxcto3gppa',
# style = 'mapbox://styles/mervyn152/cjy2i8m1y1s5s1cnxcto3gppa'
),
# 'paper_bgcolor': 'black',
'showlegend': True,
'autosize': True,
'legend': dict(
orientation="v",
x=0.5,
y=1,
font=dict(
size=16,
),
),
'margin': go.layout.Margin(l=60, r=10, b=10, t=50, pad=6),
'shapes': [
go.layout.Shape(
type="rect",
xref="paper",
yref="y",
x0="0",
y0=-35,
x1="1",
y1=35,
fillcolor="lightgrey",
opacity=0.5,
layer="below",
line_width=0,
),
go.layout.Shape(
type="line",
xref="paper",
yref="y",
x0=0,
y0=0,
x1=1,
y1=0,
line=dict(
color="black",
width=0.5,
dash='dot',
),
),
],
}
# Set date and station list
pm = 7     # month to plot
p_day = 2  # day of month to plot (renamed from 'pd' to avoid shadowing the pandas alias)
station_list = [192, 100, 35, 91, 56]
station_list = [192, 177, 100, 143]  # this second assignment overrides the list above
## Get data
data = []
vis_flag = True
for st_id in station_list:
    daily_trip_details, station_info = get_trip_trips(trip_df, p_day, pm, st_id)
data.append(_get_map_data(station_info, vis=vis_flag))
data.append(_get_trip_trend_date(daily_trip_details, 'out', vis=vis_flag))
data.append(_get_trip_trend_date(daily_trip_details, 'net', vis=vis_flag))
vis_flag = False
# Create button list
button_list = []
n_st = len(station_list)
blank = [False] * n_st *3
for i in range(n_st):
vis_lst = blank.copy()
vis_lst[i*3:i*3+3] = [True] * 3
label_ = 'Station '+str(station_list[i])
tmp_d = dict(
args = [{'visible': vis_lst}],
label = label_,
method ='update'
)
button_list.append(tmp_d)
updatemenus=list([
dict(
buttons=button_list,
direction = 'up',
x = 0.82,
xanchor = 'left',
y = -0.18,
yanchor = 'bottom',
bgcolor = 'lightgrey',
bordercolor = 'black',
font = dict(size=11, color='black'),
showactive=False,
),
])
layout['updatemenus'] = updatemenus
figure = {}
figure['data'] = data
figure['layout'] = layout
SHOW = True
if SHOW:
iplot(figure, config={'displayModeBar': False})
else:
plot(figure, config={'displayModeBar': False}, filename="bike_trip_trend.html")
###Output
_____no_output_____ |
StallurDox2.ipynb | ###Markdown
EE 617 Project: Sensor Fault Diagnosis Part 2 ATC sensor fault diagnosisWe build a state space model for an airplane circling an airport with its distance, azimuth and elevation angle measurements. Since computational resources are not constrained at an airport, we shall use a Support Vector Machine to classify a sensor as faulty. We simulate both healthy and faulty sensor measurements for a given time. Then on this measurement signal we extract some features (like mean, std deviation etc.) and use them to train and validate an SVM. State Space modelLet $x_1, x_3, x_5$ denote the position of the aircraft in cartesian coordinates and $x_2, x_4, x_6$ denote the velocities in these respective directions. $x_7$ denotes the turn rate.Then $\frac{dX}{dt}=\begin{bmatrix} x_2 \\ -x_4x_7 \\ x_4 \\ x_2x_7 \\ x_6 \\ 0 \\ 0\end{bmatrix}+w$, where $w$ is zero-mean process noise with covariance $Q_d=diag([0, \sigma_1^2, 0, \sigma_1^2, 0, \sigma_1^2, \sigma_2^2])$.The sensor measurement equations are as follows:$y_1=\sqrt{x_1^2+x_3^2+x_5^2}$ (range)$y_2=\tan^{-1}(x_3/x_1)$ (azimuth angle)$y_3=\tan^{-1}\left( \frac{x_5}{\sqrt{x_1^2+x_3^2}} \right)$ (elevation angle)Zero-mean measurement noise with covariance $R=diag([\sigma_r^2, \sigma_{\theta}^2, \sigma_{\phi}^2])$ is added to these measurements.The initial condition used for simulation is $x(0)=[1000, 0, 2650, 150, 200, 0, 3]$.(Give reference)
###Code
#Importing the librairies
import numpy as np
import matplotlib.pyplot as plt
from math import *
import random
import scipy.linalg as sp
from sklearn import datasets
from sklearn import svm
from sklearn.model_selection import train_test_split as tts
from sklearn.metrics import accuracy_score
class Radar(object):
#State Space model of Radar
def __init__(self):
#States
self.x1=1000
self.x2=0
self.x3=2650
self.x4=150
self.x5=200
self.x6=0
self.x7=3
def y1(self):
#Measure range/distance
return sqrt(self.x1**2+self.x3**2+self.x5**2)
def y2(self):
#Measure azimuth
return atan(self.x3/self.x1)
def y3(self):
#measure elevation
return atan(self.x5/(sqrt(self.x1**2+self.x3**2)))
def dxdt(self):
#The model
dx1dt=self.x2
dx2dt=-self.x4*self.x7
dx3dt=self.x4
dx4dt=self.x2*self.x7
dx5dt=self.x6
dx6dt=0
dx7dt=0
a=np.array([dx1dt, dx2dt, dx3dt, dx4dt, dx5dt, dx6dt, dx7dt])
return a
def setState(self, X):
#Set the current state to X
self.x1=X[0]
self.x2=X[1]
self.x3=X[2]
self.x4=X[3]
self.x5=X[4]
self.x6=X[5]
self.x7=X[6]
def getState(self):
#Return the states
return np.array([self.x1, self.x2, self.x3, self.x4, self.x5, self.x6, self.x7])
def update(self, delt, noise=False):
#Use RK4 method to integrate
#Initialise
h=delt
X0=self.getState()
#K1 terms
K1=h*self.dxdt()
X1=X0+K1/2
self.setState(X1)
#K2 terms
K2=h*self.dxdt()
X2=X0+K2/2
self.setState(X2)
#K3 terms
K3=h*self.dxdt()
X3=X0+K3
self.setState(X3)
#K4 terms
K4=h*self.dxdt()
X=X0+K1/6+K2/3+K3/3+K4/6
#If noise bool is true we want noise to be added so add it
if noise==True:
s1=0.2
s2=7e-3
Qd=np.diag([0, s1**2, 0, s1**2, 0, s1**2, s2**2])
X+=np.random.multivariate_normal([0, 0, 0, 0, 0, 0, 0], Qd)
self.setState(X)
def meas(self, noise=False):
#Measurement
x=self.getState()
Y=np.array([self.y1(), self.y2(), self.y3()])
if noise:
            #If the noise bool is True then add noise
sr=50
st=0.1
sphi=0.1
R=np.diag([sr**2, st**2, sphi**2])
Y+=np.random.multivariate_normal([0, 0, 0], R)
return Y
###Output
_____no_output_____
###Markdown
SimulationWe simulate the aircraft for $N=500$ time instants with a time step of $0.1$ s and plot its $x$ and $y$ coordinates. The plane circles around the airport since the turn rate doesn't change.
###Code
a=Radar()
Data=[]
Ys=[]
T=0.1
N=500
for i in range(0, N):
a.update(0.1, True)
Data.append(a.getState())
Ys.append(a.meas(True))
Data=np.array(Data)
Ys=np.array(Ys)
plt.plot(Data[:, 0], Data[:, 2])
class Sensor(object):
#Sensor object its the same as from before in Part 1 of EE 617 Project
def __init__(self):
self.offset=0
self.drift=0
self.Fault=False
self.model=None
self.R=0
self.t=0
def setOffset(self, d):
self.offset=d
self.Fault=True
def setDrift(self, m):
self.drift=m
self.t=0
self.Fault=True
def setModel(self, g, R):
self.model=g
self.R=R
def meas(self, X):
n, m=self.R.shape
a=self.model(X)+self.offset+self.drift*self.t+np.random.multivariate_normal(np.zeros(n), self.R)
self.t+=1
return a
def erraticScale(self, m):
self.R=m*self.R
self.Fault=True
def clear(self):
        self.Fault=False  # reset the same Fault attribute used elsewhere in the class
self.drift=0*self.drift
self.t=0
self.offset=0*self.offset
def smoother(E, N):
#Smoothing function as as was used in EE 617 Part 1
L=len(E)
A=[]
s=sum(E[:N])
for i in range(N, L):
A.append(s/N)
s+=E[i]
s-=E[i-N]
return A
def Gx(X, Noisier=True):
#The measurement function
x1=X[0]
x2=X[1]
x3=X[2]
x4=X[3]
x5=X[4]
y1=sqrt(x1**2+x3**2+x5**2) #Range
y2=atan(x3/x1) #Azimuth angle
y3=atan(x5/(sqrt(x1**2+x3**2))) #Elevation anlge
Y=np.array([y1, y2, y3]) #Return the measurements as an array
return Y
###Output
_____no_output_____
###Markdown
Fault simulation-OffsetNow we simulate an offset fault in the range sensor at some time instant. We also run in parallel a noiseless model and its measurement simulation. This basically represents actual measurements and simulated measurements (though here both are simulations, the former being noisy and the latter being clean!)
###Code
Rsens=Sensor() #Sensor Object
#Configure the sensor's measurement model and noise level
sr=50
st=0.1
sphi=0.1
R=np.diag([sr**2, st**2, sphi**2]) #Measurement noise covariance matrix
Rsens.setModel(Gx, R) #Set the sensor to the particular noise level and measurement function of interest
a=Radar() #Radar object that represents actual aircraft and its measuremnts
a2=Radar() #Radar object thats supposed to be a simulation
Data=[] #Here we shall store states data
Ys=[] #Store measurements
E=[] #Store actual - simulated measurements
T=0.1 #Time step
N=500 #total number of time steps
t=250 #time when fault occurs
for i in range(0, N):
a.update(0.1, True) #update actual aircraft with noise
a2.update(0.1, False) #update the dummy aircraft without noise
Data.append(a.getState()) #get states data
y=Rsens.meas(a.getState()) #get actual measurement
y2=a2.meas(False) #get simulated measurement without noise
Ys.append(y) #Store the measurement
#Compute and store error
e=y2-y
E.append(e)
if i==t:
#If now is the time, induce fault
Rsens.setOffset([100, 0, 0])
E=np.array(E)
Data=np.array(Data)
Ys=np.array(Ys)
plt.plot(E[:, 0])
plt.xlabel('Time instant')
plt.ylabel('Error')
plt.show()
###Output
_____no_output_____
###Markdown
Feature Extraction and SVM for diagnosisOn this error signal, we extract 10 features ($\mu_x$ is the mean and $\sigma$ the standard deviation of the error signal)$Y_{RMS}=\sqrt{\frac{1}{N}\sum_{i=1}^{N} x_i^2}$$Y_{SRA}=\left( \frac{1}{N} \sum_{i=1}^{N} |x_i| \right)^2$$Y_{KV}=1/N \sum_{i=1}^{N} \left( \frac{x_i-\mu_x}{\sigma} \right)^4$$Y_{SV}=1/N \sum_{i=1}^{N} \left( \frac{x_i-\mu_x}{\sigma} \right)^3$$Y_{PPV}= max(x)-min(x)$$Y_{CF}=\frac{max(|x_i|)}{Y_{RMS}}$$Y_{IF}=\frac{max(|x_i|)}{\frac{1}{N}\sum_{i=1}^{N}|x_i|}$$Y_{MF}=\frac{max(|x_i|)}{Y_{SRA}}$$Y_{SF}=\frac{max(x)}{\sqrt{\frac{1}{N}\sum_{i=1}^{N}x_i^2}}$$Y_{KF}=\frac{1/N \sum_{i=1}^{N} \left( \frac{x_i-\mu_x}{\sigma} \right)^4}{\left( 1/N\sum_{i=1}^{N}x_i^2 \right)^2}$For a given simulation of 500 time instants, we extract these features on the error signal. The idea is to perform this simulation many times, inducing a fault in some cases and not in others. These 10 features will then be used to train an SVM that classifies a sensor error signal as faulty or okay. Using an SVM automates much of the process, i.e. we don't have to tweak and look for thresholds as was done in the induction motor case.
###Code
def ExtractFeatures(E):
#Based on the above formulae extract the features and return as numpy array
Yrms=0
absSum=0
Ykv=0
Ysv=0
Ypv=0
Ycf=0
Yif=0
Ymf=0
Ysf=0
Ykf=0
N=len(E)
M=sum(E)/N
d=0
for i in E:
Yrms+=i**2
d+=(i-M)**2
Yrms=sqrt(Yrms/N)
sigma=sqrt(d/N)
for i in E:
absSum+=(abs(i))
Ykv+=((i-M)/sigma)**4
Ysv+=((i-M)/sigma)**3
Ysra=(absSum/N)**2
Ykv=Ykv/N
Ysv=Ysv/N
Emax=max(E)
Emin=min(E)
Ypv=Emax-Emin
Emaxabs=max(abs(Emax), abs(Emin))
Ycf=Emaxabs/Yrms
Yif=Emaxabs/(absSum/N)
Ymf=Emaxabs/Ysra
Ysf=Emax/Yrms
Ykf=Ykv/(Yrms**4)
return np.array([Yrms, Ysra, Ykv, Ysv, Ypv, Ycf, Yif, Ymf, Ysf, Ykf])
def normalise(F):
#normalise the data, since we are going to be using SVM
s=0
for i in F:
s+=abs(i)
S=s
return F/S
def simulate(Offset=0, N=500):
#A simple function to perform the simulation with Offset Fault
#at some random time instant
    #If Offset=0 it automatically implies healthy simulations
a=Radar()
a2=Radar()
Data=[]
Ys=[]
E=[]
T=0.1
E=[]
t=int(random.uniform(N/10, N)) #The random time instant to generate fault
for i in range(0, N):
a.update(0.1, True)
a2.update(0.1, False)
Data.append(a.getState())
y=Rsens.meas(a.getState())
y2=a2.meas(False)
Ys.append(y)
E.append(y-y2)
if i==t:
Rsens.setOffset([Offset, 0, 0])
Data=np.array(Data)
E=np.array(E)
Y1f=ExtractFeatures(E[:, 0]) #Extract features from error signal of range sensor
Y2f=ExtractFeatures(E[:, 1]) #Extract features from error signal of azimuth sensor
Y3f=ExtractFeatures(E[:, 2]) #Extract features from error signal of elevation sensor
return Y1f, Y2f, Y3f
def genData(M=5, Offset=0):
#Now lets do the rigorous long simulations, :(
#We shall do M simulations with healthy sensor and M with faulty sensor
#These shall store the extracted normalised features on the error signals
Y1=[] #Range sensor
Y2=[] #Azimuth sensor
Y3=[] #elevation sensor
for i in range(0, M):
#healthy simulations
Y1f, Y2f, Y3f=simulate(0)
Y1.append(Y1f)
Y2.append(Y2f)
Y3.append(Y3f)
for i in range(0, M):
#Perform the faulty simulations
Y1f, Y2f, Y3f=simulate(Offset)
Y1.append(Y1f)
Y2.append(Y2f)
Y3.append(Y3f)
#Convert to numpy array it just makes life easier
Y1=np.array(Y1)
Y2=np.array(Y2)
Y3=np.array(Y3)
for i in range(0, 2*M):
#normalise all the data
Y1[i]=normalise(Y1[i])
Y2[i]=normalise(Y2[i])
Y3[i]=normalise(Y3[i])
return Y1, Y2, Y3
###Output
_____no_output_____
###Markdown
Example simulationNow let's simulate the ATC 100 times with no fault in the range sensor and then 100 times with the sensor having an offset fault of 60. We extract the features on each run. Then we split the data into a 60:40 train:test ratio in 1000 different combinations. For each combination we train an SVM and calculate its accuracy, and we print the average accuracy over these 1000 test-train combinations. This is how the data was generated for the report.
###Code
Y1, Y2, Y3=genData(100, 60) #100 healthy and 100 faulty simulations with Offset 60
X=Y1
y=np.ones(200) #Target Variable
for i in range(0, 100):
y[i]=0 #First 100 sims are healthy and rest are faulty
Merit=[] #Store accuracies
for i in range(0, 1000):
X_train, X_test, y_train, y_test=tts(X, y, test_size=0.4, random_state=i) #Test train splitting
#Train SVM
model=svm.SVC(kernel='rbf')
model.fit(X_train, y_train)
#Compute accuracy for this particular combination
y_pred=model.predict(X_test)
Merit.append(accuracy_score(y_test, y_pred))
#Print average accuracy
print(sum(Merit)/len(Merit))
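# The repeated 60:40 splits above amount to repeated hold-out validation; a shorter
# alternative, shown only as a commented sketch (assuming k-fold cross-validation is
# acceptable for the report's protocol), would be:
# from sklearn.model_selection import cross_val_score
# scores = cross_val_score(svm.SVC(kernel='rbf'), X, y, cv=5)
# print(scores.mean())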
###Output
0.9329374999999991
###Markdown
Drift Fault in Range sensorThis is very similar to the offset fault, except that it is a drift; everything else is almost entirely the same. Let's see an example simulation of the error signal.
###Code
Rsens=Sensor()
sr=50
st=0.1
sphi=0.1
R=np.diag([sr**2, st**2, sphi**2])
Rsens.setModel(Gx, R)
a=Radar()
a2=Radar()
Data=[]
Ys=[]
E=[]
T=0.1
N=500
t=250
for i in range(0, N):
a.update(0.1, True)
a2.update(0.1, False)
Data.append(a.getState())
y=Rsens.meas(a.getState())
y2=a2.meas(False)
Ys.append(y)
e=y2-y
E.append(e)
if i==t:
Rsens.setDrift(np.array([1, 0, 0]))
E=np.array(E)
Data=np.array(Data)
Ys=np.array(Ys)
plt.plot(E[:, 0])
###Output
_____no_output_____
###Markdown
Similar functions defined below
###Code
def simulateDrift(drift=0, N=500):
#automate the above process
a=Radar()
a2=Radar()
Data=[]
Ys=[]
E=[]
T=0.1
E=[]
t=int(random.uniform(N/10, N)) #At random time induce fault
for i in range(0, N):
a.update(0.1, True)
a2.update(0.1, False)
Data.append(a.getState())
y=Rsens.meas(a.getState())
y2=a2.meas(False)
Ys.append(y)
E.append(y-y2)
if i==t:
Rsens.setDrift(np.array([drift, 0, 0]))
Data=np.array(Data)
E=np.array(E)
#Extract features and return
Y1f=ExtractFeatures(E[:, 0])
Y2f=ExtractFeatures(E[:, 1])
Y3f=ExtractFeatures(E[:, 2])
return Y1f, Y2f, Y3f
def genDataDrift(M=5, Drift=0):
#M healthy, M faulty simulations same as offset except of course its drifting !!
Y1=[]
Y2=[]
Y3=[]
for i in range(0, M):
Y1f, Y2f, Y3f=simulateDrift(0)
Y1.append(Y1f)
Y2.append(Y2f)
Y3.append(Y3f)
for i in range(0, M):
Y1f, Y2f, Y3f=simulateDrift(Drift)
Y1.append(Y1f)
Y2.append(Y2f)
Y3.append(Y3f)
Y1=np.array(Y1)
Y2=np.array(Y2)
Y3=np.array(Y3)
for i in range(0, 2*M):
Y1[i]=normalise(Y1[i])
Y2[i]=normalise(Y2[i])
Y3[i]=normalise(Y3[i])
return Y1, Y2, Y3
###Output
_____no_output_____
###Markdown
Example simulationPerforming 100 healthy and 100 faulty simulations (drift = 0.2). For 1000 combinations of train-test splitting in a ratio of 60:40, train an SVM and calculate its accuracy, and finally report the average accuracy.
###Code
Y1, Y2, Y3=genDataDrift(100, 0.2) #100 healthy and 100 faulty simulations with drift 0.2
X=Y1
#Generate the target variable as before
y=np.ones(200)
for i in range(0, 100):
y[i]=0
Merit=[]
for i in range(0, 1000):
X_train, X_test, y_train, y_test=tts(X, y, test_size=0.4, random_state=i) #Split 60-40
#Train SVM
model=svm.SVC(kernel='rbf')
model.fit(X_train, y_train)
    #Calculate accuracy
y_pred=model.predict(X_test)
Merit.append(accuracy_score(y_test, y_pred))
#Print average accuracy
print(sum(Merit)/len(Merit))
###Output
0.8726000000000008
|
Data_analysis/SNP-indel-calling/ANGSD/BOOTSTRAP_CONTIGS/minInd9_overlapping/DADI/adj_error.ipynb | ###Markdown
Table of Contents 1 Preparation2 Model definition3 LRT3.1 get optimal parameter values3.2 get bootstrap replicates3.3 calculate adjustment for D Preparation
###Code
from ipyparallel import Client
cl = Client()
cl.ids
%%px --local
# run whole cell on all engines as well as in the local IPython session
import numpy as np
import sys
sys.path.insert(0, '/home/claudius/Downloads/dadi')
import dadi
from glob import glob
import dill
import pandas as pd
# turn on floating point division by default, old behaviour via '//'
from __future__ import division
from itertools import repeat
def flatten(array):
import numpy as np
res = []
for el in array:
if isinstance(el, (list, tuple, np.ndarray)):
res.extend(flatten(el))
continue
res.append(el)
return list(res)
%matplotlib inline
import pylab
pylab.rcParams['figure.figsize'] = [10, 8]
pylab.rcParams['font.size'] = 12
%%px --local
# load spectrum
sfs2d = dadi.Spectrum.from_file("EryPar.unfolded.sfs.dadi")
sfs2d = sfs2d.transpose()
sfs2d.pop_ids = ['ery', 'par']
sfs2d = sfs2d.fold()
ns = sfs2d.sample_sizes # both populations have the same sample size
# setting the smallest grid size slightly larger than the largest population sample size (36)
pts_l = [40, 50, 60]
dadi.Plotting.plot_single_2d_sfs(sfs2d, vmin=1, cmap='jet')
pylab.savefig("2DSFS_folded.png")
# get number of segregating sites from SFS
sfs2d.S()
###Output
_____no_output_____
###Markdown
Model definition
###Code
def split_asym_mig_2epoch(params, ns, pts):
"""
params = (nu1_1,nu2_1,T1,nu1_2,nu2_2,T2,m1,m2)
ns = (n1,n2)
Split into two populations of specified size, with potentially asymmetric migration.
The split coincides with a stepwise size change in the daughter populations. Then,
have a second stepwise size change at some point in time after the split. This is
enforced to happen at the same time for both populations. Migration is assumed to
be the same during both epochs.
nu1_1: pop size ratio of pop 1 after split (with respect to Na)
nu2_1: pop size ratio of pop 2 after split (with respect to Na)
T1: Time from split to second size change (in units of 2*Na generations)
nu1_2: pop size ratio of pop 1 after second size change (with respect to Na)
nu2_2: pop size ratio of pop 2 after second size change (with respect to Na)
T2: time in past of second size change (in units of 2*Na generations)
m1: Migration rate from ery into par (in units of 2*Na ind per generation)
m2: Migration rate from par into ery (in units of 2*Na ind per generation)
n1,n2: Sample sizes of resulting Spectrum
pts: Number of grid points to use in integration.
"""
nu1_1,nu2_1,T1,nu1_2,nu2_2,T2,m1,m2 = params
xx = dadi.Numerics.default_grid(pts)
phi = dadi.PhiManip.phi_1D(xx)
# split
phi = dadi.PhiManip.phi_1D_to_2D(xx, phi)
# divergence with potentially asymmetric migration for time T1
phi = dadi.Integration.two_pops(phi, xx, T1, nu1_1, nu2_1, m12=m2, m21=m1)
# divergence with potentially asymmetric migration and different pop size for time T2
phi = dadi.Integration.two_pops(phi, xx, T2, nu1_2, nu2_2, m12=m2, m21=m1)
fs = dadi.Spectrum.from_phi(phi, ns, (xx,xx))
return fs
cl[:].push(dict(split_asym_mig_2epoch=split_asym_mig_2epoch))
%%px --local
func_ex = dadi.Numerics.make_extrap_log_func(split_asym_mig_2epoch)
###Output
_____no_output_____
###Markdown
LRT get optimal parameter values
###Code
ar_split_asym_mig_2epoch = []
for filename in glob("OUT_2D_models/split_asym_mig_2epoch_[0-9]*dill"):
ar_split_asym_mig_2epoch.append(dill.load(open(filename)))
l = 2*8+1
returned = [flatten(out)[:l] for out in ar_split_asym_mig_2epoch]
df = pd.DataFrame(data=returned, \
columns=['ery_1_0','par_1_0','T1_0','ery_2_0','par_2_0','T2_0','m1_0','m2_0', 'Nery_1_opt','Npar_1_opt','T1_opt','Nery_2_opt','Npar_2_opt','T2_opt','m1_opt','m2_opt','-logL'])
df.sort_values(by='-logL', ascending=True).iloc[:10,8:17]
# optimal parameter values for complex model
popt_c = np.array(df.sort_values(by='-logL', ascending=True).iloc[0, 8:16]) # take the best (lowest -logL) parameter combination
popt_c
###Output
_____no_output_____
###Markdown
This two-epoch model can be reduced to a one-epoch model by either setting $Nery_2 = Nery_1$ and $Npar_2 = Npar_1$ or by setting $T_2 = 0$.
###Code
# optimal parameter values for simple model (1 epoch)
# note: Nery_2=Nery_1, Npar_2=Npar_1 and T2=0
popt_s = [1.24966921, 3.19164623, 1.42043464, 1.24966921, 3.19164623, 0.0, 0.08489757, 0.39827944]
###Output
_____no_output_____
###Markdown
get bootstrap replicates
###Code
# load bootstrapped 2D SFS
all_boot = [dadi.Spectrum.from_file("../SFS/bootstrap/2DSFS/{0:03d}.unfolded.2dsfs.dadi".format(i)).fold() for i in range(200)]
###Output
_____no_output_____
###Markdown
calculate adjustment for D
###Code
# calculate adjustment for D evaluating at the *simple* model parameterisation
# specifying only T2 as fixed
adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[5], multinom=True)
adj_s
# calculate adjustment for D evaluating at the *complex* model parameterisation
# specifying only T2 as fixed
adj_c = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_c, sfs2d, nested_indices=[5], multinom=True)
adj_c
###Output
_____no_output_____
###Markdown
From Coffman2016, suppl. mat.:> The two-epoch model can be marginalized down to the SNM model for an LRT by either setting η = 1 or T = 0. We found that the LRT adjustment performed well when _treating both parameters as nested_, so μ(θ) was evaluated with T = 0 and η = 1.
###Code
# calculate adjustment for D evaluating at the *simple* model parameterisation
# treating Nery_2, Npar_2 and T2 as nested
adj_s = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_s, sfs2d, nested_indices=[3,4,5], multinom=True)
adj_s
# calculate adjustment for D evaluating at the *complex* model parameterisation
# treating Nery_2, Npar_2 and T2 as nested
adj_c = dadi.Godambe.LRT_adjust(func_ex, pts_l, all_boot, popt_c, sfs2d, nested_indices=[3,4,5], multinom=True)
adj_c
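# Possible next step (not executed here): convert the adjustment into an adjusted
# likelihood-ratio statistic and a p-value. This is only a sketch; ll_c and ll_s are
# assumed to hold the log-likelihoods of the complex and simple fits, and the (0.5, 0.5)
# chi^2 mixture weights reflect the boundary parameter T2 = 0.
# D_adj = adj_c * 2 * (ll_c - ll_s)
# p_val = dadi.Godambe.sum_chi2_ppf(D_adj, weights=(0.5, 0.5))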
###Output
_____no_output_____ |
stats_planetoid.ipynb | ###Markdown
f1 = open('./data/'+"trans.citeseer.graph",'rb')
###Code
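# The markdown fragment above opens './data/trans.citeseer.graph'. A minimal loading
# sketch (assumption: as in the original planetoid repository, the .graph files are
# pickled dicts mapping each node index to a list of neighbour indices):
# import pickle as pkl
# with open('./data/trans.citeseer.graph', 'rb') as f1:
#     graph = pkl.load(f1, encoding='latin1')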
def comp_accu(tpy, ty):
import numpy as np
return (np.argmax(tpy, axis = 1) == np.argmax(ty, axis = 1)).sum() * 1.0 / tpy.shape[0]
from utils import simple_classify_f1
simple_classify_f1('cora')
simple_classify_f1('citeseer')
simple_classify_f1('pubmed')
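# Note: get_stats below also uses modified_classify_f1 and classify_analysis; these are
# assumed to be importable from the same local utils module as simple_classify_f1.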
def get_stats(dataset_name):
simple_classify_f1(dataset_name)
modified_classify_f1(dataset_name)
classify_analysis(dataset_name)
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
###Output
_____no_output_____
###Markdown
Check for transductive and Inductive files
###Code
#for
get_stats('cora')
get_stats('pubmed')
get_stats('citeseer')
###Output
Name:
Type: Graph
Number of nodes: 3327
Number of edges: 4676
Average degree: 2.8109
Simple classification results
Train ratio: Default with planetoid
micro: 0.647
macro: 0.6264761005126618
samples: 0.647
weighted: 0.6389512022765614
Accuracy: 0.647
Propagating neighbor labels for degree 1 nodes.
Name:
Type: Graph
Number of nodes: 3327
Number of edges: 4676
Average degree: 2.8109
Modified classification results
Train ratio: Default with planetoid
micro: 0.514
macro: 0.49915888464910246
samples: 0.514
weighted: 0.509911170279925
Accuracy: 0.514
+------------+----------+---------------+-----------------+--------+---------------------------+----------------+
| Serial No. | Node no. | True Label | Predicted Label | Degree | Neighbor label | Neighbor Match |
+------------+----------+---------------+-----------------+--------+---------------------------+----------------+
| 1 | 2317 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 2 | 2327 | (array([5]),) | (array([2]),) | 1 | (array([2]),) | False |
| 3 | 2331 | (array([1]),) | (array([5]),) | 1 | (array([4]),) | False |
| 4 | 2333 | (array([0]),) | (array([1]),) | 1 | (array([0]),) | True |
| 5 | 2359 | (array([2]),) | (array([3]),) | 1 | (array([5]),) | False |
| 6 | 2374 | (array([3]),) | (array([5]),) | 1 | (array([5]),) | False |
| 7 | 2381 | (array([2]),) | (array([3]),) | 1 | (array([3]),) | False |
| 8 | 2394 | (array([5]),) | (array([4]),) | 1 | (array([3]),) | False |
| 9 | 2397 | (array([2]),) | (array([3]),) | 1 | (array([2]),) | True |
| 10 | 2402 | (array([2]),) | (array([3]),) | 1 | (array([3]),) | False |
| 11 | 2407 | (array([3]),) | (array([1]),) | 1 | (array([4]),) | False |
| 12 | 2408 | (array([1]),) | (array([5]),) | 1 | (array([], dtype=int64),) | False |
| 13 | 2410 | (array([5]),) | (array([4]),) | 1 | (array([3]),) | False |
| 14 | 2414 | (array([5]),) | (array([2]),) | 1 | (array([5]),) | True |
| 15 | 2417 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 16 | 2421 | (array([4]),) | (array([0]),) | 1 | (array([5]),) | False |
| 17 | 2425 | (array([1]),) | (array([5]),) | 1 | (array([2]),) | False |
| 18 | 2434 | (array([2]),) | (array([3]),) | 1 | (array([2]),) | True |
| 19 | 2435 | (array([5]),) | (array([3]),) | 1 | (array([3]),) | False |
| 20 | 2439 | (array([1]),) | (array([4]),) | 1 | (array([1]),) | True |
| 21 | 2440 | (array([1]),) | (array([0]),) | 1 | (array([3]),) | False |
| 22 | 2444 | (array([2]),) | (array([5]),) | 1 | (array([3]),) | False |
| 23 | 2446 | (array([0]),) | (array([4]),) | 1 | (array([1]),) | False |
| 24 | 2455 | (array([1]),) | (array([2]),) | 1 | (array([2]),) | False |
| 25 | 2463 | (array([5]),) | (array([1]),) | 1 | (array([3]),) | False |
| 26 | 2469 | (array([4]),) | (array([5]),) | 1 | (array([3]),) | False |
| 27 | 2471 | (array([4]),) | (array([0]),) | 1 | (array([1]),) | False |
| 28 | 2476 | (array([1]),) | (array([3]),) | 1 | (array([5]),) | False |
| 29 | 2483 | (array([3]),) | (array([5]),) | 1 | (array([1]),) | False |
| 30 | 2486 | (array([5]),) | (array([4]),) | 1 | (array([2]),) | False |
| 31 | 2499 | (array([2]),) | (array([1]),) | 1 | (array([1]),) | False |
| 32 | 2504 | (array([0]),) | (array([5]),) | 1 | (array([0]),) | True |
| 33 | 2507 | (array([2]),) | (array([3]),) | 1 | (array([2]),) | True |
| 34 | 2509 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 35 | 2514 | (array([2]),) | (array([3]),) | 1 | (array([3]),) | False |
| 36 | 2515 | (array([4]),) | (array([5]),) | 1 | (array([4]),) | True |
| 37 | 2516 | (array([1]),) | (array([0]),) | 1 | (array([4]),) | False |
| 38 | 2521 | (array([1]),) | (array([4]),) | 1 | (array([2]),) | False |
| 39 | 2524 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 40 | 2526 | (array([4]),) | (array([3]),) | 1 | (array([3]),) | False |
| 41 | 2542 | (array([1]),) | (array([5]),) | 1 | (array([3]),) | False |
| 42 | 2546 | (array([5]),) | (array([4]),) | 1 | (array([3]),) | False |
| 43 | 2553 | (array([4]),) | (array([5]),) | 1 | (array([2]),) | False |
| 44 | 2554 | (array([2]),) | (array([1]),) | 1 | (array([3]),) | False |
| 45 | 2556 | (array([3]),) | (array([1]),) | 1 | (array([3]),) | True |
| 46 | 2557 | (array([3]),) | (array([0]),) | 1 | (array([0]),) | False |
| 47 | 2580 | (array([1]),) | (array([0]),) | 1 | (array([4]),) | False |
| 48 | 2581 | (array([1]),) | (array([0]),) | 1 | (array([1]),) | True |
| 49 | 2584 | (array([2]),) | (array([5]),) | 1 | (array([5]),) | False |
| 50 | 2590 | (array([3]),) | (array([0]),) | 1 | (array([1]),) | False |
| 51 | 2594 | (array([2]),) | (array([1]),) | 1 | (array([1]),) | False |
| 52 | 2595 | (array([5]),) | (array([2]),) | 1 | (array([2]),) | False |
| 53 | 2598 | (array([2]),) | (array([1]),) | 1 | (array([5]),) | False |
| 54 | 2603 | (array([2]),) | (array([4]),) | 1 | (array([5]),) | False |
| 55 | 2613 | (array([3]),) | (array([4]),) | 1 | (array([4]),) | False |
| 56 | 2615 | (array([0]),) | (array([3]),) | 1 | (array([4]),) | False |
| 57 | 2622 | (array([2]),) | (array([3]),) | 1 | (array([3]),) | False |
| 58 | 2630 | (array([2]),) | (array([5]),) | 1 | (array([3]),) | False |
| 59 | 2632 | (array([4]),) | (array([5]),) | 1 | (array([4]),) | True |
| 60 | 2639 | (array([5]),) | (array([2]),) | 1 | (array([4]),) | False |
| 61 | 2648 | (array([5]),) | (array([3]),) | 1 | (array([1]),) | False |
| 62 | 2655 | (array([5]),) | (array([2]),) | 1 | (array([3]),) | False |
| 63 | 2659 | (array([2]),) | (array([0]),) | 1 | (array([0]),) | False |
| 64 | 2663 | (array([3]),) | (array([0]),) | 1 | (array([4]),) | False |
| 65 | 2667 | (array([1]),) | (array([5]),) | 1 | (array([5]),) | False |
| 66 | 2681 | (array([3]),) | (array([4]),) | 1 | (array([3]),) | True |
| 67 | 2683 | (array([3]),) | (array([0]),) | 1 | (array([3]),) | True |
| 68 | 2690 | (array([2]),) | (array([1]),) | 1 | (array([5]),) | False |
| 69 | 2696 | (array([5]),) | (array([3]),) | 1 | (array([1]),) | False |
| 70 | 2700 | (array([4]),) | (array([1]),) | 1 | (array([3]),) | False |
| 71 | 2702 | (array([5]),) | (array([2]),) | 1 | (array([1]),) | False |
| 72 | 2712 | (array([2]),) | (array([1]),) | 1 | (array([2]),) | True |
| 73 | 2713 | (array([2]),) | (array([3]),) | 1 | (array([1]),) | False |
| 74 | 2718 | (array([2]),) | (array([3]),) | 1 | (array([1]),) | False |
| 75 | 2735 | (array([1]),) | (array([4]),) | 1 | (array([4]),) | False |
| 76 | 2740 | (array([5]),) | (array([4]),) | 1 | (array([2]),) | False |
| 77 | 2746 | (array([1]),) | (array([0]),) | 1 | (array([4]),) | False |
| 78 | 2749 | (array([1]),) | (array([3]),) | 1 | (array([2]),) | False |
| 79 | 2754 | (array([3]),) | (array([1]),) | 1 | (array([5]),) | False |
| 80 | 2755 | (array([3]),) | (array([1]),) | 1 | (array([5]),) | False |
| 81 | 2760 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 82 | 2769 | (array([1]),) | (array([0]),) | 1 | (array([1]),) | True |
| 83 | 2774 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 84 | 2783 | (array([5]),) | (array([0]),) | 1 | (array([2]),) | False |
| 85 | 2785 | (array([3]),) | (array([0]),) | 1 | (array([1]),) | False |
| 86 | 2786 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 87 | 2798 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 88 | 2803 | (array([3]),) | (array([5]),) | 1 | (array([1]),) | False |
| 89 | 2804 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 90 | 2813 | (array([5]),) | (array([0]),) | 1 | (array([4]),) | False |
| 91 | 2816 | (array([0]),) | (array([3]),) | 1 | (array([4]),) | False |
| 92 | 2817 | (array([2]),) | (array([5]),) | 1 | (array([0]),) | False |
| 93 | 2820 | (array([0]),) | (array([3]),) | 1 | (array([2]),) | False |
| 94 | 2822 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 95 | 2823 | (array([3]),) | (array([1]),) | 1 | (array([0]),) | False |
| 96 | 2824 | (array([3]),) | (array([2]),) | 1 | (array([0]),) | False |
| 97 | 2826 | (array([1]),) | (array([0]),) | 1 | (array([5]),) | False |
| 98 | 2827 | (array([5]),) | (array([0]),) | 1 | (array([2]),) | False |
| 99 | 2838 | (array([5]),) | (array([2]),) | 1 | (array([4]),) | False |
| 100 | 2839 | (array([2]),) | (array([5]),) | 1 | (array([4]),) | False |
| 101 | 2849 | (array([3]),) | (array([5]),) | 1 | (array([4]),) | False |
| 102 | 2851 | (array([2]),) | (array([1]),) | 1 | (array([4]),) | False |
| 103 | 2853 | (array([3]),) | (array([5]),) | 1 | (array([3]),) | True |
| 104 | 2855 | (array([5]),) | (array([0]),) | 1 | (array([1]),) | False |
| 105 | 2870 | (array([2]),) | (array([1]),) | 1 | (array([2]),) | True |
| 106 | 2885 | (array([1]),) | (array([0]),) | 1 | (array([3]),) | False |
| 107 | 2896 | (array([1]),) | (array([0]),) | 1 | (array([4]),) | False |
| 108 | 2900 | (array([4]),) | (array([1]),) | 1 | (array([0]),) | False |
| 109 | 2901 | (array([1]),) | (array([2]),) | 1 | (array([1]),) | True |
| 110 | 2906 | (array([5]),) | (array([0]),) | 1 | (array([3]),) | False |
| 111 | 2912 | (array([5]),) | (array([2]),) | 1 | (array([3]),) | False |
| 112 | 2918 | (array([0]),) | (array([4]),) | 1 | (array([2]),) | False |
| 113 | 2930 | (array([3]),) | (array([5]),) | 1 | (array([5]),) | False |
| 114 | 2931 | (array([0]),) | (array([5]),) | 1 | (array([2]),) | False |
| 115 | 2938 | (array([0]),) | (array([4]),) | 1 | (array([5]),) | False |
| 116 | 2939 | (array([2]),) | (array([3]),) | 1 | (array([3]),) | False |
| 117 | 2945 | (array([1]),) | (array([4]),) | 1 | (array([3]),) | False |
| 118 | 2946 | (array([3]),) | (array([2]),) | 1 | (array([5]),) | False |
| 119 | 2952 | (array([4]),) | (array([5]),) | 1 | (array([2]),) | False |
| 120 | 2960 | (array([4]),) | (array([2]),) | 1 | (array([3]),) | False |
| 121 | 2969 | (array([5]),) | (array([0]),) | 1 | (array([1]),) | False |
| 122 | 2977 | (array([2]),) | (array([5]),) | 1 | (array([2]),) | True |
| 123 | 2979 | (array([4]),) | (array([1]),) | 1 | (array([5]),) | False |
| 124 | 2981 | (array([5]),) | (array([2]),) | 1 | (array([4]),) | False |
| 125 | 2987 | (array([2]),) | (array([5]),) | 1 | (array([2]),) | True |
| 126 | 2994 | (array([1]),) | (array([3]),) | 1 | (array([2]),) | False |
| 127 | 2996 | (array([3]),) | (array([0]),) | 1 | (array([4]),) | False |
| 128 | 2998 | (array([2]),) | (array([1]),) | 1 | (array([2]),) | True |
| 129 | 3005 | (array([2]),) | (array([5]),) | 1 | (array([2]),) | True |
| 130 | 3012 | (array([0]),) | (array([4]),) | 1 | (array([1]),) | False |
| 131 | 3014 | (array([1]),) | (array([3]),) | 1 | (array([3]),) | False |
| 132 | 3015 | (array([5]),) | (array([2]),) | 1 | (array([3]),) | False |
| 133 | 3032 | (array([2]),) | (array([3]),) | 1 | (array([4]),) | False |
| 134 | 3038 | (array([2]),) | (array([4]),) | 1 | (array([3]),) | False |
| 135 | 3040 | (array([3]),) | (array([5]),) | 1 | (array([2]),) | False |
| 136 | 3055 | (array([4]),) | (array([5]),) | 1 | (array([1]),) | False |
| 137 | 3061 | (array([5]),) | (array([1]),) | 1 | (array([1]),) | False |
| 138 | 3062 | (array([1]),) | (array([4]),) | 1 | (array([1]),) | True |
| 139 | 3063 | (array([1]),) | (array([0]),) | 1 | (array([1]),) | True |
| 140 | 3065 | (array([0]),) | (array([3]),) | 1 | (array([1]),) | False |
| 141 | 3066 | (array([0]),) | (array([1]),) | 1 | (array([1]),) | False |
| 142 | 3068 | (array([4]),) | (array([0]),) | 1 | (array([1]),) | False |
| 143 | 3070 | (array([1]),) | (array([0]),) | 1 | (array([1]),) | True |
| 144 | 3080 | (array([0]),) | (array([2]),) | 1 | (array([2]),) | False |
| 145 | 3100 | (array([1]),) | (array([5]),) | 1 | (array([2]),) | False |
| 146 | 3101 | (array([2]),) | (array([4]),) | 1 | (array([3]),) | False |
| 147 | 3104 | (array([5]),) | (array([1]),) | 1 | (array([5]),) | True |
| 148 | 3118 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 149 | 3121 | (array([2]),) | (array([5]),) | 1 | (array([2]),) | True |
| 150 | 3129 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 151 | 3130 | (array([4]),) | (array([1]),) | 1 | (array([1]),) | False |
| 152 | 3132 | (array([4]),) | (array([1]),) | 1 | (array([2]),) | False |
| 153 | 3139 | (array([3]),) | (array([0]),) | 1 | (array([4]),) | False |
| 154 | 3144 | (array([4]),) | (array([0]),) | 1 | (array([1]),) | False |
| 155 | 3151 | (array([1]),) | (array([0]),) | 1 | (array([2]),) | False |
| 156 | 3152 | (array([0]),) | (array([5]),) | 1 | (array([3]),) | False |
| 157 | 3160 | (array([4]),) | (array([5]),) | 1 | (array([2]),) | False |
| 158 | 3163 | (array([3]),) | (array([1]),) | 1 | (array([3]),) | True |
| 159 | 3169 | (array([2]),) | (array([1]),) | 1 | (array([3]),) | False |
| 160 | 3170 | (array([2]),) | (array([1]),) | 1 | (array([3]),) | False |
| 161 | 3182 | (array([1]),) | (array([0]),) | 1 | (array([0]),) | False |
| 162 | 3185 | (array([0]),) | (array([5]),) | 1 | (array([1]),) | False |
| 163 | 3187 | (array([1]),) | (array([0]),) | 1 | (array([1]),) | True |
| 164 | 3191 | (array([3]),) | (array([0]),) | 1 | (array([3]),) | True |
| 165 | 3192 | (array([3]),) | (array([2]),) | 1 | (array([1]),) | False |
| 166 | 3195 | (array([1]),) | (array([4]),) | 1 | (array([5]),) | False |
| 167 | 3202 | (array([3]),) | (array([0]),) | 1 | (array([5]),) | False |
| 168 | 3212 | (array([5]),) | (array([3]),) | 1 | (array([3]),) | False |
| 169 | 3214 | (array([0]),) | (array([1]),) | 1 | (array([4]),) | False |
| 170 | 3215 | (array([1]),) | (array([0]),) | 1 | (array([], dtype=int64),) | False |
| 171 | 3217 | (array([2]),) | (array([1]),) | 1 | (array([2]),) | True |
| 172 | 3229 | (array([5]),) | (array([3]),) | 1 | (array([1]),) | False |
| 173 | 3240 | (array([3]),) | (array([2]),) | 1 | (array([4]),) | False |
| 174 | 3244 | (array([5]),) | (array([1]),) | 1 | (array([4]),) | False |
| 175 | 3245 | (array([0]),) | (array([1]),) | 1 | (array([4]),) | False |
| 176 | 3246 | (array([2]),) | (array([4]),) | 1 | (array([5]),) | False |
| 177 | 3248 | (array([3]),) | (array([5]),) | 1 | (array([3]),) | True |
| 178 | 3251 | (array([1]),) | (array([0]),) | 1 | (array([4]),) | False |
| 179 | 3260 | (array([1]),) | (array([0]),) | 1 | (array([3]),) | False |
| 180 | 3261 | (array([3]),) | (array([1]),) | 1 | (array([3]),) | True |
| 181 | 3262 | (array([5]),) | (array([0]),) | 1 | (array([0]),) | False |
| 182 | 3266 | (array([2]),) | (array([3]),) | 1 | (array([1]),) | False |
| 183 | 3277 | (array([4]),) | (array([0]),) | 1 | (array([1]),) | False |
| 184 | 3285 | (array([2]),) | (array([1]),) | 1 | (array([5]),) | False |
| 185 | 3288 | (array([4]),) | (array([2]),) | 1 | (array([5]),) | False |
| 186 | 3289 | (array([0]),) | (array([1]),) | 1 | (array([2]),) | False |
| 187 | 3290 | (array([5]),) | (array([2]),) | 1 | (array([5]),) | True |
| 188 | 3292 | (array([4]),) | (array([1]),) | 1 | (array([5]),) | False |
| 189 | 3297 | (array([1]),) | (array([0]),) | 1 | (array([3]),) | False |
| 190 | 3299 | (array([4]),) | (array([0]),) | 1 | (array([1]),) | False |
| 191 | 3306 | (array([2]),) | (array([0]),) | 1 | (array([2]),) | True |
| 192 | 3307 | (array([4]),) | (array([5]),) | 1 | (array([], dtype=int64),) | False |
+------------+----------+---------------+-----------------+--------+---------------------------+----------------+
Total cases of neighbor label matches are: 35
For test set label classification prediction results
Statistics for dataset: citeseer
---------------------------------
For degree 1 total: 553 (0%) and wrong prediction: 192(0%)
For degree 2 total: 240 (0%) and wrong prediction: 89(0%)
For degree 3 total: 97 (0%) and wrong prediction: 35(0%)
For degree 4 total: 41 (0%) and wrong prediction: 14(0%)
For degree 5 total: 22 (0%) and wrong prediction: 10(0%)
For degree 6 total: 16 (0%) and wrong prediction: 6(0%)
For degree 7 total: 7 (0%) and wrong prediction: 2(0%)
For degree 11 total: 7 (0%) and wrong prediction: 1(0%)
For degree 15 total: 2 (0%) and wrong prediction: 2(100%)
For degree 17 total: 2 (0%) and wrong prediction: 2(100%)
###Markdown
Now to check for inductive graph setting
###Code
def get_stats(dataset_name):
simple_classify_f1(dataset_name)
modified_classify_f1(dataset_name)
classify_analysis(dataset_name)
get_stats('citeseer')
get_stats('pubmed')
get_stats('cora')
###Output
Name:
Type: Graph
Number of nodes: 2708
Number of edges: 5278
Average degree: 3.8981
Simple classification results
Train ratio: Default with planetoid
micro: 0.673
macro: 0.6544921937255024
samples: 0.673
weighted: 0.670689628322458
Accuracy: 0.673
Propagating neighbor labels for degree 1 nodes.
Name:
Type: Graph
Number of nodes: 2708
Number of edges: 5278
Average degree: 3.8981
Modified classification results
Train ratio: Default with planetoid
micro: 0.617
macro: 0.5960743936478071
samples: 0.617
weighted: 0.6149304116552744
Accuracy: 0.617
+------------+----------+---------------+-----------------+--------+----------------+----------------+
| Serial No. | Node no. | True Label | Predicted Label | Degree | Neighbor label | Neighbor Match |
+------------+----------+---------------+-----------------+--------+----------------+----------------+
| 1 | 1854 | (array([3]),) | (array([0]),) | 1 | (array([3]),) | True |
| 2 | 1872 | (array([6]),) | (array([0]),) | 1 | (array([3]),) | False |
| 3 | 1883 | (array([1]),) | (array([4]),) | 1 | (array([5]),) | False |
| 4 | 1990 | (array([5]),) | (array([0]),) | 1 | (array([3]),) | False |
| 5 | 2005 | (array([2]),) | (array([6]),) | 1 | (array([3]),) | False |
| 6 | 2032 | (array([2]),) | (array([4]),) | 1 | (array([4]),) | False |
| 7 | 2061 | (array([4]),) | (array([3]),) | 1 | (array([5]),) | False |
| 8 | 2092 | (array([5]),) | (array([0]),) | 1 | (array([1]),) | False |
| 9 | 2098 | (array([3]),) | (array([2]),) | 1 | (array([0]),) | False |
| 10 | 2104 | (array([4]),) | (array([0]),) | 1 | (array([5]),) | False |
| 11 | 2179 | (array([2]),) | (array([3]),) | 1 | (array([0]),) | False |
| 12 | 2191 | (array([0]),) | (array([6]),) | 1 | (array([3]),) | False |
| 13 | 2234 | (array([4]),) | (array([3]),) | 1 | (array([0]),) | False |
| 14 | 2255 | (array([3]),) | (array([6]),) | 1 | (array([1]),) | False |
| 15 | 2258 | (array([3]),) | (array([6]),) | 1 | (array([3]),) | True |
| 16 | 2272 | (array([0]),) | (array([4]),) | 1 | (array([5]),) | False |
| 17 | 2298 | (array([3]),) | (array([1]),) | 1 | (array([4]),) | False |
| 18 | 2300 | (array([3]),) | (array([2]),) | 1 | (array([4]),) | False |
| 19 | 2316 | (array([3]),) | (array([0]),) | 1 | (array([4]),) | False |
| 20 | 2322 | (array([3]),) | (array([4]),) | 1 | (array([1]),) | False |
| 21 | 2328 | (array([5]),) | (array([1]),) | 1 | (array([1]),) | False |
| 22 | 2373 | (array([3]),) | (array([2]),) | 1 | (array([3]),) | True |
| 23 | 2377 | (array([3]),) | (array([0]),) | 1 | (array([1]),) | False |
| 24 | 2410 | (array([0]),) | (array([3]),) | 1 | (array([3]),) | False |
| 25 | 2426 | (array([3]),) | (array([2]),) | 1 | (array([3]),) | True |
| 26 | 2428 | (array([4]),) | (array([3]),) | 1 | (array([3]),) | False |
| 27 | 2432 | (array([4]),) | (array([5]),) | 1 | (array([4]),) | True |
| 28 | 2468 | (array([2]),) | (array([1]),) | 1 | (array([6]),) | False |
| 29 | 2477 | (array([0]),) | (array([1]),) | 1 | (array([3]),) | False |
| 30 | 2487 | (array([4]),) | (array([6]),) | 1 | (array([1]),) | False |
| 31 | 2505 | (array([6]),) | (array([1]),) | 1 | (array([1]),) | False |
| 32 | 2506 | (array([3]),) | (array([0]),) | 1 | (array([5]),) | False |
| 33 | 2513 | (array([5]),) | (array([1]),) | 1 | (array([3]),) | False |
| 34 | 2521 | (array([6]),) | (array([5]),) | 1 | (array([3]),) | False |
| 35 | 2527 | (array([0]),) | (array([5]),) | 1 | (array([5]),) | False |
| 36 | 2529 | (array([3]),) | (array([4]),) | 1 | (array([3]),) | True |
| 37 | 2531 | (array([0]),) | (array([6]),) | 1 | (array([3]),) | False |
| 38 | 2544 | (array([3]),) | (array([4]),) | 1 | (array([0]),) | False |
| 39 | 2550 | (array([5]),) | (array([0]),) | 1 | (array([3]),) | False |
| 40 | 2557 | (array([0]),) | (array([1]),) | 1 | (array([2]),) | False |
| 41 | 2583 | (array([0]),) | (array([3]),) | 1 | (array([4]),) | False |
| 42 | 2598 | (array([0]),) | (array([1]),) | 1 | (array([1]),) | False |
| 43 | 2603 | (array([2]),) | (array([1]),) | 1 | (array([3]),) | False |
| 44 | 2606 | (array([1]),) | (array([5]),) | 1 | (array([3]),) | False |
| 45 | 2610 | (array([0]),) | (array([3]),) | 1 | (array([3]),) | False |
| 46 | 2619 | (array([5]),) | (array([3]),) | 1 | (array([2]),) | False |
| 47 | 2620 | (array([4]),) | (array([1]),) | 1 | (array([0]),) | False |
| 48 | 2622 | (array([0]),) | (array([5]),) | 1 | (array([6]),) | False |
| 49 | 2626 | (array([3]),) | (array([5]),) | 1 | (array([3]),) | True |
| 50 | 2646 | (array([3]),) | (array([2]),) | 1 | (array([4]),) | False |
| 51 | 2650 | (array([1]),) | (array([5]),) | 1 | (array([3]),) | False |
| 52 | 2658 | (array([0]),) | (array([3]),) | 1 | (array([2]),) | False |
| 53 | 2660 | (array([3]),) | (array([4]),) | 1 | (array([3]),) | True |
| 54 | 2672 | (array([4]),) | (array([3]),) | 1 | (array([4]),) | True |
| 55 | 2692 | (array([3]),) | (array([4]),) | 1 | (array([3]),) | True |
| 56 | 2696 | (array([5]),) | (array([0]),) | 1 | (array([4]),) | False |
| 57 | 2699 | (array([3]),) | (array([6]),) | 1 | (array([3]),) | True |
| 58 | 2700 | (array([4]),) | (array([3]),) | 1 | (array([3]),) | False |
| 59 | 2704 | (array([3]),) | (array([6]),) | 1 | (array([3]),) | True |
+------------+----------+---------------+-----------------+--------+----------------+----------------+
Total cases of neighbor label matches are: 12
For test set label classification prediction results
Statistics for dataset: cora
---------------------------------
For degree 1 total: 178 (0%) and wrong prediction: 59(0%)
For degree 2 total: 222 (0%) and wrong prediction: 65(0%)
For degree 3 total: 214 (0%) and wrong prediction: 69(0%)
For degree 4 total: 134 (0%) and wrong prediction: 48(0%)
For degree 5 total: 104 (0%) and wrong prediction: 35(0%)
For degree 6 total: 54 (0%) and wrong prediction: 24(0%)
For degree 7 total: 25 (0%) and wrong prediction: 11(0%)
For degree 8 total: 23 (0%) and wrong prediction: 6(0%)
For degree 9 total: 7 (0%) and wrong prediction: 1(0%)
For degree 10 total: 10 (0%) and wrong prediction: 3(0%)
For degree 12 total: 6 (0%) and wrong prediction: 1(0%)
For degree 15 total: 3 (0%) and wrong prediction: 1(0%)
For degree 16 total: 3 (0%) and wrong prediction: 1(0%)
For degree 31 total: 1 (0%) and wrong prediction: 1(100%)
For degree 44 total: 1 (0%) and wrong prediction: 1(100%)
For degree 65 total: 1 (0%) and wrong prediction: 1(100%)
|
Math/1028/1017. Convert to Base -2.ipynb | ###Markdown
Problem: Given a number N, return a string of "0"s and "1"s that is the base -2 (negative two) representation of N. Unless the string is "0", the returned string must not contain leading zeros.

Example 1: Input: 2, Output: "110", Explanation: (-2)^2 + (-2)^1 = 2
Example 2: Input: 3, Output: "111", Explanation: (-2)^2 + (-2)^1 + (-2)^0 = 3
Example 3: Input: 4, Output: "100", Explanation: (-2)^2 = 4

Constraints: 0 <= N <= 10^9
###Code
class Solution:
    def baseNeg2(self, N: int) -> str:
        if N == 0: return '0'
        base, res = -2, []
        while N != 0:
            r = N % base
            d = N // base
            # Python's floored division can leave a negative remainder when the base is
            # negative; normalize it to a valid digit (0 or 1) and adjust the quotient.
            if r < 0:
                d += 1
                r += 2
            res.append(str(r))
            N = d
        res.reverse()
        return ''.join(res)
class Solution:
    # Generalized rewrite of the version above: the divisor is kept in `base`
    # (it must be -2 for this problem) and the remainder is normalized with abs(base).
    def baseNeg2(self, N: int) -> str:
        if N == 0: return '0'
        base, res = -2, []
        while N != 0:
            r = N % base
            d = N // base
            if r < 0:
                d += 1
                r += abs(base)
            res.append(str(r))
            N = d
        res.reverse()
        return ''.join(res)
solution = Solution()
solution.baseNeg2(4)
-5 // -3
-5 % -3
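# Quick sanity check (illustrative, not required by the problem): Python's floor division
# rounds toward minus infinity, e.g. -5 // -3 == 1 and -5 % -3 == -2, which is why the
# remainder is normalized inside baseNeg2. Reconstructing N from its digits confirms that
# sum(digit_i * (-2)**i) equals the input.
def verify_neg2(n):
    digits = Solution().baseNeg2(n)
    return sum(int(d) * (-2) ** i for i, d in enumerate(reversed(digits))) == n
print(all(verify_neg2(n) for n in range(0, 50)))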
###Output
_____no_output_____ |
dic-0-2-model-30ca37.ipynb | ###Markdown
Denoising DIC data using Unet

**INDEX**
1. DIC Image Data generator
2. Data Visualization
3. ResNet-style Architecture
4. U-net Architecture
5. Prediction
6. Result plots

Import libraries
###Code
import os
import copy
import json
import numpy as np
import numpy.random as random
import tensorflow as tf
import tensorflow.keras as keras
from matplotlib import pyplot as plt
from scipy.interpolate import interp2d
from tensorflow.keras.layers import Input
from tensorflow.keras.layers import Conv2D
from tensorflow.keras.layers import MaxPooling2D
from tensorflow.keras.layers import Conv2DTranspose
from tensorflow.keras.layers import concatenate
from tensorflow.keras.layers import Add
from tensorflow.keras.layers import Dense, Dropout, Activation, Flatten, BatchNormalization
from tensorflow.keras.models import Model
from tensorflow.keras.optimizers import Adam
###Output
_____no_output_____
###Markdown
Data Generator
###Code
def dispgen(batch_size=32):
    SubsetSize = 256
    count = 0
    total = 1000
    batch = np.zeros((batch_size, SubsetSize, SubsetSize))
    while True:
        for i in range(total):
            for l in range(batch_size):
                # coarser random grids (larger s) give smoother displacement fields
                if l < 6:
                    s = 128
                elif l < 11:
                    s = 64
                elif l < 16:
                    s = 32
                elif l < 21:
                    s = 16
                elif l < 26:
                    s = 8
                else:
                    s = 4
                xp0 = np.arange(1, SubsetSize/s + 1, 1/s) + 2
                yp0 = np.arange(1, SubsetSize/s + 1, 1/s) + 2
                xxp0 = np.arange(1, (SubsetSize/s) + 4, 1)
                yyp0 = np.arange(1, (SubsetSize/s) + 4, 1)
                # random coarse field, upsampled to 256x256 with cubic interpolation
                u = random.randint(-100, 100, [int(SubsetSize/s + 3), int(SubsetSize/s + 3)]) / 115
                disp_u = interp2d(xxp0, yyp0, u, kind='cubic')
                disp_u_ = disp_u(xp0, yp0)
                batch[count] = disp_u_
                count += 1
                if count == batch_size:
                    retX = tf.convert_to_tensor(batch)
                    batch = np.zeros((batch_size, SubsetSize, SubsetSize))
                    count = 0
                    # noisy input = clean field + zero-mean Gaussian noise (SD 0.2)
                    Noise = retX + np.random.normal(0, 0.2, batch.shape)
                    yield Noise, retX
n_batch = 32
# call the function to return a generator object for displacement matrices
#def getImage(gen):
# j = random.randint(0,n_batch-1)
# batch = next(gen)
# batch=tf.convert_to_tensor(batch)
# return batch[0],batch[0]+np.random.normal(0,0.2,batch[0].shape)
Ugen = dispgen(n_batch)
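# Each yield from dispgen is a (noisy, clean) pair of shape (batch_size, 256, 256): the clean
# fields are smooth cubic-interpolated displacements and the noisy ones add zero-mean
# Gaussian noise with standard deviation 0.2.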
###Output
_____no_output_____
###Markdown
Data Visualization

Data visualization is very important: only when you know what your data looks like can you decide how to process it.
###Code
x, y = next(Ugen)  # x holds the noisy images (Gaussian noise, mean 0, SD 0.2) and y the clean originals
for index in range(0, 4):  # plotting noisy data
plt.subplot(220 + 1 + index)
plt.imshow(x[index+10])
plt.show()
for index in range(0, 4):  # plotting original images
plt.subplot(220 + 1 + index)
plt.imshow(y[index+10])
plt.show()
###Output
_____no_output_____
###Markdown
Building a ResNet-style Architecture
###Code
def resBlock(input_tensor, num_channels=1):
conv1 = Conv2D(num_channels,(3,3),padding='same')(input_tensor)
relu = Activation('relu')(conv1)
conv2 = Conv2D(num_channels,(3,3),padding='same')(relu)
add = Add()([input_tensor, conv2])
output_tensor = Activation('relu')(add)
return output_tensor
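# Each residual block adds its learned correction back onto its own input (identity skip
# connection), which keeps deeper stacks of blocks easy to optimize.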
def build_resnet_model(height,width,num_channels,num_res_blocks):
inp = Input(shape=(height,width,1))
conv = Conv2D (num_channels,(3,3),padding='same')(inp)
block_out = Activation('relu')(conv)
for i in np.arange(0,num_res_blocks):
block_out = resBlock(block_out, num_channels)
conv_m2 = Conv2D (1,(3,3),padding='same')(block_out)
add_m2 = Add()([inp, conv_m2])
model = Model(inputs =inp,outputs = add_m2)
return model
model = build_resnet_model(256,256,32,5)
model.summary()
###Output
_____no_output_____
###Markdown
U-net Architecture
###Code
def conv_block(inputs=None, n_filters=32, dropout_prob=0, max_pooling=True):
"""
Convolutional downsampling block
Arguments:
inputs -- Input tensor
n_filters -- Number of filters for the convolutional layers
dropout_prob -- Dropout probability
max_pooling -- Use MaxPooling2D to reduce the spatial dimensions of the output volume
Returns:
next_layer, skip_connection -- Next layer and skip connection outputs
"""
conv = Conv2D(n_filters, # Number of filters
3, # Kernel size
activation='relu',
padding='same',
kernel_initializer='he_normal')(inputs)
conv = Conv2D(n_filters, # Number of filters
3, # Kernel size
activation='relu',
padding='same',
kernel_initializer='he_normal')(conv)
# if dropout_prob > 0 add a dropout layer, with the variable dropout_prob as parameter
if dropout_prob > 0:
conv = Dropout(dropout_prob)(conv)
# if max_pooling is True add a MaxPooling2D with 2x2 pool_size
if max_pooling:
next_layer = MaxPooling2D(pool_size=(2,2))(conv)
else:
next_layer = conv
skip_connection = conv
return next_layer, skip_connection
def upsampling_block(expansive_input, contractive_input, n_filters=32):
"""
Convolutional upsampling block
Arguments:
expansive_input -- Input tensor from previous layer
contractive_input -- Input tensor from previous skip layer
n_filters -- Number of filters for the convolutional layers
Returns:
conv -- Tensor output
"""
up = Conv2DTranspose(
n_filters, # number of filters
(3,3), # Kernel size
strides=(2,2),
padding='same')(expansive_input)
# Merge the previous output and the contractive_input
merge = concatenate([up, contractive_input], axis=3)
conv = Conv2D(n_filters, # Number of filters
3, # Kernel size
activation='relu',
padding='same',
kernel_initializer='he_normal')(merge)
conv = Conv2D(n_filters, # Number of filters
3, # Kernel size
activation='relu',
padding='same',
kernel_initializer='he_normal')(conv)
return conv
def unet_model(input_size=(256,256,1), n_filters=32, n_classes=1):
"""
Unet model
Arguments:
input_size -- Input shape
n_filters -- Number of filters for the convolutional layers
n_classes -- Number of output classes
Returns:
model -- tf.keras.Model
"""
inputs = Input(input_size)
# Contracting Path (encoding)
# Add a conv_block with the inputs of the unet_ model and n_filters
cblock1 = conv_block(inputs,n_filters)
# Chain the first element of the output of each block to be the input of the next conv_block.
# Double the number of filters at each new step
cblock2 = conv_block(cblock1[0],2*n_filters)
cblock3 = conv_block(cblock2[0], 4*n_filters)
cblock4 = conv_block(cblock3[0], 8*n_filters, 0.3) # Include a dropout of 0.3 for this layer
# Include a dropout of 0.3 for this layer, and avoid the max_pooling layer
cblock5 = conv_block(cblock4[0],16*n_filters, 0.3, max_pooling=False)
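    # With a 256x256 input the encoder halves the spatial resolution four times
    # (256 -> 128 -> 64 -> 32 -> 16) while doubling the filter count at each step;
    # the decoder below mirrors this path using the stored skip connections.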
# Expanding Path (decoding)
# Add the first upsampling_block.
# Use the cblock5[0] as expansive_input and cblock4[1] as contractive_input and n_filters * 8
ublock6 = upsampling_block(cblock5[0], cblock4[1], 8*n_filters)
# Chain the output of the previous block as expansive_input and the corresponding contractive block output.
# Note that you must use the second element of the contractive block i.e before the maxpooling layer.
# At each step, use half the number of filters of the previous block
ublock7 = upsampling_block(ublock6, cblock3[1],4*n_filters)
ublock8 = upsampling_block(ublock7, cblock2[1],2*n_filters)
ublock9 = upsampling_block(ublock8, cblock1[1], n_filters)
conv9 = Conv2D(n_filters,
3,
activation='relu',
padding='same',
kernel_initializer='he_normal')(ublock9)
conv10 = Conv2D(n_classes, 1, padding='same')(conv9)
model = tf.keras.Model(inputs=inputs, outputs=conv10)
return model
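# conv10 has a linear activation and n_classes=1, so the network outputs a single-channel
# 256x256 map, which suits this denoising regression task (trained with MSE below).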
###Output
_____no_output_____
###Markdown
Training
###Code
model=unet_model((256,256,1))
model.compile(optimizer= Adam(beta_2 = 0.9),loss='mean_squared_error',metrics=['mse'])
model.summary()
###Output
_____no_output_____
###Markdown
Creating a checkpoint for every epoch
###Code
checkpoint_path = "./cp-{epoch:04d}.ckpt"   # epoch-stamped weight files (path pattern chosen here; adjust as needed)
checkpoint_dir = os.path.dirname(checkpoint_path)
# Create a callback that saves the model's weights after every epoch
cp_callback = tf.keras.callbacks.ModelCheckpoint(filepath=checkpoint_path,
                                                 save_weights_only=True,
                                                 verbose=1)
with tf.device('/device:GPU:0'):
    # pass the checkpoint callback so the weights are actually saved during training
    history = model.fit(Ugen, steps_per_epoch=1250, epochs=10, callbacks=[cp_callback])
filepath='./Pbmodel1'
model.save(
filepath, overwrite=True, include_optimizer=True,
signatures=None, options=None, save_traces=True
)
###Output
_____no_output_____
###Markdown
Prediction
###Code
x_test_noisy,x=next(Ugen)
response = model.predict(x_test_noisy)
response=np.squeeze(response,axis=-1)
###Output
_____no_output_____
###Markdown
Show some of the denoised examples. Images from left to right: noisy image, original image, denoised image.
###Code
for index in range(24,30):
plt.subplot(330 + 1)
plt.imshow(x_test_noisy[index])
plt.subplot(330 + 2)
plt.imshow(x[index])
plt.subplot(330 + 3)
plt.imshow(response[index])
plt.show()
###Output
_____no_output_____
###Markdown
Result plots
###Code
from tensorflow.keras.preprocessing import image
img_path = '../input/dic-measurements/img2.jpg'
img = image.load_img(img_path, target_size=(256, 256))
# use separate variable names so the clean generator batch `x` from the Prediction section is not overwritten
img_arr = image.img_to_array(img)
img_arr = np.expand_dims(img_arr, axis=0)
img_arr = tf.image.rgb_to_grayscale(img_arr)
img_denoised = model.predict(img_arr)
plt.imshow(img_arr[0])
plt.show()
print("Noisy Image Data")
plt.imshow(img_denoised[0])
plt.show()
print("Denoised Image Data")
from tensorflow.keras.preprocessing import image
img_path = '../input/dic-measurements/img1.jpg'
img = image.load_img(img_path, target_size=(256, 256))
img_arr = image.img_to_array(img)
img_arr = np.expand_dims(img_arr, axis=0)
img_arr = tf.image.rgb_to_grayscale(img_arr)
img_denoised = model.predict(img_arr)
plt.imshow(img_arr[0])
plt.show()
print("Noisy Image Data")
plt.imshow(img_denoised[0])
plt.show()
print("Denoised Image Data")
# compare the clean generator batch `x` (from the Prediction section) with the network output `response`
avg_mse = []
avg_mean = []
avg_sd = []
n_images = 3
for index in range(0, n_images):
    avg_mean.append(np.mean(x[index] - response[index]))
    avg_mse.append(np.mean((x[index] - response[index])**2))
    avg_sd.append(np.std(x[index] - response[index]))
print("Mean:", np.sum(avg_mean) / n_images)
print("MSE:", np.sum(avg_mse) / n_images)
print("SD:", np.sum(avg_sd) / n_images)
model.save("model.h5")
###Output
_____no_output_____ |
Tensorflow/notebook/lstm.ipynb | ###Markdown
Deep Learning
=============

Assignment 6
------------

After training a skip-gram model in `5_word2vec.ipynb`, the goal of this notebook is to train an LSTM character model over [Text8](http://mattmahoney.net/dc/textdata) data.
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import os
import numpy as np
import random
import string
import tensorflow as tf
import zipfile
from six.moves import range
from six.moves.urllib.request import urlretrieve
url = 'http://mattmahoney.net/dc/'
def maybe_download(filename, expected_bytes):
"""Download a file if not present, and make sure it's the right size."""
if not os.path.exists(filename):
filename, _ = urlretrieve(url + filename, filename)
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified %s' % filename)
else:
print(statinfo.st_size)
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
filename = maybe_download('text8.zip', 31344016)
def read_data(filename):
with zipfile.ZipFile(filename) as f:
name = f.namelist()[0]
data = tf.compat.as_str(f.read(name))
return data
text = read_data(filename)
print('Data size %d' % len(text))
print(text[0:10000])
###Output
anarchism originated as a term of abuse first used against early working class radicals including the diggers of the english revolution and the sans culottes of the french revolution whilst the term is still used in a pejorative way to describe any act that used violent means to destroy the organization of society it has also been taken up as a positive label by self defined anarchists the word anarchism is derived from the greek without archons ruler chief king anarchism as a political philosophy is the belief that rulers are unnecessary and should be abolished although there are differing interpretations of what this means anarchism also refers to related social movements that advocate the elimination of authoritarian institutions particularly the state the word anarchy as most anarchists use it does not imply chaos nihilism or anomie but rather a harmonious anti authoritarian society in place of what are regarded as authoritarian political structures and coercive economic institutions anarchists advocate social relations based upon voluntary association of autonomous individuals mutual aid and self governance while anarchism is most easily defined by what it is against anarchists also offer positive visions of what they believe to be a truly free society however ideas about how an anarchist society might work vary considerably especially with respect to economics there is also disagreement about how a free society might be brought about origins and predecessors kropotkin and others argue that before recorded history human society was organized on anarchist principles most anthropologists follow kropotkin and engels in believing that hunter gatherer bands were egalitarian and lacked division of labour accumulated wealth or decreed law and had equal access to resources william godwin anarchists including the the anarchy organisation and rothbard find anarchist attitudes in taoism from ancient china kropotkin found similar ideas in stoic zeno of citium according to kropotkin zeno repudiated the omnipotence of the state its intervention and regimentation and proclaimed the sovereignty of the moral law of the individual the anabaptists of one six th century europe are sometimes considered to be religious forerunners of modern anarchism bertrand russell in his history of western philosophy writes that the anabaptists repudiated all law since they held that the good man will be guided at every moment by the holy spirit from this premise they arrive at communism the diggers or true levellers were an early communistic movement during the time of the english civil war and are considered by some as forerunners of modern anarchism in the modern era the first to use the term to mean something other than chaos was louis armand baron de lahontan in his nouveaux voyages dans l am rique septentrionale one seven zero three where he described the indigenous american society which had no state laws prisons priests or private property as being in anarchy russell means a libertarian and leader in the american indian movement has repeatedly stated that he is an anarchist and so are all his ancestors in one seven nine three in the thick of the french revolution william godwin published an enquiry concerning political justice although godwin did not use the word anarchism many later anarchists have regarded this book as the first major anarchist text and godwin as the founder of philosophical anarchism but at this point no anarchist movement yet existed and the term anarchiste was known mainly as an insult 
hurled by the bourgeois girondins at more radical elements in the french revolution the first self labelled anarchist pierre joseph proudhon it is commonly held that it wasn t until pierre joseph proudhon published what is property in one eight four zero that the term anarchist was adopted as a self description it is for this reason that some claim proudhon as the founder of modern anarchist theory in what is property proudhon answers with the famous accusation property is theft in this work he opposed the institution of decreed property propri t where owners have complete rights to use and abuse their property as they wish such as exploiting workers for profit in its place proudhon supported what he called possession individuals can have limited rights to use resources capital and goods in accordance with principles of equality and justice proudhon s vision of anarchy which he called mutualism mutuellisme involved an exchange economy where individuals and groups could trade the products of their labor using labor notes which represented the amount of working time involved in production this would ensure that no one would profit from the labor of others workers could freely join together in co operative workshops an interest free bank would be set up to provide everyone with access to the means of production proudhon s ideas were influential within french working class movements and his followers were active in the revolution of one eight four eight in france proudhon s philosophy of property is complex it was developed in a number of works over his lifetime and there are differing interpretations of some of his ideas for more detailed discussion see here max stirner s egoism in his the ego and its own stirner argued that most commonly accepted social institutions including the notion of state property as a right natural rights in general and the very notion of society were mere illusions or ghosts in the mind saying of society that the individuals are its reality he advocated egoism and a form of amoralism in which individuals would unite in associations of egoists only when it was in their self interest to do so for him property simply comes about through might whoever knows how to take to defend the thing to him belongs property and what i have in my power that is my own so long as i assert myself as holder i am the proprietor of the thing stirner never called himself an anarchist he accepted only the label egoist nevertheless his ideas were influential on many individualistically inclined anarchists although interpretations of his thought are diverse american individualist anarchism benjamin tucker in one eight two five josiah warren had participated in a communitarian experiment headed by robert owen called new harmony which failed in a few years amidst much internal conflict warren blamed the community s failure on a lack of individual sovereignty and a lack of private property warren proceeded to organise experimenal anarchist communities which respected what he called the sovereignty of the individual at utopia and modern times in one eight three three warren wrote and published the peaceful revolutionist which some have noted to be the first anarchist periodical ever published benjamin tucker says that warren was the first man to expound and formulate the doctrine now known as anarchism liberty xiv december one nine zero zero one benjamin tucker became interested in anarchism through meeting josiah warren and william b greene he edited and published liberty from august one eight 
eight one to april one nine zero eight it is widely considered to be the finest individualist anarchist periodical ever issued in the english language tucker s conception of individualist anarchism incorporated the ideas of a variety of theorists greene s ideas on mutual banking warren s ideas on cost as the limit of price a heterodox variety of labour theory of value proudhon s market anarchism max stirner s egoism and herbert spencer s law of equal freedom tucker strongly supported the individual s right to own the product of his or her labour as private property and believed in a market economy for trading this property he argued that in a truly free market system without the state the abundance of competition would eliminate profits and ensure that all workers received the full value of their labor other one nine th century individualists included lysander spooner stephen pearl andrews and victor yarros the first international mikhail bakunin one eight one four one eight seven six in europe harsh reaction followed the revolutions of one eight four eight twenty years later in one eight six four the international workingmen s association sometimes called the first international united some diverse european revolutionary currents including anarchism due to its genuine links to active workers movements the international became signficiant from the start karl marx was a leading figure in the international he was elected to every succeeding general council of the association the first objections to marx came from the mutualists who opposed communism and statism shortly after mikhail bakunin and his followers joined in one eight six eight the first international became polarised into two camps with marx and bakunin as their respective figureheads the clearest difference between the camps was over strategy the anarchists around bakunin favoured in kropotkin s words direct economical struggle against capitalism without interfering in the political parliamentary agitation at that time marx and his followers focused on parliamentary activity bakunin characterised marx s ideas as authoritarian and predicted that if a marxist party gained to power its leaders would end up as bad as the ruling class they had fought against in one eight seven two the conflict climaxed with a final split between the two groups at the hague congress this is often cited as the origin of the conflict between anarchists and marxists from this moment the social democratic and libertarian currents of socialism had distinct organisations including rival internationals anarchist communism peter kropotkin proudhon and bakunin both opposed communism associating it with statism however in the one eight seven zero s many anarchists moved away from bakunin s economic thinking called collectivism and embraced communist concepts communists believed the means of production should be owned
###Markdown
Create a small validation set.
###Code
valid_size = 1000
valid_text = text[:valid_size]
train_text = text[valid_size:]
train_size = len(train_text)
print(train_size, train_text[:100])
print(valid_size, valid_text[:100])
###Output
99999000 ons anarchists advocate social relations based upon voluntary association of autonomous individuals
1000 anarchism originated as a term of abuse first used against early working class radicals including t
###Markdown
Utility functions to map characters to vocabulary IDs and back.
###Code
vocabulary_size = len(string.ascii_lowercase) + 1 # [a-z] + ' '
first_letter = ord(string.ascii_lowercase[0])
def char2id(char):
if char in string.ascii_lowercase:
return ord(char) - first_letter + 1
elif char == ' ':
return 0
else:
print('Unexpected character: %s' % char)
return 0
def id2char(dictid):
if dictid > 0:
return chr(dictid + first_letter - 1)
else:
return ' '
print(char2id('a'), char2id('z'), char2id(' '), char2id('ï'))
print(id2char(1), id2char(26), id2char(0))
###Output
Unexpected character: ï
1 26 0 0
a z
###Markdown
Function to generate a training batch for the LSTM model.
###Code
batch_size=64
num_unrollings=10
class BatchGenerator(object):
def __init__(self, text, batch_size, num_unrollings):
self._text = text
self._text_size = len(text)
self._batch_size = batch_size
self._num_unrollings = num_unrollings
segment = self._text_size // batch_size
#print(segment)
self._cursor = [ offset * segment for offset in range(batch_size)]
#print(self._cursor )
self._last_batch = self._next_batch()
#print(self._last_batch.shape)
#print(self._last_batch)
def _next_batch(self):
"""Generate a single batch from the current cursor position in the data."""
batch = np.zeros(shape=(self._batch_size, vocabulary_size), dtype=np.float)
for b in range(self._batch_size):
batch[b, char2id(self._text[self._cursor[b]])] = 1.0
self._cursor[b] = (self._cursor[b] + 1) % self._text_size
return batch
def next(self):
"""Generate the next array of batches from the data. The array consists of
the last batch of the previous array, followed by num_unrollings new ones.
"""
batches = [self._last_batch]
for step in range(self._num_unrollings):
batches.append(self._next_batch())
self._last_batch = batches[-1]
return batches
def characters(probabilities):
"""Turn a 1-hot encoding or a probability distribution over the possible
characters back into its (most likely) character representation."""
return [id2char(c) for c in np.argmax(probabilities, 1)]
def batches2string(batches):
"""Convert a sequence of batches back into their (most likely) string
representation."""
s = [''] * batches[0].shape[0]
for b in batches:
s = [''.join(x) for x in zip(s, characters(b))]
return s
train_batches = BatchGenerator(train_text, batch_size, num_unrollings)
valid_batches = BatchGenerator(valid_text, 1, 1)
print(batches2string(train_batches.next()))
print(batches2string(train_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
print(batches2string(valid_batches.next()))
def logprob(predictions, labels):
"""Log-probability of the true labels in a predicted batch."""
predictions[predictions < 1e-10] = 1e-10
return np.sum(np.multiply(labels, -np.log(predictions))) / labels.shape[0]
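# The perplexity reported during training and validation is np.exp of this average value,
# so lower perplexity means the model assigns higher probability to the true next characters.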
def sample_distribution(distribution):
"""Sample one element from a distribution assumed to be an array of normalized
probabilities.
"""
r = random.uniform(0, 1)
s = 0
for i in range(len(distribution)):
s += distribution[i]
if s >= r:
return i
return len(distribution) - 1
def sample(prediction):
"""Turn a (column) prediction into 1-hot encoded samples."""
p = np.zeros(shape=[1, vocabulary_size], dtype=np.float)
p[0, sample_distribution(prediction[0])] = 1.0
return p
def random_distribution():
"""Generate a random column of probabilities."""
b = np.random.uniform(0.0, 1.0, size=[1, vocabulary_size])
return b/np.sum(b, 1)[:,None]
###Output
_____no_output_____
###Markdown
Simple LSTM Model.
###Code
num_nodes = 64
graph = tf.Graph()
with graph.as_default():
# Parameters:
# Input gate: input, previous output, and bias.
ix = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
im = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ib = tf.Variable(tf.zeros([1, num_nodes]))
# Forget gate: input, previous output, and bias.
fx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
fm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
fb = tf.Variable(tf.zeros([1, num_nodes]))
# Memory cell: input, state and bias.
cx = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
cm = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
cb = tf.Variable(tf.zeros([1, num_nodes]))
# Output gate: input, previous output, and bias.
ox = tf.Variable(tf.truncated_normal([vocabulary_size, num_nodes], -0.1, 0.1))
om = tf.Variable(tf.truncated_normal([num_nodes, num_nodes], -0.1, 0.1))
ob = tf.Variable(tf.zeros([1, num_nodes]))
# Variables saving state across unrollings.
saved_output = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
saved_state = tf.Variable(tf.zeros([batch_size, num_nodes]), trainable=False)
# Classifier weights and biases.
w = tf.Variable(tf.truncated_normal([num_nodes, vocabulary_size], -0.1, 0.1))
b = tf.Variable(tf.zeros([vocabulary_size]))
# Definition of the cell computation.
def lstm_cell(i, o, state):
"""Create a LSTM cell. See e.g.: http://arxiv.org/pdf/1402.1128v1.pdf
Note that in this formulation, we omit the various connections between the
previous state and the gates."""
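    # Per time step (peephole connections omitted), with i = current input and o = previous output:
    #   input_gate  = sigmoid(i @ ix + o @ im + ib)
    #   forget_gate = sigmoid(i @ fx + o @ fm + fb)
    #   state       = forget_gate * state + input_gate * tanh(i @ cx + o @ cm + cb)
    #   output_gate = sigmoid(i @ ox + o @ om + ob)
    #   new_output  = output_gate * tanh(state)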
input_gate = tf.sigmoid(tf.matmul(i, ix) + tf.matmul(o, im) + ib)
forget_gate = tf.sigmoid(tf.matmul(i, fx) + tf.matmul(o, fm) + fb)
update = tf.matmul(i, cx) + tf.matmul(o, cm) + cb
state = forget_gate * state + input_gate * tf.tanh(update)
output_gate = tf.sigmoid(tf.matmul(i, ox) + tf.matmul(o, om) + ob)
return output_gate * tf.tanh(state), state
# Input data.
train_data = list()
for _ in range(num_unrollings + 1):
train_data.append(
tf.placeholder(tf.float32, shape=[batch_size,vocabulary_size]))
train_inputs = train_data[:num_unrollings]
train_labels = train_data[1:] # labels are inputs shifted by one time step.
# Unrolled LSTM loop.
outputs = list()
output = saved_output
state = saved_state
for i in train_inputs:
output, state = lstm_cell(i, output, state)
outputs.append(output)
# State saving across unrollings.
with tf.control_dependencies([saved_output.assign(output),
saved_state.assign(state)]):
# Classifier.
logits = tf.nn.xw_plus_b(tf.concat(outputs, 0), w, b)
loss = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(
labels=tf.concat(train_labels, 0), logits=logits))
# Optimizer.
global_step = tf.Variable(0)
learning_rate = tf.train.exponential_decay(
10.0, global_step, 5000, 0.1, staircase=True)
optimizer = tf.train.GradientDescentOptimizer(learning_rate)
gradients, v = zip(*optimizer.compute_gradients(loss))
gradients, _ = tf.clip_by_global_norm(gradients, 1.25)
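  # clip_by_global_norm rescales the full gradient list so its combined L2 norm is at most
  # 1.25, which keeps the updates through the unrolled LSTM from exploding.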
optimizer = optimizer.apply_gradients(
zip(gradients, v), global_step=global_step)
# Predictions.
train_prediction = tf.nn.softmax(logits)
# Sampling and validation eval: batch 1, no unrolling.
sample_input = tf.placeholder(tf.float32, shape=[1, vocabulary_size])
saved_sample_output = tf.Variable(tf.zeros([1, num_nodes]))
saved_sample_state = tf.Variable(tf.zeros([1, num_nodes]))
reset_sample_state = tf.group(
saved_sample_output.assign(tf.zeros([1, num_nodes])),
saved_sample_state.assign(tf.zeros([1, num_nodes])))
sample_output, sample_state = lstm_cell(
sample_input, saved_sample_output, saved_sample_state)
with tf.control_dependencies([saved_sample_output.assign(sample_output),
saved_sample_state.assign(sample_state)]):
sample_prediction = tf.nn.softmax(tf.nn.xw_plus_b(sample_output, w, b))
num_steps = 7001
summary_frequency = 100
with tf.Session(graph=graph) as session:
tf.global_variables_initializer().run()
print('Initialized')
mean_loss = 0
for step in range(num_steps):
batches = train_batches.next()
feed_dict = dict()
for i in range(num_unrollings + 1):
feed_dict[train_data[i]] = batches[i]
_, l, predictions, lr = session.run(
[optimizer, loss, train_prediction, learning_rate], feed_dict=feed_dict)
mean_loss += l
if step % summary_frequency == 0:
if step > 0:
mean_loss = mean_loss / summary_frequency
# The mean loss is an estimate of the loss over the last few batches.
print(
'Average loss at step %d: %f learning rate: %f' % (step, mean_loss, lr))
mean_loss = 0
labels = np.concatenate(list(batches)[1:])
print('Minibatch perplexity: %.2f' % float(
np.exp(logprob(predictions, labels))))
if step % (summary_frequency * 10) == 0:
# Generate some samples.
print('=' * 80)
for _ in range(5):
feed = sample(random_distribution())
sentence = characters(feed)[0]
reset_sample_state.run()
for _ in range(79):
prediction = sample_prediction.eval({sample_input: feed})
feed = sample(prediction)
sentence += characters(feed)[0]
print(sentence)
print('=' * 80)
# Measure validation set perplexity.
reset_sample_state.run()
valid_logprob = 0
for _ in range(valid_size):
b = valid_batches.next()
predictions = sample_prediction.eval({sample_input: b[0]})
valid_logprob = valid_logprob + logprob(predictions, b[1])
print('Validation set perplexity: %.2f' % float(np.exp(
valid_logprob / valid_size)))
###Output
Initialized
Average loss at step 0: 3.294314 learning rate: 10.000000
Minibatch perplexity: 26.96
================================================================================
lnnvhv hczs rnt jfrklectay ts i j ofme wcog kbvhwgrheajcmhs qjawtjz etselzsg g
crrv aedoo ozjflghwmtxldsee en n bmpdanacki wkiharg wtg vnesmima slwgwrsatiuneen
oreyjivy nydnxgaaoxzfne yofifbeotnws lplgt ge erjtihqeinrmg xekbuszrzub pfmp
j dgp jtt mga rfhaqr ohpuo ontpuaanyyzoolochkrmtb xne poswutu nhs ia dslcsgfim
woe egqr fxc edemeisfnep caoiefltmtotqj a jaribhwzcmgz zshyjnkbqt lk sveqyolevw
================================================================================
Validation set perplexity: 20.24
Average loss at step 100: 2.623005 learning rate: 10.000000
Minibatch perplexity: 11.01
Validation set perplexity: 10.54
Average loss at step 200: 2.254879 learning rate: 10.000000
Minibatch perplexity: 8.64
Validation set perplexity: 8.75
Average loss at step 300: 2.099661 learning rate: 10.000000
Minibatch perplexity: 7.60
Validation set perplexity: 8.16
Average loss at step 400: 2.000877 learning rate: 10.000000
Minibatch perplexity: 7.43
Validation set perplexity: 7.80
Average loss at step 500: 1.937333 learning rate: 10.000000
Minibatch perplexity: 6.49
Validation set perplexity: 7.00
Average loss at step 600: 1.909437 learning rate: 10.000000
Minibatch perplexity: 6.23
Validation set perplexity: 7.03
Average loss at step 700: 1.856690 learning rate: 10.000000
Minibatch perplexity: 6.57
Validation set perplexity: 6.66
Average loss at step 800: 1.814716 learning rate: 10.000000
Minibatch perplexity: 5.83
Validation set perplexity: 6.23
Average loss at step 900: 1.826778 learning rate: 10.000000
Minibatch perplexity: 6.74
Validation set perplexity: 6.20
Average loss at step 1000: 1.820805 learning rate: 10.000000
Minibatch perplexity: 5.55
================================================================================
w ruch newrol caplicar have in te fimmulic of stanbed the y his have inti on a c
verment that warotd movert the winse and mniling welden male enganution treary i
ing unhod at quecianied comery exiamedrets stailwiog filmer houd stace in of the
s earate maridite in also by the from chict the chilly poselos for four yerome a
er in mules fow buare centry the scanum comorizal defise the iminds of centripta
================================================================================
Validation set perplexity: 6.05
Average loss at step 1100: 1.775186 learning rate: 10.000000
Minibatch perplexity: 5.44
Validation set perplexity: 5.80
Average loss at step 1200: 1.748487 learning rate: 10.000000
Minibatch perplexity: 5.04
Validation set perplexity: 5.65
Average loss at step 1300: 1.732639 learning rate: 10.000000
Minibatch perplexity: 5.74
Validation set perplexity: 5.62
Average loss at step 1400: 1.742479 learning rate: 10.000000
Minibatch perplexity: 6.00
Validation set perplexity: 5.48
Average loss at step 1500: 1.729836 learning rate: 10.000000
Minibatch perplexity: 4.78
Validation set perplexity: 5.44
Average loss at step 1600: 1.744872 learning rate: 10.000000
Minibatch perplexity: 5.50
Validation set perplexity: 5.43
Average loss at step 1700: 1.707732 learning rate: 10.000000
Minibatch perplexity: 5.46
Validation set perplexity: 5.24
Average loss at step 1800: 1.672056 learning rate: 10.000000
Minibatch perplexity: 5.35
Validation set perplexity: 5.24
Average loss at step 1900: 1.643194 learning rate: 10.000000
Minibatch perplexity: 5.05
Validation set perplexity: 5.21
Average loss at step 2000: 1.692570 learning rate: 10.000000
Minibatch perplexity: 5.63
================================================================================
ope came on precoant to ligne butdles s light come winning is s onaxt one free h
ing nandles the unitiation ospennies after gener in in pain the coud and inlodet
ds and mefdustak the secing d pessent weidable rappate of buyder hes time witwom
jan of empross im auboments for nebsent standers zero zero basid both is nohle w
x histast the modah hl wing the daxents iss one nine eiven seven futea consts sh
================================================================================
Validation set perplexity: 5.20
Average loss at step 2100: 1.682833 learning rate: 10.000000
Minibatch perplexity: 5.12
Validation set perplexity: 4.88
Average loss at step 2200: 1.679279 learning rate: 10.000000
Minibatch perplexity: 6.58
Validation set perplexity: 4.95
Average loss at step 2300: 1.639969 learning rate: 10.000000
Minibatch perplexity: 5.06
Validation set perplexity: 4.76
Average loss at step 2400: 1.659329 learning rate: 10.000000
Minibatch perplexity: 5.01
Validation set perplexity: 4.80
Average loss at step 2500: 1.675587 learning rate: 10.000000
Minibatch perplexity: 5.23
Validation set perplexity: 4.56
Average loss at step 2600: 1.649104 learning rate: 10.000000
Minibatch perplexity: 5.68
Validation set perplexity: 4.55
Average loss at step 2700: 1.651925 learning rate: 10.000000
Minibatch perplexity: 4.58
Validation set perplexity: 4.68
Average loss at step 2800: 1.651363 learning rate: 10.000000
Minibatch perplexity: 5.58
Validation set perplexity: 4.58
Average loss at step 2900: 1.646795 learning rate: 10.000000
Minibatch perplexity: 5.62
Validation set perplexity: 4.62
Average loss at step 3000: 1.647693 learning rate: 10.000000
Minibatch perplexity: 4.98
================================================================================
winnes of indians alumon one nibbly putch atter desidenty farmide protocy of oer
f gengo uncleested dease but could ruls alp ivelbism of all ba cambinnong a cont
ver bother doundenved undersider kihneant to arms day churtar and parto reservan
varian s in of the eight four nemation from be detem tyle in the normal part and
k impalle hegaghame accurtity that founding marring are bay dendently syace enco
================================================================================
Validation set perplexity: 4.70
Average loss at step 3100: 1.621529 learning rate: 10.000000
Minibatch perplexity: 5.68
Validation set perplexity: 4.62
Average loss at step 3200: 1.639812 learning rate: 10.000000
Minibatch perplexity: 5.38
Validation set perplexity: 4.65
Average loss at step 3300: 1.632859 learning rate: 10.000000
Minibatch perplexity: 5.00
Validation set perplexity: 4.58
Average loss at step 3400: 1.668599 learning rate: 10.000000
Minibatch perplexity: 5.54
Validation set perplexity: 4.64
Average loss at step 3500: 1.653692 learning rate: 10.000000
Minibatch perplexity: 5.52
Validation set perplexity: 4.69
Average loss at step 3600: 1.665889 learning rate: 10.000000
Minibatch perplexity: 4.35
Validation set perplexity: 4.52
Average loss at step 3700: 1.644094 learning rate: 10.000000
Minibatch perplexity: 5.02
Validation set perplexity: 4.58
Average loss at step 3800: 1.640882 learning rate: 10.000000
Minibatch perplexity: 5.70
Validation set perplexity: 4.69
Average loss at step 3900: 1.636125 learning rate: 10.000000
Minibatch perplexity: 5.05
Validation set perplexity: 4.68
Average loss at step 4000: 1.653151 learning rate: 10.000000
Minibatch perplexity: 4.56
================================================================================
versce central chlensure ruml shorrevies the nine brolding the hervely previeu t
lelitical king revical stania sourgel an in iprograms directions seasionity is o
fathia c canuecally jerong of to be opeball of three merch for maver with bill i
s bradiods the during the quastions can his wave in mudicial usez approhes el co
helecies rif one two zero zero zero zero also payric usophep popular for eight s
================================================================================
Validation set perplexity: 4.66
Average loss at step 4100: 1.631490 learning rate: 10.000000
Minibatch perplexity: 5.10
Validation set perplexity: 4.81
Average loss at step 4200: 1.632502 learning rate: 10.000000
Minibatch perplexity: 5.20
Validation set perplexity: 4.55
Average loss at step 4300: 1.614347 learning rate: 10.000000
Minibatch perplexity: 5.08
Validation set perplexity: 4.58
Average loss at step 4400: 1.611879 learning rate: 10.000000
Minibatch perplexity: 4.82
Validation set perplexity: 4.50
Average loss at step 4500: 1.617748 learning rate: 10.000000
Minibatch perplexity: 5.24
Validation set perplexity: 4.62
Average loss at step 4600: 1.614927 learning rate: 10.000000
Minibatch perplexity: 5.11
Validation set perplexity: 4.61
Average loss at step 4700: 1.623387 learning rate: 10.000000
Minibatch perplexity: 5.14
Validation set perplexity: 4.55
Average loss at step 4800: 1.629507 learning rate: 10.000000
Minibatch perplexity: 4.35
Validation set perplexity: 4.59
Average loss at step 4900: 1.630408 learning rate: 10.000000
Minibatch perplexity: 5.13
Validation set perplexity: 4.74
Average loss at step 5000: 1.606152 learning rate: 1.000000
Minibatch perplexity: 4.41
================================================================================
x who toth expare one eight one two zero years of the preoniniay or in offender
on one nine vanustries pamial cellurally work with generaty is sudararded midite
thirusary a latent indic recordd batgara is one five wish parating electiveid th
x beath york and as foctivity it fembles molivis to groutt by and terpans onlore
by some the lontala elizal eftemuled hitter all socitality dosternd dy would a
================================================================================
Validation set perplexity: 4.67
Average loss at step 5100: 1.602420 learning rate: 1.000000
Minibatch perplexity: 4.91
Validation set perplexity: 4.49
Average loss at step 5200: 1.581252 learning rate: 1.000000
Minibatch perplexity: 4.71
Validation set perplexity: 4.44
Average loss at step 5300: 1.575382 learning rate: 1.000000
Minibatch perplexity: 4.55
Validation set perplexity: 4.44
Average loss at step 5400: 1.575437 learning rate: 1.000000
Minibatch perplexity: 5.07
Validation set perplexity: 4.42
Average loss at step 5500: 1.565138 learning rate: 1.000000
Minibatch perplexity: 4.95
Validation set perplexity: 4.38
Average loss at step 5600: 1.577481 learning rate: 1.000000
Minibatch perplexity: 4.79
Validation set perplexity: 4.38
Average loss at step 5700: 1.564546 learning rate: 1.000000
Minibatch perplexity: 4.47
Validation set perplexity: 4.37
Average loss at step 5800: 1.574814 learning rate: 1.000000
Minibatch perplexity: 4.91
Validation set perplexity: 4.36
Average loss at step 5900: 1.570832 learning rate: 1.000000
Minibatch perplexity: 5.02
Validation set perplexity: 4.37
Average loss at step 6000: 1.543420 learning rate: 1.000000
Minibatch perplexity: 5.02
================================================================================
zs to wings onrelated storks coloned by instrugki of occurity one one one two se
opy dissivert important for information on at the proport levaince in offect wat
geans and or four zero zero zero three and its fuel six dispiract of it as have
shoning that technothing s and the seemento detrical ruth of the pricted for the
onson the indepision one eight bomenish islaids it by fiblis visive yougaway bur
================================================================================
Validation set perplexity: 4.36
Average loss at step 6100: 1.564224 learning rate: 1.000000
Minibatch perplexity: 5.14
Validation set perplexity: 4.34
Average loss at step 6200: 1.532666 learning rate: 1.000000
Minibatch perplexity: 4.87
Validation set perplexity: 4.35
Average loss at step 6300: 1.541885 learning rate: 1.000000
Minibatch perplexity: 5.02
Validation set perplexity: 4.31
Average loss at step 6400: 1.538702 learning rate: 1.000000
Minibatch perplexity: 4.63
Validation set perplexity: 4.31
Average loss at step 6500: 1.554115 learning rate: 1.000000
Minibatch perplexity: 4.70
Validation set perplexity: 4.31
Average loss at step 6600: 1.591587 learning rate: 1.000000
Minibatch perplexity: 4.73
Validation set perplexity: 4.32
Average loss at step 6700: 1.575701 learning rate: 1.000000
Minibatch perplexity: 5.17
Validation set perplexity: 4.32
Average loss at step 6800: 1.600245 learning rate: 1.000000
Minibatch perplexity: 4.68
Validation set perplexity: 4.33
Average loss at step 6900: 1.579958 learning rate: 1.000000
Minibatch perplexity: 4.58
Validation set perplexity: 4.34
Average loss at step 7000: 1.574352 learning rate: 1.000000
Minibatch perplexity: 5.07
================================================================================
ing elomry cornicaten which of johd who paration of the maj the cornifoted trady
ver dober dienans the eight offer american thewe of the udern adving american th
ours partes interimption compodictic skandaummoof to time prevaces of been b tho
weres endive byt to musica betendly conceinties were work regody peestandiations
mo atonte the werab defensive the program regerative nom structor searical one n
================================================================================
Validation set perplexity: 4.31
|
GitHub Workshop.ipynb | ###Markdown
Temperature and humidity workbook

This workbook will print the temperature and humidity variables.
###Code
temperature = 55
humidity = 0.7
print('temp is', temperature, 'and humidity is', humidity)
###Output
temp is 55 and humidity is 0.7
|
matrix_one/day4.ipynb | ###Markdown
Change into our data directory
###Code
cd "/content/drive/My Drive/Colab Notebooks/dw_matrix"
###Output
/content/drive/My Drive/Colab Notebooks/dw_matrix
###Markdown
Load the dataset
###Code
df = pd.read_csv('data/men_shoes.csv', low_memory=False)
df.shape
df.columns
###Output
_____no_output_____
###Markdown
Find the mean price across the 18,280 shoes
###Code
mean_price = np.mean( df['prices_amountmin'])
mean_price
###Output
_____no_output_____
###Markdown
Assign an ID number to each brand
###Code
df['brand'].factorize()
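# Illustrative example on made-up values (not from the dataset): pandas factorize returns
# integer codes plus the array of unique values they refer to.
codes, uniques = pd.factorize(pd.Series(['nike', 'adidas', 'nike', 'puma']))
print(codes)    # [0 1 0 2]
print(uniques)  # Index(['nike', 'adidas', 'puma'], dtype='object')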
###Output
_____no_output_____
###Markdown
This creates a new column with ID numbers in place of the 'brand' names
###Code
df['brand_cat'] = df['brand'].factorize()[0]
###Output
_____no_output_____
###Markdown
###Code
feats = ['brand_cat']
X = df[feats].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
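# Note: scoring='neg_mean_absolute_error' returns the *negated* MAE, so the mean above is
# negative and values closer to zero indicate a better model.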
###Output
_____no_output_____
###Markdown
Define a helper function
###Code
def run_model(feats):
X = df[feats].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['brand_cat'])
###Output
_____no_output_____
###Markdown
A new ID column is created
###Code
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
run_model(['manufacturer_cat'])
###Output
_____no_output_____
###Markdown
Model with two features
###Code
run_model(['brand_cat', 'manufacturer_cat'])
###Output
_____no_output_____
###Markdown
Add to Git
###Code
ls matrix_one/day4.ipynb
###Output
matrix_one/day4.ipynb
###Markdown
###Code
feats = ['brand_cat']
x = df[feats].values
y = df['prices_amountmin']
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
cd '/content/drive/My Drive/Colab Notebooks/Matrix'
df = pd.read_csv('data/men_shoes.csv', low_memory=False)
df.shape
df.columns
mean_price = np.mean(df['prices_amountmin'])
mean_price
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
np.log1p(df['prices_amountmin']).hist(bins=100)
y_true = df['prices_amountmin']
price_log_mean = np.expm1( np.mean( np.log1p(y_true )))
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
np.exp( np.mean( np.log1p(y_true ))) - 1
df.brand.value_counts()
df['brand_cat'] = df['brand'].factorize()[0]
feats = ['brand_cat']
x = df[feats].values
y = df['prices_amountmin']
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
def run_model(feats):
x = df[feats].values
y = df['prices_amountmin']
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['brand_cat'])
df['flavors_cat'] = df['flavors'].factorize()[0]
def run_model(feats):
x = df[feats].values
y = df['prices_amountmin']
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['flavors_cat'])
run_model(['brand_cat', 'flavors_cat'])
###Output
_____no_output_____
###Markdown
First click Unmount Drive so it can be reconnected to My Drive
###Code
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
cd "/content/drive/My Drive/Colab Notebooks/dw_matrix"
df = pd.read_csv('data/men_shoes.csv', low_memory=False)
df.shape
df.columns
mean_price = np.mean(df['prices_amountmin'])
mean_price
[3]*5
y_true = df['prices_amountmin'] # ground-truth values to predict; roughly 18 thousand rows
#y_true.shape[0]
# now repeat mean_price for each of the >18000 rows to build y_pred
# y_pred holds the predicted values
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true,y_pred)
###Output
_____no_output_____
###Markdown
To normalize the data, take the logarithm
###Code
y_true = df['prices_amountmin'].hist(bins=100)
np.log1p(df['prices_amountmax']).hist(bins=100)
# np.log( df['prices_amountmin'] + 1).hist(bins=100)
# because np.log(0) is minus infinity
# np.log(0 + 1) equals 0
# the np.log1p function does the same thing
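# Small numeric illustration (values chosen arbitrarily) of the identity above:
print(np.log(0 + 1), np.log1p(0))    # both 0.0
print(np.log(99 + 1), np.log1p(99))  # both ~4.605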
###Output
_____no_output_____
###Markdown
Second experiment: use the median instead of the mean
###Code
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true,y_pred)
y_true = df['prices_amountmin']
price_log_mean = np.expm1(np.mean(np.log1p(y_true)))
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true,y_pred)
df.columns
df.brand.value_counts()
df['brand'].factorize()  # factorization - each unique value gets an integer code
df['brand_cat'] = df['brand'].factorize()[0]
feats = ['brand_cat']
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
import sklearn
sklearn.metrics.SCORERS.keys()
def run_model(feats):
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['brand_cat'])
###Output
_____no_output_____
###Markdown
New features: df.manufacturer.value_counts() and colors via df.colors.value_counts()
###Code
df.manufacturer.value_counts()
df['manufacturer'].factorize()
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
df.colors.value_counts()
df['colors'].factorize()
df['colors_cat'] = df['colors'].factorize()[0]
run_model(['brand_cat','manufacturer_cat','colors_cat'])
!git add matrix_one/day4.ipynb
!git config --global user.email "[email protected]"
!git config --global user.name "Grzegorz"
!git commit -m "day4 - Read Men's Shoe Prices dataset from data.world"
!git status
cd ..
###Output
/content/drive/My Drive/Colab Notebooks/dw_matrix
###Markdown
prices_amountmin
###Code
df_usd = df[ df.prices_currency == 'USD'].copy()
df_usd['prices_amountmin'] = df_usd.prices_amountmin.astype(np.float)
filter_max = np.percentile(df_usd['prices_amountmin'],99)
df = df_usd[ df_usd['prices_amountmin'] < filter_max ]
mean_price = np.mean(df['prices_amountmin'])
y_true = df['prices_amountmin']
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df['prices_amountmin'].hist(bins=100)
np.log1p(df['prices_amountmin']).hist(bins=100)
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
y_true = df['prices_amountmin']
price_log_mean = np.expm1( np.mean( np.log1p(y_true) ) )
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df.columns
df.brand.value_counts()
df['brand'].factorize()
df['brand_cat'] = df['brand'].factorize()[0]
feats = ['brand_cat']
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scors = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scors), np.std(scors)
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
df['categories_cat'] = df['categories'].factorize()[0]
feats = ['brand_cat']
def run_model(feats):
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
  return np.mean(scores), np.std(scores)
run_model(feats)
run_model(['manufacturer_cat'])
run_model(['categories_cat'])
run_model(['manufacturer_cat','brand_cat'])
run_model(['brand_cat', 'categories_cat'])
run_model(['manufacturer_cat','brand_cat', 'categories_cat'])
!git status
###Output
On branch master
Your branch is up to date with 'origin/master'.
Changes not staged for commit:
(use "git add <file>..." to update what will be committed)
(use "git checkout -- <file>..." to discard changes in working directory)
[31mmodified: matrix_one/day3.ipynb[m
Untracked files:
(use "git add <file>..." to include in what will be committed)
[31mmatrix_one/day4.ipynb[m
no changes added to commit (use "git add" and/or "git commit -a")
###Markdown
###Code
import pandas as pd
import numpy as np
from sklearn.tree import DecisionTreeRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import cross_val_score
cd 'drive/My Drive/Colab Notebooks/dw_matrix'
ls data
df = pd.read_csv('data/men_shoes_prices.csv', low_memory=False)
df.shape
df.columns
mean_price = np.mean( df['prices_amountmin'])
mean_price
y_true = df[ 'prices_amountmin' ]
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
np.log1p( df[ 'prices_amountmin' ]) .hist(bins=100)
y_true = df[ 'prices_amountmin' ]
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
y_true = df[ 'prices_amountmin' ]
price_log_mean = np.expm1( np.mean(np.log1p(y_true) ) )
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df.columns
df.brand.value_counts()
df['brand_cat'] = df['brand'].factorize()[0]
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
feats = ['brand_cat']
x = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
def run_model(feats):
x = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['brand_cat'])
run_model(['manufacturer_cat'])
run_model(['manufacturer_cat', 'brand_cat'])
!git add matrix_one/day4.ipynb
ls
###Output
_____no_output_____
###Markdown
Goal: predict the 'prices_amountmin' column. Baseline model - the mean
###Code
# baseline - always predict the mean of the whole dataset
mean_price = np.mean( df['prices_amountmin'] )
# ground-truth values
y_true = df['prices_amountmin']
# prediction
y_pred = [mean_price] * y_true.shape[0] # repeat the value so the vector has the length of the dataset
# check the model's performance - MAE
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
____________________
###Code
df['prices_amountmin'].hist(bins=100);
# log-transform the data
np.log( df['prices_amountmin'] + 1 ).hist(bins=100);
# we add 1 because log(0) -> -inf
# the same thing can be done with a dedicated function
np.log1p( df['prices_amountmin'] ).hist(bins=100);
###Output
_____no_output_____
###Markdown
Baseline model 2 - the median
###Code
# baseline - always predict the median of the whole dataset
median_price = np.median( df['prices_amountmin'] )
# ground-truth values
y_true = df['prices_amountmin']
# prediction
y_pred = [median_price] * y_true.shape[0] # repeat the value so the vector has the length of the dataset
# check the model's performance - MAE
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Model - mean on the log-transformed scale
###Code
y_true = df['prices_amountmin']
# mean computed on the log-transformed values
price_log_mean = np.expm1( np.mean( np.log1p( y_true ))) # undo the logarithm with np.expm1
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Model - median on the log-transformed scale
###Code
y_true = df['prices_amountmin']
# median computed on the log-transformed values
price_log_median = np.expm1( np.median( np.log1p( y_true ))) # undo the logarithm with np.expm1
y_pred = [price_log_median] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
The ML approach
###Code
# which brands are in the data
df['brand'].value_counts()
# categorical data - map each category to a number
df['brand'].factorize()
df['brand_cat'] = df['brand'].factorize()[0]
### build the feature matrix for the model
# list of features
feats = ['brand_cat']
X = df[ feats ].values
y = df['prices_amountmin'].values
# create the model
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
### to add new features without copy-pasting code, wrap it in a function
def run_model(feats):
X = df[ feats ].values
y = df['prices_amountmin'].values
# tworzę model
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
feats = ['brand_cat']
run_model(feats)
df.columns
# new feature
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
run_model( ['manufacturer_cat'])
feats = ['brand_cat', 'manufacturer_cat']
run_model(feats)
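# Added sketch: a quick check of whether tree depth matters for the same features.
# The max_depth values below are arbitrary examples, not part of the original experiment.
for depth in [3, 5, 10]:
    m = DecisionTreeRegressor(max_depth=depth)
    s = cross_val_score(m, df[feats].values, df['prices_amountmin'].values,
                        scoring='neg_mean_absolute_error')
    print('max_depth =', depth, '-> MAE:', round(-np.mean(s), 2))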
!git add day4.ipynb
!git commit -m 'Men`s Shoe Price - base model with 2 features - misspelling fix'
!git push -u origin master
###Output
_____no_output_____
###Markdown
###Code
df.brand.value_counts()
df['brand_cat'] = df['brand'].factorize()[0]
feats1 = ['brand_cat']
x = df[ feats1 ].values
y = df[ 'prices_amountmin' ].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
import sklearn
sklearn.metrics.SCORERS.keys()
def run_model(feats1):
x = df[ feats1 ].values
y = df[ 'prices_amountmin' ].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model (['brand_cat'])
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
df.manufacturer.value_counts()
feats2 = ['manufacturer_cat']
x = df[ feats2 ].values
y = df[ 'prices_amountmin' ].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
def run_model(feats2):
x = df[ feats2 ].values
y = df[ 'prices_amountmin' ].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, x, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['manufacturer_cat'])
run_model(['manufacturer_cat', 'brand_cat'])
ls
ls
ls matrix_one/
###Output
_____no_output_____
###Markdown
Baseline
###Code
mean_price = np.mean( df['prices_amountmin'] )
mean_price
###Output
_____no_output_____
###Markdown
Always returns the mean value
###Code
[3] * 5
y_true = df['prices_amountmin']
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df['prices_amountmin'].hist(bins=100)
np.log( df['prices_amountmin'] + 1 ).hist(bins=100)
np.log1p( df['prices_amountmin'] ).hist(bins=100)
y_true = df['prices_amountmin']
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Logarithmic transformation
###Code
y_true = df['prices_amountmin']
price_log_mean = np.expm1( np.mean( np.log1p(y_true) ) )
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
###Output
_____no_output_____
###Markdown
Decision Tree
###Code
df.columns
df.brand.value_counts()
###Output
_____no_output_____
###Markdown
Converting names to IDs
###Code
# {'Nike' : 1, 'PUMA' : 2 ...}
df.brand.factorize()[0]
df['brand_cat'] = df.brand.factorize()[0]
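# Added illustration: factorize() actually returns a pair (codes, uniques); uniques lets you
# map the integer codes back to the original brand names (a code of -1 marks a missing value).
codes, uniques = df.brand.factorize()
uniques[codes[:5]]   # the first five brands reconstructed from their codes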
feats = ['brand_cat']
X = df[ feats ].values
y = df.prices_amountmin
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
###Output
_____no_output_____
###Markdown
A helper function for the model
###Code
def run_model(feats):
X = df[ feats ].values
y = df.prices_amountmin
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
df.columns
df.manufacturer.value_counts()
df['manufacturer_cat'] = df.manufacturer.factorize()[0]
run_model( ['manufacturer_cat', 'brand_cat'] )
!git add matrix_one/day4.ipynb
!git config --global user.email "[email protected]"
!git config --global user.name "kozolex"
!git commit -m "day 4 done"
token = ''
repo = 'https://{0}@github.com/kozolex/dwmatrix.git'.format(token)
!git push -u {repo} --force
###Output
_____no_output_____
###Markdown
Forecasting the 'prices_amountmin' column
###Code
mean_price = np.mean( df['prices_amountmin'] )
mean_price
###Output
_____no_output_____
###Markdown
A very simple model
###Code
y_true = df[ 'prices_amountmin' ]
y_pred = [mean_price] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df[ 'prices_amountmin' ].hist(bins=100)
np.log1p(df[ 'prices_amountmin' ]).hist(bins=100)
y_true = df[ 'prices_amountmin' ]
y_pred = [np.median(y_true)] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
y_true = df[ 'prices_amountmin' ]
price_log_mean = np.expm1( np.mean( np.log1p(y_true) ) )
y_pred = [price_log_mean] * y_true.shape[0]
mean_absolute_error(y_true, y_pred)
df.columns
df['brand_cat'] = df['brand'].factorize()[0]
df['manufacturer_cat'] = df['manufacturer'].factorize()[0]
df['colors_cat'] = df['colors'].factorize()[0]
feats = ['brand_cat']
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
np.mean(scores), np.std(scores)
def run_model(feats):
X = df[ feats ].values
y = df['prices_amountmin'].values
model = DecisionTreeRegressor(max_depth=5)
scores = cross_val_score(model, X, y, scoring='neg_mean_absolute_error')
return np.mean(scores), np.std(scores)
run_model(['brand_cat'])
run_model(['brand_cat', 'manufacturer_cat'])
run_model(['brand_cat', 'manufacturer_cat', 'colors_cat'])
ls
cd matrix_one/
ls
!git status
!git commit -m "Add day4 with first simple model"
###Output
_____no_output_____ |
problem generation/workflow_generating instances.ipynb | ###Markdown
generate client coordinates
###Code
x, y = generate_points(math.sqrt(2), num_clients)
draw_instance(radius, x, y)
###Output
_____no_output_____
###Markdown
generate dataframes to export as excel files
###Code
all_points = np.vstack((np.zeros(2),np.vstack((x, y)).T))
#all_points
travel_times_df = get_travel_times(all_points)
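# Added sketch: get_travel_times is defined earlier in this notebook (not shown here).
# Assuming travel times are proportional to straight-line distances between points,
# an equivalent matrix could be built like this (illustrative only; the real helper may differ):
from scipy.spatial.distance import cdist
euclidean_times_df = pd.DataFrame(cdist(all_points, all_points))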
clients_df = generate_service_times(travel_times_df, all_points, l)
# probability vector for provider start time (at hour 0, 1, 2, 3)
# p = [0.5, 0.2, 0.15, 0.15]
providers_df = generate_providers(num_providers)
general_df = generate_general(l, providers_df, clients_df)
general_df.iloc[:,1] = general_df.iloc[:,1].astype(int)
travel_times_df
clients_df
providers_df
general_df
import openpyxl
import xlsxwriter
import xlwt
from datetime import datetime
now = datetime.now().strftime("%d_%m_%M:%S")
with pd.ExcelWriter(f'data_numP{num_providers}_numC{num_clients}_{now}.xlsx') as writer:
general_df.to_excel(writer, sheet_name='General', header=False, index=False)
providers_df.to_excel(writer, sheet_name='Providers', index=False)
clients_df.to_excel(writer, sheet_name='Clients', index=False)
travel_times_df.to_excel(writer, sheet_name='Distances', header=False, index=False)
###Output
_____no_output_____ |
Hello,_Colaboratory.ipynb | ###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/sochachai/decart-data-visualization/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
import tensorflow as tf
from tensorflow import keras
from tensorflow.python.client import device_lib
def get_available_gpus():
local_device_protos = device_lib.list_local_devices()
return [x.name for x in local_device_protos]
print("Devices available:", get_available_gpus())
print("Tensorflow version:", tf.__version__)
print("Keras version:", keras.__version__)
###Output
Devices available: ['/device:CPU:0', '/device:GPU:0']
Tensorflow version: 1.10.0
Keras version: 2.1.6-tf
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
from __future__ import print_function
import keras
from keras.datasets import mnist
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.layers import Conv2D, MaxPooling2D
from keras import backend as K
batch_size = 128
num_classes = 10
epochs = 12
# input image dimensions
img_rows, img_cols = 28, 28
# the data, split between train and test sets
(x_train, y_train), (x_test, y_test) = mnist.load_data()
if K.image_data_format() == 'channels_first':
x_train = x_train.reshape(x_train.shape[0], 1, img_rows, img_cols)
x_test = x_test.reshape(x_test.shape[0], 1, img_rows, img_cols)
input_shape = (1, img_rows, img_cols)
else:
x_train = x_train.reshape(x_train.shape[0], img_rows, img_cols, 1)
x_test = x_test.reshape(x_test.shape[0], img_rows, img_cols, 1)
input_shape = (img_rows, img_cols, 1)
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train /= 255
x_test /= 255
# convert class vectors to binary class matrices
y_train = keras.utils.to_categorical(y_train, num_classes)
y_test = keras.utils.to_categorical(y_test, num_classes)
model = Sequential()
model.add(Conv2D(32, kernel_size=(3, 3),
activation='relu',
input_shape=input_shape))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
model.add(Flatten())
model.add(Dense(128, activation='relu'))
model.add(Dropout(0.2))
model.add(Dense(10, activation='sigmoid'))
model.compile(loss=keras.losses.categorical_crossentropy,
optimizer='adam',
metrics=['accuracy'])
model.fit(x_train, y_train,
batch_size=batch_size,
epochs=epochs,
validation_data=(x_test, y_test))
score = model.evaluate(x_test, y_test, verbose=0)
print('Test loss:', score[0])
print('Test accuracy:', score[1])
###Output
Train on 60000 samples, validate on 10000 samples
Epoch 1/12
60000/60000 [==============================] - 176s 3ms/step - loss: 0.2377 - acc: 0.9318 - val_loss: 0.0490 - val_acc: 0.9844
Epoch 2/12
60000/60000 [==============================] - 176s 3ms/step - loss: 0.0666 - acc: 0.9803 - val_loss: 0.0431 - val_acc: 0.9852
Epoch 3/12
60000/60000 [==============================] - 176s 3ms/step - loss: 0.0446 - acc: 0.9859 - val_loss: 0.0339 - val_acc: 0.9892
Epoch 4/12
60000/60000 [==============================] - 174s 3ms/step - loss: 0.0353 - acc: 0.9887 - val_loss: 0.0279 - val_acc: 0.9904
Epoch 5/12
60000/60000 [==============================] - 175s 3ms/step - loss: 0.0280 - acc: 0.9908 - val_loss: 0.0319 - val_acc: 0.9905
Epoch 6/12
60000/60000 [==============================] - 174s 3ms/step - loss: 0.0236 - acc: 0.9927 - val_loss: 0.0315 - val_acc: 0.9897
Epoch 7/12
60000/60000 [==============================] - 176s 3ms/step - loss: 0.0215 - acc: 0.9925 - val_loss: 0.0268 - val_acc: 0.9926
Epoch 8/12
60000/60000 [==============================] - 175s 3ms/step - loss: 0.0166 - acc: 0.9948 - val_loss: 0.0319 - val_acc: 0.9911
Epoch 9/12
60000/60000 [==============================] - 175s 3ms/step - loss: 0.0141 - acc: 0.9953 - val_loss: 0.0356 - val_acc: 0.9892
Epoch 10/12
60000/60000 [==============================] - 175s 3ms/step - loss: 0.0130 - acc: 0.9956 - val_loss: 0.0315 - val_acc: 0.9914
Epoch 11/12
60000/60000 [==============================] - 174s 3ms/step - loss: 0.0113 - acc: 0.9963 - val_loss: 0.0340 - val_acc: 0.9909
Epoch 12/12
60000/60000 [==============================] - 174s 3ms/step - loss: 0.0110 - acc: 0.9963 - val_loss: 0.0347 - val_acc: 0.9906
Test loss: 0.03467182823866224
Test accuracy: 0.9906
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/Denny143/CSV-converter/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info.
###Code
import numpy as np
import pandas as pd
import tensorflow as tf
from sklearn.preprocessing import MultiLabelBinarizer
!wget 'https://storage.googleapis.com/movies_data/movies_metadata.csv'
data = pd.read_csv('movies_metadata.csv')
descriptions=data['overview']
genres=data['genres']
print(descriptions)
top_genres = ['Comedy', 'Thriller', 'Romance', 'Action', 'Horror', 'Crime', 'Documentary', 'Adventure', 'Science Fiction']
train_size = int(len(descriptions) * .8)
print(train_size)
train_descriptions = descriptions[:train_size]
train_genres = genres[:train_size]
test_descriptions = descriptions[train_size:]
test_genres = genres[train_size:]
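# Added sketch: MultiLabelBinarizer is imported above but not used yet. Assuming the 'genres'
# column holds stringified lists of {'id': ..., 'name': ...} dicts (as in movies_metadata.csv),
# the labels could be turned into a binary indicator matrix like this (illustrative only):
from ast import literal_eval
def genre_names(cell):
    try:
        return [g['name'] for g in literal_eval(cell) if g['name'] in top_genres]
    except (ValueError, SyntaxError, TypeError):
        return []
mlb = MultiLabelBinarizer(classes=top_genres)
train_labels = mlb.fit_transform(train_genres.apply(genre_names))
train_labels.shape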
###Output
_____no_output_____
###Markdown
Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/txemis/txemis.github.io/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
print(type(result))
###Output
<class 'numpy.ndarray'>
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/LibbyFender/playground/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-12-09' #@param {type:"date"}
number_slider = 0.5 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '3rd option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Local runtime supportColab supports connecting to a Jupyter runtime on your local machine. For more information, see our [documentation](https://research.google.com/colaboratory/local-runtimes.html).
###Code
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/abhijit-gupta/LearnML/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [TensorFlow with TPU](/notebooks/tpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb)- [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubFor a full discussion of interactions between Colab and GitHub, see [Using Colab with GitHub](https://colab.research.google.com/github/googlecolab/colabtools/blob/master/notebooks/colab-github-demo.ipynb). As a brief summary:To save a copy of your Colab notebook to Github, select *File → Save a copy to GitHub…*To load a specific notebook from github, append the github path to http://colab.research.google.com/github/.For example to load this notebook in Colab: [https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://github.com/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb) use the following Colab URL: [https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb](https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/_index.ipynb)To open a github notebook in one click, we recommend installing the [Open in Colab Chrome Extension](https://chrome.google.com/webstore/detail/open-in-colab/iogfkhleblhcpcekbiedikdehleodpjo). Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/rainu1729/data-analysis/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a Google research project created to help disseminate machine learning education and research. It's a Jupyter notebook environment that requires no setup to use and runs entirely in the cloud.Colaboratory notebooks are stored in [Google Drive](https://drive.google.com) and can be shared just as you would with Google Docs or Sheets. Colaboratory is free to use.For more information, see our [FAQ](https://research.google.com/colaboratory/faq.html). Local runtime supportColab also supports connecting to a Jupyter runtime on your local machine. For more information, see our [documentation](https://research.google.com/colaboratory/local-runtimes.html). Python 3Colaboratory supports both Python2 and Python3 for code execution. * When creating a new notebook, you'll have the choice between Python 2 and Python 3.* You can also change the language associated with a notebook; this information will be written into the `.ipynb` file itself, and thus will be preserved for future sessions.
###Code
import sys
print('Hello, Colaboratory from Python {}!'.format(sys.version_info[0]))
###Output
Hello, Colaboratory from Python 3!
###Markdown
TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
import numpy as np
with tf.Session():
input1 = tf.constant(1.0, shape=[2, 3])
input2 = tf.constant(np.reshape(np.arange(1.0, 7.0, dtype=np.float32), (2, 3)))
output = tf.add(input1, input2)
result = output.eval()
result
###Output
_____no_output_____
###Markdown
Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb)
###Code
# Only needs to be run once at the top of the notebook.
!pip install -q matplotlib-venn
# Now the newly-installed library can be used anywhere else in the notebook.
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/popin0/colab/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____
###Markdown
[View in Colaboratory](https://colab.research.google.com/github/ehughes29/courses/blob/master/Hello,_Colaboratory.ipynb) Welcome to Colaboratory!Colaboratory is a free Jupyter notebook environment that requires no setup and runs entirely in the cloud. See our [FAQ](https://research.google.com/colaboratory/faq.html) for more info. Getting Started- [Overview of Colaboratory](/notebooks/basic_features_overview.ipynb)- [Loading and saving data: Local files, Drive, Sheets, Google Cloud Storage](/notebooks/io.ipynb)- [Importing libraries and installing dependencies](/notebooks/snippets/importing_libraries.ipynb)- [Using Google Cloud BigQuery](/notebooks/bigquery.ipynb)- [Forms](/notebooks/forms.ipynb), [Charts](/notebooks/charts.ipynb), [Markdown](/notebooks/markdown_guide.ipynb), & [Widgets](/notebooks/widgets.ipynb)- [TensorFlow with GPU](/notebooks/gpu.ipynb)- [Machine Learning Crash Course](https://developers.google.com/machine-learning/crash-course/): [Intro to Pandas](/notebooks/mlcc/intro_to_pandas.ipynb) & [First Steps with TensorFlow](/notebooks/mlcc/first_steps_with_tensor_flow.ipynb) Highlighted Features SeedbankLooking for Colab notebooks to learn from? Check out [Seedbank](https://tools.google.com/seedbank/), a place to discover interactive machine learning examples. TensorFlow execution Colaboratory allows you to execute TensorFlow code in your browser with a single click. The example below adds two matrices.$\begin{bmatrix} 1. & 1. & 1. \\ 1. & 1. & 1. \\\end{bmatrix} +\begin{bmatrix} 1. & 2. & 3. \\ 4. & 5. & 6. \\\end{bmatrix} =\begin{bmatrix} 2. & 3. & 4. \\ 5. & 6. & 7. \\\end{bmatrix}$
###Code
import tensorflow as tf
input1 = tf.ones((2, 3))
input2 = tf.reshape(tf.range(1, 7, dtype=tf.float32), (2, 3))
output = input1 + input2
with tf.Session():
result = output.eval()
result
###Output
_____no_output_____
###Markdown
GitHubYou can save a copy of your Colab notebook to Github by using File > Save a copy to GitHub…You can load any .ipynb on GitHub by just adding the path to colab.research.google.com/github/ . For example, [colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb](https://colab.research.google.com/github/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) will load [this .ipynb](https://github.com/tensorflow/models/blob/master/samples/core/get_started/_index.ipynb) on GitHub. Visualization Colaboratory includes widely used libraries like [matplotlib](https://matplotlib.org/), simplifying visualization.
###Code
import matplotlib.pyplot as plt
import numpy as np
x = np.arange(20)
y = [x_i + np.random.randn(1) for x_i in x]
a, b = np.polyfit(x, y, 1)
_ = plt.plot(x, y, 'o', np.arange(20), a*np.arange(20)+b, '-')
###Output
_____no_output_____
###Markdown
Want to use a new library? `pip install` it at the top of the notebook. Then that library can be used anywhere else in the notebook. For recipes to import commonly used libraries, refer to the [importing libraries example notebook](/notebooks/snippets/importing_libraries.ipynb).
###Code
!pip install -q matplotlib-venn
from matplotlib_venn import venn2
_ = venn2(subsets = (3, 2, 1))
###Output
_____no_output_____
###Markdown
FormsForms can be used to parameterize code. See the [forms example notebook](/notebooks/forms.ipynb) for more details.
###Code
#@title Examples
text = 'value' #@param
date_input = '2018-03-22' #@param {type:"date"}
number_slider = 0 #@param {type:"slider", min:-1, max:1, step:0.1}
dropdown = '1st option' #@param ["1st option", "2nd option", "3rd option"]
###Output
_____no_output_____ |
day3_solutions_notebook_part2.ipynb | ###Markdown
Object Detection Model (Faster R-CNN) Classification vs. Detection Faster R-CNN Object Detector Overview The Faster R-CNN works as follows:* The RPN generates region proposals.* For all region proposals in the image, a fixed-length feature vector is extracted from each region using the ROI Pooling layer.* The extracted feature vectors are then classified using the Fast R-CNN head.* The class scores of the detected objects, in addition to their bounding boxes, are returned. Object Detection using FasterRCNN Model (Pre-trained) Importing required python libraries Missing python libraries can be installed using the following syntax: !pip install [packagename] This code also uses reference functions that do not belong to any specific python library. They are present in the files:* transforms.py (different from the transforms module in the torchvision library)* utils.py* Both of these can be downloaded from here --> https://github.com/pytorch/vision/tree/main/references/detection* Download these files and place them in the same location as this jupyter notebook
###Code
import torchvision.transforms as transforms
import cv2
import numpy
import numpy as np
import torchvision
import torch
torch.cuda.empty_cache()
import argparse
from PIL import Image
import json
import random
import math
import sys
import os
from pycocotools.coco import COCO
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor
from matplotlib import pyplot as plt
import utils
import transforms as T
###Output
_____no_output_____
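###Markdown
To make the pipeline described above concrete, the following minimal sketch (an illustration added here, not part of the original workflow) runs one random image through a pretrained Faster R-CNN and inspects the returned boxes, labels, and scores. It assumes the pretrained weights can be downloaded; the throwaway names sketch_model and dummy_image are used so nothing interferes with the model built later in this notebook.
###Code
# Minimal sketch: one image in, a dict of boxes / labels / scores out.
sketch_model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
sketch_model.eval()
dummy_image = torch.rand(3, 480, 640)            # one random RGB image with values in [0, 1]
with torch.no_grad():
    prediction = sketch_model([dummy_image])[0]  # list of images in, one dict per image out
print(prediction['boxes'].shape)    # (N, 4) boxes kept after non-maximum suppression
print(prediction['labels'].shape)   # (N,)  predicted COCO class indices
print(prediction['scores'].shape)   # (N,)  confidence scores
###Output
_____no_output_____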
###Markdown
Getting Images and Annotations FIFTYONE: The open-source tool for building high-quality datasets and computer vision models. FiftyOne provides the building blocks for optimizing your dataset analysis pipeline. Use it to get hands-on with your data, including visualizing complex labels, evaluating your models, exploring scenarios of interest, identifying failure modes, finding annotation mistakes, and much more! The COCO dataset can now be downloaded from the FiftyOne Dataset Zoo. Additional information can be found here: https://voxel51.com/docs/fiftyone/api/fiftyone.zoo.datasets.html COCO dataset output classes:* The COCO dataset contains the following 91 classes as output.* Certain classes have been removed from the 2017 version of the COCO dataset, hence the "N/A" entries.* Additional information can be found here: https://tech.amikelive.com/node-718/what-object-categories-labels-are-in-coco-dataset/
###Code
coco_names = [
'__background__', 'person', 'bicycle', 'car', 'motorcycle', 'airplane', 'bus',
'train', 'truck', 'boat', 'traffic light', 'fire hydrant', 'N/A', 'stop sign',
'parking meter', 'bench', 'bird', 'cat', 'dog', 'horse', 'sheep', 'cow',
'elephant', 'bear', 'zebra', 'giraffe', 'N/A', 'backpack', 'umbrella', 'N/A', 'N/A',
'handbag', 'tie', 'suitcase', 'frisbee', 'skis', 'snowboard', 'sports ball',
'kite', 'baseball bat', 'baseball glove', 'skateboard', 'surfboard', 'tennis racket',
'bottle', 'N/A', 'wine glass', 'cup', 'fork', 'knife', 'spoon', 'bowl',
'banana', 'apple', 'sandwich', 'orange', 'broccoli', 'carrot', 'hot dog', 'pizza',
'donut', 'cake', 'chair', 'couch', 'potted plant', 'bed', 'N/A', 'dining table',
'N/A', 'N/A', 'toilet', 'N/A', 'tv', 'laptop', 'mouse', 'remote', 'keyboard', 'cell phone',
'microwave', 'oven', 'toaster', 'sink', 'refrigerator', 'N/A', 'book',
'clock', 'vase', 'scissors', 'teddy bear', 'hair drier', 'toothbrush'
]
###Output
_____no_output_____
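###Markdown
As a hypothetical sketch of the FiftyOne download step mentioned above (the split, classes, and max_samples values here are illustrative assumptions, not necessarily those used to build the local ./coco-2017/ folder referenced later):
###Code
# Hypothetical sketch: download a small, person-only slice of COCO-2017 with FiftyOne.
# Assumes the fiftyone package is installed (e.g. !pip install fiftyone).
import fiftyone.zoo as foz

zoo_dataset = foz.load_zoo_dataset(
    "coco-2017",
    split="train",
    label_types=["detections"],
    classes=["person"],
    max_samples=50,
)
print(zoo_dataset)
###Output
_____no_output_____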
###Markdown
Defining Required functions* predict(): This function uses a trained object detection model to generate bounding boxes, class labels and their confidence scores for a given input image. The detection_threshold value can be set anywhere between 0 and 1; a threshold of 0.8 keeps only bounding box predictions with greater than 80% confidence.* draw_boxes(): This function overlays the predicted bounding boxes on an input image.
###Code
def predict(image, model, device, detection_threshold):
# transform the image to tensor
image = transform(image).to(device)
image = image.unsqueeze(0) # add a batch dimension
outputs = model(image) # get the predictions on the image
# print the results individually
# print(f"BOXES: {outputs[0]['boxes']}")
# print(f"LABELS: {outputs[0]['labels']}")
# print(f"SCORES: {outputs[0]['scores']}")
    # get all the predicted class names
pred_classes = [coco_names[i] for i in outputs[0]['labels'].cpu().numpy()]
# get score for all the predicted objects
pred_scores = outputs[0]['scores'].detach().cpu().numpy()
# get all the predicted bounding boxes
pred_bboxes = outputs[0]['boxes'].detach().cpu().numpy()
# get boxes above the threshold score
boxes = pred_bboxes[pred_scores >= detection_threshold].astype(np.int32)
return boxes, pred_classes, outputs[0]['labels']
def draw_boxes(boxes, classes, labels, image):
# read the image with OpenCV
image = cv2.cvtColor(np.asarray(image), cv2.COLOR_BGR2RGB)
for i, box in enumerate(boxes):
color = COLORS[labels[i]]
cv2.rectangle(
image,
(int(box[0]), int(box[1])),
(int(box[2]), int(box[3])),
color, 2
)
cv2.putText(image, classes[i], (int(box[0]), int(box[1]-5)),
cv2.FONT_HERSHEY_SIMPLEX, 0.8, color, 2,
lineType=cv2.LINE_AA)
return image
# Create a different color for each class for efficient visualization
COLORS = np.random.uniform(0, 255, size=(len(coco_names), 3))
# Define the torchvision image transforms
transform = transforms.Compose([
transforms.ToTensor(),
])
###Output
_____no_output_____
###Markdown
Generating results using a pre-trained model
###Code
# Providing required directory details
inputDir = "./input/"
outputDir = "./output/"
FinalModelLoc = "./model/"
inputImgs = os.listdir(inputDir)
# Select the GPU device for faster execution, if available
device = torch.device('cuda' if torch.cuda.is_available() else 'cpu')
# Download pre-trained model
minInputSize = 512
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True, min_size=minInputSize)
model.eval().to(device)
# Save the model for future use
torch.save(model,FinalModelLoc+"Pre-Trained_FasterRCNN.pth")
#Generate object detection results and save them in an output folder
for inputImg in inputImgs[:]:
image = Image.open(inputDir+inputImg)
boxes, classes, labels = predict(image, model, device, 0.8)
outputImage = draw_boxes(boxes, classes, labels, image)
#cv2.imshow('Image', image)
cv2.imwrite(outputDir+inputImg+".jpg", outputImage)
###Output
c:\users\ambek\appdata\local\programs\python\python36\lib\site-packages\torch\functional.py:445: UserWarning: torch.meshgrid: in an upcoming release, it will be required to pass the indexing argument. (Triggered internally at ..\aten\src\ATen\native\TensorShape.cpp:2157.)
return _VF.meshgrid(tensors, **kwargs) # type: ignore[attr-defined]
###Markdown
Object Detection using FasterRCNN Model (Non Pre-trained) Here we will train a Faster R-CNN model using custom images downloaded with `fiftyone.zoo.load_zoo_dataset`
###Code
# Function to convert a list of lists to a single list.
# E.g.: [[1,2,3], [4,5], [6]] --> [1,2,3,4,5,6]
def flattenList(inputList):
flatList = []
# Iterate over all the elements in the given list
for elem in inputList:
# Check if type of element is list
if isinstance(elem, list):
# Extend the flat list by adding contents of this element (list)
flatList.extend(flattenList(elem))
else:
# Append the elemengt to the list
flatList.append(elem)
return flatList
# Setting up the root directory for the training data
root = "./coco-2017/"
# Input directory containing the annotation file
annFile = root+"train/labels.json"
# Loading annotation into memory using the COCO library function
coco=COCO(annFile)
# Get the category ids from the labels.json file
categorySelect = ['person']
cat_ids = coco.getCatIds(catNms=categorySelect)
# Get details of all images with the specific category in the cat_ids variable
imgIds = coco.getImgIds(catIds=cat_ids);
allimgMetadata = coco.loadImgs(imgIds)
if len(categorySelect) > 1:
for i in range(len(cat_ids)):
imgIds.append(coco.getImgIds(catIds=cat_ids[i]));
oneCatimgId = coco.getImgIds(catIds=cat_ids[i])
print("Total images for the Category --> "+coco_names[cat_ids[i]]+" : "+ str(len(oneCatimgId)))
imgIds = flattenList(imgIds)
print("Total images for all Categories --> "+coco_names[cat_ids[i]]+" : "+ str(len(imgIds)))
else:
for i in range(len(cat_ids)):
imgIds = coco.getImgIds(catIds=cat_ids[i]);
print("Total images for the Category --> "+coco_names[cat_ids[i]]+" : "+ str(len(imgIds)))
###Output
loading annotations into memory...
Done (t=0.01s)
creating index...
index created!
Total images for the Category --> person : 51
###Markdown
Setting up our CoCo dataset classes and functions The CoCoDataset() class does the following:* Imports the images that will be used by the model.* Imports the annotations file (labels.json, the ground truth), which contains the bounding box coordinates and class labels.* Applies appropriate transformations to the training set with the help of the RandomHorizontalFlip class and the get_transform function; these augmentations usually help prevent overfitting of the model.* Returns the transformed training images along with their ground truth.
###Code
class CoCoDataset(torch.utils.data.Dataset):
def __init__(self, root, transforms):
self.root = root
self.transforms = transforms
self.imgs = list(sorted(os.listdir(os.path.join(root, "data\\"))))
def __getitem__(self, idx):
# load images and get bbox details
# load images based on their image_id from labels.json
global imgIds
allimgMetadata = coco.loadImgs(imgIds)
boxes = []
labels = []
oneImgMetadata = allimgMetadata[idx]
anns_ids = coco.getAnnIds(imgIds=oneImgMetadata['id'], catIds=cat_ids, iscrowd=None)
anns = coco.loadAnns(anns_ids)
bbox = anns[0]['bbox']
        xmin = float(bbox[0])
        ymin = float(bbox[1])
        xmax = float(bbox[0] + bbox[2])
        ymax = float(bbox[1] + bbox[3])
boxes.append([xmin, ymin, xmax, ymax])
imagename = oneImgMetadata["file_name"]
labels.append(anns[0]['category_id'])
img_path = self.root+"data/"+imagename
img = Image.open(img_path).convert("RGB")
"""
# Code snippet to extract bounding box ROIs from images
boxedimg = numpy.array(img)
singleBOX = boxes[idx]
x,y,x_w,y_h = int(singleBOX[0]), int(singleBOX[1]), int(singleBOX[2]), int(singleBOX[3])
boxedimg = boxedimg[y:y_h, x:x_w]
cv2.imwrite("./roi/"+str([idx])+".png",boxedimg)
"""
boxes = torch.as_tensor(boxes, dtype=torch.float32)
labels = torch.as_tensor(labels, dtype=torch.int64)
im2tarIndex = torch.as_tensor(idx, dtype=torch.int64)
target = {}
target["boxes"] = boxes
target["labels"] = labels
if self.transforms is not None:
            # Note that the target (including the bbox) is also transformed/augmented here, which is different from torchvision's transforms
img, target = self.transforms(img, target)
return img, target
def __len__(self):
return len(self.imgs)
class RandomHorizontalFlip(object):
def __init__(self, prob):
self.prob = prob
def __call__(self, image, target):
if random.random() < self.prob:
height, width = image.shape[-2:]
image = image.flip(-1)
bbox = target["boxes"]
bbox[:, [0, 2]] = width - bbox[:, [2, 0]]
target["boxes"] = bbox
return image, target
def get_transform(train):
transforms = []
# converts the image, a PIL image, into a PyTorch Tensor
transforms.append(T.ToTensor())
if train:
# during training, randomly flip the training images
# and ground-truth for data augmentation
# 50% chance of flipping horizontally
transforms.append(T.RandomHorizontalFlip(0.5))
return T.Compose(transforms)
# Defining additional functions to help with the training process
def get_object_detection_model(num_classes):
# load an object detection model pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)
    # replace the classifier with a new one that has num_classes, which is user-defined
# get the number of input features for the classifier
in_features = model.roi_heads.box_predictor.cls_score.in_features
# replace the pre-trained head with a new one
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
return model
def train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq, scaler=None):
model.train()
metric_logger = utils.MetricLogger(delimiter=" ")
metric_logger.add_meter("lr", utils.SmoothedValue(window_size=1, fmt="{value:.6f}"))
header = f"Epoch: [{epoch}]"
lr_scheduler = None
if epoch == 0:
warmup_factor = 1.0 / 1000
warmup_iters = min(1000, len(data_loader) - 1)
lr_scheduler = torch.optim.lr_scheduler.LinearLR(
optimizer, start_factor=warmup_factor, total_iters=warmup_iters
)
for images, targets in metric_logger.log_every(data_loader, print_freq, header):
images = list(image.to(device) for image in images)
targets = [{k: v.to(device) for k, v in t.items()} for t in targets]
#print(images[0].shape, targets[0])
with torch.cuda.amp.autocast(enabled=scaler is not None):
loss_dict = model(images, targets)
losses = sum(loss for loss in loss_dict.values())
# reduce losses over all GPUs for logging purposes
loss_dict_reduced = utils.reduce_dict(loss_dict)
losses_reduced = sum(loss for loss in loss_dict_reduced.values())
loss_value = losses_reduced.item()
if not math.isfinite(loss_value):
print(f"Loss is {loss_value}, stopping training")
print(loss_dict_reduced)
sys.exit(1)
optimizer.zero_grad()
if scaler is not None:
scaler.scale(losses).backward()
scaler.step(optimizer)
scaler.update()
else:
losses.backward()
optimizer.step()
if lr_scheduler is not None:
lr_scheduler.step()
metric_logger.update(loss=losses_reduced, **loss_dict_reduced)
metric_logger.update(lr=optimizer.param_groups[0]["lr"])
return metric_logger
###Output
_____no_output_____
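###Markdown
Before training, it can help to sanity-check a single sample. The sketch below assumes the ./coco-2017/train/ folder prepared above exists locally; check_dataset is just an illustrative name for pulling one (image, target) pair from CoCoDataset and inspecting it.
###Code
# Sanity-check sketch: inspect one (image, target) pair from CoCoDataset.
check_dataset = CoCoDataset(root + "train/", get_transform(train=False))
img, target = check_dataset[0]
print(img.shape)         # torch.Size([3, H, W]) after T.ToTensor()
print(target["boxes"])   # tensor of shape (1, 4): [xmin, ymin, xmax, ymax]
print(target["labels"])  # tensor with the COCO category id (1 == person)
###Output
_____no_output_____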
###Markdown
Training a Faster R-CNN Model
###Code
# Project directory containing training images and ground truth files
root = "./coco-2017/"
FinalModelLoc = "./model/"
# train on the GPU or on the CPU, if a GPU is not available
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
#device = torch.device('cpu')
num_classes = 2
# use our dataset and defined transformations
dataset = CoCoDataset(root+"train/", get_transform(train=False))
dataset_test = CoCoDataset(root+"train/", get_transform(train=False))
# Split the dataset in train and test set.
indices = torch.randperm(len(dataset)).tolist()
dataset = torch.utils.data.Subset(dataset, indices[:-15])
dataset_test = torch.utils.data.Subset(dataset_test, indices[-15:])
# Define training and test data loaders
data_loader = torch.utils.data.DataLoader(
dataset, batch_size=1, shuffle=True, # num_workers=4,
collate_fn=utils.collate_fn)
data_loader_test = torch.utils.data.DataLoader(
dataset_test, batch_size=1, shuffle=False, # num_workers=4,
collate_fn=utils.collate_fn)
# get the model (built directly from torchvision here; the get_object_detection_model helper above is an alternative)
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=False, progress=True, num_classes=num_classes, pretrained_backbone=True) # Or get_object_detection_model(num_classes)
# move model to the right device
model.to(device)
# construct an optimizer
params = [p for p in model.parameters() if p.requires_grad]
# SGD
optimizer = torch.optim.SGD(params, lr=0.0003,
momentum=0.9, weight_decay=0.0005)
# and a learning rate scheduler
# cos learning rate
lr_scheduler = torch.optim.lr_scheduler.CosineAnnealingWarmRestarts(optimizer, T_0=1, T_mult=2)
# let's train it for epochs
num_epochs = 100
for epoch in range(num_epochs):
    # train for one epoch, printing every 50 iterations
    # the train_one_epoch function (adapted from engine.py in the torchvision detection references) moves both images and targets .to(device)
train_one_epoch(model, optimizer, data_loader, device, epoch, print_freq=50)
# update the learning rate
lr_scheduler.step()
# evaluate on the test dataset
#evaluate(model, data_loader_test, device=device)
print('')
print('==================================================')
print('Model saved')
print('==================================================')
torch.save(model,FinalModelLoc+"FasterRCNN.pth")
print("All Done!")
###Output
Epoch: [0] [ 0/36] eta: 0:00:23 lr: 0.000009 loss: 1.3919 (1.3919) loss_classifier: 0.6978 (0.6978) loss_box_reg: 0.0001 (0.0001) loss_objectness: 0.6914 (0.6914) loss_rpn_box_reg: 0.0026 (0.0026) time: 0.6393 data: 0.0000 max mem: 1176
Epoch: [0] [35/36] eta: 0:00:00 lr: 0.000300 loss: 1.0759 (1.2245) loss_classifier: 0.3819 (0.5119) loss_box_reg: 0.0006 (0.0116) loss_objectness: 0.6779 (0.6828) loss_rpn_box_reg: 0.0059 (0.0182) time: 0.2338 data: 0.0047 max mem: 1660
Epoch: [0] Total time: 0:00:08 (0.2430 s / it)
Epoch: [1] [ 0/36] eta: 0:00:09 lr: 0.000300 loss: 0.9166 (0.9166) loss_classifier: 0.2359 (0.2359) loss_box_reg: 0.0176 (0.0176) loss_objectness: 0.6617 (0.6617) loss_rpn_box_reg: 0.0015 (0.0015) time: 0.2525 data: 0.0156 max mem: 1660
Epoch: [1] [35/36] eta: 0:00:00 lr: 0.000300 loss: 0.6239 (0.7477) loss_classifier: 0.0790 (0.1435) loss_box_reg: 0.0014 (0.0204) loss_objectness: 0.5168 (0.5657) loss_rpn_box_reg: 0.0068 (0.0181) time: 0.2368 data: 0.0062 max mem: 1660
Epoch: [1] Total time: 0:00:08 (0.2350 s / it)
Epoch: [2] [ 0/36] eta: 0:00:08 lr: 0.000150 loss: 0.3633 (0.3633) loss_classifier: 0.0228 (0.0228) loss_box_reg: 0.0014 (0.0014) loss_objectness: 0.3340 (0.3340) loss_rpn_box_reg: 0.0051 (0.0051) time: 0.2397 data: 0.0156 max mem: 1660
Epoch: [2] [35/36] eta: 0:00:00 lr: 0.000150 loss: 0.2713 (0.3088) loss_classifier: 0.0551 (0.0550) loss_box_reg: 0.0562 (0.0405) loss_objectness: 0.1391 (0.1962) loss_rpn_box_reg: 0.0055 (0.0171) time: 0.2364 data: 0.0047 max mem: 1660
Epoch: [2] Total time: 0:00:08 (0.2360 s / it)
Epoch: [3] [ 0/36] eta: 0:00:08 lr: 0.000300 loss: 0.3475 (0.3475) loss_classifier: 0.1172 (0.1172) loss_box_reg: 0.0762 (0.0762) loss_objectness: 0.1150 (0.1150) loss_rpn_box_reg: 0.0392 (0.0392) time: 0.2285 data: 0.0000 max mem: 1660
Epoch: [3] [35/36] eta: 0:00:00 lr: 0.000300 loss: 0.1956 (0.2204) loss_classifier: 0.0612 (0.0756) loss_box_reg: 0.0529 (0.0646) loss_objectness: 0.0427 (0.0654) loss_rpn_box_reg: 0.0073 (0.0147) time: 0.2391 data: 0.0062 max mem: 1660
Epoch: [3] Total time: 0:00:08 (0.2368 s / it)
Epoch: [4] [ 0/36] eta: 0:00:08 lr: 0.000256 loss: 0.1889 (0.1889) loss_classifier: 0.0827 (0.0827) loss_box_reg: 0.0009 (0.0009) loss_objectness: 0.0957 (0.0957) loss_rpn_box_reg: 0.0096 (0.0096) time: 0.2426 data: 0.0000 max mem: 1660
Epoch: [4] [35/36] eta: 0:00:00 lr: 0.000256 loss: 0.1775 (0.1985) loss_classifier: 0.0564 (0.0732) loss_box_reg: 0.0491 (0.0660) loss_objectness: 0.0399 (0.0478) loss_rpn_box_reg: 0.0060 (0.0115) time: 0.2330 data: 0.0047 max mem: 1661
Epoch: [4] Total time: 0:00:08 (0.2378 s / it)
Epoch: [5] [ 0/36] eta: 0:00:08 lr: 0.000150 loss: 0.1730 (0.1730) loss_classifier: 0.0441 (0.0441) loss_box_reg: 0.0996 (0.0996) loss_objectness: 0.0243 (0.0243) loss_rpn_box_reg: 0.0051 (0.0051) time: 0.2401 data: 0.0000 max mem: 1661
Epoch: [5] [35/36] eta: 0:00:00 lr: 0.000150 loss: 0.1568 (0.1914) loss_classifier: 0.0494 (0.0721) loss_box_reg: 0.0479 (0.0668) loss_objectness: 0.0404 (0.0424) loss_rpn_box_reg: 0.0066 (0.0101) time: 0.2407 data: 0.0063 max mem: 1661
Epoch: [5] Total time: 0:00:08 (0.2388 s / it)
Epoch: [6] [ 0/36] eta: 0:00:08 lr: 0.000044 loss: 0.3177 (0.3177) loss_classifier: 0.1209 (0.1209) loss_box_reg: 0.1497 (0.1497) loss_objectness: 0.0350 (0.0350) loss_rpn_box_reg: 0.0122 (0.0122) time: 0.2342 data: 0.0000 max mem: 1661
Epoch: [6] [35/36] eta: 0:00:00 lr: 0.000044 loss: 0.1710 (0.1904) loss_classifier: 0.0524 (0.0729) loss_box_reg: 0.0675 (0.0683) loss_objectness: 0.0331 (0.0398) loss_rpn_box_reg: 0.0051 (0.0094) time: 0.2397 data: 0.0055 max mem: 1661
Epoch: [6] Total time: 0:00:08 (0.2396 s / it)
Epoch: [7] [ 0/36] eta: 0:00:08 lr: 0.000300 loss: 0.1596 (0.1596) loss_classifier: 0.0706 (0.0706) loss_box_reg: 0.0581 (0.0581) loss_objectness: 0.0289 (0.0289) loss_rpn_box_reg: 0.0020 (0.0020) time: 0.2412 data: 0.0000 max mem: 1661
Epoch: [7] [35/36] eta: 0:00:00 lr: 0.000300 loss: 0.1662 (0.1790) loss_classifier: 0.0449 (0.0670) loss_box_reg: 0.0610 (0.0641) loss_objectness: 0.0325 (0.0380) loss_rpn_box_reg: 0.0057 (0.0099) time: 0.2430 data: 0.0062 max mem: 1661
Epoch: [7] Total time: 0:00:08 (0.2400 s / it)
Epoch: [8] [ 0/36] eta: 0:00:07 lr: 0.000289 loss: 0.1578 (0.1578) loss_classifier: 0.0369 (0.0369) loss_box_reg: 0.0956 (0.0956) loss_objectness: 0.0232 (0.0232) loss_rpn_box_reg: 0.0021 (0.0021) time: 0.2188 data: 0.0000 max mem: 1661
Epoch: [8] [35/36] eta: 0:00:00 lr: 0.000289 loss: 0.1888 (0.1990) loss_classifier: 0.0647 (0.0768) loss_box_reg: 0.0727 (0.0767) loss_objectness: 0.0310 (0.0365) loss_rpn_box_reg: 0.0055 (0.0090) time: 0.2390 data: 0.0055 max mem: 1661
Epoch: [8] Total time: 0:00:08 (0.2408 s / it)
Epoch: [9] [ 0/36] eta: 0:00:09 lr: 0.000256 loss: 0.1834 (0.1834) loss_classifier: 0.0850 (0.0850) loss_box_reg: 0.0506 (0.0506) loss_objectness: 0.0385 (0.0385) loss_rpn_box_reg: 0.0094 (0.0094) time: 0.2556 data: 0.0000 max mem: 1661
Epoch: [9] [35/36] eta: 0:00:00 lr: 0.000256 loss: 0.1691 (0.1728) loss_classifier: 0.0510 (0.0625) loss_box_reg: 0.0718 (0.0694) loss_objectness: 0.0276 (0.0316) loss_rpn_box_reg: 0.0051 (0.0093) time: 0.2420 data: 0.0075 max mem: 1661
Epoch: [9] Total time: 0:00:08 (0.2410 s / it)
Epoch: [10] [ 0/36] eta: 0:00:08 lr: 0.000207 loss: 0.1545 (0.1545) loss_classifier: 0.0666 (0.0666) loss_box_reg: 0.0477 (0.0477) loss_objectness: 0.0346 (0.0346) loss_rpn_box_reg: 0.0056 (0.0056) time: 0.2258 data: 0.0000 max mem: 1661
Epoch: [10] [35/36] eta: 0:00:00 lr: 0.000207 loss: 0.1580 (0.1757) loss_classifier: 0.0608 (0.0638) loss_box_reg: 0.0592 (0.0738) loss_objectness: 0.0224 (0.0297) loss_rpn_box_reg: 0.0060 (0.0084) time: 0.2383 data: 0.0031 max mem: 1661
Epoch: [10] Total time: 0:00:08 (0.2412 s / it)
Epoch: [11] [ 0/36] eta: 0:00:08 lr: 0.000150 loss: 0.1028 (0.1028) loss_classifier: 0.0481 (0.0481) loss_box_reg: 0.0349 (0.0349) loss_objectness: 0.0182 (0.0182) loss_rpn_box_reg: 0.0015 (0.0015) time: 0.2449 data: 0.0000 max mem: 1661
Epoch: [11] [35/36] eta: 0:00:00 lr: 0.000150 loss: 0.1482 (0.1548) loss_classifier: 0.0431 (0.0535) loss_box_reg: 0.0615 (0.0656) loss_objectness: 0.0287 (0.0283) loss_rpn_box_reg: 0.0038 (0.0074) time: 0.2408 data: 0.0023 max mem: 1661
Epoch: [11] Total time: 0:00:08 (0.2415 s / it)
Epoch: [12] [ 0/36] eta: 0:00:08 lr: 0.000093 loss: 0.1846 (0.1846) loss_classifier: 0.0732 (0.0732) loss_box_reg: 0.0552 (0.0552) loss_objectness: 0.0471 (0.0471) loss_rpn_box_reg: 0.0092 (0.0092) time: 0.2442 data: 0.0000 max mem: 1661
Epoch: [12] [35/36] eta: 0:00:00 lr: 0.000093 loss: 0.1423 (0.1592) loss_classifier: 0.0444 (0.0550) loss_box_reg: 0.0641 (0.0691) loss_objectness: 0.0280 (0.0278) loss_rpn_box_reg: 0.0055 (0.0073) time: 0.2430 data: 0.0070 max mem: 1661
Epoch: [12] Total time: 0:00:08 (0.2414 s / it)
Epoch: [13] [ 0/36] eta: 0:00:08 lr: 0.000044 loss: 0.1975 (0.1975) loss_classifier: 0.0546 (0.0546) loss_box_reg: 0.1175 (0.1175) loss_objectness: 0.0203 (0.0203) loss_rpn_box_reg: 0.0051 (0.0051) time: 0.2446 data: 0.0156 max mem: 1661
Epoch: [13] [35/36] eta: 0:00:00 lr: 0.000044 loss: 0.1332 (0.1529) loss_classifier: 0.0435 (0.0525) loss_box_reg: 0.0592 (0.0661) loss_objectness: 0.0225 (0.0277) loss_rpn_box_reg: 0.0055 (0.0065) time: 0.2461 data: 0.0039 max mem: 1661
Epoch: [13] Total time: 0:00:08 (0.2423 s / it)
Epoch: [14] [ 0/36] eta: 0:00:08 lr: 0.000011 loss: 0.1183 (0.1183) loss_classifier: 0.0462 (0.0462) loss_box_reg: 0.0230 (0.0230) loss_objectness: 0.0441 (0.0441) loss_rpn_box_reg: 0.0050 (0.0050) time: 0.2287 data: 0.0000 max mem: 1661
Epoch: [14] [35/36] eta: 0:00:00 lr: 0.000011 loss: 0.1517 (0.1488) loss_classifier: 0.0503 (0.0504) loss_box_reg: 0.0648 (0.0659) loss_objectness: 0.0231 (0.0262) loss_rpn_box_reg: 0.0041 (0.0064) time: 0.2415 data: 0.0055 max mem: 1661
Epoch: [14] Total time: 0:00:08 (0.2415 s / it)
Epoch: [15] [ 0/36] eta: 0:00:09 lr: 0.000300 loss: 0.1617 (0.1617) loss_classifier: 0.0817 (0.0817) loss_box_reg: 0.0463 (0.0463) loss_objectness: 0.0299 (0.0299) loss_rpn_box_reg: 0.0038 (0.0038) time: 0.2583 data: 0.0156 max mem: 1661
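###Markdown
The evaluate call that is commented out in the training loop comes from engine.py, which lives in the same torchvision detection references folder as utils.py and transforms.py. If engine.py is downloaded next to this notebook as well (an assumption, since only utils.py and transforms.py were listed above), a sketch of COCO-style evaluation on the held-out split would be:
###Code
# Sketch: COCO-style evaluation, assuming engine.py from the torchvision
# detection references has been placed alongside utils.py and transforms.py.
from engine import evaluate
evaluate(model, data_loader_test, device=device)
###Output
_____no_output_____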
###Markdown
Testing Faster R-CNN Model
###Code
# Use the trained model, generate results and save them in an output folder
outputDir = "./output/"
model = torch.load(FinalModelLoc+'FasterRCNN_100epoch.pth')
device = torch.device('cuda') if torch.cuda.is_available() else torch.device('cpu')
model.eval().to(device)
for i in range(len(dataset_test)):
img, _ = dataset_test[i]
tran = transforms.ToPILImage()
PILimg = tran(img)
boxes, classes, labels = predict(PILimg, model, device, 0.8)
outputImage = draw_boxes(boxes, classes, labels, PILimg)
#cv2.imshow('Image', image)
cv2.imwrite(outputDir+"/ObjectDetect_"+str(i)+".jpg", outputImage)
###Output
_____no_output_____ |
lijin-THU:notes-python/05-advanced-python/05.09-iterators.ipynb | ###Markdown
Iterators Introduction Iterator objects can be used in a `for` loop:
###Code
x = [2, 4, 6]
for n in x:
print n
###Output
2
4
6
###Markdown
The benefit is that you do not have to iterate over indices. However, in some cases we want both the index and the corresponding value; in that case we can pass the iterable to the `enumerate` function, so that each iteration returns an `(index, value)` tuple:
###Code
x = [2, 4, 6]
for i, n in enumerate(x):
print 'pos', i, 'is', n
###Output
pos 0 is 2
pos 1 is 4
pos 2 is 6
###Markdown
Iterator objects must implement the `__iter__` method:
###Code
x = [2, 4, 6]
i = x.__iter__()
print i
###Output
<listiterator object at 0x0000000003CAE630>
###Markdown
The object returned by `__iter__()` supports the `next` method, which returns the next element of the iterator:
###Code
print i.next()
###Output
2
###Markdown
When there is no next element, it will `raise` a `StopIteration` error:
###Code
print i.next()
print i.next()
i.next()
###Output
_____no_output_____
###Markdown
Many standard library functions return iterators:
###Code
r = reversed(x)
print r
###Output
<listreverseiterator object at 0x0000000003D615F8>
###Markdown
Call its `next()` method:
###Code
print r.next()
print r.next()
print r.next()
###Output
6
4
2
###Markdown
The `iterkeys`, `itervalues`, and `iteritems` methods of a dictionary object all return iterators:
###Code
x = {'a':1, 'b':2, 'c':3}
i = x.iteritems()
print i
###Output
<dictionary-itemiterator object at 0x0000000003D51B88>
###Markdown
The `__iter__` method of an iterator returns the iterator itself:
###Code
print i.__iter__()
print i.next()
###Output
('a', 1)
###Markdown
Custom iterators Define a custom reversed iterator over a list:
###Code
class ReverseListIterator(object):
def __init__(self, list):
self.list = list
self.index = len(list)
def __iter__(self):
return self
def next(self):
self.index -= 1
if self.index >= 0:
return self.list[self.index]
else:
raise StopIteration
x = range(10)
for i in ReverseListIterator(x):
print i,
###Output
9 8 7 6 5 4 3 2 1 0
###Markdown
As long as we define these three methods (`__init__`, `__iter__`, and `next`), we can return arbitrary values from the iteration:
###Code
class Collatz(object):
def __init__(self, start):
self.value = start
def __iter__(self):
return self
def next(self):
if self.value == 1:
raise StopIteration
elif self.value % 2 == 0:
self.value = self.value / 2
else:
self.value = 3 * self.value + 1
return self.value
###Output
_____no_output_____
###Markdown
Here we implement the [Collatz conjecture](http://baike.baidu.com/view/736196.htm):- odd n: return 3n + 1- even n: return n / 2, until n reaches 1:
###Code
for x in Collatz(7):
print x,
###Output
22 11 34 17 52 26 13 40 20 10 5 16 8 4 2 1
###Markdown
However, an iterator object carries state, which can lead to the following problem:
###Code
i = Collatz(7)
for x, y in zip(i, i):
print x, y
###Output
22 11
34 17
52 26
13 40
20 10
5 16
8 4
2 1
###Markdown
A better approach is to separate the iterator from the iterable object. Here is an in-order traversal implementation for a binary tree:
###Code
class BinaryTree(object):
def __init__(self, value, left=None, right=None):
self.value = value
self.left = left
self.right = right
def __iter__(self):
return InorderIterator(self)
class InorderIterator(object):
def __init__(self, node):
self.node = node
self.stack = []
def next(self):
if len(self.stack) > 0 or self.node is not None:
while self.node is not None:
self.stack.append(self.node)
self.node = self.node.left
node = self.stack.pop()
self.node = node.right
return node.value
else:
raise StopIteration()
tree = BinaryTree(
left=BinaryTree(
left=BinaryTree(1),
value=2,
right=BinaryTree(
left=BinaryTree(3),
value=4,
right=BinaryTree(5)
),
),
value=6,
right=BinaryTree(
value=7,
right=BinaryTree(8)
)
)
for value in tree:
print value,
###Output
1 2 3 4 5 6 7 8
###Markdown
The previous problem no longer occurs:
###Code
for x,y in zip(tree, tree):
print x, y
###Output
1 1
2 2
3 3
4 4
5 5
6 6
7 7
8 8
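###Markdown
Because `BinaryTree.__iter__` builds a fresh `InorderIterator` on every call, each loop (and each argument passed to `zip`) gets its own independent iterator. A quick sketch to confirm this, using the `tree` defined above:
###Code
it1 = iter(tree)
it2 = iter(tree)
print it1 is it2               # False: two independent iterator objects
print it1.next(), it2.next()   # both start from the first in-order value
###Output
_____no_output_____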
|
branches/1st-edition/ch12.ipynb | ###Markdown
Advanced NumPy
###Code
from __future__ import division
from numpy.random import randn
from pandas import Series
import numpy as np
np.set_printoptions(precision=4)
import sys
###Output
_____no_output_____
###Markdown
ndarray object internals NumPy dtype hierarchy
###Code
ints = np.ones(10, dtype=np.uint16)
floats = np.ones(10, dtype=np.float32)
np.issubdtype(ints.dtype, np.integer)
np.issubdtype(floats.dtype, np.floating)
np.float64.mro()
###Output
_____no_output_____
###Markdown
Advanced array manipulation Reshaping arrays
###Code
arr = np.arange(8)
arr
arr.reshape((4, 2))
arr.reshape((4, 2)).reshape((2, 4))
arr = np.arange(15)
arr.reshape((5, -1))
other_arr = np.ones((3, 5))
other_arr.shape
arr.reshape(other_arr.shape)
arr = np.arange(15).reshape((5, 3))
arr
arr.ravel()
arr.flatten()
###Output
_____no_output_____
###Markdown
C vs. Fortran order
###Code
arr = np.arange(12).reshape((3, 4))
arr
arr.ravel()
arr.ravel('F')
###Output
_____no_output_____
###Markdown
Concatenating and splitting arrays
###Code
arr1 = np.array([[1, 2, 3], [4, 5, 6]])
arr2 = np.array([[7, 8, 9], [10, 11, 12]])
np.concatenate([arr1, arr2], axis=0)
np.concatenate([arr1, arr2], axis=1)
np.vstack((arr1, arr2))
np.hstack((arr1, arr2))
from numpy.random import randn
arr = randn(5, 2)
arr
first, second, third = np.split(arr, [1, 3])
first
second
third
###Output
_____no_output_____
###Markdown
Stacking helpers:
###Code
arr = np.arange(6)
arr1 = arr.reshape((3, 2))
arr2 = randn(3, 2)
np.r_[arr1, arr2]
np.c_[np.r_[arr1, arr2], arr]
np.c_[1:6, -10:-5]
###Output
_____no_output_____
###Markdown
Repeating elements: tile and repeat
###Code
arr = np.arange(3)
arr.repeat(3)
arr.repeat([2, 3, 4])
arr = randn(2, 2)
arr
arr.repeat(2, axis=0)
arr.repeat([2, 3], axis=0)
arr.repeat([2, 3], axis=1)
arr
np.tile(arr, 2)
arr
np.tile(arr, (2, 1))
np.tile(arr, (3, 2))
###Output
_____no_output_____
###Markdown
Fancy indexing equivalents: take and put
###Code
arr = np.arange(10) * 100
inds = [7, 1, 2, 6]
arr[inds]
arr.take(inds)
arr.put(inds, 42)
arr
arr.put(inds, [40, 41, 42, 43])
arr
inds = [2, 0, 2, 1]
arr = randn(2, 4)
arr
arr.take(inds, axis=1)
###Output
_____no_output_____
###Markdown
Broadcasting
###Code
arr = np.arange(5)
arr
arr * 4
arr = randn(4, 3)
arr.mean(0)
demeaned = arr - arr.mean(0)
demeaned
demeaned.mean(0)
arr
row_means = arr.mean(1)
row_means.reshape((4, 1))
demeaned = arr - row_means.reshape((4, 1))
demeaned.mean(1)
###Output
_____no_output_____
###Markdown
Broadcasting over other axes
###Code
arr - arr.mean(1)
arr - arr.mean(1).reshape((4, 1))
arr = np.zeros((4, 4))
arr_3d = arr[:, np.newaxis, :]
arr_3d.shape
arr_1d = np.random.normal(size=3)
arr_1d[:, np.newaxis]
arr_1d[np.newaxis, :]
arr = randn(3, 4, 5)
depth_means = arr.mean(2)
depth_means
demeaned = arr - depth_means[:, :, np.newaxis]
demeaned.mean(2)
def demean_axis(arr, axis=0):
means = arr.mean(axis)
    # This generalizes things like [:, :, np.newaxis] to N dimensions
indexer = [slice(None)] * arr.ndim
indexer[axis] = np.newaxis
return arr - means[indexer]
###Output
_____no_output_____
###Markdown
Setting array values by broadcasting
###Code
arr = np.zeros((4, 3))
arr[:] = 5
arr
col = np.array([1.28, -0.42, 0.44, 1.6])
arr[:] = col[:, np.newaxis]
arr
arr[:2] = [[-1.37], [0.509]]
arr
###Output
_____no_output_____
###Markdown
Advanced ufunc usage Ufunc instance methods
###Code
arr = np.arange(10)
np.add.reduce(arr)
arr.sum()
np.random.seed(12346)
arr = randn(5, 5)
arr[::2].sort(1) # sort a few rows
arr[:, :-1] < arr[:, 1:]
np.logical_and.reduce(arr[:, :-1] < arr[:, 1:], axis=1)
arr = np.arange(15).reshape((3, 5))
np.add.accumulate(arr, axis=1)
arr = np.arange(3).repeat([1, 2, 2])
arr
np.multiply.outer(arr, np.arange(5))
result = np.subtract.outer(randn(3, 4), randn(5))
result.shape
arr = np.arange(10)
np.add.reduceat(arr, [0, 5, 8])
arr = np.multiply.outer(np.arange(4), np.arange(5))
arr
np.add.reduceat(arr, [0, 2, 4], axis=1)
###Output
_____no_output_____
###Markdown
Custom ufuncs
###Code
def add_elements(x, y):
return x + y
add_them = np.frompyfunc(add_elements, 2, 1)
add_them(np.arange(8), np.arange(8))
add_them = np.vectorize(add_elements, otypes=[np.float64])
add_them(np.arange(8), np.arange(8))
arr = randn(10000)
%timeit add_them(arr, arr)
%timeit np.add(arr, arr)
###Output
_____no_output_____
###Markdown
Structured and record arrays
###Code
dtype = [('x', np.float64), ('y', np.int32)]
sarr = np.array([(1.5, 6), (np.pi, -2)], dtype=dtype)
sarr
sarr[0]
sarr[0]['y']
sarr['x']
###Output
_____no_output_____
###Markdown
Nested dtypes and multidimensional fields
###Code
dtype = [('x', np.int64, 3), ('y', np.int32)]
arr = np.zeros(4, dtype=dtype)
arr
arr[0]['x']
arr['x']
dtype = [('x', [('a', 'f8'), ('b', 'f4')]), ('y', np.int32)]
data = np.array([((1, 2), 5), ((3, 4), 6)], dtype=dtype)
data['x']
data['y']
data['x']['a']
###Output
_____no_output_____
###Markdown
Why use structured arrays? Structured array manipulations: numpy.lib.recfunctions More about sorting
###Code
arr = randn(6)
arr.sort()
arr
arr = randn(3, 5)
arr
arr[:, 0].sort() # Sort first column values in-place
arr
arr = randn(5)
arr
np.sort(arr)
arr
arr = randn(3, 5)
arr
arr.sort(axis=1)
arr
arr[:, ::-1]
###Output
_____no_output_____
###Markdown
Indirect sorts: argsort and lexsort
###Code
values = np.array([5, 0, 1, 3, 2])
indexer = values.argsort()
indexer
values[indexer]
arr = randn(3, 5)
arr[0] = values
arr
arr[:, arr[0].argsort()]
first_name = np.array(['Bob', 'Jane', 'Steve', 'Bill', 'Barbara'])
last_name = np.array(['Jones', 'Arnold', 'Arnold', 'Jones', 'Walters'])
sorter = np.lexsort((first_name, last_name))
zip(last_name[sorter], first_name[sorter])
###Output
_____no_output_____
###Markdown
Alternate sort algorithms
###Code
values = np.array(['2:first', '2:second', '1:first', '1:second', '1:third'])
key = np.array([2, 2, 1, 1, 1])
indexer = key.argsort(kind='mergesort')
indexer
values.take(indexer)
###Output
_____no_output_____
###Markdown
numpy.searchsorted: Finding elements in a sorted array
###Code
arr = np.array([0, 1, 7, 12, 15])
arr.searchsorted(9)
arr.searchsorted([0, 8, 11, 16])
arr = np.array([0, 0, 0, 1, 1, 1, 1])
arr.searchsorted([0, 1])
arr.searchsorted([0, 1], side='right')
data = np.floor(np.random.uniform(0, 10000, size=50))
bins = np.array([0, 100, 1000, 5000, 10000])
data
labels = bins.searchsorted(data)
labels
Series(data).groupby(labels).mean()
np.digitize(data, bins)
###Output
_____no_output_____
###Markdown
NumPy matrix class
###Code
X = np.array([[ 8.82768214, 3.82222409, -1.14276475, 2.04411587],
[ 3.82222409, 6.75272284, 0.83909108, 2.08293758],
[-1.14276475, 0.83909108, 5.01690521, 0.79573241],
[ 2.04411587, 2.08293758, 0.79573241, 6.24095859]])
X[:, 0] # one-dimensional
y = X[:, :1] # two-dimensional by slicing
X
y
np.dot(y.T, np.dot(X, y))
Xm = np.matrix(X)
ym = Xm[:, 0]
Xm
ym
ym.T * Xm * ym
Xm.I * X
###Output
_____no_output_____
###Markdown
Advanced array input and output Memory-mapped files
###Code
mmap = np.memmap('mymmap', dtype='float64', mode='w+', shape=(10000, 10000))
mmap
section = mmap[:5]
section[:] = np.random.randn(5, 10000)
mmap.flush()
mmap
del mmap
mmap = np.memmap('mymmap', dtype='float64', shape=(10000, 10000))
mmap
%xdel mmap
!rm mymmap
###Output
_____no_output_____
###Markdown
HDF5 and other array storage options Performance tips The importance of contiguous memory
###Code
arr_c = np.ones((1000, 1000), order='C')
arr_f = np.ones((1000, 1000), order='F')
arr_c.flags
arr_f.flags
arr_f.flags.f_contiguous
%timeit arr_c.sum(1)
%timeit arr_f.sum(1)
arr_f.copy('C').flags
arr_c[:50].flags.contiguous
arr_c[:, :50].flags
%xdel arr_c
%xdel arr_f
%cd ..
###Output
_____no_output_____ |
Natural Language Processing with Probabilistic Models/Week 4 - Word Embeddings with Neural Networks/NLP_C2_W4_lecture_nb_01.ipynb | ###Markdown
Word Embeddings: Ungraded Practice NotebookIn this ungraded notebook, you'll try out all the individual techniques that you learned about in the lecture. Practicing on small examples will prepare you for the graded assignment, where you will combine the techniques in more advanced ways to create word embeddings from a real-life corpus.This notebook is made of two main parts: data preparation, and the continuous bag-of-words (CBOW) model.To get started, import and initialize all the libraries you will need.
###Code
import sys
!{sys.executable} -m pip install emoji
import re
import nltk
from nltk.tokenize import word_tokenize
import emoji
import numpy as np
from utils2 import get_dict
nltk.download('punkt') # download pre-trained Punkt tokenizer for English
###Output
_____no_output_____
###Markdown
Data preparation In the data preparation phase, starting with a corpus of text, you will:- Clean and tokenize the corpus.- Extract the pairs of context words and center word that will make up the training data set for the CBOW model. The context words are the features that will be fed into the model, and the center words are the target values that the model will learn to predict.- Create simple vector representations of the context words (features) and center words (targets) that can be used by the neural network of the CBOW model. Cleaning and tokenizationTo demonstrate the cleaning and tokenization process, consider a corpus that contains emojis and various punctuation signs.
###Code
corpus = 'Who ❤️ "word embeddings" in 2020? I do!!!'
###Output
_____no_output_____
###Markdown
First, replace all interrupting punctuation signs — such as commas and exclamation marks — with periods.
###Code
print(f'Corpus: {corpus}')
data = re.sub(r'[,!?;-]+', '.', corpus)
print(f'After cleaning punctuation: {data}')
###Output
_____no_output_____
###Markdown
Next, use NLTK's tokenization engine to split the corpus into individual tokens.
###Code
print(f'Initial string: {data}')
data = nltk.word_tokenize(data)
print(f'After tokenization: {data}')
###Output
_____no_output_____
###Markdown
Finally, as you saw in the lecture, get rid of numbers and punctuation other than periods, and convert all the remaining tokens to lowercase.
###Code
print(f'Initial list of tokens: {data}')
data = [ ch.lower() for ch in data
if ch.isalpha()
or ch == '.'
or emoji.get_emoji_regexp().search(ch)
]
print(f'After cleaning: {data}')
###Output
_____no_output_____
###Markdown
Note that the heart emoji is considered as a token just like any normal word.Now let's streamline the cleaning and tokenization process by wrapping the previous steps in a function.
###Code
def tokenize(corpus):
data = re.sub(r'[,!?;-]+', '.', corpus)
data = nltk.word_tokenize(data) # tokenize string to words
data = [ ch.lower() for ch in data
if ch.isalpha()
or ch == '.'
or emoji.get_emoji_regexp().search(ch)
]
return data
###Output
_____no_output_____
###Markdown
Apply this function to the corpus that you'll be working on in the rest of this notebook: "I am happy because I am learning"
###Code
corpus = 'I am happy because I am learning'
print(f'Corpus: {corpus}')
words = tokenize(corpus)
print(f'Words (tokens): {words}')
###Output
_____no_output_____
###Markdown
**Now try it out yourself with your own sentence.**
###Code
tokenize("Now it's your turn: try with your own sentence!")
###Output
_____no_output_____
###Markdown
Sliding window of words Now that you have transformed the corpus into a list of clean tokens, you can slide a window of words across this list. For each window you can extract a center word and the context words.The `get_windows` function in the next cell was introduced in the lecture.
###Code
def get_windows(words, C):
i = C
while i < len(words) - C:
center_word = words[i]
context_words = words[(i - C):i] + words[(i+1):(i+C+1)]
yield context_words, center_word
i += 1
###Output
_____no_output_____
###Markdown
The first argument of this function is a list of words (or tokens). The second argument, `C`, is the context half-size. Recall that for a given center word, the context words are made of `C` words to the left and `C` words to the right of the center word.Here is how you can use this function to extract context words and center words from a list of tokens. These context and center words will make up the training set that you will use to train the CBOW model.
###Code
for x, y in get_windows(
['i', 'am', 'happy', 'because', 'i', 'am', 'learning'],
2
):
print(f'{x}\t{y}')
###Output
_____no_output_____
###Markdown
The first example of the training set is made of:- the context words "i", "am", "because", "i",- and the center word to be predicted: "happy".**Now try it out yourself. In the next cell, you can change both the sentence and the context half-size.**
###Code
for x, y in get_windows(tokenize("Now it's your turn: try with your own sentence!"), 1):
print(f'{x}\t{y}')
###Output
_____no_output_____
###Markdown
Transforming words into vectors for the training set To finish preparing the training set, you need to transform the context words and center words into vectors. Mapping words to indices and indices to wordsThe center words will be represented as one-hot vectors, and the vectors that represent context words are also based on one-hot vectors.To create one-hot word vectors, you can start by mapping each unique word to a unique integer (or index). We have provided a helper function, `get_dict`, that creates a Python dictionary that maps words to integers and back.
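The exact implementation of `get_dict` lives in the course's utility code. Conceptually it only needs to enumerate a (sorted) vocabulary; the sketch below is an illustration under that assumption, and the name `get_dict_sketch` as well as the sorting choice are assumptions of mine, not necessarily identical to the provided helper.
###Code
def get_dict_sketch(words):
    # build a deterministic vocabulary by sorting the unique tokens
    vocabulary = sorted(set(words))
    # map each word to an integer index, and back
    word2Ind = {word: i for i, word in enumerate(vocabulary)}
    Ind2word = {i: word for word, i in word2Ind.items()}
    return word2Ind, Ind2word
###Output
_____no_output_____
###Markdown
In the rest of the notebook, the provided `get_dict` helper is used directly.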
###Code
word2Ind, Ind2word = get_dict(words)
###Output
_____no_output_____
###Markdown
Here's the dictionary that maps words to numeric indices.
###Code
word2Ind
###Output
_____no_output_____
###Markdown
You can use this dictionary to get the index of a word.
###Code
print("Index of the word 'i': ",word2Ind['i'])
###Output
_____no_output_____
###Markdown
And conversely, here's the dictionary that maps indices to words.
###Code
Ind2word
print("Word which has index 2: ",Ind2word[2] )
###Output
_____no_output_____
###Markdown
Finally, get the length of either of these dictionaries to get the size of the vocabulary of your corpus, in other words the number of different words making up the corpus.
###Code
V = len(word2Ind)
print("Size of vocabulary: ", V)
###Output
_____no_output_____
###Markdown
Getting one-hot word vectorsRecall from the lecture that you can easily convert an integer, $n$, into a one-hot vector.Consider the word "happy". First, retrieve its numeric index.
###Code
n = word2Ind['happy']
n
###Output
_____no_output_____
###Markdown
Now create a vector with the size of the vocabulary, and fill it with zeros.
###Code
center_word_vector = np.zeros(V)
center_word_vector
###Output
_____no_output_____
###Markdown
You can confirm that the vector has the right size.
###Code
len(center_word_vector) == V
###Output
_____no_output_____
###Markdown
Next, replace the 0 of the $n$-th element with a 1.
###Code
center_word_vector[n] = 1
###Output
_____no_output_____
###Markdown
And you have your one-hot word vector.
###Code
center_word_vector
###Output
_____no_output_____
###Markdown
**You can now group all of these steps in a convenient function, which takes as parameters: a word to be encoded, a dictionary that maps words to indices, and the size of the vocabulary.**
###Code
def word_to_one_hot_vector(word, word2Ind, V):
# BEGIN your code here
one_hot_vector = np.zeros(V)
one_hot_vector[word2Ind[word]] = 1
# END your code here
return one_hot_vector
###Output
_____no_output_____
###Markdown
Check that it works as intended.
###Code
word_to_one_hot_vector('happy', word2Ind, V)
###Output
_____no_output_____
###Markdown
**What is the word vector for "learning"?**
###Code
# BEGIN your code here
word_to_one_hot_vector('learning', word2Ind, V)
# END your code here
###Output
_____no_output_____
###Markdown
Expected output: array([0., 0., 0., 0., 1.]) Getting context word vectors To create the vectors that represent context words, you will calculate the average of the one-hot vectors representing the individual words.Let's start with a list of context words.
###Code
context_words = ['i', 'am', 'because', 'i']
###Output
_____no_output_____
###Markdown
Using Python's list comprehension construct and the `word_to_one_hot_vector` function that you created in the previous section, you can create a list of one-hot vectors representing each of the context words.
###Code
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V) for w in context_words]
context_words_vectors
###Output
_____no_output_____
###Markdown
And you can now simply get the average of these vectors using numpy's `mean` function, to get the vector representation of the context words.
###Code
np.mean(context_words_vectors, axis=0)
###Output
_____no_output_____
###Markdown
Note the `axis=0` parameter that tells `mean` to calculate the average of the rows (if you had wanted the average of the columns, you would have used `axis=1`).**Now create the `context_words_to_vector` function that takes in a list of context words, a word-to-index dictionary, and a vocabulary size, and outputs the vector representation of the context words.**
###Code
def context_words_to_vector(context_words, word2Ind, V):
# BEGIN your code here
context_words_vectors = [word_to_one_hot_vector(w, word2Ind, V) for w in context_words]
context_words_vectors = np.mean(context_words_vectors, axis=0)
# END your code here
return context_words_vectors
###Output
_____no_output_____
###Markdown
And check that you obtain the same output as the manual approach above.
###Code
context_words_to_vector(['i', 'am', 'because', 'i'], word2Ind, V)
###Output
_____no_output_____
###Markdown
**What is the vector representation of the context words "am happy i am"?**
###Code
# BEGIN your code here
context_words_to_vector(['am', 'happy', 'i', 'am'], word2Ind, V)
# END your code here
###Output
_____no_output_____
###Markdown
Expected output: array([0.5 , 0. , 0.25, 0.25, 0. ]) Building the training set You can now combine the functions that you created in the previous sections, to build a training set for the CBOW model, starting from the following tokenized corpus.
###Code
words
###Output
_____no_output_____
###Markdown
To do this you need to use the sliding window function (`get_windows`) to extract the context words and center words, and you then convert these sets of words into a basic vector representation using `word_to_one_hot_vector` and `context_words_to_vector`.
###Code
for context_words, center_word in get_windows(words, 2): # reminder: 2 is the context half-size
print(f'Context words: {context_words} -> {context_words_to_vector(context_words, word2Ind, V)}')
print(f'Center word: {center_word} -> {word_to_one_hot_vector(center_word, word2Ind, V)}')
print()
###Output
_____no_output_____
###Markdown
In this practice notebook you'll be performing a single iteration of training using a single example, but in this week's assignment you'll train the CBOW model using several iterations and batches of examples.Here is how you would use a Python generator function (remember the `yield` keyword from the lecture?) to make it easier to iterate over a set of examples.
###Code
def get_training_example(words, C, word2Ind, V):
for context_words, center_word in get_windows(words, C):
yield context_words_to_vector(context_words, word2Ind, V), word_to_one_hot_vector(center_word, word2Ind, V)
###Output
_____no_output_____
###Markdown
The output of this function can be iterated on to get successive context word vectors and center word vectors, as demonstrated in the next cell.
###Code
for context_words_vector, center_word_vector in get_training_example(words, 2, word2Ind, V):
print(f'Context words vector: {context_words_vector}')
print(f'Center word vector: {center_word_vector}')
print()
###Output
_____no_output_____
###Markdown
Your training set is ready, you can now move on to the CBOW model itself. The continuous bag-of-words model The CBOW model is based on a neural network, the architecture of which looks like the figure below, as you'll recall from the lecture. Figure 1 This part of the notebook will walk you through:- The two activation functions used in the neural network.- Forward propagation.- Cross-entropy loss.- Backpropagation.- Gradient descent.- Extracting the word embedding vectors from the weight matrices once the neural network has been trained. Activation functions Let's start by implementing the activation functions, ReLU and softmax. ReLU ReLU is used to calculate the values of the hidden layer, in the following formulas:\begin{align} \mathbf{z_1} &= \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\ \mathbf{h} &= \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\\end{align} Let's fix a value for $\mathbf{z_1}$ as a working example.
###Code
np.random.seed(10)
z_1 = 10*np.random.rand(5, 1)-5
z_1
###Output
_____no_output_____
###Markdown
To get the ReLU of this vector, you want all the negative values to become zeros.First create a copy of this vector.
###Code
h = z_1.copy()
###Output
_____no_output_____
###Markdown
Now determine which of its values are negative.
###Code
h < 0
###Output
_____no_output_____
###Markdown
You can now simply set all of the values which are negative to 0.
###Code
h[h < 0] = 0
###Output
_____no_output_____
###Markdown
And that's it: you have the ReLU of $\mathbf{z_1}$!
###Code
h
###Output
_____no_output_____
###Markdown
**Now implement ReLU as a function.**
###Code
def relu(z):
# BEGIN your code here
result = z.copy()
result[result < 0] = 0
# END your code here
return result
###Output
_____no_output_____
###Markdown
**And check that it's working.**
###Code
z = np.array([[-1.25459881], [ 4.50714306], [ 2.31993942], [ 0.98658484], [-3.4398136 ]])
relu(z)
###Output
_____no_output_____
###Markdown
Expected output: array([[0. ], [4.50714306], [2.31993942], [0.98658484], [0. ]]) Softmax The second activation function that you need is softmax. This function is used to calculate the values of the output layer of the neural network, using the following formulas:\begin{align} \mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\ \mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\\end{align}To calculate softmax of a vector $\mathbf{z}$, the $i$-th component of the resulting vector is given by:$$ \textrm{softmax}(\textbf{z})_i = \frac{e^{z_i} }{\sum\limits_{j=1}^{V} e^{z_j} } \tag{5} $$Let's work through an example.
###Code
z = np.array([9, 8, 11, 10, 8.5])
z
###Output
_____no_output_____
###Markdown
You'll need to calculate the exponentials of each element, both for the numerator and for the denominator.
###Code
e_z = np.exp(z)
e_z
###Output
_____no_output_____
###Markdown
The denominator is equal to the sum of these exponentials.
###Code
sum_e_z = np.sum(e_z)
sum_e_z
###Output
_____no_output_____
###Markdown
And the value of the first element of $\textrm{softmax}(\textbf{z})$ is given by:
###Code
e_z[0]/sum_e_z
###Output
_____no_output_____
###Markdown
This is for one element. You can use numpy's vectorized operations to calculate the values of all the elements of the $\textrm{softmax}(\textbf{z})$ vector in one go.**Implement the softmax function.**
###Code
def softmax(z):
# BEGIN your code here
e_z = np.exp(z)
sum_e_z = np.sum(e_z)
return e_z / sum_e_z
# END your code here
###Output
_____no_output_____
###Markdown
**Now check that it works.**
###Code
softmax([9, 8, 11, 10, 8.5])
###Output
_____no_output_____
###Markdown
Expected output: array([0.08276948, 0.03044919, 0.61158833, 0.22499077, 0.05020223]) Dimensions: 1-D arrays vs 2-D column vectorsBefore moving on to implement forward propagation, backpropagation, and gradient descent, let's have a look at the dimensions of the vectors you've been handling until now.Create a vector of length $V$ filled with zeros.
###Code
x_array = np.zeros(V)
x_array
###Output
_____no_output_____
###Markdown
This is a 1-dimensional array, as revealed by the `.shape` property of the array.
###Code
x_array.shape
###Output
_____no_output_____
###Markdown
To perform matrix multiplication in the next steps, you actually need your column vectors to be represented as a matrix with one column. In numpy, this matrix is represented as a 2-dimensional array.The easiest way to convert a 1D vector to a 2D column matrix is to set its `.shape` property to the number of rows and one column, as shown in the next cell.
###Code
x_column_vector = x_array.copy()
x_column_vector.shape = (V, 1) # alternatively ... = (x_array.shape[0], 1)
x_column_vector
###Output
_____no_output_____
###Markdown
The shape of the resulting "vector" is:
###Code
x_column_vector.shape
###Output
_____no_output_____
###Markdown
So you now have a 5x1 matrix that you can use to perform standard matrix multiplication. Forward propagation Let's dive into the neural network itself, which is shown below with all the dimensions and formulas you'll need. Figure 2 Set $N$ equal to 3. Remember that $N$ is a hyperparameter of the CBOW model that represents the size of the word embedding vectors, as well as the size of the hidden layer.
###Code
N = 3
###Output
_____no_output_____
###Markdown
Initialization of the weights and biases Before you start training the neural network, you need to initialize the weight matrices and bias vectors with random values.In the assignment you will implement a function to do this yourself using `numpy.random.rand`. In this notebook, we've pre-populated these matrices and vectors for you.
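For reference, here is a minimal sketch of what such an initialization could look like with `numpy.random.rand`. The function name, the seed and the value range are assumptions for illustration only and may differ from the graded implementation in the assignment.
###Code
def initialize_model_sketch(N, V, random_seed=1):
    # assumption: uniform random values in [0, 1), as produced by numpy.random.rand
    np.random.seed(random_seed)
    W1 = np.random.rand(N, V)   # N x V
    W2 = np.random.rand(V, N)   # V x N
    b1 = np.random.rand(N, 1)   # N x 1
    b2 = np.random.rand(V, 1)   # V x 1
    return W1, W2, b1, b2
###Output
_____no_output_____
###Markdown
For this notebook, keep using the pre-populated values below so that your results match the expected outputs.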
###Code
W1 = np.array([[ 0.41687358, 0.08854191, -0.23495225, 0.28320538, 0.41800106],
[ 0.32735501, 0.22795148, -0.23951958, 0.4117634 , -0.23924344],
[ 0.26637602, -0.23846886, -0.37770863, -0.11399446, 0.34008124]])
W2 = np.array([[-0.22182064, -0.43008631, 0.13310965],
[ 0.08476603, 0.08123194, 0.1772054 ],
[ 0.1871551 , -0.06107263, -0.1790735 ],
[ 0.07055222, -0.02015138, 0.36107434],
[ 0.33480474, -0.39423389, -0.43959196]])
b1 = np.array([[ 0.09688219],
[ 0.29239497],
[-0.27364426]])
b2 = np.array([[ 0.0352008 ],
[-0.36393384],
[-0.12775555],
[-0.34802326],
[-0.07017815]])
###Output
_____no_output_____
###Markdown
**Check that the dimensions of these matrices match those shown in the figure above.**
###Code
# BEGIN your code here
print(f'V (vocabulary size): {V}')
print(f'N (embedding size / size of the hidden layer): {N}')
print(f'size of W1: {W1.shape} (NxV)')
print(f'size of b1: {b1.shape} (Nx1)')
print(f'size of W2: {W2.shape} (VxN)')
print(f'size of b2: {b2.shape} (Vx1)')
# END your code here
###Output
_____no_output_____
###Markdown
Training example Run the next cells to get the first training example, made of the vector representing the context words "i am because i", and the target which is the one-hot vector representing the center word "happy".> You don't need to worry about the Python syntax, but there are some explanations below if you want to know what's happening behind the scenes.
###Code
training_examples = get_training_example(words, 2, word2Ind, V)
###Output
_____no_output_____
###Markdown
> `get_training_example`, which uses the `yield` keyword, is known as a generator. When run, it builds an iterator, which is a special type of object that... you can iterate on (using a `for` loop for instance), to retrieve the successive values that the function generates.>> In this case `get_training_example` `yield`s training examples, and iterating on `training_examples` will return the successive training examples.
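If you'd like to see the generator mechanics in isolation, here is a tiny standalone example (unrelated to the CBOW data) showing how a generator produces its values one at a time.
###Code
def count_up_to(limit):
    n = 1
    while n <= limit:
        yield n   # execution pauses here until the next value is requested
        n += 1

counter = count_up_to(3)
print(next(counter))   # 1
print(next(counter))   # 2
print(list(counter))   # [3] -- whatever the iterator has not produced yet
###Output
_____no_output_____
###Markdown
The same mechanics apply to `training_examples` below.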
###Code
x_array, y_array = next(training_examples)
###Output
_____no_output_____
###Markdown
> `next` is a built-in function, which gets the next available value from an iterator. Here, you'll get the very first value, which is the first training example. If you run this cell again, you'll get the next value, and so on until the iterator runs out of values to return.>> In this notebook `next` is used because you will only be performing one iteration of training. In this week's assignment with the full training over several iterations you'll use regular `for` loops with the iterator that supplies the training examples. The vector representing the context words, which will be fed into the neural network, is:
###Code
x_array
###Output
_____no_output_____
###Markdown
The one-hot vector representing the center word to be predicted is:
###Code
y_array
###Output
_____no_output_____
###Markdown
Now convert these vectors into matrices (or 2D arrays) to be able to perform matrix multiplication on the right types of objects, as explained above.
###Code
x = x_array.copy()
x.shape = (V, 1)
print('x')
print(x)
print()
y = y_array.copy()
y.shape = (V, 1)
print('y')
print(y)
###Output
_____no_output_____
###Markdown
Values of the hidden layerNow that you have initialized all the variables that you need for forward propagation, you can calculate the values of the hidden layer using the following formulas:\begin{align} \mathbf{z_1} = \mathbf{W_1}\mathbf{x} + \mathbf{b_1} \tag{1} \\ \mathbf{h} = \mathrm{ReLU}(\mathbf{z_1}) \tag{2} \\\end{align}First, you can calculate the value of $\mathbf{z_1}$.
###Code
z1 = np.dot(W1, x) + b1
###Output
_____no_output_____
###Markdown
> `np.dot` is numpy's function for matrix multiplication.As expected you get an $N$ by 1 matrix, or column vector with $N$ elements, where $N$ is equal to the embedding size, which is 3 in this example.
###Code
z1
###Output
_____no_output_____
###Markdown
You can now take the ReLU of $\mathbf{z_1}$ to get $\mathbf{h}$, the vector with the values of the hidden layer.
###Code
h = relu(z1)
h
###Output
_____no_output_____
###Markdown
Applying ReLU means that the negative element of $\mathbf{z_1}$ has been replaced with a zero. Values of the output layerHere are the formulas you need to calculate the values of the output layer, represented by the vector $\mathbf{\hat y}$:\begin{align} \mathbf{z_2} &= \mathbf{W_2}\mathbf{h} + \mathbf{b_2} \tag{3} \\ \mathbf{\hat y} &= \mathrm{softmax}(\mathbf{z_2}) \tag{4} \\\end{align}**First, calculate $\mathbf{z_2}$.**
###Code
# BEGIN your code here
z2 = np.dot(W2, h) + b2
# END your code here
z2
###Output
_____no_output_____
###Markdown
Expected output: array([[-0.31973737], [-0.28125477], [-0.09838369], [-0.33512159], [-0.19919612]]) This is a $V$ by 1 matrix, where $V$ is the size of the vocabulary, which is 5 in this example. **Now calculate the value of $\mathbf{\hat y}$.**
###Code
# BEGIN your code here
y_hat = softmax(z2)
# END your code here
y_hat
###Output
_____no_output_____
###Markdown
Expected output: array([[0.18519074], [0.19245626], [0.23107446], [0.18236353], [0.20891502]]) As you've performed the calculations with random matrices and vectors (apart from the input vector), the output of the neural network is essentially random at this point. The learning process will adjust the weights and biases to match the actual targets better.**That being said, what word did the neural network predict?** SolutionThe neural network predicted the word "happy": the largest element of $\mathbf{\hat y}$ is the third one, and the third word of the vocabulary is "happy".Here's how you could implement this in Python:print(Ind2word[np.argmax(y_hat)]) Well done, you've completed the forward propagation phase! Cross-entropy lossNow that you have the network's prediction, you can calculate the cross-entropy loss to determine how accurate the prediction was compared to the actual target.> Remember that you are working on a single training example, not on a batch of examples, which is why you are using *loss* and not *cost*, which is the generalized form of loss.First let's recall what the prediction was.
###Code
y_hat
###Output
_____no_output_____
###Markdown
And the actual target value is:
###Code
y
###Output
_____no_output_____
###Markdown
The formula for cross-entropy loss is:$$ J=-\sum\limits_{k=1}^{V}y_k\log{\hat{y}_k} \tag{6}$$**Implement the cross-entropy loss function.**Here are a some hints if you're stuck. Hint 1 To multiply two numpy matrices (such as y and y_hat) element-wise, you can simply use the * operator. Hint 2Once you have a vector equal to the element-wise multiplication of y and y_hat, you can use np.sum to calculate the sum of the elements of this vector.
###Code
def cross_entropy_loss(y_predicted, y_actual):
    # BEGIN your code here
    # use the function's parameters rather than the global y_hat and y
    loss = np.sum(-np.log(y_predicted) * y_actual)
    # END your code here
    return loss
###Output
_____no_output_____
###Markdown
**Now use this function to calculate the loss with the actual values of $\mathbf{y}$ and $\mathbf{\hat y}$.**
###Code
cross_entropy_loss(y_hat, y)
###Output
_____no_output_____
###Markdown
Expected output: 1.4650152923611106 This value is neither good nor bad, which is expected as the neural network hasn't learned anything yet.The actual learning will start during the next phase: backpropagation. BackpropagationThe formulas that you will implement for backpropagation are the following.\begin{align} \frac{\partial J}{\partial \mathbf{W_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}\\ \frac{\partial J}{\partial \mathbf{W_2}} &= (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}\\ \frac{\partial J}{\partial \mathbf{b_1}} &= \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}\\ \frac{\partial J}{\partial \mathbf{b_2}} &= \mathbf{\hat{y}} - \mathbf{y} \tag{10}\end{align}> Note: these formulas are slightly simplified compared to the ones in the lecture as you're working on a single training example, whereas the lecture provided the formulas for a batch of examples. In the assignment you'll be implementing the latter.Let's start with an easy one.**Calculate the partial derivative of the loss function with respect to $\mathbf{b_2}$, and store the result in `grad_b2`.**$$\frac{\partial J}{\partial \mathbf{b_2}} = \mathbf{\hat{y}} - \mathbf{y} \tag{10}$$
###Code
# BEGIN your code here
grad_b2 = y_hat - y
# END your code here
grad_b2
###Output
_____no_output_____
###Markdown
Expected output: array([[ 0.18519074], [ 0.19245626], [-0.76892554], [ 0.18236353], [ 0.20891502]]) **Next, calculate the partial derivative of the loss function with respect to $\mathbf{W_2}$, and store the result in `grad_W2`.**$$\frac{\partial J}{\partial \mathbf{W_2}} = (\mathbf{\hat{y}} - \mathbf{y})\mathbf{h^\top} \tag{8}$$> Hint: use `.T` to get a transposed matrix, e.g. `h.T` returns $\mathbf{h^\top}$.
###Code
# BEGIN your code here
grad_W2 = np.dot(y_hat - y, h.T)
# END your code here
grad_W2
###Output
_____no_output_____
###Markdown
Expected output: array([[ 0.06756476, 0.11798563, 0. ], [ 0.0702155 , 0.12261452, 0. ], [-0.28053384, -0.48988499, 0. ], [ 0.06653328, 0.1161844 , 0. ], [ 0.07622029, 0.13310045, 0. ]]) **Now calculate the partial derivative with respect to $\mathbf{b_1}$ and store the result in `grad_b1`.**$$\frac{\partial J}{\partial \mathbf{b_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right ) \tag{9}$$
###Code
# BEGIN your code here
grad_b1 = relu(np.dot(W2.T, y_hat - y))
# END your code here
grad_b1
###Output
_____no_output_____
###Markdown
Expected output: array([[0. ], [0. ], [0.17045858]]) **Finally, calculate the partial derivative of the loss with respect to $\mathbf{W_1}$, and store it in `grad_W1`.**$$\frac{\partial J}{\partial \mathbf{W_1}} = \rm{ReLU}\left ( \mathbf{W_2^\top} (\mathbf{\hat{y}} - \mathbf{y})\right )\mathbf{x}^\top \tag{7}$$
###Code
# BEGIN your code here
grad_W1 = np.dot(relu(np.dot(W2.T, y_hat - y)), x.T)
# END your code here
grad_W1
###Output
_____no_output_____
###Markdown
Expected output: array([[0. , 0. , 0. , 0. , 0. ], [0. , 0. , 0. , 0. , 0. ], [0.04261464, 0.04261464, 0. , 0.08522929, 0. ]]) Before moving on to gradient descent, double-check that all the matrices have the expected dimensions.
###Code
# BEGIN your code here
print(f'V (vocabulary size): {V}')
print(f'N (embedding size / size of the hidden layer): {N}')
print(f'size of grad_W1: {grad_W1.shape} (NxV)')
print(f'size of grad_b1: {grad_b1.shape} (Nx1)')
print(f'size of grad_W2: {grad_W2.shape} (VxN)')
print(f'size of grad_b2: {grad_b2.shape} (Vx1)')
# END your code here
###Output
_____no_output_____
###Markdown
Gradient descentDuring the gradient descent phase, you will update the weights and biases by subtracting $\alpha$ times the gradient from the original matrices and vectors, using the following formulas.\begin{align} \mathbf{W_1} &:= \mathbf{W_1} - \alpha \frac{\partial J}{\partial \mathbf{W_1}} \tag{11}\\ \mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\ \mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\ \mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\\end{align}First, let's set a value for $\alpha$.
###Code
alpha = 0.03
###Output
_____no_output_____
###Markdown
The updated weight matrix $\mathbf{W_1}$ will be:
###Code
W1_new = W1 - alpha * grad_W1
###Output
_____no_output_____
###Markdown
Let's compare the previous and new values of $\mathbf{W_1}$:
###Code
print('old value of W1:')
print(W1)
print()
print('new value of W1:')
print(W1_new)
###Output
_____no_output_____
###Markdown
The difference is very subtle (hint: take a closer look at the last row), which is why it takes a fair amount of iterations to train the neural network until it reaches optimal weights and biases starting from random values.**Now calculate the new values of $\mathbf{W_2}$ (to be stored in `W2_new`), $\mathbf{b_1}$ (in `b1_new`), and $\mathbf{b_2}$ (in `b2_new`).**\begin{align} \mathbf{W_2} &:= \mathbf{W_2} - \alpha \frac{\partial J}{\partial \mathbf{W_2}} \tag{12}\\ \mathbf{b_1} &:= \mathbf{b_1} - \alpha \frac{\partial J}{\partial \mathbf{b_1}} \tag{13}\\ \mathbf{b_2} &:= \mathbf{b_2} - \alpha \frac{\partial J}{\partial \mathbf{b_2}} \tag{14}\\\end{align}
###Code
# BEGIN your code here
W2_new = W2 - alpha * grad_W2
b1_new = b1 - alpha * grad_b1
b2_new = b2 - alpha * grad_b2
# END your code here
print('W2_new')
print(W2_new)
print()
print('b1_new')
print(b1_new)
print()
print('b2_new')
print(b2_new)
###Output
_____no_output_____
###Markdown
Expected output: W2_new [[-0.22384758 -0.43362588 0.13310965] [ 0.08265956 0.0775535 0.1772054 ] [ 0.19557112 -0.04637608 -0.1790735 ] [ 0.06855622 -0.02363691 0.36107434] [ 0.33251813 -0.3982269 -0.43959196]] b1_new [[ 0.09688219] [ 0.29239497] [-0.27875802]] b2_new [[ 0.02964508] [-0.36970753] [-0.10468778] [-0.35349417] [-0.0764456 ]] Congratulations, you have completed one iteration of training using one training example!You'll need many more iterations to fully train the neural network, and you can optimize the learning process by training on batches of examples, as described in the lecture. You will get to do this during this week's assignment. Extracting word embedding vectorsOnce you have finished training the neural network, you have three options to get word embedding vectors for the words of your vocabulary, based on the weight matrices $\mathbf{W_1}$ and/or $\mathbf{W_2}$. Option 1: extract embedding vectors from $\mathbf{W_1}$The first option is to take the columns of $\mathbf{W_1}$ as the embedding vectors of the words of the vocabulary, using the same order of the words as for the input and output vectors.> Note: in this practice notebook the values of the word embedding vectors are meaningless after a single iteration with just one training example, but here's how you would proceed after the training process is complete.For example $\mathbf{W_1}$ is this matrix:
###Code
W1
###Output
_____no_output_____
###Markdown
The first column, which is a 3-element vector, is the embedding vector of the first word of your vocabulary. The second column is the word embedding vector for the second word, and so on.The first, second, etc. words are ordered as follows.
###Code
for i in range(V):
print(Ind2word[i])
###Output
_____no_output_____
###Markdown
So the word embedding vectors corresponding to each word are:
###Code
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W1[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
###Output
_____no_output_____
###Markdown
Option 2: extract embedding vectors from $\mathbf{W_2}$ The second option is to take $\mathbf{W_2}$ transposed, and take its columns as the word embedding vectors just like you did for $\mathbf{W_1}$.
###Code
W2.T
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W2.T[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
###Output
_____no_output_____
###Markdown
Option 3: extract embedding vectors from $\mathbf{W_1}$ and $\mathbf{W_2}$ The third option, which is the one you will use in this week's assignment, uses the average of $\mathbf{W_1}$ and $\mathbf{W_2^\top}$. **Calculate the average of $\mathbf{W_1}$ and $\mathbf{W_2^\top}$, and store the result in `W3`.**
###Code
# BEGIN your code here
W3 = (W1+W2.T)/2
# END your code here
W3
###Output
_____no_output_____
###Markdown
Expected output: array([[ 0.09752647, 0.08665397, -0.02389858, 0.1768788 , 0.3764029 ], [-0.05136565, 0.15459171, -0.15029611, 0.19580601, -0.31673866], [ 0.19974284, -0.03063173, -0.27839106, 0.12353994, -0.04975536]]) Extracting the word embedding vectors works just like the two previous options, by taking the columns of the matrix you've just created.
###Code
# loop through each word of the vocabulary
for word in word2Ind:
# extract the column corresponding to the index of the word in the vocabulary
word_embedding_vector = W3[:, word2Ind[word]]
print(f'{word}: {word_embedding_vector}')
###Output
_____no_output_____ |
InstaBot 1.ipynb | ###Markdown
Answer 1.1
###Code
#logging_in
# imports needed across this notebook (explicit waits, HTML parsing and delays)
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC
from bs4 import BeautifulSoup
import time
driver=webdriver.Chrome("/Users/User/Web drivers/chromedriver")
driver.get("https://www.instagram.com/")
wait = WebDriverWait(driver, 10)
user= wait.until(EC.presence_of_element_located((By.NAME,"username")))
password=driver.find_element_by_name("password")
#fill up username and password
user.send_keys("SAMPLE USERNAME")
password.send_keys("SAMPLE PASSWORD")
login=driver.find_element_by_xpath('//button[@type="submit"]/div')
login.submit()
#turning off notification pop up which comes on logging in
notificatn_off=driver.find_element_by_class_name("HoLwm")
notificatn_off.click()
driver.maximize_window()
###Output
_____no_output_____
###Markdown
Answer 2.1
###Code
#fetching names of the Instagram Handles that are displayed in list after typing “food”
search=driver.find_element_by_xpath('//input[contains(@class,"XTCLo")]')
search.send_keys("food")
time.sleep(3)
l=driver.find_elements_by_class_name("Ap253")
print("The names of the Instagram Handles that are displayed in list after typing “food” are :--->")
for i in l:
print(i.get_attribute('innerHTML').strip('#'))
###Output
The names of the Instagram Handles that are displayed in list after typing “food” are :--->
foodtalkindia
food_happened
__journey__with__food__
dilsefoodie
your_scrolling_stopper
_foodfiestaa_
food
delhifoodguide
food_lunatic
uzbek_food
food_belly11
1_colours_1
delhieater
foodinsider
food_and_makeup_lover
buzzfeedfood
indian_food_freak
indianfood_lovers
foodofchennai
pune_food_blogger
foodiesdelhite
lefoodtour
dillifoodjunkie
fityetfoodie
hungrypixel
mumbaifoodie
foodnetwork
foodlovers96
food
foodtalkprivilege
foodelhi
mumbaifoodjunkie
foodieveggie
Food Station
thehungrymumbaikar
yum_crunch
Food Square
food_travel_etc
foodporn
food.creative
foodallday365
what.blue.eats
foodsnobtv
foodiliciousmoments
foodiesofguwahati
foodiediana
delhi_foodaholic
salonikukreja
karanfoodfanatic
foodtalkbangalore
the_foodies_journal
foodisnirvana
foodphotography
bangalore_foodjunkies
eatagii
concentrate_on_food
###Markdown
Answer 3.1
###Code
#searching and opening so_delhi page
search=driver.find_element_by_xpath('//input[contains(@class,"XTCLo")]')
search.clear()
search.send_keys("So Delhi")
time.sleep(3)
so_delhi=driver.find_element_by_class_name("Ap253")
so_delhi.click()
###Output
_____no_output_____
###Markdown
Answer 4.1
###Code
#going back to homepage
driver.back()
#searching and opening so_delhi page
search=driver.find_element_by_xpath('//input[contains(@class,"XTCLo")]')
search.send_keys("So Delhi")
time.sleep(3)
so_delhi=driver.find_element_by_class_name("Ap253")
so_delhi.click()
###Output
_____no_output_____
###Markdown
Answer 4.2
###Code
#following so_delhi page
follow_so_delhi=driver.find_element_by_xpath('//button[contains(@class,"_5f5mN")]')
follow_so_delhi.click()
print("Followed So_Delhi")
###Output
Followed So_Delhi
###Markdown
Answer 4.3
###Code
#unfollowing so_delhi page
unfollow_so_delhi=driver.find_element_by_xpath('//button[contains(@class,"_5f5mN")]')
unfollow_so_delhi.click()
unfollow=driver.find_element_by_xpath('//button[contains(@class,"aOOlW")]')
unfollow.click()
print("Unfollowed So_Delhi")
###Output
Unfollowed So_Delhi
###Markdown
Answer 5.1
###Code
#going back to homepage
driver.back()
#searching and opening dilsefoodie page
search=driver.find_element_by_xpath('//input[contains(@class,"XTCLo")]')
search.send_keys("dilsefoodie")
time.sleep(2)
wait = WebDriverWait(driver, 10)
dilsefoodie= wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
dilsefoodie.click()
#opening recent post
wait = WebDriverWait(driver, 10)
to_be_liked_dilsefoodie=wait.until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"v1Nh3 kIKUG")]/a')))
to_be_liked_dilsefoodie.click()
#clicking like and repeating it for first 30 posts
for i in range(30):
    wait = WebDriverWait(driver, 10)
    like = wait.until(EC.presence_of_element_located((By.XPATH,'//div[@class="QBdPU "]/span')))
    like.click()
    # avoid shadowing the built-in next()
    next_post = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
    next_post.click()
print("Liked the first 30 posts of dilsefoodie")
###Output
Liked the first 30 posts of dilsefoodie
###Markdown
Answer 5.2
###Code
#going back to dilsefoodie page
driver.get("https://www.instagram.com/dilsefoodie/")
#opening recent post
wait = WebDriverWait(driver, 10)
to_be_unliked_dilsefoodie=wait.until(EC.element_to_be_clickable((By.XPATH,'//div[contains(@class,"v1Nh3 kIKUG")]/a')))
to_be_unliked_dilsefoodie.click()
#clicking unlike and repeating it for first 30 posts
for i in range(30):
    wait = WebDriverWait(driver, 10)
    unlike = wait.until(EC.presence_of_element_located((By.XPATH,'//div[@class="QBdPU "]/span')))
    unlike.click()
    # avoid shadowing the built-in next()
    next_post = driver.find_element_by_class_name('coreSpriteRightPaginationArrow')
    next_post.click()
print("UnLiked the first 30 posts of dilsefoodie")
###Output
UnLiked the first 30 posts of dilsefoodie
###Markdown
Answer 6.1
###Code
#going back to homepage
driver.get("https://www.instagram.com/")
#searching and opening the profile
def search_and_open(profile):
search=driver.find_element_by_xpath('//input[contains(@class,"XTCLo")]')
search.clear()
search.send_keys(profile)
wait = WebDriverWait(driver, 20)
open_page= wait.until(EC.presence_of_element_located((By.CLASS_NAME,"Ap253")))
open_page.click()
#getting top 500 follower's username
def get_username(profile):
search_and_open(profile)
#clicking on followers button
wait = WebDriverWait(driver, 20)
followers_button = wait.until(EC.presence_of_element_located((By.PARTIAL_LINK_TEXT,"followers")))
followers_button.click()
time.sleep(4)
userlist = []
#finding element for the followers popup
fBody = driver.find_element_by_xpath("//div[@class='isgrP']")
#getting 500 usernames
while len(userlist) < 500:
#extracting names using BeautifulSoup
data = driver.page_source
html_data = BeautifulSoup(data, 'html.parser')
ul = html_data.find_all(class_ = 'd7ByH')
for u in ul[len(userlist):]:
if u.a.string not in userlist:
userlist.append(u.a.string)
#scrolling
driver.execute_script('arguments[0].scrollTop = arguments[0].scrollTop + arguments[0].offsetHeight;', fBody)
return userlist
users = get_username('foodtalkindia')
print('The usernames of the first 500 followers of foodtalkindia are:')
for i in range(500):
print(i+1, users[i])
#going back to homepage
driver.get("https://www.instagram.com/")
#so_delhi
users_sodelhi = get_username('sodelhi')
print('The usernames of the first 500 followers of sodelhi are:')
for i in range(500):
print(i+1, users_sodelhi[i])
###Output
The usernames of the first 500 followers of sodelhi are:
1 fatimaaaaa_kh
2 pragsjosh
3 aslamba13
4 jack_your_thoughts
5 afzal5430
6 itsmfahad
7 niks1320
8 mii_tube_
9 talkaboutdelicious
10 pr_rupa_mishra
11 ridhiisehgal
12 vaibhavgar97
13 chillr05
14 nailzonebykanika
15 hxnia_khxn
16 lkanma
17 vikaschauhan_7
18 kirtigoyall
19 sharnehaa
20 quotesby0u
21 that.brown.girl__
22 sami.akhtar786
23 fooodgrams
24 sourabhsharma2048
25 rohanjeenwal_0111
26 morganr44
27 sparkling_gurlll
28 heyniharika
29 arushi__narang
30 shemingleshermind
31 rachi_goel
32 himanshutibre
33 null5216
34 the_odd_bud
35 souledcraft
36 nehagandhilatika
37 veer_ji_malai_chaap_wale_
38 baani.gupta2324
39 royal_banna_abhi
40 naturelover2731
41 sadhkarishma
42 aanandi17
43 sushantarora19
44 s.h.u.b.h9
45 amri.ta7118
46 ritugosain
47 m.ashfaq05
48 koushikkumarray9
49 adityazsupertramp
50 dr_naved
51 rathore.shweta_
52 _faiza_abbasi_
53 prachi1kiara
54 fun_and_foodstation
55 cute_arru988
56 shubhsood
57 _manvi_saini_
58 dr.sarimulhoque
59 loving_raj_13.9
60 theraastar
61 upcycling_fashion_thinker
62 yatayat_seva
63 sonu7947sonu
64 play_boy_raju_royal
65 salz_saloni
66 muditagrawal_07
67 k_salonii
68 be_like_100rb
69 vicky_vibhav
70 deadmauw5
71 _whims_n_craze_
72 vrindaaaaa17
73 rchawla258
74 sunnypanwar269
75 funaphor
76 kapoor2ranbir
77 chhavi0810
78 ajju786ajju786ajju
79 farmersfamily_market
80 shreesha___pande
81 reenaya_27
82 sabrinamargetic
83 thetotos7
84 raheja_jaya
85 slayerr_singh
86 08.03.2020_____
87 pictureclassics
88 theprint_company
89 iamrammaurya
90 manimie
91 ro.hit4576
92 farhadfarhad920
93 1212jayesh
94 caterersshubh.kanpur
95 kamboj7117
96 trueliving1444
97 thedescanttypewriter
98 thejatinpopli
99 shobhit_mehrotra_01
100 shilpyrauniyar
101 makeoversbysaum_14
102 zivya__noor__shah
103 sehaj_chabba
104 latikasingh92107
105 shefaliarora1996
106 keshavkapur_
107 teena_8395
108 divya_111111
109 abhishek03g
110 b_a_i_s_a
111 winnie__june
112 theeverywhereist_v
113 real_pujasharma09
114 himanshu_kohlii_
115 tanmayswagger
116 cookbook.india
117 vermamohit11
118 chaitraveller
119 prakhar_pd
120 bhatia_mohit2210
121 abhishekbhati1111
122 insta_tyagi
123 sameersaifi645
124 llx_wakil_xll
125 apoorvasharma99
126 jahnvi_sahni
127 annuverma130
128 soul_journey_1302
129 manhasasaf3
130 tarpanaggarwal
131 neha_ha_ha3
132 harshsuhane
133 queendanny0035
134 _sinha_saurabh_
135 talesbymales_
136 ayushsingh.10
137 meht.a3908
138 nanzie23
139 mudita_lakhanpal
140 rohan_helpus_
141 krisi.sawhney
142 _chetna_prajapat
143 zaid_farooqui45
144 gato.grrl
145 _aahna_singh
146 vishal_kr.singh_
147 _anzika
148 saiyam_1501
149 __tannu18__
150 birafit
151 karanthakor3432
152 prateek06_
153 mr_man_on_wheel__
154 himani_sen
155 phemoushwor
156 afreenshakeel2
157 infiniteshadesofpatriarchy
158 chetan.2828
159 goutamradhika
160 contenuum.in
161 tejashmishra25
162 bb2332020
163 ayushipurbey
164 sinfultreats2013
165 _ankita_333_
166 travel.marble
167 furtasticindia
168 sg5482
169 mtenterprises_
170 yadav_babina
171 im_rafiyat
172 nikisareen
173 julme6108
174 jaisairam_004
175 chodry_nikhil
176 rachnaj_84
177 mydogsultan
178 simranjeetsgill
179 rahulthekkel
180 foodie_crush26
181 abhaygupta229
182 kahkashan950
183 _mr_vaibhav__
184 ritesh.lohat.988
185 tanya.sharma1233
186 iam_ankurgupta
187 meenarani917
188 colins_book_shelf
189 sheoran_tamanna
190 nitinbharara
191 kreafurnitureofficial
192 ititrishla
193 kamal_mudgal
194 hpal90455
195 azzykhan30
196 jepis_cake_bakery_shop
197 divya13.07
198 aakash_01garg
199 nikita.sharma98
200 preeti_gautam
201 kanishq_basoya_dellhii0001
202 divyanshumehta_
203 ifam_roy7465
204 ritubhatiain
205 aryanofficialid
206 parshavvv
207 lyf_phase
208 gangseytrash
209 aashbeautyshop
210 tanmaytripathi
211 aashika674
212 ch.sanskaar_singh_nirbaan
213 the_professorrr_
214 asadriz
215 romanticboy_sam
216 imashishgoswami
217 mannishaa_
218 tusharr._.r
219 unique_purnima
220 iamsamriddhi_
221 bong_discover
222 tiamettato
223 o_thatguy_
224 sameen_hussain19
225 harleen59
226 craftisfaction
227 ishita.naskar
228 deepti_0708
229 karmvir9412342926
230 jyotilall
231 nilangsonii
232 urvishsambre
233 the_heart_of_iron
234 ayushrma
235 faisalabbasi99112516
236 mridula_vats
237 himanshuon
238 allurisriram
239 chattrapalsingh_worldaroundme
240 fatherofmongrels
241 sanyam_aggarwal169
242 devil_of_angel1
243 socoimbatoreofficial
244 viraj_25_08
245 so.nagpur
246 muskanrathore001
247 shubhamprasadd
248 bakenetic
249 rokysantu
250 laijou_brahma
251 rohit03_09
252 tikki_singh
253 daformoment.x
254 divyanshaasachdevaa
255 gouravkishor_
256 _.ppriyankaa._
257 kharsheen
258 jjprateeksha
259 kshitijmishra_
260 rahuldogra7
261 rakkshet
262 purvi_rastogi
263 ridhi.virmani
264 aryanx.xd
265 yamu_sharma_
266 d_acharya1610
267 cxndystuff._
268 rishabhmajumdar
269 the__pluviophile
270 ann.iykara
271 kamalaggarwal92
272 goelaayushi
273 adiseth
274 rhinitis.gal
275 rajatbhallaofficial
276 nishuudubeyy
277 kunaljain_63
278 arya.anik
279 _anand_pallavi_1219
280 palakgupta862
281 martabaan_madhumodi
282 dbhoriya
283 ehsaanqureshi798
284 aruba944
285 nishchayjain_
286 pulsemedicalcentregk2
287 juhi__verma
288 riyasid1312
289 21stpuneet
290 jinsonraju
291 anam_siddiqui971
292 akskhurana_
293 sparsh973
294 prapya__
295 mycityunnao2020
296 pardeep_bishnoi_29029
297 2022pramodsingh
298 theshantanutyagi
299 anjalisaxena124
300 nishthaaseth
301 vvipchorabaadshah
302 ambragk2
303 variety1430
304 solo_gil
305 none_of_your_friends_
306 aastharora1603
307 akashkawatra04
308 the__karma__
309 jhanvikalra
310 hitours
311 ishaaayayyaa
312 jahnvigupta1_
313 aakashyadav_2004
314 himanipandey27
315 i_am_yash_pandey
316 givni.in
317 solofall17
318 mallikasjewellery
319 foodandflamenco
320 sanjana__30__
321 preea1983
322 shubham.shukla69
323 jhilik__mondal
324 ___rafique_
325 __theblogger_
326 umesh.suri.11
327 v.a.n.s.h.i_12
328 _bhavyaxoxo_
329 aashu1102
330 n.das.73113
331 abhi_yaduvanshi_rao
332 harmonyinnmeerut
333 _iamronny__
334 himanshu_agrawal_
335 the.whitewolff
336 aggarwal.ayushi24
337 kavi1109____
338 surya_rajawat_1
339 hashimraza388
340 _sheena.bhatia_
341 ishi_2504
342 vipulmaru47
343 __pverma__
344 parveengehl
345 aarti_swarnakar
346 _aashi_1297
347 sun.jay_j
348 nausheen_mirza04
349 sarah.kayum
350 _avinash_pathak_10
351 imsupersingha
352 rahulkansal9
353 md_arshu04
354 official_piyalidey
355 aalok_shastri10
356 wtfaanchal
357 photographyidea4
358 skiny_as_hell
359 akakumar6449
360 explorer.chiru
361 shalini3549
362 yummoments
363 minimalmusafir
364 aged___wine
365 _hamiltonborah_
366 thep.erfectguy
367 lipsamulia
368 consciously_yours_john
369 abhishek._bidhuri
370 ra_fun_
371 manav_._makkar
372 vr_vibe_
373 mangopeople123
374 kaur6_officiall
375 enrichlifespan
376 priyam_khandelwal
377 dabas_manju
378 yashdahiya21
379 kaavinisoni
380 ziyanhoney
381 _bibliophilist_
382 lazeez16
383 reetrathore
384 palak_bh
385 anmol._.saluja
386 dhaba_by_mummy
387 tempting_thali
388 aausaey
389 vibha.anand30
390 harshit_27_10
391 fabricon2020
392 sara.singh.in.india
393 jordan.jordee
394 amanescaping
395 bake_wine
396 muskan_chadha
397 justrameeza1
398 rashi_theblogger
399 aviralllllll
400 ishitachauhann_
401 so.kanpur
402 pranitaa_1010
403 unify.times
404 __mansiie
405 perks.with.prince
406 the_._free_soul
407 manasvik10
408 syedhassan.ahmed
409 avantikamiishra
410 payal_malethia
411 siayushh
412 riyaa.raghav
413 lvlivlaff
414 welcometoagworld
415 mr_ritwik
416 shalini6577
417 kara_vaan
418 ritusharma5115
419 ak028king
420 kuldeep_ydv01
421 wandering.singh
422 amitverma4978
423 anil_official_12
424 laubongolotika
425 abhinav_kanaujia
426 __riyaaaaa__
427 _.sona__
428 __asmitaaaa__
429 prince.kalra5
430 lakshaysharma462
431 thedesistuffs
432 babumoshai69
433 salmannahmadd
434 avichal_16
435 ancient___mariner
436 satakshigarg21
437 world_without_wants
438 vinayak0432
439 miss.chauhan18
440 nikhil93
441 kawall__
442 tanyahuja
443 shagunkhanna_
444 abimani285
445 puja_09samanta
446 saitano_ka_godfather
447 sanjanasagwekar
448 __ajmaj_khan__
449 suhaschef
450 reswalchanchal
451 mihirtyagi007
452 amritlikesdecency
453 delhilad
454 shiva.wadhwani11
455 arorak10
456 vishalsharma.7200
457 avinash_75
458 ojha3481
459 houseeofstoriess
460 niveditakhandare7
461 stoked.kombucha
462 imgursahib
463 rai__archana
464 nobita.hu.me_
465 ishika_0330
466 chetanoberoi13
467 _niteshgupta7835_
468 jilanifaiza
469 satender3699
470 king_mahmoudy_video
471 deepak11_06
472 priya.k.mehrotra
473 theyogi_gang
474 monug_singh
475 vivek_patel_001
476 scan.dal49
477 homedaily.in
478 niranjan_m._42
479 ratika.wadhwa
480 wajeehamemon30
481 __s_h_a_i_k____
482 rajash185
483 lavi__arora__
484 apurvanand_
485 chalantika_s_food_journey
486 iamjeoncharu
487 shikhaa__14
488 numangolden
489 arjitaneja_forever
490 parinieta_ahuja
491 travel_buffs_pm
492 greenearthjaipur
493 dance_is_life_444
494 xx.logophile.xx
495 amazingxperience_india
496 maheshkumar272
497 lilac55_
498 mad_picasso95
499 homeco2020
500 usmankhan379
###Markdown
Answer 6.2
###Code
#going back to homepage
driver.get("https://www.instagram.com/")
def not_follow_me(profile):
search_and_open(profile)
wait = WebDriverWait(driver, 10)
# Finding followers of “foodtalkindia” that I am following
followed_by = wait.until(EC.element_to_be_clickable((By.XPATH, '//span[contains(@class, "tc8A9")]')))
followed_by.click()
fl = wait.until(EC.presence_of_all_elements_located((By.XPATH, '//a[contains(@class, "FPmhX")]')))
data = driver.page_source
html_data = BeautifulSoup(data, 'html.parser')
followers_list = []
fll = html_data.find_all(class_ = 'FPmhX')
for f in fll:
followers_list.append(f.string)
#Finding all my followers
driver.back()
my_profile = wait.until(EC.element_to_be_clickable((By.XPATH, '//span[contains(@class, "_2dbep qNELH")]')))
my_profile.click()
profile = wait.until(EC.element_to_be_clickable((By.XPATH, '//a[contains(@class, "-qQT3")]')))
profile.click()
time.sleep(5)
no_of_followers = wait.until(EC.presence_of_element_located((By.XPATH, '//a[contains(@class, "-nal3 ")]')))
data = driver.page_source
html_data = BeautifulSoup(data, 'html.parser')
n = html_data.find_all(class_ = '-nal3')
num = int(n[1].span.string)
my_followers = wait.until(EC.element_to_be_clickable((By.XPATH, '//a[contains(@class, "-nal3 ")]')))
my_followers.click()
userlist = []
#finding element for the followers name popup
fBody = driver.find_element_by_xpath("//div[@class='isgrP']")
while len(userlist) < num:
#extracting names using BeautifulSoup
data = driver.page_source
html_data = BeautifulSoup(data, 'html.parser')
ul = html_data.find_all(class_ = 'd7ByH')
for u in ul[len(userlist):]:
if u.a.string not in userlist:
userlist.append(u.a.string)
#scrolling
driver.execute_script('arguments[0].scrollTop = arguments[0].scrollTop + arguments[0].offsetHeight;', fBody)
followers_list = set(followers_list)
userlist = set(userlist)
#Finding All the followers of “foodtalkindia” that I am following but those who don’t follow me
ans = followers_list - userlist
print('All the followers of “foodtalkindia” that I am following but those who don’t follow me are:')
for user in ans:
print(user)
not_follow_me('foodtalkindia')
###Output
All the followers of “foodtalkindia” that I am following but those who don’t follow me are:
your_scrolling_stopper
shreya241990
###Markdown
Answer 7.1
###Code
#going back to homepage
driver.get("https://www.instagram.com/")
from selenium.common.exceptions import TimeoutException
def get_story(profile):
search_and_open(profile)
wait = WebDriverWait(driver, 10)
time.sleep(5)
try:
#finding element for the story
story = wait.until(EC.element_to_be_clickable((By.XPATH, '//div[contains(@class, "RR-M- h5uC0")]')))
data = driver.page_source
html_data = BeautifulSoup(data, 'html.parser')
c = html_data.find(class_ = 'CfWVH')
h = int(c['height'])
w = int(c['width'])
#height and width different for the story viewed and not viewed
if h == 208 and w == 208:
print("You have already seen the story")
elif h == 210 and w == 210:
print("You have not seen the story")
story.click()
print("You are now seening the story")
except TimeoutException:
print("The user has no story")
driver.get("https://www.instagram.com/")
time.sleep(2)
find_story=get_story("coding.ninjas")
###Output
You have already seen the story
|
fitting_spectral_line_minimize.ipynb | ###Markdown
Fitting an "artificial" spectral lineWe want to fit an artificial absorption line. We construct a simple model composed of a line representing the continuum and a Gaussian dip representing the feature itself that (locally) fits the spectrum. Our model therefore has 5 parameters: slope ($m$), intercept ($b$), central wavelength ($\lambda_0$), width ($\sigma$), and strength ($C$).This model is a *generative* model, which means it can (artificially) generate observations. We want to compare the model's spectrum with the observed spectrum. The model flux entering the likelihood $P(\{x_i\}\ |\ m, b, \lambda_0, \sigma, C)$ is:$$ x_{\rm model}(\lambda) = m\lambda + b - C \exp\left[- \frac{(\lambda-\lambda_0)^2}{\sigma^2} \right] $$
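Assuming independent Gaussian noise with a constant standard deviation $\sigma_n$ on each flux point (an assumption of this sketch, consistent with how the fake data are generated below), the negative log-likelihood reduces, up to constants, to a sum of squared residuals:$$ -\ln P(\{x_i\}\,|\,m, b, \lambda_0, \sigma, C) = \frac{1}{2\sigma_n^2}\sum_i \left[x_i - x_{\rm model}(\lambda_i)\right]^2 + \mathrm{const} $$This is why the fit below can simply minimize the sum of squared residuals.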
###Code
# Importing Libraries
import numpy as np
import matplotlib.pyplot as plt
from matplotlib import gridspec
from scipy import stats
from scipy.optimize import minimize, brent
from decimal import Decimal
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
# CREATING THE ARTIFICIAL LINE
# Input model values
m = -0.2
b = 2000.0
lamb_0 = 6563.
sigma = 1.5
C = 5.0
param = [m,b,lamb_0,sigma,C]
# The "theoretical" spectrum
lamb = np.linspace(6550, 6570, 100)
flux = m*lamb + b - C*np.exp(-(lamb-lamb_0)**2 / sigma**2)
plt.plot(lamb,flux, label="Input Model")
# The "real" data with noise
x_i = flux + stats.norm.rvs(0.0, 0.5, len(flux))
plt.scatter(lamb, x_i, marker='.', color='k',label='Observations')
plt.axes().set_xticks(np.linspace(6550, 6570, 6))
plt.xlabel(r"$\lambda$")
plt.ylabel("Flux")
plt.legend()
plt.show()
###Output
('Real parameters ', [-0.2, 2000.0, 6563.0, 1.5, 5.0])
###Markdown
Now the fitting process. We want to find the best fit model parameters for this spectrum. We provide the code for the model and the minimization routine.
###Code
# Instead of maximizing the likelihood (L) we minimize -L
def neg_likelihood(M, lamb, x_i):
m, b, lamb_0, sigma, C = M
x_model = m*lamb + b - C*np.exp(-(lamb-lamb_0)**2 / sigma**2)
    # Sum of squared residuals; for Gaussian noise this is proportional to the
    # negative log-likelihood (up to constants), hence the function name
    sq_residuals = (x_model - x_i)**2
    return np.sum(sq_residuals)
x0 = np.array([-0.05, 1000, 6563, 2.5, 10.0]) # some random parameters
res = minimize(neg_likelihood, x0, args=(lamb, x_i)) # it will find the solution of the above
# function, given some initial guesses by minimizing
print('Best model parameters =',res.x) # these are the best model parameters
print('Real parameters =',param)
# Input model
plt.plot(lamb, flux, label="Input Model")
# Observations
plt.scatter(lamb, x_i, marker='.', color='k')
# Best-fit model
m, b, lamb_0, sigma, C = res.x
x_model = m*lamb + b - C*np.exp(-(lamb-lamb_0)**2 / sigma**2)
plt.plot(lamb, x_model, color='r', alpha=0.5, label="Best fit Model")
plt.legend()
plt.axes().set_xticks(np.linspace(6550, 6570, 6))
plt.xlabel(r"$\lambda$")
plt.ylabel("Flux")
plt.show()
###Output
('Best model parameters =', array([ -1.99938673e-01, 1.99963003e+03, 6.56302073e+03,
1.65977094e+00, 4.79921873e+00]))
('Real parameters =', [-0.2, 2000.0, 6563.0, 1.5, 5.0])
|
notebooks/_Group4-Final_Notebook1-Logistic_Regression.ipynb | ###Markdown
InstructionsPlease refer to [README file details]Here's the docker cmd to run from /notebooks:docker run -dit --rm --name notebook -p 8888:8888 -e JUPYTER_ENABLE_LAB=yes -v "$PWD"/..:/home/jovyan/work ajduncanson/nba-modelling Introduction This notebook is an experiment in building a model that will predict whether a rookie player will last at least 5 years in the league based on his stats.In the National Basketball Association (NBA), a rookie is any player who has never played a game in the NBA until that year. At the end of the season the NBA awards the best rookie with the NBA Rookie of the Year Award.Moving to the NBA league is a big deal for any basketball player. Sports commentators and fans are very excited to follow the start of their careers and guess how they will perform in the future.In this experiment, a LogisticRegression model is used.
###Code
import pandas as pd
import numpy as np
import imblearn
from sklearn.preprocessing import StandardScaler, PolynomialFeatures
import os
import sys
sys.path.append(os.path.abspath('..'))
from src.common_lib import DataReader, NBARawData
from src.common_lib import confusion_matrix, plot_roc, eval_report
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_curve, auc
import matplotlib.pyplot as plt
from joblib import dump
from collections import Counter
import warnings
warnings.simplefilter(action='ignore', category=FutureWarning)
%load_ext autoreload
%autoreload 2
###Output
_____no_output_____
###Markdown
Load the data
###Code
# Instantiate the custom data reader class
data_reader = DataReader()
# Load Raw Train Data
train_df = data_reader.read_data(NBARawData.TRAIN)
# Load Test Raw Data
test_df = data_reader.read_data(NBARawData.TEST)
###Output
_____no_output_____
###Markdown
Class Balance Check on Raw Data
###Code
target_count = train_df["TARGET_5Yrs"].value_counts()
print('Proportion: ', round(target_count[0]/ target_count[1], 2), ":1")
print( Counter(target_count))
data_reader.plot_class_balance(train_df["TARGET_5Yrs"])
###Output
Proportion:  0.2 :1
Counter({6669: 1, 1331: 1})
###Markdown
Scaling with Standard Scaler
###Code
## Scaling
df_cleaned = train_df.copy()
target = df_cleaned.pop('TARGET_5Yrs')
train_df_scaled = data_reader.scale_features_by_standard_scaler(df_cleaned)
train_df_scaled
###Output
_____no_output_____
###Markdown
Select Features
###Code
train_df_scaled['TARGET_5Yrs'] = target
selected_features = data_reader.select_feature_by_correlation(train_df_scaled, ['Id', 'Id_old'])
selected_features
train_df_scaled = train_df_scaled[selected_features]
train_df_scaled.head()
###Output
_____no_output_____
###Markdown
Split Train and Test on Scaled Data with Selected Features
###Code
X_train, X_val, y_train, y_val = data_reader.split_data(train_df_scaled)
### Visualisation Before Sampling Classification Count
print( Counter(y_train))
data_reader.plot_class_balance(y_train)
print(Counter(y_val))
data_reader.plot_class_balance(y_val)
###Output
Counter({1: 1343, 0: 257})
###Markdown
Re-sampling - Over Sampling
###Code
# Resample Train Data
X_train_res, y_train_res = data_reader.resample_data_upsample_smote(X_train, y_train)
X_val_res, y_val_res = data_reader.resample_data_upsample_smote(X_val, y_val)
# Re-plot the target
data_reader.plot_class_balance(y_train_res)
###Output
_____no_output_____
###Markdown
Build The Model - Logistic Regression
###Code
log_reg = LogisticRegression().fit(X_train_res, y_train_res)
###Output
_____no_output_____
###Markdown
Accuracy Test on Train Set
###Code
y_train_prob = log_reg.predict_proba(X_train_res)[:,1]
## Check Accuracy Score
y_pred_train=log_reg.predict(X_train_res)
## Confusion matrix, metrics and AUC plot
eval_report(y_train_res, y_pred_train)
###Output
Confusion Matrix:
pred:0 pred:1
true:0 3610 1716
true:1 1891 3435
Classification Report:
precision recall f1-score support
0 0.66 0.68 0.67 5326
1 0.67 0.64 0.66 5326
accuracy 0.66 10652
macro avg 0.66 0.66 0.66 10652
weighted avg 0.66 0.66 0.66 10652
ROC Curve:
AUC = 0.661
###Markdown
Accuracy Check on Validation Set
###Code
y_prob_val=log_reg.predict_proba(X_val_res)[:,1]
## Check Accuracy Score
y_pred_val=log_reg.predict(X_val_res)
## Confusion matrix, metrics and AUC plot
eval_report(y_val_res, y_pred_val)
###Output
Confusion Matrix:
pred:0 pred:1
true:0 890 453
true:1 491 852
Classification Report:
precision recall f1-score support
0 0.64 0.66 0.65 1343
1 0.65 0.63 0.64 1343
accuracy 0.65 2686
macro avg 0.65 0.65 0.65 2686
weighted avg 0.65 0.65 0.65 2686
ROC Curve:
AUC = 0.649
###Markdown
Prediction on Test Set
###Code
# Remove the target column, because the raw test set does not contain it
features_without_target = np.delete(selected_features, 13)
test_df = test_df[features_without_target]
# apply scaling
test_df_scaled = data_reader.scale_features_by_standard_scaler(test_df)
# predictions
y_test_proba =log_reg.predict_proba(test_df_scaled)[:,1]
final_prediction_test = pd.DataFrame({'Id': range(0,3799), 'TARGET_5Yrs': [p for p in y_test_proba]})
final_prediction_test.head(10)
###Output
_____no_output_____
###Markdown
Coefficients
###Code
coef_table = pd.DataFrame({'Feature': features_without_target, 'Coefficient': log_reg.coef_[0]})
coef_table
###Output
_____no_output_____
###Markdown
Variable importance by permutation
###Code
from sklearn.inspection import permutation_importance
r = permutation_importance(
log_reg, X_train_res, y_train_res,
n_repeats=30,
random_state=8
)
table = pd.DataFrame(r.importances_mean)
table.index = X_train.columns
importances = pd.DataFrame({'Feature': features_without_target, 'importance': r.importances_mean})
importances
###Output
_____no_output_____
###Markdown
Examine features by class
###Code
import matplotlib.pyplot as plt
import seaborn as sns
#sns.pairplot(train_df_scaled, hue="TARGET_5Yrs")
#chart_cols = selected_features[0:-1]
chart_cols = importances.sort_values(by=['importance'], ascending = False, inplace=False)['Feature']
for c in chart_cols:
length_fig, length_ax = plt.subplots()
sns.kdeplot(data=train_df, x=c, hue="TARGET_5Yrs", fill=True, common_norm=False, palette="coolwarm", alpha=.5, linewidth=0)
plt.savefig('../reports/figures/density_by_class_' + c + '.png')
###Output
_____no_output_____ |
linear-regression/Surface and contour plot.ipynb | ###Markdown
Surface Plots | Data VisualisationSurface plots are used to:- Visualise loss functions in machine learning and deep learning- Visualise state or state-value functions in reinforcement learning
###Code
import matplotlib.pyplot as plt
import numpy as np
# a = np.array([1,2,3])
# b = np.array([4,5,6,7])
a = np.arange(-1,1,0.02)
b=a
a,b = np.meshgrid(a,b)
print(a)
print(b)
from mpl_toolkits.mplot3d import Axes3D
fig = plt.figure()
axes = fig.gca(projection='3d')
axes.plot_surface(a,b,a**2+b**2,cmap="rainbow")# color difference depicts high and low value
plt.show()
###Output
_____no_output_____
###Markdown
contour plot
###Code
fig = plt.figure()
axes = fig.gca(projection='3d')
axes.contour(a,b,a**2+b**2,cmap = 'rainbow')
plt.title("contour plot")
plt.show()
###Output
_____no_output_____ |
_notebooks/2021-11-22-vpp-seq.ipynb | ###Markdown
Notes about Sequence Modelling> Predicting Ventilator Pressure Time Series- hide: false- toc: true- badges: false- comments: true- author: Johannes Tomasoni- image: images/logo_vpp_seq.png- categories: [TimeSeries, Transformer, LSTM]
###Code
#hide
import numpy as np
from matplotlib import pyplot as plt
import seaborn as sns
from pathlib import Path
import fastai
from fastai.data.all import *
import torch
from fastai.basics import *
from fastai.callback.all import *
from sklearn.model_selection import GroupKFold
from sklearn.preprocessing import RobustScaler
from torch.nn import functional as F
import pandas as pd
###Output
_____no_output_____
###Markdown
Notes about Sequence ModellingI recently participated in the [Google Brain - Ventilator Pressure Prediction](https://www.kaggle.com/c/ventilator-pressure-prediction) competition. I didn't score a decent rank, but I still learned a lot, and some of it is worth summing up so I can easily look it up later.The goal of the competition was to predict the airway pressure of lungs that are ventilated in a clinician-intensive procedure. Given the values of the input pressure (`u_in`), we had to predict the output `pressure` for a time frame of a few seconds.
###Code
#hide
BS = 64
WORKERS = 4
EPOCHS = 5 #100
#hide
in_path = Path('../input')
train = pd.read_csv(in_path/'train.csv')
#hide_input
sample = train[train.breath_id==1][['time_step', 'u_in', 'pressure']].melt(id_vars= 'time_step', var_name = 'Typ', value_name = 'Pressure')
sns.lineplot(x = 'time_step', y = 'Pressure', hue ='Typ', data=sample)
###Output
_____no_output_____
###Markdown
Since all `u_in` values for a time frame were given, we can build a bidirectional sequence model. Unlike in a typical time-series problem, where the future points are unknown at a certain time step, here we know both the past and future input values. Therefore I decided not to *mask* the sequences while training.Good model choices for sequence tasks are LSTMs and Transformers. I built a model that combines both architectures. I also tried XGBoost with a lot of feature engineering (especially windowing, rolling, lead and lag features), but neural nets (NNs) performed better here. Though I kept some of the engineered features as embeddings for the NN model.The competition metric was mean absolute error (MAE). Only those pressures that appear while filling the lungs with oxygen were evaluated. Feature engineeringBesides the given features *u_in*, *u_out*, *R*, *C* and *time_step*, I defined several features. They can be categorized as:- area (accumulation of u_in over time) from [this notebook](https://www.kaggle.com/cdeotte/ensemble-folds-with-median-0-153)- one-hot encoding of the ventilator parameters R and C- statistical (mean, max, skewness, quartiles, rolling mean, ...)- shifted input pressure- input pressure performance over a window- inverse featuresTo reduce memory consumption I used a function from [this notebook](https://www.kaggle.com/konradb/tabnet-end-to-end-starter).
###Code
def gen_features(df, norm=False):
# area feature from https://www.kaggle.com/cdeotte/ensemble-folds-with-median-0-153
df['area'] = df['time_step'] * df['u_in']
df['area_crv'] = (1.5-df['time_step']) * df['u_in']
df['area'] = df.groupby('breath_id')['area'].cumsum()
df['area_crv'] = df.groupby('breath_id')['area_crv'].cumsum()
df['area_inv'] = df.groupby('breath_id')['area'].transform('max') - df['area']
df['ts'] = df.groupby('breath_id')['id'].rank().astype('int')
df['R4'] = 1/df['R']**4
df['R'] = df['R'].astype('str')
df['C'] = df['C'].astype('str')
df = pd.get_dummies(df)
for in_out in [0,1]: #,1
for qs in [0.2, 0.25, 0.5, 0.9, 0.95]:
df.loc[:, f'u_in_{in_out}_q{str(qs*100)}'] = 0
df.loc[df.u_out==in_out, f'u_in_{in_out}_q{str(qs*100)}'] = df[df.u_out==in_out].groupby('breath_id')['u_in'].transform('quantile', q=0.2)
for agg_type in ['count', 'std', 'skew','mean', 'min', 'max', 'median', 'last', 'first']:
df.loc[:,f'u_out_{in_out}_{agg_type}'] = 0
df.loc[df.u_out==in_out, f'u_out_{in_out}_{agg_type}'] = df[df.u_out==in_out].groupby('breath_id')['u_in'].transform(agg_type)
if norm:
df.loc[:,f'u_in'] = (df.u_in - df[f'u_out_{in_out}_mean']) / (df[f'u_out_{in_out}_std']+1e-6)
for s in range(1,8):
df.loc[:,f'shift_u_in_{s}'] = 0
df.loc[:,f'shift_u_in_{s}'] = df.groupby('breath_id')['u_in'].shift(s)
df.loc[:,f'shift_u_in_m{s}'] = 0
df.loc[:,f'shift_u_in_m{s}'] = df.groupby('breath_id')['u_in'].shift(-s)
df.loc[:,'perf1'] = (df.u_in / df.shift_u_in_1).clip(-2,2)
df.loc[:,'perf3'] = (df.u_in / df.shift_u_in_3).clip(-2,2)
df.loc[:,'perf5'] = (df.u_in / df.shift_u_in_5).clip(-2,2)
df.loc[:,'perf7'] = (df.u_in / df.shift_u_in_7).clip(-2,2)
df.loc[:,'perf1'] = df.perf1-1
df.loc[:,'perf3'] = df.perf3-1
df.loc[:,'perf5'] = df.perf5-1
df.loc[:,'perf7'] = df.perf7-1
df.loc[:,'perf1inv'] = (df.u_in / df.shift_u_in_m1).clip(-2,2)
df.loc[:,'perf3inv'] = (df.u_in / df.shift_u_in_m3).clip(-2,2)
df.loc[:,'perf5inv'] = (df.u_in / df.shift_u_in_m5).clip(-2,2)
df.loc[:,'perf7inv'] = (df.u_in / df.shift_u_in_m7).clip(-2,2)
df.loc[:,'perf1inv'] = df.perf1inv-1
df.loc[:,'perf3inv'] = df.perf3inv-1
df.loc[:,'perf5inv'] = df.perf5inv-1
df.loc[:,'perf7inv'] = df.perf7inv-1
df.loc[:,'rol_mean5'] = df.u_in.rolling(5).mean()
return df
#hide
# from https://www.kaggle.com/konradb/tabnet-end-to-end-starter
def reduce_memory_usage(df):
start_memory = np.round(df.memory_usage().sum() / 1024**2,2)
print(f"Memory usage of dataframe is {start_memory} MB")
for col in df.columns:
col_type = df[col].dtype
if col_type != 'object':
c_min = df[col].min()
c_max = df[col].max()
if str(col_type)[:3] == 'int':
if c_min > np.iinfo(np.int8).min and c_max < np.iinfo(np.int8).max:
df[col] = df[col].astype(np.int8)
elif c_min > np.iinfo(np.int16).min and c_max < np.iinfo(np.int16).max:
df[col] = df[col].astype(np.int16)
elif c_min > np.iinfo(np.int32).min and c_max < np.iinfo(np.int32).max:
df[col] = df[col].astype(np.int32)
elif c_min > np.iinfo(np.int64).min and c_max < np.iinfo(np.int64).max:
df[col] = df[col].astype(np.int64)
else:
if c_min > np.finfo(np.float16).min and c_max < np.finfo(np.float16).max:
df[col] = df[col].astype(np.float16)
elif c_min > np.finfo(np.float32).min and c_max < np.finfo(np.float32).max:
df[col] = df[col].astype(np.float32)
else:
pass
else:
df[col] = df[col].astype('category')
end_memory = np.round(df.memory_usage().sum() / 1024**2,2)
print(f"Memory usage of dataframe after reduction {end_memory} MB")
print(f"Reduced by { np.round(100 * (start_memory - end_memory) / start_memory,2) } % ")
return df
#hide
train = gen_features(train,False).fillna(0)
train = reduce_memory_usage(train)
###Output
Memory usage of dataframe is 2889.7 MB
Memory usage of dataframe after reduction 782.87 MB
Reduced by 72.91 %
###Markdown
ScalerThe data was transformed with *scikit-learn's RobustScaler* to reduce the influence of outliers.
###Code
features = list(set(train.columns)-set(['id','breath_id','pressure','kfold_2021','kfold']))
features.sort()
rs = RobustScaler().fit(train[features])
###Output
_____no_output_____
###Markdown
FoldsI didn't do cross-validation here, but instead trained the final model on the entire dataset. Nevertheless it's helpful to build k-folds for model evaluation. I built a GroupKFold split over `breath_id` to keep each entire time frame in the same fold.
###Code
#hide
# Kfold
train['kfold'] = -1
n_splits = 5
#skf = StratifiedKFold(n_splits=n_split, shuffle=True, random_state=2020)
skf = GroupKFold(n_splits = n_splits)
for fold, (trn, vld) in enumerate(skf.split(X = train, y = train['kfold'], groups = train['breath_id'])):
train.loc[vld, 'kfold'] = fold
# train.head()
###Output
_____no_output_____
###Markdown
DataloaderSince the data is quite small (ca. 800 MB after memory reduction) I decided to load the entire train set into the `Dataset` object during construction (calling `__init__()`). In a first attempt I loaded the data as a *pandas DataFrame*. Then I figured out (from [this notebook](https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization)) that converting the DataFrame into a NumPy array speeds up training significantly. The DataFrame is converted to a NumPy array by the scaler.Since the competition metric only evaluates the pressures where `u_out==0`, I also provide a mask tensor, which can later be used to feed the loss and metric functions.
###Code
class VPPDataset(torch.utils.data.Dataset):
def __init__(self,df, scaler, is_train = True, kfolds = [0], features = ['R','C', 'time_step', 'u_in', 'u_out']):
if is_train:
# build a mask for metric and loss function
self.mask = torch.FloatTensor(1 - df[df['kfold'].isin(kfolds)].u_out.values.reshape(-1,80))
self.target = torch.FloatTensor(df[df['kfold'].isin(kfolds)].pressure.values.reshape(-1,80))
# calling scaler also converts the dataframe in an numpy array, which results in speed up while training
feature_values = scaler.transform(df[df['kfold'].isin(kfolds)][features])
self.df = torch.FloatTensor(feature_values.reshape(-1,80,len(features)))
else:
self.mask = torch.FloatTensor(1 - df.u_out.values.reshape(-1,80))
feature_values = scaler.transform(df[features])
self.df = torch.FloatTensor(feature_values.reshape(-1,80,len(features)))
self.target = None
self.features = features
self.is_train = is_train
def __len__(self):
return self.df.shape[0]
def __getitem__(self, item):
sample = self.df[item]
mask = self.mask[item]
if self.is_train:
targets = self.target[item]
else:
targets = torch.zeros((1))
return torch.cat([sample, mask.view(80,1)],dim=1), targets #.float()
#hide
train_ds = VPPDataset(df = train, scaler = rs, is_train = True, kfolds = [1,2,3,4], features =features)
valid_ds = VPPDataset(df = train, scaler = rs, is_train = True, kfolds = [0], features =features)
train_dl = DataLoader(train_ds, batch_size= BS, shuffle = True, num_workers = WORKERS)
valid_dl = DataLoader(valid_ds, batch_size= BS, shuffle = False, num_workers = WORKERS)
data = DataLoaders(train_dl, valid_dl).cuda()
###Output
_____no_output_____
###Markdown
ModelMy model combines a multi-layered LSTM and a Transformer encoder. Additionally I built an AutoEncoder by placing a Transformer decoder on top of the Transformer encoder. The AutoEncoder predictions are used as auxiliary variables.Some further considerations:- I did not use dropout. The reason why it performs worse here is discussed [here](https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/276719).- LayerNorm can be used in sequential models but didn't improve my score.The model is influenced by these notebooks:- [Transformer part](https://pytorch.org/tutorials/beginner/transformer_tutorial.html)- [LSTM part](https://www.kaggle.com/theoviel/deep-learning-starter-simple-lstm)- [Parameter initialization](https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization)
###Code
# Influenced by:
# Transformer: https://pytorch.org/tutorials/beginner/transformer_tutorial.html
# LSTM: https://www.kaggle.com/theoviel/deep-learning-starter-simple-lstm
# Parameter init from: https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization
class VPPEncoder(nn.Module):
def __init__(self, fin = 5, nhead = 8, nhid = 2048, nlayers = 6, seq_len=80, use_decoder = True):
super(VPPEncoder, self).__init__()
self.seq_len = seq_len
self.use_decoder = use_decoder
# number of input features
self.fin = fin
#self.tail = nn.Sequential(
# nn.Linear(self.fin, nhid),
# #nn.LayerNorm(nhid),
# nn.SELU(),
# nn.Linear(nhid, fin),
# #nn.LayerNorm(nhid),
# nn.SELU(),
# #nn.Dropout(0.05),
#)
encoder_layers = nn.TransformerEncoderLayer(self.fin, nhead, nhid , activation= 'gelu')
self.transformer_encoder = nn.TransformerEncoder(encoder_layers, nlayers)
decoder_layers = nn.TransformerDecoderLayer(self.fin, nhead, nhid, activation= 'gelu')
self.transformer_decoder = nn.TransformerDecoder(decoder_layers, nlayers)
self.lstm_layer = nn.LSTM(fin, nhid, num_layers=3, bidirectional=True)
# Head
self.linear1 = nn.Linear(nhid*2+fin , seq_len*2)
self.linear3 = nn.Linear(seq_len*2, 1)
self._reinitialize()
# from https://www.kaggle.com/junkoda/pytorch-lstm-with-tensorflow-like-initialization
def _reinitialize(self):
"""
Tensorflow/Keras-like initialization
"""
for name, p in self.named_parameters():
if 'lstm' in name:
if 'weight_ih' in name:
nn.init.xavier_uniform_(p.data)
elif 'weight_hh' in name:
nn.init.orthogonal_(p.data)
elif 'bias_ih' in name:
p.data.fill_(0)
# Set forget-gate bias to 1
n = p.size(0)
p.data[(n // 4):(n // 2)].fill_(1)
elif 'bias_hh' in name:
p.data.fill_(0)
elif 'fc' in name:
if 'weight' in name:
nn.init.xavier_uniform_(p.data,gain=3/4)
elif 'bias' in name:
p.data.fill_(0)
def forward(self, x):
out = x[:,:,:-1]
out = out.permute(1,0,2)
out = self.transformer_encoder( out)
out_l,_ = self.lstm_layer(out)
if self.use_decoder:
out = self.transformer_decoder(out, out)
out_dec_diff = (out - x[:,:,:-1].permute(1,0,2)).abs().mean(dim=2)
else:
out_dec_diff = out*0
out = torch.cat([out, out_l], dim=2)
# Head
out = F.gelu(self.linear1(out.permute(1,0,2)))
out = self.linear3(out)
return out.view(-1, self.seq_len) , x[:,:,-1], out_dec_diff.view(-1, self.seq_len)
###Output
_____no_output_____
###Markdown
Metric and Loss Function Masked MAE metricThe competition metric was Mean Absolute Error (MAE), but only for the time-steps where air flows into the lung (approx. half of the time-steps). Hence, I masked the predictions (using the flag introduced in the Dataset), ignoring the unnecessary time-steps. The flag is passed through the model (`val[1]`) and is output along with the predictions.
###Code
def vppMetric(val, target):
flag = val[1]
preds = val[0]
loss = (preds*flag-target*flag).abs()
loss= loss.sum()/flag.sum()
return loss
###Output
_____no_output_____
###Markdown
The values produced by the AutoEncoder are additionally measured by `vppGenMetric`. It uses MAE to evaluate how well the reconstruction of the input feature values evolves.
###Code
def vppGenMetric(val, target):
gen =val[2]
flag = val[1]
loss = (gen*flag).abs()
loss= loss.sum()/flag.sum()
return loss
###Output
_____no_output_____
###Markdown
Combined Loss functionThe loss function (`vppAutoLoss`) is a combination of an L1-derived loss for the predictions and one for the AutoEncoder predictions.Due to [this discussion](https://www.kaggle.com/c/ventilator-pressure-prediction/discussion/277690) I did some experiments with variations of Huber and SmoothL1Loss. The latter (`vppAutoSmoothL1Loss`) performed better.
###Code
def vppAutoLoss(val, target):
gen =val[2]
flag = val[1]
preds = val[0]
loss = (preds*flag-target*flag).abs() + (gen*flag).abs()*0.2 #
loss= loss.sum()/flag.sum()
return loss
# Adapting https://pytorch.org/docs/stable/generated/torch.nn.SmoothL1Loss.html#torch.nn.SmoothL1Loss
def vppAutoSmoothL1Loss(val, target):
beta = 2
fct = 0.5
gen =val[2]
flag = val[1]
preds = val[0]
loss = (preds*flag-target*flag).abs() + (gen*flag).abs()*0.2
loss = torch.where(loss < beta, (fct*(loss**2))/beta, loss)#-fct*beta)
# reduction mean**0.5
loss = loss.sum()/flag.sum() #()**0.5
return loss
###Output
_____no_output_____
###Markdown
TrainingThe training was done in mixed-precision mode (`to_fp16()`) to speed it up. As optimizer I used *QHAdam*.The best single score was achieved with a 100-epoch `fit_one_cycle` run (*CosineAnnealing* with warmup). I also tried more epochs with restart schedules (`fit_sgdr`) and changing loss functions, but they didn't do better.
###Code
learn = Learner(data,
VPPEncoder(fin = len(features), nhead = 5, nhid = 128, nlayers = 6, seq_len=80, use_decoder = True),
opt_func= QHAdam,
loss_func = vppAutoLoss, #vppAutoSmoothL1Loss
metrics=[vppMetric, vppGenMetric],
cbs=[ShowGraphCallback()]).to_fp16()
learn.fit_one_cycle(EPOCHS, 2e-3)
###Output
_____no_output_____ |
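###Markdown
For reference, the alternatives mentioned above use fastai's `fit_sgdr` restart schedule and the smooth-L1 loss variant defined earlier. The calls are kept commented so they do not overwrite the run in the previous cell; the cycle count, cycle length and learning rate are illustrative placeholders, not the exact values I used:
###Code
# restart schedule (SGDR-style warm restarts); settings are illustrative placeholders
# learn.fit_sgdr(3, 30, lr_max=2e-3)
# swapping in the smooth-L1 variant of the loss defined above
# learn.loss_func = vppAutoSmoothL1Loss
###Output
_____no_output_____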
Week9_Dropout.ipynb | ###Markdown
###Code
!pip install d2l==0.17.0
import tensorflow as tf
from d2l import tensorflow as d2l
def dropout_layer(X, dropout):
assert 0 <= dropout <= 1
# In this case, all elements are dropped out
if dropout == 1:
return tf.zeros_like(X)
# In this case, all elements are kept
if dropout == 0:
return X
mask = tf.random.uniform(
shape=tf.shape(X), minval=0, maxval=1) < 1 - dropout
return tf.cast(mask, dtype=tf.float32) * X / (1.0 - dropout)
X = tf.reshape(tf.range(16, dtype=tf.float32), (2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
num_outputs, num_hiddens1, num_hiddens2 = 10, 256, 256
dropout1, dropout2 = 0.2, 0.5
class Net(tf.keras.Model):
def __init__(self, num_outputs, num_hiddens1, num_hiddens2):
super().__init__()
self.input_layer = tf.keras.layers.Flatten()
self.hidden1 = tf.keras.layers.Dense(num_hiddens1, activation='relu')
self.hidden2 = tf.keras.layers.Dense(num_hiddens2, activation='relu')
self.output_layer = tf.keras.layers.Dense(num_outputs)
def call(self, inputs, training=None):
x = self.input_layer(inputs)
x = self.hidden1(x)
if training:
x = dropout_layer(x, dropout1)
x = self.hidden2(x)
if training:
x = dropout_layer(x, dropout2)
x = self.output_layer(x)
return x
net = Net(num_outputs, num_hiddens1, num_hiddens2)
num_epochs, lr, batch_size = 10, 0.5, 256
loss = tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True)
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
net = tf.keras.models.Sequential([
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# Add a dropout layer after the first fully connected layer
tf.keras.layers.Dropout(dropout1),
tf.keras.layers.Dense(256, activation=tf.nn.relu),
# Add a dropout layer after the second fully connected layer
tf.keras.layers.Dropout(dropout2),
tf.keras.layers.Dense(10),
])
trainer = tf.keras.optimizers.SGD(learning_rate=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____
###Markdown
###Code
!pip install d2l
!pip install matplotlib==3.0.2
import torch
from torch import nn
from d2l import torch as d2l
def dropout_layer(X, dropout):
assert 0 <= dropout <= 1
# In this case, all elements are dropped out
if dropout == 1:
return torch.zeros_like(X)
# In this case, all elements are kept
if dropout == 0:
return X
mask = (torch.rand(X.shape) > dropout).float()
return mask * X / (1.0 - dropout)
X= torch.arange(16, dtype = torch.float32).reshape((2, 8))
print(X)
print(dropout_layer(X, 0.))
print(dropout_layer(X, 0.5))
print(dropout_layer(X, 1.))
num_inputs, num_outputs, num_hiddens1, num_hiddens2 = 784, 10, 256, 256
dropout1, dropout2 = 0.2, 0.5
class Net(nn.Module):
def __init__(self, num_inputs, num_outputs, num_hiddens1, num_hiddens2,
is_training = True):
super(Net, self).__init__()
self.num_inputs = num_inputs
self.training = is_training
self.lin1 = nn.Linear(num_inputs, num_hiddens1)
self.lin2 = nn.Linear(num_hiddens1, num_hiddens2)
self.lin3 = nn.Linear(num_hiddens2, num_outputs)
self.relu = nn.ReLU()
def forward(self, X):
H1 = self.relu(self.lin1(X.reshape((-1, self.num_inputs))))
# Use dropout only when training the model
if self.training == True:
# Add a dropout layer after the first fully connected layer
H1 = dropout_layer(H1, dropout1)
H2 = self.relu(self.lin2(H1))
if self.training == True:
# Add a dropout layer after the second fully connected layer
H2 = dropout_layer(H2, dropout2)
out = self.lin3(H2)
return out
net = Net(num_inputs, num_outputs, num_hiddens1, num_hiddens2)
num_epochs, lr, batch_size = 10, 0.5, 256
loss = nn.CrossEntropyLoss(reduction='none')
train_iter, test_iter = d2l.load_data_fashion_mnist(batch_size)
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
net = nn.Sequential(nn.Flatten(),
nn.Linear(784, 256),
nn.ReLU(),
# Add a dropout layer after the first fully connected layer
nn.Dropout(dropout1),
nn.Linear(256, 256),
nn.ReLU(),
# Add a dropout layer after the second fully connected layer
nn.Dropout(dropout2),
nn.Linear(256, 10))
def init_weights(m):
if type(m) == nn.Linear:
nn.init.normal_(m.weight, std=0.01)
net.apply(init_weights);
trainer = torch.optim.SGD(net.parameters(), lr=lr)
d2l.train_ch3(net, train_iter, test_iter, loss, num_epochs, trainer)
###Output
_____no_output_____ |
examples/reference/elements/matplotlib/Box.ipynb | ###Markdown
Title Box Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``Box`` is an annotation that takes a center x-position, a center y-position and a size:
###Code
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(40, 60), np.arange(20, 40)] = -1
data[np.arange(40, 50), np.arange(70, 80)] = -3
(hv.Image(data) * hv.Box(-0.2, 0, 0.25 ) * hv.Box(-0, 0, (0.4,0.9))).opts(
opts.Box(color='red', linewidth=5),
opts.Image(cmap='gray'))
###Output
_____no_output_____
###Markdown
In addition to these arguments, it supports an optional ``aspect ratio``:By default, the size argument results in a square such as the small square shown above. Alternatively, the size can be given as the tuple ``(width, height)`` resulting in a rectangle. If you only supply a size value, you can still specify a rectangle by specifying an optional aspect value. In addition, you can also set the orientation (in radians, rotating anticlockwise):
###Code
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(30, 70), np.arange(30, 70)] = -3
box = hv.Box(-0, 0, 0.25, aspect=3, orientation=-np.pi/4)
(hv.Image(data) * box).opts(
opts.Box(color='purple', linewidth=5),
opts.Image(cmap='gray'))
###Output
_____no_output_____
###Markdown
Title Box Element Dependencies Matplotlib Backends Matplotlib Bokeh
###Code
import numpy as np
import holoviews as hv
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``Box`` is an annotation that takes a center x-position, a center y-position and a size:
###Code
%%opts Box (linewidth=5 color='red') Image (cmap='gray')
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(40, 60), np.arange(20, 40)] = -1
data[np.arange(40, 50), np.arange(70, 80)] = -3
hv.Image(data) * hv.Box(-0.2, 0, 0.25 ) * hv.Box(-0, 0, (0.4,0.9) )
###Output
_____no_output_____
###Markdown
In addition to these arguments, it supports an optional ``aspect ratio``:By default, the size argument results in a square such as the small square shown above. Alternatively, the size can be given as the tuple ``(width, height)`` resulting in a rectangle. If you only supply a size value, you can still specify a rectangle by specifying an optional aspect value. In addition, you can also set the orientation (in radians, rotating anticlockwise):
###Code
%%opts Box (linewidth=5 color='purple') Image (cmap='gray')
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(30, 70), np.arange(30, 70)] = -3
hv.Image(data) * hv.Box(-0, 0, 0.25, aspect=3, orientation=-np.pi/4)
###Output
_____no_output_____
###Markdown
Title Box Element Dependencies Matplotlib Backends Matplotlib Bokeh Plotly
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('matplotlib')
###Output
_____no_output_____
###Markdown
A ``Box`` is an annotation that takes a center x-position, a center y-position and a size:
###Code
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(40, 60), np.arange(20, 40)] = -1
data[np.arange(40, 50), np.arange(70, 80)] = -3
(hv.Image(data) * hv.Box(-0.2, 0, 0.25 ) * hv.Box(-0, 0, (0.4,0.9))).opts(
opts.Box(color='red', linewidth=5),
opts.Image(cmap='gray'))
###Output
_____no_output_____
###Markdown
In addition to these arguments, it supports an optional ``aspect ratio``:By default, the size argument results in a square such as the small square shown above. Alternatively, the size can be given as the tuple ``(width, height)`` resulting in a rectangle. If you only supply a size value, you can still specify a rectangle by specifying an optional aspect value. In addition, you can also set the orientation (in radians, rotating anticlockwise):
###Code
data = np.sin(np.mgrid[0:100,0:100][1]/10.0)
data[np.arange(30, 70), np.arange(30, 70)] = -3
box = hv.Box(-0, 0, 0.25, aspect=3, orientation=-np.pi/4)
(hv.Image(data) * box).opts(
opts.Box(color='purple', linewidth=5),
opts.Image(cmap='gray'))
###Output
_____no_output_____ |
EHR_Only/GBT/Comp_FAMD.ipynb | ###Markdown
FAMD Transformation
###Code
from prince import FAMD
famd = FAMD(n_components = 15, n_iter = 3, random_state = 101)
for (colName, colData) in co_train_gpop.iteritems():
if (colName != 'Co_N_Drugs_R0' and colName!= 'Co_N_Hosp_R0' and colName != 'Co_Total_HospLOS_R0' and colName != 'Co_N_MDVisit_R0'):
co_train_gpop[colName].replace((1,0) ,('yes','no'), inplace = True)
co_train_low[colName].replace((1,0) ,('yes','no'), inplace = True)
co_train_high[colName].replace((1,0) ,('yes','no'), inplace = True)
co_validation_gpop[colName].replace((1,0), ('yes','no'), inplace = True)
co_validation_high[colName].replace((1,0), ('yes','no'), inplace = True)
co_validation_low[colName].replace((1,0), ('yes','no'), inplace = True)
famd.fit(co_train_gpop)
co_train_gpop_FAMD = famd.transform(co_train_gpop)
famd.fit(co_train_high)
co_train_high_FAMD = famd.transform(co_train_high)
famd.fit(co_train_low)
co_train_low_FAMD = famd.transform(co_train_low)
famd.fit(co_validation_gpop)
co_validation_gpop_FAMD = famd.transform(co_validation_gpop)
famd.fit(co_validation_high)
co_validation_high_FAMD = famd.transform(co_validation_high)
famd.fit(co_validation_low)
co_validation_low_FAMD = famd.transform(co_validation_low)
###Output
/PHShome/se197/anaconda3/lib/python3.8/site-packages/pandas/core/series.py:4509: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().replace(
###Markdown
General Population
###Code
best_clf = xgBoost(co_train_gpop_FAMD, out_train_cardio_gpop)
cross_val(co_train_gpop_FAMD, out_train_cardio_gpop)
print()
scores(co_validation_gpop_FAMD, out_validation_cardio_gpop)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
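###Markdown
`xgBoost`, `cross_val` and `scores` are helpers defined elsewhere in this project. Judging from the grid-search log above (5 folds, 4 candidates), a sketch of what the tuning step could look like, assuming an XGBoost classifier tuned with scikit-learn's GridSearchCV; the parameter grid is a placeholder:
###Code
# Sketch consistent with the "5 folds x 4 candidates" output above; the real xgBoost() helper may differ
from sklearn.model_selection import GridSearchCV
from xgboost import XGBClassifier

def xgboost_grid_search_sketch(X, y):
    param_grid = {'max_depth': [3, 6], 'n_estimators': [100, 300]}  # 2 x 2 = 4 candidates
    grid = GridSearchCV(XGBClassifier(), param_grid, cv=5, scoring='roc_auc', verbose=1)
    grid.fit(X, y)
    return grid.best_estimator_
###Output
_____no_output_____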
###Markdown
High Continuity
###Code
best_clf = xgBoost(co_train_high_FAMD, out_train_cardio_high)
cross_val(co_train_high_FAMD, out_train_cardio_high)
print()
scores(co_validation_high_FAMD, out_validation_cardio_high)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
###Markdown
Low Continuity
###Code
best_clf = xgBoost(co_train_low_FAMD, out_train_cardio_low)
cross_val(co_train_low_FAMD, out_train_cardio_low)
print()
scores(co_validation_low_FAMD, out_validation_cardio_low)
###Output
Fitting 5 folds for each of 4 candidates, totalling 20 fits
|
jupyter-notebooks/forest-monitoring/drc_roads_mosaic.ipynb | ###Markdown
DRC Change Detection Using MosaicsThis notebook performs forest change (in the form of creation of new roads) using Planet mosaics as the data source. This notebook follows the workflow, uses the labeled data, and pulls code from the following notebooks, which perform forest change detection using PSOrthoTiles as the data source:* [DRC Roads Classification](drc_roads_classification.ipynb)* [DRC Roads Temporal Analysis](drc_roads_temporal_analysis.ipynb)**NOTE**: This notebook uses the gdal PLMosaic driver to access and download Planet mosaics. Use of the gdal PLMosaic driver requires specification of the Planet API key. This can either be specified in the command-line options, or gdal will try to pull this from the environmental variable `PL_API_KEY`. This notebook assumes that the environmental variable `PL_API_KEY` is set. See the [gdal PLMosaic driver](https://www.gdal.org/frmt_plmosaic.html) for more information.
###Code
from functools import reduce
import os
import subprocess
import tempfile
import numpy as np
from planet import api
from planet.api import downloader, filters
import rasterio
from skimage import feature, filters
from sklearn.ensemble import RandomForestClassifier
# load local modules
from utils import Timer
import visual
#uncomment if visual is in development
# import importlib
# importlib.reload(visual)
# Import functionality from local notebooks
from ipynb.fs.defs.drc_roads_classification import get_label_mask, get_unmasked_count, \
load_4band, get_feature_bands, combine_masks, num_valid, perc_masked, bands_to_X, \
make_same_size_samples, classify_forest, y_to_band, classified_band_to_rgb
###Output
_____no_output_____
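###Markdown
The gdal PLMosaic driver reads the Planet API key from the `PL_API_KEY` environment variable (it can also be passed as the `API_KEY` open option). A minimal guard to confirm the key is visible to this notebook; the placeholder string is an assumption and must be replaced with a real key, ideally set outside the notebook rather than hard-coded:
###Code
import os

# the PLMosaic driver looks for PL_API_KEY; set a placeholder if it is missing
if not os.environ.get('PL_API_KEY'):
    # placeholder only -- never commit a real key to source control
    os.environ['PL_API_KEY'] = 'YOUR-PLANET-API-KEY'
###Output
_____no_output_____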
###Markdown
Download Mosaics
###Code
# uncomment to see what mosaics are available and to make sure the PLMosaic driver is working
# !gdalinfo "PLMosaic:"
# get mosaic names for July 2017 to March 2018
mosaic_dates = [('2017', '{0:02d}'.format(m)) for m in range(7, 13)] + \
[('2018', '{0:02d}'.format(m)) for m in range(1, 4)]
mosaic_names = ['global_monthly_{}_{}_mosaic'.format(yr, mo)
for (yr, mo) in mosaic_dates]
def get_mosaic_filename(mosaic_name):
return os.path.join('data', mosaic_name + '.tif')
for name in mosaic_names:
print('{} -> {}'.format(name, get_mosaic_filename(name)))
aoi_filename = 'pre-data/aoi.geojson'
def _gdalwarp(input_filename, output_filename, options):
commands = ['gdalwarp'] + options + \
['-overwrite',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
# lossless compression of an image
def _compress(input_filename, output_filename):
commands = ['gdal_translate',
'-co', 'compress=LZW',
'-co', 'predictor=2',
input_filename,
output_filename]
print(' '.join(commands))
subprocess.check_call(commands)
def download_mosaic(mosaic_name,
output_filename,
crop_filename,
overwrite=False,
compress=True):
# typically gdalwarp would require `-oo API_KEY={PL_API_KEY}`
# but if the environmental variable PL_API_KEY is set, gdal will use that
options = ['-cutline', crop_filename, '-crop_to_cutline',
'-oo', 'use_tiles=YES']
# use PLMosaic driver
input_name = 'PLMosaic:mosaic={}'.format(mosaic_name)
# check to see if output file exists, if it does, do not warp
if os.path.isfile(output_filename) and not overwrite:
print('{} already exists. Aborting download of {}.'.format(output_filename, mosaic_name))
elif compress:
with tempfile.NamedTemporaryFile(suffix='.vrt') as vrt_file:
options += ['-of', 'vrt']
_gdalwarp(input_name, vrt_file.name, options)
_compress(vrt_file.name, output_filename)
else:
_gdalwarp(input_name, output_filename, options)
for name in mosaic_names:
download_mosaic(name, get_mosaic_filename(name), aoi_filename)
###Output
data/global_monthly_2017_07_mosaic.tif already exists. Aborting download of global_monthly_2017_07_mosaic.
data/global_monthly_2017_08_mosaic.tif already exists. Aborting download of global_monthly_2017_08_mosaic.
data/global_monthly_2017_09_mosaic.tif already exists. Aborting download of global_monthly_2017_09_mosaic.
data/global_monthly_2017_10_mosaic.tif already exists. Aborting download of global_monthly_2017_10_mosaic.
data/global_monthly_2017_11_mosaic.tif already exists. Aborting download of global_monthly_2017_11_mosaic.
data/global_monthly_2017_12_mosaic.tif already exists. Aborting download of global_monthly_2017_12_mosaic.
data/global_monthly_2018_01_mosaic.tif already exists. Aborting download of global_monthly_2018_01_mosaic.
data/global_monthly_2018_02_mosaic.tif already exists. Aborting download of global_monthly_2018_02_mosaic.
data/global_monthly_2018_03_mosaic.tif already exists. Aborting download of global_monthly_2018_03_mosaic.
###Markdown
Classify Mosaics into Forest and Non-ForestTo classify the mosaics into forest and non-forest, we use the Random Forests classifier. This is a supervised classification technique, so we need to create a training dataset. The training dataset will be created from one mosaic image and then the trained classifier will classify all mosaic images.Although we have already performed classification of a 4-band Orthotile into forest and non-forest in [drc_roads_classification](drc_roads_classification.ipynb), the format of the data is different in mosaics, so we need to re-create our training dataset. However, we will use the same label images that were created as a part of that notebook. Additionally, we will pull a lot of code from that notebook. Create Label Masks
###Code
forest_img = os.path.join('pre-data', 'forestroad_forest.tif')
road_img = os.path.join('pre-data', 'forestroad_road.tif')
forest_mask = get_label_mask(forest_img)
print(get_unmasked_count(forest_mask))
road_mask = get_label_mask(road_img)
print(get_unmasked_count(road_mask))
forest_mask.shape
###Output
/opt/conda/lib/python3.6/site-packages/rasterio/__init__.py:240: NotGeoreferencedWarning: Dataset has no geotransform set. Default transform will be applied (Affine.identity())
s = DatasetReader(fp, driver=driver, **kwargs)
###Markdown
Warp Mosaic to Match Label MasksThe label images used to create the label masks were created from the PSOrthoTiles. Therefore, they are in a different projection, have a different transform, and have a different pixel size than the mosaic images. To create the training dataset, we must first match the label and mosaic images so that the pixel dimensions and locations line up. To do this, we warp the mosaic image to match the label image coordinate reference system, bounds, and pixel dimensions. The forest/non-forest labeled images were created in GIMP, which doesn't save georeference information. Therefore, we will pull georeference information from the source image used to create the labeled images, `roads.tif`.
###Code
# specify the training dataset mosaic image file
image_file = get_mosaic_filename(mosaic_names[0])
image_file
# this is the georeferenced image that was used to create the forest and non-forest label images
label_image = 'pre-data/roads.tif'
# get label image crs, bounds, and pixel dimensions
with rasterio.open(label_image, 'r') as ref:
dst_crs = ref.crs['init']
(xmin, ymin, xmax, ymax) = ref.bounds
width = ref.width
height = ref.height
print(dst_crs)
print((xmin, ymin, xmax, ymax))
print((width, height))
# this is the warped training mosaic image we will create with gdal
training_file = os.path.join('data', 'mosaic_training.tif')
# use gdalwarp to warp mosaic image to match label image
!gdalwarp -t_srs $dst_crs \
-te $xmin $ymin $xmax $ymax \
-ts $width $height \
-overwrite $image_file $training_file
###Output
Using band 4 of source image as alpha.
Creating output file that is 6008P x 3333L.
Processing data/global_monthly_2017_07_mosaic.tif [1/1] : 0...10...20...30...40...50...60...70...80...90...100 - done.
###Markdown
Create Training DatasetsNow that the images match, we create the training datasets from the labels and the training mosaic image.
###Code
feature_bands = get_feature_bands(training_file)
print(feature_bands[0].shape)
total_mask = combine_masks(feature_bands)
print(total_mask.shape)
# combine the label masks with the valid data mask and then create X dataset for each label
total_forest_mask = np.logical_or(total_mask, forest_mask)
print('{} valid pixels ({}% masked)'.format(num_valid(total_forest_mask),
round(perc_masked(total_forest_mask), 2)))
X_forest = bands_to_X(feature_bands, total_forest_mask)
total_road_mask = np.logical_or(total_mask, road_mask)
print('{} valid pixels ({}% masked)'.format(num_valid(total_road_mask),
round(perc_masked(total_road_mask), 2)))
X_road = bands_to_X(feature_bands, total_road_mask)
[X_forest_sample, X_road_sample] = \
make_same_size_samples([X_forest, X_road], size_percent=100)
print(X_forest_sample.shape)
print(X_road_sample.shape)
forest_label_value = 0
road_label_value = 1
X_training = np.concatenate((X_forest_sample, X_road_sample), axis=0)
y_training = np.array(X_forest_sample.shape[0] * [forest_label_value] + \
X_road_sample.shape[0] * [road_label_value])
print(X_training.shape)
print(y_training.shape)
###Output
(34438, 6)
(34438,)
###Markdown
Classify Training ImageNow we will train the classifier to detect forest/non-forest classes from the training data and will run this on the original training mosaic image to see how well it works.
###Code
with Timer():
y_band_rf = classify_forest(image_file, X_training, y_training)
visual.plot_image(classified_band_to_rgb(y_band_rf),
title='Classified Training Image (Random Forests)',
figsize=(15, 15))
###Output
/opt/conda/lib/python3.6/site-packages/sklearn/ensemble/forest.py:246: FutureWarning: The default value of n_estimators will change from 10 in version 0.20 to 100 in 0.22.
"10 in version 0.20 to 100 in 0.22.", FutureWarning)
###Markdown
Classification on All Mosaic ImagesNow that the classifier is trained, run it on all of the mosaic images. This process takes a while so in this section, if classification has already been run and the classification results have been saved, we will load the cached results instead of rerunning classification. This behavior can be altered by setting `use_cache` to `False`.
###Code
classified_bands_file = os.path.join('data', 'classified_mosaic_bands.npz')
def save_to_cache(classified_bands, mosaic_names):
save_bands = dict((s, classified_bands[s])
for s in mosaic_names)
# masked arrays are saved as just arrays, so save mask for later
save_bands.update(dict((s+'_msk', classified_bands[s].mask)
for s in mosaic_names))
np.savez_compressed(classified_bands_file, **save_bands)
def load_from_cache():
classified_bands = np.load(classified_bands_file)
scene_ids = [k for k in classified_bands.keys() if not k.endswith('_msk')]
# reform masked array from saved array and saved mask
classified_bands = dict((s, np.ma.array(classified_bands[s], mask=classified_bands[s+'_msk']))
for s in scene_ids)
return classified_bands
use_cache = True
if use_cache and os.path.isfile(classified_bands_file):
print('using cached classified bands')
classified_bands = load_from_cache()
else:
with Timer():
def classify(mosaic_name):
img = get_mosaic_filename(mosaic_name)
# we only have two values, 0 and 1. Convert to uint8 for memory
band = (classify_forest(img, X_training, y_training)).astype(np.uint8)
return band
classified_bands = dict((s, classify(s)) for s in mosaic_names)
# save to cache
save_to_cache(classified_bands, mosaic_names)
# Decimate classified arrays for memory conservation
def decimate(arry, num=8):
return arry[::num, ::num].copy()
do_visualize = True # set to True to view images
if do_visualize:
for mosaic_name, classified_band in classified_bands.items():
visual.plot_image(classified_band_to_rgb(decimate(classified_band)),
title='Classified Image ({})'.format(mosaic_name),
figsize=(8, 8))
###Output
_____no_output_____
###Markdown
These classified mosaics look a lot better than the classified PSOrthoTile strips. This bodes well for the quality of our change detection results! Identify ChangeIn this section, we use Random Forest classification once again to detect change in the forest in the form of new roads being built. Once again we need to train the classifier, this time to detect change/no-change. And once again we use hand-labeled images created in the [Temporal Analysis Notebook](drc_roads_temporal_analysis.ipynb) to train the classifier. Create Label MasksThe change/no-change label images were created in the [Temporal Analysis Notebook](drc_roads_temporal_analysis.ipynb). The images were created from an image that was created for use with the PSOrthoTiles. Therefore, they are in a different projection, have a different affine transformation, and have different resolution than the classified mosaic bands. Further, the images were created with GIMP, which does not save the georeference information. We already have the classified mosaic bands in memory and so, instead of saving them out and warping each one, we will warp the label images to match the mosaic bands (this is the opposite of what we did in forest/non-forest classification). Therefore, the label images need to be georeferenced and then warped to match the classified mosaic images. Once this is done, the change/no-change label masks can be created. Georeference Labeled ImagesThe labeled images are prepared in GIMP, so georeference information has not been preserved. First, we will restore georeference information to the labeled images using `rasterio`.
###Code
# labeled change images, not georeferenced
change_img_orig = os.path.join('pre-data', 'difference_change.tif')
nochange_img_orig = os.path.join('pre-data', 'difference_nochange.tif')
# georeferenced source image
src_img = os.path.join('pre-data', 'difference.tif')
# destination georeferened label images
change_img_geo = os.path.join('data', 'difference_change.tif')
nochange_img_geo = os.path.join('data', 'difference_nochange.tif')
# get crs and transform from the georeferenced source image
with rasterio.open(src_img, 'r') as src:
src_crs = src.crs
src_transform = src.transform
# create the georeferenced label images
for (label_img, geo_img) in ((change_img_orig, change_img_geo),
(nochange_img_orig, nochange_img_geo)):
with rasterio.open(label_img, 'r') as src:
profile = {
'width': src.width,
'height': src.height,
'driver': 'GTiff',
'count': src.count,
'compress': 'lzw',
'dtype': rasterio.uint8,
'crs': src_crs,
'transform': src_transform
}
with rasterio.open(geo_img, 'w', **profile) as dst:
dst.write(src.read())
###Output
_____no_output_____
###Markdown
Match Georeferenced Label Images to Mosaic ImagesNow that the label images are georeferenced, we warp them to match the mosaic images.
###Code
# get dest crs, bounds, and shape from mosaic image
image_file = get_mosaic_filename(mosaic_names[0])
with rasterio.open(image_file, 'r') as ref:
dst_crs = ref.crs['init']
(xmin, ymin, xmax, ymax) = ref.bounds
width = ref.width
height = ref.height
print(dst_crs)
print((xmin, ymin, xmax, ymax))
print((width, height))
# destination matched images
change_img = os.path.join('data', 'mosaic_difference_change.tif')
nochange_img = os.path.join('data', 'mosaic_difference_nochange.tif')
# resample and resize to match mosaic
!gdalwarp -t_srs $dst_crs \
-te $xmin $ymin $xmax $ymax \
-ts $width $height \
-overwrite $change_img_geo $change_img
!gdalwarp -t_srs $dst_crs \
-te $xmin $ymin $xmax $ymax \
-ts $width $height \
-overwrite $nochange_img_geo $nochange_img
###Output
Creating output file that is 3930P x 2194L.
Processing data/difference_change.tif [1/1] : 0...10...20...30...40...50...60...70...80...90...100 - done.
Creating output file that is 3930P x 2194L.
Processing data/difference_nochange.tif [1/1] : 0...10...20...30...40...50...60...70...80...90...100 - done.
###Markdown
Load Label MasksNow that the label images match the mosaic images, we can load the label masks.
###Code
change_mask = get_label_mask(change_img)
print(get_unmasked_count(change_mask))
nochange_mask = get_label_mask(nochange_img)
print(get_unmasked_count(nochange_mask))
###Output
97356
3347608
###Markdown
Get Features from LabelsCreate our training dataset from the label masks and the classified mosaic bands.
###Code
# combine the label masks with the valid data mask and then create X dataset for each label
classified_bands_arrays = classified_bands.values()
total_mask = combine_masks(classified_bands_arrays)
total_change_mask = np.logical_or(total_mask, change_mask)
print('Change: {} valid pixels ({}% masked)'.format(num_valid(total_change_mask),
round(perc_masked(total_change_mask), 2)))
X_change = bands_to_X(classified_bands_arrays, total_change_mask)
total_nochange_mask = np.logical_or(total_mask, nochange_mask)
print('No Change: {} valid pixels ({}% masked)'.format(num_valid(total_nochange_mask),
round(perc_masked(total_nochange_mask), 2)))
X_nochange = bands_to_X(classified_bands_arrays, total_nochange_mask)
# create a training sample set that is equal in size for all categories
# and uses 10% of the labeled change pixels
[X_change_sample, X_nochange_sample] = \
make_same_size_samples([X_change, X_nochange], size_percent=10)
print(X_change_sample.shape)
print(X_nochange_sample.shape)
change_label_value = 0
nochange_label_value = 1
X_rf = np.concatenate((X_change_sample, X_nochange_sample), axis=0)
y_rf = np.array(X_change_sample.shape[0] * [change_label_value] + \
X_nochange_sample.shape[0] * [nochange_label_value])
print(X_rf.shape)
print(y_rf.shape)
###Output
(19242, 9)
(19242,)
###Markdown
Classify Change
###Code
# NOTE: This relative import isn't working so the following code is directly
# copied from the temporal analysis notebook
# from ipynb.fs.defs.drc_roads_temporal_analysis import classify_change
def classify_change(classified_bands, mask, X_training, y_training):
clf = RandomForestClassifier()
with Timer():
clf.fit(X_training, y_training)
    X = bands_to_X(classified_bands, mask)  # use the mask argument (the copied version referenced the global total_mask)
    with Timer():
        y_pred = clf.predict(X)
    y_band = y_to_band(y_pred, mask)
return y_band
with Timer():
y_band_rf = classify_change(classified_bands_arrays, total_mask, X_rf, y_rf)
visual.plot_image(classified_band_to_rgb(y_band_rf), title='RF Classified Image', figsize=(25, 25))
###Output
_____no_output_____ |
Navigation-v2.ipynb | ###Markdown
Navigation---Implement a Deep Q Network for the Banana Collector environment using the Unity ML-Agents toolkit. The program has 3 parts:- Part 1 defines the classes, initiates the environment and so forth. It sets up all the scaffolding needed.- Part 2 Explore and Learn - it performs the DQN reinforcement learning. It also saves the best model.- Part 3 runs a saved model.- I have captured a portion of the runs in the file p1_nav-02.m4v, so one can play the mp4 file to see how the agent behaves.So one can either:- Run the cells in Part 1 and then Part 2 -> to train a model, explore hyperparameters and so forth- `Or` - Run the cells in Part 1 and then Part 3 -> to run a stored model Part 1 - Definitions & Setup 1.1. Install the required packagesThe required setup is detailed in the README.md. I am running this on a MacBookPro 14,3. 1.2. Define importsPython 3, numpy, matplotlib, torch
###Code
# General imports
import numpy as np
import random
from collections import namedtuple, deque
import time
from datetime import datetime, timedelta
import matplotlib.pyplot as plt
%matplotlib inline
# torch imports
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Constants Definitions
BUFFER_SIZE = int(1e5) # replay buffer size
BATCH_SIZE = 64 # minibatch size
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
# Number of neurons in the layers of the Q Network
FC1_UNITS = 16
FC2_UNITS = 8
FC3_UNITS = 4
# Store models flag. Store during calibration runs and do not store during hyperparameter search
STORE_MODELS = False
###Output
_____no_output_____
###Markdown
The Unity environments contain **_brains_** which are responsible for deciding the actions of their associated agents. Here we check for the first brain available, and set it as the default brain we will be controlling from Python.`Note : The file_name might be different for a different OS. As mentioned earlier, I am running OSX on a MacBookPro`
###Code
from unityagents import UnityEnvironment
env = UnityEnvironment(file_name="Banana.app") #, no_graphics=True)
# get the default brain
brain_name = env.brain_names[0]
brain = env.brains[brain_name]
###Output
INFO:unityagents:
'Academy' started successfully!
Unity Academy name: Academy
Number of Brains: 1
Number of External Brains : 1
Lesson number : 0
Reset Parameters :
Unity brain name: BananaBrain
Number of Visual Observations (per agent): 0
Vector Observation space type: continuous
Vector Observation space size (per agent): 37
Number of stacked Vector Observation: 1
Vector Action space type: discrete
Vector Action space size (per agent): 4
Vector Action descriptions: , , ,
###Markdown
1.3. Examine the State and Action SpacesThe simulation contains a single agent that navigates a large environment. At each time step, it has four actions at its disposal:- `0` - walk forward - `1` - walk backward- `2` - turn left- `3` - turn rightThe state space has `37` dimensions and contains the agent's velocity, along with ray-based perception of objects around agent's forward direction. A reward of `+1` is provided for collecting a yellow banana, and a reward of `-1` is provided for collecting a blue banana. - The cell below tests to make sure the environment is up and running by printing some information about the environment.- It also acquires the dimensions of the state and action space
###Code
# reset the environment for training agents via external python API
env_info = env.reset(train_mode=True)[brain_name]
# number of agents in the environment
print('Number of agents:', len(env_info.agents))
# number of actions
action_size = brain.vector_action_space_size
print('Number of actions:', action_size)
# examine the state space
state = env_info.vector_observations[0]
print('States look like:', state)
state_size = len(state)
print('States have length:', state_size)
###Output
Number of agents: 1
Number of actions: 4
States look like: [1. 0. 0. 0. 0.84408134 0.
0. 1. 0. 0.0748472 0. 1.
0. 0. 0.25755 1. 0. 0.
0. 0.74177343 0. 1. 0. 0.
0.25854847 0. 0. 1. 0. 0.09355672
0. 1. 0. 0. 0.31969345 0.
0. ]
States have length: 37
###Markdown
1.4. Define classes and setupThe device declaration enables the program to leverage GPUs if they are available
###Code
# device = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")
# Not used as we are using only the CPU for this project
###Output
_____no_output_____
###Markdown
1.5. Learning AlgorithmWe are using the Deep Q Network algorithm. The major components of the algorithm are:1. `A function approximator` implemented as a deep neural network which consists of fully connected layers. The function approximator learns the Q values of all the actions for a given state. The Banana environment has a state space of 37 and an action space of 4, so our network has an input size of 37 and an output size of 4. The accompanying Report.pdf has details on the network architecture.2. `Experience replay buffer` - in order to train the network we take actions and then store the results in the replay buffer. The replay buffer is a circular buffer and it has methods to sample a random batch.3. `The Agent` brings all of the above together. It interacts with the environment by taking actions based on a policy, collects rewards and the observation feedback, then stores the experience in the replay buffer and also initiates a learning step on the Q Network. The accompanying Report.pdf has more details on the agent.
###Code
class QNetwork(nn.Module):
"""Actor (Policy) Model."""
def __init__(self, state_size, action_size, seed, fc1_units = FC1_UNITS, fc2_units = FC2_UNITS, fc3_units = FC3_UNITS):
"""Initialize parameters and build model.
Params
======
state_size (int): Dimension of each state
action_size (int): Dimension of each action
seed (int): Random seed
fcx_units : Number of units in each layer
            ToDo : It is a little kludgy, as the network is built manually layer-by-layer.
Should take in a list fc_units and then dynamically build the network
"""
super(QNetwork, self).__init__()
self.seed = torch.manual_seed(seed)
self.fc1 = nn.Linear(state_size,fc1_units)
self.fc2 = nn.Linear(fc1_units,fc2_units)
self.fc3 = nn.Linear(fc2_units,fc3_units)
self.fc4 = nn.Linear(fc3_units,action_size)
def forward(self, state):
"""Build a network that maps state -> action values."""
x = F.relu(self.fc1(state))
x = F.relu(self.fc2(x))
x = F.relu(self.fc3(x))
x = self.fc4(x)
return x
class ReplayBuffer:
"""Fixed-size buffer to store experience tuples."""
def __init__(self, action_size, buffer_size, batch_size, seed):
"""Initialize a ReplayBuffer object.
Params
======
action_size (int): dimension of each action
buffer_size (int): maximum size of buffer
batch_size (int): size of each training batch
seed (int): random seed
"""
self.action_size = action_size
self.memory = deque(maxlen=buffer_size)
self.batch_size = batch_size
self.experience = namedtuple("Experience", field_names=["state", "action", "reward", "next_state", "done"])
self.seed = random.seed(seed)
def add(self, state, action, reward, next_state, done):
"""Add a new experience to memory."""
e = self.experience(state, action, reward, next_state, done)
self.memory.append(e)
def sample(self):
"""Randomly sample a batch of experiences from memory."""
experiences = random.sample(self.memory, k=self.batch_size)
states = torch.from_numpy(np.vstack([e.state for e in experiences if e is not None])).float()
actions = torch.from_numpy(np.vstack([e.action for e in experiences if e is not None])).long()
rewards = torch.from_numpy(np.vstack([e.reward for e in experiences if e is not None])).float()
next_states = torch.from_numpy(np.vstack([e.next_state for e in experiences if e is not None])).float()
dones = torch.from_numpy(np.vstack([e.done for e in experiences if e is not None]).astype(np.uint8)).float()
return (states, actions, rewards, next_states, dones)
def __len__(self):
"""Return the current size of internal memory."""
return len(self.memory)
class Agent():
"""Interacts with and learns from the environment."""
def __init__(self, state_size, action_size, seed):
"""Initialize an Agent object.
Params
======
state_size (int): dimension of each state
action_size (int): dimension of each action
seed (int): random seed
"""
self.state_size = state_size
self.action_size = action_size
self.seed = random.seed(seed)
# Q-Network
self.qnetwork_local = QNetwork(state_size, action_size, seed)
self.qnetwork_target = QNetwork(state_size, action_size, seed)
self.optimizer = optim.Adam(self.qnetwork_local.parameters(), lr=LR)
# Replay memory
self.memory = ReplayBuffer(action_size, BUFFER_SIZE, BATCH_SIZE, seed)
# Initialize time step (for updating every UPDATE_EVERY steps)
self.t_step = 0
def step(self, state, action, reward, next_state, done):
# Save experience in replay memory
self.memory.add(state, action, reward, next_state, done)
# Learn every UPDATE_EVERY time steps.
self.t_step = (self.t_step + 1) % UPDATE_EVERY
if self.t_step == 0:
# If enough samples are available in memory, get random subset and learn
if len(self.memory) > BATCH_SIZE:
experiences = self.memory.sample()
self.learn(experiences, GAMMA)
def act(self, state, eps=0.):
"""Returns actions for given state as per current policy.
Params
======
state (array_like): current state
eps (float): epsilon, for epsilon-greedy action selection
"""
state = torch.from_numpy(state).float().unsqueeze(0)
self.qnetwork_local.eval()
with torch.no_grad():
action_values = self.qnetwork_local(state)
self.qnetwork_local.train()
# Epsilon-greedy action selection
if random.random() > eps:
return np.argmax(action_values.cpu().data.numpy())
else:
return random.choice(np.arange(self.action_size))
def learn(self, experiences, gamma):
"""Update value parameters using given batch of experience tuples.
Params
======
experiences (Tuple[torch.Variable]): tuple of (s, a, r, s', done) tuples
gamma (float): discount factor
"""
states, actions, rewards, next_states, dones = experiences
# compute and minimize the loss
# Get max predicted Q values (for next states) from target model
Q_targets_next = self.qnetwork_target(next_states).detach().max(1)[0].unsqueeze(1)
# Compute Q targets for current states
Q_targets = rewards + gamma * Q_targets_next * (1 - dones)
# Get expected Q values from local model
Q_expected = self.qnetwork_local(states).gather(1,actions)
# Compute Loss
loss = F.mse_loss(Q_expected,Q_targets)
#Minimize Loss
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
# ------------------- update target network ------------------- #
self.soft_update(self.qnetwork_local, self.qnetwork_target, TAU)
def soft_update(self, local_model, target_model, tau):
"""Soft update model parameters.
θ_target = τ*θ_local + (1 - τ)*θ_target
Params
======
local_model (PyTorch model): weights will be copied from
target_model (PyTorch model): weights will be copied to
tau (float): interpolation parameter
"""
for target_param, local_param in zip(target_model.parameters(), local_model.parameters()):
target_param.data.copy_(tau*local_param.data + (1.0-tau)*target_param.data)
###Output
_____no_output_____
###Markdown
1.6. Instantiate an agentThe state space and the action space dimensions come from the environment
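As a minimal sketch (assuming `env` is the already-loaded Unity environment and `brain_name` its default brain, as used elsewhere in this notebook), the dimensions are typically read like this:

```python
# Sketch: obtain state/action dimensions from the Unity Banana environment.
brain = env.brains[brain_name]
env_info = env.reset(train_mode=True)[brain_name]

action_size = brain.vector_action_space_size   # 4 actions in the Banana environment
state = env_info.vector_observations[0]
state_size = len(state)                         # 37-dimensional state
```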
###Code
agent = Agent(state_size=state_size, action_size=action_size, seed=42)
print(agent.qnetwork_local)
###Output
QNetwork(
(fc1): Linear(in_features=37, out_features=16, bias=True)
(fc2): Linear(in_features=16, out_features=8, bias=True)
(fc3): Linear(in_features=8, out_features=4, bias=True)
(fc4): Linear(in_features=4, out_features=4, bias=True)
)
###Markdown
Part 2 - Learn & Train----- `Note : If you want to run a stored model, skip Part 2 and run the cells in Part 3 below` 2.1. DQN AlgorithmDefine the DQN algorithm. Once we have defined the foundations (network, buffer, agent and so forth), the DQN is relatively easy. It has a few responsibilities:1. Orchestrate the episodes, calling the appropriate methods2. Display a running commentary of the scores and episode count3. Check the success criterion for solving the environment, i.e. if the running average is > 13, and print the episode count4. Store the model with the maximum score5. Keep track of the scores for analytics at the end of the run
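As a quick sanity check on the epsilon schedule, the episode at which epsilon reaches its floor follows directly from `eps_start`, `eps_end` and `eps_decay`; a small illustrative calculation (not part of the original notebook):

```python
import math

def episodes_until_floor(eps_start, eps_end, eps_decay):
    # Episodes before eps_start * eps_decay**n falls below eps_end
    return math.ceil(math.log(eps_end / eps_start) / math.log(eps_decay))

print(episodes_until_floor(1.0, 0.01, 0.995))  # default schedule: ~919 episodes
print(episodes_until_floor(1.0, 0.005, 0.85))  # schedule used in the training run below: ~33 episodes
```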
###Code
def dqn(n_episodes=2000, max_t=1000, eps_start=1.0, eps_end=0.01, eps_decay=0.995):
"""Deep Q-Learning.
Params
======
n_episodes (int): maximum number of training episodes
max_t (int): maximum number of timesteps per episode
eps_start (float): starting value of epsilon, for epsilon-greedy action selection
eps_end (float): minimum value of epsilon
eps_decay (float): multiplicative factor (per episode) for decreasing epsilon
"""
scores = [] # list containing scores from each episode
scores_window = deque(maxlen=100) # last 100 scores
eps = eps_start # initialize epsilon
has_seen_13 = False
max_score = 0
for i_episode in range(1, n_episodes+1):
env_info = env.reset(train_mode=True)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0
max_steps = 0
for t in range(max_t):
action = agent.act(state, eps)
# next_state, reward, done, _ = env.step(action)
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0]
agent.step(state, action, reward, next_state, done)
state = next_state
score += reward
max_steps += 1
if done:
break
scores_window.append(score) # save most recent score
scores.append(score) # save most recent score
eps = max(eps_end, eps_decay*eps) # decrease epsilon
print('\rEpisode : {}\tAverage Score : {:5.2f}\tMax_steps : {}\teps : {:5.3f}\tMax.Score : {:5.3f}'.\
format(i_episode, np.mean(scores_window),max_steps,eps,max_score), end="")
if i_episode % 100 == 0:
print('\rEpisode : {}\tAverage Score : {:5.2f}\tMax_steps : {}\teps : {:5.3f}\tMax.Score : {:5.3f}'.\
format(i_episode, np.mean(scores_window),max_steps,eps,max_score))
if (np.mean(scores_window)>=13.0) and (not has_seen_13):
print('\nEnvironment solved in {:d} episodes!\tAverage Score: {:5.2f}'.\
format(i_episode-100, np.mean(scores_window)))
# torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
has_seen_13 = True
# break
# To see how far it can go
# Store the best model if desired
if STORE_MODELS:
if np.mean(scores_window) > max_score:
max_score = np.mean(scores_window)
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
# print(' .. Storing with score {}'.format(max_score))
return scores
###Output
_____no_output_____
###Markdown
2.2. The actual training Run1. Run the DQN2. Calculate and display end-of-run analytics viz. descriptive statistics and a plot of the scores
###Code
start_time = time.time()
scores = dqn(n_episodes=1000,eps_end=0.005, eps_decay=0.85)
# The env ends at 300 steps. Tried max_t > 1K. Didn't see any complex adaptive temporal behavior
env.close() # Close the environment
print('Elapsed : {}'.format(timedelta(seconds=time.time() - start_time)))
print(datetime.now())
# plot the scores
fig = plt.figure()
ax = fig.add_subplot(111)
plt.plot(np.arange(len(scores)), scores)
plt.ylabel('Score')
plt.xlabel('Episode #')
plt.show()
print(agent.qnetwork_local)
print('Max Score {:2f} at {}'.format(np.max(scores), np.argmax(scores)))
print('Percentile [25,50,75] : {}'.format(np.percentile(scores,[25,50,75])))
print('Variance : {:.3f}'.format(np.var(scores)))
### Run Logs and notes as we tweak the parameters
#### A place to keep the statistics and qualitative observations
'''
Max = 2000 episodes,
GAMMA = 0.99 # discount factor
TAU = 1e-3 # for soft update of target parameters
LR = 5e-4 # learning rate
UPDATE_EVERY = 4 # how often to update the network
fc1:64-fc2:64-fc3:4 -> 510 episodes, Max 26 @ 1183 episodes, Running 100 mean 16.25 @1200, @1800
Elapsed : 1:19:28.997291
fc1:32-fc2:16-fc3:4 -> 449 episodes, Max 28 @ 1991 episodes, Running 100 mean 16.41 @1300, 16.66 @1600, 17.65 @2000
Elapsed : 1:20:27.390989
Less variance? Overall learns better & steadier; keeps high scores once it has learned them - impedance match
percentile[25,50,75] = [11. 15. 18.]; var = 30.469993749999997
fc1:16-fc2:8-fc3:4 -> 502 episodes, Max 28 @ 1568 episodes, Running 100 mean 16.41 @1400, 16.23 @1500, 16.32 @1600
Elapsed : 1:18:33.396898
percentile[25,50,75] = [10. 14. 17.]; var = 30.15840975
Very calm CPU ! Embed in a TX2 or Raspberry Pi environment - definitely this network
Doesn't reach the highs of a larger network
fc1:32-fc2:16-fc3:8-fc4:4 -> 405 episodes, Max 28 @ 1281 episodes, Running 100 mean 17.05 @1500, 16.69 @1700
Elapsed : 1:24:07.507518
percentile[25,50,75] = [11. 15. 18.]; var = 34.83351975
Back to heavy CPU usage. Reaches solution faster, so definitely more fidelity. Depth gives early advantage
fc1:64-fc2:32-fc3:16-fc4:8-fc5:4 -> 392 episodes, Max 27 @ 631 episodes, Running 100 mean 16.94 @1500
Elapsed : 1:17:21.398181
percentile[25,50,75] = [11. 15. 18.]; var = 31.014
Higher CPU usage. Reaches the solution faster, so definitely more fidelity. Depth gives an early advantage
But takes longer to train ie get to the same point as the less deep networks.
Didn't get to the max score as others ie > 17
Monstrous atrocity w.r.t the problem we are solving!
fc1:32-fc2:4 -> 492 episodes, Max 27 @ 1580 episodes, Running 100 mean 16.73 @1900
Elapsed : 1:14:06.276599
percentile[25,50,75] = [10. 14. 17.]; var = 33.485
Minimalist
Final Model:
fc1:16-fc2:8-fc3:4 -> 567 episodes, Max 27 @ 1535 episodes, Running 100 mean 16.41 @1400, 16.1 @1500
Elapsed : 1:21:45.714247
percentile[25,50,75] = [ 9. 14. 17.]; var = 31.137
Very calm CPU ! I like this, based on my autonomous car and drone background !!
Occam's Razor and law of parsimony applies here - Optimum Model
Discount Factor = 0.85, Episodes = 1000 (for faster iteration)
10/1/18 : Solved in 852 episodes. It needs a larger discount rate
smaller and faster eps eps_end=0.001, eps_decay=0.85
reaches 0.001 in ~50 episodes
the eps_end is more important
solved in 203 episodes ! Much faster to solve, might not be good for larger action spaces as well as probabilistic spaces
same again !
eps_end = 0.005
solved in 209 episodes. 0.005 is good
Max Score 25.000000 at 599
Percentile [25,50,75] : [10. 14. 17.]
Variance : 31.983
'''
###Output
_____no_output_____
###Markdown
2.3. Test Area
###Code
print(np.percentile(scores,[25,50,75]))
print(np.var(scores))
print(agent.qnetwork_local)
print('Max Score {:2f} at {}'.format(np.max(scores), np.argmax(scores)))
len(scores)
np.median(scores)
# Work area to quickly test utility functions
import time
from datetime import datetime, timedelta
start_time = time.time()
time.sleep(10)
print('Elapsed : {}'.format(timedelta(seconds=time.time() - start_time)))
print(datetime.now())
env.close()
###Output
_____no_output_____
###Markdown
Part 3 : Run a stored Model NoteHere we are saving and loading the state dict, because we have access to the code.The best way to save and load a model that is to be used by two distinct and separate entities is:- `torch.save(model, filepath)`; - Then later, `model = torch.load(filepath)`But, for now, this is not recommended since PyTorch is still undergoing a lot of changes. Once PyTorch 1.0 is released, this might be the best option
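A minimal sketch of the two approaches (the filename `model.pth` is only illustrative):

```python
# Option 1: save/load only the weights (what this notebook does)
torch.save(agent.qnetwork_local.state_dict(), 'checkpoint.pth')
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))

# Option 2: save/load the whole model object
# (requires the QNetwork class to be importable when loading)
torch.save(agent.qnetwork_local, 'model.pth')
model = torch.load('model.pth')
model.eval()  # set to evaluation mode before inference
```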
###Code
agent.qnetwork_local.load_state_dict(torch.load('checkpoint.pth'))
scores=[]
for i in range(10): # 10 episodes
env_info = env.reset(train_mode=False)[brain_name] # reset the environment
state = env_info.vector_observations[0] # get the current state
score = 0 # initialize the score
while True:
action = agent.act(state) # select an action
env_info = env.step(action)[brain_name] # send the action to the environment
next_state = env_info.vector_observations[0] # get the next state
reward = env_info.rewards[0] # get the reward
done = env_info.local_done[0] # see if episode has finished
score += reward # update the score
state = next_state # roll over the state to next time step
if done: # exit loop if episode finished
break
scores.append(score)
print("Episode {:2d} Score {:5.2f}".format(i+1,score))
print('Mean of {} episodes = {}'.format(i+1,np.mean(scores)))
print(datetime.now())
env.close()
###Output
Episode 1 Score 12.00
Episode 2 Score 11.00
Episode 3 Score 15.00
Episode 4 Score 15.00
Episode 5 Score 18.00
Episode 6 Score 16.00
Episode 7 Score 18.00
Episode 8 Score 18.00
Episode 9 Score 16.00
Episode 10 Score 11.00
Mean of 10 episodes = 15.0
2018-10-01 21:15:43.115171
|
tutorials/an-introduction/1.hello-world.ipynb | ###Markdown
Tutorial: "hello world!" (Part 1 of 3)--- IntroductionIn **part 1 of this get started series**, you will submit a trivial "hello world" python script to the cloud by:- Running Python code in the cloud with Azure Machine Learning SDK- Switching between debugging locally on a compute instance- Submitting remote runs in the cloud- Monitoring and recording runs in the Azure Machine Learning studio Test in your development environmentYou can test your code works on a compute instance or locally (for example, a laptop), which has the benefit of interactive debugging of code:
###Code
!python src/hello.py
###Output
_____no_output_____
###Markdown
Submit your code to Azure Machine LearningBelow you create a __*control script*__ this is where you specify _how_ your code is submitted to Azure Machine Learning. The code you submit to Azure Machine Learning (in this case `hello.py`) does not need anything specific to Azure Machine Learning - it can be any valid Python code. It is only the control script that is Azure Machine Learning specific.The code below will show a Jupyter widget that tracks the progress of your run, and displays logs.> ! NOTE > The very first run will take 5-10 minutes to complete. This is because in the background a docker image is built in the cloud, the compute cluster is resized from 0 to 1 node, and the docker image is downloaded to the compute. Subsequent runs are much quicker (~15 seconds) as the docker image is cached on the compute - you can test this by resubmitting the code below after the first run has completed.
###Code
from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.widgets import RunDetails
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="an-introduction-hello-world-tutorial")
src = ScriptRunConfig(
source_directory="src", script="hello.py", compute_target="cpu-cluster"
)
run = exp.submit(src)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Understanding the control code| Code |Description | |---|---|| `ws = Workspace.from_config()` | [Workspace](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py&preserve-view=true) connects to your Azure Machine Learning workspace, so that you can communicate with your Azure Machine Learning resources. || `exp = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. || `src = ScriptRunConfig( ... )` | [ScriptRunConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.core.scriptrunconfig?view=azure-ml-py&preserve-view=true) wraps your `hello.py` code and passes it to your workspace. As the name suggests, you can use this class to _configure_ how you want your _script_ to _run_ in Azure Machine Learning. Also specifies what compute target the script will run on. In this code, the target is the compute cluster you created in the [setup tutorial](tutorial-1st-experiment-sdk-setup-local.md). || `run = exp.submit(config)` | Submits your script. This submission is called a [Run](https://docs.microsoft.com/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=true). A run encapsulates a single execution of your code. Use a run to monitor the script progress, capture the output, analyze the results, visualize metrics and more. ||`RunDetails(run).show()` | There is an Azure Machine Learning widget that shows the progress of your job along with streaming the log files. View the logsThe widget has a dropdown box titled **Output logs** select `70_driver_log.txt`, which shows the following standard output: ``` 1: [2020-08-04T22:15:44.407305] Entering context manager injector. 2: [context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError', 'UserExceptions:context_managers.UserExceptions'], invocation=['hello.py']) 3: Starting the daemon thread to refresh tokens in background for process with pid = 31263 4: Entering Run History Context Manager. 5: Preparing to call script [ hello.py ] with arguments: [] 6: After variable expansion, calling script [ hello.py ] with arguments: [] 7: 8: hello world! 9: Starting the daemon thread to refresh tokens in background for process with pid = 3126310:11:12: The experiment completed successfully. Finalizing run...13: Logging experiment finalizing status in history service.14: [2020-08-04T22:15:46.541334] TimeoutHandler __init__15: [2020-08-04T22:15:46.541396] TimeoutHandler __enter__16: Cleaning up all outstanding Run operations, waiting 300.0 seconds17: 1 items cleaning up...18: Cleanup took 0.1812913417816162 seconds19: [2020-08-04T22:15:47.040203] TimeoutHandler __exit__```On line 8 above, you see the "Hello world!" output. The 70_driver_log.txt file contains the standard output from run and can be useful when debugging remote runs in the cloud. You can also view the run by clicking on the **Click here to see the run in Azure Machine Learning studio** link in the widget. Next stepsIn this tutorial, you took a simple "hello world" script and ran it on Azure. 
You saw how to connect to your Azure Machine Learning workspace, create an Experiment, and submit your `hello.py` code to the cloud.In the [next tutorial](2.pytorch-model.ipynb), you build on these learnings by running something more interesting than `print("hello world!")`.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Tutorial: "hello world!" (Part 1 of 3)--- IntroductionIn **part 1 of this get started series**, you will submit a trivial "hello world" python script to the cloud by:- Running Python code in the cloud with Azure Machine Learning SDK- Switching between debugging locally on a compute instance- Submitting remote runs in the cloud- Monitoring and recording runs in the Azure Machine Learning studio Write files
###Code
%%writefile hello.py
print("hello world!")
###Output
_____no_output_____
###Markdown
Test in your development environmentYou can test your code works on a compute instance or locally (for example, a laptop), which has the benefit of interactive debugging of code:
###Code
!python hello.py
###Output
_____no_output_____
###Markdown
Submit your code to Azure Machine LearningBelow you create a __*control script*__ this is where you specify _how_ your code is submitted to Azure Machine Learning. The code you submit to Azure Machine Learning (in this case `hello.py`) does not need anything specific to Azure Machine Learning - it can be any valid Python code. It is only the control script that is Azure Machine Learning specific.The code below will show a Jupyter widget that tracks the progress of your run, and displays logs.> ! NOTE > The very first run will take 5-10 minutes to complete. This is because in the background a docker image is built in the cloud, the compute cluster is resized from 0 to 1 node, and the docker image is downloaded to the compute. Subsequent runs are much quicker (~15 seconds) as the docker image is cached on the compute - you can test this by resubmitting the code below after the first run has completed.
###Code
from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.widgets import RunDetails
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="an-introduction-hello-world-tutorial")
src = ScriptRunConfig(
source_directory=".", script="hello.py", compute_target="cpu-cluster"
)
run = exp.submit(src)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Understanding the control code| Code |Description | |---|---|| `ws = Workspace.from_config()` | [Workspace](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py&preserve-view=true) connects to your Azure Machine Learning workspace, so that you can communicate with your Azure Machine Learning resources. || `exp = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. || `src = ScriptRunConfig( ... )` | [ScriptRunConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.core.scriptrunconfig?view=azure-ml-py&preserve-view=true) wraps your `hello.py` code and passes it to your workspace. As the name suggests, you can use this class to _configure_ how you want your _script_ to _run_ in Azure Machine Learning. Also specifies what compute target the script will run on. In this code, the target is the compute cluster you created in the [setup tutorial](tutorial-1st-experiment-sdk-setup-local.md). || `run = exp.submit(config)` | Submits your script. This submission is called a [Run](https://docs.microsoft.com/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=true). A run encapsulates a single execution of your code. Use a run to monitor the script progress, capture the output, analyze the results, visualize metrics and more. ||`RunDetails(run).show()` | There is an Azure Machine Learning widget that shows the progress of your job along with streaming the log files. View the logsThe widget has a dropdown box titled **Output logs** select `70_driver_log.txt`, which shows the following standard output: ``` 1: [2020-08-04T22:15:44.407305] Entering context manager injector. 2: [context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError', 'UserExceptions:context_managers.UserExceptions'], invocation=['hello.py']) 3: Starting the daemon thread to refresh tokens in background for process with pid = 31263 4: Entering Run History Context Manager. 5: Preparing to call script [ hello.py ] with arguments: [] 6: After variable expansion, calling script [ hello.py ] with arguments: [] 7: 8: hello world! 9: Starting the daemon thread to refresh tokens in background for process with pid = 3126310:11:12: The experiment completed successfully. Finalizing run...13: Logging experiment finalizing status in history service.14: [2020-08-04T22:15:46.541334] TimeoutHandler __init__15: [2020-08-04T22:15:46.541396] TimeoutHandler __enter__16: Cleaning up all outstanding Run operations, waiting 300.0 seconds17: 1 items cleaning up...18: Cleanup took 0.1812913417816162 seconds19: [2020-08-04T22:15:47.040203] TimeoutHandler __exit__```On line 8 above, you see the "Hello world!" output. The 70_driver_log.txt file contains the standard output from run and can be useful when debugging remote runs in the cloud. You can also view the run by clicking on the **Click here to see the run in Azure Machine Learning studio** link in the widget. Next stepsIn this tutorial, you took a simple "hello world" script and ran it on Azure. 
You saw how to connect to your Azure Machine Learning workspace, create an Experiment, and submit your `hello.py` code to the cloud.In the [next tutorial](2.pytorch-model.ipynb), you build on these learnings by running something more interesting than `print("hello world!")`.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Tutorial: "hello world!" (Part 1 of 3)--- IntroductionIn **part 1 of this get started series**, you will submit a trivial "hello world" python script to the cloud by:- Running Python code in the cloud with Azure Machine Learning SDK- Switching between debugging locally on a compute instance- Submitting remote runs in the cloud- Monitoring and recording runs in the Azure Machine Learning studio Your codeIn the `code` subdirectory you will write a trivial python script `hello.py` that has the following code:```Pythonprint("hello world!")```In this tutorial you are going to submit this trivial python script to an Azure Machine Learning Compute Cluster.
###Code
import os
os.makedirs("code", exist_ok=True)
%%writefile code/hello.py
print("hello world!")
###Output
_____no_output_____
###Markdown
Test in your development environmentYou can test your code works on a compute instance or locally (for example, a laptop), which has the benefit of interactive debugging of code:
###Code
!python code/hello.py
###Output
_____no_output_____
###Markdown
Submit your code to Azure Machine LearningBelow you create a __*control script*__ this is where you specify _how_ your code is submitted to Azure Machine Learning. The code you submit to Azure Machine Learning (in this case `hello.py`) does not need anything specific to Azure Machine Learning - it can be any valid Python code. It is only the control script that is Azure Machine Learning specific.The code below will show a Jupyter widget that tracks the progress of your run, and displays logs.> ! NOTE > The very first run will take 5-10 minutes to complete. This is because in the background a docker image is built in the cloud, the compute cluster is resized from 0 to 1 node, and the docker image is downloaded to the compute. Subsequent runs are much quicker (~15 seconds) as the docker image is cached on the compute - you can test this by resubmitting the code below after the first run has completed.
###Code
from azureml.core import Workspace, Experiment, ScriptRunConfig
from azureml.widgets import RunDetails
ws = Workspace.from_config()
exp = Experiment(workspace=ws, name="getting-started-hello-world-tutorial")
src = ScriptRunConfig(
source_directory="code", script="hello.py", compute_target="cpu-cluster"
)
run = exp.submit(src)
RunDetails(run).show()
###Output
_____no_output_____
###Markdown
Understanding the control code| Code |Description | |---|---|| `ws = Workspace.from_config()` | [Workspace](https://docs.microsoft.com/python/api/azureml-core/azureml.core.workspace.workspace?view=azure-ml-py&preserve-view=true) connects to your Azure Machine Learning workspace, so that you can communicate with your Azure Machine Learning resources. || `exp = Experiment( ... )` | [Experiment](https://docs.microsoft.com/python/api/azureml-core/azureml.core.experiment.experiment?view=azure-ml-py&preserve-view=true) provides a simple way to organize multiple runs under a single name. Later you can see how experiments make it easy to compare metrics between dozens of runs. || `src = ScriptRunConfig( ... )` | [ScriptRunConfig](https://docs.microsoft.com/python/api/azureml-core/azureml.core.scriptrunconfig?view=azure-ml-py&preserve-view=true) wraps your `hello.py` code and passes it to your workspace. As the name suggests, you can use this class to _configure_ how you want your _script_ to _run_ in Azure Machine Learning. Also specifies what compute target the script will run on. In this code, the target is the compute cluster you created in the [setup tutorial](tutorial-1st-experiment-sdk-setup-local.md). || `run = exp.submit(config)` | Submits your script. This submission is called a [Run](https://docs.microsoft.com/python/api/azureml-core/azureml.core.run(class)?view=azure-ml-py&preserve-view=true). A run encapsulates a single execution of your code. Use a run to monitor the script progress, capture the output, analyze the results, visualize metrics and more. ||`RunDetails(run).show()` | There is an Azure Machine Learning widget that shows the progress of your job along with streaming the log files. View the logsThe widget has a dropdown box titled **Output logs** select `70_driver_log.txt`, which shows the following standard output: ``` 1: [2020-08-04T22:15:44.407305] Entering context manager injector. 2: [context_manager_injector.py] Command line Options: Namespace(inject=['ProjectPythonPath:context_managers.ProjectPythonPath', 'RunHistory:context_managers.RunHistory', 'TrackUserError:context_managers.TrackUserError', 'UserExceptions:context_managers.UserExceptions'], invocation=['hello.py']) 3: Starting the daemon thread to refresh tokens in background for process with pid = 31263 4: Entering Run History Context Manager. 5: Preparing to call script [ hello.py ] with arguments: [] 6: After variable expansion, calling script [ hello.py ] with arguments: [] 7: 8: hello world! 9: Starting the daemon thread to refresh tokens in background for process with pid = 3126310:11:12: The experiment completed successfully. Finalizing run...13: Logging experiment finalizing status in history service.14: [2020-08-04T22:15:46.541334] TimeoutHandler __init__15: [2020-08-04T22:15:46.541396] TimeoutHandler __enter__16: Cleaning up all outstanding Run operations, waiting 300.0 seconds17: 1 items cleaning up...18: Cleanup took 0.1812913417816162 seconds19: [2020-08-04T22:15:47.040203] TimeoutHandler __exit__```On line 8 above, you see the "Hello world!" output. The 70_driver_log.txt file contains the standard output from run and can be useful when debugging remote runs in the cloud. You can also view the run by clicking on the **Click here to see the run in Azure Machine Learning studio** link in the widget. Next stepsIn this tutorial, you took a simple "hello world" script and ran it on Azure. 
You saw how to connect to your Azure Machine Learning workspace, create an Experiment, and submit your `hello.py` code to the cloud.In the [next tutorial](2.pytorch-model.ipynb), you build on these learnings by running something more interesting than `print("hello world!")`.
###Code
run.wait_for_completion(show_output=True)
###Output
_____no_output_____ |
TextSummarization_bertsum.ipynb | ###Markdown
###Code
!git clone https://github.com/nlpyang/BertSum.git
cd BertSum
!git clone https://github.com/thangarani/raw_stories.git
!wget http://nlp.stanford.edu/software/stanford-corenlp-latest.zip
!wget http://nlp.stanford.edu/software/stanford-corenlp-full-2016-10-31.zip
mv stanford-corenlp-full-2016-10-31.zip /BertSum
!wget http://nlp.stanford.edu/software/stanford-corenlp-full-2017-06-09.zip
!sudo apt install unzip
!unzip stanford-corenlp-full-2017-06-09.zip
pwd
import os
os.environ['CLASSSPATH']="/content/BertSum/stanford-corenlp-full-2017-06-09/stanford-corenlp-3.8.0.jar"
env
!pip3 install pytorch_pretrained_bert
!pip install tensorboardX
!pip install multiprocess
!git clone https://github.com/andersjo/pyrouge.git
cd pyrouge/tools/ROUGE-1.5.5
pip install perl
env
!export ROUGE_EVAL_HOME="/content/BertSum/pyrouge/tools/ROUGE-1.5.5/data/"
cd data/WordNet-2.0-Exceptions/
!apt-get install synaptic
!./buildExeptionDB.pl . exc WordNet-2.0.exc.db
cd ../
!ln -s WordNet-2.0-Exceptions/WordNet-2.0.exc.db WordNet-2.0.exc.db
ls
rm WordNet-2.0.exc.db
!ln -s WordNet-2.0-Exceptions/WordNet-2.0.exc.db WordNet-2.0.exc.db
!apt-get install libxml-dom-perl
!git clone https://github.com/bheinzerling/pyrouge.git
cd pyrouge
!python setup.py install
!pyrouge_set_rouge_path /content/BertSum/pyrouge/tools/ROUGE-1.5.5/
!python -m pyrouge.test
cd /content/BertSum/
###Output
_____no_output_____
###Markdown
In `/content/BertSum/src/models/data_loader.py`, line 31, in `__init__`, change `mask = 1 - (src == 0)` to `mask = ~ (src == 0)` (recent PyTorch versions no longer support subtracting a bool tensor).
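One way to apply that change in Colab without editing the file by hand is an in-place `sed` substitution (a sketch; it assumes line 31 still contains the expression exactly as shown above):

```python
# Patch data_loader.py: replace `1 - (src == 0)` with `~ (src == 0)`
!sed -i 's/1 - (src == 0)/~ (src == 0)/' /content/BertSum/src/models/data_loader.py
# Check the result
!grep -n "src == 0" /content/BertSum/src/models/data_loader.py
```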
###Code
mkdir merged_stories_tokenized
cd src
ls /content/BertSum/stanford-corenlp-4.0.0/stanford-corenlp-4.0.0.jar
!python preprocess.py -mode tokenize -raw_path ../raw_stories -save_path ../merged_stories_tokenized -log_file ../logs/cnndm.log
!python train.py -mode train -encoder rnn -dropout 0.1 -bert_data_path ../bert_data/cnndm -model_path ../models/bert_rnn -lr 2e-3 -visible_gpus 0,1,2 -gpu_ranks 0,1,2 -world_size 1 -report_every 1000 -save_checkpoint_steps 1000 -batch_size 3000 -decay_method noam -train_steps 5000 -accum_count 2 -log_file ../logs/bert_rnn -use_interval true -warmup_steps 10000 -rnn_size 768 -dropout 0.1
!python train.py -mode validate -bert_data_path ../bert_data/cnndm -model_path ../models/bert_rnn -visible_gpus 0 -gpu_ranks 0 -batch_size 30000 -log_file ../logs/model_bert_rnn -result_path ../results/cnndm -test_all -block_trigram true
###Output
_____no_output_____ |
1_week_2_independent_work.ipynb | ###Markdown
Feature Importance 1. Load the dataset from the titanic.csv file using the Pandas package.
###Code
import pandas
import numpy as np
data = pandas.read_csv('./data/titanic.csv', index_col='PassengerId')
print(len(data))
data.head()
###Output
891
###Markdown
2. Keep four features in the dataset: passenger class (Pclass), ticket price (Fare), passenger age (Age), and sex (Sex). The data contain missing values: for example, the age of some passengers is unknown. When read into pandas, such records take the value nan. Find all objects that have missing features and remove them from the dataset.
###Code
data = data[['Pclass', 'Fare', 'Age', 'Sex', 'Survived']].dropna()
features = data[['Pclass', 'Fare', 'Age', 'Sex']]
features = features.replace('male', 0)
features = features.replace('female', 1)
features.head()
###Output
_____no_output_____
###Markdown
---`Note: the Sex feature has string values.`--- 4. Extract the target variable: it is stored in the Survived column.
###Code
y = data['Survived']
y.head()
###Output
_____no_output_____
###Markdown
6. Train a decision tree with the parameter random_state=241 and the remaining parameters at their default values (these are the parameters of the DecisionTreeClassifier constructor).
###Code
from sklearn.tree import DecisionTreeClassifier
X = features
clf = DecisionTreeClassifier(random_state=241)
clf.fit(X, y)
X.head()
###Output
_____no_output_____
###Markdown
7. Compute the feature importances and find the two features with the highest importance. Their names are the answer for this task (give the feature names separated by a comma or a space; the order does not matter).
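To map the importances back to feature names and pick the two largest, the importances can be paired with the columns of `X`, for example:

```python
# Pair each feature name with its importance and sort in descending order
ranked = sorted(zip(X.columns, clf.feature_importances_), key=lambda t: t[1], reverse=True)
for name, importance in ranked:
    print(f"{name}: {importance:.4f}")

# The two most important features are the answer to this task
print([name for name, _ in ranked[:2]])
```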
###Code
importances = clf.feature_importances_
for i in importances:
print(f"{i}")
###Output
0.14751816099515025
0.2953846784065746
0.25658494964003575
0.3005122109582393
|
03-Numpy Operations.ipynb | ###Markdown
NumPy Operations ArithmeticYou can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
###Code
import numpy as np
arr = np.arange(0,10)
arr
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr/arr
# Also a warning, but not an error; instead we get infinity
1/arr
arr**arr
###Output
_____no_output_____
###Markdown
Universal Array FunctionsNumpy comes with many [universal array functions](http://docs.scipy.org/doc/numpy/reference/ufuncs.html), which are essentially just mathematical operations you can use to perform the operation across the array. Let's show some common ones:
###Code
#Taking Square Roots
np.sqrt(arr)
#Calculating exponential (e^)
np.exp(arr)
np.max(arr) #same as arr.max()
np.sin(arr)
np.log(arr)
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in log
if __name__ == '__main__':
###Markdown
NumPy Operations ArithmeticYou can easily perform array with array arithmetic, or scalar with array arithmetic. Let's see some examples:
###Code
import numpy as np
arr = np.arange(0,10)
arr
arr + arr
arr * arr
arr - arr
# Warning on division by zero, but not an error!
# Just replaced with nan
arr/arr
# Also a warning, but not an error; instead we get infinity
1/arr
arr**arr
###Output
_____no_output_____
###Markdown
Universal Array FunctionsNumpy comes with many [universal array functions](http://docs.scipy.org/doc/numpy/reference/ufuncs.html), which are essentially just mathematical operations you can use to perform the operation across the array. Let's show some common ones:
###Code
#Taking Square Roots
np.sqrt(arr)
#Calculating exponential (e^)
np.exp(arr)
np.max(arr) #same as arr.max()
np.sin(arr)
np.log(arr)
###Output
/Users/marci/anaconda/lib/python3.5/site-packages/ipykernel/__main__.py:1: RuntimeWarning: divide by zero encountered in log
if __name__ == '__main__':
|
.ipynb_checkpoints/Job A Thon Analytics VIdya Hackathon-checkpoint.ipynb | ###Markdown
Variable Definitions
- ID: Unique identifier for a row
- City_Code: Code for the city of the customers
- Region_Code: Code for the region of the customers
- Accomodation_Type: Customer owns or rents the house
- Reco_Insurance_Type: Joint or Individual type for the recommended insurance
- Upper_Age: Maximum age of the customer
- Lower_Age: Minimum age of the customer
- Is_Spouse: If the customers are married to each other (in case of joint insurance)
- Health Indicator: Encoded values for the health of the customer
- Holding_Policy_Duration: Duration (in years) of holding policy (a policy that the customer has already subscribed to with the company)
- Holding_Policy_Type: Type of holding policy
- Reco_Policy_Cat: Encoded value for recommended health insurance
- Reco_Policy_Premium: Annual premium (INR) for the recommended health insurance
- Response (Target): 0 : Customer did not show interest in the recommended policy, 1 : Customer showed interest in the recommended policy
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
train_df = pd.read_csv('input/train.csv')
test_df = pd.read_csv('input/test.csv')
train_df.head(5)
train_df.shape
train_df.describe().T
train_df.isnull().sum()
test_df.isnull().sum()
###Output
_____no_output_____
###Markdown
Our test set also contains null values
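The same kind of imputation can later be applied to the test set as well; a minimal sketch (the cells below only impute the training data, so extending this to `test_df` is an assumption):

```python
from sklearn.impute import KNNImputer

# Numeric columns shared by train and test (ID excluded)
num_cols = [col for col in test_df.columns if test_df[col].dtypes != 'O' and col != 'ID']

imputer = KNNImputer(n_neighbors=5)
imputer.fit(train_df[num_cols])                        # learn neighbour structure from the training data
test_df[num_cols] = imputer.transform(test_df[num_cols])
```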
###Code
train_df['Response'].value_counts()
train_df['Reco_Insurance_Type'].value_counts()
train_df['Accomodation_Type'].value_counts()
train_df['City_Code'].value_counts()
train_df['Health Indicator'].isnull().sum()
train_df['Holding_Policy_Type'].value_counts()
train_df['Holding_Policy_Duration'].value_counts()
train_df.info()
plt.figure(figsize=(14,7))
sns.heatmap(train_df.corr(),cmap='gist_gray',fmt='0.2%', annot=True)
###Output
_____no_output_____
###Markdown
First we predict the missing values of Health Indicator
###Code
from sklearn.impute import KNNImputer
knn = KNNImputer(n_neighbors=5)
train_df.columns
num = [col for col in train_df.columns if train_df[col].dtypes != 'O']
num
num.remove('ID')
num
train_df[num].head()
knn.fit(train_df[num])
train_df[num] = knn.transform(train_df[num])
pd.DataFrame(knn.transform(train_df[num])).head()
train_df
###Output
_____no_output_____ |
scripts/publication/SupplementaryTable6.ipynb | ###Markdown
scRFE results for supplementary table 6
###Code
# facsGlobalobjects-Age
# one vs all for 3m, 18m, 24m
adata = read_h5ad('/data/madeline/src3/01_figure_1/tabula-muris-senis-facs-processed-official-annotations.h5ad')
from anndata import read_h5ad
import numpy as np
import pandas as pd
from scRFE.scRFE import scRFE
from scRFE.scRFE import makeOneForest
from scRFE.scRFE import resultWrite
# 3m
forest3m = makeOneForest(dataMatrix=adata, classOfInterest='age', labelOfInterest='3m', nEstimators=1000,
randomState=0, min_cells=15, keep_small_categories=True,
nJobs=-1, oobScore=True, Step=0.2, Cv=5, verbosity=True)
column_headings = ['3m','3m_gini']
resultsdf = pd.DataFrame(columns = column_headings)
resultsdf['3m'] = forest3m[0]
resultsdf['3m_gini'] = forest3m[1]
resultsdf = resultsdf.sort_values(by = ['3m_gini'], ascending = False)
resultsdf.reset_index(drop = True, inplace = True)
resultsdf.to_csv('facsAllGenes3m.csv')
# 18m
forest18m = makeOneForest(dataMatrix=adata, classOfInterest='age', labelOfInterest='18m', nEstimators=1000,
randomState=0, min_cells=15, keep_small_categories=True,
nJobs=-1, oobScore=True, Step=0.2, Cv=5, verbosity=True)
column_headings = ['18m','18m_gini']
resultsdf = pd.DataFrame(columns = column_headings)
resultsdf['18m'] = forest18m[0]
resultsdf['18m_gini'] = forest18m[1]
resultsdf = resultsdf.sort_values(by = ['18m_gini'], ascending = False)
resultsdf.reset_index(drop = True, inplace = True)
resultsdf.to_csv('facsAllGenes18m.csv')
# 24
forest24m = makeOneForest(dataMatrix=adata, classOfInterest='age', labelOfInterest='24m', nEstimators=1000,
randomState=0, min_cells=15, keep_small_categories=True,
nJobs=-1, oobScore=True, Step=0.2, Cv=5, verbosity=True)
column_headings = ['24m','24m_gini']
resultsdf = pd.DataFrame(columns = column_headings)
resultsdf['24m'] = forest24m[0]
resultsdf['24m_gini'] = forest24m[1]
resultsdf = resultsdf.sort_values(by = ['24m_gini'], ascending = False)
resultsdf.reset_index(drop = True, inplace = True)
resultsdf.to_csv('facsAllGenes24m.csv')
###Output
_____no_output_____ |
tests/notebooks/DDM_fitting.ipynb | ###Markdown
Fit the DDM on individual data
###Code
import rlssm
import pandas as pd
import os
###Output
_____no_output_____
###Markdown
Import the data
###Code
par_path = os.path.abspath(os.path.join(os.getcwd(), os.pardir, os.pardir))
data_path = os.path.join(par_path, 'data/data_experiment.csv')
data = pd.read_csv(data_path, index_col=0)
data = data[data.participant == 20].reset_index(drop=True) # Only select 1 participant
data.head()
###Output
_____no_output_____
###Markdown
Initialize the model
###Code
model = rlssm.DDModel(hierarchical_levels = 1)
###Output
Using cached StanModel
###Markdown
Fit
###Code
# sampling parameters
n_iter = 1000
n_chains = 2
n_thin = 1
model_fit = model.fit(
data,
thin = n_thin,
iter = n_iter,
chains = n_chains,
pointwise_waic=False,
verbose = False)
###Output
WARNING:pystan:Maximum (flat) parameter count (1000) exceeded: skipping diagnostic tests for n_eff and Rhat.
To run all diagnostics call pystan.check_hmc_diagnostics(fit)
###Markdown
get Rhat
###Code
model_fit.rhat
###Output
_____no_output_____
###Markdown
get wAIC
###Code
model_fit.waic
###Output
_____no_output_____
###Markdown
Posteriors
###Code
model_fit.samples.describe()
import seaborn as sns
sns.set(context = "talk",
style = "white",
palette = "husl",
rc={'figure.figsize':(15, 8)})
model_fit.plot_posteriors(height=5, show_intervals="HDI", alpha_intervals=.05);
###Output
_____no_output_____
###Markdown
Posterior predictives Ungrouped
###Code
pp = model_fit.get_posterior_predictives_df(n_posterior_predictives=100)
pp
pp_summary = model_fit.get_posterior_predictives_summary(n_posterior_predictives=100)
pp_summary
model_fit.plot_mean_posterior_predictives(n_posterior_predictives=100, figsize=(20,8), show_intervals='HDI');
model_fit.plot_quantiles_posterior_predictives(n_posterior_predictives=100, kind='shades');
###Output
_____no_output_____
###Markdown
Grouped
###Code
import numpy as np
# Define new grouping variables, in this case, for the different choice pairs, but any grouping var can do
data['choice_pair'] = 'AB'
data.loc[(data.cor_option == 3) & (data.inc_option == 1), 'choice_pair'] = 'AC'
data.loc[(data.cor_option == 4) & (data.inc_option == 2), 'choice_pair'] = 'BD'
data.loc[(data.cor_option == 4) & (data.inc_option == 3), 'choice_pair'] = 'CD'
data['block_bins'] = pd.cut(data.trial_block, 8, labels=np.arange(1, 9))
model_fit.get_grouped_posterior_predictives_summary(
grouping_vars=['block_label', 'choice_pair'],
quantiles=[.3, .5, .7],
n_posterior_predictives=100)
model_fit.get_grouped_posterior_predictives_summary(
grouping_vars=['block_bins'],
quantiles=[.3, .5, .7],
n_posterior_predictives=100)
model_fit.plot_mean_grouped_posterior_predictives(grouping_vars=['block_bins'],
n_posterior_predictives=100,
figsize=(20,8));
model_fit.plot_quantiles_grouped_posterior_predictives(
n_posterior_predictives=100,
grouping_var='choice_pair',
kind='shades',
quantiles=[.1, .3, .5, .7, .9]);
###Output
_____no_output_____ |
docs/source/notebooks/09-ilustrative_example.ipynb | ###Markdown
Illustrative exampleIn this example we will create an [acquisition server](https://openbci-stream.readthedocs.io/en/latest/notebooks/A3-server-based_acquisition.html) on a Raspberry Pi and stream EEG data in real time from it through [Kafka](https://openbci-stream.readthedocs.io/en/latest/notebooks/02-kafka_configuration.html) using the [OpenBCI-Stream library](https://openbci-stream.readthedocs.io/en/latest/index.html).Devices used: * Raspberry Pi 4 Model B 4GB RAM * Cyton Biosensing Board (8-channels) * OpenBCI WiFi Shield * Computer (with WiFi) Conventions used: * The **red window frames** indicate that the window is "owned" by the Raspberry through a remote SSH connection. * The **IP** for the **Raspberry** is **192.168.1.1** * The **IP** for the **OpenBCI WiFi Shield** is **192.168.1.113** Acquisition serverThe [guide to create an acquisition server](https://openbci-stream.readthedocs.io/en/latest/notebooks/A3-server-based_acquisition.html) explains the process of setting up the server on a Raspberry Pi; after finishing and rebooting the system, the Raspberry will be an **Access Point**, and we must connect the **OpenBCI WiFi Shield** as well as the main computer (where BCI Framework will be executed) to this network.We must verify that the respective daemons are running correctly on the Raspberry: $ sudo systemctl status kafka zookeeper@kafka $ sudo systemctl status stream_eeg stream_rpycThe system uses NTP to synchronize clocks, so the Raspberry **must have a wired connection** to the internet to synchronize its own clock; to ensure this we can verify the connection and restart the daemon: $ sudo systemctl restart ntpd $ ntpq -pnAfter a while, the clock will be synchronized (notice the * next to the server **186.30.58.181**).We can verify the status of the **WiFi Shield** with the command: $ curl -s http://192.168.1.113/board | jqThe above commands are executed through an SSH connection; they can also be run by connecting a monitor and a keyboard to the Raspberry. Configure montageA simple 8-channel montage: Configuration and connection with OpenBCI and then start the stream Connect to the Raspberry that is running under the IP **192.168.1.1** and the **WiFi Shield** on **192.168.1.113**. The sample frequency is **1000 samples per second**, with transmission in packages of **100 samples**. ImpedancesOnce the streaming is started, the impedances can be displayed from the **Montages tab** Raw EEG and topoplot P300 speller EEG records reading
###Code
from openbci_stream.utils.hdf5 import HDF5Reader
filename = "record-04_20_21-13_58_25.h5"
file = HDF5Reader(filename)
print(file)
file.close()
with HDF5Reader(filename) as file:
data, classes = file.get_data(tmin=0, duration=0.125)
data.shape, classes.shape
###Output
_____no_output_____ |
step_A.ipynb | ###Markdown
Step A
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import cv2
from matplotlib import pyplot as plt
import collections
import os
###Output
_____no_output_____
###Markdown
The ```compute_all_keypoints``` function calculates all keypoints of all query and train images and stores them in a dictionary, in order to easily access them later.
###Code
def compute_all_keypoints(query_imgs, train_imgs, sift):
img_dict = {}
for img in query_imgs:
file = 'models/' + img + '.jpg'
query = cv2.imread(file, 0)
kp, des = sift.detectAndCompute(query, None)
img_dict[img] = {'kp': kp, 'des': des, 'shape': query.shape}
for img in train_imgs:
file = 'scenes/' + img + '.png'
train = cv2.imread(file, 0)
kp, des = sift.detectAndCompute(train, None)
img_dict[img] = {'kp': kp, 'des': des, 'shape': train.shape}
return img_dict
###Output
_____no_output_____
###Markdown
The ```apply_ratio_test``` function takes all the matches found between the query and the train image, it chooses the good ones with the usual ratio test and it stores them in a dictionary using the indexes of the query keypoints as keys and the indexes of the train keypoints as values.
###Code
def apply_ratio_test(all_matches):
# map of matches kp_query_idx -> kp_train_idx
good_matches = {}
for m, n in all_matches:
if m.distance < LOWE_COEFF * n.distance:
good_matches[m.queryIdx] = m.trainIdx
return good_matches
###Output
_____no_output_____
###Markdown
The ```check_matches``` function orders the good matches in decreasing number of keypoints and it runs a series of tests on them, checking the geometric arrangement and the color consistency.
###Code
def check_matches(global_matches, train_img, img_dict):
sorted_global_matches = collections.OrderedDict(sorted(global_matches.items(), key=lambda item: item[1][0], reverse=True))
recognised = {}
train_file = 'scenes/' + train_img + '.png'
train_bgr = cv2.imread(train_file)
for k, v in sorted_global_matches.items():
if v[0] > MIN_MATCH_COUNT:
query_file = 'models/' + k + '.jpg'
query_bgr = cv2.imread(query_file)
src_pts = np.float32([img_dict[k]['kp'][p].pt for p in v[1].keys()]).reshape(-1, 1, 2)
dst_pts = np.float32([img_dict[train_img]['kp'][p].pt for p in v[1].values()]).reshape(-1, 1, 2)
M, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
h, w, d = query_bgr.shape
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(pts, M)
center = tuple((dst[0, 0, i] + dst[1, 0, i] + dst[2, 0, i] + dst[3, 0, i]) / 4 for i in (0, 1))
x_min = int(max((dst[0, 0, 0] + dst[1, 0, 0]) / 2, 0))
y_min = int(max((dst[0, 0, 1] + dst[3, 0, 1]) / 2, 0))
x_max = int(min((dst[2, 0, 0] + dst[3, 0, 0]) / 2, img_dict[train_img]['shape'][1]))
y_max = int(min((dst[1, 0, 1] + dst[2, 0, 1]) / 2, img_dict[train_img]['shape'][0]))
query_color = query_bgr.mean(axis=0).mean(axis=0)
train_crop = train_bgr[y_min:y_max,x_min:x_max]
train_color = train_crop.mean(axis=0).mean(axis=0)
color_diff = np.sqrt(np.sum([value ** 2 for value in abs(query_color - train_color)]))
temp = True
if color_diff < COLOR_T :
for r, corners in recognised.items():
r_center = tuple((corners[0, 0, i] + corners[1, 0, i] + corners[2, 0, i] + corners[3, 0, i]) / 4 for i in (0, 1))
if (center[0] > min(corners[0, 0, 0], corners[1, 0, 0]) and center[0] < max(corners[2, 0, 0], corners[3, 0, 0])\
and center[1] > min(corners[0, 0, 1], corners[3, 0, 1]) and center[1] < max(corners[1, 0, 1], corners[2, 0, 1]))\
or (r_center[0] > x_min and r_center[0] < x_max\
and r_center[1] > y_min and r_center[1] < y_max):
temp = False
break
if temp:
recognised[k] = dst
return recognised
###Output
_____no_output_____
###Markdown
The ```print_matches``` function takes all the recognised images and prints their details, i.e. their position, width, and height.
###Code
def print_matches(train_img, query_imgs, recognised, true_imgs, verbose):
print('Scene: ' + train_img + '\n')
for query_img in query_imgs:
total = int(query_img in recognised.keys())
true_total = int(query_img in true_imgs[train_img])
if total != true_total:
print('\033[1m' + 'Product ' + query_img + ' – ' + str(total) + '/' + str(true_total) + ' instances found' + '\033[0m')
elif total > 0 or verbose == True:
print('Product ' + query_img + ' – ' + str(total) + '/' + str(true_total) + ' instances found')
if total == 1:
dst = recognised[query_img]
center = tuple(int((dst[0, 0, i] + dst[1, 0, i] + dst[2, 0, i] + dst[3, 0, i]) / 4) for i in (0, 1))
w = int(((dst[3, 0, 0] - dst[0, 0, 0]) + (dst[2, 0, 0] - dst[1, 0, 0])) /2)
h = int(((dst[1, 0, 1] - dst[0, 0, 1]) + (dst[2, 0, 1] - dst[3, 0, 1])) /2)
print('\t' + 'Position: ' + str(center)\
+ '\t' + 'Width: ' + str(w)\
+ '\t' + 'Height: ' + str(h))
###Output
_____no_output_____
###Markdown
The ```draw_matches``` function draws on the train image the boxes' homographies and the numbers corresponding to the query images.
###Code
def draw_matches(recognised, train_img, color):
train_file = 'scenes/' + train_img + '.png'
if color == True:
train_bgr = cv2.imread(train_file)
train_temp = cv2.cvtColor(train_bgr, cv2.COLOR_BGR2RGB)
train_rgb = np.zeros(train_bgr.shape, train_bgr.dtype)
for y in range(train_temp.shape[0]):
for x in range(train_temp.shape[1]):
for c in range(train_temp.shape[2]):
train_rgb[y, x, c] = np.clip(0.5 * train_temp[y, x, c], 0, 255)
else:
train_gray = cv2.imread(train_file, 0)
train_rgb = cv2.cvtColor(train_gray // 2, cv2.COLOR_GRAY2RGB)
for k, v in recognised.items():
train_rgb = cv2.polylines(train_rgb, [np.int32(v)], True, (0, 255, 0), 3, cv2.LINE_AA)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(train_rgb, k,\
(int((v[3, 0, 0] - v[0, 0, 0]) * 0.25 + v[0, 0, 0]), int((v[1, 0, 1] - v[0, 0, 1]) * 0.67 + v[0, 0, 1])),\
font, 5, (0, 255, 0), 10, cv2.LINE_AA)
plt.imshow(train_rgb),plt.show();
if color == True:
if not os.path.exists('output/step_A/'):
os.mkdir('output/step_A/')
cv2.imwrite('output/step_A/' + train_img + '.png', cv2.cvtColor(train_rgb, cv2.COLOR_RGB2BGR))
###Output
_____no_output_____
###Markdown
The ```step_A``` function takes the lists of query and train images and performs the product recognition.
###Code
def step_A(query_imgs, train_imgs, true_imgs, verbose, color):
sift = cv2.xfeatures2d.SIFT_create()
bf = cv2.BFMatcher()
img_dict = compute_all_keypoints(query_imgs, train_imgs, sift)
for train_img in train_imgs:
kp_train, des_train = img_dict[train_img]['kp'], img_dict[train_img]['des']
global_matches = {}
for query_img in query_imgs:
kp_query, des_query = img_dict[query_img]['kp'], img_dict[query_img]['des']
all_matches = bf.knnMatch(des_query, des_train, k=2)
good_matches = apply_ratio_test(all_matches)
global_matches[query_img] = (len(good_matches), good_matches)
recognised = check_matches(global_matches, train_img, img_dict)
print_matches(train_img, query_imgs, recognised, true_imgs, verbose)
draw_matches(recognised, train_img, color)
print('\n')
###Output
_____no_output_____
###Markdown
Parameters:
###Code
LOWE_COEFF = 0.5
MIN_MATCH_COUNT = 30
COLOR_T = 50
query_imgs = ['0', '1', '11', '19', '24', '25', '26']
train_imgs = ['e1', 'e2', 'e3', 'e4', 'e5']
true_imgs = {
'e1': {'0', '11'},
'e2': {'24', '25', '26'},
'e3': {'0', '1', '11'},
'e4': {'0', '11', '25', '26'},
'e5': {'19', '25'},
}
# verbose=False does not print the true negative instances
# color=True outputs all the scenes in color instead of grayscale and saves them, but the process is quite slow
step_A(query_imgs, train_imgs, true_imgs, verbose=False, color=False)
###Output
_____no_output_____
###Markdown
Step A
###Code
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import cv2
from matplotlib import pyplot as plt
import collections
###Output
_____no_output_____
###Markdown
The ```compute_all_keypoints``` function calculates all keypoints of all query and train images and stores them in a dictionary, in order to easily access them later.
###Code
def compute_all_keypoints(query_imgs, train_imgs, sift):
img_dict = {}
for img in query_imgs:
file = 'models/' + img + '.jpg'
query = cv2.imread(file, 0)
kp, des = sift.detectAndCompute(query, None)
img_dict[img] = {'kp': kp, 'des': des, 'shape': query.shape}
for img in train_imgs:
file = 'scenes/' + img + '.png'
train = cv2.imread(file, 0)
kp, des = sift.detectAndCompute(train, None)
img_dict[img] = {'kp': kp, 'des': des, 'shape': train.shape}
return img_dict
###Output
_____no_output_____
###Markdown
The ```apply_ratio_test``` function takes all the matches found between the query and the train image, it chooses the good ones with the usual ratio test and it stores them in a dictionary using the indexes of the query keypoints as keys and the indexes of the train keypoints as values.
###Code
def apply_ratio_test(all_matches):
# map of matches kp_query_idx -> kp_train_idx
good_matches = {}
for m, n in all_matches:
if m.distance < LOWE_COEFF * n.distance:
good_matches[m.queryIdx] = m.trainIdx
return good_matches
###Output
_____no_output_____
###Markdown
The ```check_matches``` function orders the good matches in decreasing number of keypoints and it runs a series of tests on them, checking the geometric arrangement and the color consistency.
###Code
def check_matches(global_matches, train_img, img_dict):
sorted_global_matches = collections.OrderedDict(sorted(global_matches.items(), key=lambda item: item[1][0], reverse=True))
recognised = {}
train_file = 'scenes/' + train_img + '.png'
train_bgr = cv2.imread(train_file)
for k, v in sorted_global_matches.items():
if v[0] > MIN_MATCH_COUNT:
query_file = 'models/' + k + '.jpg'
query_bgr = cv2.imread(query_file)
src_pts = np.float32([img_dict[k]['kp'][p].pt for p in v[1].keys()]).reshape(-1, 1, 2)
dst_pts = np.float32([img_dict[train_img]['kp'][p].pt for p in v[1].values()]).reshape(-1, 1, 2)
M, _ = cv2.findHomography(src_pts, dst_pts, cv2.RANSAC, 5.0)
h, w, d = query_bgr.shape
pts = np.float32([[0, 0], [0, h - 1], [w - 1, h - 1], [w - 1, 0]]).reshape(-1, 1, 2)
dst = cv2.perspectiveTransform(pts, M)
center = tuple((dst[0, 0, i] + dst[1, 0, i] + dst[2, 0, i] + dst[3, 0, i]) / 4 for i in (0, 1))
x_min = int(max((dst[0, 0, 0] + dst[1, 0, 0]) / 2, 0))
y_min = int(max((dst[0, 0, 1] + dst[3, 0, 1]) / 2, 0))
x_max = int(min((dst[2, 0, 0] + dst[3, 0, 0]) / 2, img_dict[train_img]['shape'][1]))
y_max = int(min((dst[1, 0, 1] + dst[2, 0, 1]) / 2, img_dict[train_img]['shape'][0]))
query_color = query_bgr.mean(axis=0).mean(axis=0)
train_crop = train_bgr[y_min:y_max,x_min:x_max]
train_color = train_crop.mean(axis=0).mean(axis=0)
color_diff = np.sqrt(np.sum([value ** 2 for value in abs(query_color - train_color)]))
temp = True
if color_diff < COLOR_T :
for r, corners in recognised.items():
r_center = tuple((corners[0, 0, i] + corners[1, 0, i] + corners[2, 0, i] + corners[3, 0, i]) / 4 for i in (0, 1))
if (center[0] > min(corners[0, 0, 0], corners[1, 0, 0]) and center[0] < max(corners[2, 0, 0], corners[3, 0, 0])\
and center[1] > min(corners[0, 0, 1], corners[3, 0, 1]) and center[1] < max(corners[1, 0, 1], corners[2, 0, 1]))\
or (r_center[0] > x_min and r_center[0] < x_max\
and r_center[1] > y_min and r_center[1] < y_max):
temp = False
break
if temp:
recognised[k] = dst
return recognised
###Output
_____no_output_____
###Markdown
The ```print_matches``` function takes all the recognised images and prints their details, i.e. their position, width, and height.
###Code
def print_matches(train_img, query_imgs, recognised, true_imgs, verbose):
print('Scene: ' + train_img + '\n')
for query_img in query_imgs:
total = int(query_img in recognised.keys())
true_total = int(query_img in true_imgs[train_img])
if total != true_total:
print('\033[1m' + 'Product ' + query_img + ' – ' + str(total) + '/' + str(true_total) + ' instances found' + '\033[0m')
elif total > 0 or verbose == True:
print('Product ' + query_img + ' – ' + str(total) + '/' + str(true_total) + ' instances found')
if total == 1:
dst = recognised[query_img]
center = tuple(int((dst[0, 0, i] + dst[1, 0, i] + dst[2, 0, i] + dst[3, 0, i]) / 4) for i in (0, 1))
w = int(((dst[3, 0, 0] - dst[0, 0, 0]) + (dst[2, 0, 0] - dst[1, 0, 0])) /2)
h = int(((dst[1, 0, 1] - dst[0, 0, 1]) + (dst[2, 0, 1] - dst[3, 0, 1])) /2)
print('\t' + 'Position: ' + str(center)\
+ '\t' + 'Width: ' + str(w)\
+ '\t' + 'Height: ' + str(h))
###Output
_____no_output_____
###Markdown
The ```draw_matches``` function draws the homography-projected boxes and the numbers of the corresponding query images on the train image.
###Code
def draw_matches(recognised, train_img, color):
train_file = 'scenes/' + train_img + '.png'
if color == True:
train_bgr = cv2.imread(train_file)
train_temp = cv2.cvtColor(train_bgr, cv2.COLOR_BGR2RGB)
train_rgb = np.zeros(train_bgr.shape, train_bgr.dtype)
for y in range(train_temp.shape[0]):
for x in range(train_temp.shape[1]):
for c in range(train_temp.shape[2]):
train_rgb[y, x, c] = np.clip(0.5 * train_temp[y, x, c], 0, 255)
else:
train_gray = cv2.imread(train_file, 0)
train_rgb = cv2.cvtColor(train_gray // 2, cv2.COLOR_GRAY2RGB)
for k, v in recognised.items():
train_rgb = cv2.polylines(train_rgb, [np.int32(v)], True, (0, 255, 0), 3, cv2.LINE_AA)
font = cv2.FONT_HERSHEY_SIMPLEX
cv2.putText(train_rgb, k,\
(int((v[3, 0, 0] - v[0, 0, 0]) * 0.25 + v[0, 0, 0]), int((v[1, 0, 1] - v[0, 0, 1]) * 0.67 + v[0, 0, 1])),\
font, 5, (0, 255, 0), 10, cv2.LINE_AA)
plt.imshow(train_rgb),plt.show();
if color == True:
cv2.imwrite('output/step_A/' + train_img + '.png', cv2.cvtColor(train_rgb, cv2.COLOR_RGB2BGR))
###Output
_____no_output_____
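###Markdown
The per-pixel triple loop above is what makes ```color=True``` slow; a vectorized sketch of the same 0.5 darkening (an alternative added for illustration, not the original implementation) could be:
###Code
def darken_color_image(train_bgr, factor=0.5):
    # Vectorized equivalent of the per-pixel loop in draw_matches:
    # convert to RGB, scale every channel by `factor`, and clip to the valid 8-bit range.
    train_temp = cv2.cvtColor(train_bgr, cv2.COLOR_BGR2RGB)
    return np.clip(factor * train_temp, 0, 255).astype(train_bgr.dtype)
###Output
_____no_output_____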
###Markdown
The ```step_A``` function takes the lists of query and train images and performs the product recognition.
###Code
def step_A(query_imgs, train_imgs, true_imgs, verbose, color):
sift = cv2.xfeatures2d.SIFT_create()
bf = cv2.BFMatcher()
img_dict = compute_all_keypoints(query_imgs, train_imgs, sift)
for train_img in train_imgs:
kp_train, des_train = img_dict[train_img]['kp'], img_dict[train_img]['des']
global_matches = {}
for query_img in query_imgs:
kp_query, des_query = img_dict[query_img]['kp'], img_dict[query_img]['des']
all_matches = bf.knnMatch(des_query, des_train, k=2)
good_matches = apply_ratio_test(all_matches)
global_matches[query_img] = (len(good_matches), good_matches)
recognised = check_matches(global_matches, train_img, img_dict)
print_matches(train_img, query_imgs, recognised, true_imgs, verbose)
draw_matches(recognised, train_img, color)
print('\n')
###Output
_____no_output_____
###Markdown
Parameters:
###Code
LOWE_COEFF = 0.5
MIN_MATCH_COUNT = 30
COLOR_T = 50
query_imgs = ['0', '1', '11', '19', '24', '25', '26']
train_imgs = ['e1', 'e2', 'e3', 'e4', 'e5']
true_imgs = {
'e1': {'0', '11'},
'e2': {'24', '25', '26'},
'e3': {'0', '1', '11'},
'e4': {'0', '11', '25', '26'},
'e5': {'19', '25'},
}
# verbose=False does not print the true negative instances
# color=True outputs all the scenes in color instead of grayscale, but the process is quite slow, therefore it is False by default
step_A(query_imgs, train_imgs, true_imgs, verbose=False, color=False)
###Output
_____no_output_____ |
01 python/lect 6 materials/Разбор задач семинар 2.ipynb | ###Markdown
Review of problems after seminar 2 A. Maximum of two numbers
###Code
a = int(input())
b = int(input())
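# Booleans act as 0/1 here, so exactly one term survives: a when a > b, otherwise b.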
print((a>b)*a + (a<=b)*b)
###Output
_____no_output_____
###Markdown
B. Which number is greater? (Version 1)
###Code
a = int(input())
b = int(input())
print((a>b) + (a<b)*2)
###Output
_____no_output_____
###Markdown
B. Which number is greater? (Version 2)
###Code
a = int(input())
b = int(input())
if a > b:
print(1)
elif a < b:
print(2)
else:
print(0)
###Output
_____no_output_____
###Markdown
C. Sign of a number (Version 1)
###Code
a = int(input())
print(-(a<0) + (a>0))
###Output
_____no_output_____
###Markdown
C. Sign of a number (Version 2)
###Code
a = int(input())
if a > 0:
print(1)
elif a < 0:
print(-1)
else:
print(0)
###Output
_____no_output_____
###Markdown
D. Maximum of three numbers
###Code
a = int(input())
b = int(input())
c = int(input())
print(max([a, b, c]))
###Output
_____no_output_____
###Markdown
E. Coordinate quadrants
###Code
x1 = int(input())
y1 = int(input())
x2 = int(input())
y2 = int(input())
if x1*x2 > 0 and y1*y2 > 0:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
F. Leap year
###Code
a = int(input())
if (a % 4 == 0 and a % 100 != 0) or a % 400 == 0:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
G. King's move
###Code
a = int(input())
b = int(input())
c = int(input())
d = int(input())
if abs(a-c) <= 1 and abs(b-d) <= 1:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
H. Even and odd
###Code
a = int(input())
b = int(input())
c = int(input())
S = (a % 2 == 0) + (b % 2 == 0) + (c % 2 == 0)
if 1 <= S <= 2:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
I. Apartments
###Code
a = int(input())
b = int(input())
if (a - 1) % (b - a + 1) == 0:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
J. Color of chessboard squares
###Code
a = int(input())
b = int(input())
c = int(input())
d = int(input())
if (abs(a-c) + abs(b-d)) % 2 == 0:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
K. Chocolate bar
###Code
n = int(input())
m = int(input())
k = int(input())
if k <= n*m and (k % n == 0 or k % m == 0):
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
L. Checkers
###Code
a = int(input())
b = int(input())
c = int(input())
d = int(input())
if (abs(a-c) + abs(d-b)) % 2 == 0 and abs(c-a) <= d-b:
print('YES')
else:
print('NO')
###Output
_____no_output_____
###Markdown
M. Sort three numbers
###Code
a = int(input())
b = int(input())
c = int(input())
if a > b:
(a, b) = (b, a)
if b > c:
(c, b) = (b, c)
if a > b:
(a, b) = (b, a)
print(a, b, c)
###Output
_____no_output_____
###Markdown
N. How many numbers coincide
###Code
a = int(input())
b = int(input())
c = int(input())
k = 0
if a == b:
k += 1
if b == c:
k += 1
if a == c:
k += 1
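# k counts the equal pairs: 0 (all different), 1 (exactly two equal) or 3 (all equal);
# k + (k == 1) converts this into the number of coinciding values: 0, 2 or 3.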
print(k + (k == 1))
###Output
_____no_output_____
###Markdown
O. Type of triangle
###Code
a = int(input())
b = int(input())
c = int(input())
if a > b:
(a, b) = (b, a)
if b > c:
(c, b) = (b, c)
if a > b:
(a, b) = (b, a)
D = (a**2 + b**2) ** 0.5
if c < D:
print('acute')
elif c == D:
print('rectangular')
elif D < c < a + b:
print('obtuse')
else:
print('impossible')
###Output
_____no_output_____
###Markdown
P. The prisoner of the Château d'If
###Code
a = int(input())
b = int(input())
c = int(input())
d = int(input())
e = int(input())
if a <= d and b <= e or a <= e and b <= d:
print("YES")
elif c <= d and b <= e or c <= e and b <= d:
print("YES")
elif a <= d and c <= e or a <= e and c <= d:
print("YES")
else:
print("NO")
###Output
_____no_output_____
###Markdown
Q. Boxes
###Code
A1 = int(input())
B1 = int(input())
C1 = int(input())
A2 = int(input())
B2 = int(input())
C2 = int(input())
if ((A1 == A2 and B1 == B2 and C1 == C2) or
(A1 == A2 and B1 == C2 and C1 == B2) or
(A1 == C2 and B1 == A2 and C1 == B2) or
(A1 == B2 and B1 == A2 and C1 == C2) or
(A1 == B2 and B1 == C2 and C1 == A2) or
(A1 == C2 and B1 == B2 and C1 == A2)):
print('Boxes are equal')
elif ((A1 <= A2 and B1 <= B2 and C1 <= C2) or
(A1 <= A2 and B1 <= C2 and C1 <= B2) or
(A1 <= C2 and B1 <= A2 and C1 <= B2) or
(A1 <= B2 and B1 <= A2 and C1 <= C2) or
(A1 <= B2 and B1 <= C2 and C1 <= A2) or
(A1 <= C2 and B1 <= B2 and C1 <= A2)):
print('The first box is smaller than the second one')
elif ((A1 >= A2 and B1 >= B2 and C1 >= C2) or
(A1 >= A2 and B1 >= C2 and C1 >= B2) or
(A1 >= C2 and B1 >= A2 and C1 >= B2) or
(A1 >= B2 and B1 >= A2 and C1 >= C2) or
(A1 >= B2 and B1 >= C2 and C1 >= A2) or
(A1 >= C2 and B1 >= B2 and C1 >= A2)):
print('The first box is larger than the second one')
else:
print('Boxes are incomparable')
###Output
_____no_output_____
###Markdown
S. Cows
###Code
n = int(input())
if n >= 11 and n <= 14:
print(n, 'korov')
else:
temp = n % 10
if temp == 0 or (temp >= 5 and temp <= 9):
print(n, 'korov')
if temp == 1:
print(n, 'korova')
if temp >=2 and temp <=4:
print(n, 'korovy')
###Output
_____no_output_____
###Markdown
T. Ice cream
###Code
a = int(input())
if a != 1 and a != 2 and a != 4 and a != 7:
print('YES')
else:
print('NO')
###Output
_____no_output_____ |
QOSF_Task.ipynb | ###Markdown
The `binary_vector` contains the binary strings of the given array, and the CCX gates are added according to this binary data; the encoding of the data into the QRAM is specified in the `qram` function below. Importing Necessary libraries and tools for visualization
###Code
#Using Qiskit for this project
from qiskit import *
#Importing Histogram Plot for visualization of results
from qiskit.visualization import plot_histogram
#Import OR gate for the oracle working in program.
from qiskit.circuit.library import OR
#[0001] [0101] [0111] [1010]
###Output
_____no_output_____
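###Markdown
As a point of reference, the array encoded by the QRAM in this notebook is [1, 5, 7, 10]; a small classical sketch (an added illustration, not part of the quantum circuit) showing how such values map to the 4-bit strings of the binary vector:
###Code
# Classical sketch: derive the 4-bit strings that the QRAM below will encode.
input_array = [1, 5, 7, 10]
binary_vector = [format(value, '04b') for value in input_array]
print(binary_vector)  # ['0001', '0101', '0111', '1010']
###Output
_____no_output_____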
###Markdown
Using [Diffuser function](https://qiskit.org/textbook/ch-algorithms/grover.html3.1-Qiskit-Implementation-) for data in address registers
###Code
def diffuser(nqubits):
qc = QuantumCircuit(nqubits)
# Apply transformation |s> -> |00..0> (H-gates)
for qubit in range(nqubits):
qc.h(qubit)
# Apply transformation |00..0> -> |11..1> (X-gates)
for qubit in range(nqubits):
qc.x(qubit)
# Do multi-controlled-Z gate
qc.h(nqubits-1)
qc.mct(list(range(nqubits-1)), nqubits-1) # multi-controlled-toffoli
qc.h(nqubits-1)
# Apply transformation |11..1> -> |00..0>
for qubit in range(nqubits):
qc.x(qubit)
# Apply transformation |00..0> -> |s>
for qubit in range(nqubits):
qc.h(qubit)
# We will return the diffuser as a gate
U_s = qc.to_gate()
U_s.name = "U$_s$"
return U_s
###Output
_____no_output_____
###Markdown
Using a standard counter function, which increases its value each time a valid solution is found in the array.
###Code
def counter(qc, flip,auxiliary):
for i in range(len(flip)):
qc.ccx(flip[i],auxiliary[0],auxiliary[1])
qc.cx(flip[i],auxiliary[0])
###Output
_____no_output_____
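###Markdown
A classical bit-level model of this counter may help: for each control qubit, the CCX adds the control bit into the high auxiliary bit when the low bit is already set (the carry), and the CX then adds it into the low bit, so on basis states the two auxiliary qubits accumulate the number of set `flip` qubits modulo 4. A sketch under that reading of the gates (an added illustration, not a quantum simulation):
###Code
def classical_counter(flip_bits):
    # Mirror of the CCX/CX sequence in `counter`, acting on classical bits:
    # aux1 ^= flip_i AND aux0 (carry), then aux0 ^= flip_i.
    aux0, aux1 = 0, 0
    for bit in flip_bits:
        aux1 ^= bit & aux0
        aux0 ^= bit
    return aux0, aux1  # low bit, high bit of the count modulo 4

print(classical_counter([0, 1, 0, 1]))  # two set bits -> (0, 1), i.e. binary 10 = 2
###Output
_____no_output_____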
###Markdown
Using a QRAM function tailored to the given input data. It consists of CCX and X gates: the X gates are placed according to the position (address) of the data, and the CCX gates are placed according to the bit string given in the array as input.
###Code
def qram(qc,add,flip):
# Address 0 = 00 -> Data = 1 (0001)
qc.x(add)
qc.ccx(add[0],add[1],flip[3])
qc.x(add)
qc.barrier()
# Address 1 = 01 -> Data = 5 (0101)
qc.x(add[0])
qc.ccx(add[0],add[1],flip[1])
qc.ccx(add[0],add[1],flip[3])
qc.x(add[0])
qc.barrier()
# Address 2 = 10 -> Data = 7 (0111)
qc.x(add[1])
qc.ccx(add[0],add[1],flip[1])
qc.ccx(add[0],add[1],flip[2])
qc.ccx(add[0],add[1],flip[3])
qc.x(add[1])
qc.barrier()
# Address 3 = 11-> Data = 10 (1010)
qc.ccx(add[0],add[1],flip[0])
qc.ccx(add[0],add[1],flip[2])
qc.barrier()
###Output
_____no_output_____
###Markdown
Initialization of the necessary Quantum and Classical registers
###Code
# Address are used to get the position of valid solutions in the given array
add=QuantumRegister(2,name='address')
# Input will be used for entering the data given by QRAM in the bit string form.
flip=QuantumRegister(4,name='input')
# Auxiliary qubits are used to store the oracle result and transfer it to the output qubit
aux=QuantumRegister(2,name='auxiliary')
# Output qubit flips the phase of the valid solutions and returns the rest of the inputs unchanged for further computation
out=QuantumRegister(1,name='output')
# Classical registers are used for address qubits for measurement
cbits=ClassicalRegister(2,name='cbits')
# Finally plugging all things in Quantum circuit
qc=QuantumCircuit(add,flip,aux,out,cbits)
###Output
_____no_output_____
###Markdown
Adding the necessary gates initially
###Code
## Initialization
# Adding X and H gates on the address qubits to put them into superposition
qc.x(add)
qc.h(add)
# Doing the same with the output qubit, which will be used to flip the phase of the valid solutions
qc.x(out)
qc.h(out)
qc.barrier()
###Output
_____no_output_____
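###Markdown
A short note on the output-qubit preparation above (standard Grover-style phase kickback, not specific to this circuit): applying X and then H gives $|0\rangle \rightarrow |1\rangle \rightarrow \tfrac{1}{\sqrt{2}}(|0\rangle - |1\rangle) = |-\rangle$, and flipping a qubit that is in $|-\rangle$ only multiplies the state by $-1$, which is how the oracle marks the valid solutions with a relative phase.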
###Markdown
Oracle which marks 0101 and 1010 as the valid solutions
###Code
# Calling the QRAM function with the necessary arguments
qram(qc,add,flip)
# Using Multi control toffoli gate for checking 1010 as the valid solution
qc.mct([flip[0],flip[2]],aux[0])
qc.barrier()
# Using Multi control toffoli gate for checking 0101 as the valid solution
qc.mct([flip[1],flip[3]],aux[0])
qc.barrier()
# Using Counter for counting the valid solutions
counter(qc,flip,aux)
qc.barrier()
# Using an OR gate to combine both MCT gate outputs and transfer the result to the output qubit
qc.append(OR(2),[6,7,8])
qc.barrier()
# Performing uncomputation to return the rest of the qubits to their original state
qc.mct([flip[1],flip[3]],aux[0])
qc.barrier()
qc.mct([flip[0],flip[2]],aux[0])
qc.barrier()
# Applying the QRAM again for uncomputation. Its controlled-X blocks are self-inverse and commute, so the same circuit undoes the encoding
qram(qc,add,flip)
###Output
_____no_output_____
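###Markdown
The two bit patterns the oracle marks (0101 and 1010) correspond to specific entries of the input array; a purely classical cross-check (an added illustration) of which addresses hold them:
###Code
# Classical cross-check: which addresses of [1, 5, 7, 10] hold the marked patterns 0101 / 1010?
values = [1, 5, 7, 10]
marked_addresses = [i for i, v in enumerate(values) if format(v, '04b') in ('0101', '1010')]
print(marked_addresses)  # [1, 3] -> the values 5 and 10 sit at addresses 01 and 11
###Output
_____no_output_____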
###Markdown
Diffuser and Measurement on Adderess qubits
###Code
## Applying the diffuser on the address bits to get the positions of the valid solutions in the given array
qc.append(diffuser(2),[0,1])
qc.barrier()
## Measuring the address qubits and storing the results in the classical bits
qc.measure(add,cbits)
## Using Draw function for drawing circuit
qc.draw();
###Output
_____no_output_____
###Markdown
Visualization of Results
###Code
# Reversing Quantum Circuit for getting expected results
qc=qc.reverse_bits()
# Transpiling the circuit for more clarity in results
qc = transpile(qc, basis_gates=['cx', 'u'], optimization_level=3)
# Using the QASM simulator for visualization
aer_sim = Aer.get_backend('qasm_simulator')
# Defining Shots
shots = 1024
# Assembling the circuit to run on qasm_simulator
qobj = assemble(qc, aer_sim)
# Extracting Results out of it
results = aer_sim.run(qobj).result()
# Getting counts of shots of results
answer = results.get_counts()
# Using plot_histogram for plotting the shots count and representing it in probabilities
plot_histogram(answer);
###Output
_____no_output_____ |
examples/gallery/demos/bokeh/measles_example.ipynb | ###Markdown
This notebook reproduces a [visualization by the Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15) about the incidence of measles over time, which the brilliant [Brian Granger](https://github.com/ellisonbg) adapted into an [example for the Altair library](http://nbviewer.jupyter.org/github/ellisonbg/altair/blob/master/altair/notebooks/12-Measles.ipynb).Most examples work across multiple plotting backends, this example is also available for:* [Matplotlib measles_example](../matplotlib/measles_example.ipynb)
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
import pandas as pd
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
yearly_data = data.drop('WEEK', axis=1).groupby('YEAR').sum().reset_index()
measles = pd.melt(yearly_data, id_vars=['YEAR'], var_name='State', value_name='Incidence')
heatmap = hv.HeatMap(measles, label='Measles Incidence')
aggregate = hv.Dataset(heatmap).aggregate('YEAR', np.mean, np.std)
vline = hv.VLine(1963)
marker = hv.Text(1964, 800, 'Vaccine introduction', halign='left')
agg = hv.ErrorBars(aggregate) * hv.Curve(aggregate)
###Output
_____no_output_____
###Markdown
Plot
###Code
overlay = (heatmap + agg * vline * marker).cols(1)
overlay.options(
opts.HeatMap(width=900, height=500, tools=['hover'], logz=True,
invert_yaxis=True, labelled=[], toolbar='above', xaxis=None),
opts.VLine(line_color='black'),
opts.Overlay(width=900, height=200, show_title=False, xrotation=90))
###Output
_____no_output_____
###Markdown
This notebook reproduces a [visualization by the Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15) about the incidence of measles over time, which the brilliant [Brian Granger](https://github.com/ellisonbg) adapted into an [example for the Altair library](http://nbviewer.jupyter.org/github/ellisonbg/altair/blob/master/altair/notebooks/12-Measles.ipynb).Most examples work across multiple plotting backends, this example is also available for:* [Matplotlib measles_example](../matplotlib/measles_example.ipynb)
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
import pandas as pd
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
yearly_data = data.drop('WEEK', axis=1).groupby('YEAR').sum().reset_index()
measles = pd.melt(yearly_data, id_vars=['YEAR'], var_name='State', value_name='Incidence')
heatmap = hv.HeatMap(measles, label='Measles Incidence')
aggregate = hv.Dataset(heatmap).aggregate('YEAR', np.mean, np.std)
vline = hv.VLine(1963)
marker = hv.Text(1964, 800, 'Vaccine introduction', halign='left')
agg = hv.ErrorBars(aggregate) * hv.Curve(aggregate)
###Output
_____no_output_____
###Markdown
Plot
###Code
overlay = (heatmap + agg * vline * marker).cols(1)
overlay.opts(
opts.HeatMap(width=900, height=500, tools=['hover'], logz=True,
invert_yaxis=True, labelled=[], toolbar='above',
xaxis=None, colorbar=True, clim=(1, np.nan)),
opts.VLine(line_color='black'),
opts.Overlay(width=900, height=200, show_title=False, xrotation=90))
###Output
_____no_output_____
###Markdown
This notebook reproduces a [visualization by the Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15) about the incidence of measles over time, which the brilliant [Brian Granger](https://github.com/ellisonbg) adapted into an [example for the Altair library](http://nbviewer.jupyter.org/github/ellisonbg/altair/blob/master/altair/notebooks/12-Measles.ipynb).Most examples work across multiple plotting backends, this example is also available for:* [Matplotlib measles_example](../matplotlib/measles_example.ipynb)
###Code
import numpy as np
import holoviews as hv
import pandas as pd
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
yearly_data = data.drop('WEEK', axis=1).groupby('YEAR').sum().reset_index()
measles = pd.melt(yearly_data, id_vars=['YEAR'], var_name='State', value_name='Incidence')
heatmap = hv.HeatMap(measles, label='Measles Incidence')
aggregate = hv.Dataset(heatmap).aggregate('YEAR', np.mean, np.std)
vline = hv.VLine(1963)
marker = hv.Text(1964, 800, 'Vaccine introduction', halign='left')
agg = hv.ErrorBars(aggregate) * hv.Curve(aggregate)
###Output
_____no_output_____
###Markdown
Plot
###Code
options = {'HeatMap': dict(width=900, height=500, tools=['hover'], logz=True, invert_yaxis=True,
labelled=[], toolbar='above', xaxis=None),
'Overlay': dict(width=900, height=200, show_title=False, xrotation=90),
'VLine': dict(line_color='black')}
(heatmap + agg * vline * marker).options(options).cols(1)
###Output
_____no_output_____
###Markdown
This notebook reproduces a [visualization by the Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15) about the incidence of measles over time, which the brilliant [Brian Granger](https://github.com/ellisonbg) adapted into an [example for the Altair library](http://nbviewer.jupyter.org/github/ellisonbg/altair/blob/master/altair/notebooks/12-Measles.ipynb).Most examples work across multiple plotting backends, this example is also available for:* [Matplotlib measles_example](../matplotlib/measles_example.ipynb)
###Code
import numpy as np
import holoviews as hv
import pandas as pd
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
yearly_data = data.drop('WEEK', axis=1).groupby('YEAR').sum().reset_index()
measles = pd.melt(yearly_data, id_vars=['YEAR'], var_name='State', value_name='Incidence')
heatmap = hv.HeatMap(measles, label='Measles Incidence')
aggregate = hv.Dataset(heatmap).aggregate('YEAR', np.mean, np.std)
vline = hv.VLine(1963)
marker = hv.Text(1964, 800, 'Vaccine introduction', halign='left')
agg = hv.ErrorBars(aggregate) * hv.Curve(aggregate)
###Output
_____no_output_____
###Markdown
Plot
###Code
hm_opts = dict(width=900, height=500, tools=['hover'], logz=True, invert_yaxis=True,
xrotation=90, labelled=[], toolbar='above', xaxis=None)
overlay_opts = dict(width=900, height=200, show_title=False)
vline_opts = dict(line_color='black')
opts = {'HeatMap': {'plot': hm_opts}, 'Overlay': {'plot': overlay_opts}, 'VLine': {'style': vline_opts}}
(heatmap + agg * vline * marker).opts(opts).cols(1)
###Output
_____no_output_____
###Markdown
This notebook reproduces a [visualization by the Wall Street Journal](http://graphics.wsj.com/infectious-diseases-and-vaccines/b02g20t20w15) about the incidence of measles over time, which the brilliant [Brian Granger](https://github.com/ellisonbg) adapted into an [example for the Altair library](http://nbviewer.jupyter.org/github/ellisonbg/altair/blob/master/altair/notebooks/12-Measles.ipynb).Most examples work across multiple plotting backends, this example is also available for:* [Matplotlib measles_example](../matplotlib/measles_example.ipynb)
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
import pandas as pd
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Declaring data
###Code
url = 'https://raw.githubusercontent.com/blmoore/blogR/master/data/measles_incidence.csv'
data = pd.read_csv(url, skiprows=2, na_values='-')
yearly_data = data.drop('WEEK', axis=1).groupby('YEAR').sum().reset_index()
measles = pd.melt(yearly_data, id_vars=['YEAR'], var_name='State', value_name='Incidence')
heatmap = hv.HeatMap(measles, label='Measles Incidence')
aggregate = hv.Dataset(heatmap).aggregate('YEAR', np.mean, np.std)
vline = hv.VLine(1963)
marker = hv.Text(1964, 800, 'Vaccine introduction', halign='left')
agg = hv.ErrorBars(aggregate) * hv.Curve(aggregate)
###Output
_____no_output_____
###Markdown
Plot
###Code
overlay = (heatmap + agg * vline * marker).cols(1)
overlay.opts(
opts.HeatMap(width=900, height=500, tools=['hover'], logz=True,
invert_yaxis=True, labelled=[], toolbar='above', xaxis=None),
opts.VLine(line_color='black'),
opts.Overlay(width=900, height=200, show_title=False, xrotation=90))
###Output
_____no_output_____ |
Starter_Files/challenge.ipynb | ###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
# code a function to identify outliers based on standard deviation
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
# code a function to identify outliers based on standard deviation
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
ChallengeAnother approach to identifying fraudulent transactions is to look for outliers in the data. Standard deviation or quartiles are often used to detect outliers. Using this starter notebook, code two Python functions:* One that uses standard deviation to identify anomalies for any cardholder.* Another that uses interquartile range to identify anomalies for any cardholder. Identifying Outliers using Standard Deviation
###Code
# Initial imports
import pandas as pd
import numpy as np
import random
import hvplot.pandas  # needed for the .hvplot accessor used in the plotting cell below
from sqlalchemy import create_engine
# Create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/Unit 7 HW")
# Write function that locates outliers using standard deviation
query = "SELECT * FROM transaction"
transactions = pd.read_sql(query, engine)
# Find anomalous transactions for 3 random card holders
query =''' SELECT CH.id, TR.date, TR.amount, TR.card, tr.id_merchant, MC.name
FROM card_holder CH
JOIN credit_card CC
ON CH.id = CC.cardholder_id
JOIN transaction TR
ON TR.card = CC.card
JOIN merchant ME
ON TR.id_merchant = ME.id_merchant_category
JOIN merchant_category MC
ON TR.id_merchant = MC.id
WHERE CH.id IN ('2', '18', '25')
'''
pd.read_sql(query, engine).head()
ccholder_df = pd.read_sql(query, engine)
ccholder_df.head()
ccholder_df['amount'] = ccholder_df['amount'].astype(str).astype(float)
ccholder2_data = ccholder_df.loc[ccholder_df['id'] == "2"].set_index('date')
ccholder2_data.index = pd.to_datetime(ccholder2_data.index)
ccholder2_data.head()
ccholder_df.hvplot.line(
x="date",
y="amount",
xlabel="Date",
ylabel="Amount",
title="Transactions by Id Holder 2",
rot = 90
)
###Output
_____no_output_____
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# Write a function that locates outliers using interquartile range
# Find anomalous transactions for 3 random card holders
###Output
_____no_output_____
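###Markdown
One possible shape for such a function, left here as a hedged sketch (it assumes a DataFrame like `ccholder_df` above, with an `id` column and a numeric `amount` column, and uses the conventional 1.5×IQR fences, which the starter does not prescribe):
###Code
def iqr_outliers(df, cardholder_id):
    # Sketch of an interquartile-range outlier finder for a single cardholder.
    amounts = df.loc[df['id'] == cardholder_id, 'amount']
    q1, q3 = amounts.quantile(0.25), amounts.quantile(0.75)
    iqr = q3 - q1
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    mask = (df['id'] == cardholder_id) & ((df['amount'] < lower) | (df['amount'] > upper))
    return df.loc[mask]
###Output
_____no_output_____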
###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
import os
POSTGRES_KEY= os.getenv('POSTGRES_KEY')
# create a connection to the database
engine = create_engine(f"postgresql://postgres:{POSTGRES_KEY}@localhost:5432/Credit_Card_Transactions")
query = "SELECT t.id, t.date, t.amount, t.card, t.id_merchant, cc.cardholder_id FROM transaction t JOIN credit_card cc ON (t.card = cc.card)"
transactions_df = pd.read_sql(query, engine)
transactions_df.head()
# code a function to identify outliers based on standard deviation
def outliners_std (client_id):
# outliners_std function receives cardholder id as parameter and returns
# all the transactions_df rows with amounts > (mean + 1std) and amounts < (mean - 1std)
transactions_client_df = transactions_df[transactions_df['cardholder_id'] == int(client_id)]
client_std = transactions_client_df['amount'].std()
client_mean = transactions_client_df['amount'].mean()
outliners_df = transactions_client_df[(transactions_client_df['amount'] > (client_mean+client_std)) + (transactions_client_df['amount'] < (client_mean-client_std))]
return outliners_df
# find anomalous transactions for 3 random card holders
outliners_2 = outliners_std(2)
outliners_25 = outliners_std(25)
outliners_18 = outliners_std(18)
print("========================================")
print("Example of Outliners for Card Holder #2:")
print(outliners_2[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_2['amount'].count()}")
print("========================================")
print("Example of Outliners for Card Holder #25:")
print(outliners_25[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_25['amount'].count()}")
print("========================================")
print("Example of Outliners for Card Holder #18:")
print(outliners_18[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_18['amount'].count()}")
print("========================================")
###Output
========================================
Example of Outliners for Card Holder #2:
id date amount card id_merchant
44 2439 2018-01-06 02:16:41 1.33 4866761290278198714 127
57 3028 2018-01-07 15:10:27 17.29 4866761290278198714 126
141 2655 2018-01-16 06:29:35 17.64 675911140852 136
333 3395 2018-02-03 18:05:39 1.41 4866761290278198714 65
369 2878 2018-02-08 05:12:18 18.32 4866761290278198714 57
Total Outliners : 45
========================================
Example of Outliners for Card Holder #25:
id date amount card id_merchant
296 1415 2018-01-30 18:31:00 1177.0 4319653513507 64
636 2840 2018-03-06 07:18:09 1334.0 4319653513507 87
960 1341 2018-04-08 06:03:50 1063.0 4319653513507 16
1306 1377 2018-05-13 06:31:20 1046.0 4319653513507 48
1510 1790 2018-06-04 03:46:15 1162.0 4319653513507 96
Total Outliners : 9
========================================
Example of Outliners for Card Holder #18:
id date amount card id_merchant
487 3098 2018-02-19 22:48:25 1839.0 344119623920892 95
925 1359 2018-04-03 03:23:37 1077.0 344119623920892 100
1508 3139 2018-06-03 20:02:28 1814.0 344119623920892 123
1956 136 2018-07-18 09:19:08 974.0 344119623920892 19
2363 2103 2018-09-02 11:20:42 458.0 344119623920892 10
Total Outliners : 8
========================================
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
def outliners_iqr (client_id):
# outliners_iqr function receives cardholder id as parameter and returns
# all the transactions_df rows with amounts > (percentile 75) and amounts < (percentile 25)
transactions_client_df = transactions_df[transactions_df['cardholder_id'] == int(client_id)]
#client_std = transactions_client_df['amount'].std()
#client_mean = transactions_client_df['amount'].mean()
client_q75, client_q25 = np.percentile(transactions_client_df['amount'], [75 ,25])
outliners_df = transactions_client_df[(transactions_client_df['amount'] > client_q75) + (transactions_client_df['amount'] < client_q25)]
return outliners_df
# find anomalous transactions for 3 random card holders
outliners_2 = outliners_iqr(2)
outliners_25 = outliners_iqr(25)
outliners_18 = outliners_iqr(18)
print("========================================")
print("Example of Outliners for Card Holder #2:")
print(outliners_2[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_2['amount'].count()}")
print("========================================")
print("Example of Outliners for Card Holder #25:")
print(outliners_25[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_25['amount'].count()}")
print("========================================")
print("Example of Outliners for Card Holder #18:")
print(outliners_18[['id', 'date', 'amount','card','id_merchant']].head())
print(f"Total Outliners : {outliners_18['amount'].count()}")
print("========================================")
###Output
========================================
Example of Outliners for Card Holder #2:
id date amount card id_merchant
44 2439 2018-01-06 02:16:41 1.33 4866761290278198714 127
57 3028 2018-01-07 15:10:27 17.29 4866761290278198714 126
141 2655 2018-01-16 06:29:35 17.64 675911140852 136
333 3395 2018-02-03 18:05:39 1.41 4866761290278198714 65
369 2878 2018-02-08 05:12:18 18.32 4866761290278198714 57
Total Outliners : 50
========================================
Example of Outliners for Card Holder #25:
id date amount card id_merchant
6 2083 2018-01-02 02:06:21 1.46 4319653513507 93
56 2108 2018-01-07 14:57:23 2.93 4319653513507 137
79 754 2018-01-10 00:25:40 1.39 372414832802279 50
120 3023 2018-01-14 05:02:22 17.84 372414832802279 52
138 3333 2018-01-16 02:26:16 1.65 372414832802279 31
Total Outliners : 62
========================================
Example of Outliners for Card Holder #18:
id date amount card id_merchant
4 567 2018-01-01 23:15:10 2.95 4498002758300 64
40 2077 2018-01-05 07:19:27 1.36 344119623920892 30
53 3457 2018-01-07 01:10:54 175.00 344119623920892 12
67 812 2018-01-08 11:15:36 333.00 344119623920892 95
146 665 2018-01-16 19:19:48 2.55 344119623920892 99
Total Outliners : 66
========================================
###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
# code a function to identify outliers based on standard deviation
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
# find anomalous transactions for 3 random card holders
###Output
_____no_output_____
###Markdown
ChallengeAnother approach to identifying fraudulent transactions is to look for outliers in the data. Standard deviation or quartiles are often used to detect outliers. Using this starter notebook, code two Python functions:* One that uses standard deviation to identify anomalies for any cardholder.* Another that uses interquartile range to identify anomalies for any cardholder. Identifying Outliers using Standard Deviation
###Code
# Initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
# Create a connection to the database
engine = create_engine("postgresql://postgres:MJU&nhy6bgt5@localhost:5432/fraud_detection")
# Create Query for Dataset
query = """
SELECT card_holder.id AS "id", transaction.date AS "date", transaction.amount AS "amount"
FROM transaction
JOIN credit_card on credit_card.card = transaction.card
JOIN card_holder on card_holder.id = credit_card.cardholder_id;
"""
# Create a DataFrame from the query result
transaction_df = pd.read_sql(query, engine)
# Show the data of the the new dataframe
transaction_df.head()
# code a function to identify outliers based on standard deviation
def outliers_std(card_id):
# get transaction amounts for card id
transaction_amounts_df = transaction_df.loc[transaction_df['id']==card_id, 'amount']
return pd.DataFrame(transaction_amounts_df[transaction_amounts_df> transaction_amounts_df.mean()+3*transaction_amounts_df.std()])
# Find anomalous transactions for 3 random card holders
rand_card_id = np.random.randint(1,25,3)
for id in rand_card_id:
if len(outliers_std(id)) == 0:
print(f"Card holder {id} has no outlier transactions.")
else:
print(f"Card holder {id} has the following outlier transactions.:\n{outliers_std(id)}.")
###Output
Card holder 24 has the following outlier transactions.:
amount
797 1011.0
1260 1901.0
3405 1301.0
3433 1035.0.
Card holder 13 has no outlier transactions.
Card holder 18 has the following outlier transactions.:
amount
487 1839.0
925 1077.0
1508 1814.0
2425 1176.0
3095 1769.0
3324 1154.0.
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# Write a function that locates outliers using interquartile range
def outliers_iqr(card_id):
# get transaction amounts for card id
transaction_amounts_df = transaction_df.loc[transaction_df['id']==card_id, 'amount']
iqr_threshold = np.quantile(transaction_amounts_df, .75)+(np.quantile(transaction_amounts_df, .75)-np.quantile(transaction_amounts_df, .25))*1.5
# return values above the iqr threshold
return pd.DataFrame(transaction_amounts_df[transaction_amounts_df> iqr_threshold])
# find anomalous transactions for 3 random card holders
#Use the 3 random card as above for comparison
for id in rand_card_id:
if len(outliers_iqr(id)) == 0:
print(f"Card holder {id} has no outlier transactions.")
else:
print(f"Card holder {id} has the following outlier transactions:\n{outliers_iqr(id)}.")
###Output
Card holder 24 has the following outlier transactions:
amount
797 1011.0
1107 525.0
1260 1901.0
1652 258.0
1984 291.0
3064 466.0
3405 1301.0
3433 1035.0.
Card holder 13 has no outlier transactions.
Card holder 18 has the following outlier transactions:
amount
53 175.0
67 333.0
487 1839.0
925 1077.0
1508 1814.0
1763 121.0
1832 117.0
1956 974.0
2363 458.0
2425 1176.0
3095 1769.0
3324 1154.0.
###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
import random
from sqlalchemy import create_engine
# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
query = """
SELECT *
FROM transactions t
JOIN credit_card c ON t.card = c.card
JOIN card_holder a ON a.card_holder_id = c.card_holder_id
"""
# Load data into the DataFrame using the read_sql() method from pandas
trans_df = pd.read_sql(query, engine)
# Show the data of the new DataFrame
trans_df.head()
df = trans_df.loc[:, ~trans_df.columns.duplicated()]
df.head()
df['date'] = df['trans_date'].dt.date
df.head()
df_2 = df.pivot_table(values="amount", index='trans_date', columns='card_holder_id')
df_2.dropna()
df_2.head()
# code a function to identify outliers based on standard deviation
std_s = df_2.std().sort_values(ascending=False)
std_s
# find anomalous transactions for 3 random card holders
anoma_s = std_s[:3]
anoma_s
# create the dataframe
anoma_df = pd.DataFrame(anoma_s, columns=['std'])
anoma_df
# dataframe of the names
names_df = df[['card_holder_id', 'card_holder_name']]
names_df.set_index('card_holder_id', inplace=True)
names_df
names_2_df = names_df.loc[anoma_df.index].drop_duplicates()
names_2_df
###Output
_____no_output_____
###Markdown
Names of the top 3 outliers based on standard deviation
###Code
#
final_df = names_2_df.join(anoma_df)
final_df
###Output
_____no_output_____
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
df['amount'].describe()
df.dtypes
min_amount_s = df.groupby('card_holder_id')['amount'].min()
min_amount_df = pd.DataFrame(min_amount_s)
cutoff_val = min_amount_s.quantile(q=[0.1]).tolist()[0]
cutoff_val
anoma_2_df = min_amount_df[min_amount_df['amount']<cutoff_val]
anoma_2_df
names_anoma_2_df = names_df.loc[anoma_2_df.index].drop_duplicates()
names_anoma_2_df
###Output
_____no_output_____
###Markdown
Names of the top3 outliers based on small payments
###Code
final_2_df = names_anoma_2_df.join(anoma_2_df)
final_2_df
outlier_small_payments_df = final_2_df.reset_index()
outlier_small_payments_df
# find anomalous transactions for 3 random card holders
max_amount_s = df.groupby('card_holder_id')['amount'].max()
max_amount_df = pd.DataFrame(max_amount_s)
cutoff_val = max_amount_s.quantile(q=[0.6]).tolist()[0]
cutoff_val
anoma_3_df = max_amount_df[max_amount_df['amount']>cutoff_val]
anoma_3_df
anoma_3_df = anoma_3_df.nlargest(3, 'amount')
anoma_3_df
names_anoma_3_df = names_df.loc[anoma_3_df.index].drop_duplicates()
names_anoma_3_df
###Output
_____no_output_____
###Markdown
Names of the top3 outliers based on large payments
###Code
final_3_df = names_anoma_3_df.join(anoma_3_df)
final_3_df
outlier_larege_payments_df=final_3_df.reset_index()
outlier_larege_payments_df
###Output
_____no_output_____
###Markdown
The Outlier based on samll and large payments
###Code
pd.merge(outlier_larege_payments_df, outlier_small_payments_df, on='card_holder_id')
###Output
_____no_output_____
###Markdown
Challenge Identifying Outliers using Standard Deviation
###Code
# initial imports
import pandas as pd
import numpy as np
from numpy import percentile  # used by outlier_based_on_iqr below
import random
from sqlalchemy import create_engine
# create a connection to the database
engine = create_engine("postgresql://postgres:postgres@localhost:5432/fraud_detection")
def outlier_based_on_stdev(df,card_holder_ids):
"""This function identifies outliers from the dataframe for the ids in the list based on standard deviation method
Args:
df (DataFrame): dataframe in which the outliers need to be identified.
card_holder_ids (list): a list of card_holder_ids.
Returns:
Nothing
"""
print(f'WE ONLY CONSIDER A VALUE TO BE A MAJOR OUTLIER IF IT IS 3 STANDARD DEVIATIONS FROM THE MEAN'
'\n--------------------------------------------------------------------------------------------\n')
#sort the dataframe just incase its not already sorted
sorted_df = df.sort_values(['card_holder_id','amount'])
#If there are n card_holders_ids sent as a list
for id in card_holder_ids:
amount_col = sorted_df['amount'][sorted_df['card_holder_id'] == id]
print(f'Card Holder Id: {id}')
# calculate mean and std deviation
mean = amount_col.mean()
stdev = amount_col.std()
# identify outliers as major outliers if its 3 std dev away
cut_off = stdev * 3
lower, upper = mean - cut_off, mean + cut_off
# identify outliers
outliers = [x for x in amount_col if x < lower or x > upper]
if len(outliers) == 0:
print(f'OUTLIERS: NONE\n'
'----------------------------------------------------------------')
else:
print(f'OUTLIERS:\n{outliers} \n'
'----------------------------------------------------------------')
def random_id_selection(list_of_card_holders, num_of_card_holders):
"""This function picks a random list of ids from given list.
Args:
list_of_card_holders (list): List from the sample has to be chosen.
num_of_card_holders (int): Number of ids to be choosen from the list
Returns:
transactions_df(Dataframe): Read the dataframe from database once
"""
card_holder_ids = random.sample(list_of_card_holders, num_of_card_holders)
return card_holder_ids
def read_data_from_db():
"""This function reads from db only once, we don't want too many I/Os
Args:
None
Returns:
transactions_df(Dataframe): Read the dataframe from database once
"""
#get the entire data for the
query = "SELECT * FROM transactions_bunched_by_card_holder"
# Read the SQL query into a DataFrame
transactions_df = pd.read_sql(query, engine)
return transactions_df
# get the transaction and store it
transactions_df = read_data_from_db()
#generate random number based on a list card holder ids
list_of_card_holders = transactions_df['card_holder_id'].drop_duplicates().tolist()
# find anomalous transactions for 3 random card holders
# we need three random card holders
num_of_card_holders = 3
# call the random id selector function
card_holder_ids = random_id_selection(list_of_card_holders, num_of_card_holders)
# df only with the selected card_holder_ids
card_holder_df = transactions_df[transactions_df['card_holder_id'].isin(card_holder_ids)]
print(f'RANDOMLY GENERATED card_holder_ids: {card_holder_ids}'
'\n-----------------------------------------------\n')
card_holder_df.head()
outlier_based_on_stdev(card_holder_df,card_holder_ids)
###Output
RANDOMLY GENERATED card_holder_ids: [23, 6, 21]
-----------------------------------------------
WE ONLY CONSIDER A VALUE TO BE A MAJOR OUTLIER IF IT IS 3 STANDARD DEVIATIONS FROM THE MEAN
--------------------------------------------------------------------------------------------
Card Holder Id: 23
OUTLIERS: NONE
----------------------------------------------------------------
Card Holder Id: 6
OUTLIERS:
[1379.0, 1398.0, 1855.9999999999998, 2001.0000000000002, 2108.0]
----------------------------------------------------------------
Card Holder Id: 21
OUTLIERS: NONE
----------------------------------------------------------------
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# code a function to identify outliers based on interquartile range
def outlier_based_on_iqr(df,card_holder_ids):
"""This function identifies outliers from the dataframe for the ids in the list based on IQR method
Args:
df (DataFrame): dataframe in which the outliers need to be identified.
card_holder_ids (list): a list of card_holder_ids.
Returns:
Nothing
"""
print(f"A LIST OF BOTH MILD OUTLIERS (1.5 * IQR) AND MAJOR OUTLIERS (3 * IQR)"
'\n---------------------------------------------------------------------\n')
#sort the dataframe just incase its not already sorted
sorted_df = df.sort_values(['card_holder_id','amount'])
#If there are n card_holders_ids sent as a list
for id in card_holder_ids:
amount_col = sorted_df['amount'][sorted_df['card_holder_id'] == id]
print(f'Card Holder Id: {id}')
# calculate interquartile range
q25, q75 = percentile(amount_col, 25), percentile(amount_col, 75)
iqr = q75 - q25
# print('Percentiles: 25th=%.3f, 75th=%.3f, IQR=%.3f \n' % (q25, q75, iqr))
# calculate the outlier cutoff
cut_off = iqr * 1.5
lower, upper = q25 - cut_off, q75 + cut_off
print('Inner fence lower boundry=%.3f, upper boundry=%.3f' % (lower, upper))
major_outlier_cut_off = iqr * 3
maj_lower, maj_upper = q25 - major_outlier_cut_off, q75 + major_outlier_cut_off
print('Outter fence lower boundry=%.3f, upper boundry=%.3f' % (maj_lower, maj_upper))
# identify outliers
outliers = [x for x in amount_col if x < lower or x > upper]
if len(outliers) == 0:
print(f'MILD & MAJOR OUTLIERS:\nNONE\n'
'----------------------------------------------------------------')
else:
print(f'\nMILD Outliers:\n{outliers} \n')
maj_outliers = [x for x in amount_col if x < maj_lower or x > maj_upper]
urlsIwant = [x for x in outliers if x in maj_outliers]
if len (urlsIwant) == 0:
print(f'MAJOR OUTLIERS:\nNONE\n')
else:
print(f"MAJOR OUTLIERS:\n{urlsIwant}")
print('----------------------------------------------------------------\n')
# find anomalous transactions for 3 random card holders
# we need three random card holders
num_of_card_holders = 3
# call the random id selector function
card_holder_ids = random_id_selection(list_of_card_holders, num_of_card_holders)
# df only with the selected card_holder_ids
card_holder_df = transactions_df[transactions_df['card_holder_id'].isin(card_holder_ids)]
print(f'RANDOMLY GENERATED card_holder_ids: {card_holder_ids}'
'\n-----------------------------------------------\n')
card_holder_df.head()
outlier_based_on_iqr(card_holder_df,card_holder_ids)
# TEST
# call the random id selector function
card_holder_ids = [25]
# df only with the selected card_holder_ids
card_holder_df = transactions_df[transactions_df['card_holder_id'].isin(card_holder_ids)]
print(f'RANDOMLY GENERATED card_holder_ids: {card_holder_ids}'
'\n-----------------------------------------------\n')
card_holder_df.head()
outlier_based_on_iqr(card_holder_df,card_holder_ids)
###Output
RANDOMLY GENERATED card_holder_ids: [25]
-----------------------------------------------
A LIST OF BOTH MILD OUTLIERS (1.5 * IQR) AND MAJOR OUTLIERS (3 * IQR)
---------------------------------------------------------------------
Card Holder Id: 25
Inner fence lower boundry=-14.151, upper boundry=31.579
Outter fence lower boundry=-31.300, upper boundry=48.727
MILD Outliers:
[100.0, 137.0, 269.0, 749.0, 1001.0, 1046.0, 1063.0, 1074.0, 1162.0, 1177.0, 1334.0, 1813.0]
MAJOR OUTLIERS:
[100.0, 137.0, 269.0, 749.0, 1001.0, 1046.0, 1063.0, 1074.0, 1162.0, 1177.0, 1334.0, 1813.0]
----------------------------------------------------------------
###Markdown
ChallengeAnother approach to identifying fraudulent transactions is to look for outliers in the data. Standard deviation or quartiles are often used to detect outliers. Using this starter notebook, code two Python functions:* One that uses standard deviation to identify anomalies for any cardholder.* Another that uses interquartile range to identify anomalies for any cardholder. Identifying Outliers using Standard Deviation
###Code
# Initial imports
import pandas as pd
import numpy as np
import random
import os
from sqlalchemy import create_engine
username = "Randolph"
password = "s0grycfkrbz6bjlz"
host = "pg-335df34c-rpatronage-18ac.aivencloud.com"
port = 19786
database = "Randolph"
connection_str = f"postgresql://{username}:{password}@{host}:{port}/{database}"
print(connection_str)
# Create a connection to the database
engine = create_engine("postgresql://Randolph:[email protected]:19786/Randolph")
query = "select a.id, c.date, c.amount \
from card_holder a \
inner join credit_card b \
on a.id = b.cardholder_id \
inner join transaction c \
on b.card = c.card"
df = pd.read_sql(query, engine)
df.head()
# Write function that locates outliers using standard deviation
def card_transaction(input_id):
return df.loc[df['id']==input_id, 'amount']
def outliers(input_id):
df1 =card_transaction(input_id)
return pd.DataFrame(df1[df1> df1.mean()+3*df1.std()])
# Find anomalous transactions for 3 random card holders
rand_card_holders = np.random.randint(1,25,3)
for id in rand_card_holders:
if len(outliers(id)) == 0:
print(f"Card holder {id} has no outlier transactions.")
else:
print(f"Card holder {id} has outlier transactions as below:\n{outliers(id)}.")
###Output
Card holder 9 has outlier transactions as below:
amount
613 1534.0
1578 1795.0
3389 1724.0.
Card holder 10 has no outlier transactions.
Card holder 19 has no outlier transactions.
###Markdown
Identifying Outliers Using Interquartile Range
###Code
# Write a function that locates outliers using interquartile range
def outliers_iqr(input_id):
df1 =card_transaction(input_id)
IQR_threshold = np.quantile(df1, .75)+(np.quantile(df1, .75)-np.quantile(df1, .25))*1.5
return pd.DataFrame(df1[df1> IQR_threshold])
# Find anomalous transactions for 3 random card holders
for id in rand_card_holders:
if len(outliers_iqr(id)) == 0:
print(f"Card holder {id} has no outlier transactions.")
else:
print(f"Card holder {id} has outlier transactions as below:\n{outliers_iqr(id)}.")
###Output
Card holder 9 has outlier transactions as below:
amount
613 1534.0
852 1009.0
1001 325.0
1466 245.0
1578 1795.0
1632 691.0
1909 267.0
2575 1095.0
2703 1179.0
3251 57.0
3389 1724.0.
Card holder 10 has no outlier transactions.
Card holder 19 has no outlier transactions.
|
Library_design/CTP-10_Aire/generate_codebook/Generate_codebook_CTP10-Aire.ipynb | ###Markdown
0.1 load required packages
###Code
%run "..\..\..\Startup_py3.py"
sys.path.append(r"..\..\..\..\..\Documents")
import ImageAnalysis3 as ia
%matplotlib notebook
from ImageAnalysis3 import *
print(os.getpid())
# library design specific tools
from ImageAnalysis3.library_tools import LibraryDesigner as ld
from ImageAnalysis3.library_tools import LibraryTools as lt
# biopython imports
from Bio import SeqIO
from Bio.Seq import Seq
from Bio.SeqRecord import SeqRecord
from Bio.Blast.Applications import NcbiblastnCommandline
from Bio.Blast import NCBIXML
## Some folders
# human genome
reference_folder = r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\Genomes\mouse\GRCm38_ensembl'
genome_folder = os.path.join(reference_folder, 'Genome')
# Library directories
pool_folder = r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire'
resolution = 0
flanking = 10000
# folder for sub-pool
library_folder = os.path.join(pool_folder, f'Genes_TSS_DNA')
if not os.path.exists(library_folder):
print(f"create library folder: {library_folder}")
os.makedirs(library_folder)
# folder for fasta sequences
sequence_folder = os.path.join(library_folder, 'sequences')
if not os.path.exists(sequence_folder):
print(f"create sequence folder: {sequence_folder}")
os.makedirs(sequence_folder)
# folder to save result probes
report_folder = os.path.join(library_folder, 'reports')
if not os.path.exists(report_folder):
print(f"create report folder: {report_folder}")
os.makedirs(report_folder)
print(f"-- library_folder: {library_folder}")
print(f"-- sequence_folder: {sequence_folder}")
print(f"-- report_folder: {report_folder}")
###Output
-- library_folder: \\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Genes_TSS_DNA
-- sequence_folder: \\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Genes_TSS_DNA\sequences
-- report_folder: \\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Genes_TSS_DNA\reports
###Markdown
1.1 load gene list
###Code
gene_list_folder = os.path.join(pool_folder, 'Gene_list')
gene_list_filename = os.path.join(gene_list_folder, 'uniqued_clustered_genes_for_yuan_2021-04-22.txt')
import pandas as pd
gene_df = pd.read_csv(gene_list_filename, delimiter = "\t", header=None)
gene_df.columns = ['Cluster', 'Gene']
gene_df
np.unique(gene_df['Gene'])
# load encoding
# summarize total readout usage
encoding_folder = os.path.join(pool_folder, f'Genes_intronic_RNA')
gene_2_readout_dict = pickle.load(open(os.path.join(encoding_folder, 'gene_2_readout.pkl'), 'rb'))
gene_2_readout_dict
# load used readouts
readout_usage_file = os.path.join(library_folder, 'readout_usage.pkl')
readout_dict = pickle.load(open(readout_usage_file, 'rb'))
readout_dict
# generate the codebook
codebook = pd.DataFrame(columns=['name','id']+[_r.id for _r in readout_dict['c']])
#codebook.add(['name', 'id'], axis=1)
codebook['name'] = list(gene_2_readout_dict.keys())
codebook.loc[codebook['name']=='Ccl21a']
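# Build the binary codebook: for each gene, put a 1 in every readout column ("c<i>") listed for it,
# and take the numeric id from its "u<...>" entry (illustration with hypothetical values: bits like
# ['u7', 'c0', 'c12'] would give id = 7 and 1s in columns c0 and c12).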
for _gname, _bits in gene_2_readout_dict.items():
binary_code = []
for _i in range(50):
if f"c{_i}" in _bits:
binary_code.append(1)
else:
binary_code.append(0)
codebook.loc[codebook['name']==_gname, codebook.columns[2:]] = binary_code
codebook.loc[codebook['name']==_gname,'id'] = int(_bits[0].split('u')[1])
chr_2_region_num = pickle.load(open(os.path.join(r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Genes_intronic_RNA',
'chr_2_region_num.pkl'), 'rb'))
chr_2_region_num
chr_2_gene_names = pickle.load(open(os.path.join(r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Genes_intronic_RNA',
'chr_2_gene_names.pkl'), 'rb'))
chr_2_gene_names
# generate gene_to_chr_id
gene_2_chr = {}
gene_2_chr_order = {}
for _chr, _genes in chr_2_gene_names.items():
for _i, _gene in enumerate(_genes):
gene_2_chr[_gene] = _chr
gene_2_chr_order[_gene] = _i
codebook['chr'] = [gene_2_chr[_g] for _g in codebook['name']]
codebook['chr_order'] = [gene_2_chr_order[_g] for _g in codebook['name']]
codebook
codebook.to_csv(r'\\10.245.74.212\Chromatin_NAS_2\Chromatin_Libraries\CTP-10_Aire\Summary_tables\CTP10-Aire_codebook.csv',
index=None)
###Output
_____no_output_____ |
Rafay notes/Samsung Course/Chapter 7/Lecture/Lecture 3/.ipynb_checkpoints/ex_0609-checkpoint.ipynb | ###Markdown
Coding Exercise 0609
###Code
import nltk
from numpy.random import randint, seed
from sklearn.feature_extraction.text import CountVectorizer
###Output
_____no_output_____
###Markdown
1. n-Gram based autofill:
###Code
# Text data for training.
my_text = """Machine learning is the scientific study of algorithms and statistical models that computer systems use to effectively perform a specific task without using explicit instructions, relying on patterns and inference instead. It is seen as a subset of artificial intelligence. Machine learning algorithms build a mathematical model of sample data, known as "training data", in order to make predictions or decisions without being explicitly programmed to perform the task.[1][2]:2 Machine learning algorithms are used in the applications of email filtering, detection of network intruders, and computer vision, where it is infeasible to develop an algorithm of specific instructions for performing the task. Machine learning is closely related to computational statistics, which focuses on making predictions using computers. The study of mathematical optimization delivers methods, theory and application domains to the field of machine learning. Data mining is a field of study within machine learning, and focuses on exploratory data analysis through unsupervised learning In its application across business problems, machine learning is also referred to as predictive analytics."""
my_text = [my_text.lower()] # Convert to lowercase and make a list. => Required by the CountVectorizer().
###Output
_____no_output_____
###Markdown
1.1. n-Gram trial run:
###Code
n = 3 # Can be changed to a number equal or larger than 2.
n_min = n
n_max = n
n_gram_type = 'word' # n-Gram with words.
vectorizer = CountVectorizer(ngram_range=(n_min,n_max), analyzer = n_gram_type)
n_grams = vectorizer.fit(my_text).get_feature_names() # Get the n-Grams as a list.
n_gram_cts = vectorizer.transform(my_text).toarray() # The output is an array of array.
n_gram_cts = list(n_gram_cts[0]) # Convert into a simple list.
list(zip(n_grams,n_gram_cts)) # Make a list of tuples and show.
###Output
_____no_output_____
###Markdown
1.2. Train by making a dictionary based on n-Grams:
###Code
n = 3 # Can be changed to a number equal or larger than 2.
n_min = n
n_max = n
n_gram_type = 'word'
vectorizer = CountVectorizer(ngram_range=(n_min,n_max), analyzer = n_gram_type)
n_grams = vectorizer.fit(my_text).get_feature_names() # A list of n-Grams.
my_dict = {}
for a_gram in n_grams:
words = nltk.word_tokenize(a_gram)
a_nm1_gram = ' '.join(words[0:n-1]) # (n-1)-Gram.
next_word = words[-1] # Word after the a_nm1_gram.
if a_nm1_gram not in my_dict.keys():
my_dict[a_nm1_gram] = [next_word] # a_nm1_gram is a new key. So, initialize the dictionary entry.
else:
my_dict[a_nm1_gram] += [next_word] # an_nm1_gram is already in the dictionary.
# View the dictionary.
my_dict
###Output
_____no_output_____
###Markdown
1.3. Predict the next word:
###Code
# Helper function that picks the following word.
def predict_next(a_nm1_gram):
value_list_size = len(my_dict[a_nm1_gram]) # length of the value corresponding to the key = a_nm1_gram.
i_pick = randint(0, value_list_size) # A random number from the range 0 ~ value_list_size.
return(my_dict[a_nm1_gram][i_pick]) # Return the randomly chosen next word.
# Test.
input_str = 'order to' # Has to be a VALID (n-1)-Gram!
predict_next(input_str)
# Another test.
# Repeat for 10 times and see that the next word is chosen randomly with a probability proportional to the occurrence.
input_str = 'machine learning' # Has to be a VALID (n-1)-Gram!
for i in range(10):
print(predict_next(input_str))
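# A quick look at the empirical next-word distribution behind predict_next() (a small illustration):
# counting the candidate continuations for this (n-1)-gram shows why frequent next words are picked more often.
from collections import Counter
Counter(my_dict[input_str])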
###Output
_____no_output_____
###Markdown
1.4. Predict a sequence:
###Code
# Initialize the random seed.
seed(123)
# A seed string has to be input by the user.
my_seed_str = 'machine learning' # Has to be a VALID (n-1)-Gram!
# my_seed_str = 'in order' # Has to be a VALID (n-1)-Gram!
a_nm1_gram = my_seed_str
output_string = my_seed_str # Initialize the output string.
while a_nm1_gram in my_dict:
output_string += " " + predict_next(a_nm1_gram)
words = nltk.word_tokenize(output_string)
a_nm1_gram = ' '.join(words[-n+1:]) # Update a_nm1_gram.
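# Note (a sketch): if the chain ever cycles, the while-loop above can keep generating for a very long time;
# a simple guard is to also stop once the output reaches some maximum number of words, e.g.
#     while a_nm1_gram in my_dict and len(nltk.word_tokenize(output_string)) < 200: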
# Output the predicted sequence.
output_string
###Output
_____no_output_____ |
Datasets/Data Cleaning(18-19,17-18).ipynb | ###Markdown
Years : 2018-19, 2017-18
###Code
import pandas as pd
import numpy as np
import random
from random import randint
from datetime import timedelta
from datetime import datetime
from random import uniform
###Output
_____no_output_____
###Markdown
Leads(2018-19)
###Code
Ld = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Leads(2018-19).csv")
def product_id(row):
if row["Product_Name"] == "Proxima-C":
return "PRO-23-0493"
elif row["Product_Name"] == "Kits Dragon":
return "KTD-32-3231"
elif row["Product_Name"] == "Phoenix":
return "PHO-52-1928"
elif row["Product_Name"] == "Sirius":
return "SIR-10-0293"
elif row["Product_Name"] == "Aurora":
return "AUR-67-4989"
elif row["Product_Name"] == "Apollo":
return "APO-09-8723"
elif row["Product_Name"] == "Agyrap-S":
return "AGY-90-2818"
else:
return "ANH-02-0987"
Ld = Ld.assign(Product_ID = Ld.apply(product_id, axis = 1))
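# An equivalent, more idiomatic alternative (a sketch): map product names to IDs with a dict instead of
# an if/elif chain; any name not in the dict falls back to the same default code as above.
product_id_map = {"Proxima-C": "PRO-23-0493", "Kits Dragon": "KTD-32-3231", "Phoenix": "PHO-52-1928",
                  "Sirius": "SIR-10-0293", "Aurora": "AUR-67-4989", "Apollo": "APO-09-8723",
                  "Agyrap-S": "AGY-90-2818"}
# Ld["Product_ID"] = Ld["Product_Name"].map(product_id_map).fillna("ANH-02-0987")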
def random_dates(start, end, n=1000):
start_u = start.value//10**9
end_u = end.value//10**9
return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
start = pd.to_datetime('31-10-2018')
end = pd.to_datetime('01-11-2019')
Ld['Lead_Created_on'] = random_dates(start, end)
Ld.head(10)
Ld.to_excel("Leads_final(2018-19).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Leads(2017-18)
###Code
Ld2 = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Leads(2017-18).csv")
def product_id(row):
if row["Product_Name"] == "Proxima-C":
return "PRO-23-0493"
elif row["Product_Name"] == "Kits Dragon":
return "KTD-32-3231"
elif row["Product_Name"] == "Phoenix":
return "PHO-52-1928"
elif row["Product_Name"] == "Sirius":
return "SIR-10-0293"
elif row["Product_Name"] == "Aurora":
return "AUR-67-4989"
elif row["Product_Name"] == "Apollo":
return "APO-09-8723"
elif row["Product_Name"] == "Agyrap-S":
return "AGY-90-2818"
else:
return "ANH-02-0987"
Ld2 = Ld2.assign(Product_ID = Ld2.apply(product_id, axis = 1))
def random_dates(start, end, n=1000):
start_u = start.value//10**9
end_u = end.value//10**9
return pd.to_datetime(np.random.randint(start_u, end_u, n), unit='s')
start = pd.to_datetime('31-10-2017')
end = pd.to_datetime('01-11-2018')
Ld2['Lead_Created_on'] = random_dates(start, end)
Ld2.head(10)
Ld2.to_excel("Leads_final(2017-18).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Opportunities(2018-19)
###Code
Op = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Opportunities(2018-19).csv")
Op.head(10)
Op['Lead_ID'] = Ld['Lead_ID']
Op['Product_Name'] = Ld['Product_Name']
Op['Product_ID'] = Ld['Product_ID']
Op['Email_address'] = Ld['Email_address']
Op['Opportunity_Created_on'] = Ld['Lead_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,20), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Op['Opportunity_Close_Date'] = Op['Opportunity_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,10), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Op.drop(['Created_on', 'Close_Date'], axis = 1, inplace = True)
Op.head(10)
Op['Revenue'] = randint(1,100)
Proxima_C = 70
Kits_Dragon = 30
Phoenix = 42
Sirius = 55
Aurora = 40
Apollo = 50
Agyrap_S = 65
Anhee_C = 60
Op['Revenue'] = Op['Revenue'].map(lambda a: (Proxima_C*randint(0,5))+(Kits_Dragon*randint(0,5))+(Phoenix*randint(0,5))+(Sirius*randint(0,5))+(Aurora*randint(0,2))+(Apollo*randint(0,2))+(Agyrap_S*randint(0,5))+(Anhee_C*randint(0,5)))
Op.head(10)
Op.to_excel("Opportunities_final(2018-19).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Opportunities(2017-18)
###Code
Op2 = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Opportunities(2017-18).csv")
Op2.head(10)
Op2['Lead_ID'] = Ld2['Lead_ID']
Op2['Product_Name'] = Ld2['Product_Name']
Op2['Product_ID'] = Ld2['Product_ID']
Op2['Email_address'] = Ld2['Email_address']
Op2['Opportunity_Created_on'] = Ld2['Lead_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,20), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Op2['Opportunity_Close_Date'] = Op2['Opportunity_Created_on'].map(lambda a: a + pd.DateOffset(days=randint(5,10), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Op2.head(10)
Op2['Revenue'] = randint(1,100)
Proxima_C = 60
Kits_Dragon = 30
Phoenix = 35
Sirius = 55
Aurora = 40
Apollo = 40
Agyrap_S = 60
Anhee_C = 55
Op2['Revenue'] = Op2['Revenue'].map(lambda a: (Proxima_C*randint(0,5))+(Kits_Dragon*randint(0,5))+(Phoenix*randint(0,5))+(Sirius*randint(0,5))+(Aurora*randint(0,2))+(Apollo*randint(0,2))+(Agyrap_S*randint(0,5))+(Anhee_C*randint(0,5)))
Op2.head(10)
Op2.to_excel("Opportunities_final(2017-18).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Accounts(2018-19)
###Code
Ac = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Accounts(2018-19).csv', delimiter = ',')
# Acc['City'].unique()
Ac['Lead_ID'] = Op['Lead_ID']
Ac['Opportunity_ID'] = Op['Opportunity_ID']
Ac['Full_Name'] = Ld['Full_Name']
Ac['Email_address'] = Op['Email_address']
Ac.head(10)
Ac = Ac[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Full_Name', 'City', 'Phone', 'Email_address']]
Ac.head(10)
Ac.to_excel("Accounts_final(2018-19).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Accounts(2017-18)
###Code
Ac2 = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Accounts(2017-18).csv", delimiter = ',')
Ac2['Lead_ID'] = Op2['Lead_ID']
Ac2['Opportunity_ID'] = Op2['Opportunity_ID']
Ac2['Full_Name'] = Ld2['Full_Name']
Ac2['Email_address'] = Op2['Email_address']
Ac2.head(10)
Ac2 = Ac2[['Lead_ID', 'Opportunity_ID', 'Account_ID', 'Full_Name', 'City', 'Phone', 'Email_address']]
Ac2.head(10)
Ac2.to_excel("Accounts_final(2017-18).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Quotes(2018-19)
###Code
Qu = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Quotes(2018-19).csv')
Qu['Lead_ID'] = Ac['Lead_ID']
Qu['Opportunity_ID'] = Ac['Opportunity_ID']
Qu['Account_ID'] = Ac['Account_ID']
Qu['Product_Name'] = Op['Product_Name']
Qu['Product_ID'] = Op['Product_ID']
Qu['Product_Category'] = Ld['Product_Category']
Qu['Revenue'] = Op['Revenue']
Qu['Email_address'] = Ac['Email_address']
Qu['Quote_Created_On'] = Op['Opportunity_Close_Date'].map(lambda a: a + pd.DateOffset(days=randint(10,25), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Qu.head(10)
###Output
_____no_output_____
###Markdown
Quotes(2017-18)
###Code
Qu2 = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Quotes(2017-18).csv')
Qu2['Lead_ID'] = Ac2['Lead_ID']
Qu2['Opportunity_ID'] = Ac2['Opportunity_ID']
Qu2['Account_ID'] = Ac2['Account_ID']
Qu2['Product_Name'] = Op2['Product_Name']
Qu2['Product_ID'] = Op2['Product_ID']
Qu2['Product_Category'] = Ld2['Product_Category']
Qu2['Revenue'] = Op2['Revenue']
Qu2['Email_address'] = Ac2['Email_address']
Qu2['Quote_Created_On'] = Op2['Opportunity_Close_Date'].map(lambda a: a + pd.DateOffset(days=randint(10,25), hours=randint(0,12), minutes=randint(0,60), seconds=randint(0,60)))
Qu2.head(10)
Qu.to_excel("Quotes(2018-19).xlsx", index = False)
Qu2.to_excel("Quotes(2017-18).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Orders(2018-19)
###Code
Ord = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Orders(2018-19).csv')
Ord['Lead_ID'] = Qu['Lead_ID']
Ord['Opportunity_ID'] = Qu['Opportunity_ID']
Ord['Account_ID'] = Qu['Account_ID']
Ord['Quote_ID'] = Qu['Quote_ID']
Ord['Product_Name'] = Qu['Product_Name']
Ord['Product_Category'] = Qu['Product_Category']
Ord['Revenue'] = Qu['Revenue']
Ord['Email_address'] = Qu['Email_address']
Ord.head(10)
###Output
_____no_output_____
###Markdown
Orders(2017-18)
###Code
Ord2 = pd.read_csv('C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Orders(2017-18).csv')
Ord2['Lead_ID'] = Qu2['Lead_ID']
Ord2['Opportunity_ID'] = Qu2['Opportunity_ID']
Ord2['Account_ID'] = Qu2['Account_ID']
Ord2['Quote_ID'] = Qu2['Quote_ID']
Ord2['Product_Name'] = Qu2['Product_Name']
Ord2['Product_Category'] = Qu2['Product_Category']
Ord2['Revenue'] = Qu2['Revenue']
Ord2['Email_address'] = Qu2['Email_address']
Ord2.head(10)
Ord.to_excel("Orders(2018-19).xlsx", index = False)
Ord2.to_excel("Orders(2017-18).xlsx", index = False)
###Output
_____no_output_____
###Markdown
Invoices(2018-19)
###Code
Iv = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2018-19/Invoice(2018-19).csv")
Iv['Lead_ID'] = Ord['Lead_ID']
Iv['Opportunity_ID'] = Ord['Opportunity_ID']
Iv['Account_ID'] = Ord['Account_ID']
Iv['Quote_ID'] = Ord['Quote_ID']
Iv['Order_ID'] = Ord['Order_ID']
Iv['Product_Name'] = Ord['Product_Name']
Iv['Product_ID'] = Op['Product_ID']
Iv['Revenue'] = Ord['Revenue']
Iv['Email_address'] = Ord['Email_address']
Iv['Phone'] = Ac['Phone']
Iv.head(10)
###Output
_____no_output_____
###Markdown
Invoices(2017-18)
###Code
Iv2 = pd.read_csv("C:/Users/Jaswinder Singh/Downloads/BIBA/Datasets/Mockaroo/2017-18/Invoice(2017-18).csv")
Iv2['Lead_ID'] = Ord2['Lead_ID']
Iv2['Opportunity_ID'] = Ord2['Opportunity_ID']
Iv2['Account_ID'] = Ord2['Account_ID']
Iv2['Quote_ID'] = Ord2['Quote_ID']
Iv2['Order_ID'] = Ord2['Order_ID']
Iv2['Product_Name'] = Ord2['Product_Name']
Iv2['Product_ID'] = Op2['Product_ID']
Iv2['Revenue'] = Ord2['Revenue']
Iv2['Email_address'] = Ord2['Email_address']
Iv2['Phone'] = Ac2['Phone']
Iv2.head(10)
Iv.to_excel("Invoices(2018-19).xlsx", index = False)
Iv2.to_excel("Invoices(2017-18).xlsx", index = False)
###Output
_____no_output_____ |
02-fundamentals/mathbackground.ipynb | ###Markdown
[](https://colab.research.google.com/github/machinelearningmindset/probability-for-machine-learning/blob/master/02-fundamentals/mathbackground.ipynb) Mathmatical Backgrounds in Probability TheoryThis code is associated with the [mathematical backgrounds](https://www.machinelearningmindset.com/course/mathematical-background/) in probability theory.Check the original blog post for details: https://www.machinelearningmindset.com/course/mathematical-background/.The direct colab link to this notebook is [here](https://colab.research.google.com/github/machinelearningmindset/probability-for-machine-learning/blob/master/02-fundamentals/mathbackground.ipynb). n factorial!
###Code
## First Approach ##
# Input n for calculating n!
n = int(input("n: "))
factorial = 1
for i in range (1,n+1):
factorial = factorial * i
print("n! is:",factorial)
## Second Approach ##
import math
# Input number
n = int(input("n: "))
# Create factorial object
fac = math.factorial
# Print output
print("n! is:",fac(n))
###Output
_____no_output_____
###Markdown
Combination
###Code
## Calculate combination(n,r) ##
import math
# Input numbers
n = int(input("n: "))
r = int(input("r: "))
assert (r<=n),"We should always have r<=n!"
# Create factorial object
fac = math.factorial
# Print output
comb = fac(n) // (fac(r)*fac(n-r))  # integer division keeps the result exact even for large n
print("comb({},{}) is: {}".format(n,r,comb))
###Output
_____no_output_____ |
flax_bert_demo_neurips_2020.ipynb | ###Markdown
Fine-tuning BERT in Flax on GLUEThis notebook fine-tunes a BERT model on one of the [GLUE tasks](https://gluebenchmark.com/). It has the following features:* Uses the [HuggingFace](https://github.com/huggingface/) datasets and tokenizers libraries.* Loads the pre-trained BERT weights from HuggingFace.* Model and training code is written in [Flax](http://www.github.com/google/flax).* Can be configured to fine-tune on COLA, MRPC, SST2, STSB, QNLI, and RTE.Run-times on MRPC:* Cloud TPU v3-8: 40s
###Code
# General imports.
import os
import jax
import jax.numpy as jnp
import flax
# Huggingface datasets and transformers libraries.
import datasets
from transformers import BertTokenizerFast
# flax_bert-specific imports.
from flax import optim
import data
import modeling as flax_models
import training
from demo_lib import get_config, get_validation_splits, get_prefix, import_pretrained_params, create_model, create_optimizer, get_num_train_steps, get_learning_rate_fn
os.environ['TOKENIZERS_PARALLELISM'] = 'true'
###Output
_____no_output_____
###Markdown
Set your Training Settings
###Code
train_settings = {
'train_batch_size': 32,
'eval_batch_size': 8,
'learning_rate': 5e-5,
'num_train_epochs': 3,
'dataset_path': 'glue',
'dataset_name': 'mrpc' # ['cola', 'mrpc', 'sst2', 'stsb', 'qnli', 'rte']
}
###Output
_____no_output_____
###Markdown
Load dataset, tokenizers, and model.
###Code
# Load the GLUE task.
dataset = datasets.load_dataset('glue', train_settings['dataset_name'])
# Get pre-trained config and update it with the train configuration.
config = get_config('bert-base-uncased', dataset)
config.update(train_settings)
# Load HuggingFace tokenizer and data pipeline.
tokenizer = BertTokenizerFast.from_pretrained(config.tokenizer)
data_pipeline = data.ClassificationDataPipeline(dataset, tokenizer)
# Create Flax model and optimizer.
pretrained_params = import_pretrained_params(config)
model = create_model(config, pretrained_params)
optimizer = create_optimizer(config, model, pretrained_params)
# Setup tokenizer, train step function and train iterator.
tokenizer.model_max_length = config.max_seq_length
num_train_steps = get_num_train_steps(config, data_pipeline)
learning_rate_fn = get_learning_rate_fn(config, num_train_steps)
train_history = training.TrainStateHistory(learning_rate_fn)
train_state = train_history.initial_state()
train_step_fn = training.create_train_step(clip_grad_norm=1.0)
train_iter = data_pipeline.get_inputs(
split='train', batch_size=config.train_batch_size, training=True)
###Output
Reusing dataset glue (/home/marcvanzee/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4)
Loading cached processed dataset at /home/marcvanzee/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4/cache-6ebbe5cc20e1150b.arrow
Loading cached processed dataset at /home/marcvanzee/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4/cache-ddcb7009256661ae.arrow
Loading cached processed dataset at /home/marcvanzee/.cache/huggingface/datasets/glue/mrpc/1.0.0/7c99657241149a24692c402a5c3f34d4c9f1df5ac2e4c3759fadea38f6cb29c4/cache-51f953271952eb23.arrow
###Markdown
Run Training
###Code
print(f'\nStarting training on {config.dataset_name} for {num_train_steps} '
f'steps ({config.num_train_epochs:.0f} epochs)...\n')
for step, batch in zip(range(0, num_train_steps), train_iter):
optimizer, train_state = train_step_fn(optimizer, batch, train_state)
if step % 10 == 0:
print(f'step {step}/{num_train_steps}')
print('\nTraining finished!')
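# Optional (a sketch): persist the fine-tuned parameters, assuming `optimizer` follows the flax.optim API
# so that `optimizer.target` holds the model parameters; the output filename is arbitrary.
from flax import serialization
with open('finetuned_bert_params.msgpack', 'wb') as f:
    f.write(serialization.to_bytes(optimizer.target))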
###Output
Starting training on mrpc for 343 steps (3 epochs)...
Compiling train (takes about 20s)
Step 0 grad_norm = 43.7472038269043
loss = 0.6523172855377197
step 0/343
step 10/343
step 20/343
step 30/343
step 40/343
step 50/343
step 60/343
step 70/343
step 80/343
step 90/343
step 100/343
step 110/343
step 120/343
step 130/343
step 140/343
step 150/343
step 160/343
step 170/343
step 180/343
step 190/343
Step 200 grad_norm = 155.39651489257812
loss = 1.4091753959655762
seconds_per_step = 0.051291774958372116
step 200/343
step 210/343
step 220/343
step 230/343
step 240/343
step 250/343
step 260/343
step 270/343
step 280/343
step 290/343
step 300/343
step 310/343
step 320/343
step 330/343
step 340/343
Training finished! Running eval...
###Markdown
Run EvaluationThe target eval_f1 for MRPC is 88.9 (variance of about 1.0).
###Code
eval_step = training.create_eval_fn()
for split in get_validation_splits(config.dataset_name):
eval_iter = data_pipeline.get_inputs(
      split=split, batch_size=config.eval_batch_size, training=False)
eval_stats = eval_step(optimizer, eval_iter)
eval_metric = datasets.load_metric(config.dataset_path, config.dataset_name)
eval_metric.add_batch(
predictions=eval_stats['prediction'],
references=eval_stats['label'])
eval_metrics = eval_metric.compute()
for name, val in sorted(eval_metrics.items()):
print(f'{get_prefix(split)}_{name} = {val:.06f}', flush=True)
###Output
eval_accuracy = 0.877451
eval_f1 = 0.914384
|
Image segmentation using VGG16 (1).ipynb | ###Markdown
Retrieve Images
###Code
from urllib import urlopen
from urllib import urlretrieve
from bs4 import BeautifulSoup
link = "http://www.cs.toronto.edu/~vmnih/data/mass_roads/train/map/"
L = len("10378780_15.tif")
root = '/datasets/home/92/992/tal089/Labels/'
html = link + "index.html"
page = urlopen(html).read()
source = BeautifulSoup(page, 'lxml')
for text in source.find_all(['script', 'style']):
text.decompose()
T = source.get_text(strip=True)
List = []
for i in range(len(T)//L):
S = ""
for j in range(L):
S += T[i*L+j]
List.append(S)
#print(List)
t = 0
for name in List:
t += 1
print(str(t)+'/'+str(len(List)))
urlretrieve(link+name, root+name)
###Output
1/1108
2/1108
3/1108
4/1108
5/1108
...
923/1108
###Markdown
Initiate the environment
###Code
import numpy as np
import cv2
import matplotlib.pyplot as plt
from random import randint
from glob import glob
import os
import os.path
import scipy
import math
%matplotlib inline
import tensorflow as tf
print('Default GPU Device: {}'.format(tf.test.gpu_device_name()))
print('TensorFlow Version: {}'.format(tf.__version__))
#useful variables
num_classes = 3 # none and 12 options, 0-12
image_shape = (160, 576)
# image_shape = (576, 160)
weights_initializer_stddev = 0.01
weights_regularized_l2 = 1e-3
###Output
_____no_output_____
###Markdown
Download VGG16
###Code
from urllib import urlretrieve
import zipfile
if not os.path.exists("vgg16.zip"):
urlretrieve(
'https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/vgg.zip',
"./vgg16.zip")
print("Downloaded VGG16 model weights")
else:
print("Already exists, skipping download")
import zipfile
with zipfile.ZipFile('/datasets/home/92/992/tal089/vgg16.zip', 'r') as zip_ref: # type your own path here
zip_ref.extractall('/datasets/home/92/992/tal089/vgg16')
###Output
_____no_output_____
###Markdown
Helper Function
###Code
# Unfortunately we did not figure out a way to successfully generate the truth maps we wanted for training
# with these functions, which corrupted our training results; we wish we had more knowledge of semantic image segmentation.
# Convert a grayscale label image into a per-pixel one-hot truth map.
def imageToTruth(img):
    return [list(map(pixelToTruth, row)) for row in img]
# Convert a per-pixel one-hot truth map back into a displayable image.
def truthToImage(truth):
    return [list(map(truthToPixel, row)) for row in truth]
# Pick the most likely class for a pixel and encode it in the red channel.
def truthToPixel(value):
    tmp = value.tolist()
    return (tmp.index(max(tmp)), 0, 0)
# This function generates the per-pixel truth labels from grayscale intensity (three classes).
def pixelToTruth(value):
    if value < 85:
        return (0, 1, 0)
    elif 85 <= value < 170:
        return (0, 0, 1)
    else:
        return (1, 0, 0)
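# Tiny sanity check (a sketch): three representative grayscale intensities should map to the three one-hot classes.
imageToTruth(np.array([[0, 100, 200]]))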
###Output
_____no_output_____
###Markdown
Get the image files
###Code
# This function grab data from local folder and feed them into truth map generating function. Then generate batches for training
def get_training_data(batch_size):
image_paths = glob(os.path.join("./Images", "*.tiff"))
label_paths = glob(os.path.join("./Labels", "*.tif"))
    for batch in range(0, len(image_paths), batch_size):
        images = []
        maps = []
        for index, image_file in enumerate(image_paths[batch:batch + batch_size]):
            # Pair each satellite image with the label file at the same overall position.
            map_file = os.path.join(label_paths[batch + index])
            image = scipy.misc.imread(image_file)
            image = scipy.misc.imresize(image, image_shape)
            map_image = scipy.misc.imread(map_file)
            map_image = scipy.misc.imresize(map_image, image_shape)
            map_image = imageToTruth(map_image)
            images.append(image)
            maps.append(map_image)
        yield np.array(images), np.array(maps)
model = {}
def get_placeholders(model):
model['placeholders'] = {}
#Those are Placeholders
model['placeholders']['label'] = tf.placeholder(tf.int32, (None, image_shape[0], image_shape[1], model['settings']['num_classes']), name='label')
model['placeholders']['learning_rate'] = tf.placeholder(tf.float32, name='learning_rate')
###Output
_____no_output_____
###Markdown
Load VGG
###Code
def load_vgg(model):
#Grab layers from pretrained VGG
tf.saved_model.loader.load(sess, ["vgg16"], '/datasets/home/92/992/tal089/vgg16/vgg' #type your own path here
)
model['graph'] = tf.get_default_graph()
#define key layers to take infromation from VGG
model['input_layer'] = model['graph'].get_tensor_by_name("image_input:0")
model['keep_prob'] = model['graph'].get_tensor_by_name("keep_prob:0") #Dropout settings
#grab more layers from vgg to backfeed into our fcn
model['layer_3'] = model['graph'].get_tensor_by_name("layer3_out:0")
model['layer_4'] = model['graph'].get_tensor_by_name("layer4_out:0")
model['layer_7'] = model['graph'].get_tensor_by_name("layer7_out:0")
def vgg_fcn(model):
# Skip connections for later
model['skip_conv_3'] = tf.layers.conv2d(model['layer_3'], model['settings']['num_classes'], 1, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
name='skip_conv_3')
model['skip_conv_4'] = tf.layers.conv2d(model['layer_4'], model['settings']['num_classes'], 1, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
name='skip_conv_4')
#Layer 7 isn't skipped, it's passed right to transpose
model['fully_connected_convs'] = tf.layers.conv2d(model['layer_7'], model['settings']['num_classes'], 1, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
name='fully_connected_convs')
#From layer 7 we need to transpose up
model['transpose_1'] = tf.layers.conv2d_transpose(model['fully_connected_convs'], model['settings']['num_classes'], 4, 2, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
name='transpose_1')
# Add the skip layer from layer 4
model['skip_1'] = tf.add(model['transpose_1'], model['skip_conv_4'], name='skip_1')
#Tranpose up from resultant layer
model['transpose_2'] = tf.layers.conv2d_transpose(model['skip_1'], model['settings']['num_classes'], 4, 2, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
name='transpose_2')
#Create skip layer from layer 3
model['skip_2'] = tf.add(model['skip_conv_3'], model['transpose_2'], name='skip_2')
#Final output layer
model['output_layer'] = tf.layers.conv2d_transpose(model['skip_2'], model['settings']['num_classes'], 16, 8, padding='same',
kernel_initializer = tf.random_normal_initializer(stddev=weights_initializer_stddev),
kernel_regularizer= tf.contrib.layers.l2_regularizer(weights_regularized_l2),
activation=tf.sigmoid, name='output_layer')
return model['output_layer']
def get_logits(model):
model['logits'] = {}
#optimzer
model['logits']['logits'] = tf.reshape(model['output_layer'], (-1, model['settings']['num_classes']))
model['logits']['correct_label'] = tf.reshape(model['placeholders']['label'], (-1, model['settings']['num_classes']))
def get_loss(model):
model['loss'] = {}
model['loss']['softmax'] = tf.nn.softmax_cross_entropy_with_logits(logits=model['logits']['logits'], labels=model['logits']['correct_label'])
model['loss']['cross_entropy_loss'] = tf.reduce_mean(model['loss']['softmax'])
model['loss']['optimizer'] = tf.train.AdamOptimizer(learning_rate=model['placeholders']['learning_rate'])
model['loss']['train_op'] = model['loss']['optimizer'].minimize(model['loss']['cross_entropy_loss'])
def train(sess, model, epochs=1, batch_size=10, keep_probability=0.5, learning_rate_alpha=0.001):
sess.run(tf.global_variables_initializer())
print("launching training")
for epoch in range(epochs):
print("Launching Epoch {}".format(epoch))
loss_log = []
batch_count = 0
for image, truth in get_training_data(batch_size):
print("")
batch_count += 1
loss = sess.run(
[model['loss']['train_op'], model['loss']['cross_entropy_loss']],
feed_dict = {
model['input_layer']: image,
model['placeholders']['label']: truth,
model['keep_prob']: keep_probability,
model['placeholders']['learning_rate']: learning_rate_alpha
},
)
print("loss is ",loss)
loss_log.append('{:3f}'.format(loss[1]))
if(batch_count % 10 == 0):
print("Batch {} - loss of {}".format(batch_count, loss))
print("Training for epoch finished - ", loss_log)
chkpt_path = "check_point/fcn_model".format(epoch)
saver.save(sess, chkpt_path)
print("Model saved as {}".format(chkpt_path))
print()
print("Training finished")
saver = None
tf.reset_default_graph()
with tf.Session() as sess:
model = {}
model['settings'] = { "num_classes": 3 }
get_placeholders(model)
load_vgg(model)
vgg_fcn(model)
get_logits(model)
get_loss(model)
saver = tf.train.Saver()
train(sess, model, 10, 10, 0.5, 0.002)
def execute_on_image(sess, model):
image_file = "10078660_15-Copy1.tiff"
truth_file = "10078660_15-Copy1.tif"
image = scipy.misc.imread(image_file)
image = scipy.misc.imresize(image, image_shape)
plt.figure(figsize=(20,15))
plt.imshow(image)
truth = scipy.misc.imread(truth_file)
truth = scipy.misc.imresize(truth, image_shape)
plt.figure(figsize=(20,15))
    plt.imshow(truth)
truth = imageToTruth(truth)
output = sess.run(model["output_layer"], feed_dict={
model["input_layer"]: [image],
model['placeholders']['label'] : [truth],
"keep_prob:0": 1.0,
model["placeholders"]["learning_rate"] : 0.01
})
output = output[0]
plt.figure(figsize=(20,15))
outputImage = truthToImage(output)
plt.imshow(outputImage)
return output
###Output
_____no_output_____
###Markdown
Execute on image
###Code
tf.reset_default_graph()
with tf.Session() as sess:
model = {}
model['settings'] = { "num_classes": 3 }
get_placeholders(model)
load_vgg(model)
vgg_fcn(model)
get_logits(model)
get_loss(model)
saver = tf.train.Saver()
saver.restore(sess, "check_point/fcn_model")
results = execute_on_image(sess, model)
###Output
INFO:tensorflow:Restoring parameters from /datasets/home/92/992/tal089/vgg16/vgg/variables/variables
INFO:tensorflow:Restoring parameters from check_point/fcn_model
|
chapter5/05_03_missing.ipynb | ###Markdown
05_03: Filling Missing Values
###Code
import math
import collections
import urllib
import numpy as np
import pandas as pd
import matplotlib.pyplot as pp
%matplotlib inline
import getweather
pasadena = getweather.getyear('PASADENA', ['TMIN', 'TMAX'], 2001)
np.mean(pasadena['TMIN']), np.min(pasadena['TMIN']), np.max(pasadena['TMIN'])
pasadena['TMIN']
np.nan + 1
np.isnan(pasadena['TMIN'])
False + True + True
np.sum(np.isnan(pasadena['TMIN']))
np.nanmin(pasadena['TMIN']), np.nanmax(pasadena['TMAX'])
pasadena['TMIN'][np.isnan(pasadena['TMIN'])] = np.nanmean(pasadena['TMIN'])
pasadena['TMAX'][np.isnan(pasadena['TMAX'])] = np.nanmean(pasadena['TMAX'])
pasadena['TMIN']
pp.plot(pasadena['TMIN'])
xdata = np.array([0,1,4,5,7,8], 'd')
ydata = np.array([10,5,2,7,7.5,10], 'd')
pp.plot(xdata, ydata, '--o')
# interpolate x/y data with missing values to continuous x values
xnew = np.linspace(0, 8, 9)
ynew = np.interp(xnew, xdata, ydata)
pp.plot(xdata, ydata, '--o', ms=10)
pp.plot(xnew, ynew, 's')
# interpolate x/y data with missing values to denser, continuous x values
xnew = np.linspace(0, 8, 30)
ynew = np.interp(xnew, xdata, ydata)
pp.plot(xdata, ydata, '--o', ms=10)
pp.plot(xnew, ynew, 's')
pasadena = getweather.getyear('PASADENA', ['TMIN', 'TMAX'], 2001)
# build a Boolean mask of "good" (non-NaN) TMIN values;
# interpolate "good" days/TMIN to full range of days
good = ~np.isnan(pasadena['TMIN'])
x = np.arange(0, 365)
np.interp(x, x[good], pasadena['TMIN'][good])
# fill NaNs in any array by interpolation
def fillnans(array):
good = ~np.isnan(array)
x = np.arange(len(array))
return np.interp(x, x[good], array[good])
pp.plot(fillnans(pasadena['TMIN']))
pp.plot(fillnans(pasadena['TMAX']))
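# pandas offers a similar linear gap-fill for interior gaps in a single call (a sketch using Series.interpolate):
pp.plot(pd.Series(pasadena['TMIN']).interpolate().values)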
###Output
_____no_output_____ |
notebooks/ERDDAP-Access.ipynb | ###Markdown
Demo Accessing ERDDAP Grid & Table DatasetsThis is a demo of accessing `griddap` and `tabledap`datasets from the SalishSeaCast ERDDAP server(https://salishsea.eos.ubc.ca/erddap/).
###Code
%matplotlib inline
import cmocean
import pandas
import xarray
###Output
_____no_output_____
###Markdown
About ERDDAPThe SalishSeaCast ERDDAP server (https://salishsea.eos.ubc.ca/erddap/)is one of a growing herd of ERDDAP servers around the world;see the **Easier Access to Scientific Data** section on its front pagefor information and links about the ERDDAP project,and https://coastwatch.pfeg.noaa.gov/erddap/download/setup.htmlorganizationsfor a list of many of the ERDDAP instances.ERDDAP provides [2 types of datasets](https://coastwatch.pfeg.noaa.gov/erddap/download/setupDatasetsXml.htmldatasetTypes):* `griddap` for gridded data such as model results fields* `tabledap` for tabular data field-deploy instruments like CTDs, ADCPs, buoys, etc.It also has a special-case of the tablular dataset type that allows it to provide avirtual file system interface to facilitate downloading files.Datasets are accessible via a web app interface that provides some basic data visualization capabilities,and file downloads in various formats of hyperslabs from the datasets.They are also accessible via a RESTful web service interface.Key features of ERDDAP is that:* It provides rich metadata for the datasets. That metadata is generally richer and more complete than the metadata contained in the underlying files (depending on how committed to good metadata the particular ERDDAP's administrators are :-)* It abstracts away the storage details of the underlying files. Users don't have to know about how to slice or concatenate a dataset's files in order to get a 43 day long time series of variable values at a particular depth in a sub-region of a model domain.The SalishSeaCast ERDDAP serves results from Susan Allen's UBC-MOAD group Salish Sea domain NEMO model,and Johannes Gemmrich's uVic/IOS Strait of Georgia domain WaveWatch III® model as `griddap` datasets(https://salishsea.eos.ubc.ca/erddap/griddap/).It serves results from Michael Dunphy's IOS Vancouver Harbour and Fraser River FVCOM domain model,and the Strait of Georgia domain WaveWatch III® model via the virtual file system interface(https://salishsea.eos.ubc.ca/erddap/files/).A small selection of aggregated datasets from Ocean Networks Canada real-time instruments,and an IOS horizontal ADCP are served at `tabledap` datasets(https://salishsea.eos.ubc.ca/erddap/tabledap/).Links on the `griddap` and `tabledap` pages connect to the `data` request,`graph` visualization,and `M`etadata views for each of the datasets(and the `files` views for the virtual file system interface). `griddap` DatasetsWhile you can use the OPeNDAP protocol to request hyperslabs from datasetsin various file formats the Python `netCDF4` and `xarray` package provide higher levelinterfaces with lazy loading.The code below uses `xarray`.To make the `%%time` measurements a little more consistentI've used a context manager to open the dataset in each cell.Of course that only limits client-side carry-over from one cell to anotherand doesn't get rid caching that the ERDDAP server does.`xarray` and `netCDF4` both access `griddap` datasets using a URL of the form: server-address/griddap/dataset-id An easy way to get the correct URL is to take the URL of the dataset's[data access page](https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12.html)and drop the `.html` from the end.Opening the dataset is quite fast because it just loads metadata:
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
print(ds)
###Output
<xarray.Dataset>
Dimensions: (depth: 40, gridX: 398, gridY: 898, time: 43968)
Coordinates:
* time (time) datetime64[ns] 2015-01-01T00:30:00 ... 2020-01-12T23:30:00
* depth (depth) float32 0.5000003 1.5000031 ... 414.5341 441.4661
* gridY (gridY) int16 0 1 2 3 4 5 6 7 ... 891 892 893 894 895 896 897
* gridX (gridX) int16 0 1 2 3 4 5 6 7 ... 391 392 393 394 395 396 397
Data variables:
salinity (time, depth, gridY, gridX) float32 ...
temperature (time, depth, gridY, gridX) float32 ...
Attributes:
acknowledgement: MEOPAR, ONC, Compute Canada
cdm_data_type: Grid
comment: If you use this dataset in your research,\nple...
Conventions: CF-1.6, COARDS, ACDD-1.3
creator_email: [email protected]
creator_name: Salish Sea MEOPAR Project Contributors
creator_url: https://salishsea-meopar-docs.readthedocs.io/
description: ocean T grid variables
drawLandMask: over
history: 2020-01-13T00:51:57Z (local files)\n2020-01-13...
infoUrl: https://salishsea-meopar-docs.readthedocs.io/e...
institution: UBC EOAS
institution_fullname: Earth, Ocean & Atmospheric Sciences, Universit...
keywords: conservative temperature, deptht, Earth Scienc...
license: The Salish Sea MEOPAR NEMO model results are c...
project: Salish Sea MEOPAR NEMO Model
sourceUrl: (local files)
standard_name_vocabulary: CF Standard Name Table v29
summary: Green, Salish Sea, 3d Tracer Fields, Hourly, v...
testOutOfDate: now-16hours
time_coverage_end: 2020-01-12T23:30:00Z
time_coverage_start: 2015-01-01T00:30:00Z
timeStamp: 2020-Jan-12 17:24:39 GMT
title: Green, Salish Sea, 3d Tracer Fields, Hourly, v...
uuid: ad937476-6a0f-4beb-b9e4-2982c1fa9aa8
CPU times: user 37.3 ms, sys: 12.4 ms, total: 49.8 ms
Wall time: 139 ms
###Markdown
Specifying a hyperslab from the dataset does not trigger data access:
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
salinity = ds.salinity.isel(depth=0).sel(time="2020-01-09 12:30")
###Output
CPU times: user 41.7 ms, sys: 4.94 ms, total: 46.6 ms
Wall time: 130 ms
###Markdown
Data is accessed only when it is operated on:
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity.isel(depth=0).sel(time="2020-01-09 12:30")
.plot(cmap=cmocean.cm.haline))
###Output
CPU times: user 115 ms, sys: 11.6 ms, total: 127 ms
Wall time: 628 ms
###Markdown
Or when it is loaded explicitly:
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
salinity = ds.salinity.isel(depth=0).sel(time="2020-01-09 12:30").load()
###Output
CPU times: user 69.2 ms, sys: 9.16 ms, total: 78.3 ms
Wall time: 588 ms
###Markdown
Depth LayersFocussing here on depth layer (i.e. horizontal slice) hyperslabs,and switching to use the `.sel(..., method="nearest")` methodto avoid having to be too explicit about depth and time.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity.sel(depth=0, time="2020-01-09 11:00", method="nearest")
.plot(cmap=cmocean.cm.haline))
###Output
CPU times: user 210 ms, sys: 35.3 ms, total: 245 ms
Wall time: 759 ms
###Markdown
There are at least 2 really ugly issues with the renderings above:1. The dataset coordinates on the horizontal plane are the x/y grid indices, not longitude/latitude. What is more, the dataset *does not even include longitude/latitude* as variables.2. The coastline is an odd, blocky mixture of `NaN`s (white) and zero salinity (dark blue). The second is the easier to deal with.* The white `NaN`s are due to the land processor elimination that is used in the SalishSeaCast NEMO runs to avoid spending computational effort in MPI sub-domains that contain no water.* The dark blue zero salinity "fringe" is the land in the MPI sub-domains that contain both land and water.Both of those artifacts can be eliminated by masking the salinity field withthe appropriate field from the model mesh mask.The mesh mask is stored in 2 datasets on the SalishSeaCast ERDDAP server:* https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn2DMeshMaskV17-02* https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02It is split like that because one of ERDDAP's rules is that all of the variablesin a dataset must have the same shape.So, the 2D mesh mask dataset contains NEMO mesh mask variables with coordinates`(t, y, x)`,and the 3D dataset contains variables with coordinates `(t, z, y, x)`.The inclusion of a rather pointless `t` coordinate in the mesh mask variablesis a NEMO idiosyncrasy.For salinity, we use the `tmask` variable from the 3D mesh mask dataset.Its values are 0/1 integer flags indicating whether the T-grid point is land/water.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02") as mesh:
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
depth=0
salinity = ds.salinity.sel(depth=depth, time="2020-01-09 11:00", method="nearest")
mask = mesh.tmask.isel(time=0).sel(gridZ=depth, method="nearest")
salinity.where(mask).plot(cmap=cmocean.cm.haline)
###Output
CPU times: user 306 ms, sys: 26 ms, total: 332 ms
Wall time: 944 ms
###Markdown
The solution to the longitude/latitude issue is to use the SalishSeaCast NEMOmodel grid geo-reference dataset https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02.It provides 2D `longitude` and `latitude` variables with `(gridY, gridX)` coordinates.Since those are the same coordinates as our salinity depth layer has,we can construct a new `xarray.DataArray` from the masked salinity valuesand the longitude/latitude coordinate that we want to plot them on.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSnBathymetryV17-02") as geo:
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02") as mesh:
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
depth=0
salinity = ds.salinity.sel(depth=depth, time="2020-01-09 11:00", method="nearest")
mask = mesh.tmask.isel(time=0).sel(gridZ=depth, method="nearest")
geo_salinity = xarray.DataArray(
salinity.where(mask),
coords={
"longitude": geo.longitude,
"latitude": geo.latitude,
}
)
geo_salinity.plot.pcolormesh("longitude", "latitude", cmap=cmocean.cm.haline)
###Output
CPU times: user 374 ms, sys: 64.5 ms, total: 438 ms
Wall time: 1.36 s
###Markdown
Time SeriesWhile ERDDAP allows us to ignore the storage details of the underlying files,knowing that the files are chunked as `(1, 40, 898, 398)`(which we can learn by looking at the metadata for one of the variables in the dataset),explains why getting 12 values for a time series at a single point in the domaintakes significantly longer than getting the thousands of values in the depth layers above.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity
.sel(depth=0, method="nearest")
.sel(gridX=250, gridY=500, time=slice("2020-01-09 00:00", "2020-01-09 12:00"))
.plot())
###Output
CPU times: user 62.2 ms, sys: 3.69 ms, total: 65.9 ms
Wall time: 3.67 s
###Markdown
And knowing that the model results are stored in daily files containing24 hourly average values means that we shouldn't be surprised that gettinga time series that spans more than one day takes even longer becauseERDDAP has to open more than one underlying file.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity
.sel(depth=0, method="nearest")
.sel(gridX=250, gridY=500, time=slice("2020-01-08 00:00", "2020-01-10 00:00"))
.plot())
###Output
CPU times: user 66.3 ms, sys: 4.36 ms, total: 70.6 ms
Wall time: 13 s
###Markdown
Profiles and Vertical SlicesThe `(1, 40, 898, 398)` chunking means that getting the values forprofiles and vertical slices are comparable in speed to getting them fordepth layers.
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity
.sel(gridX=250, gridY=500, time="2020-01-09 00:00", method="nearest")
.plot(y="depth", yincrease=False))
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
(ds.salinity
.sel(gridY=500, time="2020-01-09 00:00", method="nearest")
.plot(y="depth", yincrease=False, cmap=cmocean.cm.haline))
###Output
CPU times: user 202 ms, sys: 12.8 ms, total: 215 ms
Wall time: 689 ms
###Markdown
Unfortunately,the depth coordinate of the mesh mask dataset is called `gridZ` not `depth`(as is the case in the 3D tracer fields and other model fields datasets).That causes problems when we try to mask a profile or vertical slice.The issue is most easily(if not elegantly)resolved by using the underlying `numpy` array of mask values in the `.where()` method;i.e. `.where(mask.values)`
###Code
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02") as mesh:
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
gridX, gridY = 250, 500
salinity = ds.salinity.sel(gridX=gridX, gridY=gridY, time="2020-01-09 00:00", method="nearest")
mask = mesh.tmask.isel(time=0).sel(gridX=gridX, gridY=gridY)
salinity.where(mask.values).plot(y="depth", yincrease=False)
%%time
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSn3DMeshMaskV17-02") as mesh:
with xarray.open_dataset("https://salishsea.eos.ubc.ca/erddap/griddap/ubcSSg3DTracerFields1hV18-12") as ds:
gridX = slice(150, 375)
gridY = 500
salinity = ds.salinity.sel(gridX=gridX, gridY=gridY).sel(time="2020-01-09 00:00", method="nearest")
mask = mesh.tmask.isel(time=0).sel(gridX=gridX, gridY=gridY)
salinity.where(mask.values).plot(y="depth", yincrease=False, cmap=cmocean.cm.haline)
###Output
CPU times: user 241 ms, sys: 14.2 ms, total: 255 ms
Wall time: 835 ms
|
v0.12.2/examples/notebooks/generated/glm_weights.ipynb | ###Markdown
Weighted Generalized Linear Models
###Code
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
import statsmodels.api as sm
###Output
_____no_output_____
###Markdown
Weighted GLM: Poisson response data Load dataIn this example, we'll use the affair dataset using a handful of exogenous variables to predict the extra-marital affair rate. Weights will be generated to show that `freq_weights` are equivalent to repeating records of data. On the other hand, `var_weights` is equivalent to aggregating data.
###Code
print(sm.datasets.fair.NOTE)
###Output
::
Number of observations: 6366
Number of variables: 9
Variable name definitions:
rate_marriage : How rate marriage, 1 = very poor, 2 = poor, 3 = fair,
4 = good, 5 = very good
age : Age
yrs_married : No. years married. Interval approximations. See
original paper for detailed explanation.
children : No. children
religious : How relgious, 1 = not, 2 = mildly, 3 = fairly,
4 = strongly
educ : Level of education, 9 = grade school, 12 = high
school, 14 = some college, 16 = college graduate,
17 = some graduate school, 20 = advanced degree
occupation : 1 = student, 2 = farming, agriculture; semi-skilled,
or unskilled worker; 3 = white-colloar; 4 = teacher
counselor social worker, nurse; artist, writers;
technician, skilled worker, 5 = managerial,
administrative, business, 6 = professional with
advanced degree
occupation_husb : Husband's occupation. Same as occupation.
affairs : measure of time spent in extramarital affairs
See the original paper for more details.
###Markdown
Load the data into a pandas dataframe.
###Code
data = sm.datasets.fair.load_pandas().data
###Output
_____no_output_____
###Markdown
The dependent (endogenous) variable is ``affairs``
###Code
data.describe()
data[:3]
###Output
_____no_output_____
###Markdown
In the following we will work mostly with Poisson. While using decimal affairs works, we convert them to integers to have a count distribution.
###Code
data["affairs"] = np.ceil(data["affairs"])
data[:3]
(data["affairs"] == 0).mean()
np.bincount(data["affairs"].astype(int))
###Output
_____no_output_____
###Markdown
Condensing and Aggregating observationsWe have 6366 observations in our original dataset. When we consider only some selected variables, then we have fewer unique observations. In the following we combine observations in two ways, first we combine observations that have values for all variables identical, and secondly we combine observations that have the same explanatory variables. Dataset with unique observationsWe use pandas's groupby to combine identical observations and create a new variable `freq` that count how many observation have the values in the corresponding row.
###Code
data2 = data.copy()
data2['const'] = 1
dc = data2['affairs rate_marriage age yrs_married const'.split()].groupby('affairs rate_marriage age yrs_married'.split()).count()
dc.reset_index(inplace=True)
dc.rename(columns={'const': 'freq'}, inplace=True)
print(dc.shape)
dc.head()
###Output
(476, 5)
###Markdown
Dataset with unique explanatory variables (exog)For the next dataset we combine observations that have the same values of the explanatory variables. However, because the response variable can differ among combined observations, we compute the mean and the sum of the response variable for all combined observations.We use again pandas ``groupby`` to combine observations and to create the new variables. We also flatten the ``MultiIndex`` into a simple index.
###Code
gr = data['affairs rate_marriage age yrs_married'.split()].groupby('rate_marriage age yrs_married'.split())
df_a = gr.agg(['mean', 'sum','count'])
def merge_tuple(tpl):
if isinstance(tpl, tuple) and len(tpl) > 1:
return "_".join(map(str, tpl))
else:
return tpl
df_a.columns = df_a.columns.map(merge_tuple)
df_a.reset_index(inplace=True)
print(df_a.shape)
df_a.head()
###Output
(130, 6)
###Markdown
After combining observations with have a dataframe `dc` with 467 unique observations, and a dataframe `df_a` with 130 observations with unique values of the explanatory variables.
###Code
print('number of rows: \noriginal, with unique observations, with unique exog')
data.shape[0], dc.shape[0], df_a.shape[0]
###Output
number of rows:
original, with unique observations, with unique exog
###Markdown
AnalysisIn the following, we compare the GLM-Poisson results of the original data with models of the combined observations where the multiplicity or aggregation is given by weights or exposure. original data
###Code
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=data, family=sm.families.Poisson())
res_o = glm.fit()
print(res_o.summary())
res_o.pearson_chi2 / res_o.df_resid
###Output
_____no_output_____
###Markdown
condensed data (unique observations with frequencies)Combining identical observations and using frequency weights to take into account the multiplicity of observations produces exactly the same results. Some results attributes will differ when we want information about a single observation and not about the aggregate of all identical observations. For example, residuals do not take ``freq_weights`` into account.
###Code
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq']))
res_f = glm.fit()
print(res_f.summary())
res_f.pearson_chi2 / res_f.df_resid
###Output
_____no_output_____
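###Markdown
As a quick sketch of the point about residuals: the residual vector from the frequency-weighted fit has one entry per condensed row of `dc`, not one per original observation in `data` (using the results objects already defined above).
###Code
len(res_f.resid_response), dc.shape[0], data.shape[0]
###Output
_____no_output_____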
###Markdown
condensed using ``var_weights`` instead of ``freq_weights``Next, we compare ``var_weights`` to ``freq_weights``. It is a common practice to incorporate ``var_weights`` when the endogenous variable reflects averages and not identical observations. I do not see a theoretical reason why it produces the same results (in general). This produces the same results, but ``df_resid`` differs from the ``freq_weights`` example because ``var_weights`` do not change the number of effective observations.
###Code
glm = smf.glm('affairs ~ rate_marriage + age + yrs_married',
data=dc, family=sm.families.Poisson(), var_weights=np.asarray(dc['freq']))
res_fv = glm.fit()
print(res_fv.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: affairs No. Observations: 476
Model: GLM Df Residuals: 472
Model Family: Poisson Df Model: 3
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -10351.
Date: Tue, 02 Feb 2021 Deviance: 15375.
Time: 06:51:25 Pearson chi2: 3.23e+04
No. Iterations: 6
Covariance Type: nonrobust
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 2.7155 0.107 25.294 0.000 2.505 2.926
rate_marriage -0.4952 0.012 -41.702 0.000 -0.518 -0.472
age -0.0299 0.004 -6.691 0.000 -0.039 -0.021
yrs_married -0.0108 0.004 -2.507 0.012 -0.019 -0.002
=================================================================================
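###Markdown
A one-line sketch of the ``df_resid`` difference mentioned above: the frequency-weighted fit keeps the effective number of observations, while the variance-weighted fit does not.
###Code
res_f.df_resid, res_fv.df_resid
###Output
_____no_output_____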
###Markdown
Dispersion computed from the results is incorrect because of wrong ``df_resid``. It is correct if we use the original ``df_resid``.
###Code
res_fv.pearson_chi2 / res_fv.df_resid, res_f.pearson_chi2 / res_f.df_resid
###Output
_____no_output_____
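###Markdown
A small sketch of the statement above: dividing the ``var_weights`` Pearson statistic by the residual degrees of freedom of the original-data model gives the corrected dispersion.
###Code
res_fv.pearson_chi2 / res_o.df_resid
###Output
_____no_output_____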
###Markdown
aggregated or averaged data (unique values of explanatory variables)For these cases we combine observations that have the same values of the explanatory variables. The corresponding response variable is either a sum or an average. using ``exposure``If our dependent variable is the sum of the responses of all combined observations, then under the Poisson assumption the distribution remains the same but we have varying `exposure` given by the number of individuals that are represented by one aggregated observation. The parameter estimates and covariance of parameters are the same as with the original data, but log-likelihood, deviance and Pearson chi-squared differ.
###Code
glm = smf.glm('affairs_sum ~ rate_marriage + age + yrs_married',
data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count']))
res_e = glm.fit()
print(res_e.summary())
res_e.pearson_chi2 / res_e.df_resid
###Output
_____no_output_____
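###Markdown
A side note (sketch): for the log link, ``exposure`` enters the model as an offset equal to its logarithm, so the fit above can equivalently be written with an explicit ``offset``. This is only meant as an illustration of how the aggregation is handled, using the objects defined above.
###Code
glm_off = smf.glm('affairs_sum ~ rate_marriage + age + yrs_married',
data=df_a, family=sm.families.Poisson(),
offset=np.log(np.asarray(df_a['affairs_count'], float)))
res_off = glm_off.fit()
np.allclose(res_off.params.values, res_e.params.values)
###Output
_____no_output_____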
###Markdown
using var_weightsWe can also use the mean of all combined values of the dependent variable. In this case the variance will be related to the inverse of the total exposure reflected by one combined observation.
###Code
glm = smf.glm('affairs_mean ~ rate_marriage + age + yrs_married',
data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count']))
res_a = glm.fit()
print(res_a.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: affairs_mean No. Observations: 130
Model: GLM Df Residuals: 126
Model Family: Poisson Df Model: 3
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -5954.2
Date: Tue, 02 Feb 2021 Deviance: 967.46
Time: 06:51:25 Pearson chi2: 926.
No. Iterations: 5
Covariance Type: nonrobust
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 2.7155 0.107 25.294 0.000 2.505 2.926
rate_marriage -0.4952 0.012 -41.702 0.000 -0.518 -0.472
age -0.0299 0.004 -6.691 0.000 -0.039 -0.021
yrs_married -0.0108 0.004 -2.507 0.012 -0.019 -0.002
=================================================================================
###Markdown
ComparisonWe saw in the summary prints above that ``params`` and ``cov_params`` with associated Wald inference agree across versions. We summarize this in the following by comparing individual results attributes across versions. Parameter estimates `params`, standard errors of the parameters `bse` and `pvalues` of the parameters for the tests that the parameters are zero all agree. However, the likelihood and goodness-of-fit statistics, `llf`, `deviance` and `pearson_chi2` only partially agree. Specifically, the aggregated versions do not agree with the results using the original data. **Warning**: The behavior of `llf`, `deviance` and `pearson_chi2` might still change in future versions. Both the sum and average of the response variable for unique values of the explanatory variables have a proper likelihood interpretation. However, this interpretation is not reflected in these three statistics. Computationally this might be due to missing adjustments when aggregated data is used. However, theoretically we can think of these cases, especially for `var_weights`, as the misspecified case in which likelihood analysis is inappropriate and the results should be interpreted as quasi-likelihood estimates. There is an ambiguity in the definition of ``var_weights`` because they can be used for averages with correctly specified likelihood as well as for variance adjustments in the quasi-likelihood case. We are currently not trying to match the likelihood specification. However, in the next section we show that likelihood ratio type tests still produce the same result for all aggregation versions when we assume that the underlying model is correctly specified.
###Code
results_all = [res_o, res_f, res_e, res_a]
names = 'res_o res_f res_e res_a'.split()
pd.concat([r.params for r in results_all], axis=1, keys=names)
pd.concat([r.bse for r in results_all], axis=1, keys=names)
pd.concat([r.pvalues for r in results_all], axis=1, keys=names)
pd.DataFrame(np.column_stack([[r.llf, r.deviance, r.pearson_chi2] for r in results_all]),
columns=names, index=['llf', 'deviance', 'pearson chi2'])
###Output
_____no_output_____
###Markdown
Likelihood Ratio type testsWe saw above that likelihood and related statistics do not agree between the aggregated and original, individual data. We illustrate in the following that the likelihood ratio test and the difference in deviance agree across versions; however, Pearson chi-squared does not. As before: this is not sufficiently clear yet and could change. As a test case we drop the `age` variable and compute the likelihood ratio type statistics as the difference between the reduced or constrained and the full or unconstrained model. original observations and frequency weights
###Code
glm = smf.glm('affairs ~ rate_marriage + yrs_married',
data=data, family=sm.families.Poisson())
res_o2 = glm.fit()
#print(res_f2.summary())
res_o2.pearson_chi2 - res_o.pearson_chi2, res_o2.deviance - res_o.deviance, res_o2.llf - res_o.llf
glm = smf.glm('affairs ~ rate_marriage + yrs_married',
data=dc, family=sm.families.Poisson(), freq_weights=np.asarray(dc['freq']))
res_f2 = glm.fit()
#print(res_f2.summary())
res_f2.pearson_chi2 - res_f.pearson_chi2, res_f2.deviance - res_f.deviance, res_f2.llf - res_f.llf
###Output
_____no_output_____
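###Markdown
For reference, a minimal sketch of the likelihood ratio statistic itself, computed from the full and reduced fits on the original data (one restriction, dropping `age`):
###Code
from scipy import stats
lr_stat = 2 * (res_o.llf - res_o2.llf)
lr_pvalue = stats.chi2.sf(lr_stat, df=1)
lr_stat, lr_pvalue
###Output
_____no_output_____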
###Markdown
aggregated data: ``exposure`` and ``var_weights``Note: LR test agrees with original observations, ``pearson_chi2`` differs and has the wrong sign.
###Code
glm = smf.glm('affairs_sum ~ rate_marriage + yrs_married',
data=df_a, family=sm.families.Poisson(), exposure=np.asarray(df_a['affairs_count']))
res_e2 = glm.fit()
res_e2.pearson_chi2 - res_e.pearson_chi2, res_e2.deviance - res_e.deviance, res_e2.llf - res_e.llf
glm = smf.glm('affairs_mean ~ rate_marriage + yrs_married',
data=df_a, family=sm.families.Poisson(), var_weights=np.asarray(df_a['affairs_count']))
res_a2 = glm.fit()
res_a2.pearson_chi2 - res_a.pearson_chi2, res_a2.deviance - res_a.deviance, res_a2.llf - res_a.llf
###Output
_____no_output_____
###Markdown
Investigating Pearson chi-square statisticFirst, we do some sanity checks that there are no basic bugs in the computation of `pearson_chi2` and `resid_pearson`.
###Code
res_e2.pearson_chi2, res_e.pearson_chi2, (res_e2.resid_pearson**2).sum(), (res_e.resid_pearson**2).sum()
res_e._results.resid_response.mean(), res_e.model.family.variance(res_e.mu)[:5], res_e.mu[:5]
(res_e._results.resid_response**2 / res_e.model.family.variance(res_e.mu)).sum()
res_e2._results.resid_response.mean(), res_e2.model.family.variance(res_e2.mu)[:5], res_e2.mu[:5]
(res_e2._results.resid_response**2 / res_e2.model.family.variance(res_e2.mu)).sum()
(res_e2._results.resid_response**2).sum(), (res_e._results.resid_response**2).sum()
###Output
_____no_output_____
###Markdown
One possible reason for the incorrect sign is that we are subtracting quadratic terms that are divided by different denominators. In some related cases, the recommendation in the literature is to use a common denominator. We can compare the Pearson chi-squared statistic using the same variance assumption in the full and reduced model. In this case we obtain the same scaled Pearson chi2 difference between the reduced and full model across all versions. (Issue [3616](https://github.com/statsmodels/statsmodels/issues/3616) is intended to track this further.)
###Code
((res_e2._results.resid_response**2 - res_e._results.resid_response**2) / res_e2.model.family.variance(res_e2.mu)).sum()
((res_a2._results.resid_response**2 - res_a._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu)
* res_a2.model.var_weights).sum()
((res_f2._results.resid_response**2 - res_f._results.resid_response**2) / res_f2.model.family.variance(res_f2.mu)
* res_f2.model.freq_weights).sum()
((res_o2._results.resid_response**2 - res_o._results.resid_response**2) / res_o2.model.family.variance(res_o2.mu)).sum()
###Output
_____no_output_____
###Markdown
RemainderThe remainder of the notebook just contains some additional checks and can be ignored.
###Code
np.exp(res_e2.model.exposure)[:5], np.asarray(df_a['affairs_count'])[:5]
res_e2.resid_pearson.sum() - res_e.resid_pearson.sum()
res_e2.mu[:5]
res_a2.pearson_chi2, res_a.pearson_chi2, res_a2.resid_pearson.sum(), res_a.resid_pearson.sum()
((res_a2._results.resid_response**2) / res_a2.model.family.variance(res_a2.mu) * res_a2.model.var_weights).sum()
((res_a._results.resid_response**2) / res_a.model.family.variance(res_a.mu) * res_a.model.var_weights).sum()
((res_a._results.resid_response**2) / res_a.model.family.variance(res_a2.mu) * res_a.model.var_weights).sum()
res_e.model.endog[:5], res_e2.model.endog[:5]
res_a.model.endog[:5], res_a2.model.endog[:5]
res_a2.model.endog[:5] * np.exp(res_e2.model.exposure)[:5]
res_a2.model.endog[:5] * res_a2.model.var_weights[:5]
from scipy import stats
stats.chi2.sf(27.19530754604785, 1), stats.chi2.sf(29.083798806764687, 1)
res_o.pvalues
print(res_e2.summary())
print(res_e.summary())
print(res_f2.summary())
print(res_f.summary())
###Output
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: affairs No. Observations: 476
Model: GLM Df Residuals: 6363
Model Family: Poisson Df Model: 2
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -10374.
Date: Tue, 02 Feb 2021 Deviance: 15420.
Time: 06:51:26 Pearson chi2: 3.24e+04
No. Iterations: 6
Covariance Type: nonrobust
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 2.0754 0.050 41.512 0.000 1.977 2.173
rate_marriage -0.4947 0.012 -41.743 0.000 -0.518 -0.471
yrs_married -0.0360 0.002 -17.542 0.000 -0.040 -0.032
=================================================================================
Generalized Linear Model Regression Results
==============================================================================
Dep. Variable: affairs No. Observations: 476
Model: GLM Df Residuals: 6362
Model Family: Poisson Df Model: 3
Link Function: log Scale: 1.0000
Method: IRLS Log-Likelihood: -10351.
Date: Tue, 02 Feb 2021 Deviance: 15375.
Time: 06:51:26 Pearson chi2: 3.23e+04
No. Iterations: 6
Covariance Type: nonrobust
=================================================================================
coef std err z P>|z| [0.025 0.975]
---------------------------------------------------------------------------------
Intercept 2.7155 0.107 25.294 0.000 2.505 2.926
rate_marriage -0.4952 0.012 -41.702 0.000 -0.518 -0.472
age -0.0299 0.004 -6.691 0.000 -0.039 -0.021
yrs_married -0.0108 0.004 -2.507 0.012 -0.019 -0.002
=================================================================================
|
code/rainfallSimulator_FinancialAnalysis.ipynb | ###Markdown
Weather Derivatives Rainfall Simulator -- Final Modelling + Pricing Developed by [Jesus Solano](mailto:[email protected]) 16 November 2018
###Code
# Import needed libraries.
import numpy as np
import pandas as pd
import random as rand
import matplotlib.pyplot as plt
from scipy.stats import bernoulli
from scipy.stats import gamma
import pickle
import time
import datetime
from scipy import stats
###Output
_____no_output_____
###Markdown
Generate artificial Data
###Code
### ENSO probabilistic forecast.
# Open saved data.
ensoForecast = pickle.load(open('../datasets/ensoForecastProb/ensoForecastProbabilities.pickle','rb'))
# Print an example .. ( Format needed)
ensoForecast['2017-01']
### Create total dataframe.
def createTotalDataFrame(daysNumber, startDate , initialState , initialPrep , ensoForecast, optionMonthTerm ):
# Set variables names.
totalDataframeColumns = ['state','Prep','Month','probNina','probNino', 'nextState']
# Create dataframe.
allDataDataframe = pd.DataFrame(columns=totalDataframeColumns)
# Number of simulation days(i.e 30, 60)
daysNumber = daysNumber
# Simulation start date ('1995-04-22')
startDate = startDate
# State of rainfall last day before start date --> Remember 0 means dry and 1 means wet.
initialState = initialState
initialPrep = initialPrep # Only fill when initialState == 1
dates = pd.date_range(startDate, periods = daysNumber + 2 , freq='D')
for date in dates:
# Fill precipitation amount.
allDataDataframe.loc[date.strftime('%Y-%m-%d'),'Prep'] = np.nan
# Fill month of date
allDataDataframe.loc[date.strftime('%Y-%m-%d'),'Month'] = date.month
tempDate = None
if optionMonthTerm==1:
tempDate = date
else:
tempDate = date - pd.DateOffset(months=optionMonthTerm-1)
# Fill El Nino ENSO forecast probability.
allDataDataframe.loc[date.strftime('%Y-%m-%d'),'probNino'] = float(ensoForecast[tempDate.strftime('%Y-%m')].loc[optionMonthTerm-1,'El Niño'].strip('%').strip('~'))/100
# Fill La Nina ENSO forecast probability.
allDataDataframe.loc[date.strftime('%Y-%m-%d'),'probNina'] = float(ensoForecast[tempDate.strftime('%Y-%m')].loc[optionMonthTerm-1,'La Niña'].strip('%').strip('~'))/100
# Fill State.
allDataDataframe.loc[date.strftime('%Y-%m-%d'),'state'] = np.nan
simulationDataFrame = allDataDataframe[:-1]
# Fill initial conditions.
simulationDataFrame['state'][0] = initialState
if initialState == 1:
simulationDataFrame['Prep'][0] = initialPrep
else:
simulationDataFrame['Prep'][0] = 0.0
return simulationDataFrame
simulationDataFrame = createTotalDataFrame(daysNumber= 30, startDate = '2005-01-01', initialState = 1 , initialPrep = 0.4, ensoForecast = ensoForecast, optionMonthTerm=6)
simulationDataFrame
### Load transitions and amount parameters.
# Transition probabilities.
transitionsParametersDry = pd.read_csv('../results/visibleMarkov/transitionsParametersDry.csv', sep = ' ', header=None, names = ['variable', 'value'])
transitionsParametersDry.index += 1
transitionsParametersDry
transitionsParametersWet = pd.read_csv('../results/visibleMarkov/transitionsParametersWet.csv', sep = ' ', header=None, names = ['variable', 'value'])
transitionsParametersWet.index += 1
transitionsParametersWet
amountParametersGamma = pd.read_csv('../results/visibleMarkov/amountGammaPro.csv', sep = ' ', header=None, names = ['variable', 'mu', 'shape'])
amountParametersGamma.index += 1
print(transitionsParametersDry)
print('\n * Intercept means firts month (January) ')
###Output
variable value
1 (Intercept) -1.168017
2 Month2 0.346713
3 Month3 0.848934
4 Month4 1.563185
5 Month5 1.567584
6 Month6 1.132592
7 Month7 1.311161
8 Month8 1.432857
9 Month9 0.924944
10 Month10 1.587704
11 Month11 1.356612
12 Month12 0.518480
13 probNino -0.453497
14 probNina 0.176919
* Intercept means firts month (January)
###Markdown
Simulation Function Core
###Code
### Build the simulation core.
# Updates the state of the day based on yesterday state.
def updateState(yesterdayIndex, simulationDataFrame, transitionsParametersDry, transitionsParametersWet):
# Additional data of day.
yesterdayState = simulationDataFrame['state'][yesterdayIndex]
yesterdayPrep = simulationDataFrame['Prep'][yesterdayIndex]
yesterdayProbNino = simulationDataFrame['probNino'][yesterdayIndex]
yesterdayProbNina = simulationDataFrame['probNina'][yesterdayIndex]
yesterdayMonth = simulationDataFrame['Month'][yesterdayIndex]
# Calculate transition probability.
if yesterdayState == 0:
# Includes month factor + probNino value + probNina value.
successProbabilityLogit = transitionsParametersDry['value'][1]+transitionsParametersDry['value'][yesterdayMonth] + yesterdayProbNino*transitionsParametersDry['value'][13] + yesterdayProbNina*transitionsParametersDry['value'][14]
if yesterdayMonth==1:
# Includes month factor (January is the intercept) + probNino value + probNina value.
successProbabilityLogit = transitionsParametersDry['value'][yesterdayMonth] + yesterdayProbNino*transitionsParametersDry['value'][13] + yesterdayProbNina*transitionsParametersDry['value'][14]
successProbability = (np.exp(successProbabilityLogit))/(1+np.exp(successProbabilityLogit))
elif yesterdayState == 1:
# Includes intercept + month factor + probNino value + probNina value + prep value (wet-day coefficients).
successProbabilityLogit = transitionsParametersWet['value'][1] + transitionsParametersWet['value'][yesterdayMonth] + yesterdayProbNino*transitionsParametersWet['value'][14] + yesterdayProbNina*transitionsParametersWet['value'][15] + yesterdayPrep*transitionsParametersWet['value'][13]
if yesterdayMonth==1:
# Includes month factor (January is the intercept) + probNino value + probNina value + prep value.
successProbabilityLogit = transitionsParametersWet['value'][yesterdayMonth] + yesterdayProbNino*transitionsParametersWet['value'][14] + yesterdayProbNina*transitionsParametersWet['value'][15] + yesterdayPrep*transitionsParametersWet['value'][13]
successProbability = (np.exp(successProbabilityLogit))/(1+np.exp(successProbabilityLogit))
else:
print('State of date: ', simulationDataFrame.index[yesterdayIndex],' not found.')
#print(successProbability)
#successProbability = monthTransitions['p'+str(yesterdayState)+'1'][yesterdayMonth]
todayState = bernoulli.rvs(successProbability)
return todayState
# Simulates one run of simulation.
def oneRun(simulationDataFrame, transitionsParametersDry, transitionsParametersWet, amountParametersGamma):
# Define the total rainfall amount over the simulation.
rainfall = 0
# Total rainfall days.
wetDays = 0
# Loop over days in simulation to calculate the rainfall amount.
for day in range(1,len(simulationDataFrame)):
# Get today date.
dateOfDay = datetime.datetime.strptime(simulationDataFrame.index[day],'%Y-%m-%d')
# Update today state based on the yesterday state.
todayState = updateState(day-1, simulationDataFrame, transitionsParametersDry, transitionsParametersWet)
# Write new day information.
simulationDataFrame['state'][day] = todayState
simulationDataFrame['nextState'][day-1] = todayState
# Computes total accumulated rainfall.
if todayState == 1:
# Sum wet day.
wetDays+=1
# Additional data of day.
todayProbNino = simulationDataFrame['probNino'][day]
todayProbNina = simulationDataFrame['probNina'][day]
todayMonth = simulationDataFrame['Month'][day]
# Calculates gamma log(mu).
gammaLogMu = amountParametersGamma['mu'][1] + amountParametersGamma['mu'][todayMonth] + todayProbNino*amountParametersGamma['mu'][13] + todayProbNina*amountParametersGamma['mu'][14]
#print(gammaMu)
# Calculates gamma shape (log scale).
gammaLogShape = amountParametersGamma['shape'][1] + amountParametersGamma['shape'][todayMonth] + todayProbNino*amountParametersGamma['shape'][13] + todayProbNina*amountParametersGamma['shape'][14]
#print(gammaShape)
if todayMonth==1:
# Calculates gamma log(mu).
gammaLogMu = amountParametersGamma['mu'][todayMonth] + todayProbNino*amountParametersGamma['mu'][13] + todayProbNina*amountParametersGamma['mu'][14]
#print(gammaMu)
# Calculates gamma shape (log scale).
gammaLogShape = amountParametersGamma['shape'][todayMonth] + todayProbNino*amountParametersGamma['shape'][13] + todayProbNina*amountParametersGamma['shape'][14]
#print(gammaShape)
# Update mu
gammaMu = np.exp(gammaLogMu)
# Update shape
gammaShape = np.exp(gammaLogShape)
# Calculate gamma scale.
gammaScale = gammaMu / gammaShape
# Generate random rainfall.
todayRainfall = gamma.rvs(a = gammaShape, scale = gammaScale)
# Write new day information.
simulationDataFrame['Prep'][day] = todayRainfall
# Updates rainfall amount.
rainfall += todayRainfall
else:
# Write new day information.
simulationDataFrame['Prep'][day] = 0
yesterdayState = todayState
return rainfall,wetDays
###Output
_____no_output_____
###Markdown
Complete Simulation
###Code
# Run total iterations.
def totalRun(simulationDataFrame, transitionsParametersDry, transitionsParametersWet, amountParametersGamma,iterations):
# Initialize time
startTime = time.time()
# Array to store all precipitations.
rainfallPerIteration = [None]*iterations
wetDaysPerIteration = [None]*iterations
# Loop over each iteration(simulation)
for i in range(iterations):
simulationDataFrameC = simulationDataFrame.copy()
iterationRainfall,wetDays = oneRun(simulationDataFrameC, transitionsParametersDry, transitionsParametersWet, amountParametersGamma)
rainfallPerIteration[i] = iterationRainfall
wetDaysPerIteration[i] = wetDays
# Calculate time
currentTime = time.time() - startTime
# Print mean of wet days.
#print('The mean of wet days is: ', np.mean(wetDaysPerIteration))
# Logging time.
#print('The elapsed time over simulation is: ', currentTime, ' seconds.')
return rainfallPerIteration
###Output
_____no_output_____
###Markdown
Financial Analysis
###Code
def calculatePrice(strikePrice, interestRate, finalSimulationData):
presentValueArray = [0]*len(finalSimulationData)
for i in range(len(finalSimulationData)):
tempDiff = finalSimulationData[i]-strikePrice
realDiff = max(0,tempDiff)
presentValue = realDiff*np.exp(-interestRate/12)
presentValueArray[i] = presentValue
#print('The option price should be: \n ' , np.mean(presentValueArray))
return np.mean(presentValueArray)
###Output
_____no_output_____
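###Markdown
The function above prices the option as the discounted mean payoff of a call on cumulative rainfall, max(rainfall - strike, 0), with a one-month discount factor exp(-r/12). A minimal usage sketch with made-up rainfall values (for illustration only):
###Code
# Hypothetical simulated cumulative rainfall amounts in mm (illustrative values, not model output).
exampleRainfall = [80.0, 120.5, 95.2, 143.7]
calculatePrice(strikePrice=100, interestRate=0.0235, finalSimulationData=exampleRainfall)
###Output
_____no_output_____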
###Markdown
Final Results
###Code
def plotRainfallDistribution(rainfallSimulated):
# Create Figure.
fig = plt.figure(figsize=(20, 10))
# Plot histogram.
plt.hist(rainfallSimulated,facecolor='steelblue',bins=100, density=True,
histtype='stepfilled', edgecolor = 'black' , hatch = '+')
# Add axis names.
plt.title('Rainfall Simulation')
plt.xlabel('Rainfall Amount [mm]')
plt.ylabel('Probability ')
plt.grid()
plt.show()
def optionRainfallCalculator(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, optionMonthTerm):
## Generates initial conditions.
# Defines initial state based on proportions.
successProbability = 0.5
initialState = bernoulli.rvs(successProbability)
# Calculates initial precipitation.
if initialState == 1:
initialPrep = 1.0
else:
initialPrep = 0.0
## Create dataframe to simulate.
simulationDataFrame = createTotalDataFrame(daysNumber= 30, startDate = startDate, initialState = initialState , initialPrep = initialPrep, ensoForecast = ensoForecast, optionMonthTerm = optionMonthTerm)
## Run all iterations.
rainfallPerIteration = totalRun(simulationDataFrame, transitionsParametersDry, transitionsParametersWet, amountParametersGamma,iterations)
## Plot histogram.
#plotRainfallDistribution(rainfallPerIteration)
## Print Statistics.
#print(stats.describe(rainfallPerIteration))
return rainfallPerIteration
###Output
_____no_output_____
###Markdown
Basic Simulation Simulation Function Core
###Code
### Build the simulation core.
# Updates the state of the day based on yesterday state.
def updateState_S(yesterdayDate, yesterdayState, monthTransitions):
yesterdayMonth = yesterdayDate.month
successProbability = monthTransitions['p'+str(yesterdayState)+'1'][yesterdayMonth]
todayState = bernoulli.rvs(successProbability)
return todayState
# Simulates one run of simulation.
def oneRun_S(daysNumber, startDate, initialState, monthTransitions,fittedGamma):
# Create a variable to store the last day state.
yesterdayState = initialState
# Generate a timestamp with all days in simulation.
dates = pd.date_range(startDate, periods=daysNumber, freq='D')
# Define the total rainfall amount over the simulation.
rainfall = 0
# Loop over days in simulation to calculate the rainfall amount.
for day in dates:
# Update today state based on the yesterday state.
todayState = updateState_S(day - pd.Timedelta(days=1), yesterdayState, monthTransitions)
# Computes total accumulated rainfall.
if todayState == 1:
todayRainfall = gamma.rvs(fittedGamma['Shape'][0],fittedGamma['Loc'][0],fittedGamma['Scale'][0])
# Updates rainfall amount.
rainfall += todayRainfall
yesterdayState = todayState
return rainfall
###Output
_____no_output_____
###Markdown
Complete Simulation
###Code
# Run total iterations.
def totalRun_S(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma,iterations):
# Initialize time
startTime = time.time()
# Array to store all precipitations.
rainfallPerIteration = [None]*iterations
# Loop over each iteration(simulation)
for i in range(iterations):
iterationRainfall = oneRun_S(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma)
rainfallPerIteration[i] = iterationRainfall
# Calculate time
currentTime = time.time() - startTime
# Logging time.
print('The elapsed time over simulation is: ', currentTime, ' seconds.')
return rainfallPerIteration
#### Define parameters simulation.
# Simulations iterations.
iterations = 1000
# Transition probabilities.
monthTransitionsProb = pd.read_csv('../results/visibleMarkov/monthTransitions.csv', index_col=0)
# Rainfall amount parameters( Gamma parameters)
fittedGamma = pd.read_csv('../results/visibleMarkov/fittedGamma.csv', index_col=0)
# Number of simulation days(i.e 30, 60)
daysNumber = 30
# Simulation start date ('1995-04-22')
startDate = '2018-08-18'
# State of rainfall last day before start date --> Remember 0 means dry and 1 means wet.
initialState = 1
def optionRainfallCalculator_S(iterations, startDate,initialState, monthTransitionsProb,fittedGamma, optionMonthTerm):
## Generates initial conditions.
# Defines initial state based on proportions.
successProbability = 0.5
initialState = bernoulli.rvs(successProbability)
# Calculates initial precipitation.
if initialState == 1:
initialPrep = 1.0
else:
initialPrep = 0.0
## Create dataframe to simulate.
simulationDataFrame = createTotalDataFrame(daysNumber= 30, startDate = startDate, initialState = initialState , initialPrep = initialPrep, ensoForecast = ensoForecast, optionMonthTerm = optionMonthTerm)
daysNumber = 30
## Run all iterations.
rainfallPerIteration = totalRun_S(daysNumber,startDate,initialState, monthTransitionsProb,fittedGamma,iterations)
## Plot histogram.
#plotRainfallDistribution(rainfallPerIteration)
## Print Statistics.
#print(stats.describe(rainfallPerIteration))
return rainfallPerIteration
###Output
_____no_output_____
###Markdown
Get USD Libor
###Code
liborUSD2017 = [0.77333, 0.82000, 0.99872, 1.3176]
# [1m, 3m, 6m, 12m ]
###Output
_____no_output_____
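###Markdown
A small sketch connecting these quotes to the pricing function: the quotes are in percent per annum, so the 1-month rate can be converted to the monthly discount factor exp(-r/12) used in `calculatePrice`. This is an illustration of the intended use, not part of the original analysis.
###Code
r_1m = liborUSD2017[0] / 100 # 1-month USD Libor as a decimal
monthlyDiscountFactor = np.exp(-r_1m / 12)
monthlyDiscountFactor
###Output
_____no_output_____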
###Markdown
Plotting final results
###Code
def finalComparison(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, strikePrices, interestRates):
fig = plt.figure(figsize=(20, 10))
for strikePrice in strikePrices:
rainfallOptionTo={}
pricePerOption = {}
for optionMonthTerm in range(1,8):
rainfallOptionTo[optionMonthTerm] = optionRainfallCalculator(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, optionMonthTerm)
interestRate = interestRates
pricePerOption[optionMonthTerm] = calculatePrice(strikePrice, interestRate, rainfallOptionTo[optionMonthTerm])
plotList = list(pricePerOption.values())
# Create Figure.
'''
# Plot histogram.
plt.hist(rainfallSimulated,facecolor='steelblue',bins=100, density=True,
histtype='stepfilled', edgecolor = 'black' , hatch = '+')
'''
x = range(1,8)
plt.plot(x,plotList, label='Strike Price ='+ str(strikePrice))
# Add axis names.
plt.title('Rainfall Option Simulation')
plt.xlabel('Month')
plt.ylabel('Price')
plt.legend()
plt.grid()
plt.show()
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
def finalComparisonGraph(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, strikePrices, interestRates):
plt.figure(figsize=(20, 10))
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
count =1
for strikePrice in strikePrices:
rainfallOptionTo={}
pricePerOption = {}
for optionMonthTerm in range(1,8):
rainfallOptionTo[optionMonthTerm] = optionRainfallCalculator(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, optionMonthTerm)
interestRate = interestRates
pricePerOption[optionMonthTerm] = calculatePrice(strikePrice, interestRate, rainfallOptionTo[optionMonthTerm])
plotList = list(pricePerOption.values())
# Create Figure.
par = host.twinx()
offset = 45
new_fixed_axis = par.get_grid_helper().new_fixed_axis
par.axis["right"] = new_fixed_axis(loc="right", axes=par,
offset=(offset*count, 0))
par.axis["right"].toggle(all=True)
par.set_ylabel('Price')
'''
# Plot histogram.
plt.hist(rainfallSimulated,facecolor='steelblue',bins=100, density=True,
histtype='stepfilled', edgecolor = 'black' , hatch = '+')
'''
x = range(1,8)
p, =par.plot(x,plotList, label='Strike Price ='+ str(strikePrice))
par.axis['right'].label.set_color(p.get_color())
count+=1
# Add axis names.
plt.title('Rainfall Option Simulation')
plt.xlabel('Month')
plt.ylabel('Price')
plt.legend()
plt.grid()
plt.show()
from mpl_toolkits.axes_grid1 import host_subplot
import mpl_toolkits.axisartist as AA
import matplotlib.pyplot as plt
def finalComparisonGraph_S(iterations, startDate,initialState, monthTransitionsProb,fittedGamma, strikePrices, interestRates):
plt.figure(figsize=(20, 10))
host = host_subplot(111, axes_class=AA.Axes)
plt.subplots_adjust(right=0.75)
count =1
for strikePrice in strikePrices:
rainfallOptionTo={}
pricePerOption = {}
for optionMonthTerm in range(1,8):
rainfallOptionTo[optionMonthTerm] = optionRainfallCalculator_S(iterations, startDate, transitionsParametersDry, transitionsParametersWet, amountParametersGamma, optionMonthTerm)
interestRate = interestRates
pricePerOption[optionMonthTerm] = calculatePrice(strikePrice, interestRate, rainfallOptionTo[optionMonthTerm])
plotList = list(pricePerOption.values())
# Create Figure.
par = host.twinx()
offset = 45
new_fixed_axis = par.get_grid_helper().new_fixed_axis
par.axis["right"] = new_fixed_axis(loc="right", axes=par,
offset=(offset*count, 0))
par.axis["right"].toggle(all=True)
par.set_ylabel('Price')
'''
# Plot histogram.
plt.hist(rainfallSimulated,facecolor='steelblue',bins=100, density=True,
histtype='stepfilled', edgecolor = 'black' , hatch = '+')
'''
x = range(1,8)
p, =par.plot(x,plotList, label='Strike Price ='+ str(strikePrice))
par.axis['right'].label.set_color(p.get_color())
count+=1
# Add axis names.
plt.title('Rainfall Option Simulation')
plt.xlabel('Month')
plt.ylabel('Price')
plt.legend()
plt.grid()
plt.show()
strikePrices = [25,50,75,100,125]
finalComparison(iterations=500,
startDate='2017-01-01',
transitionsParametersDry= transitionsParametersDry ,
transitionsParametersWet = transitionsParametersWet,
amountParametersGamma = amountParametersGamma,
strikePrices=strikePrices,
interestRates= 0.0235 )
strikePrices = [25,50,75,100,125]
finalComparisonGraph(iterations=100,
startDate='2017-01-01',
transitionsParametersDry= transitionsParametersDry ,
transitionsParametersWet = transitionsParametersWet,
amountParametersGamma = amountParametersGamma,
strikePrices=strikePrices,
interestRates= 0.0235 )
strikePrices = [50,100,150]
finalComparisonGraph(iterations=200,
startDate='2017-01-01',
transitionsParametersDry= transitionsParametersDry ,
transitionsParametersWet = transitionsParametersWet,
amountParametersGamma = amountParametersGamma,
strikePrices=strikePrices,
interestRates= 0.0235 )
strikePrices = [25,50,75,100,125]
finalComparisonGraph_S(iterations=100,
startDate='2017-01-01',
transitionsParametersDry= transitionsParametersDry ,
transitionsParametersWet = transitionsParametersWet,
amountParametersGamma = amountParametersGamma,
strikePrices=strikePrices,
interestRates= 0.0235 )
strikePrices = [50,100,150]
finalComparisonGraph(iterations=200,
startDate='2017-04-01',
transitionsParametersDry= transitionsParametersDry ,
transitionsParametersWet = transitionsParametersWet,
amountParametersGamma = amountParametersGamma,
strikePrices=strikePrices,
interestRates= 0.0235 )
###Output
_____no_output_____ |
Assignment9_Plotting Vector using NumPy and MatPlotLib.ipynb | ###Markdown
Plotting Vectors using NumPy and MatPlotLib In this laboratory we will discuss the basics of numerical and scientific programming by working with vectors using NumPy and MatPlotLib. Objectives At the end of this activity you will be able to: 1. Be familiar with the Python libraries for numerical and scientific programming. 2. Visualize vectors through Python programming. 3. Perform simple vector operations through code. **NumPy** * NumPy provides a wide variety of array-based mathematical operations. It enriches Python with efficient data structures for calculations with arrays and matrices, together with a large collection of high-level mathematical functions that operate on them. In NumPy, vectors and matrices are represented by the `np.array` class; a vector is a one-dimensional array that represents a magnitude with a direction. Scalars \\Represent a magnitude or a single value. Vectors \\Represent a magnitude with a direction. ***Representing Vectors*** Now that you know how to represent vectors in component and matrix form, we can hard-code them in Python. Let's say that you have the vectors: $$ P = 13\hat{x} + 4\hat{y} \\J = 2\hat{x} + 8\hat{y}\\K = -3\hat{a}_x + 6\hat{a}_y - 3\hat{a}_z \\D = 5\hat{i} + 2\hat{j} - 9\hat{k}$$ whose matrix equivalents are: $$ P = \begin{bmatrix} 13 \\ 4\end{bmatrix} , J = \begin{bmatrix} 2 \\ 8\end{bmatrix} , K = \begin{bmatrix} -3 \\ 6 \\ -3 \end{bmatrix}, D = \begin{bmatrix} 5 \\ 2 \\ -9\end{bmatrix}$$$$ P = \begin{bmatrix} 13 & 4\end{bmatrix} , J = \begin{bmatrix} 2 & 8\end{bmatrix} , K = \begin{bmatrix} -3 & 6 & -3\end{bmatrix} , D = \begin{bmatrix} 5 & 2 & -9\end{bmatrix} $$ We can then start writing NumPy code for vectors like these:
###Code
## Importing necessary libraries
import numpy as np ## 'np' here is short-hand name of the library (numpy) or a nickname.
P = np.array([0, 2])
A = np.array([-1, 2])
T = np.array([
[5],
[5],
[5]
])
Y = np.array ([[-2],
[0],
[-3]])
print('Vector P is ', P)
print('Vector A is ', A)
print('Vector T is ', T)
print('Vector Y is ', Y)
###Output
Vector P is [0 2]
Vector A is [-1 2]
Vector T is [[5]
[5]
[5]]
Vector Y is [[-2]
[ 0]
[-3]]
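###Markdown
As a side note, the example vectors $P$, $J$, $K$, and $D$ from the introduction can be hard-coded the same way. The `_intro` suffix below is only used so these illustrative arrays do not clash with the variables defined above.
###Code
P_intro = np.array([13, 4])
J_intro = np.array([2, 8])
K_intro = np.array([-3, 6, -3])
D_intro = np.array([5, 2, -9])
print(P_intro, J_intro, K_intro, D_intro)
###Output
_____no_output_____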
###Markdown
***Describing Vectors in NumPy*** * Knowing how to describe vectors is a fundamental step before performing operations on them, from the basic to the complex. The shape, size, and number of dimensions (rank) are the basic properties used to describe an array.
###Code
### Checking shapes
### Shapes declare how many elements are there on each row and column
P.shape
A = np.array([2, 1, -4, 2, -1.5, 3])
A.shape
T.shape
### Checking size
### Array/Vector sizes declare the total number of elements are there in the vector
Y.size
### Checking dimensions
### The dimensions or rank of a vector declare how many dimensions are there for the vector.
T.ndim
###Output
_____no_output_____
###Markdown
Good job! ✅ We are done checking shapes, sizes, and dimensions; the next step is to explore performing operations with these vectors. **ADDITION** * The operation of addition is straightforward: the elements of the arrays are simply added element-wise, matching them by index. *For instance, adding vector $P$ and vector $B$ gives the resulting vector:* $$R = 10\hat{x} -3\hat{y} \\ \\or \\ \\ R = \begin{bmatrix} 10 \\ -3\end{bmatrix} $$ We can create that in NumPy in various ways:
###Code
R1 = np.add(T, Y) ## this is the functional method using the numpy library
R2= np.add(T, A)
R1,R2
R = P + Y ## this is the explicit method, since Python does a value-reference so it can
## know that these variables would need to do array operations.
R
R = np.add(T,A)
R
R = np.subtract(T,Y)
R
R = np.multiply(P,T)
R
R = np.divide(T,A)
R
pos1 = np.array([1,0,1])
pos2 = np.array([1,2,3])
pos3 = np.array([24,22,13])
pos4 = np.array([2,4,2])
R = pos1 + pos2 + pos3 + pos4
R
pos1 = np.array([1,0,1])
pos2 = np.array([1,2,3])
pos3 = np.array([24,22,13])
pos4 = np.array([2,4,2])
R = np.multiply(pos3, pos4)
R
pos1 = np.array([1,0,1])
pos2 = np.array([1,2,3])
pos3 = np.array([24,22,13])
pos4 = np.array([2,4,2])
R = pos3 / pos4
R
###Output
_____no_output_____
###Markdown
Try for yourself! Try to implement subtraction, multiplication, and division with vectors $V$ and $K$!
###Code
### Try out your code here!
V = np.array([12, 28, 20])
K = np.array([2, 4, 6])
R = np.divide(V,K)
R
R = V - K
R
R = np.multiply(V,K)
R
###Output
_____no_output_____
###Markdown
**Scaling** * Scaling, also known as scalar multiplication, takes a scalar value and multiplies it with a vector. Let's take the example below: $$S = 13 \cdot P$$ We can do this in numpy through:
###Code
#S = 13 * P
S = np.multiply(13,P)
S
###Output
_____no_output_____
###Markdown
Try to implement scaling with two vectors. $$S = 12 \cdot T$$
###Code
#S = 12 * T
S = np.multiply(12,T)
S
###Output
_____no_output_____
###Markdown
$$S = 3 \cdot A\cdot P$$
###Code
#S = 3 * A * P
S = np.multiply(3,A,P)
S
###Output
_____no_output_____
###Markdown
$$S = A \cdot T$$
###Code
#S = A * T
S = np.multiply(A, T)
S
S1 = np.multiply(2,A)
S2 = np.multiply(P , Y)
S1,S2
S = np.multiply(A,Y)
S
###Output
_____no_output_____
###Markdown
**MatPlotLib** * Matplotlib is a Python library that allows users to build static, animated, and interactive plots. NumPy is a prerequisite for Matplotlib, which uses NumPy functions for numerical data and multi-dimensional arrays. ***Visualizing Data*** It may be useful to visualize these vectors in addition to solving them. For that, we'll employ MatPlotLib. First, we'll need to import it.
###Code
import matplotlib.pyplot as plt
import matplotlib
%matplotlib inline
A = [7, 8]
B = [9, -4]
plt.scatter(A[0], A[1], label='M', c='purple')
plt.scatter(B[0], B[1], label='L', c='pink')
plt.grid()
plt.legend()
plt.show()
A = np.array([2, -1])
B = np.array([-1, 2])
R = A + B
Magnitude = np.sqrt(np.sum(R**2))
print(Magnitude)
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-2, 3)
plt.ylim(-3, 2)
plt.quiver(0, 0, A[0], A[1], angles='xy', scale_units='xy', scale=1, color='orange')
plt.quiver(A[0], A[1], B[0], B[1], angles='xy', scale_units='xy', scale=1, color='yellow')
P = A + B
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='blue')
plt.grid()
plt.show()
print(P)
print(Magnitude)
Slope = P[1]/P[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
n = P.shape[0]
plt.xlim(-7, 7)
plt.ylim(-7, 7)
plt.quiver(0,0, A[0], A[1], angles='xy', scale_units='xy',scale=1, color='pink')
plt.quiver(A[0],A[1], B[0], B[1], angles='xy', scale_units='xy',scale=1, color='gray')
plt.quiver(0,0, R[0], R[1], angles='xy', scale_units='xy',scale=1, color='brown')
plt.show()
###Output
_____no_output_____
###Markdown
Try plotting three vectors and show the resultant vector. Use the head-to-tail method.
###Code
K = np.array([13, 1])
D = np.array([10, 5])
P = np.array ([7, 13])
B = K + D
R = K + D + P
Magnitude = np.sqrt(np.sum(R**2))
plt.title("Resultant Vector\nMagnitude:{}" .format(Magnitude))
plt.xlim(-35, 35)
plt.ylim(-35, 35)
plt.quiver(0, 0, K[0], K[1], angles='xy', scale_units='xy', scale=1, color='brown')
plt.quiver(K[0], K[1], D[0], D[1], angles='xy', scale_units='xy', scale=1, color='violet')
plt.quiver( B[0], B[1], P[0], P[1], angles='xy', scale_units='xy', scale=1, color='grey')
plt.quiver(0, 0, R[0], R[1], angles='xy', scale_units='xy', scale=1, color='black')
plt.grid()
plt.show()
print(R)
print(Magnitude)
Slope = R[1]/R[0]
print(Slope)
Angle = (np.arctan(Slope))*(180/np.pi)
print(Angle)
###Output
_____no_output_____ |
ESS_algorithm.ipynb | ###Markdown
Estimation of Stationary Sleep-segments (ESS) Algorithm Implementation It is described in this paper https://ieeexplore.ieee.org/abstract/document/7052479
###Code
import os
import numpy as np
from sklearn import metrics
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.decomposition import PCA
from sklearn.model_selection import KFold
#from google.colab import files
###Output
_____no_output_____
###Markdown
Prepare data
###Code
#data_path = os.path.join("/content/gdrive/My Drive/", "DRU-MAWI-project/ICHI14_dataset/data")
data_path = os.path.join("ICHI14_dataset/data")
patient_list = ['002','003','005','007','08a','08b','09a','09b', '10a','011',"012", '013','014','15a','15b','016',
'017','018','019','020','021','022','023','025','026','027','028','029','030','031','032',
'033','034','035','036','037','038','040','042','043','044','045','047','048','049', '050','051']
train_patient_list, test_patient_list = train_test_split(patient_list, random_state=152, test_size=0.3)
test_patient_list, valid_patient_list = train_test_split(test_patient_list, random_state=151, test_size=0.5)
print(len(patient_list))
print(len(train_patient_list))
print(len(valid_patient_list))
print(len(test_patient_list))
print(train_patient_list)
print(valid_patient_list)
print(test_patient_list)
def change_labels(sample):
"""
Returns:
sample - contains only label 1 (awake) and 0 (sleep) for polysomnography
"""
sample.gt[sample.gt==0] = 8
sample.gt[np.logical_or.reduce((sample.gt==1, sample.gt==2, sample.gt==3, sample.gt==5))] = 0
sample.gt[np.logical_or.reduce((sample.gt==6, sample.gt==7, sample.gt==8))] = 1
return sample
#-------------------------------------------------------------------------
def decoder(sample):
'''
Returns:
decoded_sample - contains accelerometer and ps data for each sensor record, ndarray of shape (n_records, 4)
'''
sample = np.repeat(sample, sample.d, axis=0)
n_records = sample.shape[0]
decoded_sample = np.zeros((n_records, 4))
decoded_sample[:, 0] = sample.x
decoded_sample[:, 1] = sample.y
decoded_sample[:, 2] = sample.z
decoded_sample[:, 3] = sample.gt
return decoded_sample
#-------------------------------------------------------------------------
def divide_by_windows(decoded_sample, window_len=60):
"""
Parameters:
window_len - length of each window in seconds, int
Returns:
X - accelerometer data, ndarray of shape (n_windows, window_len, 3)
y - polysomnography data, ndarray of shape (n_windows, )
"""
window_len *= 100
n_windows = decoded_sample.shape[0] // window_len
X = np.zeros((n_windows, window_len, 3))
y = np.zeros(n_windows)
for i in range(n_windows):
X[i] = decoded_sample[window_len * i: window_len * i + window_len, 0: 3]
ones = np.count_nonzero(decoded_sample[window_len*i: window_len*i+window_len, 3])
if ones >= (window_len / 2):
y[i] = 1
else:
y[i] = 0
return X, y
#-------------------------------------------------------------------------
def get_one_patient_data(data_path, patient, window_len=60):
"""
Returns:
X, y - for one patient
"""
sample = np.load("%s/p%s.npy"%(data_path, patient)).view(np.recarray)
sample = change_labels(sample)
sample = decoder(sample)
X, y = divide_by_windows(sample, window_len)
return X, y
#-------------------------------------------------------------------------
def get_data_for_model(data_path, patient_list, window_len=60):
"""
Returns:
X, y - for all patient list, ndarray of shape (n_records, n_features, n_channels=3)
"""
X_all_data = []
y_all_data = []
for patient in patient_list:
X, y = get_one_patient_data(data_path, patient, window_len)
X_all_data.append(X)
y_all_data.append(y)
X_all_data = np.concatenate(X_all_data, axis=0)
y_all_data = np.concatenate(y_all_data, axis=0)
return X_all_data, y_all_data
#-------------------------------------------------------------------------
def get_dawnsampled_data(data_path, patient_list, window_len=60, dawnsample="pca", n_components=10, n_windows=10):
"""
Parameters:
dawnsample - one of "pca", "mean", "max", "mode", "simple", "statistic", "ess" or None - determines the type of data reduction
Returns:
X, y - reduced data for all patient list and combine several windows data, ndarray of shape (n_records, n_components * n_windows, n_channels=3)
"""
X_all_data = []
y_all_data = []
for patient in patient_list:
X, y = get_one_patient_data(data_path, patient, window_len)
if dawnsample.lower() == "pca":
X = reduce_data_pca(X, n_components=n_components)
elif dawnsample.lower() == "mean":
X = reduce_data_mean(X, n_components=n_components)
elif dawnsample.lower() == "max":
X = reduce_data_max(X, n_components=n_components)
elif dawnsample.lower() == "mode":
X = reduce_data_mode(X, n_components=n_components)
elif dawnsample.lower() == "simple":
X = reduce_data_simple(X, n_components=n_components)
elif dawnsample.lower() == "statistic":
X = reduce_data_statistics(X, n_components=n_components)
elif dawnsample.lower() == "ess":
X = reduce_data_ess(X, n_components=n_components)
X_new = np.zeros((X.shape[0] - n_windows, X.shape[1] * (n_windows + 1), X.shape[2]))
for i in range(0, X.shape[0] - n_windows):
X_buff = X[i]
for j in range(1, n_windows + 1):
X_buff = np.concatenate([X_buff, X[i+j]], axis=0)
X_new[i] = X_buff
if n_windows != 0:
y = y[(n_windows//2): -(n_windows//2)]
X_all_data.append(X_new)
y_all_data.append(y)
#np.save(("X_p%s.npy"%(patient)), X_new)
#np.save(("y_p%s.npy"%(patient)), y)
X_all_data = np.concatenate(X_all_data, axis=0)
y_all_data = np.concatenate(y_all_data, axis=0)
return X_all_data, y_all_data
def reduce_data_pca(X, n_components=300):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_features, n_channels=3)
"""
pca1 = PCA(n_components)
pca2 = PCA(n_components)
pca3 = PCA(n_components)
pca1.fit(X[:, :, 0])
pca2.fit(X[:, :, 1])
pca3.fit(X[:, :, 2])
X1 = pca1.transform(X[:, :, 0])
X2 = pca2.transform(X[:, :, 1])
X3 = pca3.transform(X[:, :, 2])
X_reduced = np.concatenate([X1, X2, X3], axis=1).reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_max(X, n_components=600):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=3)
"""
X_reduced = np.zeros((X.shape[0], n_components, 3))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, :] = np.amax(X[:, i * window_len: (i + 1) * window_len, :], axis=1)
X_reduced = X_reduced.reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_mean(X, n_components=600):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=3)
"""
X_reduced = np.zeros((X.shape[0], n_components, 3))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, :] = np.mean(X[:, i * window_len: (i + 1) * window_len, :], axis=1)
X_reduced = X_reduced.reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_mode(X, n_components=600):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=3)
"""
from scipy.stats import mode
X_reduced = np.zeros((X.shape[0], n_components, 3))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, :] = mode(X[:, i * window_len: (i + 1) * window_len, :], axis=1)
X_reduced = X_reduced.reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_simple(X, n_components=600):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=3)
"""
X_reduced = np.zeros((X.shape[0], n_components, 3))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, :] = X[:, i * window_len, :]
X_reduced = X_reduced.reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_statistics(X, n_components=600):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=3)
"""
X_reduced = np.zeros((X.shape[0], n_components, 3))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, :] = np.std(X[:, i * window_len: (i + 1) * window_len, :], axis=1)
X_reduced = X_reduced.reshape(X.shape[0], n_components, 3)
return X_reduced
def reduce_data_ess(X, n_components=1):
"""
Parameters:
X - ndarray of shape (n_samples, n_features)
Returns:
X, y - reduced data, ndarray of shape (n_records, n_components, n_channels=1)
"""
X_reduced = np.zeros((X.shape[0], n_components, 1))
window_len = X.shape[1] // n_components
for i in range(n_components):
X_reduced[:, i, 0] = np.std(X[:, i * window_len: (i + 1) * window_len, 2], axis=1)
X_reduced = X_reduced.reshape(X.shape[0], n_components, 1)
return X_reduced
%%time
X_train, y_train = get_dawnsampled_data(data_path, train_patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
X_valid, y_valid = get_dawnsampled_data(data_path, valid_patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
X_test, y_test = get_dawnsampled_data(data_path, test_patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
print(X_train.shape)
print(y_train.shape)
print(X_valid.shape)
print(X_test.shape)
###Output
(987195, 1, 1)
(987195,)
(230140, 1, 1)
(219281, 1, 1)
###Markdown
ESS algorithm
###Code
def compare_std_with_threshold(X_std, std_threshold=6):
X = np.zeros(X_std.shape)
X[X_std > std_threshold] = 1
return np.squeeze(X)
def ess_divide_res_for_windows(y, window_len=60):
"""
1 - awake, 0 - sleep
Parameters:
window_len - int, in seconds - for comparison with the same windows in other algorithms
"""
n_windows = y.shape[0] // window_len
y_new = np.zeros(n_windows)
for i in range(n_windows):
ones = np.count_nonzero(y[window_len * i: window_len * i + window_len])
if ones >= (window_len / 2):
y_new[i] = 1
else:
y_new[i] = 0
return np.squeeze(y_new)
def ess_predict(X, std_threshold=6, interval=600, window_len=1):
"""
Parameters:
window_len - int, in seconds - for comparison with the same windows in other algorithms
"""
X = compare_std_with_threshold(X, std_threshold=std_threshold)
count = 0
y = np.ones(X.shape[0])
for i in range(X.shape[0]):
if X[i] == 0:
count += 1
if count >= interval:
y[i - interval + 1: i + 1] = 0
else:
count = 0
if window_len > 1:
y = ess_divide_res_for_windows(y, window_len=window_len)
return y
y_train = ess_divide_res_for_windows(y_train, window_len=60)
y_valid = ess_divide_res_for_windows(y_valid, window_len=60)
y_test = ess_divide_res_for_windows(y_test, window_len=60)
print(y_train.shape)
print(y_valid.shape)
print(y_test.shape)
y_predict = ess_predict(X_test, window_len=60)
y_predict.shape
from sklearn import metrics
print("\nTrain set result: ")
print(metrics.classification_report(y_test, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_test, y_predict))
accuracy = metrics.accuracy_score(y_test, y_predict)
print("\nAccuracy on train set: ", accuracy)
y_predict = ess_predict(X_train, window_len=60)
print("\nTrain set result: ")
print(metrics.classification_report(y_train, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_train, y_predict))
accuracy = metrics.accuracy_score(y_train, y_predict)
print("\nAccuracy on train set: ", accuracy)
y_predict = ess_predict(X_valid, window_len=60)
print("\nValid set result: ")
print(metrics.classification_report(y_valid, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_valid, y_predict))
accuracy = metrics.accuracy_score(y_valid, y_predict)
print("\nAccuracy on valid set: ", accuracy)
y_predict = ess_predict(X_test, window_len=60)
print("\nTest set result: ")
print(metrics.classification_report(y_test, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_test, y_predict))
accuracy = metrics.accuracy_score(y_test, y_predict)
print("\nAccuracy on test set: ", accuracy)
X_all, y_all = get_dawnsampled_data(data_path, patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
print(X_all.shape)
print(y_all.shape)
y_predict = ess_predict(X_all, window_len=60)
y_all = ess_divide_res_for_windows(y_all, window_len=60)
print(y_all.shape)
print(y_predict.shape)
print("\nAll set result: ")
print(metrics.classification_report(y_all, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_all, y_predict))
accuracy = metrics.accuracy_score(y_all, y_predict)
print("\nAccuracy on all set: ", accuracy)
X_all, y_all = get_dawnsampled_data(data_path, ["033"],
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
print(X_all.shape)
print(y_all.shape)
y_predict = ess_predict(X_all, window_len=60)
y_all = ess_divide_res_for_windows(y_all, window_len=60)
print(y_all.shape)
print(y_predict.shape)
print("\nAll set result: ")
print(metrics.classification_report(y_all, y_predict))
print("Confussion matrix: \n", metrics.confusion_matrix(y_all, y_predict))
accuracy = metrics.accuracy_score(y_all, y_predict)
print("\nAccuracy on all set: ", accuracy)
import seaborn as sns
import matplotlib.pyplot as plt
plt.rc('figure', figsize=(20, 5), dpi=90, facecolor='w', edgecolor='k')
ax1 = plt.subplot()
ax1.plot(y_predict)
ax1.set_ylim([0, 1.1])
plt.rc('figure', figsize=(20, 5), dpi=90, facecolor='w', edgecolor='k')
ax1 = plt.subplot()
ax1.plot(y_all)
ax1.set_ylim([0, 1.1])
###Output
_____no_output_____
###Markdown
Calculate median accuracy:
###Code
accuracy_list = []
for patient in patient_list:
X, y = get_dawnsampled_data(data_path, [patient],
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
y_predict = ess_predict(X, window_len=1)
y = ess_divide_res_for_windows(y, window_len=1)
accuracy = metrics.accuracy_score(y, y_predict)
accuracy_list.append(accuracy)
np.median(accuracy_list)
np.mean(accuracy_list)
max(accuracy_list)
len(accuracy_list)
kf = KFold(n_splits=5, random_state=5, shuffle=True) # Define the split - into 3 folds
kf.get_n_splits(patient_list) # returns the number of splitting iterations in the cross-validator
n_others_windows = 30
%%time
accuracy_list = []
f1_score_list = []
for train_index, test_index in kf.split(patient_list):
train_patient_list = [patient_list[i] for i in train_index]
test_patient_list = [patient_list[i] for i in test_index]
X_train, y_train = get_dawnsampled_data(data_path, train_patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
X_test, y_test = get_dawnsampled_data(data_path, test_patient_list,
window_len=1, dawnsample="ess",
n_components=1, n_windows=0)
y_predict = ess_predict(X_train, window_len=60)
y_train = ess_divide_res_for_windows(y_train, window_len=60)
print("/Train set results:")
accuracy_train = metrics.accuracy_score(y_train, y_predict)
f1_train = metrics.f1_score(y_train, y_predict)
print("Accuracy on train set: ", accuracy_train)
print("F1-score on train set: ", f1_train)
y_predict = ess_predict(X_test, window_len=60)
y_test = ess_divide_res_for_windows(y_test, window_len=60)
print("/Test set results:")
    accuracy_test = metrics.accuracy_score(y_test, y_predict)
    f1_test = metrics.f1_score(y_test, y_predict)
    accuracy_list.append(accuracy_test)
    f1_score_list.append(f1_test)
    print("Accuracy on test set: ", accuracy_test)
print("F1-score on test set: ", f1_test)
print(metrics.classification_report(y_test, y_predict, target_names=["sleep", "awake"]))
print("Confussion matrix: \n", metrics.confusion_matrix(y_test, y_predict))
print("\n-------------------------------------------------------")
print("\nMean accuracy =", np.mean(accuracy_list))
print("\nMean f1-score =", np.mean(f1_score_list))
###Output
/Train set results:
Accuracy on train set: 0.731198808637379
F1-score on train set: 0.6560500884714849
/Test set results:
Accuracy on test set: 0.7309417040358744
F1-score on test set: 0.6312308055752421
precision recall f1-score support
sleep 0.64 0.88 0.74 2545
awake 0.82 0.51 0.63 2596
avg / total 0.73 0.70 0.69 5141
Confusion matrix: 
[[2244 301]
[1260 1336]]
-------------------------------------------------------
/Train set results:
Accuracy on train set: 0.7222222222222222
F1-score on train set: 0.647145144076841
/Test set results:
Accuracy on test set: 0.7309417040358744
F1-score on test set: 0.664289353031075
precision recall f1-score support
sleep 0.68 0.91 0.78 2503
awake 0.85 0.54 0.66 2395
avg / total 0.76 0.73 0.72 4898
Confusion matrix: 
[[2276 227]
[1091 1304]]
-------------------------------------------------------
/Train set results:
Accuracy on train set: 0.7116985845129059
F1-score on train set: 0.6406331084587442
/Test set results:
Accuracy on test set: 0.7309417040358744
F1-score on test set: 0.6939007092198581
precision recall f1-score support
sleep 0.74 0.92 0.82 2631
awake 0.86 0.58 0.69 2096
avg / total 0.79 0.77 0.76 4727
Confusion matrix: 
[[2425 206]
[ 873 1223]]
-------------------------------------------------------
/Train set results:
Accuracy on train set: 0.7323035314921724
F1-score on train set: 0.6585737976782753
/Test set results:
Accuracy on test set: 0.7309417040358744
F1-score on test set: 0.6190228690228691
precision recall f1-score support
sleep 0.67 0.82 0.74 2519
awake 0.72 0.54 0.62 2197
avg / total 0.69 0.69 0.68 4716
Confusion matrix: 
[[2059 460]
[1006 1191]]
-------------------------------------------------------
/Train set results:
Accuracy on train set: 0.7210388543858749
F1-score on train set: 0.6498292635783777
/Test set results:
Accuracy on test set: 0.7309417040358744
F1-score on test set: 0.6495327102803738
precision recall f1-score support
sleep 0.71 0.87 0.78 2466
awake 0.78 0.56 0.65 1994
avg / total 0.74 0.73 0.72 4460
Confusion matrix: 
[[2148 318]
[ 882 1112]]
-------------------------------------------------------
Mean accuracy = 0.7309417040358744
Mean f1-score = 0.6515952894258836
Wall time: 1min 36s
|
site/en-snapshot/federated/tutorials/simulations.ipynb | ###Markdown
High-performance simulations with TFFThis tutorial will describe how to set up high-performance simulations with TFF in a variety of common scenarios.TODO(b/134543154): Populate the content, some of the things to cover here:- using GPUs in a single-machine setup,- multi-machine setup on GCP/GKE, with and without TPUs,- interfacing MapReduce-like backends,- current limitations and when/how they will be relaxed. Before we beginFirst, make sure your notebook is connected to a backend that has the relevant components (including gRPC dependencies for multi-machine scenarios) compiled. Now, let's start by loading the MNIST example from the TFF website, and declaring the Python function that will run a small experiment loop over a group of 10 clients.
###Code
#@test {"skip": true}
!pip install --quiet --upgrade tensorflow-federated-nightly
!pip install --quiet --upgrade nest-asyncio
import nest_asyncio
nest_asyncio.apply()
import collections
import time
import tensorflow as tf
import tensorflow_federated as tff
source, _ = tff.simulation.datasets.emnist.load_data()
def map_fn(example):
return collections.OrderedDict(
x=tf.reshape(example['pixels'], [-1, 784]), y=example['label'])
def client_data(n):
ds = source.create_tf_dataset_for_client(source.client_ids[n])
return ds.repeat(10).shuffle(500).batch(20).map(map_fn)
train_data = [client_data(n) for n in range(10)]
element_spec = train_data[0].element_spec
def model_fn():
model = tf.keras.models.Sequential([
tf.keras.layers.Input(shape=(784,)),
tf.keras.layers.Dense(units=10, kernel_initializer='zeros'),
tf.keras.layers.Softmax(),
])
return tff.learning.from_keras_model(
model,
input_spec=element_spec,
loss=tf.keras.losses.SparseCategoricalCrossentropy(),
metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])
trainer = tff.learning.build_federated_averaging_process(
model_fn, client_optimizer_fn=lambda: tf.keras.optimizers.SGD(0.02))
def evaluate(num_rounds=10):
state = trainer.initialize()
for _ in range(num_rounds):
t1 = time.time()
state, metrics = trainer.next(state, train_data)
t2 = time.time()
print('metrics {m}, round time {t:.2f} seconds'.format(
m=metrics, t=t2 - t1))
###Output
_____no_output_____
###Markdown
Single-machine simulationsNow on by default.
###Code
evaluate()
###Output
metrics <sparse_categorical_accuracy=0.13858024775981903,loss=3.0073554515838623>, round time 3.59 seconds
metrics <sparse_categorical_accuracy=0.1796296238899231,loss=2.749046802520752>, round time 2.29 seconds
metrics <sparse_categorical_accuracy=0.21656379103660583,loss=2.514779567718506>, round time 2.33 seconds
metrics <sparse_categorical_accuracy=0.2637860178947449,loss=2.312587261199951>, round time 2.06 seconds
metrics <sparse_categorical_accuracy=0.3334362208843231,loss=2.068122386932373>, round time 2.00 seconds
metrics <sparse_categorical_accuracy=0.3737654387950897,loss=1.9268712997436523>, round time 2.42 seconds
metrics <sparse_categorical_accuracy=0.4296296238899231,loss=1.7216310501098633>, round time 2.20 seconds
metrics <sparse_categorical_accuracy=0.4655349850654602,loss=1.6489890813827515>, round time 2.18 seconds
metrics <sparse_categorical_accuracy=0.5048353672027588,loss=1.5485210418701172>, round time 2.16 seconds
metrics <sparse_categorical_accuracy=0.5564814805984497,loss=1.4140453338623047>, round time 2.41 seconds
|
tests/test-model.ipynb | ###Markdown
Methods Compressed sparse column matrices
###Code
import numpy as np
import scipy as sp
import scipy.sparse  # make the sp.sparse namespace available
import wiki  # local project module that provides wiki.Net (used below)
data = np.array([1, 2, 3, 4, 5, 6])
row = np.array([0, 2, 2, 0, 1, 2])
col = np.array([0, 0, 1, 2, 2, 2])
sp.sparse.csc_matrix((data, (row, col)), shape=(3, 3)).toarray()
# topics = ['anatomy', 'biochemistry', 'cognitive science', 'evolutionary biology',
# 'genetics', 'immunology', 'molecular biology', 'chemistry', 'biophysics',
# 'energy', 'optics', 'earth science', 'geology', 'meteorology']
topics = ['earth science']
path_saved = '/Users/harangju/Developer/data/wiki/graphs/dated/'
networks = {}
for topic in topics:
print(topic, end=' ')
networks[topic] = wiki.Net()
networks[topic].load_graph(path_saved + topic + '.pickle')
graph = networks[topic].graph
len(networks[topic].graph.nodes)
v = networks[topic].graph.graph['tfidf']
v
v.sum()
v[:,0].indices[:5]
v[4,0]
networks[topic].graph.name
networks[topic].graph.nodes['Biology']
core = [n for n in networks[topic].graph.nodes if networks[topic].graph.nodes[n]['core_rb']>.9]
core
[(i,n) for i,n in enumerate(networks[topic].graph.nodes) if networks[topic].graph.nodes[n]['year']<-1800]
vi = v[:,9]
vi
###Output
_____no_output_____
###Markdown
CSC & networkx operations
###Code
graph = networks[topic].graph
core = [n for n in networks[topic].graph.nodes if networks[topic].graph.nodes[n]['year']<-2000]
subgraph = graph.subgraph(core).copy()
import scipy.sparse as ss
tfidf = ss.hstack([v[:,list(graph.nodes).index(n)] for n in subgraph.nodes])
tfidf
subgraph.nodes
subgraph.add_node('Hello')
subgraph.nodes
###Output
_____no_output_____
###Markdown
AlgorithmInitialize with a core set of nodes.\For each year,\initialize a "baby" node for each existing node that doesn't already have a baby node,\mutate the tf-idf vector of each "baby" node (including the name),\and if the "baby" node's similarity to its parent falls below a threshold drawn from the distribution of similarities (to what?), the "baby" node is born.
###Code
import matplotlib.pyplot as plt
import pandas as pd
import networkx as nx
import sklearn.metrics.pairwise as smp
import scipy.sparse as ss
from scipy.stats import norm
import seaborn as sns
###Output
_____no_output_____
###Markdown
Mutation Prior: power law distributions of weights
###Code
graph = networks[topic].graph
tfidf = graph.graph['tfidf'].copy()
import powerlaw
fit = powerlaw.Fit(tfidf[:,0].data, xmax=np.max(tfidf[:,0].data))
fit.plot_pdf()
fit.power_law.plot_pdf();
plt.title(f"xmin={fit.xmin:.1e}, α={fit.alpha:.1f}");
max_val = np.max(tfidf[:,0].data)
np.max(np.vectorize(lambda x: max_val if x>max_val else x)(fit.power_law.generate_random(10000)))
###Output
_____no_output_____
###Markdown
Prior: new words / year between neighbors[gist](https://gist.github.com/ptocca/e18a9e4e35930c0958fdaa62958bdf82)
###Code
def year_diffs(graph):
return [graph.nodes[node]['year'] - graph.nodes[neighbor]['year']
for node in graph.nodes
for neighbor in list(graph.successors(node))]
yd = year_diffs(graph)
sns.distplot(yd)
plt.title(topic)
plt.xlabel('year difference');
%reload_ext cython
%%cython -f
import numpy as np
cimport numpy as np
from cython cimport floating,boundscheck,wraparound
from cython.parallel import prange
from libc.math cimport fabs
np.import_array()
@boundscheck(False) # Deactivate bounds checking
@wraparound(False)
def cython_manhattan(floating[::1] X_data, int[:] X_indices, int[:] X_indptr,
floating[::1] Y_data, int[:] Y_indices, int[:] Y_indptr,
double[:, ::1] D):
"""Pairwise L1 distances for CSR matrices.
Usage:
>>> D = np.zeros(X.shape[0], Y.shape[0])
>>> cython_manhattan(X.data, X.indices, X.indptr,
... Y.data, Y.indices, Y.indptr,
... D)
"""
cdef np.npy_intp px, py, i, j, ix, iy
cdef double d = 0.0
cdef int m = D.shape[0]
cdef int n = D.shape[1]
with nogil:
for px in prange(m):
for py in range(n):
i = X_indptr[px]
j = Y_indptr[py]
d = 0.0
while i < X_indptr[px+1] and j < Y_indptr[py+1]:
if i < X_indptr[px+1]: ix = X_indices[i]
if j < Y_indptr[py+1]: iy = Y_indices[j]
if ix==iy:
d = d+fabs(X_data[i]-Y_data[j])
i = i+1
j = j+1
elif ix<iy:
d = d+fabs(X_data[i])
i = i+1
else:
d = d+fabs(Y_data[j])
j = j+1
if i== X_indptr[px+1]:
while j < Y_indptr[py+1]:
iy = Y_indices[j]
d = d+fabs(Y_data[j])
j = j+1
else:
while i < X_indptr[px+1]:
ix = X_indices[i]
d = d+fabs(X_data[i])
i = i+1
D[px,py] = d
import sklearn.preprocessing as skp
import sklearn.metrics.pairwise as smp
from scipy.sparse import csr_matrix,random
from sklearn.metrics.pairwise import check_pairwise_arrays
def sparse_manhattan(X,Y=None):
X, Y = check_pairwise_arrays(X, Y)
X = csr_matrix(X, copy=False)
Y = csr_matrix(Y, copy=False)
res = np.empty(shape=(X.shape[0],Y.shape[0]))
cython_manhattan(X.data,X.indices,X.indptr,
Y.data,Y.indices,Y.indptr,
res)
return res
def word_diffs(graph, tfidf):
dists = sparse_manhattan(X=skp.binarize(tfidf).transpose())
nodes = list(graph.nodes)
return [dists[nodes.index(node), nodes.index(neighbor)]
for node in nodes
for neighbor in list(graph.successors(node))]
plt.figure(figsize=(14,5))
plt.subplot(121)
wd = word_diffs(graph, tfidf)
sns.scatterplot(x=np.abs(yd), y=wd)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), wd)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e}")
plt.xlabel('year')
plt.ylabel('manhattan distance');
plt.subplot(122)
sns.distplot(wd)
mu, std = sp.stats.norm.fit(wd)
x = np.linspace(min(wd), max(wd), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.xlabel('manhattan distance')
plt.ylabel('probability distribution');
###Output
_____no_output_____
###Markdown
Prior: similarity / year between neighbors
###Code
def neighbor_similarity(graph, tfidf):
nodes = list(graph.nodes)
return [smp.cosine_similarity(tfidf[:,nodes.index(node)].transpose(),
tfidf[:,nodes.index(neighbor)].transpose())[0,0]
for node in nodes
for neighbor in list(graph.successors(node))]
neighbors = neighbor_similarity(graph, tfidf)
plt.figure(figsize=(6,4))
sns.scatterplot(x=np.abs(yd), y=neighbors)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), neighbors)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e}")
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
###Output
_____no_output_____
###Markdown
Prior: weight distributions of nodes
###Code
import numpy as np
import matplotlib.pyplot as plt
def plot_distribution(data):
bins = np.logspace(np.log10(min(data)), np.log10(max(data)), 30)
hist, edges = np.histogram(data, bins=bins)
# hist_norm = hist/(bins[1:] - bins[:-1])
sns.scatterplot(bins[:-1], hist/len(data))
plt.yscale('log')
plt.xscale('log')
plt.xlim(bins[0]/2, bins[-1]*2)
plt.ylim(min(hist[hist>0])/len(data)/2, 1)
plt.xlabel('x')
plt.ylabel('P(x)')
plt.figure(figsize=(14,6))
plt.subplot(121)
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': tfidf.indices,
'weight': tfidf.data}))
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': tfidf.indices,
'weight': tfidf.data})\
.groupby('index').mean()\
.reset_index())
plt.ylim([-.2,1.2]);
plt.subplot(122)
plot_distribution(tfidf.data)
###Output
_____no_output_____
###Markdown
Prior: year distribution
###Code
sns.distplot([graph.nodes[node]['year'] for node in graph.nodes], rug=True)
plt.xlabel('year');
###Output
_____no_output_____
###Markdown
Method
###Code
import numpy.random as npr
def mutate(x, rvs, point=(0,0), insert=(0,0,None), delete=(0,0)):
""" Mutates vector ``x`` with point mutations,
insertions, and deletions. Insertions and point
mutations draw from a random process ``rvs``.
Parameters
----------
x: spipy.sparse.csc_matrix
rvs: lambda (n)-> float
returns ``n`` random weights in [0,1]
point: tuple (int n, float p)
n = number of elements to insert
p = probability of insertion for each trial
insert: tuple (n, p, iterable s)
s = set of elements from which to select
if None, select from all zero elements
delete: tuple (n, p)
"""
data = x.data
idx = x.indices
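    # point mutations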
n_point = npr.binomial(point[0], point[1])
i_point = npr.choice(x.size, size=n_point, replace=False)
data[i_point] = rvs(n_point)
# insertion
n_insert = npr.binomial(insert[0], insert[1])
for _ in range(n_insert):
while True:
insert_idx = npr.choice(insert[2]) if insert[2]\
else npr.choice(x.shape[0])
if insert_idx not in idx: break
idx = np.append(idx, insert_idx)
data = np.append(data, rvs(1))
# deletion
n_delete = npr.binomial(delete[0], delete[1])
i_delete = npr.choice(idx.size, size=n_delete, replace=False)
idx = np.delete(idx, i_delete)
data = np.delete(data, i_delete)
y = ss.csc_matrix((data, (idx, np.zeros(idx.shape, dtype=int))),
shape=x.shape)
return y
###Output
_____no_output_____
###Markdown
Test
###Code
x = tfidf[:,0].copy()
y = tfidf[:,0].copy()
T = 1000
sim = np.zeros(T)
size = np.zeros(T)
mag = np.zeros(T)
for i in range(sim.size):
sim[i] = smp.cosine_similarity(x.transpose(),y.transpose())[0,0]
size[i] = y.size
mag[i] = np.sum(y.data)
y = mutate(y, lambda n: fit.power_law.generate_random(n),
point=(1,.5), insert=(1,.3,None), delete=(1,.3))
plt.figure(figsize=(16,4))
ax = plt.subplot(121)
sns.lineplot(x=range(sim.size), y=sim)
plt.ylabel('similarity')
ax2 = ax.twinx()
sns.lineplot(x=range(sim.size), y=mag, ax=ax2, color='darkorange')
plt.ylabel('magnitude')
plt.title(graph.name)
plt.xlabel('years')
plt.subplot(122)
sns.lineplot(x=range(sim.size), y=size)
plt.title(graph.name)
plt.ylabel('size')
plt.xlabel('years')
plt.figure(figsize=(16,4))
plt.subplot(121)
plot_distribution(graph.graph['tfidf'][:,1].data)
plt.xlabel('tf-idf values')
plot_distribution(y.data)
plt.title(graph.name)
plt.legend(['before mutation', 'after mutation'])
plt.xlabel('tf-idf values')
plt.subplot(122)
plot_distribution(x.data)
plot_distribution(y.data)
plt.title(graph.name)
plt.yscale('linear')
plt.xscale('linear')
plt.ylim([0,.2])
plt.xlim([0,.1])
plt.legend(['before','after']);
###Output
_____no_output_____
###Markdown
Create new nodes Prior: distribution of similarities
###Code
def non_neighbor_similarity(graph, tfidf):
nodes = list(graph.nodes)
sim = [smp.cosine_similarity(tfidf[:,nodes.index(n1)].transpose(),
tfidf[:,nodes.index(n2)].transpose())[0,0]
for n1 in graph.nodes
for n2 in graph.nodes
if (n2 is not n1) and (n2 not in list(graph.neighbors(n1)))]
return sim
non_neighbors = non_neighbor_similarity(graph, tfidf)
plt.figure()
sns.distplot(neighbors)
x = np.linspace(min(neighbors), max(neighbors), 100)
mu, std = sp.stats.norm.fit(neighbors)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
sns.distplot(non_neighbors)
plt.title(topic)
plt.legend([f"fit-neighbors (m={mu:.2f}; s={std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity');
###Output
_____no_output_____
###Markdown
MethodJust draw from normal pdf Test
###Code
npr.normal(loc=mu, scale=std, size=4)
###Output
_____no_output_____
###Markdown
CrossoverWhat prior should I use? It needs to be more similar than neighbors. Some kind of a t-test? Prior: maybe just 3 std above mean?
###Code
mu + 3*std
###Output
_____no_output_____
###Markdown
Methodaverage? or combine elements?
###Code
def crossover(v1, v2):
""" Crosses two vectors by combining half of one
and half of the other.
Parameters
----------
v1, v2: scipy.sparse.matrix
Returns
-------
v3: scipy.sparse.matrix
"""
idx1 = npr.choice(v1.size, size=int(v1.size/2))
idx2 = npr.choice(v2.size, size=int(v2.size/2))
data = np.array([v1.data[i] for i in idx1] +
[v2.data[i] for i in idx2])
idx = np.array([v1.indices[i] for i in idx1] +
[v2.indices[i] for i in idx2])
v3 = ss.csc_matrix((data, (idx, np.zeros(idx.shape,
dtype=int))),
shape=v1.shape)
return v3
def crossover_seeds(seeds, graph, vectors, threshold=.7):
""" Crosses ``seeds`` if similarity between two seeds
is greater than ``threshold``. Then, it sets one of the
seeds to the item in ``vectors``.
Parameters
----------
    seeds: dict {str: [scipy.sparse.csc_matrix]}
    graph: networkx.DiGraph
    vectors: scipy.sparse.csc_matrix
    threshold: float
"""
nodes = list(graph.nodes)
for i, node_i in enumerate(seeds.keys()):
for j, node_j in enumerate(seeds.keys()):
if i==j: continue
for k, seed_vec_k in enumerate(seeds[node_i]):
for l, seed_vec_l in enumerate(seeds[node_j]):
similarity = smp.cosine_similarity(seed_vec_k.transpose(),
seed_vec_l.transpose())[0,0]
if similarity > threshold:
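                        # with p=0.5, node_i keeps the crossed-over seed and node_j's
                        # seed is reset to node_j's own vector; otherwise vice versa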
if bool(npr.rand() < 0.5):
seeds[node_i][k] = crossover(seed_vec_k, seed_vec_l)
seeds[node_j][l] = vectors[:,nodes.index(node_j)]
else:
seeds[node_i][k] = vectors[:,nodes.index(node_i)]
seeds[node_j][l] = crossover(seed_vec_k, seed_vec_l)
###Output
_____no_output_____
###Markdown
Test
###Code
tfidf = graph.graph['tfidf'].copy()
nodes = list(graph.nodes)[:20]
seeds = {node: [tfidf[:,list(graph.nodes).index(node)],
tfidf[:,list(graph.nodes).index(node)]]
for node in nodes}
print(nodes, '\n')
vectors = ss.hstack([v for node in nodes for v in seeds[node]])
print(np.triu(smp.cosine_similarity(vectors.transpose())))
crossover_seeds(seeds, graph, tfidf, threshold=0.3)
print('\n----------------------------------------------------------\n')
vectors = ss.hstack([v for node in nodes if node in seeds.keys() for v in seeds[node]])
print(np.triu(smp.cosine_similarity(vectors.transpose())))
###Output
_____no_output_____
###Markdown
Method
###Code
def consume_seeds(seeds, vectors, threshold=0.9):
""" Consumes a seed in ``seeds`` if similarity
between a seed and an existing vector in ``vectors``
is greater than ``threshold``.
Parameters
----------
seeds: dict {string: scipy.sparse.csc_matrix}
vectors: scipy.sparse.csc_matrix
threshold: float
"""
for seed, vec in list(seeds.items()):
for i in range(vectors.shape[1]):
s = smp.cosine_similarity(vec.transpose(), vectors[:,i].transpose())
if s[0,0] > threshold:
seeds.pop(seed, None)
###Output
_____no_output_____
###Markdown
Test
###Code
tfidf = graph.graph['tfidf'].copy()
nodes = list(graph.nodes)[:4]
seeds = {node: tfidf[:,list(graph.nodes).index(node)]
for node in nodes}
T = 100
seeds
for _ in range(10):
seeds['Hydrosphere'] = mutate(seeds['Hydrosphere'],
lambda n: fit.power_law.generate_random(n),
point=(1,1), insert=(5,.5,None), delete=(5,.5))
consume_seeds(seeds, tfidf[:,:4])
seeds
###Output
_____no_output_____
###Markdown
Create nodes Get words from tf-idf vector
###Code
import pickle
import gensim.utils as gu
path_models = '/Users/harangju/Developer/data/wiki/models/'
model = gu.SaveLoad.load(path_models + 'tfidf.model')
dct = pickle.load(open(path_models + 'dict.model','rb'))
words = [dct[i] for i in tfidf[:,0].indices]
words[:5]
###Output
_____no_output_____
###Markdown
Prior: word weight vs title
###Code
idx = np.argsort(tfidf[:,0].data)
idx[-5:], tfidf[:,0].data[idx[-10:]]
def find_top_words(x, dct, top_n=5, stoplist=set('for a of the and to in'.split())):
"""
Parameters
----------
x: scipy.sparse.csc_matrix
dct: gensim.corpora.dictionary
top_n: int
Returns
-------
    words: list of str
        top words (stopwords removed)
    idx: list of int
        dictionary indices of the top words
"""
top_idx = np.argsort(x.data)[-top_n:]
idx = [x.indices[i] for i in top_idx if dct[x.indices[i]] not in stoplist]
words = [dct[i] for i in idx]
return words, idx
stoplist=set('for a of the and to in'.split())
nodes = []
words1 = []
words2 = []
for i in range(tfidf.shape[1]):
if tfidf[:,i].data.size == 0:
print(list(graph.nodes)[i], tfidf[:,i].data)
continue
nodes += [list(graph.nodes)[i]]
idx_sorted = np.argsort(tfidf[:,i].data)
words1 += [[dct[tfidf[:,i].indices[idx]]
for idx in idx_sorted[-5:]
if dct[tfidf[:,i].indices[idx]] not in stoplist]]
top_words, idx = find_top_words(tfidf[:,i], dct, top_n=5)
words2 += [top_words]
pd.DataFrame(data={'Node': nodes, 'Top words 1': words1, 'Top words 2': words2})
###Output
_____no_output_____
###Markdown
MethodIf article has any two of the top five words, connect.```for new_article in new_articles: for article in articles: if any two of the top five words are in new_article: connect new_article to article```
###Code
def connect(seed_vector, graph, vectors, dct, top_words=5, match_n=2):
"""
Parameters
----------
seed_vector: scipy.sparse.csc_matrix
graph: networkx.DiGraph (not optional)
vectors: scipy.sparse.csc_matrix (not optional)
dct: gensim.corpora.dictionary (not optional)
top_words: int (default=5)
match_n: int
        how many of one article's top words must appear in the other article's word vector for an edge to be added in that direction
"""
seed_top_words, seed_top_idx = find_top_words(seed_vector, dct)
seed_name = ' '.join(seed_top_words)
nodes = list(graph.nodes)
graph.add_node(seed_name)
for i, node in enumerate(nodes):
node_vector = vectors[:,i]
node_top_words, node_top_idx = find_top_words(node_vector, dct)
if len(set(seed_top_idx).intersection(set(node_vector.indices))) >= match_n:
graph.add_edge(node, seed_name)
if len(set(node_top_idx).intersection(set(seed_vector.indices))) >= match_n:
graph.add_edge(seed_name, node)
###Output
_____no_output_____
###Markdown
Test
###Code
graph = networks[topic].graph
core_nodes = [n for n in graph.nodes if graph.nodes[n]['year'] < -2000]
subgraph = graph.subgraph(core_nodes).copy()
subgraph.graph.clear()
subgraph.name = graph.name + '-cutting'
print(f"Core nodes: {core_nodes} in '{subgraph.name}'")
test_graph = subgraph.copy()
test_vector = ss.hstack([tfidf[:,list(graph.nodes).index(n)] for n in test_graph.nodes])
seed = 'Meteorology'
seed_vector = tfidf[:,list(graph.nodes).index(seed)]
print('Nodes:', test_graph.nodes)
print('Edges:', test_graph.edges, '\n')
print(f"Seed: {seed}\n")
connect(seed_vector, test_graph, test_vector, dct, match_n=3)
print('Nodes:', test_graph.nodes)
print('Edges:', test_graph.edges)
###Output
_____no_output_____
###Markdown
Test with all nodes without node names Evolve Priors
###Code
# import powerlaw
# fit = powerlaw.Fit(graph.graph['tfidf'].data)
fit.plot_pdf()
fit.power_law.plot_pdf();
plt.title(f"Power law x_min={fit.xmin:.1e}, α={fit.alpha:.1f}");
sns.scatterplot(x=np.abs(yd), y=wd)
slope, intercept, fit_r, p, stderr = sp.stats.linregress(np.abs(yd), wd)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={fit_r:.2f}; p={p:.1e}")
plt.xlabel('Δyear')
plt.ylabel('manhattan distance (# different words)');
neighbors = neighbor_similarity(graph, tfidf)
fit_mu, fit_std = sp.stats.norm.fit(neighbors)
non_neighbors = non_neighbor_similarity(graph, tfidf)
sns.distplot(neighbors)
x = np.linspace(min(neighbors), max(neighbors), 100)
plt.plot(x, sp.stats.norm.pdf(x, fit_mu, fit_std))
sns.distplot(non_neighbors)
plt.title(topic)
plt.legend([f"fit-neighbors (m={fit_mu:.2f}; s={fit_std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity');
fit_mu + 3*fit_std
###Output
_____no_output_____
###Markdown
Method1. Initialize a bag of seeds from a set of nodes.2. For each year, 1. Mutate seeds. For each seed, 1. Change a word with `p_point`. Draw weight from power law prior. 2. Delete a word with `p_delete`. 3. Insert new word with `p_insert`. Draw weight from power law prior. 2. Crossover seeds if `μ+3σ < similarity`. 3. Create new node from seed if `similarity < x` where `x~Norm(θ)`. 1. Connect new node. 2. Initialize new seed.
###Code
import sys
import scipy.sparse as ss
def initialize_seeds(seeds, graph, n_seeds, vectors, thresholds, thresholds_create):
for node in graph.nodes:
if node not in seeds.keys():
seeds[node] = []
thresholds[node] = []
if len(seeds[node]) < n_seeds:
seeds[node].append(vectors[:,list(graph.nodes).index(node)].copy())
thresholds[node].append(thresholds_create(1))
def mutate_seeds(seeds, rvs, point, insert, delete):
for node, vecs in seeds.items():
seeds[node] = [mutate(vec, rvs, point=point, insert=insert, delete=delete)
for vec in vecs]
def create_nodes(seeds, graph, vectors, thresholds, year):
nodes = list(graph.nodes) # graph.nodes changed in ``connect()``
for i, node in enumerate(nodes):
parent_vec = vectors[:,i].transpose()
        # iterate in reverse so popping a seed does not shift the indices of seeds not yet visited
        for j in reversed(range(len(seeds[node]))):
            seed_vec = seeds[node][j]
            sim_to_parent = smp.cosine_similarity(seed_vec.transpose(), parent_vec)
            if sim_to_parent[0,0] < thresholds[node][j]:
                connect(seed_vec, graph, vectors, dct, match_n=3)
                vectors = ss.hstack([vectors, seed_vec])
                seeds[node].pop(j)
                thresholds[node].pop(j)
for node in graph.nodes:
if 'year' not in graph.nodes[node].keys():
graph.nodes[node]['year'] = year
return vectors
def evolve(graph, vectors, year_end, n_seeds, rvs, point, insert, delete,
thresholds_create, threshold_crossover):
""" Evolves a graph based on vector representations
Parameters
----------
graph: networkx.DiGraph
vectors: scipy.sparse.csc_matrix
year_end: int
n_seeds: int
number of seeds per node
rvs: lambda n->float
random values for point mutations & insertions
point, insert, delete: tuple
See ``mutate()``.
thresholds_create: lambda n-> float
thresholds of cosine similarity between parent
for node creation
threshold_crossover: float
threshold of cosine similarity between parent
for crossing over nodes
"""
seeds = {}
thresholds = {}
data = pd.DataFrame()
year_start = max([graph.nodes[n]['year'] for n in graph.nodes])+1
for year in range(year_start, year_end+1):
sys.stdout.write(f"\r{year_start}\t> {year}\t> {year_end}"+\
f"\tn={graph.number_of_nodes()}"+\
f" {list(seeds.keys())}")
sys.stdout.flush()
initialize_seeds(seeds, graph, n_seeds, vectors, thresholds, thresholds_create)
mutate_seeds(seeds, rvs, point=point, insert=insert, delete=delete)
vectors = create_nodes(seeds, graph, vectors, thresholds, year)
crossover_seeds(seeds, graph, vectors, threshold_crossover)
for seed, seed_vecs in seeds.items():
for seed_vec in seed_vecs:
data = data.append({'Year': year,
'Parent': seed,
'Seed vectors': seed_vec},
ignore_index=True)
return vectors, data
###Output
_____no_output_____
###Markdown
Test
###Code
start_year = -500
core_nodes = [n for n in graph.nodes if graph.nodes[n]['year'] < start_year]
subgraph = graph.subgraph(core_nodes).copy()
subgraph.graph.clear()
tfidf = graph.graph['tfidf']
vectors = ss.hstack([tfidf[:,list(graph.nodes).index(n)] for n in core_nodes])
print(f"Topic: '{graph.name}'" +\
f"Core nodes: {core_nodes}" +\
f"Parameters:\tα (power law): {fit.alpha:.2f}\n\t\t" +\
f"p_insert/delete: {fit_r:.2f}/2\n\t\t" +\
f"neighbor_mu, std: {fit_mu:.2f}, {fit_std:.2f}\n\t\t" +\
f"threshold: {fit_mu+3*fit_std:.2f}")
max_val = np.max(tfidf.data)
rvs = lambda n: np.vectorize(lambda x: max_val if x>max_val else x, otypes=[np.float64])\
(fit.power_law.generate_random(n))
vectors, data = evolve(subgraph, vectors,
year_end=2000,
n_seeds=3,
rvs=rvs,
point=(1,2*fit_std),
insert=(1,fit_r/2,list(set(tfidf.indices))),
delete=(1,fit_r/2),
thresholds_create=lambda n: npr.normal(loc=fit_mu+fit_std,
scale=fit_std, size=n),
threshold_crossover=fit_mu+3*fit_std)
print('\n'+repr(vectors))
print(f"Prior: {graph.number_of_nodes()}\n" +\
f"Model: {subgraph.number_of_nodes()}")
data
x = data.merge(pd.DataFrame([[i,j,v] for i,u in data['Seed vectors'].apply(list).iteritems()
for j,v in enumerate(u)], columns=['index','item','vector'])\
.set_index('index'),
left_index=True, right_index=True)\
.drop('Seed vectors', axis=1)\
.reset_index()
x['Parent_indexed'] = x['Parent'] + '_' + x['item'].map(str)
x = x.drop('item', axis=1)
nodes = list(subgraph.nodes)
s = lambda a,b: smp.cosine_similarity(a.transpose(), b.transpose())[0,0]
x['similarity to parent'] = [s(x.iloc[i]['vector'], vectors[:,nodes.index(x.iloc[i]['Parent'])])
for i in range(len(x.index))]
x
plt.figure(figsize=(16,20))
sns.lineplot(x='Year', y='similarity to parent', hue='Parent_indexed', data=x);
###Output
_____no_output_____
###Markdown
Compare priors
###Code
plt.figure(figsize=(16,4))
plt.subplot(121)
sns.distplot(neighbors)
x = np.linspace(min(neighbors), max(neighbors), 100)
mu, std = sp.stats.norm.fit(neighbors)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
sns.distplot(non_neighbors)
plt.title(topic + ' (prior)')
plt.legend([f"fit-neighbors (m={mu:.2f}; s={std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity');
plt.xlim([-.2,1.2])
plt.subplot(122)
neighbors_model = neighbor_similarity(subgraph, vectors)
non_neighbors_model = non_neighbor_similarity(subgraph, vectors)
sns.distplot(neighbors_model)
x = np.linspace(min(neighbors_model), max(neighbors_model), 100)
mu, std = sp.stats.norm.fit(neighbors_model)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
sns.distplot(non_neighbors_model)
plt.title(topic + ' (model)')
plt.legend([f"fit-neighbors (m={mu:.2f}; s={std:.2f})", 'neighbors', 'non-neighbors'])
plt.xlabel('cos similarity')
plt.xlim([-.2,1.2]);
plt.figure(figsize=(16,4))
plt.subplot(121)
sns.distplot([graph.nodes[node]['year'] for node in graph.nodes], rug=True)
plt.xlim([-5000,2100])
plt.title('prior')
plt.ylabel('discoveries')
plt.xlabel('year')
plt.subplot(122)
sns.distplot([subgraph.nodes[node]['year'] for node in subgraph.nodes], rug=True)
plt.xlim([-5000,3100])
plt.title('model')
plt.ylabel('discoveries')
plt.xlabel('year');
plt.figure(figsize=(16,6))
plt.subplot(121)
fit.plot_pdf()
fit.power_law.plot_pdf()
plt.title(f"empirical xmin={fit.xmin:.1e}, α={fit.alpha:.1f}");
plt.subplot(122)
fit_model = powerlaw.Fit(vectors.data)
fit_model.plot_pdf()
fit_model.power_law.plot_pdf()
plt.title(f"model xmin={fit_model.xmin:.1e}, α={fit_model.alpha:.1f}");
plt.figure(figsize=(16,4))
plt.subplot(121)
sns.distplot(yd)
plt.title(topic + ' prior')
plt.xlabel('year difference')
plt.subplot(122)
yd_model = year_diffs(subgraph)
sns.distplot(yd_model)
plt.title(topic + ' model')
plt.xlabel('year difference');
plt.figure(figsize=(16,10))
plt.subplot(221)
sns.scatterplot(x=np.abs(yd), y=wd)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), wd)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (prior)")
plt.xlabel('year')
plt.ylabel('manhattan distance');
plt.subplot(222)
sns.distplot(wd)
mu, std = sp.stats.norm.fit(wd)
x = np.linspace(min(wd), max(wd), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.xlabel('manhattan distance')
plt.ylabel('probability distribution');
plt.title(f"μ={mu:.2}, σ={std:.2} (prior)")
wd_model = word_diffs(subgraph, vectors)
plt.subplot(223)
sns.scatterplot(x=np.abs(yd_model), y=wd_model)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd_model), wd_model)
x = np.linspace(0, max(yd_model), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (model)")
plt.xlabel('year')
plt.ylabel('manhattan distance');
plt.subplot(224)
sns.distplot(wd_model)
mu, std = sp.stats.norm.fit(wd_model)
x = np.linspace(min(wd_model), max(wd_model), 100)
plt.plot(x, sp.stats.norm.pdf(x, mu, std))
plt.xlabel('manhattan distance')
plt.ylabel('probability distribution');
plt.title(f"μ={mu:.2}, σ={std:.2} (model)");
neighbors_model = neighbor_similarity(subgraph, vectors)
plt.figure(figsize=(16,6))
plt.subplot(121)
sns.scatterplot(x=np.abs(yd), y=neighbors)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd), neighbors)
x = np.linspace(0, max(yd), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (prior)")
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
plt.subplot(122)
sns.scatterplot(x=np.abs(yd_model), y=neighbors_model)
slope, intercept, r, p, stderr = sp.stats.linregress(np.abs(yd_model), neighbors_model)
x = np.linspace(0, max(yd_model), 100)
sns.lineplot(x, np.multiply(slope, x) + intercept)
plt.title(f"slope={slope:.2f}; r={r:.2f}; p={p:.1e} (model)")
plt.xlabel('Δyear')
plt.ylabel('cosine similarity');
plt.figure(figsize=(16,6))
plt.subplot(121)
sns.scatterplot(x='index', y='weight',
data=pd.DataFrame({'index': vectors.indices,
'weight': vectors.data}))
plt.ylim([-.1,1.1]);
plt.subplot(122)
plot_distribution(vectors.data)
plt.figure(figsize=(16,10))
plt.subplot(121)
nx.draw_networkx(graph, node_color=['r' if graph.nodes[n]['year']<start_year else 'b'
for n in graph.nodes])
plt.title('original graph')
plt.subplot(122)
nx.draw_networkx(subgraph, node_color=['r' if subgraph.nodes[n]['year']<start_year else 'b'
for n in subgraph.nodes])
plt.title('new graph');
import json
%matplotlib inline
def save_graph(g, name):
nodes = [{'name': str(i)}#, 'club': 0 #g.node[i]['club']}
for i in g.nodes()]
links = [{'source': list(g.nodes).index(u),
'target': list(g.nodes).index(v)}
for u,v in g.edges()]
with open(name + '.json', 'w') as f:
json.dump({'nodes': nodes, 'links': links},
f, indent=4,)
save_graph(graph, 'graph')
save_graph(subgraph, 'subgraph')
%%html
<div id="graph"></div>
<style>
.node {stroke: #fff; stroke-width: 1.5px;}
.link {stroke: #999; stroke-opacity: .6;}
</style>
%%html
<div id="subgraph"></div>
<style>
.node {stroke: #fff; stroke-width: 1.5px;}
.link {stroke: #999; stroke-opacity: .6;}
</style>
%%javascript
// We load the d3.js library from the Web.
require.config({paths:
{d3: "http://d3js.org/d3.v3.min"}});
require(["d3"], function(d3) {
// The code in this block is executed when the
// d3.js library has been loaded.
// First, we specify the size of the canvas
// containing the visualization (size of the
// <div> element).
var width = 800, height = 400;
var g = 'subgraph';
// We create a color scale.
var color = d3.scale.category10();
// We create a force-directed dynamic graph layout.
var force = d3.layout.force()
.charge(-120)
.linkDistance(50)
.size([width, height]);
// In the <div> element, we create a <svg> graphic
// that will contain our interactive visualization.
var svg = d3.select('#'.concat(g)).select("svg")
if (svg.empty()) {
svg = d3.select('#'.concat(g)).append("svg")
.attr("width", width)
.attr("height", height);
}
// We load the JSON file.
d3.json(g.concat('.json'), function(error, graph) {
// In this block, the file has been loaded
// and the 'graph' object contains our graph.
// We load the nodes and links in the
// force-directed graph.
force.nodes(graph.nodes)
.links(graph.links)
.start();
// We create a <line> SVG element for each link
// in the graph.
var link = svg.selectAll(".link")
.data(graph.links)
.enter().append("line")
.attr("class", "link");
// We create a <circle> SVG element for each node
// in the graph, and we specify a few attributes.
var node = svg.selectAll(".node")
.data(graph.nodes)
.enter().append("circle")
.attr("class", "node")
.attr("r", 5) // radius
.style("fill", function(d) {
// The node color depends on the club.
return color(d.club);
})
.call(force.drag);
// The name of each node is the node number.
node.append("title")
.text(function(d) { return d.name; });
// We bind the positions of the SVG elements
// to the positions of the dynamic force-directed
// graph, at each time step.
force.on("tick", function() {
link.attr("x1", function(d){return d.source.x})
.attr("y1", function(d){return d.source.y})
.attr("x2", function(d){return d.target.x})
.attr("y2", function(d){return d.target.y});
node.attr("cx", function(d){return d.x})
.attr("cy", function(d){return d.y});
});
});
});
###Output
_____no_output_____
###Markdown
Save/load graph
###Code
subgraph.graph['tfidf'] = vectors
nx.write_gpickle(subgraph, 'graph7.pickle')
subgraph = nx.read_gpickle('graph.pickle')
vectors = subgraph.graph['tfidf']
###Output
_____no_output_____ |
eeg-02/eeg-02-notebook.ipynb | ###Markdown
EEG-02 Solutions
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
sns.set_style('white')
sns.set_context('notebook', font_scale=1.5)
%matplotlib inline
###Output
_____no_output_____
###Markdown
Today's demonstration will introduce epoching and event-related potential analysis using `mne-python`. We will inspect EEG data in response to visual and auditory stimuli. We will start by loading the data from last session.
###Code
from mne.io import read_raw_fif
## Load data.
f = os.path.join('..','eeg-data','sub-01_task-audvis_preproc_raw.fif')
raw = read_raw_fif(f, preload=True, verbose=False)
###Output
_____no_output_____
###Markdown
Section 1: Finding and defining events

In addition to the EEG and peripheral channels, our recording includes trigger channels. Trigger channels mark the onset/offset of events during recording. In our recording in particular, STI 014 is the trigger channel that was used for combining all the events to a single channel. It has several pulses of different amplitude throughout the recording. These pulses correspond to different stimuli presented to the subject during the acquisition. The pulses and their corresponding events are defined in the table below.

| Name   | ID | Contents                                 |
|--------|----|------------------------------------------|
| LA     | 1  | Response to left-ear auditory stimulus   |
| RA     | 2  | Response to right-ear auditory stimulus  |
| LV     | 3  | Response to left visual field stimulus   |
| RV     | 4  | Response to right visual field stimulus  |
| Smiley | 5  | Response to the smiley face              |
| Button | 32 | Response triggered by the button press   |

These are the events we are going to align the epochs to. To create an event list from raw data, we simply call a function dedicated just for that. Since the event list is simply a numpy array, you can also manually create one. If you create one from an outside source (like a separate file of events), pay special attention to aligning the events correctly with the raw data.
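For instance, a hand-made event array (a minimal sketch; the sample indices here are made up purely for illustration) could look like:

```
import numpy as np

## Each row is [sample index, previous trigger value, trigger id].
manual_events = np.array([[1000, 0, 1],   # left-ear auditory (LA) event at sample 1000
                          [2000, 0, 3]])  # left visual field (LV) event at sample 2000
```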
###Code
from mne import find_events
from mne.viz import plot_events
## Find events.
events = find_events(raw)
print(events[:5])
###Output
320 events found
Event IDs: [ 1 2 3 4 5 32]
[[27977 0 2]
[28345 0 3]
[28771 0 1]
[29219 0 4]
[29652 0 2]]
###Markdown
The event list contains three columns. The first column corresponds to sample number. To convert this to seconds, you should divide the sample number by the used sampling frequency. The second column is reserved for the old value of the trigger channel at the time of transition, but is currently not in use. The third column is the trigger id (amplitude of the pulse).To get a better picture of the task design, we'll plot out the events. First we'll need to construct an *event_id*, which is a Python dictionary matching event labels to event integers.
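For example, a quick way to check the event times in seconds (a minimal sketch, reusing the `events` array and `raw` object from the cells above) is:

```
## Divide the sample number (first column) by the sampling frequency.
event_times = events[:, 0] / raw.info['sfreq']
print(event_times[:5])
```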
###Code
## Plot the events.
event_id = {'Auditory/Left': 1, 'Auditory/Right': 2, 'Visual/Left': 3, 'Visual/Right': 4,
'smiley': 5, 'button': 32}
###Output
_____no_output_____
###Markdown
Now we plot.
###Code
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(16,4))
## Plot events.
color = {1: '#1f77b4', 2: '#ff7f0e', 3: '#2ca02c', 4: '#d62728', 5: '#9467bd', 32: '#8c564b'}
plot_events(events, raw.info['sfreq'], raw.first_samp, color=color, event_id=event_id, axes=ax);
###Output
_____no_output_____
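###Markdown
Before moving on, a small sketch (not in the original notebook) of the sample-to-seconds conversion described above: dividing the first column of `events` by the sampling frequency gives the event onsets in seconds.
###Code
## Convert event sample indices to seconds (assumes `events` and `raw` from the cells above).
event_times = events[:, 0] / raw.info['sfreq']
print(event_times[:5])
###Output
_____no_output_____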
###Markdown
Section 2: Epoching
Epoching describes the process of taking snapshots of the data centered around some event of interest. We will perform epoching using the `mne.Epochs` constructor. To do so, we need to define some parameters for our epoching. In this tutorial we are only interested in triggers 1, 2, 3 and 4. These triggers correspond to the auditory and visual stimuli. The event_id here can be an int, a list of ints or a dict. With dicts it is possible to assign these ids to distinct categories.
###Code
## Define events of interest.
event_id = dict(LA=1, RA=2, LV=3, RV=4)
###Output
_____no_output_____
###Markdown
Next we need to define the windows of interest. The values tmin and tmax refer to offsets in relation to the events. Here we make epochs that collect the data from 200 ms before to 500 ms after each event (tmin = -0.2 s, tmax = 0.5 s). To get some meaningful results, we also want to baseline the epochs. Baselining computes the mean over the baseline period and adjusts the data accordingly. The epochs constructor uses a baseline period from tmin to 0.0 seconds by default, but it is wise to be explicit. That way you are less likely to end up with surprises along the way. None as the first element of the tuple refers to the start of the time window (-200 ms in this case). See `mne.Epochs` for more information.
###Code
## Define epoch lengths.
tmin = -0.2
tmax = 0.5
baseline = (None, -0.1)
###Output
_____no_output_____
###Markdown
Next we define our rejection threshold for peak-to-peak amplitude (here 100 µV for the EEG channels); epochs exceeding this threshold will be marked as bad.
###Code
## Define rejection threshold.
reject = dict(eeg = 100e-6)
###Output
_____no_output_____
###Markdown
Finally we perform epoching, choosing only the EEG channels.
###Code
from mne import Epochs, pick_types
## Perform epoching.
picks = pick_types(raw.info, meg=False, eeg=True)
epochs = Epochs(raw, events, event_id=event_id, tmin=tmin, tmax=tmax, baseline=baseline,
picks=picks, reject=reject, preload=True, verbose=False)
print(epochs)
###Output
<Epochs | 286 events (all good), -0.199795 - 0.499488 sec, baseline [None, -0.1], ~57.3 MB, data loaded,
'LA': 72
'LV': 71
'RA': 72
'RV': 71>
###Markdown
Next we crop the epochs to start at -100 ms and remove all bad epochs (i.e. those violating the peak-to-peak amplitude rejection threshold).
###Code
## Crop epochs.
epochs = epochs.crop(tmin=-0.1)
## Drop bad epochs.
epochs.drop_bad()
print(epochs)
###Output
<Epochs | 286 events (all good), -0.0998976 - 0.499488 sec, baseline [None, -0.1], ~49.6 MB, data loaded,
'LA': 72
'LV': 71
'RA': 72
'RV': 71>
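###Markdown
As an optional check (not in the original notebook), MNE keeps a record of why each epoch was kept or rejected; a quick way to inspect it is sketched below, assuming the `epochs` object from the cell above.
###Code
## Count how many epochs were dropped (empty entries in drop_log mean the epoch was kept).
n_dropped = sum(1 for log in epochs.drop_log if len(log) > 0)
print('Dropped %d epochs' % n_dropped)
###Output
_____no_output_____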
###Markdown
Finally we save the epoched data. Note the naming convention: MNE expects epochs files to end in `-epo.fif`.
###Code
## Save.
fout = os.path.join('..','eeg-data','sub-01_task-audvis-epo.fif')
epochs.save(fout)
###Output
_____no_output_____
###Markdown
Section 3: ERP Analysis & Visualization
Now that we have defined our epochs, we can inspect the event-related potentials. There are a great many example tutorials on visualizing evoked potentials [here](https://www.martinos.org/mne/stable/auto_tutorials/plot_visualize_evoked.html). We demonstrate a few below.
Visual Stimuli
First we compute the evoked response for each visual stimulus condition.
###Code
## Average within each condition.
LV_evoked = epochs['LV'].average()
RV_evoked = epochs['RV'].average()
###Output
_____no_output_____
###Markdown
Event-Related Potentials
Next let's plot the ERP for each condition.
###Code
print('Left visual')
fig, ax = plt.subplots(1,1,figsize=(12,4))
fig = LV_evoked.plot(spatial_colors=True, xlim=(-0.1,0.5), axes=ax);
print('Right visual')
fig, ax = plt.subplots(1,1,figsize=(12,4))
fig = RV_evoked.plot(spatial_colors=True, xlim=(-0.1,0.5), axes=ax);
###Output
Left visual
###Markdown
Compare Evoked Potentials
###Code
from mne.viz import plot_evoked_topo
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(12,10))
## Plot.
plot_evoked_topo([LV_evoked, RV_evoked], axes=ax);
###Output
_____no_output_____
###Markdown
Topographic Plots
We can also plot the scalp topographic maps for each condition.
###Code
print('Left visual')
LV_evoked.plot_topomap(times=np.arange(0.05,0.25,0.025));
print('Right visual')
RV_evoked.plot_topomap(times=np.arange(0.05,0.25,0.025));
###Output
Left visual
###Markdown
Comparing Evoked Potentials
Using the sensor layout from the previous notebook, we can choose a sensor that is clearly picking up a response to the visual stimuli. Here we are observing strong laterality, so we visualize two sensors: **EEG 056** and **EEG 057**. We will clearly see that right- and left-presented stimuli are more strongly represented in the contralateral hemispheres.
**EEG 056**
###Code
from mne.stats import permutation_cluster_test
## Extract data.
eeg_056 = epochs.copy().pick_channels(['EEG 056'])
data = [eeg_056['LV'].get_data().squeeze(), eeg_056['RV'].get_data().squeeze()]
evokeds = np.mean(data, axis=1) * 1e6
## Using F-statistic as default.
## F-stat = abs(t-stat ** 2)
F_obs, clusters, cluster_pv, H0 = permutation_cluster_test(data, n_permutations=1024,
seed=47404, verbose=False)
## Plotting.
fig, ax = plt.subplots(1,1,figsize=(12,4))
ax.plot(epochs.times, evokeds[0], lw=2.5, label='LV') # Cond: LV
ax.plot(epochs.times, evokeds[1], lw=2.5, label='RV') # Cond: RV
# ax.plot(epochs.times, F_obs, lw=2, color='0.2', label='F-vals') # F-stats
ymin, ymax = ax.get_ylim()
## Plot clusters.
for cluster, pval in zip(clusters, cluster_pv):
if pval < 0.05:
center = epochs.times[cluster].mean()
ax.fill_between(epochs.times[cluster], ymin, ymax, color='0.8', alpha=0.5)
ax.annotate('p = %0.3f' %pval, (0,0), (center, ymax), ha='center', va='top', fontsize=14)
## Add details.
ax.hlines(0, epochs.tmin, epochs.tmax, linewidth=0.5, alpha=0.5, zorder=0)
ax.set(xlim=(epochs.tmin, epochs.tmax), xlabel='Time (s)', ylim=(ymin, ymax),
ylabel='uV', title='EEG 056 (Left Visual Cortex)')
ax.legend(loc=4, frameon=False)
sns.despine()
plt.tight_layout()
###Output
<ipython-input-16-8d52efc13ece>:11: RuntimeWarning: Ignoring argument "tail", performing 1-tailed F-test
seed=47404, verbose=False)
###Markdown
**EEG 057**
###Code
from mne.stats import permutation_cluster_test
## Extract data.
eeg_057 = epochs.copy().pick_channels(['EEG 057'])
data = [eeg_057['LV'].get_data().squeeze(), eeg_057['RV'].get_data().squeeze()]
evokeds = np.mean(data, axis=1) * 1e6
## Using F-statistic as default.
## F-stat = abs(t-stat ** 2)
F_obs, clusters, cluster_pv, H0 = permutation_cluster_test(data, n_permutations=1024,
seed=47404, verbose=False)
## Plotting.
fig, ax = plt.subplots(1,1,figsize=(12,4))
ax.plot(epochs.times, evokeds[0], lw=2.5, label='LV') # Cond: LV
ax.plot(epochs.times, evokeds[1], lw=2.5, label='RV') # Cond: RV
# ax.plot(epochs.times, F_obs, lw=2, color='0.2', label='F-vals') # F-stats
ymin, ymax = ax.get_ylim()
## Plot clusters.
for cluster, pval in zip(clusters, cluster_pv):
if pval < 0.05:
center = epochs.times[cluster].mean()
ax.fill_between(epochs.times[cluster], ymin, ymax, color='0.8', alpha=0.5)
ax.annotate('p = %0.3f' %pval, (0,0), (center, ymax), ha='center', va='top', fontsize=14)
## Add details.
ax.hlines(0, epochs.tmin, epochs.tmax, linewidth=0.5, alpha=0.5, zorder=0)
ax.set(xlim=(epochs.tmin, epochs.tmax), xlabel='Time (s)', ylim=(ymin, ymax),
ylabel='uV', title='EEG 057 (Right Visual Cortex)')
ax.legend(loc=4, frameon=False)
sns.despine()
plt.tight_layout()
###Output
<ipython-input-17-f26e8d8155f4>:11: RuntimeWarning: Ignoring argument "tail", performing 1-tailed F-test
seed=47404, verbose=False)
###Markdown
Difference Waves
###Code
from mne import combine_evoked
## Compute difference wave.
DV_evoked = combine_evoked([LV_evoked, RV_evoked], [-1,1])
## Initialize canvas.
fig, ax = plt.subplots(1,1,figsize=(12,4))
## Plot difference waves.
for ch in ['EEG 056', 'EEG 057']:
ax.plot(DV_evoked.times, DV_evoked.data[DV_evoked.ch_names.index(ch)] * 1e6, lw=2, label=ch)
## Add info.
ax.set(xlabel='Time (s)', ylabel='uV', title='Difference (Right - Left)')
ax.legend(loc=1, frameon=False)
sns.despine()
plt.tight_layout()
###Output
_____no_output_____ |
code/ch14/14_trading_platform.ipynb | ###Markdown
Python for Finance (2nd ed.)**Mastering Data-Driven Finance**© Dr. Yves J. Hilpisch | The Python Quants GmbH Trading Platform Risk Disclaimer Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Leverage can work against you. Due to the certain restrictions imposed by the local law and regulation, German resident retail client(s) could sustain a total loss of deposited funds but are not subject to subsequent payment obligations beyond the deposited funds. Be aware and fully understand all risks associated with the market and trading. Prior to trading any products, carefully consider your financial situation and experience level. Any opinions, news, research, analyses, prices, or other information is provided as general market commentary, and does not constitute investment advice. FXCM & TPQ will not accept liability for any loss or damage, including without limitation to, any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Author Disclaimer The author is neither an employee, agent nor representative of FXCM and is therefore acting independently. The opinions given are their own, constitute general market commentary, and do not constitute the opinion or advice of FXCM or any form of personal or investment advice. FXCM assumes no responsibility for any loss or damage, including but not limited to, any loss or gain arising out of the direct or indirect use of this or any other content. Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Retrieving Tick Data
###Code
import time
import numpy as np
import pandas as pd
import datetime as dt
from pylab import mpl, plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%matplotlib inline
from fxcmpy import fxcmpy_tick_data_reader as tdr
print(tdr.get_available_symbols())
start = dt.datetime(2018, 6, 25)
stop = dt.datetime(2018, 6, 30)
td = tdr('EURUSD', start, stop)
td.get_raw_data().info()
td.get_data().info()
td.get_data().head()
sub = td.get_data(start='2018-06-29 12:00:00',
end='2018-06-29 12:15:00')
sub.head()
sub['Mid'] = sub.mean(axis=1)
sub['SMA'] = sub['Mid'].rolling(1000).mean()
sub[['Mid', 'SMA']].plot(figsize=(10, 6), lw=0.75);
# plt.savefig('../../images/ch14/fxcm_plot_01.png')
###Output
_____no_output_____
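###Markdown
An optional extension (not part of the original chapter): tick data can be resampled to regular bars with pandas. A sketch assuming the `sub` DataFrame with its `Mid` column from the cell above:
###Code
bars = sub['Mid'].resample('1Min').ohlc()  # 1-minute OHLC bars from mid prices
bars.head()
###Output
_____no_output_____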
###Markdown
Retrieving Candles Data
###Code
from fxcmpy import fxcmpy_candles_data_reader as cdr
print(cdr.get_available_symbols())
start = dt.datetime(2018, 5, 1)
stop = dt.datetime(2018, 6, 30)
###Output
_____no_output_____
###Markdown
`period` must be one of `m1`, `H1` or `D1`
###Code
period = 'H1'
candles = cdr('EURUSD', start, stop, period)
data = candles.get_data()
data.info()
data[data.columns[:4]].tail()
data[data.columns[4:]].tail()
data['MidClose'] = data[['BidClose', 'AskClose']].mean(axis=1)
data['SMA1'] = data['MidClose'].rolling(30).mean()
data['SMA2'] = data['MidClose'].rolling(100).mean()
data[['MidClose', 'SMA1', 'SMA2']].plot(figsize=(10, 6));
# plt.savefig('../../images/ch14/fxcm_plot_02.png')
###Output
_____no_output_____
###Markdown
Connecting to the API
###Code
import fxcmpy
fxcmpy.__version__
api = fxcmpy.fxcmpy(config_file='../../cfg/fxcm.cfg')
instruments = api.get_instruments()
print(instruments)
###Output
['EUR/USD', 'USD/JPY', 'GBP/USD', 'USD/CHF', 'EUR/CHF', 'AUD/USD', 'USD/CAD', 'NZD/USD', 'EUR/GBP', 'EUR/JPY', 'GBP/JPY', 'CHF/JPY', 'GBP/CHF', 'EUR/AUD', 'EUR/CAD', 'AUD/CAD', 'AUD/JPY', 'CAD/JPY', 'NZD/JPY', 'GBP/CAD', 'GBP/NZD', 'GBP/AUD', 'AUD/NZD', 'USD/SEK', 'EUR/SEK', 'EUR/NOK', 'USD/NOK', 'USD/MXN', 'AUD/CHF', 'EUR/NZD', 'USD/ZAR', 'USD/HKD', 'ZAR/JPY', 'USD/TRY', 'EUR/TRY', 'NZD/CHF', 'CAD/CHF', 'NZD/CAD', 'TRY/JPY', 'USD/CNH', 'AUS200', 'ESP35', 'FRA40', 'GER30', 'HKG33', 'JPN225', 'NAS100', 'SPX500', 'UK100', 'US30', 'Copper', 'CHN50', 'EUSTX50', 'USDOLLAR', 'USOil', 'UKOil', 'SOYF', 'NGAS', 'Bund', 'XAU/USD', 'XAG/USD']
###Markdown
Retrieving Historical Data
###Code
candles = api.get_candles('USD/JPY', period='D1', number=10)
candles[candles.columns[:4]]
candles[candles.columns[4:]]
start = dt.datetime(2017, 1, 1)
end = dt.datetime(2018, 1, 1)
candles = api.get_candles('EUR/GBP', period='D1',
start=start, stop=end)
candles.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 309 entries, 2017-01-03 22:00:00 to 2018-01-01 22:00:00
Data columns (total 9 columns):
bidopen 309 non-null float64
bidclose 309 non-null float64
bidhigh 309 non-null float64
bidlow 309 non-null float64
askopen 309 non-null float64
askclose 309 non-null float64
askhigh 309 non-null float64
asklow 309 non-null float64
tickqty 309 non-null int64
dtypes: float64(8), int64(1)
memory usage: 24.1 KB
###Markdown
The parameter `period` must be one of `m1, m5, m15, m30, H1, H2, H3, H4, H6, H8, D1, W1` or `M1`.
###Code
candles = api.get_candles('EUR/USD', period='m1', number=250)
candles['askclose'].plot(figsize=(10, 6))
# plt.savefig('../../images/ch14/fxcm_plot_03.png');
###Output
_____no_output_____
###Markdown
Streaming Data
###Code
def output(data, dataframe):
print('%3d | %s | %s | %6.5f, %6.5f'
% (len(dataframe), data['Symbol'],
pd.to_datetime(int(data['Updated']), unit='ms'),
data['Rates'][0], data['Rates'][1]))
api.subscribe_market_data('EUR/USD', (output,))
api.get_last_price('EUR/USD')
api.unsubscribe_market_data('EUR/USD')
###Output
7 | EUR/USD | 2018-07-24 12:40:10.007000 | 1.16940, 1.16943
###Markdown
Placing Orders
###Code
api.get_open_positions()
order = api.create_market_buy_order('EUR/USD', 100)
sel = ['tradeId', 'amountK', 'currency',
'grossPL', 'isBuy']
api.get_open_positions()[sel]
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
order = api.create_market_sell_order('EUR/USD', 25)
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
api.close_all_for_symbol('EUR/GBP')
api.get_open_positions()[sel]
api.close_all()
api.get_open_positions()
###Output
_____no_output_____
###Markdown
Account Information
###Code
api.get_default_account()
api.get_accounts().T
###Output
_____no_output_____
###Markdown
Python for Finance (2nd ed.)**Mastering Data-Driven Finance**© Dr. Yves J. Hilpisch | The Python Quants GmbH Trading Platform Risk Disclaimer Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Leverage can work against you. Due to the certain restrictions imposed by the local law and regulation, German resident retail client(s) could sustain a total loss of deposited funds but are not subject to subsequent payment obligations beyond the deposited funds. Be aware and fully understand all risks associated with the market and trading. Prior to trading any products, carefully consider your financial situation and experience level. Any opinions, news, research, analyses, prices, or other information is provided as general market commentary, and does not constitute investment advice. FXCM & TPQ will not accept liability for any loss or damage, including without limitation to, any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Author Disclaimer The author is neither an employee, agent nor representative of FXCM and is therefore acting independently. The opinions given are their own, constitute general market commentary, and do not constitute the opinion or advice of FXCM or any form of personal or investment advice. FXCM assumes no responsibility for any loss or damage, including but not limited to, any loss or gain arising out of the direct or indirect use of this or any other content. Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Retrieving Tick Data
###Code
import time
import numpy as np
import pandas as pd
import datetime as dt
from pylab import mpl, plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%config InlineBackend.figure_format = 'svg'
from fxcmpy import fxcmpy_tick_data_reader as tdr
print(tdr.get_available_symbols())
start = dt.datetime(2018, 6, 25)
stop = dt.datetime(2018, 6, 30)
td = tdr('EURUSD', start, stop)
td.get_raw_data().info()
td.get_data().info()
td.get_data().head()
sub = td.get_data(start='2018-06-29 12:00:00',
end='2018-06-29 12:15:00')
sub.head()
sub['Mid'] = sub.mean(axis=1)
sub['SMA'] = sub['Mid'].rolling(1000).mean()
sub[['Mid', 'SMA']].plot(figsize=(10, 6), lw=0.75);
# plt.savefig('../../images/ch14/fxcm_plot_01.png')
###Output
_____no_output_____
###Markdown
Retrieving Candles Data
###Code
from fxcmpy import fxcmpy_candles_data_reader as cdr
print(cdr.get_available_symbols())
start = dt.datetime(2018, 5, 1)
stop = dt.datetime(2018, 6, 30)
###Output
_____no_output_____
###Markdown
`period` must be one of `m1`, `H1` or `D1`
###Code
period = 'H1'
candles = cdr('EURUSD', start, stop, period)
data = candles.get_data()
data.info()
data[data.columns[:4]].tail()
data[data.columns[4:]].tail()
data['MidClose'] = data[['BidClose', 'AskClose']].mean(axis=1)
data['SMA1'] = data['MidClose'].rolling(30).mean()
data['SMA2'] = data['MidClose'].rolling(100).mean()
data[['MidClose', 'SMA1', 'SMA2']].plot(figsize=(10, 6));
# plt.savefig('../../images/ch14/fxcm_plot_02.png')
###Output
_____no_output_____
###Markdown
Connecting to the API
###Code
import fxcmpy
fxcmpy.__version__
api = fxcmpy.fxcmpy(config_file='../../cfg/fxcm.cfg')
instruments = api.get_instruments()
print(instruments)
###Output
['EUR/USD', 'USD/JPY', 'GBP/USD', 'USD/CHF', 'EUR/CHF', 'AUD/USD', 'USD/CAD', 'NZD/USD', 'EUR/GBP', 'EUR/JPY', 'GBP/JPY', 'CHF/JPY', 'GBP/CHF', 'EUR/AUD', 'EUR/CAD', 'AUD/CAD', 'AUD/JPY', 'CAD/JPY', 'NZD/JPY', 'GBP/CAD', 'GBP/NZD', 'GBP/AUD', 'AUD/NZD', 'USD/SEK', 'EUR/SEK', 'EUR/NOK', 'USD/NOK', 'USD/MXN', 'AUD/CHF', 'EUR/NZD', 'USD/ZAR', 'USD/HKD', 'ZAR/JPY', 'USD/TRY', 'EUR/TRY', 'NZD/CHF', 'CAD/CHF', 'NZD/CAD', 'TRY/JPY', 'USD/ILS', 'USD/CNH', 'AUS200', 'ESP35', 'FRA40', 'GER30', 'HKG33', 'JPN225', 'NAS100', 'SPX500', 'UK100', 'US30', 'Copper', 'CHN50', 'EUSTX50', 'USDOLLAR', 'US2000', 'USOil', 'UKOil', 'SOYF', 'NGAS', 'USOilSpot', 'UKOilSpot', 'WHEATF', 'CORNF', 'Bund', 'XAU/USD', 'XAG/USD', 'EMBasket', 'JPYBasket', 'BTC/USD', 'BCH/USD', 'ETH/USD', 'LTC/USD', 'XRP/USD', 'CryptoMajor', 'EOS/USD', 'XLM/USD', 'ESPORTS', 'BIOTECH', 'CANNABIS', 'FAANG', 'CHN.TECH', 'CHN.ECOMM', 'USEquities']
###Markdown
Retrieving Historical Data
###Code
candles = api.get_candles('USD/JPY', period='D1', number=10)
candles[candles.columns[:4]]
candles[candles.columns[4:]]
start = dt.datetime(2017, 1, 1)
end = dt.datetime(2018, 1, 1)
candles = api.get_candles('EUR/GBP', period='D1',
start=start, stop=end)
candles.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 309 entries, 2017-01-03 22:00:00 to 2018-01-01 22:00:00
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 bidopen 309 non-null float64
1 bidclose 309 non-null float64
2 bidhigh 309 non-null float64
3 bidlow 309 non-null float64
4 askopen 309 non-null float64
5 askclose 309 non-null float64
6 askhigh 309 non-null float64
7 asklow 309 non-null float64
8 tickqty 309 non-null int64
dtypes: float64(8), int64(1)
memory usage: 24.1 KB
###Markdown
The parameter `period` must be one of `m1, m5, m15, m30, H1, H2, H3, H4, H6, H8, D1, W1` or `M1`.
###Code
candles = api.get_candles('EUR/USD', period='m1', number=250)
candles['askclose'].plot(figsize=(10, 6))
# plt.savefig('../../images/ch14/fxcm_plot_03.png');
###Output
_____no_output_____
###Markdown
Streaming Data
###Code
def output(data, dataframe):
print('%3d | %s | %s | %6.5f, %6.5f'
% (len(dataframe), data['Symbol'],
pd.to_datetime(int(data['Updated']), unit='ms'),
data['Rates'][0], data['Rates'][1]))
api.subscribe_market_data('EUR/USD', (output,))
api.get_last_price('EUR/USD')
api.unsubscribe_market_data('EUR/USD')
###Output
_____no_output_____
###Markdown
Placing Orders
###Code
api.get_open_positions()
order = api.create_market_buy_order('EUR/USD', 100)
sel = ['tradeId', 'amountK', 'currency',
'grossPL', 'isBuy']
api.get_open_positions()[sel]
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
order = api.create_market_sell_order('EUR/USD', 25)
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
api.close_all_for_symbol('EUR/GBP')
api.get_open_positions()[sel]
api.close_all()
api.get_open_positions()
###Output
_____no_output_____
###Markdown
Account Information
###Code
api.get_default_account()
api.get_accounts().T
###Output
_____no_output_____
###Markdown
Python for Finance (2nd ed.)**Mastering Data-Driven Finance**© Dr. Yves J. Hilpisch | The Python Quants GmbH Trading Platform Risk Disclaimer Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Leverage can work against you. Due to the certain restrictions imposed by the local law and regulation, German resident retail client(s) could sustain a total loss of deposited funds but are not subject to subsequent payment obligations beyond the deposited funds. Be aware and fully understand all risks associated with the market and trading. Prior to trading any products, carefully consider your financial situation and experience level. Any opinions, news, research, analyses, prices, or other information is provided as general market commentary, and does not constitute investment advice. FXCM & TPQ will not accept liability for any loss or damage, including without limitation to, any loss of profit, which may arise directly or indirectly from use of or reliance on such information. Author Disclaimer The author is neither an employee, agent nor representative of FXCM and is therefore acting independently. The opinions given are their own, constitute general market commentary, and do not constitute the opinion or advice of FXCM or any form of personal or investment advice. FXCM assumes no responsibility for any loss or damage, including but not limited to, any loss or gain arising out of the direct or indirect use of this or any other content. Trading forex/CFDs on margin carries a high level of risk and may not be suitable for all investors as you could sustain losses in excess of deposits. Retrieving Tick Data
###Code
import time
import numpy as np
import pandas as pd
import datetime as dt
from pylab import mpl, plt
plt.style.use('seaborn')
mpl.rcParams['font.family'] = 'serif'
%matplotlib inline
from fxcmpy import fxcmpy_tick_data_reader as tdr
print(tdr.get_available_symbols())
start = dt.datetime(2018, 6, 25)
stop = dt.datetime(2018, 6, 30)
td = tdr('EURUSD', start, stop)
td.get_raw_data().info()
td.get_data().info()
td.get_data().head()
sub = td.get_data(start='2018-06-29 12:00:00',
end='2018-06-29 12:15:00')
sub.head()
sub['Mid'] = sub.mean(axis=1)
sub['SMA'] = sub['Mid'].rolling(1000).mean()
sub[['Mid', 'SMA']].plot(figsize=(10, 6), lw=0.75);
# plt.savefig('../../images/ch14/fxcm_plot_01.png')
###Output
_____no_output_____
###Markdown
Retrieving Candles Data
###Code
from fxcmpy import fxcmpy_candles_data_reader as cdr
print(cdr.get_available_symbols())
start = dt.datetime(2018, 5, 1)
stop = dt.datetime(2018, 6, 30)
###Output
_____no_output_____
###Markdown
`period` must be one of `m1`, `H1` or `D1`
###Code
period = 'H1'
candles = cdr('EURUSD', start, stop, period)
data = candles.get_data()
data.info()
data[data.columns[:4]].tail()
data[data.columns[4:]].tail()
data['MidClose'] = data[['BidClose', 'AskClose']].mean(axis=1)
data['SMA1'] = data['MidClose'].rolling(30).mean()
data['SMA2'] = data['MidClose'].rolling(100).mean()
data[['MidClose', 'SMA1', 'SMA2']].plot(figsize=(10, 6));
# plt.savefig('../../images/ch14/fxcm_plot_02.png')
###Output
_____no_output_____
###Markdown
Connecting to the API
###Code
import fxcmpy
fxcmpy.__version__
api = fxcmpy.fxcmpy(config_file='../../cfg/fxcm.cfg')
instruments = api.get_instruments()
print(instruments)
###Output
['EUR/USD', 'USD/JPY', 'GBP/USD', 'USD/CHF', 'EUR/CHF', 'AUD/USD', 'USD/CAD', 'NZD/USD', 'EUR/GBP', 'EUR/JPY', 'GBP/JPY', 'CHF/JPY', 'GBP/CHF', 'EUR/AUD', 'EUR/CAD', 'AUD/CAD', 'AUD/JPY', 'CAD/JPY', 'NZD/JPY', 'GBP/CAD', 'GBP/NZD', 'GBP/AUD', 'AUD/NZD', 'USD/SEK', 'EUR/SEK', 'EUR/NOK', 'USD/NOK', 'USD/MXN', 'AUD/CHF', 'EUR/NZD', 'USD/ZAR', 'USD/HKD', 'ZAR/JPY', 'USD/TRY', 'EUR/TRY', 'NZD/CHF', 'CAD/CHF', 'NZD/CAD', 'TRY/JPY', 'USD/ILS', 'USD/CNH', 'AUS200', 'ESP35', 'FRA40', 'GER30', 'HKG33', 'JPN225', 'NAS100', 'SPX500', 'UK100', 'US30', 'Copper', 'CHN50', 'EUSTX50', 'USDOLLAR', 'US2000', 'USOil', 'UKOil', 'SOYF', 'NGAS', 'USOilSpot', 'UKOilSpot', 'WHEATF', 'CORNF', 'Bund', 'XAU/USD', 'XAG/USD', 'EMBasket', 'JPYBasket', 'BTC/USD', 'BCH/USD', 'ETH/USD', 'LTC/USD', 'XRP/USD', 'CryptoMajor', 'EOS/USD', 'XLM/USD', 'ESPORTS', 'BIOTECH', 'CANNABIS', 'FAANG', 'CHN.TECH', 'CHN.ECOMM', 'USEquities']
###Markdown
Retrieving Historical Data
###Code
candles = api.get_candles('USD/JPY', period='D1', number=10)
candles[candles.columns[:4]]
candles[candles.columns[4:]]
start = dt.datetime(2017, 1, 1)
end = dt.datetime(2018, 1, 1)
candles = api.get_candles('EUR/GBP', period='D1',
start=start, stop=end)
candles.info()
###Output
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 309 entries, 2017-01-03 22:00:00 to 2018-01-01 22:00:00
Data columns (total 9 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 bidopen 309 non-null float64
1 bidclose 309 non-null float64
2 bidhigh 309 non-null float64
3 bidlow 309 non-null float64
4 askopen 309 non-null float64
5 askclose 309 non-null float64
6 askhigh 309 non-null float64
7 asklow 309 non-null float64
8 tickqty 309 non-null int64
dtypes: float64(8), int64(1)
memory usage: 24.1 KB
###Markdown
The parameter `period` must be one of `m1, m5, m15, m30, H1, H2, H3, H4, H6, H8, D1, W1` or `M1`.
###Code
candles = api.get_candles('EUR/USD', period='m1', number=250)
candles['askclose'].plot(figsize=(10, 6))
# plt.savefig('../../images/ch14/fxcm_plot_03.png');
###Output
_____no_output_____
###Markdown
Streaming Data
###Code
def output(data, dataframe):
print('%3d | %s | %s | %6.5f, %6.5f'
% (len(dataframe), data['Symbol'],
pd.to_datetime(int(data['Updated']), unit='ms'),
data['Rates'][0], data['Rates'][1]))
api.subscribe_market_data('EUR/USD', (output,))
api.get_last_price('EUR/USD')
api.unsubscribe_market_data('EUR/USD')
###Output
_____no_output_____
###Markdown
Placing Orders
###Code
api.get_open_positions()
order = api.create_market_buy_order('EUR/USD', 100)
sel = ['tradeId', 'amountK', 'currency',
'grossPL', 'isBuy']
api.get_open_positions()[sel]
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
order = api.create_market_sell_order('EUR/USD', 25)
order = api.create_market_buy_order('EUR/GBP', 50)
api.get_open_positions()[sel]
api.close_all_for_symbol('EUR/GBP')
api.get_open_positions()[sel]
api.close_all()
api.get_open_positions()
###Output
_____no_output_____
###Markdown
Account Information
###Code
api.get_default_account()
api.get_accounts().T
###Output
_____no_output_____ |
8-Labs/Z-Spring2021/Lab1/.ipynb_checkpoints/Lab1_Dev-checkpoint.ipynb | ###Markdown
Laboratory 1: First Steps... Notice the code cell below! From this notebook forward, please include and run the script in that cell; it will help in debugging a notebook.
###Code
# Preamble script block to identify host, user, and kernel
import sys
! hostname
! whoami
print(sys.executable)
print(sys.version)
print(sys.version_info)
###Output
DESKTOP-EH6HD63
desktop-eh6hd63\farha
C:\Users\Farha\Anaconda3\python.exe
3.7.4 (default, Aug 9 2019, 18:34:13) [MSC v.1915 64 bit (AMD64)]
sys.version_info(major=3, minor=7, micro=4, releaselevel='final', serial=0)
###Markdown
Also, from now on, please make sure that you have the following markdown cell, filled with your own information, on top of your notebooks: Full name: R#: Title of the notebook: Date:
Now, let's get to work!
Variables
Variables are names given to data that we want to store and manipulate in programs. A variable has a name and a value. The value representation depends on what type of object the variable represents. The utility of variables comes in when we have a structure that is universal, but values of variables within the structure will change - otherwise it would be simple enough to just hardwire the arithmetic. Suppose we want to store the time of concentration for some hydrologic calculation. To do so, we can name a variable `TimeOfConcentration`, and then `assign` a value to the variable, for instance:
###Code
TimeOfConcentration = 0.0
###Output
_____no_output_____
###Markdown
After this assignment statement the variable is created in the program and has a value of 0.0. The use of a decimal point in the initial assignment establishes the variable as a float (a real variable is called a floating point representation -- or just a float).
###Code
TimeOfConcentration + 5
###Output
_____no_output_____
###Markdown
Naming Rules
Variable names in Python can only contain letters (a - z, A - Z), numerals (0 - 9), or underscores. The first character cannot be a number; otherwise there is considerable freedom in naming. The names can be reasonably long. `runTime`, `run_Time`, `_run_Time2`, `_2runTime` are all valid names, but `2runTime` is not valid, and will create an error when you try to use it.
###Code
# Script to illustrate variable names
runTime = 1
_2runTime = 2 # change to 2runTime = 2 and rerun script
runTime2 = 2
print(runTime,_2runTime,runTime2)
###Output
1 2 2
###Markdown
There are some reserved words (keywords) that cannot be used as variable names because they have preassigned meaning in Parseltongue; `if`, `while`, and `for` are examples, and there are several more. The interpreter won't allow you to use keywords as variable names and will issue an error message when you attempt to run a program that does so. Built-in function names such as `print` and `input` are technically allowed as variable names, but reassigning them hides the function, so treat them as off-limits too. See the short illustration after the next example.
Operators
The `=` sign used in the variable definition is called an assignment operator (or assignment sign). The symbol means that the expression to the right of the symbol is to be evaluated and the result placed into the variable on the left side of the symbol. The "operation" is assignment, the "=" symbol is the operator name. Consider the script below:
###Code
# Assignment Operator
x = 5
y = 10
print (x,y)
x=y # reverse order y=x and re-run, what happens?
print (x,y)
###Output
5 10
10 10
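###Markdown
A short illustration (not part of the original lab) of the reserved-word rule described above: Python can list its own keywords, and trying to use one as a variable name is a syntax error.
###Code
# List Python's reserved words (keywords)
import keyword
print(keyword.kwlist)
# Uncommenting the next line raises a SyntaxError because `for` is a keyword
# for = 3
###Output
_____no_output_____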
###Markdown
So look at what happened. When we assigned values to the variables named `x` and `y`, they started life as 5 and 10. We then wrote those values to the console, and the program returned 5 and 10. Then we assigned `y` to `x`, which took the value in y and replaced the value that was in x with this value. We then wrote the contents again, and both variables have the value 10. What's with the `#`?
> Comments are added by writing a hashtag symbol (`#`) followed by any text of your choice. Any text that follows the hashtag symbol on the same line is ignored by the Python interpreter.
Arithmetic Operators
In addition to assignment we can also perform arithmetic operations on variables. The fundamental arithmetic operators are:
| Symbol | Meaning | Example |
|:---|:---|:---|
| = |Assignment| x=3 Assigns value of 3 to x.|
| + |Addition| x+y Adds values in x and y.|
| - |Subtraction| x-y Subtracts values in y from x.|
|$*$ |Multiplication| x*y Multiplies values in x and y.|
| / |Division| x/y Divides value in x by value in y.|
| // |Floor division| x//y Divide x by y, truncate result to whole number.|
| % |Modulus| x%y Returns remainder when x is divided by y.|
|$**$ |Exponentiation| x$**$y Raises value in x to the power of the value in y (e.g. x**y).|
| += |Additive assignment| x+=2 Equivalent to x = x+2.|
| -= |Subtractive assignment| x-=2 Equivalent to x = x-2.|
| *= |Multiplicative assignment| x\*=3 Equivalent to x = x\*3.|
| /= |Divide assignment| x/=3 Equivalent to x = x/3.|
Run the script in the next cell for some illustrative results.
###Code
# Basic arithmetic operators
x = 10
y = 5
print(x, y)
print(x+y)
print(x-y)
print(x*y)
print(x/y)
print((x+1)//y)
print((x+1)%y)
print(x**y)
# Arithmetic assignment operators
x = 1
x += 2
print(type(x),x)
x = 1
x -= 2
print(type(x),x)
x = 1
x *=3
print(type(x),x)
x = 10
x /= 2
print(type(x),x) # Interesting what division does to variable type
###Output
<class 'int'> 3
<class 'int'> -1
<class 'int'> 3
<class 'float'> 5.0
###Markdown
Data Type
In the computer, data are all binary digits (actually 0 and +5 volts). At a higher level of abstraction, data are typed into integer, real, or alphanumeric representation. The type affects the kind of arithmetic operations that are allowed (as well as the kind of arithmetic - integer versus real arithmetic; lexicographical ordering of alphanumerics, etc.). In scientific programming, a common (and really difficult to detect) source of slight inaccuracies (that tend to snowball as the program runs) is mixed-mode arithmetic required because two numeric values are of different types (integer and real). Learn more from the textbook: https://www.inferentialthinking.com/chapters/04/Data_Types.html
Here we present a quick summary.
Integer
Integers are numbers without any fractional portion (nothing after the decimal point, which is not used in integers). Numbers like -3, -2, -1, 0, 1, 2, 200 are integers. A number like 1.1 is not an integer, and 1.0 is also not an integer (the presence of the decimal point makes the number a real). To declare an integer in Python, just assign the variable name to an integer, for example MyPhoneNumber = 14158576309
Real (Float)
A real or float is a number that has (or can have) a fractional portion - the number has decimal parts. The numbers 3.14159, -0.001, 11.11, 1., are all floats. The last one is especially tricky; if you don't notice the decimal point you might think it is an integer, but the inclusion of the decimal point in Python tells the program that the value is to be treated as a float. To declare a float in Python, just assign the variable name to a float, for example MyMassInKilos = 74.8427
String (Alphanumeric)
A string is a data type that is treated as text elements. The usual letters are strings, but numbers can be included. The numbers in a string are simply characters and cannot be directly used in arithmetic. There are some kinds of arithmetic that can be performed on strings, but generally we process string variables to capture the text nature of their contents. To declare a string in Python, just assign the variable name to a string value - the trick is that the value is enclosed in quotes. The quotes are delimiters that tell the program that the characters between the quotes are to be treated as a literal representation. For example MyName = 'Theodore' MyCatName = "Dusty" DustyMassInKilos = "7.48427" are all string variables. The last assignment is made a string on purpose. String variables can be combined using an operation called concatenation. The symbol for concatenation is the plus symbol `+`. Strings can also be converted to all upper case using the `upper()` function. The syntax for the `upper()` function is `'string to be upper case'.upper()`. Notice the "dot" in the syntax. The operation passes everything to the left of the dot to the function, which then operates on that content and returns the result all upper case (or an error if the input stream is not a string).
###Code
# Variable Types Example
MyPhoneNumber = 14158576309
MyMassInKilos = 74.8427
MyName = 'Theodore'
MyCatName = "Dusty"
DustyMassInKilos = "7.48427"
print("All about me")
print("Name: ",MyName, " Mass :",MyMassInKilos,"Kg" )
print('Phone : ',MyPhoneNumber)
print('My cat\'s name :', MyCatName) # the \ escape character is used to get the ' into the literal
print("All about concatenation!")
print("A Silly String : ",MyCatName+MyName+DustyMassInKilos)
print("A SILLY STRING : ", (MyCatName+MyName+DustyMassInKilos).upper())
print(MyName[0:4]) # Notice how the string is sliced- This is Python: ALWAYS start counting from zero!
###Output
All about me
Name: Theodore Mass : 74.8427 Kg
Phone : 14158576309
My cat's name : Dusty
All about concatenation!
A Silly String : DustyTheodore7.48427
A SILLY STRING : DUSTYTHEODORE7.48427
Theo
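###Markdown
The "other kinds of arithmetic" on strings mentioned above include repetition with the `*` operator; a small illustration (not in the original lab):
###Code
# String repetition: the * operator repeats a string an integer number of times
print("Dusty" * 3)
print("-" * 20)  # a handy way to draw a separator line
###Output
_____no_output_____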
###Markdown
Strings can be formatted using the `%` operator or the `format()` function. The concepts will be introduced later on as needed in the workbook; you can Google search for examples of how to do such formatting.
Changing Types
A variable type can be changed. This activity is called type casting. Three functions allow type casting: `int()`, `float()`, and `str()`. The function names indicate the result of using the function, hence `int()` returns an integer, `float()` returns a float, and `str()` returns a string. There is also the useful function `type()`, which returns the type of a variable. The easiest way to understand is to see an example.
###Code
# Type Casting Examples
MyInteger = 234
MyFloat = 876.543
MyString = 'What is your name?'
print(MyInteger,MyFloat,MyString)
print('Integer as float',float(MyInteger))
print('Float as integer',int(MyFloat))
print('Integer as string',str(MyInteger))
print('Integer as hexadecimal',hex(MyInteger))
print('Integer Type',type((MyInteger))) # insert the hex conversion and see what happens!
###Output
_____no_output_____
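###Markdown
A brief illustration (not part of the original lab) of the `%` operator and `format()` function mentioned above for building formatted strings:
###Code
# Two equivalent ways to build a formatted string
MyMassInKilos = 74.8427
print('Mass: %.1f kg' % MyMassInKilos)          # % operator
print('Mass: {:.1f} kg'.format(MyMassInKilos))  # format() function
###Output
_____no_output_____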
###Markdown
Expressions
Expressions are the "algebraic" constructions that are evaluated and then placed into a variable. Consider
    x1 = 7 + 3 * 6 / 2 - 1
The expression is evaluated following operator precedence (multiplication and division before addition and subtraction), working from left to right within each level. In words: into the object named x1, place the result of integer 3 * integer 6 = integer 18, then integer 18 / integer 2 = float 9.0, then integer 7 + float 9.0 = float 16.0, then float 16.0 - integer 1 = float 15.0. The division operation by default produces a float result unless forced otherwise, so the result is that the variable `x1` is a float with a value of `15.0`.
###Code
# Expressions Example
x1 = 7 + 3 * 6 // 2 - 1 # Change // back into / and see what happens!
print(type(x1),x1)
## Simple I/O (Input/Output)
###Output
<class 'int'> 15
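###Markdown
For comparison (not in the original lab), the plain-division version described in the text above produces a float:
###Code
# Same expression with true division: multiplication/division first, then addition/subtraction
x1 = 7 + 3 * 6 / 2 - 1
print(type(x1), x1)  # expected: <class 'float'> 15.0
###Output
_____no_output_____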
###Markdown
Example: Simple Input/Output
Get two floating point numbers via the `input()` function and store them under the variable names `float1` and `float2`. Then, compare them, and try a few operations on them!
    float1 = input("Please enter float1: ")
    float1 = float(float1)
    ...
Print `float1` and `float2` to the output screen.
    print("float1:", float1)
    ...
Then check whether `float1` is greater than or equal to `float2`.
###Code
float1 = input("Please enter float1: ")
float2 = input("Please enter float2: ")
print("float1:", float1)
print("float2:", float2)
float1 = float(float1)
float2 = float(float2)
print("float1:", float1)
print("float2:", float2)
float1 >= float2
float1+float2
float1/float2
###Output
_____no_output_____ |
notebooks/semantic parameter performance.ipynb | ###Markdown
Parameters that might affect performance
----------------------------------------
This notebook examines how parameters in the semantic model of the Danish language affect its performance.
- Number of pages read
- Use of stopwords
- Exclusion of short pages
- Scaling of matrix tfidf/count
- Normalization of document
- Factorization of matrix
###Code
from everything import *
from dasem.semantic import Semantic
from dasem.data import wordsim353 as wordsim353_data
# Read datasets
four_words = read_csv('../dasem/data/four_words.csv', encoding='utf-8')
wordsim353 = wordsim353_data()
def compute_accuracy(semantic, four_words):
outlier = []
for idx, words in four_words.iterrows():
sorted_words = semantic.sort_by_outlierness(words.values[:4])
outlier.append(sorted_words[0])
accuracy = mean(four_words.word4 == outlier)
return accuracy
def compute_correlation(semantic, wordsim):
human = []
relatednesses = []
for idx, row in wordsim.iterrows():
R = semantic.relatedness([row.da1, row.da2])
relatednesses.append(R[0, 1])
human.append(row['Human (mean)'])
human = array(human)
relatednesses = array(relatednesses)
indices = (~isnan(relatednesses)).nonzero()[0]
C = corrcoef(human[indices], relatednesses[indices])
return C[0, 1]
max_n_pagess = [3000, 30000, None]
norms = ['l1', 'l2', None]
stop_wordss = [None, set(nltk.corpus.stopwords.words('danish'))]
use_idfs = [True, False]
sublinear_tfs = [True, False]
columns = ['accuracy', 'correlation', 'stop_words', 'use_idf', 'norm', 'sublinear_tf', 'max_n_pages']
n_total = len(max_n_pagess) * len(norms) * len(stop_wordss) * len(use_idfs) * \
len(sublinear_tfs)
results = DataFrame(dtype=float, index=range(n_total), columns=columns)
n = 0
for stop_words_index, stop_words in (enumerate(stop_wordss)):
for norm in (norms):
for use_idf in (use_idfs):
for sublinear_tf in (sublinear_tfs):
for max_n_pages in (max_n_pagess):
results.ix[n, 'max_n_pages'] = max_n_pages
results.ix[n, 'stop_words'] = stop_words_index
results.ix[n, 'norm'] = str(norm)
results.ix[n, 'use_idf'] = use_idf
results.ix[n, 'sublinear_tf'] = sublinear_tf
semantic = Semantic(stop_words=stop_words, norm=norm,
use_idf=use_idf, sublinear_tf=sublinear_tf,
max_n_pages=max_n_pages)
results.ix[n, 'accuracy'] = compute_accuracy(semantic, four_words)
results.ix[n, 'correlation'] = compute_correlation(semantic, wordsim353)
n += 1
relatednesses = []
for idx, row in wordsim353.iterrows():
R = semantic.relatedness([row.da1, row.da2])
relatednesses.append(R[0, 1])
wordsim353['relatedness'] = relatednesses
wordsim353
wordsim353.plot(x='Human (mean)', y='relatedness', kind='scatter')
yscale('log')
ylim(0.0001, 1)
title('Scatter plot of Wordsim353 data')
show()
results
formula = 'accuracy ~ stop_words + use_idf + norm + sublinear_tf + max_n_pages'
model = smf.glm(formula, data=results).fit()
model.summary()
###Output
_____no_output_____ |
Copy of YOLOv5 Tutorial.ipynb | ###Markdown
This is the **official YOLOv5 🚀 notebook** by **Ultralytics**, and is freely available for redistribution under the [GPL-3.0 license](https://choosealicense.com/licenses/gpl-3.0/). For more information please visit https://github.com/ultralytics/yolov5 and https://ultralytics.com. Thank you!
Setup
Clone repo, install dependencies and check PyTorch and GPU.
###Code
!git clone https://github.com/ultralytics/yolov5 # clone repo
%cd yolov5
%pip install -qr requirements.txt # install dependencies
import torch
from IPython.display import Image, clear_output # to display images
clear_output()
print(f"Setup complete. Using torch {torch.__version__} ({torch.cuda.get_device_properties(0).name if torch.cuda.is_available() else 'CPU'})")
###Output
Setup complete. Using torch 1.9.0+cu102 (Tesla K80)
###Markdown
1. Inference
`detect.py` runs YOLOv5 inference on a variety of sources, downloading models automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and saving results to `runs/detect`. Example inference sources are:
```shell
python detect.py --source 0                              # webcam
                          file.jpg                       # image
                          file.mp4                       # video
                          path/                          # directory
                          path/*.jpg                     # glob
                          'https://youtu.be/NUsoVlDFqZg' # YouTube
                          'rtsp://example.com/media.mp4' # RTSP, RTMP, HTTP stream
```
###Code
%rm -rf runs
!python detect.py --weights yolov5s.pt --img 640 --conf 0.25 --source data/images/
#Image(filename='runs/detect/exp/zidane.jpg', width=600)
###Output
[34m[1mdetect: [0mweights=['yolov5s.pt'], source=data/images/, imgsz=[640, 640], conf_thres=0.25, iou_thres=0.45, max_det=1000, device=, view_img=False, save_txt=False, save_conf=False, save_crop=False, nosave=False, classes=None, agnostic_nms=False, augment=False, visualize=False, update=False, project=runs/detect, name=exp, exist_ok=False, line_thickness=3, hide_labels=False, hide_conf=False, half=False
YOLOv5 🚀 v5.0-380-g11f85e7 torch 1.9.0+cu102 CUDA:0 (Tesla K80, 11441.1875MB)
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5s.pt to yolov5s.pt...
100% 14.1M/14.1M [00:00<00:00, 15.9MB/s]
Fusing layers...
Model Summary: 224 layers, 7266973 parameters, 0 gradients
image 1/2 /content/yolov5/data/images/bus.jpg: 640x480 4 persons, 1 bus, 1 fire hydrant, Done. (0.060s)
image 2/2 /content/yolov5/data/images/zidane.jpg: 384x640 2 persons, 2 ties, Done. (0.027s)
Results saved to [1mruns/detect/exp[0m
Done. (0.328s)
###Markdown
2. Validate
Validate a model's accuracy on [COCO](https://cocodataset.org/home) val or test-dev datasets. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases). To show results by class use the `--verbose` flag. Note that `pycocotools` metrics may be ~1% better than the equivalent repo metrics, as is visible below, due to slight differences in mAP computation.
COCO val2017
Download [COCO val 2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L14) dataset (1GB - 5000 images), and test model accuracy.
###Code
# Download COCO val2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017val.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../datasets && rm tmp.zip
# Run YOLOv5x on COCO val2017
!python val.py --weights yolov5x.pt --data coco.yaml --img 640 --iou 0.65 --half
###Output
[34m[1mval: [0mdata=./data/coco.yaml, weights=['yolov5x.pt'], batch_size=32, imgsz=640, conf_thres=0.001, iou_thres=0.65, task=val, device=, single_cls=False, augment=False, verbose=False, save_txt=False, save_hybrid=False, save_conf=False, save_json=True, project=runs/val, name=exp, exist_ok=False, half=True
YOLOv5 🚀 v5.0-380-g11f85e7 torch 1.9.0+cu102 CUDA:0 (Tesla K80, 11441.1875MB)
Downloading https://github.com/ultralytics/yolov5/releases/download/v5.0/yolov5x.pt to yolov5x.pt...
100% 168M/168M [00:08<00:00, 21.9MB/s]
Fusing layers...
Model Summary: 476 layers, 87730285 parameters, 0 gradients
[34m[1mval: [0mScanning '../datasets/coco/val2017' images and labels...4952 found, 48 missing, 0 empty, 0 corrupted: 100% 5000/5000 [00:02<00:00, 1987.49it/s]
[34m[1mval: [0mNew cache created: ../datasets/coco/val2017.cache
Class Images Labels P R [email protected] [email protected]:.95: 100% 157/157 [10:01<00:00, 3.83s/it]
all 5000 36335 0.746 0.626 0.68 0.49
Speed: 0.2ms pre-process, 111.9ms inference, 1.7ms NMS per image at shape (32, 3, 640, 640)
Evaluating pycocotools mAP... saving runs/val/exp/yolov5x_predictions.json...
loading annotations into memory...
Done (t=0.45s)
creating index...
index created!
Loading and preparing results...
DONE (t=5.26s)
creating index...
index created!
Running per image evaluation...
Evaluate annotation type *bbox*
DONE (t=99.14s).
Accumulating evaluation results...
DONE (t=14.26s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.504
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=100 ] = 0.688
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=100 ] = 0.546
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.351
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.551
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.644
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 1 ] = 0.382
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets= 10 ] = 0.629
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.682
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.524
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.735
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.827
Results saved to [1mruns/val/exp[0m
###Markdown
COCO test-dev2017
Download [COCO test2017](https://github.com/ultralytics/yolov5/blob/74b34872fdf41941cddcf243951cdb090fbac17b/data/coco.yaml#L15) dataset (7GB - 40,000 images), to test model accuracy on the test-dev set (**20,000 images, no labels**). Results are saved to a `*.json` file which should be **zipped** and submitted to the evaluation server at https://competitions.codalab.org/competitions/20794.
###Code
# Download COCO test-dev2017
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco2017labels.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../ && rm tmp.zip # unzip labels
!f="test2017.zip" && curl http://images.cocodataset.org/zips/$f -o $f && unzip -q $f && rm $f # 7GB, 41k images
%mv ./test2017 ../coco/images # move to /coco
# Run YOLOv5s on COCO test-dev2017 using --task test
!python val.py --weights yolov5s.pt --data coco.yaml --task test
###Output
_____no_output_____
###Markdown
3. Train
Download [COCO128](https://www.kaggle.com/ultralytics/coco128), a small 128-image tutorial dataset, start tensorboard and train YOLOv5s from a pretrained checkpoint for 3 epochs (note actual training is typically much longer, around **300-1000 epochs**, depending on your dataset).
###Code
# Download COCO128
torch.hub.download_url_to_file('https://github.com/ultralytics/yolov5/releases/download/v1.0/coco128.zip', 'tmp.zip')
!unzip -q tmp.zip -d ../datasets && rm tmp.zip
###Output
_____no_output_____
###Markdown
Train a YOLOv5s model on [COCO128](https://www.kaggle.com/ultralytics/coco128) with `--data coco128.yaml`, starting from pretrained `--weights yolov5s.pt`, or from randomly initialized `--weights '' --cfg yolov5s.yaml`. Models are downloaded automatically from the [latest YOLOv5 release](https://github.com/ultralytics/yolov5/releases), and **COCO, COCO128, and VOC datasets are downloaded automatically** on first use. All training results are saved to `runs/train/` with incrementing run directories, i.e. `runs/train/exp2`, `runs/train/exp3` etc.
###Code
# Tensorboard (optional)
%load_ext tensorboard
%tensorboard --logdir runs/train
# Weights & Biases (optional)
%pip install -q wandb
import wandb
wandb.login()
# Train YOLOv5s on COCO128 for 3 epochs
!python train.py --img 640 --batch 16 --epochs 3 --data coco128.yaml --weights yolov5s.pt --cache
###Output
_____no_output_____
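###Markdown
An optional follow-up (not part of the original tutorial): the weights produced by the training run above can be evaluated with `val.py`. A sketch assuming the default `runs/train/exp` output directory:
###Code
# Validate the best checkpoint from the training run above on COCO128
!python val.py --weights runs/train/exp/weights/best.pt --data coco128.yaml --img 640
###Output
_____no_output_____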
###Markdown
4. Visualize
Weights & Biases Logging 🌟 NEW
[Weights & Biases](https://wandb.ai/site?utm_campaign=repo_yolo_notebook) (W&B) is now integrated with YOLOv5 for real-time visualization and cloud logging of training runs. This allows for better run comparison and introspection, as well as improved visibility and collaboration for teams. To enable W&B, `pip install wandb`, and then train normally (you will be guided through setup on first use). During training you will see live updates at [https://wandb.ai/home](https://wandb.ai/home?utm_campaign=repo_yolo_notebook), and you can create and share detailed [Reports](https://wandb.ai/glenn-jocher/yolov5_tutorial/reports/YOLOv5-COCO128-Tutorial-Results--VmlldzozMDI5OTY) of your results. For more information see the [YOLOv5 Weights & Biases Tutorial](https://github.com/ultralytics/yolov5/issues/1289).
Local Logging
All results are logged by default to `runs/train`, with a new experiment directory created for each new training as `runs/train/exp2`, `runs/train/exp3`, etc. View train and val jpgs to see mosaics, labels, predictions and augmentation effects. Note an Ultralytics **Mosaic Dataloader** is used for training (shown below), which combines 4 images into 1 mosaic during training.
> `train_batch0.jpg` shows train batch 0 mosaics and labels
> `test_batch0_labels.jpg` shows val batch 0 labels
> `test_batch0_pred.jpg` shows val batch 0 _predictions_
Training results are automatically logged to [Tensorboard](https://www.tensorflow.org/tensorboard) and [CSV](https://github.com/ultralytics/yolov5/pull/4148) as `results.csv`, which is plotted as `results.png` (below) after training completes. You can also plot any `results.csv` file manually:
```python
from utils.plots import plot_results
plot_results('path/to/results.csv')  # plot 'results.csv' as 'results.png'
```
Environments
YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including [CUDA](https://developer.nvidia.com/cuda)/[CUDNN](https://developer.nvidia.com/cudnn), [Python](https://www.python.org/) and [PyTorch](https://pytorch.org/) preinstalled):
- **Google Colab and Kaggle** notebooks with free GPU:
- **Google Cloud** Deep Learning VM. See [GCP Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/GCP-Quickstart)
- **Amazon** Deep Learning AMI. See [AWS Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/AWS-Quickstart)
- **Docker Image**. See [Docker Quickstart Guide](https://github.com/ultralytics/yolov5/wiki/Docker-Quickstart)
Status
If this badge is green, all [YOLOv5 GitHub Actions](https://github.com/ultralytics/yolov5/actions) Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training ([train.py](https://github.com/ultralytics/yolov5/blob/master/train.py)), testing ([val.py](https://github.com/ultralytics/yolov5/blob/master/val.py)), inference ([detect.py](https://github.com/ultralytics/yolov5/blob/master/detect.py)) and export ([export.py](https://github.com/ultralytics/yolov5/blob/master/export.py)) on MacOS, Windows, and Ubuntu every 24 hours and on every commit.
Appendix
Optional extras below. Unit tests validate repo functionality and should be run on any PRs submitted.
###Code
# Reproduce
for x in 'yolov5s', 'yolov5m', 'yolov5l', 'yolov5x':
!python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.25 --iou 0.45 # speed
!python val.py --weights {x}.pt --data coco.yaml --img 640 --conf 0.001 --iou 0.65 # mAP
# PyTorch Hub
import torch
# Model
model = torch.hub.load('ultralytics/yolov5', 'yolov5s')
# Images
dir = 'https://ultralytics.com/images/'
imgs = [dir + f for f in ('zidane.jpg', 'bus.jpg')] # batch of images
# Inference
results = model(imgs)
results.print() # or .show(), .save()
# Unit tests
%%shell
export PYTHONPATH="$PWD" # to run *.py. files in subdirectories
rm -rf runs # remove runs/
for m in yolov5s; do # models
python train.py --weights $m.pt --epochs 3 --img 320 --device 0 # train pretrained
python train.py --weights '' --cfg $m.yaml --epochs 3 --img 320 --device 0 # train scratch
for d in 0 cpu; do # devices
python detect.py --weights $m.pt --device $d # detect official
python detect.py --weights runs/train/exp/weights/best.pt --device $d # detect custom
python val.py --weights $m.pt --device $d # val official
python val.py --weights runs/train/exp/weights/best.pt --device $d # val custom
done
python hubconf.py # hub
python models/yolo.py --cfg $m.yaml # inspect
python export.py --weights $m.pt --img 640 --batch 1 # export
done
# Profile
from utils.torch_utils import profile
m1 = lambda x: x * torch.sigmoid(x)
m2 = torch.nn.SiLU()
results = profile(input=torch.randn(16, 3, 640, 640), ops=[m1, m2], n=100)
# Evolve
!python train.py --img 640 --batch 64 --epochs 100 --data coco128.yaml --weights yolov5s.pt --cache --noautoanchor --evolve
!d=runs/train/evolve && cp evolve.* $d && zip -r evolve.zip $d && gsutil mv evolve.zip gs://bucket # upload results (optional)
# VOC
for b, m in zip([64, 48, 32, 16], ['yolov5s', 'yolov5m', 'yolov5l', 'yolov5x']): # zip(batch_size, model)
!python train.py --batch {b} --weights {m}.pt --data VOC.yaml --epochs 50 --cache --img 512 --nosave --hyp hyp.finetune.yaml --project VOC --name {m}
###Output
_____no_output_____ |
ml/Assignment 1.ipynb | ###Markdown
Assignment 1 - Introduction to Machine Learning For this assignment, you will be using the Breast Cancer Wisconsin (Diagnostic) Database to create a classifier that can help diagnose patients. First, read through the description of the dataset (below).
###Code
import numpy as np
import pandas as pd
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier
cancer = load_breast_cancer()
print(cancer.DESCR)
###Output
.. _breast_cancer_dataset:
Breast cancer wisconsin (diagnostic) dataset
--------------------------------------------
**Data Set Characteristics:**
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
worst/largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 0 is Mean Radius, field
10 is Radius SE, field 20 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
.. topic:: References
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.
###Markdown
The object returned by `load_breast_cancer()` is a scikit-learn Bunch object, which is similar to a dictionary.
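As a quick illustrative check (not part of the assignment), the fields of the Bunch can be reached either with dictionary-style keys or as attributes:

```python
# Bunch fields can be accessed like dictionary entries or like attributes
print(cancer['data'].shape)      # (569, 30) -- the feature matrix
print(cancer.data.shape)         # the same array, attribute-style
print(cancer.target_names)       # ['malignant' 'benign']
print(cancer.feature_names[:3])  # first few feature names
```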
###Code
cancer.keys()
###Output
_____no_output_____
###Markdown
Question 0 (Example) How many features does the breast cancer dataset have?
###Code
def answer_zero():
return len(cancer['feature_names'])
answer_zero()
###Output
_____no_output_____
###Markdown
Question 1Scikit-learn works with lists, numpy arrays, scipy-sparse matrices, and pandas DataFrames, so converting the dataset to a DataFrame is not necessary for training this model. Using a DataFrame does however help make many things easier such as munging data, so let's practice creating a classifier with a pandas DataFrame. Convert the sklearn.dataset `cancer` to a DataFrame. *This function should return a `(569, 31)` DataFrame with * *columns = * ['mean radius', 'mean texture', 'mean perimeter', 'mean area', 'mean smoothness', 'mean compactness', 'mean concavity', 'mean concave points', 'mean symmetry', 'mean fractal dimension', 'radius error', 'texture error', 'perimeter error', 'area error', 'smoothness error', 'compactness error', 'concavity error', 'concave points error', 'symmetry error', 'fractal dimension error', 'worst radius', 'worst texture', 'worst perimeter', 'worst area', 'worst smoothness', 'worst compactness', 'worst concavity', 'worst concave points', 'worst symmetry', 'worst fractal dimension', 'target']*and index = * RangeIndex(start=0, stop=569, step=1)
###Code
def answer_one():
df = pd.DataFrame(cancer['data'], columns=cancer['feature_names'])
df['target'] = cancer['target'].astype('float64')
return df
answer_one()
###Output
_____no_output_____
###Markdown
Question 2What is the class distribution? (i.e. how many instances of `malignant` (encoded 0) and how many `benign` (encoded 1)?)*This function should return a Series named `target` of length 2 with integer values and index =* `['malignant', 'benign']`
###Code
def answer_two():
    cancer_df = answer_one()
    target = cancer_df.groupby('target').size().rename({0: 'malignant', 1: 'benign'})
    target.name = 'target'  # the question asks for a Series explicitly named 'target'
    return target
answer_two()
###Output
_____no_output_____
###Markdown
Question 3Split the DataFrame into `X` (the data) and `y` (the labels).*This function should return a tuple of length 2:* `(X, y)`*, where* * `X`*, a pandas DataFrame, has shape* `(569, 30)`* `y`*, a pandas Series, has shape* `(569,)`.
###Code
def answer_three():
cancer_df = answer_one()
X = cancer_df[cancer['feature_names']]
y = cancer_df['target']
return X, y
answer_three()
###Output
_____no_output_____
###Markdown
Question 4Using `train_test_split`, split `X` and `y` into training and test sets `(X_train, X_test, y_train, and y_test)`.**Set the random number generator state to 0 using `random_state=0` to make sure your results match the autograder!***This function should return a tuple of length 4:* `(X_train, X_test, y_train, y_test)`*, where* * `X_train` *has shape* `(426, 30)`* `X_test` *has shape* `(143, 30)`* `y_train` *has shape* `(426,)`* `y_test` *has shape* `(143,)`
###Code
def answer_four():
X, y = answer_three()
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
return X_train, X_test, y_train, y_test
answer_four()
###Output
_____no_output_____
###Markdown
Question 5Using KNeighborsClassifier, fit a k-nearest neighbors (knn) classifier with `X_train`, `y_train` and using one nearest neighbor (`n_neighbors = 1`).*This function should return a * `sklearn.neighbors.classification.KNeighborsClassifier`.
###Code
def answer_five():
X_train, X_test, y_train, y_test = answer_four()
knn = KNeighborsClassifier(n_neighbors=1)
return knn.fit(X_train, y_train)
answer_five()
###Output
_____no_output_____
###Markdown
Question 6Using your knn classifier, predict the class label using the mean value for each feature.Hint: You can use `cancerdf.mean()[:-1].values.reshape(1, -1)` which gets the mean value for each feature, ignores the target column, and reshapes the data from 1 dimension to 2 (necessary for the predict method of KNeighborsClassifier).*This function should return a numpy array either `array([ 0.])` or `array([ 1.])`*
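As a small aside (not part of the graded answer), `reshape(1, -1)` simply turns the 1-D vector of 30 feature means into a 2-D array with a single row, which is the shape `predict` expects:

```python
# illustrative only: reshape(1, -1) turns a 1-D vector into a one-row 2-D array
import numpy as np
v = np.array([1.0, 2.0, 3.0])
print(v.shape)                 # (3,)   -- one dimension
print(v.reshape(1, -1).shape)  # (1, 3) -- two dimensions, a single "sample" row
```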
###Code
def answer_six():
cancer_df = answer_one()
means = cancer_df.mean()[:-1].values.reshape(1, -1)
knn = answer_five()
return knn.predict(means)
answer_six()
###Output
_____no_output_____
###Markdown
Question 7Using your knn classifier, predict the class labels for the test set `X_test`.*This function should return a numpy array with shape `(143,)` and values either `0.0` or `1.0`.*
###Code
def answer_seven():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
return knn.predict(X_test)
answer_seven()
###Output
_____no_output_____
###Markdown
Question 8Find the score (mean accuracy) of your knn classifier using `X_test` and `y_test`.*This function should return a float between 0 and 1*
###Code
def answer_eight():
X_train, X_test, y_train, y_test = answer_four()
knn = answer_five()
return knn.score(X_test, y_test)
answer_eight()
###Output
_____no_output_____ |
machine_learning/handling_imbalanced_data.ipynb | ###Markdown
This notebook shows how to handle an imbalanced dataset.**Imbalanced datasets are those that contain many more records for one class than for the other (say 80-20).**Two main ways of handling imbalanced datasets:1. Undersampling, which down-sizes the majority class by removing observations until the dataset is balanced.2. Oversampling, which over-sizes the minority class by adding observations. Importing the libraries
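Before that, here is a quick self-contained sketch of the two ideas using the simplest random resamplers from the same `imbalanced-learn` library. It is an illustration only; the rest of the notebook uses NearMiss and SMOTETomek instead.

```python
# minimal sketch of random under-/over-sampling with imbalanced-learn
from collections import Counter
from sklearn.datasets import make_classification
from imblearn.under_sampling import RandomUnderSampler  # drops majority-class rows
from imblearn.over_sampling import RandomOverSampler    # duplicates minority-class rows

X, y = make_classification(weights=[0.1, 0.9], n_samples=1000, random_state=10)

X_under, y_under = RandomUnderSampler(random_state=0).fit_resample(X, y)
X_over,  y_over  = RandomOverSampler(random_state=0).fit_resample(X, y)

print("original:    ", Counter(y))        # imbalanced, roughly 100 vs 900
print("undersampled:", Counter(y_under))  # both classes shrunk to the minority size
print("oversampled: ", Counter(y_over))   # both classes grown to the majority size
```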
###Code
from sklearn.datasets import make_classification
from collections import Counter
from imblearn.under_sampling import NearMiss
from imblearn.combine import SMOTETomek
###Output
_____no_output_____
###Markdown
Creating the dataset
###Code
X, y = make_classification(n_classes=2, class_sep=2, weights=[0.1, 0.9],
n_informative=3, n_redundant=1, flip_y=0,
n_features=20, n_clusters_per_class=1,
n_samples=1000, random_state=10)
print('Original dataset shape %s' % Counter(y))
###Output
Original dataset shape Counter({1: 900, 0: 100})
###Markdown
It is clear from the data that it contains many more datapoints of class 1 than of class 0. Perform undersampling and oversampling on the data. Undersampling
###Code
nm = NearMiss()
X_res, y_res = nm.fit_resample(X, y)  # fit_resample replaces the older fit_sample API in recent imbalanced-learn releases
print(f"Original data shape : {Counter(y)}")
print(f"Resampled data shape : {Counter(y_res)} ")
###Output
_____no_output_____
###Markdown
Oversampling
###Code
smk = SMOTETomek(random_state = 42)
X_res, y_res = smk.fit_resample(X, y)  # likewise, use fit_resample
print(f"Original data shape : {Counter(y)}")
print(f"Resampled data shape : {Counter(y_res)} ")
###Output
Original data shape : Counter({1: 900, 0: 100})
Resampled data shape : Counter({0: 900, 1: 900})
|
quizzes/quiz8/quiz8CalculatorParsing.ipynb | ###Markdown
Q8 ** Run these cells one by one. Study the questions and the answers here. Then answer the corresponding Canvas questions in Quiz-8. ** You will be provided the answers here; we will tag them as "Free Answer". You will see them only after you run the cells. Write these ** free answers** on Canvas. Our goal is that you run through these cells at least once !!! Background information for youSomeone was asked to build a calculator following these CFG rules.```RULESRule 0 S -> expressionRule 1 expression -> expression PLUS termRule 2 expression -> expression MINUS termRule 3 expression -> termRule 4 term -> term TIMES factorRule 5 term -> term DIVIDE factorRule 6 term -> factorRule 7 factor -> innerfactor EXP factorRule 8 factor -> innerfactorRule 9 innerfactor -> UMINUS innerfactorRule 10 innerfactor -> LPAREN expression RPARENRule 11 innerfactor -> NUMBER```They implemented this CFG in a parser that we shall present in Section 2 below. THINGS TO NOTE* We will use "~" (tilde) for unary minus, and "-" (regular minus) for binary infix minus* we will use "^" for exponentiation The ParserYou may be interested in roughly how abstract CFG rules such as those listed above turn into the CFG rules supported by a tool such as PLY.
###Code
from lex import lex
from yacc import yacc
from jove.StateNameSanitizers import ResetStNum, NxtStateStr
from jove.SystemImports import *
# Following ideas from http://www.dabeaz.com/ply/example.html heavily
tokens = ('NUMBER','LPAREN','RPAREN','PLUS', 'MINUS', 'TIMES','DIVIDE', 'UMINUS', 'EXP')
# Tokens
t_PLUS = r'\+'
t_MINUS = r'\-'
t_TIMES = r'\*'
t_DIVIDE = r'\/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_UMINUS = r'\~'
t_EXP = r'\^'
# parsing + semantic actions in one place!
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print("Integer value too large %d", t.value)
t.value = 0
return t
# Ignored characters
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
def p_expression_1(t):
'expression : expression PLUS term'
#
t[0] = (t[1][0] + t[3][0],
attrDyadicInfix("+", t[1][1], t[3][1]))
def p_expression_2(t):
'expression : expression MINUS term'
#
t[0] = (t[1][0] - t[3][0],
attrDyadicInfix("-", t[1][1], t[3][1]))
def p_expression_3(t):
'expression : term'
#
t[0] = t[1]
# Consult this excellent reference for info on precedences
# https://www.cs.utah.edu/~zachary/isp/worksheets/operprec/operprec.html
def p_term_1(t):
'term : term TIMES factor'
#
t[0] = (t[1][0] * t[3][0],
attrDyadicInfix("*", t[1][1], t[3][1]))
def p_term_2(t):
'term : term DIVIDE factor'
#
    denom = t[3][0]
    if (denom == 0):
        print("Error, divide by zero!")
        denom = 1  # fix it: the parsed values are tuples (immutable), so patch a local copy
    t[0] = (t[1][0] / denom,
            attrDyadicInfix("/", t[1][1], t[3][1]))
def p_term_3(t):
'term : factor'
#
t[0] = t[1]
def p_factor_1(t):
'factor : innerfactor EXP factor'
#
t[0] = (t[1][0] ** t[3][0],
attrDyadicInfix("^", t[1][1], t[3][1]))
def p_factor_2(t):
'factor : innerfactor'
#
t[0] = t[1]
def p_innerfactor_1(t):
'innerfactor : UMINUS innerfactor'
#
ast = ('~', t[2][1]['ast'])
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("~E_")
left = NxtStateStr("~_")
t[0] =(-t[2][0],
{'ast' : ast,
'dig' : {'nl' : [ root, left ] + nlin, # this order important for proper layout!
'el' : elin + [ (root, left),
(root, rootin) ]
}})
def p_innerfactor_2(t):
'innerfactor : LPAREN expression RPAREN'
#
ast = t[2][1]['ast']
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("(E)_")
left = NxtStateStr("(_")
right= NxtStateStr(")_")
t[0] =(t[2][0],
{'ast' : ast,
'dig' : {'nl' : [root, left] + nlin + [right], #order important f. proper layout!
'el' : elin + [ (root, left),
(root, rootin),
(root, right) ]
}})
def p_innerfactor_3(t):
'innerfactor : NUMBER'
#
strn = str(t[1])
ast = ('NUMBER', strn)
t[0] =(t[1],
{ 'ast' : ast,
'dig' : {'nl' : [ strn + NxtStateStr("_") ],
'el' : []
}})
def p_error(t):
print("Syntax error at '%s'" % t.value)
#--
def attrDyadicInfix(op, attr1, attr3):
ast = (op, (attr1['ast'], attr3['ast']))
nlin1 = attr1['dig']['nl']
nlin3 = attr3['dig']['nl']
nlin = nlin1 + nlin3
elin1 = attr1['dig']['el']
elin3 = attr3['dig']['el']
elin = elin1 + elin3
rootin1 = nlin1[0]
rootin3 = nlin3[0]
root = NxtStateStr("E1"+op+"E2"+"_") # NxtStateStr("$_")
left = rootin1
middle = NxtStateStr(op+"_")
right = rootin3
return {'ast' : ast,
'dig' : {'nl' : [ root, left, middle, right ] + nlin,
'el' : elin + [ (root, left),
(root, middle),
(root, right) ]
}}
#===
# This is the main function in this Jove file.
#===
def parseExp(s):
"""In: a string s containing a regular expression.
Out: An attribute triple consisting of
1) An abstract syntax tree suitable for processing in the derivative-based scanner
2) A node-list for the parse-tree digraph generated. Good for drawing a parse tree
using the drawPT function below
3) An edge list for the parse-tree generated (again good for drawing using the
drawPT function below)
"""
mylexer = lex()
myparser = yacc()
pt = myparser.parse(s, lexer = mylexer)
# print('parsed result is ', pt)
# (result, ast, nodes, edges)
return (pt[0], pt[1]['ast'], pt[1]['dig']['nl'], pt[1]['dig']['el'])
def drawPT(ast_rslt_nl_el, comment="PT"):
"""Given an (ast, nl, el) triple where nl is the node and el the edge-list,
draw the Parse Tree by returning a dot object.
"""
(rslt, ast, nl, el) = ast_rslt_nl_el
print("Result calculated = ", rslt)
print("Drawing AST for ", ast)
dotObj_pt = Digraph(comment)
dotObj_pt.graph_attr['rankdir'] = 'TB'
for n in nl:
prNam = n.split('_')[0]
dotObj_pt.node(n, prNam, shape="oval", peripheries="1")
for e in el:
dotObj_pt.edge(e[0], e[1])
return dotObj_pt
###Output
_____no_output_____
###Markdown
Now answer these questions How does the calculator above parse "~2^2" ?
###Code
drawPT(parseExp("~2^2"))
###Output
Result calculated = 4
Drawing AST for ('^', (('~', ('NUMBER', '2')), ('NUMBER', '2')))
###Markdown
** Free answer: ** In our calculator, unary minus has higher precedence than exponentiation, as evidenced by a parse tree of **this many** nodes. Why does Python give a different answer for the expression ```-2**2``` ?
###Code
# Python evaluation
-2 ** 2
###Output
_____no_output_____
###Markdown
** Free answer: ** The experiment with the above Python expression shows how our calculator and Python differ. Specifically, the answers are **this one** (calculator) and **this one** (Python). In Python, unary minus binds less tightly than exponentiation. In parsing "2^~3^~4", the following parse tree was produced.How can we tell that the calculator gives higher precedence to "~" (unary minus) and that it right-associates the exponentiation operator?
###Code
drawPT(parseExp("2^~3^~4"))
###Output
Result calculated = 1.008594091576999
Drawing AST for ('^', (('NUMBER', '2'), ('^', (('~', ('NUMBER', '3')), ('~', ('NUMBER', '4'))))))
###Markdown
** Free answer: ** We see how the tree of **this many nodes** for the above expression 2^~3^~4 is built, where the value of the second "^" flows as the EXPONENT of the first "^". Also see that "~" (the unary minus) is incorporated before any exponentiation gets done. What does ```2**-3**-4``` produce in Python? Is it the same answer?
###Code
# The above expression typed into Python in Python's syntax is below, and see what it produces!
2**-3**-4
###Output
_____no_output_____
###Markdown
** Free answer: ** Python differs from our calculator in how it handles ```2**-3**-4``` versus 2^~3^~4. It yields **this answer**. Show by full parenthesization how Python parses 2**-3**-4
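As a quick cross-check (an optional aside, not a quiz answer), Python's built-in `ast` module can report the parse directly; the nesting of unary-minus (`USub`) and power (`Pow`) nodes in the dump matches the full parenthesization shown in the next cell:

```python
# optional aside: inspect Python's own parse of the expression
import ast
print(ast.dump(ast.parse("2**-3**-4", mode="eval")))
# each USub (unary minus) sits on the right-hand side of a Pow node,
# i.e. Python reads the expression as 2 ** (-(3 ** (-4)))
```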
###Code
2**(-(3**(-4)))
###Output
_____no_output_____
###Markdown
** Free answer: ** It parses it as ```2**(-(3**(-4)))```, showing that unary "-" does not have the same precedence as exp. However, the exps are processed right-associatively! Parsing ```6*3/4*~5/(2+3-4-5-6/7*~8)-~9```How many nodes in this parse tree? Same as Python's answer?
###Code
drawPT(parseExp("6*3/4*~5/(2+3-4-5-6/7*~8)-~9"))
# Check against Python!
6*3/4*-5/(2+3-4-5-6/7*-8)--9
###Output
_____no_output_____
###Markdown
Q8 ** Run these cells one by one. Study the questions and the answers here. Then answer the corresponding Canvas questions in Quiz-8. ** You will be provided the answers here; we will tag them as "Free Answer". You will see them only after you run the cells. Write these ** free answers** on Canvas. Our goal is that you run through these cells at least once !!! Background information for youSomeone was asked to build a calculator following these CFG rules.```RULESRule 0 S -> expressionRule 1 expression -> expression PLUS termRule 2 expression -> expression MINUS termRule 3 expression -> termRule 4 term -> term TIMES factorRule 5 term -> term DIVIDE factorRule 6 term -> factorRule 7 factor -> innerfactor EXP factorRule 8 factor -> innerfactorRule 9 innerfactor -> UMINUS innerfactorRule 10 innerfactor -> LPAREN expression RPARENRule 11 innerfactor -> NUMBER```They implemented this CFG in a parser that we shall present in Section 2 below. THINGS TO NOTE* We will use "~" (tilde) for unary minus, and "-" (regular minus) for binary infix minus* we will use "^" for exponentiation The ParserYou may be interested in roughly how abstract CFG rules such as those listed above turn into the CFG rules supported by a tool such as PLY.
###Code
import sys
sys.path[0:0] = ['../..','../../3rdparty'] # Put these at the head of the search path
from lex import lex
from yacc import yacc
from jove.StateNameSanitizers import ResetStNum, NxtStateStr
from jove.SystemImports import *
# Following ideas from http://www.dabeaz.com/ply/example.html heavily
tokens = ('NUMBER','LPAREN','RPAREN','PLUS', 'MINUS', 'TIMES','DIVIDE', 'UMINUS', 'EXP')
# Tokens
t_PLUS = r'\+'
t_MINUS = r'\-'
t_TIMES = r'\*'
t_DIVIDE = r'\/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_UMINUS = r'\~'
t_EXP = r'\^'
# parsing + semantic actions in one place!
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print("Integer value too large %d", t.value)
t.value = 0
return t
# Ignored characters
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
def p_expression_1(t):
'expression : expression PLUS term'
#
t[0] = (t[1][0] + t[3][0],
attrDyadicInfix("+", t[1][1], t[3][1]))
def p_expression_2(t):
'expression : expression MINUS term'
#
t[0] = (t[1][0] - t[3][0],
attrDyadicInfix("-", t[1][1], t[3][1]))
def p_expression_3(t):
'expression : term'
#
t[0] = t[1]
# Consult this excellent reference for info on precedences
# https://www.cs.utah.edu/~zachary/isp/worksheets/operprec/operprec.html
def p_term_1(t):
'term : term TIMES factor'
#
t[0] = (t[1][0] * t[3][0],
attrDyadicInfix("*", t[1][1], t[3][1]))
def p_term_2(t):
'term : term DIVIDE factor'
#
    denom = t[3][0]
    if (denom == 0):
        print("Error, divide by zero!")
        denom = 1  # fix it: the parsed values are tuples (immutable), so patch a local copy
    t[0] = (t[1][0] / denom,
            attrDyadicInfix("/", t[1][1], t[3][1]))
def p_term_3(t):
'term : factor'
#
t[0] = t[1]
def p_factor_1(t):
'factor : innerfactor EXP factor'
#
t[0] = (t[1][0] ** t[3][0],
attrDyadicInfix("^", t[1][1], t[3][1]))
def p_factor_2(t):
'factor : innerfactor'
#
t[0] = t[1]
def p_innerfactor_1(t):
'innerfactor : UMINUS innerfactor'
#
ast = ('~', t[2][1]['ast'])
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("~E_")
left = NxtStateStr("~_")
t[0] =(-t[2][0],
{'ast' : ast,
'dig' : {'nl' : [ root, left ] + nlin, # this order important for proper layout!
'el' : elin + [ (root, left),
(root, rootin) ]
}})
def p_innerfactor_2(t):
'innerfactor : LPAREN expression RPAREN'
#
ast = t[2][1]['ast']
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("(E)_")
left = NxtStateStr("(_")
right= NxtStateStr(")_")
t[0] =(t[2][0],
{'ast' : ast,
'dig' : {'nl' : [root, left] + nlin + [right], #order important f. proper layout!
'el' : elin + [ (root, left),
(root, rootin),
(root, right) ]
}})
def p_innerfactor_3(t):
'innerfactor : NUMBER'
#
strn = str(t[1])
ast = ('NUMBER', strn)
t[0] =(t[1],
{ 'ast' : ast,
'dig' : {'nl' : [ strn + NxtStateStr("_") ],
'el' : []
}})
def p_error(t):
print("Syntax error at '%s'" % t.value)
#--
def attrDyadicInfix(op, attr1, attr3):
ast = (op, (attr1['ast'], attr3['ast']))
nlin1 = attr1['dig']['nl']
nlin3 = attr3['dig']['nl']
nlin = nlin1 + nlin3
elin1 = attr1['dig']['el']
elin3 = attr3['dig']['el']
elin = elin1 + elin3
rootin1 = nlin1[0]
rootin3 = nlin3[0]
root = NxtStateStr("E1"+op+"E2"+"_") # NxtStateStr("$_")
left = rootin1
middle = NxtStateStr(op+"_")
right = rootin3
return {'ast' : ast,
'dig' : {'nl' : [ root, left, middle, right ] + nlin,
'el' : elin + [ (root, left),
(root, middle),
(root, right) ]
}}
#===
# This is the main function in this Jove file.
#===
def parseExp(s):
"""In: a string s containing a regular expression.
Out: An attribute triple consisting of
1) An abstract syntax tree suitable for processing in the derivative-based scanner
2) A node-list for the parse-tree digraph generated. Good for drawing a parse tree
using the drawPT function below
3) An edge list for the parse-tree generated (again good for drawing using the
drawPT function below)
"""
mylexer = lex()
myparser = yacc()
pt = myparser.parse(s, lexer = mylexer)
# print('parsed result is ', pt)
# (result, ast, nodes, edges)
return (pt[0], pt[1]['ast'], pt[1]['dig']['nl'], pt[1]['dig']['el'])
def drawPT(ast_rslt_nl_el, comment="PT"):
"""Given an (ast, nl, el) triple where nl is the node and el the edge-list,
draw the Parse Tree by returning a dot object.
"""
(rslt, ast, nl, el) = ast_rslt_nl_el
print("Result calculated = ", rslt)
print("Drawing AST for ", ast)
dotObj_pt = Digraph(comment)
dotObj_pt.graph_attr['rankdir'] = 'TB'
for n in nl:
prNam = n.split('_')[0]
dotObj_pt.node(n, prNam, shape="oval", peripheries="1")
for e in el:
dotObj_pt.edge(e[0], e[1])
return dotObj_pt
###Output
_____no_output_____
###Markdown
Now answer these questions How does the calculator above parse "~2^2" ?
###Code
drawPT(parseExp("~2^2"))
###Output
Result calculated = 4
Drawing AST for ('^', (('~', ('NUMBER', '2')), ('NUMBER', '2')))
###Markdown
** Free answer: ** In our calculator, unary minus has higher precedence than exponentiation, as evidenced by a parse tree of **this many** nodes. Why does Python give a different answer for the expression ```-2**2``` ?
###Code
# Python evaluation
-2 ** 2
###Output
_____no_output_____
###Markdown
** Free answer: ** The experiment with the above Python expression shows how our calculator and Python differ. Specifically, the answers are **this one** (calculator) and **this one** (Python). In Python, unary minus binds less tightly than exponentiation. In parsing "2^~3^~4", the following parse tree was produced.How can we tell that the calculator gives higher precedence to "~" (unary minus) and that it right-associates the exponentiation operator?
###Code
drawPT(parseExp("2^~3^~4"))
###Output
Result calculated = 1.008594091576999
Drawing AST for ('^', (('NUMBER', '2'), ('^', (('~', ('NUMBER', '3')), ('~', ('NUMBER', '4'))))))
###Markdown
** Free answer: ** We see how the tree of **this many nodes** for the above expression 2^~3^~4 is built, where the value of the second "^" flows as the EXPONENT of the first "^". Also see that "~" (the unary minus) is incorporated before any exponentiation gets done. What does ```2**-3**-4``` produce in Python? Is it the same answer?
###Code
# The above expression typed into Python in Python's syntax is below, and see what it produces!
2**-3**-4
###Output
_____no_output_____
###Markdown
** Free answer: ** Python differs from our calculator in how it handles ```2**-3**-4``` versus 2^~3^~4. It yields **this answer**. Show by full parenthesization how Python parses 2**-3**-4
###Code
2**(-(3**(-4)))
###Output
_____no_output_____
###Markdown
** Free answer: ** It parses it as ```2**(-(3**(-4)))```, showing that unary "-" does not have the same precedence as exp. However, the exps are processed right-associatively! Parsing ```6*3/4*~5/(2+3-4-5-6/7*~8)-~9```How many nodes in this parse tree? Same as Python's answer?
###Code
drawPT(parseExp("6*3/4*~5/(2+3-4-5-6/7*~8)-~9"))
# Check against Python!
6*3/4*-5/(2+3-4-5-6/7*-8)--9
###Output
_____no_output_____
###Markdown
Q8 ** Run these cells one by one. Study the questions and the answers here. Then answer the corresponding Canvas questions in Quiz-8. ** You will be provided the answers here; we will tag them as "Free Answer". You will see them only after you run the cells. Write these ** free answers** on Canvas. Our goal is that you run through these cells at least once !!! Background information for youSomeone was asked to build a calculator following these CFG rules.```RULESRule 0 S -> expressionRule 1 expression -> expression PLUS termRule 2 expression -> expression MINUS termRule 3 expression -> termRule 4 term -> term TIMES factorRule 5 term -> term DIVIDE factorRule 6 term -> factorRule 7 factor -> innerfactor EXP factorRule 8 factor -> innerfactorRule 9 innerfactor -> UMINUS innerfactorRule 10 innerfactor -> LPAREN expression RPARENRule 11 innerfactor -> NUMBER```They implemented this CFG in a parser that we shall present in Section 2 below. THINGS TO NOTE* We will use "~" (tilde) for unary minus, and "-" (regular minus) for binary infix minus* we will use "^" for exponentiation The ParserYou may be interested in roughly how abstract CFG rules such as those listed above turn into the CFG rules supported by a tool such as PLY.
###Code
from lex import lex
from yacc import yacc
from jove.StateNameSanitizers import ResetStNum, NxtStateStr
from jove.SystemImports import *
# Following ideas from http://www.dabeaz.com/ply/example.html heavily
tokens = ('NUMBER','LPAREN','RPAREN','PLUS', 'MINUS', 'TIMES','DIVIDE', 'UMINUS', 'EXP')
# Tokens
t_PLUS = r'\+'
t_MINUS = r'\-'
t_TIMES = r'\*'
t_DIVIDE = r'\/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_UMINUS = r'\~'
t_EXP = r'\^'
# parsing + semantic actions in one place!
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print("Integer value too large %d", t.value)
t.value = 0
return t
# Ignored characters
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
def p_expression_1(t):
'expression : expression PLUS term'
#
t[0] = (t[1][0] + t[3][0],
attrDyadicInfix("+", t[1][1], t[3][1]))
def p_expression_2(t):
'expression : expression MINUS term'
#
t[0] = (t[1][0] - t[3][0],
attrDyadicInfix("-", t[1][1], t[3][1]))
def p_expression_3(t):
'expression : term'
#
t[0] = t[1]
# Consult this excellent reference for info on precedences
# https://www.cs.utah.edu/~zachary/isp/worksheets/operprec/operprec.html
def p_term_1(t):
'term : term TIMES factor'
#
t[0] = (t[1][0] * t[3][0],
attrDyadicInfix("*", t[1][1], t[3][1]))
def p_term_2(t):
'term : term DIVIDE factor'
#
    denom = t[3][0]
    if (denom == 0):
        print("Error, divide by zero!")
        denom = 1  # fix it: the parsed values are tuples (immutable), so patch a local copy
    t[0] = (t[1][0] / denom,
            attrDyadicInfix("/", t[1][1], t[3][1]))
def p_term_3(t):
'term : factor'
#
t[0] = t[1]
def p_factor_1(t):
'factor : innerfactor EXP factor'
#
t[0] = (t[1][0] ** t[3][0],
attrDyadicInfix("^", t[1][1], t[3][1]))
def p_factor_2(t):
'factor : innerfactor'
#
t[0] = t[1]
def p_innerfactor_1(t):
'innerfactor : UMINUS innerfactor'
#
ast = ('~', t[2][1]['ast'])
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("~E_")
left = NxtStateStr("~_")
t[0] =(-t[2][0],
{'ast' : ast,
'dig' : {'nl' : [ root, left ] + nlin, # this order important for proper layout!
'el' : elin + [ (root, left),
(root, rootin) ]
}})
def p_innerfactor_2(t):
'innerfactor : LPAREN expression RPAREN'
#
ast = t[2][1]['ast']
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("(E)_")
left = NxtStateStr("(_")
right= NxtStateStr(")_")
t[0] =(t[2][0],
{'ast' : ast,
'dig' : {'nl' : [root, left] + nlin + [right], #order important f. proper layout!
'el' : elin + [ (root, left),
(root, rootin),
(root, right) ]
}})
def p_innerfactor_3(t):
'innerfactor : NUMBER'
#
strn = str(t[1])
ast = ('NUMBER', strn)
t[0] =(t[1],
{ 'ast' : ast,
'dig' : {'nl' : [ strn + NxtStateStr("_") ],
'el' : []
}})
def p_error(t):
print("Syntax error at '%s'" % t.value)
#--
def attrDyadicInfix(op, attr1, attr3):
ast = (op, (attr1['ast'], attr3['ast']))
nlin1 = attr1['dig']['nl']
nlin3 = attr3['dig']['nl']
nlin = nlin1 + nlin3
elin1 = attr1['dig']['el']
elin3 = attr3['dig']['el']
elin = elin1 + elin3
rootin1 = nlin1[0]
rootin3 = nlin3[0]
root = NxtStateStr("E1"+op+"E2"+"_") # NxtStateStr("$_")
left = rootin1
middle = NxtStateStr(op+"_")
right = rootin3
return {'ast' : ast,
'dig' : {'nl' : [ root, left, middle, right ] + nlin,
'el' : elin + [ (root, left),
(root, middle),
(root, right) ]
}}
#===
# This is the main function in this Jove file.
#===
def parseExp(s):
"""In: a string s containing a regular expression.
Out: An attribute triple consisting of
1) An abstract syntax tree suitable for processing in the derivative-based scanner
2) A node-list for the parse-tree digraph generated. Good for drawing a parse tree
using the drawPT function below
3) An edge list for the parse-tree generated (again good for drawing using the
drawPT function below)
"""
mylexer = lex()
myparser = yacc()
pt = myparser.parse(s, lexer = mylexer)
# print('parsed result is ', pt)
# (result, ast, nodes, edges)
return (pt[0], pt[1]['ast'], pt[1]['dig']['nl'], pt[1]['dig']['el'])
def drawPT(ast_rslt_nl_el, comment="PT"):
"""Given an (ast, nl, el) triple where nl is the node and el the edge-list,
draw the Parse Tree by returning a dot object.
"""
(rslt, ast, nl, el) = ast_rslt_nl_el
print("Result calculated = ", rslt)
print("Drawing AST for ", ast)
dotObj_pt = Digraph(comment)
dotObj_pt.graph_attr['rankdir'] = 'TB'
for n in nl:
prNam = n.split('_')[0]
dotObj_pt.node(n, prNam, shape="oval", peripheries="1")
for e in el:
dotObj_pt.edge(e[0], e[1])
return dotObj_pt
###Output
_____no_output_____
###Markdown
Now answer these questions How does the calculator above parse "~2^2" ?
###Code
drawPT(parseExp("~2^2"))
###Output
Generating LALR tables
###Markdown
** Free answer: ** In our calculator, unary minus has higher precedence than exponentiation, as evidenced by a parse tree of **this many** nodes. Why does Python give a different answer for the expression ```-2**2``` ?
###Code
# Python evaluation
-2 ** 2
###Output
_____no_output_____
###Markdown
** Free answer: ** The experiment with the above Python expression shows how our calculator and Python differ. Specifically, the answers are **this one** (calculator) and **this one** (Python). In Python, unary minus binds less tightly than exponentiation. In parsing "2^~3^~4", the following parse tree was produced.How can we tell that the calculator gives higher precedence to "~" (unary minus) and that it right-associates the exponentiation operator?
###Code
drawPT(parseExp("2^~3^~4"))
###Output
Result calculated = 1.008594091576999
Drawing AST for ('^', (('NUMBER', '2'), ('^', (('~', ('NUMBER', '3')), ('~', ('NUMBER', '4'))))))
###Markdown
** Free answer: ** We see how the tree of **this many nodes** for the above expression 2^~3^~4 is built, where the value of the second "^" flows as the EXPONENT of the first "^". Also see that "~" (the unary minus) is incorporated before any exponentiation gets done. What does ```2**-3**-4``` produce in Python? Is it the same answer?
###Code
# The above expression typed into Python in Python's syntax is below, and see what it produces!
2**-3**-4
###Output
_____no_output_____
###Markdown
** Free answer: ** Python differs from our calculator in how it handles ```2**-3**-4``` versus 2^~3^~4. It yields **this answer**. Show by full parenthesization how Python parses 2**-3**-4
###Code
2**(-(3**(-4)))
###Output
_____no_output_____
###Markdown
** Free answer: ** It parses it as ```2**(-(3**(-4)))```, showing that unary "-" does not have the same precedence as exp. However, the exps are processed right-associatively! Parsing ```6*3/4*~5/(2+3-4-5-6/7*~8)-~9```How many nodes in this parse tree? Same as Python's answer?
###Code
drawPT(parseExp("6*3/4*~5/(2+3-4-5-6/7*~8)-~9"))
# Check against Python!
6*3/4*-5/(2+3-4-5-6/7*-8)--9
###Output
_____no_output_____
###Markdown
Q8 ** Run these cells one by one. Study the questions and the answers here. Then answer the corresponding Canvas questions in Quiz-8. ** You will be provided the answers here; we will tag them as "Free Answer". You will see them only after you run the cells. Write these ** free answers** on Canvas. Our goal is that you run through these cells at least once !!! Background information for youSomeone was asked to build a calculator following these CFG rules.```RULESRule 0 S -> expressionRule 1 expression -> expression PLUS termRule 2 expression -> expression MINUS termRule 3 expression -> termRule 4 term -> term TIMES factorRule 5 term -> term DIVIDE factorRule 6 term -> factorRule 7 factor -> innerfactor EXP factorRule 8 factor -> innerfactorRule 9 innerfactor -> UMINUS innerfactorRule 10 innerfactor -> LPAREN expression RPARENRule 11 innerfactor -> NUMBER```They implemented this CFG in a parser that we shall present in Section 2 below. THINGS TO NOTE* We will use "~" (tilde) for unary minus, and "-" (regular minus) for binary infix minus* we will use "^" for exponentiation The ParserYou may be interested in roughly how abstract CFG rules such as those listed above turn into the CFG rules supported by a tool such as PLY.
###Code
from lex import lex
from yacc import yacc
from jove.StateNameSanitizers import ResetStNum, NxtStateStr
from jove.SystemImports import *
# Following ideas from http://www.dabeaz.com/ply/example.html heavily
tokens = ('NUMBER','LPAREN','RPAREN','PLUS', 'MINUS', 'TIMES','DIVIDE', 'UMINUS', 'EXP')
# Tokens
t_PLUS = r'\+'
t_MINUS = r'\-'
t_TIMES = r'\*'
t_DIVIDE = r'\/'
t_LPAREN = r'\('
t_RPAREN = r'\)'
t_UMINUS = r'\~'
t_EXP = r'\^'
# parsing + semantic actions in one place!
def t_NUMBER(t):
r'\d+'
try:
t.value = int(t.value)
except ValueError:
print("Integer value too large %d", t.value)
t.value = 0
return t
# Ignored characters
t_ignore = " \t"
def t_newline(t):
r'\n+'
t.lexer.lineno += t.value.count("\n")
def t_error(t):
print("Illegal character '%s'" % t.value[0])
t.lexer.skip(1)
def p_expression_1(t):
'expression : expression PLUS term'
#
t[0] = (t[1][0] + t[3][0],
attrDyadicInfix("+", t[1][1], t[3][1]))
def p_expression_2(t):
'expression : expression MINUS term'
#
t[0] = (t[1][0] - t[3][0],
attrDyadicInfix("-", t[1][1], t[3][1]))
def p_expression_3(t):
'expression : term'
#
t[0] = t[1]
# Consult this excellent reference for info on precedences
# https://www.cs.utah.edu/~zachary/isp/worksheets/operprec/operprec.html
def p_term_1(t):
'term : term TIMES factor'
#
t[0] = (t[1][0] * t[3][0],
attrDyadicInfix("*", t[1][1], t[3][1]))
def p_term_2(t):
'term : term DIVIDE factor'
#
    denom = t[3][0]
    if (denom == 0):
        print("Error, divide by zero!")
        denom = 1  # fix it: the parsed values are tuples (immutable), so patch a local copy
    t[0] = (t[1][0] / denom,
            attrDyadicInfix("/", t[1][1], t[3][1]))
def p_term_3(t):
'term : factor'
#
t[0] = t[1]
def p_factor_1(t):
'factor : innerfactor EXP factor'
#
t[0] = (t[1][0] ** t[3][0],
attrDyadicInfix("^", t[1][1], t[3][1]))
def p_factor_2(t):
'factor : innerfactor'
#
t[0] = t[1]
def p_innerfactor_1(t):
'innerfactor : UMINUS innerfactor'
#
ast = ('~', t[2][1]['ast'])
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("~E_")
left = NxtStateStr("~_")
t[0] =(-t[2][0],
{'ast' : ast,
'dig' : {'nl' : [ root, left ] + nlin, # this order important for proper layout!
'el' : elin + [ (root, left),
(root, rootin) ]
}})
def p_innerfactor_2(t):
'innerfactor : LPAREN expression RPAREN'
#
ast = t[2][1]['ast']
nlin = t[2][1]['dig']['nl']
elin = t[2][1]['dig']['el']
rootin = nlin[0]
root = NxtStateStr("(E)_")
left = NxtStateStr("(_")
right= NxtStateStr(")_")
t[0] =(t[2][0],
{'ast' : ast,
'dig' : {'nl' : [root, left] + nlin + [right], #order important f. proper layout!
'el' : elin + [ (root, left),
(root, rootin),
(root, right) ]
}})
def p_innerfactor_3(t):
'innerfactor : NUMBER'
#
strn = str(t[1])
ast = ('NUMBER', strn)
t[0] =(t[1],
{ 'ast' : ast,
'dig' : {'nl' : [ strn + NxtStateStr("_") ],
'el' : []
}})
def p_error(t):
print("Syntax error at '%s'" % t.value)
#--
def attrDyadicInfix(op, attr1, attr3):
ast = (op, (attr1['ast'], attr3['ast']))
nlin1 = attr1['dig']['nl']
nlin3 = attr3['dig']['nl']
nlin = nlin1 + nlin3
elin1 = attr1['dig']['el']
elin3 = attr3['dig']['el']
elin = elin1 + elin3
rootin1 = nlin1[0]
rootin3 = nlin3[0]
root = NxtStateStr("E1"+op+"E2"+"_") # NxtStateStr("$_")
left = rootin1
middle = NxtStateStr(op+"_")
right = rootin3
return {'ast' : ast,
'dig' : {'nl' : [ root, left, middle, right ] + nlin,
'el' : elin + [ (root, left),
(root, middle),
(root, right) ]
}}
#===
# This is the main function in this Jove file.
#===
def parseExp(s):
"""In: a string s containing a regular expression.
Out: An attribute triple consisting of
1) An abstract syntax tree suitable for processing in the derivative-based scanner
2) A node-list for the parse-tree digraph generated. Good for drawing a parse tree
using the drawPT function below
3) An edge list for the parse-tree generated (again good for drawing using the
drawPT function below)
"""
mylexer = lex()
myparser = yacc()
pt = myparser.parse(s, lexer = mylexer)
# print('parsed result is ', pt)
# (result, ast, nodes, edges)
return (pt[0], pt[1]['ast'], pt[1]['dig']['nl'], pt[1]['dig']['el'])
def drawPT(ast_rslt_nl_el, comment="PT"):
"""Given an (ast, nl, el) triple where nl is the node and el the edge-list,
draw the Parse Tree by returning a dot object.
"""
(rslt, ast, nl, el) = ast_rslt_nl_el
print("Result calculated = ", rslt)
print("Drawing AST for ", ast)
dotObj_pt = Digraph(comment)
dotObj_pt.graph_attr['rankdir'] = 'TB'
for n in nl:
prNam = n.split('_')[0]
dotObj_pt.node(n, prNam, shape="oval", peripheries="1")
for e in el:
dotObj_pt.edge(e[0], e[1])
return dotObj_pt
###Output
_____no_output_____
###Markdown
Now answer these questions How does the calculator above parse "~2^2" ?
###Code
drawPT(parseExp("~2^2"))
###Output
_____no_output_____
###Markdown
** Free answer: ** In our calculator, unary minus has higher precedence than exponentiation, as evidenced by a parse tree of **this many** nodes. Why does Python give a different answer for the expression ```-2**2``` ?
###Code
# Python evaluation
-2 ** 2
###Output
_____no_output_____
###Markdown
** Free answer: ** The experiment with the above Python expression shows how our calculator and Python differ. Specifically, the answers are **this one** (calculator) and **this one** (Python). In Python, unary minus binds less tightly than exponentiation. In parsing "2^~3^~4", the following parse tree was produced.How can we tell that the calculator gives higher precedence to "~" (unary minus) and that it right-associates the exponentiation operator?
###Code
drawPT(parseExp("2^~3^~4"))
###Output
_____no_output_____
###Markdown
** Free answer: ** We see how the tree of **this many nodes** for the above expression 2^~3^~4 is built, where the value of the second "^" flows as the EXPONENT of the first "^". Also see that "~" (the unary minus) is incorporated before any exponentiation gets done. What does ```2**-3**-4``` produce in Python? Is it the same answer?
###Code
# The above expression typed into Python in Python's syntax is below, and see what it produces!
2**-3**-4
###Output
_____no_output_____
###Markdown
** Free answer: ** Python differs from our calculator in how it handles ```2**-3**-4``` versus 2^~3^~4. It yields **this answer**. Show by full parenthesization how Python parses 2**-3**-4
###Code
2**(-(3**(-4)))
###Output
_____no_output_____
###Markdown
** Free answer: ** It parses it as ```2**(-(3**(-4)))```, showing that unary "-" does not have the same precedence as exp. However, the exps are processed right-associatively! Parsing ```6*3/4*~5/(2+3-4-5-6/7*~8)-~9```How many nodes in this parse tree? Same as Python's answer?
###Code
drawPT(parseExp("6*3/4*~5/(2+3-4-5-6/7*~8)-~9"))
# Check against Python!
6*3/4*-5/(2+3-4-5-6/7*-8)--9
###Output
_____no_output_____ |
pages/workshop/AWIPS/Grid_Levels_and_Parameters.ipynb | ###Markdown
This example covers the callable methods of the Python AWIPS DAF when working with gridded data. We start with a connection to an EDEX server, then query data types, then grid names, parameters, levels, and other information. Finally the gridded data is plotted for its domain using Matplotlib and Cartopy. DataAccessLayer.getSupportedDatatypes()getSupportedDatatypes() returns a list of available data types offered by the EDEX server defined above.
###Code
from awips.dataaccess import DataAccessLayer
import unittest
DataAccessLayer.changeEDEXHost("edex-cloud.unidata.ucar.edu")
dataTypes = DataAccessLayer.getSupportedDatatypes()
dataTypes.sort()
list(dataTypes)
###Output
_____no_output_____
###Markdown
DataAccessLayer.getAvailableLocationNames()Now create a new data request, and set the data type to **grid** to request all available grids with **getAvailableLocationNames()**
###Code
request = DataAccessLayer.newDataRequest()
request.setDatatype("grid")
available_grids = DataAccessLayer.getAvailableLocationNames(request)
available_grids.sort()
list(available_grids)
###Output
_____no_output_____
###Markdown
DataAccessLayer.getAvailableParameters()After datatype and model name (locationName) are set, you can query all available parameters with **getAvailableParameters()**
###Code
request.setLocationNames("RAP13")
availableParms = DataAccessLayer.getAvailableParameters(request)
availableParms.sort()
list(availableParms)
###Output
_____no_output_____
###Markdown
DataAccessLayer.getAvailableLevels()Selecting **"T"** for temperature.
###Code
request.setParameters("T")
availableLevels = DataAccessLayer.getAvailableLevels(request)
for lvl in availableLevels:
print(lvl)
###Output
_____no_output_____
###Markdown
* **0.0SFC** is the Surface level* **FHAG** stands for Fixed Height Above Ground (in meters)* **NTAT** stands for Nominal Top of the ATmosphere* **BL** stands for Boundary Layer, where **0.0_30.0BL** reads as *0-30 mb above ground level* * **TROP** is the Tropopause level**request.setLevels()**For this example we will use Surface Temperature
###Code
request.setLevels("2.0FHAG")
###Output
_____no_output_____
###Markdown
DataAccessLayer.getAvailableTimes()* **getAvailableTimes(request, True)** will return an object of *run times* - formatted as `YYYY-MM-DD HH:MM:SS`* **getAvailableTimes(request)** will return an object of all times - formatted as `YYYY-MM-DD HH:MM:SS (F:ff)`* **getForecastRun(cycle, times)** will return a DataTime array for a single forecast cycle.
###Code
cycles = DataAccessLayer.getAvailableTimes(request, True)
times = DataAccessLayer.getAvailableTimes(request)
fcstRun = DataAccessLayer.getForecastRun(cycles[-1], times)
list(fcstRun)
###Output
_____no_output_____
###Markdown
DataAccessLayer.getGridData()Now that we have our `request` and DataTime `fcstRun` arrays ready, it's time to request the data array from EDEX.
###Code
response = DataAccessLayer.getGridData(request, [fcstRun[-1]])
for grid in response:
data = grid.getRawData()
lons, lats = grid.getLatLonCoords()
print('Time :', str(grid.getDataTime()))
print('Model:', str(grid.getLocationName()))
print('Parm :', str(grid.getParameter()))
print('Unit :', str(grid.getUnit()))
print(data.shape)
###Output
_____no_output_____
###Markdown
Plotting with Matplotlib and Cartopy**1. pcolormesh**
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib
import cartopy.crs as ccrs
import cartopy.feature as cfeature
from cartopy.mpl.gridliner import LONGITUDE_FORMATTER, LATITUDE_FORMATTER
import numpy as np
import numpy.ma as ma
from scipy.io import loadmat
from scipy.constants import convert_temperature
def make_map(bbox, projection=ccrs.PlateCarree()):
fig, ax = plt.subplots(figsize=(16, 9),
subplot_kw=dict(projection=projection))
ax.set_extent(bbox)
ax.coastlines(resolution='50m')
gl = ax.gridlines(draw_labels=True)
gl.top_labels = gl.right_labels = False
gl.xformatter = LONGITUDE_FORMATTER
gl.yformatter = LATITUDE_FORMATTER
return fig, ax
#convert temp from K to F
dataf = convert_temperature(data, 'K', 'F')
cmap = plt.get_cmap('rainbow')
bbox = [lons.min(), lons.max(), lats.min(), lats.max()]
fig, ax = make_map(bbox=bbox)
cs = ax.pcolormesh(lons, lats, dataf, cmap=cmap)
cbar = fig.colorbar(cs, extend='both', shrink=0.5, orientation='horizontal')
cbar.set_label(grid.getLocationName() +" " + grid.getLevel() + " " \
+ grid.getParameter() + " (F) " \
+ "valid " + str(grid.getDataTime().getRefTime()))
###Output
_____no_output_____
###Markdown
**2. contourf**
###Code
fig2, ax2 = make_map(bbox=bbox)
cs2 = ax2.contourf(lons, lats, dataf, 80, cmap=cmap,
vmin=dataf.min(), vmax=dataf.max(), extend='both')
cbar2 = fig2.colorbar(cs2, shrink=0.5, orientation='horizontal')
cbar2.set_label(grid.getLocationName() +" " + grid.getLevel() + " " \
+ grid.getParameter() + " (F) " \
+ "valid " + str(grid.getDataTime().getRefTime()))
###Output
_____no_output_____ |
02_statistics/02_a_dinucleotides from class.ipynb | ###Markdown
Dinucleotides and dipeptidesWe counted the occurrence of individual nucleotides in the genome and residues in the proteome.In real biological sequences, adjacent positions are rarely independent. We now have ways to talk about these sort of inter-dependencies using probabilities.We'll start by counting adjacent pairs of nucleotides in the genome. When a sequence has $N$ bases, it has $N-1$ adjacent pairs: $0$ and $1$, $1$ and $2$, $2$ and $3$, and so forth all the way to $N-2$, $N-1$.An easy way to get a pandas `Series` of these adjacent pairs is to:1. create a Series of first nucleotides in a pair2. create a Series of second nucleotides in a pair3. add together these two seriesWe'll see how this works on a test string```alphabet='abcdefghijklmnopqrstuvwxyz'```
###Code
import pandas as pd
alphabet='abcdefghijklmnopqrstuvwxyz'
first_letters = pd.Series(list(alphabet[0:-1]))
second_letters = pd.Series(list(alphabet[1:]))
pairs = first_letters + second_letters
pairs
###Output
_____no_output_____
###Markdown
Yeast proteome dipeptidesFirst we need to import the `Bio.SeqIO` module from `biopython` so we can read in our yeast sequences.
###Code
from Bio import SeqIO
###Output
_____no_output_____
###Markdown
Then we need to import the `pandas` module for our `Series` and `DataFrame` types, and the `matplotlib.pyplot` module to make graphs.
###Code
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Here is a copy of our code to1. Create `proteins` as an iterator over all the protein sequences2. Create an empty `Series` of amino acid counts3. Loop over each protein 1. Count the number of residues in that one protein 1. Add that residue count to the running tally5. Print the sorted version of our count series6. Plot a bar graph of our counts
###Code
proteins = SeqIO.parse("../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta", "fasta")
total_counts = pd.Series(dtype='int64')
for protein in proteins:
protein_count = pd.Series(list(protein.seq)).value_counts()
total_counts = total_counts.add(protein_count, fill_value = 0)
print(total_counts.sort_values())
total_counts.sort_values().plot(kind='bar')
###Output
* 6064.0
W 30592.0
C 37287.0
M 61220.0
H 63795.0
Y 99429.0
Q 116054.0
P 128629.0
F 130264.0
R 130554.0
G 146138.0
A 161450.0
V 163368.0
D 171556.0
T 173814.0
N 180883.0
E 191723.0
I 192717.0
K 215733.0
S 264092.0
L 279435.0
dtype: float64
###Markdown
DipeptidesNow we'll use the approach above to count every adjacent pair of amino acids.We'll make a series of first amino acids in `first_aas`, a series of second amino acids in `second_aas`, and then combine them to count them.
###Code
proteins = SeqIO.parse("../S288C_R64-3-1/orf_trans_R64-3-1_20210421.fasta", "fasta")
total_counts = pd.Series(dtype='int64')
for protein in proteins:
# protein.seq is the sequence of the protein
first_aas = pd.Series(list(protein.seq[0:-1]))
second_aas = pd.Series(list(protein.seq[1:]))
aa_pairs = first_aas + second_aas
protein_count = aa_pairs.value_counts()
total_counts = total_counts.add(protein_count, fill_value = 0)
print(total_counts.sort_values())
#total_counts.sort_values().plot(kind='bar')
total_counts.sort_values()
###Output
_____no_output_____
###Markdown
ProbabilitiesConvert the counts to probabilities in a variable `dipep_probs` by 1. Using the `.sum()` method to find the total number of amino acid pairs counted2. Dividing the `total_counts` series by this sum to get "normalized" probabilities
###Code
total_counts.sum()
dipep_probs = total_counts / total_counts.sum()
dipep_probs.sort_values()
dipep_probs['WW']
###Output
_____no_output_____
###Markdown
Marginal probabilitiesThe table of amino acid _pair_ probabilities gives the _joint_ distribution.There are two ways to compute the _marginal_ probability of an `A`. We can count every time an `A` shows up in the first position, and we can count every time an `A` shows up in the second position.Compute this both ways and compare it to the value we got from the single-residue counting above.
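In symbols, with $x$ ranging over the amino acids, the two marginalizations described above are

$$P(\mathrm{A}\ \text{first}) \;=\; \sum_{x} P(\mathrm{A}x), \qquad P(\mathrm{A}\ \text{second}) \;=\; \sum_{x} P(x\mathrm{A}),$$

and both sums should come out close to the single-residue frequency of `A` computed earlier.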
###Code
dipep_probs[ dipep_probs.index.str.startswith('A') ].sum()
dipep_probs[ dipep_probs.index.str.endswith('A') ].sum()
###Output
_____no_output_____
###Markdown
Compute all of the marginal probabilities. There are many reasonable ways to approach this -- one is to use a for loop
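For comparison, here is one alternative to the for-loop approach in the next cell (a sketch that assumes `dipep_probs` from the previous cell): group the pair probabilities by the first letter of each index label and sum within each group. Note that this keeps every first-position symbol present in the index, not just the 20 standard amino acids.

```python
# alternative to the for loop: group dipeptide probabilities by their first letter
marginal_probs_alt = dipep_probs.groupby(dipep_probs.index.str[0]).sum()
print(marginal_probs_alt.sort_values())
```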
###Code
one = pd.Series([1, 2, 3], index=['a', 'b', 'c'])
two = pd.Series([10, 20, 30], index=['x', 'y', 'z'])
two.combine_first(one)
marginal_probs = pd.Series(dtype='float64')
for aa in list("ACDEFGHIKLMNPQRSTVWY"):
aa_prob = dipep_probs[ dipep_probs.index.str.startswith(aa)].sum()
aa_prob = pd.Series([aa_prob], index=[aa])
# print(aa_prob)
marginal_probs = marginal_probs.combine_first(aa_prob)
print(marginal_probs.sort_values())
###Output
W 0.010410
C 0.012688
M 0.020832
H 0.021708
Y 0.033834
Q 0.039491
P 0.043770
F 0.044326
R 0.044425
G 0.049728
A 0.054938
V 0.055591
D 0.058377
T 0.059145
N 0.061551
E 0.065239
I 0.065577
K 0.073409
S 0.089865
L 0.095086
dtype: float64
###Markdown
Conditional probabilitiesCompute the _conditional_ probability of a `C` following a first `A`. Is this higher or lower than the unconditional (marginal) probability of a `C`?
###Code
# P( C 2nd | A 1st ) = P( C 2nd and A 1st ) / P( A 1st, unconditional )
dipep_probs['AC'] / marginal_probs['A']
###Output
_____no_output_____
###Markdown
Another way of looking at this is to compute the ratio `P(AC) / (P(A) * P(C))`, which is the ratio between the observed dipeptide probability and the expected dipeptide probability under the assumption of independence.
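A one-line derivation (writing $P(A)$ and $P(C)$ for the marginal probabilities used above) shows why this ratio measures association:

$$\frac{P(AC)}{P(A)\,P(C)} \;=\; \frac{P(AC)/P(A)}{P(C)} \;=\; \frac{P(C \mid \mathrm{A}\ \text{first})}{P(C)},$$

so a value above 1 means `C` follows `A` more often than independence would predict, and a value below 1 means less often.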
###Code
dipep_probs['AC'] / (marginal_probs['A'] * marginal_probs['C'])
###Output
_____no_output_____ |
src/TA2_main.ipynb | ###Markdown
Building Knowledge Graph Extract Data from CSV
###Code
def get_csv_data(filepath, handle_row_func, row_limit=None):
data = dict()
with open(filepath) as file:
next(file)
rows = csv.reader(file, delimiter=",")
cnt = 0
for row in rows:
handle_row_func(data, row)
cnt += 1
if row_limit != None and cnt == row_limit:
break
return data
def handle_csv_kanji_func(data, row):
kanji,*meanings = row
if len(meanings) >= 2:
meanings = ",".join(meanings)
else:
meanings = meanings[0]
meanings = meanings.split(":")
meanings = meanings[0]
data[kanji] = meanings
data_kanji = get_csv_data(
"dataset/s5_kanjis_output.csv",
handle_csv_kanji_func,
row_limit=4900,
)
print("len(data_kanji) = ", len(data_kanji))
# pp(sample_from_dict(data_kanji))
radical_no_meaning = {"|", "丶", "丿", "乙", "亅", "冖"}
def handle_csv_radical_func(data, row):
radical,meaning,_ = row
if radical not in radical_no_meaning:
data[radical] = meaning
data_radical = get_csv_data(
"dataset/s7_nodes_radical_meaning.csv",
handle_csv_radical_func
)
print("len(data_radical) = ", len(data_radical))
pp(sample_from_dict(data_radical))
def handle_csv_edges_func(data, row):
kanji,radical_list = row
if kanji in data_kanji:
data[kanji] = radical_list.split(':')
data[kanji] = list(set(data[kanji]) - radical_no_meaning)
data_edges = get_csv_data("dataset/s7_edges_kanji_radical.csv", handle_csv_edges_func)
print("len(data_edges) = ", len(data_edges))
# pp(sample_from_dict(data_edges))
###Output
len(data_edges) = 4393
###Markdown
Data Structure Node Manager
###Code
get_key = lambda symbol, dtype : f"{symbol}-{dtype}"
node_kanji = {
get_key(symbol, 'kanji'): {
'symbol' : symbol,
'meaning': meaning,
'visual' : f"{symbol}\n{meaning}",
'idx' : idx,
'color' : 'red',
} for idx, (symbol, meaning) in enumerate(list(data_kanji.items()))}
node_radical = {
get_key(symbol, 'radical'): {
'symbol' : symbol,
'meaning': meaning,
'visual' : f"{symbol}\n{meaning}",
'idx' : idx,
'color' : 'yellow',
} for idx, (symbol, meaning) in enumerate(list(data_radical.items()))}
full_node = {**node_radical, **node_kanji}
# pp(sample_from_dict(full_node))
###Output
_____no_output_____
###Markdown
Edge Manager
###Code
def get_graph_edge(data_edges):
edges = []
for kanji, radicals in data_edges.items():
for r in radicals:
edges.append( (f"{kanji}-kanji", f"{r}-radical") )
return edges
full_edges = get_graph_edge(data_edges)
# full_edges[:10]
###Output
_____no_output_____
###Markdown
Kanji Graph
###Code
kjg_raw = nx.Graph()
kjg_raw.add_nodes_from(full_node.items())
kjg_raw.add_edges_from(full_edges)
# PREPROCESSING: ENFORCE CONNECTED GRAPH
# https://networkx.org/documentation/stable/reference/algorithms/isolates.html
# EDA + Preprocessing: Removing Isolated Nodes
def enforce_connected_graph(G):
n_conn = nx.number_connected_components(G)
n_iso = nx.number_of_isolates(G)
print('number of connected components: ', nx.number_connected_components(G))
print('number of isolated: ', n_iso)
    if n_iso > 0:  # remove isolated nodes whenever any exist
G.remove_nodes_from(list(nx.isolates(G)))
n_conn = nx.number_connected_components(G)
if n_conn != 1:
raise ValueError(f"Number of connected components must be 1, not {n_conn}")
else:
print("Graph is already connected")
return G
kjg = enforce_connected_graph(kjg_raw)
print(nx.info(kjg))
###Output
Graph with 4612 nodes and 15297 edges
###Markdown
Graph Viz Lib
###Code
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.font_manager as fm
# Reference: https://albertauyeung.github.io/2020/03/15/matplotlib-cjk-fonts.html
[f for f in fm.fontManager.ttflist if 'CJK JP' in f.name]
def visualize_graph(
Graph: nx.Graph,
figsize: tuple=(7,7),
color_map: List[str]=None,
node_size: int=3000,
label_attr: str='visual' # valid value: 'idx', 'visual'
) -> None:
    if color_map is None:
        color_map = [Graph.nodes[n]["color"] for n in Graph]
plt.figure(1,figsize=figsize)
labels = nx.get_node_attributes(Graph, label_attr)
nx.draw_kamada_kawai(Graph,
node_color=color_map,
with_labels=True,
labels=labels,
node_size=node_size,
font_size=20,
font_family="Noto Serif CJK JP")
plt.show()
visualize_graph(
Graph = kjg.subgraph(random.sample(kjg.nodes, 100)),
node_size = 100,
label_attr = '', #: idx or visual
)
def get_sg_kanji_with(kjg) -> nx.Graph:
sg = nx.Graph()
p = '嘩-kanji'
radicals = [n for n in kjg.neighbors(p)]
sg.add_nodes_from([(p, kjg.nodes[p])] + [(r, kjg.nodes[r]) for r in radicals])
sg.add_edges_from([(p, rp) for rp in radicals])
return sg
visualize_graph(
Graph = get_sg_kanji_with(kjg),
node_size = 2000,
figsize = (4,4),
label_attr = None
)
###Output
_____no_output_____
###Markdown
Graph Function
###Code
def generate_graph(G: nx.Graph, nodes: List) -> nx.Graph:
R = nx.Graph()
R.add_nodes_from([(n, G.nodes[n]) for n in nodes])
R.add_edges_from(nx.utils.pairwise(nodes))
return R
def get_node_color_result(g, kinputs, koutputs):
color_map = []
for n in g:
if n in kinputs:
color_map.append("green") # input
elif n in koutputs:
color_map.append("blue") # output
else:
color_map.append(g.nodes[n]["color"])
return color_map
def visualize_result(g_res, kinputs, koutputs, figsize=(5,5), label_attr='visual'):
visualize_graph(
Graph=g_res,
color_map=get_node_color_result(g_res, kinputs, koutputs),
figsize=figsize,
label_attr=label_attr,
)
###Output
_____no_output_____
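###Markdown
`generate_graph` simply re-threads an ordered node list (such as a shortest path) into its own path graph, copying node attributes from the source graph. A quick sketch on three arbitrary nodes of `kjg` (the specific nodes do not matter here):
###Code
# Illustrative: rebuild a 3-node sequence as a path graph and check its size.
some_nodes = list(kjg.nodes)[:3]
tiny = generate_graph(kjg, some_nodes)
print(len(tiny.nodes), "nodes,", len(tiny.edges), "edges")  # expect 3 nodes, 2 edges
###Output
_____no_output_____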
###Markdown
Graph Alternatives Graph Internet
###Code
g_ias = nx.random_internet_as_graph(300, 10)
nx.set_node_attributes(g_ias, {n: {"visual": n, "color": "yellow"} for n in g_ias.nodes})
# visualize_graph(
# Graph = g_ias,
# node_size = 500,
# figsize = (4,4),
# with_labels = True,
# )
# kin = 49 # say
# kout = 41 # lie
# result_shortest_path = nx.shortest_path(G=g_ias, source=kin, target=kout)
# result = generate_graph(g_ias, result_shortest_path)
# visualize_result(
# g_res=result,
# kinputs=[kin],
# koutputs=[kout]
# )
###Output
_____no_output_____
###Markdown
Graph List
###Code
subgraph_edges = [(e[0][0], e[1][0]) for e in random.sample(full_edges, 2000)]  # keep just the bare symbols from each (kanji, radical) edge
sample_graph = nx.Graph(subgraph_edges)
sample_graph = enforce_connected_graph(sample_graph)
gg = {
'kjg': kjg,
'ias': g_ias
}
print('number of isolated clean: ', nx.number_of_isolates(gg['kjg']))
print('number of connected clean: ', nx.number_connected_components(gg['kjg']))
###Output
number of isolated clean: 0
number of connected clean: 1
###Markdown
Querying Knowledge Graph Data Structure
###Code
kin = '貴-kanji' # precious
kout = '業-kanji' # business
result_sp = nx.shortest_path(G=kjg, source=kin, target=kout)
result = generate_graph(kjg, result_sp)
visualize_result(
g_res=result,
kinputs=[kin],
koutputs=[kout],
label_attr='visual'
)
###Output
_____no_output_____
###Markdown
Algorithm Brute Force Algorithm
###Code
def find_path_bf(G: nx.Graph, MOrig: List, MDest: List) -> nx.Graph:
result = []
for kin in MOrig: # O(|MOrig|)
for kout in MDest: # O(|MDest|)
sp_raw = nx.dijkstra_path(G, source=kin, target=kout)
# O( (|GV|+|GE|) log |GV|)
sp_graph = generate_graph(G, sp_raw)
result.append(sp_graph)
return nx.compose_all(result)
MOrig = ['逢-kanji','嘩-kanji']
MDest = ['颶-kanji','鴪-kanji','賠-kanji','蛤-kanji']
MOrig = ['姻-kanji','寥-kanji'] # matrimony, lonely
MDest = ['姑-kanji','嘩-kanji','蛤-kanji'] # mother-in-law, noisy, clam
result = find_path_bf(gg['kjg'], MOrig, MDest)
visualize_result(
g_res=result,
kinputs=MOrig,
koutputs=MDest,
figsize=(8,8),
label_attr='visual'
)
###Output
_____no_output_____
###Markdown
Astar Algorithm Heuristic
###Code
g_curr = gg['kjg']
def common_neighbor_helper(g_curr, u,v):
nu = g_curr[u].keys() # O(1)
nv = g_curr[v].keys() # O(1)
return len(nu & nv) # O(min(|nu|,|nv|))
def common_neighbor(u, v):
return common_neighbor_helper(g_curr,u,v)
def jaccard_similarity(u, v):
G = g_curr
union_size = len(set(G[u]) | set(G[v])) # union neighbor
if union_size == 0:
return 0
return common_neighbor_helper(G,u,v) / union_size
###Output
_____no_output_____
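###Markdown
A quick sanity check of the two heuristics on the endpoints of one existing edge (any edge from `full_edges` will do). Note that `nx.astar_path` interprets the heuristic value as an estimated remaining distance to the target, so similarity scores like these act as a search bias rather than a guarantee of shortest paths.
###Code
# Illustrative: evaluate both heuristics on one kanji-radical pair.
u, v = full_edges[0]
print("common neighbours:", common_neighbor(u, v))
print("Jaccard similarity:", jaccard_similarity(u, v))
###Output
_____no_output_____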
###Markdown
Main A*
###Code
def find_path_astar(G: nx.Graph, MOrig: List, MDest: List, heuristic_func) -> nx.Graph:
result = []
for kin in MOrig: # O(|MOrig|)
        for kout in MDest: # O(|MDest|)
sp_raw = nx.astar_path(G, source=kin, target=kout, heuristic=heuristic_func)
# O( (|GV|+|GE|) log |GV|)
sp_graph = generate_graph(G, sp_raw)
result.append(sp_graph)
return nx.compose_all(result)
###Output
_____no_output_____
###Markdown
Steiner Tree
###Code
from heapq import heappush, heappop
from itertools import count

def _dijkstra_multisource(
    G, sources, weight, pred=None, paths=None, cutoff=None, target=None
):
G_succ = G._succ if G.is_directed() else G._adj
push = heappush
pop = heappop
dist = {} # dictionary of final distances
seen = {}
# fringe is heapq with 3-tuples (distance,c,node)
# use the count c to avoid comparing nodes (may not be able to)
c = count()
fringe = []
for source in sources:
if source not in G:
raise nx.NodeNotFound(f"Source {source} not in G")
seen[source] = 0
push(fringe, (0, next(c), source))
while fringe:
(d, _, v) = pop(fringe)
if v in dist:
continue # already searched this node.
dist[v] = d
if v == target:
break
for u, e in G_succ[v].items():
cost = weight(v, u, e)
if cost is None:
continue
vu_dist = dist[v] + cost
if cutoff is not None:
if vu_dist > cutoff:
continue
if u in dist:
u_dist = dist[u]
if vu_dist < u_dist:
raise ValueError("Contradictory paths found:", "negative weights?")
elif pred is not None and vu_dist == u_dist:
pred[u].append(v)
elif u not in seen or vu_dist < seen[u]:
seen[u] = vu_dist
push(fringe, (vu_dist, next(c), u))
if paths is not None:
paths[u] = paths[v] + [u]
if pred is not None:
pred[u] = [v]
elif vu_dist == seen[u]:
if pred is not None:
pred[u].append(v)
return dist
def multi_source_dijkstra(G, sources, target=None, cutoff=None, weight="weight"):
if target in sources:
return (0, [target])
    weight_key = weight  # keep the edge-attribute name; rebinding `weight` below would otherwise shadow it inside the lambda
    weight = lambda u, v, data: data.get(weight_key, 1)
paths = {source: [source] for source in sources} # dictionary of paths
dist = _dijkstra_multisource(G, sources, weight, paths=paths)
if target is None:
return (dist, paths)
try:
return (dist[target], paths[target])
except KeyError as e:
raise nx.NetworkXNoPath(f"No path to {target}.") from e
def my_all_pairs_dijkstra(G):
i = 0
for n in G:
i += 1
print('\r%s' % i, end = '\r')
dist, path = multi_source_dijkstra(G, {n})
yield (n, (dist, path))
def metric_closure(G, weight="weight"):
M = nx.Graph()
Gnodes = set(G)
all_paths_iter = my_all_pairs_dijkstra(G)
for u, (distance, path) in all_paths_iter:
Gnodes.remove(u)
for v in Gnodes:
M.add_edge(u, v, distance=distance[v], path=path[v])
return M
mcg = metric_closure(g_curr, weight='weight')
from itertools import chain
from networkx.utils import pairwise
def my_steiner_tree(G, terminal_nodes, weight="weight", is_mcg=True):
global mcg
# H is the subgraph induced by terminal_nodes in the metric closure M of G.
if is_mcg:
M = mcg
else:
M = metric_closure(G, weight=weight) # O(|GV|^2)
H = M.subgraph(terminal_nodes) # O(|GV|^2) * O(|MOrig| + |MDest|)
# Use the 'distance' attribute of each edge provided by M.
mst_edges = nx.minimum_spanning_edges(H, weight="distance", data=True) # O (|GE| log GV)
# Create an iterator over each edge in each shortest path; repeats are okay
edges = chain.from_iterable(pairwise(d["path"]) for u, v, d in mst_edges)
T = G.edge_subgraph(edges)
return T
def find_path_steiner(G: nx.Graph, MOrig: List, MDest: List, is_mcg: bool) -> nx.Graph:
return my_steiner_tree(G, MOrig + MDest, is_mcg=is_mcg)
# result = find_path_steiner(gg['ias'], MOrig, MDest)
# visualize_result(
# g_res=result,
# kinputs=MOrig,
# koutputs=MDest,
# figsize=(8,8)
# )
###Output
_____no_output_____
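###Markdown
To see what the metric closure actually stores, here is a toy sketch on a 4-node path graph (independent of the kanji graph, purely illustrative):
###Code
# Illustrative: metric closure of the path graph 0-1-2-3.
toy = nx.path_graph(4)
toy_mc = metric_closure(toy)
print(toy_mc[0][3])  # the edge (0, 3) records the shortest-path distance and the path itself
###Output
_____no_output_____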
###Markdown
Experiment Test Case
###Code
from kanji_lists import JLPT, KYOIKU
# https://github.com/ffe4/kanji-lists
test_cases_raw = {
'N5 to N4': {
'MOrig': list(JLPT.N5)[:3],
'MDest': list(JLPT.N4)[:7],
},
'G1 to G2': {
'MOrig': list(KYOIKU.GRADE1)[:13],
'MDest': list(KYOIKU.GRADE2)[:17],
},
'G2 to G3': {
'MOrig': list(KYOIKU.GRADE2)[:20],
'MDest': list(KYOIKU.GRADE3)[:30],
},
# 'N4 to N3': {
# 'MOrig': list(JLPT.N4)[:30],
# 'MDest': list(JLPT.N3)[:70],
# },
# 'N3 to N2': {
# 'MOrig': list(JLPT.N3)[:60],
# 'MDest': list(JLPT.N2)[:140],
# },
# 'G3 to G4': {
# 'MOrig': list(KYOIKU.GRADE3)[:160],
# 'MDest': list(KYOIKU.GRADE4)[:240],
# },
# 'G4 to G5': {
# 'MOrig': list(KYOIKU.GRADE4)[:240],
# 'MDest': list(KYOIKU.GRADE5)[:360],
# },
# 'N2 to N1': {
# 'MOrig': list(JLPT.N2)[:400],
# 'MDest': list(JLPT.N1)[:600],
# },
# 'G5 to G6': {
# 'MOrig': list(KYOIKU.GRADE5)[:500],
# 'MDest': list(KYOIKU.GRADE6)[:700],
# },
# gg['ias']
# 'tc_ias_1': {
# 'MOrig': [1,2,3],
# 'MDest': [91,92,93]
# },
# 'tc_ias_2': {
# 'MOrig': [i*11 for i in range(33)],
# 'MDest': [i*10 for i in range(66)]
# },
# 'tc_ias_3': {
# 'MOrig': [i*12 for i in range(100) if i*12 < 3000],
# 'MDest': [i*13 for i in range(200) if i*13 < 3000]
# },
# 'tc_ias_4': {
# 'MOrig': [i*13 for i in range(200) if i*13 < 3000],
# 'MDest': [i*14 for i in range(300) if i*14 < 3000]
# },
# 'tc_ias_5': {
# 'MOrig': [i*5 for i in range(400) if i*5 < 3000],
# 'MDest': [i*7 for i in range(500) if i*7 < 3000]
# },
# 'tc_ias_6': {
# 'MOrig': [i*7 for i in range(600) if i*7 < 3000],
# 'MDest': [i*3 for i in range(700) if i*3 < 3000]
# },
}
# test_cases_raw
def tc_filter_in_graph(G, kanji_list):
tc_filtered = []
for k in kanji_list:
tk = get_key(k, 'kanji')
if tk in G:
tc_filtered.append(tk)
continue
tr = get_key(k, 'radical')
if tr in G:
tc_filtered.append(tr)
continue
return tc_filtered
def filter_test_cases_raw(G, test_cases_raw):
test_cases_clean = {}
for tc_name, tc in test_cases_raw.items():
MOrig = tc_filter_in_graph(G, tc['MOrig'])
MDest = tc_filter_in_graph(G, tc['MDest'])
test_cases_clean[tc_name] = {
'MOrig': MOrig,
'MDest': MDest,
}
return test_cases_clean
test_cases_clean = filter_test_cases_raw(g_curr, test_cases_raw)
def find_path(G: nx.Graph, MOrig: List, MDest: List, method='brute_force') -> nx.Graph:
if method == 'brute_force':
return find_path_bf(G, MOrig, MDest)
elif method == 'steiner_tree':
return find_path_steiner(G, MOrig, MDest, is_mcg=False)
elif method == 'steiner_tree_precompute':
return find_path_steiner(G, MOrig, MDest, is_mcg=True)
elif method == 'astar_common_neighbor':
return find_path_astar(G, MOrig, MDest, common_neighbor)
elif method == 'astar_jaccard':
return find_path_astar(G, MOrig, MDest, jaccard_similarity)
elif method == 'astar_0':
return find_path_astar(G, MOrig, MDest, lambda x, y: 0)
else:
raise ValueError(f"method {method} is not valid")
###Output
_____no_output_____
###Markdown
Testing
###Code
def get_result_accuracy(G0, Gt):
common_nodes = len(G0.nodes() & Gt.nodes())
G0_nodes = len(G0.nodes())
return common_nodes / G0_nodes
def get_results_kanjigen(G: nx.Graph, test_cases_clean: dict, algo_list: List):
results = {tc_name: dict() for tc_name in test_cases_clean}
for tc_name, tc in test_cases_clean.items():
MOrig = tc['MOrig']
MDest = tc['MDest']
if len(MOrig) == 0 or len(MDest) == 0:
continue
print(f"""
################
len(MOrig) == {len(MOrig)}
len(MDest) == {len(MDest)}
################
""")
# get time
for algo in algo_list:
print(f"algo: {algo}")
print(f"tc: {tc_name} of {len(test_cases_clean)}")
start = time.time()
graph = find_path(G, MOrig, MDest, algo)
end = time.time()
print(f"finish at {datetime.datetime.now()} after {end - start} seconds")
print("==============")
results[tc_name][algo] = {'graph': graph, 'time': (end - start)}
# get used_vertices
for algo in algo_list:
Gt = results[tc_name][algo]['graph']
results[tc_name][algo]["used_vertices"] = len(Gt) - len(MOrig) - len(MDest)
print("----------------------")
print(results)
print("----------------------")
return results
algo_list = ['brute_force', 'astar_0', 'astar_common_neighbor', 'astar_jaccard', 'steiner_tree', 'steiner_tree_precompute']
# algo_list = ['brute_force', 'astar_0', 'astar_common_neighbor', 'astar_jaccard', 'steiner_tree_precompute']
results = get_results_kanjigen(g_curr, test_cases_clean, algo_list)
pp(results)
###Output
_____no_output_____
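###Markdown
`get_result_accuracy` is defined above but not called in this run of the notebook. A plausible use (an assumption on my part) is to score how much of the brute-force result each algorithm reproduces:
###Code
# Illustrative sketch only: node overlap of each algorithm with the brute-force graph per test case.
for tc_name, res in results.items():
    if 'brute_force' not in res:
        continue
    base = res['brute_force']['graph']
    for algo, r in res.items():
        print(tc_name, algo, round(get_result_accuracy(base, r['graph']), 3))
###Output
_____no_output_____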
###Markdown
Analysis
###Code
results
###Output
_____no_output_____
###Markdown
General
###Code
def get_df_results(results, algo_list, metric=None):
df_results = {algo: dict() for algo in algo_list}
for tc_name, res in results.items():
for algo in algo_list:
if algo not in res:
continue
if tc_name not in df_results[algo]:
df_results[algo][tc_name] = dict()
            if metric is None:
df_results[algo][tc_name]['time'] = round(res[algo]['time'], 2)
df_results[algo][tc_name]['uv'] = res[algo]['used_vertices']
elif metric == 'time':
df_results[algo][tc_name] = round(res[algo]['time'], 2)
elif metric == 'uv':
df_results[algo][tc_name] = res[algo]['used_vertices']
return df_results
df_results = get_df_results(results, algo_list)
df_results
import pandas as pd
import numpy as np
df = pd.DataFrame(df_results)
df
# every algorithm in algo_list needs a colour here, otherwise the plots below raise a KeyError
y_color = {
    'brute_force': 'black',
    'astar_0': 'blue',
    'astar_common_neighbor': 'red',
    'astar_jaccard': 'yellow',
    'steiner_tree': 'purple',
    'steiner_tree_precompute': 'green',
}
###Output
_____no_output_____
###Markdown
Time
###Code
df_results = get_df_results(results, algo_list, metric='time')
df = pd.DataFrame(df_results)
df
ax = plt.gca()
for algo in algo_list:
df.plot(kind='line',ax=ax, y=algo, color=y_color[algo])
plt.show()
###Output
_____no_output_____
###Markdown
Used Vertices
###Code
df_results = get_df_results(results, algo_list, metric='uv')
df = pd.DataFrame(df_results)
df
ax = plt.gca()
for algo in algo_list:
df.plot(kind='line',ax=ax, y=algo, color=y_color[algo])
plt.show()
###Output
_____no_output_____
###Markdown
Demo
###Code
# What user input into application
# Demo: 1
# MOrig = ['姻','寥'] # matrimony, lonely
# MDest = ['姑','嘩','唹'] # mother-in-law, noisy, laugh
# What user input into application
# Demo: 2
MOrig = ['学','栄'] # learn, flourish
MDest = ['塾','術','宝', '輔'] # cram school, art, treasure, help
# Transformation: to differentiate 日-radical and 日-kanji
MOrigf = [f"{x}-kanji" for x in MOrig]
MDestf = [f"{x}-kanji" for x in MDest]
result = find_path(gg['kjg'], MOrigf, MDestf, 'brute_force')
visualize_result(
g_res=result,
kinputs=MOrigf,
koutputs=MDestf,
figsize=(14,14),
label_attr='visual'
)
###Output
_____no_output_____ |
02_cmd_ta_matching_golden_pandas.ipynb | ###Markdown
DATA QUALITY - Fruital census Library
###Code
import warnings
warnings.filterwarnings('ignore')
import pandas as pd
import numpy as np
from pyspark.sql.functions import pandas_udf
from pyspark.sql.functions import udf
from typing import List, Dict, Optional, Callable, Union
import re
import string
import pyspark
import shapely
import pandas as pd
import geopandas as gpd
import nltk
import unidecode
from shapely.geometry import Polygon, Point
from geopandas import GeoDataFrame
import os
import fastparquet
from nltk import ngrams
from ngram import NGram
from textdistance import damerau_levenshtein
from textdistance import jaro_winkler
from textdistance import sorensen_dice
from textdistance import jaccard
from textdistance import overlap
from textdistance import ratcliff_obershelp
###Output
_____no_output_____
###Markdown
Matching preparation functions **function tools**
###Code
def function_vectorizer(input_function: Callable) -> Callable:
"""This function takes an input funcion that works with arbitrary input
and vectorizes it so that the input function is applied to iterables
(such as columns of a Spark DataFrame).
The ouptut is always going to be pandas Series to ensure compliance
with Spark DataFrames.
Arguments:
input_function {Callable} -- The input function.
Returns:
Callable -- The function vectorized (i.e. acting on each element of an
iterable).
"""
def vectorized_function(*args):
return pd.Series([input_function(*tup) for tup in zip(*args)])
return vectorized_function
###Output
_____no_output_____
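###Markdown
A quick sketch of what `function_vectorizer` does, using a toy element-wise function (purely illustrative):
###Code
# Illustrative: turn an element-wise function into one that acts on whole iterables.
add = lambda a, b: a + b
vec_add = function_vectorizer(add)
print(vec_add([1, 2, 3], [10, 20, 30]))  # pandas Series with values 11, 22, 33
###Output
_____no_output_____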
###Markdown
**Text processing**
###Code
name_column_blacklist = ["cafe", "cf", "restaurant", "estaurant", "rest", "ag", "ste", "café", "snack", "hotel", "sarl", "rotisserie", "marrakech"]
name_column_regex_replace = {r"\'": "", r"\d{5}": "", r"\s+": " "}
address_column_blacklist = []
address_column_regex_replace = {r"\'": "", r"\s+": " ", "avenu ": "av ", "boulevard ": "bd "}
# Snowball stemmer was chosen in favor of Porter Stemmer which is a bit more aggressive and tends to remove too much from a word
import nltk
from nltk.stem.snowball import SnowballStemmer
from nltk.tokenize import word_tokenize
from nltk.corpus import stopwords
nltk.download("punkt")
nltk.download("stopwords")
# unidecode is the library needed for ASCII folding
from unidecode import unidecode
import string
# Compact Language Detector v3 is a very fast and performant algorithm by Google for language detection: more info here: https://pypi.org/project/pycld3/
import re
import pyspark.sql.functions as F
from typing import List, Dict, Optional, Callable
from langdetect import detect
name_column_blacklist = ["cafe", "cf", "restaurant", "estaurant", "rest", "ag", "ste", "café", "snack", "hotel", "sarl", "rotisserie", "marrakech"]
name_column_regex_replace = {r"\'": "", r"\d{5}": "", r"\s+": " "}
address_column_blacklist = []
address_column_regex_replace = {r"\'": "", r"\s+": " ", "avenu ": "av ", "boulevard ": "bd "}
def make_text_prep_func(row, word_blacklist, regex_replace, colonne) :
try:
STOPWORDS_EN = stopwords.words("english")
STOPWORDS_FR = stopwords.words("french")
STEMMER_EN = SnowballStemmer(language='english')
STEMMER_FR = SnowballStemmer(language='french')
except:
nltk.download("punkt")
nltk.download("stopwords")
STOPWORDS_EN = stopwords.words("english")
STOPWORDS_FR = stopwords.words("french")
STEMMER_EN = SnowballStemmer(language='english')
STEMMER_FR = SnowballStemmer(language='french')
s=row[colonne]
if s is None or s=="":
return ""
# STOPWORDS_EN = ['i', 'me', 'my', 'myself', 'we', 'our', 'ours', 'ourselves', 'you', "you're", "you've", "you'll", "you'd", 'your', 'yours', 'yourself', 'yourselves', 'he', 'him', 'his', 'himself', 'she', "she's", 'her', 'hers', 'herself', 'it', "it's", 'its', 'itself', 'they', 'them', 'their', 'theirs', 'themselves', 'what', 'which', 'who', 'whom', 'this', 'that', "that'll", 'these', 'those', 'am', 'is', 'are', 'was', 'were', 'be', 'been', 'being', 'have', 'has', 'had', 'having', 'do', 'does', 'did', 'doing', 'a', 'an', 'the', 'and', 'but', 'if', 'or', 'because', 'as', 'until', 'while', 'of', 'at', 'by', 'for', 'with', 'about', 'against', 'between', 'into', 'through', 'during', 'before', 'after', 'above', 'below', 'to', 'from', 'up', 'down', 'in', 'out', 'on', 'off', 'over', 'under', 'again', 'further', 'then', 'once', 'here', 'there', 'when', 'where', 'why', 'how', 'all', 'any', 'both', 'each', 'few', 'more', 'most', 'other', 'some', 'such', 'no', 'nor', 'not', 'only', 'own', 'same', 'so', 'than', 'too', 'very', 's', 't', 'can', 'will', 'just', 'don', "don't", 'should', "should've", 'now', 'd', 'll', 'm', 'o', 're', 've', 'y', 'ain', 'aren', "aren't", 'couldn', "couldn't", 'didn', "didn't", 'doesn', "doesn't", 'hadn', "hadn't", 'hasn', "hasn't", 'haven', "haven't", 'isn', "isn't", 'ma', 'mightn', "mightn't", 'mustn', "mustn't", 'needn', "needn't", 'shan', "shan't", 'shouldn', "shouldn't", 'wasn', "wasn't", 'weren', "weren't", 'won', "won't", 'wouldn', "wouldn't"]
# STOPWORDS_FR = ['au', 'aux', 'avec', 'ce', 'ces', 'dans', 'de', 'des', 'du', 'elle', 'en', 'et', 'eux', 'il', 'ils', 'je', 'la', 'le', 'les', 'leur', 'lui', 'ma', 'mais', 'me', 'même', 'mes', 'moi', 'mon', 'ne', 'nos', 'notre', 'nous', 'on', 'ou', 'par', 'pas', 'pour', 'qu', 'que', 'qui', 'sa', 'se', 'ses', 'son', 'sur', 'ta', 'te', 'tes', 'toi', 'ton', 'tu', 'un', 'une', 'vos', 'votre', 'vous', 'c', 'd', 'j', 'l', 'à', 'm', 'n', 's', 't', 'y', 'été', 'étée', 'étées', 'étés', 'étant', 'étante', 'étants', 'étantes', 'suis', 'es', 'est', 'sommes', 'êtes', 'sont', 'serai', 'seras', 'sera', 'serons', 'serez', 'seront', 'serais', 'serait', 'serions', 'seriez', 'seraient', 'étais', 'était', 'étions', 'étiez', 'étaient', 'fus', 'fut', 'fûmes', 'fûtes', 'furent', 'sois', 'soit', 'soyons', 'soyez', 'soient', 'fusse', 'fusses', 'fût', 'fussions', 'fussiez', 'fussent', 'ayant', 'ayante', 'ayantes', 'ayants', 'eu', 'eue', 'eues', 'eus', 'ai', 'as', 'avons', 'avez', 'ont', 'aurai', 'auras', 'aura', 'aurons', 'aurez', 'auront', 'aurais', 'aurait', 'aurions', 'auriez', 'auraient', 'avais', 'avait', 'avions', 'aviez', 'avaient', 'eut', 'eûmes', 'eûtes', 'eurent', 'aie', 'aies', 'ait', 'ayons', 'ayez', 'aient', 'eusse', 'eusses', 'eût', 'eussions', 'eussiez', 'eussent']
stop_words = STOPWORDS_EN + word_blacklist
stemmer = STEMMER_EN
s = s.lower()
# check if the language is French
    s_lang = detect(s)
    if s_lang == "fr":
        stop_words = STOPWORDS_FR + word_blacklist
        stemmer = STEMMER_FR
s_clean = s.translate(str.maketrans(string.punctuation, ' ' * len(string.punctuation)))
s_tokens = word_tokenize(s_clean)
s_tokens_no_stop = [word for word in s_tokens if word not in stop_words]
s_tokens_stemmed = [stemmer.stem(word) for word in s_tokens_no_stop]
s_ascii = unidecode(" ".join(s_tokens_stemmed))
for regex, replace in regex_replace.items():
s_ascii = re.sub(regex, replace, s_ascii)
return(s_ascii.strip())
address_column_blacklist
###Output
_____no_output_____
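###Markdown
A minimal sketch of the text-preparation function on a hand-made row (the sample strings below are hypothetical, not taken from the census data):
###Code
# Illustrative: the function expects a row-like mapping and the name of the column to clean.
sample_row = {"Nom": "Café Restaurant L'Étoile 16000", "Adresse": "Boulevard des Martyrs"}
print(make_text_prep_func(sample_row, name_column_blacklist, name_column_regex_replace, "Nom"))
print(make_text_prep_func(sample_row, address_column_blacklist, address_column_regex_replace, "Adresse"))
###Output
_____no_output_____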
###Markdown
**geospatial function**
###Code
def haversine_distance(row):
longit_a=row.R_location_lon
latit_a=row.R_location_lat
longit_b=row.L_LONGITUDE
latit_b=row.L_LATITUDE
# Transform to radians
longit_a, latit_a, longit_b, latit_b = map(np.radians, [longit_a, latit_a, longit_b, latit_b])
dist_longit = longit_b - longit_a
dist_latit = latit_b - latit_a
# Calculate area
area = np.sin(dist_latit/2)**2 + np.cos(latit_a) * np.cos(latit_b) * np.sin(dist_longit/2)**2
# Calculate the central angle
central_angle = 2 * np.arcsin(np.sqrt(area))
# central_angle = 2 * np.arctan2(np.sqrt(area), np.sqrt(1-area))
radius = 6371000
# Calculate Distance
distance = central_angle * radius
return abs(round(distance, 2))
# haversine_distance_sdf = F.pandas_udf(function_vectorizer(haversine_distance),"double")
def sdf_to_gdf(sdf: pyspark.sql.dataframe.DataFrame,
longitude: str = "longitude",
latitude: str = "latitude",
crs: str = "epsg:4326"
) -> GeoDataFrame:
pdf = sdf.toPandas()
gdf = gpd.GeoDataFrame(pdf, geometry=gpd.points_from_xy(pdf[longitude], pdf[latitude]), crs=crs)
return(gdf)
###Output
_____no_output_____
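###Markdown
A small sketch of the haversine distance on hand-picked coordinates (the two hypothetical points below are roughly a kilometre apart and only meant to exercise the function):
###Code
# Illustrative: the function reads the R_location_* / L_* fields by attribute, so a pandas Series works.
demo_row = pd.Series({
    "R_location_lon": 4.05, "R_location_lat": 36.71,  # hypothetical TripAdvisor point
    "L_LONGITUDE": 4.06, "L_LATITUDE": 36.71,         # hypothetical customer point
})
print(haversine_distance(demo_row), "metres")
###Output
_____no_output_____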
###Markdown
**text similarity**
###Code
def compound_similarity(row,col1,col2):
s1 = row[col1]
s2 = row[col2]
if s1 is None:
s1 = ""
if s2 is None:
s2 = ""
if s1 == "" and s2 == "":
return 0.
scores = [
damerau_levenshtein.normalized_similarity(s1, s2),
jaro_winkler.normalized_similarity(s1, s2),
sorensen_dice.normalized_similarity(s1, s2),
jaccard.normalized_similarity(s1, s2),
overlap.normalized_similarity(s1, s2),
ratcliff_obershelp.normalized_similarity(s1, s2),
NGram.compare(s1, s2, N=2)
]
return np.mean(scores)
###Output
_____no_output_____
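###Markdown
A quick sketch of the compound string similarity on a hand-made pair of names (hypothetical values, chosen only to show the blended score):
###Code
# Illustrative: the score averages several textdistance similarities plus a bigram NGram comparison.
demo = pd.Series({"a": "pizzeria el bahdja", "b": "pizzeria bahja"})
print(compound_similarity(demo, "a", "b"))
###Output
_____no_output_____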
###Markdown
Data: CMD Golden **cmd dataset**
###Code
cmd = pd.read_excel("D:/data_quality/data/customer_invoice_tizi_ouzou.xlsx") [["Client", "LONGITUDE", "LATITUDE", "Nom", "Adresse"]]
cmd=cmd.rename(columns={"Client":"CUSTOMER_COD"})
cmd
###Output
_____no_output_____
###Markdown
**cmd golden Id**
###Code
cmd_golden_uri="C:/Users/Salif SAWADOGO/OneDrive - EQUATORIAL COCA-COLA BOTTLING COMPANY S.L/dynamic segmentation/matching/output/horeca_tz_customer_subset.csv"
cmd_golden_ids= pd.read_csv(cmd_golden_uri) [["CUSTOMER_COD"]]
cmd_golden_ids
###Output
_____no_output_____
###Markdown
**merge datasets**
###Code
cmd_golden = cmd.merge(cmd_golden_ids, on="CUSTOMER_COD")
cmd_golden
###Output
_____no_output_____
###Markdown
**CLEAN string for Analysis**
###Code
cmd_golden["ADRESSE_CLEAN"]=cmd_golden.apply(lambda p:make_text_prep_func(p, address_column_blacklist, address_column_regex_replace,"Adresse"),axis=1)
cmd_golden["NOM_CLEAN"]=cmd_golden.apply(lambda p:make_text_prep_func(p, name_column_blacklist, name_column_regex_replace,"Nom"),axis=1)
###Output
_____no_output_____
###Markdown
Data: TripAdvisor **Reading**
###Code
tripadvisor_data_uri ="D:/dynamic segmentation/data_acquisition/TripAdvisor/code/output/ta_combined_l3_Algeria_prepped.parquet"
ta=pd.read_parquet(tripadvisor_data_uri)
ta
###Output
_____no_output_____
###Markdown
**clean addresses and names**
###Code
ta["name_CLEAN"]=ta.apply(lambda p:make_text_prep_func(p, name_column_blacklist, name_column_regex_replace,"name"),axis=1)
ta["address_CLEAN"]=ta.apply(lambda p:make_text_prep_func(p, address_column_blacklist, address_column_regex_replace,'address'),axis=1)
###Output
_____no_output_____
###Markdown
similarity analysis **cross join TripAdvisor data and cmd golden**
###Code
def match_join(l_sdf,
l_id,
l_lon,
l_lat,
l_name,
l_addr,
r_sdf,
r_id,
r_lon,
r_lat,
r_name,
r_addr,
distance_threshold_m,
minimal = True
):
l_slice = l_sdf[[l_id, l_lon, l_lat, l_name, l_addr,"Nom", "Adresse"]]
r_slice = r_sdf[[r_id, r_lon, r_lat, r_name, r_addr,"name", "address"]]
l_slice.columns= "L_"+l_slice.columns
r_slice.columns = "R_"+ r_slice.columns
l_slice['key'] = 1
r_slice['key'] = 1
# to obtain the cross join we will merge on
# the key and drop it.
    inner_joined = l_slice.merge(r_slice, on='key').drop(columns="key")
# l_joined = l_slice.join(inner_joined, l_slice.columns)
return(inner_joined)
matched = match_join(cmd_golden, "CUSTOMER_COD", "LONGITUDE", "LATITUDE", "NOM_CLEAN", "ADRESSE_CLEAN", ta, "id", "location_lon", "location_lat", "name_CLEAN", "address_CLEAN", 2000)
matched.nunique()
###Output
_____no_output_____
###Markdown
**Compute distance between TripAdvisor outlets and cmd golden outlets**
###Code
matched["dist_m"]=matched.apply(lambda p:haversine_distance(p),axis=1)
matched.shape
#matched=matched.loc[matched["dist_m"]<=2000]
matched["dist_m"].hist()
distance_threshold_m=2000
###Output
_____no_output_____
###Markdown
**distance similarity**
###Code
matched["dist_similarity"] =(distance_threshold_m - matched["dist_m"])/distance_threshold_m
matched
###Output
_____no_output_____
###Markdown
**addresses and names similarities**
###Code
matched["name_similarity"]=matched.apply(lambda p:compound_similarity(p,"L_NOM_CLEAN","R_name_CLEAN"),axis=1)
matched["address_similarity"]=matched.apply(lambda p:compound_similarity(p,"L_ADRESSE_CLEAN","R_address_CLEAN"),axis=1)
###Output
_____no_output_____
###Markdown
**similarity overall**
###Code
matched["similarity"]= matched["name_similarity"]*0.15 + matched["dist_similarity"]*0.8+matched["address_similarity"]*0.05
###Output
_____no_output_____
###Markdown
**Rank by Customer ID**
###Code
matched["rank"]=matched.groupby(by="L_CUSTOMER_COD")["similarity"].rank("dense", ascending=False)
(matched["rank"]<=15).value_counts()
matched_filter=matched.loc[matched["rank"]<=15]
matched_filter.to_excel("C:/Users/Salif SAWADOGO/OneDrive - EQUATORIAL COCA-COLA BOTTLING COMPANY S.L/dynamic segmentation/matching/output/manual_match.xlsx")
matched.L_CUSTOMER_COD.nunique()
cmd_golden.CUSTOMER_COD.nunique()
#Marrakech city shape to filter for
tizi_ouzou = gpd.read_file(os.path.join("C:/Users/Salif SAWADOGO/OneDrive - EQUATORIAL COCA-COLA BOTTLING COMPANY S.L/dynamic segmentation/urbanicty/Tizi ouzou shapefile", "TZ.shp"))
sub_set = gpd.sjoin(tizi_ouzou,temp, op="intersects")
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
sub_set = gpd.sjoin(tizi_ouzou,ta_gdf, op="intersects")
sub_set.plot(ax=ax, color='darkred', lw=0.5)
tizi_ouzou.geometry.boundary.plot(color=None,edgecolor='k',linewidth = 1,ax=ax)
gpd.GeoDataFrame(sub_set,
geometry=gpd.points_from_xy(sub_set["location_lon"],sub_set["location_lat"]),
crs="epsg:4326").plot(ax=ax,marker="o",color="red")
#algeria=gpd.read_file("D:dynamic segmentation/algeria census/data/algeria_administrative_level_data/dza_admbnda_adm1_unhcr_20200120.shp")
data_fruital=algeria.set_index("ADM1_EN")
fruital=["Alger",'Tizi Ouzou','Boumerdes','Blida','Medea','Tipaza','Bouira',"Bordj Bou Arrer",'Ain-Defla','Djelfa','Ghardaia','Laghouat','Tamanrasset',"M'Sila",'Chlef','Ouargla']
data_fruital=data_fruital.loc[fruital]
data_fruital=data_fruital.reset_index()
from matplotlib import pyplot as plt
fig, ax = plt.subplots(figsize=(10, 10))
sub_set = gpd.sjoin(algeria,ta_gdf, op="intersects")
sub_set.plot(ax=ax, color='None', lw=0.5)
c=data_fruital.plot(column='ADM1_EN',
ax=ax,color="darkred")
algeria.geometry.boundary.plot(color=None,edgecolor='k',linewidth = 1,ax=ax)
sub_set2=sub_set.merge(temp, how='inner')
#gpd.GeoDataFrame(sub_set,
# geometry=gpd.points_from_xy(sub_set["location_lon"],sub_set["location_lat"]),
# crs="epsg:4326").plot(ax=ax,marker="o",color="red",column="id")
for x, y, label in zip(sub_set2.geometry.centroid.x, sub_set2.geometry.centroid.y, sub_set2["count poi horeca"]):
ax.annotate(label, xy=(x, y), xytext=(1, 1),textcoords="offset points")
temp=sub_set.groupby("ADM1_EN")['id'].count().\
reset_index().\
rename(columns={"id":"count poi horeca"}).\
sort_values(by="count poi horeca",ascending=False)
sub_set
ta_gdf = gpd.GeoDataFrame(ta, geometry=gpd.points_from_xy(ta["location_lon"], ta["location_lat"]), crs="epsg:4326")
sub_set.shape
import pandas
data = pandas.read_csv("D:/data_quality/data/customer_invoice_tizi_ouzou.csv")
data['CHANNEL_CUSTOM'] = data['Détail Canal'].replace(['Frui-ALIMENTATION GE','Frui-SUPERETTE ET LI'], 'AG')
data['CHANNEL_CUSTOM'] = data['CHANNEL_CUSTOM'].replace(['Frui-CREMERIE','Frui-RESTAURANT / RO' ,'Frui-CAFE/CAFETERIA/','Frui-NIGHT CLUB','Frui-HOTELS',], 'HORECA')
data['CHANNEL_CUSTOM'] = data['CHANNEL_CUSTOM'].replace(['Frui-FAST FOOD / PIZ', 'Frui-PIZZERIA'], 'SNACK')
data['CHANNEL_CUSTOM'] = data['CHANNEL_CUSTOM'].replace(['Frui-DOUCHE','Frui-BUREAUX DE TABA','Frui-MOUKASSIRAT',"Frui-PATISSERIES",'Frui-FOYER','Frui-CREMERIE',"Frui-LOISIR",'Frui-SALLE DES FETES','Frui-MDN',"Frui-loisir","Frui-ADMINISTRATION","Frui-CYBER CAFE"], 'OTHER')
data=data.loc[~data["Classification client"].isin(["Platinum","Prestigieux"])]
data=data.loc[~data["CHANNEL_CUSTOM"].isin(["AG"])]
pandas.crosstab(data["Classification client"], data.CHANNEL_CUSTOM, margins=True)
###Output
_____no_output_____ |
content/04. Not quite intelligent robots/04.1 Introducing program functions.ipynb | ###Markdown
1 Introduction to functions and robot control strategiesSensors are at the heart of robotics. A machine without sensors cannot be a robot in our terms. The human body is replete with sensors. Our five external senses – sight, hearing, touch, smell and taste – and internal sensing such as balance and proprioception (body awareness) are all marvellously sophisticated.For this week’s practical activities, we will be concerned with various techniques that can be used to allow a robot to use sensory information to control its actuators. We will investigate a progression of control strategies:1. dead reckoning – no sensor input2. reflex behaviour – sensors *linked directly* to motors according to the sense–act model3. deliberative behaviour – actuation depends on *reasoning* about sensor information and other knowledge, according to the sense–think–act model.The first control strategy, dead reckoning, is an ‘open-loop’ control approach, since it does not use sensor input.The second is an example of a ‘sense–act’ control strategy that you encountered earlier in the block; we will illustrate this control strategy using simulated implementations of simple Braitenberg vehicles.Finally, there is the most complex control strategy, in which the robot deliberates on the sensor inputs in the context of other knowledge using an approach we refer to as ‘sense–think–act’. This involves *reasoning* and corresponds more closely to the way humans solve complex problems and plan actions in the long and short term.But before we do that, we’ll have a look in a bit more detail at another powerful idea in computer programming: *functions*. You’ve already met some of these, but without much explanation. So now let’s introduce you to them for real. 1.1 Defining simple Python functionsMany of the programs we have used so far have been quite short with little, if any, reused code.As programs get larger, it is often convenient to encapsulate several lines of code within a *function*. Multiple lines of code within a function can then be called conveniently from a single statement whenever they are needed.Functions are very powerful, and if you have studied other programming courses then you may well be familiar with them.For our purposes, the following provides a very quick overview of some of the key behaviours of Python functions. Remember that this isn’t a Python programming module *per se*; rather, it’s a module where we explore how to use Python to get things done. What follows should be enough to get you started writing your own functions, without creating too many bad habits along the way.To see how we can create our own functions, let’s consider a really simple example: a function that just prints out the word *Hello*.The function definition has a very specific syntax:
###Code
def FUNCTION_NAME():
# ONE_OR_MORE_LINES_OF_CODE
pass
###Output
_____no_output_____
###Markdown
*A Python function requires at least one line of valid code (which does not include comments) in the function body. If we don’t know what lines of code we want just yet, the `pass` command is enough to create a valid program line that doesn’t actually have to do anything.* Here are some of the rules relating to the syntactic definition of a Python function:- the `FUNCTION_NAME` __MUST NOT__ contain any spaces or punctuation other than underscore (`_`) characters- the function name __MUST__ be followed by a pair of brackets (`()`), that may contain something (we’ll see what later), followed by a colon (`:`)- the body of the function __MUST__ be indented using space or tab characters; the level of indentation of the first line sets the effective ‘left-hand margin’ for the remaining lines of code in the function- the body of the function must include __AT LEAST__ one valid statement or line of code __EXCLUDING__ comments; if you don’t want the function to do anything, but need it as a placeholder, use `pass` as the single line of required code in the function body.It is good practice to annotate your function with a so-called ‘docstring’ (*documentation string*) providing a concise, imperative description of what the function does.
###Code
def FUNCTION_NAME():
""""Docstring containing a concise summary of the function behaviour."""
# ONE_OR_MORE_LINES_OF_CODE
pass
###Output
_____no_output_____
###Markdown
Run the following code cell to define a simple function that prints the message *Hello*:
###Code
def sayHello():
"""Print a hello message."""
print('Hello')
###Output
_____no_output_____
###Markdown
When we *call* the function, the code contained within the function body is executed.Run the following cell to call the function:
###Code
sayHello()
###Output
_____no_output_____
###Markdown
Functions can contain multiple lines of code, which means they can provide a convenient way of calling multiple lines of code from a single line of code. 1.2 Passing arguments into functionsFunctions can also be used to perform actions over one or more *arguments* passed into the function. For example, if you want to say hello to a specific person by name, we can pass their name into the function as an argument, and then use that argument within the body of the function.We’ll use a Python *f-string* as a convenient way of passing the variable value, by reference, into a string:
###Code
def sayHelloName(name):
"""Print a welcome message."""
print(f"Hello, {name}")
###Output
_____no_output_____
###Markdown
Let’s call that function to see how it behaves:
###Code
sayHelloName("Sam")
###Output
_____no_output_____
###Markdown
What happens if we forget to provide a name?
###Code
sayHelloName()
###Output
_____no_output_____
###Markdown
Oops... We have defined the argument as a *positional* argument that is REQUIRED if the function is to be called without raising an error.If we want to make the argument optional then we need to provide a *default value*:
###Code
def sayHelloName(name='there'):
"""Print a message to welcome someone by name."""
print(f"Hello, {name}")
sayHelloName()
###Output
_____no_output_____
###Markdown
If we want to have different behaviours depending on whether a value is passed for the name, then we can set a default such as `None` and then use a conditional statement to determine what to do based on the value that is presented:
###Code
def sayHelloName(name=None):
"""Print a message to welcome someone optionally by name."""
if name:
print(f"Hello, {name}")
else:
print("Hi there!")
sayHelloName()
###Output
_____no_output_____
###Markdown
Sometimes, we may want to get one or more values returned back from a function. We can do that using the `return` statement. The `return` statement essentially does two things when it is called: firstly, it terminates the function’s execution at that point; secondly, it optionally returns a value to the part of the program that called the function. 1.2.1 Activity – Defining a simple functionRun the following code cell to define a function that constructs a welcome message, displays the message *and returns it*:
###Code
def sayAndReturnHelloName(name):
"""Print a welcome message and return it."""
message = f"Hello, {name}"
print("Printing:", message)
return message
###Output
_____no_output_____
###Markdown
What do you think will happen when we call the function? *Write your prediction here about what you think will happen when the function is run here __before__ you run the code cell to call it.*
###Code
sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
Run the above cell to call the function. Did you get the response you expected? Discussion*Click on the arrow in the sidebar or run this cell to reveal my observations.* In the first case, a message was *printed* out in the cell’s print area. In the second case, the message was returned as the value returned by the function. As the function appeared on the last line of the code cell, its value was *displayed* as the cell output. 1.3 Setting variables to values returned from a functionAs you might expect, we can set a variable to the value returned from a function:
###Code
message = sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
If we view the value of that variable by running the following cell, what do you think you will see? Will the message be printed as well as displayed? *Write your prediction about what you think will happen when the function is run here __before__ you run the code cell to call it.*
###Code
message
###Output
_____no_output_____
###Markdown
Only the value returned from the function is displayed. The function is not called again, and so there is no instruction to *print* the message.To return multiple values, we still use a single `return` statement:
###Code
def sayAndReturnHelloName(name):
"""Print a welcome message and return it."""
message = f"Hello, {name}"
print("Printing:", message)
return (name, message)
sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
Finally, we can have multiple return statements in a function, but only one of them can be called from a single invocation of the function:
###Code
def sayHelloName(name=None):
"""Print a message to welcome someone optionally by name."""
if name:
print(f"Hello, {name}")
return (name, message)
else:
print("Hi there!")
return
print(sayHelloName(), 'and', sayHelloName("Sam"))
###Output
_____no_output_____
###Markdown
Generally, it is *not* good practice to return different sorts of object from different parts of the same function. If you try to assign the values returned from the function to a particular variable, that variable could end up being defined in different ways depending on which part of the function returned the value to it. There is quite a lot more to know about functions, particularly in respect of how variables inside the function relate to variables defined outside the function, a topic referred to as *variable scope*.Variables defined within a Python program are *scoped* according to where they are defined. Variables are *in scope* at a particular part of a program if they can be seen and referred to at that part of the program.In Python, variables defined _outside_ a function can typically be seen and referred to from within the function. Variables can also be passed into a function via the function’s arguments. But variables defined _inside_ the function _cannot_ be seen outside the function.We will not consider issues of scope any further here, but it is a _very_ powerful concept and one that any comprehensive introduction to programming should cover. 1.4 Using functions in robot control programsLet’s now consider how we might use our functions in a robot control program.We’ll start by considering the simple program we explored previously to make the robot trace out a square.If you recall, our first version of this program explicitly coded each turn and edge movement, and then we used a loop to repeat the same action several times.To get things set up correctly, load the simulator into the notebook in the usual way:
###Code
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
%load_ext nbtutor
###Output
_____no_output_____
###Markdown
Tweak the constant value settings in the program below until the robot approximately traces out the shape of a square.
###Code
%%sim_magic_preloaded -x 200 -y 500 -a 0 -p -C --pencolor red
SIDES = 4
# Try to draw a square, ish...
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 10
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 4
for side in range(SIDES):
# Go straight
# Set the left and right motors in a forward direction
# and run for the specified number of forward rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
# Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED),
TURN_ROTATIONS)
###Output
_____no_output_____
###Markdown
We can extract this code into a function that allows us to draw a square whenever we want. By adding an optional `side_length` parameter we can change the side length as required.Download the following program to the simulator and run it there.Can you modify the program to draw a third square with a size somewhere between the size of the first two squares?
###Code
%%sim_magic_preloaded -p --pencolor green -a 0
SIDES = 4
# Try to draw a square
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 10
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 6
def draw_square(side=STRAIGHT_ROTATIONS):
"""Draw square of specified side length."""
for side_number in range(SIDES):
# Go straight
# Set the left and right motors in a forward direction
# and run for the number of rotations specified by: side
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
# Use provided side length
side)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED),
TURN_ROTATIONS)
# Call the function to draw a small size square
draw_square(4)
# And an even smaller square
draw_square(2)
###Output
_____no_output_____
###Markdown
1.4.1 Optional activityCopy the code used to define the `draw_square()` function, and modify it so that it takes a second `turn` parameter that replaces the `TURN_ROTATIONS` value.Use the `turn` parameter to tune how far the robot turns at each corner.Then see if you can use a `for...in range(N)` loop to call the square-drawing function several times.Can you further modify the program so that the side length is increased each time the function is called by the loop?*Share your programs in your Cluster group forum.* 1.5 Previewing the simulated robot state from a notebook code cellAs well as viewing the sensor state via the simulator user interface, we can also review it in the notebook itself.We can create a reference to an object that uses some magic to grab a snapshot of the state of the robot in the default `roboSim` simulator.
###Code
robotState = %sim_robot_state
###Output
_____no_output_____
###Markdown
The `%sim_robot_state` needs to be run in a cell, before we can check that captured state in *another* cell.But once it has run, we can use the `robotState` variable to preview the snapshot of the state of the robot.The state data itself is returned as a Python dictionary which we can reference into to view specific data values. For example, the `x`-coordinate:
###Code
robotState.state["x"]
###Output
_____no_output_____
###Markdown
Or how about the `penDown` state?
###Code
robotState.state['penDown']
###Output
_____no_output_____
###Markdown
For a full list of possible values, we can review all the *keys* associated with the `robotState.state` dictionary:
###Code
robotState.state.keys()
###Output
_____no_output_____
###Markdown
Let’s use some magic to change the pen down state and then take another snapshot of the robot’s state:
###Code
%sim_magic --pendown
robotState = %sim_robot_state
robotState.state['penDown']
###Output
_____no_output_____
###Markdown
1.6 Reporting on robot state in the notebookHaving grabbed a snapshot of the robot’s state into the notebook, we can create a function to write reports in the notebook’s own code environment describing the state of the robot.For example, the `robotState.state` dictionary includes the following keys:- `left_light_raw / right_light_raw` for the raw RGB values- `left_light / right_light` for the `reflected_light_intensity` values- `left_light_pc / right_light_pc` for the `reflected_light_intensity_pc` values- `left_light_full / right_light_full` for the `full_reflected_light_intensity` values.We can create a simple function to display this values to make it easier for us to probe the state of the robot:
###Code
def report_robot_left_sensor(state):
"""Print a report of the left light sensor values."""
print(f"""
RGB: {state['left_light_raw']}
Reflected light intensity: {state['left_light']}
Reflected light intensity per cent: {state['left_light_pc']}
Full reflected light intensity (%): {state['left_light_full']}
""")
###Output
_____no_output_____
###Markdown
Let’s see how it works:
###Code
report_robot_left_sensor(robotState.state)
###Output
_____no_output_____
###Markdown
1. Introduction to functions and robot control strategiesSensors are at the heart of robotics. A machine without sensors cannot be a robot in our terms. The human body is replete with sensors. Our five external senses – sight, hearing, touch, smell and taste – and internal sensing such as balance and proprioception (body awareness) are all marvellously sophisticated.For this week’s practical activities, we will be concerned with various techniques that can be used to allow a robot to use sensory information to control its actuators. We will investigate a progression of control strategies:1. dead reckoning – no sensor input2. reflex behaviour – sensors *linked directly* to motors according to the sense–act model3. deliberative behaviour – actuation depends on *reasoning* about sensor information and other knowledge, according to the sense–think–act model.The first control strategy, dead reckoning, is an ‘open-loop’ control approach, since it does not use sensor input.The second is an example of a ‘sense–act’ control strategy that you encountered earlier in the block; we will illustrate this control strategy using simulated implementations of simple Braitenberg vehicles.Finally, there is the most complex control strategy, in which the robot deliberates on the sensor inputs in the context of other knowledge using an approach we refer to as ‘sense–think–act’. This involves *reasoning* and corresponds more closely to the way humans solve complex problems and plan actions in the long and short term.But before we do that, we’ll have a look in a bit more detail at another powerful idea in computer programming: *functions*. You’ve already met some of these, but without much explanation. So now let’s introduce you to them for real. 1.1 Defining simple Python functionsMany of the programs we have used so far have been quite short with little, if any, reused code.As programs get larger, it is often convenient to encapsulate several lines of code within a *function*. Multiple lines of code within a function can then be called conveniently from a single statement whenever they are needed.Functions are very powerful, and if you have studied other programming courses then you may well be familiar with them.For our purposes, the following provides a very quick overview of some of the key behaviours of Python functions. Remember that this isn’t a Python programming module *per se*; rather, it’s a module where we explore how to use Python to get things done. What follows should be enough to get you started writing your own functions, without creating too many bad habits along the way.To see how we can create our own functions, let’s consider a really simple example: a function that just prints out the word *Hello*.The function definition has a very specific syntax:
###Code
def FUNCTION_NAME():
#ONE_OR_MORE_LINES_OF_CODE
pass
###Output
_____no_output_____
###Markdown
*A Python function requires at least one line of valid code (which does not include comments) in the function body. If we don't know what lines of code we want just yet, the `pass` command is enough to create a valid program line that doesn't actually have to do anything.* Here are some of the rules relating to the syntactic definition of a Python function:- the `FUNCTION_NAME` __MUST NOT__ contain any spaces or punctuation other than underscore (`_`) characters- the function name __MUST__ be followed by a pair of brackets (`()`), that may contain something (we’ll see what later), followed by a colon (`:`)- the body of the function __MUST__ be indented using space or tab characters; the level of indentation of the first line sets the effective ‘left-hand margin’ for the remaining lines of code in the function- the body of the function must include __AT LEAST__ one valid statement or line of code __EXCLUDING__ comments; if you don’t want the function to do anything, but need it as a placeholder, use `pass` as the single line of required code in the function body.It is good practice to annotate your function with a so-called ‘docstring’ (*documentation string*) providing a concise, imperative description of what the function does.
###Code
def FUNCTION_NAME():
""""Docstring containing a concise summary of the function behaviour."""
#ONE_OR_MORE_LINES_OF_CODE
pass
###Output
_____no_output_____
###Markdown
Run the following code cell to define a simple function that prints the message *Hello*:
###Code
def sayHello():
"""Print a hello message."""
print('Hello')
###Output
_____no_output_____
###Markdown
When we *call* the function, the code contained within the function body is executed.Run the following cell to call the function:
###Code
sayHello()
###Output
_____no_output_____
###Markdown
Functions can contain multiple lines of code, which means they can provide a convenient way of calling multiple lines of code from a single line of code. 1.2 Passing arguments into functionsFunctions can also be used to perform actions over one or more *arguments* passed into the function. For example, if you want to say hello to a specific person by name, we can pass their name into the function as an argument, and then use that argument within the body of the function.We’ll use a Python *f-string* as a convenient way of passing the variable value, by reference, into a string:
###Code
def sayHelloName(name):
"""Print a welcome message."""
print(f"Hello, {name}")
###Output
_____no_output_____
###Markdown
Let’s call that function to see how it behaves:
###Code
sayHelloName("Sam")
###Output
_____no_output_____
###Markdown
What happens if we forget to provide a name?
###Code
sayHelloName()
###Output
_____no_output_____
###Markdown
Oops... We have defined the argument as a *positional* argument that is REQUIRED if the function is to be called without raising an error.If we want to make the argument optional then we need to provide a *default value*:
###Code
def sayHelloName(name='there'):
"""Print a message to welcome someone by name."""
print(f"Hello, {name}")
sayHelloName()
###Output
_____no_output_____
###Markdown
If we want to have different behaviours depending on whether a value is passed for the name, then we can set a default such as `None` and then use a conditional statement to determine what to do based on the value that is presented:
###Code
def sayHelloName(name=None):
"""Print a message to welcome someone optionally by name."""
if name:
print(f"Hello, {name}")
else:
print("Hi there!")
sayHelloName()
###Output
_____no_output_____
###Markdown
Sometimes, we may want to get one or more values returned back from a function. We can do that using the `return` statement. The `return` statement essentially does two things when it is called: firstly, it terminates the function’s execution at that point; secondly, it optionally returns a value to the part of the program that called the function. 1.2.1 Activity — Defining a simple functionRun the following code cell to define a function that constructs a welcome message, displays the message *and returns it*:
###Code
def sayAndReturnHelloName(name):
"""Print a welcome message and return it."""
message = f"Hello, {name}"
print("Printing:", message)
return message
###Output
_____no_output_____
###Markdown
What do you think will happen when we call the function? *Write your prediction here about what you think will happen when the function is run here __before__ you run the code cell to call it.*
###Code
sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
Run the above cell to call the function. Did you get the response you expected? Discussion*Click on the arrow in the sidebar or run this cell to reveal my observations.* In the first case, a message was *printed* out in the cell’s print area. In the second case, the message was returned as the value returned by the function. As the function appeared on the last line of the code cell, its value was *displayed* as the cell output. 1.3 Setting variables to values returned from a functionAs you might expect, we can set a variable to the value returned from a function:
###Code
message = sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
If we view the value of that variable by running the following cell, what do you think you will see? Will the message be printed as well as displayed? *Write your prediction about what you think will happen when the function is run here __before__ you run the code cell to call it.*
###Code
message
###Output
_____no_output_____
###Markdown
Only the value returned from the function is displayed. The function is not called again, and so there is no instruction to *print* the message.To return multiple values, we still use a single `return` statement:
###Code
def sayAndReturnHelloName(name):
"""Print a welcome message and return it."""
message = f"Hello, {name}"
print("Printing:", message)
return (name, message)
sayAndReturnHelloName('Sam')
###Output
_____no_output_____
###Markdown
Finally, we can have multiple return statements in a function, but only one of them can be called from a single invocation of the function:
###Code
def sayHelloName(name=None):
"""Print a message to welcome someone optionally by name."""
if name:
print(f"Hello, {name}")
return (name, message)
else:
print("Hi there!")
return
print(sayHelloName(), 'and', sayHelloName("Sam"))
###Output
_____no_output_____
###Markdown
Generally, it is *not* good practice to return different sorts of object from different parts of the same function. If you try to assign the values returned from the function to a particular variable, that variable could end up being defined in different ways depending on which part of the function returned the value to it. *There is quite a lot more to know about functions, particularly in respect of how variables inside the function relate to variables defined outside the function, a topic referred to as `variable scope`.**Variables defined within a Python are `scoped` according to where they are defined. Variables are `in scope` at a particular part of a program if they can be seen and referred to at that part of the program.**In Python, variables defined _outside_ a function can typically be seen and referred to from within the function. Variables can also be passed into a function via the function's arguments. But variables defined _inside_ the function _cannot_ be seen outside the function.**We will not consider issues of scope any further here, but it is a _very_ powerful concept and one that any comprehensive introduction to programming should cover.* 1.4 Using functions in robot control programsLet's now consider how we might use our our functions in a robot control program.We’ll start by considering the simple program we explored previously to make the robot trace out a square.If you recall, our first version of this program explicitly coded each turn and edge movement, and then we used a loop to repeat the same action several times.To get things set up correctly, load the simulator into the notebook in the usual way:
###Code
from nbev3devsim.load_nbev3devwidget import roboSim, eds
%load_ext nbev3devsim
%load_ext nbtutor
###Output
_____no_output_____
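###Markdown
*As a quick illustration of the variable scope behaviour described above (a variable defined outside a function can be read inside it, but a variable defined inside a function cannot be seen outside it), the following sketch uses purely illustrative names (`greeting` and `scope_demo()`):*
###Code
# Illustrative sketch of variable scope (names are made up for this example)
greeting = "Hello"  # defined outside the function, so visible inside it

def scope_demo():
    local_note = "only visible inside the function"  # defined inside the function
    print(greeting, "-", local_note)

scope_demo()
# Uncommenting the next line would raise a NameError,
# because local_note is not defined outside the function:
# print(local_note)
###Output
_____no_output_____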
###Markdown
Tweak the constant value settings in the program below until the robot approximately traces out the shape of a square.
###Code
%%sim_magic_preloaded -x 200 -y 500 -a 0 -p -C --pencolor red
SIDES = 4
# Try to draw a square, ish...
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 10
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 4
for side in range(SIDES):
#Go straight
# Set the left and right motors in a forward direction
# and run for the specified number of forward rotations
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
STRAIGHT_ROTATIONS)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED),
TURN_ROTATIONS)
###Output
_____no_output_____
###Markdown
We can extract this code into a function that allows us to draw a square whenever we want. By adding an optional `side` parameter we can change the side length as required.Download the following program to the simulator and run it there.Can you modify the program to draw a third square with a size somewhere between the sizes of the first two squares?
###Code
%%sim_magic_preloaded -p --pencolor green -a 0
SIDES = 4
# Try to draw a square
STEERING = -100
TURN_ROTATIONS = 1.6
TURN_SPEED = 10
STRAIGHT_SPEED_PC = SpeedPercent(40)
STRAIGHT_ROTATIONS = 6
def draw_square(side=STRAIGHT_ROTATIONS):
"""Draw square of specified side length."""
for side_number in range(SIDES):
#Go straight
# Set the left and right motors in a forward direction
# and run for the number of rotations specified by: side
tank_drive.on_for_rotations(STRAIGHT_SPEED_PC,
STRAIGHT_SPEED_PC,
#Use provided side length
side)
#Turn
# Set the robot to turn on the spot
# and run for a certain number of rotations *of the wheels*
tank_turn.on_for_rotations(STEERING,
SpeedPercent(TURN_SPEED),
TURN_ROTATIONS)
# Call the function to draw a small size square
draw_square(4)
# And an even smaller square
draw_square(2)
###Output
_____no_output_____
###Markdown
1.4.1 Optional activityCopy the code used to define the `draw_square()` function, and modify it so that it takes a second `turn` parameter that replaces the `TURN_ROTATIONS` value.Use the `turn` parameter to tune how far the robot turns at each corner.Then see if you can use a `for...in range(N)` loop to call the square-drawing function several times.Can you further modify the program so that the side length is increased each time the function is called by the loop?*Share your programs in your Cluster group forum.* 1.5 Previewing the simulated robot state from a notebook code cellAs well as viewing the sensor state via the simulator user interface, we can also review it in the notebook itself.We can create a reference to an object that uses some magic to grab a snapshot of the state of the robot in the default `roboSim` simulator.
###Code
robotState = %sim_robot_state
###Output
_____no_output_____
###Markdown
The `%sim_robot_state` magic needs to be run in one cell before we can check the captured state in *another* cell.But once it has run, we can use the `robotState` variable to preview the snapshot of the state of the robot.The state data itself is returned as a Python dictionary which we can index into to view specific data values. For example, the `x` coordinate:
###Code
robotState.state["x"]
###Output
_____no_output_____
###Markdown
Or how about the `penDown` state?
###Code
robotState.state['penDown']
###Output
_____no_output_____
###Markdown
For a full list of possible values, we can review all the *keys* associated with the `robotState.state` dictionary:
###Code
robotState.state.keys()
###Output
_____no_output_____
###Markdown
Let's use some magic to change the pen down state and then take another snapshot of the robot's state:
###Code
%sim_magic --pendown
robotState = %sim_robot_state
robotState.state['penDown']
###Output
_____no_output_____
###Markdown
1.6 Reporting on robot state in the notebookHaving grabbed a snapshot of the robot’s state into the notebook, we can create a function to write reports in the notebook’s own code environment describing the state of the robot.For example, the `robotState.state` dictionary includes the following keys:- `left_light_raw / right_light_raw` for the raw RGB values- `left_light / right_light` for the `reflected_light_intensity` values- `left_light_pc / right_light_pc` for the `reflected_light_intensity_pc` values, and- `left_light_full / right_light_full` for the `full_reflected_light_intensity` values.We can create a simple function to display these values to make it easier for us to probe the state of the robot:
###Code
def report_robot_left_sensor(state):
"""Print a report of the left light sensor values."""
print(f"""
RGB: {state['left_light_raw']}
Reflected light intensity: {state['left_light']}
Reflected light intensity per cent: {state['left_light_pc']}
Full reflected light intensity (%): {state['left_light_full']}
""")
###Output
_____no_output_____
###Markdown
Let's see how it works:
###Code
report_robot_left_sensor(robotState.state)
###Output
_____no_output_____ |
jupyter/annotation/english/graph-extraction/graph_extraction_explode_entities.ipynb | ###Markdown
To use the Merge Entities parameter we need to set the allowSparkContext parameter to true
###Code
spark = SparkSession.builder \
.appName("SparkNLP") \
.master("local[*]") \
.config("spark.driver.memory", "12G") \
.config("spark.serializer", "org.apache.spark.serializer.KryoSerializer") \
.config("spark.kryoserializer.buffer.max", "2000M") \
.config("spark.driver.maxResultSize", "0") \
.config("spark.jars", "jars/sparknlp.jar") \
.config("spark.executor.allowSparkContext", "true") \
.getOrCreate()
spark
from pyspark.sql.types import StringType
text = ['Peter Parker is a nice lad and lives in New York']
data_set = spark.createDataFrame(text, StringType()).toDF("text")
data_set.show(truncate=False)
###Output
+------------------------------------------------+
|text |
+------------------------------------------------+
|Peter Parker is a nice lad and lives in New York|
+------------------------------------------------+
###Markdown
Graph Extraction Graph Extraction will use pretrained POS, Dependency Parser and Typed Dependency Parser annotators when the pipeline does not have those defined
###Code
document_assembler = DocumentAssembler().setInputCol("text").setOutputCol("document")
tokenizer = Tokenizer().setInputCols(["document"]).setOutputCol("token")
word_embeddings = WordEmbeddingsModel.pretrained() \
.setInputCols(["document", "token"]) \
.setOutputCol("embeddings")
ner_tagger = NerDLModel.pretrained() \
.setInputCols(["document", "token", "embeddings"]) \
.setOutputCol("ner")
###Output
glove_100d download started this may take some time.
Approximate size to download 145.3 MB
[OK!]
ner_dl download started this may take some time.
Approximate size to download 13.6 MB
[OK!]
###Markdown
When setting ExplodeEntities to true, Graph Extraction will find paths between all possible pairs of entities. Since this sentence only has two entities, it will display the paths between PER and LOC. Each pair of entities will have a left path and a right path. By default the paths start from the root of the dependency tree, which in this case is the token *lad*:* Left path: lad-PER, will output the path between lad and Peter Parker* Right path: lad-LOC, will output the path between lad and New York
###Code
graph_extraction = GraphExtraction() \
.setInputCols(["document", "token", "ner"]) \
.setOutputCol("graph") \
.setMergeEntities(True) \
.setExplodeEntities(True)
graph_pipeline = Pipeline().setStages([document_assembler, tokenizer,
word_embeddings, ner_tagger,
graph_extraction])
###Output
_____no_output_____
###Markdown
The result dataset has a *graph* column with the paths between PER,LOC
###Code
graph_data_set = graph_pipeline.fit(data_set).transform(data_set)
graph_data_set.select("graph").show(truncate=False)
###Output
+--------------------------------------------------------------------------------------------------------------------------------------------------+
|graph |
+--------------------------------------------------------------------------------------------------------------------------------------------------+
|[[node, 23, 25, lad, [entities -> PER,LOC, path -> lad,Peter Parker,lad,New York, left_path -> lad,Peter Parker, right_path -> lad,New York], []]]|
+--------------------------------------------------------------------------------------------------------------------------------------------------+
|
apphub/NLP/imdb/imdb.ipynb | ###Markdown
Sentiment Prediction in IMDB Reviews using an LSTM
###Code
import tempfile
import os
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as fn
import fastestimator as fe
from fastestimator.dataset.data import imdb_review
from fastestimator.op.numpyop.univariate.reshape import Reshape
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
from fastestimator.backend import load_model
MAX_WORDS = 10000
MAX_LEN = 500
batch_size = 64
epochs = 10
max_train_steps_per_epoch = None
max_eval_steps_per_epoch = None
###Output
_____no_output_____
###Markdown
Building components Step 1: Prepare training & evaluation data and define a `Pipeline` We are loading the dataset from tf.keras.datasets.imdb, which contains movie reviews and sentiment scores. All the words have been replaced with integers that specify the popularity of the word in the corpus. To ensure all the sequences are of the same length, we need to pad the input sequences before defining the `Pipeline`.
###Code
train_data, eval_data = imdb_review.load_data(MAX_LEN, MAX_WORDS)
pipeline = fe.Pipeline(train_data=train_data,
eval_data=eval_data,
batch_size=batch_size,
ops=Reshape(1, inputs="y", outputs="y"))
###Output
_____no_output_____
###Markdown
Step 2: Create a `model` and FastEstimator `Network` First, we have to define the neural network architecture, and then pass the definition, associated model name, and optimizer into fe.build:
###Code
class ReviewSentiment(nn.Module):
def __init__(self, embedding_size=64, hidden_units=64):
super().__init__()
self.embedding = nn.Embedding(MAX_WORDS, embedding_size)
self.conv1d = nn.Conv1d(in_channels=64, out_channels=32, kernel_size=3, padding=1)
self.maxpool1d = nn.MaxPool1d(kernel_size=4)
self.lstm = nn.LSTM(input_size=125, hidden_size=hidden_units, num_layers=1)
self.fc1 = nn.Linear(in_features=hidden_units, out_features=250)
self.fc2 = nn.Linear(in_features=250, out_features=1)
def forward(self, x):
x = self.embedding(x)
x = x.permute((0, 2, 1))
x = self.conv1d(x)
x = fn.relu(x)
x = self.maxpool1d(x)
output, _ = self.lstm(x)
x = output[:, -1] # sequence output of only last timestamp
x = fn.tanh(x)
x = self.fc1(x)
x = fn.relu(x)
x = self.fc2(x)
x = fn.sigmoid(x)
return x
###Output
_____no_output_____
###Markdown
`Network` is the object that defines the whole training graph, including models, loss functions, optimizers etc. A `Network` can have several different models and loss functions (ex. GANs). `fe.Network` takes a series of operators, in this case just the basic `ModelOp`, loss op, and `UpdateOp` will suffice. It should be noted that "y_pred" is the key in the data dictionary which will store the predictions.
###Code
model = fe.build(model_fn=lambda: ReviewSentiment(), optimizer_fn="adam")
network = fe.Network(ops=[
ModelOp(model=model, inputs="x", outputs="y_pred"),
CrossEntropy(inputs=("y_pred", "y"), outputs="loss"),
UpdateOp(model=model, loss_name="loss")
])
###Output
_____no_output_____
###Markdown
Step 3: Prepare `Estimator` and configure the training loop `Estimator` is the API that wraps the `Pipeline`, `Network` and other training metadata together. `Estimator` also contains `Traces`, which are similar to the callbacks of Keras. In the training loop, we want to measure the validation loss and save the model that has the minimum loss. `BestModelSaver` is a convenient `Trace` to achieve this. Let's also measure accuracy over time using another `Trace`:
###Code
model_dir = tempfile.mkdtemp()
traces = [Accuracy(true_key="y", pred_key="y_pred"), BestModelSaver(model=model, save_dir=model_dir)]
estimator = fe.Estimator(network=network,
pipeline=pipeline,
epochs=epochs,
traces=traces,
max_train_steps_per_epoch=max_train_steps_per_epoch,
max_eval_steps_per_epoch=max_eval_steps_per_epoch)
###Output
_____no_output_____
###Markdown
Training
###Code
estimator.fit()
###Output
______ __ ______ __ _ __
/ ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____
/ /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/
/ __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / /
/_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/
FastEstimator-Start: step: 1; model_lr: 0.001;
###Markdown
Inferencing For inferencing, first we have to load the trained model weights. We previously saved model weights corresponding to our minimum loss, and now we will load the weights using `load_model()`:
###Code
model_name = 'model_best_loss.pt'
model_path = os.path.join(model_dir, model_name)
load_model(model, model_path)
###Output
Loaded model weights from /tmp/tmp69qyfzvm/model_best_loss.pt
###Markdown
Let's get some random sequence and compare the prediction with the ground truth:
###Code
selected_idx = np.random.randint(10000)
print("Ground truth is: ",eval_data[selected_idx]['y'])
###Output
Ground truth is: 0
###Markdown
Create a data dictionary for the inference. The `transform()` method of the Pipeline and Network applies all of their operations to the given data:
###Code
infer_data = {"x":eval_data[selected_idx]['x'], "y":eval_data[selected_idx]['y']}
data = pipeline.transform(infer_data, mode="infer")
data = network.transform(data, mode="infer")
###Output
_____no_output_____
###Markdown
Finally, print the inferencing results.
###Code
print("Prediction for the input sequence: ", np.array(data["y_pred"])[0][0])
###Output
Prediction for the input sequence: 0.30634004
###Markdown
Sentiment Prediction in IMDB Reviews using an LSTM
###Code
import tempfile
import os
import numpy as np
import torch.nn as nn
import torch.nn.functional as fn
import fastestimator as fe
from fastestimator.dataset.data import imdb_review
from fastestimator.op.numpyop.univariate.reshape import Reshape
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
from fastestimator.backend import load_model
MAX_WORDS = 10000
MAX_LEN = 500
batch_size = 64
epochs = 10
train_steps_per_epoch = None
eval_steps_per_epoch = None
###Output
_____no_output_____
###Markdown
Building components Step 1: Prepare training & evaluation data and define a `Pipeline` We are loading the dataset from tf.keras.datasets.imdb, which contains movie reviews and sentiment scores. All the words have been replaced with integers that specify the popularity of the word in the corpus. To ensure all the sequences are of the same length, we need to pad the input sequences before defining the `Pipeline`.
###Code
train_data, eval_data = imdb_review.load_data(MAX_LEN, MAX_WORDS)
pipeline = fe.Pipeline(train_data=train_data,
eval_data=eval_data,
batch_size=batch_size,
ops=Reshape(1, inputs="y", outputs="y"))
###Output
_____no_output_____
###Markdown
Step 2: Create a `model` and FastEstimator `Network` First, we have to define the neural network architecture, and then pass the definition, associated model name, and optimizer into fe.build:
###Code
class ReviewSentiment(nn.Module):
def __init__(self, embedding_size=64, hidden_units=64):
super().__init__()
self.embedding = nn.Embedding(MAX_WORDS, embedding_size)
self.conv1d = nn.Conv1d(in_channels=64, out_channels=32, kernel_size=3, padding=1)
self.maxpool1d = nn.MaxPool1d(kernel_size=4)
self.lstm = nn.LSTM(input_size=125, hidden_size=hidden_units, num_layers=1)
self.fc1 = nn.Linear(in_features=hidden_units, out_features=250)
self.fc2 = nn.Linear(in_features=250, out_features=1)
def forward(self, x):
x = self.embedding(x)
x = x.permute((0, 2, 1))
x = self.conv1d(x)
x = fn.relu(x)
x = self.maxpool1d(x)
output, _ = self.lstm(x)
x = output[:, -1] # sequence output of only last timestamp
x = fn.tanh(x)
x = self.fc1(x)
x = fn.relu(x)
x = self.fc2(x)
x = fn.sigmoid(x)
return x
###Output
_____no_output_____
###Markdown
`Network` is the object that defines the whole training graph, including models, loss functions, optimizers etc. A `Network` can have several different models and loss functions (ex. GANs). `fe.Network` takes a series of operators, in this case just the basic `ModelOp`, loss op, and `UpdateOp` will suffice. It should be noted that "y_pred" is the key in the data dictionary which will store the predictions.
###Code
model = fe.build(model_fn=lambda: ReviewSentiment(), optimizer_fn="adam")
network = fe.Network(ops=[
ModelOp(model=model, inputs="x", outputs="y_pred"),
CrossEntropy(inputs=("y_pred", "y"), outputs="loss"),
UpdateOp(model=model, loss_name="loss")
])
###Output
_____no_output_____
###Markdown
Step 3: Prepare `Estimator` and configure the training loop `Estimator` is the API that wraps the `Pipeline`, `Network` and other training metadata together. `Estimator` also contains `Traces`, which are similar to the callbacks of Keras. In the training loop, we want to measure the validation loss and save the model that has the minimum loss. `BestModelSaver` is a convenient `Trace` to achieve this. Let's also measure accuracy over time using another `Trace`:
###Code
model_dir = tempfile.mkdtemp()
traces = [Accuracy(true_key="y", pred_key="y_pred"), BestModelSaver(model=model, save_dir=model_dir)]
estimator = fe.Estimator(network=network,
pipeline=pipeline,
epochs=epochs,
traces=traces,
train_steps_per_epoch=train_steps_per_epoch,
eval_steps_per_epoch=eval_steps_per_epoch)
###Output
_____no_output_____
###Markdown
Training
###Code
estimator.fit()
###Output
______ __ ______ __ _ __
/ ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____
/ /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/
/ __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / /
/_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/
FastEstimator-Start: step: 1; model_lr: 0.001;
###Markdown
Inferencing For inferencing, first we have to load the trained model weights. We previously saved model weights corresponding to our minimum loss, and now we will load the weights using `load_model()`:
###Code
model_name = 'model_best_loss.pt'
model_path = os.path.join(model_dir, model_name)
load_model(model, model_path)
###Output
Loaded model weights from /tmp/tmp69qyfzvm/model_best_loss.pt
###Markdown
Let's get some random sequence and compare the prediction with the ground truth:
###Code
selected_idx = np.random.randint(10000)
print("Ground truth is: ",eval_data[selected_idx]['y'])
###Output
Ground truth is: 0
###Markdown
Create a data dictionary for the inference. The `transform()` method of the Pipeline and Network applies all of their operations to the given data:
###Code
infer_data = {"x":eval_data[selected_idx]['x'], "y":eval_data[selected_idx]['y']}
data = pipeline.transform(infer_data, mode="infer")
data = network.transform(data, mode="infer")
###Output
_____no_output_____
###Markdown
Finally, print the inferencing results.
###Code
print("Prediction for the input sequence: ", np.array(data["y_pred"])[0][0])
###Output
Prediction for the input sequence: 0.30634004
###Markdown
Sentiment Prediction in IMDB Reviews using an LSTM
###Code
import tempfile
import os
import numpy as np
import torch
import torch.nn as nn
import fastestimator as fe
from fastestimator.dataset.data import imdb_review
from fastestimator.op.numpyop.univariate.reshape import Reshape
from fastestimator.op.tensorop.loss import CrossEntropy
from fastestimator.op.tensorop.model import ModelOp, UpdateOp
from fastestimator.trace.io import BestModelSaver
from fastestimator.trace.metric import Accuracy
from fastestimator.backend import load_model
MAX_WORDS = 10000
MAX_LEN = 500
batch_size = 64
epochs = 10
train_steps_per_epoch = None
eval_steps_per_epoch = None
###Output
_____no_output_____
###Markdown
Building components Step 1: Prepare training & evaluation data and define a `Pipeline` We are loading the dataset from tf.keras.datasets.imdb, which contains movie reviews and sentiment scores. All the words have been replaced with integers that specify the popularity of the word in the corpus. To ensure all the sequences are of the same length, we need to pad the input sequences before defining the `Pipeline`.
###Code
train_data, eval_data = imdb_review.load_data(MAX_LEN, MAX_WORDS)
pipeline = fe.Pipeline(train_data=train_data,
eval_data=eval_data,
batch_size=batch_size,
ops=Reshape(1, inputs="y", outputs="y"))
###Output
_____no_output_____
###Markdown
Step 2: Create a `model` and FastEstimator `Network` First, we have to define the neural network architecture, and then pass the definition, associated model name, and optimizer into fe.build:
###Code
class ReviewSentiment(nn.Module):
def __init__(self, embedding_size=64, hidden_units=64):
super().__init__()
self.embedding = nn.Embedding(MAX_WORDS, embedding_size)
self.conv1d = nn.Conv1d(in_channels=64, out_channels=32, kernel_size=3, padding=1)
self.maxpool1d = nn.MaxPool1d(kernel_size=4)
self.lstm = nn.LSTM(input_size=125, hidden_size=hidden_units, num_layers=1)
self.fc1 = nn.Linear(in_features=hidden_units, out_features=250)
self.fc2 = nn.Linear(in_features=250, out_features=1)
def forward(self, x):
x = self.embedding(x)
x = x.permute((0, 2, 1))
x = self.conv1d(x)
x = torch.relu(x)
x = self.maxpool1d(x)
output, _ = self.lstm(x)
x = output[:, -1] # sequence output of only last timestamp
x = torch.tanh(x)
x = self.fc1(x)
x = torch.relu(x)
x = self.fc2(x)
x = torch.sigmoid(x)
return x
###Output
_____no_output_____
###Markdown
`Network` is the object that defines the whole training graph, including models, loss functions, optimizers etc. A `Network` can have several different models and loss functions (ex. GANs). `fe.Network` takes a series of operators, in this case just the basic `ModelOp`, loss op, and `UpdateOp` will suffice. It should be noted that "y_pred" is the key in the data dictionary which will store the predictions.
###Code
model = fe.build(model_fn=lambda: ReviewSentiment(), optimizer_fn="adam")
network = fe.Network(ops=[
ModelOp(model=model, inputs="x", outputs="y_pred"),
CrossEntropy(inputs=("y_pred", "y"), outputs="loss"),
UpdateOp(model=model, loss_name="loss")
])
###Output
_____no_output_____
###Markdown
Step 3: Prepare `Estimator` and configure the training loop `Estimator` is the API that wraps the `Pipeline`, `Network` and other training metadata together. `Estimator` also contains `Traces`, which are similar to the callbacks of Keras. In the training loop, we want to measure the validation loss and save the model that has the minimum loss. `BestModelSaver` is a convenient `Trace` to achieve this. Let's also measure accuracy over time using another `Trace`:
###Code
model_dir = tempfile.mkdtemp()
traces = [Accuracy(true_key="y", pred_key="y_pred"), BestModelSaver(model=model, save_dir=model_dir)]
estimator = fe.Estimator(network=network,
pipeline=pipeline,
epochs=epochs,
traces=traces,
train_steps_per_epoch=train_steps_per_epoch,
eval_steps_per_epoch=eval_steps_per_epoch)
###Output
_____no_output_____
###Markdown
Training
###Code
estimator.fit()
###Output
______ __ ______ __ _ __
/ ____/___ ______/ /_/ ____/____/ /_(_)___ ___ ____ _/ /_____ _____
/ /_ / __ `/ ___/ __/ __/ / ___/ __/ / __ `__ \/ __ `/ __/ __ \/ ___/
/ __/ / /_/ (__ ) /_/ /___(__ ) /_/ / / / / / / /_/ / /_/ /_/ / /
/_/ \__,_/____/\__/_____/____/\__/_/_/ /_/ /_/\__,_/\__/\____/_/
FastEstimator-Start: step: 1; logging_interval: 100; num_device: 0;
FastEstimator-Train: step: 1; loss: 0.6982045;
FastEstimator-Train: step: 100; loss: 0.69076145; steps/sec: 4.55;
FastEstimator-Train: step: 200; loss: 0.6970146; steps/sec: 5.49;
FastEstimator-Train: step: 300; loss: 0.67406845; steps/sec: 5.6;
FastEstimator-Train: step: 358; epoch: 1; epoch_time: 69.22 sec;
FastEstimator-BestModelSaver: Saved model to /var/folders/lx/drkxftt117gblvgsp1p39rlc0000gn/T/tmpds6dz9wa/model_best_loss.pt
FastEstimator-Eval: step: 358; epoch: 1; accuracy: 0.6826793843485801; loss: 0.59441286; min_loss: 0.59441286; since_best_loss: 0;
FastEstimator-Train: step: 400; loss: 0.579373; steps/sec: 5.39;
FastEstimator-Train: step: 500; loss: 0.5601772; steps/sec: 4.79;
FastEstimator-Train: step: 600; loss: 0.3669433; steps/sec: 5.2;
FastEstimator-Train: step: 700; loss: 0.5050458; steps/sec: 4.86;
FastEstimator-Train: step: 716; epoch: 2; epoch_time: 71.36 sec;
FastEstimator-BestModelSaver: Saved model to /var/folders/lx/drkxftt117gblvgsp1p39rlc0000gn/T/tmpds6dz9wa/model_best_loss.pt
FastEstimator-Eval: step: 716; epoch: 2; accuracy: 0.7672230652503793; loss: 0.48858097; min_loss: 0.48858097; since_best_loss: 0;
FastEstimator-Train: step: 800; loss: 0.43962425; steps/sec: 5.57;
FastEstimator-Train: step: 900; loss: 0.33729357; steps/sec: 5.71;
FastEstimator-Train: step: 1000; loss: 0.31596264; steps/sec: 5.23;
FastEstimator-Train: step: 1074; epoch: 3; epoch_time: 77.79 sec;
FastEstimator-BestModelSaver: Saved model to /var/folders/lx/drkxftt117gblvgsp1p39rlc0000gn/T/tmpds6dz9wa/model_best_loss.pt
FastEstimator-Eval: step: 1074; epoch: 3; accuracy: 0.8103186646433991; loss: 0.4192897; min_loss: 0.4192897; since_best_loss: 0;
FastEstimator-Train: step: 1100; loss: 0.33041656; steps/sec: 3.22;
FastEstimator-Train: step: 1200; loss: 0.41677344; steps/sec: 5.75;
FastEstimator-Train: step: 1300; loss: 0.43493804; steps/sec: 5.68;
FastEstimator-Train: step: 1400; loss: 0.26938343; steps/sec: 5.34;
FastEstimator-Train: step: 1432; epoch: 4; epoch_time: 64.02 sec;
FastEstimator-BestModelSaver: Saved model to /var/folders/lx/drkxftt117gblvgsp1p39rlc0000gn/T/tmpds6dz9wa/model_best_loss.pt
FastEstimator-Eval: step: 1432; epoch: 4; accuracy: 0.823845653587687; loss: 0.3995199; min_loss: 0.3995199; since_best_loss: 0;
FastEstimator-Train: step: 1500; loss: 0.323763; steps/sec: 5.76;
FastEstimator-Train: step: 1600; loss: 0.21561582; steps/sec: 5.84;
FastEstimator-Train: step: 1700; loss: 0.20746922; steps/sec: 5.59;
FastEstimator-Train: step: 1790; epoch: 5; epoch_time: 63.49 sec;
FastEstimator-Eval: step: 1790; epoch: 5; accuracy: 0.8291784088445697; loss: 0.4008124; min_loss: 0.3995199; since_best_loss: 1;
FastEstimator-Train: step: 1800; loss: 0.2219275; steps/sec: 5.12;
FastEstimator-Train: step: 1900; loss: 0.2188505; steps/sec: 5.11;
FastEstimator-Train: step: 2000; loss: 0.14373234; steps/sec: 5.53;
FastEstimator-Train: step: 2100; loss: 0.20883155; steps/sec: 1.96;
FastEstimator-Train: step: 2148; epoch: 6; epoch_time: 100.15 sec;
FastEstimator-Eval: step: 2148; epoch: 6; accuracy: 0.8313461955343594; loss: 0.41437832; min_loss: 0.3995199; since_best_loss: 2;
FastEstimator-Train: step: 2200; loss: 0.20082837; steps/sec: 5.64;
FastEstimator-Train: step: 2300; loss: 0.22870378; steps/sec: 5.65;
FastEstimator-Train: step: 2400; loss: 0.28569937; steps/sec: 5.7;
FastEstimator-Train: step: 2500; loss: 0.16878708; steps/sec: 5.69;
FastEstimator-Train: step: 2506; epoch: 7; epoch_time: 63.07 sec;
FastEstimator-Eval: step: 2506; epoch: 7; accuracy: 0.8314762627357468; loss: 0.42922923; min_loss: 0.3995199; since_best_loss: 3;
FastEstimator-Train: step: 2600; loss: 0.20338291; steps/sec: 5.77;
FastEstimator-Train: step: 2700; loss: 0.17639604; steps/sec: 5.68;
FastEstimator-Train: step: 2800; loss: 0.12155069; steps/sec: 5.7;
FastEstimator-Train: step: 2864; epoch: 8; epoch_time: 62.75 sec;
FastEstimator-Eval: step: 2864; epoch: 8; accuracy: 0.8294818989811402; loss: 0.46396694; min_loss: 0.3995199; since_best_loss: 4;
FastEstimator-Train: step: 2900; loss: 0.20103803; steps/sec: 5.34;
FastEstimator-Train: step: 3000; loss: 0.10518805; steps/sec: 5.71;
FastEstimator-Train: step: 3100; loss: 0.10425654; steps/sec: 5.64;
FastEstimator-Train: step: 3200; loss: 0.13740686; steps/sec: 5.5;
FastEstimator-Train: step: 3222; epoch: 9; epoch_time: 64.67 sec;
FastEstimator-Eval: step: 3222; epoch: 9; accuracy: 0.8254498157381314; loss: 0.5149529; min_loss: 0.3995199; since_best_loss: 5;
FastEstimator-Train: step: 3300; loss: 0.080922514; steps/sec: 5.77;
FastEstimator-Train: step: 3400; loss: 0.088989146; steps/sec: 5.41;
FastEstimator-Train: step: 3500; loss: 0.1620798; steps/sec: 5.3;
FastEstimator-Train: step: 3580; epoch: 10; epoch_time: 64.87 sec;
FastEstimator-Eval: step: 3580; epoch: 10; accuracy: 0.8214177324951225; loss: 0.5555562; min_loss: 0.3995199; since_best_loss: 6;
FastEstimator-Finish: step: 3580; model_lr: 0.001; total_time: 1124.45 sec;
###Markdown
Inferencing For inferencing, first we have to load the trained model weights. We previously saved model weights corresponding to our minimum loss, and now we will load the weights using `load_model()`:
###Code
model_name = 'model_best_loss.pt'
model_path = os.path.join(model_dir, model_name)
load_model(model, model_path)
###Output
_____no_output_____
###Markdown
Let's get some random sequence and compare the prediction with the ground truth:
###Code
selected_idx = np.random.randint(10000)
print("Ground truth is: ",eval_data[selected_idx]['y'])
###Output
Ground truth is: 1
###Markdown
Create a data dictionary for the inference. The `transform()` method of the Pipeline and Network applies all of their operations to the given data:
###Code
infer_data = {"x":eval_data[selected_idx]['x'], "y":eval_data[selected_idx]['y']}
data = pipeline.transform(infer_data, mode="infer")
data = network.transform(data, mode="infer")
###Output
_____no_output_____
###Markdown
Finally, print the inferencing results.
###Code
print("Prediction for the input sequence: ", np.array(data["y_pred"])[0][0])
###Output
Prediction for the input sequence: 0.91389465
|
Case Study/notebooks/EDA-3.ipynb | ###Markdown
Exploratory Data Analysis-III Task - Finding the Length of each Sequence and Creating a Length Column - Calculating GC Ratio and Creating GC Column - Finding the % of N and Creating % of N Column- Per Base Sequence Content Basic Questions - Maximum and minimum sequence length?- Returns the maximum and minimum Length with their index. - Maximum and minimum sequence GC Ratio?- Returns the maximum and minimum GC Ratio with their index. - GC distributions. - No. of Gen and bar plot
###Code
# library
import numpy as np # for linear algebra
import pandas as pd # for data mangement
import matplotlib.pyplot as plt # for data visualizations
import seaborn as sns # advance visualizations
# set figure styles
plt.rcParams['figure.figsize'] =(10,8)
plt.rcParams['font.size'] = 14
sns.set_style('whitegrid')
# dask
import dask.array as da # faster numpy calculations
import dask.dataframe as dd # for faster data management
###Output
_____no_output_____
###Markdown
Reading and Exploring Data
###Code
# read data
df = pd.read_csv('./data/3 Id Desc Gen Seq.csv')
# examine first few rows
df.head()
# columns names
df.columns
# observations X columns
df.shape
# basic info
df.info()
# set index as Id
df = df.set_index('Id')
df.head()
###Output
_____no_output_____
###Markdown
Missing Values
###Code
# missing values
sns.heatmap(df.isnull(), cmap='viridis');
###Output
_____no_output_____
###Markdown
Data Cleaning
###Code
# remove newline and tab characters from the sequences
df['Seq'] = df['Seq'].str.replace("\n", "")
df['Seq'] = df['Seq'].str.replace("\t", "")
# check length of first seq
len(df.Seq.iloc[0])
###Output
_____no_output_____
###Markdown
Task 1: Finding the Length of each Sequence and Creating a Length Column
###Code
# find the length
L = lambda seq: len(seq)
# test
L('TGGATTTGTTAGTGATACGAATCGCTTTATAATCATATGTTTCTC')
# seqLen apply
df['Len'] = df['Seq'].apply(lambda seq: len(seq))
df.head(10)
###Output
_____no_output_____
###Markdown
Task 2: Calculating GC Ratio and Creating GC Column GC Usefulness - GC-content (or guanine-cytosine content) is the percentage of nitrogenous bases in a DNA or RNA molecule that are either guanine (G) or cytosine (C)- In polymerase chain reaction (PCR) experiments, the GC-content of short oligonucleotides known as primers is often used to predict their annealing temperature to the template DNA.- A higher GC-content level indicates a relatively higher melting temperature.- DNA with low GC-content is less stable than DNA with high GC-content
###Code
def calculateGC(seq):
"""Returns GC Ratio of a sequence"""
return round((seq.count('G') + seq.count('C'))/len(seq) * 100, 2)
# test fun
calculateGC('TGGATTTGTTAGTGATACGAATCGCTTTATAATCATATGTTTCTCT')
# calculateGC apply
df['GC'] = df['Seq'].apply(calculateGC)
df.head(10)
###Output
_____no_output_____
###Markdown
Task 3: Finding the % of N and Creating % of N ColumnThe bases marked by N could not be identified due to the low quality of the DNA sequence.
###Code
counts = lambda seq: round((seq.count('N') /len(seq)) * 100,2)
# test
counts('TGGATTTGTTAGTGATACGANATCGCTTTATAATCATNATGTTTNCTCN')
# create the %N column using the percentage helper defined above
df['%N'] = df['Seq'].apply(counts)
df.head(10)
###Output
_____no_output_____
###Markdown
Task 4: Per Base Sequence Content
###Code
A = lambda seq: round((seq.count('A') /len(seq)) * 100,2)
# test
A('TGGATTTGTTAGTGATACGANATCGCTTTATAATCATNATGTTTNCTCN')
T = lambda seq: round((seq.count('T') /len(seq)) * 100,2)
# test
T('TGGATTTGTTAGTGATACGANATCGCTTTATAATCATNATGTTTNCTCN')
G = lambda seq: round((seq.count('G') /len(seq)) * 100,2)
# test
G('TGGATTTGTTAGTGATACGANATCGCTTTATAATCATNATGTTTNCTCN')
C = lambda seq: round((seq.count('C') /len(seq)) * 100,2)
# test
C('TGGATTTGTTAGTGATACGANATCGCTTTATAATCATNATGTTTNCTCN')
# create a column N
df['%A'] = df['Seq'].apply(lambda seq: round((seq.count('A') /len(seq)) * 100,2))
df.head()
# create a column N
df['%T'] = df['Seq'].apply(lambda seq: round((seq.count('T') /len(seq)) * 100,2))
df.head()
# create a column N
df['%G'] = df['Seq'].apply(lambda seq: round((seq.count('G') /len(seq)) * 100,2))
df.head()
# create a column N
df['%C'] = df['Seq'].apply(lambda seq: round((seq.count('C') /len(seq)) * 100,2))
df.head()
plt.plot(df['%A'], 'bo')
plt.plot(df['%T'], 'rs', linestyle='--')
plt.plot(df['%G'], 'gs')
plt.plot(df['%C'], 'co')
plt.xlabel('Base-pair Position')
plt.ylabel('% of Bases')
plt.legend(['%A', '%T', '%G', '%C'])
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Question 1: What is the maximum and minimum length of the sequences?
###Code
# length Summary
df.Len.describe()
# max length
df.Len.max()
# min length
df.Len.min()
# avg length
df.Len.mean()
###Output
_____no_output_____
###Markdown
Question 2: Returns the maximum and minimum length with their index.
###Code
# maximum length with index
df.loc[df.Len.idxmax()]
# minimum length with index
df.loc[df.Len.idxmin()]
sns.catplot(x='Gen', y='Len', data=df, kind='bar', aspect=3, palette='Set2')
###Output
_____no_output_____
###Markdown
Question 3: What is the maximum and minimum GC Ratio of sequence?
###Code
# GC Summary
df.GC.describe()
###Output
_____no_output_____
###Markdown
Question 4: Returns the maximum and minimum GC Ratio with their index.
###Code
# returns the maximum GC with index
df.loc[df.GC.idxmax()]
# returns the minimum GC with index
df.loc[df.GC.idxmin()]
###Output
_____no_output_____
###Markdown
Question 5: GC distributions
###Code
# GC distributions
sns.distplot(df['GC']);
# GC distributions
sns.distplot(df['GC'], kde=False);
###Output
_____no_output_____
###Markdown
Question 6: No. of Gen and bar plot
###Code
# Gen counts
df['Gen'].value_counts()
# bar chart
df['Gen'].value_counts().plot(kind='bar');
###Output
_____no_output_____ |
Ensayo 20200208/Lectura datos/Resultado 5.ipynb | ###Markdown
Title: Visualisation of the fundamental and harmonic frequencies of a test Test description Mass: 3 Test date: 08/02/2020 Sensor: right Signal: optical or inductive sensor
###Code
# Initialization
%matplotlib inline
import numpy as np
import matplotlib.pyplot as plt
from scipy.signal import find_peaks
# Function definitions
def fourier_spectrum( nsamples, data, deltat, logdb, power, rms ):
"""Given nsamples of real voltage data spaced deltat seconds apart,
find the spectrum of the data (its frequency components). If logdb,
return in dBV, otherwise linear volts. If power, return the power
spectrum, otherwise the amplitude spectrum. If rms, use RMS volts,
otherwise use peak-peak volts. Also return the number of frequency
samples, the frequency sample spacing and maximum frequency. Note:
The results from this agree pretty much with my HP 3582A FFT
Spectrum Analyzer,
although that has higher dynamic range than the 8 bit scope."""
data_freq = np.fft.rfft(data * np.hanning( nsamples ))
nfreqs = data_freq.size
data_freq = data_freq / nfreqs
ascale = 4
if( rms ):
ascale = ascale / ( 2 * np.sqrt(2) )
if( power ):
        spectrum = ( ascale * np.absolute(data_freq) )**2
if( logdb ):
spectrum = 10.0 * np.log10( spectrum )
else:
spectrum = ascale * np.absolute(data_freq)
if( logdb ):
            spectrum = 20.0 * np.log10( spectrum )
freq_step = 1.0 / (deltat * 2 * nfreqs);
max_freq = nfreqs * freq_step
return( nfreqs, freq_step, max_freq, spectrum )
###Output
_____no_output_____
###Markdown
Opening the test filesThe files opened correspond to the planned test configurations. Each of them corresponds to the following configuration,* Js11: inductive-sensor bench loaded with 1 kOhm* Js12: inductive-sensor bench configured at 2 kOhm* Js13: inductive-sensor bench configured at 4.7 kOhm* Js14: inductive-sensor bench configured at 10 kOhmEach set of tests corresponds to a given mass.For this report the following files are used,| Test no. | R configuration | Sensor | Selected mass | Record ||------|------|------|------|------|| 17 | Js11| left | 3 | Ensayo s_alim 10 CH...|| 18 | Js12| left | 3 | Ensayo s_alim 12 CH...|| 19 | Js13| left | 3 | Ensayo s_alim 14 CH...|| 20 | Js14| left | 3 | Ensayo s_alim 16 CH...|
###Code
# Test variable definitions
fecha = '08/02/2020'
rotor = 'rotor número 3'
sensor = 'izquierdo'
tipo_sensor = 'optico'
# Test file path definitions
ruta_ensayos = 'Registro osciloscopio' + '/'
Js11 = 'Ensayo s_alim 10'
Js12 = 'Ensayo s_alim 12'
Js13 = 'Ensayo s_alim 14'
Js14 = 'Ensayo s_alim 16'
canal_1 = 'CH1'
canal_2 = 'CH2'
extension = '.npz'
ensayo_1_1 = ruta_ensayos + Js11 + ' ' + canal_1 + extension
ensayo_1_2 = ruta_ensayos + Js11 + ' ' + canal_2 + extension
ensayo_2_1 = ruta_ensayos + Js12 + ' ' + canal_1 + extension
ensayo_2_2 = ruta_ensayos + Js12 + ' ' + canal_2 + extension
ensayo_3_1 = ruta_ensayos + Js13 + ' ' + canal_1 + extension
ensayo_3_2 = ruta_ensayos + Js13 + ' ' + canal_2 + extension
ensayo_4_1 = ruta_ensayos + Js14 + ' ' + canal_1 + extension
ensayo_4_2 = ruta_ensayos + Js14 + ' ' + canal_2 + extension
with np.load(ensayo_1_1) as archivo:
time_V1_1 = archivo['x']
V1_1 = archivo['y']
with np.load(ensayo_1_2) as archivo:
time_V1_2 = archivo['x']
V1_2 = archivo['y']
with np.load(ensayo_2_1) as archivo:
time_V2_1 = archivo['x']
V2_1 = archivo['y']
with np.load(ensayo_2_2) as archivo:
time_V2_2 = archivo['x']
V2_2 = archivo['y']
with np.load(ensayo_3_1) as archivo:
time_V3_1 = archivo['x']
V3_1 = archivo['y']
with np.load(ensayo_3_2) as archivo:
time_V3_2 = archivo['x']
V3_2 = archivo['y']
with np.load(ensayo_4_1) as archivo:
time_V4_1 = archivo['x']
V4_1 = archivo['y']
with np.load(ensayo_4_2) as archivo:
time_V4_2 = archivo['x']
V4_2 = archivo['y']
###Output
_____no_output_____
###Markdown
Description and selection of measurement channelsEach test consists of two measurements taken with a Rigol DS1052Z oscilloscope. Channel 1 holds the measurements taken on the optical circuit, and channel 2 holds the voltage measured on the balancing machine's inductive sensor.In this section you can choose between one measurement and the other.
###Code
# Choose between the two measurements: 'optico' or 'inductivo'
if (tipo_sensor == 'optico'):
tiempo_1 = time_V1_1
tiempo_2 = time_V2_1
tiempo_3 = time_V3_1
tiempo_4 = time_V4_1
V1 = V1_1
V2 = V2_1
V3 = V3_1
V4 = V4_1
elif (tipo_sensor == 'inductivo'):
tiempo_1 = time_V1_2
tiempo_2 = time_V2_2
tiempo_3 = time_V3_2
tiempo_4 = time_V4_2
V1 = V1_2
V2 = V2_2
V3 = V3_2
V4 = V4_2
###Output
_____no_output_____
###Markdown
Trimming the measured signalsThis step is necessary because the last values captured by the oscilloscope contain a stretch of data that does not belong to the acquisition.
###Code
# Trim the acquired signals
ini_cut = np.empty(1)
ini_cut = 0
fin_cut = np.empty(1)
# Number of points to trim from the end of the record
fin_cut = V1.size - 20
V1_cort = V1[ ini_cut: fin_cut]
tiempo_1_cort = tiempo_1[ ini_cut: fin_cut ]
V2_cort = V2[ ini_cut: fin_cut]
tiempo_2_cort = tiempo_2[ ini_cut: fin_cut ]
V3_cort = V3[ ini_cut: fin_cut]
tiempo_3_cort = tiempo_3[ ini_cut: fin_cut ]
V4_cort = V4[ ini_cut: fin_cut]
tiempo_4_cort = tiempo_4[ ini_cut: fin_cut ]
###Output
_____no_output_____
###Markdown
Creating variables for the Fourier transform calculationComputing the FFT of each measured signal requires variables such as the number of samples and the time interval between samples.
###Code
## Create the variables needed by the fourier_spectrum function
# For V1
nro_muestras_V1 = V1_cort.size
deltat_V1 = tiempo_1_cort[1] - tiempo_1_cort[0]
# For V2
nro_muestras_V2 = V2_cort.size
deltat_V2 = tiempo_2_cort[1] - tiempo_2_cort[0]
# For V3
nro_muestras_V3 = V3_cort.size
deltat_V3 = tiempo_3_cort[1] - tiempo_3_cort[0]
# For V4
nro_muestras_V4 = V4_cort.size
deltat_V4 = tiempo_4_cort[1] - tiempo_4_cort[0]
###Output
_____no_output_____
###Markdown
Calculating the transforms
###Code
# Fourier transform of V1
( nfreqs_V1, freq_step_V1, max_freq_V1, spectrum_V1 ) = fourier_spectrum( nro_muestras_V1, V1_cort, deltat_V1, False, False, True )
# Print the main figures of the V1 spectrum
print ("Freq step", freq_step_V1, "Max freq", max_freq_V1, "Freq bins",nfreqs_V1)
# Fourier transform of V2
( nfreqs_V2, freq_step_V2, max_freq_V2, spectrum_V2 ) = fourier_spectrum( nro_muestras_V2, V2_cort, deltat_V2, False, False, True )
# Print the main figures of the V2 spectrum
print ("Freq step", freq_step_V2, "Max freq", max_freq_V2, "Freq bins",nfreqs_V2)
# Fourier transform of V3
( nfreqs_V3, freq_step_V3, max_freq_V3, spectrum_V3 ) = fourier_spectrum( nro_muestras_V3, V3_cort, deltat_V3, False, False, True )
# Print the main figures of the V3 spectrum
print ("Freq step", freq_step_V3, "Max freq", max_freq_V3, "Freq bins",nfreqs_V3)
# Fourier transform of V4
( nfreqs_V4, freq_step_V4, max_freq_V4, spectrum_V4 ) = fourier_spectrum( nro_muestras_V4, V4_cort, deltat_V4, False, False, True )
# Print the main figures of the V4 spectrum
print ("Freq step", freq_step_V4, "Max freq", max_freq_V4, "Freq bins",nfreqs_V4)
###Output
Freq step 0.41768639227847015 Max freq 1706.6665988498291 Freq bins 4086
Freq step 0.41768639227847015 Max freq 1706.6665988498291 Freq bins 4086
Freq step 0.41768639227847015 Max freq 1706.6665988498291 Freq bins 4086
Freq step 0.41768639227847015 Max freq 1706.6665988498291 Freq bins 4086
###Markdown
Time-domain plot of all the tests
###Code
fig, axs = plt.subplots(2, 2, figsize=(15,15))
fig.suptitle('Ensayo sobre ' + rotor + ' medido en sensor ' + sensor + ' ' + tipo_sensor + ' fecha ' + fecha )
axs[0,0].plot(tiempo_1_cort, V1_cort)
axs[0,0].set_title('Tension ensayo Js11')
axs[0,0].grid(True)
#axs[0,1].set_xlim( 0, 100 )
axs[0,1].plot( tiempo_2_cort, V2_cort, 'tab:red')
axs[0,1].set_title('Tension ensayo Js12')
axs[0,1].grid(True)
axs[1,0].plot(tiempo_3_cort, V3_cort, 'tab:orange')
axs[1,0].set_title('Tension ensayo Js13')
axs[1,0].grid(True)
#axs[1,1].set_xlim( 0, 100 )
axs[1,1].plot( tiempo_4_cort, V4_cort, 'tab:green')
axs[1,1].set_title('Tension ensayo Js14')
axs[1,1].grid(True)
###Output
_____no_output_____
###Markdown
Spectrum plots of the measurements and peak detection
###Code
# Create the frequency axis for each spectrum plot
freqs_V1 = np.arange( 0, max_freq_V1, freq_step_V1 )
freqs_V2 = np.arange( 0, max_freq_V2, freq_step_V2 )
freqs_V3 = np.arange( 0, max_freq_V3, freq_step_V3 )
freqs_V4 = np.arange( 0, max_freq_V4, freq_step_V4 )
# Adjust the created frequency vectors to avoid length mismatches when the number of points is even or odd
freqs_V1 = freqs_V1[0:spectrum_V1.size]
freqs_V2 = freqs_V2[0:spectrum_V2.size]
freqs_V3 = freqs_V3[0:spectrum_V3.size]
freqs_V4 = freqs_V4[0:spectrum_V4.size]
# Find peaks in each spectrum using the threshold
umbral = 0.008
picos_V1, _ = find_peaks(spectrum_V1, height=umbral)
picos_V2, _ = find_peaks(spectrum_V2, height=umbral)
picos_V3, _ = find_peaks(spectrum_V3, height=umbral)
picos_V4, _ = find_peaks(spectrum_V4, height=umbral)
# Plot the spectra and detected peaks in subplots
fig, axs = plt.subplots(2, 2, figsize=(15,15))
fig.suptitle('Espectros de ' + rotor + ' medido en sensor ' + sensor + ' ' + tipo_sensor + ' fecha ' + fecha )
axs[0,0].set_xlim( 0, 100 )
axs[0,0].plot(freqs_V1, spectrum_V1)
axs[0,0].plot(freqs_V1[picos_V1], spectrum_V1[picos_V1], "x")
axs[0,0].plot(np.ones(spectrum_V1.size)*umbral, "--", color="gray")
axs[0,0].set_title('Espectro sensor ' + tipo_sensor + ' Js11')
axs[0,0].grid(True)
axs[0,1].set_xlim( 0, 100 )
axs[0,1].plot( freqs_V2, spectrum_V2, 'tab:red')
axs[0,1].plot(freqs_V2[picos_V2], spectrum_V2[picos_V2], "x")
axs[0,1].plot(np.ones(spectrum_V2.size)*umbral, "--", color="gray")
axs[0,1].set_title('Espectro sensor ' + tipo_sensor + ' Js12')
axs[0,1].grid(True)
axs[1,0].set_xlim( 0, 100 )
axs[1,0].plot(freqs_V3, spectrum_V3, 'tab:orange')
axs[1,0].plot(freqs_V3[picos_V3], spectrum_V3[picos_V3], "x")
axs[1,0].plot(np.ones(spectrum_V3.size)*umbral, "--", color="gray")
axs[1,0].set_title('Espectro sensor ' + tipo_sensor + ' Js13')
axs[1,0].grid(True)
axs[1,1].set_xlim( 0, 100 )
axs[1,1].plot( freqs_V4, spectrum_V4, 'tab:green')
axs[1,1].plot(freqs_V4[picos_V4], spectrum_V4[picos_V4], "x")
axs[1,1].plot(np.ones(spectrum_V4.size)*umbral, "--", color="gray")
axs[1,1].set_title('Espectro sensor ' + tipo_sensor + ' Js14')
axs[1,1].grid(True)
###Output
_____no_output_____
###Markdown
Comparison of the fundamental and harmonic peaksComparison of each peak in frequency and RMS
###Code
fig, axs = plt.subplots(1, 2, figsize=(15,4))
fig.suptitle('Picos de espectros en ensayos sobre ' + rotor + ' medido en sensor ' + sensor + ' ' + tipo_sensor + ' fecha ' + fecha )
axs[0].plot(freqs_V1[picos_V1[0]], spectrum_V1[picos_V1[0]], "x", color = "green", label = 'Js11')
axs[0].plot(freqs_V2[picos_V2[0]], spectrum_V2[picos_V2[0]], "o", color = "blue", label = 'Js12')
axs[0].plot(freqs_V3[picos_V3[0]], spectrum_V3[picos_V3[0]], "s", color = "red", label = 'Js13')
axs[0].plot(freqs_V4[picos_V4[0]], spectrum_V4[picos_V4[0]], "^", color = "black", label = 'Js14')
axs[0].set_title('Primer pico')
axs[0].grid(True)
axs[0].legend(loc="upper right")
axs[1].plot(freqs_V1[picos_V1[1]], spectrum_V1[picos_V1[1]], "x", color = "green", label = 'Js11')
axs[1].plot(freqs_V2[picos_V2[1]], spectrum_V2[picos_V2[1]], "o", color = "blue", label = 'Js12')
axs[1].plot(freqs_V3[picos_V3[1]], spectrum_V3[picos_V3[1]], "s", color = "red", label = 'Js13')
axs[1].plot(freqs_V4[picos_V4[1]], spectrum_V4[picos_V4[1]], "^", color = "black", label = 'Js14')
axs[1].set_title('Segundo pico')
axs[1].grid(True)
axs[1].legend(loc="upper right")
###Output
_____no_output_____
###Markdown
Peak values
###Code
print('Frecuencia 1er pico Js11', np.around(freqs_V1[picos_V1[0]], decimals = 3), '\nAmplitud 1er pico Js11', np.around(spectrum_V1[picos_V1[0]], decimals = 4 ), '\n')
print('Frecuencia 2do pico Js11', np.around(freqs_V1[picos_V1[1]], decimals = 3), '\nAmplitud 2do pico Js11', np.around(spectrum_V1[picos_V1[1]], decimals = 4 ), '\n')
#print('Frecuencia 3er pico Js11', np.around(freqs_V1[picos_V1[2]], decimals = 3), '\nAmplitud 3er pico Js11', np.around(spectrum_V1[picos_V1[2]], decimals = 4 ), '\n')
print('Frecuencia 1er pico Js12', np.around(freqs_V2[picos_V2[0]], decimals = 3), '\nAmplitud 1er pico Js12', np.around(spectrum_V2[picos_V2[0]], decimals = 4 ), '\n')
print('Frecuencia 2do pico Js12', np.around(freqs_V2[picos_V2[1]], decimals = 3), '\nAmplitud 2do pico Js12', np.around(spectrum_V2[picos_V2[1]], decimals = 4 ), '\n')
#print('Frecuencia 3er pico Js12', np.around(freqs_V2[picos_V2[2]], decimals = 3), '\nAmplitud 3er pico Js12', np.around(spectrum_V2[picos_V2[2]], decimals = 4 ), '\n')
print('Frecuencia 1er pico Js13', np.around(freqs_V3[picos_V3[0]], decimals = 3), '\nAmplitud 1er pico Js13', np.around(spectrum_V3[picos_V3[0]], decimals = 4 ), '\n')
print('Frecuencia 2do pico Js13', np.around(freqs_V3[picos_V3[1]], decimals = 3), '\nAmplitud 2do pico Js13', np.around(spectrum_V3[picos_V3[1]], decimals = 4 ), '\n')
#print('Frecuencia 3er pico Js13', np.around(freqs_V3[picos_V3[2]], decimals = 3), '\nAmplitud 3er pico Js13', np.around(spectrum_V3[picos_V3[2]], decimals = 4 ), '\n')
print('Frecuencia 1er pico Js14', np.around(freqs_V4[picos_V4[0]], decimals = 3), '\nAmplitud 1er pico Js14', np.around(spectrum_V4[picos_V4[0]], decimals = 4 ), '\n')
print('Frecuencia 2do pico Js14', np.around(freqs_V4[picos_V4[1]], decimals = 3), '\nAmplitud 2do pico Js14', np.around(spectrum_V4[picos_V4[1]], decimals = 4 ), '\n')
#print('Frecuencia 3er pico Js14', np.around(freqs_V4[picos_V4[2]], decimals = 3), '\nAmplitud 3er pico Js14', np.around(spectrum_V4[picos_V4[2]], decimals = 4 ), '\n')
###Output
Frecuencia 1er pico Js11 10.024
Amplitud 1er pico Js11 0.0824
Frecuencia 2do pico Js11 50.122
Amplitud 2do pico Js11 0.0173
Frecuencia 1er pico Js12 10.024
Amplitud 1er pico Js12 0.0827
Frecuencia 2do pico Js12 50.122
Amplitud 2do pico Js12 0.0173
Frecuencia 1er pico Js13 10.442
Amplitud 1er pico Js13 0.077
Frecuencia 2do pico Js13 50.122
Amplitud 2do pico Js13 0.0175
Frecuencia 1er pico Js14 10.024
Amplitud 1er pico Js14 0.0873
Frecuencia 2do pico Js14 50.122
Amplitud 2do pico Js14 0.0176
|
notebooks/EDI metadata.ipynb | ###Markdown
EDI metadata
###Code
from pprint import pprint
###Output
_____no_output_____
###Markdown
When loading data from an EDI file, all the information from the original file will be included in ``Site['datasource']``
###Code
import mtwaffle
edi = mtwaffle.read_edi('bwa2890.edi')
type(edi)
pprint(edi['datasource'])
###Output
{'EDI': {'DEFINEMEAS': {'MAXCHAN': 6,
'MAXMEAS': 99999,
'MAXRUN': 999,
'REFEASTING': nan,
'REFELEV': 0,
'REFLAT': -29.26535489,
'REFLONG': 136.6646507,
'REFNORTHING': nan,
'REFTYPE': 'CART',
'REFX': nan,
'REFY': nan,
'REFZONE': '53J',
'UNITS': 'M'},
'FREQ': [0.030518,
0.045776,
0.061035,
0.24414,
0.48828,
0.73242,
1.4648,
1.9531,
2.9297,
3.9062,
5.8594,
7.8125,
11.719,
15.625,
23.438,
31.25,
46.875,
62.5,
93.75,
125.0],
'HEAD': {'DATAID': 'bwa2890',
'EASTING': nan,
'ELEV': 0,
'LAT': -29.26535489,
'LONG': 136.6646507,
'NORTHING': nan,
'ZONE': '53J'},
'MTSECT': {'EX': 1003.001,
'EY': 1004.001,
'HX': 1001.001,
'HY': 1002.001,
'NFREQ': 20,
'RX': 1005.001,
'RY': 1006.001,
'SECTID': 114},
'ZXX.SDEV': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]),
'ZXX.VAR': [0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0],
'ZXXI': [-0.79361,
-0.94345,
-0.48594,
0.068462,
-0.43625,
-0.37074,
-0.50657,
-0.48218,
-0.37645,
-0.29186,
-0.20842,
0.27786,
0.34704,
0.63522,
0.95935,
1.2004,
1.2771,
1.2227,
0.9778,
0.2749],
'ZXXR': [-2.6522,
-3.4179,
-3.4198,
-3.1276,
-3.6339,
-3.6531,
-3.9647,
-4.0128,
-4.2103,
-4.3636,
-4.6565,
-4.7214,
-4.7217,
-4.4021,
-4.2256,
-3.8702,
-3.3716,
-2.9752,
-2.4059,
-2.3529],
'ZXY.SDEV': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]),
'ZXY.VAR': [0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0],
'ZXYI': [3.114,
3.07,
3.0008,
3.0562,
2.6207,
2.2388,
2.6642,
2.7633,
3.142,
3.3817,
3.8248,
4.027,
4.3743,
4.9527,
5.6541,
6.362,
7.5711,
8.9053,
10.924,
14.68],
'ZXYR': [3.8817,
5.6012,
7.1878,
10.282,
11.353,
11.625,
12.421,
12.714,
13.439,
14.214,
15.307,
15.653,
17.061,
17.162,
18.218,
19.14,
20.221,
21.311,
23.019,
24.179],
'ZYX.SDEV': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]),
'ZYX.VAR': [0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0],
'ZYXI': [-2.891,
-3.179,
-3.1703,
-2.9489,
-2.7661,
-2.7694,
-3.3196,
-3.5342,
-4.0317,
-4.1105,
-4.3562,
-4.4124,
-4.5648,
-5.0349,
-5.6596,
-6.2459,
-7.5514,
-8.9957,
-11.534,
-15.141],
'ZYXR': [-4.3553,
-5.7065,
-6.8215,
-9.2003,
-10.217,
-10.625,
-11.681,
-12.23,
-13.42,
-14.345,
-15.982,
-16.426,
-17.753,
-17.462,
-18.556,
-19.448,
-20.34,
-21.286,
-22.896,
-23.807],
'ZYY.SDEV': array([ 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0., 0.,
0., 0., 0., 0., 0., 0., 0.]),
'ZYY.VAR': [0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0,
0.0],
'ZYYI': [0.59825,
0.7519,
0.91463,
0.27891,
0.33967,
0.41102,
0.10122,
-0.011621,
-0.1323,
-0.31541,
-0.49362,
-0.099365,
-1.124,
-0.87051,
-1.195,
-1.3723,
-1.6244,
-1.7395,
-1.7048,
-1.7843],
'ZYYR': [2.3661,
2.6531,
2.2656,
2.9761,
3.2388,
3.3357,
3.7179,
3.6997,
3.7148,
3.6293,
3.7238,
3.6658,
3.5965,
3.2334,
3.0066,
2.6907,
2.2263,
1.8311,
1.2392,
0.93289]}}
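###Markdown
Individual values can be read straight out of this nested dictionary. A minimal example, using only keys that appear in the output above (``FREQ`` and the ``HEAD`` coordinates):
###Code
# Frequencies and site coordinates pulled from the parsed EDI metadata.
freqs = edi['datasource']['EDI']['FREQ']
site_lat = edi['datasource']['EDI']['HEAD']['LAT']
site_lon = edi['datasource']['EDI']['HEAD']['LONG']
len(freqs), site_lat, site_lon
###Output
_____no_output_____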
|
coursera/ml_yandex/course4/course4week4/check/first.ipynb | ###Markdown
For this assignment you will need data on the credit histories of clients of one bank. The fields in the data have the following meaning:
* LIMIT_BAL: size of the credit limit (including the client's family)
* SEX: client's sex (1 = male, 2 = female)
* EDUCATION: education (0 = doctorate, 1 = master's; 2 = bachelor's; 3 = school graduate; 4 = primary education; 5 = other; 6 = no data).
* MARRIAGE: (0 = refuse to answer; 1 = married; 2 = single; 3 = no data).
* AGE: age in years
* PAY_0 - PAY_6: history of past loan payments. PAY_6 is the payment in April, ..., PAY_0 the payment in September. Payment = (0 = paid on time, 1 = one-month delay, 2 = two-month delay, ...)
* BILL_AMT1 - BILL_AMT6: outstanding debt, BILL_AMT6 for April, BILL_AMT1 for September
* PAY_AMT1 - PAY_AMT6: amount paid, PAY_AMT6 in April, ..., PAY_AMT1 in September
* default: indicator that the loan was not repaid
Task 1. Credit limit (LIMIT_BAL). For the two groups, clients who repaid the loan (default = 0) and clients who did not (default = 1), test the hypotheses:
* a) that the median credit limits are equal, using a suitable interval estimate
* b) that the distributions are equal, using one of the suitable non-parametric tests for equality of means.
Are the results significant from a practical point of view?
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('ggplot')
import scipy
from scipy import stats
raw_data = pd.read_csv('credit_card_default_analysis.csv')
raw_data.head()
print 'Размер датасета: {}'.format(raw_data.shape)
d0_group = raw_data[raw_data['default']==0]
d1_group = raw_data[raw_data['default']==1]
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [d0_group, d1_group]
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].hist(data[i].LIMIT_BAL.values, bins = 20,ec = 'black');
ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Размер кредитного лимита')
ax[i].set_ylabel(u'Число людей')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
**a) Find the median values of the distributions:** * Hypothesis H0: the medians are equal * Hypothesis H1: the medians differ
###Code
print 'Медианное значение кредитного лимита для тех, кто вернул кредит: {}'.format(d0_group.LIMIT_BAL.median())
print 'Медианное значение кредитного лимита для тех, кто не вернул кредит: {}'.format(d1_group.LIMIT_BAL.median())
###Output
Медианное значение кредитного лимита для тех, кто вернул кредит: 150000.0
Медианное значение кредитного лимита для тех, кто не вернул кредит: 90000.0
###Markdown
The medians clearly differ. Let's check confidence intervals for the distributions using the bootstrap.
###Code
def get_bootstrap_samples(data, n_samples):
indicies = np.random.randint(0, len(data), (n_samples, len(data)))
samples = data[indicies]
return samples
def get_stat_intervals(stat, alpha):
boundaries = np.percentile(stat, [100*alpha/2., 100*(1-alpha/2.)])
return boundaries
np.random.seed(0)
d0_median_scores = np.median(get_bootstrap_samples(d0_group.LIMIT_BAL.values, 1000), axis=0)
d1_median_scores = np.median(get_bootstrap_samples(d1_group.LIMIT_BAL.values, 1000), axis=0)
delta_median_scores = map(lambda x: x[1] - x[0], zip(d0_median_scores, d1_median_scores))
print "95% доверительный интервал для группы вернувших кредит: {}".format(get_stat_intervals(d0_median_scores, 0.05))
print "95% доверительный интервал для группы не вернувших кредит: {}".format(get_stat_intervals(d1_median_scores, 0.05))
print '95% доверительный интервал для разности групп: {}'.format(get_stat_intervals(delta_median_scores, 0.05))
###Output
95% доверительный интервал для группы вернувших кредит: [145000. 160000.]
95% доверительный интервал для группы не вернувших кредит: [ 80000. 100000.]
95% доверительный интервал для разности групп: [-80000. -50000.]
###Markdown
The intervals do not overlap and the interval for the difference lies entirely below zero, so the difference is statistically significant and hypothesis H0 is rejected. Let's assess the practical significance.
###Code
Fc = d0_group.LIMIT_BAL.median()/d1_group.LIMIT_BAL.median()
print 'Fold change: ', np.round(Fc, 3)
###Output
Fold change: 1.667
###Markdown
The difference is practically significant. **b) Check the equality of the distributions with the Mann-Whitney test** * Hypothesis H0: the distributions are identical * Hypothesis H1: the distributions differ by a shift
###Code
stats.mannwhitneyu(d0_group.LIMIT_BAL.values, d1_group.LIMIT_BAL.values)
###Output
_____no_output_____
###Markdown
At the 0.05 significance level, hypothesis H0 is confidently rejected. Task 2. Sex (SEX): Test the hypothesis that the gender composition differs between the group of people who repaid the loan and the group who did not. Ideally, provide several different solutions to this problem (using a confidence interval and a suitable statistical test).
###Code
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [d0_group, d1_group]
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].hist(data[i].SEX.values, bins = 20,ec = 'black');
#ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Пол респодентов')
ax[i].set_ylabel(u'Число людей')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
* Hypothesis H0: the gender composition of the two samples does not differ * Hypothesis H1: the gender composition differs. We build a confidence interval and test the hypothesis with the Z-test for the proportions of two independent samples.
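For reference, the helper functions below implement the standard two-proportion statistic $Z = \dfrac{\hat p_1 - \hat p_2}{\sqrt{P(1-P)\left(\frac{1}{n_1}+\frac{1}{n_2}\right)}}$ with the pooled proportion $P = \dfrac{\hat p_1 n_1 + \hat p_2 n_2}{n_1 + n_2}$, while the confidence interval uses the unpooled standard error $\sqrt{\frac{\hat p_1(1-\hat p_1)}{n_1} + \frac{\hat p_2(1-\hat p_2)}{n_2}}$.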
###Code
def proportions_diff_confint_ind(sample1, sample2, alpha = 0.05):
z = scipy.stats.norm.ppf(1 - alpha / 2.)
p1 = float(sum(sample1)) / len(sample1)
p2 = float(sum(sample2)) / len(sample2)
left_boundary = (p1 - p2) - z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
right_boundary = (p1 - p2) + z * np.sqrt(p1 * (1 - p1)/ len(sample1) + p2 * (1 - p2)/ len(sample2))
return (left_boundary, right_boundary)
def proportions_diff_z_stat_ind(sample1, sample2):
n1 = len(sample1)
n2 = len(sample2)
p1 = float(sum(sample1)) / n1
p2 = float(sum(sample2)) / n2
P = float(p1*n1 + p2*n2) / (n1 + n2)
return (p1 - p2) / np.sqrt(P * (1 - P) * (1. / n1 + 1. / n2))
def proportions_diff_z_test(z_stat, alternative = 'two-sided'):
if alternative not in ('two-sided', 'less', 'greater'):
raise ValueError("alternative not recognized\n"
"should be 'two-sided', 'less' or 'greater'")
if alternative == 'two-sided':
return 2 * (1 - scipy.stats.norm.cdf(np.abs(z_stat)))
if alternative == 'less':
return scipy.stats.norm.cdf(z_stat)
if alternative == 'greater':
return 1 - scipy.stats.norm.cdf(z_stat)
###Output
_____no_output_____
###Markdown
Encode the samples as binary arrays so the proportion functions above can be applied.
###Code
d0_zeros = np.zeros(len(d0_group[d0_group['SEX']==1].SEX.values))
d0_ones = np.ones(len(d0_group[d0_group['SEX']==2].SEX.values))
d1_zeros = np.zeros(len(d1_group[d1_group['SEX']==1].SEX.values))
d1_ones = np.ones(len(d1_group[d1_group['SEX']==2].SEX.values))
d0_group_sex_binary = np.concatenate([d0_zeros,d0_ones])
d1_group_sex_binary = np.concatenate([d1_zeros,d1_ones])
print len(d0_group_sex_binary), sum(d0_group_sex_binary)
print "95%% доверительный интервал: [%f, %f]" %\
proportions_diff_confint_ind(d0_group_sex_binary, d1_group_sex_binary )
###Output
95% доверительный интервал: [0.033635, 0.060548]
###Markdown
The confidence interval does not contain zero, so the gender composition differs.
###Code
print "p-value: %f" % proportions_diff_z_test(proportions_diff_z_stat_ind(d0_group_sex_binary, d1_group_sex_binary))
###Output
p-value: 0.000000
###Markdown
Hypothesis H0 is confidently rejected. Task 3. Education (EDUCATION): Test the hypothesis that education does not affect whether a person repays the loan. Propose a way to visualise the difference between the expected and observed numbers of people who did and did not repay. For example, build an 'education' by 'repayment' contingency table whose cell values are the differences between the observed and expected counts. How would you modify the table to bring the cell values to a common scale without losing interpretability? Which education level is the best indicator that a person will repay the loan, and which that they will not?
###Code
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [d0_group, d1_group]
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].hist(data[i].EDUCATION.values, bins = 20,ec = 'black');
#ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Уровень образования')
ax[i].set_ylabel(u'Число людей')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
We test the association with the chi-squared test for categorical features: * Hypothesis H0: the education level does not affect the probability of repaying the loan * Hypothesis H1: the education level does affect it
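For reference, the statistic compares the observed counts $O_{ij}$ with the counts $E_{ij}$ expected under independence: $\chi^2 = \sum_{i,j} \dfrac{(O_{ij} - E_{ij})^2}{E_{ij}}$. `scipy.stats.chi2_contingency` below returns this statistic, the p-value, the degrees of freedom and the table of expected counts.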
###Code
table_edu = raw_data.pivot_table(index='default',
values="LIMIT_BAL", columns='EDUCATION', aggfunc = len, fill_value=0)
table_edu.head()
# stats.chi2_contingency(table_edu.values)
print 'Значение уровня значимости: ', stats.chi2_contingency(table_edu.values)[1]
###Output
Значение уровня значимости: 8.825862457577375e-08
###Markdown
Hypothesis H0 is rejected: there is an effect. Let's plot the observed-minus-expected counts for loan repayment.
###Code
table_delta = table_edu.values - stats.chi2_contingency(table_edu.values)[3]
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [table_delta[0], table_delta[1]]
label = ['0', '1', '2', '3', '4', '5', '6']
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].bar(label, data[i], ec = 'black');
#ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Уровень образования')
ax[i].set_ylabel(u'Величина отклонения')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
Bring the values to a common scale.
###Code
data_edu_scale = (table_edu.loc[0] - table_edu.loc[1])/(table_edu.loc[0]+table_edu.loc[1])
data_edu_scale.plot.bar(figsize = (15, 5))
plt.title(u'Соотношение кредитных выплат');
###Output
_____no_output_____
###Markdown
We can see that: * loans are repaid worst by groups 2 and 3 (bachelor's degree and school graduates in the coding above) * loans are repaid best by respondents from group 0, holders of a doctorate. Task 4. Marital status (MARRIAGE): Check how marital status is related to the default indicator: propose a measure that can quantify the possible association between these variables and compute its value.
###Code
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [d0_group, d1_group]
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].hist(data[i].MARRIAGE.values, bins = 20,ec = 'black');
#ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Семейный статус')
ax[i].set_ylabel(u'Число людей')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
* Hypothesis H0: marital status and the default indicator are independent * Hypothesis H1: the variables are related. We assess the significance of the association with the chi-squared test generalised to categorical features. Preparing the contingency table:
###Code
table_mar = raw_data.pivot_table(index='default',
values="LIMIT_BAL", columns='MARRIAGE', aggfunc = len, fill_value=0)
table_mar.head()
###Output
_____no_output_____
###Markdown
Compute Cramér's V coefficient.
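For an $r \times c$ contingency table with statistic $\chi^2$ and $n$ observations, $V = \sqrt{\dfrac{\chi^2}{n\,(\min(r, c) - 1)}}$, which is exactly what the helper below computes; $V$ ranges from 0 (no association) to 1 (perfect association).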
###Code
def cramers_stat(confusion_matrix):
chi2 = stats.chi2_contingency(confusion_matrix)[0]
n = confusion_matrix.sum()
return np.sqrt(chi2 / (n*(min(confusion_matrix.shape)-1)))
print 'Значение V коэффициента Крамера: ', cramers_stat(table_mar.values)
###Output
Значение V коэффициента Крамера: 0.034478203662766466
###Markdown
The coefficient is close to zero, so the association between the variables is weak.
###Code
print 'Значение уровня значимости: ', stats.chi2_contingency(table_mar)[1]
###Output
Значение уровня значимости: 8.825862457577375e-08
###Markdown
Hypothesis H0 is rejected: marital status and default are statistically associated, although Cramér's V shows that the association is very weak. Task 5. Age (AGE): For the two groups of people who repaid and did not repay the loan, test the following hypotheses: a) that the median ages are equal b) that the distributions are equal, using one of the suitable non-parametric tests for equality of means. Are the results significant from a practical point of view?
###Code
fig, ax = plt.subplots(1, 2, figsize = (15, 5), sharey = True)
data = [d0_group, d1_group]
title = [u'Вернувшие кредит (default=0)',u'Невернувшие кредит (default=1)']
for i in range(2):
ax[i].hist(data[i].AGE.values, bins = 20,ec = 'black');
#ax[i].axis([0, 1e6, 0, 6000])
ax[i].set_xlabel(u'Возраст')
ax[i].set_ylabel(u'Число людей')
ax[i].set_title(title[i])
###Output
_____no_output_____
###Markdown
**a) Find the median values of the distributions:** * Hypothesis H0: the medians of the distributions are equal * Hypothesis H1: the medians differ
###Code
print 'Медианное значение возраста тех, кто вернул кредит: {}'.format(d0_group.AGE.median())
print 'Медианное значение возраста тех, кто не вернул кредит: {}'.format(d1_group.AGE.median())
###Output
Медианное значение возраста тех, кто вернул кредит: 34.0
Медианное значение возраста тех, кто не вернул кредит: 34.0
###Markdown
The medians are equal. Let's build confidence intervals using the bootstrap:
###Code
np.random.seed(0)
d0_median_scores = np.median(get_bootstrap_samples(d0_group.AGE.values, 1000), axis=0)
d1_median_scores = np.median(get_bootstrap_samples(d1_group.AGE.values, 1000), axis=0)
delta_median_scores = map(lambda x: x[1] - x[0], zip(d0_median_scores, d1_median_scores))
print "95% доверительный интервал для группы вернувших кредит: {}".format(get_stat_intervals(d0_median_scores, 0.05))
print "95% доверительный интервал для группы не вернувших кредит: {}".format(get_stat_intervals(d1_median_scores, 0.05))
print '95% доверительный интервал для разности групп: {}'.format(get_stat_intervals(delta_median_scores, 0.05))
###Output
95% доверительный интервал для группы вернувших кредит: [33. 35.]
95% доверительный интервал для группы не вернувших кредит: [33. 35.]
95% доверительный интервал для разности групп: [-1. 2.]
###Markdown
Zero falls inside the confidence interval for the difference, so hypothesis H0 cannot be rejected. **b) Check the equality of the distributions with the Mann-Whitney test** * Hypothesis H0: the distributions are identical * Hypothesis H1: the distributions differ by a shift
###Code
stats.mannwhitneyu(d0_group.AGE.values, d1_group.AGE.values)
###Output
_____no_output_____ |
Pengolahan_Audio.ipynb | ###Markdown
###Code
from google.colab import drive
drive.mount('/content/drive')
import sys
sys.path.append('/content/drive/My Drive/Colab Notebooks/Latihan/')
import numpy as np
import matplotlib.pyplot as plt
from scipy.io.wavfile import read, write
from IPython.display import Audio
from numpy.fft import fft, ifft
%matplotlib inline
Fs, data = read('/content/drive/My Drive/Colab Notebooks/Latihan/MIRA1.wav')
#data = data[:,0]
print("Sampling Frequency adalah", Fs)
Audio(data, rate=Fs)
plt.figure()
plt.plot(data)
plt.xlabel('Sample Index')
plt.ylabel('Amplitude')
plt.title('Waveform pada Audio')
plt.show()
write('/content/drive/My Drive/Colab Notebooks/Latihan/output.wav', Fs, data)
###Output
_____no_output_____ |
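###Markdown
A sketch of a possible next step (not part of the original notebook): the ``fft`` import above is unused, so a magnitude spectrum of the recording could be computed and plotted as below, assuming ``data`` is a mono (1-D) signal.
###Code
# Hypothetical illustration: one-sided magnitude spectrum of the loaded signal.
import numpy as np
import matplotlib.pyplot as plt
spectrum = np.abs(np.fft.rfft(data)) / len(data)   # amplitude of each frequency bin
freqs = np.fft.rfftfreq(len(data), d=1.0/Fs)       # frequency axis in Hz
plt.figure()
plt.plot(freqs, spectrum)
plt.xlabel('Frequency (Hz)')
plt.ylabel('Amplitude')
plt.title('Magnitude Spectrum')
plt.show()
###Output
_____no_output_____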
03B_Layers_API.ipynb | ###Markdown
TensorFlow Tutorial 03-B Layers APIby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**The Layers API was intended to be a basic builder API for creating Neural Networks in TensorFlow, but the Layers API was never fully completed. Although it still works in TensorFlow v. 1.9, it seems quite possible that it may be deprecated in the future. It is recommended that you use the more complete _Keras API_ instead, see Tutorial 03-C.** IntroductionIt is important to use a builder API when constructing Neural Networks in TensorFlow because it makes it easier to implement and modify the source-code. This also lowers the risk of bugs.Many of the other tutorials used the TensorFlow builder API called PrettyTensor for easy construction of Neural Networks. But there are several other builder APIs available for TensorFlow. PrettyTensor was used in these tutorials, because at the time in mid-2016, PrettyTensor was the most complete and polished builder API available for TensorFlow. But PrettyTensor is only developed by a single person working at Google and although it has some unique and elegant features, it is possible that it may become deprecated in the future.This tutorial is about a small builder API that has recently been added to TensorFlow version 1.1. It is simply called *Layers* or the *Layers API* or by its Python name `tf.layers`. This builder API is automatically installed as part of TensorFlow, so you no longer have to install a separate Python package as was needed with PrettyTensor.This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same Convolutional Neural Network using the Layers API. It is recommended that you are familiar with Tutorial 02 on Convolutional Neural Networks. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial 02 for a more detailed description of convolution.  The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled using max-pooling so the image resolution is decreased from 28x28 to 14x14.These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are also down-sampled using max-pooling to 7x7 pixels.The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. 
The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional neural network.* A so-called cost-measure or loss-function that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
PrettyTensor ImplementationThis section shows the implementation of a Convolutional Neural Network using PrettyTensor taken from Tutorial 03 so it can be compared to the implementation using the Layers API below. This code has been enclosed in an `if False:` block so it does not run here.The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Convolutional Neural Network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers ImplementationWe now implement the same Convolutional Neural Network using the Layers API that is included in TensorFlow version 1.1. This requires more code than PrettyTensor, although a lot of the following are just comments.We use the `net`-variable to refer to the last layer while building the Neural Network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion, is that we can now easily pull out a reference to a layer. This was more complicated in PrettyTensor.Further below we want to plot the output of the first convolutional layer, so we create another variable for holding a reference to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This was also described in more detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer which has 36 filters each with 5x5 pixels, and a ReLU activation function again.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to plot the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the Neural Network, but these require 2-rank tensors as input, so we must first flatten the tensors.The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
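In symbols, for a vector of logits $z$ the softmax output for class $i$ is $\text{softmax}(z)_i = \dfrac{e^{z_i}}{\sum_j e^{z_j}}$, so all outputs are positive and sum to one.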
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
We have now created the exact same Convolutional Neural Network in a few lines of code that required many complex lines of code in the direct TensorFlow implementation.The Layers API is perhaps not as elegant as PrettyTensor, but it has some other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the Convolutional Neural Network.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.TensorFlow has a function for calculating the cross-entropy, which uses the values of the `logits`-layer because it also calculates the softmax internally, so as to to improve numerical stability.
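For a one-hot true label $y$ and predicted class probabilities $\hat y$, the cross-entropy for a single image is $H(y, \hat y) = -\sum_i y_i \log \hat y_i$. The TensorFlow function below computes this directly from the logits rather than from `y_pred`, which is numerically more stable.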
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
###Markdown
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization MethodNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification AccuracyWe need to calculate the classification accuracy so we can report progress to the user.First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Getting the WeightsFurther below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow.First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
<tf.Variable 'layer_conv1/kernel:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'beta1_power:0' shape=() dtype=float32_ref>
<tf.Variable 'beta2_power:0' shape=() dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam_1:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam_1:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam_1:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam_1:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam_1:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam_1:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam_1:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam_1:0' shape=(10,) dtype=float32_ref>
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to plot further below.It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for the TensorFlow graph must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 5.8% (577 / 10000)
###Markdown
Performance after 1 optimization iterationThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 6.6% (659 / 10000)
###Markdown
Performance after 100 optimization iterationsAfter 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 81.2% (8125 / 10000)
Example errors:
###Markdown
Performance after 1000 optimization iterationsAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 94.5% (9455 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterationsAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.8% (9884 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input ImagesHelper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer.Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer.There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weigths for the first channel.Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying these convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled to 14 x 14 pixels which is half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also done after the second convolutional layer, but we retrieve these images before that has been applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 03-B Layers API by [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING! **The Layers API was intended as a basic builder API for creating Neural Networks in TensorFlow, but it was never fully completed. Although it still works in TensorFlow v. 1.9, it seems quite possible that it will be deprecated in the future. It is recommended that you use the more complete _Keras API_ instead, see Tutorial 03-C.** Introduction It is important to use a builder API when constructing Neural Networks in TensorFlow because it makes it easier to implement and modify the source code, and it lowers the risk of bugs. Many of the other tutorials used the TensorFlow builder API called PrettyTensor for easy construction of Neural Networks, but several other builder APIs are available for TensorFlow. PrettyTensor was used in those tutorials because, at the time in mid-2016, it was the most complete and polished builder API available for TensorFlow. However, PrettyTensor is developed by only a single person working at Google, and although it has some unique and elegant features, it may become deprecated in the future. This tutorial is about a small builder API that was added in TensorFlow version 1.1. It is simply called *Layers* or the *Layers API*, or by its Python name `tf.layers`. This builder API is installed automatically as part of TensorFlow, so you no longer have to install a separate Python package as was needed with PrettyTensor. This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same Convolutional Neural Network using the Layers API. It is recommended that you are familiar with Tutorial 02 on Convolutional Neural Networks. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial 02 for a more detailed description of convolution.  The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled using max-pooling so the image resolution is decreased from 28x28 to 14x14. These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels and for each output channel of this layer. There are 36 output channels, so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are again down-sampled with max-pooling to 7x7 pixels. The output of the second convolutional layer is 36 images of 7x7 pixels each. These are flattened into a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each class, which is used to determine the class of the image, that is, which digit is depicted in the image. The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. This is done iteratively thousands of times until the classification error is sufficiently low. These particular filter-weights and intermediate images are the result of one optimization run and may look different if you re-run this Notebook. Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and the following TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
_____no_output_____
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
_____no_output_____
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow Graph The entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional neural network.* A so-called cost-measure or loss-function that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes`, which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
PrettyTensor Implementation This section shows the implementation of a Convolutional Neural Network using PrettyTensor taken from Tutorial 03 so it can be compared to the implementation using the Layers API below. This code has been enclosed in an `if False:` block so it does not run here.The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Convolutional Neural Network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers Implementation We now implement the same Convolutional Neural Network using the Layers API that is included in TensorFlow version 1.1. This requires more code than PrettyTensor, although a lot of the following are just comments.We use the `net`-variable to refer to the last layer while building the Neural Network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion is that we can now easily pull out a reference to a layer. This was more complicated in PrettyTensor.Further below we want to plot the output of the first convolutional layer, so we create another variable for holding a reference to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This was also described in more detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer, which has 36 filters each with 5x5 pixels, and a ReLU activation function again.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to plot the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the Neural Network, but these require 2-rank tensors as input, so we must first flatten the tensors.The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely, so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
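###Markdown
A small NumPy illustration of the same two steps on made-up logits (the values and variable names below are purely illustrative, not part of the model): softmax turns the logits into probabilities that sum to one, and argmax picks the index of the most likely class.
###Code
# Made-up logits for a single image, just to illustrate softmax + argmax.
example_logits = np.array([1.0, 3.0, 0.5])
example_probs = np.exp(example_logits) / np.sum(np.exp(example_logits))
# The probabilities sum to 1.0 and the largest one is at index 1.
print(example_probs, example_probs.sum(), np.argmax(example_probs))
###Output
_____no_output_____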
###Markdown
We have now created the exact same Convolutional Neural Network in a few lines of code that required many complex lines of code in the direct TensorFlow implementation.The Layers API is perhaps not as elegant as PrettyTensor, but it has some other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the Convolutional Neural Network.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive, and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.TensorFlow has a function for calculating the cross-entropy, which uses the values of the `logits`-layer because it also calculates the softmax internally, so as to improve numerical stability.
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
###Markdown
We have now calculated the cross-entropy for each of the image classifications, so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization Method Now that we have a cost measure that must be minimized, we can create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification Accuracy We need to calculate the classification accuracy so we can report progress to the user.First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to plot the weights of the convolutional layers. In the direct TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow.First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
_____no_output_____
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to plot further below.It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like `contents = session.run(weights_conv1)`, as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variables The variables for the TensorFlow graph must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimization The accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
_____no_output_____
###Markdown
Performance after 1 optimization iteration The classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
_____no_output_____
###Markdown
Performance after 100 optimization iterations After 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
_____no_output_____
###Markdown
Performance after 1000 optimization iterations After 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
_____no_output_____
###Markdown
Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
_____no_output_____
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
콘볼루션 층의 출력을 플로팅하기 위한 도우미 함수
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input Images Helper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer.Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer.There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we could make another 15 plots of filter-weights like this. We just make one more, with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying these convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled to 14 x 14 pixels, which is half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also done after the second convolutional layer, but we retrieve these images before that has been applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 03-B Layers APIby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) IntroductionIt is important to use a builder API when constructing Neural Networks in TensorFlow because it makes it easier to implement and modify the source-code. This also lowers the risk of bugs.Many of the other tutorials used the TensorFlow builder API called PrettyTensor for easy construction of Neural Networks. But there are several other builder APIs available for TensorFlow. PrettyTensor was used in these tutorials, because at the time in mid-2016, PrettyTensor was the most complete and polished builder API available for TensorFlow. But PrettyTensor is only developed by a single person working at Google and although it has some unique and elegant features, it is possible that it may become deprecated in the future.This tutorial is about a small builder API that has recently been added to TensorFlow version 1.1. It is simply called *Layers* or the *Layers API* or by its Python name `tf.layers`. This builder API is automatically installed as part of TensorFlow, so you no longer have to install a separate Python package as was needed with PrettyTensor.This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same Convolutional Neural Network using the Layers API. It is recommended that you are familiar with Tutorial 02 on Convolutional Neural Networks. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial 02 for a more detailed description of convolution.  The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled using max-pooling so the image resolution is decreased from 28x28 to 14x14.These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are also down-sampled using max-pooling to 7x7 pixels.The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to improve the classification error. 
This is done iteratively thousands of times until the classification error is sufficiently low.These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional neural network.* A so-called cost-measure or loss-function that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
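###Markdown
As a quick sanity check of this reshape, we can apply the same operation to a dummy NumPy batch and confirm that the first dimension is inferred automatically from the -1:
###Code
# A dummy batch of 32 flattened images, purely for illustration.
dummy = np.zeros((32, img_size_flat))
# The -1 lets the batch-size be inferred, giving shape (32, 28, 28, 1).
print(dummy.reshape(-1, img_size, img_size, num_channels).shape)
###Output
_____no_output_____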
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
PrettyTensor ImplementationThis section shows the implementation of a Convolutional Neural Network using PrettyTensor taken from Tutorial 03 so it can be compared to the implementation using the Layers API below. This code has been enclosed in an `if False:` block so it does not run here.The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Convolutional Neural Network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers ImplementationWe now implement the same Convolutional Neural Network using the Layers API that is included in TensorFlow version 1.1. This requires more code than PrettyTensor, although a lot of the following are just comments.We use the `net`-variable to refer to the last layer while building the Neural Network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion, is that we can now easily pull out a reference to a layer. This was more complicated in PrettyTensor.Further below we want to plot the output of the first convolutional layer, so we create another variable for holding a reference to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This was also described in more detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer which has 36 filters each with 5x5 pixels, and a ReLU activation function again.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to plot the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the Neural Network, but these require 2-rank tensors as input, so we must first flatten the tensors.The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen from this:
###Code
net
###Output
_____no_output_____
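###Markdown
The flattened length is 7 x 7 x 36 = 1764, which matches the input size of the first fully-connected layer created below.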
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
We have now created the exact same Convolutional Neural Network in a few lines of code that required many complex lines of code in the direct TensorFlow implementation.The Layers API is perhaps not as elegant as PrettyTensor, but it has some other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the Convolutional Neural Network.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.TensorFlow has a function for calculating the cross-entropy, which uses the values of the `logits`-layer because it also calculates the softmax internally, so as to improve numerical stability.
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
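###Markdown
To make the relationship concrete, this is roughly what the function computes for a single image, written out in NumPy on made-up logits and a made-up One-Hot label (the TensorFlow version combines the two steps and is more numerically stable):
###Code
# Made-up logits and One-Hot label for a single image, for illustration only.
example_logits = np.array([2.0, 0.5, 0.1])
example_label = np.array([1.0, 0.0, 0.0])
example_softmax = np.exp(example_logits) / np.sum(np.exp(example_logits))
# Cross-entropy: -sum(label * log(softmax)).
print(-np.sum(example_label * np.log(example_softmax)))
###Output
_____no_output_____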
###Markdown
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization MethodNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification AccuracyWe need to calculate the classification accuracy so we can report progress to the user.First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
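###Markdown
The same idea in plain NumPy on a small made-up example: casting the booleans to floats and averaging gives the fraction of correct predictions.
###Code
# Hypothetical predicted vs. true classes, just to illustrate the accuracy formula.
example_pred = np.array([3, 1, 4, 1, 5])
example_true = np.array([3, 1, 4, 0, 5])
# 4 out of 5 match, so the accuracy is 0.8.
print(np.mean((example_pred == example_true).astype(np.float32)))
###Output
_____no_output_____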
###Markdown
Getting the WeightsFurther below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow.First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
<tf.Variable 'layer_conv1/kernel:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'beta1_power:0' shape=() dtype=float32_ref>
<tf.Variable 'beta2_power:0' shape=() dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam_1:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam_1:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam_1:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam_1:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam_1:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam_1:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam_1:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam_1:0' shape=(10,) dtype=float32_ref>
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to plot further below.It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
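###Markdown
We can already inspect the static shapes of these variables without running a session; getting the actual contents requires `session.run()` as noted above.
###Code
# Static shapes: (filter_height, filter_width, in_channels, out_channels).
print(weights_conv1.get_shape())  # expected (5, 5, 1, 16)
print(weights_conv2.get_shape())  # expected (5, 5, 16, 36)
###Output
_____no_output_____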
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for the TensorFlow graph must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
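###Markdown
With 55,000 training images and batches of 64, one pass over the training-set corresponds to roughly 55000 / 64, i.e. about 860 iterations, so the 10,000 iterations used further below amount to roughly 11-12 epochs.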
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 5.8% (577 / 10000)
###Markdown
Performance after 1 optimization iterationThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 6.6% (659 / 10000)
###Markdown
Performance after 100 optimization iterationsAfter 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 81.2% (8125 / 10000)
Example errors:
###Markdown
Performance after 1000 optimization iterationsAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 94.5% (9455 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterationsAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.8% (9884 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input ImagesHelper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer.Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer.There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.Note that these are down-sampled to 14 x 14 pixels which is half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also done after the second convolutional layer, but we retrieve these images before that has been applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 03-B Layers APIby [Abbas Malekpour](https://github.com/abbasmalekpour)/ [GitHub](https://github.com/abbasmalekpour/TensorFlow-Deeplearning) / [ ](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**The Layers API was intended to be a basic builder API for creating Neural Networks in TensorFlow, but the Layers API was never fully completed. Although it still works in TensorFlow v. 1.9, it seems quite possible that it may be deprecated in the future. It is recommended that you use the more complete _Keras API_ instead, see Tutorial 03-C.** IntroductionIt is important to use a builder API when constructing Neural Networks in TensorFlow because it makes it easier to implement and modify the source-code. This also lowers the risk of bugs.Many of the other tutorials used the TensorFlow builder API called PrettyTensor for easy construction of Neural Networks. But there are several other builder APIs available for TensorFlow. PrettyTensor was used in these tutorials, because at the time in mid-2016, PrettyTensor was the most complete and polished builder API available for TensorFlow. But PrettyTensor is only developed by a single person working at Google and although it has some unique and elegant features, it is possible that it may become deprecated in the future.This tutorial is about a small builder API that has recently been added to TensorFlow version 1.1. It is simply called *Layers* or the *Layers API* or by its Python name `tf.layers`. This builder API is automatically installed as part of TensorFlow, so you no longer have to install a separate Python package as was needed with PrettyTensor.This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same Convolutional Neural Network using the Layers API. It is recommended that you are familiar with Tutorial 02 on Convolutional Neural Networks. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial 02 for a more detailed description of convolution.  The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled using max-pooling so the image resolution is decreased from 28x28 to 14x14.These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are also down-sampled using max-pooling to 7x7 pixels.The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. 
The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to reduce the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
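###Markdown
As a quick sanity-check (illustrative only), we can print the one-hot label for the first test image and the class-number decoded from it; the class-number is simply the index of the single 1.0 in the label vector.
###Code
# Illustrative check: the one-hot label vector for the first test image
# and the class-number decoded from it with argmax.
print(data.test.labels[0])
print(data.test.cls[0])
###Output
_____no_output_____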
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional neural network.* A so-called cost-measure or loss-function that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
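###Markdown
As a quick check (illustrative only), the static shape of the reshaped tensor can be inspected directly; the first dimension shows as `?` because the number of images is not fixed.
###Code
# Illustrative check of the static shape: expected (?, 28, 28, 1).
print(x_image.get_shape())
###Output
_____no_output_____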
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, axis=1)
###Output
_____no_output_____
###Markdown
PrettyTensor ImplementationThis section shows the implementation of a Convolutional Neural Network using PrettyTensor taken from Tutorial 03 so it can be compared to the implementation using the Layers API below. This code has been enclosed in an `if False:` block so it does not run here.The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Convolutional Neural Network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers ImplementationWe now implement the same Convolutional Neural Network using the Layers API that is included in TensorFlow version 1.1. This requires more code than PrettyTensor, although a lot of the following are just comments.We use the `net`-variable to refer to the last layer while building the Neural Network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion is that we can now easily pull out a reference to a layer. This was more complicated in PrettyTensor.Further below we want to plot the output of the first convolutional layer, so we create another variable for holding a reference to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This was also described in more detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer which has 36 filters each with 5x5 pixels, and a ReLU activation function again.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to plot the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the Neural Network, but these require 2-rank tensors as input, so we must first flatten the tensors.The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen from this:
###Code
net
###Output
_____no_output_____
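###Markdown
The length of the flattened feature-vector can also be checked by hand (illustrative only): two rounds of 2x2 max-pooling reduce the 28x28 images to 7x7, and the second conv-layer has 36 output channels.
###Code
# Sanity-check of the flattened size: (28 / 2 / 2)^2 * 36 = 7 * 7 * 36 = 1764.
print((img_size // 2 // 2) ** 2 * 36)
###Output
_____no_output_____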
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
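###Markdown
A minimal NumPy sketch (illustrative only, not part of the TensorFlow graph) of what the softmax does: exponentiate the values and normalise them so the outputs are positive and sum to one.
###Code
# Illustrative only: softmax of a small example vector in NumPy.
logits_example = np.array([2.0, 1.0, 0.1])
p_example = np.exp(logits_example) / np.sum(np.exp(logits_example))
print(p_example)        # roughly [0.66, 0.24, 0.10]
print(p_example.sum())  # 1.0
###Output
_____no_output_____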
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
We have now created the exact same Convolutional Neural Network in a few lines of code that required many complex lines of code in the direct TensorFlow implementation.The Layers API is perhaps not as elegant as PrettyTensor, but it has some other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the Convolutional Neural Network.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.TensorFlow has a function for calculating the cross-entropy, which uses the values of the `logits`-layer because it also calculates the softmax internally, so as to improve numerical stability.
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
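###Markdown
A minimal numerical sketch (illustrative only): for a one-hot label vector y and predicted class-probabilities p, the cross-entropy is -sum(y * log(p)), which is zero exactly when all the probability mass is on the true class.
###Code
# Illustrative only: cross-entropy for a single example computed in NumPy.
y_example = np.array([0., 0., 0., 1., 0., 0., 0., 0., 0., 0.])
p_example = np.array([0.05, 0.05, 0.05, 0.55, 0.05,
                      0.05, 0.05, 0.05, 0.05, 0.05])
print(-np.sum(y_example * np.log(p_example)))  # roughly 0.60
###Output
_____no_output_____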
###Markdown
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization MethodNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification AccuracyWe need to calculate the classification accuracy so we can report progress to the user.First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
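###Markdown
A tiny NumPy sketch (illustrative only) of the same idea: casting booleans to floats and averaging gives the fraction of correct predictions.
###Code
# Illustrative only: 3 correct out of 4 gives an accuracy of 0.75.
print(np.mean(np.array([True, False, True, True], dtype=np.float32)))
###Output
_____no_output_____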
###Markdown
Getting the WeightsFurther below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow.First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
<tf.Variable 'layer_conv1/kernel:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'beta1_power:0' shape=() dtype=float32_ref>
<tf.Variable 'beta2_power:0' shape=() dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam_1:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam_1:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam_1:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam_1:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam_1:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam_1:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam_1:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam_1:0' shape=(10,) dtype=float32_ref>
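###Markdown
Many of the variables above are internal slot-variables created by the Adam optimizer. If you only want the variables that are actually trained, TensorFlow keeps them in a separate collection; a quick alternative listing (illustrative only):
###Code
# Illustrative alternative: list only the trainable variables,
# which excludes the optimizer's internal slot-variables.
for var in tf.trainable_variables():
    print(var)
###Output
_____no_output_____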
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to plot further below.It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()` which was designed for another purpose: either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for the TensorFlow graph must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
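###Markdown
Now that the variables have been initialized, the contents of the weight-variables retrieved earlier can actually be fetched. A quick shape-check (illustrative only):
###Code
# Fetch the current (randomly initialized) filter-weights and print their shapes.
print(session.run(weights_conv1).shape)  # expected: (5, 5, 1, 16)
print(session.run(weights_conv2).shape)  # expected: (5, 5, 16, 36)
###Output
_____no_output_____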
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
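###Markdown
For reference (illustrative arithmetic only): with this batch-size, one pass over the 55,000 training images corresponds to roughly 860 optimizer iterations.
###Code
# Approximate number of batches needed for one pass over the training-set.
print(math.ceil(len(data.train.labels) / train_batch_size))
###Output
_____no_output_____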
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, which is why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
    cls_pred = np.zeros(shape=num_test, dtype=int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 5.8% (577 / 10000)
###Markdown
Performance after 1 optimization iterationThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 6.6% (659 / 10000)
###Markdown
Performance after 100 optimization iterationsAfter 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 81.2% (8125 / 10000)
Example errors:
###Markdown
Performance after 1000 optimization iterationsAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 94.5% (9455 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterationsAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.8% (9884 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input ImagesHelper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer.Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer.There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel.Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality.Applying these convolutional filters to the images that were output from the first conv-layer gives the following images.Note that these are down-sampled to 14 x 14 pixels which is half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also done after the second convolutional layer, but we retrieve these images before that has been applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 03-B Layers APIby [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/)/ [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) WARNING!**The Layers API was intended to be a basic builder API for creating Neural Networks in TensorFlow, but the Layers API was never fully completed. Although it still works in TensorFlow v. 1.9, it seems quite possible that it may be deprecated in the future. It is recommended that you use the more complete _Keras API_ instead, see Tutorial 03-C.** IntroductionIt is important to use a builder API when constructing Neural Networks in TensorFlow because it makes it easier to implement and modify the source-code. This also lowers the risk of bugs.Many of the other tutorials used the TensorFlow builder API called PrettyTensor for easy construction of Neural Networks. But there are several other builder APIs available for TensorFlow. PrettyTensor was used in these tutorials, because at the time in mid-2016, PrettyTensor was the most complete and polished builder API available for TensorFlow. But PrettyTensor is only developed by a single person working at Google and although it has some unique and elegant features, it is possible that it may become deprecated in the future.This tutorial is about a small builder API that has recently been added to TensorFlow version 1.1. It is simply called *Layers* or the *Layers API* or by its Python name `tf.layers`. This builder API is automatically installed as part of TensorFlow, so you no longer have to install a separate Python package as was needed with PrettyTensor.This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same Convolutional Neural Network using the Layers API. It is recommended that you are familiar with Tutorial 02 on Convolutional Neural Networks. Flowchart The following chart shows roughly how the data flows in the Convolutional Neural Network that is implemented below. See Tutorial 02 for a more detailed description of convolution.  The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in the convolutional layer. The images are also down-sampled using max-pooling so the image resolution is decreased from 28x28 to 14x14.These 16 smaller images are then processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and we need filter-weights for each output channel of this layer. There are 36 output channels so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are also down-sampled using max-pooling to 7x7 pixels.The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened to a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons (or elements). This feeds into another fully-connected layer with 10 neurons, one for each of the classes, which is used to determine the class of the image, that is, which number is depicted in the image.The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and true class of the input image is measured as the so-called cross-entropy. 
The optimizer then automatically propagates this error back through the Convolutional Network using the chain-rule of differentiation and updates the filter-weights so as to reduce the classification error. This is done iteratively thousands of times until the classification error is sufficiently low.These particular filter-weights and intermediate images are the results of one optimization run and may look different if you re-run this Notebook.Note that the computation in TensorFlow is actually done on a batch of images instead of a single image, which makes the computation more efficient. This means the flowchart actually has one more data-dimension when implemented in TensorFlow. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6 (Anaconda) and TensorFlow version:
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
print("- Test-set:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size of:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this one element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate it now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once so we can use these variables instead of numbers throughout the source-code below.
###Code
# We know that MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are stored in one-dimensional arrays of this length.
img_size_flat = img_size * img_size
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 channel for gray-scale.
num_channels = 1
# Number of classes, one class for each of 10 digits.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if data is correct
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow GraphThe entire purpose of TensorFlow is to have a so-called computational graph that can be executed much more efficiently than if the same calculations were to be performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only knows the computation of a single mathematical operation at a time.TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions so the gradient of the entire graph can be calculated using the chain-rule for derivatives.TensorFlow can also take advantage of multi-core CPUs as well as GPUs - and Google has even built special chips just for TensorFlow which are called TPUs (Tensor Processing Units) and are even faster than GPUs.A TensorFlow graph consists of the following parts which will be detailed below:* Placeholder variables used for inputting data to the graph.* Variables that are going to be optimized so as to make the convolutional network perform better.* The mathematical formulas for the convolutional neural network.* A so-called cost-measure or loss-function that can be used to guide the optimization of the variables.* An optimization method which updates the variables.In addition, the TensorFlow graph may also contain various debugging statements e.g. for logging data to be displayed using TensorBoard, which is not covered in this tutorial. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. We call this feeding the placeholder variables and it is demonstrated further below.First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional array. The data-type is set to `float32` and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images with each image being a vector of length `img_size_flat`.
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor so we have to reshape it so its shape is instead `[num_images, img_height, img_width, num_channels]`. Note that `img_height == img_width == img_size` and `num_images` can be inferred automatically by using -1 for the size of the first dimension. So the reshape operation is:
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]` which means it may hold an arbitrary number of labels and each label is a vector of length `num_classes` which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, axis=1)
###Output
_____no_output_____
###Markdown
PrettyTensor ImplementationThis section shows the implementation of a Convolutional Neural Network using PrettyTensor taken from Tutorial 03 so it can be compared to the implementation using the Layers API below. This code has been enclosed in an `if False:` block so it does not run here.The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object which has helper-functions for adding new computational layers so as to create an entire Convolutional Neural Network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers ImplementationWe now implement the same Convolutional Neural Network using the Layers API that is included in TensorFlow version 1.1. This requires more code than PrettyTensor, although a lot of the following are just comments.We use the `net`-variable to refer to the last layer while building the Neural Network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then input to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation-function is the Rectified Linear Unit (ReLU) described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion is that we can now easily pull out a reference to a layer. This was more complicated in PrettyTensor.Further below we want to plot the output of the first convolutional layer, so we create another variable for holding a reference to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This was also described in more detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer which has 36 filters each with 5x5 pixels, and a ReLU activation function again.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to plot the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the Neural Network, but these require 2-rank tensors as input, so we must first flatten the tensors.The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are sometimes called logits, so we have a convenience variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, axis=1)
###Output
_____no_output_____
###Markdown
We have now created the exact same Convolutional Neural Network in a few lines of code that required many complex lines of code in the direct TensorFlow implementation.The Layers API is perhaps not as elegant as PrettyTensor, but it has some other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow change the variables of the Convolutional Neural Network.The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy so it gets as close to zero as possible by changing the variables of the model.TensorFlow has a function for calculating the cross-entropy, which uses the values of the `logits`-layer because it also calculates the softmax internally, so as to improve numerical stability.
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
###Markdown
We have now calculated the cross-entropy for each of the image classifications so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization MethodNow that we have a cost measure that must be minimized, we can then create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4.Note that optimization is not performed at this point. In fact, nothing is calculated at all, we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification AccuracyWe need to calculate the classification accuracy so we can report progress to the user.First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Getting the WeightsFurther below, we want to plot the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves so we could just refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow.First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
<tf.Variable 'layer_conv1/kernel:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'beta1_power:0' shape=() dtype=float32_ref>
<tf.Variable 'beta2_power:0' shape=() dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam_1:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam_1:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam_1:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam_1:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam_1:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam_1:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam_1:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam_1:0' shape=(10,) dtype=float32_ref>
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to plot further below.It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()` which was designed for another purpose; either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like: `contents = session.run(weights_conv1)` as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow sessionOnce the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variablesThe variables for the TensorFlow graph must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer.If your computer crashes or becomes very slow because you run out of RAM, then you may try and lower this number, but you may then need to do more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
This function performs a number of optimization iterations so as to gradually improve the variables of the neural network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Below is a function for printing the classification accuracy on the test-set.It takes a while to compute the classification for all the images in the test-set, that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function.Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you can try and lower the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimizationThe accuracy on the test-set is very low because the variables for the neural network have only been initialized and not optimized at all, so it just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 5.8% (577 / 10000)
###Markdown
Performance after 1 optimization iterationThe classification accuracy does not improve much from just 1 optimization iteration, because the learning-rate for the optimizer is set very low.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 6.6% (659 / 10000)
###Markdown
Performance after 100 optimization iterationsAfter 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 81.2% (8125 / 10000)
Example errors:
###Markdown
Performance after 1000 optimization iterationsAfter 1000 optimization iterations, the model has greatly increased its accuracy on the test-set to more than 90%.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 94.5% (9455 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterationsAfter 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.8% (9884 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input ImagesHelper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer.Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the results of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer. There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel. Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we can make another 15 plots of filter-weights like this. We just make one more with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying these convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled to 14 x 14 pixels, which is half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also done after the second convolutional layer, but we retrieve these images before that has been applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____
###Markdown
TensorFlow Tutorial 03-B Layers API Original author [Magnus Erik Hvass Pedersen](http://www.hvass-labs.org/) / [GitHub](https://github.com/Hvass-Labs/TensorFlow-Tutorials) / [Videos on YouTube](https://www.youtube.com/playlist?list=PL9Hr9sNUjfsmEu1ZniY0XpHSzl5uihcXZ) / Korean translation by 곽병권 Introduction It is important to use a builder API when constructing neural networks in TensorFlow, because it makes the source code easy to implement and modify. It also lowers the risk of errors. Most of the other tutorials use the TensorFlow builder API called PrettyTensor to easily build neural networks. There are several builder APIs available for TensorFlow. PrettyTensor was used in these tutorials because, as of mid-2016, it was the most complete and polished builder API for TensorFlow. However, PrettyTensor was developed by only a single person working at Google, and although it has unique and elegant features, it may become deprecated in the future. This tutorial is about a small builder API that was recently added in TensorFlow version 1.1. It is simply called *Layers*, the *Layers API*, or by its Python name `tf.layers`. This builder API is installed automatically as part of TensorFlow, so there is no need to install a separate Python package, as was required for PrettyTensor. This tutorial is very similar to Tutorial 03 on PrettyTensor and shows how to implement the same convolutional neural network using the Layers API. It is recommended to go through Tutorial 02 on convolutional neural networks before this one. Flowchart The following chart roughly shows the data flow in the convolutional neural network implemented below. See Tutorial 02 for a detailed explanation of convolutions. The input image is processed in the first convolutional layer using the filter-weights. This results in 16 new images, one for each filter in that layer. The images are also down-sampled, so the image resolution is decreased from 28x28 to 14x14. These 16 smaller images are processed in the second convolutional layer. We need filter-weights for each of these 16 channels, and for each output channel of this layer. There are 36 output channels, so there are a total of 16 x 36 = 576 filters in the second convolutional layer. The resulting images are down-sampled again to 7x7 pixels. The output of the second convolutional layer is 36 images of 7x7 pixels each. These are then flattened into a single vector of length 7 x 7 x 36 = 1764, which is used as the input to a fully-connected layer with 128 neurons. This feeds into another fully-connected layer with 10 neurons, one for each class, which is used to determine the class of the image, that is, which digit is depicted in the image. The convolutional filters are initially chosen at random, so the classification is done randomly. The error between the predicted and the true class of the input image is measured with the so-called cross-entropy. The optimizer then automatically propagates this error backwards through the convolutional network using the chain-rule of differentiation, and updates the filter-weights so as to improve the classification error. This is repeated thousands of times until the classification error is sufficiently low. These particular filter-weights and intermediate images are the result of one optimization run and may look different if you re-run this Notebook. Note that the computation in TensorFlow is actually performed on a batch of images instead of a single image, which makes the computation more efficient. This means that, when implemented in TensorFlow, the flowchart actually has one more data-dimension. Imports
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import tensorflow as tf
import numpy as np
from sklearn.metrics import confusion_matrix
import math
###Output
_____no_output_____
###Markdown
This was developed using Python 3.6.1 (Anaconda) and the TensorFlow version shown below.
###Code
tf.__version__
###Output
_____no_output_____
###Markdown
Load Data The MNIST data-set is about 12 MB and will be downloaded automatically if it is not located in the given path.
###Code
from tensorflow.examples.tutorials.mnist import input_data
data = input_data.read_data_sets('data/MNIST/', one_hot=True)
###Output
Extracting data/MNIST/train-images-idx3-ubyte.gz
Extracting data/MNIST/train-labels-idx1-ubyte.gz
Extracting data/MNIST/t10k-images-idx3-ubyte.gz
Extracting data/MNIST/t10k-labels-idx1-ubyte.gz
###Markdown
The MNIST data-set has now been loaded and consists of 70,000 images and associated labels (i.e. classifications of the images). The data-set is split into 3 mutually exclusive sub-sets. We will only use the training and test-sets in this tutorial.
###Code
print("크기:")
print("- 훈련 세트:\t\t{}".format(len(data.train.labels)))
print("- 테스트 세트:\t\t{}".format(len(data.test.labels)))
print("- Validation-set:\t{}".format(len(data.validation.labels)))
###Output
Size:
- Training-set: 55000
- Test-set: 10000
- Validation-set: 5000
###Markdown
The class-labels are One-Hot encoded, which means that each label is a vector with 10 elements, all of which are zero except for one element. The index of this element is the class-number, that is, the digit shown in the associated image. We also need the class-numbers as integers for the test-set, so we calculate them now.
###Code
data.test.cls = np.argmax(data.test.labels, axis=1)
###Output
_____no_output_____
###Markdown
Data Dimensions The data dimensions are used in several places in the source-code below. They are defined once, so we can use these variables instead of numbers throughout the source-code below.
###Code
# The MNIST images are 28 pixels in each dimension.
img_size = 28
# Images are represented as one-dimensional arrays of this length (width times height).
img_size_flat = img_size * img_size
# Tuple with height and width of images, used to reshape arrays.
img_shape = (img_size, img_size)
# Number of colour channels for the images: 1 for gray-scale images.
num_channels = 1
# Number of classes; the classes are the digits 0-9.
num_classes = 10
###Output
_____no_output_____
###Markdown
Helper-function for plotting images Function used to plot 9 images in a 3x3 grid, and for writing the true and predicted classes below each image.
###Code
def plot_images(images, cls_true, cls_pred=None):
assert len(images) == len(cls_true) == 9
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_shape), cmap='binary')
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Plot a few images to see if the data is correct.
###Code
# Get the first images from the test-set.
images = data.test.images[0:9]
# Get the true classes for those images.
cls_true = data.test.cls[0:9]
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow Graph The purpose of TensorFlow is to construct a so-called computational graph that can be executed much more efficiently than if the same calculations were performed directly in Python. TensorFlow can be more efficient than NumPy because TensorFlow knows the entire computation graph that must be executed, while NumPy only computes one mathematical operation at a time. TensorFlow can also automatically calculate the gradients that are needed to optimize the variables of the graph so as to make the model perform better. This is because the graph is a combination of simple mathematical expressions, so the gradient of the entire graph can be calculated using the chain-rule for derivatives. TensorFlow can also take advantage of multi-core CPUs as well as GPUs, and Google has built special chips for TensorFlow called TPUs (Tensor Processing Units) that are even faster than GPUs. A TensorFlow graph consists of the following parts, described below: * Placeholder variables: allow changing values to be used as input. * Model variables: can be optimized so as to make the model perform better. * The model, which is essentially a mathematical function that calculates an output given the input in the placeholder variables and the model variables. * A cost measure that is used to guide the optimization of the variables. * An optimization method, which updates the variables of the model. In addition, the TensorFlow graph may also contain various debugging statements, e.g. logging data to be displayed with TensorBoard, which is not covered in this Notebook. Placeholder variables Placeholder variables serve as the input to the TensorFlow computational graph that we may change each time we execute the graph. First we define the placeholder variable for the input images. This allows us to change the images that are input to the TensorFlow graph. This is a so-called tensor, which just means that it is a multi-dimensional vector or matrix. The data-type is set to float32 and the shape is set to `[None, img_size_flat]`, where `None` means that the tensor may hold an arbitrary number of images, with each image being a vector of length `img_size_flat`.
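Before moving on, here is a small self-contained sketch (added for illustration, not part of the original tutorial) showing how these parts -- a placeholder, a model variable, a cost measure and an optimizer -- fit together in a tiny TensorFlow 1.x graph; all names in it are illustrative.
###Code
# Minimal illustration of the graph parts listed above. It is built in its own
# temporary graph so it does not interfere with the network constructed below.
with tf.Graph().as_default():
    x_in = tf.placeholder(tf.float32, shape=[None], name='x_in')  # placeholder variable
    w = tf.Variable(0.0, name='w')                                # model variable
    y_out = w * x_in                                              # the "model"
    cost = tf.reduce_mean(tf.square(y_out - 1.0))                 # cost measure
    train_step = tf.train.GradientDescentOptimizer(0.1).minimize(cost)  # optimization method
    with tf.Session() as sess:
        sess.run(tf.global_variables_initializer())
        for _ in range(10):
            sess.run(train_step, feed_dict={x_in: [1.0, 2.0, 3.0]})
###Output
_____no_output_____
###Markdown
The placeholder variable for the input images of this tutorial is defined next.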
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
###Output
_____no_output_____
###Markdown
The convolutional layers expect `x` to be encoded as a 4-dim tensor, so we have to reshape it. The required shape is `[num_images, img_height, img_width, num_channels]`. Note that the first dimension can be inferred automatically by using -1, so the reshape operation is as follows.
###Code
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
###Output
_____no_output_____
###Markdown
Next we have the placeholder variable for the true labels associated with the images that were input in the placeholder variable `x`. The shape of this placeholder variable is `[None, num_classes]`, which means it may hold an arbitrary number of labels, and each label is a vector of length `num_classes`, which is 10 in this case.
###Code
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
###Output
_____no_output_____
###Markdown
We could also have a placeholder variable for the class-number, but we will instead calculate it using argmax. Note that this is a TensorFlow operator, so nothing is calculated at this point.
###Code
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
WARNING:tensorflow:From <ipython-input-12-4674210f2acc>:1: calling argmax (from tensorflow.python.ops.math_ops) with dimension is deprecated and will be removed in a future version.
Instructions for updating:
Use the `axis` argument instead
###Markdown
PrettyTensor Implementation This section shows the implementation of a convolutional neural network using PrettyTensor, taken from Tutorial 03, so it can be compared to the implementation using the Layers API below. This code is enclosed in an `if False:` block so it is never run here. The basic idea is to wrap the input tensor `x_image` in a PrettyTensor object, which has helper-functions for adding new computational layers so as to create the entire convolutional neural network. This is a fairly simple and elegant syntax.
###Code
if False:
x_pretty = pt.wrap(x_image)
with pt.defaults_scope(activation_fn=tf.nn.relu):
y_pred, loss = x_pretty.\
conv2d(kernel=5, depth=16, name='layer_conv1').\
max_pool(kernel=2, stride=2).\
conv2d(kernel=5, depth=36, name='layer_conv2').\
max_pool(kernel=2, stride=2).\
flatten().\
fully_connected(size=128, name='layer_fc1').\
softmax_classifier(num_classes=num_classes, labels=y_true)
###Output
_____no_output_____
###Markdown
Layers Implementation We now implement the same convolutional neural network using the Layers API, which is included in TensorFlow version 1.1 and later. This requires more code than PrettyTensor, although much of it is just a description of the parameters being passed in. We use the `net`-variable to refer to the last layer while building the neural network. This makes it easy to add or remove layers in the code if you want to experiment. First we set the `net`-variable to the reshaped input image.
###Code
net = x_image
###Output
_____no_output_____
###Markdown
The input image is then fed to the first convolutional layer, which has 16 filters each of size 5x5 pixels. The activation function is the Rectified Linear Unit (ReLU), which is described in more detail in Tutorial 02.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv1', padding='same',
filters=16, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
One of the advantages of constructing neural networks in this fashion is that we can easily pull out a reference to a layer. This is a bit more complicated in PrettyTensor. Further below we want to plot the output of the first convolutional layer, so we create another variable that refers to that layer.
###Code
layer_conv1 = net
###Output
_____no_output_____
###Markdown
We now do the max-pooling on the output of the convolutional layer. This is described in detail in Tutorial 02.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
We now add the second convolutional layer, which has 36 filters each of 5x5 pixels, and a ReLU activation function.
###Code
net = tf.layers.conv2d(inputs=net, name='layer_conv2', padding='same',
filters=36, kernel_size=5, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We also want to visualize the output of this convolutional layer, so we keep a reference for later use.
###Code
layer_conv2 = net
###Output
_____no_output_____
###Markdown
The output of the second convolutional layer is also max-pooled for down-sampling the images.
###Code
net = tf.layers.max_pooling2d(inputs=net, pool_size=2, strides=2)
###Output
_____no_output_____
###Markdown
The tensors that are being output by this max-pooling are 4-rank, as can be seen from this:
###Code
net
###Output
_____no_output_____
###Markdown
Next we want to add fully-connected layers to the neural network, but these require 2-rank tensors as input, so we must first flatten the tensors. The `tf.layers` API was first located in `tf.contrib.layers` before it was moved into TensorFlow Core. But even though it has taken the TensorFlow developers a year to move these fairly simple functions, they have somehow forgotten to move the even simpler `flatten()` function. So we still need to use the one in `tf.contrib.layers`.
###Code
net = tf.contrib.layers.flatten(net)
# This should eventually be replaced by:
# net = tf.layers.flatten(net)
###Output
_____no_output_____
###Markdown
This has now flattened the data to a 2-rank tensor, as can be seen below.
###Code
net
###Output
_____no_output_____
###Markdown
We can now add fully-connected layers to the neural network. These are called *dense* layers in the Layers API.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc1',
units=128, activation=tf.nn.relu)
###Output
_____no_output_____
###Markdown
We need the neural network to classify the input images into 10 different classes. So the final fully-connected layer has `num_classes=10` output neurons.
###Code
net = tf.layers.dense(inputs=net, name='layer_fc_out',
units=num_classes, activation=None)
###Output
_____no_output_____
###Markdown
The outputs of the final fully-connected layer are also called logits, so for convenience we use a variable with that name.
###Code
logits = net
###Output
_____no_output_____
###Markdown
We use the softmax function to 'squash' the outputs so they are between zero and one, and so they sum to one.
###Code
y_pred = tf.nn.softmax(logits=logits)
###Output
_____no_output_____
###Markdown
This tells us how likely the neural network thinks the input image is of each possible class. The one that has the highest value is considered the most likely, so its index is taken to be the class-number.
###Code
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
We have now created the exact same convolutional neural network in a few lines of code, which would require many complex lines of code in a direct TensorFlow implementation. The Layers API is perhaps not as elegant as PrettyTensor, but it has other advantages, e.g. that we can more easily refer to intermediate layers, and it is also easier to construct neural networks with branches and multiple outputs using the Layers API. Loss-Function to be Optimized To make the model better at classifying the input images, we must somehow optimize the internal variables of the convolutional neural network. The cross-entropy is a performance measure used in classification. The cross-entropy is a continuous function that is always positive, and if the predicted output of the model exactly matches the desired output then the cross-entropy equals zero. The goal of optimization is therefore to minimize the cross-entropy, so it gets as close to zero as possible by optimizing the variables of the model. TensorFlow has a function for calculating the cross-entropy; it uses the values of the `logits`-layer because it also calculates the softmax internally, so as to improve numerical stability.
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(labels=y_true, logits=logits)
###Output
_____no_output_____
###Markdown
We have now calculated the cross-entropy for each of the image classifications, so we have a measure of how well the model performs on each image individually. But in order to use the cross-entropy to guide the optimization of the model's variables we need a single scalar value, so we simply take the average of the cross-entropy for all the image classifications.
###Code
loss = tf.reduce_mean(cross_entropy)
###Output
_____no_output_____
###Markdown
Optimization Method Now that we have a cost measure that must be minimized, we can create an optimizer. In this case it is the Adam optimizer with a learning-rate of 1e-4. Note that optimization is not performed at this point. In fact, nothing is calculated at all; we just add the optimizer-object to the TensorFlow graph for later execution.
###Code
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(loss)
###Output
_____no_output_____
###Markdown
Classification Accuracy We calculate the classification accuracy so we can report progress to the user. First we create a vector of booleans telling us whether the predicted class equals the true class of each image.
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
###Output
_____no_output_____
###Markdown
The classification accuracy is calculated by first type-casting the vector of booleans to floats, so that False becomes 0 and True becomes 1, and then taking the average of these numbers.
###Code
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
Getting the Weights Further below, we want to visualize the weights of the convolutional layers. In the TensorFlow implementation we had created the variables ourselves, so we could refer to them directly. But when the network is constructed using a builder API such as `tf.layers`, all the variables of the layers are created indirectly by the builder API. We therefore have to retrieve the variables from TensorFlow. First we need a list of the variable names in the TensorFlow graph:
###Code
for var in tf.get_collection(tf.GraphKeys.GLOBAL_VARIABLES):
print(var)
###Output
<tf.Variable 'layer_conv1/kernel:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'beta1_power:0' shape=() dtype=float32_ref>
<tf.Variable 'beta2_power:0' shape=() dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/kernel/Adam_1:0' shape=(5, 5, 1, 16) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv1/bias/Adam_1:0' shape=(16,) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/kernel/Adam_1:0' shape=(5, 5, 16, 36) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_conv2/bias/Adam_1:0' shape=(36,) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/kernel/Adam_1:0' shape=(1764, 128) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc1/bias/Adam_1:0' shape=(128,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/kernel/Adam_1:0' shape=(128, 10) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam:0' shape=(10,) dtype=float32_ref>
<tf.Variable 'layer_fc_out/bias/Adam_1:0' shape=(10,) dtype=float32_ref>
###Markdown
Each of the convolutional layers has two variables. For the first convolutional layer they are named `layer_conv1/kernel:0` and `layer_conv1/bias:0`. The `kernel` variables are the ones we want to visualize further below. It is somewhat awkward to get references to these variables, because we have to use the TensorFlow function `get_variable()`, which was designed for another purpose: either creating a new variable or re-using an existing variable. The easiest thing is to make the following helper-function.
###Code
def get_weights_variable(layer_name):
# Retrieve an existing variable named 'kernel' in the scope
# with the given layer_name.
# This is awkward because the TensorFlow function was
# really intended for another purpose.
with tf.variable_scope(layer_name, reuse=True):
variable = tf.get_variable('kernel')
return variable
###Output
_____no_output_____
###Markdown
Using this helper-function we can retrieve the variables. These are TensorFlow objects. In order to get the contents of the variables, you must do something like `contents = session.run(weights_conv1)`, as demonstrated further below.
###Code
weights_conv1 = get_weights_variable(layer_name='layer_conv1')
weights_conv2 = get_weights_variable(layer_name='layer_conv2')
###Output
_____no_output_____
###Markdown
TensorFlow Run Create TensorFlow session Once the TensorFlow graph has been created, we have to create a TensorFlow session which is used to execute the graph.
###Code
session = tf.Session()
###Output
_____no_output_____
###Markdown
Initialize variables Variables such as the `weights` and `biases` must be initialized before we start optimizing them.
###Code
session.run(tf.global_variables_initializer())
###Output
_____no_output_____
###Markdown
Helper-function to perform optimization iterations There are 55,000 images in the training-set. It takes a long time to calculate the gradient of the model using all these images. We therefore only use a small batch of images in each iteration of the optimizer. If your computer crashes or becomes very slow because you run out of memory, you can lower (or increase) this number, but you may then need to perform more optimization iterations.
###Code
train_batch_size = 64
###Output
_____no_output_____
###Markdown
Function that performs a number of optimization iterations so as to gradually improve the variables of the network layers. In each iteration, a new batch of data is selected from the training-set and then TensorFlow executes the optimizer using those training samples. The progress is printed every 100 iterations.
###Code
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch = data.train.next_batch(train_batch_size)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status every 100 iterations.
if i % 100 == 0:
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
# Message for printing.
msg = "Optimization Iteration: {0:>6}, Training Accuracy: {1:>6.1%}"
# Print it.
print(msg.format(i + 1, acc))
# Update the total number of iterations performed.
total_iterations += num_iterations
###Output
_____no_output_____
###Markdown
Helper-function to plot example errors Function for plotting examples of images from the test-set that have been mis-classified.
###Code
def plot_example_errors(cls_pred, correct):
# This function is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# correct is a boolean array whether the predicted class
# is equal to the true class for each image in the test-set.
# Negate the boolean array.
incorrect = (correct == False)
# Get the images from the test-set that have been
# incorrectly classified.
images = data.test.images[incorrect]
# Get the predicted classes for those images.
cls_pred = cls_pred[incorrect]
# Get the true classes for those images.
cls_true = data.test.cls[incorrect]
# Plot the first 9 images.
plot_images(images=images[0:9],
cls_true=cls_true[0:9],
cls_pred=cls_pred[0:9])
###Output
_____no_output_____
###Markdown
Helper-function to plot the confusion matrix
###Code
def plot_confusion_matrix(cls_pred):
# This is called from print_test_accuracy() below.
# cls_pred is an array of the predicted class-number for
# all images in the test-set.
# Get the true classifications for the test-set.
cls_true = data.test.cls
# Get the confusion matrix using sklearn.
cm = confusion_matrix(y_true=cls_true,
y_pred=cls_pred)
# Print the confusion matrix as text.
print(cm)
# Plot the confusion matrix as an image.
plt.matshow(cm)
# Make various adjustments to the plot.
plt.colorbar()
tick_marks = np.arange(num_classes)
plt.xticks(tick_marks, range(num_classes))
plt.yticks(tick_marks, range(num_classes))
plt.xlabel('Predicted')
plt.ylabel('True')
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for showing the performance Function for printing the classification accuracy on the test-set. It takes a while to compute the classification for all the images in the test-set; that's why the results are re-used by calling the above functions directly from this function, so the classifications don't have to be recalculated by each function. Note that this function can use a lot of computer memory, which is why the test-set is split into smaller batches. If you have little RAM in your computer and it crashes, then you should try lowering the batch-size.
###Code
# Split the test-set into smaller batches of this size.
test_batch_size = 256
def print_test_accuracy(show_example_errors=False,
show_confusion_matrix=False):
# Number of images in the test-set.
num_test = len(data.test.images)
# Allocate an array for the predicted classes which
# will be calculated in batches and filled into this array.
cls_pred = np.zeros(shape=num_test, dtype=np.int)
# Now calculate the predicted classes for the batches.
# We will just iterate through all the batches.
# There might be a more clever and Pythonic way of doing this.
# The starting index for the next batch is denoted i.
i = 0
while i < num_test:
# The ending index for the next batch is denoted j.
j = min(i + test_batch_size, num_test)
# Get the images from the test-set between index i and j.
images = data.test.images[i:j, :]
# Get the associated labels.
labels = data.test.labels[i:j, :]
# Create a feed-dict with these images and labels.
feed_dict = {x: images,
y_true: labels}
# Calculate the predicted class using TensorFlow.
cls_pred[i:j] = session.run(y_pred_cls, feed_dict=feed_dict)
# Set the start-index for the next batch to the
# end-index of the current batch.
i = j
# Convenience variable for the true class-numbers of the test-set.
cls_true = data.test.cls
# Create a boolean array whether each image is correctly classified.
correct = (cls_true == cls_pred)
# Calculate the number of correctly classified images.
# When summing a boolean array, False means 0 and True means 1.
correct_sum = correct.sum()
# Classification accuracy is the number of correctly classified
# images divided by the total number of images in the test-set.
acc = float(correct_sum) / num_test
# Print the accuracy.
msg = "Accuracy on Test-Set: {0:.1%} ({1} / {2})"
print(msg.format(acc, correct_sum, num_test))
# Plot some examples of mis-classifications, if desired.
if show_example_errors:
print("Example errors:")
plot_example_errors(cls_pred=cls_pred, correct=correct)
# Plot the confusion matrix, if desired.
if show_confusion_matrix:
print("Confusion Matrix:")
plot_confusion_matrix(cls_pred=cls_pred)
###Output
_____no_output_____
###Markdown
Performance before any optimization The accuracy on the test-set is very low because the model variables have only been initialized and not optimized at all, so the model just classifies the images randomly.
###Code
print_test_accuracy()
###Output
Accuracy on Test-Set: 10.2% (1023 / 10000)
###Markdown
Performance after 1 optimization iteration Because the learning-rate for the optimizer is set very low, the classification accuracy does not improve much from just 1 optimization iteration.
###Code
optimize(num_iterations=1)
print_test_accuracy()
###Output
Accuracy on Test-Set: 10.4% (1042 / 10000)
###Markdown
Performance after 100 optimization iterations After 100 optimization iterations, the model has significantly improved its classification accuracy.
###Code
%%time
optimize(num_iterations=99) # We already performed 1 iteration above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 78.6% (7858 / 10000)
Example errors:
###Markdown
Performance after 1,000 optimization iterations After 1,000 optimization iterations, the model's accuracy has greatly increased to more than 90% on the test-set.
###Code
%%time
optimize(num_iterations=900) # We performed 100 iterations above.
print_test_accuracy(show_example_errors=True)
###Output
Accuracy on Test-Set: 94.9% (9491 / 10000)
Example errors:
###Markdown
Performance after 10,000 optimization iterations After 10,000 optimization iterations, the model has a classification accuracy on the test-set of about 99%.
###Code
%%time
optimize(num_iterations=9000) # We performed 1000 iterations above.
print_test_accuracy(show_example_errors=True,
show_confusion_matrix=True)
###Output
Accuracy on Test-Set: 98.9% (9890 / 10000)
Example errors:
###Markdown
Visualization of Weights and Layers Helper-function for plotting convolutional weights
###Code
def plot_conv_weights(weights, input_channel=0):
# Assume weights are TensorFlow ops for 4-dim variables
# e.g. weights_conv1 or weights_conv2.
# Retrieve the values of the weight-variables from TensorFlow.
# A feed-dict is not necessary because nothing is calculated.
w = session.run(weights)
# Get the lowest and highest values for the weights.
# This is used to correct the colour intensity across
# the images so they can be compared with each other.
w_min = np.min(w)
w_max = np.max(w)
# Number of filters used in the conv. layer.
num_filters = w.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot all the filter-weights.
for i, ax in enumerate(axes.flat):
# Only plot the valid filter-weights.
if i<num_filters:
# Get the weights for the i'th filter of the input channel.
# See new_conv_layer() for details on the format
# of this 4-dim tensor.
img = w[:, :, input_channel, i]
# Plot image.
ax.imshow(img, vmin=w_min, vmax=w_max,
interpolation='nearest', cmap='seismic')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Helper-function for plotting the output of a convolutional layer
###Code
def plot_conv_layer(layer, image):
# Assume layer is a TensorFlow op that outputs a 4-dim tensor
# which is the output of a convolutional layer,
# e.g. layer_conv1 or layer_conv2.
# Create a feed-dict containing just one image.
# Note that we don't need to feed y_true because it is
# not used in this calculation.
feed_dict = {x: [image]}
# Calculate and retrieve the output values of the layer
# when inputting that image.
values = session.run(layer, feed_dict=feed_dict)
# Number of filters used in the conv. layer.
num_filters = values.shape[3]
# Number of grids to plot.
# Rounded-up, square-root of the number of filters.
num_grids = math.ceil(math.sqrt(num_filters))
# Create figure with a grid of sub-plots.
fig, axes = plt.subplots(num_grids, num_grids)
# Plot the output images of all the filters.
for i, ax in enumerate(axes.flat):
# Only plot the images for valid filters.
if i<num_filters:
# Get the output image of using the i'th filter.
img = values[0, :, :, i]
# Plot image.
ax.imshow(img, interpolation='nearest', cmap='binary')
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
###Output
_____no_output_____
###Markdown
Input Images Helper-function for plotting an image.
###Code
def plot_image(image):
plt.imshow(image.reshape(img_shape),
interpolation='nearest',
cmap='binary')
plt.show()
###Output
_____no_output_____
###Markdown
Plot an image from the test-set which will be used as an example below.
###Code
image1 = data.test.images[0]
plot_image(image1)
###Output
_____no_output_____
###Markdown
Plot another example image from the test-set.
###Code
image2 = data.test.images[13]
plot_image(image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 1 Now plot the filter-weights for the first convolutional layer. Note that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv1)
###Output
_____no_output_____
###Markdown
Applying each of these convolutional filters to the first input image gives the following output images, which are then used as input to the second convolutional layer.
###Code
plot_conv_layer(layer=layer_conv1, image=image1)
###Output
_____no_output_____
###Markdown
The following images are the result of applying the convolutional filters to the second image.
###Code
plot_conv_layer(layer=layer_conv1, image=image2)
###Output
_____no_output_____
###Markdown
Convolution Layer 2 Now plot the filter-weights for the second convolutional layer. There are 16 output channels from the first conv-layer, which means there are 16 input channels to the second conv-layer. The second conv-layer has a set of filter-weights for each of its input channels. We start by plotting the filter-weights for the first channel. Note again that positive weights are red and negative weights are blue.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=0)
###Output
_____no_output_____
###Markdown
There are 16 input channels to the second convolutional layer, so we could make another 15 plots of filter-weights like this. We just make one more, with the filter-weights for the second channel.
###Code
plot_conv_weights(weights=weights_conv2, input_channel=1)
###Output
_____no_output_____
###Markdown
It can be difficult to understand and keep track of how these filters are applied because of the high dimensionality. Applying the convolutional filters to the images that were output from the first conv-layer gives the following images. Note that these are down-sampled to 14 x 14 pixels, half the resolution of the original input images, because the first convolutional layer was followed by a max-pooling layer with stride 2. Max-pooling is also performed after the second convolutional layer, but the images below were retrieved before it was applied.
###Code
plot_conv_layer(layer=layer_conv2, image=image1)
###Output
_____no_output_____
###Markdown
And these are the results of applying the filter-weights to the second image.
###Code
plot_conv_layer(layer=layer_conv2, image=image2)
###Output
_____no_output_____
###Markdown
Close TensorFlow Session We are now done using TensorFlow, so we close the session to release its resources.
###Code
# This has been commented out in case you want to modify and experiment
# with the Notebook without having to restart it.
# session.close()
###Output
_____no_output_____ |
section-04-research-and-development/06-feature-engineering-with-open-source.ipynb | ###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the feature engineering pipeline from notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but, whenever possible, we will replace the manually created functions with open-source classes, and hopefully understand the value they bring. Reproducibility: Setting the seedWith the aim of ensuring reproducibility between runs of the same notebook, and also between the research and production environments, it is extremely important that we **set the seed** for each step that includes some element of randomness.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise all the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into a training and a testing set. When we engineer features, some techniques learn parameters from the data. It is important to learn these parameters only from the train set, to avoid over-fitting. Our feature engineering techniques will learn the mean, the mode, the exponents for the Yeo-Johnson transformation, the category frequencies, and the category-to-number mappings from the train set. **Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle: 1. Missing values 2. Temporal variables 3. Non-Gaussian distributed variables 4. Categorical variables: remove rare labels 5. Categorical variables: convert strings to numbers 6. Standardize the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "missing" in those variables with a lot of missing data. In the variables that contain fewer observations without values, we will instead replace the missing data with the most frequent category. This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add this values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will first add a binary missing indicator variable, and then replace the missing values in the original variable with the mean.
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
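# For example (illustrative file names only, not used elsewhere in this project):
# joblib.dump(mean_imputer, 'mean_imputer.joblib')
# mean_imputer = joblib.load('mean_imputer.joblib')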
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere are 2 classes in Feature-engine that allow us to perform the 2 transformations below: [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture the elapsed time, and [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted features. We will do the first one manually, so we take the opportunity to create 1 class ourselves for the course. For the second operation, we will use the DropFeatures class.
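As a reference, the sketch below is an assumption about how such an in-house, scikit-learn compatible class could eventually look (it is not the course's actual implementation, and the name `TemporalVariableTransformer` is illustrative):
###Code
# Illustrative sketch only (not used in this notebook): a scikit-learn compatible
# version of the elapsed-time logic that is implemented manually below.
from sklearn.base import BaseEstimator, TransformerMixin

class TemporalVariableTransformer(BaseEstimator, TransformerMixin):

    def __init__(self, variables, reference_variable):
        self.variables = variables
        self.reference_variable = reference_variable

    def fit(self, X, y=None):
        # nothing to learn from the data; fit exists for pipeline compatibility
        return self

    def transform(self, X):
        X = X.copy()
        for var in self.variables:
            # elapsed years between the temporal variable and the sale year
            X[var] = X[self.reference_variable] - X[var]
        return X
###Output
_____no_output_____
###Markdown
For now, we use the simple helper-function below, followed by DropFeatures: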
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
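# a sketch (our own addition) of how the elapsed_years() logic above could be
# wrapped in a scikit-learn style transformer; the class name and interface
# are our own choice here, anticipating the in-house class mentioned in the
# notes above
from sklearn.base import BaseEstimator, TransformerMixin

class ElapsedYearsTransformer(BaseEstimator, TransformerMixin):
    """Replace each temporal variable with reference_variable - variable."""

    def __init__(self, variables, reference_variable='YrSold'):
        self.variables = variables
        self.reference_variable = reference_variable

    def fit(self, X, y=None):
        # nothing is learned from the data
        return self

    def transform(self, X):
        X = X.copy()
        for var in self.variables:
            X[var] = X[self.reference_variable] - X[var]
        return X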
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will apply the logarithm to the positive numerical variables in order to get a more Gaussian-like distribution.
###Code
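# a quick sanity check (our own addition): the logarithm is only defined for
# positive values, so we confirm these three variables are strictly positive
# in the train set before transforming them
for var in ["LotFrontage", "1stFlrSF", "GrLivArea"]:
    assert (X_train[var] > 0).all(), f"{var} contains non-positive values"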
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
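# an optional cross-check (our own addition, assuming scipy is installed,
# which scikit-learn already requires): scipy can estimate the Yeo-Johnson
# lambda for LotArea directly, and it should be close to the lambda the
# Feature-engine transformer learns below
from scipy import stats
_, lotarea_lambda = stats.yeojohnson(X_train['LotArea'])
print(f"scipy-estimated Yeo-Johnson lambda for LotArea: {lotarea_lambda:.4f}")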
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesA few variables were very skewed; we will transform those into binary variables.We can perform the transformation below with open-source tools: the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, which lets us apply the transformation only to a subset of features. This also gives us another opportunity to code the class as an in-house package later in the course.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
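# a quick check (our own addition): after binarisation each of these columns
# should contain only the values 0 and 1 (anything greater than the threshold
# of 0 was mapped to 1, the rest to 0)
assert X_train[skewed].isin([0, 1]).all().all()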
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
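# a sketch (our own addition) of how these hand-crafted mappings could be kept
# together in a single dictionary, which makes it easier to move them into a
# config file for deployment later on (the grouping below is our own choice)
ordinal_mappings = {
    'quality': (qual_vars, qual_mappings),
    'exposure': (['BsmtExposure'], exposure_mappings),
    'finish': (finish_vars, finish_mappings),
    'garage': (['GarageFinish'], garage_mappings),
    'fence': (['Fence'], fence_mappings),
}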
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
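# for intuition (our own addition): the categories that will be grouped under
# "Rare" for a given variable are simply those whose train set frequency is
# below the 1% tol threshold; here we inspect one example variable
example_var = cat_others[0]
category_freqs = X_train[example_var].value_counts(normalize=True)
print(category_freqs[category_freqs < 0.01])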
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
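# a sketch (our own addition) of what encoding_method='ordered' amounts to for
# one variable: categories are sorted by their mean target value in the train
# set and then replaced by integers following that order, which should roughly
# match the mapping stored in cat_encoder.encoder_dict_
example_var = cat_others[0]
ordered_categories = y_train.groupby(X_train[example_var]).mean().sort_values().index
print(list(ordered_categories))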
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
tmp = pd.concat([train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the median house sale price. (remember that the target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the 0-1 range, using the minimum and maximum values learned from the train set:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
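# for reference (our own addition): MinMaxScaler rescales each feature as
# (x - min) / (max - min), with the min and max learned from the train set;
# the learned values are stored on the fitted scaler
scaler.data_min_[:5], scaler.data_max_[:5]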
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from the notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, whenever possible, the manually created functions by open-source classes, and hopefully understand the value they bring forward. Reproducibility: Setting the seedWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv(r'D:\Machine Learning_Deep Learning\Deploy_Machine_Learning_Model_Udemy\deploying-machine-learning-models\section-04-research-and-development\train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into a training and a testing set. When we engineer features, some techniques learn parameters from the data. It is important to learn these parameters only from the train set, in order to avoid over-fitting.Our feature engineering techniques will learn:- mean- mode- exponents for the Yeo-Johnson- category frequency- and category to number mappingsfrom the train set.**Separating the data into train and test involves randomness, therefore we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Scale the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
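# (illustrative note added here, not in the original notebook)
# the model will be trained on the log of SalePrice, so predictions need to
# be mapped back to the original scale with the exponential, for example:
# predicted_price = np.exp(model.predict(X_new))   # hypothetical model / data
print(y_train.head())          # log-transformed target
print(np.exp(y_train).head())  # back to the original price scale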
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "Missing" in those variables with a lot of missing data. In the variables that contain only a few missing observations, we will instead replace the missing data with the most frequent category. This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add these values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
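# (illustrative sketch, not part of the original notebook)
# the 'frequent' imputer stores, per variable, the mode of the train set;
# we can reproduce one entry manually with pandas and compare:
example_var = with_frequent_category[0]
print(X_train[example_var].mode()[0])
print(cat_imputer_frequent.imputer_dict_[example_var])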
# replace NA with the most frequent category
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
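# (quick check added for illustration, not in the original notebook)
# each '_na' column is simply a binary flag marking where the original
# variable was missing; we can verify that for one variable with plain pandas:
print(
    (X_train['LotFrontage_na'] == X_train['LotFrontage'].isnull().astype(int)).all()
)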
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere are 2 classes in Feature-engine that allow us to perform the 2 transformations below:- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so that we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
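# (quick check added for illustration, not in the original notebook)
# after the transformation these columns hold the time, in years, elapsed
# between the event (construction, remodelling, garage built) and the sale:
print(X_train[['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']].describe())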
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will apply the logarithm to the positive numerical variables in order to get a more Gaussian-like distribution.
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
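# (sketch added for illustration, not in the original notebook)
# the transformation is invertible, so a transformed value can always be
# mapped back to the original scale with the exponential:
print(np.exp(X_train['GrLivArea']).head())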
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
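# (illustrative note added here, not in the original notebook)
# the Yeo-Johnson transformation is a power transformation; the exponent
# (lambda) learned from the train set is stored above and re-used, unchanged,
# on the test set:
print(yeo_transformer.lambda_dict_['LotArea'])
print(round(X_train['LotArea'].skew(), 2))  # skewness after the transformation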
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesA few variables were very skewed; we will transform those into binary variables.We can perform this transformation with open source: we use the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, to apply the transformation only to a subset of features.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
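# (quick check added for illustration, not in the original notebook)
# after binarisation each of these columns should contain only 0s and 1s,
# i.e. whether the original value was greater than the threshold of 0:
print(X_train[skewed].nunique())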
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the variable definitions on the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
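# (defensive check added for illustration, not in the original notebook)
# pandas .map() returns NaN for any category that is missing from the
# mapping dictionary, so it is worth confirming the test set was fully
# covered as well:
mapped_vars = qual_vars + finish_vars + ['BsmtExposure', 'GarageFinish', 'Fence']
print(X_test[mapped_vars].isnull().sum().sum())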
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
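# (illustrative sketch added here, not part of the original notebook)
# with encoding_method='ordered' the categories are ranked by the train-set
# mean of the target; we can approximate the mapping for one variable
# manually and compare it with what the encoder stored:
example_var = cat_others[0]
ordered_cats = y_train.groupby(X_train[example_var]).mean().sort_values().index
print({category: i for i, category in enumerate(ordered_cats)})
print(cat_encoder.encoder_dict_[example_var])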
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
tmp = pd.concat([X_train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the median house sale price (remember that the target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the 0-1 range using their minimum and maximum values:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
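# (quick check added for illustration, not in the original notebook)
# the MinMaxScaler maps each feature with (x - min) / (max - min), where min
# and max were learned from the train set, so the train set itself should now
# lie in the [0, 1] interval:
print(X_train.min().min(), X_train.max().max())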
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from the notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, whenever possible, the manually created functions by open-source classes, and hopefully understand the value they bring forward. Reproducibility: Setting the seedWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data intro training and testing set. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.Our feature engineering techniques will learn:- mean- mode- exponents for the yeo-johnson- category frequency- and category to number mappingsfrom the train set.**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers5. Standardize the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "missing" in those variables with a lot of missing data. Alternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations without values. This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add this values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere is in Feature-engine 2 classes that allow us to perform the 2 transformations below:- [CombineWithFeatureReference](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so we take the opportunity to create 1 class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will transform with the logarightm the positive numerical variables in order to get a more Gaussian-like distribution.
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesA few variables were very skewed; we will transform those into binary variables.We can perform this transformation with open-source tools: we use the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, to be able to apply the transformation only to a subset of features. Later in the course, we will also code this step as a class in our in-house package.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
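# quick sanity check, added for illustration: after binarizing, these
# columns should contain only 0s and 1s
X_train[skewed].isin([0, 1]).all()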
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the variable descriptions on the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
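# for illustration only: the re-mapped variables are now integers that
# reflect the orderings defined in the dictionaries above
X_train[['ExterQual', 'BsmtQual', 'GarageFinish', 'Fence']].head()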
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group the categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
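# for illustration only: after grouping rare labels, each of these variables
# keeps only its frequent categories plus, possibly, 'Rare'
X_train[cat_others].nunique().sort_values(ascending=False)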
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
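# for illustration only: the former string categories are now integers,
# assigned according to the mean target value per category
X_train[cat_others].head()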
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
    # plot the median house sale price per encoded category
    tmp = pd.concat([train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the median house sale price (remember that the target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the 0-1 range using the minimum and maximum values:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
# save the cleaned datasets as csv files, for use in the next notebook
X_train.to_csv('xtrain.csv', index = False)
X_test.to_csv('xtest.csv', index = False)
y_train.to_csv('ytrain.csv', index = False)
y_test.to_csv('ytest.csv', index = False)
# now save the fitted scaler, in case we need it in the future
joblib.dump(scaler, 'minmax_scaler.joblib')
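# purely as an illustration, mirroring the dump above: in a later notebook
# the scaler could be reloaded like this
reloaded_scaler = joblib.load('minmax_scaler.joblib')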
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, whenever possible, the manually created functions with open-source classes, and hopefully understand the value they bring. Reproducibility: Setting the seedWith the aim of ensuring reproducibility between runs of the same notebook, and also between the research and production environments, it is extremely important that we **set the seed** for each step that includes some element of randomness.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into a training and a testing set. When we engineer features, some techniques learn parameters from the data, and it is important to learn these parameters only from the train set, to avoid over-fitting.Our feature engineering techniques will learn:- the mean- the mode- the exponents for the Yeo-Johnson transformation- category frequencies- and category-to-number mappingsfrom the train set.**Separating the data into train and test involves randomness; therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardize the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "missing" in those variables with a lot of missing data (more than 10% of observations missing). Alternatively, we will replace missing data with the most frequent category in those variables that have fewer missing observations (less than 10%). This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add these values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
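# As the notes above mention, these fitted imputers could be stored with joblib
# and reloaded later, e.g. when scoring new data. A minimal sketch only; the
# file names are arbitrary choices, not part of the original notebook:
joblib.dump(cat_imputer_missing, 'cat_imputer_missing.joblib')
joblib.dump(cat_imputer_frequent, 'cat_imputer_frequent.joblib')
# loading returns the fitted transformer, ready to .transform() new dataframes
cat_imputer_missing = joblib.load('cat_imputer_missing.joblib')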
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere are 2 classes in Feature-engine that allow us to perform the 2 transformations below:- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
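# The markdown above mentions creating a class for the elapsed-years step later
# in the course. A possible sketch of a scikit-learn-compatible transformer for
# this logic; the class and parameter names are illustrative, not the course's
# actual implementation:
from sklearn.base import BaseEstimator, TransformerMixin


class ElapsedYearsTransformer(BaseEstimator, TransformerMixin):
    """Subtract each temporal variable from a reference variable (e.g. YrSold)."""

    def __init__(self, variables, reference_variable):
        self.variables = variables
        self.reference_variable = reference_variable

    def fit(self, X, y=None):
        # nothing is learned from the data for this transformation
        return self

    def transform(self, X):
        X = X.copy()  # do not modify the original dataframe
        for var in self.variables:
            X[var] = X[self.reference_variable] - X[var]
        return X


# it would be used on the raw data, before YrSold is dropped, e.g.:
# ElapsedYearsTransformer(
#     variables=['YearBuilt', 'YearRemodAdd', 'GarageYrBlt'],
#     reference_variable='YrSold',
# ).fit_transform(X_train)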
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will transform the positive numerical variables with the logarithm in order to get a more Gaussian-like distribution.
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesThere were a few very skewed variables; we will transform those into binary variables.We can perform the transformation below with open-source classes: the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, applies the transformation only to the subset of features we select. We could also do this step manually, which would give us another opportunity to code the class as an in-house package later in the course, but here we use the open-source classes, as the code below shows.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) on Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
    # note: y_train was already log-transformed above, so np.log() here applies
    # the log a second time; that is why the y-axis range below is so narrow
    tmp = pd.concat([train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the median house sale price (remember that the target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the 0-1 range using their minimum and maximum values:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
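# Since every step above is now a fit/transform class, the steps could be
# chained in a single Scikit-learn Pipeline and persisted as one object with
# joblib. A partial sketch only, assuming it is fit on the raw, un-transformed
# train set and re-using the variable lists defined earlier in this notebook;
# this is not the full production pipeline of the course (the temporal, quality
# mapping and log steps are omitted here):
from sklearn.pipeline import Pipeline

feature_pipe = Pipeline([
    ('missing_imputation', CategoricalImputer(
        imputation_method='missing', variables=with_string_missing)),
    ('frequent_imputation', CategoricalImputer(
        imputation_method='frequent', variables=with_frequent_category)),
    ('missing_indicator', AddMissingIndicator(variables=vars_with_na)),
    ('mean_imputation', MeanMedianImputer(
        imputation_method='mean', variables=vars_with_na)),
    ('rare_label_encoder', RareLabelEncoder(
        tol=0.01, n_categories=1, variables=cat_others)),
    ('categorical_encoder', OrdinalEncoder(
        encoding_method='ordered', variables=cat_others)),
    ('scaler', MinMaxScaler()),
])

# fitting would need the target because of the ordered ordinal encoder, e.g.
# feature_pipe.fit(raw_X_train, y_train)  # raw_X_train is a hypothetical name
# and the whole fitted pipeline could then be saved in one go:
# joblib.dump(feature_pipe, 'feature_engineering_pipeline.joblib')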
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from the notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, whenever possible, the manually created functions by open-source classes, and hopefully understand the value they bring forward. Reproducibility: Setting the seedWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer # binarizer for heavily skewed variables
# from feature-engine
# to handle missing values
from feature_engine.imputation import (
AddMissingIndicator, # for creating new variable with binary (0, 1) 'missing' indicator
MeanMedianImputer, # for replacement by the mean
CategoricalImputer, # for replacement by string/number or frequent category.
    # fill_value: str, int, float, default='Missing'
)
from feature_engine.encoding import (
RareLabelEncoder, # for removal of rare labels
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper # used in conjunction with binarizer
# to visualise all the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('../data/train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into training and testing set. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.Our feature engineering techniques will learn:- mean- mode- exponents for the yeo-johnson- category frequency- and category to number mappingsfrom the train set.**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardize the values of the variables to the same range TargetWe apply the logarithm of the target
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "missing" in those variables with a lot (>=10%) of missing data. Alternatively, we will replace missing data with the most frequent category in those variables that contain fewer observations (<10%) without values. This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
###Output
_____no_output_____
###Markdown
Create two lists for imputation, using list comprehensions: variables with >=10% missing values go into with_string_missing, and variables with <10% missing values go into with_frequent_category.
###Code
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() >= 0.1] # more than 10% missing
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1] # less than 10% missing
# I print the values here, because it makes it easier for
# later when we need to add these values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
###Output
_____no_output_____
###Markdown
Use joblib to store fitted class
###Code
# store the fitted class using joblib (sklearn)
import joblib
joblib.dump(cat_imputer_missing,'cat_imp_missing')
# retrieve fitted class using joblib (sklearn)
joblib_cat_imp = joblib.load('cat_imp_missing') # open file to read it
# replace NA by missing
# # IMPORTANT: note that we could store this class with joblib
# X_train = cat_imputer_missing.transform(X_train)
# X_test = cat_imputer_missing.transform(X_test)
X_train = joblib_cat_imp.transform(X_train)
X_test = joblib_cat_imp.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator variables
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere are 2 classes in Feature-engine that allow us to perform the 2 transformations below:- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will transform with the logarithm the positive numerical variables in order to get a more Gaussian-like distribution.
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
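# A quick, optional visual check (not in the original notebook): the
# transformed variable should now look more Gaussian-like, which is the
# point of the Yeo-Johnson transformation
X_train['LotArea'].hist(bins=50)
plt.title('LotArea after the Yeo-Johnson transformation')
plt.show()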
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesThere were a few very skewed variables; we will transform those into binary variables.We can perform the transformation below with open-source classes: the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, applies the transformation only to the subset of features we select. We could also do this step manually, which would give us another opportunity to code the class as an in-house package later in the course, but here we use the open-source classes, as the code below shows.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
X_train[skewed].head() # before transformation
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head() # after transformation
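# For comparison, one manual equivalent of the wrapped Binarizer above is a
# simple np.where per variable. Demonstrated on a copy so the already-binarized
# train set stays untouched; `manual_binarized` is just a scratch dataframe.
manual_binarized = X_train[skewed].copy()
for var in skewed:
    # Binarizer(threshold=0) maps values greater than 0 to 1, everything else to 0;
    # on the already-binarized copy this simply returns the same 0/1 values
    manual_binarized[var] = np.where(manual_binarized[var] > 0, 1, 0)
manual_binarized.head()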
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
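# These repeated .map() calls are another candidate for an in-house transformer
# class later in the course. One possible sketch of a scikit-learn-compatible
# mapper; the class name and parameters are illustrative, not the course's
# actual implementation:
from sklearn.base import BaseEstimator, TransformerMixin


class Mapper(BaseEstimator, TransformerMixin):
    """Replace category strings by the integers defined in `mappings`."""

    def __init__(self, variables, mappings):
        self.variables = variables
        self.mappings = mappings

    def fit(self, X, y=None):
        return self  # the mapping is fixed, nothing is learned from the data

    def transform(self, X):
        X = X.copy()
        for var in self.variables:
            X[var] = X[var].map(self.mappings)
        return X


# e.g. Mapper(variables=qual_vars, mappings=qual_mappings) would reproduce the
# first block of .map() calls above in a single fit_transform call.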
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of the houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) on Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
# function plots median house sale price per encoded
# category
    # note: y_train was already log-transformed above, so np.log() here applies
    # the log a second time; that is why the y-axis range below is so narrow
    tmp = pd.concat([train, np.log(y_train)], axis=1)
tmp.groupby(var)['SalePrice'].median().plot.bar()
plt.title(var)
plt.ylim(2.2, 2.6)
plt.ylabel('SalePrice')
plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how the higher the integer that now represents the category, the higher the median house sale price.**NB Remember that the target is log-transformed; that is why the differences seem so small.** Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the 0-1 range using their minimum and maximum values:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from the notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, whenever possible, the manually created functions by open-source classes, and hopefully understand the value they bring forward. Reproducibility: Setting the seedWith the aim to ensure reproducibility between runs of the same notebook, but also between the research and production environment, for each step that includes some element of randomness, it is extremely important that we **set the seed**.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('./dataset/input/train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into training and testing sets. When we engineer features, some techniques learn parameters from data. It is important to learn these parameters only from the train set. This is to avoid over-fitting.Our feature engineering techniques will learn:- mean- mode- exponents for the yeo-johnson- category frequency- and category to number mappingsfrom the train set.**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardize the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "missing" in those variables with a lot of missing data (more than 10% of observations missing). Alternatively, we will replace missing data with the most frequent category in those variables that have fewer missing observations (less than 10%). This is common practice.
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add these values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
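# For reference, a manual equivalent of AddMissingIndicator: one extra binary
# column per variable, flagging where the original value is missing. Sketched
# on a copy so the transformed train set above stays untouched; `manual_ind`
# is just a scratch dataframe for this demo (the '_na' suffix matches the
# columns checked below).
manual_ind = X_train[vars_with_na].copy()
for var in vars_with_na:
    manual_ind[var + '_na'] = manual_ind[var].isnull().astype(int)
manual_ind.head()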
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeThere are 2 classes in Feature-engine that allow us to perform the 2 transformations below:- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will transform the positive numerical variables with the logarithm in order to get a more Gaussian-like distribution.
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesA few variables were very skewed; we will transform those into binary variables.We can perform this transformation with open source: we use the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, to apply the transformation only to a subset of features.Later in the course, we will take the opportunity to code this step manually as part of an in-house package.
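For reference, binarizing at threshold 0 simply flags whether the value is greater than zero; a manual pandas sketch of the same operation (illustrative only, on a subset of the variables):

```python
# manual sketch: 1 if the skewed variable is greater than 0, else 0
# (same effect as Binarizer(threshold=0) applied column by column)
skewed_example = ['BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch']  # subset, for illustration
binary_flags = (X_train[skewed_example] > 0).astype(int)
binary_flags.head()
```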
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the variable descriptions on the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
    # function plots the median house sale price per encoded category
    tmp = pd.concat([train, np.log(y_train)], axis=1)
    tmp.groupby(var)['SalePrice'].median().plot.bar()
    plt.title(var)
    plt.ylim(2.2, 2.6)
    plt.ylabel('SalePrice')
    plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the median house sale price (remember that the plotted target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the range given by the minimum and maximum values:
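Under the hood, MinMaxScaler maps each feature x to (x - min) / (max - min), with the minimum and maximum learned from the train set only; a manual sketch of the same scaling (illustrative only; constant columns would need special handling to avoid division by zero):

```python
# manual sketch of min-max scaling to the 0-1 range
train_min = X_train.min()
train_max = X_train.max()
X_train_scaled = (X_train - train_min) / (train_max - train_min)
X_test_scaled = (X_test - train_min) / (train_max - train_min)  # same train statistics
```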
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
###Output
_____no_output_____
###Markdown
Feature Engineering with Open-SourceIn this notebook, we will reproduce the Feature Engineering Pipeline from notebook 2 (02-Machine-Learning-Pipeline-Feature-Engineering), but we will replace, wherever possible, the manually created functions with open-source classes, and hopefully understand the value they bring. Reproducibility: Setting the seedTo ensure reproducibility between runs of the same notebook, and also between the research and production environments, it is extremely important that we **set the seed** for each step that includes some element of randomness.
###Code
# data manipulation and plotting
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
# for saving the pipeline
import joblib
# from Scikit-learn
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MinMaxScaler, Binarizer
# from feature-engine
from feature_engine.imputation import (
AddMissingIndicator,
MeanMedianImputer,
CategoricalImputer,
)
from feature_engine.encoding import (
RareLabelEncoder,
OrdinalEncoder,
)
from feature_engine.transformation import (
LogTransformer,
YeoJohnsonTransformer,
)
from feature_engine.selection import DropFeatures
from feature_engine.wrappers import SklearnTransformerWrapper
# to visualise al the columns in the dataframe
pd.pandas.set_option('display.max_columns', None)
# load dataset
data = pd.read_csv('train.csv')
# rows and columns of the data
print(data.shape)
# visualise the dataset
data.head()
###Output
(1460, 81)
###Markdown
Separate dataset into train and testIt is important to separate our data into training and testing sets. When we engineer features, some techniques learn parameters from the data. It is important to learn these parameters only from the train set, to avoid over-fitting.Our feature engineering techniques will learn:- the mean- the mode- the exponents for the Yeo-Johnson transformation- the category frequencies- and the category to number mappingsfrom the train set.**Separating the data into train and test involves randomness, therefore, we need to set the seed.**
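Setting `random_state` makes the split deterministic: repeating the split with the same seed returns identical partitions. A small illustration (not part of the pipeline):

```python
# same seed -> identical split; this is why we fix random_state below
a = train_test_split(list(range(10)), test_size=0.3, random_state=0)
b = train_test_split(list(range(10)), test_size=0.3, random_state=0)
a == b  # True: both runs produce exactly the same train/test partition
```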
###Code
# Let's separate into train and test set
# Remember to set the seed (random_state for this sklearn function)
X_train, X_test, y_train, y_test = train_test_split(
data.drop(['Id', 'SalePrice'], axis=1), # predictive variables
data['SalePrice'], # target
test_size=0.1, # portion of dataset to allocate to test set
random_state=0, # we are setting the seed here
)
X_train.shape, X_test.shape
###Output
_____no_output_____
###Markdown
Feature EngineeringIn the following cells, we will engineer the variables of the House Price Dataset so that we tackle:1. Missing values2. Temporal variables3. Non-Gaussian distributed variables4. Categorical variables: remove rare labels5. Categorical variables: convert strings to numbers6. Standardize the values of the variables to the same range TargetWe apply the logarithm
###Code
y_train = np.log(y_train)
y_test = np.log(y_test)
###Output
_____no_output_____
###Markdown
Missing values Categorical variablesWe will replace missing values with the string "Missing" in those variables that contain a lot of missing data. In those variables with only a few missing observations, we will instead replace the missing data with the most frequent category. This is common practice.
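In plain pandas, the two strategies look roughly like this (a simplified sketch with example variables; the CategoricalImputer below additionally stores the learned values so they can be reused on the test set and in production):

```python
# 1) variables with a lot of missing data: add an explicit 'Missing' category
X_train['FireplaceQu'] = X_train['FireplaceQu'].fillna('Missing')   # example variable
# 2) variables with few missing observations: impute with the most frequent category
frequent_label = X_train['BsmtQual'].mode()[0]                       # learned from train only
X_train['BsmtQual'] = X_train['BsmtQual'].fillna(frequent_label)     # example variable
```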
###Code
# let's identify the categorical variables
# we will capture those of type object
cat_vars = [var for var in data.columns if data[var].dtype == 'O']
# MSSubClass is also categorical by definition, despite its numeric values
# (you can find the definitions of the variables in the data_description.txt
# file available on Kaggle, in the same website where you downloaded the data)
# lets add MSSubClass to the list of categorical variables
cat_vars = cat_vars + ['MSSubClass']
# cast all variables as categorical
X_train[cat_vars] = X_train[cat_vars].astype('O')
X_test[cat_vars] = X_test[cat_vars].astype('O')
# number of categorical variables
len(cat_vars)
# make a list of the categorical variables that contain missing values
cat_vars_with_na = [
var for var in cat_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[cat_vars_with_na ].isnull().mean().sort_values(ascending=False)
# variables to impute with the string missing
with_string_missing = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() > 0.1]
# variables to impute with the most frequent category
with_frequent_category = [
var for var in cat_vars_with_na if X_train[var].isnull().mean() < 0.1]
# I print the values here, because it makes it easier for
# later when we need to add this values to a config file for
# deployment
with_string_missing
with_frequent_category
# replace missing values with new label: "Missing"
# set up the class
cat_imputer_missing = CategoricalImputer(
imputation_method='missing', variables=with_string_missing)
# fit the class to the train set
cat_imputer_missing.fit(X_train)
# the class learns and stores the parameters
cat_imputer_missing.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_missing.transform(X_train)
X_test = cat_imputer_missing.transform(X_test)
# replace missing values with most frequent category
# set up the class
cat_imputer_frequent = CategoricalImputer(
imputation_method='frequent', variables=with_frequent_category)
# fit the class to the train set
cat_imputer_frequent.fit(X_train)
# the class learns and stores the parameters
cat_imputer_frequent.imputer_dict_
# replace NA by missing
# IMPORTANT: note that we could store this class with joblib
X_train = cat_imputer_frequent.transform(X_train)
X_test = cat_imputer_frequent.transform(X_test)
# check that we have no missing information in the engineered variables
X_train[cat_vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in cat_vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Numerical variablesTo engineer missing values in numerical variables, we will:- add a binary missing indicator variable- and then replace the missing values in the original variable with the mean
###Code
# now let's identify the numerical variables
num_vars = [
var for var in X_train.columns if var not in cat_vars and var != 'SalePrice'
]
# number of numerical variables
len(num_vars)
# make a list with the numerical variables that contain missing values
vars_with_na = [
var for var in num_vars
if X_train[var].isnull().sum() > 0
]
# print percentage of missing values per variable
X_train[vars_with_na].isnull().mean()
# print, makes my life easier when I want to create the config
vars_with_na
# add missing indicator
missing_ind = AddMissingIndicator(variables=vars_with_na)
missing_ind.fit(X_train)
X_train = missing_ind.transform(X_train)
X_test = missing_ind.transform(X_test)
# check the binary missing indicator variables
X_train[['LotFrontage_na', 'MasVnrArea_na', 'GarageYrBlt_na']].head()
# then replace missing data with the mean
# set the imputer
mean_imputer = MeanMedianImputer(
imputation_method='mean', variables=vars_with_na)
# learn and store parameters from train set
mean_imputer.fit(X_train)
# the stored parameters
mean_imputer.imputer_dict_
X_train = mean_imputer.transform(X_train)
X_test = mean_imputer.transform(X_test)
# IMPORTANT: note that we could save the imputers with joblib
# check that we have no more missing values in the engineered variables
X_train[vars_with_na].isnull().sum()
# check that test set does not contain null values in the engineered variables
[var for var in vars_with_na if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Temporal variables Capture elapsed timeFeature-engine has 2 classes that allow us to perform the 2 transformations below:- [CombineWithReferenceFeature](https://feature-engine.readthedocs.io/en/latest/creation/CombineWithReferenceFeature.html) to capture elapsed time- [DropFeatures](https://feature-engine.readthedocs.io/en/latest/selection/DropFeatures.html) to drop the unwanted featuresWe will do the first one manually, so that we take the opportunity to create a class ourselves for the course. For the second operation, we will use the DropFeatures class.
###Code
def elapsed_years(df, var):
# capture difference between the year variable
# and the year in which the house was sold
df[var] = df['YrSold'] - df[var]
return df
for var in ['YearBuilt', 'YearRemodAdd', 'GarageYrBlt']:
X_train = elapsed_years(X_train, var)
X_test = elapsed_years(X_test, var)
# now we drop YrSold
drop_features = DropFeatures(features_to_drop=['YrSold'])
X_train = drop_features.fit_transform(X_train)
X_test = drop_features.transform(X_test)
###Output
_____no_output_____
###Markdown
Numerical variable transformation Logarithmic transformationIn the previous notebook, we observed that the numerical variables are not normally distributed.We will apply the logarithm to the positive numerical variables in order to obtain a more Gaussian-like distribution.
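The logarithm is only defined for strictly positive values, so it is worth confirming that the selected variables contain no zeros or negatives before transforming; a quick check:

```python
# sanity check: the log transform requires strictly positive values
vars_to_log = ["LotFrontage", "1stFlrSF", "GrLivArea"]
(X_train[vars_to_log] <= 0).sum()  # we expect 0 for every variable
```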
###Code
log_transformer = LogTransformer(
variables=["LotFrontage", "1stFlrSF", "GrLivArea"])
X_train = log_transformer.fit_transform(X_train)
X_test = log_transformer.transform(X_test)
# check that test set does not contain null values in the engineered variables
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_test[var].isnull().sum() > 0]
# same for train set
[var for var in ["LotFrontage", "1stFlrSF", "GrLivArea"] if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Yeo-Johnson transformationWe will apply the Yeo-Johnson transformation to LotArea.
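Unlike the logarithm, the Yeo-Johnson transformation also handles zero and negative values, and it learns a lambda parameter from the data. For intuition, the same transformation is exposed by scipy (a rough equivalent, illustrative only):

```python
from scipy import stats

# scipy estimates the lambda that makes the distribution most Gaussian-like
transformed, fitted_lambda = stats.yeojohnson(X_train['LotArea'])
fitted_lambda  # comparable to the lambda stored by the YeoJohnsonTransformer below
```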
###Code
yeo_transformer = YeoJohnsonTransformer(
variables=['LotArea'])
X_train = yeo_transformer.fit_transform(X_train)
X_test = yeo_transformer.transform(X_test)
# the learned parameter
yeo_transformer.lambda_dict_
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_train.columns if X_test[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Binarize skewed variablesA few variables were very skewed; we will transform those into binary variables.We can perform this transformation with open source: we use the [Binarizer](https://scikit-learn.org/stable/modules/generated/sklearn.preprocessing.Binarizer.html) from Scikit-learn, in combination with the [SklearnTransformerWrapper](https://feature-engine.readthedocs.io/en/latest/wrappers/Wrapper.html) from Feature-engine, to apply the transformation only to a subset of features.Later in the course, we will take the opportunity to code this step manually as part of an in-house package.
###Code
skewed = [
'BsmtFinSF2', 'LowQualFinSF', 'EnclosedPorch',
'3SsnPorch', 'ScreenPorch', 'MiscVal'
]
binarizer = SklearnTransformerWrapper(
transformer=Binarizer(threshold=0), variables=skewed
)
X_train = binarizer.fit_transform(X_train)
X_test = binarizer.transform(X_test)
X_train[skewed].head()
###Output
_____no_output_____
###Markdown
Categorical variables Apply mappingsThese are variables whose values have an assigned order, related to quality. For more information, check the variable descriptions on the Kaggle website.
###Code
# re-map strings to numbers, which determine quality
qual_mappings = {'Po': 1, 'Fa': 2, 'TA': 3, 'Gd': 4, 'Ex': 5, 'Missing': 0, 'NA': 0}
qual_vars = ['ExterQual', 'ExterCond', 'BsmtQual', 'BsmtCond',
'HeatingQC', 'KitchenQual', 'FireplaceQu',
'GarageQual', 'GarageCond',
]
for var in qual_vars:
X_train[var] = X_train[var].map(qual_mappings)
X_test[var] = X_test[var].map(qual_mappings)
exposure_mappings = {'No': 1, 'Mn': 2, 'Av': 3, 'Gd': 4}
var = 'BsmtExposure'
X_train[var] = X_train[var].map(exposure_mappings)
X_test[var] = X_test[var].map(exposure_mappings)
finish_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'LwQ': 2, 'Rec': 3, 'BLQ': 4, 'ALQ': 5, 'GLQ': 6}
finish_vars = ['BsmtFinType1', 'BsmtFinType2']
for var in finish_vars:
X_train[var] = X_train[var].map(finish_mappings)
X_test[var] = X_test[var].map(finish_mappings)
garage_mappings = {'Missing': 0, 'NA': 0, 'Unf': 1, 'RFn': 2, 'Fin': 3}
var = 'GarageFinish'
X_train[var] = X_train[var].map(garage_mappings)
X_test[var] = X_test[var].map(garage_mappings)
fence_mappings = {'Missing': 0, 'NA': 0, 'MnWw': 1, 'GdWo': 2, 'MnPrv': 3, 'GdPrv': 4}
var = 'Fence'
X_train[var] = X_train[var].map(fence_mappings)
X_test[var] = X_test[var].map(fence_mappings)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
###Output
_____no_output_____
###Markdown
Removing Rare LabelsFor the remaining categorical variables, we will group those categories that are present in less than 1% of the observations. That is, all values of categorical variables that are shared by less than 1% of houses will be replaced by the string "Rare".To learn more about how to handle categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
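In plain pandas, the idea is simply to compute the category frequencies on the train set and replace anything below the threshold; a sketch for a single variable (illustrative only; the RareLabelEncoder below does this for all selected variables and stores the frequent labels for reuse):

```python
# manual sketch of rare-label grouping for one example variable
var = 'Neighborhood'                                  # example variable
freqs = X_train[var].value_counts(normalize=True)     # category frequencies in the train set
frequent_labels = freqs[freqs >= 0.01].index          # labels present in at least 1% of houses
X_train[var] = np.where(X_train[var].isin(frequent_labels), X_train[var], 'Rare')
X_test[var] = np.where(X_test[var].isin(frequent_labels), X_test[var], 'Rare')
```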
###Code
# capture all quality variables
qual_vars = qual_vars + finish_vars + ['BsmtExposure','GarageFinish','Fence']
# capture the remaining categorical variables
# (those that we did not re-map)
cat_others = [
var for var in cat_vars if var not in qual_vars
]
len(cat_others)
cat_others
rare_encoder = RareLabelEncoder(tol=0.01, n_categories=1, variables=cat_others)
# find common labels
rare_encoder.fit(X_train)
# the common labels are stored, we can save the class
# and then use it later :)
rare_encoder.encoder_dict_
X_train = rare_encoder.transform(X_train)
X_test = rare_encoder.transform(X_test)
###Output
_____no_output_____
###Markdown
Encoding of categorical variablesNext, we need to transform the strings of the categorical variables into numbers. We will do it so that we capture the monotonic relationship between the label and the target.To learn more about how to encode categorical variables visit our course [Feature Engineering for Machine Learning](https://www.udemy.com/course/feature-engineering-for-machine-learning/?referralCode=A855148E05283015CF06) in Udemy.
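The 'ordered' encoding ranks the categories by their mean target value in the train set and assigns the integers in that order, which is what produces the monotonic relationship; a manual sketch for one variable (illustrative only):

```python
# manual sketch of ordinal 'ordered' encoding for one example variable
var = 'MSZoning'                                                   # example variable
ordered_labels = y_train.groupby(X_train[var]).mean().sort_values().index
mapping = {label: i for i, label in enumerate(ordered_labels)}     # lowest mean target -> 0
X_train[var] = X_train[var].map(mapping)
X_test[var] = X_test[var].map(mapping)
```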
###Code
# set up the encoder
cat_encoder = OrdinalEncoder(encoding_method='ordered', variables=cat_others)
# create the mappings
cat_encoder.fit(X_train, y_train)
# mappings are stored and class can be saved
cat_encoder.encoder_dict_
X_train = cat_encoder.transform(X_train)
X_test = cat_encoder.transform(X_test)
# check absence of na in the train set
[var for var in X_train.columns if X_train[var].isnull().sum() > 0]
# check absence of na in the test set
[var for var in X_test.columns if X_test[var].isnull().sum() > 0]
# let me show you what I mean by monotonic relationship
# between labels and target
def analyse_vars(train, y_train, var):
    # function plots the median house sale price per encoded category
    tmp = pd.concat([train, np.log(y_train)], axis=1)
    tmp.groupby(var)['SalePrice'].median().plot.bar()
    plt.title(var)
    plt.ylim(2.2, 2.6)
    plt.ylabel('SalePrice')
    plt.show()
for var in cat_others:
analyse_vars(X_train, y_train, var)
###Output
_____no_output_____
###Markdown
The monotonic relationship is particularly clear for the variables MSZoning and Neighborhood. Note how, the higher the integer that now represents the category, the higher the median house sale price (remember that the plotted target is log-transformed, which is why the differences seem so small). Feature ScalingFor use in linear models, features need to be scaled. We will scale the features to the range given by the minimum and maximum values:
###Code
# create scaler
scaler = MinMaxScaler()
# fit the scaler to the train set
scaler.fit(X_train)
# transform the train and test set
# sklearn returns numpy arrays, so we wrap the
# array with a pandas dataframe
X_train = pd.DataFrame(
scaler.transform(X_train),
columns=X_train.columns
)
X_test = pd.DataFrame(
scaler.transform(X_test),
columns=X_train.columns
)
X_train.head()
###Output
_____no_output_____ |
analyses/Evo results with Hiidenportti model.ipynb | ###Markdown
No NMS
###Code
from drone_detector.metrics import *
gt_dis = ground_truth.dissolve(by='label')
res_dis = results.dissolve(by='class_id')
poly_IoU(gt_dis, res_dis)
deadwood_categories = [{'supercategory': 'deadwood', 'id':1, 'name':'Standing'},
{'supercategory': 'deadwood', 'id':2, 'name':'Fallen'}]
raw_coco_eval = GisCOCOeval('../data/sudenpesankangas/results/raw', '../data/sudenpesankangas/results/raw',
None, None, deadwood_categories)
raw_coco_eval.prepare_data(gt_label_col='label')
raw_coco_eval.prepare_eval()
raw_coco_eval.coco_eval.params.maxDets = (100, 10000)
raw_coco_eval.evaluate()
###Output
Evaluating for category Standing
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=14.48s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.237
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.472
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.195
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.044
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.258
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.476
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.065
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.000
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.066
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.187
Evaluating for category Fallen
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=92.95s).
Accumulating evaluation results...
DONE (t=0.02s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.013
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.054
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.003
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.014
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.022
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.009
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.013
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.004
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = -1.000
Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=108.99s).
Accumulating evaluation results...
DONE (t=0.03s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.125
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.263
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.099
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.029
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.140
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.476
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=100 ] = 0.037
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=100 ] = 0.006
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=100 ] = 0.035
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=100 ] = 0.187
###Markdown
NMS
###Code
from drone_detector.postproc import *
from pathlib import Path
import os
raw_dir = Path('../data/sudenpesankangas/results/predicted_vectors/')
nms_dir = Path('../data/sudenpesankangas/results/nms/predicted_vectors/')
raw_files = os.listdir(raw_dir)
for r in raw_files:
gdf_temp = gpd.read_file(raw_dir/r)
gdf_nms = do_nms(gdf_temp)
gdf_nms.to_file(nms_dir/r, driver='GeoJSON')
gdf_nms = None
gdf_temp = None
res_nms = do_nms(results)
res_nms_dis = res_nms.dissolve(by='class_id')
poly_IoU(gt_dis, res_nms_dis)
deadwood_categories = [{'supercategory': 'deadwood', 'id':1, 'name':'Standing'},
{'supercategory': 'deadwood', 'id':2, 'name':'Fallen'}]
nms_coco_eval = GisCOCOeval('../data/sudenpesankangas/results/nms', '../data/sudenpesankangas/results/nms',
None, None, deadwood_categories)
nms_coco_eval.prepare_data(gt_label_col='label') # note to self: dont name interesting column as "class"
nms_coco_eval.prepare_eval()
nms_coco_eval.coco_eval.params.maxDets = (1000, 10000)
nms_coco_eval.evaluate()
###Output
Evaluating for category Standing
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=8.39s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.360
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.736
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.290
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.088
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.385
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.518
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.443
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.279
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.453
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.574
Evaluating for category Fallen
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=39.29s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.015
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.067
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.001
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.016
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.022
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.053
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.072
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.031
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000
Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=47.37s).
Accumulating evaluation results...
DONE (t=0.03s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.187
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.402
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.146
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.052
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.204
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.518
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.248
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.176
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.242
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.574
###Markdown
Area-based NMS
###Code
raw_dir = Path('../data/sudenpesankangas/results/predicted_vectors/')
nms_area_dir = Path('../data/sudenpesankangas/results/nms_area/predicted_vectors/')
raw_files = os.listdir(raw_dir)
for r in raw_files:
gdf_temp = gpd.read_file(raw_dir/r)
gdf_nms_area = do_nms(gdf_temp, crit='area')
gdf_nms_area.to_file(nms_area_dir/r, driver='GeoJSON')
gdf_nms_area = None
gdf_temp = None
res_nms_area = do_nms(results, crit='area')
res_nms_area_dis = res_nms_area.dissolve(by='class_id')
poly_IoU(gt_dis, res_nms_area_dis)
deadwood_categories = [{'supercategory': 'deadwood', 'id':1, 'name':'Standing'},
{'supercategory': 'deadwood', 'id':2, 'name':'Fallen'}]
nms_area_coco_eval = GisCOCOeval('../data/sudenpesankangas/results/nms_area', '../data/sudenpesankangas/results/nms_area',
None, None, deadwood_categories)
nms_area_coco_eval.prepare_data(gt_label_col='label') # note to self: dont name interesting column as "class"
nms_area_coco_eval.prepare_eval()
nms_area_coco_eval.coco_eval.params.maxDets = (1000, 10000)
nms_area_coco_eval.evaluate()
###Output
Evaluating for category Standing
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=8.00s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.370
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.745
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.297
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.154
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.382
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.518
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.439
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.281
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.448
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.574
Evaluating for category Fallen
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=32.56s).
Accumulating evaluation results...
DONE (t=0.01s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.019
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.085
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.002
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.021
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.024
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = -1.000
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.057
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.074
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.037
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = -1.000
Evaluating for full data...
Running per image evaluation...
Evaluate annotation type *segm*
DONE (t=40.40s).
Accumulating evaluation results...
DONE (t=0.02s).
Average Precision (AP) @[ IoU=0.50:0.95 | area= all | maxDets=10000 ] = 0.195
Average Precision (AP) @[ IoU=0.50 | area= all | maxDets=10000 ] = 0.415
Average Precision (AP) @[ IoU=0.75 | area= all | maxDets=10000 ] = 0.149
Average Precision (AP) @[ IoU=0.50:0.95 | area= small | maxDets=10000 ] = 0.087
Average Precision (AP) @[ IoU=0.50:0.95 | area=medium | maxDets=10000 ] = 0.203
Average Precision (AP) @[ IoU=0.50:0.95 | area= large | maxDets=10000 ] = 0.518
Average Recall (AR) @[ IoU=0.50:0.95 | area= all | maxDets=1000 ] = 0.248
Average Recall (AR) @[ IoU=0.50:0.95 | area= small | maxDets=1000 ] = 0.178
Average Recall (AR) @[ IoU=0.50:0.95 | area=medium | maxDets=1000 ] = 0.242
Average Recall (AR) @[ IoU=0.50:0.95 | area= large | maxDets=1000 ] = 0.574
|
example/time_series-Copy1.ipynb | ###Markdown
Copyright 2019 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Time series forecasting This tutorial is an introduction to time series forecasting using TensorFlow. It builds a few different styles of models including Convolutional and Recurrent Neural Networks (CNNs and RNNs).This is covered in two main parts, with subsections: * Forecast for a single timestep: * A single feature. * All features.* Forecast multiple steps: * Single-shot: Make the predictions all at once. * Autoregressive: Make one prediction at a time and feed the output back to the model. Setup
###Code
import os
import datetime
import IPython
import IPython.display
import matplotlib as mpl
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
import tensorflow as tf
mpl.rcParams['figure.figsize'] = (8, 6)
mpl.rcParams['axes.grid'] = False
from tensorflow.python.data.ops import dataset_ops
from tensorflow.python.ops import array_ops
from tensorflow.python.ops import math_ops
from tensorflow.python.util.tf_export import keras_export
def timeseries_dataset_from_array(
data,
targets,
sequence_length,
sequence_stride=1,
sampling_rate=1,
batch_size=128,
shuffle=False,
seed=None,
start_index=None,
end_index=None):
# Validate the shape of data and targets
if targets is not None and len(targets) != len(data):
raise ValueError('Expected data and targets to have the same number of '
'time steps (axis 0) but got '
'shape(data) = %s; shape(targets) = %s.' %
(data.shape, targets.shape))
if start_index and (start_index < 0 or start_index >= len(data)):
raise ValueError('start_index must be higher than 0 and lower than the '
'length of the data. Got: start_index=%s '
'for data of length %s.' % (start_index, len(data)))
if end_index:
if start_index and end_index <= start_index:
raise ValueError('end_index must be higher than start_index. Got: '
'start_index=%s, end_index=%s.' %
(start_index, end_index))
if end_index >= len(data):
raise ValueError('end_index must be lower than the length of the data. '
'Got: end_index=%s' % (end_index,))
if end_index <= 0:
raise ValueError('end_index must be higher than 0. '
'Got: end_index=%s' % (end_index,))
# Validate strides
if sampling_rate <= 0 or sampling_rate >= len(data):
raise ValueError(
'sampling_rate must be higher than 0 and lower than '
'the length of the data. Got: '
'sampling_rate=%s for data of length %s.' % (sampling_rate, len(data)))
if sequence_stride <= 0 or sequence_stride >= len(data):
raise ValueError(
'sequence_stride must be higher than 0 and lower than '
'the length of the data. Got: sequence_stride=%s '
'for data of length %s.' % (sequence_stride, len(data)))
if start_index is None:
start_index = 0
if end_index is None:
end_index = len(data)
# Determine the lowest dtype to store start positions (to lower memory usage).
num_seqs = end_index - start_index - (sequence_length * sampling_rate) + 1
if num_seqs < 2147483647:
index_dtype = 'int32'
else:
index_dtype = 'int64'
# Generate start positions
start_positions = np.arange(0, num_seqs, sequence_stride, dtype=index_dtype)
if shuffle:
if seed is None:
seed = np.random.randint(1e6)
rng = np.random.RandomState(seed)
rng.shuffle(start_positions)
sequence_length = math_ops.cast(sequence_length, dtype=index_dtype)
sampling_rate = math_ops.cast(sampling_rate, dtype=index_dtype)
positions_ds = dataset_ops.Dataset.from_tensors(start_positions).repeat()
# For each initial window position, generates indices of the window elements
indices = dataset_ops.Dataset.zip(
(dataset_ops.Dataset.range(len(start_positions)), positions_ds)).map(
lambda i, positions: math_ops.range( # pylint: disable=g-long-lambda
positions[i],
positions[i] + sequence_length * sampling_rate,
sampling_rate),
num_parallel_calls=dataset_ops.AUTOTUNE)
dataset = sequences_from_indices(data, indices, start_index, end_index)
if targets is not None:
indices = dataset_ops.Dataset.zip(
(dataset_ops.Dataset.range(len(start_positions)), positions_ds)).map(
lambda i, positions: positions[i],
num_parallel_calls=dataset_ops.AUTOTUNE)
target_ds = sequences_from_indices(
targets, indices, start_index, end_index)
dataset = dataset_ops.Dataset.zip((dataset, target_ds))
if shuffle:
# Shuffle locally at each iteration
dataset = dataset.shuffle(buffer_size=batch_size * 8, seed=seed)
dataset = dataset.batch(batch_size)
return dataset
def sequences_from_indices(array, indices_ds, start_index, end_index):
dataset = dataset_ops.Dataset.from_tensors(array[start_index : end_index])
dataset = dataset_ops.Dataset.zip((dataset.repeat(), indices_ds)).map(
lambda steps, inds: array_ops.gather(steps, inds), # pylint: disable=unnecessary-lambda
num_parallel_calls=dataset_ops.AUTOTUNE)
return dataset
###Output
_____no_output_____
###Markdown
The weather datasetThis tutorial uses a weather time series dataset recorded by the Max Planck Institute for Biogeochemistry.This dataset contains 14 different features such as air temperature, atmospheric pressure, and humidity. These were collected every 10 minutes, beginning in 2003. For efficiency, you will use only the data collected between 2009 and 2016. This section of the dataset was prepared by François Chollet for his book [Deep Learning with Python](https://www.manning.com/books/deep-learning-with-python).
###Code
zip_path = tf.keras.utils.get_file(
origin='https://storage.googleapis.com/tensorflow/tf-keras-datasets/jena_climate_2009_2016.csv.zip',
fname='jena_climate_2009_2016.csv.zip',
extract=True)
csv_path, _ = os.path.splitext(zip_path)
###Output
_____no_output_____
###Markdown
This tutorial will just deal with **hourly predictions**, so start by sub-sampling the data from 10 minute intervals to 1h:
###Code
df = pd.read_csv(csv_path)
# slice [start:stop:step], starting from index 5 take every 6th record.
df = df[5::6]
date_time = pd.to_datetime(df.pop('Date Time'), format='%d.%m.%Y %H:%M:%S')
###Output
_____no_output_____
###Markdown
Let's take a glance at the data. Here are the first few rows:
###Code
df.head()
###Output
_____no_output_____
###Markdown
Here is the evolution of a few features over time.
###Code
plot_cols = ['T (degC)', 'p (mbar)', 'rho (g/m**3)']
plot_features = df[plot_cols]
plot_features.index = date_time
_ = plot_features.plot(subplots=True)
plot_features = df[plot_cols][:480]
plot_features.index = date_time[:480]
_ = plot_features.plot(subplots=True)
###Output
_____no_output_____
###Markdown
Inspect and cleanup Next look at the statistics of the dataset:
###Code
df.describe().transpose()
###Output
_____no_output_____
###Markdown
Wind velocity One thing that should stand out is the `min` value of the wind velocity, `wv (m/s)` and `max. wv (m/s)` columns. This `-9999` is likely erroneous. There's a separate wind direction column, so the velocity should be `>=0`. Replace it with zeros:
###Code
wv = df['wv (m/s)']
bad_wv = wv == -9999.0
wv[bad_wv] = 0.0
max_wv = df['max. wv (m/s)']
bad_max_wv = max_wv == -9999.0
max_wv[bad_max_wv] = 0.0
# The above inplace edits are reflected in the DataFrame
df['wv (m/s)'].min()
###Output
_____no_output_____
###Markdown
Feature engineeringBefore diving in to build a model it's important to understand your data, and to be sure that you're passing the model appropriately formatted data. WindThe last column of the data, `wd (deg)`, gives the wind direction in units of degrees. Angles do not make good model inputs: 360° and 0° should be close to each other and wrap around smoothly. Direction shouldn't matter if the wind is not blowing. Right now the distribution of wind data looks like this:
###Code
plt.hist2d(df['wd (deg)'], df['wv (m/s)'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind Direction [deg]')
plt.ylabel('Wind Velocity [m/s]')
###Output
_____no_output_____
###Markdown
But this will be easier for the model to interpret if you convert the wind direction and velocity columns to a wind **vector**:
###Code
wv = df.pop('wv (m/s)')
max_wv = df.pop('max. wv (m/s)')
# Convert to radians.
wd_rad = df.pop('wd (deg)')*np.pi / 180
# Calculate the wind x and y components.
df['Wx'] = wv*np.cos(wd_rad)
df['Wy'] = wv*np.sin(wd_rad)
# Calculate the max wind x and y components.
df['max Wx'] = max_wv*np.cos(wd_rad)
df['max Wy'] = max_wv*np.sin(wd_rad)
###Output
_____no_output_____
###Markdown
The distribution of wind vectors is much simpler for the model to correctly interpret.
###Code
plt.hist2d(df['Wx'], df['Wy'], bins=(50, 50), vmax=400)
plt.colorbar()
plt.xlabel('Wind X [m/s]')
plt.ylabel('Wind Y [m/s]')
ax = plt.gca()
ax.axis('tight')
###Output
_____no_output_____
###Markdown
Time Similarly the `Date Time` column is very useful, but not in this string form. Start by converting it to seconds:
###Code
timestamp_s = date_time.map(datetime.datetime.timestamp)
###Output
_____no_output_____
###Markdown
Similar to the wind direction, the time in seconds is not a useful model input. Being weather data, it has clear daily and yearly periodicity. There are many ways you could deal with periodicity.A simple approach to convert it to a usable signal is to use `sin` and `cos` to convert the time to clear "Time of day" and "Time of year" signals:
###Code
day = 24*60*60
year = (365.2425)*day
df['Day sin'] = np.sin(timestamp_s * (2 * np.pi / day))
df['Day cos'] = np.cos(timestamp_s * (2 * np.pi / day))
df['Year sin'] = np.sin(timestamp_s * (2 * np.pi / year))
df['Year cos'] = np.cos(timestamp_s * (2 * np.pi / year))
plt.plot(np.array(df['Day sin'])[:25])
plt.plot(np.array(df['Day cos'])[:25])
plt.xlabel('Time [h]')
plt.title('Time of day signal')
###Output
_____no_output_____
###Markdown
This gives the model access to the most important frequency features. In this case we knew ahead of time which frequencies were important. If you didn't know, you could determine which frequencies are important using an `fft`. To check our assumptions, here is the `tf.signal.rfft` of the temperature over time. Note the obvious peaks at frequencies near `1/year` and `1/day`:
###Code
fft = tf.signal.rfft(df['T (degC)'])
f_per_dataset = np.arange(0, len(fft))
n_samples_h = len(df['T (degC)'])
hours_per_year = 24*365.2524
years_per_dataset = n_samples_h/(hours_per_year)
f_per_year = f_per_dataset/years_per_dataset
plt.step(f_per_year, np.abs(fft))
plt.xscale('log')
plt.ylim(0, 400000)
plt.xlim([0.1, max(plt.xlim())])
plt.xticks([1, 365.2524], labels=['1/Year', '1/day'])
_ = plt.xlabel('Frequency (log scale)')
###Output
_____no_output_____
###Markdown
Split the data We'll use a `(70%, 20%, 10%)` split for the training, validation, and test sets. Note the data is **not** being randomly shuffled before splitting. This is for two reasons.1. It ensures that chopping the data into windows of consecutive samples is still possible.2. It ensures that the validation/test results are more realistic, being evaluated on data collected after the model was trained.
###Code
column_indices = {name: i for i, name in enumerate(df.columns)}
n = len(df)
train_df = df[0:int(n*0.7)]
val_df = df[int(n*0.7):int(n*0.9)]
test_df = df[int(n*0.9):]
num_features = df.shape[1]
###Output
_____no_output_____
###Markdown
Normalize the dataIt is important to scale features before training a neural network. Normalization is a common way of doing this scaling. Subtract the mean and divide by the standard deviation of each feature. The mean and standard deviation should only be computed using the training data so that the models have no access to the values in the validation and test sets.It's also arguable that the model shouldn't have access to future values in the training set when training, and that this normalization should be done using moving averages. That's not the focus of this tutorial, and the validation and test sets ensure that we get (somewhat) honest metrics. So in the interest of simplicity this tutorial uses a simple average.
###Code
train_mean = train_df.mean()
train_std = train_df.std()
train_df = (train_df - train_mean) / train_std
val_df = (val_df - train_mean) / train_std
test_df = (test_df - train_mean) / train_std
###Output
_____no_output_____
###Markdown
Now peek at the distribution of the features. Some features do have long tails, but there are no obvious errors like the `-9999` wind velocity value.
###Code
df_std = (df - train_mean) / train_std
df_std = df_std.melt(var_name='Column', value_name='Normalized')
plt.figure(figsize=(12, 6))
ax = sns.violinplot(x='Column', y='Normalized', data=df_std)
_ = ax.set_xticklabels(df.keys(), rotation=90)
###Output
_____no_output_____
###Markdown
Data windowingThe models in this tutorial will make a set of predictions based on a window of consecutive samples from the data. The main features of the input windows are:* The width (number of time steps) of the input and label windows* The time offset between them.* Which features are used as inputs, labels, or both. This tutorial builds a variety of models (including Linear, DNN, CNN and RNN models), and uses them for both:* *Single-output*, and *multi-output* predictions.* *Single-time-step* and *multi-time-step* predictions.This section focuses on implementing the data windowing so that it can be reused for all of those models. Depending on the task and type of model you may want to generate a variety of data windows. Here are some examples:1. For example, to make a single prediction 24h into the future, given 24h of history you might define a window like this: 2. A model that makes a prediction 1h into the future, given 6h of history would need a window like this: The rest of this section defines a `WindowGenerator` class. This class can:1. Handle the indexes and offsets as shown in the diagrams above.2. Split windows of features into `(features, labels)` pairs.3. Plot the content of the resulting windows.4. Efficiently generate batches of these windows from the training, evaluation, and test data, using `tf.data.Dataset`s. 1. Indexes and offsetsStart by creating the `WindowGenerator` class. The `__init__` method includes all the necessary logic for the input and label indices.It also takes the train, eval, and test dataframes as input. These will be converted to `tf.data.Dataset`s of windows later.
###Code
class WindowGenerator():
def __init__(self, input_width, label_width, shift,
train_df=train_df, val_df=val_df, test_df=test_df,
label_columns=None):
# Store the raw data.
self.train_df = train_df
self.val_df = val_df
self.test_df = test_df
# Work out the label column indices.
self.label_columns = label_columns
if label_columns is not None:
self.label_columns_indices = {name: i for i, name in
enumerate(label_columns)}
self.column_indices = {name: i for i, name in
enumerate(train_df.columns)}
# Work out the window parameters.
self.input_width = input_width
self.label_width = label_width
self.shift = shift
self.total_window_size = input_width + shift
self.input_slice = slice(0, input_width)
self.input_indices = np.arange(self.total_window_size)[self.input_slice]
self.label_start = self.total_window_size - self.label_width
self.labels_slice = slice(self.label_start, None)
self.label_indices = np.arange(self.total_window_size)[self.labels_slice]
def __repr__(self):
return '\n'.join([
f'Total window size: {self.total_window_size}',
f'Input indices: {self.input_indices}',
f'Label indices: {self.label_indices}',
f'Label column name(s): {self.label_columns}'])
###Output
_____no_output_____
###Markdown
Here is code to create the 2 windows shown in the diagrams at the start of this section:
###Code
w1 = WindowGenerator(input_width=24, label_width=1, shift=24,
label_columns=['T (degC)'])
w1
w2 = WindowGenerator(input_width=6, label_width=1, shift=1,
label_columns=['T (degC)'])
w2
###Output
_____no_output_____
###Markdown
2. SplitGiven a list of consecutive inputs, the `split_window` method will convert them to a window of inputs and a window of labels.The example `w2`, above, will be split like this:This diagram doesn't show the `features` axis of the data, but this `split_window` function also handles the `label_columns`, so it can be used for both the single-output and multi-output examples.
###Code
def split_window(self, features):
inputs = features[:, self.input_slice, :]
labels = features[:, self.labels_slice, :]
if self.label_columns is not None:
labels = tf.stack(
[labels[:, :, self.column_indices[name]] for name in self.label_columns],
axis=-1)
# Slicing doesn't preserve static shape information, so set the shapes
# manually. This way the `tf.data.Datasets` are easier to inspect.
inputs.set_shape([None, self.input_width, None])
labels.set_shape([None, self.label_width, None])
return inputs, labels
WindowGenerator.split_window = split_window
###Output
_____no_output_____
###Markdown
Try it out:
###Code
# Stack three slices, the length of the total window:
example_window = tf.stack([np.array(train_df[:w2.total_window_size]),
np.array(train_df[100:100+w2.total_window_size]),
np.array(train_df[200:200+w2.total_window_size])])
example_inputs, example_labels = w2.split_window(example_window)
print('All shapes are: (batch, time, features)')
print(f'Window shape: {example_window.shape}')
print(f'Inputs shape: {example_inputs.shape}')
print(f'labels shape: {example_labels.shape}')
###Output
All shapes are: (batch, time, features)
Window shape: (3, 7, 19)
Inputs shape: (3, 6, 19)
labels shape: (3, 1, 1)
###Markdown
Typically data in TensorFlow is packed into arrays where the outermost index is across examples (the "batch" dimension). The middle indices are the "time" or "space" (width, height) dimension(s). The innermost indices are the features.The code above took a batch of three 7-timestep windows, with 19 features at each time step. It split them into a batch of 6-timestep, 19-feature inputs and a 1-timestep, 1-feature label. The label only has one feature because the `WindowGenerator` was initialized with `label_columns=['T (degC)']`. Initially this tutorial will build models that predict single output labels. 3. PlotHere is a plot method that allows a simple visualization of the split window:
###Code
w2.example = example_inputs, example_labels
def plot(self, model=None, plot_col='T (degC)', max_subplots=3):
inputs, labels = self.example
plt.figure(figsize=(12, 8))
plot_col_index = self.column_indices[plot_col]
max_n = min(max_subplots, len(inputs))
for n in range(max_n):
plt.subplot(3, 1, n+1)
plt.ylabel(f'{plot_col} [normed]')
plt.plot(self.input_indices, inputs[n, :, plot_col_index],
label='Inputs', marker='.', zorder=-10)
if self.label_columns:
label_col_index = self.label_columns_indices.get(plot_col, None)
else:
label_col_index = plot_col_index
if label_col_index is None:
continue
plt.scatter(self.label_indices, labels[n, :, label_col_index],
edgecolors='k', label='Labels', c='#2ca02c', s=64)
if model is not None:
predictions = model(inputs)
plt.scatter(self.label_indices, predictions[n, :, label_col_index],
marker='X', edgecolors='k', label='Predictions',
c='#ff7f0e', s=64)
if n == 0:
plt.legend()
plt.xlabel('Time [h]')
WindowGenerator.plot = plot
###Output
_____no_output_____
###Markdown
This plot aligns inputs, labels, and (later) predictions based on the time that the item refers to:
###Code
w2.plot()
###Output
_____no_output_____
###Markdown
You can plot the other columns, but the example window `w2` configuration only has labels for the `T (degC)` column.
###Code
w2.plot(plot_col='p (mbar)')
###Output
_____no_output_____
###Markdown
4. Create `tf.data.Dataset`s Finally this `make_dataset` method will take a time series `DataFrame` and convert it to a `tf.data.Dataset` of `(input_window, label_window)` pairs using the `preprocessing.timeseries_dataset_from_array` function.
###Code
def make_dataset(self, data):
data = np.array(data, dtype=np.float32)
ds = timeseries_dataset_from_array(
data=data,
targets=None,
sequence_length=self.total_window_size,
sequence_stride=1,
shuffle=True,
batch_size=32,)
ds = ds.map(self.split_window)
return ds
WindowGenerator.make_dataset = make_dataset
###Output
_____no_output_____
###Markdown
The `WindowGenerator` object holds training, validation and test data. Add properties for accessing them as `tf.data.Datasets` using the above `make_dataset` method. Also add a standard example batch for easy access and plotting:
###Code
@property
def train(self):
return self.make_dataset(self.train_df)
@property
def val(self):
return self.make_dataset(self.val_df)
@property
def test(self):
return self.make_dataset(self.test_df)
@property
def example(self):
"""Get and cache an example batch of `inputs, labels` for plotting."""
result = getattr(self, '_example', None)
if result is None:
# No example batch was found, so get one from the `.train` dataset
result = next(iter(self.train))
# And cache it for next time
self._example = result
return result
WindowGenerator.train = train
WindowGenerator.val = val
WindowGenerator.test = test
WindowGenerator.example = example
###Output
_____no_output_____
###Markdown
Now the `WindowGenerator` object gives you access to the `tf.data.Dataset` objects, so you can easily iterate over the data.The `Dataset.element_spec` property tells you the structure, `dtypes` and shapes of the dataset elements.
###Code
# Each element is an (inputs, label) pair
w2.train.element_spec
###Output
_____no_output_____
###Markdown
Iterating over a `Dataset` yields concrete batches:
###Code
for example_inputs, example_labels in w2.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
###Output
Inputs shape (batch, time, features): (32, 6, 19)
Labels shape (batch, time, features): (32, 1, 1)
###Markdown
Single step modelsThe simplest model you can build on this sort of data is one that predicts a single feature's value, 1 timestep (1h) in the future based only on the current conditions.So start by building models to predict the `T (degC)` value 1h into the future.Configure a `WindowGenerator` object to produce these single-step `(input, label)` pairs:
###Code
single_step_window = WindowGenerator(
input_width=1, label_width=1, shift=1,
label_columns=['T (degC)'])
single_step_window
###Output
_____no_output_____
###Markdown
The `window` object creates `tf.data.Datasets` from the training, validation, and test sets, allowing you to easily iterate over batches of data.
###Code
for example_inputs, example_labels in single_step_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
###Output
Inputs shape (batch, time, features): (32, 1, 19)
Labels shape (batch, time, features): (32, 1, 1)
###Markdown
BaselineBefore building a trainable model it would be good to have a performance baseline as a point for comparison with the later more complicated models.This first task is to predict temperature 1h in the future given the current value of all features. The current values include the current temperature. So start with a model that just returns the current temperature as the prediction, predicting "No change". This is a reasonable baseline since temperature changes slowly. Of course, this baseline will work less well if you make a prediction further in the future.
###Code
class Baseline(tf.keras.Model):
def __init__(self, label_index=None):
super().__init__()
self.label_index = label_index
def call(self, inputs):
if self.label_index is None:
return inputs
result = inputs[:, :, self.label_index]
return result[:, :, tf.newaxis]
###Output
_____no_output_____
###Markdown
Instantiate and evaluate this model:
###Code
baseline = Baseline(label_index=column_indices['T (degC)'])
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(single_step_window.val)
performance['Baseline'] = baseline.evaluate(single_step_window.test, verbose=0)
###Output
439/439 [==============================] - 1s 3ms/step - loss: 0.0128 - mean_absolute_error: 0.0785
###Markdown
That printed some performance metrics, but those don't give you a feeling for how well the model is doing.The `WindowGenerator` has a plot method, but the plots won't be very interesting with only a single sample. So, create a wider `WindowGenerator` that generates windows of 24h of consecutive inputs and labels at a time. The `wide_window` doesn't change the way the model operates. The model still makes predictions 1h into the future based on a single input time step. Here the `time` axis acts like the `batch` axis: Each prediction is made independently with no interaction between time steps.
###Code
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1,
label_columns=['T (degC)'])
wide_window
###Output
_____no_output_____
###Markdown
This expanded window can be passed directly to the same `baseline` model without any code changes. This is possible because the inputs and labels have the same number of timesteps, and the baseline just forwards the input to the output: 
###Code
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', baseline(single_step_window.example[0]).shape)
###Output
Input shape: (32, 1, 19)
Output shape: (32, 1, 1)
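###Markdown
For completeness, the same check can also be run on the expanded window itself (a small added example; the expected shapes follow from the `wide_window` configuration above):
###Code
# Expected: inputs (32, 24, 19) -> baseline output (32, 24, 1)
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', baseline(wide_window.example[0]).shape)
###Output
_____no_output_____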
###Markdown
Plotting the baseline model's predictions you can see that it is simply the labels, shifted right by 1h.
###Code
wide_window.plot(baseline)
###Output
_____no_output_____
###Markdown
In the above plots of three examples, the single-step model is run over the course of 24h. This deserves some explanation:* The blue "Inputs" line shows the input temperature at each time step. The model receives all features; this plot only shows the temperature.* The green "Labels" dots show the target prediction value. These dots are shown at the prediction time, not the input time. That is why the range of labels is shifted 1 step relative to the inputs.* The orange "Predictions" crosses are the model's predictions for each output time step. If the model were predicting perfectly, the predictions would land directly on the "Labels". Linear modelThe simplest **trainable** model you can apply to this task is to insert a linear transformation between the input and output. In this case the output from a time step only depends on that step:A `layers.Dense` layer with no `activation` set is a linear model. The layer only transforms the last axis of the data from `(batch, time, inputs)` to `(batch, time, units)`; it is applied independently to every item across the `batch` and `time` axes.
###Code
linear = tf.keras.Sequential([
tf.keras.layers.Dense(units=1)
])
print('Input shape:', single_step_window.example[0].shape)
print('Output shape:', linear(single_step_window.example[0]).shape)
###Output
Input shape: (32, 1, 19)
Output shape: (32, 1, 1)
###Markdown
This tutorial trains many models, so package the training procedure into a function:
###Code
MAX_EPOCHS = 20
def compile_and_fit(model, window, patience=2):
early_stopping = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
patience=patience,
mode='min')
model.compile(loss=tf.losses.MeanSquaredError(),
optimizer=tf.optimizers.Adam(),
metrics=[tf.metrics.MeanAbsoluteError()])
history = model.fit(window.train, epochs=MAX_EPOCHS,
validation_data=window.val,
callbacks=[early_stopping])
return history
###Output
_____no_output_____
###Markdown
Train the model and evaluate its performance:
###Code
history = compile_and_fit(linear, single_step_window)
val_performance['Linear'] = linear.evaluate(single_step_window.val)
performance['Linear'] = linear.evaluate(single_step_window.test, verbose=0)
###Output
Epoch 1/20
1534/1534 [==============================] - 11s 7ms/step - loss: 0.2789 - mean_absolute_error: 0.3265 - val_loss: 0.0000e+00 - val_mean_absolute_error: 0.0000e+00
Epoch 2/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0220 - mean_absolute_error: 0.1061 - val_loss: 0.0106 - val_mean_absolute_error: 0.0764
Epoch 3/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0101 - mean_absolute_error: 0.0741 - val_loss: 0.0092 - val_mean_absolute_error: 0.0707
Epoch 4/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0094 - mean_absolute_error: 0.0711 - val_loss: 0.0089 - val_mean_absolute_error: 0.0699
Epoch 5/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0092 - mean_absolute_error: 0.0703 - val_loss: 0.0088 - val_mean_absolute_error: 0.0699
Epoch 6/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0091 - mean_absolute_error: 0.0700 - val_loss: 0.0088 - val_mean_absolute_error: 0.0691
Epoch 7/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0091 - mean_absolute_error: 0.0699 - val_loss: 0.0089 - val_mean_absolute_error: 0.0702
Epoch 8/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0091 - mean_absolute_error: 0.0698 - val_loss: 0.0088 - val_mean_absolute_error: 0.0692
Epoch 9/20
1534/1534 [==============================] - 8s 5ms/step - loss: 0.0091 - mean_absolute_error: 0.0698 - val_loss: 0.0087 - val_mean_absolute_error: 0.0688
Epoch 10/20
1333/1534 [=========================>....] - ETA: 0s - loss: 0.0091 - mean_absolute_error: 0.0696WARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are: loss,mean_absolute_error
###Markdown
Like the `baseline` model, the linear model can be called on batches of wide windows. Used this way, the model makes a set of independent predictions on consecutive time steps. The `time` axis acts like another `batch` axis. There are no interactions between the predictions at each time step.
###Code
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', linear(wide_window.example[0]).shape)
###Output
_____no_output_____
###Markdown
Here is the plot of its example predictions on the `wide_window`; note how in many cases the prediction is clearly better than just returning the input temperature, but in a few cases it's worse:
###Code
wide_window.plot(linear)
###Output
_____no_output_____
###Markdown
One advantage to linear models is that they're relatively simple to interpret.You can pull out the layer's weights, and see the weight assigned to each input:
###Code
plt.bar(x = range(len(train_df.columns)),
height=linear.layers[0].kernel[:,0].numpy())
axis = plt.gca()
axis.set_xticks(range(len(train_df.columns)))
_ = axis.set_xticklabels(train_df.columns, rotation=90)
###Output
_____no_output_____
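###Markdown
To read the weights numerically rather than from the bar chart, you can pair each column name with its learned coefficient. This is a small added sketch (not part of the original text) that assumes the `linear` model above has already been trained:
###Code
# Pair each input column with the weight the Dense layer assigned to it,
# sorted by magnitude so the most influential inputs come first.
weights = linear.layers[0].kernel[:, 0].numpy()
for name, w in sorted(zip(train_df.columns, weights), key=lambda t: -abs(t[1])):
    print(f'{name:20s} {w:+.4f}')
###Output
_____no_output_____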
###Markdown
Sometimes the model doesn't even place the most weight on the input `T (degC)`. This is one of the risks of random initialization. DenseBefore applying models that actually operate on multiple time-steps, it's worth checking the performance of deeper, more powerful, single input step models.Here's a model similar to the `linear` model, except it stacks a few `Dense` layers between the input and the output:
###Code
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=1)
])
history = compile_and_fit(dense, single_step_window)
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
###Output
_____no_output_____
###Markdown
Multi-step denseA single-time-step model has no context for the current values of its inputs. It can't see how the input features are changing over time. To address this issue the model needs access to multiple time steps when making predictions: The `baseline`, `linear` and `dense` models handled each time step independently. Here the model will take multiple time steps as input to produce a single output.Create a `WindowGenerator` that will produce batches of 3h of inputs and 1h of labels: Note that the `Window`'s `shift` parameter is relative to the end of the two windows.
###Code
CONV_WIDTH = 3
conv_window = WindowGenerator(
input_width=CONV_WIDTH,
label_width=1,
shift=1,
label_columns=['T (degC)'])
conv_window
conv_window.plot()
plt.title("Given 3h as input, predict 1h into the future.")
###Output
_____no_output_____
###Markdown
You could train a `dense` model on a multiple-input-step window by adding a `layers.Flatten` as the first layer of the model:
###Code
multi_step_dense = tf.keras.Sequential([
# Shape: (time, features) => (time*features)
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
# Add back the time dimension.
# Shape: (outputs) => (1, outputs)
tf.keras.layers.Reshape([1, -1]),
])
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', multi_step_dense(conv_window.example[0]).shape)
history = compile_and_fit(multi_step_dense, conv_window)
IPython.display.clear_output()
val_performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.val)
performance['Multi step dense'] = multi_step_dense.evaluate(conv_window.test, verbose=0)
conv_window.plot(multi_step_dense)
###Output
_____no_output_____
###Markdown
The main downside of this approach is that the resulting model can only be executed on input windows of exactly this shape.
###Code
print('Input shape:', wide_window.example[0].shape)
try:
print('Output shape:', multi_step_dense(wide_window.example[0]).shape)
except Exception as e:
print(f'\n{type(e).__name__}:{e}')
###Output
_____no_output_____
###Markdown
The convolutional models in the next section fix this problem. Convolution neural network A convolution layer (`layers.Conv1D`) also takes multiple time steps as input to each prediction. Below is the **same** model as `multi_step_dense`, re-written with a convolution. Note the changes:* The `layers.Flatten` and the first `layers.Dense` are replaced by a `layers.Conv1D`.* The `layers.Reshape` is no longer necessary since the convolution keeps the time axis in its output.
###Code
conv_model = tf.keras.Sequential([
tf.keras.layers.Conv1D(filters=32,
kernel_size=(CONV_WIDTH,),
activation='relu'),
tf.keras.layers.Dense(units=32, activation='relu'),
tf.keras.layers.Dense(units=1),
])
###Output
_____no_output_____
###Markdown
Run it on an example batch to see that the model produces outputs with the expected shape:
###Code
print("Conv model on `conv_window`")
print('Input shape:', conv_window.example[0].shape)
print('Output shape:', conv_model(conv_window.example[0]).shape)
###Output
Conv model on `conv_window`
Input shape: (32, 3, 19)
Output shape: (32, 1, 1)
###Markdown
Train and evaluate it on the `conv_window` and it should give performance similar to the `multi_step_dense` model.
###Code
history = compile_and_fit(conv_model, conv_window)
IPython.display.clear_output()
val_performance['Conv'] = conv_model.evaluate(conv_window.val)
performance['Conv'] = conv_model.evaluate(conv_window.test, verbose=0)
###Output
Epoch 1/20
1/Unknown - 1s 897ms/stepWARNING:tensorflow:Early stopping conditioned on metric `val_loss` which is not available. Available metrics are:
1/Unknown - 1s 899ms/step
###Markdown
The difference between this `conv_model` and the `multi_step_dense` model is that the `conv_model` can be run on inputs of any length. The convolutional layer is applied to a sliding window of inputs:If you run it on wider input, it produces wider output:
###Code
print("Wide window")
print('Input shape:', wide_window.example[0].shape)
print('Labels shape:', wide_window.example[1].shape)
print('Output shape:', conv_model(wide_window.example[0]).shape)
###Output
_____no_output_____
###Markdown
Note that the output is shorter than the input. To make training or plotting work, you need the labels and predictions to have the same length. So build a `WindowGenerator` to produce wide windows with a few extra input time steps so the label and prediction lengths match:
###Code
LABEL_WIDTH = 24
INPUT_WIDTH = LABEL_WIDTH + (CONV_WIDTH - 1)
wide_conv_window = WindowGenerator(
input_width=INPUT_WIDTH,
label_width=LABEL_WIDTH,
shift=1,
label_columns=['T (degC)'])
wide_conv_window
print("Wide conv window")
print('Input shape:', wide_conv_window.example[0].shape)
print('Labels shape:', wide_conv_window.example[1].shape)
print('Output shape:', conv_model(wide_conv_window.example[0]).shape)
###Output
_____no_output_____
###Markdown
Now you can plot the model's predictions on a wider window. Note the 3 input time steps before the first prediction. Every prediction here is based on the 3 preceding timesteps:
###Code
wide_conv_window.plot(conv_model)
###Output
_____no_output_____
###Markdown
Recurrent neural networkA Recurrent Neural Network (RNN) is a type of neural network well-suited to time series data. RNNs process a time series step-by-step, maintaining an internal state from time-step to time-step.For more details, read the [text generation tutorial](https://www.tensorflow.org/tutorials/text/text_generation) or the [RNN guide](https://www.tensorflow.org/guide/keras/rnn). In this tutorial, you will use an RNN layer called Long Short Term Memory ([LSTM](https://www.tensorflow.org/versions/r2.0/api_docs/python/tf/keras/layers/LSTM)). An important constructor argument for all keras RNN layers is the `return_sequences` argument. This setting can configure the layer in one of two ways.1. If `False`, the default, the layer only returns the output of the final timestep, giving the model time to warm up its internal state before making a single prediction: 2. If `True` the layer returns an output for each input. This is useful for: * Stacking RNN layers. * Training a model on multiple timesteps simultaneously.
###Code
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=1)
])
###Output
_____no_output_____
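###Markdown
As a quick, hedged illustration of the `return_sequences` setting (the batch/time/feature sizes below are assumed only for the demonstration), the two configurations differ only in whether an output is kept for every input time step:
###Code
# Dummy batch with shape (batch, time, features); the values are arbitrary zeros.
dummy = tf.zeros([32, 24, 19])
# return_sequences=True keeps one output per time step -> (32, 24, 32)
print(tf.keras.layers.LSTM(32, return_sequences=True)(dummy).shape)
# return_sequences=False (the default) keeps only the final step -> (32, 32)
print(tf.keras.layers.LSTM(32, return_sequences=False)(dummy).shape)
###Output
_____no_output_____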
###Markdown
With `return_sequences=True` the model can be trained on 24h of data at a time.Note: This will give a pessimistic view of the model's performance. On the first timestep the model has no access to previous steps, and so can't do any better than the simple `linear` and `dense` models shown earlier.
###Code
print('Input shape:', wide_window.example[0].shape)
print('Output shape:', lstm_model(wide_window.example[0]).shape)
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate(wide_window.val)
performance['LSTM'] = lstm_model.evaluate(wide_window.test, verbose=0)
wide_window.plot(lstm_model)
###Output
_____no_output_____
###Markdown
Performance With this dataset typically each of the models does slightly better than the one before it.
###Code
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.ylabel('mean_absolute_error [T (degC), normalized]')
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
_ = plt.legend()
for name, value in performance.items():
print(f'{name:12s}: {value[1]:0.4f}')
###Output
_____no_output_____
###Markdown
Multi-output modelsThe models so far all predicted a single output feature, `T (degC)`, for a single time step.All of these models can be converted to predict multiple features just by changing the number of units in the output layer and adjusting the training windows to include all features in the `labels`.
###Code
single_step_window = WindowGenerator(
# `WindowGenerator` returns all features as labels if you
# don't set the `label_columns` argument.
input_width=1, label_width=1, shift=1)
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
for example_inputs, example_labels in wide_window.train.take(1):
print(f'Inputs shape (batch, time, features): {example_inputs.shape}')
print(f'Labels shape (batch, time, features): {example_labels.shape}')
###Output
_____no_output_____
###Markdown
Note above that the `features` axis of the labels now has the same depth as the inputs, instead of 1. BaselineThe same baseline model can be used here, but this time repeating all features instead of selecting a specific `label_index`.
###Code
baseline = Baseline()
baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
val_performance = {}
performance = {}
val_performance['Baseline'] = baseline.evaluate(wide_window.val)
performance['Baseline'] = baseline.evaluate(wide_window.test, verbose=0)
###Output
_____no_output_____
###Markdown
Dense
###Code
dense = tf.keras.Sequential([
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=64, activation='relu'),
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(dense, single_step_window)
IPython.display.clear_output()
val_performance['Dense'] = dense.evaluate(single_step_window.val)
performance['Dense'] = dense.evaluate(single_step_window.test, verbose=0)
###Output
_____no_output_____
###Markdown
RNN
###Code
%%time
wide_window = WindowGenerator(
input_width=24, label_width=24, shift=1)
lstm_model = tf.keras.models.Sequential([
# Shape [batch, time, features] => [batch, time, lstm_units]
tf.keras.layers.LSTM(32, return_sequences=True),
# Shape => [batch, time, features]
tf.keras.layers.Dense(units=num_features)
])
history = compile_and_fit(lstm_model, wide_window)
IPython.display.clear_output()
val_performance['LSTM'] = lstm_model.evaluate( wide_window.val)
performance['LSTM'] = lstm_model.evaluate( wide_window.test, verbose=0)
print()
###Output
_____no_output_____
###Markdown
Advanced: Residual connectionsThe `Baseline` model from earlier took advantage of the fact that the sequence doesn't change drastically from time step to time step. Every model trained in this tutorial so far was randomly initialized, and then had to learn that the output is a small change from the previous time step.While you can get around this issue with careful initialization, it's simpler to build this into the model structure.It's common in time series analysis to build models that, instead of predicting the next value, predict how the value will change in the next timestep.Similarly, "Residual networks" or "ResNets" in deep learning refer to architectures where each layer adds to the model's accumulating result.That is how you take advantage of the knowledge that the change should be small.Essentially this initializes the model to match the `Baseline`. For this task it helps models converge faster, with slightly better performance. This approach can be used in conjunction with any model discussed in this tutorial. Here it is being applied to the LSTM model; note the use of `tf.initializers.zeros` to ensure that the initial predicted changes are small, and don't overpower the residual connection. There are no symmetry-breaking concerns for the gradients here, since the `zeros` are only used on the last layer.
###Code
class ResidualWrapper(tf.keras.Model):
def __init__(self, model):
super().__init__()
self.model = model
def call(self, inputs, *args, **kwargs):
delta = self.model(inputs, *args, **kwargs)
# The prediction for each timestep is the input
# from the previous time step plus the delta
# calculated by the model.
return inputs + delta
%%time
residual_lstm = ResidualWrapper(
tf.keras.Sequential([
tf.keras.layers.LSTM(32, return_sequences=True),
tf.keras.layers.Dense(
num_features,
# The predicted deltas should start small
# So initialize the output layer with zeros
kernel_initializer=tf.initializers.zeros)
]))
history = compile_and_fit(residual_lstm, wide_window)
IPython.display.clear_output()
val_performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.val)
performance['Residual LSTM'] = residual_lstm.evaluate(wide_window.test, verbose=0)
print()
###Output
_____no_output_____
###Markdown
Performance Here is the overall performance for these multi-output models.
###Code
x = np.arange(len(performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in val_performance.values()]
test_mae = [v[metric_index] for v in performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=performance.keys(),
rotation=45)
plt.ylabel('MAE (average over all outputs)')
_ = plt.legend()
for name, value in performance.items():
print(f'{name:15s}: {value[1]:0.4f}')
###Output
_____no_output_____
###Markdown
The above performances are averaged across all model outputs. Multi-step modelsBoth the single-output and multiple-output models in the previous sections made **single time step predictions**, 1h into the future.This section looks at how to expand these models to make **multiple time step predictions**.In a multi-step prediction, the model needs to learn to predict a range of future values. Thus, unlike a single step model, where only a single future point is predicted, a multi-step model predicts a sequence of the future values.There are two rough approaches to this:1. Single shot predictions where the entire time series is predicted at once.2. Autoregressive predictions where the model only makes single step predictions and its output is fed back as its input.In this section all the models will predict **all the features across all output time steps**. For the multi-step model, the training data again consists of hourly samples. However, here, the models will learn to predict 24h of the future, given 24h of the past.Here is a `Window` object that generates these slices from the dataset:
###Code
OUT_STEPS = 24
multi_window = WindowGenerator(input_width=24,
label_width=OUT_STEPS,
shift=OUT_STEPS)
multi_window.plot()
multi_window
###Output
_____no_output_____
###Markdown
Baselines A simple baseline for this task is to repeat the last input time step for the required number of output timesteps:
###Code
class MultiStepLastBaseline(tf.keras.Model):
def call(self, inputs):
return tf.tile(inputs[:, -1:, :], [1, OUT_STEPS, 1])
last_baseline = MultiStepLastBaseline()
last_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance = {}
multi_performance = {}
multi_val_performance['Last'] = last_baseline.evaluate(multi_window.val)
multi_performance['Last'] = last_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(last_baseline)
###Output
_____no_output_____
###Markdown
Since this task is to predict 24h given 24h another simple approach is to repeat the previous day, assuming tomorrow will be similar:
###Code
class RepeatBaseline(tf.keras.Model):
def call(self, inputs):
return inputs
repeat_baseline = RepeatBaseline()
repeat_baseline.compile(loss=tf.losses.MeanSquaredError(),
metrics=[tf.metrics.MeanAbsoluteError()])
multi_val_performance['Repeat'] = repeat_baseline.evaluate(multi_window.val)
multi_performance['Repeat'] = repeat_baseline.evaluate(multi_window.test, verbose=0)
multi_window.plot(repeat_baseline)
###Output
_____no_output_____
###Markdown
Single-shot modelsOne high level approach to this problem is to use a "single-shot" model, where the model makes the entire sequence prediction in a single step.This can be implemented efficiently as a `layers.Dense` with `OUT_STEPS*features` output units. The model just needs to reshape that output to the required `(OUT_STEPS, features)`. LinearA simple linear model based on the last input time step does better than either baseline, but is underpowered. The model needs to predict `OUT_STEPS` time steps, from a single input time step with a linear projection. It can only capture a low-dimensional slice of the behavior, likely based mainly on the time of day and time of year.
###Code
multi_linear_model = tf.keras.Sequential([
# Take the last time-step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_linear_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Linear'] = multi_linear_model.evaluate(multi_window.val)
multi_performance['Linear'] = multi_linear_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_linear_model)
###Output
_____no_output_____
###Markdown
DenseAdding a `layers.Dense` between the input and output gives the linear model more power, but is still only based on a single input timestep.
###Code
multi_dense_model = tf.keras.Sequential([
# Take the last time step.
# Shape [batch, time, features] => [batch, 1, features]
tf.keras.layers.Lambda(lambda x: x[:, -1:, :]),
# Shape => [batch, 1, dense_units]
tf.keras.layers.Dense(512, activation='relu'),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_dense_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Dense'] = multi_dense_model.evaluate(multi_window.val)
multi_performance['Dense'] = multi_dense_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_dense_model)
###Output
_____no_output_____
###Markdown
CNN A convolutional model makes predictions based on a fixed-width history, which may lead to better performance than the dense model since it can see how things are changing over time:
###Code
CONV_WIDTH = 3
multi_conv_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, CONV_WIDTH, features]
tf.keras.layers.Lambda(lambda x: x[:, -CONV_WIDTH:, :]),
# Shape => [batch, 1, conv_units]
tf.keras.layers.Conv1D(256, activation='relu', kernel_size=(CONV_WIDTH)),
# Shape => [batch, 1, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_conv_model, multi_window)
IPython.display.clear_output()
multi_val_performance['Conv'] = multi_conv_model.evaluate(multi_window.val)
multi_performance['Conv'] = multi_conv_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_conv_model)
###Output
_____no_output_____
###Markdown
RNN A recurrent model can learn to use a long history of inputs, if it's relevant to the predictions the model is making. Here the model will accumulate internal state for 24h, before making a single prediction for the next 24h.In this single-shot format, the LSTM only needs to produce an output at the last time step, so set `return_sequences=False`.
###Code
multi_lstm_model = tf.keras.Sequential([
# Shape [batch, time, features] => [batch, lstm_units]
# Adding more `lstm_units` just overfits more quickly.
tf.keras.layers.LSTM(32, return_sequences=False),
# Shape => [batch, out_steps*features]
tf.keras.layers.Dense(OUT_STEPS*num_features,
kernel_initializer=tf.initializers.zeros),
# Shape => [batch, out_steps, features]
tf.keras.layers.Reshape([OUT_STEPS, num_features])
])
history = compile_and_fit(multi_lstm_model, multi_window)
IPython.display.clear_output()
multi_val_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.val)
multi_performance['LSTM'] = multi_lstm_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(multi_lstm_model)
###Output
_____no_output_____
###Markdown
Advanced: Autoregressive modelThe above models all predict the entire output sequence in a single step.In some cases it may be helpful for the model to decompose this prediction into individual time steps. Then each model's output can be fed back into itself at each step and predictions can be made conditioned on the previous one, like in the classic [Generating Sequences With Recurrent Neural Networks](https://arxiv.org/abs/1308.0850).One clear advantage to this style of model is that it can be set up to produce output with a varying length.You could take any of the single-step multi-output models trained in the first half of this tutorial and run it in an autoregressive feedback loop, but here we'll focus on building a model that's been explicitly trained to do that. RNNThis tutorial only builds an autoregressive RNN model, but this pattern could be applied to any model that was designed to output a single timestep.The model will have the same basic form as the single-step `LSTM` models: An `LSTM` followed by a `layers.Dense` that converts the `LSTM` outputs to model predictions.A `layers.LSTM` is a `layers.LSTMCell` wrapped in the higher level `layers.RNN` that manages the state and sequence results for you (See [Keras RNNs](https://www.tensorflow.org/guide/keras/rnn) for details).In this case the model has to manually manage the inputs for each step, so it uses `layers.LSTMCell` directly for the lower level, single time step interface.
###Code
class FeedBack(tf.keras.Model):
def __init__(self, units, out_steps):
super().__init__()
self.out_steps = out_steps
self.units = units
self.lstm_cell = tf.keras.layers.LSTMCell(units)
# Also wrap the LSTMCell in an RNN to simplify the `warmup` method.
self.lstm_rnn = tf.keras.layers.RNN(self.lstm_cell, return_state=True)
self.dense = tf.keras.layers.Dense(num_features)
feedback_model = FeedBack(units=32, out_steps=OUT_STEPS)
###Output
_____no_output_____
###Markdown
The first method this model needs is a `warmup` method to initialize its internal state based on the inputs. Once trained, this state will capture the relevant parts of the input history. This is equivalent to the single-step `LSTM` model from earlier:
###Code
def warmup(self, inputs):
# inputs.shape => (batch, time, features)
# x.shape => (batch, lstm_units)
x, *state = self.lstm_rnn(inputs)
# predictions.shape => (batch, features)
prediction = self.dense(x)
return prediction, state
FeedBack.warmup = warmup
###Output
_____no_output_____
###Markdown
This method returns a single time-step prediction, and the internal state of the LSTM:
###Code
prediction, state = feedback_model.warmup(multi_window.example[0])
prediction.shape
###Output
_____no_output_____
###Markdown
With the `RNN`'s state, and an initial prediction you can now continue iterating the model feeding the predictions at each step back as the input.The simplest approach to collecting the output predictions is to use a python list, and `tf.stack` after the loop. Note: Stacking a python list like this only works with eager-execution, using `Model.compile(..., run_eagerly=True)` for training, or with a fixed length output. For a dynamic output length you would need to use a `tf.TensorArray` instead of a python list, and `tf.range` instead of the python `range`.
###Code
def call(self, inputs, training=None):
# Use a python list to collect the outputs, stacked into a tensor after the loop.
predictions = []
# Initialize the lstm state
prediction, state = self.warmup(inputs)
# Insert the first prediction
predictions.append(prediction)
# Run the rest of the prediction steps
for n in range(1, self.out_steps):
# Use the last prediction as input.
x = prediction
# Execute one lstm step.
x, state = self.lstm_cell(x, states=state,
training=training)
# Convert the lstm output to a prediction.
prediction = self.dense(x)
# Add the prediction to the output
predictions.append(prediction)
# predictions.shape => (time, batch, features)
predictions = tf.stack(predictions)
# predictions.shape => (batch, time, features)
predictions = tf.transpose(predictions, [1, 0, 2])
return predictions
FeedBack.call = call
###Output
_____no_output_____
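###Markdown
For reference, here is a hedged sketch (not part of the tutorial's model) of the `tf.TensorArray`/`tf.range` variant mentioned above, which is the form you would need for a dynamic number of output steps or when tracing the loop inside a `tf.function`:
###Code
def call_with_tensorarray(self, inputs, training=None):
    # A fixed-size TensorArray replaces the python list.
    predictions = tf.TensorArray(tf.float32, size=self.out_steps)
    # Initialize the lstm state and get the first prediction.
    prediction, state = self.warmup(inputs)
    predictions = predictions.write(0, prediction)
    # Run the remaining steps, feeding each prediction back in as the input.
    for n in tf.range(1, self.out_steps):
        x, state = self.lstm_cell(prediction, states=state, training=training)
        prediction = self.dense(x)
        predictions = predictions.write(n, prediction)
    # stack() -> (time, batch, features); transpose -> (batch, time, features)
    return tf.transpose(predictions.stack(), [1, 0, 2])
# Optional swap-in (not used in the rest of this notebook):
# FeedBack.call = call_with_tensorarray
###Output
_____no_output_____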
###Markdown
Test run this model on the example inputs:
###Code
print('Output shape (batch, time, features): ', feedback_model(multi_window.example[0]).shape)
###Output
_____no_output_____
###Markdown
Now train the model:
###Code
history = compile_and_fit(feedback_model, multi_window)
IPython.display.clear_output()
multi_val_performance['AR LSTM'] = feedback_model.evaluate(multi_window.val)
multi_performance['AR LSTM'] = feedback_model.evaluate(multi_window.test, verbose=0)
multi_window.plot(feedback_model)
###Output
_____no_output_____
###Markdown
Performance There are clearly diminishing returns as a function of model complexity on this problem.
###Code
x = np.arange(len(multi_performance))
width = 0.3
metric_name = 'mean_absolute_error'
metric_index = lstm_model.metrics_names.index('mean_absolute_error')
val_mae = [v[metric_index] for v in multi_val_performance.values()]
test_mae = [v[metric_index] for v in multi_performance.values()]
plt.bar(x - 0.17, val_mae, width, label='Validation')
plt.bar(x + 0.17, test_mae, width, label='Test')
plt.xticks(ticks=x, labels=multi_performance.keys(),
rotation=45)
plt.ylabel(f'MAE (average over all times and outputs)')
_ = plt.legend()
###Output
_____no_output_____
###Markdown
The metrics for the multi-output models in the first half of this tutorial show the performance averaged across all output features. These performances are similar, but are also averaged across output timesteps.
###Code
for name, value in multi_performance.items():
print(f'{name:8s}: {value[1]:0.4f}')
###Output
_____no_output_____ |
Vacation/Vacation1Py.ipynb | ###Markdown
VacationPy---- Note* Instructions have been included for each segment. You do not have to follow them exactly, but they are included to help you think through the steps.
###Code
# Dependencies and Setup
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
import requests
import gmaps
import os
import ipywidgets as widgets
import math
# Import API key
from api_keys import g_key
###Output
_____no_output_____
###Markdown
Store Part I results into DataFrame* Load the csv exported in Part I to a DataFrame
###Code
csvpath="outputcityweather.csv"
csvread=pd.read_csv(csvpath)
df=pd.DataFrame(csvread)
df
#conversion from Kelvin to Fahrenheit
Max_Temp = df["Max Temp"]
df["Temp in F"] = (Max_Temp - 273.15) * (9 / 5) + 32
df.head()
###Output
_____no_output_____
###Markdown
Humidity Heatmap* Configure gmaps.* Use the Lat and Lng as locations and Humidity as the weight.* Add Heatmap layer to map.
###Code
humidity=df["Humidity"]
maxhumidity=humidity.max()
longlatlocation=df[["Lat", "Lng"]]
mapfig = gmaps.figure()
heatmaplayer = gmaps.heatmap_layer(longlatlocation, weights=humidity,dissipating=False, max_intensity=maxhumidity,point_radius=3)
mapfig.add_layer(heatmaplayer)
mapfig
###Output
_____no_output_____
###Markdown
Create new DataFrame fitting weather criteria* Narrow down the cities to fit weather conditions.* Drop any rows with null values.
###Code
narrowe = df.loc[(df["Temp in F"] > 70) & (df["Temp in F"] < 80) & (df["Cloudiness"] == 0), :]
narrowe = narrowe.dropna(how='any')
narrowe.reset_index(inplace=True)
del narrowe['index']
narrowe.head()
hotel= []
for i in range(len(narrowe)):
lat = narrowe.loc[i]['Lat']
lng = narrowe.loc[i]['Lng']
params = {
"location": f"{lat},{lng}",
"radius": 5000,
"types" : "hotel",
"key": g_key
}
base_url = "https://maps.googleapis.com/maps/api/place/nearbysearch/json"
request = requests.get(base_url, params=params)
response= request.json()
try:
hotel.append(response['results'][0]['name'])
except:
hotel.append("")
narrowe["Hotel Name"] = hotel
narrowe = narrowe.dropna(how='any')
narrowe.head()
###Output
_____no_output_____
###Markdown
Hotel Map* Store into variable named `hotel_df`.* Add a "Hotel Name" column to the DataFrame.* Set parameters to search for hotels within 5000 meters.* Hit the Google Places API for each city's coordinates.* Store the first Hotel result into the DataFrame.* Plot markers on top of the heatmap.
###Code
# NOTE: Do not change any of the code in this cell
# Using the template add the hotel marks to the heatmap
info_box_template = """
<dl>
<dt>Name</dt><dd>{Hotel Name}</dd>
<dt>City</dt><dd>{City}</dd>
<dt>Country</dt><dd>{Country}</dd>
</dl>
"""
# Store the DataFrame Row
# NOTE: be sure to update with your DataFrame name
hotel_info = [info_box_template.format(**row) for index, row in narrowe.iterrows()]
locations = narrowe[["Lat", "Lng"]]
# Add marker layer on top of heat map, attaching the hotel info boxes built above
marker = gmaps.marker_layer(locations, info_box_content=hotel_info)
mapfig.add_layer(marker)
mapfig
# Display figure
###Output
_____no_output_____ |
sample-run/example_analysis.ipynb | ###Markdown
VegMapperLicense TermsCopyright (c) 2019, California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged.All rights reserved.Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.* Redistributions must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.* Neither the name of Caltech nor its operating division, the Jet Propulsion Laboratory, nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
###Code
#load in Python libraries
import ipywidgets as ipw
from ipyfilechooser import FileChooser
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
#set up Python/R dual functionality for notebook
%load_ext rpy2.ipython
%%R
#initialize R cell and load in necessary libraries
library(rgdal)
library(arm)
library(gdalUtils)
library(raster)
###Output
_____no_output_____
###Markdown
User inputs:
###Code
%%R
##### FILES #####
#all GIS data in lat lon WGS84 unless specified
#path to comma-delimited file, must have cols 'latitude', 'longitude', 'class'
in_points = "wwf_2018_clipped.csv"
#remote sensing stack in ENVI flat binary format, get stack info
in_stack = "indo_stack_nov_2018_clipped"
#path to output comma-delimited file, same as in_points with appended remote sensing values
out_points = "wwf_2018_clipped_testPred.csv"
#name of the output map in ENVI flat binary format
out_pred = "op3_indo_nov_2018_clipped"
#name of output GeoTIFF
out_tif = "predSurface.tif"
#--------------------------------------------------------------#
##### PARAMETERS #####
buffer = 1
hhBandIndex = 0 #NOTE: see Apply routine for information
#threshold for logistic model;
#when probability is larger than this threshold, we say that oil palm is present
threshold = 0.5
#RUN: priors (variable order corresponds to order in which bands are read), currently Costa Rica
#sequence of prior values needs to match the sequence of stack_names below
use_prior = TRUE
prior_mean = c(0.06491638, -26.63132179, 0.05590800, -29.64091620)
prior_scale = c(0.02038204, 7.58200324, 0.01686930, 8.73995422)
prior_mean_int = 1.99274801
prior_scale_int = 7.22600112
#lower and upper bounds for posterior credible intervals
lp = 0.025
up = 0.975
#--------------------------------------------------------------#
##### BAND INFO #####
stack_names = c("vcf", "c_rvi", "ndvi", "l_rvi_mosaic") #desired bands from the in_stack (names should match source)
#NOTE: this is a DEVELOPER input and not a user input,
# please do not change this unless you have been approved to do so
bands = c(2,3,1,4) #index of each of the bands defined above (from in_stack)
nodata = -9999 #NA value present in input bands: NEEDS TO BE A LIST OF NUMBERS IF NOT CONSISTENT ACROSS BANDS
###Output
_____no_output_____
###Markdown
EXTRACTObjective: read remote sensing values at training points. The cell below creates a function to get intensity values from the stack and executes the extract routine:
###Code
%%R
getPixel= function(gdalObject, X, Y, buffer, ulX, ulY, cellSize, bands){
nrow = dim(gdalObject)[1]
ncol = dim(gdalObject)[2]
rowOffset = ((ulY-Y)/cellSize) - buffer
if(rowOffset<0 | (rowOffset+buffer+2) > nrow){
return(NA)
}
colOffset = ((X-ulX)/cellSize) - buffer
if(colOffset<0 | (colOffset+buffer+2) > ncol){
return(NA)
}
windowY = buffer+2
windowX = windowY
pixelValue = getRasterData(gdalObject, band=bands, offset=c(rowOffset, colOffset), region.dim=c(windowY, windowX))
return(pixelValue)
}
#stack information
r_stack = stack(in_stack)
res = xres(r_stack)
r_extent = r_stack@extent
ulX = r_extent@xmin
ulY = r_extent@ymax
# s_info = GDALinfo(in_stack) also works, lacks ulY
#grab stack
gdalObj = new("GDALDataset", in_stack)
#append remote sensing information to point table and write to csv file
inData <- read.csv(in_points, header=TRUE)
numPoints <- nrow(inData)
header <- c(colnames(inData), stack_names)
write.table(x=t(header), file=out_points, append=FALSE, col.names=FALSE, row.names=FALSE, sep=",")
print("Extracting values for...")
for(i in 1:numPoints){
allBands <- rep(NA, length(bands))
for (j in 1:length(bands)){
oneBand = getPixel(gdalObj, inData$longitude[i], inData$latitude[i], buffer, ulX, ulY, res, bands[j])
w = which (oneBand == nodata[j])
oneBand[w]<-NA
allBands[j] = mean(oneBand, na.rm=TRUE)
if(i==1) print(stack_names[j])
}
mydata <- data.frame(t(as.vector(allBands)))
colnames(mydata) <- stack_names
newRow = cbind(inData[i,],mydata)
write.table(x=newRow, file=out_points, append=TRUE, col.names=FALSE, row.names=FALSE, sep=",")
}
###Output
_____no_output_____
###Markdown
RUNObjective: fit Bayesian model, calculate posteriors and confusion matrix. The cell below creates the model and executes the prediction, constructs a confusion matrix, calculates the prediction accuracy/posterior CI, calculates/builds posteriors for subsequent runs, and prints the result:
###Code
%%R
#ADDITIONAL INPUTS
#columns of the predictor variables to be used in this model (taken from pred csv)
#column order indicated here is vcf, c_rvi, ndvi, l_rvi_mosaic, matching the priors above
index = c(13,14,15,16)
%%R
#impute missing values by variable means
data = read.csv(out_points)
for (i in which(sapply(data, is.numeric))) {
for (j in which(is.na(data[, i]))) {
data[j, i] <- mean(data[data[, "my_class"] == data[j, "my_class"], i], na.rm = TRUE)
}
}
#true_label: 1 for oil_palm and 0 for non oil_palm
true_label = 1*(data$my_class == 'oil_palm') #EDIT: why this --> http://127.0.0.1:54098/notebooks/sample-run/example_analysis.ipynb
#transform interested variables into a matrix which would be used
x = as.matrix(data[, index])
all_names = names(data)
stack_names = all_names[index]
colnames(x) = stack_names
#build model by incorporating those variables
formula = as.formula(paste("true_label ~ ", paste(stack_names, collapse="+"),sep = ""))
use_data = as.data.frame(cbind(x, true_label))
#to specify prior
#if noninformative prior, use prior.mean=apply(x, 2, mean), prior.scale=Inf, prior.df=Inf
#if having a prior, set prior.mean=c(....), prior.scale=c(.....)
#length of prior mean and prior scale should be equal to the number of predictors
if(! use_prior){
model = bayesglm(formula, data=use_data, family=binomial(link='logit'), prior.mean=apply(x, 2, mean), prior.scale=Inf, scale=FALSE)
}
if(use_prior){
model = bayesglm(formula, data=use_data, family=binomial(link='logit'),
prior.mean=prior_mean,
prior.scale=prior_scale,
prior.mean.for.intercept=prior_mean_int,
prior.scale.for.intercept=prior_scale_int,
scale = FALSE)
}
#oil_palm prediction
class_prediction = 1*(model$fitted.values >= threshold) #if the fitted value is above the threshold, value is changed to binary 1
print(class_prediction)
#used instead of na.remove to get rid of NA values in 2018 validation dataset
true_label = true_label[!is.na(true_label)]
#generate confusion matrix
bayesian_conf_matrix = matrix(0,2,2)
bayesian_conf_matrix[1,1] = sum(class_prediction + true_label == 0)
bayesian_conf_matrix[2,2] = sum(class_prediction + true_label == 2)
bayesian_conf_matrix[1,2] = sum((class_prediction == 0) & (true_label == 1))
bayesian_conf_matrix[2,1] = sum((class_prediction == 1) & (true_label == 0))
rownames(bayesian_conf_matrix) = c("Predicted non-oil-palm", "Predicted oil-palm")
colnames(bayesian_conf_matrix) = c("Actual non-oil-palm", "Actual oil-palm")
print(bayesian_conf_matrix)
#overall accuracy of model
accu_bayes = sum(class_prediction == true_label) / nrow(data)
print("Overall accuracy:")
print(accu_bayes)
#EDIT: push values to Python and use numpy/matplotlib to display matrix
# approximate posterior distributions of coefficients
# specify number of draws
num_draw = 2000
post_dist = sim(model, n.sims=num_draw)
coef_matrix = coef(post_dist)
# calculate posterior credible intervals for coefficients
posterior_ci_coef = matrix(NA, ncol(x)+1, 2)
for (i in 1:(ncol(x)+1)){
posterior_ci_coef[i, ] = unname(quantile(coef_matrix[, i], probs=c(lp, up), na.rm=TRUE))
}
# calculate posterior credible intervals for every data point
posterior_ci_data = matrix(NA, nrow(x), 2)
for(i in 1:nrow(x)){
temp = as.numeric()
for(j in 1:num_draw){
temp[j] = 1 / (1 + exp(-coef_matrix[j, 1] - sum(coef_matrix[j, 2:(length(index)+1)] * x[i, ])))
}
posterior_ci_data[i, ] = unname(quantile(temp, probs=c(lp, up), na.rm=TRUE))
}
# build posterior objects for next run
posterior_mean = model$coefficients
posterior_scale = apply(coef(post_dist), 2, sd)
#print the posteriors, store them for later
#EDIT: not sure if this works with combinations of other stack variables than current build (need to test in future)
posterior_mean
intercept = posterior_mean[["(Intercept)"]]
numVars = length(posterior_mean)
posteriors = posterior_mean[2:numVars] #EDIT: does this need to be transformed into c() variable?
###Output
_____no_output_____
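###Markdown
As a follow-up to the EDIT note in the cell above, here is a hedged sketch (not part of the original workflow) of pulling the R results into Python with rpy2's `%Rpull` line magic so they can be inspected or plotted with numpy/matplotlib; the variable names are the ones defined in the R cell above:
###Code
#pull the confusion matrix and overall accuracy from R into the Python namespace (sketch)
%Rpull bayesian_conf_matrix accu_bayes
import numpy as np
conf = np.asarray(bayesian_conf_matrix).reshape(2, 2)
print("Overall accuracy:", float(np.asarray(accu_bayes).ravel()[0]))
print(conf)
###Output
_____no_output_____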
###Markdown
APPLYObjective: apply model fits to calculate OP3 for the area covered by the data stack. OP3 = oil palm probability presence, ranging between 0-1. The cell below applies the model to the stack and executes the predictive analysis, outputting the prediction surface in GeoTIFF and ENVI binary formats:
###Code
%%R
#generate dummy for Docker
#only use env variables if creation fails (will normally return NULL but still create object)
# Sys.setenv(PROJ_LIB="/usr/bin/proj/")
# Sys.getenv("PROJ_LIB")
#dummy corresponds to one band from the original stack, used to save your output prediction map
in_dummy = "temp_dummy"
gdal_translate(src_dataset=in_stack, dst_dataset=in_dummy, of="ENVI", b=1)
#create GIS objects
gdalObjStack = new("GDALDataset", in_stack)
gdalObjDummy = new("GDALDataset", in_dummy)
rasterWidth = ncol(gdalObjStack)
rasterRows = nrow(gdalObjStack)
#calculate prediction for each pixel and save
print("Checking a few values...")
for(i in 1:rasterRows){
oneRasterLine = getRasterData(gdalObjStack, offset=c(i-1,0), region.dim=c(1, rasterWidth))
hhBand = hhBandIndex #PREVIOUSLY: which(bandNames == "alos2_hh")
#NOTE: previous value was 0, bandNames/modelBands was removed for redundancy,
# the above code has not been tested yet
pred = rep(-9999, rasterWidth)
for(j in 1:rasterWidth){
#hh = (20*log10(oneRasterLine[j, 1, hhBand])) -83
hh = oneRasterLine[j, 1, hhBand] #gets hh value at each of the pixels
#open water mask
#if(is.na(hh) | hh < -20){
#pred[j] = 0
#}
#else{
#select bands
selectBands = oneRasterLine[j, 1, bands] #EDITED: changed modelBands to bands
z = (intercept + sum(posteriors * selectBands))
pred[j] = exp(z)/(1+ exp(z))
#z = (intercept + sum(posteriors * scaledBands))
#pred[j] = exp(z)/(1+ exp(z))
if ((i/100 == i%/%100) & j == 1000) print(z) #reality check on the model fits
#}
}
#write one row to file
putRasterData(gdalObjDummy, pred, offset=c(i-1, 0)) #place predicted line in raster into dummy
}
saveDataset(gdalObjDummy, out_pred)
#convert to GeoTiff
gdal_translate(src_dataset=out_pred, dst_dataset=out_tif, of="GTiff")
###Output
_____no_output_____
###Markdown
VegMapperLicense TermsCopyright (c) 2019, California Institute of Technology ("Caltech"). U.S. Government sponsorship acknowledged.All rights reserved.Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met:* Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer.* Redistributions must reproduce the above copyright notice, this list of conditions and the following disclaimer in the documentation and/or other materials provided with the distribution.* Neither the name of Caltech nor its operating division, the Jet Propulsion Laboratory, nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission.THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
###Code
#load in Python libraries
import ipywidgets as ipw
from ipyfilechooser import FileChooser
from IPython.display import Markdown, display
def printmd(string):
display(Markdown(string))
#set up Python/R dual functionality for notebook
%load_ext rpy2.ipython
%%R
#initialize R cell and load in necessary libraries
library(rgdal)
library(arm)
library(gdalUtils)
library(raster)
###Output
_____no_output_____
###Markdown
User inputs:
###Code
%%R
##### FILES #####
#all GIS data in lat lon WGS84 unless specified
#path to comma-delimited file, must have cols 'latitude', 'longitude', 'class'
in_points = "wwf_2018_clipped.csv"
#remote sensing stack in ENVI flat binary format, get stack info
in_stack = "indo_stack_nov_2018_clipped"
#path to output comma-delimited file, same as in_points with appended remote sensing values
out_points = "wwf_2018_clipped_testPred.csv"
#name of the output map in ENVI flat binary format
out_pred = "op3_indo_nov_2018_clipped"
#name of output GeoTIFF
out_tif = "predSurface.tif"
#--------------------------------------------------------------#
##### PARAMETERS #####
buffer = 1
hhBandIndex = 0 #NOTE: see Apply routine for information
#treshold for logistic model;
#when probability is larger than this threshold, we say that oil palm is present
threshold = 0.5
#RUN: priors (variable order corresponds to order in which bands are read), currently Costa Rica
#sequence of prior values needs to match the sequence of stack_names below
use_prior = TRUE
prior_mean = c(0.06491638, -26.63132179, 0.05590800, -29.64091620)
prior_scale = c(0.02038204, 7.58200324, 0.01686930, 8.73995422)
prior_mean_int = 1.99274801
prior_scale_int = 7.22600112
#lower and upper bounds for posterior credible intervals
lp = 0.025
up = 0.975
#--------------------------------------------------------------#
##### BAND INFO #####
stack_names = c("vcf", "c_rvi", "ndvi", "l_rvi_mosaic") #desired bands from the in_stack (names should match source)
#NOTE: this is a DEVELOPER input and not a user input,
# please do not change this unless you have been approved to do so
bands = c(2,3,1,4) #index of each of the bands defined above (from in_stack)
nodata = -9999 #NA value present in input bands: NEEDS TO BE A LIST OF NUMBERS IF NOT CONSISTENT ACROSS BANDS
###Output
_____no_output_____
###Markdown
EXTRACTObjective: read remote sensing values at training points. The cell below creates a function to get intensity values from stack and executes extract routine:
###Code
%%R
getPixel= function(gdalObject, X, Y, buffer, ulX, ulY, cellSize, bands){
nrow = dim(gdalObject)[1]
ncol = dim(gdalObject)[2]
rowOffset = ((ulY-Y)/cellSize) - buffer
if(rowOffset<0 | (rowOffset+buffer+2) > nrow){
return(NA)
}
colOffset = ((X-ulX)/cellSize) - buffer
if(colOffset<0 | (colOffset+buffer+2) > ncol){
return(NA)
}
windowY = buffer+2
windowX = windowY
pixelValue = getRasterData(gdalObject, band=bands, offset=c(rowOffset, colOffset), region.dim=c(windowY, windowX))
return(pixelValue)
}
#stack information
r_stack = stack(in_stack)
res = xres(r_stack)
r_extent = r_stack@extent
ulX = r_extent@xmin
ulY = r_extent@ymax
# s_info = GDALinfo(in_stack) also works, lacks ulY
#grab stack
gdalObj = new("GDALDataset", in_stack)
#append remote sensing information to point table and write to cvs file
inData <- read.csv(in_points, header=TRUE)
numPoints <- nrow(inData)
header <- c(colnames(inData), stack_names)
write.table(x=t(header), file=out_points, append=FALSE, col.names=FALSE, row.names=FALSE, sep=",")
print("Extracting values for...")
for(i in 1:numPoints){
allBands <- rep(NA, length(bands))
for (j in 1:length(bands)){
oneBand = getPixel(gdalObj, inData$longitude[i], inData$latitude[i], buffer, ulX, ulY, res, bands[j])
w = which (oneBand == nodata[j])
oneBand[w]<-NA
allBands[j] = mean(oneBand, na.rm=TRUE)
if(i==1) print(stack_names[j])
}
mydata <- data.frame(t(as.vector(allBands)))
colnames(mydata) <- stack_names
newRow = cbind(inData[i,],mydata)
write.table(x=newRow, file=out_points, append=TRUE, col.names=FALSE, row.names=FALSE, sep=",")
}
###Output
_____no_output_____
###Markdown
RUNObjective: fit Bayesian model, calculate posteriors and confusion matrix. The cell below creates the model and executes the prediction, constructs a confusion matrix, calculates the prediction accuracy/posterior CI, calculates/builds posteriors for subsequent runs, and prints the result:
###Code
%%R
#ADDITIONAL INPUTS
#columns of the predictor variables to be used in this model (taken from pred csv)
#column order indicated here is vcf, c_rvi, ndvi, l_rvi_mosaic, matching the priors above
index = c(13,14,15,16)
%%R
#impute missing values by variable means
data = read.csv(out_points)
for (i in which(sapply(data, is.numeric))) {
for (j in which(is.na(data[, i]))) {
data[j, i] <- mean(data[data[, "my_class"] == data[j, "my_class"], i], na.rm = TRUE)
}
}
#true_label: 1 for oil_palm and 0 for non oil_palm
true_label = 1*(data$my_class == 'oil_palm') #EDIT: why this --> http://127.0.0.1:54098/notebooks/sample-run/example_analysis.ipynb
#transform interested variables into a matrix which would be used
x = as.matrix(data[, index])
all_names = names(data)
stack_names = all_names[index]
colnames(x) = stack_names
#build model by incorporating those variables
formula = as.formula(paste("true_label ~ ", paste(stack_names, collapse="+"),sep = ""))
use_data = as.data.frame(cbind(x, true_label))
#to specify prior
#if noninformative prior, use prior.mean=apply(x, 2, mean), prior.scale=Inf, prior.df=Inf
#if having a prior, set prior.mean=c(....), prior.scale=c(.....)
#length of prior mean and prior scale should be equal to the number of predictors
if(! use_prior){
model = bayesglm(formula, data=use_data, family=binomial(link='logit'), prior.mean=apply(x, 2, mean), prior.scale=Inf, scaled=FALSE)
}
if(use_prior){
model = bayesglm(formula, data=use_data, family=binomial(link='logit'),
prior.mean=prior_mean,
prior.scale=prior_scale,
prior.mean.for.intercept=prior_mean_int,
prior.scale.for.intercept=prior_scale_int,
scaled = FALSE)
}
#oil_palm prediction
class_prediction = 1*(model$fitted.values >= threshold) #if the fitted value is above the threshold, value is changed to binary 1
print(class_prediction)
#used instead of na.remove to get rid of NA values in 2018 validation dataset
true_label = true_label[!is.na(true_label)]
#generate confusion matrix
bayesian_conf_matrix = matrix(0,2,2)
bayesian_conf_matrix[1,1] = sum(class_prediction + true_label == 0)
bayesian_conf_matrix[2,2] = sum(class_prediction + true_label == 2)
bayesian_conf_matrix[1,2] = sum((class_prediction == 0) & (true_label == 1))
bayesian_conf_matrix[2,1] = sum((class_prediction == 1) & (true_label == 0))
rownames(bayesian_conf_matrix) = c("Predicted non-oil-palm", "Predicted oil-palm")
colnames(bayesian_conf_matrix) = c("Actual non-oil-palm", "Actual oil-palm")
print(bayesian_conf_matrix)
#overall accuracy of model
accu_bayes = sum(class_prediction == true_label) / nrow(data)
print("Overall accuracy:")
print(accu_bayes)
#EDIT: push values to Python and use numpy/matplotlib to display matrix
# approach posterior distributions of coefficients
# specify number of draws
num_draw = 2000
post_dist = sim(model, n.sims=num_draw)
coef_matrix = coef(post_dist)
# calculate posterior credible intervals for coefficients
posterior_ci_coef = matrix(NA, ncol(x)+1, 2)
for (i in 1:(ncol(x)+1)){
posterior_ci_coef[i, ] = unname(quantile(coef_matrix[, i], probs=c(lp, up), na.rm=TRUE))
}
# calculate posterior credible intervals for every data point
posterior_ci_data = matrix(NA, nrow(x), 2)
for(i in 1:nrow(x)){
temp = as.numeric()
for(j in 1:num_draw){
temp[j] = 1 / (1 + exp(-coef_matrix[j, 1] - sum(coef_matrix[j, 2:(length(index)+1)] * x[i, ])))
}
posterior_ci_data[i, ] = unname(quantile(temp, probs=c(lp, up), na.rm=TRUE))
}
# build posterior objects for next run
posterior_mean = model$coefficients
posterior_scale = apply(coef(post_dist), 2, sd)
#print the posteriors, store them for later
#EDIT: not sure if this works with combinations of other stack variables than current build (need to test in future)
posterior_mean
intercept = posterior_mean[["(Intercept)"]]
numVars = length(posterior_mean)
posteriors = posterior_mean[2:numVars] #EDIT: does this need to be transformed into c() variable?
###Output
_____no_output_____
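###Markdown
A possible follow-up to the "push values to Python" EDIT note in the cell above (a minimal sketch, not part of the original run; it assumes the %%R cells are executed through rpy2's IPython extension, so the R globals bayesian_conf_matrix and accu_bayes are reachable from Python):
###Code
import numpy as np
import matplotlib.pyplot as plt
import rpy2.robjects as robjects

# pull the confusion matrix (column-major in R) and the accuracy out of the R session
conf = np.asarray(robjects.r('as.vector(bayesian_conf_matrix)')).reshape(2, 2, order='F')
accuracy = float(robjects.r('accu_bayes')[0])

fig, ax = plt.subplots()
im = ax.imshow(conf, cmap='Blues')
ax.set_xticks([0, 1]); ax.set_xticklabels(['Actual non-oil-palm', 'Actual oil-palm'])
ax.set_yticks([0, 1]); ax.set_yticklabels(['Predicted non-oil-palm', 'Predicted oil-palm'])
for row in range(2):
    for col in range(2):
        ax.text(col, row, int(conf[row, col]), ha='center', va='center')
ax.set_title('Bayesian confusion matrix (accuracy %.2f)' % accuracy)
fig.colorbar(im)
plt.show()
###Output
_____no_output_____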
###Markdown
APPLYObjective: apply model fits to calculate OP3 for the area covered by the data stack. OP3 = oil palm probability presence, ranging between 0-1. The cell below applies the model to the stack and executes the predictive analysis, outputting the prediction surface in GeoTIFF and ENVI binary formats:
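Concretely, the per-pixel computation in the cell below is just the fitted logistic model restated: $z = \beta_0 + \sum_i \beta_i x_i$ over the selected bands (with the posterior means stored above as intercept and posteriors), and $OP3 = \frac{e^z}{1 + e^z}$.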
###Code
%%R
#generate dummy for Docker
#only use env variables if creation fails (will normally return NULL but still create object)
# Sys.setenv(PROJ_LIB="/usr/bin/proj/")
# Sys.getenv("PROJ_LIB")
#dummy corresponds to one band from the original stack, used to save your output prediction map
in_dummy = "temp_dummy"
gdal_translate(src_dataset=in_stack, dst_dataset=in_dummy, of="ENVI", b=1)
#create GIS objects
gdalObjStack = new("GDALDataset", in_stack)
gdalObjDummy = new("GDALDataset", in_dummy)
rasterWidth = ncol(gdalObjStack)
rasterRows = nrow(gdalObjStack)
#calculate prediction for each pixel and save
print("Checking a few values...")
for(i in 1:rasterRows){
oneRasterLine = getRasterData(gdalObjStack, offset=c(i-1,0), region.dim=c(1, rasterWidth))
hhBand = hhBandIndex #PREVIOUSLY: which(bandNames == "alos2_hh")
#NOTE: previous value was 0, bandNames/modelBands was removed for redundancy,
# the above code has not been tested yet
pred = rep(-9999, rasterWidth)
for(j in 1:rasterWidth){
#hh = (20*log10(oneRasterLine[j, 1, hhBand])) -83
hh = oneRasterLine[j, 1, hhBand] #gets hh value at each of the pixels
#open water mask
#if(is.na(hh) | hh < -20){
#pred[j] = 0
#}
#else{
#select bands
selectBands = oneRasterLine[j, 1, bands] #EDITED: changed modelBands to bands
z = (intercept + sum(posteriors * selectBands))
pred[j] = exp(z)/(1+ exp(z))
#z = (intercept + sum(posteriors * scaledBands))
#pred[j] = exp(z)/(1+ exp(z))
if ((i/100 == i%/%100) & j == 1000) print(z) #reality check on the model fits
#}
}
#write one row to file
putRasterData(gdalObjDummy, pred, offset=c(i-1, 0)) #place predicted line in raster into dummy
}
saveDataset(gdalObjDummy, out_pred)
#convert to GeoTiff
gdal_translate(src_dataset=out_pred, dst_dataset=out_tif, of="GTiff")
###Output
_____no_output_____ |
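###Markdown
A quick optional look at the OP3 surface from Python (a sketch only, not part of the original workflow; it assumes rasterio is installed and, as above, that rpy2's IPython extension is backing the %%R cells so the out_tif path can be pulled from the R session):
###Code
import numpy as np
import matplotlib.pyplot as plt
import rasterio
import rpy2.robjects as robjects

op3_path = str(robjects.r('out_tif')[0])   # GeoTIFF written by the APPLY cell

with rasterio.open(op3_path) as src:
    op3 = src.read(1).astype('float64')    # single-band OP3 surface
op3 = np.where(op3 == -9999, np.nan, op3)  # hide any remaining nodata fill

plt.figure(figsize=(8, 6))
plt.imshow(op3, vmin=0, vmax=1)
plt.colorbar(label='OP3 (probability of oil palm presence)')
plt.title('Predicted OP3 surface')
plt.show()
###Output
_____no_output_____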
A Primer on Genetic Algorithms.ipynb | ###Markdown
Primer On Genetic Algorithms
###Code
# We will begin this introduction to Genetic Algorithms by creating a target string,
# and a starting random string using Genetic Algorithms
import random
# No. of individuals in each generation
POPULATION_SIZE = 100
# Valid genes
GENES = '''abcdefghijklmnopqrstuvwxyzABCDEFGHIJKLMNOPQRSTUVWXYZ 123456789, .-;:_!"#%&/()=?@${[]}"'''
TARGET = "The rain in Spain is mainly in the plain."
class Individual(object):
'''
Class representing an individual in the population
'''
def __init__(self, chromosome):
self.chromosome = chromosome
self.fitness = self.cal_fitness()
@classmethod
def mutated_genes(self):
'''
Create random genes for mutation
'''
global GENES
gene = random.choice(GENES)
return gene
@classmethod
def create_gnome(self):
'''
Create a chromosome of strings.
'''
global TARGET
gnome_len = len(TARGET)
return [self.mutated_genes() for _ in range(gnome_len)]
def mate(self, par2):
'''
Perform mating which produces an offspring
'''
child_chromosome = []
for gp1, gp2 in zip(self.chromosome, par2.chromosome):
# random probability
prob = random.random()
# if the probability is less than 0.45, insert gene
# from parent 1
if prob < 0.45:
child_chromosome.append(gp1)
# if the probability is between 0.45 and 0.90, insert
# gene from parent 2
elif prob >=0.45 and prob < 0.90:
child_chromosome.append(gp2)
# otherwise insert a random gene(mutate), in order to
# maintain diversity
else:
child_chromosome.append(self.mutated_genes())
# create new individual (offspring) using generated chromosome
# for offspring
return Individual(child_chromosome)
def cal_fitness(self):
'''
Calculate fitness score, it is the number of characters in the string which
differs from the target string
'''
global TARGET
fitness = 0
for gs, gt in zip(self.chromosome, TARGET):
if gs != gt: fitness += 1
return fitness
# Program driver code
def main():
global POPULATION_SIZE
# current generation
generation = 1
found = False
population = []
# create initial population
for _ in range(POPULATION_SIZE):
gnome = Individual.create_gnome()
population.append(Individual(gnome))
while not found:
# sort the population in increasing order of fitness score
population = sorted(population, key = lambda x:x.fitness)
# if the individual having the lowest fitness i.e. 0
# then we know that we have reached the target and will
# break the loop
if population[0].fitness <= 0:
found = True
break
# Conversely, generate new offspring for a new generation
new_generation = []
# Perform "Elitism" operation, which means only 10% of the
# fittest population goes to the next generation
        s = int((10 * POPULATION_SIZE) / 100)
new_generation.extend(population[:s])
# From 50% of the fittest population, individuals will mate
# to produce their offspring
s = int((90* POPULATION_SIZE / 100))
for _ in range(s):
parent1 = random.choice(population[:50])
parent2 = random.choice(population[:50])
child = parent1.mate(parent2)
new_generation.append(child)
population = new_generation
print("Generation: {}\tString: {}\tFitness: {}".\
format(generation, "".join(population[0].chromosome),
population[0].fitness))
generation += 1
print("Generation: {}\tString: {}\tFitness: {}".\
format(generation, "".join(population[0].chromosome),
population[0].fitness))
if __name__ == '__main__':
main()
###Output
Generation: 1 String: t-eS;O:P"Q!m:p,-rdv:xMR]x(NUvmGL{Y92_pivT Fitness: 38
Generation: 2 String: Te Nhr?%DI,a8[$r9-i! 6ZUnFTB9g ztO?g[Kic" Fitness: 35
Generation: 3 String: / ev}Yi[ ZprLyE(asi1B,a7qld 7nsgw]qQhaiG( Fitness: 31
Generation: 4 String: U-ev}Yij ZGrLpE4r#ifBmaiOld 8nGLwJq;vaiv. Fitness: 27
Generation: 5 String: TCEvrYoj {-aSp"4k#i@ mainly 8nGPt! ;vPiB. Fitness: 23
Generation: 6 String: T-evr[ij EGrSp"rr-iT maiOld qnGLwc Xlaiv. Fitness: 21
Generation: 7 String: T}e tain6in S3Y#ipis mannKy l7 F53 ulain. Fitness: 16
Generation: 8 String: }!e9r=on Cn Apaincis Cain"x 8n$the p[aiv. Fitness: 15
Generation: 9 String: T}evrain i- KpLrn-ps mainlyjqn two plaiv. Fitness: 13
Generation: 10 String: T!evtain in SpM_ndisJm3inly 8n t5e plai]. Fitness: 11
Generation: 11 String: J8evrain Cn Spgi,cis mainly18n the plain8 Fitness: 10
Generation: 12 String: TyevrarX in Spain?is mainly In the ulain. Fitness: 7
Generation: 13 String: TyevrarX in Spain?is mainly In the ulain. Fitness: 7
Generation: 14 String: Tyevrain in SpEin?is mainly 8o the plain. Fitness: 6
Generation: 15 String: T_eTrain in Spaindis mainly In he plain. Fitness: 5
Generation: 16 String: T7e rain in Spain is mainly IN he plain. Fitness: 4
Generation: 17 String: T7e rain in Spain is mainly IN he plain. Fitness: 4
Generation: 18 String: T7e rain in SpainVis mainly id the plain. Fitness: 3
Generation: 19 String: Tye rain in Spain is mainly In the plain. Fitness: 2
Generation: 20 String: Tye rain in Spain is mainly In the plain. Fitness: 2
Generation: 21 String: Tye rain in Spain is mainly In the plain. Fitness: 2
Generation: 22 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 23 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 24 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 25 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 26 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 27 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 28 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 29 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 30 String: T_e rain in Spain is mainly in the plain. Fitness: 1
Generation: 31 String: The rain in Spain is mainly in the plain. Fitness: 0
|
_drafts/modeling-the-nhl-better/.ipynb_checkpoints/Hockey Model Ideal Data-checkpoint.ipynb | ###Markdown
Generate Ideal Data
###Code
n_days = 200
n_teams = 32
gpd = 8
true_Δi_σ = 0.0
true_Δh_σ = 0.0
true_Δod_σ = 0.002
true_i_0 = 1.12
true_h_0 = 0.25
true_o_0 = np.random.normal(0, 0.15, n_teams)
true_o_0 = true_o_0 - np.mean(true_o_0)
true_d_0 = np.random.normal(0, 0.15, n_teams)
true_d_0 = true_d_0 - np.mean(true_d_0)
true_i = np.zeros(n_days)
true_h = np.zeros(n_days)
true_o = np.zeros((n_days, n_teams))
true_d = np.zeros((n_days, n_teams))
true_i[0] = true_i_0
true_h[0] = true_h_0
true_o[0,:] = true_o_0
true_d[0,:] = true_d_0
games_list = []
matches = np.arange(12)
np.random.shuffle(matches)
for t in range(1, n_days):
true_i[t] = true_i[t-1] + np.random.normal(0.0, true_Δi_σ)
true_h[t] = true_h[t-1] + np.random.normal(0.0, true_Δh_σ)
true_o[t,:] = true_o[t-1,:] + np.random.normal(0.0, true_Δod_σ, n_teams)
true_o[t,:] = true_o[t,:] - np.mean(true_o[t,:])
true_d[t,:] = true_d[t-1,:] + np.random.normal(0.0, true_Δod_σ, n_teams)
true_d[t,:] = true_d[t,:] - np.mean(true_d[t,:])
if matches.shape[0]//2 < gpd:
new_matches = np.arange(n_teams)
np.random.shuffle(new_matches)
matches = np.concatenate([matches, new_matches])
for _ in range(gpd):
idₕ = matches[0]
idₐ = matches[1]
logλₕ = true_i[t] + true_h[t] + true_o[t,idₕ] - true_d[t,idₐ]
logλₐ = true_i[t] + true_o[t,idₐ] - true_d[t,idₕ]
sₕ = np.random.poisson(np.exp(logλₕ))
sₐ = np.random.poisson(np.exp(logλₐ))
if sₕ > sₐ:
hw = 1
elif sₕ == sₐ:
p = np.exp(logλₕ)/(np.exp(logλₕ) + np.exp(logλₐ))
hw = np.random.binomial(1, p)
else:
hw = 0
games_list.append([t, idₕ, sₕ, idₐ, sₐ, hw])
matches = matches[2:]
games = pd.DataFrame(games_list, columns=['day', 'idₕ', 'sₕ', 'idₐ', 'sₐ', 'hw'])
games.head()
games['idₕ'].value_counts() + games['idₐ'].value_counts()
###Output
_____no_output_____
###Markdown
Model 1: Daily Updates, No Deltas
###Code
def get_m1_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
return posteriors
def m1_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h = pm.Normal('h', mu=priors['h'][0], sigma=priors['h'][1])
i = pm.Normal('i', mu=priors['i'][0], sigma=priors['i'][1])
# Team-specific poisson model parameters
o_star = pm.Normal('o_star', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
d_star = pm.Normal('d_star', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d = pm.Deterministic('d', d_star - tt.mean(d_star))
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
        trace = pm.sample(1000, tune=1000, cores=3, progressbar=False)
posteriors = get_m1_posteriors(trace)
return posteriors
ws = 7
iv1_rows = []
priors = {
'h': [0.25, 0.1],
'i': [1.0, 0.1],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)]
}
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m1_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) +list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1])
iv1_rows.append(iv_row)
###Output
_____no_output_____
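###Markdown
A note on the OT/SO win probability used in the model above: the comment P(T < Y) treats the times to the next home and away goal as independent exponentials with rates $\lambda_h$ and $\lambda_a$, so $P(T_h < T_a) = \int_0^\infty \lambda_h e^{-\lambda_h t} e^{-\lambda_a t}\,dt = \frac{\lambda_h}{\lambda_h + \lambda_a}$, which is exactly the pₕ fed to the Bernoulli likelihood for games decided in overtime or a shootout.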
###Markdown
Model 2: Daily Updates with Deltas
###Code
def get_m2_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Deltas
Δ_h_μ, Δ_h_σ = norm.fit(trace['Δ_h'])
posteriors['Δ_h'] = [Δ_h_μ, Δ_h_σ]
Δ_i_μ, Δ_i_σ = norm.fit(trace['Δ_i'])
posteriors['Δ_i'] = [Δ_i_μ, Δ_i_σ]
Δ_od_μ_μ, Δ_od_μ_σ = norm.fit(trace['Δ_od_μ'])
posteriors['Δ_od_μ'] = [Δ_od_μ_μ, Δ_od_μ_σ]
Δ_od_σ_α, _, Δ_od_σ_β = invgamma.fit(trace['Δ_od_σ'])
posteriors['Δ_od_σ'] = [Δ_od_σ_α, Δ_od_σ_β]
return posteriors
def m2_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h_init = pm.Normal('h_init', mu=priors['h'][0], sigma=priors['h'][1])
Δ_h = pm.Normal('Δ_h', mu=priors['Δ_h'][0], sigma=priors['Δ_h'][1])
h = pm.Deterministic('h', h_init + Δ_h)
i_init = pm.Normal('i_init', mu=priors['i'][0], sigma=priors['i'][1])
Δ_i = pm.Normal('Δ_i', mu=priors['Δ_i'][0], sigma=priors['Δ_i'][1])
i = pm.Deterministic('i', i_init + Δ_i)
Δ_od_μ = pm.Normal('Δ_od_μ', mu=priors['Δ_od_μ'][0], sigma=priors['Δ_od_μ'][1])
        Δ_od_σ = pm.InverseGamma('Δ_od_σ', alpha=priors['Δ_od_σ'][0], beta=priors['Δ_od_σ'][1])
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=Δ_od_μ, sigma=Δ_od_σ, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=Δ_od_μ, sigma=Δ_od_σ, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(1000, tune=1000, cores=3, progressbar=False)
posteriors = get_m2_posteriors(trace)
return posteriors
ws = 7
iv2_rows = []
priors = {
'h': [0.25, 0.1],
'i': [1.0, 0.1],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'Δ_h': [0.0, 0.001],
'Δ_i': [0.0, .001],
'Δ_od_μ': [0.0, 0.0005],
'Δ_od_σ': [5.0, 0.01],
}
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m2_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
            list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['Δ_h'] + posteriors['Δ_i'] +\
            posteriors['Δ_od_μ'] + posteriors['Δ_od_σ']
iv2_rows.append(iv_row)
###Output
_____no_output_____
###Markdown
Model 3: Daily Updates with Zero Centered Deltas
###Code
def get_m3_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Deltas
    Δ_h_μ, Δ_h_σ = norm.fit(trace['Δ_h'], floc=0.0)  # fix the location at zero so σ is measured about 0
posteriors['Δ_h'] = [0.0, Δ_h_σ]
    Δ_i_μ, Δ_i_σ = norm.fit(trace['Δ_i'], floc=0.0)  # fix the location at zero so σ is measured about 0
posteriors['Δ_i'] = [0.0, Δ_i_σ]
Δ_od_σ_α, _, Δ_od_σ_β = invgamma.fit(trace['Δ_od_σ'])
posteriors['Δ_od_σ'] = [Δ_od_σ_α, Δ_od_σ_β]
return posteriors
def m3_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h_init = pm.Normal('h_init', mu=priors['h'][0], sigma=priors['h'][1])
Δ_h = pm.Normal('Δ_h', mu=priors['Δ_h'][0], sigma=priors['Δ_h'][1])
h = pm.Deterministic('h', h_init + Δ_h)
i_init = pm.Normal('i_init', mu=priors['i'][0], sigma=priors['i'][1])
Δ_i = pm.Normal('Δ_i', mu=priors['Δ_i'][0], sigma=priors['Δ_i'][1])
i = pm.Deterministic('i', i_init + Δ_i)
Δ_od_σ = pm.InverseGamma('Δ_od_σ', alpha=priors['Δ_od_σ'][0], beta=priors['Δ_od_σ'][1])
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=priors['o'][1], shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=priors['d'][1], shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
#pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
#hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(10000, tune=10000, cores=3)#, progressbar=False)
posteriors = get_m3_posteriors(trace)
return posteriors
ws = 28
iv3_rows = []
# Initialize model with model1 parameters on first 75 days of data
init_priors = {
'h': [0.25, 0.01],
'i': [1.12, 0.01],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)]
}
init_data = games[(games['day'] <= 75)]
priors = m1_iteration(init_data, init_priors)
priors['Δ_h'] = [0.0, 0.005]
priors['Δ_i'] = [0.0, 0.005]
priors['Δ_od_σ'] = [5.0, 0.01]
for t in tqdm(range(ws, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m3_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['Δ_h'] + posteriors['Δ_i'] +\
posteriors['Δ_od_σ']
iv3_rows.append(iv_row)
###Output
_____no_output_____
###Markdown
Model 4: Do not vary h and i with each step
###Code
def get_m4_posteriors(trace):
posteriors = {}
h_μ, h_σ = norm.fit(trace['h'])
posteriors['h'] = [h_μ, h_σ]
i_μ, i_σ = norm.fit(trace['i'])
posteriors['i'] = [i_μ, i_σ]
o_μ = []
o_σ = []
d_μ = []
d_σ = []
for i in range(n_teams):
oᵢ_μ, oᵢ_σ = norm.fit(trace['o'][:,i])
o_μ.append(oᵢ_μ)
o_σ.append(oᵢ_σ)
dᵢ_μ, dᵢ_σ = norm.fit(trace['d'][:,i])
d_μ.append(dᵢ_μ)
d_σ.append(dᵢ_σ)
posteriors['o'] = [np.array(o_μ), np.array(o_σ)]
posteriors['d'] = [np.array(d_μ), np.array(d_σ)]
# Unified o and d variances
o_σ_α, _, o_σ_β = invgamma.fit(trace['o_σ'])
posteriors['o_σ'] = [o_σ_α, o_σ_β]
d_σ_α, _, d_σ_β = invgamma.fit(trace['d_σ'])
posteriors['d_σ'] = [d_σ_α, d_σ_β]
return posteriors
def m4_iteration(obs_data, priors):
idₕ = obs_data['idₕ'].to_numpy()
sₕ_obs = obs_data['sₕ'].to_numpy()
idₐ = obs_data['idₐ'].to_numpy()
sₐ_obs = obs_data['sₐ'].to_numpy()
hw_obs = obs_data['hw'].to_numpy()
with pm.Model() as model:
# Global model parameters
h = pm.Normal('h', mu=priors['h'][0], sigma=priors['h'][1])
i = pm.Normal('i', mu=priors['i'][0], sigma=priors['i'][1])
o_σ = pm.InverseGamma('o_σ', alpha=priors['o_σ'][0], beta=priors['o_σ'][1])
d_σ = pm.InverseGamma('d_σ', alpha=priors['d_σ'][0], beta=priors['d_σ'][1])
        Δ_od_σ = pm.Normal('Δ_od_σ', mu=0.0, sigma=0.0025)
# Team-specific poisson model parameters
o_star_init = pm.Normal('o_star_init', mu=priors['o'][0], sigma=o_σ, shape=n_teams)
Δ_o = pm.Normal('Δ_o', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
o_star = pm.Deterministic('o_star', o_star_init + Δ_o)
o = pm.Deterministic('o', o_star - tt.mean(o_star))
d_star_init = pm.Normal('d_star_init', mu=priors['d'][0], sigma=d_σ, shape=n_teams)
Δ_d = pm.Normal('Δ_d', mu=0.0, sigma=Δ_od_σ, shape=n_teams)
d_star = pm.Deterministic('d_star', d_star_init + Δ_d)
d = pm.Deterministic('d', d_star - tt.mean(d_star))
# Regulation game time goal Poisson rates
λₕ = tt.exp(i + h + o[idₕ] - d[idₐ])
λₐ = tt.exp(i + o[idₐ] - d[idₕ])
# OT/SO home win bernoulli model parameter
# P(T < Y), where T ~ a, Y ~ b: a/(a + b)
pₕ = λₕ/(λₕ + λₐ)
# Likelihood of observed data
sₕ = pm.Poisson('sₕ', mu=λₕ, observed=sₕ_obs)
sₐ = pm.Poisson('sₐ', mu=λₐ, observed=sₐ_obs)
hw = pm.Bernoulli('hw', p=pₕ, observed=hw_obs)
trace = pm.sample(10000, tune=10000, target_accept=0.90, cores=3)#, progressbar=False)
posteriors = get_m4_posteriors(trace)
return posteriors
start_day = 150
ws = 14
iv4_rows = []
# Initialize model with model1 parameters on first 75 days of data
init_priors = {
'h': [0.25, 0.01],
'i': [1.12, 0.01],
'o': [np.array([0] * n_teams), np.array([0.15] * n_teams)],
'd': [np.array([0] * n_teams), np.array([0.15] * n_teams)]
}
init_data = games[(games['day'] <= start_day)]
priors = m1_iteration(init_data, init_priors)
priors['o_σ'] = [5.0, 0.4]
priors['d_σ'] = [5.0, 0.4]
priors['Δ_od_σ'] = [5.0, 0.1]
print(priors)
for t in tqdm(range(start_day, n_days+1)):
obs_data = games[((games['day'] <= t) & (games['day'] > (t - ws)))]
priors = posteriors = m4_iteration(obs_data, priors);
iv_row = posteriors['h'] + posteriors['i'] + list(posteriors['o'][0]) + list(posteriors['o'][1]) + \
list(posteriors['d'][0]) + list(posteriors['d'][1]) + posteriors['o_σ'] +\
posteriors['d_σ'] + posteriors['Δ_od_σ']
iv4_rows.append(iv_row)
true_o
np.array(iv4_rows)
np.array(iv4_rows)[:,4:36]
col_names = ['h_μ', 'h_σ', 'i_μ', 'i_σ'] + ['o{}_μ'.format(i) for i in range(n_teams)] + \
['o{}_σ'.format(i) for i in range(n_teams)] + ['d{}_μ'.format(i) for i in range(n_teams)] + \
['d{}_σ'.format(i) for i in range(n_teams)] + \
['o_σ_α', 'o_σ_β', 'd_σ_α', 'd_σ_β', 'Δ_od_σ_α', 'Δ_od_σ_β']
iv4_df = pd.DataFrame(iv4_rows, columns=col_names)
iv4_df['day'] = list(range(start_day, n_days+1))
iv4_df.head()
iv4_df.to_csv('iv4_df.csv')
lv_df = pd.DataFrame(data={'h':true_h, 'i':true_i})
lv_df = pd.concat([lv_df, pd.DataFrame(data=true_o, columns=['o{}'.format(i) for i in range(n_teams)])], axis=1)
lv_df = pd.concat([lv_df, pd.DataFrame(data=true_d, columns=['d{}'.format(i) for i in range(n_teams)])], axis=1)
lv_df['day'] = list(range(1,n_days+1))
lv_df.iloc[150:155,:].head()
lv_df.to_csv('lv_df.csv')
###Output
_____no_output_____ |
CourseWork/PowerProduction.ipynb | ###Markdown
Having a mess around with the Power Production dataset Around 18 minutes into the "regression using scikit-learn" video there are some ideas for PLOTTING linear regression, which might be a nice plot to have in the project.
###Code
import pandas as pd
import sklearn as sklearn
import seaborn as sns
import numpy as numpy
import sklearn.linear_model as lin
#stackoverflow chat suggests that you can't import an external dataset into seaborn, use pandas instead??!!
#It's as though if it's not here >>>https://github.com/mwaskom/seaborn<<< seaborn doesn't want to know.
powerproduction=pd.read_csv("powerproduction.csv")
#powerproduction.describe()
print(powerproduction)
###Output
speed power
0 0.000 0.0
1 0.125 0.0
2 0.150 0.0
3 0.225 0.0
4 0.275 0.0
.. ... ...
495 24.775 0.0
496 24.850 0.0
497 24.875 0.0
498 24.950 0.0
499 25.000 0.0
[500 rows x 2 columns]
###Markdown
Analysis
###Code
sns.pairplot(powerproduction)
###Output
_____no_output_____
###Markdown
Functions to draw linear regression models
###Code
sns.regplot(x="speed", y="power", data=powerproduction);
#a blue line appears as a suggestion
sns.lmplot(x="speed", y="power", data=powerproduction);
###Output
C:\Users\Acer\anaconda3\lib\site-packages\numpy\linalg\linalg.py:1965: RuntimeWarning: invalid value encountered in greater
large = s > cutoff
###Markdown
Source: https://seaborn.pydata.org/tutorial/regression.html
You should note that the resulting plots are identical, except that the figure shapes are different. regplot() accepts the x and y variables in a variety of formats including simple numpy arrays, pandas Series objects, or as references to variables in a pandas DataFrame object passed to data. In contrast, lmplot() has data as a required parameter and the x and y variables must be specified as strings. This data format is called “long-form” or “tidy” data. Other than this input flexibility, regplot() possesses a subset of lmplot()’s features, so we will demonstrate them using the latter.
Observations on these graphs
The curve shape corresponds with the graphs at https://energyeducation.ca/encyclopedia/Wind_power. It is logical: it takes a reasonable amount of wind to get the turbine going, and then when it's moving and the wind drops...it will still rotate and come to a stop eventually. The cut-off point seems to be around a speed of 24.5 (km/h?). Perhaps it is dangerous for the turbine to move at a high speed.
>>Turbines are designed to operate within a specific range of wind speeds. The limits of the range are known as the cut-in speed and cut-out speed.[5] The cut-in speed is the point at which the wind turbine is able to generate power. The cut-out speed is the point at which the turbine must be shut down to avoid damage to the equipment. The cut-in and cut-out speeds are related to the turbine design and size and are decided on prior to construction.[6]
Train
Attempt to automate the prediction, find relationships between the paired data.
import sklearn.linear_model as lin
manipulate the two lists of numbers
x = flipper["body_mass_g"].to_numpy()
y = flipper["flipper_length_mm"].to_numpy()
even though you've only one input value, you must reshape as if there are more. It is a scikit-learn thing....
x = x.reshape(-1, 1)
use scikit-learn to give numbers pertaining to the suggested blue line above
model = lin.LinearRegression()
model.fit(x, y)
tells scikit-learn where the values are
r = model.score(x, y)
find out the r value, how well the line fits the data set
p = [model.intercept_, model.coef_[0]]
provide the intercept
###Code
import sklearn.linear_model as lin
powerproduction=pd.read_csv("powerproduction.csv")
#manipulate the two lists of numbers
#x=powerproduction["speed"].to_numpy()
speed=powerproduction["speed"].to_numpy()
y=powerproduction["power"].to_numpy()
#y = data["power"].to_numpy()
x = speed.reshape(-1, 1)
model = lin.LinearRegression()
model.fit(x, y)#tells skcitlearn where the values are
r = model.score(x, y)#find out the r value, how well the lines fits the data set
p = [model.intercept_, model.coef_[0]]#provide the intercept
r
#.72 is not too bad of a fit
#Note: model.score returns R squared (the coefficient of determination), i.e. the proportion of the
#variance in power explained by the fit; for simple linear regression it is the square of the correlation coefficient r.
p #p holds [intercept, slope]: roughly -13 and 4.9 units of power per unit of wind speed....a negative power prediction at low speeds doesn't sound right
### Predict
def f(x, p):
# x is the input, p is the parameter/s e.g. a list of values somehow trained on a dataset already
# p can be used to help us make predictions in the case of x
return p[0] + x * p[1]
#this function is designed to provide linear results
###Output
_____no_output_____
###Markdown
per Ian.....the calculations are straightforward,our ideas behind them are important.the functions might be deterministicmaybe the input becomes part of the function (not external data anymore)
###Code
f(7, p)
#we can use the p values above where 7 windspeed, how much power is generated?
#we trained p on the dataset, it is a model
#you could define another function using p also , see below.
def predict(x):
return f(x, p)
predict(2.9)
###Output
_____no_output_____
###Markdown
The web service needs to get input into the function.
The function needs to reject amounts higher than 24.399.
The function needs to reject amounts lower than 0.325.
When I leave the dataset as it is, the figures are skewed - it is giving negative value.
Remove the problematic values to train it properly.
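Before cleaning the data, here is a rough sketch of the input checking the web service will need (a hypothetical helper, not part of the coursework code: it uses the cut-in/cut-out limits quoted above and whichever fitted p is current):
###Code
def predict_power(speed):
    """Predicted power for a wind speed, or None when the speed falls outside
    the range the model was trained on (below cut-in or above cut-out)."""
    if speed < 0.325 or speed > 24.399:
        return None
    return p[0] + speed * p[1]

predict_power(10.0), predict_power(30.0)
###Output
_____no_output_____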
###Code
powerproduction=pd.read_csv("powerproduction.csv")
#Code adapted from https://stackoverflow.com/questions/22649693/drop-rows-with-all-zeros-in-pandas-data-frame
df=powerproduction
#print (df.sort_values('power', ascending=True))
new_df = df[df.loc[:]!=0].dropna() #drop rows containing zeros (the 0-power readings)
#df.drop(0,111,110,105,89) #this does not work - only deletes index 0
#print(df.drop(['speed'], axis=1))
new_df
X = new_df.transpose()
X
#new_df.to_csv(index=False)
#new_df.to_csv(index=True)
newdf_csv_data = new_df.to_csv('new_df.csv', index = False)
#print('\nCSV String:\n', gfg_csv_data)
print('\nCSV String:\n', newdf_csv_data)
new_df=pd.read_csv("new_df.csv")
new_df.describe()
new_df.shape
#debugging. why does speed have 500 here?
print (new_df.sort_values('speed', ascending=False))
###Output
speed power
450 24.399 95.117
449 24.374 98.223
448 24.349 93.078
447 24.299 93.694
446 24.249 103.700
.. ... ...
4 0.526 5.553
3 0.501 1.048
2 0.450 3.826
1 0.400 5.186
0 0.325 4.331
[451 rows x 2 columns]
###Markdown
Analysis 2
###Code
sns.lmplot(x="speed", y="power", data=new_df);
new_df=pd.read_csv("new_df.csv")
a=new_df["speed"].to_numpy()
b=new_df["power"].to_numpy()
a = a.reshape(-1, 1) #reshape the cleaned speed values, not the original 500-row speed array
model = lin.LinearRegression()
model.fit(a,b)#tells skcitlearn where the values are
#r = model.score(x, y)#find out the r value, how well the lines fits the data set
#p = [model.intercept_, model.coef_[0]]#provide the intercept
# Let's rename already created dataFrame.
# Check the current column names
# using "columns" attribute.
# df.columns
# Change the column names
new_df.columns =['Col_1', 'Col_2']
# printing the data frame
#new_df
new_df=pd.read_csv("new_df.csv")
a=new_df["Col_1"].to_numpy()
b=new_df["Col_2"].to_numpy()
#a = speed.reshape(-1, 1)
#model = lin.LinearRegression()
###Output
_____no_output_____ |
feature_selection/feature_selection.ipynb | ###Markdown
Feature Selection
###Code
import mlrun
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
# nuclio: start-code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import json
# Feature selection strategies
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import SelectFromModel
# Model based feature selection
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
# Scale feature scores
from sklearn.preprocessing import MinMaxScaler
# SKLearn estimators list
from sklearn.utils import all_estimators
# MLRun utils
from mlrun.mlutils.plots import gcf_clear
from mlrun.utils.helpers import create_class
from mlrun.artifacts import PlotArtifact
# Feature Selection
from feature_selection import feature_selection, show_values_on_bars, plot_stat
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Test
###Code
from mlrun import code_to_function, mount_v3io, mlconf, NewTask, run_local
mlconf.artifact_path = os.path.abspath('./artifacts')
mlconf.db_path = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
Local Test
###Code
task = NewTask(params={'k': 2,
'min_votes': 0.3,
'label_column': 'is_error'},
inputs={'df_artifact': os.path.abspath('data/metrics.pq')})
from feature_selection import feature_selection, show_values_on_bars, plot_stat
runl = run_local(task=task,
name='feature_selection',
handler=feature_selection,
artifact_path=os.path.join(os.path.abspath('./'), 'artifacts'))
###Output
> 2021-08-11 10:12:05,721 [info] starting run feature_selection uid=8765f9e7fde94efeb662fbe2c37a0e1a DB=http://mlrun-api:8080
###Markdown
Job Test
###Code
fn = code_to_function(name='feature_selection',
handler='feature_selection')
fn.spec.default_handler = 'feature_selection'
fn.spec.description = "Select features through multiple Statistical and Model filters"
fn.metadata.categories = ['data-prep', 'ml']
fn.metadata.labels = {"author": "alexz"}
fn.export('function.yaml')
fn.apply(mount_v3io())
fn_run = fn.run(task)
mlrun.get_dataitem(fn_run.spec.inputs['df_artifact']).as_df()
mlrun.get_dataitem(fn_run.outputs['feature_scores']).as_df()
mlrun.get_dataitem(fn_run.outputs['selected_features']).as_df()
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
import nuclio
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
# nuclio: start-code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import json
# Feature selection strategies
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import SelectFromModel
# Model based feature selection
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
# Scale feature scores
from sklearn.preprocessing import MinMaxScaler
# SKLearn estimators list
from sklearn.utils import all_estimators
# MLRun utils
from mlrun.mlutils import create_class, gcf_clear
from mlrun.artifacts import PlotArtifact
def show_values_on_bars(axs, h_v="v", space=0.4):
def _show_on_single_plot(ax):
if h_v == "v":
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
value = int(p.get_height())
ax.text(_x, _y, value, ha="center")
elif h_v == "h":
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = int(p.get_width())
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
def plot_stat(context,
stat_name,
stat_df):
gcf_clear(plt)
# Add chart
ax = plt.axes()
stat_chart = sns.barplot(x=stat_name,
y='index',
data=stat_df.sort_values(stat_name, ascending=False).reset_index(),
ax=ax)
plt.tight_layout()
for p in stat_chart.patches:
width = p.get_width()
plt.text(5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1.2f}'.format(width),
ha='center', va='center')
context.log_artifact(PlotArtifact(f'{stat_name}', body=plt.gcf()),
local_path=os.path.join('plots', 'feature_selection', f'{stat_name}.html'))
gcf_clear(plt)
def feature_selection(context,
df_artifact,
k=2,
min_votes=0.5,
label_column: str = 'Y',
stat_filters = ['f_classif', 'mutual_info_classif', 'chi2', 'f_regression'],
model_filters = {'LinearSVC': 'LinearSVC',
'LogisticRegression': 'LogisticRegression',
'ExtraTreesClassifier': 'ExtraTreesClassifier'},
max_scaled_scores = True):
"""Applies selected feature selection statistical functions
or models on our 'df_artifact'.
Each statistical function or model will vote for it's best K selected features.
If a feature has >= 'min_votes' votes, it will be selected.
:param context: the function context
:param k: number of top features to select from each statistical
function or model
:param min_votes: minimal number of votes (from a model or by statistical
function) needed for a feature to be selected.
Can be specified by percentage of votes or absolute
number of votes
:param label_column: ground-truth (y) labels
:param stat_filters: statistical functions to apply to the features
(from sklearn.feature_selection)
:param model_filters: models to use for feature evaluation, can be specified by
model name (ex. LinearSVC), formalized json (contains 'CLASS',
'FIT', 'META') or a path to such json file.
:param max_scaled_scores: produce feature scores table scaled with max_scaler
"""
# Read input DF
df_path = str(df_artifact)
context.logger.info(f'input dataset {df_path}')
if df_path.endswith('csv'):
df = pd.read_csv(df_path)
elif df_path.endswith('parquet') or df_path.endswith('pq'):
df = pd.read_parquet(df_path)
# Set feature vector and labels
y = df.pop(label_column)
X = df
# Create selected statistical estimators
stat_functions_list = {stat_name:SelectKBest(create_class(f'sklearn.feature_selection.{stat_name}'), k)
for stat_name in stat_filters}
requires_abs = ['chi2']
# Run statistic filters
selected_features_agg = {}
stats_df = pd.DataFrame(index=X.columns)
for stat_name, stat_func in stat_functions_list.items():
try:
# Compute statistics
            params = (abs(X), y) if stat_name in requires_abs else (X, y)  # chi2 requires non-negative features
stat = stat_func.fit(*params)
# Collect stat function results
stat_df = pd.DataFrame(index=X.columns,
columns=[stat_name],
data=stat.scores_)
plot_stat(context, stat_name, stat_df)
stats_df = stats_df.join(stat_df)
# Select K Best features
selected_features = X.columns[stat_func.get_support()]
selected_features_agg[stat_name] = selected_features
except Exception as e:
context.logger.info(f"Couldn't calculate {stat_name} because of: {e}")
# Create models from class name / json file / json params
all_sklearn_estimators = dict(all_estimators()) if len(model_filters) > 0 else {}
selected_models = {}
for model_name, model in model_filters.items():
if '.json' in model:
current_model = json.load(open(model, 'r'))
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
elif model in all_sklearn_estimators:
selected_models[model_name] = all_sklearn_estimators[model_name]()
else:
try:
                current_model = json.loads(model) if isinstance(model, str) else model
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
except:
context.logger.info(f'unable to load {model}')
# Run model filters
models_df = pd.DataFrame(index=X.columns)
for model_name, model in selected_models.items():
# Train model and get feature importance
select_from_model = SelectFromModel(model).fit(X,y)
feature_idx = select_from_model.get_support()
feature_names = X.columns[feature_idx]
selected_features_agg[model_name] = feature_names.tolist()
# Collect model feature importance
if hasattr(select_from_model.estimator_, 'coef_'):
stat_df = select_from_model.estimator_.coef_
elif hasattr(select_from_model.estimator_, 'feature_importances_'):
            stat_df = select_from_model.estimator_.feature_importances_.reshape(1, -1)  # keep 2-D like coef_ so stat_df[0] below selects the full vector
stat_df = pd.DataFrame(index=X.columns,
columns=[model_name],
data=stat_df[0])
models_df = models_df.join(stat_df)
plot_stat(context, model_name, stat_df)
# Create feature_scores DF with stat & model filters scores
result_matrix_df = pd.concat([stats_df, models_df], axis=1, sort=False)
context.log_dataset(key='feature_scores',
df=result_matrix_df,
local_path='feature_scores.parquet',
format='parquet')
if max_scaled_scores:
normalized_df = result_matrix_df.replace([np.inf, -np.inf], np.nan).values
min_max_scaler = MinMaxScaler()
normalized_df = min_max_scaler.fit_transform(normalized_df)
normalized_df = pd.DataFrame(data=normalized_df,
columns=result_matrix_df.columns,
index=result_matrix_df.index)
context.log_dataset(key='max_scaled_scores_feature_scores',
df=normalized_df,
local_path='max_scaled_scores_feature_scores.parquet',
format='parquet')
# Create feature count DataFrame
for test_name in selected_features_agg:
result_matrix_df[test_name] = [1 if x in selected_features_agg[test_name] else 0 for x in X.columns]
result_matrix_df.loc[:,'num_votes'] = result_matrix_df.sum(axis=1)
context.log_dataset(key='selected_features_count',
df=result_matrix_df,
local_path='selected_features_count.parquet',
format='parquet')
# How many votes are needed for a feature to be selected?
if isinstance(min_votes, int):
votes_needed = min_votes
else:
num_filters = len(stat_filters) + len(model_filters)
votes_needed = int(np.floor(num_filters * max(min(min_votes, 1), 0)))
context.logger.info(f'votes needed to be selected: {votes_needed}')
# Create final feature dataframe
selected_features = result_matrix_df[result_matrix_df.num_votes>=votes_needed].index.tolist()
good_feature_df = df.loc[:, selected_features]
final_df = pd.concat([good_feature_df,y], axis=1)
context.log_dataset(key='selected_features',
df=final_df,
local_path='selected_features.parquet',
format='parquet')
# nuclio: end-code
###Output
_____no_output_____
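###Markdown
A quick check of how the voting threshold works (illustrative only - it simply mirrors the arithmetic inside feature_selection): with the default 4 statistical filters and 3 model filters there are 7 voters, and a fractional min_votes is converted with floor.
###Code
import numpy as np

n_filters = 4 + 3  # default stat_filters + model_filters
for mv in (0.3, 0.5, 2):  # fractional share of voters, or an absolute count
    needed = mv if isinstance(mv, int) else int(np.floor(n_filters * max(min(mv, 1), 0)))
    print(mv, '->', needed, 'votes needed for a feature to be kept')
###Output
_____no_output_____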
###Markdown
Test
###Code
from mlrun import code_to_function, mount_v3io, mlconf, NewTask, run_local
mlconf.artifact_path = os.path.abspath('./artifacts')
mlconf.db_path = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
Local Test
###Code
task = NewTask(params={'k': 2,
'min_votes': 0.3,
'label_column': 'is_error'},
inputs={'df_artifact': '/User/demo-network-operations/data/metrics.parquet'})
runl = run_local(task=task,
name='feature_selection',
handler=feature_selection,
artifact_path=os.path.join(os.path.abspath('./'), 'artifacts'))
###Output
[mlrun] 2020-04-12 12:28:08,160 starting run feature_selection uid=558aa6cf639d4e9eab6c8d6020f45962 -> http://10.194.95.255:8080
###Markdown
Job Test
###Code
fn = code_to_function(name='feature_selection',
handler='feature_selection')
fn.spec.default_handler = 'feature_selection'
fn.spec.description = "Select features through multiple Statistical and Model filters"
fn.metadata.categories = ['data-prep', 'ml']
fn.metadata.labels = {"author": "orz"}
fn.export('function.yaml')
fn.apply(mount_v3io())
fn.run(task)
pd.read_parquet(runl.spec.inputs['df_artifact'])
pd.read_parquet(runl.outputs['feature_scores'])
pd.read_parquet(runl.outputs['max_scaled_scores_feature_scores'])
pd.read_parquet(runl.outputs['selected_features_count'])
pd.read_parquet(runl.outputs['selected_features'])
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
import nuclio
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
# nuclio: start-code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import json
# Feature selection strategies
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import SelectFromModel
# Model based feature selection
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
# Scale feature scores
from sklearn.preprocessing import MinMaxScaler
# SKLearn estimators list
from sklearn.utils import all_estimators
# MLRun utils
from mlrun.mlutils.plots import gcf_clear
from mlrun.utils.helpers import create_class
from mlrun.artifacts import PlotArtifact
def show_values_on_bars(axs, h_v="v", space=0.4):
def _show_on_single_plot(ax):
if h_v == "v":
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
value = int(p.get_height())
ax.text(_x, _y, value, ha="center")
elif h_v == "h":
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = int(p.get_width())
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
def plot_stat(context,
stat_name,
stat_df):
gcf_clear(plt)
# Add chart
ax = plt.axes()
stat_chart = sns.barplot(x=stat_name,
y='index',
data=stat_df.sort_values(stat_name, ascending=False).reset_index(),
ax=ax)
plt.tight_layout()
for p in stat_chart.patches:
width = p.get_width()
plt.text(5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1.2f}'.format(width),
ha='center', va='center')
context.log_artifact(PlotArtifact(f'{stat_name}', body=plt.gcf()),
local_path=os.path.join('plots', 'feature_selection', f'{stat_name}.html'))
gcf_clear(plt)
def feature_selection(context,
df_artifact,
k=2,
min_votes=0.5,
label_column: str = 'Y',
stat_filters = ['f_classif', 'mutual_info_classif', 'chi2', 'f_regression'],
model_filters = {'LinearSVC': 'LinearSVC',
'LogisticRegression': 'LogisticRegression',
'ExtraTreesClassifier': 'ExtraTreesClassifier'},
max_scaled_scores = True):
"""Applies selected feature selection statistical functions
or models on our 'df_artifact'.
Each statistical function or model will vote for it's best K selected features.
If a feature has >= 'min_votes' votes, it will be selected.
:param context: the function context
:param k: number of top features to select from each statistical
function or model
:param min_votes: minimal number of votes (from a model or by statistical
function) needed for a feature to be selected.
Can be specified by percentage of votes or absolute
number of votes
:param label_column: ground-truth (y) labels
:param stat_filters: statistical functions to apply to the features
(from sklearn.feature_selection)
:param model_filters: models to use for feature evaluation, can be specified by
model name (ex. LinearSVC), formalized json (contains 'CLASS',
'FIT', 'META') or a path to such json file.
:param max_scaled_scores: produce feature scores table scaled with max_scaler
"""
# Read input DF
df_path = str(df_artifact)
context.logger.info(f'input dataset {df_path}')
if df_path.endswith('csv'):
df = pd.read_csv(df_path)
elif df_path.endswith('parquet') or df_path.endswith('pq'):
df = pd.read_parquet(df_path)
# Set feature vector and labels
y = df.pop(label_column)
X = df
# Create selected statistical estimators
stat_functions_list = {stat_name:SelectKBest(create_class(f'sklearn.feature_selection.{stat_name}'), k)
for stat_name in stat_filters}
requires_abs = ['chi2']
# Run statistic filters
selected_features_agg = {}
stats_df = pd.DataFrame(index=X.columns)
for stat_name, stat_func in stat_functions_list.items():
try:
# Compute statistics
            params = (abs(X), y) if stat_name in requires_abs else (X, y)  # chi2 requires non-negative features
stat = stat_func.fit(*params)
# Collect stat function results
stat_df = pd.DataFrame(index=X.columns,
columns=[stat_name],
data=stat.scores_)
plot_stat(context, stat_name, stat_df)
stats_df = stats_df.join(stat_df)
# Select K Best features
selected_features = X.columns[stat_func.get_support()]
selected_features_agg[stat_name] = selected_features
except Exception as e:
context.logger.info(f"Couldn't calculate {stat_name} because of: {e}")
# Create models from class name / json file / json params
all_sklearn_estimators = dict(all_estimators()) if len(model_filters) > 0 else {}
selected_models = {}
for model_name, model in model_filters.items():
if '.json' in model:
current_model = json.load(open(model, 'r'))
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
elif model in all_sklearn_estimators:
selected_models[model_name] = all_sklearn_estimators[model_name]()
else:
try:
                current_model = json.loads(model) if isinstance(model, str) else model
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
except:
context.logger.info(f'unable to load {model}')
# Run model filters
models_df = pd.DataFrame(index=X.columns)
for model_name, model in selected_models.items():
# Train model and get feature importance
select_from_model = SelectFromModel(model).fit(X,y)
feature_idx = select_from_model.get_support()
feature_names = X.columns[feature_idx]
selected_features_agg[model_name] = feature_names.tolist()
# Collect model feature importance
if hasattr(select_from_model.estimator_, 'coef_'):
stat_df = select_from_model.estimator_.coef_
elif hasattr(select_from_model.estimator_, 'feature_importances_'):
            stat_df = select_from_model.estimator_.feature_importances_.reshape(1, -1)  # keep 2-D like coef_ so stat_df[0] below selects the full vector
stat_df = pd.DataFrame(index=X.columns,
columns=[model_name],
data=stat_df[0])
models_df = models_df.join(stat_df)
plot_stat(context, model_name, stat_df)
# Create feature_scores DF with stat & model filters scores
result_matrix_df = pd.concat([stats_df, models_df], axis=1, sort=False)
context.log_dataset(key='feature_scores',
df=result_matrix_df,
local_path='feature_scores.parquet',
format='parquet')
if max_scaled_scores:
normalized_df = result_matrix_df.replace([np.inf, -np.inf], np.nan).values
min_max_scaler = MinMaxScaler()
normalized_df = min_max_scaler.fit_transform(normalized_df)
normalized_df = pd.DataFrame(data=normalized_df,
columns=result_matrix_df.columns,
index=result_matrix_df.index)
context.log_dataset(key='max_scaled_scores_feature_scores',
df=normalized_df,
local_path='max_scaled_scores_feature_scores.parquet',
format='parquet')
# Create feature count DataFrame
for test_name in selected_features_agg:
result_matrix_df[test_name] = [1 if x in selected_features_agg[test_name] else 0 for x in X.columns]
result_matrix_df.loc[:,'num_votes'] = result_matrix_df.sum(axis=1)
context.log_dataset(key='selected_features_count',
df=result_matrix_df,
local_path='selected_features_count.parquet',
format='parquet')
# How many votes are needed for a feature to be selected?
if isinstance(min_votes, int):
votes_needed = min_votes
else:
num_filters = len(stat_filters) + len(model_filters)
votes_needed = int(np.floor(num_filters * max(min(min_votes, 1), 0)))
context.logger.info(f'votes needed to be selected: {votes_needed}')
# Create final feature dataframe
selected_features = result_matrix_df[result_matrix_df.num_votes>=votes_needed].index.tolist()
good_feature_df = df.loc[:, selected_features]
final_df = pd.concat([good_feature_df,y], axis=1)
context.log_dataset(key='selected_features',
df=final_df,
local_path='selected_features.parquet',
format='parquet')
# nuclio: end-code
###Output
_____no_output_____
###Markdown
Test
###Code
from mlrun import code_to_function, mount_v3io, mlconf, NewTask, run_local
mlconf.artifact_path = os.path.abspath('./artifacts')
mlconf.db_path = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
Local Test
###Code
task = NewTask(params={'k': 2,
'min_votes': 0.3,
'label_column': 'is_error'},
inputs={'df_artifact': '/User/demo-network-operations/data/metrics.parquet'})
runl = run_local(task=task,
name='feature_selection',
handler=feature_selection,
artifact_path=os.path.join(os.path.abspath('./'), 'artifacts'))
###Output
[mlrun] 2020-04-12 12:28:08,160 starting run feature_selection uid=558aa6cf639d4e9eab6c8d6020f45962 -> http://10.194.95.255:8080
###Markdown
Job Test
###Code
fn = code_to_function(name='feature_selection',
handler='feature_selection')
fn.spec.default_handler = 'feature_selection'
fn.spec.description = "Select features through multiple Statistical and Model filters"
fn.metadata.categories = ['data-prep', 'ml']
fn.metadata.labels = {"author": "orz"}
fn.export('function.yaml')
fn.apply(mount_v3io())
fn.run(task)
pd.read_parquet(runl.spec.inputs['df_artifact'])
pd.read_parquet(runl.outputs['feature_scores'])
pd.read_parquet(runl.outputs['max_scaled_scores_feature_scores'])
pd.read_parquet(runl.outputs['selected_features_count'])
pd.read_parquet(runl.outputs['selected_features'])
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
import nuclio
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
# nuclio: start-code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import json
# Feature selection strategies
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import SelectFromModel
# Model based feature selection
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
# Scale feature scores
from sklearn.preprocessing import MinMaxScaler
# SKLearn estimators list
from sklearn.utils import all_estimators
# MLRun utils
from mlrun.mlutils import create_class, gcf_clear
from mlrun.artifacts import PlotArtifact
def show_values_on_bars(axs, h_v="v", space=0.4):
def _show_on_single_plot(ax):
if h_v == "v":
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
value = int(p.get_height())
ax.text(_x, _y, value, ha="center")
elif h_v == "h":
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = int(p.get_width())
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
def plot_stat(context,
stat_name,
stat_df):
gcf_clear(plt)
# Add chart
ax = plt.axes()
stat_chart = sns.barplot(x=stat_name,
y='index',
data=stat_df.sort_values(stat_name, ascending=False).reset_index(),
ax=ax)
plt.tight_layout()
for p in stat_chart.patches:
width = p.get_width()
plt.text(5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1.2f}'.format(width),
ha='center', va='center')
context.log_artifact(PlotArtifact(f'{stat_name}', body=plt.gcf()),
local_path=os.path.join('plots', 'feature_selection', f'{stat_name}.html'))
gcf_clear(plt)
def feature_selection(context,
df_artifact,
k=2,
min_votes=0.5,
label_column: str = 'Y',
stat_filters = ['f_classif', 'mutual_info_classif', 'chi2', 'f_regression'],
model_filters = {'LinearSVC': 'LinearSVC',
'LogisticRegression': 'LogisticRegression',
'ExtraTreesClassifier': 'ExtraTreesClassifier'},
max_scaled_scores = True):
"""Applies selected feature selection statistical functions
or models on our 'df_artifact'.
Each statistical function or model will vote for it's best K selected features.
If a feature has >= 'min_votes' votes, it will be selected.
:param context: the function context
:param k: number of top features to select from each statistical
function or model
:param min_votes: minimal number of votes (from a model or by statistical
function) needed for a feature to be selected.
Can be specified by percentage of votes or absolute
number of votes
:param label_column: ground-truth (y) labels
:param stat_filters: statistical functions to apply to the features
(from sklearn.feature_selection)
:param model_filters: models to use for feature evaluation, can be specified by
model name (ex. LinearSVC), formalized json (contains 'CLASS',
'FIT', 'META') or a path to such json file.
:param max_scaled_scores: produce feature scores table scaled with max_scaler
"""
# Read input DF
df_path = str(df_artifact)
context.logger.info(f'input dataset {df_path}')
if df_path.endswith('csv'):
df = pd.read_csv(df_path)
elif df_path.endswith('parquet') or df_path.endswith('pq'):
df = pd.read_parquet(df_path)
# Set feature vector and labels
y = df.pop(label_column)
X = df
# Create selected statistical estimators
stat_functions_list = {stat_name:SelectKBest(create_class(f'sklearn.feature_selection.{stat_name}'), k)
for stat_name in stat_filters}
requires_abs = ['chi2']
# Run statistic filters
selected_features_agg = {}
stats_df = pd.DataFrame(index=X.columns)
for stat_name, stat_func in stat_functions_list.items():
# Compute statistics
        params = (abs(X), y) if stat_name in requires_abs else (X, y)  # chi2 requires non-negative inputs
stat = stat_func.fit(*params)
# Collect stat function results
stat_df = pd.DataFrame(index=X.columns,
columns=[stat_name],
data=stat.scores_)
plot_stat(context, stat_name, stat_df)
stats_df = stats_df.join(stat_df)
# Select K Best features
selected_features = X.columns[stat_func.get_support()]
selected_features_agg[stat_name] = selected_features
# Create models from class name / json file / json params
all_sklearn_estimators = dict(all_estimators()) if len(model_filters) > 0 else {}
selected_models = {}
for model_name, model in model_filters.items():
if '.json' in model:
current_model = json.load(open(model, 'r'))
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
elif model in all_sklearn_estimators:
            selected_models[model_name] = all_sklearn_estimators[model]()
else:
try:
                current_model = json.loads(model) if isinstance(model, str) else model
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
            except Exception:
context.logger.info(f'unable to load {model}')
# Run model filters
models_df = pd.DataFrame(index=X.columns)
for model_name, model in selected_models.items():
# Train model and get feature importance
select_from_model = SelectFromModel(model).fit(X,y)
feature_idx = select_from_model.get_support()
feature_names = X.columns[feature_idx]
selected_features_agg[model_name] = feature_names.tolist()
# Collect model feature importance
if hasattr(select_from_model.estimator_, 'coef_'):
stat_df = select_from_model.estimator_.coef_
elif hasattr(select_from_model.estimator_, 'feature_importances_'):
stat_df = select_from_model.estimator_.feature_importances_
stat_df = pd.DataFrame(index=X.columns,
columns=[model_name],
                               data=np.atleast_2d(stat_df)[0])  # handles both coef_ and feature_importances_
models_df = models_df.join(stat_df)
plot_stat(context, model_name, stat_df)
# Create feature_scores DF with stat & model filters scores
result_matrix_df = pd.concat([stats_df, models_df], axis=1, sort=False)
context.log_dataset(key='feature_scores',
df=result_matrix_df,
local_path='feature_scores.parquet',
format='parquet')
if max_scaled_scores:
normalized_df = result_matrix_df.replace([np.inf, -np.inf], np.nan).values
min_max_scaler = MinMaxScaler()
normalized_df = min_max_scaler.fit_transform(normalized_df)
normalized_df = pd.DataFrame(data=normalized_df,
columns=result_matrix_df.columns,
index=result_matrix_df.index)
context.log_dataset(key='max_scaled_scores_feature_scores',
df=normalized_df,
local_path='max_scaled_scores_feature_scores.parquet',
format='parquet')
# Create feature count DataFrame
for test_name in selected_features_agg:
result_matrix_df[test_name] = [1 if x in selected_features_agg[test_name] else 0 for x in X.columns]
result_matrix_df.loc[:,'num_votes'] = result_matrix_df.sum(axis=1)
context.log_dataset(key='selected_features_count',
df=result_matrix_df,
local_path='selected_features_count.parquet',
format='parquet')
# How many votes are needed for a feature to be selected?
if isinstance(min_votes, int):
votes_needed = min_votes
else:
num_filters = len(stat_filters) + len(model_filters)
votes_needed = int(np.floor(num_filters * max(min(min_votes, 1), 0)))
context.logger.info(f'votes needed to be selected: {votes_needed}')
# Create final feature dataframe
selected_features = result_matrix_df[result_matrix_df.num_votes>=votes_needed].index.tolist()
good_feature_df = df.loc[:, selected_features]
final_df = pd.concat([good_feature_df,y], axis=1)
context.log_dataset(key='selected_features',
df=final_df,
local_path='selected_features.parquet',
format='parquet')
# nuclio: end-code
###Output
_____no_output_____
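###Markdown
Before running the function, a small sketch of the voting rule described in the docstring may help. The numbers below are only illustrative (4 statistical filters plus 3 model filters, matching the defaults above) and are not part of the function itself.
###Code
# Illustrative only: how a fractional `min_votes` becomes an absolute vote count
num_filters = 4 + 3                 # stat_filters + model_filters with the default arguments
min_votes = 0.5
votes_needed = int(np.floor(num_filters * max(min(min_votes, 1), 0)))
print(votes_needed)                 # 3 -> a feature must be picked by at least 3 of the 7 filters
###Output
_____no_output_____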
###Markdown
Test
###Code
from mlrun import code_to_function, mount_v3io, mlconf, NewTask, run_local
mlconf.artifact_path = os.path.abspath('./artifacts')
mlconf.db_path = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
Local Test
###Code
task = NewTask(params={'k': 2,
'min_votes': 0.3,
'label_column': 'is_error'},
inputs={'df_artifact': '/User/demo-network-operations/data/metrics.parquet'})
runl = run_local(task=task,
name='feature_selection',
handler=feature_selection,
artifact_path=os.path.join(os.path.abspath('./'), 'artifacts'))
###Output
[mlrun] 2020-04-12 12:28:08,160 starting run feature_selection uid=558aa6cf639d4e9eab6c8d6020f45962 -> http://10.194.95.255:8080
###Markdown
Job Test
###Code
fn = code_to_function(name='feature_selection',
handler='feature_selection')
fn.spec.default_handler = 'feature_selection'
fn.spec.description = "Select features through multiple Statistical and Model filters"
fn.metadata.categories = ['data-prep', 'ml']
fn.metadata.labels = {"author": "orz"}
fn.export('function.yaml')
fn.apply(mount_v3io())
fn.run(task)
pd.read_parquet(runl.spec.inputs['df_artifact'])
pd.read_parquet(runl.outputs['feature_scores'])
pd.read_parquet(runl.outputs['max_scaled_scores_feature_scores'])
pd.read_parquet(runl.outputs['selected_features_count'])
pd.read_parquet(runl.outputs['selected_features'])
###Output
_____no_output_____
###Markdown
Feature Selection
###Code
import mlrun
%nuclio config kind = "job"
%nuclio config spec.image = "mlrun/ml-models"
# nuclio: start-code
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import numpy as np
import os
import json
# Feature selection strategies
from sklearn.feature_selection import SelectKBest
from sklearn.feature_selection import SelectFromModel
# Model based feature selection
from sklearn.ensemble import ExtraTreesClassifier
from sklearn.svm import LinearSVC
from sklearn.linear_model import LogisticRegression
# Scale feature scores
from sklearn.preprocessing import MinMaxScaler
# SKLearn estimators list
from sklearn.utils import all_estimators
# MLRun utils
from mlrun.mlutils.plots import gcf_clear
from mlrun.utils.helpers import create_class
from mlrun.artifacts import PlotArtifact
def show_values_on_bars(axs, h_v="v", space=0.4):
def _show_on_single_plot(ax):
if h_v == "v":
for p in ax.patches:
_x = p.get_x() + p.get_width() / 2
_y = p.get_y() + p.get_height()
value = int(p.get_height())
ax.text(_x, _y, value, ha="center")
elif h_v == "h":
for p in ax.patches:
_x = p.get_x() + p.get_width() + float(space)
_y = p.get_y() + p.get_height()
value = int(p.get_width())
ax.text(_x, _y, value, ha="left")
if isinstance(axs, np.ndarray):
for idx, ax in np.ndenumerate(axs):
_show_on_single_plot(ax)
else:
_show_on_single_plot(axs)
def plot_stat(context,
stat_name,
stat_df):
gcf_clear(plt)
# Add chart
ax = plt.axes()
stat_chart = sns.barplot(x=stat_name,
y='index',
data=stat_df.sort_values(stat_name, ascending=False).reset_index(),
ax=ax)
plt.tight_layout()
for p in stat_chart.patches:
width = p.get_width()
plt.text(5+p.get_width(), p.get_y()+0.55*p.get_height(),
'{:1.2f}'.format(width),
ha='center', va='center')
context.log_artifact(PlotArtifact(f'{stat_name}', body=plt.gcf()),
local_path=os.path.join('plots', 'feature_selection', f'{stat_name}.html'))
gcf_clear(plt)
def feature_selection(context,
df_artifact,
k=2,
min_votes=0.5,
label_column: str = 'Y',
stat_filters = ['f_classif', 'mutual_info_classif', 'chi2', 'f_regression'],
model_filters = {'LinearSVC': 'LinearSVC',
'LogisticRegression': 'LogisticRegression',
'ExtraTreesClassifier': 'ExtraTreesClassifier'},
max_scaled_scores = True):
"""Applies selected feature selection statistical functions
or models on our 'df_artifact'.
    Each statistical function or model will vote for its best K selected features.
If a feature has >= 'min_votes' votes, it will be selected.
    :param context: the function context
    :param df_artifact: input dataset to select features from (readable via .as_df())
:param k: number of top features to select from each statistical
function or model
:param min_votes: minimal number of votes (from a model or by statistical
function) needed for a feature to be selected.
Can be specified by percentage of votes or absolute
number of votes
:param label_column: ground-truth (y) labels
:param stat_filters: statistical functions to apply to the features
(from sklearn.feature_selection)
:param model_filters: models to use for feature evaluation, can be specified by
model name (ex. LinearSVC), formalized json (contains 'CLASS',
'FIT', 'META') or a path to such json file.
:param max_scaled_scores: produce feature scores table scaled with max_scaler
"""
# Read input DF
df = df_artifact.as_df()
# Drop nan's and inf's for our calculations
df = df.replace([np.inf, -np.inf], np.nan).dropna()
# Set feature vector and labels
y = df.pop(label_column)
X = df
# Create selected statistical estimators
stat_functions_list = {stat_name:SelectKBest(create_class(f'sklearn.feature_selection.{stat_name}'), k)
for stat_name in stat_filters}
requires_abs = ['chi2']
# Run statistic filters
selected_features_agg = {}
stats_df = pd.DataFrame(index=X.columns)
for stat_name, stat_func in stat_functions_list.items():
try:
# Compute statistics
            params = (np.abs(X), y) if stat_name in requires_abs else (X, y)  # chi2 requires non-negative inputs
stat = stat_func.fit(*params)
# Collect stat function results
stat_df = pd.DataFrame(index=X.columns,
columns=[stat_name],
data=stat.scores_)
plot_stat(context, stat_name, stat_df)
stats_df = stats_df.join(stat_df)
# Select K Best features
selected_features = X.columns[stat_func.get_support()]
selected_features_agg[stat_name] = selected_features
except Exception as e:
context.logger.info(f"Couldn't calculate {stat_name} because of: {e}")
# Create models from class name / json file / json params
all_sklearn_estimators = dict(all_estimators()) if len(model_filters) > 0 else {}
selected_models = {}
for model_name, model in model_filters.items():
if '.json' in model:
current_model = json.load(open(model, 'r'))
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
elif model in all_sklearn_estimators:
            selected_models[model_name] = all_sklearn_estimators[model]()
else:
try:
                current_model = json.loads(model) if isinstance(model, str) else model
ClassifierClass = create_class(current_model["META"]["class"])
selected_models[model_name] = ClassifierClass(**current_model["CLASS"])
            except Exception:
context.logger.info(f'unable to load {model}')
# Run model filters
models_df = pd.DataFrame(index=X.columns)
for model_name, model in selected_models.items():
# Train model and get feature importance
select_from_model = SelectFromModel(model).fit(X,y)
feature_idx = select_from_model.get_support()
feature_names = X.columns[feature_idx]
selected_features_agg[model_name] = feature_names.tolist()
# Collect model feature importance
if hasattr(select_from_model.estimator_, 'coef_'):
stat_df = select_from_model.estimator_.coef_
elif hasattr(select_from_model.estimator_, 'feature_importances_'):
stat_df = select_from_model.estimator_.feature_importances_
stat_df = pd.DataFrame(index=X.columns,
columns=[model_name],
                               data=np.atleast_2d(stat_df)[0])  # handles both coef_ and feature_importances_
models_df = models_df.join(stat_df)
plot_stat(context, model_name, stat_df)
# Create feature_scores DF with stat & model filters scores
result_matrix_df = pd.concat([stats_df, models_df], axis=1, sort=False)
context.log_dataset(key='feature_scores',
df=result_matrix_df,
local_path='feature_scores.parquet',
format='parquet')
if max_scaled_scores:
normalized_df = result_matrix_df.replace([np.inf, -np.inf], np.nan).values
min_max_scaler = MinMaxScaler()
normalized_df = min_max_scaler.fit_transform(normalized_df)
normalized_df = pd.DataFrame(data=normalized_df,
columns=result_matrix_df.columns,
index=result_matrix_df.index)
context.log_dataset(key='max_scaled_scores_feature_scores',
df=normalized_df,
local_path='max_scaled_scores_feature_scores.parquet',
format='parquet')
# Create feature count DataFrame
for test_name in selected_features_agg:
result_matrix_df[test_name] = [1 if x in selected_features_agg[test_name] else 0 for x in X.columns]
result_matrix_df.loc[:,'num_votes'] = result_matrix_df.sum(axis=1)
context.log_dataset(key='selected_features_count',
df=result_matrix_df,
local_path='selected_features_count.parquet',
format='parquet')
# How many votes are needed for a feature to be selected?
if isinstance(min_votes, int):
votes_needed = min_votes
else:
num_filters = len(stat_filters) + len(model_filters)
votes_needed = int(np.floor(num_filters * max(min(min_votes, 1), 0)))
context.logger.info(f'votes needed to be selected: {votes_needed}')
# Create final feature dataframe
selected_features = result_matrix_df[result_matrix_df.num_votes>=votes_needed].index.tolist()
good_feature_df = df.loc[:, selected_features]
final_df = pd.concat([good_feature_df,y], axis=1)
context.log_dataset(key='selected_features',
df=final_df,
local_path='selected_features.parquet',
format='parquet')
# nuclio: end-code
###Output
_____no_output_____
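###Markdown
The `model_filters` argument also accepts the "formalized json" form mentioned in the docstring. Below is a minimal sketch of such a spec; only the keys the code above actually reads (`META.class` and `CLASS`) matter, and the particular class and parameters here are just an example.
###Code
import json
logreg_spec = json.dumps({"META": {"class": "sklearn.linear_model.LogisticRegression"},
                          "CLASS": {"max_iter": 1000},
                          "FIT": {}})
# e.g. feature_selection(..., model_filters={'LogisticRegression': logreg_spec})
print(logreg_spec)
###Output
_____no_output_____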
###Markdown
Test
###Code
from mlrun import code_to_function, mount_v3io, mlconf, NewTask, run_local
mlconf.artifact_path = os.path.abspath('./artifacts')
mlconf.db_path = 'http://mlrun-api:8080'
###Output
_____no_output_____
###Markdown
Local Test
###Code
task = NewTask(params={'k': 2,
'min_votes': 0.3,
'label_column': 'is_error'},
inputs={'df_artifact': os.path.abspath('data/metrics.pq')})
runl = run_local(task=task,
name='feature_selection',
handler=feature_selection,
artifact_path=os.path.join(os.path.abspath('./'), 'artifacts'))
###Output
> 2021-06-10 12:55:47,338 [info] starting run feature_selection uid=bcf7669f839147798ff84c6e2934bdbb DB=http://mlrun-api:8080
###Markdown
Job Test
###Code
fn = code_to_function(name='feature_selection',
handler='feature_selection')
fn.spec.default_handler = 'feature_selection'
fn.spec.description = "Select features through multiple Statistical and Model filters"
fn.metadata.categories = ['data-prep', 'ml']
fn.metadata.labels = {"author": "orz"}
fn.export('function.yaml')
fn.apply(mount_v3io())
fn_run = fn.run(task)
mlrun.get_dataitem(fn_run.spec.inputs['df_artifact']).as_df()
mlrun.get_dataitem(fn_run.outputs['feature_scores']).as_df()
mlrun.get_dataitem(fn_run.outputs['selected_features']).as_df()
###Output
_____no_output_____ |
supervised/3.1.decision_trees.ipynb | ###Markdown
3.1 Decision tree classifier
###Code
import numpy as np
import matplotlib.pyplot as plt
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier
from utilities import visualize_classifier
data = np.loadtxt('data_decision_trees.txt', delimiter = ',')
X, y = data[:, :-1], data[:,-1]
class_0 = np.array(X[y==0])
class_1 = np.array(X[y==1])
print(class_1)
plt.figure()
plt.scatter(class_0[:,0], class_0[:,1], s=75,
facecolors='black',edgecolors='black',linewidth=1,marker='x')
plt.scatter(class_1[:,0],class_1[:,1], s=75, facecolors='white',
edgecolors='black', linewidth=1, marker='o')
plt.title('Input data')
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=5)
# Decision tree classifier
params = {'random_state':0,'max_depth':4}
classifier = DecisionTreeClassifier(**params)
classifier.fit(X_train,y_train)
print('Training dataset')
visualize_classifier(classifier, X_train, y_train)
y_test_pred = classifier.predict(X_test)
print('Test dataset')
visualize_classifier(classifier, X_test, y_test)
class_names = ['Class-0', 'Class-1']
print("\n" + "#"*40)
print("\nClassifier performance on training dataset\n")
print(classification_report(y_train, classifier.predict(X_train),
target_names=class_names))
print("#"*40 + '\n')
print("#"*40)
print("\nClassifier performance on test dataset\n")
print(classification_report(y_test, y_test_pred, target_names=class_names))
print("#"*40 + "\n")
"""
The performance of a classifier is characterized by precision, recall, and F1 scores.
Precision refers to the accuracy of the classification, and
recall refers to the number of items retrieved as a percentage of the total number of items
that were supposed to be retrieved.
A good classifier will have high precision and high recall,
but there is usually a trade-off between the two.
That is why we use the F1 score to characterize this balance.
The F1 score is the harmonic mean of precision and recall,
which gives a good compromise between the precision and recall values.
"""
###Output
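_____no_output_____
###Markdown
As a quick illustration of the note above, the same quantities can be computed individually. This is a minimal sketch that reuses `y_test` and `y_test_pred` from the cells above; the last line checks that F1 really is the harmonic mean of precision and recall.
###Code
from sklearn.metrics import precision_score, recall_score, f1_score
precision = precision_score(y_test, y_test_pred)
recall = recall_score(y_test, y_test_pred)
print('precision:', precision)
print('recall:   ', recall)
print('f1:       ', f1_score(y_test, y_test_pred))
print('harmonic mean of precision and recall:', 2 * precision * recall / (precision + recall))
###Output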
_____no_output_____ |
curve-fitting-1d.ipynb | ###Markdown
1D curve fittingCurve fitting with scipy.optimize.curve_fit. numpy's polyfit/polyval only handle polynomials, so they are not general-purpose. First-degree polynomial
###Code
# Import the required modules
from numpy import *
from matplotlib.pyplot import *
from scipy.optimize import curve_fit
# Define the model
def model_poly1d(x, a, b):
return a * x + b
# Prepare the data
x = linspace(-5, 5, 101)
y_model = model_poly1d(x, a=5, b=10)
plot(x, y_model)
noise = random.normal(0, 10, len(x))
scatter(x, noise)
y_exp = y_model + noise
scatter(x, y_exp)
plot(x, y_model)
savefig('fitting.svg', format='svg', dpi=1200)
param, covar = curve_fit(model_poly1d, x, y_exp)
param
# Fit errors: the diagonal of the covariance matrix gives the parameter variances
print(covar)
print(diag(covar))
print(sqrt(diag(covar)))
y_fit = model_poly1d(x, param[0], param[1])
%matplotlib inline
scatter(x, y_exp)
plot(x, y_model, color='black')
plot(x, y_fit, color='red')
###Output
_____no_output_____
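###Markdown
Since this first model is just a first-degree polynomial, the note above about polyfit/polyval can be checked directly: numpy's `polyfit` should recover essentially the same coefficients. This is a small sketch reusing `x`, `y_exp`, and `param` from the cells above.
###Code
# Cross-check with numpy.polyfit, which only handles polynomials --
# curve_fit is the more general tool used in this notebook.
coeffs = polyfit(x, y_exp, 1)
print('polyfit   (a, b):', coeffs)
print('curve_fit (a, b):', param)
###Output
_____no_output_____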
###Markdown
Gaussian (exponential) function
###Code
h = hist(noise, bins=10)
def gaussian(x, mu, sigma):
    return (1/sqrt(2*pi*power(sigma,2))) * exp(-power(x-mu,2)/(2*power(sigma,2)))
param, covar = curve_fit(gaussian, h[1][:-1], h[0])
noise_fit = gaussian(h[1][:-1], param[0], param[1])
scatter(h[1][:-1], noise_fit)
###Output
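_____no_output_____
###Markdown
Because the model above is a normalized probability density while `h[0]` holds raw bin counts, a Gaussian with a free amplitude usually matches the histogram better. The sketch below reuses `h` from the cell above; the initial guess `p0` assumes the noise was generated with a standard deviation of 10, as earlier in this notebook.
###Code
def gaussian_amp(x, a, mu, sigma):
    return a * exp(-power(x - mu, 2) / (2 * power(sigma, 2)))

param_amp, covar_amp = curve_fit(gaussian_amp, h[1][:-1], h[0], p0=[h[0].max(), 0, 10])
print(param_amp)
scatter(h[1][:-1], h[0])
plot(h[1][:-1], gaussian_amp(h[1][:-1], *param_amp), color='red')
###Output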
_____no_output_____ |
notebooks/01-06-Image-combination.ipynb | ###Markdown
Image combinationImage combination serves several purposes. Combining images:+ reduces noise in images+ can remove transient artifacts like cosmic rays and satellite tracks+ can remove stars in flat images taken at twilightIt's essential that several of each type of calibration image (bias, dark, flat)be taken. Combining them reduces the noise in the images by roughly a factor of$1/\sqrt{N}$, where $N$ is the number of images being combined. As shown in theprevious notebook, using a single calibration image actually *increases* thenoise in your image.There are a few ways to combine images; if done properly, features that show upin only one of the images (like cosmic rays) are not present in the combination.If done incorrectly, those features show up in your combined images and thencontaminate your calibrated science images too. The bottom line: combine by averaging images, but clip extreme valuesThe remainder of this notebook demonstrates this conclusion and explains how todo a combination by averaging images with [ccdproc](https://ccdproc.readthedocs.io/en/latest/).
###Code
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import rc
from astropy.visualization import hist
from astropy.stats import mad_std
# Use custom style for larger fonts and figures
plt.style.use('guide.mplstyle')
# Set some default parameters for the plots below
rc('font', size=20)
rc('axes', grid=True)
###Output
_____no_output_____
###Markdown
Combination method: average or median?In this section we'll look at a simplified version of the challenges ofcombining images to reduce noise. It's fair to think of astronomical images(especially bias and dark images) as being a Gaussian distribution of pixelvalues around the bias level, and a width related to the read noise of thedetector. To simplify what follows, we will work arrays of random numbers drawnfrom a Gaussian distribution instead of with astronomical images.In properly done flat images the noise is technically a Poisson distribution,but with a large enough number of counts, the distribution is indistinguishablefrom a Gaussian distribution whose width is related to the square root of thenumber of counts. While some regions of a science image are dominated by Poissonnoise from sources in the image, most of the image will be dominated by Gaussianread noise from the detector or Poisson noise from the sky background.Instead of working with a combination of images, we'll create 100 Gaussiandistributions with a mean of zero, and a standard deviation of one, and combinethose two different ways: by finding the average and by finding the median. Eachdistribution has size $320^2$ so that we can view it as either a distribution of102,400 values or as an image that is $320 \times 320$.We can think of each of these 100 distributions as representing an image, like abias or dark. To make the analogy to real images a little more direct, a "bias"of 1000 is added to each distribution.
###Code
n_distributions = 100
bias_level = 1000
n_side = 320
bits = np.random.randn(n_distributions, n_side**2) + bias_level
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
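###Markdown
As a quick, illustrative check of the $1/\sqrt{N}$ claim from the introduction (not part of the original analysis), the width of the averaged distribution should be roughly the single-distribution width divided by $\sqrt{100} = 10$:
###Code
single_std = np.std(bits[0, :])
print('std of one distribution:      {:.4f}'.format(single_std))
print('std of the average of {}:    {:.4f}'.format(n_distributions, np.std(average)))
print('expected, single / sqrt(N):   {:.4f}'.format(single_std / np.sqrt(n_distributions)))
###Output
_____no_output_____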
###Markdown
Now that we've created the distributions and combined them in two differentways, let's take a look at them. The [`hist` function from astropy.visualization](https://astropy.readthedocs.io/en/stable/visualization/histogram.html) is usedbelow because it can figure out what bin size to use for your data.
###Code
fig, ax = plt.subplots(1, 2, sharey=True, tight_layout=True, figsize=(20, 10))
hist(bits[0, :], bins='freedman', ax=ax[0]);
ax[0].set_title('One sample distribution')
ax[0].set_xlabel('Pixel value')
ax[0].set_ylabel('Number of pixels')
hist(average, bins='freedman', label='average', alpha=0.5, ax=ax[1]);
hist(median, bins='freedman', label='median', alpha=0.5, ax=ax[1]);
ax[1].set_title('{} distributions combined'.format(n_distributions))
ax[1].set_xlabel('Pixel value')
ax[1].legend()
###Output
_____no_output_____
###Markdown
Combining by averaging gives a narrower (i.e. less noisy) distribution thancombining by median, though both substantially reduced the width of thedistribution. The conclusion so far is that combining by averaging is mildlypreferable to combining by median. Computationally, the mean is also faster tocompute than the median. Image view of these distributionsAs suggested above, we could view each of these distributions as an imageinstead of a histogram. One take away from the diagram below is that in thiscase, the difference between mean and median is not apparent.In all cases, the extreme values of the image display are set to bracket thewidth of the initial distribution.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
The effect of outliersSuppose that, in just one of the 100 distributions we're combining, there are asmall number of extreme values. In astronomical images these extremes happenvery frequently because of cosmic ray hits on the detector that cause, in onesmall patch of a calibration image, much higher counts. Another case occurs whencombining twilight flats, which often contain faint images of stars.In the example below, we set just 50 points out of the 102,400 in the firstdistribution to a somewhat higher value than the rest.
###Code
bits[0, 10000:10050] = 2 * bias_level
###Output
_____no_output_____
###Markdown
Remember, we can think of the values in this distribution as an image, a viewthat will be particularly convenient in this case.
###Code
plt.imshow(bits[0, :].reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
plt.xticks([])
plt.yticks([])
plt.title('One distribution with outliers')
plt.grid(False)
###Output
_____no_output_____
###Markdown
Now that we know what the outliers in this (and *only* this) distribution looklike, we'll combine all of the distributions as we did above.
###Code
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
###Markdown
Even though only one out of the 100 "images" we're combining has these highpixel values, the distribution of pixels for the average is clearly affected(well, maybe not clearly, since seeing it requires a logarithmic $y$-axis). Thedistribution for the median looks much the same as above. Since median simplylooks for the middle value, an extreme value doesn't affect the result too much.
###Code
plt.figure(figsize=(10, 10))
hist(average, bins='freedman', alpha=0.5, label='average');
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
plt.semilogy();
###Output
_____no_output_____
###Markdown
Combining using the average has a noticeable effect on the result; medianremoves the artifactThe effect of the outlier is *much* clearer if the distributions are displayedas images. If the distributions we're combining were calibration images then theoutliers that appear in one image (e.g. a cosmic ray) would affect the combinedimage we hoped to use for calibration.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution with outliers', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
On one hand, the noise properties are better when you combine by taking theaverage. On the other hand, the median eliminates features that appear in onlyone image.Astronomical images will almost always have those transient features. Even at anobservatory near sea level in an exposure that is very short, cosmic ray hitsare common. The solution: average combine, but clip the extreme valuesThe answer here is to first clip extreme values from the distributions and thencombine using the average. That rejects outlying values like the median but withthe modestly better statistical properties of the average. A method called"sigma clipping" is used to remove the extreme values. **Please do not use the code below for reducing your data...**...in the next set of notebooks we'll walk through the package[ccdproc](https://ccdproc.readthedocs.io), which automates much of what you see below.The section below demonstrates and explains some of what's happening behind thescenes in [ccdproc](https://ccdproc.readthedocs.io). Sigma clippingSigma clipping means calculating how "far" each pixel to be combined is from the"typical" value and excluding values from the combination if they are "too far"from the pixel value.To be clear, when evaluating which values to reject we're doing it for each ofthe 102,400 points in the distribution (or, if you prefer, each of the320$\times$320 pixels in the image) we're going to combine. In other words, foreach point (or pixel), we'll compute a "typical" value for the 100 distributions(images) we're combining and exclude any from the average that are "too far"from the "typical value."What should be used as the "typical" value, how do we measure how "far" away avalue is, and how far is "too far"?The last question is easiest to answer: it depends a bit on the noise level inyour camera but something like 5 farther from the "typical" value than most ofthe pixels are.Using the average as the typical value and the standard deviation as a measureof how far a particular value is from the typical value is often not the bestchoice. The problem with this is that outlying values in a single distribution(or image) strongly bias the average and exaggerate the standard deviation. Inthis example, where we're combining 100 distributions (images), using theaverage and standard deviation might work since there are so many distributions.A more typical number of bias or dark images that one might combine is 10 or 20.In that case, an extreme value in one image strongly affects the mean andstandard deviation.As an example, consider combining 10, 20, or 100 of our distributions, as shownin the cell below. Only in the case of 100 distributions would our extreme valueof 2000 be excluded if we excluded values more than 5 times the standarddeviation from the average.
###Code
print('Number combined\t Average\t Standard dev σ \t 10σ ')
for n_to_combine in [10, 20, n_distributions]:
avg = np.mean(bits[:n_to_combine, 10000])
std = np.std(bits[:n_to_combine, 10000])
print('{n:10d}\t{avg:10.2f}\t{std:10.2f}\t{ten_sig:10.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
###Markdown
A better choice is to use the median as the typical value and the *medianabsolute deviation* in place of the standard deviation as the measure of how fara value is from the typical value. The [median absolute deviation](https://en.wikipedia.org/wiki/Median_absolute_deviation), or MAD,of a set of points $x$ is defined by:$$MAD = \frac{1}{N}\sum_{i=0}^N |x_i - \text{median}(x)|.$$This is a measure of the typical absolute distance from the median of the set ofvalues. The MAD is not directly equivalent to the standard deviation. Therelationship between the two depends on the distribution of values, but for aGaussian distribution multiplying the MAD by 1.4826 does the trick. The[astropy function `mad_std`](http://docs.astropy.org/en/stable/api/astropy.stats.mad_std.html) will calculate the MAD and multiply by theappropriate factor for you.Repeating the calculation above but with median as the central value and the MADin place of the standard deviation demonstrates that even for 10 distributionsthe extreme value will be excluded.
###Code
print('{:^20}{:^20}{:^20}{:^20}'.format('Number combined', 'Median', 'MAD σ', '10σ'))
for n_to_combine in [10, 20, n_distributions]:
avg = np.median(bits[:n_to_combine, 10000])
std = mad_std(bits[:n_to_combine, 10000])
print('{n:^20d}{avg:^20.2f}{std:^20.2f}{ten_sig:^20.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
###Markdown
The downside to using the median and median absolute deviation? They can be slowto compute for large images or large stacks of images. The cells below perform the actual clipping; you should generally use theastropy function [`sigma_clip`](https://astropy.readthedocs.io/en/stable/stats/robust.html) to do this, but here we'll doit manually to illustrate the process.We begin by calculating the MAD standard deviation estimator for our data.
###Code
mad_sigma = mad_std(bits, axis=0)
###Output
_____no_output_____
###Markdown
The expression below is true for all of the points farther than $10\sigma_{MAD}$ from the median of the distributions and false everywhere else.This array will be used to exclude the extreme points.
###Code
exclude = (bits - median) / mad_sigma > 10
###Output
_____no_output_____
###Markdown
Next, we calculate the average, excluding the points identified as "too far"from from the median. There are two approaches we can take here. One is to usenumpy masked arrays; the other is to temporarily set the excluded values to thespecial value `np.nan` and use a numpy function that excludes `nan` from thecalculation. The latter approach is often faster than the former.The best approach is really to use a higher-level function from astropy forccdproc. Those will take care of the details of implementing the clipping foryou.
###Code
original_values = bits[exclude]
bits[exclude] = np.nan
clip_combine = np.nanmean(bits, axis=0)
bits[exclude] = original_values
###Output
_____no_output_____
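###Markdown
The same result can be cross-checked with astropy's [`sigma_clip`](https://astropy.readthedocs.io/en/stable/stats/robust.html) function mentioned above. This is a sketch only; the arguments mirror the manual clipping (median as the center, the MAD-based sigma estimate, a 10$\sigma$ threshold, and a single pass), so the two clipped averages should agree closely.
###Code
from astropy.stats import sigma_clip

clipped_bits = sigma_clip(bits, sigma=10, cenfunc='median', stdfunc='mad_std',
                          axis=0, maxiters=1)
clip_combine_astropy = clipped_bits.mean(axis=0)  # the masked-array mean ignores clipped values
print(np.max(np.abs(clip_combine - clip_combine_astropy)))
###Output
_____no_output_____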
###Markdown
SummaryCombine images by (1) excluding extreme values using sigma clipping, with themedian as the typical value and the MAD estimator of the standard deviation, andthen (2) averaging the remaining pixels across all of the images.Note that in the distribution below the clipped average is a narrowerdistribution (less noise) than the median but that it still excludes the extremevalue that appeared in one image.
###Code
plt.figure(figsize=(10, 10))
hist(clip_combine, bins='freedman', alpha=0.5, label='clipped average')
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
###Output
_____no_output_____
###Markdown
Image combinationImage combination serves several purposes. Combining images:+ reduces noise in images+ can remove transient artifacts like cosmic rays and satellite tracks+ can remove stars in flat images taken at twilightIt's essential that several of each type of calibration image (bias, dark, flat)be taken. Combining them reduces the noise in the images by roughly a factor of$1/\sqrt{N}$, where $N$ is the number of images being combined. As shown in theprevious notebook, using a single calibration image actually *increases* thenoise in your image.There are a few ways to combine images; if done properly, features that show upin only one of the images (like cosmic rays) are not present in the combination.If done incorrectly, those features show up in your combined images and thencontaminate your calibrated science images too. The bottom line: combine by averaging images, but clip extreme valuesThe remainder of this notebook demonstrates this conclusion and explains how todo a combination by averaging images with [ccdproc](https://ccdproc.readthedocs.io/en/latest/).
###Code
import os
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import rc
from astropy.visualization import hist
from astropy.stats import mad_std
# Use custom style for larger fonts and figures
plt.style.use('guide.mplstyle')
# Set some default parameters for the plots below
rc('font', size=20)
rc('axes', grid=True)
# Set up the random number generator, allowing a seed to be set from the environment
seed = os.getenv('GUIDE_RANDOM_SEED', None)
# This is the generator to use for any image component which changes in each image, e.g. read noise
# or Poisson error
noise_rng = np.random.default_rng(int(seed)) if seed is not None else np.random.default_rng()
###Output
_____no_output_____
###Markdown
Combination method: average or median?In this section we'll look at a simplified version of the challenges ofcombining images to reduce noise. It's fair to think of astronomical images(especially bias and dark images) as being a Gaussian distribution of pixelvalues around the bias level, and a width related to the read noise of thedetector. To simplify what follows, we will work arrays of random numbers drawnfrom a Gaussian distribution instead of with astronomical images.In properly done flat images the noise is technically a Poisson distribution,but with a large enough number of counts, the distribution is indistinguishablefrom a Gaussian distribution whose width is related to the square root of thenumber of counts. While some regions of a science image are dominated by Poissonnoise from sources in the image, most of the image will be dominated by Gaussianread noise from the detector or Poisson noise from the sky background.Instead of working with a combination of images, we'll create 100 Gaussiandistributions with a mean of zero, and a standard deviation of one, and combinethose two different ways: by finding the average and by finding the median. Eachdistribution has size $320^2$ so that we can view it as either a distribution of102,400 values or as an image that is $320 \times 320$.We can think of each of these 100 distributions as representing an image, like abias or dark. To make the analogy to real images a little more direct, a "bias"of 1000 is added to each distribution.
###Code
n_distributions = 100
bias_level = 1000
n_side = 320
bits = noise_rng.normal(size=(n_distributions, n_side**2)) + bias_level
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
###Markdown
Now that we've created the distributions and combined them in two differentways, let's take a look at them. The [`hist` function from astropy.visualization](https://astropy.readthedocs.io/en/stable/visualization/histogram.html) is usedbelow because it can figure out what bin size to use for your data.
###Code
fig, ax = plt.subplots(1, 2, sharey=True, tight_layout=True, figsize=(20, 10))
hist(bits[0, :], bins='freedman', ax=ax[0]);
ax[0].set_title('One sample distribution')
ax[0].set_xlabel('Pixel value')
ax[0].set_ylabel('Number of pixels')
hist(average, bins='freedman', label='average', alpha=0.5, ax=ax[1]);
hist(median, bins='freedman', label='median', alpha=0.5, ax=ax[1]);
ax[1].set_title('{} distributions combined'.format(n_distributions))
ax[1].set_xlabel('Pixel value')
ax[1].legend()
###Output
_____no_output_____
###Markdown
Combining by averaging gives a narrower (i.e. less noisy) distribution thancombining by median, though both substantially reduced the width of thedistribution. The conclusion so far is that combining by averaging is mildlypreferable to combining by median. Computationally, the mean is also faster tocompute than the median. Image view of these distributionsAs suggested above, we could view each of these distributions as an imageinstead of a histogram. One take away from the diagram below is that in thiscase, the difference between mean and median is not apparent.In all cases, the extreme values of the image display are set to bracket thewidth of the initial distribution.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
The effect of outliersSuppose that, in just one of the 100 distributions we're combining, there are asmall number of extreme values. In astronomical images these extremes happenvery frequently because of cosmic ray hits on the detector that cause, in onesmall patch of a calibration image, much higher counts. Another case occurs whencombining twilight flats, which often contain faint images of stars.In the example below, we set just 50 points out of the 102,400 in the firstdistribution to a somewhat higher value than the rest.
###Code
bits[0, 10000:10050] = 2 * bias_level
###Output
_____no_output_____
###Markdown
Remember, we can think of the values in this distribution as an image, a viewthat will be particularly convenient in this case.
###Code
plt.imshow(bits[0, :].reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
plt.xticks([])
plt.yticks([])
plt.title('One distribution with outliers')
plt.grid(False)
###Output
_____no_output_____
###Markdown
Now that we know what the outliers in this (and *only* this) distribution looklike, we'll combine all of the distributions as we did above.
###Code
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
###Markdown
Even though only one out of the 100 "images" we're combining has these highpixel values, the distribution of pixels for the average is clearly affected(well, maybe not clearly, since seeing it requires a logarithmic $y$-axis). Thedistribution for the median looks much the same as above. Since median simplylooks for the middle value, an extreme value doesn't affect the result too much.
###Code
plt.figure(figsize=(10, 10))
hist(average, bins='freedman', alpha=0.5, label='average');
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
plt.semilogy();
###Output
_____no_output_____
###Markdown
Combining using the average has a noticeable effect on the result; medianremoves the artifactThe effect of the outlier is *much* clearer if the distributions are displayedas images. If the distributions we're combining were calibration images then theoutliers that appear in one image (e.g. a cosmic ray) would affect the combinedimage we hoped to use for calibration.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution with outliers', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
On one hand, the noise properties are better when you combine by taking theaverage. On the other hand, the median eliminates features that appear in onlyone image.Astronomical images will almost always have those transient features. Even at anobservatory near sea level in an exposure that is very short, cosmic ray hitsare common. The solution: average combine, but clip the extreme valuesThe answer here is to first clip extreme values from the distributions and thencombine using the average. That rejects outlying values like the median but withthe modestly better statistical properties of the average. A method called"sigma clipping" is used to remove the extreme values. **Please do not use the code below for reducing your data...**...in the next set of notebooks we'll walk through the package[ccdproc](https://ccdproc.readthedocs.io), which automates much of what you see below.The section below demonstrates and explains some of what's happening behind thescenes in [ccdproc](https://ccdproc.readthedocs.io). Sigma clippingSigma clipping means calculating how "far" each pixel to be combined is from the"typical" value and excluding values from the combination if they are "too far"from the pixel value.To be clear, when evaluating which values to reject we're doing it for each ofthe 102,400 points in the distribution (or, if you prefer, each of the320$\times$320 pixels in the image) we're going to combine. In other words, foreach point (or pixel), we'll compute a "typical" value for the 100 distributions(images) we're combining and exclude any from the average that are "too far"from the "typical value."What should be used as the "typical" value, how do we measure how "far" away avalue is, and how far is "too far"?The last question is easiest to answer: it depends a bit on the noise level inyour camera but something like 5 farther from the "typical" value than most ofthe pixels are.Using the average as the typical value and the standard deviation as a measureof how far a particular value is from the typical value is often not the bestchoice. The problem with this is that outlying values in a single distribution(or image) strongly bias the average and exaggerate the standard deviation. Inthis example, where we're combining 100 distributions (images), using theaverage and standard deviation might work since there are so many distributions.A more typical number of bias or dark images that one might combine is 10 or 20.In that case, an extreme value in one image strongly affects the mean andstandard deviation.As an example, consider combining 10, 20, or 100 of our distributions, as shownin the cell below. Only in the case of 100 distributions would our extreme valueof 2000 be excluded if we excluded values more than 5 times the standarddeviation from the average.
###Code
print('Number combined\t Average\t Standard dev σ \t 10σ ')
for n_to_combine in [10, 20, n_distributions]:
avg = np.mean(bits[:n_to_combine, 10000])
std = np.std(bits[:n_to_combine, 10000])
print('{n:10d}\t{avg:10.2f}\t{std:10.2f}\t{ten_sig:10.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
###Markdown
A better choice is to use the median as the typical value and the *medianabsolute deviation* in place of the standard deviation as the measure of how fara value is from the typical value. The [median absolute deviation](https://en.wikipedia.org/wiki/Median_absolute_deviation), or MAD,of a set of points $x$ is defined by:$$MAD = \frac{1}{N}\sum_{i=0}^N |x_i - \text{median}(x)|.$$This is a measure of the typical absolute distance from the median of the set ofvalues. The MAD is not directly equivalent to the standard deviation. Therelationship between the two depends on the distribution of values, but for aGaussian distribution multiplying the MAD by 1.4826 does the trick. The[astropy function `mad_std`](http://docs.astropy.org/en/stable/api/astropy.stats.mad_std.html) will calculate the MAD and multiply by theappropriate factor for you.Repeating the calculation above but with median as the central value and the MADin place of the standard deviation demonstrates that even for 10 distributionsthe extreme value will be excluded.
###Code
print('{:^20}{:^20}{:^20}{:^20}'.format('Number combined', 'Median', 'MAD σ', '10σ'))
for n_to_combine in [10, 20, n_distributions]:
avg = np.median(bits[:n_to_combine, 10000])
std = mad_std(bits[:n_to_combine, 10000])
print('{n:^20d}{avg:^20.2f}{std:^20.2f}{ten_sig:^20.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
###Markdown
The downside to using the median and median absolute deviation? They can be slowto compute for large images or large stacks of images. The cells below perform the actual clipping; you should generally use theastropy function [`sigma_clip`](https://astropy.readthedocs.io/en/stable/stats/robust.html) to do this, but here we'll doit manually to illustrate the process.We begin by calculating the MAD standard deviation estimator for our data.
###Code
mad_sigma = mad_std(bits, axis=0)
###Output
_____no_output_____
###Markdown
The expression below is true for all of the points farther than $10\sigma_{MAD}$ from the median of the distributions and false everywhere else.This array will be used to exclude the extreme points.
###Code
exclude = (bits - median) / mad_sigma > 10
###Output
_____no_output_____
###Markdown
Next, we calculate the average, excluding the points identified as "too far"from from the median. There are two approaches we can take here. One is to usenumpy masked arrays; the other is to temporarily set the excluded values to thespecial value `np.nan` and use a numpy function that excludes `nan` from thecalculation. The latter approach is often faster than the former.The best approach is really to use a higher-level function from astropy forccdproc. Those will take care of the details of implementing the clipping foryou.
###Code
original_values = bits[exclude]
bits[exclude] = np.nan
clip_combine = np.nanmean(bits, axis=0)
bits[exclude] = original_values
###Output
_____no_output_____
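###Markdown
In the notebooks that follow, this "sigma clip, then average" combination is done with [ccdproc](https://ccdproc.readthedocs.io) rather than by hand. The cell below is only a rough preview of what that call looks like for this toy data: the argument names follow the ccdproc documentation, while the reshape to $320 \times 320$ and the 'adu' unit are assumptions made just for this example.
###Code
# Sketch only -- a preview of the ccdproc-based combination used later in the guide.
from astropy.nddata import CCDData
import ccdproc as ccdp

toy_images = [CCDData(row.reshape(n_side, n_side), unit='adu') for row in bits]
combined = ccdp.combine(toy_images,
                        method='average',
                        sigma_clip=True,
                        sigma_clip_low_thresh=10, sigma_clip_high_thresh=10,
                        sigma_clip_func=np.ma.median, sigma_clip_dev_func=mad_std)
###Output
_____no_output_____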
###Markdown
SummaryCombine images by (1) excluding extreme values using sigma clipping, with themedian as the typical value and the MAD estimator of the standard deviation, andthen (2) averaging the remaining pixels across all of the images.Note that in the distribution below the clipped average is a narrowerdistribution (less noise) than the median but that it still excludes the extremevalue that appeared in one image.
###Code
plt.figure(figsize=(10, 10))
hist(clip_combine, bins='freedman', alpha=0.5, label='clipped average')
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
###Output
_____no_output_____
###Markdown
Image combinationImage combination serves several purposes. Combining images:+ reduces noise in images+ can remove transient artifacts like cosmic rays and satellite tracks+ can remove stars in flat images taken at twilightIt's essential that several of each type of calibration image (bias, dark, flat)be taken. Combining them reduces the noise in the images by roughly a factor of$1/\sqrt{N}$, where $N$ is the number of images being combined. As shown in theprevious notebook, using a single calibration image actually *increases* thenoise in your image.There are a few ways to combine images; if done properly, features that show upin only one of the images (like cosmic rays) are not present in the combination.If done incorrectly, those features show up in your combined images and thencontaminate your calibrated science images too. The bottom line: combine by averaging images, but clip extreme valuesThe remainder of this notebook demonstrates this conclusion and explains how todo a combination by averaging images with [ccdproc](https://ccdproc.readthedocs.io/en/latest/).
###Code
import os
import numpy as np
%matplotlib inline
from matplotlib import pyplot as plt
from matplotlib import rc
from astropy.visualization import hist
from astropy.stats import mad_std
# Use custom style for larger fonts and figures
plt.style.use('guide.mplstyle')
# Set some default parameters for the plots below
rc('font', size=20)
rc('axes', grid=True)
# Set up the random number generator, allowing a seed to be set from the environment
seed = os.getenv('GUIDE_RANDOM_SEED', None)
if seed is not None:
seed = int(seed)
# This is the generator to use for any image component which changes in each image, e.g. read noise
# or Poisson error
noise_rng = np.random.default_rng(seed)
###Output
_____no_output_____
###Markdown
Combination method: average or median?In this section we'll look at a simplified version of the challenges ofcombining images to reduce noise. It's fair to think of astronomical images(especially bias and dark images) as being a Gaussian distribution of pixelvalues around the bias level, and a width related to the read noise of thedetector. To simplify what follows, we will work arrays of random numbers drawnfrom a Gaussian distribution instead of with astronomical images.In properly done flat images the noise is technically a Poisson distribution,but with a large enough number of counts, the distribution is indistinguishablefrom a Gaussian distribution whose width is related to the square root of thenumber of counts. While some regions of a science image are dominated by Poissonnoise from sources in the image, most of the image will be dominated by Gaussianread noise from the detector or Poisson noise from the sky background.Instead of working with a combination of images, we'll create 100 Gaussiandistributions with a mean of zero, and a standard deviation of one, and combinethose two different ways: by finding the average and by finding the median. Eachdistribution has size $320^2$ so that we can view it as either a distribution of102,400 values or as an image that is $320 \times 320$.We can think of each of these 100 distributions as representing an image, like abias or dark. To make the analogy to real images a little more direct, a "bias"of 1000 is added to each distribution.
###Code
n_distributions = 100
bias_level = 1000
n_side = 320
bits = noise_rng.normal(size=(n_distributions, n_side**2)) + bias_level
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
###Markdown
Now that we've created the distributions and combined them in two differentways, let's take a look at them. The [`hist` function from astropy.visualization](https://astropy.readthedocs.io/en/stable/visualization/histogram.html) is usedbelow because it can figure out what bin size to use for your data.
###Code
fig, ax = plt.subplots(1, 2, sharey=True, tight_layout=True, figsize=(20, 10))
hist(bits[0, :], bins='freedman', ax=ax[0]);
ax[0].set_title('One sample distribution')
ax[0].set_xlabel('Pixel value')
ax[0].set_ylabel('Number of pixels')
hist(average, bins='freedman', label='average', alpha=0.5, ax=ax[1]);
hist(median, bins='freedman', label='median', alpha=0.5, ax=ax[1]);
ax[1].set_title('{} distributions combined'.format(n_distributions))
ax[1].set_xlabel('Pixel value')
ax[1].legend()
###Output
_____no_output_____
###Markdown
Combining by averaging gives a narrower (i.e. less noisy) distribution thancombining by median, though both substantially reduced the width of thedistribution. The conclusion so far is that combining by averaging is mildlypreferable to combining by median. Computationally, the mean is also faster tocompute than the median. Image view of these distributionsAs suggested above, we could view each of these distributions as an imageinstead of a histogram. One take away from the diagram below is that in thiscase, the difference between mean and median is not apparent.In all cases, the extreme values of the image display are set to bracket thewidth of the initial distribution.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
The effect of outliersSuppose that, in just one of the 100 distributions we're combining, there are asmall number of extreme values. In astronomical images these extremes happenvery frequently because of cosmic ray hits on the detector that cause, in onesmall patch of a calibration image, much higher counts. Another case occurs whencombining twilight flats, which often contain faint images of stars.In the example below, we set just 50 points out of the 102,400 in the firstdistribution to a somewhat higher value than the rest.
###Code
bits[0, 10000:10050] = 2 * bias_level
###Output
_____no_output_____
###Markdown
Remember, we can think of the values in this distribution as an image, a viewthat will be particularly convenient in this case.
###Code
plt.imshow(bits[0, :].reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
plt.xticks([])
plt.yticks([])
plt.title('One distribution with outliers')
plt.grid(False)
###Output
_____no_output_____
###Markdown
Now that we know what the outliers in this (and *only* this) distribution looklike, we'll combine all of the distributions as we did above.
###Code
average = np.average(bits, axis=0)
median = np.median(bits, axis=0)
###Output
_____no_output_____
###Markdown
Even though only one out of the 100 "images" we're combining has these highpixel values, the distribution of pixels for the average is clearly affected(well, maybe not clearly, since seeing it requires a logarithmic $y$-axis). Thedistribution for the median looks much the same as above. Since median simplylooks for the middle value, an extreme value doesn't affect the result too much.
###Code
plt.figure(figsize=(10, 10))
hist(average, bins='freedman', alpha=0.5, label='average');
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
plt.semilogy();
###Output
_____no_output_____
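###Markdown
The same effect can be seen at the level of a single pixel (a small illustrative check): the 50 outlying values were placed at positions 10000-10049, so at one of those pixels the average is pulled noticeably upward while the median barely moves, and at an unaffected pixel the two agree.
###Code
print('affected pixel   -> average: {:.2f}, median: {:.2f}'.format(average[10000], median[10000]))
print('unaffected pixel -> average: {:.2f}, median: {:.2f}'.format(average[20000], median[20000]))
###Output
_____no_output_____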
###Markdown
Combining using the average has a noticeable effect on the result; medianremoves the artifactThe effect of the outlier is *much* clearer if the distributions are displayedas images. If the distributions we're combining were calibration images then theoutliers that appear in one image (e.g. a cosmic ray) would affect the combinedimage we hoped to use for calibration.
###Code
fig, axes = plt.subplots(1, 3, sharey=True, tight_layout=True, figsize=(20, 10))
data_source = [bits[0, :], average, median]
titles = ['One distribution with outliers', 'Average of {n}'.format(n=n_distributions), 'Median of {n}'.format(n=n_distributions)]
for axis, data, title in zip(axes, data_source, titles):
axis.imshow(data.reshape(n_side, n_side), vmin=bias_level - 3, vmax=bias_level + 3)
axis.set_xticks([])
axis.set_yticks([])
axis.grid(False)
axis.set_title(title)
###Output
_____no_output_____
###Markdown
On one hand, the noise properties are better when you combine by taking the average. On the other hand, the median eliminates features that appear in only one image.
Astronomical images will almost always have those transient features. Even at an observatory near sea level, in an exposure that is very short, cosmic ray hits are common.
The solution: average combine, but clip the extreme values
The answer here is to first clip extreme values from the distributions and then combine using the average. That rejects outlying values like the median but with the modestly better statistical properties of the average. A method called "sigma clipping" is used to remove the extreme values.
**Please do not use the code below for reducing your data...**
...in the next set of notebooks we'll walk through the package [ccdproc](https://ccdproc.readthedocs.io), which automates much of what you see below. The section below demonstrates and explains some of what's happening behind the scenes in [ccdproc](https://ccdproc.readthedocs.io).
Sigma clipping
Sigma clipping means calculating how "far" each pixel to be combined is from the "typical" value and excluding values from the combination if they are "too far" from the typical value.
To be clear, when evaluating which values to reject we're doing it for each of the 102,400 points in the distribution (or, if you prefer, each of the 320$\times$320 pixels in the image) we're going to combine. In other words, for each point (or pixel), we'll compute a "typical" value for the 100 distributions (images) we're combining and exclude any from the average that are "too far" from the "typical value."
What should be used as the "typical" value, how do we measure how "far" away a value is, and how far is "too far"?
The last question is easiest to answer: it depends a bit on the noise level in your camera, but something like 5 times farther from the "typical" value than most of the pixels are.
Using the average as the typical value and the standard deviation as a measure of how far a particular value is from the typical value is often not the best choice. The problem with this is that outlying values in a single distribution (or image) strongly bias the average and exaggerate the standard deviation. In this example, where we're combining 100 distributions (images), using the average and standard deviation might work since there are so many distributions. A more typical number of bias or dark images that one might combine is 10 or 20. In that case, an extreme value in one image strongly affects the mean and standard deviation.
As an example, consider combining 10, 20, or 100 of our distributions, as shown in the cell below. Only in the case of 100 distributions would our extreme value of 2000 be excluded if we excluded values more than 5 times the standard deviation from the average.
###Code
print('Number combined\t Average\t Standard dev σ \t 10σ ')
for n_to_combine in [10, 20, n_distributions]:
avg = np.mean(bits[:n_to_combine, 10000])
std = np.std(bits[:n_to_combine, 10000])
print('{n:10d}\t{avg:10.2f}\t{std:10.2f}\t{ten_sig:10.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
###Markdown
A better choice is to use the median as the typical value and the *median absolute deviation* in place of the standard deviation as the measure of how far a value is from the typical value. The [median absolute deviation](https://en.wikipedia.org/wiki/Median_absolute_deviation), or MAD, of a set of points $x$ is defined by:
$$MAD = \text{median}\left(\,|x_i - \text{median}(x)|\,\right).$$
This is a measure of the typical absolute distance from the median of the set of values. The MAD is not directly equivalent to the standard deviation. The relationship between the two depends on the distribution of values, but for a Gaussian distribution multiplying the MAD by 1.4826 does the trick. The [astropy function `mad_std`](http://docs.astropy.org/en/stable/api/astropy.stats.mad_std.html) will calculate the MAD and multiply by the appropriate factor for you.
Repeating the calculation above but with the median as the central value and the MAD in place of the standard deviation demonstrates that even for 10 distributions the extreme value will be excluded.
###Code
print('{:^20}{:^20}{:^20}{:^20}'.format('Number combined', 'Median', 'MAD σ', '10σ'))
for n_to_combine in [10, 20, n_distributions]:
avg = np.median(bits[:n_to_combine, 10000])
std = mad_std(bits[:n_to_combine, 10000])
print('{n:^20d}{avg:^20.2f}{std:^20.2f}{ten_sig:^20.2f}'.format(n=n_to_combine,
avg=avg,
std=std, ten_sig=10 * std))
###Output
_____no_output_____
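###Markdown
As a quick sanity check of the 1.4826 factor mentioned above, here is a small added sketch (assuming `numpy` and `astropy` are available, as elsewhere in this notebook) comparing the standard deviation and `mad_std` on pure Gaussian noise with a known width:
###Code
from astropy.stats import mad_std

# Draw pure Gaussian noise with sigma = 5; mad_std already includes the 1.4826
# scale factor, so both estimates should land close to the true value.
gauss = np.random.normal(loc=0.0, scale=5.0, size=100000)
print('np.std : {:.3f}'.format(np.std(gauss)))
print('mad_std: {:.3f}'.format(mad_std(gauss)))
###Output
_____no_output_____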
###Markdown
The downside to using the median and median absolute deviation? They can be slow to compute for large images or large stacks of images.
The cells below perform the actual clipping; you should generally use the astropy function [`sigma_clip`](https://astropy.readthedocs.io/en/stable/stats/robust.html) to do this, but here we'll do it manually to illustrate the process.
We begin by calculating the MAD standard deviation estimator for our data.
###Code
mad_sigma = mad_std(bits, axis=0)
###Output
_____no_output_____
###Markdown
The expression below is true for all of the points farther than $10\sigma_{MAD}$ from the median of the distributions and false everywhere else. This array will be used to exclude the extreme points.
###Code
exclude = (bits - median) / mad_sigma > 10
###Output
_____no_output_____
###Markdown
Next, we calculate the average, excluding the points identified as "too far" from the median. There are two approaches we can take here. One is to use numpy masked arrays; the other is to temporarily set the excluded values to the special value `np.nan` and use a numpy function that excludes `nan` from the calculation. The latter approach is often faster than the former.
The best approach is really to use a higher-level function from astropy or ccdproc. Those will take care of the details of implementing the clipping for you.
###Code
original_values = bits[exclude]
bits[exclude] = np.nan
clip_combine = np.nanmean(bits, axis=0)
bits[exclude] = original_values
###Output
_____no_output_____
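###Markdown
For reference, here is a minimal sketch of the same clipped combination using astropy's `sigma_clip`. Note that, unlike the one-sided mask above, `sigma_clip` rejects values on both sides of the median by default; the masked entries are simply ignored when averaging the resulting masked array.
###Code
from astropy.stats import sigma_clip

# Clip along the stack axis using the median and the MAD-based sigma estimate,
# then average the surviving values for each pixel.
clipped = sigma_clip(bits, sigma=10, cenfunc='median', stdfunc='mad_std', axis=0)
clip_combine_astropy = clipped.mean(axis=0)
###Output
_____no_output_____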
###Markdown
Summary
Combine images by (1) excluding extreme values using sigma clipping, with the median as the typical value and the MAD estimator of the standard deviation, and then (2) averaging the remaining pixels across all of the images.
Note that in the distribution below the clipped average is a narrower distribution (less noise) than the median but that it still excludes the extreme value that appeared in one image.
###Code
plt.figure(figsize=(10, 10))
hist(clip_combine, bins='freedman', alpha=0.5, label='clipped average')
hist(median, bins='freedman', alpha=0.5, label='median');
plt.legend()
plt.xlabel('Counts')
plt.ylabel('Number of pixels')
###Output
_____no_output_____ |
Flights_Delay_classification/Train_Model_general---OG.ipynb | ###Markdown
Libraries & Parameters
###Code
!pip install -q awswrangler
import awswrangler as wr
import pandas as pd
import boto3
import pytz
import numpy as np
!pip install -U -q seaborn
import seaborn as sns
import matplotlib.pyplot as plt
import datetime
from sagemaker import get_execution_role
# Get Sagemaker Role
role = get_execution_role()
print(role)
###Output
Couldn't call 'get_role' to get Role ARN from role name AmazonSageMaker-ExecutionRole-20210503T205912 to get Role path.
Assuming role was created in SageMaker AWS console, as the name contains `AmazonSageMaker-ExecutionRole`. Defaulting to Role ARN with service-role in path. If this Role ARN is incorrect, please add IAM read permissions to your role or supply the Role Arn directly.
###Markdown
Runtime Parameters
###Code
# airline_to_run = 'MQ'
###Output
_____no_output_____
###Markdown
___ 1.) Download Data S3 parameters
###Code
# Flight data from Sagemaker Data Wrangler
bucket = 'sagemaker-us-west-2-506926764659/export-flow-05-16-30-08-0c003aed/output/data-wrangler-flow-processing-05-16-30-08-0c003aed/b98f4f8c-ddaf-4ee1-99da-b0dd09f47a21/default'
filename = 'part-00000-92fade68-00c4-41b3-9182-593084da2eae-c000.csv'
path_to_file = 's3://{}/{}'.format(bucket, filename)
# # Flight data from entire year of 2011
# bucket = 'from-public-data/carrier-perf/transformed'
# filename = 'airOT2011all.csv'
# path_to_file = 's3://{}/{}'.format(bucket, filename)
# # Flight data from 2011_01
# bucket = 'from-public-data/carrier-perf/transformed/airOT2011'
# filename = 'airOT201101.csv'
# path_to_file = 's3://{}/{}'.format(bucket, filename)
# ________________________________________________________________
# Supporting dataset useful for EDA and understanding data
# - airport codes
# - airline codes
bucket2 = 'from-public-data/carrier-perf/raw'
file_airport = 'airports.csv'
file_airline = 'airlines.csv'
path_to_file_airport = 's3://{}/{}'.format(bucket2, file_airport)
path_to_file_airline = 's3://{}/{}'.format(bucket2, file_airline)
###Output
_____no_output_____
###Markdown
=== === === === === Download data from S3 1. Flights Performance dataset
###Code
df = wr.s3.read_csv([path_to_file])
# df
###Output
_____no_output_____
###Markdown
A whopping 7,294,649 rows (records) for JUST the year 2007! Thanks to SageMaker Data Wrangler, I was already able to do some data cleaning and adjustment:
- Create the new variable `late_flight` based on `DEP_DELAY`
- Trim values to remove outliers in `DEP_DELAY`
- Drop records for cancelled flights (`CANCELED` == 1), since flights that never occurred are irrelevant to predicting flight delay
2. Airports & Airlines dataset
###Code
df_airports = wr.s3.read_csv([path_to_file_airport])
df_airlines = wr.s3.read_csv([path_to_file_airline])
# df_airlines
###Output
_____no_output_____
###Markdown
=== === === === === Initial Data Clean-up and Organization
###Code
# rename 'DAY_OF_MONTH' column to 'DAY' (in prep of transforming to datetime format)
df = df.rename(columns={'DAY_OF_MONTH': 'DAY'})
# df
###Output
_____no_output_____
###Markdown
1. Date / Time modificationsMake date and time more appropriate. This will make it easier when making plots.
###Code
# Create a datetime field `DATE`
df['DATE'] = pd.to_datetime(df[['YEAR','MONTH','DAY']])
# Convert 'HHMM' string to datetime.time
def format_heure(chaine):
if pd.isnull(chaine):
return np.nan
else:
if chaine == 2400: chaine = 0
chaine = "{0:04d}".format(int(chaine))
heure = datetime.time(int(chaine[0:2]), int(chaine[2:4]))
return heure
df['DEP_TIME'] = df['DEP_TIME'].apply(format_heure)
df['ARR_TIME'] = df['ARR_TIME'].apply(format_heure)
###Output
_____no_output_____
###Markdown
2. Organize ColumnsLet's organize columns (features) to be more logical
###Code
variables_to_remove = ['ORIGIN_AIRPORT_ID', 'DEST_AIRPORT_ID']
df.drop(variables_to_remove, axis = 1, inplace = True)
df = df[[
'DATE',
'YEAR',
'MONTH',
'DAY',
'DAY_OF_WEEK',
'UNIQUE_CARRIER',
'ORIGIN',
'DEST',
'DEP_TIME',
'DEP_DELAY',
'DEP_DELAY_no_outlier',
'ACTUAL_ELAPSED_TIME',
'AIR_TIME',
'DISTANCE',
'ARR_TIME',
'ARR_DELAY',
'CARRIER_DELAY',
'WEATHER_DELAY',
'NAS_DELAY',
'SECURITY_DELAY',
'LATE_AIRCRAFT_DELAY',
'late_flight']]
# REMOVED for a generalized ML model for all airline carriers
# df_toTrain = df.loc[df['UNIQUE_CARRIER'] == airline_to_run]
df_toTrain = df
distinct_airlines = df_toTrain.UNIQUE_CARRIER.unique()
print('New dataset has {0} records with {1} variables, containing only airlines {2}'.format(df_toTrain.shape[0], df_toTrain.shape[1], distinct_airlines))
df_toTrain
###Output
_____no_output_____
###Markdown
___ 2.) Exploratory Data Analysis
Distribution of Target (dependent) Variable `late_flight`
###Code
df_toTrain.late_flight.value_counts().plot(kind='bar')
df_toTrain.late_flight.value_counts()
###Output
_____no_output_____
###Markdown
**NOTE** Looks like a pretty imbalanced distribution of the target variable. We will probably need to use SMOTE and create synthetic data for the minority class.
Correlations
###Code
# increase figure size
plt.figure(figsize=(13, 9))
heatmap = sns.heatmap(df_toTrain.corr(), vmin=-1, vmax=1, annot=True, cmap="YlGnBu")
# define title
heatmap.set_title('Correlation Heatmap', fontdict={'fontsize':15}, pad=12)
# ref. https://medium.com/@szabo.bibor/how-to-create-a-seaborn-correlation-heatmap-in-python-834c0686b88e
###Output
_____no_output_____
###Markdown
**NOTE** Looks like high correlation between:
- `DEP_DELAY_no_outlier` :: `ARR_DELAY`, which makes logical sense because if you are late departing, you are likely to be late arriving
- `ACTUAL_ELAPSED_TIME` :: `DISTANCE` :: `AIR_TIME`, which makes sense as all three variables reference the same part of the flight
3.) Train Model
###Code
# Download PyCaret
!pip install pycaret --quiet
###Output
/opt/conda/lib/python3.7/site-packages/secretstorage/dhcrypto.py:16: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
/opt/conda/lib/python3.7/site-packages/secretstorage/util.py:25: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead
from cryptography.utils import int_from_bytes
###Markdown
a. Get the Data
###Code
data = df_toTrain.sample(frac=0.01, random_state=123)
data_unseen = df.drop(data.index)
data.reset_index(inplace=True, drop=True)
data_unseen.reset_index(inplace=True, drop=True)
print('Data for Modeling: ' + str(data.shape))
print('Unseen Data For Predictions: ' + str(data_unseen.shape))
###Output
Data for Modeling: (72946, 22)
Unseen Data For Predictions: (7221703, 22)
###Markdown
b. Setting Up Environment in PyCaret
###Code
from pycaret.classification import *
exp = setup(data = data,
numeric_features = ['YEAR', 'MONTH','DAY','DAY_OF_WEEK'],
ignore_features = ['DEP_DELAY', 'ARR_DELAY', 'AIR_TIME', 'ACTUAL_ELAPSED_TIME', 'ARR_TIME'],
target = 'late_flight',
fix_imbalance = True,
normalize = True,
transformation = True,
ignore_low_variance = True,
remove_multicollinearity = True,
multicollinearity_threshold = 0.95,
use_gpu = True,
fold = 2
)
###Output
_____no_output_____
###Markdown
c. Comparing all models
###Code
# ref.
# -- https://pycaret.readthedocs.io/en/latest/api/classification.html?highlight=compare_models#pycaret.classification.compare_models
# -- https://machinelearningmastery.com/k-fold-cross-validation/
best_model = compare_models(cross_validation=False)
# best_model = compare_models(fold=3)
###Output
_____no_output_____
###Markdown
4.) Create Model(s) a. Random Forest Classifier
###Code
rf = create_model('rf')
# rf = create_model('rf', cross_validation=False)
# trained model object is stored as `rf`
# print(rf)
###Output
_____no_output_____
###Markdown
b. Ada Boost Classifier
###Code
ada = create_model('ada')
# ada = create_model('ada', cross_validation=False)
# trained model object is stored as `ada`
# print(ada)
###Output
_____no_output_____
###Markdown
c. Light Gradient Boosting Machine
###Code
lightgbm = create_model('lightgbm')
# lightgbm = create_model('lightgbm', cross_validation=False)
# trained model object is stored as `lightgbm`
# print(lightgbm)
###Output
_____no_output_____
###Markdown
5.) Tune Model(s) c. Light Gradient Boosting Machine
###Code
tuned_lightgbm = tune_model(lightgbm, n_iter=2, early_stopping=True)
# tuned model object is stored as `tuned_lightgbm`
# print(tuned_lightgbm)
###Output
_____no_output_____
###Markdown
6.) Models Performance c. Light Gradient Boosting Machine i. Confusion Matrix
###Code
plot_model(tuned_lightgbm, plot = 'confusion_matrix')
###Output
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
###Markdown
ii. Features Importance
###Code
plot_model(tuned_lightgbm, plot='feature')
###Output
findfont: Font family ['sans-serif'] not found. Falling back to DejaVu Sans.
###Markdown
Features that have the greatest explanatory power are:
* `DISTANCE`
* `DAY` of the month
* (followed closely by) `MONTH`
iii. Interpret the Model with SHAP
ref.
* https://www.analyticsvidhya.com/blog/2020/05/pycaret-machine-learning-model-seconds/
* https://www.analyticsvidhya.com/blog/2019/11/shapley-value-machine-learning-interpretability-game-theory/?utm_source=blog&utm_medium=pycaret-machine-learning-model-seconds
###Code
!apt-get update && apt-get install -y build-essential -q
!python -m pip install -q shap
interpret_model(tuned_lightgbm, plot='summary')
interpret_model(tuned_lightgbm, plot='correlation')
###Output
_____no_output_____
###Markdown
8.) Predict on Test Data Sample c. Light Gradient Boosting Machine
###Code
predict_model(tuned_lightgbm)
###Output
_____no_output_____
###Markdown
9.) Deploy Model (finalized) c. Light Gradient Boosting Machine
###Code
final_lightgbm = finalize_model(tuned_lightgbm)
#Final model's parameters for deployment
print(final_lightgbm)
###Output
[LightGBM] [Warning] bagging_fraction is set=0.4, subsample=1.0 will be ignored. Current value: bagging_fraction=0.4
[LightGBM] [Warning] feature_fraction is set=1.0, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=1.0
[LightGBM] [Warning] bagging_freq is set=5, subsample_freq=0 will be ignored. Current value: bagging_freq=5
[LightGBM] [Warning] bagging_fraction is set=0.4, subsample=1.0 will be ignored. Current value: bagging_fraction=0.4
[LightGBM] [Warning] feature_fraction is set=1.0, colsample_bytree=1.0 will be ignored. Current value: feature_fraction=1.0
[LightGBM] [Warning] bagging_freq is set=5, subsample_freq=0 will be ignored. Current value: bagging_freq=5
LGBMClassifier(bagging_fraction=0.4, bagging_freq=5, boosting_type='gbdt',
class_weight=None, colsample_bytree=1.0, feature_fraction=1.0,
importance_type='split', learning_rate=0.5, max_depth=-1,
min_child_samples=31, min_child_weight=0.001, min_split_gain=0.6,
n_estimators=100, n_jobs=-1, num_leaves=100, objective=None,
random_state=3505, reg_alpha=10, reg_lambda=2, silent=True,
subsample=1.0, subsample_for_bin=200000, subsample_freq=0)
###Markdown
**Caution**: Once the model is finalized using `finalize_model()`, the entire dataset including the test/hold-out set is used for training. As a result, if the model is used for predictions on the hold-out set after `finalize_model()` is used, the information grid printed will be misleading as you are trying to predict on the same data that was used for modeling.
###Code
predict_model(final_lightgbm)
###Output
_____no_output_____
###Markdown
10.) Predict on Unseen Dataset
###Code
# Decrease size of unseen data `data_unseen` by sampling 168 random rows
data_unseen_mini = data_unseen.sample(n = 168)
unseen_predictions = predict_model(final_lightgbm, data=data_unseen_mini)
unseen_predictions
KPI = 'Accuracy'
from pycaret.utils import check_metric
# check_metric(unseen_predictions['late_flight'], unseen_predictions['Label'])
KPI_score = check_metric(unseen_predictions['late_flight'], unseen_predictions['Label'], metric=KPI)
print('The {0} of `lightgbm` model is {1}.'.format(KPI, KPI_score))
###Output
The Accuracy of `lightgbm` model is 0.994.
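###Markdown
Because the target variable was noted to be imbalanced earlier, accuracy alone can look optimistic. Here is a small added sketch using scikit-learn to also check F1 and recall on the same unseen predictions (assuming the `Label` column produced by `predict_model` uses the same 0/1 coding as `late_flight`):
###Code
from sklearn.metrics import f1_score, recall_score

# Positive class is assumed to be 1 (a late flight)
f1 = f1_score(unseen_predictions['late_flight'], unseen_predictions['Label'])
rec = recall_score(unseen_predictions['late_flight'], unseen_predictions['Label'])
print('F1: {0:.4f} | Recall: {1:.4f}'.format(f1, rec))
###Output
_____no_output_____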
###Markdown
11.) Persist Model
###Code
today = datetime.datetime.now()
today_datetime = today.strftime("%d-%m-%Y %H:%M:%S")
pkl_filename = 'Final_model___' + 'lightgbm' + '___for_all_airlines_' + today_datetime
save_model(final_lightgbm, pkl_filename)
###Output
Transformation Pipeline and Model Succesfully Saved
###Markdown
12.) Load a Saved Model
###Code
# REMEMBER to omit file suffix ".pkl"
model_to_load = 'Final_model___lightgbm___for_all_airlines_07-05-2021 13:31:37'
saved_model = load_model(model_to_load)
# Decrease size of unseen data `data_unseen` by sampling 168 random rows
data_unseen_mini = data_unseen.sample(n = 168)
new_prediction = predict_model(saved_model, data=data_unseen_mini)
new_prediction.head()
KPI = 'Accuracy'
from pycaret.utils import check_metric
# check_metric(unseen_predictions['late_flight'], unseen_predictions['Label'])
KPI_score = check_metric(new_prediction['late_flight'], new_prediction['Label'], metric=KPI)
print('The {0} of `lightgbm` model is {1}.'.format(KPI, KPI_score))
###Output
The Accuracy of `lightgbm` model is 0.9821.
|
notebooks/02-estimators.ipynb | ###Markdown
2. Core concepts
In this notebook, we will review:
- Estimators in _scikit-learn_, and some of their functions.
- How estimators can be supervised models that perform classification or regression tasks, as well as unsupervised models.
---
Some important concepts
Let's quickly review some conceptual distinctions in Machine Learning (ML). This section is a refresher. If you are lacking some knowledge on these concepts, please consult our suggested reading in [Notebook 1](./01-preliminaries.ipynb).
What machine learning is about
An excellent working definition of ML can be found in [Tal Yarkoni's tutorial](https://github.com/neurohackademy/nh2020-curriculum/blob/master/tu-machine-learning-yarkoni/02-core-concepts.ipynb): ML is the field of science/engineering that seeks to build systems capable of learning from experience. The goal of ML is to develop algorithms that can learn from data with a minimum set of explicitly programmed rules on how to do so. There are two main types of ML models depending on how they learn from data: supervised and unsupervised.
Supervised ML
In supervised ML, we have available the real values of the variables we want to predict. The model can then use this information to train itself by comparing its predicted values with the real ones using a __loss function__, and an __optimization algorithm__ to iteratively make small adjustments and improve its performance.
Regression vs classification
Supervised learning models can also be divided into regression and classification tasks. Regression models seek to predict a continuous variable (e.g. age), while classification models predict discrete labels (e.g. wine class).
Unsupervised ML
In unsupervised ML, these labels are unknown. The algorithm instead seeks to find a pattern in the data that might be useful.
Estimators
In _scikit-learn_ an [estimator](https://scikit-learn.org/stable/tutorial/statistical_inference/settings.htmlestimators-objects) is a Python object that __learns from data__. That means both supervised (classification or regression) and unsupervised models can be constructed and fitted using estimators. We will review some properties of estimators in _scikit-learn_ using an example for each of these types of models.
Linear Regression
A linear regression is an example of a supervised regression model. Used as a machine learning tool, linear regression predicts the values of a continuous variable from a __linear combination__ of one or more features. For example, if we had a feature matrix $X$ containing the values of features $x_1$ and $x_2$, the value $\hat{y}$ predicted by linear regression could be expressed as:
$$\hat{y}_i = \beta_0 + \beta_1x_{i1} + \beta_2x_{i2}$$
> - where $\beta$ are the parameters the model learns from the data to make the predictions
> - $\beta_1$ and $\beta_2$ are also called the coefficients, and $\beta_0$ the intercept
Let's now see how we can fit a linear regression model using _scikit-learn_. We will first need to create a dataset for this exercise. With _scikit-learn_ we can do so using the `make_regression()` function (read the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_regression.htmlsklearn.datasets.make_regression)). Let's create one containing 400 samples and 100 features. We will also define 20 of these features as informative, and add some Gaussian noise to the data to make the task harder for the model:
###Code
import numpy as np
from sklearn.datasets import make_regression
# Create fake dataset
X, y = make_regression(
n_samples=400, n_features=100, n_informative=20, noise=10, random_state=0
)
# Print shape of feature matrix and labels
print(f"Shape of dataset: {np.shape(X)}")
print(f"Shape of labels: {np.shape(y)}")
###Output
_____no_output_____
###Markdown
Since it's a regression problem, let's make sure the target of our model is a continuous variable. Let's print the first ten values of `y`:
###Code
print(y[:10])
###Output
_____no_output_____
###Markdown
Now let's create a linear regression estimator using `LinearRegression` (read documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LinearRegression.html?highlight=linear%20regressionsklearn.linear_model.LinearRegression)).
###Code
from sklearn.linear_model import LinearRegression
# Create model
reg = LinearRegression()
###Output
_____no_output_____
###Markdown
Estimator objects contain certain parameters that define how they will behave when learning the data, as well as their outputs. These are called __estimator parameters__. Let's inspect the ones of `reg`:
###Code
# Print estimator parameters
vars(reg)
###Output
_____no_output_____
###Markdown
These parameters can be changed by modifying their corresponding attributes when calling the estimator, or afterwards using `set_params()`:
###Code
# Set new model parameters
reg.set_params(**{"normalize": True})
vars(reg)
###Output
_____no_output_____
###Markdown
Training the modelOnce the estimator object has been created, it can now learn the value of its parameters from the data. For this we need to call the `fit()` function, and pass our feature matrix (`X`) and true values (`y`) as input:
###Code
# Fit linear regression model
reg = reg.fit(X, y)
###Output
_____no_output_____
###Markdown
Let's inspect the attributes of `reg` again:
###Code
# Print the names of the attributes
vars(reg).keys()
###Output
_____no_output_____
###Markdown
`reg` now contains new attributes. These are referred to as __estimated parameters__, because they have been learned from the data. In _scikit-learn_, these are indexed by an underscore (`_`) at the end. For example, we can now access the coefficients learned by our linear model. We should have as many coefficients as features in our dataset:
###Code
print(f"Number of coefficients: {reg.coef_.shape[0]}")
###Output
_____no_output_____
###Markdown
Let's also print the values of some of them, and the value of the intercept.
###Code
# Define coefficients and intercept
coefs = reg.coef_
intercept = reg.intercept_
# Print
print(f"Model coefficients (first 10):\n {coefs[:10]} \n")
print(f"Model intercept: \n {intercept}")
###Output
_____no_output_____
###Markdown
Be careful if your intention is to interpret the coefficients of the model. This process is far from straightforward. Read this very useful [example](https://scikit-learn.org/stable/auto_examples/inspection/plot_linear_model_coefficient_interpretation.html) to learn more about this issue. Making predictions with the modelNow that our model is fitted, we can use it to make predictions. In _scikit-learn_, this is achieved by calling the function `predict()`. Let's predict the values of `X` using our fitted model, and visually compare them to their real values on the first ten samples:
###Code
import pandas as pd
# Predict labels with trained model
y_pred = reg.predict(X)
# Create dataframe for printing the predictions
df = pd.DataFrame({"y_pred": y_pred[:10], "y_real": y[:10]})
df
###Output
_____no_output_____
###Markdown
Scoring the model
We can use the predicted values to evaluate the performance of the model by quantifying the difference between these and the real values. In _scikit-learn_ we can evaluate the performance of the estimator using the function `score()`:
###Code
# Score the model using r2
score = reg.score(X, y)
# Print score
print(f"Linear model R2: {np.round(score,3)}")
###Output
_____no_output_____
###Markdown
By default, linear models are evaluated by calculating $R^2$, also called the __coefficient of determination__. $R^2$ quantifies how much of the total variance of the outcome variable (`y`) is explained by the fitted model. The best possible value is 1. The higher the value, the better job the model does at explaining the data. You can read more about the implementation of $R^2$ in _scikit-learn_ [here](https://scikit-learn.org/stable/modules/generated/sklearn.metrics.r2_score.htmlsklearn.metrics.r2_score).
✍️ Exercise
There are other scoring metrics for regression problems. Check the module [sklearn.metrics](https://scikit-learn.org/stable/modules/classes.htmlmodule-sklearn.metrics) for an overview of the alternatives. Pick one, and implement it in the cell below. Press the three dots to reveal the solution.
_Hint!_ If you want to implement a scoring function that is not the default one, you won't be able to do so using the `score()` method. You will need to use a function specifically designed for the scoring metric, and pass the real and predicted values as input.
###Code
#### Answer using mean squared error
from sklearn.metrics import mean_squared_error
# Compute mean squared error
mse = mean_squared_error(y, y_pred)
# Print score
print(f"Mean squared error: {mse}")
###Output
_____no_output_____
###Markdown
Logistic regression
Logistic regression is a very popular classification model. It uses a [logistic function](https://en.wikipedia.org/wiki/Logistic_function) to estimate the probability that an observation belongs to different classes. Let's implement a logistic regression in _scikit-learn_. We will create a fake dataset ready for classification using the `make_classification()` method (read the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_classification.htmlsklearn.datasets.make_classification)):
###Code
from sklearn.datasets import make_classification
# Create fake dataset
X, y = make_classification(
n_samples=400, n_features=100, n_informative=20, random_state=0
)
###Output
_____no_output_____
###Markdown
Our `y` should now be a categorical variable. Let's print 10 samples to make sure:
###Code
print(y[:10])
###Output
_____no_output_____
###Markdown
Classifiers are also estimators in _scikit-learn_. This means we can also use them with the functions illustrated for the linear regression case. Let's create a `LogisticRegression` estimator (read the documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.linear_model.LogisticRegression.htmlsklearn.linear_model.LogisticRegression)), and fit it to our data:
###Code
from sklearn.linear_model import LogisticRegression
# Create model
clf = LogisticRegression()
# Fit model
clf = clf.fit(X, y)
###Output
_____no_output_____
###Markdown
✍️ ExerciseCan you compare the first 10 predictions of the fitted model to their real values? Write your answer in the cell below, and press the three dots to reveal the solution.
###Code
#### Answer
# Predict labels with trained model
y_pred = clf.predict(X[:10])
y_real = y[:10]
# Create dataframe for printing the predictions
df = pd.DataFrame({"y_pred": y_pred, "y_real": y_real})
df
###Output
_____no_output_____
###Markdown
Probabilistic predictions
Logistic Regression is a [probabilistic classifier](https://en.wikipedia.org/wiki/Probabilistic_classification), meaning it predicts a probability distribution over the classes. In _scikit-learn_ we can inspect the probabilities assigned to each class using `predict_proba()`:
###Code
# Predict the probability of each class
y_pred_proba = clf.predict_proba(X[:10])
# Create dataframe for printing the predictions for each class
df = pd.DataFrame(y_pred_proba, columns=["class 0", "class 1"])
df
###Output
_____no_output_____
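###Markdown
The class returned by `predict()` is simply the one with the highest predicted probability. Here is a small sketch, added for illustration, that verifies this on the first ten samples:
###Code
# The argmax over the class probabilities should match the hard predictions
print(np.argmax(y_pred_proba, axis=1))
print(clf.predict(X[:10]))
###Output
_____no_output_____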
###Markdown
By default, the predictions made by `LogisticRegression` when calling `score()` are evaluated by computing the __mean accuracy__ of the predictions:
###Code
# Score predictions
score = clf.score(X, y)
print(f"Mean accuracy: {np.round(score, 2)}")
###Output
_____no_output_____
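###Markdown
Mean accuracy compresses performance into a single number. To see *which* classes are being confused, a confusion matrix is often helpful; here is a minimal sketch (not part of the original exercise flow):
###Code
from sklearn.metrics import confusion_matrix

# Rows are true classes, columns are predicted classes
print(confusion_matrix(y, clf.predict(X)))
###Output
_____no_output_____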
###Markdown
K-Means
Unsupervised models are also estimators in _scikit-learn_, since they also learn from data. One type of unsupervised model is the family of __clustering algorithms__. These learn to group the data from their feature values so that observations within a group are more similar than those between groups. You can read more about clustering [here](https://github.com/martinagvilas/intro_stat_learning/blob/master/notebooks/lab2_clustering.ipynb). A very popular clustering algorithm is __k-means__. This method partitions the data into __$k$ pre-specified__ clusters in a way that minimizes the within-cluster variance. Let's implement k-means using _scikit-learn_. We first need to generate a dataset suitable for clustering using `make_blobs()` (read documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.datasets.make_blobs.htmlsklearn.datasets.make_blobs)), which generates Gaussian-shaped blobs. We will create a very simple dataset with only two features, to simplify visualization of the clusters:
###Code
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.datasets import make_blobs
# Create fake dataset
X, y = make_blobs(
n_samples=400, n_features=2, random_state=0, cluster_std=1
)
###Output
_____no_output_____
###Markdown
Let's visualize our dataset with a scatterplot, and color the observations according to their real labels:
###Code
# Plot dataset
sns.scatterplot(
x=X[:, 0], y=X[:, 1], hue=y,
marker='o', s=25, edgecolor='k', legend=True
).set_title("Data")
plt.show()
###Output
_____no_output_____
###Markdown
There are 3 clusters in our fake dataset. Usually we don't have this information available and we need to select an arbitrary number of clusters for the algorithm to find. Let's see an example of this and perform k-means clustering by calling `KMeans` (read documentation [here](https://scikit-learn.org/stable/modules/generated/sklearn.cluster.KMeans.htmlsklearn.cluster.KMeans)) with 5 clusters ($k=5$):
###Code
from sklearn.cluster import KMeans
# Create model
kmeans = KMeans(n_clusters=5)
# Fit model
kmeans = kmeans.fit(X)
###Output
_____no_output_____
###Markdown
Since this is an unsupervised method, we don't need to provide `y` as input to `fit()`. We also cannot compute the accuracy of the fitted model. But we can compute the average distance of the labeled examples to the center of their assigned cluster using the `score()` function:
###Code
# Compute average distance
score = kmeans.score(X, y)
print(f"Average distance: {score}")
###Output
_____no_output_____
###Markdown
If you want to read more about the meaning behind the returned value, read [this answer](https://stackoverflow.com/questions/32370543/understanding-score-returned-by-scikit-learn-kmeans) on stackoverflow. More importantly, we can now use our fitted model to predict which cluster the observations belong to. Let's predict the assignment of the first 10 observations:
###Code
# Predict cluster label
y_pred = kmeans.predict(X)
print(f"Predicted labels (first 10): {y_pred[:10]}")
###Output
_____no_output_____
###Markdown
We can use a scatterplot to inspect the predicted labels from the model:
###Code
# Plot predicted labels
sns.scatterplot(
x=X[:, 0], y=X[:, 1], hue=y_pred,
marker='o', s=25, edgecolor='k', legend=False
).set_title("Data")
plt.show()
###Output
_____no_output_____
###Markdown
We have as many distinct predicted labels as the number of clusters $k$. ✍️ Exercise Can you create a `KMeans` model specifying the correct number of clusters (`k=3`) and plot its predictions? Compare it with the plot of the real labels. Write your code in the cell below and press the three dots to see the solution.
###Code
#### Answer
# Create model
kmeans = KMeans(n_clusters=3)
# Fit model
kmeans = kmeans.fit(X, y)
# Predict labels
y_pred = kmeans.predict(X)
# Plot predicted labels
sns.scatterplot(
x=X[:, 0], y=X[:, 1], hue=y_pred,
marker='o', s=25, edgecolor='k', legend=False
).set_title("Data")
plt.show()
###Output
_____no_output_____ |
.ipynb_checkpoints/Find_the_best_moving_average-checkpoint.ipynb | ###Markdown
###Code
!pip install yfinance
import yfinance
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import ttest_ind
import datetime
plt.rcParams['figure.figsize'] = [10, 7]
plt.rc('font', size=14)
np.random.seed(0)
y = np.arange(0,100,1) + np.random.normal(0,10,100)
sma = pd.Series(y).rolling(20).mean()
plt.plot(y,label="Time series")
plt.plot(sma,label="20-period SMA")
plt.legend()
plt.show()
n_forward = 40
name = 'GLD'
start_date = "2010-01-01"
end_date = "2020-06-15"
ticker = yfinance.Ticker("FB")
data = ticker.history(interval="1d",start='2010-01-01',end=end_date)
plt.plot(data['Close'],label='Facebook')
plt.plot(data['Close'].rolling(20).mean(),label = "20-periods SMA")
plt.plot(data['Close'].rolling(50).mean(),label = "50-periods SMA")
plt.plot(data['Close'].rolling(200).mean(),label = "200-periods SMA")
plt.legend()
plt.xlim((datetime.date(2019,1,1),datetime.date(2020,6,15)))
plt.ylim((100,250))
plt.show()
ticker = yfinance.Ticker(name)
data = ticker.history(interval="1d",start=start_date,end=end_date)
data['Forward Close'] = data['Close'].shift(-n_forward)
data['Forward Return'] = (data['Forward Close'] - data['Close'])/data['Close']
result = []
train_size = 0.6
# Sweep SMA lengths; for each, record the mean forward return when the close is above its SMA
for sma_length in range(20,500):
data['SMA'] = data['Close'].rolling(sma_length).mean()
data['input'] = [int(x) for x in data['Close'] > data['SMA']]
df = data.dropna()
training = df.head(int(train_size * df.shape[0]))
test = df.tail(int((1 - train_size) * df.shape[0]))
tr_returns = training[training['input'] == 1]['Forward Return']
test_returns = test[test['input'] == 1]['Forward Return']
mean_forward_return_training = tr_returns.mean()
mean_forward_return_test = test_returns.mean()
pvalue = ttest_ind(tr_returns,test_returns,equal_var=False)[1]
result.append({
'sma_length':sma_length,
'training_forward_return': mean_forward_return_training,
'test_forward_return': mean_forward_return_test,
'p-value':pvalue
})
result.sort(key = lambda x : -x['training_forward_return'])
result[0]
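# (Added sketch) Summarize how the SMA length selected on the training split
# performs on the held-out test split; both returns and the Welch t-test
# p-value are already stored in each entry of `result`.
best = result[0]
print('Best SMA length: {}'.format(best['sma_length']))
print('Training forward return: {:.4f}'.format(best['training_forward_return']))
print('Test forward return: {:.4f}'.format(best['test_forward_return']))
print('p-value (train vs test returns): {:.4f}'.format(best['p-value']))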
best_sma = result[0]['sma_length']
data['SMA'] = data['Close'].rolling(best_sma).mean()
plt.plot(data['Close'],label=name)
plt.plot(data['SMA'],label = "{} periods SMA".format(best_sma))
plt.legend()
plt.show()
###Output
_____no_output_____ |
docs/python/basics/Confusion_matrix.ipynb | ###Markdown
---title: "Confusion Matrix"author: "Vaishnavi"date: 2020-08-09description: "-"type: technical_notedraft: false--- from sklearn.metrics import confusion_matrix
###Code
var1 = "Cat"
var2 = "Ant"
var3 = "Bird"
true = [var3, var1, var3, var3, var1, var2]
pred = [var1, var1, var3, var3, var1, var3]
confusion_matrix(true, pred, labels=[var1, var2, var3])
###Output
_____no_output_____
###Markdown
---title: "Confusion_Matrix"author: "Aishwarya"date: 2020-08-10description: "-"type: technical_notedraft: false---
###Code
from sklearn.metrics import confusion_matrix
C = "Cat"
A = "Ant"
B = "Bird"
true = [C, A, C, C, A, B]
pred = [A, A, C, C, A, C]
confusion_matrix(true, pred, labels=[A, B, C])
###Output
_____no_output_____
###Markdown
---title: "Confusion Matrix"author: "Kamal"date: 2020-08-11description: "-"type: technical_notedraft: false---
###Code
from sklearn.metrics import confusion_matrix, classification_report
C="Cat"
F="Fish"
H="Hen"
true = [C,C,C,C,C,C,C,C,C,C, F,F,F,F,F,F,F,F,F,F, H,H,H,H,H,H,H,H,H,H,H]
pred = [C,C,C,C,C,C,F,H,F,C, C,C,H,F,F,F,F,F,F,H, H,H,H,H,H,H,C,F,H,H,H]
confusion_matrix(true,pred)
print(classification_report(true,pred))
###Output
precision recall f1-score support
Cat 0.70 0.70 0.70 10
Fish 0.67 0.60 0.63 10
Hen 0.75 0.82 0.78 11
accuracy 0.71 31
macro avg 0.71 0.71 0.70 31
weighted avg 0.71 0.71 0.71 31
|
homework/Lab_6/lab6.ipynb | ###Markdown
Lab Six: Convolutional Neural Networks
Sian Xiao & Tingting Zhao
0. Dataset Selection
The Chinese MNIST (Chinese numbers handwritten characters images) dataset is downloaded from [Kaggle](https://www.kaggle.com/gpreda/chinese-mnist). It's collected and modified from a [project at Newcastle University](https://data.ncl.ac.uk/articles/dataset/Handwritten_Chinese_Numbers/10280831/1).
In the original project, one hundred Chinese nationals took part in data collection. Each participant wrote with a standard black ink pen all 15 numbers in a table with 15 designated regions drawn on a white A4 paper. This process was repeated 10 times with each participant. Each sheet was scanned at a resolution of 300x300 pixels.
This resulted in a dataset of 15000 images, each representing one character from a set of 15 characters (grouped in samples, grouped in suites, with 10 samples/volunteer and 100 volunteers).
The modified dataset (Kaggle) contains the following:
* an index file, chinese_mnist.csv
* a folder with 15,000 jpg images, sized 64 x 64.
The .csv file contains a data frame with the following attributes:
* `suite_id`: There are 100 suites in total, each created by a volunteer.
* `sample_id`: Each volunteer created 10 samples.
* `code`: Each sample contains characters from 0 to 100M (15 Chinese number characters in total). This is a code used to identify each character.
* `value`: Numerical value of each character.
* `character`: The actual Chinese character corresponding to one number.
The mapping of value, character and code is shown below:

| value | character | code |
|-----------|-----------|------|
| 0 | 零 | 1 |
| 1 | 一 | 2 |
| 2 | 二 | 3 |
| 3 | 三 | 4 |
| 4 | 四 | 5 |
| 5 | 五 | 6 |
| 6 | 六 | 7 |
| 7 | 七 | 8 |
| 8 | 八 | 9 |
| 9 | 九 | 10 |
| 10 | 十 | 11 |
| 100 | 百 | 12 |
| 1000 | 千 | 13 |
| 10000 | 万 | 14 |
| 100000000 | 亿 | 15 |

The file names are `__.jpg`.
###Code
import pandas as pd
from tensorflow import keras
from sklearn.model_selection import StratifiedKFold
import os
import numpy as np
from collections import Counter
import cv2
import warnings
warnings.filterwarnings('ignore')
from tensorflow.keras.callbacks import EarlyStopping
from tensorflow.keras.regularizers import l2
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn import metrics as mt
from sklearn.metrics import roc_curve, auc
from sklearn.decomposition import PCA
from skimage.feature import daisy
from matplotlib import pyplot as plt
import seaborn as sns
from math import ceil
from tensorflow.keras.layers import Add, Input, average, concatenate, Input, Dense, Dropout, Activation, Flatten
from tensorflow.keras.layers import Conv2D, MaxPooling2D, Reshape, SeparableConv2D, BatchNormalization
from tensorflow.keras.models import Sequential, Model
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.applications.resnet import preprocess_input
%matplotlib inline
###Output
_____no_output_____
###Markdown
1. Preparation 1.1 Metrics
###Code
data = pd.read_csv('data/chinese_mnist.csv', encoding='utf-8')
data_group = data.groupby(by=['code','value'])
data_group.character.value_counts()
image_files = list(os.listdir("data/image"))
print(f"Number of image files in folder: {len(image_files)}")
print(f"Number of instances in csv: {len(data)}")
###Output
Number of image files in folder: 15000
Number of instances in csv: 15000
###Markdown
**There are metrics designed for unbalanced datasets (like Cohen’s Kappa). We could also use precision, recall and F1 score (mainly for binary classification, or one-vs-rest classification). Since the dataset itself is organized to be balanced, and we care about each category equally, we don't need those more complicated metrics designed for unbalanced datasets. If we had a cancer recognition task, we would need to care about the false negative rate; here we just need high accuracy for all numbers. So we use accuracy as the evaluation metric.**
**A confusion matrix can be used to visualize the result.**
https://scikit-learn.org/stable/modules/model_evaluation.htmlclassification-metrics
`from sklearn.metrics import multilabel_confusion_matrix, log_loss, f1_score, accuracy_score, precision_score, recall_score`
1.2 Splits
###Code
%%time
X_list, y_list = [], []
for file in image_files:
code = int(file.split('.jpg')[0].split('_')[-1])
img_path = 'data/image/' + file
img = cv2.imread(img_path, 0) # Load image in grayscale mode
    img_re = cv2.bitwise_not(img)  # Invert so the characters are bright on a dark background
    img_new = img_re/255.0 - 0.5  # Rescale pixel values to the range [-0.5, 0.5]
X_list.append(img_new)
y_list.append(code-1) # Zero base to use keras.utils.to_categorical
X_ori = np.array(X_list)
y_ori = np.array(y_list)
### !!!Note!!!
### From here, code-1 is the index for value now!!!
###Output
CPU times: user 2.25 s, sys: 2.13 s, total: 4.38 s
Wall time: 20.7 s
###Markdown
| value | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 100 | 1E3 | 1E4 | 1E8 |
|-----------|----|----|----|----|----|----|----|----|----|----|----|-----|-----|-----|-----|
| code | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 | 15 |
| character | 零 | 一 | 二 | 三 | 四 | 五 | 六 | 七 | 八 | 九 | 十 | 百 | 千 | 万 | 亿 |
| y | 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9 | 10 | 11 | 12 | 13 | 14 |
###Code
NUM_CLASSES = 15
img_wh = 64
X = np.expand_dims(X_ori.reshape((-1,img_wh,img_wh)), axis=3)
print(X[0].shape)
y = keras.utils.to_categorical(y_ori, NUM_CLASSES)
print(y[:10])
###Output
(64, 64, 1)
[[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 1. 0. 0.]
[0. 1. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
[0. 0. 0. 0. 0. 0. 1. 0. 0. 0. 0. 0. 0. 0. 0.]]
###Markdown
**We choose the Stratified K-Folds cross-validator because it preserves the percentage of samples for each class in every fold. Since we hope to classify all characters, each class should be represented equally when we train and evaluate.**
###Code
skf = StratifiedKFold(n_splits=4)
train_idx_list, test_idx_list = [], []
# have to use y_ori ('multiclass') to use StratifiedKFold
# can't use y, but index is universal for y and y_ori
for train_idx, test_idx in skf.split(X, y_ori):
train_idx_list.append(train_idx)
test_idx_list.append(test_idx)
for i in range(4):
    fold = list(y_ori[train_idx_list[i]])  # use fold i rather than always the first fold
    print(Counter(fold))
###Output
Counter({11: 750, 14: 750, 9: 750, 13: 750, 7: 750, 10: 750, 12: 750, 8: 750, 2: 750, 3: 750, 5: 750, 4: 750, 0: 750, 6: 750, 1: 750})
Counter({11: 750, 14: 750, 9: 750, 13: 750, 7: 750, 10: 750, 12: 750, 8: 750, 2: 750, 3: 750, 5: 750, 4: 750, 0: 750, 6: 750, 1: 750})
Counter({11: 750, 14: 750, 9: 750, 13: 750, 7: 750, 10: 750, 12: 750, 8: 750, 2: 750, 3: 750, 5: 750, 4: 750, 0: 750, 6: 750, 1: 750})
Counter({11: 750, 14: 750, 9: 750, 13: 750, 7: 750, 10: 750, 12: 750, 8: 750, 2: 750, 3: 750, 5: 750, 4: 750, 0: 750, 6: 750, 1: 750})
###Markdown
**It's perfectly balanced.**
2. Modeling
2.1 Data expansion
**Since in reality the characters may be captured in slightly different directions or distorted, we turn on the rotation, width_shift, height_shift and zoom options. Because flipping a character will not confuse it with another character, flipping is meaningless in this case.**
**Let's use the first fold as an example.**
###Code
X_train, X_test = X[train_idx_list[0]], X[test_idx_list[0]]
y_train, y_test = y[train_idx_list[0]], y[test_idx_list[0]] # note here we have to use one hot encoded y
%%time
datagen = ImageDataGenerator(
featurewise_center=False,
samplewise_center=False,
featurewise_std_normalization=False,
samplewise_std_normalization=False,
zca_whitening=False,
rotation_range=5, # people would write characters slightly different in direction
width_shift_range=4, # same as 0.046875, allow move within 4 pixels.
height_shift_range=4, # same as 0.046875, allow move within 4 pixels.
shear_range=0., # Float. Shear Intensity (Shear angle in counter-clockwise direction as radians)
zoom_range=0.03, # different people write characters of different sizes
channel_shift_range=0.,
fill_mode='nearest', # I think it's the same to use 'constant' since edges of picture are 1.0
cval=1.,
horizontal_flip=False, # flip a character is meaningless in our case
vertical_flip=False, # flip a character is meaningless in our case
rescale=None)
datagen.fit(X_train)
tmps = datagen.flow(X_train, y_train, batch_size=1)
labels = ["零","一","二","三","四","五","六","七","八","九","十","百","千","万","亿"]
for tmp in tmps:
plt.imshow(tmp[0].squeeze(), cmap=plt.cm.gray)
plt.title(np.argmax(tmp[1])) # didn't install Chinese font in my matplotlib, can't use labels[].
break
###Output
_____no_output_____
###Markdown
2.2 CNN architectures
**Here we used AlexNet-style (Alex), ResNet-style and Ensemble Nets (EnsNet) architectures. For each architecture we also built a second variant that increases the number of filters by 8 times and adds one more layer to the fully connected MLP head.**
2.2.1 AlexNet
**Alex_1**
###Code
### AlexNet style convolutional phase ###
Alex_1 = Sequential(name='Alex_1')
Alex_1.add(Conv2D(filters=8, input_shape = (img_wh,img_wh,1),kernel_size=(3,3),
padding='same', activation='relu', data_format="channels_last"))
Alex_1.add(Conv2D(filters=16, kernel_size=(3,3), padding='same', activation='relu'))
Alex_1.add(MaxPooling2D(pool_size=(2,2), data_format="channels_last"))
Alex_1.add(Dropout(0.25))
Alex_1.add(Flatten())
Alex_1.add(Dense(128, activation='relu'))
Alex_1.add(Dropout(0.5))
Alex_1.add(Dense(NUM_CLASSES, activation='softmax'))
Alex_1.compile(loss='categorical_crossentropy', # 'categorical_crossentropy' 'mean_squared_error'
optimizer='rmsprop', # 'adadelta' 'rmsprop'
metrics=['accuracy']
)
Alex_1.summary()
%%time
# the flow method yields batches of images indefinitely, with the given transformations
history_Alex_1 = Alex_1.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1,
validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_Alex_1.history['loss'],label='Training')
plt.plot(history_Alex_1.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_Alex_1.history['accuracy'],label='Training')
plt.plot(history_Alex_1.history['val_accuracy'],label='Testing')
plt.ylabel('Accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
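###Markdown
As mentioned in the metrics discussion, a confusion matrix helps visualize which characters get confused. Here is a small added sketch for the trained `Alex_1` model on the held-out fold (using `sklearn.metrics` imported as `mt` and seaborn, both imported above):
###Code
# Predicted class = argmax over the 15 softmax outputs
y_hat = np.argmax(Alex_1.predict(X_test), axis=1)
y_true = np.argmax(y_test, axis=1)
cm = mt.confusion_matrix(y_true, y_hat)
plt.figure(figsize=(8, 8))
sns.heatmap(cm, annot=True, fmt='d', cbar=False)
plt.xlabel('Predicted code index')
plt.ylabel('True code index')
plt.show()
###Output
_____no_output_____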
###Markdown
**Alex_2**
###Code
### AlexNet style convolutional phase ###
Alex_2 = Sequential(name='Alex_2')
Alex_2.add(Conv2D(filters=64, input_shape = (img_wh,img_wh,1),kernel_size=(3,3),
padding='same', activation='relu', data_format="channels_last"))
Alex_2.add(Conv2D(filters=128, kernel_size=(3,3), padding='same', activation='relu'))
Alex_2.add(MaxPooling2D(pool_size=(2,2), data_format="channels_last"))
# add one layer on flattened output
Alex_2.add(Dropout(0.20))
Alex_2.add(Flatten())
Alex_2.add(Dense(128, activation='relu'))
Alex_2.add(Dropout(0.40))
Alex_2.add(Dense(64, activation='relu'))
Alex_2.add(Dropout(0.60))
Alex_2.add(Dense(NUM_CLASSES, activation='softmax'))
# Let's train the model
Alex_2.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy']
)
Alex_2.summary()
%%time
history_Alex_2 = Alex_2.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1,
validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_Alex_2.history['loss'],label='Training')
plt.plot(history_Alex_2.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_Alex_2.history['accuracy'],label='Training')
plt.plot(history_Alex_2.history['val_accuracy'],label='Testing')
plt.ylabel('Accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
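###Markdown
A quick added summary comparing the best validation accuracy reached by the two AlexNet-style variants (the `history_*` objects are those returned by the training cells above):
###Code
for model_name, hist in [('Alex_1', history_Alex_1), ('Alex_2', history_Alex_2)]:
    print('{0}: best val_accuracy = {1:.4f}'.format(model_name, max(hist.history['val_accuracy'])))
###Output
_____no_output_____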
###Markdown
2.2.2 ResNet**ResNet_1**
###Code
### ResNet-Style Bypass ###
l2_lambda= 0.000001
input_holder = Input(shape=(img_wh, img_wh, 1))
x = Conv2D(filters=8, input_shape = (img_wh,img_wh,1), kernel_size=(3,3),
kernel_initializer='he_uniform', kernel_regularizer=l2(l2_lambda),
padding='same', activation='relu', data_format="channels_last")(input_holder)
x = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Conv2D(filters=8, kernel_size=(3,3), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x_split = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Conv2D(filters=16, kernel_size=(1,1), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same', activation='relu',
data_format="channels_last")(x_split)
x = Conv2D(filters=16, kernel_size=(3,3), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x = Conv2D(filters=8, kernel_size=(1,1), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x = Add()([x, x_split])
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Flatten()(x)
x = Dropout(0.25)(x)
x = Dense(256)(x)
x = Activation("relu")(x)
x = Dropout(0.5)(x)
x = Dense(NUM_CLASSES)(x)
x = Activation('softmax')(x)
ResNet_1 = Model(inputs=input_holder, outputs=x, name='ResNet_1')
ResNet_1.compile(loss='categorical_crossentropy', # 'categorical_crossentropy' 'mean_squared_error'
optimizer='rmsprop', # 'adadelta' 'rmsprop'
metrics=['accuracy']
)
ResNet_1.summary()
%%time
history_ResNet_1 = ResNet_1.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1,
validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_ResNet_1.history['loss'],label='Training')
plt.plot(history_ResNet_1.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_ResNet_1.history['accuracy'],label='Training ')
plt.plot(history_ResNet_1.history['val_accuracy'],label='Testing')
plt.ylabel('Accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**ResNet_2**
###Code
%%time
input_holder = Input(shape=(img_wh, img_wh, 1))
x = Conv2D(filters=64, input_shape = (img_wh,img_wh,1), kernel_size=(3,3),
kernel_initializer='he_uniform', kernel_regularizer=l2(l2_lambda),
padding='same', activation='relu', data_format="channels_last")(input_holder)
x = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Conv2D(filters=64, kernel_size=(3,3), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x_split = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Conv2D(filters=128, kernel_size=(1,1), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same', activation='relu',
data_format="channels_last")(x_split)
x = Conv2D(filters=128, kernel_size=(3,3), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x = Conv2D(filters=64, kernel_size=(1,1), kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda), padding='same',
activation='relu', data_format="channels_last")(x)
x = Add()([x, x_split])
x = Activation("relu")(x)
x = MaxPooling2D(pool_size=(2,2), data_format="channels_last")(x)
x = Flatten()(x)
x = Dropout(0.20)(x)
x = Flatten()(x)
x = Dense(512)(x)
x = Activation("relu")(x)
x = Dropout(0.40)(x)
x = Flatten()(x)
x = Dense(256)(x)
x = Activation("relu")(x)
x = Dropout(0.60)(x)
x = Dense(NUM_CLASSES)(x)
x = Activation('softmax')(x)
ResNet_2 = Model(inputs=input_holder, outputs=x, name='ResNet_2')
ResNet_2.compile(loss='categorical_crossentropy', # 'categorical_crossentropy' 'mean_squared_error'
optimizer='rmsprop', # 'adadelta' 'rmsprop'
metrics=['accuracy']
)
ResNet_2.summary()
%%time
history_ResNet_2 = ResNet_2.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1,
validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_ResNet_2.history['loss'],label='Training')
plt.plot(history_ResNet_2.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_ResNet_2.history['accuracy'],label='Training')
plt.plot(history_ResNet_2.history['val_accuracy'],label='Testing')
plt.ylabel('Accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2.2.3 EnsNet**EnsNet_1**
###Code
### Ensemble Nets ###
num_ensembles = 3
l2_lambda = 0.000001
input_holder = Input(shape=(img_wh, img_wh, 1))
# start with a conv layer
x = Conv2D(filters=16,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu', data_format="channels_last")(input_holder)
x = Conv2D(filters=16,
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu')(x)
input_conv = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
branches = []
for _ in range(num_ensembles):
# start using NiN (MLPConv)
x = Conv2D(filters=16,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(input_conv)
x = Conv2D(filters=16,
kernel_size=(1,1),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu', data_format="channels_last")(x)
x = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
x = Conv2D(filters=32,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(x)
x = Conv2D(filters=32,
kernel_size=(1,1),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(x)
x = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
# add one layer on flattened output
x = Flatten()(x)
x = Dropout(0.25)(x) # add some dropout for regularization after conv layers
x = Dense(32,
activation='relu',
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
x = Dense(NUM_CLASSES,
activation='relu',
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
# now add this branch onto the master list
branches.append(x)
# that's it, we just need to average the results
x = concatenate(branches)
x = Dense(NUM_CLASSES,
activation='softmax',
kernel_initializer='glorot_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
EnsNet_1 = Model(inputs=input_holder, outputs=x, name='EnsNet_1')
EnsNet_1.compile(loss='categorical_crossentropy', # 'categorical_crossentropy' 'mean_squared_error'
optimizer='rmsprop', # 'adadelta' 'rmsprop'
metrics=['accuracy']
)
EnsNet_1.summary()
%%time
history_EnsNet_1 = EnsNet_1.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1, validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_EnsNet_1.history['loss'],label='Training')
plt.plot(history_EnsNet_1.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_EnsNet_1.history['accuracy'],label='Training')
plt.plot(history_EnsNet_1.history['val_accuracy'],label='Testing')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**EnsNet_2**
###Code
### Ensemble Nets ###
num_ensembles = 3
l2_lambda = 0.000001
input_holder = Input(shape=(img_wh, img_wh, 1))
x = Conv2D(filters=64,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu', data_format="channels_last")(input_holder)
x = Conv2D(filters=64,
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu')(x)
input_conv = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
branches = []
for _ in range(num_ensembles):
x = Conv2D(filters=64,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(input_conv)
x = Conv2D(filters=64,
kernel_size=(1,1),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='relu', data_format="channels_last")(x)
x = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
x = Conv2D(filters=128,
input_shape = (img_wh,img_wh,1),
kernel_size=(3,3),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(x)
x = Conv2D(filters=128,
kernel_size=(1,1),
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda),
padding='same',
activation='linear', data_format="channels_last")(x)
x = MaxPooling2D(pool_size=(2, 2), data_format="channels_last")(x)
x = Flatten()(x)
x = Dropout(0.25)(x)
x = Dense(128,
activation='relu',
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
x = Dropout(0.50)(x)
x = Dense(64,
activation='relu',
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
x = Dense(NUM_CLASSES,
activation='relu',
kernel_initializer='he_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
branches.append(x)
x = concatenate(branches)
x = Dense(NUM_CLASSES,
activation='softmax',
kernel_initializer='glorot_uniform',
kernel_regularizer=l2(l2_lambda)
)(x)
EnsNet_2 = Model(inputs=input_holder, outputs=x, name='EnsNet_2')
EnsNet_2.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy']
)
EnsNet_2.summary()
%%time
history_EnsNet_2 = EnsNet_2.fit_generator(datagen.flow(X_train, y_train, batch_size=64),
steps_per_epoch=int(len(X_train)/64),
epochs=30, verbose=1, validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_EnsNet_2.history['loss'],label='Training')
plt.plot(history_EnsNet_2.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_EnsNet_2.history['accuracy'],label='Training')
plt.plot(history_EnsNet_2.history['val_accuracy'],label='Testing')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
2.3 Visualization and Comparison 2.3.1 Visualization
###Code
# visualize
def visualize_models(X_test, y_test, model_names=[], labels='auto'):
assert isinstance(model_names, list)
assert all(isinstance(name, str) for name in model_names)
assert isinstance(y_test[0], np.int64)
height = ceil(len(model_names)/2)
plt.figure(figsize=(20, 6*height))
for i, name in enumerate(model_names):
model = eval(name)
yhat_model = np.argmax(model.predict(X_test), axis=1)
acc_model = mt.accuracy_score(y_test,yhat_model)
plt.subplot(height,2,i+1)
cm = mt.confusion_matrix(y_test,yhat_model)
cm = cm/np.sum(cm,axis=1)[:,np.newaxis]
sns.heatmap(cm, annot=True, fmt='.2f',xticklabels=labels,yticklabels=labels,
cmap = sns.color_palette("YlOrBr", as_cmap=True))
plt.title(f'{str(name)}: {acc_model}', fontsize=20)
plt.tight_layout()
# have to use y_ori since it's 'multiclass' instead of onehot encoded array
visualize_models(X_test, y_ori[test_idx_list[0]], labels='auto',
model_names=['Alex_1', 'Alex_2', 'ResNet_1', 'ResNet_2', 'EnsNet_1', 'EnsNet_2'])
###Output
_____no_output_____
###Markdown
**As we can see, EnsNet seems to perform better than ResNet, while the Alex models got the worst results overall. Let's use a statistical method to make the comparison rigorous.** 2.3.2 Comparison
###Code
# Define a function for McNemar's Test
# confidence: 0.90 0.95 0.99
# 1DOF critical value: 2.706 3.841 6.635
def mn_test(ypred1, ypred2, ytrue):
tab_b = sum((ypred1 == ytrue) & (ypred2 != ytrue))
tab_c = sum((ypred1 != ytrue) & (ypred2 == ytrue))
if tab_b + tab_c == 0:
chi2 = 0
else:
chi2 = (abs(tab_b - tab_c)-1)**2 / (tab_b + tab_c)
return round(chi2,4)
###Output
_____no_output_____
###Markdown
**We would first like to compare the same architecture trained with different parameters.**
###Code
yhat_Alex_1 = np.argmax(Alex_1.predict(X_test), axis=1)
yhat_Alex_2 = np.argmax(Alex_2.predict(X_test), axis=1)
yhat_ResNet_1 = np.argmax(ResNet_1.predict(X_test), axis=1)
yhat_ResNet_2 = np.argmax(ResNet_2.predict(X_test), axis=1)
yhat_EnsNet_1 = np.argmax(EnsNet_1.predict(X_test), axis=1)
yhat_EnsNet_2 = np.argmax(EnsNet_2.predict(X_test), axis=1)
print("Alex 1 vs Alex 2: Test statistic",mn_test(yhat_Alex_1, yhat_Alex_2, y_ori[test_idx_list[0]]))
print("ResNet 1 vs ResNet 2: Test statistic",mn_test(yhat_ResNet_1, yhat_ResNet_2, y_ori[test_idx_list[0]]))
print("EnsNet 1 vs EnsNet 2: Test statistic",mn_test(yhat_EnsNet_1, yhat_EnsNet_2, y_ori[test_idx_list[0]]))
###Output
Alex 1 vs Alex 2: Test statistic 32.7935
ResNet 1 vs ResNet 2: Test statistic 30.2222
EnsNet 1 vs EnsNet 2: Test statistic 0.3556
###Markdown
**As we can see, `ResNet_2` and `Alex_2` differ significantly from `ResNet_1` and `Alex_1`, respectively. Since `ResNet_2` and `Alex_1` have the higher accuracy within each pair, we take them as the better-performing models.****We can't conclude that `EnsNet_2` is actually better than `EnsNet_1`, since its $\chi^2$ value is smaller than 2.706, but it has slightly better accuracy, so let's use `EnsNet_2` as the representative.****Let's compare `Alex_1`, `ResNet_2` and `EnsNet_2`.**
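As a quick check (a sketch that assumes `scipy` is available in this environment; it is not part of the original notebook), the statistics printed above can be converted into p-values, since McNemar's statistic follows a $\chi^2$ distribution with one degree of freedom:
```python
from scipy.stats import chi2

# McNemar statistics reported above, converted to p-values (1 degree of freedom)
for label, stat in [("Alex 1 vs Alex 2", 32.7935),
                    ("ResNet 1 vs ResNet 2", 30.2222),
                    ("EnsNet 1 vs EnsNet 2", 0.3556)]:
    print(f"{label}: p = {chi2.sf(stat, df=1):.4f}")
```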
###Code
print("ResNet 2 vs EnsNet 2: Test statistic",mn_test(yhat_ResNet_2, yhat_EnsNet_2, y_ori[test_idx_list[0]]))
print("Alex 1 vs EnsNet 2: Test statistic",mn_test(yhat_Alex_1, yhat_EnsNet_2, y_ori[test_idx_list[0]]))
print("Alex 1 vs ResNet 2: Test statistic",mn_test(yhat_Alex_1, yhat_ResNet_2, y_ori[test_idx_list[0]]))
###Output
ResNet 2 vs EnsNet 2: Test statistic 9.4464
Alex 1 vs EnsNet 2: Test statistic 134.453
Alex 1 vs ResNet 2: Test statistic 90.2798
###Markdown
**The $\chi^2$ values for all three comparisons are much larger than 6.635, so we can say the models differ with 99% confidence. Since EnsNet_2 also has the highest accuracy, we take EnsNet_2 as the best model.** 2.4 CNN vs. MLP**Let's first implement an MLP. Here we try raw pixel data, PCA-reduced data, and DAISY feature extraction as inputs to the MLP model.** 2.4.1 Raw data
###Code
# Raw data
mlp = Sequential()
mlp.add( Flatten() ) # make images flat for the MLP input
mlp.add( Dense(input_dim=X_train.shape[1], units=30,
activation='relu') )
mlp.add( Dense(units=15, activation='relu') )
mlp.add( Dense(NUM_CLASSES) )
mlp.add( Activation('softmax') )
mlp.compile(loss='mean_squared_error',
optimizer='rmsprop',
metrics=['accuracy'])
history_MLP = mlp.fit(X_train, y_train, batch_size=64,
epochs=50, verbose=1, validation_data=(X_test,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
###Output
Epoch 1/50
176/176 [==============================] - 1s 3ms/step - loss: 0.0630 - accuracy: 0.0669 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 2/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0637 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 3/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0630 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 4/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0675 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 5/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0605 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 6/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0643 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 7/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0656 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 8/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0717 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 9/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0634 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 10/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0705 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 11/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0605 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 12/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0647 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 13/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0682 - val_loss: 0.0622 - val_accuracy: 0.0667
Epoch 14/50
176/176 [==============================] - 0s 2ms/step - loss: 0.0622 - accuracy: 0.0640 - val_loss: 0.0622 - val_accuracy: 0.0667
###Markdown
**Using raw data as input, the accuracy is terrible.** 2.4.2 PCA reduced data
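An aside on the poor raw-pixel result above (a sketch, not part of the original notebook): that MLP is compiled with a mean-squared-error loss on one-hot targets, which tends to train poorly for softmax classifiers. One thing to try is the same architecture compiled with cross-entropy instead:
```python
# Hypothetical variant of the raw-data MLP above, changing only the loss function.
# Assumes Sequential, Dense, Flatten, Activation, X_train and NUM_CLASSES are already in scope.
mlp_xent = Sequential()
mlp_xent.add(Flatten())
mlp_xent.add(Dense(input_dim=X_train.shape[1], units=30, activation='relu'))
mlp_xent.add(Dense(units=15, activation='relu'))
mlp_xent.add(Dense(NUM_CLASSES))
mlp_xent.add(Activation('softmax'))
mlp_xent.compile(loss='categorical_crossentropy',  # instead of 'mean_squared_error'
                 optimizer='rmsprop',
                 metrics=['accuracy'])
```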
###Code
# PCA
n_components = 750
pca= PCA(n_components=n_components).fit(X_train.reshape(X_train.shape[0],-1))
X_train_pca =pca.transform(X_train.reshape(X_train.shape[0],-1))
X_test_pca =pca.transform(X_test.reshape(X_test.shape[0],-1))
mlp_pca = Sequential()
mlp_pca.add( Flatten() )
mlp_pca.add( Dense(input_dim=X_train_pca.shape[1], units=30,
activation='relu') )
mlp_pca.add( Dense(units=15, activation='relu') )
mlp_pca.add( Dense(NUM_CLASSES) )
mlp_pca.add( Activation('softmax') )
mlp_pca.compile(loss='mean_squared_error',
optimizer='rmsprop',
metrics=['accuracy'])
history_MLP_pca = mlp_pca.fit(X_train_pca, y_train, batch_size=64,
epochs=150, verbose=1, validation_data=(X_test_pca,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
###Output
Epoch 1/150
176/176 [==============================] - 1s 2ms/step - loss: 0.0617 - accuracy: 0.0984 - val_loss: 0.0593 - val_accuracy: 0.1712
Epoch 2/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0577 - accuracy: 0.2102 - val_loss: 0.0557 - val_accuracy: 0.2885
Epoch 3/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0531 - accuracy: 0.3403 - val_loss: 0.0511 - val_accuracy: 0.3979
Epoch 4/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0474 - accuracy: 0.4544 - val_loss: 0.0470 - val_accuracy: 0.4456
Epoch 5/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0431 - accuracy: 0.5037 - val_loss: 0.0447 - val_accuracy: 0.4752
Epoch 6/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0394 - accuracy: 0.5493 - val_loss: 0.0431 - val_accuracy: 0.5011
Epoch 7/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0372 - accuracy: 0.5914 - val_loss: 0.0420 - val_accuracy: 0.5131
Epoch 8/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0354 - accuracy: 0.6078 - val_loss: 0.0410 - val_accuracy: 0.5293
Epoch 9/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0327 - accuracy: 0.6458 - val_loss: 0.0402 - val_accuracy: 0.5405
Epoch 10/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0319 - accuracy: 0.6568 - val_loss: 0.0397 - val_accuracy: 0.5509
Epoch 11/150
176/176 [==============================] - 0s 1ms/step - loss: 0.0302 - accuracy: 0.6820 - val_loss: 0.0390 - val_accuracy: 0.5555
Epoch 12/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0287 - accuracy: 0.7003 - val_loss: 0.0384 - val_accuracy: 0.5640
Epoch 13/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0269 - accuracy: 0.7190 - val_loss: 0.0379 - val_accuracy: 0.5741
Epoch 14/150
176/176 [==============================] - 0s 1ms/step - loss: 0.0257 - accuracy: 0.7356 - val_loss: 0.0375 - val_accuracy: 0.5867
Epoch 15/150
176/176 [==============================] - 0s 1ms/step - loss: 0.0246 - accuracy: 0.7507 - val_loss: 0.0373 - val_accuracy: 0.5864
Epoch 16/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0236 - accuracy: 0.7596 - val_loss: 0.0370 - val_accuracy: 0.5971
Epoch 17/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0225 - accuracy: 0.7735 - val_loss: 0.0367 - val_accuracy: 0.5997
Epoch 18/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0216 - accuracy: 0.7853 - val_loss: 0.0367 - val_accuracy: 0.5973
Epoch 19/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0206 - accuracy: 0.7915 - val_loss: 0.0367 - val_accuracy: 0.6040
Epoch 20/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0198 - accuracy: 0.8076 - val_loss: 0.0365 - val_accuracy: 0.6083
Epoch 21/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0185 - accuracy: 0.8200 - val_loss: 0.0366 - val_accuracy: 0.6016
Epoch 22/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0175 - accuracy: 0.8292 - val_loss: 0.0366 - val_accuracy: 0.6053
Epoch 23/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0175 - accuracy: 0.8291 - val_loss: 0.0365 - val_accuracy: 0.6099
Epoch 24/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0167 - accuracy: 0.8398 - val_loss: 0.0365 - val_accuracy: 0.6109
Epoch 25/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0162 - accuracy: 0.8454 - val_loss: 0.0366 - val_accuracy: 0.6120
Epoch 26/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0159 - accuracy: 0.8448 - val_loss: 0.0366 - val_accuracy: 0.6144
Epoch 27/150
176/176 [==============================] - 0s 2ms/step - loss: 0.0148 - accuracy: 0.8558 - val_loss: 0.0368 - val_accuracy: 0.6141
###Markdown
**From lab 2, we know that the first 750 principal components are enough to represent the data, so here we use those 750 components as input, and the accuracy improves greatly. Let's try using DAISY features next.** 2.4.3 Daisy extraction data
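As a quick sanity check on the choice of 750 components (a sketch that reuses the `pca` object fitted in the previous section; not part of the original notebook):
```python
import numpy as np

# Cumulative fraction of variance captured by the retained components
cum_var = np.cumsum(pca.explained_variance_ratio_)
print(f"Variance explained by {len(cum_var)} components: {cum_var[-1]:.4f}")
```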
###Code
# daisy
def apply_daisy(row,shape):
feat = daisy(row.reshape(shape), step=5, radius=5,
rings=2, histograms=8, orientations=8,
visualize=False)
return feat.reshape((-1))
X_train_daisy = np.apply_along_axis(apply_daisy, 1, X_train.reshape(-1,64*64),(64,64))
print(X_train_daisy.shape)
X_test_daisy = np.apply_along_axis(apply_daisy, 1, X_test.reshape(-1,64*64),(64,64))
print(X_test_daisy.shape)
mlp_daisy = Sequential()
mlp_daisy.add( Flatten() )
mlp_daisy.add( Dense(input_dim=X_train_daisy.shape[1], units=30,
activation='relu') )
mlp_daisy.add( Dense(units=15, activation='relu') )
mlp_daisy.add( Dense(NUM_CLASSES) )
mlp_daisy.add( Activation('softmax') )
mlp_daisy.compile(loss='mean_squared_error',
optimizer='rmsprop',
metrics=['accuracy'])
history_MLP_daisy= mlp_daisy.fit(X_train_daisy, y_train, batch_size=64,
epochs=150, verbose=1, validation_data=(X_test_daisy,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
###Output
Epoch 1/150
176/176 [==============================] - 1s 5ms/step - loss: 0.0580 - accuracy: 0.2323 - val_loss: 0.0461 - val_accuracy: 0.4573
Epoch 2/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0433 - accuracy: 0.5178 - val_loss: 0.0361 - val_accuracy: 0.6229
Epoch 3/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0342 - accuracy: 0.6438 - val_loss: 0.0308 - val_accuracy: 0.6840
Epoch 4/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0284 - accuracy: 0.7053 - val_loss: 0.0264 - val_accuracy: 0.7293
Epoch 5/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0249 - accuracy: 0.7486 - val_loss: 0.0248 - val_accuracy: 0.7419
Epoch 6/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0218 - accuracy: 0.7795 - val_loss: 0.0214 - val_accuracy: 0.7843
Epoch 7/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0196 - accuracy: 0.8106 - val_loss: 0.0202 - val_accuracy: 0.7933
Epoch 8/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0187 - accuracy: 0.8153 - val_loss: 0.0192 - val_accuracy: 0.8048
Epoch 9/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0172 - accuracy: 0.8360 - val_loss: 0.0176 - val_accuracy: 0.8240
Epoch 10/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0157 - accuracy: 0.8484 - val_loss: 0.0164 - val_accuracy: 0.8381
Epoch 11/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0148 - accuracy: 0.8625 - val_loss: 0.0169 - val_accuracy: 0.8235
Epoch 12/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0143 - accuracy: 0.8587 - val_loss: 0.0163 - val_accuracy: 0.8360
Epoch 13/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0133 - accuracy: 0.8702 - val_loss: 0.0149 - val_accuracy: 0.8413
Epoch 14/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0126 - accuracy: 0.8833 - val_loss: 0.0141 - val_accuracy: 0.8568
Epoch 15/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0116 - accuracy: 0.8918 - val_loss: 0.0134 - val_accuracy: 0.8648
Epoch 16/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0111 - accuracy: 0.8997 - val_loss: 0.0132 - val_accuracy: 0.8680
Epoch 17/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0106 - accuracy: 0.9069 - val_loss: 0.0123 - val_accuracy: 0.8829
Epoch 18/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0100 - accuracy: 0.9107 - val_loss: 0.0124 - val_accuracy: 0.8768
Epoch 19/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0095 - accuracy: 0.9134 - val_loss: 0.0119 - val_accuracy: 0.8789
Epoch 20/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0087 - accuracy: 0.9244 - val_loss: 0.0120 - val_accuracy: 0.8832
Epoch 21/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0088 - accuracy: 0.9206 - val_loss: 0.0112 - val_accuracy: 0.8899
Epoch 22/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0086 - accuracy: 0.9224 - val_loss: 0.0104 - val_accuracy: 0.9035
Epoch 23/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0084 - accuracy: 0.9268 - val_loss: 0.0105 - val_accuracy: 0.8979
Epoch 24/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0080 - accuracy: 0.9287 - val_loss: 0.0108 - val_accuracy: 0.8920
Epoch 25/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0075 - accuracy: 0.9320 - val_loss: 0.0097 - val_accuracy: 0.9077
Epoch 26/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0076 - accuracy: 0.9338 - val_loss: 0.0099 - val_accuracy: 0.9000
Epoch 27/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0072 - accuracy: 0.9356 - val_loss: 0.0094 - val_accuracy: 0.9053
Epoch 28/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0065 - accuracy: 0.9433 - val_loss: 0.0092 - val_accuracy: 0.9149
Epoch 29/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0066 - accuracy: 0.9409 - val_loss: 0.0091 - val_accuracy: 0.9096
Epoch 30/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0063 - accuracy: 0.9447 - val_loss: 0.0094 - val_accuracy: 0.9061
Epoch 31/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0062 - accuracy: 0.9446 - val_loss: 0.0099 - val_accuracy: 0.9016
Epoch 32/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0059 - accuracy: 0.9495 - val_loss: 0.0086 - val_accuracy: 0.9165
Epoch 33/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0055 - accuracy: 0.9535 - val_loss: 0.0085 - val_accuracy: 0.9120
Epoch 34/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0052 - accuracy: 0.9522 - val_loss: 0.0099 - val_accuracy: 0.8997
Epoch 35/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0055 - accuracy: 0.9521 - val_loss: 0.0083 - val_accuracy: 0.9168
Epoch 36/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0051 - accuracy: 0.9570 - val_loss: 0.0078 - val_accuracy: 0.9227
Epoch 37/150
176/176 [==============================] - 1s 3ms/step - loss: 0.0051 - accuracy: 0.9552 - val_loss: 0.0082 - val_accuracy: 0.9184
Epoch 38/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0049 - accuracy: 0.9568 - val_loss: 0.0087 - val_accuracy: 0.9136
Epoch 39/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0048 - accuracy: 0.9594 - val_loss: 0.0088 - val_accuracy: 0.9104
Epoch 40/150
176/176 [==============================] - 0s 3ms/step - loss: 0.0047 - accuracy: 0.9583 - val_loss: 0.0081 - val_accuracy: 0.9157
###Markdown
**Using DAISY features, the accuracy improves even further!**
###Code
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_MLP_daisy.history['loss'],label='Training')
plt.plot(history_MLP_daisy.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_MLP_daisy.history['accuracy'],label='Training')
plt.plot(history_MLP_daisy.history['val_accuracy'],label='Testing')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
**Let's compare the performance.**
###Code
# Compute ROC curve and ROC area for each class
def roc(y_score):
n_classes =15
fpr = dict()
tpr = dict()
roc_auc = dict()
for i in range(n_classes):
fpr[i], tpr[i], _ = roc_curve(y_test[:, i], y_score[:, i])
roc_auc[i] = auc(fpr[i], tpr[i])
# Compute micro-average ROC curve and ROC area
fpr["micro"], tpr["micro"], _ = roc_curve(y_test.ravel(), y_score.ravel())
roc_auc["micro"] = auc(fpr["micro"], tpr["micro"])
return fpr,tpr,roc_auc
mlp_fpr,mlp_tpr,mlp_roc_auc=roc(mlp_daisy.predict_proba(X_test_daisy))
EnsNet_fpr,EnsNet_tpr,EnsNet_roc_auc=roc(EnsNet_2.predict(X_test))
ResNet_fpr,ResNet_tpr,ResNet_roc_auc=roc(ResNet_2.predict(X_test))
# Plot ROC curve
plt.figure(figsize=(12, 12))
plt.plot(mlp_fpr["micro"], mlp_tpr["micro"],
label='MLP ROC curve (area = {0:0.2f})'
''.format(mlp_roc_auc["micro"]))
plt.plot(EnsNet_fpr["micro"], EnsNet_tpr["micro"],
label='EnsNet ROC curve (area = {0:0.2f})'
''.format(EnsNet_roc_auc["micro"]))
plt.plot(ResNet_fpr["micro"], ResNet_tpr["micro"],
label='ResNet ROC curve (area = {0:0.2f})'
''.format(ResNet_roc_auc["micro"]))
plt.plot([0, 1], [0, 1], 'k--')
plt.xlim([0.0, 1.05])
plt.ylim([0.0, 1.05])
plt.xlabel('False Positive Rate')
plt.ylabel('True Positive Rate')
plt.title(' Receiver operating characteristic to multi-class')
plt.legend(loc="lower right")
plt.show()
###Output
_____no_output_____
###Markdown
**The EnsNet and ResNet models are essentially perfect and the MLP model is almost perfect; this dataset seems too easy for ROC curves to separate the models.**
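For a single summary number per model (a sketch reusing the per-class AUC dictionaries computed above):
```python
# Macro-average the per-class AUCs for each model (15 classes)
for label, auc_dict in [("MLP", mlp_roc_auc), ("EnsNet", EnsNet_roc_auc), ("ResNet", ResNet_roc_auc)]:
    macro_auc = np.mean([auc_dict[i] for i in range(15)])
    print(f"{label} macro-average AUC: {macro_auc:.4f}")
```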
###Code
yhat_MLP = np.argmax(mlp_daisy.predict(X_test_daisy), axis=1)
print("MLP_DAISY vs EnsNet_2: Test statistic",mn_test(yhat_MLP, yhat_EnsNet_2, y_ori[test_idx_list[0]]))
###Output
MLP_DAISY vs EnsNet_2: Test statistic 257.4031
###Markdown
**The $\chi^2$ value for MLP_DAISY vs EnsNet_2 is much larger than 6.635, so we can conclude that EnsNet_2 is better than the MLP with 99% confidence.** 3. Transfer learning
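The cells below use the frozen ResNet50 purely as a fixed feature extractor: its convolutional output is computed once with `predict`, and a small dense head is trained on the cached features. An alternative, sketched here under the assumption of a `tensorflow.keras` environment and not used in this notebook, is to wire the frozen base and the head into a single model and train it end to end:
```python
from tensorflow.keras.applications import ResNet50
from tensorflow.keras.layers import Input, Flatten, Dense, Dropout
from tensorflow.keras.models import Model

# Frozen convolutional base (weights stay fixed during training)
base = ResNet50(weights='imagenet', include_top=False, input_shape=(64, 64, 3))
base.trainable = False

inputs = Input(shape=(64, 64, 3))
x = base(inputs, training=False)   # keep batch-norm layers in inference mode
x = Flatten()(x)
x = Dense(200, activation='relu')(x)
x = Dropout(0.5)(x)
outputs = Dense(15, activation='softmax')(x)   # 15 classes, matching NUM_CLASSES in this notebook

combined = Model(inputs, outputs, name='ResNet_tl_end_to_end')
combined.compile(loss='categorical_crossentropy', optimizer='rmsprop', metrics=['accuracy'])
```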
###Code
# only need to change X to 3 channels
# use the preprocessed y
X_list_tl = []
for file in image_files:
code = int(file.split('.jpg')[0].split('_')[-1])
img_path = 'data/image/' + file
img = cv2.imread(img_path)
X_list_tl.append(img)
# ResNet requires exactly 3 input channels, and width and height no smaller than 32.
# valid range for imshow with RGB data ([0..1] for floats or [0..255] for integers)
# didn't zero-mean the images here; preprocess_input below applies the pretrained network's own preprocessing
X_tl = np.array(X_list_tl)
print(X_tl[0].shape)
plt.imshow(X_tl[0])
plt.show()
# load only convolutional layers of resnet:
if 'res_no_top' not in locals():
res_no_top = ResNet50(weights='imagenet', include_top=False, input_shape=(64,64,3))
X_train_tl = X_tl[train_idx_list[0]]
X_test_tl = X_tl[test_idx_list[0]]
x_train_up = preprocess_input(X_train_tl)
x_test_up = preprocess_input(X_test_tl)
%%time
x_train_resnet = res_no_top.predict(x_train_up)
x_test_resnet = res_no_top.predict(x_test_up)
print(x_train_resnet.shape)
input_x = Input(shape=x_train_resnet[0].shape)
x = Flatten()(input_x)
x = Dense(2000, activation='relu',kernel_initializer='he_uniform')(x)
x = Dropout(0.5)(x)
x = Dense(200, activation='relu',kernel_initializer='he_uniform')(x)
predictions = Dense(NUM_CLASSES, activation='softmax', kernel_initializer='glorot_uniform')(x)
ResNet_tl = Model(inputs=input_x, outputs=predictions, name='ResNet_tl')
ResNet_tl.summary()
ResNet_tl.compile(loss='categorical_crossentropy',
optimizer='rmsprop',
metrics=['accuracy']
)
%%time
history_ResNet_tl = ResNet_tl.fit(x_train_resnet, y_train, epochs=30, batch_size=64,
verbose=1, validation_data=(x_test_resnet,y_test),
callbacks=[EarlyStopping(monitor='val_loss', patience=4)]
)
plt.figure(figsize=(10,4))
plt.subplot(1,2,1)
plt.plot(history_ResNet_tl.history['loss'],label='Training')
plt.plot(history_ResNet_tl.history['val_loss'],label='Testing')
plt.ylabel('Loss')
plt.xlabel('epochs')
plt.legend()
plt.subplot(1,2,2)
plt.plot(history_ResNet_tl.history['accuracy'],label='Training')
plt.plot(history_ResNet_tl.history['val_accuracy'],label='Testing')
plt.ylabel('accuracy')
plt.xlabel('epochs')
plt.legend()
plt.show()
# compare ensnet_2 and resnet_tl
yhat_ResNet_tl = np.argmax(ResNet_tl.predict(x_test_resnet), axis=1)
print("ResNet_tl vs EnsNet_2: Test statistic",mn_test(yhat_ResNet_tl, yhat_EnsNet_2, y_ori[test_idx_list[0]]))
###Output
ResNet_tl vs EnsNet_2: Test statistic 118.0904
|
core/Python Decorators and attr library.ipynb | ###Markdown
Python Decorators and attr libraryA decorator is a function that takes another function and extends the behavior of the latter function without explicitly modifying it.https://realpython.com/primer-on-python-decorators/--- Decorator use examplesHere are some examples of how decorators can be used.* Flask web framework * `@app.route` = a decorator that tells Flask which URLs should trigger the function that it decorates. * https://flask.palletsprojects.com/en/1.1.x/quickstart/ * unittest module * `@unittest.expectedFailure` = tells the unittest module that the decorated test (function) is expected to fail. * https://docs.python.org/3/library/unittest.html#skipping-tests-and-expected-failures * Timing function execution * a decorator that records the start and end time of a function call and calculates the difference.---
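For instance, the `unittest` use mentioned above could look like this (a small illustrative sketch, not one of this notebook's cells):
```python
import unittest

class DemoTests(unittest.TestCase):

    @unittest.expectedFailure
    def test_not_implemented_yet(self):
        # recorded as an "expected failure" rather than an error
        self.assertEqual(1 + 1, 3)

# notebook-friendly runner; in a plain script, unittest.main() is enough
unittest.main(argv=["ignored"], exit=False)
```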
###Code
%%writefile flask_demo.py
from flask import Flask
app = Flask(__name__)
@app.route('/')
def index():
return 'Welcome to our web-app!'
@app.route('/hello')
def hello_world():
return 'Hello, World!'
###Output
_____no_output_____
###Markdown
run the file from the command line:```export FLASK_APP=flask_demo.py``` followed by ```flask run``` --- What is a higher Order Function? * takes one or more functions as arguments* and/or returns a function as its result What is a function?* Essentially, functions return a value based on the given arguments.
###Code
def add_two(bar):
return bar + 2
###Output
_____no_output_____
###Markdown
First Class ObjectsIn Python, functions are first-class objects. This means that functions can be passed around and used as arguments, just like any other value (e.g., string, int, float).
###Code
print(add_two(2))
print(type(add_two))
def call_fn_with_arg(f, arg):
res = f(arg)
return res
print(call_fn_with_arg(add_two, 9))
###Output
_____no_output_____
###Markdown
Nested Functions* Because of the first-class nature of functions in Python, you can define functions inside other functions. Such functions are called nested functions.
###Code
def parent():
print("Printing from the parent() function.")
def first_child():
return "Printing from the first_child() function."
def second_child():
return "Printing from the second_child() function."
print(first_child())
print(second_child())
parent()
dir(parent)
first_child()
#Aha First Child is not in general Scope!!
###Output
_____no_output_____
###Markdown
Returning FunctionsPython also allows you to return functions from other functions. Let’s alter the previous function for this example.
###Code
def parent(num=42):
def first_child():
return "Printing from the first_child() function."
def second_child():
return "Printing from the second_child() function."
print('Checking if num is 10')
if num == 10:
return first_child
else:
return second_child
type(parent)
foo = parent(10)
bar = parent(11)
print(foo)
print(bar)
print(foo())
print(bar())
foo.__name__
bar.__name__
###Output
_____no_output_____
###Markdown
Decorators - wrappers
###Code
def my_decorator(f):
def wrapper():
print("Something is happening before some_function() is called.")
f()
print("Something is happening after some_function() is called.")
return wrapper
def just_some_function():
print("Wheee!")
just_some_function()
f = my_decorator(just_some_function)
f()
@my_decorator
def my_fun():
print("Yey, my_fun() called!")
# The use of @my_decorator is equivalent to:
# my_fun = my_decorator(my_fun)
my_fun()
# you can chain decorators together
@my_decorator
@my_decorator
def myfun():
print("Wow decorators!")
myfun()
def my_decorator(f):
def wrapper():
print("Something is happening before some_function() is called.")
f()
print("one more time")
f()
print("Something is happening after some_function() is called.")
return wrapper
def just_some_function():
print("Wheee!")
just_some_function = my_decorator(just_some_function)
just_some_function()
###Output
Something is happening before some_function() is called.
Wheee!
one more time
Wheee!
Something is happening after some_function() is called.
###Markdown
Put simply, decorators wrap a function, modifying its behavior.
###Code
# another example with an if
def my_dec2(some_function):
def wrapper():
num = 10
if num == 10:
print("Yes!")
else:
print("No!")
some_function()
print("Something is happening after some_function() is called.")
return wrapper
def just_some_function():
print("Inside!")
just_some_function()
just_some_function = my_dec2(just_some_function)
just_some_function()
%%writefile my_deco.py
def my_newdeco(some_function):
def wrapper():
num = 10
if num == 10:
print("Yes!")
else:
print("No!")
some_function()
print("Something is happening after some_function() is called.")
return wrapper
if __name__ == "__main__":
my_decorator()
import my_deco
dir(my_deco)
# THIS is the decorator syntax
### Same as just_some_function = my_deco.my_newdeco(just_some_function)
@my_deco.my_newdeco
def just_some_function():
print("Wheee!")
just_some_function()
just_some_function.__name__
def twice(f):
return lambda x: f(f(x))
def plusfour(x):
return x + 4
g = twice(plusfour)
g(9)
###Output
_____no_output_____
###Markdown
Decorating with argumentsSay that you have a function that accepts some arguments. Can you still decorate it?The problem is that the inner function wrapper_do_twice() does not take any arguments, but name="World" was passed to it. You could fix this by letting wrapper_do_twice() accept one argument, but then it would not work for the say_whee() function you created earlier.The solution is to use *args and **kwargs in the inner wrapper function. Then it will accept an arbitrary number of positional and keyword arguments. Rewrite decorators.py as follows:
###Code
def do_twice(func):
def wrapper_do_twice(*args, **kwargs):
func(*args, **kwargs)
func(*args, **kwargs)
return wrapper_do_twice
@do_twice
def print_something(name ="World ", repeat=1):
print("Hello, ", name*repeat)
print_something("Valdis ", repeat=3)
@do_twice
def show(*posit, **kwargs):
print(posit)
print(kwargs)
print()
show("it works now", test=1, name="Valdis")
show.__name__
###Output
_____no_output_____
###Markdown
The wrapper_do_twice() inner function now accepts any number of arguments and passes them on to the function it decorates Fixing introspection for decorated functionsA great convenience when working with Python, especially in the interactive shell, is its powerful introspection ability. Introspection is the ability of an object to know about its own attributes at runtime. For instance, a function knows its own name and documentation: However, after being decorated, say_whee() has gotten very confused about its identity. It now reports being the wrapper_do_twice() inner function inside the do_twice() decorator. Although technically true, this is not very useful information.To fix this, decorators should use the @functools.wraps decorator, which will preserve information about the original function. Update decorators.py again:
###Code
# boilerplate for building your own decorators
import functools
def decorator(func):
@functools.wraps(func)
def wrapper_decorator(*args, **kwargs):
# Do something before
value = func(*args, **kwargs)
# Do something after
return value
return wrapper_decorator
@decorator
def my_fun():
"""
Help string here.
"""
my_fun()
my_fun.__name__
my_fun.__doc__
###Output
_____no_output_____
###Markdown
Example: timeitfrom https://stackoverflow.com/questions/1622943/timeit-versus-timing-decorator
###Code
import functools
from time import time
def timeit(f):
@functools.wraps(f)
def wrap(*args, **kw):
ts = time()
result = f(*args, **kw)
te = time()
print('func:%r args:[%r, %r] took: %2.4f sec' % \
(f.__name__, args, kw, te-ts))
return result
return wrap
@timeit
def do_something(num = 1_000_000):
res = []
for i in range(num):
res.append(i**2)
print("Finished")
do_something(num = 10_000_000)
@timeit
def do_simple_thing(num = 1_000_000):
res = []
for i in range(num):
res.append(i)
print("Finished simple thing")
do_simple_thing(num = 10_000_000)
@timeit
def do_anything(num = 1_000_000, fun = lambda x: x):
"""
Apply function fun num times
"""
res = []
for i in range(num):
res.append(fun(i))
print(f"Finishinged running {fun} {num} times")
do_anything()
do_anything(10_000_000)
do_anything(10_000_000, lambda x: x**2)
do_anything(10_000_000, lambda x: x+x)
# it could very well be that we are hitting some CPU caches here
import random
random.random()
do_anything(fun = lambda _: random.random())
do_anything(10_000_000, lambda _: random.random())
# turns out that pseudo-random numbers are generated faster than squaring with **
do_anything(10_000_000, lambda x: x*x)
do_anything(10_000_000, lambda x: x**2)
do_anything(10_000_000, lambda x: x*x*x)
do_anything(10_000_000, lambda x: x**3)
%%timeit
do_something()
%%time
do_something()
%%timeit
random.random()
%%timeit
random.random()+1
do_something()
do_simple_thing()
###Output
Finished
func:'do_something' args:[(), {}] took: 0.3842 sec
Finished simple thing
func:'do_simple_thing' args:[(), {}] took: 0.1326 sec
###Markdown
Class Exercise* Write a Python program to make a chain of function decorators for text to wrap in HTML tags* Possible decorators (bold, italic, underline, any others ?)
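One possible approach is sketched below (the `make_tag` helper is illustrative and not defined elsewhere in this notebook):
```python
import functools

def make_tag(tag):
    """Build a decorator that wraps a text-returning function in <tag>...</tag>."""
    def decorator(f):
        @functools.wraps(f)
        def wrapper(*args, **kwargs):
            return f"<{tag}>{f(*args, **kwargs)}</{tag}>"
        return wrapper
    return decorator

bold, italic, underline = make_tag("b"), make_tag("i"), make_tag("u")

@bold
@italic
@underline
def greet(name):
    return f"Hello {name}"

print(greet("Valdis"))  # <b><i><u>Hello Valdis</u></i></b>
```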
###Code
#TODO create a decorator here
def strong(f):
"""
Wraps text in <strong> tag
"""
@functools.wraps(f)
def wrapper_decorator(*args, **kwargs):
# Do something before
value = "<strong>"
value += f(*args, **kwargs)
# Do something after
return value+"</strong>"
return wrapper_decorator
@strong
def my_text_function(text):
return f"Hello {text}"
my_text_function("Valdis")
%%timeit
my_text_function("Uldis")
###Output
601 ns ± 14.1 ns per loop (mean ± std. dev. of 7 runs, 1000000 loops each)
###Markdown
Popular decorator library: attrs Classes Without Boilerplatehttps://github.com/python-attrs/attrsClass decorator and a way to declaratively define the attributes on that class
###Code
import attr
#Decorator Magic happens below
@attr.s
class SomeClass(object):
a_number = attr.ib(default=42)
list_of_numbers = attr.ib(default=attr.Factory(list))
a_string = attr.ib(default='justadefaultname')
def hard_math(self, another_number):
return self.a_number + sum(self.list_of_numbers) * another_number
sc = SomeClass(1, [2,3,4], "MyNameIsInigo")
###Output
_____no_output_____
###Markdown
After declaring your attributes attrs gives you:* a concise and explicit overview of the class’s attributes,* a nice human-readable __repr__,* a complete set of comparison methods,* an initializer,* more stuff
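For example, the generated comparison methods mean that two instances with equal attribute values compare equal (a quick illustrative check, reusing `SomeClass` from above):
```python
# __eq__ is generated from the declared attributes
a = SomeClass(1, [2, 3], "x")
b = SomeClass(1, [2, 3], "x")
print(a == b)   # True: same attribute values
print(a is b)   # False: still two distinct objects
```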
###Code
sc
sc.hard_math(10)
attr.asdict(sc)
sc2 = SomeClass([2,3],5) # will not work quite this way..
sc2
###Output
_____no_output_____
###Markdown
New in Python 3.7: Dataclasses inspired by attr, an easier way to declare classeshttps://docs.python.org/3/whatsnew/3.7.html#whatsnew37-pep557
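For a closer parallel to the attrs example above (a sketch; `field(default_factory=...)` plays the role of `attr.Factory`):
```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class SomeDataClass:
    a_number: int = 42
    list_of_numbers: List[int] = field(default_factory=list)
    a_string: str = 'justadefaultname'

    def hard_math(self, another_number):
        return self.a_number + sum(self.list_of_numbers) * another_number

print(SomeDataClass(1, [2, 3, 4], "MyNameIsInigo"))
```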
###Code
## dataclasses
# The new dataclass() decorator provides a way to declare data classes. A data class describes its attributes using class variable annotations. Its constructor and other magic methods, such as __repr__(), __eq__(), and __hash__() are generated automatically.
# Example:
from dataclasses import dataclass
@dataclass
class Point:
x: float
y: float
z: float = 0.0
p = Point(1.5, 2.5)
print(p) # produces "Point(x=1.5, y=2.5, z=0.0)"
###Output
_____no_output_____ |
Week 2/.ipynb_checkpoints/S+P_Week_2_Lesson_2-checkpoint.ipynb | ###Markdown
###Code
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
print(tf.__version__)
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
def trend(time, slope=0):
return slope * time
def seasonal_pattern(season_time):
"""Just an arbitrary pattern, you can change it if you wish"""
return np.where(season_time < 0.4,
np.cos(season_time * 2 * np.pi),
1 / np.exp(3 * season_time))
def seasonality(time, period, amplitude=1, phase=0):
"""Repeats the same pattern at each period"""
season_time = ((time + phase) % period) / period
return amplitude * seasonal_pattern(season_time)
def noise(time, noise_level=1, seed=None):
rnd = np.random.RandomState(seed)
return rnd.randn(len(time)) * noise_level
time = np.arange(4 * 365 + 1, dtype="float32")
baseline = 10
series = trend(time, 0.1)
baseline = 10
amplitude = 40
slope = 0.05
noise_level = 5
# Create the series
series = baseline + trend(time, slope) + seasonality(time, period=365, amplitude=amplitude)
# Update with noise
series += noise(time, noise_level, seed=42)
split_time = 1000
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 20
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # turn the series into a dataset of sliding windows of length window_size + 1
    dataset = tf.data.Dataset.from_tensor_slices(series)
    dataset = dataset.window(window_size + 1, shift=1, drop_remainder=True)
    # flatten each window into a single tensor of window_size + 1 values
    dataset = dataset.flat_map(lambda window: window.batch(window_size + 1))
    # shuffle, then split each window into features (all but the last value) and label (the last value)
    dataset = dataset.shuffle(shuffle_buffer).map(lambda window: (window[:-1], window[-1]))
    # batch and prefetch for training throughput
    dataset = dataset.batch(batch_size).prefetch(1)
    return dataset
dataset = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(dataset)
l0 = tf.keras.layers.Dense(1, input_shape=[window_size])
model = tf.keras.models.Sequential([l0])
model.compile(loss="mse", optimizer=tf.keras.optimizers.SGD(lr=1e-6, momentum=0.9))
model.fit(dataset,epochs=100,verbose=0)
print("Layer weights {}".format(l0.get_weights()))
forecast = []
for time in range(len(series) - window_size):
forecast.append(model.predict(series[time:time + window_size][np.newaxis]))
forecast = forecast[split_time-window_size:]
results = np.array(forecast)[:, 0, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, results)
tf.keras.metrics.mean_absolute_error(x_valid, results).numpy()
###Output
_____no_output_____ |
2-Code/.ipynb_checkpoints/2-1-Model-1-checkpoint.ipynb | ###Markdown
Predicting the Next Pandemic of Dengue Baseline Modelby Brenda Hali--- Importing Libraries
###Code
# data manipulation libraries
import pandas as pd
import numpy as np
import seaborn as sns
#data visualization
from matplotlib import pyplot as plt
#data modeling libraries
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
#statsmodels
from statsmodels.tools import eval_measures
import statsmodels.api as sm
import statsmodels.formula.api as smf
###Output
_____no_output_____
###Markdown
Importing Datasets
###Code
sj_train = pd.read_csv('../1-Data/4-sj_train.csv')
sj_test = pd.read_csv('../1-Data/5-sj_test.csv')
iq_train = pd.read_csv('../1-Data/6-iq_train.csv')
iq_test = pd.read_csv('../1-Data/7-iq_test.csv')
sj_train.drop('Unnamed: 0', axis =1, inplace = True)
iq_train.drop('Unnamed: 0', axis =1, inplace = True)
sj_test.drop('Unnamed: 0', axis =1, inplace = True)
iq_test.drop('Unnamed: 0', axis =1, inplace = True)
sj_train.head()
###Output
_____no_output_____
###Markdown
Modeling Baseline modelSplitting data into train and test
###Code
sj_train_subtrain = sj_train.head(800)
sj_train_subtest = sj_train.tail(sj_train.shape[0] - 800)
iq_train_subtrain = iq_train.head(400)
iq_train_subtest = iq_train.tail(iq_train.shape[0] - 400)
def get_best_model(train, test):
# Step 1: specify the form of the model
model_formula = "total_cases ~ 1 + " \
"reanalysis_specific_humidity_g_per_kg + " \
"reanalysis_dew_point_temp_k + " \
"reanalysis_min_air_temp_k + " \
"station_min_temp_c + " \
"station_max_temp_c + " \
"station_avg_temp_c"
grid = 10 ** np.arange(-8, -3, dtype=np.float64)
best_alpha = []
best_score = 1000
# Step 2: Find the best hyper parameter, alpha
for alpha in grid:
model = smf.glm(formula=model_formula,
data=train,
family=sm.families.NegativeBinomial(alpha=alpha))
results = model.fit()
predictions = results.predict(test).astype(int)
score = eval_measures.meanabs(predictions, test.total_cases)
if score < best_score:
best_alpha = alpha
best_score = score
print('Best Alpha = ', best_alpha)
print('Best Score = ', best_score)
# Step 3: refit on entire dataset
full_dataset = pd.concat([train, test])
model = smf.glm(formula=model_formula,
data=full_dataset,
family=sm.families.NegativeBinomial(alpha=best_alpha))
fitted_model = model.fit()
return fitted_model
sj_best_model = get_best_model(sj_train_subtrain, sj_train_subtest)
iq_best_model = get_best_model(iq_train_subtrain, iq_train_subtest)
figs, axes = plt.subplots(nrows=2, ncols=1)
# plot sj
sj_train['fitted'] = sj_best_model.fittedvalues
sj_train.fitted.plot(ax=axes[0], label="Predictions")
sj_train.total_cases.plot(ax=axes[0], label="Actual")
# plot iq
iq_train['fitted'] = iq_best_model.fittedvalues
iq_train.fitted.plot(ax=axes[1], label="Predictions")
iq_train.total_cases.plot(ax=axes[1], label="Actual")
plt.suptitle("Dengue Predicted Cases vs. Actual Cases")
plt.legend();
sj_predictions = sj_best_model.predict(sj_test).astype(int)
iq_predictions = iq_best_model.predict(iq_test).astype(int)
#creating a new colum with MSE of our first predictions
sj_test['baseline'] = sj_predictions
iq_test['baseline'] = iq_predictions
###Output
_____no_output_____ |
Gradient_Descent_Boston_Dataset.ipynb | ###Markdown
Gradient Descent Algorithm
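The loop-based implementation below updates each coefficient explicitly; an equivalent vectorized form of the same mean-squared-error gradient step (a sketch for reference, not used by the notebook) is:
```python
import numpy as np

def step_gradient_vectorized(X, Y, m, learning_rate):
    # dJ/dm = -(2/N) * X^T (Y - X m) for mean squared error
    residual = Y - X @ m
    grad = -(2 / len(X)) * (X.T @ residual)
    return m - learning_rate * grad
```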
###Code
import numpy as np
from sklearn import preprocessing

def step_gradient(X, Y, m, learning_rate):
    # accumulate the mean-squared-error gradient with respect to each coefficient in m
    m_slope = np.zeros(len(X[0]))
    for i in range(len(X)):
        x = X[i]
        y = Y[i]
        for j in range(len(x)):
            m_slope[j] += (-2/len(X))*(y-sum(m*x))*x[j]
    # take one gradient-descent step
    new_m = m - (learning_rate)*(m_slope)
    return new_m
def cost(x, y, m):
cost = 0
for i in range(len(x)):
cost += (1/len(x))*((y[i]-sum(m*x[i]))**2)
return cost
def gd(x, y, learning_rate, num_iterations):
m = np.zeros(len(x[0]))
for i in range(num_iterations):
m = step_gradient(x, y, m, learning_rate)
print("itr= ", i, "cost=", cost(x, y, m))
return m
def gradient_descent(x,y):
learning_rate = 0.23
num_iterations = 330
x = np.append(x, np.ones(len(x)).reshape(-1,1),axis=1)
m = gd(x, y, learning_rate, num_iterations)
return m
###Output
_____no_output_____
###Markdown
Loading Training Data
###Code
train_data = np.genfromtxt("train.csv", delimiter = ",")
X_train = train_data[:,:-1]
Y_train = train_data[:,-1]
square = []
for i in X_train:
square.append(i**2)
square = np.array(square)
X_train = np.append(X_train, square, axis = 1)
scaler = preprocessing.MinMaxScaler()
scaler.fit(X_train)
X_train = scaler.transform(X_train)
m = gradient_descent(X_train, Y_train)
print("final m :",m)
###Output
itr= 0 cost= 532.2891356563496
itr= 1 cost= 480.2975303149365
itr= 2 cost= 438.0621496141528
itr= 3 cost= 402.4827171680969
itr= 4 cost= 371.64745521203366
itr= 5 cost= 344.3578994450321
itr= 6 cost= 319.8452830659182
itr= 7 cost= 297.60113564108013
itr= 8 cost= 277.2759873967504
itr= 9 cost= 258.6186932429382
itr= 10 cost= 241.43999166115023
itr= 11 cost= 225.59053038926794
itr= 12 cost= 210.94753492764875
itr= 13 cost= 197.40664690255102
itr= 14 cost= 184.8768607707388
itr= 15 cost= 173.27732279935066
itr= 16 cost= 162.53525433330037
itr= 17 cost= 152.58455834137618
itr= 18 cost= 143.36484533776718
itr= 19 cost= 134.82072042336634
itr= 20 cost= 126.9012362393857
itr= 21 cost= 119.55945427634299
itr= 22 cost= 112.75207948789357
itr= 23 cost= 106.4391466334301
itr= 24 cost= 100.58374485947904
itr= 25 cost= 95.15177189980051
itr= 26 cost= 90.11171222289295
itr= 27 cost= 85.43443525556334
itr= 28 cost= 81.0930109229999
itr= 29 cost= 77.06254044421331
itr= 30 cost= 73.3200007708866
itr= 31 cost= 69.84410135562351
itr= 32 cost= 66.61515214097821
itr= 33 cost= 63.614941808640964
itr= 34 cost= 60.82662543988284
itr= 35 cost= 58.234620826609
itr= 36 cost= 55.82451274488609
itr= 37 cost= 53.58296456436902
itr= 38 cost= 51.49763662062849
itr= 39 cost= 49.55711082485773
itr= 40 cost= 47.750821028049316
itr= 41 cost= 46.06898869532286
itr= 42 cost= 44.50256348122501
itr= 43 cost= 43.04316832896608
itr= 44 cost= 41.683048746024504
itr= 45 cost= 40.41502593561891
itr= 46 cost= 39.23245348844058
itr= 47 cost= 38.12917736194972
itr= 48 cost= 37.09949889563991
itr= 49 cost= 36.13814063011317
itr= 50 cost= 35.24021471572473
itr= 51 cost= 34.40119371307077
itr= 52 cost= 33.616883602816564
itr= 53 cost= 32.88339883640567
itr= 54 cost= 32.19713927213702
itr= 55 cost= 31.554768853040443
itr= 56 cost= 30.95319589399853
itr= 57 cost= 30.38955485572608
itr= 58 cost= 29.861189492595095
itr= 59 cost= 29.365637269946482
itr= 60 cost= 28.900614954513884
itr= 61 cost= 28.46400528895234
itr= 62 cost= 28.05384466826666
itr= 63 cost= 27.66831174221013
itr= 64 cost= 27.305716873517994
itr= 65 cost= 26.96449238718967
itr= 66 cost= 26.643183550969653
itr= 67 cost= 26.34044023173763
itr= 68 cost= 26.055009176725513
itr= 69 cost= 25.78572687236644
itr= 70 cost= 25.531512937169826
itr= 71 cost= 25.29136400832874
itr= 72 cost= 25.0643480848296
itr= 73 cost= 24.849599292656844
itr= 74 cost= 24.646313040299294
itr= 75 cost= 24.453741535174014
itr= 76 cost= 24.271189633811943
itr= 77 cost= 24.098011000706244
itr= 78 cost= 23.93360455262583
itr= 79 cost= 23.777411166951122
itr= 80 cost= 23.628910634212716
itr= 81 cost= 23.487618836510837
itr= 82 cost= 23.35308513487964
itr= 83 cost= 23.224889949938508
itr= 84 cost= 23.10264252135549
itr= 85 cost= 22.98597883274071
itr= 86 cost= 22.874559689595696
itr= 87 cost= 22.76806893887943
itr= 88 cost= 22.66621181961252
itr= 89 cost= 22.56871343473804
itr= 90 cost= 22.475317335194458
itr= 91 cost= 22.38578420783636
itr= 92 cost= 22.29989065946759
itr= 93 cost= 22.21742808983305
itr= 94 cost= 22.138201646953444
itr= 95 cost= 22.062029258682763
itr= 96 cost= 21.988740734829207
itr= 97 cost= 21.918176934603732
itr= 98 cost= 21.850188994553374
itr= 99 cost= 21.784637612499093
itr= 100 cost= 21.721392383334155
itr= 101 cost= 21.660331182848143
itr= 102 cost= 21.60133959602966
itr= 103 cost= 21.544310386564938
itr= 104 cost= 21.48914300449621
itr= 105 cost= 21.435743129228506
itr= 106 cost= 21.384022245285177
itr= 107 cost= 21.333897248405346
itr= 108 cost= 21.285290079755892
itr= 109 cost= 21.238127386196645
itr= 110 cost= 21.19234020469112
itr= 111 cost= 21.147863669096136
itr= 112 cost= 21.10463673769634
itr= 113 cost= 21.062601939969195
itr= 114 cost= 21.021705141180302
itr= 115 cost= 20.981895323511218
itr= 116 cost= 20.943124382518523
itr= 117 cost= 20.905346937812634
itr= 118 cost= 20.86852015692537
itr= 119 cost= 20.83260359141345
itr= 120 cost= 20.797559024313408
itr= 121 cost= 20.7633503281307
itr= 122 cost= 20.729943332603824
itr= 123 cost= 20.69730570154263
itr= 124 cost= 20.66540681808863
itr= 125 cost= 20.634217677795984
itr= 126 cost= 20.603710788973654
itr= 127 cost= 20.573860079771652
itr= 128 cost= 20.54464081153099
itr= 129 cost= 20.516029497953518
itr= 130 cost= 20.488003829678856
itr= 131 cost= 20.460542603886385
itr= 132 cost= 20.4336256585683
itr= 133 cost= 20.407233811144806
itr= 134 cost= 20.381348801117223
itr= 135 cost= 20.35595323647592
itr= 136 cost= 20.331030543601663
itr= 137 cost= 20.306564920416566
itr= 138 cost= 20.282541292559586
itr= 139 cost= 20.258945272376796
itr= 140 cost= 20.23576312053273
itr= 141 cost= 20.212981710061996
itr= 142 cost= 20.190588492693756
itr= 143 cost= 20.168571467294573
itr= 144 cost= 20.14691915028392
itr= 145 cost= 20.125620547889714
itr= 146 cost= 20.104665130118352
itr= 147 cost= 20.084042806324593
itr= 148 cost= 20.063743902272872
itr= 149 cost= 20.04375913859106
itr= 150 cost= 20.024079610523522
itr= 151 cost= 20.00469676889696
itr= 152 cost= 19.985602402219502
itr= 153 cost= 19.966788619837743
itr= 154 cost= 19.948247836083066
itr= 155 cost= 19.929972755342128
itr= 156 cost= 19.91195635799157
itr= 157 cost= 19.894191887141606
itr= 158 cost= 19.876672836135704
itr= 159 cost= 19.859392936758482
itr= 160 cost= 19.842346148106323
itr= 161 cost= 19.825526646079062
itr= 162 cost= 19.808928813453267
itr= 163 cost= 19.792547230500915
itr= 164 cost= 19.776376666119106
itr= 165 cost= 19.760412069439514
itr= 166 cost= 19.744648561887725
itr= 167 cost= 19.729081429664802
itr= 168 cost= 19.713706116625662
itr= 169 cost= 19.698518217529998
itr= 170 cost= 19.683513471643234
itr= 171 cost= 19.668687756666863
itr= 172 cost= 19.654037082978224
itr= 173 cost= 19.63955758816173
itr= 174 cost= 19.625245531814336
itr= 175 cost= 19.61109729060921
itr= 176 cost= 19.59710935360266
itr= 177 cost= 19.583278317770482
itr= 178 cost= 19.569600883760327
itr= 179 cost= 19.556073851848286
itr= 180 cost= 19.54269411808771
itr= 181 cost= 19.529458670639798
itr= 182 cost= 19.516364586275877
itr= 183 cost= 19.503409027041787
itr= 184 cost= 19.490589237075582
itr= 185 cost= 19.477902539570312
itr= 186 cost= 19.46534633387387
itr= 187 cost= 19.452918092718733
itr= 188 cost= 19.440615359574725
itr= 189 cost= 19.428435746118286
itr= 190 cost= 19.416376929812024
itr= 191 cost= 19.404436651589354
itr= 192 cost= 19.39261271363779
itr= 193 cost= 19.380902977277273
itr= 194 cost= 19.36930536092756
itr= 195 cost= 19.357817838160656
itr= 196 cost= 19.34643843583413
itr= 197 cost= 19.335165232301232
itr= 198 cost= 19.32399635569405
itr= 199 cost= 19.312929982276057
itr= 200 cost= 19.301964334861054
itr= 201 cost= 19.29109768129476
itr= 202 cost= 19.280328332996852
itr= 203 cost= 19.269654643559605
itr= 204 cost= 19.259075007401847
itr= 205 cost= 19.24858785847409
itr= 206 cost= 19.238191669013865
itr= 207 cost= 19.227884948348258
itr= 208 cost= 19.217666241741437
itr= 209 cost= 19.207534129285747
itr= 210 cost= 19.19748722483392
itr= 211 cost= 19.187524174970644
itr= 212 cost= 19.177643658022003
itr= 213 cost= 19.167844383101038
itr= 214 cost= 19.158125089187802
itr= 215 cost= 19.148484544242265
itr= 216 cost= 19.138921544349458
itr= 217 cost= 19.129434912894286
itr= 218 cost= 19.120023499765864
itr= 219 cost= 19.110686180589525
itr= 220 cost= 19.10142185598564
itr= 221 cost= 19.09222945085388
itr= 222 cost= 19.083107913682305
itr= 223 cost= 19.074056215879853
itr= 224 cost= 19.065073351131677
itr= 225 cost= 19.056158334775937
itr= 226 cost= 19.047310203201775
itr= 227 cost= 19.038528013267243
itr= 228 cost= 19.029810841736506
itr= 229 cost= 19.02115778473561
itr= 230 cost= 19.012567957226086
itr= 231 cost= 19.00404049249576
itr= 232 cost= 18.995574541665974
itr= 233 cost= 18.987169273214725
itr= 234 cost= 18.978823872515175
itr= 235 cost= 18.9705375413887
itr= 236 cost= 18.96230949767233
itr= 237 cost= 18.954138974799633
itr= 238 cost= 18.94602522139489
itr= 239 cost= 18.937967500879978
itr= 240 cost= 18.929965091093255
itr= 241 cost= 18.922017283920457
itr= 242 cost= 18.914123384936918
itr= 243 cost= 18.906282713060538
###Markdown
Loading Testing Data
###Code
X_test = np.genfromtxt("test.csv", delimiter = ",")
square = []
for i in X_test:
square.append(i**2)
square = np.array(square)
X_test = np.append(X_test, square, axis = 1)
# reuse the scaler fitted on the training data (re-fitting on the test set would leak test statistics)
X_test = scaler.transform(X_test)
X_test = np.append(X_test, np.ones(len(X_test)).reshape(-1, 1), axis=1)
###Output
_____no_output_____
###Markdown
Prediction
###Code
predictions = []
for i in X_test:
Y_pred = sum(i*m)
predictions.append(Y_pred)
np_predictions = np.array(predictions)
np.savetxt("predictions.csv", np_predictions,fmt="%.5f", delimiter=",")
###Output
_____no_output_____ |
spincaster/parts/button_cap.ipynb | ###Markdown
Spincaster Button CapEnlarged caps for tactile switches mounted to the DIY spincaster. Ream with a 3.2mm drill bit before attachingEngage silly mode for more butt.
###Code
show_object(button)
from pathlib import Path; downloads = str(Path.home() / "Downloads")
cq.exporters.export(button, f"{downloads}/spincaster_button_cap.stl")
print("done")
###Output
done
|
notebooks/D1_L6_MatPlotLib_and_Seaborn/02-Simple-Scatter-Plots.ipynb | ###Markdown
Simple Scatter Plots Another commonly used plot type is the simple scatter plot, a close cousin of the line plot.Instead of points being joined by line segments, here the points are represented individually with a dot, circle, or other shape.We’ll start by setting up the notebook for plotting and importing the functions we will use:
###Code
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
import numpy as np
###Output
_____no_output_____
###Markdown
Scatter Plots with ``plt.plot``In the previous section we looked at ``plt.plot``/``ax.plot`` to produce line plots.It turns out that this same function can produce scatter plots as well:
###Code
x = np.linspace(0, 10, 30)
y = np.sin(x)
plt.plot(x, y, 'o', color='black');
###Output
_____no_output_____
###Markdown
The third argument in the function call is a character that represents the type of symbol used for the plotting. Just as you can specify options such as ``'-'``, ``'--'`` to control the line style, the marker style has its own set of short string codes. The full list of available symbols can be seen in the documentation of ``plt.plot``, or in Matplotlib's online documentation. Most of the possibilities are fairly intuitive, and we'll show a number of the more common ones here:
###Code
rng = np.random.RandomState(0)
for marker in ['o', '.', ',', 'x', '+', 'v', '^', '<', '>', 's', 'd']:
plt.plot(rng.rand(5), rng.rand(5), marker,
label="marker='{0}'".format(marker))
plt.legend(numpoints=1)
plt.xlim(0, 1.8);
###Output
_____no_output_____
###Markdown
For even more possibilities, these character codes can be used together with line and color codes to plot points along with a line connecting them:
###Code
plt.plot(x, y, '-ok');
###Output
_____no_output_____
###Markdown
Additional keyword arguments to ``plt.plot`` specify a wide range of properties of the lines and markers:
###Code
plt.plot(x, y, '-p', color='gray',
markersize=15, linewidth=4,
markerfacecolor='white',
markeredgecolor='gray',
markeredgewidth=2)
plt.ylim(-1.2, 1.2);
###Output
_____no_output_____
###Markdown
This type of flexibility in the ``plt.plot`` function allows for a wide variety of possible visualization options. For a full description of the options available, refer to the ``plt.plot`` documentation. Scatter Plots with ``plt.scatter``A second, more powerful method of creating scatter plots is the ``plt.scatter`` function, which can be used very similarly to the ``plt.plot`` function:
###Code
plt.scatter(x, y, marker='o');
###Output
_____no_output_____
###Markdown
The primary difference of ``plt.scatter`` from ``plt.plot`` is that it can be used to create scatter plots where the properties of each individual point (size, face color, edge color, etc.) can be individually controlled or mapped to data. Let's show this by creating a random scatter plot with points of many colors and sizes. In order to better see the overlapping results, we'll also use the ``alpha`` keyword to adjust the transparency level:
###Code
rng = np.random.RandomState(0)
x = rng.randn(100)
y = rng.randn(100)
colors = rng.rand(100)
sizes = 1000 * rng.rand(100)
plt.scatter(x, y, c=colors, s=sizes, alpha=0.3,
cmap='viridis')
plt.colorbar(); # show color scale
###Output
_____no_output_____
###Markdown
Notice that the color argument is automatically mapped to a color scale (shown here by the ``colorbar()`` command), and that the size argument is given in points squared (it is proportional to the marker area). In this way, the color and size of points can be used to convey information in the visualization, in order to visualize multidimensional data. For example, we might use the Iris data from Scikit-Learn, where each sample is one of three types of flowers that has had the size of its petals and sepals carefully measured:
###Code
from sklearn.datasets import load_iris
iris = load_iris()
features = iris.data.T
plt.scatter(features[0], features[1], alpha=0.2,
s=100*features[3], c=iris.target, cmap='viridis')
plt.xlabel(iris.feature_names[0])
plt.ylabel(iris.feature_names[1]);
###Output
_____no_output_____ |
docs/bayes_window_book/neurons_example/lme.ipynb | ###Markdown
Linear mixed effects
###Code
from bayes_window import models, BayesWindow, LMERegression
from bayes_window.generative_models import generate_fake_spikes, generate_fake_lfp
import numpy as np
df, df_monster, index_cols, _ = generate_fake_lfp(mouse_response_slope=8,
n_trials=40)
###Output
_____no_output_____
###Markdown
LFP Without data overlay
###Code
bw = LMERegression(df=df, y='Log power', treatment='stim', group='mouse')
bw.fit(add_data=False)
bw.plot().display()
bw.data_and_posterior
###Output
_____no_output_____
###Markdown
With data overlay
###Code
bw = LMERegression(df=df, y='Log power', treatment='stim', group='mouse')
try:
bw.fit(add_data=True, do_make_change='subtract');
bw.plot()
except NotImplementedError:
print('\n Data addition to LME is not implemented')
###Output
Using formula Log_power ~ C(stim, Treatment) + (1 | mouse)
                             Coef.    Std.Err.       z   P>|z|       [0.025       0.975]
Intercept                    0.013
C(stim, Treatment)[T.1]      0.123       0.022   5.496   0.000        0.079        0.167
1 | mouse                   -0.076  174464.372  -0.000   1.000  -341943.961   341943.809
Group Var                    0.000
Data addition to LME is not implemented
###Markdown
Spikes
###Code
df, df_monster, index_cols, firing_rates = generate_fake_spikes(n_trials=20,
n_neurons=6,
n_mice=3,
dur=5,
mouse_response_slope=40,
overall_stim_response_strength=5)
df['log_isi']=np.log10(df['isi'])
bw = LMERegression(df=df, y='log_isi', treatment='stim', condition=['neuron_x_mouse'], group='mouse',)
bw.fit(add_data=False,add_group_intercept=True, add_group_slope=False);
bw.chart
###Output
_____no_output_____
###Markdown
Group slope
###Code
bw = LMERegression(df=df, y='log_isi', treatment='stim', condition=['neuron_x_mouse'], group='mouse',)
bw.fit(add_data=False,add_group_intercept=True, add_group_slope=True)
bw.chart
bw.plot(x='neuron_x_mouse:O').display()
###Output
_____no_output_____
###Markdown
Categorical
###Code
bw.fit(formula='log_isi ~ (1|mouse) + C(stim| neuron_x_mouse)')
bw.plot(x='neuron_x_mouse:O').display()
###Output
_____no_output_____
###Markdown
Nested
###Code
bw = LMERegression(df=df, y='log_isi', treatment='stim', condition=['neuron_x_mouse'], group='mouse',)
try:
bw.fit(add_data=False,add_group_intercept=True, add_group_slope=True, add_nested_group=True)
except Exception as e:
print(e)
###Output
Using formula log_isi ~ (stim|mouse) + stim| neuron_x_mouse__0:mouse + stim|neuron_x_mouse__1:mouse + stim|neuron_x_mouse__2:mouse + stim|neuron_x_mouse__3:mouse + stim|neuron_x_mouse__4:mouse + stim|neuron_x_mouse__5:mouse + stim|neuron_x_mouse__6:mouse + stim|neuron_x_mouse__7:mouse + stim|neuron_x_mouse__8:mouse + stim|neuron_x_mouse__9:mouse + stim|neuron_x_mouse__10:mouse + stim|neuron_x_mouse__11:mouse + stim|neuron_x_mouse__12:mouse + stim|neuron_x_mouse__13:mouse + stim|neuron_x_mouse__14:mouse + stim|neuron_x_mouse__15:mouse + stim|neuron_x_mouse__16:mouse + stim|neuron_x_mouse__17:mouse
Singular matrix
|
bruno/Issue #1.ipynb | ###Markdown
Table of Contents: Imports, Read Data, Plots. Author: Bruno. Start: 16/04. Imports
###Code
import zipfile
import pandas as pd
import numpy as np
pd.set_option('display.max_columns', 50)
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
import seaborn as sns
sns.set()
###Output
_____no_output_____
###Markdown
Read Data
###Code
DATA_DIR = "../main/datasets/"
DATA_FILE = "1.0v.zip"
with zipfile.ZipFile(DATA_DIR+DATA_FILE) as z:
# I am saving the data again to use in my auto eda script;
# Too lazy to change it :)
dfs = []
for name in ["infos", "items", "orders"]:
dfs.append(pd.read_csv(z.open(f"1.0v/{name}.csv"), sep="|"))
infos, items, orders = dfs
orders.head(2)
orders.isna().sum()
orders["time"] = pd.to_datetime(orders["time"])
print("The first timestamp is", orders["time"].min(),
"and the last is", orders["time"].max())
orders["days"] = orders["time"].dt.dayofyear
# Make sure we have data for every single day
assert (orders["days"].unique() != np.arange(1, 181)).sum() == 0
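# A slightly more robust equivalent (editor's sketch; order-independent and
# catches both missing and unexpected day values):
# assert set(orders["days"]) == set(range(1, 181))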
###Output
_____no_output_____
###Markdown
Plots
###Code
ticks = np.arange(1, 181, 10)
sns.countplot(orders["days"])
plt.xticks(ticks, ticks, rotation=90)
plt.title("Quantidade de LINHAS por dia");
orders["temp_qtd"] = 1
for name, col in zip(["ROW COUNT", "SALES", "SALE VALUE"],
                     ["temp_qtd", "order", "salesPrice"]):
    temp = orders.groupby("days")[col].sum()
    week_mean = temp.rolling(7, min_periods=1).mean()
    sns.lineplot(y=temp.values, x=temp.index, label="value")
    sns.lineplot(y=week_mean, x=week_mean.index, label="1-week rolling mean")
    ticks = np.arange(1, 181, 10)
    plt.xticks(ticks, ticks, rotation=90)
    plt.title(f"{name} per day")
plt.show()
###Output
_____no_output_____
###Markdown
Very interesting! Something happened after day 50 and they started selling more... Note: in general, "order" = 1, but not always; that is why the first two plots look similar.
###Code
(orders["order"] != 1).sum() / len(orders)
###Output
_____no_output_____ |
examples/EDA_imputation.ipynb | ###Markdown
Take only numerical variables
###Code
track_df_numeric=manipulate.get_numerical(track_df)
track_df_numeric.head()
###Output
_____no_output_____
###Markdown
Inspect missing values to choose a variable which has many missing valuesFor this variable we will then try to impute the missing values; here we choose CO2 Emission (GPS-based).value.
###Code
missingValues=inspect.missing_values_per_variable(track_df_numeric)
missingValues
###Output
_____no_output_____
###Markdown
Spearman correlation Just to get an impression, choose the variable which has the strongest non-parametric relationship with CO2 Emission (GPS-based).value by applying a Spearman correlation. Here it seems to be Speed.value, so we will try to impute CO2 Emission (GPS-based).value based on Speed.value.
###Code
allCoeffs, very_strong, strong, moderate, weak = inspect.get_classified_correlations(track_df_numeric, 'spearman')
allCoeffs.loc[(allCoeffs['column'] == 'Consumption (GPS-based).value')]
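# Editorial note: the text above refers to CO2 Emission, while this filter selects
# 'Consumption (GPS-based).value'; if the CO2 column is the intended target, the
# filter would presumably be:
# allCoeffs.loc[allCoeffs['column'] == 'CO2 Emission (GPS-based).value']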
###Output
_____no_output_____
###Markdown
As Speed correlates most strongly with CO2 Emission, we will use Speed as the predictor. First, create a subset of the two variables.
###Code
relation = track_df[["track.id","Speed.value", "CO2 Emission (GPS-based).value"]]
###Output
_____no_output_____
###Markdown
OutlierSet outliers to NaN
###Code
correct.flag_outlier_in_sample(relation, dropOutlierColumn=True, setOutlierToNan=True, dropFlag=True)
relation
###Output
outlier_in_sample_Speed.value 0
outlier_in_sample_CO2 Emission (GPS-based).value 0
Flagged outlier in sample: 0
###Markdown
Prepare the data To train the model we need complete data, so we drop all rows that contain missing values.
###Code
relation2 = relation.dropna()
relation2
###Output
_____no_output_____
###Markdown
As we can see in the plot, there may be a linear relationship; however, the line is far from describing the relationship well. Still, we will have a look at how well linear regression predicts CO2 emissions.
###Code
inspect.plot_linear_regression(relation2["Speed.value"], relation2["CO2 Emission (GPS-based).value"], title='Linear relation Speed/CO2_Consumption')
###Output
_____no_output_____
###Markdown
Prepare variables
###Code
X = np.c_[relation2["Speed.value"]]
y = np.c_[relation2["CO2 Emission (GPS-based).value"]]
inspect.plot_scatter(relation2,"Speed.value" , "CO2 Emission (GPS-based).value", 0.3)
###Output
_____no_output_____
###Markdown
Median Impute nan with median
###Code
# Column to impute: the full CO2 emission column, including its NaNs
X_impute = np.c_[relation["CO2 Emission (GPS-based).value"]]
X_impute
imputer = sklearn.impute.SimpleImputer(strategy='median')
# Fit on the observed CO2 values, then replace NaNs with their median
imputer.fit(y)
imputedCO2Emission = imputer.transform(X_impute)
imputedCO2Emission
###Output
_____no_output_____
###Markdown
Linear Regression Impute nan with linear regression model
###Code
# Fit a simple linear regression of CO2 emission on speed
modelLinear = sklearn.linear_model.LinearRegression()
modelLinear.fit(X, y)
prepareData = np.c_[relation2["Speed.value"]]
y_predicted = modelLinear.predict(prepareData)
###Output
_____no_output_____
###Markdown
Check the rmseWe have an RMSE of ~3.5, which means the typical prediction error is about 3.5.
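As a reminder, $\mathrm{RMSE} = \sqrt{\tfrac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2}$, i.e. the square root of the mean squared error, so it is expressed in the same units as the CO2 emission values themselves.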
###Code
mse = mean_squared_error(y, y_predicted)
rmse = sqrt(mse)
rmse
###Output
_____no_output_____
###Markdown
K-nearest Neighbor Impute nan with k-nearest neighbor model
###Code
# k-nearest-neighbour regression: predict CO2 emission from the 2 closest speed values
modelNeighbor = sklearn.neighbors.KNeighborsRegressor(n_neighbors=2)
modelNeighbor.fit(X, y)
y_predict_n = modelNeighbor.predict(prepareData)
#y_predict_n
###Output
_____no_output_____
###Markdown
Check the rmseStill not perfect, but better: here we have a typical error of ~2.4. We will need more sophisticated methods to impute missing values; a quick sketch of one such approach follows below.
###Code
rmse_n = sqrt(mean_squared_error(y, y_predict_n))
rmse_n
###Output
_____no_output_____
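###Markdown
A quick sketch of a more sophisticated approach (an editorial illustration, not part of the original analysis): scikit-learn's KNNImputer fills missing values from the k most similar complete rows, using several numeric columns at once rather than a single predictor. Column names follow the ones used above.
###Code
# Sketch: multivariate imputation with KNNImputer (assumes scikit-learn >= 0.22)
from sklearn.impute import KNNImputer

cols = ["Speed.value", "CO2 Emission (GPS-based).value"]
knn_imputer = KNNImputer(n_neighbors=5)
# fit_transform returns both columns with NaNs replaced by neighbour-based estimates
imputed = knn_imputer.fit_transform(relation[cols])
imputed[:5]
###Output
_____no_output_____ |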