```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
measurements = Base.classes.measurement
stations = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
last_year = dt.date(2017, 8, 23) - dt.timedelta(days=365)
# Perform a query to retrieve the data and precipitation scores
results = session.query(measurements.date, measurements.prcp).filter(measurements.date >= last_year).all()
# Save the query results as a Pandas DataFrame and set the index to the date column
data = pd.DataFrame(results, columns=['date', 'precipitation'])
# Sort the dataframe by date
data = data.sort_values("date")
# Use Pandas Plotting with Matplotlib to plot the data
x_axis=data["date"]
y_axis=data["precipitation"]
plt.scatter(x_axis, y_axis, marker="o", facecolors="red", edgecolors="black")
plt.xlabel("Date")
plt.ylabel("Measurement")
# Use Pandas to calcualte the summary statistics for the precipitation data
data.describe()
# Design a query to show how many stations are available in this dataset?
session.query(func.count(stations.station)).all()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
session.query(measurements.station, func.count(1)).\
group_by(measurements.station).\
order_by(func.count(1).desc()).all()
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
sel = [measurements.station,
func.min(measurements.tobs),
func.max(measurements.tobs),
func.avg(measurements.tobs)]
session.query(*sel).\
filter(measurements.station == "USC00519281").all()
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
precipitation_df = pd.DataFrame(session.query(measurements.date, measurements.tobs).\
filter(measurements.date > last_year).\
filter(measurements.station == "USC00519281").\
order_by(measurements.date).all(), columns = ["Date", "temperature"])
# plot the results as a histogram
precipitation_df.plot(kind = "hist", bins = 12)
plt.xlabel("Temperature")
plt.ylabel("Frequency")
plt.savefig("output/fig1.png");
```
# Photometric Plugin
For optical photometry, we provide the **PhotometryLike** plugin that handles forward folding of a spectral model through filter curves. Let's have a look at the available procedures.
```
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
from threeML import *
# we will need XSPEC models for extinction
from astromodels.xspec import *
# The filter library takes a while to load, so you must import it explicitly.
from threeML.plugins.photometry.filter_library import threeML_filter_library
```
## Setup
We use [speclite](http://speclite.readthedocs.io/en/latest/) to handle optical filters.
Therefore, you can easily build your own custom filters, use the built-in speclite filters, or use the 3ML filter library that we have built thanks to the [Spanish Virtual Observatory](http://svo.cab.inta-csic.es/main/index.php).
**If you use these filters, please be sure to cite the proper sources!**
### Simple example of building a filter
Let's say we have our own 1-m telescope with a Johnson filter and we happen to record the data. We also have simultaneous data at other wavelengths and we want to compare. Let's set up the optical plugin (we'll ignore the other data for now).
```
import speclite.filters as spec_filters
my_backyard_telescope_filter = spec_filters.load_filter('bessell-R')
# NOTE:
my_backyard_telescope_filter.name
```
NOTE: the filter name is 'bessell-R'. The plugin will look for the name *after* the **'-'**, i.e. 'R'.
Now let's build a 3ML plugin via **PhotometryLike**.
Our data are entered as keyword arguments, with the filter name as the keyword and the data as a (magnitude, error) tuple, i.e. R=(mag, mag_err):
```
my_backyard_telescope = PhotometryLike('backyard_astronomy',
filters=my_backyard_telescope_filter, # the filter
R=(20,.1) ) # the magnitude and error
my_backyard_telescope.display_filters()
```
## 3ML filter library
Explore the filter library. If you cannot find what you need, it is simple to add your own.
```
threeML_filter_library.SLOAN
spec_filters.plot_filters(threeML_filter_library.SLOAN.SDSS)
spec_filters.plot_filters(threeML_filter_library.Herschel.SPIRE)
spec_filters.plot_filters(threeML_filter_library.Keck.NIRC2)
```
## Build your own filters
Following the example from speclite, we can build our own filters and add them:
```
import astropy.units as u  # u.Angstrom below needs astropy units (in case it is not already provided by the imports above)
fangs_g = spec_filters.FilterResponse(
wavelength = [3800, 4500, 5200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='g'))
fangs_r = spec_filters.FilterResponse(
wavelength = [4800, 5500, 6200] * u.Angstrom,
response = [0, 0.5, 0], meta=dict(group_name='fangs', band_name='r'))
fangs = spec_filters.load_filters('fangs-g', 'fangs-r')
fangslike = PhotometryLike('fangs',filters=fangs,g=(20,.1),r=(18,.1))
fangslike.display_filters()
```
## GROND Example
Now we will look at GROND. We get the filter from the 3ML filter library.
(Just play with tab completion to see what is available!)
```
grond = PhotometryLike('GROND',
filters=threeML_filter_library.ESO.GROND,
#g=(21.5.93,.23), # we exclude these filters
#r=(22.,0.12),
i=(21.8,.01),
z=(21.2,.01),
J=(19.6,.01),
H=(18.6,.01),
K=(18.,.01))
grond.display_filters()
```
### Model specification
Here we use XSPEC's dust extinction models for the Milky Way and the host galaxy.
```
spec = Powerlaw() * XS_zdust() * XS_zdust()
data_list = DataList(grond)
model = Model(PointSource('grb',0,0,spectral_shape=spec))
spec.piv_1 = 1E-2
spec.index_1.fix=False
spec.redshift_2 = 0.347
spec.redshift_2.fix = True
spec.e_bmv_2 = 5./2.93
spec.e_bmv_2.fix = True
spec.rv_2 = 2.93
spec.rv_2.fix = True
spec.method_2 = 3
spec.method_2.fix=True
spec.e_bmv_3 = .002/3.08
spec.e_bmv_3.fix = True
spec.rv_3= 3.08
spec.rv_3.fix=True
spec.redshift_3 = 0
spec.redshift_3.fix=True
spec.method_3 = 1
spec.method_3.fix=True
jl = JointLikelihood(model,data_list)
```
We compute $m_{\rm AB}$ from astromodels photon fluxes. This is done by convolving the differential flux over the filter response:
$ F[R,f_\lambda] \equiv \int_0^\infty \frac{dg}{d\lambda}(\lambda)R(\lambda) \omega(\lambda) d\lambda$
where we have converted the astromodels functions to wavelength properly.
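As a rough numerical illustration of this integral (not the 3ML implementation), here is a hedged sketch that evaluates a filter-weighted flux with the trapezoid rule; the wavelength grid, flux shape, filter response, and weighting function below are all placeholders.
```
import numpy as np

# Placeholder wavelength grid (Angstrom) and a toy differential flux dg/dlambda
wavelength = np.linspace(5000.0, 7000.0, 500)
flux_lambda = 1e-16 * (wavelength / 6000.0) ** -2   # arbitrary power-law shape

# Toy triangular filter response R(lambda), peaking at 6000 Angstrom
response = np.clip(1.0 - np.abs(wavelength - 6000.0) / 800.0, 0.0, None)

# Photon-counting weight omega(lambda), proportional to lambda (constants omitted)
weight = wavelength

# F[R, f_lambda] = integral of (dg/dlambda) * R * omega over lambda
integrated_flux = np.trapz(flux_lambda * response * weight, wavelength)
print(integrated_flux)
```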
```
_ = jl.fit()
```
We can now look at the fit in magnitude space or model space as with any plugin.
```
_=display_photometry_model_magnitudes(jl)
_ = plot_point_source_spectra(jl.results,flux_unit='erg/(cm2 s keV)',
xscale='linear',
energy_unit='nm',ene_min=1E3, ene_max=1E5, num_ene=200 )
```
##### Copyright 2018 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# TensorFlow Lite Gesture Classification Example Conversion Script
This guide shows how to convert a model trained with TensorFlow.js to a TensorFlow Lite FlatBuffer.
Run all steps in order. At the end, the `model.tflite` file will be downloaded.
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/examples/blob/master/mobile/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb">
<img src="https://www.tensorflow.org/images/colab_logo_32px.png" />
Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/examples/blob/master/mobile/examples/gesture_classification/ml/tensorflowjs_to_tflite_colab_notebook.ipynb">
<img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />
View source on GitHub</a>
</td>
</table>
**Install Dependencies**
```
!pip3 install tensorflow==1.14.0 keras==2.2.4 tensorflowjs==0.6.4 --force-reinstall
import traceback
import logging
import tensorflow.compat.v1 as tf
import keras.backend as K
import os
from google.colab import files
from keras import Model, Input
from keras.applications import MobileNet
from keras.engine.saving import load_model
from tensorflowjs.converters import load_keras_model
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)
```
***Cleanup any existing models if necessary***
```
!rm -rf *.h5 *.tflite *.json *.bin
```
**Upload your TensorFlow.js Artifacts Here**
i.e., the weights manifest **model.json** and the binary weights file **model-weights.bin**
```
files.upload()
```
**Export Configuration**
```
#@title Export Configuration
# TensorFlow.js arguments
config_json = "model.json" #@param {type:"string"}
weights_path_prefix = None #@param {type:"raw"}
model_tflite = "model.tflite" #@param {type:"string"}
```
**Model Converter**
The following class converts a TensorFlow.js model to a TFLite FlatBuffer.
```
class ModelConverter:
"""
Creates a ModelConverter class from a TensorFlow.js model file.
Args:
:param config_json_path: Full filepath of weights manifest file containing the model architecture.
:param weights_path_prefix: Full filepath to the directory in which the weights binaries exist.
:param tflite_model_file: Name of the TFLite FlatBuffer file to be exported.
:return:
ModelConverter class.
"""
def __init__(self,
config_json_path,
weights_path_prefix,
tflite_model_file
):
self.config_json_path = config_json_path
self.weights_path_prefix = weights_path_prefix
self.tflite_model_file = tflite_model_file
self.keras_model_file = 'merged.h5'
# MobileNet Options
self.input_node_name = 'the_input'
self.image_size = 224
self.alpha = 0.25
self.depth_multiplier = 1
self._input_shape = (1, self.image_size, self.image_size, 3)
self.depthwise_conv_layer = 'conv_pw_13_relu'
def convert(self):
self.save_keras_model()
self._deserialize_tflite_from_keras()
logger.info('The TFLite model has been generated')
self._purge()
def save_keras_model(self):
top_model = load_keras_model(self.config_json_path, self.weights_path_prefix,
weights_data_buffers=None,
load_weights=True,
use_unique_name_scope=True)
base_model = self.get_base_model()
merged_model = self.merge(base_model, top_model)
merged_model.save(self.keras_model_file)
logger.info("The merged Keras HDF5 model has been saved as {}".format(self.keras_model_file))
def merge(self, base_model, top_model):
"""
Merges base model with the classification block
:return: Returns the merged Keras model
"""
logger.info("Initializing model...")
layer = base_model.get_layer(self.depthwise_conv_layer)
model = Model(inputs=base_model.input, outputs=top_model(layer.output))
logger.info("Model created.")
return model
def get_base_model(self):
"""
Builds MobileNet with the default parameters
:return: Returns the base MobileNet model
"""
input_tensor = Input(shape=self._input_shape[1:], name=self.input_node_name)
base_model = MobileNet(input_shape=self._input_shape[1:],
alpha=self.alpha,
depth_multiplier=self.depth_multiplier,
input_tensor=input_tensor,
include_top=False)
return base_model
def _deserialize_tflite_from_keras(self):
converter = tf.lite.TFLiteConverter.from_keras_model_file(self.keras_model_file)
tflite_model = converter.convert()
with open(self.tflite_model_file, "wb") as file:
file.write(tflite_model)
def _purge(self):
logger.info('Cleaning up Keras model')
os.remove(self.keras_model_file)
try:
K.clear_session()
converter = ModelConverter(config_json,
weights_path_prefix,
model_tflite)
converter.convert()
except ValueError as e:
print(traceback.format_exc())
print("Error occurred while converting")
files.download(model_tflite)
```
```
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
from tqdm import tqdm
%matplotlib inline
from torch.utils.data import Dataset, DataLoader
import torch
import torchvision
import torch.nn as nn
import torch.optim as optim
from torch.nn import functional as F
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)
m = 50 # 5, 50, 100, 500, 1000, 2000
desired_num = 200
tr_i = 0
tr_j = int(desired_num/2)
tr_k = desired_num
tr_i, tr_j, tr_k
```
# Generate dataset
```
np.random.seed(12)
y = np.random.randint(0,10,5000)
idx= []
for i in range(10):
print(i,sum(y==i))
idx.append(y==i)
x = np.zeros((5000,2))
np.random.seed(12)
x[idx[0],:] = np.random.multivariate_normal(mean = [5,5],cov=[[0.1,0],[0,0.1]],size=sum(idx[0]))
x[idx[1],:] = np.random.multivariate_normal(mean = [-6,7],cov=[[0.1,0],[0,0.1]],size=sum(idx[1]))
x[idx[2],:] = np.random.multivariate_normal(mean = [-5,-4],cov=[[0.1,0],[0,0.1]],size=sum(idx[2]))
x[idx[3],:] = np.random.multivariate_normal(mean = [-1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[3]))
x[idx[4],:] = np.random.multivariate_normal(mean = [0,2],cov=[[0.1,0],[0,0.1]],size=sum(idx[4]))
x[idx[5],:] = np.random.multivariate_normal(mean = [1,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[5]))
x[idx[6],:] = np.random.multivariate_normal(mean = [0,-1],cov=[[0.1,0],[0,0.1]],size=sum(idx[6]))
x[idx[7],:] = np.random.multivariate_normal(mean = [0,0],cov=[[0.1,0],[0,0.1]],size=sum(idx[7]))
x[idx[8],:] = np.random.multivariate_normal(mean = [-0.5,-0.5],cov=[[0.1,0],[0,0.1]],size=sum(idx[8]))
x[idx[9],:] = np.random.multivariate_normal(mean = [0.4,0.2],cov=[[0.1,0],[0,0.1]],size=sum(idx[9]))
x[idx[0]][0], x[idx[5]][5]
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
bg_idx = [ np.where(idx[3] == True)[0],
np.where(idx[4] == True)[0],
np.where(idx[5] == True)[0],
np.where(idx[6] == True)[0],
np.where(idx[7] == True)[0],
np.where(idx[8] == True)[0],
np.where(idx[9] == True)[0]]
bg_idx = np.concatenate(bg_idx, axis = 0)
bg_idx.shape
np.unique(bg_idx).shape
x = x - np.mean(x[bg_idx], axis = 0, keepdims = True)
np.mean(x[bg_idx], axis = 0, keepdims = True), np.mean(x, axis = 0, keepdims = True)
x = x/np.std(x[bg_idx], axis = 0, keepdims = True)
np.std(x[bg_idx], axis = 0, keepdims = True), np.std(x, axis = 0, keepdims = True)
for i in range(10):
plt.scatter(x[idx[i],0],x[idx[i],1],label="class_"+str(i))
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
foreground_classes = {'class_0','class_1', 'class_2'}
background_classes = {'class_3','class_4', 'class_5', 'class_6','class_7', 'class_8', 'class_9'}
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
print(a.shape)
print(fg_class , fg_idx)
np.reshape(a,(2*m,1))
mosaic_list_of_images =[]
mosaic_label = []
fore_idx=[]
for j in range(desired_num):
np.random.seed(j)
fg_class = np.random.randint(0,3)
fg_idx = np.random.randint(0,m)
a = []
for i in range(m):
if i == fg_idx:
b = np.random.choice(np.where(idx[fg_class]==True)[0],size=1)
a.append(x[b])
# print("foreground "+str(fg_class)+" present at " + str(fg_idx))
else:
bg_class = np.random.randint(3,10)
b = np.random.choice(np.where(idx[bg_class]==True)[0],size=1)
a.append(x[b])
# print("background "+str(bg_class)+" present at " + str(i))
a = np.concatenate(a,axis=0)
mosaic_list_of_images.append(np.reshape(a,(2*m,1)))
mosaic_label.append(fg_class)
fore_idx.append(fg_idx)
mosaic_list_of_images = np.concatenate(mosaic_list_of_images,axis=1).T
mosaic_list_of_images.shape
mosaic_list_of_images.shape, mosaic_list_of_images[0]
for j in range(m):
print(mosaic_list_of_images[0][2*j:2*j+2])
def create_avg_image_from_mosaic_dataset(mosaic_dataset,labels,foreground_index,dataset_number, m):
"""
mosaic_dataset : each data point is a concatenation of m 2-D points (one foreground point and m-1 background points)
labels : mosaic_dataset labels
foreground_index : list of indexes at which the foreground point sits, used to take the weighted average
dataset_number : controls the weight given to the foreground point. For dataset_number = j, the foreground weight is j/m and each background weight is (m-j)/((m-1)*m)
"""
avg_image_dataset = []
cnt = 0
counter = np.zeros(m) #np.array([0,0,0,0,0,0,0,0,0])
for i in range(len(mosaic_dataset)):
img = torch.zeros([2], dtype=torch.float64)
np.random.seed(int(dataset_number*10000 + i))
give_pref = foreground_index[i] #np.random.randint(0,9)
# print("outside", give_pref,foreground_index[i])
for j in range(m):
if j == give_pref:
img = img + mosaic_dataset[i][2*j:2*j+2]*dataset_number/m #2 is data dim
else :
img = img + mosaic_dataset[i][2*j:2*j+2]*(m-dataset_number)/((m-1)*m)
if give_pref == foreground_index[i] :
# print("equal are", give_pref,foreground_index[i])
cnt += 1
counter[give_pref] += 1
else :
counter[give_pref] += 1
avg_image_dataset.append(img)
print("number of correct averaging happened for dataset "+str(dataset_number)+" is "+str(cnt))
print("the averaging are done as ", counter)
return avg_image_dataset , labels , foreground_index
avg_image_dataset_1 , labels_1, fg_index_1 = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[0:tr_j], mosaic_label[0:tr_j], fore_idx[0:tr_j] , 1, m)
test_dataset , labels , fg_index = create_avg_image_from_mosaic_dataset(mosaic_list_of_images[tr_j : tr_k], mosaic_label[tr_j : tr_k], fore_idx[tr_j : tr_k] , m, m)
avg_image_dataset_1 = torch.stack(avg_image_dataset_1, axis = 0)
# avg_image_dataset_1 = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(avg_image_dataset_1, keepdims= True, axis = 0))
# print(torch.std(avg_image_dataset_1, keepdims= True, axis = 0))
print("=="*40)
test_dataset = torch.stack(test_dataset, axis = 0)
# test_dataset = (avg - torch.mean(avg, keepdims= True, axis = 0)) / torch.std(avg, keepdims= True, axis = 0)
# print(torch.mean(test_dataset, keepdims= True, axis = 0))
# print(torch.std(test_dataset, keepdims= True, axis = 0))
print("=="*40)
x1 = (avg_image_dataset_1).numpy()
y1 = np.array(labels_1)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("dataset4 CIN with alpha = 1/"+str(m))
x1 = (test_dataset).numpy() / m
y1 = np.array(labels)
plt.scatter(x1[y1==0,0], x1[y1==0,1], label='class 0')
plt.scatter(x1[y1==1,0], x1[y1==1,1], label='class 1')
plt.scatter(x1[y1==2,0], x1[y1==2,1], label='class 2')
plt.legend()
plt.title("test dataset4")
test_dataset[0:10]/m
test_dataset = test_dataset/m
test_dataset[0:10]
class MosaicDataset(Dataset):
"""MosaicDataset dataset."""
def __init__(self, mosaic_list_of_images, mosaic_label):
"""
Args:
csv_file (string): Path to the csv file with annotations.
root_dir (string): Directory with all the images.
transform (callable, optional): Optional transform to be applied
on a sample.
"""
self.mosaic = mosaic_list_of_images
self.label = mosaic_label
#self.fore_idx = fore_idx
def __len__(self):
return len(self.label)
def __getitem__(self, idx):
return self.mosaic[idx] , self.label[idx] #, self.fore_idx[idx]
avg_image_dataset_1[0].shape
avg_image_dataset_1[0]
batch = 200
traindata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
trainloader_1 = DataLoader( traindata_1 , batch_size= batch ,shuffle=True)
testdata_1 = MosaicDataset(avg_image_dataset_1, labels_1 )
testloader_1 = DataLoader( testdata_1 , batch_size= batch ,shuffle=False)
testdata_11 = MosaicDataset(test_dataset, labels )
testloader_11 = DataLoader( testdata_11 , batch_size= batch ,shuffle=False)
class Whatnet(nn.Module):
def __init__(self):
super(Whatnet,self).__init__()
self.linear1 = nn.Linear(2,3)
# self.linear2 = nn.Linear(50,10)
# self.linear3 = nn.Linear(10,3)
torch.nn.init.xavier_normal_(self.linear1.weight)
torch.nn.init.zeros_(self.linear1.bias)
def forward(self,x):
# x = F.relu(self.linear1(x))
# x = F.relu(self.linear2(x))
x = (self.linear1(x))
return x
def calculate_loss(dataloader,model,criter):
model.eval()
r_loss = 0
with torch.no_grad():
for i, data in enumerate(dataloader, 0):
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
outputs = model(inputs)
loss = criter(outputs, labels)
r_loss += loss.item()
return r_loss/(i+1)
def test_all(number, testloader,net):
correct = 0
total = 0
out = []
pred = []
with torch.no_grad():
for data in testloader:
images, labels = data
images, labels = images.to("cuda"),labels.to("cuda")
out.append(labels.cpu().numpy())
outputs= net(images)
_, predicted = torch.max(outputs.data, 1)
pred.append(predicted.cpu().numpy())
total += labels.size(0)
correct += (predicted == labels).sum().item()
pred = np.concatenate(pred, axis = 0)
out = np.concatenate(out, axis = 0)
print("unique out: ", np.unique(out), "unique pred: ", np.unique(pred) )
print("correct: ", correct, "total ", total)
print('Accuracy of the network on the %d test dataset %d: %.2f %%' % (total, number , 100 * correct / total))
def train_all(trainloader, ds_number, testloader_list):
print("--"*40)
print("training on data set ", ds_number)
torch.manual_seed(12)
net = Whatnet().double()
net = net.to("cuda")
criterion_net = nn.CrossEntropyLoss()
optimizer_net = optim.Adam(net.parameters(), lr=0.001 ) #, momentum=0.9)
acti = []
loss_curi = []
epochs = 1000
running_loss = calculate_loss(trainloader,net,criterion_net)
loss_curi.append(running_loss)
print('epoch: [%d ] loss: %.3f' %(0,running_loss))
for epoch in range(epochs): # loop over the dataset multiple times
ep_lossi = []
running_loss = 0.0
net.train()
for i, data in enumerate(trainloader, 0):
# get the inputs
inputs, labels = data
inputs, labels = inputs.to("cuda"),labels.to("cuda")
# zero the parameter gradients
optimizer_net.zero_grad()
# forward + backward + optimize
outputs = net(inputs)
loss = criterion_net(outputs, labels)
# print statistics
running_loss += loss.item()
loss.backward()
optimizer_net.step()
running_loss = calculate_loss(trainloader,net,criterion_net)
if(epoch%200 == 0):
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
loss_curi.append(running_loss) #loss per epoch
if running_loss<=0.05:
print('epoch: [%d] loss: %.3f' %(epoch + 1,running_loss))
break
print('Finished Training')
correct = 0
total = 0
with torch.no_grad():
for data in trainloader:
images, labels = data
images, labels = images.to("cuda"), labels.to("cuda")
outputs = net(images)
_, predicted = torch.max(outputs.data, 1)
total += labels.size(0)
correct += (predicted == labels).sum().item()
print('Accuracy of the network on the %d train images: %.2f %%' % (total, 100 * correct / total))
for i, j in enumerate(testloader_list):
test_all(i+1, j,net)
print("--"*40)
return loss_curi
train_loss_all=[]
testloader_list= [ testloader_1, testloader_11]
train_loss_all.append(train_all(trainloader_1, 1, testloader_list))
%matplotlib inline
for i,j in enumerate(train_loss_all):
plt.plot(j,label ="dataset "+str(i+1))
plt.xlabel("Epochs")
plt.ylabel("Training_loss")
plt.legend(loc='center left', bbox_to_anchor=(1, 0.5))
```
# Introduction to Convolutional Neural Networks (CNNs) in PyTorch
### Representing images digitally
While convolutional neural networks (CNNs) see a wide variety of uses, they were originally designed for images, and CNNs are still most commonly used for vision-related tasks.
For today, we'll primarily be focusing on CNNs for images.
Before we dive into convolutions and neural networks, it's worth prefacing with how images are represented by a computer, as this understanding will inform some of our design choices.
Previously, we saw an example of a digitized MNIST handwritten digit.
Specifically, we represent it as an $H \times W$ table, with the value of each element storing the intensity of the corresponding pixel.
<img src="./Figures/mnist_digital.png" alt="mnist_digital" style="width: 600px;"/>
With a 2D representation as above, we for the most part can only efficiently represent grayscale images.
What if we want color?
There are many schemes for storing color, but one of the most common ones is the [RGB color model](https://en.wikipedia.org/wiki/RGB_color_model).
In such a system, we store 3 tables of pixel intensities (each called a *channel*), one each for the colors red, green, and blue (hence RGB), resulting in an $H \times W \times 3$ tensor.
Pixel values for a particular channel indicate how much of the corresponding color the image has at a particular location.
## Let's load an image and look at different channels:
```
%matplotlib inline
import imageio
import matplotlib.pyplot as plt
# Read the image "./Figures/chapel.jpg" from the disk.
# Hint: use `im = imageio.imread(<Path to the image>)`.
# Print the shape of the tensor
# Display the image
```
We can see that the image we loaded has height and width of $620 \times 1175$, with 3 channels corresponding to RGB.
We can easily slice out and view individual color channels:
```
# Uncomment the following command to extract the red channel of the above image.
# im_red = im[:,:,0]
# Display the image
# Hint: To display the pixel values for a single channel, we can display the image using the gray-scale colormap
# Repeat the above for the blue channel to visualize features represented in the blue color channel.
```
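A hedged sketch of one way to complete the two cells above, assuming the `./Figures/chapel.jpg` path given in the comments exists on disk:
```
import imageio
import matplotlib.pyplot as plt

# Read the image from disk and inspect its shape (height, width, channels)
im = imageio.imread("./Figures/chapel.jpg")
print(im.shape)

# Display the full-color image
plt.imshow(im)
plt.show()

# Extract the red channel and display it with a gray-scale colormap
im_red = im[:, :, 0]
plt.imshow(im_red, cmap="gray")
plt.show()

# Repeat for the blue channel
im_blue = im[:, :, 2]
plt.imshow(im_blue, cmap="gray")
plt.show()
```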
While we have so far considered only 3 channel RGB images, there are many settings in which we may consider a different number of channels.
For example, [hyperspectral imaging](https://en.wikipedia.org/wiki/Hyperspectral_imaging) uses a wide range of the electromagnetic spectrum to characterize a scene.
Such modalities may have hundreds of channels or more.
Additionally, we'll soon see that certain intermediate representations in a CNN can be considered images with many channels.
### Convolutions
Convolutional neural networks (CNNs) are a class of neural networks that have convolutional layers.
CNNs are particularly effective for data that have spatial structures and correlations (e.g. images).
We'll focus on CNNs applied to images in this tutorial.
Recall that a multilayer perceptron (MLP) is entirely composed of fully connected layers, which are each a matrix multiply operation (and addition of a bias) followed by a non-linearity (e.g. sigmoid, ReLU).
A convolutional layer is similar, except the matrix multiply operation is replaced with a convolution operation (in practice a cross-correlation).
Note that a CNN need not be entirely composed of convolutional layers; in fact, many popular CNN architectures end in fully connected layers.
As before, since we're building neural networks, let's start by loading PyTorch. We'll find NumPy useful as well, so we'll also import that here.
```
import numpy as np
# PyTorch Imports
##################################################
# #
# ---- YOUR CODE HERE ---- #
# #
##################################################
```
#### Review: Fully connected layer
In a fully connected layer, the input $x \in \mathbb R^{M \times C_{in}}$ is a vector (or, rather a batch of vectors), where $M$ is the minibatch size and $C_{in}$ is the dimensionality of the input.
We first matrix multiply the input $x$ by a weight matrix $W$.
This weight matrix has dimensions $W \in \mathbb R^{C_{in} \times C_{out}}$, where $C_{out}$ is the number of output units.
We then add a bias for each output, which we do by adding $b \in \mathbb{R}^{C_{out}}$.
The output $y \in \mathbb{R}^{M \times C_{out}}$ of the fully connected layer then:
\begin{align*}
y = \text{ReLU}(x W + b)
\end{align*}
Remember, the values of $W$ and $b$ are variables that we are trying to learn for our model.
Below we have a visualization of what the matrix operation looks like (bias term and activation function omitted).
<img src="./Figures/mnist_matmul.png" width="800"/>
```
# Create a random flat input vector
x_fc = torch.randn(100, 1024)
# Create weight matrix variable
W = torch.randn(1024, 10)/np.sqrt(1024)
# Create bias variable
b = torch.zeros(10, requires_grad=True)
# Use `W` and `b` to apply a fully connected layer.
# Store the output in variable `y`.
# Don't forget to apply the activation function.
##################################################
# ---- YOUR CODE HERE ---- #
##################################################
# Print input/output shape
print("Input shape: {}".format(x_fc.shape))
print("Output shape: {}".format(y.shape))
```
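One possible way to fill in the cell above (a minimal sketch; the weight scaling simply mirrors the cell's initialization):
```
import torch
import torch.nn.functional as F

x_fc = torch.randn(100, 1024)                 # batch of 100 flat inputs
W = torch.randn(1024, 10) / (1024 ** 0.5)     # weight matrix
b = torch.zeros(10, requires_grad=True)       # bias

# Fully connected layer: matrix multiply, add the bias, then apply ReLU
y = F.relu(torch.matmul(x_fc, W) + b)

print("Input shape: {}".format(x_fc.shape))    # torch.Size([100, 1024])
print("Output shape: {}".format(y.shape))      # torch.Size([100, 10])
```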
#### Convolutional layer
In a convolutional layer, we convolve the input $x$ with a convolutional kernel (aka filter), which we also call $W$, producing output $y$:
\begin{align*}
y = \text{ReLU}(W*x + b)
\end{align*}
In the context of CNNs, the output $y$ is often referred to as feature maps. As with a fully connected layer, the goal is to learn $W$ and $b$ for our model.
Unlike the input of a fully connected layer, which is $x \in \mathbb R^{M\times C_{in}}$, the dimensionality of an image input is 4D: $x \in \mathbb R^{M \times C_{in} \times H_{in} \times W_{in}}$, where $M$ is still the batch size, $C_{in}$ is the number of channels of the input (e.g. 3 for RGB), and $H_{in}$ and $W_{in}$ are the height and width of the image.
The weight parameter $W$ is also different in a convolutional layer.
Unlike the 2-D weight matrix for fully connected layers, the kernel is 4-D with dimensions $W \in \mathbb R^{C_{out} \times C_{in} \times H_K \times W_K }$, where $H_K$ and $W_K$ are the kernel height and width, respectively.
A common choice for $H_K$ and $W_K$ is $H_K = W_K = 3$ or $5$, but this tends to vary depending on the architecture.
Convolving the input with the kernel and adding a bias then gives an output $y \in \mathbb R^{M \times C_{out} \times H_{out} \times W_{out}}$.
If we use "same" padding and a stride of $1$ in our convolution (more on this later), our output will have the same spatial dimensions as the input: $H_{out}=H_{in}$ and $W_{out}=W_{in}$.
If you're having trouble visualizing this operation in 4D, it's easier to think about for a single member of the minibatch, one convolutional kernel at a time.
Consider a stack of $C_{out}$ number of kernels, each of which are 3D ($C_{in} \times H_K \times W_K $).
This 3D volume is then slid across the input (which is also 3D: $C_{in} \times H_{in} \times W_{in}$) in the two spatial dimensions (along $H_{in}$ and $W_{in}$).
The outputs of the multiplication of the kernel and the input at every location creates a single feature map that is $H_{out} \times W_{out}$.
Stacking the feature maps generated by each kernel gives the 3D output $C_{out} \times H_{out} \times W_{out} $.
Repeat the process for all $M$ inputs in the minibatch, and we get a 4D output $M \times C_{out} \times H_{out} \times W_{out}$.
<img src="./Figures/conv_filters.png" alt="Convolutional filters" style="width: 600px;"/>
A few more things to note:
- Notice the ordering of the dimensions of the input (batch, channels in, height, width).
This is commonly referred to as $NCHW$ ordering.
Many other languages and libraries (e.g. MATLAB, TensorFlow, the image example at the beginning of this notebook) instead default to the slightly different $NHWC$ ordering.
PyTorch defaults to $NCHW$, as it more efficient computationally, especially with CUDA.
- An additional argument for the convolution is the *stride*, which controls how far we slide the convolutional filter as we move it along the input image.
The convolutional operator, from its signal processing roots, by default considers a stride length of 1 in all dimensions, but in some situations we would like to consider strides more than 1 (or even less than 1).
More on this later.
- In the context of signal processing, convolutions usually result in outputs that are larger than the input size, which results from the kernel "hanging off the edge" of the input on both sides.
This might not always be desirable.
We can control this by controlling the padding of the input.
Typically, we pad the input to ensure the output has the same spatial dimensions as the input (assuming a stride of 1); this makes it easier to keep track of the sizes within our model.
Let's implement this convolution operator in code.
There is a convolution implementation in `torch.nn.functional`, which we use here.
```
# Create a random 4D tensor. Use the NCHW format, where N = 100, C = 3, H = W =32
x_cnn =
# Create convolutional kernel variable (C_out, C_in, H_k, W_k)
W1 =
# Create a bias variable of size C_out
b1 =
# Apply the convolutional layer with relu activation
conv1 =
# Print input/output shape
print("Input shape: {}".format(x_cnn.shape))
print("Convolution output shape: {}".format(conv1.shape))
```
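A hedged sketch of one way to complete the cell above. The output channel count (32) and the 3x3 kernel with `padding=1` are assumptions, chosen so the output keeps its 32x32 spatial size (consistent with the `32*32*32` flattening used later in this notebook):
```
import torch
import torch.nn.functional as F

# Random NCHW input: a batch of 100 three-channel 32x32 "images"
x_cnn = torch.randn(100, 3, 32, 32)

# Convolutional kernel (C_out=32, C_in=3, H_k=W_k=3) and per-channel bias
W1 = torch.randn(32, 3, 3, 3) / (3 * 3 * 3) ** 0.5
b1 = torch.zeros(32, requires_grad=True)

# Convolution ("same" padding for a 3x3 kernel), then ReLU
conv1 = F.relu(F.conv2d(x_cnn, W1, bias=b1, padding=1))

print("Input shape: {}".format(x_cnn.shape))                 # torch.Size([100, 3, 32, 32])
print("Convolution output shape: {}".format(conv1.shape))    # torch.Size([100, 32, 32, 32])
```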
Just like in a MLP, we can stack multiple of these convolutional layers.
In the *Representing Images Digitally* section, we briefly mentioned considering images with channels more than 3.
Observe that the input to the second layer (i.e. the output of the first layer) can be viewed as an "image" with $C_{out}$ channels.
Instead of each channel representing a color content though, each channel effectively represents how much the original input image activated a particular convolutional kernel.
Given $C_{out}$ kernels that are each $C_{in} \times H_K \times W_K$, this results in $C_{out}$ channels for the output of the convolution.
Note that we need to change the dimensions of the convolutional kernel such that its input channels matches the number of output channels of the previous layer:
```
# Create the second convolutional layer by defining a random `W2` and `b2`
W2 =
b2 =
# Apply 2nd convolutional layer to the output of the first convolutional layer
conv2 =
# Print output shape
print("Second convolution output shape: {}".format(conv2.shape))
```
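Continuing the hedged sketch above, one possible second layer; note that `W2`'s input channels must match `conv1`'s 32 output channels:
```
# Second kernel: its input channels (32) match the output channels of the previous layer
W2 = torch.randn(32, 32, 3, 3) / (32 * 3 * 3) ** 0.5
b2 = torch.zeros(32, requires_grad=True)

# Apply the second convolutional layer to the output of the first
conv2 = F.relu(F.conv2d(conv1, W2, bias=b2, padding=1))
print("Second convolution output shape: {}".format(conv2.shape))   # torch.Size([100, 32, 32, 32])
```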
In fact, we typically perform these convolution operations many times.
Popular CNN architectures for image analysis today can be 100+ layers.
### Reshaping
You'll commonly find yourself needing to reshape tensors while building CNNs.
The PyTorch function for doing so is `view()`.
Anyone familiar with NumPy will find it very similar to `np.reshape()`.
Importantly, the new dimensions must be chosen so that it is possible to rearrange the input into the shape of the output (i.e. the total number of elements must be the same).
As with NumPy, you can optionally replace one of the dimensions with a `-1`, which tells `torch` to infer the missing dimension.
```
M = torch.zeros(4, 3)
M2 = M.view(1,1,12)
M3 = M.view(2,1,2,3)
M4 = M.view(-1,2,3)
M5 = M.view(-1)
```
To get an idea of why reshaping is needed in a CNN, let's look at a diagram of a simple CNN.
<img src="Figures/mnist_cnn_ex.png" alt="mnist_cnn_ex" style="width: 800px;"/>
First of all, the CNN expects a 4D input, with the dimensions corresponding to `[batch, channel, height, width]`.
Your data may not come in this format, so you may have to reshape it yourself.
```
x_flat = torch.randn(100, 1024)
# Reshape flat input image into a 4D batched image input
# Hint: Use batch=100, height=width=32.
x_reshaped =
# Print input shape
print(x_reshaped.shape)
```
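One possible completion, following the hint (batch of 100, a single channel, and height = width = 32, since 1 x 32 x 32 = 1024):
```
import torch

x_flat = torch.randn(100, 1024)

# Reshape to [batch, channels, height, width]; -1 lets PyTorch infer the batch dimension
x_reshaped = x_flat.view(-1, 1, 32, 32)
print(x_reshaped.shape)   # torch.Size([100, 1, 32, 32])
```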
CNN architectures also commonly contain fully connected layers or a softmax, as we're often interested in classification.
Both of these expect 2D inputs with dimensions `[batch, dim]`, so you have to "flatten" a CNN's 4D output to 2D.
For example, to flatten the convolutional feature maps we created earlier:
```
# Flatten convolutional feature maps into a vector
h_flat = conv2.view(-1, 32*32*32)
# Print output shape
print(h_flat.shape)
```
### Pooling and striding
Almost all CNN architectures incorporate either pooling or striding. This is done for a number of reasons, including:
- Dimensionality reduction: pooling and striding operations reduces computational complexity by shrinking the number of values passed to the next layer.
For example, a 2x2 maxpool reduces the size of the feature maps by a factor of 4.
- Translational invariance: Oftentimes in computer vision, we'd prefer that shifting the input by a few pixels doesn't change the output. Pooling and striding reduces sensitivity to exact pixel locations.
- Increasing receptive field: by summarizing a window with a single value, subsequent convolutional kernels are seeing a wider swath of the original input image. For example, a max pool on some input followed by a 3x3 convolution results in a kernel "seeing" a 6x6 region instead of 3x3.
#### Pooling
The two most common forms of pooling are max pooling and average pooling.
Both reduce values within a window to a single value, on a per-feature-map basis.
Max pooling takes the maximum value of the window as the output value; average pooling takes the mean.
<img src="./Figures/maxpool.png" alt="avg_vs_max" style="width: 800px;"/>
```
# Recreate the values in pooling figure with shape [4,4]
feature_map_fig =
# Convert 2D matrix to a 4D tensor of shape [1,1,4,4].
fmap_fig =
print("Feature map shape pre-pooling: {}".format(fmap_fig.shape))
# Apply max pool to fmap_fig
max_pool_fig =
print("\nMax pool")
print("Shape: {}".format(max_pool_fig.shape))
print(torch.squeeze(max_pool_fig))
# Apply Avgerage pool to fmap_fig
avg_pool_fig =
print("\nAvg pool")
print("Shape: {}".format(avg_pool_fig.shape))
print(torch.squeeze(avg_pool_fig))
```
Now we will apply max pool and average pool to the output of the convolutional layer `conv2`.
```
# Taking the output we've been working with so far, first print its current size
print("Shape of conv2 feature maps before pooling: {0}".format(conv2.shape))
# Apply Max pool with size = 2 and then print new shape.
max_pool2 =
print("Shape of conv2 feature maps after max pooling: {0}".format(max_pool2.shape))
# Average pool with size = 2 and then print new shape
avg_pool2 =
print("Shape of conv2 feature maps after avg pooling: {0}".format(avg_pool2.shape))
```
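A hedged sketch of the pooling calls using `torch.nn.functional`, with a random stand-in for the `conv2` feature maps:
```
import torch
import torch.nn.functional as F

# Stand-in for the conv2 feature maps: batch 100, 32 channels, 32x32 spatial
feature_maps = torch.randn(100, 32, 32, 32)

# 2x2 max pool and 2x2 average pool each halve the spatial dimensions
max_pool2 = F.max_pool2d(feature_maps, kernel_size=2)
avg_pool2 = F.avg_pool2d(feature_maps, kernel_size=2)

print(max_pool2.shape)   # torch.Size([100, 32, 16, 16])
print(avg_pool2.shape)   # torch.Size([100, 32, 16, 16])
```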
#### Striding
One might expect that pixels in an image have high correlation with neighboring pixels, so we can save computation by skipping positions while sliding the convolutional kernel.
By default, a CNN slides across the input one pixel at a time, which we call a stride of 1.
By instead striding by 2, we skip calculating 75% of the values of the output feature map, which yields a feature map that's half the size in each spatial direction.
Note, while pooling is an operation done after the convolution, striding is part of the convolution operation itself.
```
# Since striding is part of the convolution operation, we'll start with the feature maps before the 2nd convolution
print("Shape of conv1 feature maps: {0}".format(conv1.shape))
# Apply 2nd convolutional layer, with striding of 2
conv2_strided =
# Print output shape
print("Shape of conv2 feature maps with stride of 2: {0}".format(conv2_strided.shape))
```
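A possible completion using a stride of 2 (the kernel shape and `padding=1` are assumptions matching the earlier sketches):
```
import torch
import torch.nn.functional as F

# Stand-ins for conv1's feature maps and the second-layer kernel/bias
conv1 = torch.randn(100, 32, 32, 32)
W2 = torch.randn(32, 32, 3, 3) / (32 * 3 * 3) ** 0.5
b2 = torch.zeros(32, requires_grad=True)

# stride=2 skips every other position, halving each spatial dimension
conv2_strided = F.relu(F.conv2d(conv1, W2, bias=b2, stride=2, padding=1))
print(conv2_strided.shape)   # torch.Size([100, 32, 16, 16])
```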
## Building a custom CNN
Let's revisit image classification, this time on CIFAR-10, and use the following CNN as our classifier: $5 \times 5$ convolution -> $2 \times 2$ max pool -> $5 \times 5$ convolution -> $2 \times 2$ max pool -> fully connected to $\mathbb R^{256}$ -> fully connected to $\mathbb R^{10}$ (prediction).
ReLU activation functions will be used to impose non-linearities.
Remember, convolutions produce 4-D outputs, and fully connected layers expect 2-D inputs, so tensors must be reshaped when transitioning from one to the other.
We can build this CNN with the components introduced before, but as with the logistic regression example, it may prove helpful to instead organize our model with a `nn.Module`.
```
import torch.nn as nn
# Important: Inherit the `nn.Module` class to define a PyTorch model
class CIFAR_CNN():
def __init__(self):
super().__init__()
# Step 1: Define the first convoluation layer (C_in=3, C_out=32, H_k=W_k=5, padding = 2)
self.conv1 =
# Step 2: Define the second convolutional layer (C_out=64, H_k=W_k=5, padding = 2)
self.conv2 =
# Step 3: Define the first fully-connected layer with an output dimension of 256.
# What should be the input dimension of this layer?
self.fc1 =
# Step 4: Define the second fully-connected layer with an output dimension of 10 (# of classes).
self.fc2 =
def forward(self, x):
# Step 5: Using the layers defined in __init__ function, define the forward pass of the neural network below:
# Apply conv layer 1, activation, and max-pool
# Apply conv layer 2, activation, and max-pool
# Reshape to kernel for fully-connected layer
# Apply fc layer 1 and activation
# Apply fc layer 2
output =
return output
```
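For reference, here is one hedged way to fill in the skeleton above. The channel counts and padding follow the hints in the comments, and the `64 * 8 * 8` flattened dimension assumes 32x32 CIFAR-10 inputs downsampled by the two 2x2 max pools:
```
import torch.nn as nn
import torch.nn.functional as F

class CIFAR_CNN(nn.Module):
    def __init__(self):
        super().__init__()
        # 5x5 convolutions with padding=2 preserve the spatial size
        self.conv1 = nn.Conv2d(3, 32, kernel_size=5, padding=2)
        self.conv2 = nn.Conv2d(32, 64, kernel_size=5, padding=2)
        # Two 2x2 max pools shrink 32x32 inputs to 8x8, so the flattened size is 64 * 8 * 8
        self.fc1 = nn.Linear(64 * 8 * 8, 256)
        self.fc2 = nn.Linear(256, 10)

    def forward(self, x):
        # Conv layer 1, activation, and max-pool
        x = F.max_pool2d(F.relu(self.conv1(x)), 2)
        # Conv layer 2, activation, and max-pool
        x = F.max_pool2d(F.relu(self.conv2(x)), 2)
        # Reshape for the fully-connected layers
        x = x.view(-1, 64 * 8 * 8)
        # FC layer 1 and activation
        x = F.relu(self.fc1(x))
        # FC layer 2 produces the class scores
        output = self.fc2(x)
        return output
```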
Notice how our `nn.Module` contains several operation chained together.
The code for submodule initialization, which creates all the stateful parameters associated with each operation, is placed in the `__init__()` function, where it is run once during object instantiation.
Meanwhile, the code describing the forward pass, which is used every time the model is run, is placed in the `forward()` method.
Printing an instantiated model shows the model summary:
```
model = CIFAR_CNN()
print(model)
```
We can drop this model into our logistic regression training code, with few modifications beyond changing the model itself.
A few other changes:
- CNNs expect a 4-D input, so we no longer have to reshape the images before feeding them to our neural network.
- Since CNNs are a little more complex than models we've worked with before, we're going to increase the number of epochs (complete passes through the training data) during training.
- We switch from a vanilla stochastic gradient descent optimizer to the [Adam](https://arxiv.org/abs/1412.6980) optimizer, which tends to do well for neural networks.
## Training the CNN
```
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
import torchvision
from torchvision import datasets, transforms
from tqdm.notebook import tqdm, trange
cifar_train = datasets.CIFAR10(root="./datasets/cifar-10/", train=True, transform=transforms.ToTensor(), download=True)
cifar_test = datasets.CIFAR10(root="./datasets/cifar-10/", train=False, transform=transforms.ToTensor(), download=True)
# Create the train and test data loaders.
train_loader =
test_loader =
# Create a loader identical to the training loader with a sample size of 8. This is to demonstrate
# how we display images. If we had used the train_loader, we would be looking at 100 images!
sample_loader =
#define an image viewing function
def imshow(img):
npimg = img.numpy()
plt.imshow(np.transpose(npimg, (1, 2, 0)))
plt.show()
#list out the classes for the dataset in order from 0 to 9 to correspond to the integer labels
classes = ('plane', 'car', 'bird', 'cat',
'deer', 'dog', 'frog', 'horse', 'ship', 'truck')
#Take a sample of 1 batch from the sample loader
dataiter = iter(sample_loader)
images, labels = next(dataiter)  # DataLoader iterators in newer PyTorch no longer expose .next()
# show images
imshow(torchvision.utils.make_grid(images))
# print labels
print(' '.join('%5s' % classes[labels[j]] for j in range(8)))
# Instantiate model
model =
# Loss and Optimizer
criterion =
optimizer =
track_loss = []
# Iterate through train set minibatchs
num_training_steps = 0
for epoch in trange(3):
for images, labels in tqdm(train_loader):
# Step 1: Zero out the gradients.
# Step 2: Forward pass.
# Step 3: Compute the loss using `criterion`.
# Step 5: Backward pass.
# Step 6: Update the parameters.
# Step 7: Track the loss value at every 100th step.
if num_training_steps % 100 == 0:
# Append loss to the list.
track_loss.append()
num_training_steps += 1
```
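A hedged sketch of one way to complete the training cell, assuming the dataset download above has run and a `CIFAR_CNN` class is defined; the batch size of 100 and learning rate of 1e-3 are assumptions:
```
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from tqdm.notebook import tqdm, trange

# Data loaders (cifar_train and cifar_test come from the cell above)
train_loader = DataLoader(cifar_train, batch_size=100, shuffle=True)
test_loader = DataLoader(cifar_test, batch_size=100, shuffle=False)
sample_loader = DataLoader(cifar_train, batch_size=8, shuffle=True)

# Instantiate the model, loss, and optimizer
model = CIFAR_CNN()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

track_loss = []
num_training_steps = 0
for epoch in trange(3):
    for images, labels in tqdm(train_loader):
        optimizer.zero_grad()              # zero out the gradients
        y = model(images)                  # forward pass
        loss = criterion(y, labels)        # compute the loss
        loss.backward()                    # backward pass
        optimizer.step()                   # update the parameters

        if num_training_steps % 100 == 0:  # track the loss at every 100th step
            track_loss.append(loss.item())
        num_training_steps += 1
```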
### Let's plot the loss function
```
##################################################
# #
# ---- YOUR CODE HERE ---- #
# #
##################################################
```
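A minimal sketch, assuming `track_loss` was populated by the training cell above:
```
import matplotlib.pyplot as plt

# One point was recorded every 100 training steps
plt.plot(track_loss)
plt.xlabel("Tracked step (every 100 minibatches)")
plt.ylabel("Training loss")
plt.show()
```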
## Testing the trained model
```
## Testing
correct = 0
total = len(cifar_test)
with torch.no_grad():
# Iterate through test set minibatchs
for images, labels in tqdm(test_loader):
# Step 1: Forward pass to get
y =
# Step 2: Compute the predicted labels from `y`.
predictions =
# Step 3: Compute the number of samples that were correctly predicted, and maintain the count in the variable `correct`.
correct +=
print('Test accuracy: {}'.format(correct/total))
```
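One possible completion of the testing cell, assuming `model`, `test_loader`, and `cifar_test` from the earlier cells:
```
import torch
from tqdm.notebook import tqdm

correct = 0
total = len(cifar_test)

with torch.no_grad():
    for images, labels in tqdm(test_loader):
        y = model(images)                        # forward pass
        predictions = torch.argmax(y, dim=1)     # predicted labels
        correct += (predictions == labels).sum().item()

print('Test accuracy: {}'.format(correct / total))
```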
If you are running this notebook on CPU, training this CNN might take a while.
On the other hand, if you use a GPU, this model should train in seconds.
This is why we usually prefer to use GPUs when we have them.
### Torchvision
#### Datasets and transforms
As any experienced ML practitioner will say, data wrangling is often half (sometimes even 90%) of the battle when building a model.
Often, we have to write significant code to handle downloading, organizing, formatting, shuffling, pre-processing, augmenting, and batching examples.
For popular datasets, we'd like to standardize data handling so that the comparisons we make are specific to the models themselves.
Enter [Torchvision](https://pytorch.org/vision/stable/index.html).
Torchvision includes easy-to-use APIs for downloading and loading many popular vision datasets.
We've previously seen this in action for downloading the CIFAR-10 dataset:
```
from torchvision import datasets
cifar_train = datasets.CIFAR10(root="./datasets", train=True, transform=transforms.ToTensor(), download=True)
```
Of course, there's [many more](https://pytorch.org/vision/stable/datasets.html).
Currently, datasets for image classification (e.g. MNIST, CIFAR, ImageNet), object detection (VOC, COCO, Cityscapes), and video action recognition (UCF101, Kinetics) are included.
For formatting, pre-processing, and augmenting, [transforms](https://pytorch.org/vision/stable/transforms.html) can come in handy.
Again, we've seen this before (see above), when we used a transform to convert the CIFAR-10 data from PIL images to PyTorch tensors.
However, transforms can be used for much more.
Preprocessing steps like data whitening are common before feeding the data into the model.
Also, in many cases, we use data augmentations to artificially inflate our dataset and learn invariances.
Transforms are a versatile tool for all of these.
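As a hedged illustration (the crop/flip choices and the normalization statistics below are placeholders, not recommended values), a typical augmentation and preprocessing pipeline might look like:
```
from torchvision import transforms

# Random crop and horizontal flip for augmentation, then tensor conversion and normalization
train_transform = transforms.Compose([
    transforms.RandomCrop(32, padding=4),
    transforms.RandomHorizontalFlip(),
    transforms.ToTensor(),
    transforms.Normalize(mean=(0.5, 0.5, 0.5), std=(0.5, 0.5, 0.5)),
])
```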
#### Leveraging popular convolutional neural networks
While you certainly can build your own custom CNNs like we did above, more often than not, it's better to use one of the popular existing architectures.
The Torchvision documentation has a [list of supported CNNs](https://pytorch.org/vision/stable/models.html), as well as some performance characteristics.
There's a number of reasons for using one of these CNNs instead of designing your own.
First, for image datasets larger and more complex than CIFAR and MNIST (which is basically all of them), a fair amount of network depth and width is often necessary.
For example, some of the popular CNNs can be over 100 layers deep, with several tricks and details beyond what we've covered in this notebook.
Coding all of this yourself has a high potential for error, especially when you're first getting started.
Instead, you can create the CNN architecture using Torchvision, using a couple lines:
```
import torchvision.models as models
resnet18 = models.resnet18()
print(resnet18)
```
Loading a working CNN architecture in a couple lines can save a significant amount of time both implementing and debugging.
The second, perhaps even more important, reason to use one of these existing architectures is the ability to use pre-trained weights.
Early on in the recent resurgence of deep learning, people discovered that the weights of a CNN trained for ImageNet classification were highly transferable.
For example, it is common to use the weights of an ImageNet-trained CNN as a weight initialization for other vision tasks, or even to freeze the bulk of the weights and only re-train the final classification layer(s) on a new task.
This is significant, as in most settings, we rarely have enough labeled data to train a powerful CNN from scratch without overfitting.
Loading a pre-trained CNN is also pretty simple, involving an additional argument to the previous cell block:
`resnet18 = models.resnet18(pretrained=True)`
<font size="1">*We will not be using the above command, as running it will initiate a download of the pre-trained weights, which is a fairly large file.*</font>
A full tutorial on using pre-trained CNNs is a little beyond the scope of this notebook.
See [this tutorial](https://pytorch.org/tutorials/beginner/transfer_learning_tutorial.html) for an example.
#### Other computer vision tasks
The base CNN architectures were often designed for image classification, but the same CNNs are often used as the backbone of most modern computer vision models.
These other models often take this base CNN and include additional networks or make other architecture changes to adapt them to other tasks, such as object detection.
Torchvision contains a few models (and pre-trained weights) for object detection, segmentation, and video action recognition.
For example, to load a [Faster R-CNN](https://arxiv.org/abs/1506.01497) with a [ResNet50](https://arxiv.org/abs/1512.03385) convolutional feature extractor with [Feature Pyramid Networks](https://arxiv.org/abs/1612.03144) pre-trained on [MS COCO](http://cocodataset.org/#home):
`object_detector = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)`
<font size="1">*Again, this line has been commented out to prevent loading a large network for this demo.*</font>
Torchvision's selection of non-classification models is relatively light, and not particularly flexible.
A number of other libraries are available, depending on the task.
For example, for object detection and segmentation, Facebook AI Research's [Detectron2](https://github.com/facebookresearch/detectron2) is highly recommended.
```
import warnings
import pandas as pd
import numpy as np
import os
import sys # error msg
import operator # sorting
from math import *
from read_trace import *
from avgblkmodel import *
warnings.filterwarnings("ignore", category=np.VisibleDeprecationWarning)
```
# gpu info
```
gtx950 = DeviceInfo()
gtx950.sm_num = 6
gtx950.sharedmem_per_sm = 49152
gtx950.reg_per_sm = 65536
gtx950.maxthreads_per_sm = 2048
```
# single stream info
```
data_size = 23000
trace_file = './1cke/trace_' + str(data_size) + '.csv'
df_trace = trace2dataframe(trace_file) # read the trace to the dataframe
df_trace
df_single_stream = model_param_from_trace_v1(df_trace)
df_single_stream.head(20)
df_s1 = reset_starting(df_single_stream)
df_s1
```
### running 2cke case
```
stream_num = 2
df_cke_list = []
for x in range(stream_num):
df_cke_list.append(df_s1.copy(deep=True))
df_cke_list[0]
df_cke_list[1]
H2D_H2D_OVLP_TH = 3.158431
for i in range(1,stream_num):
# compute the time for the init data transfer
stream_startTime = find_whentostart_comingStream(df_cke_list[i-1], H2D_H2D_OVLP_TH)
print('stream_startTime : {}'.format(stream_startTime))
df_cke_list[i].start += stream_startTime
df_cke_list[i].end += stream_startTime
df_cke_list[0]
df_cke_list[1]
```
### check whether there is h2d overlapping
```
prev_stm_h2ds_start, prev_stm_h2ds_end = find_h2ds_timing(df_cke_list[0])
print("prev stream h2ds : {} - {}".format(prev_stm_h2ds_start, prev_stm_h2ds_end))
curr_stm_h2ds_start, curr_stm_h2ds_end = find_h2ds_timing(df_cke_list[1])
print("curr stream h2ds : {} - {}".format(curr_stm_h2ds_start, curr_stm_h2ds_end))
if curr_stm_h2ds_start >=prev_stm_h2ds_start and curr_stm_h2ds_start < prev_stm_h2ds_end:
h2ds_ovlp_between_stream = True
else:
h2ds_ovlp_between_stream = False
print("h2ds_ovlp_between_stream : {}".format(h2ds_ovlp_between_stream))
```
### check kernel overlapping
```
prev_stm_kern_start, prev_stm_kern_end = find_kern_timing(df_cke_list[0])
print("prev stream kern : {} - {}".format(prev_stm_kern_start, prev_stm_kern_end))
curr_stm_kern_start, curr_stm_kern_end = find_kern_timing(df_cke_list[1])
print("curr stream kern : {} - {}".format(curr_stm_kern_start, curr_stm_kern_end))
if prev_stm_kern_start <= curr_stm_kern_start < prev_stm_kern_end:
kern_ovlp_between_stream = True
else:
kern_ovlp_between_stream = False
print("kern_ovlp_between_stream : {}".format(kern_ovlp_between_stream))
```
#### use cke model if kern_ovlp_between_stream is true
```
# get the overlapping kernel info from both stream
kernel_ = model_cke_from_same_kernel(gtx950, df_trace, )
```
```
%matplotlib inline
from matplotlib import style
style.use('fivethirtyeight')
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import datetime as dt
```
# Reflect Tables into SQLAlchemy ORM
```
# Python SQL toolkit and Object Relational Mapper
import sqlalchemy
from sqlalchemy.ext.automap import automap_base
from sqlalchemy.orm import Session
from sqlalchemy import create_engine, func
engine = create_engine("sqlite:///Resources/hawaii.sqlite")
# reflect an existing database into a new model
Base = automap_base()
# reflect the tables
Base.prepare(engine, reflect=True)
# We can view all of the classes that automap found
Base.classes.keys()
# Save references to each table
Measurement = Base.classes.measurement
Station = Base.classes.station
# Create our session (link) from Python to the DB
session = Session(engine)
```
# Exploratory Climate Analysis using pandas
```
# Design a query to retrieve the last 12 months of precipitation data and plot the results
# Calculate the date 1 year ago from the last data point in the database
# Perform a query to retrieve the data and precipitation scores
# Save the query results as a Pandas DataFrame and set the index to the date column
# Sort the dataframe by date
# Use Pandas Plotting with Matplotlib to plot the data
weather_data = pd.read_sql("SELECT * FROM measurement", engine)
weather_data.head()
# Latest Date
latest_date = session.query(Measurement.date).order_by(Measurement.date.desc()).first().date
latest_date
end_date = latest_date
end_date
start_date = dt.datetime.strptime(end_date, '%Y-%m-%d') - dt.timedelta(days=365)
start_date
start_date = start_date.strftime('%Y-%m-%d')
start_date
start_date = "2016-08-23"
end_date = "2017-08-23"
weather_data_one_year = weather_data[weather_data["date"].between(start_date, end_date)]
weather_data_one_year.head()
len(weather_data_one_year)
precipitation_data = weather_data_one_year[["prcp", "date"]]
precipitation_data.set_index('date', inplace=True)
# Sort the dataframe by date
precipitation_data_sorted = precipitation_data.sort_values('date', ascending=True )
precipitation_data_sorted.head()
# Use Pandas Plotting with Matplotlib to plot the data
# Rotate the xticks for the dates
precipitation_chart = precipitation_data_sorted.plot(kind = "line",grid=True, figsize=(10,6), rot=30, x_compat=True, fontsize=12, title = "Precipitation data for one year")
precipitation_chart.set_xlabel("Date")
precipitation_chart.set_ylabel("Precipitation")
plt.show()
# Use Pandas to calculate the summary statistics for the precipitation data
precipitation_data_sorted.describe()
# Design a query to show how many stations are available in this dataset?
station_data = pd.read_sql("SELECT * FROM station", engine)
station_data
station_data["station"].count()
# What are the most active stations? (i.e. what stations have the most rows)?
# List the stations and the counts in descending order.
weather_data["station"].value_counts()
weather_data_station_counts = weather_data["station"].value_counts()
# The station with maximum number of temperature observations
active_station = weather_data_station_counts.index[0]
active_station
# Using the station id from the previous query, calculate the lowest temperature recorded,
# highest temperature recorded, and average temperature of the most active station?
weather_data_active_station = weather_data.loc[(weather_data["station"] == active_station), :]
Lowest_temperature = weather_data_active_station["tobs"].min()
Highest_temperature = weather_data_active_station["tobs"].max()
Average_temperature = weather_data_active_station["tobs"].mean()
print(f"For the most active station The lowest temperature, The Highest temperature, The Average temperature is {Lowest_temperature} , {Highest_temperature}, {Average_temperature}")
# Choose the station with the highest number of temperature observations.
# Query the last 12 months of temperature observation data for this station and plot the results as a histogram
start_date = "2016-08-23"
end_date = "2017-08-23"
weather_data_one_year = weather_data[weather_data["date"].between(start_date, end_date)]
weather_data_active_station_one_year = weather_data_one_year.loc[(weather_data_one_year["station"] == active_station), :]
temperature_data = weather_data_active_station_one_year[["tobs", "date"]]
x_data = temperature_data["tobs"]
plt.hist(x_data, 12, label = "tobs")
plt.xlabel('Temperature')
plt.ylabel('Frequency')
plt.legend(loc=1, prop={'size': 14})
plt.show()
# This function called `calc_temps` will accept start date and end date in the format '%Y-%m-%d'
# and return the minimum, average, and maximum temperatures for that range of dates
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
# function usage example
print(calc_temps('2012-02-28', '2012-03-05'))
# Use your previous function `calc_temps` to calculate the tmin, tavg, and tmax
# for your trip using the previous year's data for those same dates.
def calc_temps(start_date, end_date):
"""TMIN, TAVG, and TMAX for a list of dates.
Args:
start_date (string): A date string in the format %Y-%m-%d
end_date (string): A date string in the format %Y-%m-%d
Returns:
TMIN, TAVG, and TMAX
"""
return session.query(func.min(Measurement.tobs), func.avg(Measurement.tobs), func.max(Measurement.tobs)).\
filter(Measurement.date >= start_date).filter(Measurement.date <= end_date).all()
print(calc_temps('2017-02-28', '2017-03-05'))
trip_results = calc_temps('2017-02-28', '2017-03-05')
trip_results
# Plot the results from your previous query as a bar chart.
# Use "Trip Avg Temp" as your Title
# Use the average temperature for the y value
# Use the peak-to-peak (tmax-tmin) value as the y error bar (yerr)
trip_df = pd.DataFrame(trip_results, columns=['Min Temp', 'Avg Temp', 'Max Temp'])
avg_temp = trip_df['Avg Temp']
min_max_temp = trip_df.iloc[0]['Max Temp'] - trip_df.iloc[0]['Min Temp']
temp_chart = avg_temp.plot(kind='bar', yerr=min_max_temp, grid = True, figsize=(6,8), alpha=0.5, color='coral')
temp_chart.set_title("Trip Avg Temp", fontsize=20)
temp_chart.set_ylabel("Temp (F)")
plt.xticks([])
plt.show()
# Calculate the total amount of rainfall per weather station for your trip dates using the previous year's matching dates.
# Sort this in descending order by precipitation amount and list the station, name, latitude, longitude, and elevation
trip_start_date = "2017-02-28"
trip_end_date = "2017-03-05"
weather_data_one_year_trip = weather_data_one_year[weather_data_one_year["date"].between(trip_start_date, trip_end_date)]
weather_data_one_year_trip_per_station = weather_data_one_year_trip.groupby("station")
weather_data_one_year_trip_per_station["prcp"].sum()
```
## Optional Challenge Assignment
| github_jupyter |
# Tutorial - Time Series Forecasting - Autoregression (AR)
The goal is to forecast time series with the Autoregression (AR) approach: 1) JetRail Commuter, 2) Air Passengers, 3) Function Autoregression with Air Passengers, and 4) Function Autoregression with Wine Sales.
Reference: Jason Brownlee - https://machinelearningmastery.com/time-series-forecasting-methods-in-python-cheat-sheet/
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import datetime
import warnings
warnings.filterwarnings("ignore")
# Load File
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/JetRail%20Avg%20Hourly%20Traffic%20Data%20-%202012-2013.csv'
df = pd.read_csv(url)
df.info()
df.Datetime = pd.to_datetime(df.Datetime,format='%Y-%m-%d %H:%M')
df.index = df.Datetime
```
# Autoregression (AR) Approach with JetRail
The autoregression (AR) method models the next step in the sequence as a linear function of the observations at prior time steps.
The notation for the model involves specifying the order of the model p as a parameter to the AR function, e.g. AR(p). For example, AR(1) is a first-order autoregression model.
The method is suitable for univariate time series without trend and seasonal components.
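Concretely, an AR(p) model predicts the next value as a weighted sum of the previous p observations plus noise:

$$ y_t = c + \phi_1 y_{t-1} + \phi_2 y_{t-2} + \dots + \phi_p y_{t-p} + \varepsilon_t $$

where $c$ is a constant, the $\phi_i$ are coefficients estimated from the training data, and $\varepsilon_t$ is white noise. The `statsmodels` `AR` class used below fits these coefficients (using a default maximum lag when none is supplied).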
```
#Split Train Test
import math
total_size=len(df)
split = 10392 / 11856
train_size=math.floor(split*total_size)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
from statsmodels.tsa.ar_model import AR
model = AR(train.Count)
fit1 = model.fit()
y_hat = test.copy()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
#Plotting data
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Count'], label='Train')
plt.plot(test.index,test['Count'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR) Forecast")
plt.show()
```
# RMSE Calculation
```
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Count, y_hat.AR))
print('RMSE = '+str(rms))
```
# Autoregression (AR) Approach with Air Passengers
```
# Subsetting
url = 'https://raw.githubusercontent.com/tristanga/Machine-Learning/master/Data/International%20Airline%20Passengers.csv'
df = pd.read_csv(url, sep =";")
df.info()
df.Month = pd.to_datetime(df.Month,format='%Y-%m')
df.index = df.Month
#df.head()
#Creating train and test set
import math
total_size=len(df)
train_size=math.floor(0.7*total_size) #(70% Dataset)
train=df.head(train_size)
test=df.tail(len(df) -train_size)
#train.info()
#test.info()
from statsmodels.tsa.ar_model import AR
# Create prediction table
y_hat = test.copy()
model = AR(train['Passengers'])
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
y_hat.describe()
plt.figure(figsize=(12,8))
plt.plot(train.index, train['Passengers'], label='Train')
plt.plot(test.index,test['Passengers'], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR)")
plt.show()
from sklearn.metrics import mean_squared_error
from math import sqrt
rms = sqrt(mean_squared_error(test.Passengers, y_hat.AR))
print('RMSE = '+str(rms))
```
# Autoregression (AR) Approach as a Reusable Function
```
def AR_forecasting(mydf,colval,split):
#print(split)
import math
from sklearn.metrics import mean_squared_error
from math import sqrt
global y_hat, train, test
total_size=len(mydf)
train_size=math.floor(split*total_size) #(70% Dataset)
train=mydf.head(train_size)
test=mydf.tail(len(mydf) -train_size)
y_hat = test.copy()
model = AR(train[colval])
fit1 = model.fit()
y_hat['AR'] = fit1.predict(start=len(train), end=len(train)+len(test)-1, dynamic=False)
plt.figure(figsize=(12,8))
plt.plot(train.index, train[colval], label='Train')
plt.plot(test.index,test[colval], label='Test')
plt.plot(y_hat.index,y_hat['AR'], label='AR')
plt.legend(loc='best')
plt.title("Autoregression (AR) Forecast")
plt.show()
rms = sqrt(mean_squared_error(test[colval], y_hat.AR))
print('RMSE = '+str(rms))
AR_forecasting(df,'Passengers',0.7)
```
# Testing the Autoregression (AR) Function with the Wine Dataset
```
url = 'https://raw.githubusercontent.com/tristanga/Data-Cleaning/master/Converting%20Time%20Series/Wine_Sales_R_Dataset.csv'
df = pd.read_csv(url)
df.info()
df.Date = pd.to_datetime(df.Date,format='%Y-%m-%d')
df.index = df.Date
AR_forecasting(df,'Sales',0.7)
```
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# Introduction to TensorFlow 2 for experts
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/quickstart/advanced"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs-l10n/blob/master/site/it/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs-l10n/blob/master/site/it/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
<td>
<a href="https://storage.googleapis.com/tensorflow_docs/docs-l10n/site/it/tutorials/quickstart/advanced.ipynb"><img src="https://www.tensorflow.org/images/download_logo_32px.png" />Download notebook</a>
</td>
</table>
Note: Our TensorFlow community has translated these documents. Because these translations are *best-effort*, there is no guarantee that they accurately reflect the latest [official English documentation](https://www.tensorflow.org/?hl=en).
If you have suggestions for improving this translation, please send a pull request to the [tensorflow/docs](https://github.com/tensorflow/docs) GitHub repository.
To volunteer to write or review community translations, contact the [[email protected] mailing list](https://groups.google.com/a/tensorflow.org/forum/#!forum/docs).
This is a [Google Colaboratory](https://colab.research.google.com/notebooks/welcome.ipynb) notebook file. Python programs are run directly in the browser, a great way to learn and use TensorFlow. To follow this tutorial, run the notebook in Google Colab by clicking the button at the top of this page.
1. In Colab, connect to a Python runtime: at the top-right of the menu bar, select *CONNECT*.
2. Run all the notebook code cells: select *Runtime* > *Run all*.
Download and install the TensorFlow 2 package:
Import TensorFlow into your program:
```
import tensorflow as tf
from tensorflow.keras.layers import Dense, Flatten, Conv2D
from tensorflow.keras import Model
```
Load and prepare the [MNIST dataset](http://yann.lecun.com/exdb/mnist/).
```
mnist = tf.keras.datasets.mnist
(x_train, y_train), (x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
# Add a channels dimension
x_train = x_train[..., tf.newaxis]
x_test = x_test[..., tf.newaxis]
```
Use `tf.data` to batch and shuffle the dataset:
```
train_ds = tf.data.Dataset.from_tensor_slices(
(x_train, y_train)).shuffle(10000).batch(32)
test_ds = tf.data.Dataset.from_tensor_slices((x_test, y_test)).batch(32)
```
Build the `tf.keras` model using the [Keras model subclassing API](https://www.tensorflow.org/guide/keras#model_subclassing):
```
class MyModel(Model):
def __init__(self):
super(MyModel, self).__init__()
self.conv1 = Conv2D(32, 3, activation='relu')
self.flatten = Flatten()
self.d1 = Dense(128, activation='relu')
self.d2 = Dense(10, activation='softmax')
def call(self, x):
x = self.conv1(x)
x = self.flatten(x)
x = self.d1(x)
return self.d2(x)
# Create an instance of the model
model = MyModel()
```
Choose an optimizer and a loss function for training:
```
loss_object = tf.keras.losses.SparseCategoricalCrossentropy()
optimizer = tf.keras.optimizers.Adam()
```
Select metrics to measure the model's loss and accuracy. These metrics accumulate values across epochs and then print the overall result.
```
train_loss = tf.keras.metrics.Mean(name='train_loss')
train_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='train_accuracy')
test_loss = tf.keras.metrics.Mean(name='test_loss')
test_accuracy = tf.keras.metrics.SparseCategoricalAccuracy(name='test_accuracy')
```
Use `tf.GradientTape` to train the model:
```
@tf.function
def train_step(images, labels):
with tf.GradientTape() as tape:
predictions = model(images)
loss = loss_object(labels, predictions)
gradients = tape.gradient(loss, model.trainable_variables)
optimizer.apply_gradients(zip(gradients, model.trainable_variables))
train_loss(loss)
train_accuracy(labels, predictions)
```
Test the model:
```
@tf.function
def test_step(images, labels):
predictions = model(images)
t_loss = loss_object(labels, predictions)
test_loss(t_loss)
test_accuracy(labels, predictions)
EPOCHS = 5
for epoch in range(EPOCHS):
for images, labels in train_ds:
train_step(images, labels)
for test_images, test_labels in test_ds:
test_step(test_images, test_labels)
template = 'Epoch {}, Loss: {}, Accuracy: {}, Test Loss: {}, Test Accuracy: {}'
print(template.format(epoch+1,
train_loss.result(),
train_accuracy.result()*100,
test_loss.result(),
test_accuracy.result()*100))
# Reset the metrics for the next epoch
train_loss.reset_states()
train_accuracy.reset_states()
test_loss.reset_states()
test_accuracy.reset_states()
```
The image classifier is now trained to roughly 98% accuracy on this dataset. To learn more, read the [TensorFlow tutorials](https://www.tensorflow.org/tutorials).
| github_jupyter |
Test the noise robustness of RoleMagnet on a model money-transfer network with three types of noise (degree, edge weights, node weights).
```
import numpy as np
import networkx as nx
import matplotlib.pyplot as plt
```
## Creating a graph
Simulate a small 23-person money-transfer organization, with a small amount of noise.
```
%matplotlib inline
plt.rcParams['figure.dpi'] = 150
plt.rcParams['figure.figsize'] = (4, 3)
G = nx.DiGraph()
G.add_weighted_edges_from([('11','s1',0.07),('12','s1',0.1),('13','s1',0.06),('14','s1',0.09),('15','s1',0.08),
('21','s2',0.07),('22','s2',0.1),('23','s2',0.06),('24','s2',0.09),('25','s2',0.08),('26','s2',0.1),
('31','s3',0.1),('32','s3',0.1),('33','s3',0.1),('34','s3',0.1),('35','s3',0.1),('36','s3',0.1),
('s1','mid',0.4),('s2','mid',0.5),('s3','mid',0.55),
('mid','boss',0.7),('mid','w1',0.72),
('w1','41',0.065),('w1','42',0.05),('w1','43',0.06),('w1','44',0.055),('w1','51',0.24),('w1','52',0.25)])
# net balance (profit/loss) of each node
balance=[-0.07,0,-0.1,-0.06,-0.09,-0.08,
-0.07,0,-0.1,-0.06,-0.09,-0.08,-0.1,
-0.1,0.05,-0.1,-0.1,-0.1,-0.1,-0.1,
0.03,0.7,0,
0.065,0.05,0.06,0.055,0.24,0.25]
color=['lightgray','violet','lightgray','lightgray','lightgray','lightgray',
'lightgray','violet','lightgray','lightgray','lightgray','lightgray','lightgray',
'lightgray','violet','lightgray','lightgray','lightgray','lightgray','lightgray',
'orange','r','limegreen',
'c','c','c','c','pink','pink']
nx.draw_planar(G, with_labels=True, node_color=color, node_size=300, font_size=7)
plt.show()
```
## RoleMagnet
```
import rolemagnet as rm
vec,role,label=rm.role_magnet(G, balance=balance)
```
## Visualization
Visualize the vector representations of the nodes, then reduce to 2D with PCA and visualize again.
```
print('3D embedding result:')
for i in range(len(G.nodes)):
print (list(G.nodes)[i],'\t',vec[i])
from mpl_toolkits.mplot3d import Axes3D
coord = np.transpose(vec)
fig = plt.figure(figsize=(4,3))
ax = Axes3D(fig)
ax.scatter(coord[0], coord[1], coord[2], c=color, s=150)
plt.show()
# reduce to 2D with PCA
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
reduced=PCA(n_components=2).fit_transform(StandardScaler().fit_transform(vec))
print('2D embedding result:')
for i in range(len(G.nodes)):
print (list(G.nodes)[i],'\t',reduced[i])
coord = np.transpose(reduced)
plt.scatter(coord[0], coord[1], c=color, s=150, linewidths=0.8, edgecolors='k')
plt.title("RoleMagnet")
plt.show()
```
## Evaluation
Evaluate the clustering results with two metrics: Adjusted Rand Index and V-Measure.
```
from sklearn.metrics.cluster import adjusted_rand_score, homogeneity_completeness_v_measure
true_label=[1,2,1,1,1,1,
1,2,1,1,1,1,1,
1,2,1,1,1,1,1,
3,4,5,6,6,6,6,7,7]
print('Adjusted Rand Index:',adjusted_rand_score(true_label,label))
print('V-Measure:',homogeneity_completeness_v_measure(true_label,label))
print('\nClustering result:')
for k,v in role.items():
print(k,v[0])
for i in v[1]:
print(' ',list(G.nodes)[i])
```
| github_jupyter |
# Tune TensorFlow Serving
## Guidelines
### CPU-only
If your system is CPU-only (no GPU), then consider the following values:
* `num_batch_threads` equal to the number of CPU cores
* `max_batch_size` to infinity (i.e. MAX_INT)
* `batch_timeout_micros` to 0.
Then experiment with batch_timeout_micros values in the 1-10 millisecond (1000-10000 microsecond) range, while keeping in mind that 0 may be the optimal value.
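As a concrete starting point (a sketch only; the core count and timeout below are assumptions to tune for your own hardware), an 8-core CPU-only box following these guidelines might use a `batch_config.txt` like:
```
num_batch_threads { value: 8 }
max_batch_size { value: 99999999 }
batch_timeout_micros { value: 0 }
```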
### GPU
If your model uses a GPU device for part or all of its inference work, consider the following values:
* `num_batch_threads` to the number of CPU cores.
* `batch_timeout_micros` to infinity while tuning `max_batch_size` to achieve the desired balance between throughput and average latency. Consider values in the hundreds or thousands.
For online serving, tune `batch_timeout_micros` to rein in tail latency.
The idea is that batches normally get filled to `max_batch_size`, but occasionally, when there is a lapse in incoming requests, it makes sense to process whatever is in the queue even if it is an underfull batch, rather than introduce a latency spike by waiting.
The best value for `batch_timeout_micros` is typically a few milliseconds, and depends on your context and goals.
Zero is a value to consider as it works well for some workloads. For bulk-processing batch jobs, choose a large value, perhaps a few seconds, to ensure good throughput but not wait too long for the final (and likely underfull) batch.
## Close TensorFlow Serving and Load Test Terminals
## Open a Terminal through Jupyter Notebook
### (Menu Bar -> File -> New...)

## Enable Request Batching
## Start TensorFlow Serving in Separate Terminal
The params are as follows:
* `port` for TensorFlow Serving (int)
* `model_name` (anything)
* `model_base_path` (/path/to/model/ above all versioned sub-directories)
* `enable_batching` (true|false)
```
tensorflow_model_server \
--port=9000 \
--model_name=linear \
--model_base_path=/root/models/linear_fully_optimized/cpu \
--batching_parameters_file=/root/config/tf_serving/batch_config.txt \
  --enable_batching=true
```
### `batch_config.txt`
* `num_batch_threads` (usually equal to the number of CPU cores or a multiple thereof)
* `max_batch_size` (# of requests - start with infinity, tune down to find the right balance between latency and throughput)
* `batch_timeout_micros` (minimum batch window duration)
```
num_batch_threads { value: 100 }
max_batch_size { value: 99999999 }
batch_timeout_micros { value: 100000 }
```
## Start Load Test in the Terminal
```
loadtest high
```
Notice the throughput and avg/min/max latencies:
```
summary ... = 301.1/s Avg: 227 Min: 3 Max: 456 Err: 0 (0.00%)
```
## Modify Request Batching Parameters, Repeat Load Test
Gain intuition on the performance impact of changing the request batching parameters.
| github_jupyter |
```
import scrublet as scr
import numpy as np
import pandas as pd
import scanpy as sc
import matplotlib.pyplot as plt
import os
import sys
import scipy
def MovePlots(plotpattern, subplotdir):
os.system('mkdir -p '+str(sc.settings.figdir)+'/'+subplotdir)
os.system('mv '+str(sc.settings.figdir)+'/*'+plotpattern+'** '+str(sc.settings.figdir)+'/'+subplotdir)
sc.settings.verbosity = 3 # verbosity: errors (0), warnings (1), info (2), hints (3)
sc.settings.figdir = './final-figures/'
sc.logging.print_versions()
sc.settings.set_figure_params(dpi=80) # low dpi (dots per inch) yields small inline figures
sys.executable
# Benjamini-Hochberg and Bonferroni FDR helper functions.
def bh(pvalues):
"""
Computes the Benjamini-Hochberg FDR correction.
Input:
* pvalues - vector of p-values to correct
"""
pvalues = np.array(pvalues)
n = int(pvalues.shape[0])
new_pvalues = np.empty(n)
values = [ (pvalue, i) for i, pvalue in enumerate(pvalues) ]
values.sort()
values.reverse()
new_values = []
for i, vals in enumerate(values):
rank = n - i
pvalue, index = vals
new_values.append((n/rank) * pvalue)
for i in range(0, int(n)-1):
if new_values[i] < new_values[i+1]:
new_values[i+1] = new_values[i]
for i, vals in enumerate(values):
pvalue, index = vals
new_pvalues[index] = new_values[i]
return new_pvalues
def bonf(pvalues):
"""
Computes the Bonferroni FDR correction.
Input:
* pvalues - vector of p-values to correct
"""
new_pvalues = np.array(pvalues) * len(pvalues)
new_pvalues[new_pvalues>1] = 1
return new_pvalues
```
Scrublet
(Courtesy of K Polansky)
Two-step doublet score processing, mirroring the approach from Popescu et al. https://www.nature.com/articles/s41586-019-1652-y which was closely based on Pijuan-Sala et al. https://www.nature.com/articles/s41586-019-0933-9
The first step starts with some sort of doublet score, e.g. Scrublet, and ends up with a per-cell p-value (with significant values marking doublets). For each sample individually:
- run Scrublet to obtain each cell's score
- overcluster the manifold - run a basic Scanpy pipeline up to clustering, then additionally cluster each cluster separately
- compute per-cluster Scrublet scores as the median of the observed values, and use those going forward
- identify p-values:
    - compute normal distribution parameters: centered at the median of the scores, with a MAD-derived standard deviation
    - the score distribution is zero-truncated, so as per the paper only above-median values are used to compute the MAD
    - K deviates from the paper a bit (at least from its exact wording) and multiplies the MAD by 1.4826 to obtain a literature-derived normal distribution standard deviation estimate
- FDR-correct the p-values via Benjamini-Hochberg
- write out all this doublet info into CSVs for later use
NOTE: The second step is performed later, in a multi-sample space
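Stripped of the per-sample bookkeeping, the statistical core of this first step is small. The sketch below is an illustration only (not the exact code used later): it assumes `cluster_scores` is a NumPy array holding each cell's cluster-median Scrublet score and reuses the `bh` helper defined above; the per-sample loop further down performs the same computation inline.
```
import numpy as np
import scipy.stats

def doublet_pvalues(cluster_scores):
    """Minimal sketch: cluster-median Scrublet scores -> BH-corrected one-sided p-values."""
    cluster_scores = np.asarray(cluster_scores)
    med = np.median(cluster_scores)
    # MAD computed from above-median values only, since the score distribution is zero-truncated
    mad = np.median(cluster_scores[cluster_scores > med] - med)
    # 1.4826 * MAD approximates a normal standard deviation
    zscores = (cluster_scores - med) / (1.4826 * mad)
    # one-sided test: unusually high scores mark doublets
    pvals = 1 - scipy.stats.norm.cdf(zscores)
    return bh(pvals)  # Benjamini-Hochberg correction via the helper defined above
```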
```
path_to_data = '/nfs/users/nfs_l/lg18/team292/lg18/gonads/data/scRNAseq/FCA/rawdata/'
metadata = pd.read_csv(path_to_data + 'immune_meta.csv', index_col=0)
metadata['process'].value_counts()
# Select process = CD45+
metadata_enriched = metadata[metadata['process'] == 'CD45+']
metadata_enriched
metadata_enriched['stage'] = metadata_enriched['stage'].astype('str')
plotmeta = list(metadata_enriched.columns)
plotmeta.append('sample')
print('Number of samples: ', metadata_enriched.index.size)
#there's loads of clustering going on, so set verbosity low unless you enjoy walls of text
sc.settings.verbosity = 0 # verbosity: errors (0), warnings (1), info (2), hints (3)
scorenames = ['scrublet_score','scrublet_cluster_score','zscore','bh_pval','bonf_pval']
if not os.path.exists('scrublet-scores'):
os.makedirs('scrublet-scores')
#loop over the subfolders of the rawdata folder
samples = metadata_enriched.index.to_list()
for sample in list(reversed(samples)):
print(sample)
#import data
adata_sample = sc.read_10x_mtx(path_to_data + sample + '/filtered_feature_bc_matrix/',cache=True)
adata_sample.var_names_make_unique()
#rename cells to SAMPLE_BARCODE
adata_sample.obs_names = [sample+'_'+i for i in adata_sample.obs_names]
#do some early filtering to retain meaningful cells for doublet inspection
sc.pp.filter_cells(adata_sample, min_genes=200)
sc.pp.filter_genes(adata_sample, min_cells=3)
#convert to lower to be species agnostic: human mito start with MT-, mouse with mt-
mito_genes = [name for name in adata_sample.var_names if name.lower().startswith('mt-')]
# for each cell compute fraction of counts in mito genes vs. all genes
# the `.A1` is only necessary as X is sparse (to transform to a dense array after summing)
adata_sample.obs['percent_mito'] = np.sum(
adata_sample[:, mito_genes].X, axis=1).A1 / np.sum(adata_sample.X, axis=1).A1
adata_sample = adata_sample[adata_sample.obs['percent_mito'] < 0.2, :]
#set up and run Scrublet, seeding for replicability
np.random.seed(0)
scrub = scr.Scrublet(adata_sample.X)
doublet_scores, predicted_doublets = scrub.scrub_doublets(verbose=False)
adata_sample.obs['scrublet_score'] = doublet_scores
#overcluster prep. run turbo basic scanpy pipeline
sc.pp.normalize_per_cell(adata_sample, counts_per_cell_after=1e4)
sc.pp.log1p(adata_sample)
sc.pp.highly_variable_genes(adata_sample, min_mean=0.0125, max_mean=3, min_disp=0.5)
adata_sample = adata_sample[:, adata_sample.var['highly_variable']]
sc.pp.scale(adata_sample, max_value=10)
sc.tl.pca(adata_sample, svd_solver='arpack')
sc.pp.neighbors(adata_sample)
#overclustering proper - do basic clustering first, then cluster each cluster
sc.tl.leiden(adata_sample)
adata_sample.obs['leiden'] = [str(i) for i in adata_sample.obs['leiden']]
for clus in np.unique(adata_sample.obs['leiden']):
adata_sub = adata_sample[adata_sample.obs['leiden']==clus].copy()
sc.tl.leiden(adata_sub)
adata_sub.obs['leiden'] = [clus+','+i for i in adata_sub.obs['leiden']]
adata_sample.obs.loc[adata_sub.obs_names,'leiden'] = adata_sub.obs['leiden']
#compute the cluster scores - the median of Scrublet scores per overclustered cluster
for clus in np.unique(adata_sample.obs['leiden']):
adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_cluster_score'] = \
np.median(adata_sample.obs.loc[adata_sample.obs['leiden']==clus, 'scrublet_score'])
#now compute doublet p-values. figure out the median and mad (from above-median values) for the distribution
med = np.median(adata_sample.obs['scrublet_cluster_score'])
mask = adata_sample.obs['scrublet_cluster_score']>med
mad = np.median(adata_sample.obs['scrublet_cluster_score'][mask]-med)
#let's do a one-sided test. the Bertie write-up does not address this but it makes sense
zscores = (adata_sample.obs['scrublet_cluster_score'].values - med) / (1.4826 * mad)
adata_sample.obs['zscore'] = zscores
pvals = 1-scipy.stats.norm.cdf(zscores)
adata_sample.obs['bh_pval'] = bh(pvals)
adata_sample.obs['bonf_pval'] = bonf(pvals)
#create results data frame for single sample and copy stuff over from the adata object
scrublet_sample = pd.DataFrame(0, index=adata_sample.obs_names, columns=scorenames)
for score in scorenames:
scrublet_sample[score] = adata_sample.obs[score]
#write out complete sample scores
scrublet_sample.to_csv('scrublet-scores/'+sample+'.csv')
```
#### End of notebook
| github_jupyter |
## The Basics
At the core of Python (and any programming language) there are some key characteristics of how a program is structured that enable the proper execution of that program. These characteristics include the structure of the code itself, the core data types from which others are built, and core operators that modify objects or create new ones. From these raw materials more complex commands, functions, and modules are built.
For guidance on recommended Python structure refer to the [Python Style Guide](https://www.python.org/dev/peps/pep-0008).
# Examples: Variables and Data Types
## The Interpreter
```
# The interpreter can be used as a calculator, and can also echo or concatenate strings.
3 + 3
3 * 3
3 ** 3
3 / 2 # classic division - output is a floating point number
# Use quotes around strings, single or double, but be consistent to the extent possible
'dogs'
"dogs"
"They're going to the beach"
'He said "I like mac and cheese"'
# sometimes you can't escape the escape
'He said "I\'d like mac and cheese"'
# + operator can be used to concatenate strings
'dogs' + "cats"
print('Hello World!')
```
### Try It Yourself
Go to the section _4.4. Numeric Types_ in the Python 3 documentation at <https://docs.python.org/3.4/library/stdtypes.html>. The table in that section describes different operators - try some!
What is the difference between the different division operators (`/`, `//`, and `%`)?
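If you want to check your predictions in the interpreter, the three operators behave like this:
```
7 / 2   # true division: 3.5 (always a float)
7 // 2  # floor division: 3 (rounds down to a whole number)
7 % 2   # modulo: 1 (the remainder left after floor division)
```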
## Variables
Variables allow us to store values for later use.
```
a = 5
b = 10
a + b
```
Variables can be reassigned:
```
b = 38764289.1097
a + b
```
The ability to reassign variable values becomes important when iterating through groups of objects for batch processing or other purposes. In the example below, the value of `b` is dynamically updated every time the `while` loop is executed:
```
a = 5
b = 10
while b > a:
print("b="+str(b))
b = b-1
```
Variable data types can be inferred, so Python does not require us to declare the data type of a variable on assignment.
```
a = 5
type(a)
```
is equivalent to
```
a = int(5)
type(a)
c = 'dogs'
print(type(c))
c = str('dogs')
print(type(c))
```
There are cases when we may want to declare the data type, for example to assign a different data type from the default that will be inferred. Concatenating strings provides a good example.
```
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + pizzas + ' pizzas.')
```
Above, Python has inferred the type of the variable `pizza` to be an integer. Since strings can only be concatenated with other strings, our print statement generates an error. There are two ways we can resolve the error:
1. Declare the `pizzas` variable as type string (`str`) on assignment or
2. Re-cast the `pizzas` variable as a string within the `print` statement.
```
customer = 'Carol'
pizzas = str(2)
print(customer + ' ordered ' + pizzas + ' pizzas.')
customer = 'Carol'
pizzas = 2
print(customer + ' ordered ' + str(pizzas) + ' pizzas.')
```
Given the following variable assignments:
```
x = 12
y = str(14)
z = 'donuts'
```
Predict the output of the following:
1. `y + z`
2. `x + y`
3. `x + int(y)`
4. `str(x) + y`
Check your answers in the interpreter.
### Variable Naming Rules
Variable names are case sensitive and:
1. Can only consist of one "word" (no spaces).
2. Must begin with a letter or underscore character ('\_').
3. Can only use letters, numbers, and the underscore character.
We further recommend using variable names that are meaningful within the context of the script and the research.
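A few examples of names that follow (and break) these rules:
```
pizza_count = 2     # valid: lowercase words joined by underscores
_total = 10         # valid: names may begin with an underscore
Pizza_Count = 3     # valid, but a *different* variable than pizza_count (names are case sensitive)
# 2_pizzas = 2      # invalid: names cannot begin with a number (SyntaxError)
# pizza count = 2   # invalid: names cannot contain spaces (SyntaxError)
```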
## Reading Files
We can accomplish a lot by assigning variables within our code as demonstrated above, but often we are interested in working with objects and data that exist in other files and directories on our system.
When we want to read data files into a script, we do so by assigning the content of the file to a variable. This stores the data in memory and lets us perform processes and analyses on the data without changing the content of the source file.
There are several ways to read files in Python - many libraries have methods for reading text, Excel and Word documents, PDFs, etc. This morning we're going to demonstrate the ```read()``` and ```readlines()``` methods in the standard library, and the Pandas ```read_csv()``` function.
```
# Read unstructured text
# One way is to open the whole file as a block
file_path = "./beowulf" # We can save the path to the file as a variable
file_in = open(file_path, "r") # Options are 'r', 'w', and 'a' (read, write, append)
beowulf_a = file_in.read()
file_in.close()
print(beowulf_a)
# Another way is to read the file as a list of individual lines
with open(file_path, "r") as b:
beowulf_b = b.readlines()
print(beowulf_b)
# In order to get a similar printout to the first method, we use a for loop
# to print line by line - more on for loops below!
for l in beowulf_b:
print(l)
# We now have two variables with the content of our 'beowulf' file represented using two different data structures.
# Why do you think we get the different outputs from the next two statements?
# Beowulf text stored as one large string
print("As string:", beowulf_a[0])
# Beowulf text stored as a list of lines
print("As list of lines:", beowulf_b[0])
# We can confirm our expectations by checking on the types of our two beowulf variables
print(type(beowulf_a))
print(type(beowulf_b))
# Read CSV files using the Pandas read_csv method.
# Note: Pandas also includes methods for reading Excel.
# First we need to import the pandas library
import pandas as pd
# Create a variable to hold the path to the file
fpath = "aaj1945_DataS1_Egg_shape_by_species_v2.csv"
egg_data = pd.read_csv(fpath)
# We can get all kinds of info about the dataset
# info() provides an overview of the structure
print(egg_data.info())
# Look at the first five rows
egg_data.head()
# Names of columns
print(egg_data.columns.values)
# Dimensions (number of rows and columns)
print(egg_data.shape)
# And much more! But as a final example we can perform operations on the data.
# Descriptive statistics on the "Number of eggs" column
print(egg_data["Number of eggs"].describe())
# Or all of the columns in whole table with numeric data types:
print(egg_data.describe())
```
### Structure
Now that we have practiced assigning variables and reading information from files, we will have a look at concepts that are key to developing processes to use and analyze this information.
#### Blocks
The structure of a Python program is pretty simple:
Blocks of code are defined using indentation. Code that is at a lower level of indentation is not considered part of a block. Indentation can be defined using spaces or tabs (spaces are recommended by the style guide), but be consistent (and prepared to defend your choice). As we will see, code blocks define the boundaries of sets of commands that fit within a given section of code. This indentation model for defining blocks of code significantly increases the readability of Python code.
For example:
>>>a = 5
>>>b = 10
>>>while b > a:
... print("b="+str(b))
... b = b-1
>>>print("I'm outside the block")
#### Comments & Documentation
You can (and should) also include documentation and comments in the code you write - both for yourself and potential future users (including yourself). Comments are pretty much any content on a line that follows a `#` symbol (unless it is between quotation marks). For example:
>>># we're going to do some math now
>>>yae = 5 # the number of votes in favor
>>>nay = 10 # the number of votes against
>>>proportion = yae / nay # the proportion of votes in favor
>>>print(proportion)
When you are creating functions or classes (a bit more on what these are in a bit) you can also create what are called *doc strings* that provide a defined location for content that is used to generate the `help()` information highlighted above and is also used by other systems for the automatic generation of documentation for packages that contain these *doc strings*. Creating a *doc string* is simple - just create a single or multi-line text string (more on this soon) that starts on the first indented line following the start of the definition of the function or class. For example:
>>># we're going to create a documented function and then access the information about the function
>>>def doc_demo(some_text="Ill skewer yer gizzard, ye salty sea bass"):
... """This function takes the provided text and prints it out in Pirate
...
... If a string is not provided for `some_text` a default message will be displayed
... """
... out_string = "Ahoy Matey. " + some_text
... print(out_string)
>>>help(doc_demo)
>>>doc_demo()
>>>doc_demo("Sail ho!")
### Standard Objects
Any programming language has at its foundation a collection of *types* or in Python's terminology *objects*. The standard objects of Python consist of the following:
* **Numbers** - integer, floating point, complex, and multiple-base defined numeric values
* **Strings** - **immutable** strings of characters, numbers, and symbols that are bounded by single- or double-quotes
* **Lists** - an ordered collection of objects that is bounded by square-brackets - `[]`. Elements in lists are extracted or referenced by their position in the list. For example, `my_list[0]` refers to the first item in the list, `my_list[5]` the sixth, and `my_list[-1]` to the last item in the list.
* **Dictionaries** - an unordered collection of objects that are referenced by *keys*, which allow individual objects to be looked up by key. Dictionaries are bounded by curly brackets - `{}` with each element of the dictionary consisting of a *key* (string) and a *value* (object) separated by a colon `:`. Elements of a dictionary are extracted or referenced using their keys. For example:
my_dict = {"key1":"value1", "key2":36, "key3":[1,2,3]}
my_dict['key1'] returns "value1"
my_dict['key3'] returns [1,2,3]
* **Tuples** - **immutable** lists that are bounded by parentheses - `()`. Referencing elements in a tuple is the same as referencing elements in a list above.
* **Files** - objects that represent external files on the file system. Programs can interact with (e.g. read, write, append) external files through their representative file objects in the program.
* **Sets** - unordered collections of **immutable** objects (i.e. ints, floats, strings, and tuples) where membership in the set and uniqueness within the set are defining characteristics of the member objects. Sets are created using the `set` function on a sequence of objects. Specialized set operators allow for identifying *union*, *intersection*, and *difference* (among others) between sets (a short example appears just after this list).
* **Other core types** - Booleans, types, `None`
* **Program unit types** - *functions*, *modules*, and *classes* for example
* **Implementation-related types** (not covered in this workshop)
These objects have their own sets of related methods (as we saw in the `help()` examples above) that enable their creation, and operations upon them.
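As a quick illustration of the set behavior described in the list above (a minimal sketch):
```
a = set([1, 2, 3, 4])
b = set([3, 4, 5, 6])
print(a | b)   # union: {1, 2, 3, 4, 5, 6}
print(a & b)   # intersection: {3, 4}
print(a - b)   # difference: {1, 2}
print(2 in a)  # membership test: True
```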
```
# Fun with types
this = 12
that = 15
the_other = "27"
my_stuff = [this,that,the_other,["a","b","c",4]]
more_stuff = {
"item1": this,
"item2": that,
"item3": the_other,
"item4": my_stuff
}
this + that
# this won't work ...
# this + that + the_other
# ... but this will ...
this + that + int(the_other)
# ...and this too
str(this) + str(that) + the_other
```
## Lists
<https://docs.python.org/3/library/stdtypes.html?highlight=lists#list>
Lists are a type of collection in Python. Lists allow us to store sequences of items that are typically but not always similar. All of the following lists are legal in Python:
```
# Separate list items with commas!
number_list = [1, 2, 3, 4, 5]
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
combined_list = [1, 2, 'oranges', 3.14, 'peaches', 'grapes', 99.19876]
# Nested lists - lists of lists - are allowed.
list_of_lists = [[1, 2, 3],
['oranges', 'grapes', 8],
[['small list'],
['bigger', 'list', 55],
['url_1', 'url_2']
]
]
```
There are multiple ways to create a list:
```
# Create an empty list
empty_list = []
# As we did above, by using square brackets around a comma-separated sequence of items
new_list = [1, 2, 3]
# Using the type constructor
constructed_list = list('purple')
# Using a list comprehension
result_list = [i for i in range(1, 20)]
```
We can inspect our lists:
```
empty_list
new_list
result_list
constructed_list
```
The above output for `constructed_list` may seem odd. Referring to the documentation, we see that the argument to the type constructor is an _iterable_, which according to the documentation is "An object capable of returning its members one at a time." In our constructor statement above
```
# Using the type constructor
constructed_list = list('purple')
```
the word 'purple' is the object - in this case a ```str``` (string) consisting of the word 'purple' - that when used to construct a list returns its members (individual letters) one at a time.
Compare the outputs below:
```
constructed_list_int = list(123)
constructed_list_str = list('123')
constructed_list_str
```
Lists in Python are:
* mutable - the list and list items can be changed
* ordered - list items keep the same "place" in the list
_Ordered_ here does not mean sorted. The list below is printed with the numbers in the order we added them to the list, not in numeric order:
```
ordered = [3, 2, 7, 1, 19, 0]
ordered
# There is a 'sort' method for sorting list items as needed:
ordered.sort()
ordered
```
Info on additional list methods is available at <https://docs.python.org/3/library/stdtypes.html?highlight=lists#mutable-sequence-types>
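For instance, a few of the commonly used list methods in action:
```
fruit = ['apples', 'oranges', 'pears']
fruit.append('grapes')      # add an item to the end
fruit.insert(1, 'bananas')  # insert an item at position 1
fruit.remove('pears')       # remove the first occurrence of an item
last = fruit.pop()          # remove and return the last item ('grapes')
print(fruit, last)
```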
Because lists are ordered, it is possible to access list items by referencing their positions. Note that the position of the first item in a list is 0 (zero), not 1!
```
string_list = ['apples', 'oranges', 'pears', 'grapes', 'pineapples']
string_list[0]
# We can use positions to 'slice' or select sections of a list:
string_list[3:] # start at index '3' and continue to the end
string_list[:3] # start at index '0' and go up to, but don't include index '3'
string_list[1:4] # start at index '1' and go up to and don't include index '4'
# If we don't know the position of a list item, we can use the 'index()' method to find out.
# Note that in the case of duplicate list items, this only returns the position of the first one:
string_list.index('pears')
string_list.append('oranges')
string_list
string_list.index('oranges')
# one more time with lists and dictionaries
list_ex1 = my_stuff[0] + my_stuff[1] + int(my_stuff[2])
print(list_ex1)
# we can use parentheses to split a continuous group of commands over multiple lines
list_ex2 = (
str(my_stuff[0])
+ str(my_stuff[1])
+ my_stuff[2]
+ my_stuff[3][0]
)
print(list_ex2)
dict_ex1 = (
more_stuff['item1']
+ more_stuff['item2']
+ int(more_stuff['item3'])
)
print(dict_ex1)
dict_ex2 = (
str(more_stuff['item1'])
+ str(more_stuff['item2'])
+ more_stuff['item3']
)
print(dict_ex2)
# Now try it yourself ...
# print out the phrase "The answer: 42" using the following
# variables and one or more of your own and the 'print()' function
# (remember spaces are characters as well)
start = "The"
answer = 42
```
### Operators
If *objects* are the nouns, operators are the verbs of a programming language. We've already seen examples of some operators: *assignment* with the `=` operator, *arithmetic* addition *and* string concatenation with the `+` operator, *arithmetic* division and subtraction with the `/` and `-` operators, and *comparison* with the `>` operator. Different object types have different operators that may be used with them. The [Python Documentation](https://docs.python.org/3/library/stdtypes.html) provides detailed information about the operators and their functions as they relate to the standard object types described above.
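A few operators in action across different object types:
```
print(3 * 7)                     # arithmetic multiplication: 21
print('na' * 4)                  # the same operator repeats a string: 'nananana'
print(10 >= 3)                   # comparison: True
print('dog' in ['dog', 'cat'])   # membership test on a list: True
```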
### Flow Control and Logical Tests
Flow control commands allow for the dynamic execution of parts of the program based upon logical conditions, or processing of objects within an *iterable* object (like a list or dictionary). Some key flow control commands in Python include:
* `while-else` loops that continue to run until the termination test is `False` or a `break` command is issued within the loop:
done = False
i = 0
while not done:
i = i+1
if i > 5: done = True
* `if-elif-else` statements define alternative blocks of code that are executed if a test condition is met:
do_something = "what?"
if do_something == "what?":
print(do_something)
elif do_something == "where?":
print("Where are we going?")
else:
print("I guess nothing is going to happen")
* `for` loops allow for repeated execution of a block of code for each item in a Python sequence such as a list or dictionary. For example:
my_stuff = ['a', 'b', 'c']
for item in my_stuff:
print(item)
a
b
c
| github_jupyter |
# Bayesian Optimization
[Bayesian optimization](https://en.wikipedia.org/wiki/Bayesian_optimization) is a powerful strategy for minimizing (or maximizing) objective functions that are costly to evaluate. It is an important component of [automated machine learning](https://en.wikipedia.org/wiki/Automated_machine_learning) toolboxes such as [auto-sklearn](https://automl.github.io/auto-sklearn/stable/), [auto-weka](http://www.cs.ubc.ca/labs/beta/Projects/autoweka/), and [scikit-optimize](https://scikit-optimize.github.io/), where Bayesian optimization is used to select model hyperparameters. Bayesian optimization is used for a wide range of other applications as well; as cataloged in the review [2], these include interactive user-interfaces, robotics, environmental monitoring, information extraction, combinatorial optimization, sensor networks, adaptive Monte Carlo, experimental design, and reinforcement learning.
## Problem Setup
We are given a minimization problem
$$ x^* = \text{arg}\min \ f(x), $$
where $f$ is a fixed objective function that we can evaluate pointwise.
Here we assume that we do _not_ have access to the gradient of $f$. We also
allow for the possibility that evaluations of $f$ are noisy.
To solve the minimization problem, we will construct a sequence of points $\{x_n\}$ that converge to $x^*$. Since we implicitly assume that we have a fixed budget (say 100 evaluations), we do not expect to find the exact minimum $x^*$: the goal is to get the best approximate solution we can given the allocated budget.
The Bayesian optimization strategy works as follows:
1. Place a prior on the objective function $f$. Each time we evaluate $f$ at a new point $x_n$, we update our model for $f(x)$. This model serves as a surrogate objective function and reflects our beliefs about $f$ (in particular it reflects our beliefs about where we expect $f(x)$ to be close to $f(x^*)$). Since we are being Bayesian, our beliefs are encoded in a posterior that allows us to systematically reason about the uncertainty of our model predictions.
2. Use the posterior to derive an "acquisition" function $\alpha(x)$ that is easy to evaluate and differentiate (so that optimizing $\alpha(x)$ is easy). In contrast to $f(x)$, we will generally evaluate $\alpha(x)$ at many points $x$, since doing so will be cheap.
3. Repeat until convergence:
+ Use the acquisition function to derive the next query point according to
$$ x_{n+1} = \text{arg}\min \ \alpha(x). $$
+ Evaluate $f(x_{n+1})$ and update the posterior.
A good acquisition function should make use of the uncertainty encoded in the posterior to encourage a balance between exploration—querying points where we know little about $f$—and exploitation—querying points in regions we have good reason to think $x^*$ may lie. As the iterative procedure progresses our model for $f$ evolves and so does the acquisition function. If our model is good and we've chosen a reasonable acquisition function, we expect that the acquisition function will guide the query points $x_n$ towards $x^*$.
In this tutorial, our model for $f$ will be a Gaussian process. In particular we will see how to use the [Gaussian Process module](http://docs.pyro.ai/en/0.3.1/contrib.gp.html) in Pyro to implement a simple Bayesian optimization procedure.
```
import matplotlib.gridspec as gridspec
import matplotlib.pyplot as plt
import torch
import torch.autograd as autograd
import torch.optim as optim
from torch.distributions import constraints, transform_to
import pyro
import pyro.contrib.gp as gp
assert pyro.__version__.startswith('1.5.2')
pyro.set_rng_seed(1)
```
## Define an objective function
For the purposes of demonstration, the objective function we are going to consider is the [Forrester et al. (2008) function](https://www.sfu.ca/~ssurjano/forretal08.html):
$$f(x) = (6x-2)^2 \sin(12x-4), \quad x\in [0, 1].$$
This function has both a local minimum and a global minimum. The global minimum is at $x^* = 0.75725$.
```
def f(x):
return (6 * x - 2)**2 * torch.sin(12 * x - 4)
```
Let's begin by plotting $f$.
```
x = torch.linspace(0, 1)
plt.figure(figsize=(8, 4))
plt.plot(x.numpy(), f(x).numpy())
plt.show()
```
## Setting a Gaussian Process prior
[Gaussian processes](https://en.wikipedia.org/wiki/Gaussian_process) are a popular choice for function priors due to their power and flexibility. The core of a Gaussian Process is its covariance function $k$, which governs the similarity of $f(x)$ for pairs of input points. Here we will use a Gaussian Process as our prior for the objective function $f$. Given inputs $X$ and the corresponding noisy observations $y$, the model takes the form
$$f\sim\mathrm{MultivariateNormal}(0,k(X,X)),$$
$$y\sim f+\epsilon,$$
where $\epsilon$ is i.i.d. Gaussian noise and $k(X,X)$ is a covariance matrix whose entries are given by $k(x,x^\prime)$ for each pair of inputs $(x,x^\prime)$.
We choose the [Matern](https://en.wikipedia.org/wiki/Mat%C3%A9rn_covariance_function) kernel with $\nu = \frac{5}{2}$ (as suggested in reference [1]). Note that the popular [RBF](https://en.wikipedia.org/wiki/Radial_basis_function_kernel) kernel, which is used in many regression tasks, results in a function prior whose samples are infinitely differentiable; this is probably an unrealistic assumption for most 'black-box' objective functions.
```
# initialize the model with four input points: 0.0, 0.33, 0.66, 1.0
X = torch.tensor([0.0, 0.33, 0.66, 1.0])
y = f(X)
gpmodel = gp.models.GPRegression(X, y, gp.kernels.Matern52(input_dim=1),
noise=torch.tensor(0.1), jitter=1.0e-4)
```
The following helper function `update_posterior` will take care of updating our `gpmodel` each time we evaluate $f$ at a new value $x$.
```
def update_posterior(x_new):
y = f(x_new) # evaluate f at new point.
X = torch.cat([gpmodel.X, x_new]) # incorporate new evaluation
y = torch.cat([gpmodel.y, y])
gpmodel.set_data(X, y)
# optimize the GP hyperparameters using Adam with lr=0.001
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
```
## Define an acquisition function
There are many reasonable options for the acquisition function (see references [1] and [2] for a list of popular choices and a discussion of their properties). Here we will use one that is 'simple to implement and interpret,' namely the 'Lower Confidence Bound' acquisition function.
It is given by
$$
\alpha(x) = \mu(x) - \kappa \sigma(x)
$$
where $\mu(x)$ and $\sigma(x)$ are the mean and square root variance of the posterior at the point $x$, and the arbitrary constant $\kappa>0$ controls the trade-off between exploitation and exploration. This acquisition function will be minimized for choices of $x$ where either: i) $\mu(x)$ is small (exploitation); or ii) where $\sigma(x)$ is large (exploration). A large value of $\kappa$ means that we place more weight on exploration because we prefer candidates $x$ in areas of high uncertainty. A small value of $\kappa$ encourages exploitation because we prefer candidates $x$ that minimize $\mu(x)$, which is the mean of our surrogate objective function. We will use $\kappa=2$.
```
def lower_confidence_bound(x, kappa=2):
mu, variance = gpmodel(x, full_cov=False, noiseless=False)
sigma = variance.sqrt()
return mu - kappa * sigma
```
The final component we need is a way to find (approximate) minimizing points $x_{\rm min}$ of the acquisition function. There are several ways to proceed, including gradient-based and non-gradient-based techniques. Here we will follow the gradient-based approach. One of the possible drawbacks of gradient descent methods is that the minimization algorithm can get stuck at a local minimum. In this tutorial, we adopt a (very) simple approach to address this issue:
- First, we seed our minimization algorithm with 5 different values: i) one is chosen to be $x_{n-1}$, i.e. the candidate $x$ used in the previous step; and ii) four are chosen uniformly at random from the domain of the objective function.
- We then run the minimization algorithm to approximate convergence for each seed value.
- Finally, from the five candidate $x$s identified by the minimization algorithm, we select the one that minimizes the acquisition function.
Please refer to reference [2] for a more detailed discussion of this problem in Bayesian Optimization.
```
def find_a_candidate(x_init, lower_bound=0, upper_bound=1):
# transform x to an unconstrained domain
constraint = constraints.interval(lower_bound, upper_bound)
unconstrained_x_init = transform_to(constraint).inv(x_init)
unconstrained_x = unconstrained_x_init.clone().detach().requires_grad_(True)
minimizer = optim.LBFGS([unconstrained_x], line_search_fn='strong_wolfe')
def closure():
minimizer.zero_grad()
x = transform_to(constraint)(unconstrained_x)
y = lower_confidence_bound(x)
autograd.backward(unconstrained_x, autograd.grad(y, unconstrained_x))
return y
minimizer.step(closure)
# after finding a candidate in the unconstrained domain,
# convert it back to original domain.
x = transform_to(constraint)(unconstrained_x)
return x.detach()
```
## The inner loop of Bayesian Optimization
With the various helper functions defined above, we can now encapsulate the main logic of a single step of Bayesian Optimization in the function `next_x`:
```
def next_x(lower_bound=0, upper_bound=1, num_candidates=5):
candidates = []
values = []
x_init = gpmodel.X[-1:]
for i in range(num_candidates):
x = find_a_candidate(x_init, lower_bound, upper_bound)
y = lower_confidence_bound(x)
candidates.append(x)
values.append(y)
x_init = x.new_empty(1).uniform_(lower_bound, upper_bound)
argmin = torch.min(torch.cat(values), dim=0)[1].item()
return candidates[argmin]
```
## Running the algorithm
To illustrate how Bayesian Optimization works, we make a convenient plotting function that will help us visualize our algorithm's progress.
```
def plot(gs, xmin, xlabel=None, with_title=True):
xlabel = "xmin" if xlabel is None else "x{}".format(xlabel)
Xnew = torch.linspace(-0.1, 1.1)
ax1 = plt.subplot(gs[0])
ax1.plot(gpmodel.X.numpy(), gpmodel.y.numpy(), "kx") # plot all observed data
with torch.no_grad():
loc, var = gpmodel(Xnew, full_cov=False, noiseless=False)
sd = var.sqrt()
ax1.plot(Xnew.numpy(), loc.numpy(), "r", lw=2) # plot predictive mean
ax1.fill_between(Xnew.numpy(), loc.numpy() - 2*sd.numpy(), loc.numpy() + 2*sd.numpy(),
color="C0", alpha=0.3) # plot uncertainty intervals
ax1.set_xlim(-0.1, 1.1)
ax1.set_title("Find {}".format(xlabel))
if with_title:
ax1.set_ylabel("Gaussian Process Regression")
ax2 = plt.subplot(gs[1])
with torch.no_grad():
# plot the acquisition function
ax2.plot(Xnew.numpy(), lower_confidence_bound(Xnew).numpy())
# plot the new candidate point
ax2.plot(xmin.numpy(), lower_confidence_bound(xmin).numpy(), "^", markersize=10,
label="{} = {:.5f}".format(xlabel, xmin.item()))
ax2.set_xlim(-0.1, 1.1)
if with_title:
ax2.set_ylabel("Acquisition Function")
ax2.legend(loc=1)
```
Our surrogate model `gpmodel` already has 4 function evaluations at its disposal; however, we have yet to optimize the GP hyperparameters. So we do that first. Then in a loop we call the `next_x` and `update_posterior` functions repeatedly. The following plot illustrates how Gaussian Process posteriors and the corresponding acquisition functions change at each step in the algorithm. Note how query points are chosen both for exploration and exploitation.
```
plt.figure(figsize=(12, 30))
outer_gs = gridspec.GridSpec(5, 2)
optimizer = torch.optim.Adam(gpmodel.parameters(), lr=0.001)
gp.util.train(gpmodel, optimizer)
for i in range(8):
xmin = next_x()
gs = gridspec.GridSpecFromSubplotSpec(2, 1, subplot_spec=outer_gs[i])
plot(gs, xmin, xlabel=i+1, with_title=(i % 2 == 0))
update_posterior(xmin)
plt.show()
```
Because we have assumed that our observations contain noise, it is improbable that we will find the exact minimizer of the function $f$. Still, with a relatively small budget of evaluations (8), we see that the algorithm has converged very close to the global minimum at $x^* = 0.75725$.
While this tutorial is only intended to be a brief introduction to Bayesian Optimization, we hope that we have been able to convey the basic underlying ideas. Consider watching the lecture by Nando de Freitas [3] for an excellent exposition of the basic theory. Finally, the reference paper [2] gives a review of recent research on Bayesian Optimization, together with many discussions about important technical details.
## References
[1] `Practical bayesian optimization of machine learning algorithms`,<br />
Jasper Snoek, Hugo Larochelle, and Ryan P. Adams
[2] `Taking the human out of the loop: A review of bayesian optimization`,<br />
Bobak Shahriari, Kevin Swersky, Ziyu Wang, Ryan P. Adams, and Nando De Freitas
[3] [Machine learning - Bayesian optimization and multi-armed bandits](https://www.youtube.com/watch?v=vz3D36VXefI)
| github_jupyter |
# UMAP
This script generates UMAP representations from spectrograms (previously generated).
### Installing and loading libraries
```
import os
import pandas as pd
import sys
import numpy as np
from pandas.core.common import flatten
import pickle
import umap
from pathlib import Path
import datetime
import scipy
import matplotlib.pyplot as plt
import seaborn as sns
import matplotlib
import librosa.display
from scipy.spatial.distance import pdist, squareform
from plot_functions import umap_2Dplot, mara_3Dplot, plotly_viz
from preprocessing_functions import pad_spectro, calc_zscore, preprocess_spec_numba, create_padded_data
```
### Setting constants
Setting project, input and output folders.
```
wd = os.getcwd()
DATA = os.path.join(os.path.sep, str(Path(wd).parents[0]), "data", "processed")
FIGURES = os.path.join(os.path.sep, str(Path(wd).parents[0]), "reports", "figures")
DF_DICT = {}
for dftype in ['full', 'reduced', 'balanced']:
DF_DICT[dftype] = os.path.join(os.path.sep, DATA, "df_focal_"+dftype+".pkl")
LOAD_EXISTING = True # if true, load existing embedding instead of creating new
OVERWRITE_FIGURES = False # if true, overwrite existing figures
```
# UMAP projection
### Choose dataset
```
#dftype='full'
dftype='reduced'
#dftype='balanced'
spec_df = pd.read_pickle(DF_DICT[dftype])
labels = spec_df.call_lable.values
spec_df.shape
```
### Choose feature
```
specs = spec_df.spectrograms.copy()
specs = [calc_zscore(x) for x in specs]
data = create_padded_data(specs)
```
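The feature preparation above relies on the project's `calc_zscore` and `create_padded_data` helpers. As an assumption (not the original implementation), a minimal per-spectrogram z-scoring equivalent might look like this:
```
# Minimal sketch of per-spectrogram z-scoring (assumed equivalent of calc_zscore)
def calc_zscore_sketch(spec):
    # standardise the whole spectrogram to zero mean and unit variance
    return (spec - spec.mean()) / spec.std()
```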
## Run UMAP
```
# 3D
embedding_filename = os.path.join(os.path.sep, DATA,'basic_UMAP_3D_'+dftype+'_default_params.csv')
print(embedding_filename)
if (LOAD_EXISTING and os.path.isfile(embedding_filename)):
embedding = np.loadtxt(embedding_filename, delimiter=";")
print("File already exists")
else:
reducer = umap.UMAP(n_components=3, min_dist = 0, random_state=2204)
embedding = reducer.fit_transform(data)
np.savetxt(embedding_filename, embedding, delimiter=";")
# 2D
embedding_filename = os.path.join(os.path.sep, DATA,'basic_UMAP_2D_'+dftype+'_default_params.csv')
print(embedding_filename)
if (LOAD_EXISTING and os.path.isfile(embedding_filename)):
embedding2D = np.loadtxt(embedding_filename, delimiter=";")
print("File already exists")
else:
reducer = umap.UMAP(n_components=2, min_dist = 0, random_state=2204)
embedding2D = reducer.fit_transform(data)
np.savetxt(embedding_filename, embedding2D, delimiter=";")
```
## Visualization
```
pal="Set2"
```
### 2D Plots
```
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'UMAP_2D_plot_'+dftype+'_nolegend.jpg')
else:
outname=None
print(outname)
umap_2Dplot(embedding2D[:,0], embedding2D[:,1], labels, pal, outname=outname, showlegend=False)
```
### 3D Plot
#### Matplotlib
```
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'UMAP_3D_plot_'+dftype+'_nolegend.jpg')
else:
outname=None
print(outname)
mara_3Dplot(embedding[:,0],
embedding[:,1],
embedding[:,2],
labels,
pal,
outname,
showlegend=False)
```
#### Plotly
Interactive viz in plotly (though without sound or spectrogram)
```
#plotly_viz(embedding[:,0],
# embedding[:,1],
# embedding[:,2],
# labels,
# pal)
```
# Embedding evaluation
Evaluate the embedding based on calltype labels of nearest neighbors.
```
from evaluation_functions import nn, sil
# produce nearest neighbor statistics
nn_stats = nn(embedding, np.asarray(labels), k=5)
```
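The `nn` statistics come from the project's own `evaluation_functions` module. As a rough sketch of the underlying idea (not the project's implementation), the fraction of k nearest neighbours that share each point's call type could be computed like this:
```
# Sketch: mean fraction of the k nearest neighbours sharing each point's label
from sklearn.neighbors import NearestNeighbors

def knn_label_agreement(embedding, labels, k=5):
    nbrs = NearestNeighbors(n_neighbors=k+1).fit(embedding)
    _, idx = nbrs.kneighbors(embedding)      # idx[:, 0] is the point itself
    neighbour_labels = labels[idx[:, 1:]]    # labels of the k true neighbours
    return (neighbour_labels == labels[:, None]).mean()

# knn_label_agreement(embedding, np.asarray(labels), k=5)
```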
## Calculate metrics
```
print("Log final metric (unweighted):",nn_stats.get_S())
print("Abs final metric (unweighted):",nn_stats.get_Snorm())
print(nn_stats.knn_accuracy())
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'heatS_UMAP_'+dftype+'.png')
else:
outname=None
print(outname)
nn_stats.plot_heat_S(outname=outname)
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'heatSnorm_UMAP_'+dftype+'.png')
else:
outname=None
print(outname)
nn_stats.plot_heat_Snorm(outname=outname)
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'heatfold_UMAP_'+dftype+'.png')
else:
outname=None
print(outname)
nn_stats.plot_heat_fold(outname=outname)
```
# Within vs. outside distances
```
from evaluation_functions import plot_within_without
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES,"distanceswithinwithout_"+dftype+".png")
else:
outname=None
print(outname)
plot_within_without(embedding=embedding, labels=labels, outname=outname)
```
## Silhouette Plot
```
sil_stats = sil(embedding, labels)
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep, FIGURES, 'silplot_UMAP_'+dftype+'.png')
else:
outname=None
print(outname)
sil_stats.plot_sil(outname=outname)
sil_stats.get_avrg_score()
```
## How many dimensions?
Evaluate how many dimensions are best for the embedding.
```
specs = spec_df.spectrograms.copy()
# normalize feature
specs = [calc_zscore(x) for x in specs]
# pad feature
maxlen= np.max([spec.shape[1] for spec in specs])
flattened_specs = [pad_spectro(spec, maxlen).flatten() for spec in specs]
data = np.asarray(flattened_specs)
data.shape
embeddings = {}
for n_dims in range(1,11):
reducer = umap.UMAP(n_components = n_dims, min_dist = 0, metric='euclidean', random_state=2204)
embeddings[n_dims] = reducer.fit_transform(data)
labels = spec_df.call_lable.values
calltypes = sorted(list(set(labels)))
k=5
dims_tab = np.zeros((10,1))
for n_dims in range(1,11):
nn_stats = nn(embeddings[n_dims], labels, k=k)
stats_tab = nn_stats.get_statstab()
mean_metric = np.mean(np.diagonal(stats_tab.iloc[:-1,]))
print(mean_metric)
dims_tab[n_dims-1,:] = mean_metric
x = np.arange(1,11,1)
y = dims_tab[:,0]
plt.plot(x,y, marker='o', markersize=4)
plt.xlabel("N_components")
plt.ylabel("Embedding score S")
plt.xticks(np.arange(0, 11, step=1))
plt.savefig(os.path.join(os.path.sep,FIGURES,'n_dims.png'), facecolor="white")
```
Note that this is different from running UMAP with n=10 components and then selecting only the first x dimensions in UMAP space!
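To make the difference concrete, here is a small sketch (reusing `embeddings`, `data`, `labels` and `nn` from the cells above) that compares a slice of the 10-D embedding against a directly fitted 2-D embedding:
```
# First two dimensions of the 10-component fit vs. a dedicated 2-component fit
first_two_of_10d = embeddings[10][:, :2]
direct_2d = umap.UMAP(n_components=2, min_dist=0, random_state=2204).fit_transform(data)
print("sliced 10-D fit :", nn(first_two_of_10d, labels, k=5).get_S())
print("direct 2-D fit  :", nn(direct_2d, labels, k=5).get_S())
```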
# Graph from embedding evaluation
```
if OVERWRITE_FIGURES:
outname = os.path.join(os.path.sep,FIGURES,'simgraph_test.png')
else:
outname=None
nn_stats.draw_simgraph(outname)
```
Resource: https://en.it1352.com/article/d096c1eadbb84c19b038eb9648153346.html
# Visualize example nearest neighbors
```
import random
import scipy
from sklearn.neighbors import NearestNeighbors
knn=5
# Find k nearest neighbors
nbrs = NearestNeighbors(metric='euclidean',n_neighbors=knn+1, algorithm='brute').fit(embedding)
distances, indices = nbrs.kneighbors(embedding)
# need to remove the first neighbor, because that is the datapoint itself
indices = indices[:,1:]
distances = distances[:,1:]
calltypes = sorted(list(set(spec_df['call_lable'])))
labels = spec_df.call_lable.values
names = spec_df.Name.values
# make plots per calltype
n_examples = 3
for calltype in calltypes:
fig = plt.figure(figsize=(14,6))
fig_name = 'NN_viz_'+calltype
k=1
call_indices = np.asarray(np.where(labels==calltype))[0]
# randomly choose 3
random.seed(2204)
example_indices = random.sample(list(call_indices), n_examples)
for i,ind in enumerate(example_indices):
img_of_interest = spec_df.iloc[ind,:].spectrograms
embedding_of_interest = embedding[ind,:]
plt.subplot(n_examples, knn+1, k)
#librosa.display.specshow(np.transpose(spec))
plt.imshow(img_of_interest, interpolation='nearest', origin='lower', aspect='equal')
#plt.title(calltype+' : 0')
#plt.title(calltype)
k=k+1
nearest_neighbors = indices[ind]
for neighbor in nearest_neighbors:
neighbor_label = names[neighbor]
neighbor_embedding = embedding[neighbor,:]
dist_to_original = scipy.spatial.distance.euclidean(embedding_of_interest, neighbor_embedding)
neighbor_img = spec_df.iloc[neighbor,:].spectrograms
plt.subplot(n_examples, knn+1, k)
plt.imshow(neighbor_img, interpolation='nearest', origin='lower', aspect='equal')
k=k+1
plt.tight_layout()
plt.savefig(os.path.join(os.path.sep,FIGURES,fig_name), facecolor="white")
plt.close()
# Randomly choose 10 calls and plot their 5 nearest neighbors
n_examples = 10
fig = plt.figure(figsize=(14,25))
fig_name = 'NN_viz'
k=1
# randomly choose 10
random.seed(2204)
example_indices = random.sample(list(range(embedding.shape[0])), n_examples)
for i,ind in enumerate(example_indices):
img_of_interest = spec_df.iloc[ind,:].spectrograms
embedding_of_interest = embedding[ind,:]
plt.subplot(n_examples, knn+1, k)
plt.imshow(img_of_interest, interpolation='nearest', origin='lower', aspect='equal')
k=k+1
nearest_neighbors = indices[ind]
for neighbor in nearest_neighbors:
neighbor_label = names[neighbor]
neighbor_embedding = embedding[neighbor,:]
dist_to_original = scipy.spatial.distance.euclidean(embedding_of_interest, neighbor_embedding)
neighbor_img = spec_df.iloc[neighbor,:].spectrograms
plt.subplot(n_examples, knn+1, k)
plt.imshow(neighbor_img, interpolation='nearest', origin='lower', aspect='equal')
k=k+1
plt.tight_layout()
plt.savefig(os.path.join(os.path.sep,FIGURES,fig_name), facecolor="white")
```
# Visualize preprocessing steps
```
N_MELS = 40
MEL_BINS_REMOVED_UPPER = 5
MEL_BINS_REMOVED_LOWER = 5
# make plots
calltypes = sorted(list(set(spec_df.call_lable.values)))
fig = plt.figure(figsize=(10,6))
fig_name = 'preprocessing_examples_mara.png'
fig.suptitle('Preprocessing steps', fontsize=16)
k=1
# randomly choose 6
examples = spec_df.sample(n=6, random_state=1)
examples.reset_index(inplace=True)
ori_specs = examples.denoised_spectrograms
# original
specs = ori_specs
vmin = np.min([np.min(x) for x in specs])
vmax = np.max([np.max(x) for x in specs])
for i in range(examples.shape[0]):
spec = specs[i]
plt.subplot(5, 6, k)
#librosa.display.specshow(spec, y_axis='mel', fmin=0, fmax=4000)
plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal', norm=None,vmin=vmin, vmax=vmax)
if i==0: plt.ylabel('none', rotation=0, labelpad=30)
plt.title("Example "+str(i+1))
k=k+1
# z-score
specs = ori_specs.copy()
#specs = [x[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER),:] for x in specs]
specs = [calc_zscore(s) for s in specs]
#vmin = np.min([np.min(x) for x in specs])
#vmax = np.max([np.max(x) for x in specs])
for i in range(examples.shape[0]):
spec = specs[i]
plt.subplot(5, 6, k)
plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
if i==0: plt.ylabel('zs', rotation=0, labelpad=30)
k=k+1
# cut
for i in range(examples.shape[0]):
spec = ori_specs[i]
spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER),:]
spec = calc_zscore(spec)
plt.subplot(5, 6, k)
plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
if i==0: plt.ylabel('zs-cu', rotation=0, labelpad=30)
k=k+1
# floor
for i in range(examples.shape[0]):
spec = ori_specs[i]
spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER),:]
spec = calc_zscore(spec)
spec = np.where(spec < 0, 0, spec)
plt.subplot(5, 6, k)
plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
if i==0: plt.ylabel('zs-cu-fl', rotation=0, labelpad=30)
k=k+1
# ceiling
for i in range(examples.shape[0]):
spec = ori_specs[i]
spec = spec[MEL_BINS_REMOVED_LOWER:(N_MELS-MEL_BINS_REMOVED_UPPER),:]
spec = calc_zscore(spec)
spec = np.where(spec < 0, 0, spec)
spec = np.where(spec > 3, 3, spec)
plt.subplot(5, 6, k)
plt.imshow(spec, interpolation='nearest', origin='lower', aspect='equal')
if i==0: plt.ylabel('zs-cu-fl-ce', rotation=0, labelpad=30)
k=k+1
plt.tight_layout()
outname= os.path.join(os.path.sep,FIGURES,fig_name)
print(outname)
plt.savefig(outname)
```
| github_jupyter |
There are 76,670 different agent ids in the training data.
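As a hedged sketch (assuming this count refers to the unique `agent_id` of each training scene, using the pickle layout introduced below), such a figure could be reproduced like this:
```
# Sketch: count unique target-agent ids across the training pickles
import os
import pickle

def count_unique_agent_ids(path):
    agent_ids = set()
    for entry in os.scandir(path):
        with open(entry, "rb") as f:
            scene = pickle.load(f)
        agent_ids.add(scene['agent_id'])
    return len(agent_ids)

# count_unique_agent_ids('../new_train/')
```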
```
import os
import pickle
import random
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set(rc={"figure.dpi":100, 'savefig.dpi':100})
sns.set_context('notebook')
# Keys to the pickle objects
CITY = 'city'
LANE = 'lane'
LANE_NORM = 'lane_norm'
SCENE_IDX = 'scene_idx'
AGENT_ID = 'agent_id'
P_IN = 'p_in'
V_IN = 'v_in'
P_OUT = 'p_out'
V_OUT = 'v_out'
CAR_MASK = 'car_mask'
TRACK_ID = 'track_id'
# Set the training and test paths
TEST_PATH = '../new_val_in/'
TRAIN_PATH = '../new_train/'
train_path = TRAIN_PATH
test_path = TEST_PATH
# DUMMY_TRAIN_PATH = './dummy_train/'
# DUMMY_TEST_PATH = './dummy_val/'
# train_path = DUMMY_TRAIN_PATH
# test_path = DUMMY_TEST_PATH
```
# Size of training and test data
```
train_size = len([entry for entry in os.scandir(train_path)])
test_size = len([entry for entry in os.scandir(test_path)])
print(f"Number of training samples = {train_size}")
print(f"Number of test samples = {test_size}")
```
# Scene object
```
# Open directory containing pickle files
with os.scandir(train_path) as entries:
scene = None
# Get the first pickle file
entry = next(entries)
# Open the first pickle file and store its data
with open(entry, "rb") as file:
scene = pickle.load(file)
# Look at key-value pairs
print('Scene object:')
for k, v in scene.items():
if type(v) is np.ndarray:
print(f"{k} : shape = {v.shape}")
else:
print(f"{k} : {type(v)}")
```
# Scene Analysis
```
random.seed(1)
def lane_centerline(scene):
lane = scene[LANE]
lane_norm = scene[LANE_NORM]
fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(5, 5))
ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], color='gray')
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Lane centerline')
def target_agent(scene):
lane = scene[LANE]
lane_norm = scene[LANE_NORM]
pin = scene[P_IN]
pout = scene[P_OUT]
vin = scene[V_IN]
vout = scene[V_OUT]
# Get the index of the target agent
targ = np.where(scene[TRACK_ID][:, 0, 0] == scene[AGENT_ID])[0][0]
fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(5, 5))
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Target agent motion')
ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], units='xy', color='black')
ax1.quiver(pin[targ, :, 0], pin[targ, :, 1], vin[targ, :, 0], vin[targ, :, 1], color='red', units='xy');
ax1.quiver(pout[targ, :, 0], pout[targ, :, 1], vout[targ, :, 0], vout[targ, :, 1], color='blue', units='xy');
def full_scene(scene):
lane = scene[LANE]
lane_norm = scene[LANE_NORM]
pin = scene[P_IN]
pout = scene[P_OUT]
vin = scene[V_IN]
vout = scene[V_OUT]
# Get the index of the target agent
targ = np.where(scene[TRACK_ID][:, 0, 0] == scene[AGENT_ID])[0][0]
actual_idxs = np.where(scene[CAR_MASK][:, 0] == 1) # Row indexes of actually tracked agents
pin_other = scene[P_IN][actual_idxs]
vin_other = scene[V_IN][actual_idxs]
pout_other = scene[P_OUT][actual_idxs]
vout_other = scene[V_OUT][actual_idxs]
fig, (ax1) = plt.subplots(nrows=1, ncols=1, figsize=(7, 7))
ax1.set_xlabel('x')
ax1.set_ylabel('y')
ax1.set_title('Scene ' + str(scene[SCENE_IDX]))
ax1.quiver(lane[:, 0], lane[:, 1], lane_norm[:, 0], lane_norm[:, 1], units='xy', color='gray', label='Center line(s)')
# Index of the last other agent - can either be the last element in the array or the element right before
# target when target is the last element in the array
last_other = len(actual_idxs[0]) - 1 if targ != len(actual_idxs[0]) - 1 else targ - 1
for i in range(len(actual_idxs[0])):
# Non target agent
if i != targ:
if i == last_other:
ax1.quiver(pin[i, :, 0], pin[i, :, 1], vin[i, :, 0], vin[i, :, 1],
color='orange', units='xy', label='Other agent input')
ax1.quiver(pout[i, :, 0], pout[i, :, 1], vout[i, :, 0], vout[i, :, 1],
color='blue', units='xy', label='Other agent output')
else:
ax1.quiver(pin[i, :, 0], pin[i, :, 1], vin[i, :, 0], vin[i, :, 1],
color='orange', units='xy', label='_nolegend_')
ax1.quiver(pout[i, :, 0], pout[i, :, 1], vout[i, :, 0], vout[i, :, 1],
color='blue', units='xy', label='_nolegend_')
set_other_legend = True
else:
ax1.quiver(pin[targ, :, 0], pin[targ, :, 1], vin[targ, :, 0], vin[targ, :, 1],
color='lightgreen', units='xy', label='Target agent input')
ax1.quiver(pout[targ, :, 0], pout[targ, :, 1], vout[targ, :, 0], vout[targ, :, 1],
color='darkgreen', units='xy', label='Target agent output')
ax1.legend()
# Randomly pick a scene
scene = None
rand = random.choice(os.listdir(train_path))
# Build out full path name
rand = train_path + rand
with open(rand, "rb") as file:
scene = pickle.load(file)
scene[SCENE_IDX]
# lane_centerline(scene)
# target_agent(scene)
full_scene(scene)
```
| github_jupyter |
# Basic Tensor operations and GradientTape.
In this graded assignment, you will perform different tensor operations as well as use [GradientTape](https://www.tensorflow.org/api_docs/python/tf/GradientTape). These are important building blocks for the next parts of this course so it's important to master the basics. Let's begin!
```
import tensorflow as tf
import numpy as np
```
## Exercise 1 - [tf.constant](https://www.tensorflow.org/api_docs/python/tf/constant)
Creates a constant tensor from a tensor-like object.
```
# Convert NumPy array to Tensor using `tf.constant`
def tf_constant(array):
"""
Args:
array (numpy.ndarray): tensor-like array.
Returns:
tensorflow.python.framework.ops.EagerTensor: tensor.
"""
### START CODE HERE ###
tf_constant_array = tf.constant(array)
### END CODE HERE ###
return tf_constant_array
tmp_array = np.arange(1,10)
x = tf_constant(tmp_array)
x
# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([1, 2, 3, 4, 5, 6, 7, 8, 9])>
```
Note that for future docstrings, the type `EagerTensor` will be used as a shortened version of `tensorflow.python.framework.ops.EagerTensor`.
## Exercise 2 - [tf.square](https://www.tensorflow.org/api_docs/python/tf/math/square)
Computes the square of a tensor element-wise.
```
# Square the input tensor
def tf_square(array):
"""
Args:
array (numpy.ndarray): tensor-like array.
Returns:
EagerTensor: tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_squared_array = tf.square(array)
### END CODE HERE ###
return tf_squared_array
tmp_array = tf.constant(np.arange(1, 10))
x = tf_square(tmp_array)
x
# Expected output:
# <tf.Tensor: shape=(9,), dtype=int64, numpy=array([ 1, 4, 9, 16, 25, 36, 49, 64, 81])>
```
## Exercise 3 - [tf.reshape](https://www.tensorflow.org/api_docs/python/tf/reshape)
Reshapes a tensor.
```
# Reshape tensor into the given shape parameter
def tf_reshape(array, shape):
"""
Args:
array (EagerTensor): tensor to reshape.
shape (tuple): desired shape.
Returns:
EagerTensor: reshaped tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_reshaped_array = tf.reshape(array, shape = shape)
### END CODE HERE ###
return tf_reshaped_array
# Check your function
tmp_array = np.array([1,2,3,4,5,6,7,8,9])
# Check that your function reshapes a vector into a matrix
x = tf_reshape(tmp_array, (3, 3))
x
# Expected output:
# <tf.Tensor: shape=(3, 3), dtype=int64, numpy=
# array([[1, 2, 3],
#        [4, 5, 6],
#        [7, 8, 9]])>
```
## Exercise 4 - [tf.cast](https://www.tensorflow.org/api_docs/python/tf/cast)
Casts a tensor to a new type.
```
# Cast tensor into the given dtype parameter
def tf_cast(array, dtype):
"""
Args:
array (EagerTensor): tensor to be casted.
dtype (tensorflow.python.framework.dtypes.DType): desired new type. (Should be a TF dtype!)
Returns:
EagerTensor: casted tensor.
"""
# make sure it's a tensor
array = tf.constant(array)
### START CODE HERE ###
tf_cast_array = tf.cast(array, dtype = dtype)
### END CODE HERE ###
return tf_cast_array
# Check your function
tmp_array = [1,2,3,4]
x = tf_cast(tmp_array, tf.float32)
x
# Expected output:
# <tf.Tensor: shape=(4,), dtype=float32, numpy=array([1., 2., 3., 4.], dtype=float32)>
```
## Exercise 5 - [tf.multiply](https://www.tensorflow.org/api_docs/python/tf/multiply)
Returns an element-wise x * y.
```
# Multiply tensor1 and tensor2
def tf_multiply(tensor1, tensor2):
"""
Args:
tensor1 (EagerTensor): a tensor.
tensor2 (EagerTensor): another tensor.
Returns:
EagerTensor: resulting tensor.
"""
# make sure these are tensors
tensor1 = tf.constant(tensor1)
tensor2 = tf.constant(tensor2)
### START CODE HERE ###
product = tf.multiply(tensor1, tensor2)
### END CODE HERE ###
return product
# Check your function
tmp_1 = tf.constant(np.array([[1,2],[3,4]]))
tmp_2 = tf.constant(np.array(2))
result = tf_multiply(tmp_1, tmp_2)
result
# Expected output:
# <tf.Tensor: shape=(2, 2), dtype=int64, numpy=
# array([[2, 4],
# [6, 8]])>
```
## Exercise 6 - [tf.add](https://www.tensorflow.org/api_docs/python/tf/add)
Returns x + y element-wise.
```
# Add tensor1 and tensor2
def tf_add(tensor1, tensor2):
"""
Args:
tensor1 (EagerTensor): a tensor.
tensor2 (EagerTensor): another tensor.
Returns:
EagerTensor: resulting tensor.
"""
# make sure these are tensors
tensor1 = tf.constant(tensor1)
tensor2 = tf.constant(tensor2)
### START CODE HERE ###
total = tf.add(tensor1, tensor2)
### END CODE HERE ###
return total
# Check your function
tmp_1 = tf.constant(np.array([1, 2, 3]))
tmp_2 = tf.constant(np.array([4, 5, 6]))
tf_add(tmp_1, tmp_2)
# Expected output:
# <tf.Tensor: shape=(3,), dtype=int64, numpy=array([5, 7, 9])>
```
## Exercise 7 - Gradient Tape
Implement the function `tf_gradient_tape` by replacing the instances of `None` in the code below. The instructions are given in the code comments.
You can review the [docs](https://www.tensorflow.org/api_docs/python/tf/GradientTape) or revisit the lectures to complete this task.
```
def tf_gradient_tape(x):
"""
Args:
x (EagerTensor): a tensor.
Returns:
EagerTensor: Derivative of z with respect to the input tensor x.
"""
with tf.GradientTape() as t:
### START CODE HERE ###
# Record the actions performed on tensor x with `watch`
t.watch(x)
# Define a polynomial of form 3x^3 - 2x^2 + x
y = (3 * (x ** 3)) - (2 * (x ** 2)) + x
# Obtain the sum of the elements in variable y
z = tf.reduce_sum(y)
# Get the derivative of z with respect to the original input tensor x
dz_dx = t.gradient(z, x)
### END CODE HERE
return dz_dx
# Check your function
tmp_x = tf.constant(2.0)
dz_dx = tf_gradient_tape(tmp_x)
result = dz_dx.numpy()
result
# Expected output:
# 29.0
```
**Congratulations on finishing this week's assignment!**
**Keep it up!**
| github_jupyter |
# Exploratory Data Analysis
In this notebook, I have illuminated some of the strategies that one can use to explore the data and gain some insights about it.
We will start by finding metadata about the data, then determine what techniques to use, and finally extract some important insights about the data. This is based on IBM's Data Analysis with Python course on Coursera.
## The Problem
The problem is to find the variables that impact the car price. For this problem, we will use a real-world dataset that details information about cars.
The dataset used is an open-source dataset made available by Jeffrey C. Schlimmer; the copy used in this notebook is hosted on the IBM Cloud. The dataset provides details of a set of cars, including properties like make, horsepower, price, wheel type and so on.
## Loading data and finding the metadata
Import libraries
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from scipy import stats
%matplotlib inline
```
Load the data as pandas dataframe
```
path='https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DA0101EN-SkillsNetwork/labs/Data%20files/automobileEDA.csv'
df = pd.read_csv(path)
df.head()
```
### Metadata: The columns's types
Finding column's types is an important step. It serves two purposes:
1. See if we need to convert some data. For example, price may be stored as strings instead of numbers. This is very important, as it could throw off everything that we do afterwards.
2. Find out what type of analysis we need to do with what column. After fixing the problems given above, the type of the object is often a great indicator of whether the data is categorical or numerical. This is important as it would determine what kind of exploratory analysis we can and want to do.
To find out the type, we can simply use `.dtypes` property of the dataframe. Here's an example using the dataframe we loaded above.
```
df.dtypes
```
From the results above, we can see that we can roughly divide the types into two categories: numeric (int64 and float64) and object. Although the object type can contain lots of things, it's often used to store string variables. A quick glance at the table tells us that there are no glaring errors in the object-typed columns.
Now we divide them into two categories: numerical variables and categorical variables. Numerical variables, as the name states, hold numerical data. Categorical variables hold strings that describe a certain property of the data (such as Audi as the make).
Make a special note that our target variable, price, is numerical. So the relationships we would be exploring would be between numerical-and-numerical data and numerical-and-categorical data.
## Relationship between Numerical Data
First we will explore the relationship between two numerical data and see if we can learn some insights out of it.
In the beginning, it's helpful to get the correlation between the variables. For this, we can use the `corr()` method to find out the correlation between all the variables.
Do note that the method computes the Pearson correlation by default. Natively, pandas also supports the Spearman and Kendall Tau correlations. You can also pass in a custom callable if you want. Check out the docs for more info.
Here's how to do it with the dataframe that we have:
```
df.corr()
```
Note that the diagonal elements are always one; because correlation with itself is always one.
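As mentioned above, other correlation methods are available through the same API; a quick sketch:
```
# Alternative correlation methods supported by pandas
df.corr(method='spearman')   # rank-based Spearman correlation
df.corr(method='kendall')    # Kendall's tau
```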
Now, it seems somewhat daunting, and frankly unnecessary, to have this big of a table with correlations between things we don't care about (say, bore and stroke). If we want to find out the correlation with just price, using the `corrwith()` method is helpful.
Here's how to do it:
```
corr = df.corrwith(df['price'])
# Prettify
pd.DataFrame(data=corr.values, index=corr.index, columns=['Correlation'])
```
From the table above, we have some idea about what can we expect the relationship should be like.
As a refresher, Pearson correlation values range in [-1, 1], with -1 and 1 implying a perfect linear relationship and 0 implying none. A positive value implies a positive relationship (one variable increases as the other increases) and a negative value implies a negative relationship (one variable decreases as the other increases).
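For reference, for two variables $x$ and $y$ with means $\bar{x}$ and $\bar{y}$, the Pearson coefficient is
$$r = \frac{\sum_i (x_i - \bar{x})(y_i - \bar{y})}{\sqrt{\sum_i (x_i - \bar{x})^2}\,\sqrt{\sum_i (y_i - \bar{y})^2}}$$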
The next step is to have a more visual outlook on the relationship.
### Visualizing Relationships
Continuous numerical variables are variables that may contain any value within some range. In pandas dtype, continuous numerical variables can have the type "int64" or "float64".
A great way to visualize these variables is by using scatterplots.
To take it further, it's better to use a scatter plot with a regression line. This should also be able to provide us with some preliminary ways to test our hypothesis of the relationship between them.
In this notebook, we would be using the `regplot()` function in the `seaborn` package.
Below are some examples.
<h4>Positive linear relationship</h4>
Let's plot "engine-size" vs "price" since the correlation between them seems strong.
```
plt.figure(figsize=(5,5))
sns.regplot(x="engine-size", y="price", data=df);
```
As the engine-size goes up, the price goes up. This indicates a decent positive direct correlation between these two variables. Thus, we can say that the engine size is a good predictor of price since the regression line is almost a perfect diagonal line.
We can also check this against the Pearson correlation we got above. It's 0.87, which makes sense.
Let's also try highway mpg too since the correlation between them is -0.7
```
sns.regplot(x="highway-mpg", y="price", data=df);
```
The graph shows a decent negative relationship, so highway-mpg could be a potential indicator. However, it seems that the relationship isn't exactly linear, given the curve of the points.
Let's try a higher order regression line.
```
sns.regplot(x="highway-mpg", y="price", data=df, order=2);
```
There. It seems much better.
### Weak Linear Relationship
Not all variables have to be correlated. Let's check out the graph of "Peak-rpm" as a predictor variable for "price".
```
sns.regplot(x="peak-rpm", y="price", data=df);
```
From the graph, it's clear that peak rpm is a bad indicator of price. There seems to be no relationship between them; the points look almost random.
A quick check at the correlation value confirms this. The value is -0.1. It's very close to zero, implying no relationship.
Although there are cases in which a low value can be misleading, that usually happens only for non-linear relationships in which the value goes down and then back up. The graph confirms there is no such pattern here.
## Relationship between Numerical and Categorical data
Categorical variables, as their name implies, divide the data into certain categories. They essentially describe a 'characteristic' of the data unit, and are often selected from a small group of categories.
Although they commonly have the "object" type, it's possible to have them as "int64" too (for example 'Level of happiness').
### Visualizing with Boxplots
Boxplots are a great way to visualize such relationships. Boxplots essentially show the spread of the data. You can use the `boxplot()` function in the seaborn package. Alternatively, you can use boxen or violin plots too.
Here's an example, plotting the relationship between "body-style" and "price":
```
sns.boxplot(x="body-style", y="price", data=df);
```
We can infer that there is likely no significant relationship, as there is a decent overlap between the distributions.
Let's examine "engine-location" and "price".
```
sns.boxplot(x="engine-location", y="price", data=df);
```
Although there are a lot of outliers for the front, the distribution of price between these two engine-location categories is distinct enough to take engine-location as a potential good predictor of price.
Let's examine "drive-wheels" and "price".
```
sns.boxplot(x="drive-wheels", y="price", data=df);
```
<p>Here we see that the distribution of price between the different drive-wheels categories differs; as such drive-wheels could potentially be a predictor of price.</p>
### Statistical method to checking for a significant realtionship - ANOVA
Although visualisation is helpful, it does not give us a concrete and certain answer in this case (and often in others). So it follows that we would want a metric to evaluate the relationship by. For correlation between a categorical and a continuous variable, there are various tests; the ANOVA family of tests is a common one to use.
The Analysis of Variance (ANOVA) is a statistical method used to test whether there are significant differences between the means of two or more groups.
Do note that ANOVA is an _omnibus_ test: it can't tell you which groups differ from each other, only that there are at least two groups with a significant difference between their means.
In python, we can calculate the ANOVA statistic fairly easily using the `scipy.stats` module. The function `f_oneway()` calculates and returns:
__F-test score__: ANOVA assumes the means of all groups are the same, calculates how much the actual means deviate from that assumption, and reports it as the F-test score. A larger score means there is a larger difference between the means, although the degree of 'largeness' differs from dataset to dataset. You can use the F-table to find the critical F-value, given the significance level and the degrees of freedom of the numerator and denominator, and compare it with the calculated F-test score.
__P-value__: P-value tells how statistically significant is our calculated score value.
If the variables are strongly associated, we expect ANOVA to return a sizeable F-test score and a small p-value.
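For reference, the one-way F statistic compares between-group and within-group variability: with $k$ groups, $N$ observations in total, group sizes $n_g$, group means $\bar{x}_g$ and grand mean $\bar{x}$,
$$F = \frac{\sum_g n_g (\bar{x}_g - \bar{x})^2 / (k-1)}{\sum_g \sum_i (x_{gi} - \bar{x}_g)^2 / (N-k)}$$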
#### Drive Wheels
Since ANOVA analyzes the difference between different groups of the same variable, the `groupby()` function will come in handy. With it, we can easily and concisely separate the dataset into groups of drive-wheels. Essentially, the function allows us to split the dataset into groups and perform calculations on those groups moving forward. Check out Grouping below for more explanation.
To see if different types of 'drive-wheels' impact 'price', we group the data.
```
grouped_anova = df[['drive-wheels', 'price']].groupby(['drive-wheels'])
grouped_anova.head(2)
```
We can obtain the values of a particular group using the `get_group()` method.
```
grouped_anova.get_group('4wd')['price']
```
Finally, we use the function `f_oneway()` to obtain the F-test score and P-value.
```
# ANOVA
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price'], grouped_anova.get_group('4wd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val)
```
From the result, we can see that we have a large F-test score and a very small p-value. Still, we need to check whether all three tested groups differ from each other, so let's test them pairwise.
#### Separately: fwd and rwd
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('fwd')['price'], grouped_anova.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val )
```
It seems the result is significant and these two groups are clearly distinguishable. Let's examine the other pairs.
#### 4wd and rwd
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('rwd')['price'])
print( "ANOVA results: F=", f_val, ", P =", p_val)
```
#### 4wd and fwd
```
f_val, p_val = stats.f_oneway(grouped_anova.get_group('4wd')['price'], grouped_anova.get_group('fwd')['price'])
print("ANOVA results: F=", f_val, ", P =", p_val)
```
## Relationship between Categorical Data: Corrected Cramer's V
A good way to test the relationship between two categorical variables is the corrected Cramer's V.
**Note:** A p-value close to zero means that our variables are very unlikely to be completely unassociated in some population. However, this does not mean the variables are strongly associated; a weak association in a large sample size may also result in p = 0.000.
**General Rule of Thumb:**
* V ∈ [0.1,0.3]: weak association
* V ∈ [0.4,0.5]: medium association
* V > 0.5: strong association
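For reference, the (uncorrected) statistic is computed from the chi-squared value of the contingency table with $r$ rows, $k$ columns and $n$ observations:
$$V = \sqrt{\frac{\chi^2 / n}{\min(k-1,\ r-1)}}$$
The code below additionally applies the Bergsma-Wicher bias correction to $\phi^2 = \chi^2 / n$ before taking the square root.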
Here's how to do it in python:
```python
import scipy.stats as ss
import pandas as pd
import numpy as np
def cramers_corrected_stat(x, y):
""" calculate Cramers V statistic for categorial-categorial association.
uses correction from Bergsma and Wicher,
Journal of the Korean Statistical Society 42 (2013): 323-328
"""
result = -1
if len(x.value_counts()) == 1:
print("First variable is constant")
elif len(y.value_counts()) == 1:
print("Second variable is constant")
else:
conf_matrix = pd.crosstab(x, y)
if conf_matrix.shape[0] == 2:
correct = False
else:
correct = True
chi2, p = ss.chi2_contingency(conf_matrix, correction=correct)[0:2]
n = sum(conf_matrix.sum())
phi2 = chi2/n
r, k = conf_matrix.shape
phi2corr = max(0, phi2 - ((k-1)*(r-1))/(n-1))
rcorr = r - ((r-1)**2)/(n-1)
kcorr = k - ((k-1)**2)/(n-1)
result = np.sqrt(phi2corr / min((kcorr-1), (rcorr-1)))
return round(result, 6), round(p, 6)
```
## Descriptive Statistical Analysis
Although the insights gained above are significant, it's clear we need more work.
Since we are exploring the data, performing some common and useful descriptive statistical analyses would be nice. However, there are a lot of them, and it would require a lot of work to compute them from scratch. Fortunately, the `pandas` library has a neat method that computes all of them for us.
The `describe()` method, when invoked on a dataframe automatically computes basic statistics for all continuous variables. Do note that any NaN values are automatically skipped in these statistics. By default, it will show stats for numerical data.
Here's what it will show:
* Count of that variable
* Mean
* Standard Deviation (std)
* Minimum Value
* IQR (Interquartile Range: 25%, 50% and 75%)
* Maximum Value
If you want, you can change the percentiles too. Check out the docs for that.
Here's how to do it in our dataframe:
```
df.describe()
```
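If needed, the reported percentiles can be customized, for example:
```
# Request the 10th, 50th and 90th percentiles instead of the default quartiles
df.describe(percentiles=[0.1, 0.5, 0.9])
```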
To get the information about categorical variables, we need to specifically tell pandas to include them.
For categorical variables, it shows:
* Count
* Unique values
* The most common value or 'top'
* Frequency of the 'top'
```
df.describe(include=['object'])
```
### Value Counts
Sometimes, we need to understand the distribution of the categorical data. This could mean understanding how many units of each characteristic/variable we have. `value_counts()` is a method in pandas that can help with it. If we use it with a series, it will give us the unique values and how many of them exist.
_Caution:_ Using it on a DataFrame counts unique rows by the combination of all columns (like in SQL). This may or may not be what you want. For example, using it with drive-wheels and engine-location would give you the number of rows for each unique pair of values.
Here's an example of doing it with the drive-wheels column.
```
df['drive-wheels'].value_counts().to_frame()
```
`.to_frame()` method is added to make it into a dataframe, hence making it look better.
You can play around and rename the column and index name if you want.
We can repeat the above process for the variable 'engine-location'.
```
df['engine-location'].value_counts().to_frame()
```
Examining the value counts shows that engine location would not be a good predictor variable for the price. This is because we only have three cars with a rear engine and 198 with an engine in the front, so this result is skewed. Thus, we are not able to draw any conclusions about the engine location.
## Grouping
Grouping is a useful technique to explore the data. With grouping, we can split data and apply various transforms. For example, we can find out the mean of different body styles. This would help us to have more insight into whether there's a relationsip between our target variable and the variable we are using grouping on.
Although often used on categorical data, grouping can also be used with numerical data by separating the values into categories. For example, we might separate cars by price into affordable and luxury groups (a small sketch of this follows).
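Here is a small sketch of that idea; the bin edge of 15,000 is an illustrative assumption, not a value taken from the analysis:
```
# Bin prices into two coarse groups and summarize each group
price_groups = pd.cut(df['price'], bins=[0, 15000, df['price'].max()],
                      labels=['affordable', 'luxury'])
df.groupby(price_groups)['price'].agg(['count', 'mean'])
```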
In pandas, we can use the `groupby()` method.
Let's try it with the 'drive-wheels' variable. First we will find out how many unique values there are. We do that with the `unique()` method.
```
df['drive-wheels'].unique()
```
If we want to know, on average, which type of drive wheel is most valuable, we can group "drive-wheels" and then average them.
```
df[['drive-wheels','body-style','price']].groupby(['drive-wheels']).mean()
```
From our data, it seems rear-wheel drive vehicles are, on average, the most expensive, while 4-wheel and front-wheel are approximately the same in price.
It's also possible to group with multiple variables. For example, let's group by both 'drive-wheels' and 'body-style'. This groups the dataframe by the unique combinations 'drive-wheels' and 'body-style'.
Let's store it in the variable `grouped_by_wheels_and_body`.
```
grouped_by_wheels_and_body = df[['drive-wheels','body-style','price']].groupby(['drive-wheels','body-style']).mean()
grouped_by_wheels_and_body
```
Although incredibly useful, it's a little hard to read. It's better to convert it to a pivot table.
A pivot table is like an Excel spreadsheet, with one variable along the column and another along the row. There are various ways to do so. A way to do that is to use the method `pivot()`. However, with groups like the one above (multi-index), one can simply call the `unstack()` method.
```
grouped_by_wheels_and_body = grouped_by_wheels_and_body.unstack()
grouped_by_wheels_and_body
```
Often, we won't have data for some of the pivot cells. These are commonly filled with the value 0, but any other value could potentially be used as well, such as the mean or some other flag value.
```
grouped_by_wheels_and_body.fillna(0)
```
Let's do the same for body-style only
```
df[['price', 'body-style']].groupby('body-style').mean()
```
### Visualizing Groups
Heatmaps are a great way to visualize groups. They can show relationships clearly in this case.
Do note that you need to be careful with the color scheme. Choosing an appropriate color scheme is not only important for the 'story' you want the data to tell, it also affects how the data is perceived.
[This resource](https://matplotlib.org/tutorials/colors/colormaps.html) gives a great idea of what to choose as a color scheme and when each is appropriate. It also has samples of the schemes for a quick preview, along with guidance on when to use them.
Here's an example of using it with the pivot table we created with the `seaborn` package.
```
sns.heatmap(grouped_by_wheels_and_body, cmap="Blues");
```
This heatmap encodes the target variable (price) as colour, with 'drive-wheels' and 'body-style' on the vertical and horizontal axes respectively. This allows us to visualize how the price is related to 'drive-wheels' and 'body-style'.
## Correlation and Causation
Correlation and causation are terms that are often used and confused with each other--or worse, considered to imply one another. Here's a quick overview of them:
__Correlation__: The degree of association (or resemblance) of variables with each other.
__Causation__: A relationship of cause and effect between variables.
It is important to know the difference between these two.
Note that correlation does __not__ imply causation.
Determining correlation is much simpler. We can almost always use methods such as Pearson Correlation, ANOVA method, and graphs. Determining causation may require independent experimentation.
### Pearson Correlation
Described earlier, Pearson correlation is a great way to measure the linear dependence between two variables. It's also the default method of `corr()`.
```
df.corr()
```
### Cramer's V
Cramer's V is a great method to measure the relationship between two categorical variables. Read the Cramer's V section above for more details.
**General Rule of Thumb:**
* V ∈ [0.1,0.3]: weak association
* V ∈ [0.4,0.5]: medium association
* V > 0.5: strong association
### ANOVA Method
As discussed previously, the ANOVA method is a great way to determine whether there's a significant relationship between a categorical and a continuous variable. Check out the ANOVA section above for more details.
Now, just knowing the correlation statistics is not enough. We also need to know whether the relationship is statistically significant or not. We can use p-value for that.
### P-value
In very simple terms, the p-value is the probability that a result like the one we observe could arise purely by random chance. For example, with a significance threshold of 0.05, we accept roughly a 5% chance of declaring a result significant when it is actually due to chance.
It's recommended to define a tolerance level (significance threshold) for the p-value beforehand. Here are some common interpretations of the p-value:
* The p-value is $<$ 0.001: A strong evidence that the correlation is significant.
* The p-value is $<$ 0.05: A moderate evidence that the correlation is significant.
* The p-value is $<$ 0.1: A weak evidence that the correlation is significant.
* The p-value is $>$ 0.1: No evidence that the correlation is significant.
We can obtain this information using `stats` module in the `scipy` library.
Let's calculate it for wheel-base vs price
```
pearson_coef, p_value = stats.pearsonr(df['wheel-base'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P =", p_value)
```
Since the p-value is $<$ 0.001, the correlation between wheel-base and price is statistically significant, although the linear relationship isn't extremely strong (~0.585)
Let's try one more example: horsepower vs price.
```
pearson_coef, p_value = stats.pearsonr(df['horsepower'], df['price'])
print("The Pearson Correlation Coefficient is", pearson_coef, " with a P-value of P = ", p_value)
```
Since the p-value is $<$ 0.001, the correlation between horsepower and price is statistically significant, and the linear relationship is quite strong (~0.809, close to 1).
### Conclusion: Important Variables
We now have a better idea of what our data looks like and which variables are important to take into account when predicting the car price. After some more analysis, we find that the important variables are:
Continuous numerical variables:
* Length
* Width
* Curb-weight
* Engine-size
* Horsepower
* City-mpg
* Highway-mpg
* Wheel-base
* Bore
Categorical variables:
* Drive-wheels
If needed, we can now move on to building machine learning models, as we now know what to feed them.
P.S. [This medium article](https://medium.com/@outside2SDs/an-overview-of-correlation-measures-between-categorical-and-continuous-variables-4c7f85610365#:~:text=A%20simple%20approach%20could%20be,variance%20of%20the%20continuous%20variable.&text=If%20the%20variables%20have%20no,similar%20to%20the%20original%20variance) is a great resource that discusses various ways of measuring correlation between categorical and continuous variables.
## Author
By Abhinav Garg
| github_jupyter |
#### Bogumiła Walkowiak [email protected]
#### Joachim Mąkowski [email protected]
# Intelligent Systems: Reasoning and Recognition
## Recognizing Digits using Neural Networks
## 1. Introduction
<font size=4>The MNIST (Modified National Institute of Standards and Technology) dataset is a large collection of handwritten digits composed of 60,000 training images and 10,000 test images. The black and white images from NIST were normalized to fit into a 28x28 pixel bounding box and anti-aliased, which introduced gray-scale levels. Our task was to design and evaluate neural network architectures that can recognize hand-drawn digits using the grayscale images in this data set.
## 2. Data preparation
<font size=4>First of all, we downloaded the MNIST data. We decided to combine the train and test sets provided by the MNIST dataset and then split the data into a training set (90%) and a test set (10%). Later in the project, we'll also carve out a validation set, so the final split of the data will look like this: training data 80%, validation data 10% and test data 10%.
```
import numpy as np
import tensorflow.compat.v1.keras.backend as K
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers
import matplotlib.pyplot as plt
from sklearn.model_selection import train_test_split
from sklearn.metrics import f1_score,roc_curve,auc
from sklearn.metrics import confusion_matrix, plot_confusion_matrix
#physical_devices = tf.config.list_physical_devices('GPU')
#tf.config.experimental.set_memory_growth(physical_devices[0], True)
(x_train, y_train), (x_test, y_test) = keras.datasets.mnist.load_data() #loading the data set
X = np.concatenate((x_train, x_test))
y = np.concatenate([y_train, y_test])
train_ratio = 0.9
test_ratio = 0.1
x_train, x_test, y_train, y_test = train_test_split(X, y, test_size = test_ratio)
plt.imshow(x_train[0], cmap='gray')
x_train = x_train.astype("float32") / 255
x_test = x_test.astype("float32") / 255
# images have shape (28, 28, 1)
x_train = np.expand_dims(x_train, -1)
x_test = np.expand_dims(x_test, -1)
y_train = keras.utils.to_categorical(y_train, 10)
y_test = keras.utils.to_categorical(y_test, 10)
```
## 3. Creating neural networks
<font size=4>We decided to create a helper function so that we can write less code. The function trains the model passed as an argument, prints the model's loss, accuracy, precision, recall, F1 score and AUC for each digit, and plots the training history.
```
def predict_model(model, callbacks = [],batch_size=128, epochs = 4,lr=0.001):
adam = keras.optimizers.Adam(lr=lr)
model.compile(loss="categorical_crossentropy", optimizer=adam, metrics=["accuracy", "Precision","Recall"])
history = model.fit(x_train, y_train, batch_size=batch_size, epochs=epochs, validation_split=0.11, callbacks=callbacks)
score = model.evaluate(x_test, y_test, verbose=0)
y_pred = model.predict(x_test)
print("Test loss:", score[0])
print("Test accuracy:", score[1])
print("Test precision:", score[2])
print("Test recall:", score[3])
y_pred = np.argmax(y_pred,axis=1)
y_test1 = np.argmax(y_test,axis=1)
print("Test f1 score:", f1_score(y_test1,y_pred,average='micro'))
for i in range(10):
temp_pred = [1 if x==i else 0 for x in y_pred]
temp_test = [1 if x==i else 0 for x in y_test1]
fpr, tpr, thresholds =roc_curve(temp_test,temp_pred)
print("Test AUC for digit:",i, auc(fpr, tpr))
# summarize history for accuracy
plt.plot(history.history['accuracy'])
plt.plot(history.history['val_accuracy'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
# summarize history for loss
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'val'], loc='upper left')
plt.show()
```
<font size=4>We added an instance of the EarlyStopping class, which provides a mechanism for stopping training before all epochs are done. When 3 consecutive epochs fail to achieve a better result (in our case a higher validation accuracy), training is stopped and the best weights are restored.
```
# simple early stopping
es = keras.callbacks.EarlyStopping(monitor='val_accuracy', mode='max', verbose=1, patience = 3, restore_best_weights=True)
```
### Basic Fully Connected Multi-layer Network
<font size=4>The first network we created is a basic fully connected multi-layer network:
```
model_fc = keras.Sequential([
layers.Dense(32, activation="relu",input_shape=(28,28,1)),
layers.Dense(64, activation="relu"),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
```
<font size=4>This basic model achieves about 97.5% accuracy on the test set. It is made of 2 hidden layers with a reasonable number of units. Training this model is quite fast (on my laptop it took about 5s per epoch, using the GPU).
As we can see in the plots, our model started to overfit: validation accuracy and loss stayed at the same level, while train accuracy kept growing and train loss kept decreasing.
<font size=4>Next, we wanted to demonstrate the effect of changing various parameters of the network.
### Different number of layers
```
model_fc_small = keras.Sequential([
layers.Dense(32, activation="relu",input_shape=(28,28,1)),
layers.Flatten(),
layers.Dense(64, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc_small.summary()
predict_model(model_fc_small, [es], epochs=100)
model_fc_large = keras.Sequential([
layers.Dense(32, activation="relu",input_shape=(28,28,1)),
layers.Dense(64, activation="relu"),
layers.Flatten(),
layers.Dense(4096, activation="relu"),
layers.Dense(1024, activation="relu"),
layers.Dense(64, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc_large.summary()
predict_model(model_fc_large, [es], epochs=100)
```
<font size=4>Firstly, we tried different numbers of hidden layers. With 1 hidden layer the model achieved around 96.5% on the test set. The model is underfitted, because this number of layers is not enough to capture the complexity of our data.
The model with 4 hidden layers achieved 98.1% accuracy, but the training time was pretty long (34s per epoch). That is because this model had to find weights for over 200,000,000 parameters (compared to 1,600,000 parameters for the model with 1 hidden layer). We can assume that after the second epoch our model is overfitted, because the differences between validation and train loss and accuracy are high.
### Different number of units per layer
```
model_fc = keras.Sequential([
layers.Dense(10, activation="relu",input_shape=(28,28,1)),
layers.Dense(20, activation="relu"),
layers.Flatten(),
layers.Dense(40, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
```
<font size=4>Here we trained a model with a small number of units in each layer. The model didn't achieve its best possible performance. We can see that train accuracy is much lower than validation accuracy. This is caused by an insufficient number of units, so the model effectively trades accuracy on the whole data for higher accuracy on the validation data.
```
model_fc = keras.Sequential([
layers.Dense(100, activation="relu",input_shape=(28,28,1)),
layers.Dense(200, activation="relu"),
layers.Flatten(),
layers.Dense(400, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc.summary()
predict_model(model_fc, [es], epochs=100)
```
<font size=4>In this model we see that it starts overfitting after the third epoch. This is caused by too large a number of units.
### Different learning rate
```
model_fc_01 = keras.Sequential([
layers.Dense(32, activation="relu",input_shape=(28,28,1)),
layers.Dense(64, activation="relu"),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc_01.summary()
predict_model(model_fc_01,[es], epochs=100, lr=0.05)
```
<font size=4>We took our first model and decided to train it with different learning rates. With a learning rate of 0.05 we got very bad results (accuracy around 92%). The scores are so bad because the optimizer could not find good weights: each update step was too big a "jump".
```
model_fc_00001 = keras.Sequential([
layers.Dense(32, activation="relu",input_shape=(28,28,1)),
layers.Dense(64, activation="relu"),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dropout(.25),
layers.Dense(10, activation="softmax")
])
model_fc_00001.summary()
predict_model(model_fc_00001,[es], epochs=100, lr = 0.00001)
```
<font size=4>The model with a learning rate of 0.00001 performed pretty well, but it needed 54 epochs to achieve 97.1% accuracy (compared to 6 epochs using the standard learning rate of 0.001). This is because the optimizer "jumped" too small a distance while searching for the best results, so it needed many iterations to find good weights.
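A common middle ground, shown here only as a sketch and not as one of the experiments above, is to keep the default learning rate but shrink it automatically when validation accuracy stalls, e.g. with Keras' `ReduceLROnPlateau` callback:
```
# Reduce the learning rate by a factor of 5 after 2 epochs without improvement
reduce_lr = keras.callbacks.ReduceLROnPlateau(monitor='val_accuracy', factor=0.2,
                                              patience=2, min_lr=1e-5, verbose=1)
# predict_model(model_fc_00001, [es, reduce_lr], epochs=100)
```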
### Basic Multi-layer CNN
```
model_cnn = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (2,2),
layers.Conv2D(64, (3,3), activation="relu"),
layers.MaxPooling2D (2,2),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn.summary()
predict_model(model_cnn, [es], epochs=100)
```
<font size=4>Our first convolutional model with 2 convolutional layers performed even better than the fully connected neural networks. This model is not overfitted, because the train and validation loss and accuracy stay close to each other. It has only 34,826 parameters to train, so training is pretty fast. On the test set the model achieves 98.7% accuracy, which is a great result.
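As a quick sanity check, the 34,826 parameters can be reproduced by hand (the 'valid' 3x3 convolutions and 2x2 poolings shrink the 28x28 input to a 5x5x64 feature map before the output layer):
* first Conv2D layer: $(3 \cdot 3 \cdot 1 + 1) \cdot 32 = 320$ parameters
* second Conv2D layer: $(3 \cdot 3 \cdot 32 + 1) \cdot 64 = 18496$ parameters
* output Dense layer on the flattened $5 \cdot 5 \cdot 64 = 1600$ features: $1600 \cdot 10 + 10 = 16010$ parameters
* total: $320 + 18496 + 16010 = 34826$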
### Different number of convolutional layers
```
model_cnn_short = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (2,2),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_short.summary()
predict_model(model_cnn_short, [es], epochs=100)
```
<font size=4>The next model has only 1 convolutional layer but more parameters (54,410), because there are fewer pooling layers. The results are satisfying, but not as good as the previous model's (test accuracy of 98.2%).
```
model_cnn_long = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D ((2,2),1),
layers.Conv2D(64, (3,3), activation="relu"),
layers.MaxPooling2D ((2,2),1),
layers.Conv2D(128, (3,3), activation="relu"),
layers.MaxPooling2D ((2,2),1),
layers.Conv2D(512, (3,3), activation="relu"),
layers.MaxPooling2D ((2,2),1),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_long.summary()
predict_model(model_cnn_long, [es], epochs=100)
```
<font size=4>Next we created a neural network with 4 convolutional layers and about 17 million parameters. The model was not overfitted: it had an accuracy around 99.2% on the test, train and validation sets. The time needed to train this model was much higher (19s per epoch compared to 3s per epoch for the basic CNN we implemented). This is the best model that we have created for this dataset.
### Different number of filters per layer
```
model_cnn_min = keras.Sequential([
layers.Conv2D(4, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (2,2),
layers.Conv2D(16, (3,3), activation="relu"),
layers.MaxPooling2D (2,2),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_min.summary()
predict_model(model_cnn_min, [es], epochs=100)
```
<font size=4>Next we decided to check how the number of filters impacts the performance of the model. Reducing the number of filters in the convolutional layers made our model worse than the basic model. Accuracy fell to 97.8%, because this model was too simple to capture the complexity of our data. This model is underfitted.
```
model_cnn_max = keras.Sequential([
layers.Conv2D(128, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (2,2),
layers.Conv2D(512, (3,3), activation="relu"),
layers.MaxPooling2D (2,2),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_max.summary()
predict_model(model_cnn_max, [es], epochs=100)
```
<font size=4>Next we increased the number of filters. This raised the number of parameters to over 700 thousand, but the model did not perform better than the basic model. Test accuracy was 99%, which is slightly less than the basic model's accuracy. It means that we should not use such a high number of filters because we do not need them. This model also seems to be overfitted.
### Different size and type of pooling layers
```
model_cnn_pool5 = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (5,3),
layers.Conv2D(64, (3,3), activation="relu"),
layers.MaxPooling2D (5,3),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_pool5.summary()
predict_model(model_cnn_pool5, [es], epochs=100)
```
<font size=4>Next, we checked a different size of pooling layer. We decided to create MaxPooling layers with pool size (5,5) and stride 3. It means that we take a 5x5 square of values, look for the max value, write it as the output for that window, and then "move" 3 positions to the right or down. As we can see, the accuracy is worse than the basic model's, because we lose too much information in the MaxPooling layers. The plots also show that this model is underfitted.
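To make the size reduction concrete, here is a small standalone sketch (the toy tensor shape is an assumption) showing how a 5x5 window with stride 3 shrinks a feature map:
```
import numpy as np
from tensorflow.keras import layers

# e.g. the shape after a 3x3 convolution on a 28x28 image: 26x26 with 32 filters
x = np.random.rand(1, 26, 26, 32).astype("float32")
y = layers.MaxPooling2D(pool_size=5, strides=3)(x)
# output side = floor((26 - 5) / 3) + 1 = 8
print(y.shape)  # (1, 8, 8, 32)
```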
```
model_cnn_avg = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.AveragePooling2D (3,3),
layers.Conv2D(64, (3,3), activation="relu"),
layers.AveragePooling2D (3,3),
layers.Flatten(),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_avg.summary()
predict_model(model_cnn_avg, [es], epochs=100)
```
<font size=4>Afterwards, we changed the MaxPooling layers to AveragePooling layers. The difference between these two layers is that the AveragePooling layer sums the values in the window and divides by the number of values in it. The results are worse than the basic model's, because MaxPooling works better when we have a black background, as it keeps the brightest grey-scale value in each window.
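A tiny standalone sketch (the 4x4 toy input is an assumption) makes the difference visible: MaxPooling keeps the brightest pixel in each window, while AveragePooling smooths it out:
```
import numpy as np
from tensorflow.keras import layers

x = np.array([[0, 0, 9, 1],
              [0, 5, 2, 0],
              [7, 0, 0, 0],
              [0, 0, 0, 3]], dtype="float32").reshape(1, 4, 4, 1)

print(np.squeeze(layers.MaxPooling2D(2, 2)(x)))      # [[5. 9.] [7. 3.]]
print(np.squeeze(layers.AveragePooling2D(2, 2)(x)))  # [[1.25 3.  ] [1.75 0.75]]
```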
### Different number of fully connected layers
```
model_cnn_fc = keras.Sequential([
layers.Conv2D(32, (3,3), activation="relu", input_shape=(28,28,1)),
layers.MaxPooling2D (2,2),
layers.Conv2D(64, (3,3), activation="relu"),
layers.MaxPooling2D (2,2),
layers.Flatten(),
layers.Dense(128, activation="relu"),
layers.Dense(32, activation="relu"),
layers.Dropout(.5),
layers.Dense(10, activation="softmax")
])
model_cnn_fc.summary()
predict_model(model_cnn_fc, [es], epochs=100)
```
## The performance of a published network (LeNet5, VGG, Yolo, etc) for recognizing MNIST Digits
<font size=4>We decided to implement the architecture of LeNet5 network. The LeNet-5 architecture consists of two sets of convolutional and average pooling layers, followed by a flattening convolutional layer, then two fully-connected layers and finally a softmax classifier.
<img src="images/lenet5.png" >
### LeNet5
<font size=4>This is an implementation of LeNet5 (slightly different from the original, because the input shape is 28x28 instead of 32x32). Despite its age, the model is pretty accurate (98.9% accuracy). This is close to our best models, and it does not have too large a number of parameters (only 60,074).
```
lenet5 = keras.Sequential([
layers.Conv2D(filters=6, kernel_size=(3, 3), activation='relu', input_shape=(28,28,1)),
layers.AveragePooling2D(),
layers.Conv2D(filters=16, kernel_size=(3, 3), activation='relu'),
layers.AveragePooling2D(),
layers.Flatten(),
layers.Dense(units=120, activation='relu'),
layers.Dense(units=84, activation='relu'),
layers.Dense(units=10, activation='softmax')
])
lenet5.summary()
predict_model(lenet5, [es], epochs=100)
```
### The best network
```
from sklearn.metrics import confusion_matrix
y_pred = model_cnn_long.predict(x_test)
y_pred1 = list(np.argmax(y_pred, axis=1))
y_test1 = list(np.argmax(y_test, axis = 1))
cm = confusion_matrix(y_test1, y_pred1)  # avoid shadowing the imported function so the cell can be re-run
print(cm)
```
<font size=4>We chose the model with the best accuracy. It was the model with 4 convolutional layers, and this is the confusion matrix of that model.
| github_jupyter |
```
# Copyright 2019 Google Inc.
# Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
# http://www.apache.org/licenses/LICENSE-2.0
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
#Predicting Movie Review Sentiment with BERT on TF Hub
If you've been following Natural Language Processing over the past year, you've probably heard of BERT: Bidirectional Encoder Representations from Transformers. It's a neural network architecture designed by Google researchers that's totally transformed what's state-of-the-art for NLP tasks, like text classification, translation, summarization, and question answering.
Now that BERT's been added to [TF Hub](https://www.tensorflow.org/hub) as a loadable module, it's easy(ish) to add into existing Tensorflow text pipelines. In an existing pipeline, BERT can replace text embedding layers like ELMO and GloVE. Alternatively, [finetuning](http://wiki.fast.ai/index.php/Fine_tuning) BERT can provide both an accuracy boost and faster training time in many cases.
Here, we'll train a model to predict whether an IMDB movie review is positive or negative using BERT in Tensorflow with tf hub. Some code was adapted from [this colab notebook](https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb). Let's get started!
```
from sklearn.model_selection import train_test_split
import pandas as pd
import tensorflow as tf
import tensorflow_hub as hub
from datetime import datetime
tf.logging.set_verbosity(tf.logging.INFO)
```
In addition to the standard libraries we imported above, we'll need to install BERT's python package.
```
!pip install bert-tensorflow
import bert
from bert import run_classifier
from bert import optimization
from bert import tokenization
```
Below, we'll set an output directory location to store our model output and checkpoints. This can be a local directory, in which case you'd set OUTPUT_DIR to the name of the directory you'd like to create. If you're running this code in Google's hosted Colab, the directory won't persist after the Colab session ends.
Alternatively, if you're a GCP user, you can store output in a GCP bucket. To do that, set a directory name in OUTPUT_DIR and the name of the GCP bucket in the BUCKET field.
Set DO_DELETE to rewrite the OUTPUT_DIR if it exists. Otherwise, Tensorflow will load existing model checkpoints from that directory (if they exist).
```
# Set the output directory for saving model file
# Optionally, set a GCP bucket location
OUTPUT_DIR = 'output_files'#@param {type:"string"}
#@markdown Whether or not to clear/delete the directory and create a new one
DO_DELETE = False #@param {type:"boolean"}
#@markdown Set USE_BUCKET and BUCKET if you want to (optionally) store model output on GCP bucket.
USE_BUCKET = False #@param {type:"boolean"}
BUCKET = 'BUCKET_NAME' #@param {type:"string"}
if USE_BUCKET:
OUTPUT_DIR = 'gs://{}/{}'.format(BUCKET, OUTPUT_DIR)
from google.colab import auth
auth.authenticate_user()
if DO_DELETE:
try:
tf.gfile.DeleteRecursively(OUTPUT_DIR)
except:
# Doesn't matter if the directory didn't exist
pass
tf.gfile.MakeDirs(OUTPUT_DIR)
print('***** Model output directory: {} *****'.format(OUTPUT_DIR))
```
#Data
First, let's download the dataset, hosted by Stanford. The code below, which downloads, extracts, and imports the IMDB Large Movie Review Dataset, is borrowed from [this Tensorflow tutorial](https://www.tensorflow.org/hub/tutorials/text_classification_with_tf_hub).
```
from tensorflow import keras
import os
import re
# Load all files from a directory in a DataFrame.
def load_directory_data(directory):
data = {}
data["sentence"] = []
data["sentiment"] = []
for file_path in os.listdir(directory):
with tf.gfile.GFile(os.path.join(directory, file_path), "r") as f:
data["sentence"].append(f.read())
data["sentiment"].append(re.match("\d+_(\d+)\.txt", file_path).group(1))
return pd.DataFrame.from_dict(data)
# Merge positive and negative examples, add a polarity column and shuffle.
def load_dataset(directory):
pos_df = load_directory_data(os.path.join(directory, "pos"))
neg_df = load_directory_data(os.path.join(directory, "neg"))
pos_df["polarity"] = 1
neg_df["polarity"] = 0
return pd.concat([pos_df, neg_df]).sample(frac=1).reset_index(drop=True)
# Download and process the dataset files.
def download_and_load_datasets(force_download=False):
dataset = tf.keras.utils.get_file(
fname="aclImdb.tar.gz",
origin="http://ai.stanford.edu/~amaas/data/sentiment/aclImdb_v1.tar.gz",
extract=True)
train_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "train"))
test_df = load_dataset(os.path.join(os.path.dirname(dataset),
"aclImdb", "test"))
return train_df, test_df
# train, test = download_and_load_datasets()
import pandas as pd
def load_dataset():
train_df = pd.read_csv('data/train.csv')
test_df = pd.read_csv('data/valid.csv')
return train_df, test_df
train, test = load_dataset()
```
To keep training fast, we could take a sample of 5000 train and test examples each; the sampling is commented out below, so the full dataset is used here.
```
# train = train.sample(5000)
# test = test.sample(5000)
train.columns
```
For us, our input data is the 'text' column and our label is the 'stars' column (star ratings from 1 to 5)
```
DATA_COLUMN = 'text'
LABEL_COLUMN = 'stars'
# label_list is the list of labels, i.e. True, False or 0, 1 or 'dog', 'cat'
label_list = [1, 2, 3, 4, 5]
```
#Data Preprocessing
We'll need to transform our data into a format BERT understands. This involves two steps. First, we create `InputExample`'s using the constructor provided in the BERT library.
- `text_a` is the text we want to classify, which in this case is the `text` column in our DataFrame.
- `text_b` is used if we're training a model to understand the relationship between sentences (i.e. is `text_b` a translation of `text_a`? Is `text_b` an answer to the question asked by `text_a`?). This doesn't apply to our task, so we can leave `text_b` blank.
- `label` is the label for our example, i.e. True, False
```
# Use the InputExample class from BERT's run_classifier code to create examples from the data
train_InputExamples = train.apply(lambda x: bert.run_classifier.InputExample(guid=None, # Globally unique ID for bookkeeping, unused in this example
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
test_InputExamples = test.apply(lambda x: bert.run_classifier.InputExample(guid=None,
text_a = x[DATA_COLUMN],
text_b = None,
label = x[LABEL_COLUMN]), axis = 1)
```
Next, we need to preprocess our data so that it matches the data BERT was trained on. For this, we'll need to do a couple of things (but don't worry--this is also included in the Python library):
1. Lowercase our text (if we're using a BERT lowercase model)
2. Tokenize it (i.e. "sally says hi" -> ["sally", "says", "hi"])
3. Break words into WordPieces (i.e. "calling" -> ["call", "##ing"])
4. Map our words to indexes using a vocab file that BERT provides
5. Add special "CLS" and "SEP" tokens (see the [readme](https://github.com/google-research/bert))
6. Append "index" and "segment" tokens to each input (see the [BERT paper](https://arxiv.org/pdf/1810.04805.pdf))
Happily, we don't have to worry about most of these details.
To start, we'll need to load a vocabulary file and lowercasing information directly from the BERT tf hub module:
```
# This is a path to an uncased (all lowercase) version of BERT
BERT_MODEL_HUB = "https://tfhub.dev/google/bert_uncased_L-12_H-768_A-12/1"
def create_tokenizer_from_hub_module():
"""Get the vocab file and casing info from the Hub module."""
with tf.Graph().as_default():
bert_module = hub.Module(BERT_MODEL_HUB)
tokenization_info = bert_module(signature="tokenization_info", as_dict=True)
with tf.Session() as sess:
vocab_file, do_lower_case = sess.run([tokenization_info["vocab_file"],
tokenization_info["do_lower_case"]])
return bert.tokenization.FullTokenizer(
vocab_file=vocab_file, do_lower_case=do_lower_case)
tokenizer = create_tokenizer_from_hub_module()
```
Great--we just learned that the BERT model we're using expects lowercase data (that's what's stored in tokenization_info["do_lower_case"]) and we also loaded BERT's vocab file. We also created a tokenizer, which breaks words into word pieces:
```
tokenizer.tokenize("This here's an example of using the BERT tokenizer")
```
Using our tokenizer, we'll call `run_classifier.convert_examples_to_features` on our InputExamples to convert them into features BERT understands.
```
# We'll set sequences to be at most 128 tokens long.
MAX_SEQ_LENGTH = 128
# Convert our train and test features to InputFeatures that BERT understands.
train_features = bert.run_classifier.convert_examples_to_features(train_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
test_features = bert.run_classifier.convert_examples_to_features(test_InputExamples, label_list, MAX_SEQ_LENGTH, tokenizer)
```
#Creating a model
Now that we've prepared our data, let's focus on building a model. `create_model` does just this below. First, it loads the BERT tf hub module again (this time to extract the computation graph). Next, it creates a single new layer that will be trained to adapt BERT to our sentiment task (i.e. classifying whether a movie review is positive or negative). This strategy of using a mostly trained model is called [fine-tuning](http://wiki.fast.ai/index.php/Fine_tuning).
```
def create_model(is_predicting, input_ids, input_mask, segment_ids, labels,
num_labels):
"""Creates a classification model."""
bert_module = hub.Module(
BERT_MODEL_HUB,
trainable=True)
bert_inputs = dict(
input_ids=input_ids,
input_mask=input_mask,
segment_ids=segment_ids)
bert_outputs = bert_module(
inputs=bert_inputs,
signature="tokens",
as_dict=True)
# Use "pooled_output" for classification tasks on an entire sentence.
# Use "sequence_outputs" for token-level output.
output_layer = bert_outputs["pooled_output"]
hidden_size = output_layer.shape[-1].value
# Create our own layer to tune for politeness data.
output_weights = tf.get_variable(
"output_weights", [num_labels, hidden_size],
initializer=tf.truncated_normal_initializer(stddev=0.02))
output_bias = tf.get_variable(
"output_bias", [num_labels], initializer=tf.zeros_initializer())
with tf.variable_scope("loss"):
# Dropout helps prevent overfitting
output_layer = tf.nn.dropout(output_layer, keep_prob=0.9)
logits = tf.matmul(output_layer, output_weights, transpose_b=True)
logits = tf.nn.bias_add(logits, output_bias)
log_probs = tf.nn.log_softmax(logits, axis=-1)
# Convert labels into one-hot encoding
one_hot_labels = tf.one_hot(labels, depth=num_labels, dtype=tf.float32)
predicted_labels = tf.squeeze(tf.argmax(log_probs, axis=-1, output_type=tf.int32))
# If we're predicting, we want predicted labels and the probabiltiies.
if is_predicting:
return (predicted_labels, log_probs)
# If we're train/eval, compute loss between predicted and actual label
per_example_loss = -tf.reduce_sum(one_hot_labels * log_probs, axis=-1)
loss = tf.reduce_mean(per_example_loss)
return (loss, predicted_labels, log_probs)
```
Next we'll wrap our model function in a `model_fn_builder` function that adapts our model to work for training, evaluation, and prediction.
```
# model_fn_builder actually creates our model function
# using the passed parameters for num_labels, learning_rate, etc.
def model_fn_builder(num_labels, learning_rate, num_train_steps,
num_warmup_steps):
"""Returns `model_fn` closure for TPUEstimator."""
def model_fn(features, labels, mode, params): # pylint: disable=unused-argument
"""The `model_fn` for TPUEstimator."""
input_ids = features["input_ids"]
input_mask = features["input_mask"]
segment_ids = features["segment_ids"]
label_ids = features["label_ids"]
is_predicting = (mode == tf.estimator.ModeKeys.PREDICT)
# TRAIN and EVAL
if not is_predicting:
(loss, predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
train_op = bert.optimization.create_optimizer(
loss, learning_rate, num_train_steps, num_warmup_steps, use_tpu=False)
# Calculate evaluation metrics.
def metric_fn(label_ids, predicted_labels):
accuracy = tf.metrics.accuracy(label_ids, predicted_labels)
f1_score = tf.contrib.metrics.f1_score(
label_ids,
predicted_labels)
auc = tf.metrics.auc(
label_ids,
predicted_labels)
recall = tf.metrics.recall(
label_ids,
predicted_labels)
precision = tf.metrics.precision(
label_ids,
predicted_labels)
true_pos = tf.metrics.true_positives(
label_ids,
predicted_labels)
true_neg = tf.metrics.true_negatives(
label_ids,
predicted_labels)
false_pos = tf.metrics.false_positives(
label_ids,
predicted_labels)
false_neg = tf.metrics.false_negatives(
label_ids,
predicted_labels)
return {
"eval_accuracy": accuracy,
"f1_score": f1_score,
"auc": auc,
"precision": precision,
"recall": recall,
"true_positives": true_pos,
"true_negatives": true_neg,
"false_positives": false_pos,
"false_negatives": false_neg
}
eval_metrics = metric_fn(label_ids, predicted_labels)
if mode == tf.estimator.ModeKeys.TRAIN:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
train_op=train_op)
else:
return tf.estimator.EstimatorSpec(mode=mode,
loss=loss,
eval_metric_ops=eval_metrics)
else:
(predicted_labels, log_probs) = create_model(
is_predicting, input_ids, input_mask, segment_ids, label_ids, num_labels)
predictions = {
'probabilities': log_probs,
'labels': predicted_labels
}
return tf.estimator.EstimatorSpec(mode, predictions=predictions)
# Return the actual model function in the closure
return model_fn
# Compute train and warmup steps from batch size
# These hyperparameters are copied from this colab notebook (https://colab.sandbox.google.com/github/tensorflow/tpu/blob/master/tools/colab/bert_finetuning_with_cloud_tpus.ipynb)
BATCH_SIZE = 32
LEARNING_RATE = 2e-5
NUM_TRAIN_EPOCHS = 3.0
# Warmup is a period of time where the learning rate
# is small and gradually increases--usually helps training.
WARMUP_PROPORTION = 0.1
# Model configs
SAVE_CHECKPOINTS_STEPS = 500
SAVE_SUMMARY_STEPS = 100
# Compute # train and warmup steps from batch size
num_train_steps = int(len(train_features) / BATCH_SIZE * NUM_TRAIN_EPOCHS)
num_warmup_steps = int(num_train_steps * WARMUP_PROPORTION)
# Specify output directory and number of checkpoint steps to save
run_config = tf.estimator.RunConfig(
model_dir=OUTPUT_DIR,
save_summary_steps=SAVE_SUMMARY_STEPS,
save_checkpoints_steps=SAVE_CHECKPOINTS_STEPS)
model_fn = model_fn_builder(
num_labels=len(label_list),
learning_rate=LEARNING_RATE,
num_train_steps=num_train_steps,
num_warmup_steps=num_warmup_steps)
estimator = tf.estimator.Estimator(
model_fn=model_fn,
config=run_config,
params={"batch_size": BATCH_SIZE})
```
Next we create an input builder function that takes our training feature set (`train_features`) and produces a generator. This is a pretty standard design pattern for working with Tensorflow [Estimators](https://www.tensorflow.org/guide/estimators).
```
# Create an input function for training. drop_remainder = True for using TPUs.
train_input_fn = bert.run_classifier.input_fn_builder(
features=train_features,
seq_length=MAX_SEQ_LENGTH,
is_training=True,
drop_remainder=False)
```
Now we train our model! For me, using a Colab notebook running on Google's GPUs, my training time was about 14 minutes.
```
print(f'Beginning Training!')
current_time = datetime.now()
estimator.train(input_fn=train_input_fn, max_steps=num_train_steps)
print("Training took time ", datetime.now() - current_time)
```
Now let's use our test data to see how well our model did:
```
test_input_fn = run_classifier.input_fn_builder(
features=test_features,
seq_length=MAX_SEQ_LENGTH,
is_training=False,
drop_remainder=False)
estimator.evaluate(input_fn=test_input_fn, steps=None)
```
Now let's write code to make predictions on new sentences:
```
def getPrediction(in_sentences):
  labels = [f"{l} stars" for l in label_list]  # match the 5-class label_list instead of binary labels
input_examples = [run_classifier.InputExample(guid="", text_a = x, text_b = None, label = 0) for x in in_sentences] # here, "" is just a dummy label
input_features = run_classifier.convert_examples_to_features(input_examples, label_list, MAX_SEQ_LENGTH, tokenizer)
predict_input_fn = run_classifier.input_fn_builder(features=input_features, seq_length=MAX_SEQ_LENGTH, is_training=False, drop_remainder=False)
predictions = estimator.predict(predict_input_fn)
return [(sentence, prediction['probabilities'], labels[prediction['labels']]) for sentence, prediction in zip(in_sentences, predictions)]
pred_sentences = [
"That movie was absolutely awful",
"The acting was a bit lacking",
"The film was creative and surprising",
"Absolutely fantastic!"
]
predictions = getPrediction(pred_sentences)
```
Voila! We have a sentiment classifier!
```
predictions
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import scipy
import matplotlib.pyplot as plt
from sklearn.metrics import accuracy_score
from datetime import datetime
%matplotlib inline
import matplotlib
from datetime import datetime
import os
from scipy import stats
from definitions import HUMAN_DATA_DIR, ROOT_DIR
from data.load_from_csv import get_content_datasets
def ClairvoyantCF(test_dataset, train_dataset, answers_dict):
"""Takes datasets and {item_id: True/False} dict and returns
mean mse simply predicting 0/100"""
total_score = 0
for i, rating in enumerate(test_dataset.ratings):
try:
if answers_dict[test_dataset.item_ids[i]]:
total_score += (rating[2] - 1.0)**2
else:
total_score += (rating[2] - 0)**2
        except KeyError:
print(i, test_dataset.item_ids[i])
mean_mse = total_score / len(test_dataset.ratings)
print("Using Clairvoyant CF, got total val score {:.3f}".format(mean_mse))
return
def ClairvoyantAdjustedCF(test_dataset, train_dataset, answers_dict):
"""Takes datasets and {item_id: True/False} dict and returns
mean mse simply predicting 0/100"""
tot_true = 0
tot_false = 0
true_count = 0
false_count = 0
for i, rating in enumerate(train_dataset.ratings):
if not np.isnan(rating[2]):
if answers_dict[train_dataset.item_ids[i]]:
tot_true += rating[2]
true_count += 1
else:
tot_false += rating[2]
false_count += 1
avg_true = tot_true / true_count
avg_false = tot_false / false_count
total_score = 0
for i, rating in enumerate(test_dataset.ratings):
if answers_dict[test_dataset.item_ids[i]]:
total_score += (rating[2] - avg_true)**2
else:
total_score += (rating[2] - avg_false)**2
mean_mse = total_score / len(test_dataset.ratings)
print("Using Clairvoyant Adjusted CF, got total val score {:.3f}".format(mean_mse))
return
fermi_answers = pd.read_csv(os.path.join(HUMAN_DATA_DIR, 'fermi', 'answers.csv')).drop('Unnamed: 0', axis=1).set_index('item_id').T.to_dict('index')['answer']
politifact_answers = pd.read_csv(os.path.join(HUMAN_DATA_DIR, 'politifact', 'answers.csv')).drop('Unnamed: 0', axis=1).set_index('item_id').T.to_dict('index')['answer']
## Fermi
print('Fermi\nUnmasked:')
unmasked_fermi, unmasked_val_fermi, _ = get_content_datasets(task='fermi', sparsity='unmasked')
ClairvoyantCF(unmasked_val_fermi, unmasked_fermi, fermi_answers)
ClairvoyantAdjustedCF(unmasked_val_fermi, unmasked_fermi, fermi_answers)
print('\nLight Masking:')
light_fermi, unmasked_val_fermi, _ = get_content_datasets(task='fermi', sparsity='light')
ClairvoyantCF(unmasked_val_fermi, light_fermi, fermi_answers)
ClairvoyantAdjustedCF(unmasked_val_fermi, light_fermi, fermi_answers)
print('\nHeavy Masking:')
heavy_fermi, unmasked_val_fermi, _ = get_content_datasets(task='fermi', sparsity='heavy')
ClairvoyantCF(unmasked_val_fermi, heavy_fermi, fermi_answers)
ClairvoyantAdjustedCF(unmasked_val_fermi, heavy_fermi, fermi_answers)
## Politifact
print('Politifact\nUnmasked:')
unmasked_politifact, unmasked_val_politifact, _ = get_content_datasets(task='politifact', sparsity='unmasked')
ClairvoyantCF(unmasked_val_politifact, unmasked_politifact, politifact_answers)
ClairvoyantAdjustedCF(unmasked_val_politifact, unmasked_politifact, politifact_answers)
print('\nLight Masking:')
light_politifact, unmasked_val_politifact, _ = get_content_datasets(task='politifact', sparsity='light')
ClairvoyantCF(unmasked_val_politifact, light_politifact, politifact_answers)
ClairvoyantAdjustedCF(unmasked_val_politifact, light_politifact, politifact_answers)
print('\nHeavy Masking:')
heavy_politifact, unmasked_val_politifact, _ = get_content_datasets(task='politifact', sparsity='heavy')
ClairvoyantCF(unmasked_val_politifact, heavy_politifact, politifact_answers)
ClairvoyantAdjustedCF(unmasked_val_politifact, heavy_politifact, politifact_answers)
```
| github_jupyter |
# UCI Daphnet dataset (Freezing of gait for Parkinson's disease patients)
```
import numpy as np
import pandas as pd
import os
from typing import List
from pathlib import Path
from config import data_raw_folder, data_processed_folder
from timeeval import Datasets
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (20, 10)
dataset_collection_name = "Daphnet"
source_folder = Path(data_raw_folder) / "UCI ML Repository/Daphnet/dataset"
target_folder = Path(data_processed_folder)
print(f"Looking for source datasets in {source_folder.absolute()} and\nsaving processed datasets in {target_folder.absolute()}")
train_type = "unsupervised"
train_is_normal = False
input_type = "multivariate"
datetime_index = True
dataset_type = "real"
# create target directory
dataset_subfolder = os.path.join(input_type, dataset_collection_name)
target_subfolder = os.path.join(target_folder, dataset_subfolder)
try:
os.makedirs(target_subfolder)
print(f"Created directories {target_subfolder}")
except FileExistsError:
print(f"Directories {target_subfolder} already exist")
pass
dm = Datasets(target_folder)
experiments = [f for f in source_folder.iterdir()]
experiments
columns = ["timestamp", "ankle_horiz_fwd", "ankle_vert", "ankle_horiz_lateral", "leg_horiz_fwd", "leg_vert", "leg_horiz_lateral",
"trunk_horiz_fwd", "trunk_vert", "trunk_horiz_lateral", "is_anomaly"]
def transform_experiment_file(path: Path) -> List[pd.DataFrame]:
df = pd.read_csv(path, sep=" ", header=None)
df.columns = columns
df["timestamp"] = pd.to_datetime(df["timestamp"], unit="ms")
# slice out experiments (0 annotation shows unrelated data points (preparation/briefing/...))
s_group = df["is_anomaly"].isin([1, 2])
s_diff = s_group.shift(-1) - s_group
starts = (df[s_diff == 1].index + 1).values # first point has annotation 0 --> index + 1
ends = df[s_diff == -1].index.values
dfs = []
for start, end in zip(starts, ends):
df1 = df.iloc[start:end].copy()
df1["is_anomaly"] = (df1["is_anomaly"] == 2).astype(int)
dfs.append(df1)
return dfs
for exp in experiments:
# transform file to get datasets
datasets = transform_experiment_file(exp)
for i, df in enumerate(datasets):
# get target filenames
experiment_name = os.path.splitext(exp.name)[0]
dataset_name = f"{experiment_name}E{i}"
filename = f"{dataset_name}.test.csv"
path = os.path.join(dataset_subfolder, filename)
target_filepath = os.path.join(target_subfolder, filename)
# calc length and save in file
dataset_length = len(df)
df.to_csv(target_filepath, index=False)
print(f"Processed source dataset {exp} -> {target_filepath}")
# save metadata
dm.add_dataset((dataset_collection_name, dataset_name),
train_path = None,
test_path = path,
dataset_type = dataset_type,
datetime_index = datetime_index,
split_at = None,
train_type = train_type,
train_is_normal = train_is_normal,
input_type = input_type,
dataset_length = dataset_length
)
dm.save()
dm.refresh()
dm.df().loc[(slice(dataset_collection_name,dataset_collection_name), slice(None))]
```
## Experimentation
Annotations
- `0`: not part of the experiment.
For instance the sensors are installed on the user or the user is performing activities unrelated to the experimental protocol, such as debriefing
- `1`: experiment, no freeze (can be any of stand, walk, turn)
- `2`: freeze
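The slicing code below relies on a shift-and-subtract trick to find where these experiment segments start and end. A toy standalone sketch (the annotation values here are made up) of the same idea:
```
import pandas as pd

ann = pd.Series([0, 0, 1, 1, 2, 1, 0, 0, 1, 0])
in_exp = ann.isin([1, 2]).astype(int)   # 1 while inside an experiment segment
diff = in_exp.shift(-1) - in_exp        # +1 just before a segment starts, -1 on its last row
starts = (ann[diff == 1].index + 1).values
ends = ann[diff == -1].index.values
print(starts, ends)                     # [2 8] [5 8]
```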
```
columns = ["timestamp", "ankle_horiz_fwd", "ankle_vert", "ankle_horiz_lateral", "leg_horiz_fwd", "leg_vert", "leg_horiz_lateral",
"trunk_horiz_fwd", "trunk_vert", "trunk_horiz_lateral", "annotation"]
df1 = pd.read_csv(source_folder / "S01R01.txt", sep=' ', header=None)
df1.columns = columns
df1["timestamp"] = pd.to_datetime(df1["timestamp"], unit="ms")
df1
columns = [c for c in columns if c not in ["timestamp", "annotation"]]
df_plot = df1.set_index("timestamp", drop=True)#.loc["1970-01-01 00:15:00":"1970-01-01 00:16:00"]
df_plot.plot(y=columns, figsize=(20,10))
df_plot["annotation"].plot(secondary_y=True)
plt.legend()
plt.show()
s_group = df1["annotation"].isin([1, 2])
s_diff = s_group.shift(-1) - s_group
starts = (df1[s_diff == 1].index + 1).values
ends = df1[s_diff == -1].index.values
starts, ends
dfs = [df1.iloc[start:end] for start, end in zip(starts, ends)]
len(dfs)
columns = [c for c in columns if c not in ["timestamp", "annotation"]]
for df in dfs:
df = df.set_index("timestamp", drop=True)
df.plot(y=columns, figsize=(20,10))
df["annotation"].plot(secondary_y=True)
plt.show()
```
| github_jupyter |
#### Amy Green - 200930437
# <center> 5990M: Introduction to Programming for Geographical Information Analysis - Core Skills </center>
## <center><u> __**Assignment 2: Investigating the Black Death**__ </u></center>
-------------------------------------------------------------
### Project Aim
<p>The project aims to build a model, based upon an initial agent-based framework, that analyses an aspect of 'The Black Death'. It calculates the fatalities from The Great Plague of London using the known population densities of London parishes in 1665. Generating this measure from historical data allows any correlation to be investigated and an overall map of total deaths to be produced. Furthermore, the final code should allow the parameter weights to be manipulated in order to investigate possible scenarios that could have ensued. </p>
### Context
<p>The Great Plague of London (1665-1666) was the last occurrence of the fatal โBlack Deathโ Plague that swept across Europe in the 1300s. The bubonic plague caused an epidemic across the 17th century parishes of London, as well as some smaller areas of the UK. The overcrowded city and hot Summer became a breeding ground for the bacterium <i>Yersinia pestis</i> disseminated by rat fleas โ the known cause of the plague. Transmission was inevitable due to the high poverty levels, low sanitation rates, and open sewers in closely packed waste-filled streets; especially in poorer areas (Trueman, 2015). Deaths started slowly within the St. Gilesโs Parish but rose alarmingly as documented by the weekly โBill of Mortalityโ that was legally required from each parish at the time (Defoe, 2005). The numbers of deaths slowed after 18 months due to quarantines, much of the population moving to the country and the onset of Winter, however, the final end emerged due to the Great Fire of London destroying central parts of the city in September 1666. </p>
### Data Source
<p>The average death rate from the Great Plague will be calculated from two raster maps. The model uses known rat populations and average population densities for 16 different parishes within London, recorded in 1665 by rat-catchers and the parishes, respectively. The original maps store data for each 400m x 400m area as text, but the figures have been averaged to represent either the area covered by the parish or the area within which the rat-catcher operates.
The relationship to calculate the average death rate from this source data is as follows:
<b>Death Rate = (0.8 x Rat Population)(1.3 x Population Density) </b></p>
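<p>As a quick illustration of this relationship (the figures below are invented for the example, not taken from the historical maps), the calculation for a single 400m x 400m cell would be:</p>

```
#Hypothetical single-cell example of the death rate relationship
rat_population = 20        #Average rats caught per week in this cell (assumed value)
population_density = 35    #Average parish population density for this cell (assumed value)
death_rate = (0.8 * rat_population) * (1.3 * population_density)
print(death_rate)          #728.0 average deaths per week for this cell
```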
### Model Expectations
<p>The model should first show maps of the original source data: the rat populations and population densities for the 16 investigated parishes. These maps will then be combined using the calculation to generate the average death rate from the Great Plague per week and will be mapped as an image. The final map will then be altered so the user will be able to manipulate the weights of either the rat population or the density population to envision how these alternate factors may change the overall death rate. </p>
<p>The code should run on Windows.</p>
------------------------------------------------------------------------------
### Part 1 - Read in Source Data
```
'''Step 1 - Set up initial imports for programme'''
import random
%matplotlib inline
import matplotlib.pyplot
import matplotlib
import matplotlib.animation
import os
import requests
import tkinter
import pandas as pd #Shortened in standard python documentation format
import numpy as np #Shortened in standard python documentation format
import ipywidgets as widgets #Shortened in standard python documentation format
```
<p><u> Map 1</u> - Rat Populations (Average Rats caught per week) </p>
```
'''Step 2 - Import data for the rat populations and generate environment from the 2D array'''
#Set up a base path for the import of the rats txt file
base_path = "C:\\Users\\Home\\Documents\\MSc GIS\\Programming\\Black_Death\\BlackDeathProject" #Basepath
deathrats = "deathrats.txt" #Saved filename
path_to_file = os.path.join(base_path, deathrats)
f = open(path_to_file , 'r')
#mapA = f.read()
#print(mapA) #Test to show data has imported
#Set up an environment to read the rats txt file into - this is called environmentA
environmentA = []
for line in f:
parsed_line = str.split(line, ",") #Split values up via commas
rowlist = []
for word in parsed_line:
rowlist.append(float(word))
environmentA.append(rowlist) #Append all lists individually so can print environment
f.close()
#print(environmentA) #Test environment appears and all lines run
#Display environment of rat populations
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(environmentA) #Shows the environment
matplotlib.pyplot.title('Average Rat Populations', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
```
<p> This map contains the data for the average rat populations denoted from the amount of rats caught per week. The data is initially placed into a text file which can be seen through print(mapA), but then has been put into an environment which is shown. The different colours show the different amounts of rats, however, this will have more useful when combined with Map 2 in Part 2 when calculating the overall death rates. </p>
<p><u> Map 2</u> - Average Population Densities (per Parish) </p>
```
'''Step 3 - Import data for the parish population densities and generate environment from the 2D array'''
#Set up a base path for the import of the parish txt file
#base_path = "C:\\Users\\Home\\Documents\\MSc GIS\\Programming\\Black_Death\\BlackDeathProject" #Basepath
deathparishes = "deathparishes.txt" #Saved filename
path_to_file = os.path.join(base_path, deathparishes)
fd = open(path_to_file , 'r')
#mapB = fd.read()
#print(mapB) #Test to show data has imported
#Set up an environment to read the parish txt file into - this is called environmentB
environmentB = []
for line in fd:
parsed_line = str.split(line, ",") #Split values up via commas
rowlist = []
for word in parsed_line:
rowlist.append(float(word))
environmentB.append(rowlist) #Append all lists individually so can print environment
fd.close()
#print(environmentB) #Test environment appears and all lines run
#Display environment of parish populations
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(environmentB) #Shows the environment
matplotlib.pyplot.title('Average Parish Population Densities', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
```
<p> This map contains the data for the average population densities per the 16 parishes investigated. The data is initially placed into a text file which can be seen through print(mapB), but then has been put into an environment which is shown. The different colours show the different populations per parish. </p>
------------------------------------------------------------------------------
### Part 2 - Calculate the Average Death Rate
```
'''Step 4 - Calculate Map of Average death rates '''
#Sets up a list named results to append all calculated values to
result = []
for r in range(len(environmentA)):#Goes through both environments' (A and B) rows
row_a = environmentA[r]
row_b = environmentB[r]
rowlist = []
result.append(rowlist) #Append all lists individually so can merge values from environmentA and environmentB
for c in range(len(row_a)): #Goes through both environments' (A and B) columns
rats = row_a[c]
parishes = row_b[c]
# d = (0.8 x r) x (1.3 x p) Equation used to generate average death rate
d = (0.8 * rats) * (1.3 * parishes) #Puts values through death average equation with initial set parameters
rowlist.append(d)
#print(d) #Test that results array shows
'''Step 5 - Plot and show the average death rates'''
#Sets up environment to display the results
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(result) #Shows the environment
matplotlib.pyplot.title('Average Weekly Death Rates of the Great Plague', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
#To do:
#Insert legend
'''Step 6 - Save the average death rate results as a seperate txt.file'''
np.savetxt('result.txt', result, fmt='%-6.2f' , newline="\r\n") #Each row should equal a new line on the map
#Results have been padded to a width of 6 and rounded to 2 decimal points within the txt.file
```
<p> The output map within Part 2 displays the average death rate calculations within the 400x400 environment of the parishes investigated. The results array has been saved as a <i> result.txt </i> file (rounded to two decimal points) that can be manipulated and utilised for further investigation. </p>
------------------------------------------------------------------------------
### Part 3 - Display the Death Rate with Changing Parameters
```
'''Step 7 - Set up Rat Population Parameter Slider'''
#Generate a slider for the rats parameter
sR = widgets.FloatSlider(
value=0.8, #Initial parameter value set by the equation
min=0, #Minimum of range is 0
max=5.0, #Maximum of range is 5
step=0.1, #Values get to 1 decimal place increments
description='Rats:', #Label for slider
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
display(sR) #Dispays the parameter slider for rats that users can alter
'''Step 8 - Set up Parish Population Density Parameter Slider'''
#Generate a slider for the parish parameter
sP = widgets.FloatSlider(
value=1.3, #Initial parameter value set by the equation
min=0, #Minimum of range is 0
max=5.0, #Maximum of range is 5
step=0.1, #Values get to 1 decimal place increments
description='Parishes:', #Label for slider
disabled=False,
continuous_update=False,
orientation='horizontal',
readout=True,
readout_format='.1f',
)
display(sP) #Displays the parameter slider for parish population that users can alter
```
<p> The sliders above are available to alter to investigate the relationship between the rat population values and the average population density amounts. These will then be the next set parameters when the proceeding cell is run. </p>
```
'''Step 9 - Display the Changed Parameters '''
#Formatting to display parameter amounts to correlate to the underlying map
print('Changed Parameter Values')
print('Rats:', sR.value)
print('Parishes:', sP.value)
'''Step 10 - Create a map of the death rate average with new changed parameters'''
#Alter the results list to incorporate the altered parameter values
result = []
for r in range(len(environmentA)): #Goes through both environments' (A and B) rows
row_a = environmentA[r]
row_b = environmentB[r]
rowlist = []
result.append(rowlist) #Append all lists individually so can merge values from environmentA and environmentB
for c in range(len(row_a)): #Goes through both environments' (A and B) columns
rats = row_a[c]
parishes = row_b[c]
# d = (0.8 x r) x (1.3 x p) #Original equation used to generate average death rate
d = (sR.value * rats) * (sP.value * parishes) #Updated equation to show the altered parameter values
rowlist.append(d)
#print(d) #Test that results array has updated
#Set up a larger figure view of the final map
fig = matplotlib.pyplot.figure(figsize=(7,7))
ax = fig.add_axes([0, 0, 1, 1])
matplotlib.pyplot.xlim(0, 400) #Set up x-axis
matplotlib.pyplot.ylim(0, 400) #Set up y-axis
matplotlib.pyplot.imshow(result) #Display the final map
matplotlib.pyplot.xlabel('Rat Populations') #Label the x-axis
matplotlib.pyplot.ylabel('Parish Densities') #Label the y-axis
matplotlib.pyplot.title('Average Weekly Death Rates of the Great Plague at Altered Parameters', loc='center') #Adds a centred title
matplotlib.pyplot.hsv() #Altered colourmap to red-yellow-green-cyan-blue-pink-magenta display, from original viridis: aids user interpretation
#Calculate the overall average weekly death rate across every cell of the map
total_death_rate = sum(sum(row) for row in result) / (len(result) * len(result[0]))
print('Average weekly death rate at these parameters =', round(total_death_rate, 2)) #Print the overall average weekly death rate with the altered parameters, rounded to 2 decimal places
```
The final map displays the average death rate of people within the 16 investigated parishes affected by the Great Plague of 1665. Changing the parameters will generate a different total value which will be interesting to explore.
------------------------------------------------------------------------------
### Conclusions and Review
<p> The code appears to run smoothly and does generate an average weekly death rate successfully, even when parameter values have been changed. The issue that arises is that the final map changes only slightly when the parameters are altered. Therefore, to enhance the model further, the map display aspect would be explored to show a clearer layout of the values, possibly via a line or correlation-style graph. This would enable the relationship between the rat populations and the parish population densities to be interrogated further.
<i>n.b.</i> The one issue with the model is that the base paths of the initial txt.file imports need to be altered if copying the code as they are read from a saved folder into the Jupyter notebook. This is simple to do, just a tad annoying! </p>
#### References
<ul type="circle">
<li><p> Defoe, D. 2005.<i> History of the Plague in London.</i> [Online]. USA: American Book Company. [Accessed 2/1/19] Available from: <a href="http://www.gutenberg.org/files/17221/17221-h/17221-h.htm".>http://www.gutenberg.org/files/17221/17221-h/17221-h.htm.</a> </p></li>
<li><p> Trueman, C.N. 2015.<i> The Plague of 1665. </i> [Online]. [Accessed 2/1/19]. Available from: <a href="https://www.historylearningsite.co.uk/stuart-england/the-plague-of-1665/"> https://www.historylearningsite.co.uk/stuart-england/the-plague-of-1665/.</a> </p> </li>
<li><p> Wikipedia. 2018. <i> Great Plague of London. </i> [Online]. [Accessed 2/1/19]. Available from: <a href="https://en.wikipedia.org/wiki/Great_Plague_of_London.">https://en.wikipedia.org/wiki/Great_Plague_of_London.</a></p></li>
</ul>
| github_jupyter |
# SDLib
> Shilling simulated attacks and detection methods
## Setup
```
!mkdir -p results
```
### Imports
```
from collections import defaultdict
import numpy as np
import random
import os
import os.path
from os.path import abspath
from os import makedirs,remove
from re import compile,findall,split
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.metrics.pairwise import pairwise_distances,cosine_similarity
from numpy.linalg import norm
from scipy.stats.stats import pearsonr
from math import sqrt,exp
import sys
from re import split
from multiprocessing import Process,Manager
from time import strftime,localtime,time
import re
from os.path import abspath
from time import strftime,localtime,time
from sklearn.metrics import classification_report
from re import split
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from random import shuffle
from sklearn.tree import DecisionTreeClassifier
import time as tm
from sklearn.metrics import classification_report
import numpy as np
from collections import defaultdict
from math import log,exp
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier
from random import choice
import matplotlib
import matplotlib.pyplot as plt
from sklearn.manifold import TSNE
import random
from sklearn.metrics import classification_report
import numpy as np
from collections import defaultdict
from math import log,exp
from sklearn.tree import DecisionTreeClassifier
from sklearn.metrics import classification_report
from sklearn import metrics
from sklearn.metrics import classification_report
from sklearn import preprocessing
from sklearn import metrics
import scipy
from scipy.sparse import csr_matrix
from sklearn.metrics import classification_report
from sklearn.model_selection import train_test_split
import math
from sklearn.naive_bayes import GaussianNB
```
## Data
```
!mkdir -p dataset/amazon
!cd dataset/amazon && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/amazon/profiles.txt
!cd dataset/amazon && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/amazon/labels.txt
!mkdir -p dataset/averageattack
!cd dataset/averageattack && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/averageattack/ratings.txt
!cd dataset/averageattack && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/averageattack/labels.txt
!mkdir -p dataset/filmtrust
!cd dataset/filmtrust && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/filmtrust/ratings.txt
!cd dataset/filmtrust && wget -q --show-progress https://github.com/Coder-Yu/SDLib/raw/master/dataset/filmtrust/trust.txt
```
## Config
### Configure the Detection Method
<div>
<table class="table table-hover table-bordered">
<tr>
<th width="12%" scope="col"> Entry</th>
<th width="16%" class="conf" scope="col">Example</th>
<th width="72%" class="conf" scope="col">Description</th>
</tr>
<tr>
<td>ratings</td>
<td>dataset/averageattack/ratings.txt</td>
<td>Set the path to the dirty recommendation dataset. Format: each row separated by empty, tab or comma symbol. </td>
</tr>
<tr>
<td>label</td>
<td>dataset/averageattack/labels.txt</td>
<td>Set the path to labels (for users). Format: each row separated by empty, tab or comma symbol. </td>
</tr>
<tr>
<td scope="row">ratings.setup</td>
<td>-columns 0 1 2</td>
<td>-columns: (user, item, rating) columns of rating data are used;
-header: to skip the first head line when reading data<br>
</td>
</tr>
<tr>
<td scope="row">MethodName</td>
<td>DegreeSAD/PCASelect/etc.</td>
<td>The name of the detection method<br>
</td>
</tr>
<tr>
<td scope="row">evaluation.setup</td>
<td>-testSet dataset/testset.txt</td>
<td>Main option: -testSet, -ap, -cv <br>
-testSet path/to/test/file (need to specify the test set manually)<br>
-ap ratio (ap means that the user set (including items and ratings) are automatically partitioned into training set and test set, the number is the ratio of test set. e.g. -ap 0.2)<br>
-cv k (-cv means cross validation, k is the number of the fold. e.g. -cv 5)<br>
</td>
</tr>
<tr>
<td scope="row">output.setup</td>
<td>on -dir Results/</td>
<td>Main option: whether to output recommendation results<br>
-dir path: the directory path of output results.
</td>
</tr>
</table>
</div>
### Configure the Shilling Model
<div>
<table class="table table-hover table-bordered">
<tr>
<th width="12%" scope="col"> Entry</th>
<th width="16%" class="conf" scope="col">Example</th>
<th width="72%" class="conf" scope="col">Description</th>
</tr>
<tr>
<td>ratings</td>
<td>dataset/averageattack/ratings.txt</td>
<td>Set the path to the recommendation dataset. Format: each row separated by empty, tab or comma symbol. </td>
</tr>
<tr>
<td scope="row">ratings.setup</td>
<td>-columns 0 1 2</td>
<td>-columns: (user, item, rating) columns of rating data are used;
-header: to skip the first head line when reading data<br>
</td>
</tr>
<tr>
<td>attackSize</td>
<td>0.01</td>
<td>The ratio of the injected spammers to genuine users</td>
</tr>
<tr>
<td>fillerSize</td>
<td>0.01</td>
<td>The ratio of the filler items to all items </td>
</tr>
<tr>
<td>selectedSize</td>
<td>0.001</td>
<td>The ratio of the selected items to all items </td>
</tr>
<tr>
<td>linkSize</td>
<td>0.01</td>
<td>The ratio of the users maliciously linked by a spammer to all user </td>
</tr>
<tr>
<td>targetCount</td>
<td>20</td>
<td>The count of the targeted items </td>
</tr>
<tr>
<td>targetScore</td>
<td>5.0</td>
<td>The score given to the target items</td>
</tr>
<tr>
<td>threshold</td>
<td>3.0</td>
<td>Item has an average score lower than threshold may be chosen as one of the target items</td>
</tr>
<tr>
<td>minCount</td>
<td>3</td>
<td>Item has a ratings count larger than minCount may be chosen as one of the target items</td>
</tr>
<tr>
<td>maxCount</td>
<td>50</td>
<td>Item has a rating count smaller that maxCount may be chosen as one of the target items</td>
</tr>
<tr>
<td scope="row">outputDir</td>
<td>data/</td>
<td> User profiles and labels will be output here </td>
</tr>
</table>
</div>
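For reference, a shilling-model configuration assembled from the entries described above might look like the sketch below (the filename, dataset path and parameter values are assumptions for illustration, not files shipped with this notebook):
```
%%writefile ShillingAttack.conf
ratings=dataset/filmtrust/ratings.txt
ratings.setup=-columns 0 1 2
attackSize=0.05
fillerSize=0.05
selectedSize=0.005
linkSize=0.01
targetCount=20
targetScore=5.0
threshold=3.0
minCount=3
maxCount=50
outputDir=data/
```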
```
%%writefile BayesDetector.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=BayesDetector
evaluation.setup=-cv 5
item.ranking=off -topN 50
num.max.iter=100
learnRate=-init 0.03 -max 0.1
reg.lambda=-u 0.3 -i 0.3
BayesDetector=-k 10 -negCount 256 -gamma 1 -filter 4 -delta 0.01
output.setup=on -dir results/
%%writefile CoDetector.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=CoDetector
evaluation.setup=-ap 0.3
item.ranking=on -topN 50
num.max.iter=200
learnRate=-init 0.01 -max 0.01
reg.lambda=-u 0.8 -i 0.4
CoDetector=-k 10 -negCount 256 -gamma 1 -filter 4
output.setup=on -dir results/amazon/
%%writefile DegreeSAD.conf
ratings=dataset/amazon/profiles.txt
ratings.setup=-columns 0 1 2
label=dataset/amazon/labels.txt
methodName=DegreeSAD
evaluation.setup=-cv 5
output.setup=on -dir results/
%%writefile FAP.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=FAP
evaluation.setup=-ap 0.000001
seedUser=350
topKSpam=1557
output.setup=on -dir results/
%%writefile PCASelectUsers.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=PCASelectUsers
evaluation.setup=-ap 0.00001
kVals=3
attackSize=0.1
output.setup=on -dir results/
%%writefile SemiSAD.conf
ratings=dataset/averageattack/ratings.txt
ratings.setup=-columns 0 1 2
label=dataset/averageattack/labels.txt
methodName=SemiSAD
evaluation.setup=-ap 0.2
Lambda=0.5
topK=28
output.setup=on -dir results/
```
## Baseclass
```
class SDetection(object):
def __init__(self,conf,trainingSet=None,testSet=None,labels=None,fold='[1]'):
self.config = conf
self.isSave = False
self.isLoad = False
self.foldInfo = fold
self.labels = labels
self.dao = RatingDAO(self.config, trainingSet, testSet)
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
def readConfiguration(self):
self.algorName = self.config['methodName']
self.output = LineConfig(self.config['output.setup'])
def printAlgorConfig(self):
"show algorithm's configuration"
print('Algorithm:',self.config['methodName'])
print('Ratings dataSet:',abspath(self.config['ratings']))
if LineConfig(self.config['evaluation.setup']).contains('-testSet'):
print('Test set:',abspath(LineConfig(self.config['evaluation.setup']).getOption('-testSet')))
#print 'Count of the users in training set: ',len()
print('Training set size: (user count: %d, item count %d, record count: %d)' %(self.dao.trainingSize()))
print('Test set size: (user count: %d, item count %d, record count: %d)' %(self.dao.testSize()))
print('='*80)
def initModel(self):
pass
def buildModel(self):
pass
def saveModel(self):
pass
def loadModel(self):
pass
def predict(self):
pass
def execute(self):
self.readConfiguration()
if self.foldInfo == '[1]':
self.printAlgorConfig()
# load model from disk or build model
if self.isLoad:
print('Loading model %s...' % (self.foldInfo))
self.loadModel()
else:
print('Initializing model %s...' % (self.foldInfo))
self.initModel()
print('Building Model %s...' % (self.foldInfo))
self.buildModel()
# preict the ratings or item ranking
print('Predicting %s...' % (self.foldInfo))
prediction = self.predict()
report = classification_report(self.testLabels, prediction, digits=4)
        currentTime = strftime("%Y-%m-%d %H-%M-%S", localtime(time()))
FileIO.writeFile(self.output['-dir'],self.algorName+'@'+currentTime+self.foldInfo,report)
# save model
if self.isSave:
print('Saving model %s...' % (self.foldInfo))
self.saveModel()
print(report)
return report
class SSDetection(SDetection):
def __init__(self,conf,trainingSet=None,testSet=None,labels=None,relation=list(),fold='[1]'):
super(SSDetection, self).__init__(conf,trainingSet,testSet,labels,fold)
self.sao = SocialDAO(self.config, relation) # social relations access control
```
## Utils
```
class Config(object):
def __init__(self,fileName):
self.config = {}
self.readConfiguration(fileName)
def __getitem__(self, item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.config[item]
def getOptions(self,item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.config[item]
def contains(self,key):
return key in self.config
def readConfiguration(self,fileName):
if not os.path.exists(abspath(fileName)):
print('config file is not found!')
raise IOError
with open(fileName) as f:
for ind,line in enumerate(f):
if line.strip()!='':
try:
key,value=line.strip().split('=')
self.config[key]=value
except ValueError:
print('config file is not in the correct format! Error Line:%d'%(ind))
class LineConfig(object):
def __init__(self,content):
self.line = content.strip().split(' ')
self.options = {}
self.mainOption = False
if self.line[0] == 'on':
self.mainOption = True
elif self.line[0] == 'off':
self.mainOption = False
for i,item in enumerate(self.line):
if (item.startswith('-') or item.startswith('--')) and not item[1:].isdigit():
ind = i+1
for j,sub in enumerate(self.line[ind:]):
if (sub.startswith('-') or sub.startswith('--')) and not sub[1:].isdigit():
ind = j
break
if j == len(self.line[ind:])-1:
ind=j+1
break
try:
self.options[item] = ' '.join(self.line[i+1:i+1+ind])
except IndexError:
self.options[item] = 1
def __getitem__(self, item):
if not self.contains(item):
print('parameter '+item+' is invalid!')
exit(-1)
return self.options[item]
def getOption(self,key):
if not self.contains(key):
print('parameter '+key+' is invalid!')
exit(-1)
return self.options[key]
def isMainOn(self):
return self.mainOption
def contains(self,key):
return key in self.options
class FileIO(object):
def __init__(self):
pass
@staticmethod
def writeFile(dir,file,content,op = 'w'):
if not os.path.exists(dir):
os.makedirs(dir)
if type(content)=='str':
with open(dir + file, op) as f:
f.write(content)
else:
with open(dir+file,op) as f:
f.writelines(content)
@staticmethod
def deleteFile(filePath):
if os.path.exists(filePath):
remove(filePath)
@staticmethod
def loadDataSet(conf, file, bTest=False):
trainingData = defaultdict(dict)
testData = defaultdict(dict)
ratingConfig = LineConfig(conf['ratings.setup'])
if not bTest:
print('loading training data...')
else:
print('loading test data...')
with open(file) as f:
ratings = f.readlines()
# ignore the headline
if ratingConfig.contains('-header'):
ratings = ratings[1:]
# order of the columns
order = ratingConfig['-columns'].strip().split()
for lineNo, line in enumerate(ratings):
items = split(' |,|\t', line.strip())
if not bTest and len(order) < 3:
print('The rating file is not in a correct format. Error: Line num %d' % lineNo)
exit(-1)
try:
userId = items[int(order[0])]
itemId = items[int(order[1])]
if bTest and len(order)<3:
rating = 1 #default value
else:
rating = items[int(order[2])]
except ValueError:
print('Error! Have you added the option -header to the rating.setup?')
exit(-1)
if not bTest:
trainingData[userId][itemId]=float(rating)
else:
testData[userId][itemId] = float(rating)
if not bTest:
return trainingData
else:
return testData
@staticmethod
def loadRelationship(conf, filePath):
socialConfig = LineConfig(conf['social.setup'])
relation = []
print('loading social data...')
with open(filePath) as f:
relations = f.readlines()
# ignore the headline
if socialConfig.contains('-header'):
relations = relations[1:]
# order of the columns
order = socialConfig['-columns'].strip().split()
if len(order) <= 2:
print('The social file is not in a correct format.')
for lineNo, line in enumerate(relations):
items = split(' |,|\t', line.strip())
if len(order) < 2:
print('The social file is not in a correct format. Error: Line num %d' % lineNo)
exit(-1)
userId1 = items[int(order[0])]
userId2 = items[int(order[1])]
if len(order) < 3:
weight = 1
else:
weight = float(items[int(order[2])])
relation.append([userId1, userId2, weight])
return relation
@staticmethod
def loadLabels(filePath):
labels = {}
with open(filePath) as f:
for line in f:
items = split(' |,|\t', line.strip())
labels[items[0]] = items[1]
return labels
class DataSplit(object):
def __init__(self):
pass
@staticmethod
def dataSplit(data,test_ratio = 0.3,output=False,path='./',order=1):
if test_ratio>=1 or test_ratio <=0:
test_ratio = 0.3
testSet = {}
trainingSet = {}
for user in data:
if random.random() < test_ratio:
testSet[user] = data[user].copy()
else:
trainingSet[user] = data[user].copy()
if output:
FileIO.writeFile(path,'testSet['+str(order)+']',testSet)
FileIO.writeFile(path, 'trainingSet[' + str(order) + ']', trainingSet)
return trainingSet,testSet
@staticmethod
def crossValidation(data,k,output=False,path='./',order=1):
if k<=1 or k>10:
k=3
for i in range(k):
trainingSet = {}
testSet = {}
for ind,user in enumerate(data):
if ind%k == i:
testSet[user] = data[user].copy()
else:
trainingSet[user] = data[user].copy()
yield trainingSet,testSet
def drawLine(x,y,labels,xLabel,yLabel,title):
f, ax = plt.subplots(1, 1, figsize=(10, 6), sharex=True)
#f.tight_layout()
#sns.set(style="darkgrid")
palette = ['blue','orange','red','green','purple','pink']
# for i in range(len(ax)):
# x1 = range(0, len(x))
#ax.set_xlim(min(x1)-0.2,max(x1)+0.2)
# mini = 10000;max = -10000
# for label in labels:
# if mini>min(y[i][label]):
# mini = min(y[i][label])
# if max<max(y[i][label]):
# max = max(y[i][label])
# ax[i].set_ylim(mini-0.25*(max-mini),max+0.25*(max-mini))
# for j,label in enumerate(labels):
# if j%2==1:
# ax[i].plot(x1, y[i][label], color=palette[j/2], marker='.', label=label, markersize=12)
# else:
# ax[i].plot(x1, y[i][label], color=palette[j/2], marker='.', label=label,markersize=12,linestyle='--')
# ax[0].set_ylabel(yLabel,fontsize=20)
for xdata,ydata,lab,c in zip(x,y,labels,palette):
ax.plot(xdata,ydata,color = c,label=lab)
ind = np.arange(0,60,10)
ax.set_xticks(ind)
#ax.set_xticklabels(x)
ax.set_xlabel(xLabel, fontsize=20)
ax.set_ylabel(yLabel, fontsize=20)
ax.tick_params(labelsize=16)
#ax.tick_params(axs='y', labelsize=20)
ax.set_title(title,fontsize=24)
plt.grid(True)
handles, labels1 = ax.get_legend_handles_labels()
#ax[i].legend(handles, labels1, loc=2, fontsize=20)
# ax.legend(loc=2,
# ncol=6, borderaxespad=0.,fontsize=20)
#ax[2].legend(bbox_to_anchor=(1.05, 1), loc=2, borderaxespad=0.,fontsize=20)
ax.legend(loc='upper right',fontsize=20,shadow=True)
plt.show()
plt.close()
paths = ['SVD.txt','PMF.txt','EE.txt','RDML.txt']
files = ['EE['+str(i)+'] iteration.txt' for i in range(2,9)]
x = []
y = []
data = []
def normalize():
for file in files:
xdata = []
with open(file) as f:
for line in f:
items = line.strip().split()
rmse = items[2].split(':')[1]
xdata.append(float(rmse))
data.append(xdata)
average = []
for i in range(len(data[0])):
total = 0
for k in range(len(data)):
total += data[k][i]
average.append(str(i+1)+':'+str(float(total)/len(data))+'\n')
with open('EE.txt','w') as f:
f.writelines(average)
def readData():
for file in paths:
xdata = []
ydata = []
with open(file) as f:
for line in f:
items = line.strip().split(':')
xdata.append(int(items[0]))
rmse = float(items[1])
ydata.append(float(rmse))
x.append(xdata)
y.append(ydata)
# x = [[1,2,3],[1,2,3]]
# y = [[1,2,3],[4,5,6]]
#normalize()
readData()
labels = ['SVD','PMF','EE','RDML',]
xlabel = 'Iteration'
ylabel = 'RMSE'
drawLine(x,y,labels,xlabel,ylabel,'')
def l1(x):
return norm(x,ord=1)
def l2(x):
return norm(x)
def common(x1,x2):
# find common ratings
common = (x1!=0)&(x2!=0)
new_x1 = x1[common]
new_x2 = x2[common]
return new_x1,new_x2
def cosine_sp(x1,x2):
'x1 and x2 are dicts; this version works on the sparse representation'
total = 0
denom1 = 0
denom2 =0
for k in x1:
if k in x2:
total+=x1[k]*x2[k]
denom1+=x1[k]**2
denom2+=x2[k]**2
try:
return (total + 0.0) / (sqrt(denom1) * sqrt(denom2))
except ZeroDivisionError:
return 0
def cosine(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1,x2)
#compute the cosine similarity between two vectors
sum = new_x1.dot(new_x2)
denom = sqrt(new_x1.dot(new_x1)*new_x2.dot(new_x2))
try:
return float(sum)/denom
except ZeroDivisionError:
return 0
#return cosine_similarity(x1,x2)[0][0]
def pearson_sp(x1,x2):
total = 0
denom1 = 0
denom2 = 0
overlapped=False
try:
mean1 = sum(x1.values())/(len(x1)+0.0)
mean2 = sum(x2.values()) / (len(x2) + 0.0)
for k in x1:
if k in x2:
total += (x1[k]-mean1) * (x2[k]-mean2)
denom1 += (x1[k]-mean1) ** 2
denom2 += (x2[k]-mean2) ** 2
overlapped=True
return (total + 0.0) / (sqrt(denom1) * sqrt(denom2))
except ZeroDivisionError:
if overlapped:
return 1
else:
return 0
def euclidean(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1, x2)
#compute the euclidean between two vectors
diff = new_x1-new_x2
denom = sqrt((diff.dot(diff)))
try:
return 1/denom
except ZeroDivisionError:
return 0
def pearson(x1,x2):
#find common ratings
new_x1, new_x2 = common(x1, x2)
#compute the pearson similarity between two vectors
ind1 = new_x1 > 0
ind2 = new_x2 > 0
try:
mean_x1 = float(new_x1.sum())/ind1.sum()
mean_x2 = float(new_x2.sum())/ind2.sum()
new_x1 = new_x1 - mean_x1
new_x2 = new_x2 - mean_x2
sum = new_x1.dot(new_x2)
denom = sqrt((new_x1.dot(new_x1))*(new_x2.dot(new_x2)))
return float(sum) / denom
except ZeroDivisionError:
return 0
def similarity(x1,x2,sim):
if sim == 'pcc':
return pearson_sp(x1,x2)
if sim == 'euclidean':
return euclidean(x1,x2)
else:
return cosine_sp(x1, x2)
def normalize(vec,maxVal,minVal):
'get the normalized value using min-max normalization'
if maxVal > minVal:
return float(vec-minVal)/(maxVal-minVal)+0.01
elif maxVal==minVal:
return vec/maxVal
else:
print('error... maximum value is less than minimum value.')
raise ArithmeticError
def sigmoid(val):
return 1/(1+exp(-val))
def denormalize(vec,maxVal,minVal):
return minVal+(vec-0.01)*(maxVal-minVal)
```
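The similarity helpers above all work on the sparse `{item: rating}` dict representation used throughout the library. A minimal usage sketch (it assumes the cells above have been run; the toy ratings are made up for illustration):

```
# Usage sketch for the helpers above (assumes the cells above have been run).
# The toy ratings are made up for illustration.
u1 = {'i1': 4.0, 'i2': 3.5, 'i5': 1.0}   # sparse representation: {item: rating}
u2 = {'i1': 4.0, 'i5': 1.5, 'i9': 2.0}

print(cosine_sp(u1, u2))          # cosine over the co-rated items only
print(pearson_sp(u1, u2))         # mean-centred variant
print(similarity(u1, u2, 'pcc'))  # dispatch by name: 'pcc' or the cosine default

# min-max normalization and its inverse
v = normalize(3.0, 5.0, 1.0)
print(v, denormalize(v, 5.0, 1.0))
```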
## Shilling models
### Attack base class
```
class Attack(object):
def __init__(self,conf):
self.config = Config(conf)
self.userProfile = FileIO.loadDataSet(self.config,self.config['ratings'])
self.itemProfile = defaultdict(dict)
self.attackSize = float(self.config['attackSize'])
self.fillerSize = float(self.config['fillerSize'])
self.selectedSize = float(self.config['selectedSize'])
self.targetCount = int(self.config['targetCount'])
self.targetScore = float(self.config['targetScore'])
self.threshold = float(self.config['threshold'])
self.minCount = int(self.config['minCount'])
self.maxCount = int(self.config['maxCount'])
self.minScore = float(self.config['minScore'])
self.maxScore = float(self.config['maxScore'])
self.outputDir = self.config['outputDir']
if not os.path.exists(self.outputDir):
os.makedirs(self.outputDir)
for user in self.userProfile:
for item in self.userProfile[user]:
self.itemProfile[item][user] = self.userProfile[user][item]
self.spamProfile = defaultdict(dict)
self.spamItem = defaultdict(list) #items rated by spammers
self.targetItems = []
self.itemAverage = {}
self.getAverageRating()
self.selectTarget()
self.startUserID = 0
def getAverageRating(self):
for itemID in self.itemProfile:
li = list(self.itemProfile[itemID].values())
self.itemAverage[itemID] = float(sum(li)) / len(li)
def selectTarget(self,):
print('Selecting target items...')
print('-'*80)
print('Target item Average rating of the item')
itemList = list(self.itemProfile.keys())
itemList.sort()
while len(self.targetItems) < self.targetCount:
target = np.random.randint(len(itemList)) #generate a target order at random
if len(self.itemProfile[str(itemList[target])]) < self.maxCount and len(self.itemProfile[str(itemList[target])]) > self.minCount \
and str(itemList[target]) not in self.targetItems \
and self.itemAverage[str(itemList[target])] <= self.threshold:
self.targetItems.append(str(itemList[target]))
print(str(itemList[target]),' ',self.itemAverage[str(itemList[target])])
def getFillerItems(self):
mu = int(self.fillerSize*len(self.itemProfile))
sigma = int(0.1*mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedItems = np.random.randint(len(self.itemProfile), size=markedItemsCount)
return markedItems.tolist()
def insertSpam(self,startID=0):
pass
def loadTarget(self,filename):
with open(filename) as f:
for line in f:
self.targetItems.append(line.strip())
def generateLabels(self,filename):
labels = []
path = self.outputDir + filename
with open(path,'w') as f:
for user in self.spamProfile:
labels.append(user+' 1\n')
for user in self.userProfile:
labels.append(user+' 0\n')
f.writelines(labels)
print('User labels have been output to '+abspath(self.config['outputDir'])+'.')
def generateProfiles(self,filename):
ratings = []
path = self.outputDir+filename
with open(path, 'w') as f:
for user in self.userProfile:
for item in self.userProfile[user]:
ratings.append(user+' '+item+' '+str(self.userProfile[user][item])+'\n')
for user in self.spamProfile:
for item in self.spamProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.spamProfile[user][item])+'\n')
f.writelines(ratings)
print('User profiles have been output to '+abspath(self.config['outputDir'])+'.')
```
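Note how `getFillerItems` draws the number of filler ratings for each fake profile from a Gaussian centred on `fillerSize * |items|`. A standalone sketch of that sampling step (the item count and `fillerSize` below are illustrative values, not read from any config):

```
# Standalone sketch of the filler-count sampling used by Attack.getFillerItems.
# itemCount and fillerSize are illustrative values, not read from a config file.
import random
import numpy as np

itemCount = 2000    # assumed size of the item catalogue
fillerSize = 0.05   # assumed fraction of items each fake profile rates

mu = int(fillerSize * itemCount)     # expected number of filler items
sigma = int(0.1 * mu)                # 10% spread around the mean
count = abs(int(round(random.gauss(mu, sigma))))
fillerIndexes = np.random.randint(itemCount, size=count)
print(count, fillerIndexes[:10])
```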
### Relation attack
```
class RelationAttack(Attack):
def __init__(self,conf):
super(RelationAttack, self).__init__(conf)
self.spamLink = defaultdict(list)
self.relation = FileIO.loadRelationship(self.config,self.config['social'])
self.trustLink = defaultdict(list)
self.trusteeLink = defaultdict(list)
for u1,u2,t in self.relation:
self.trustLink[u1].append(u2)
self.trusteeLink[u2].append(u1)
self.activeUser = {} # normal users who follow the spammers back
self.linkedUser = {} # normal users to whom the spammers have planted links
# def reload(self):
# super(RelationAttack, self).reload()
# self.spamLink = defaultdict(list)
# self.trustLink, self.trusteeLink = loadTrusts(self.config['social'])
# self.activeUser = {} # normal users who follow the spammers back
# self.linkedUser = {} # normal users to whom the spammers have planted links
def farmLink(self):
pass
def getReciprocal(self,target):
# probability that the target user follows the spammer back,
# estimated from the overlap of the target's followers and followees
reciprocal = float(2 * len(set(self.trustLink[target]).intersection(self.trusteeLink[target])) + 0.1) \
/ (len(set(self.trustLink[target]).union(self.trusteeLink[target])) + 1)
reciprocal += (len(self.trustLink[target]) + 0.1) / (len(self.trustLink[target]) + len(self.trusteeLink[target]) + 1)
reciprocal /= 2
return reciprocal
def generateSocialConnections(self,filename):
relations = []
path = self.outputDir + filename
with open(path, 'w') as f:
for u1 in self.trustLink:
for u2 in self.trustLink[u1]:
relations.append(u1 + ' ' + u2 + ' 1\n')
for u1 in self.spamLink:
for u2 in self.spamLink[u1]:
relations.append(u1 + ' ' + u2 + ' 1\n')
f.writelines(relations)
print('Social relations have been output to ' + abspath(self.config['outputDir']) + '.')
```
### Random relation attack
```
class RandomRelationAttack(RelationAttack):
def __init__(self,conf):
super(RandomRelationAttack, self).__init__(conf)
self.scale = float(self.config['linkSize'])
def farmLink(self): # randomly inject fake social relations
for spam in self.spamProfile:
# plant links to the users who rated the target items
for item in self.spamItem[spam]:
if random.random() < 0.01:
for target in self.itemProfile[item]:
self.spamLink[spam].append(target)
response = np.random.random()
reciprocal = self.getReciprocal(target)
if response <= reciprocal:
self.trustLink[target].append(spam)
self.activeUser[target] = 1
else:
self.linkedUser[target] = 1
# plant links to the remaining users with probability scale
for user in self.userProfile:
if random.random() < self.scale:
self.spamLink[spam].append(user)
response = np.random.random()
reciprocal = self.getReciprocal(user)
if response < reciprocal:
self.trustLink[user].append(spam)
self.activeUser[user] = 1
else:
self.linkedUser[user] = 1
```
### Random attack
```
class RandomAttack(Attack):
def __init__(self,conf):
super(RandomAttack, self).__init__(conf)
def insertSpam(self,startID=0):
print('Modeling random attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
# filler items
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = random.randint(self.minScore,self.maxScore)
# target items
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
class RR_Attack(RandomRelationAttack,RandomAttack):
def __init__(self,conf):
super(RR_Attack, self).__init__(conf)
```
### Average attack
```
class AverageAttack(Attack):
def __init__(self,conf):
super(AverageAttack, self).__init__(conf)
def insertSpam(self,startID=0):
print('Modeling average attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
#fill
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = round(self.itemAverage[str(itemList[item])])
#target
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
```
### Random average relation
```
class RA_Attack(RandomRelationAttack,AverageAttack):
def __init__(self,conf):
super(RA_Attack, self).__init__(conf)
```
### Bandwagon attack
```
class BandWagonAttack(Attack):
def __init__(self,conf):
super(BandWagonAttack, self).__init__(conf)
self.hotItems = sorted(iter(self.itemProfile.items()), key=lambda d: len(d[1]), reverse=True)[
:int(self.selectedSize * len(self.itemProfile))]
def insertSpam(self,startID=0):
print('Modeling bandwagon attack...')
itemList = list(self.itemProfile.keys())
if startID == 0:
self.startUserID = len(self.userProfile)
else:
self.startUserID = startID
for i in range(int(len(self.userProfile)*self.attackSize)):
# filler items
fillerItems = self.getFillerItems()
for item in fillerItems:
self.spamProfile[str(self.startUserID)][str(itemList[item])] = random.randint(self.minScore,self.maxScore)
# selected items
selectedItems = self.getSelectedItems()
for item in selectedItems:
self.spamProfile[str(self.startUserID)][item] = self.targetScore
# target items
for j in range(self.targetCount):
target = np.random.randint(len(self.targetItems))
self.spamProfile[str(self.startUserID)][self.targetItems[target]] = self.targetScore
self.spamItem[str(self.startUserID)].append(self.targetItems[target])
self.startUserID += 1
def getFillerItems(self):
mu = int(self.fillerSize*len(self.itemProfile))
sigma = int(0.1*mu)
markedItemsCount = int(round(random.gauss(mu, sigma)))
if markedItemsCount < 0:
markedItemsCount = 0
markedItems = np.random.randint(len(self.itemProfile), size=markedItemsCount)
return markedItems
def getSelectedItems(self):
mu = int(self.selectedSize * len(self.itemProfile))
sigma = int(0.1 * mu)
markedItemsCount = abs(int(round(random.gauss(mu, sigma))))
markedIndexes = np.random.randint(len(self.hotItems), size=markedItemsCount)
markedItems = [self.hotItems[index][0] for index in markedIndexes]
return markedItems
```
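The bandwagon model differs from the random and average models only in the extra set of "selected" items: the most-rated (hot) items, which are given the target score so the fake profiles look mainstream. A toy sketch of the hot-item selection (illustrative data):

```
# Toy sketch of the hot-item selection used by BandWagonAttack (illustrative data).
itemProfile = {                     # item -> {user: rating}
    'i1': {'u1': 4, 'u2': 3, 'u3': 5},
    'i2': {'u1': 2},
    'i3': {'u2': 4, 'u3': 4},
}
selectedSize = 0.34                 # assumed fraction of the catalogue treated as "hot"

hotItems = sorted(itemProfile.items(), key=lambda d: len(d[1]), reverse=True)
hotItems = hotItems[:int(selectedSize * len(itemProfile))]
print([item for item, raters in hotItems])   # -> ['i1'] here
```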
### Random bandwagon relation
```
class RB_Attack(RandomRelationAttack,BandWagonAttack):
def __init__(self,conf):
super(RB_Attack, self).__init__(conf)
```
### Hybrid attack
```
class HybridAttack(Attack):
def __init__(self,conf):
super(HybridAttack, self).__init__(conf)
self.aveAttack = AverageAttack(conf)
self.bandAttack = BandWagonAttack(conf)
self.randAttack = RandomAttack(conf)
def insertSpam(self,startID=0):
self.aveAttack.insertSpam()
self.bandAttack.insertSpam(self.aveAttack.startUserID+1)
self.randAttack.insertSpam(self.bandAttack.startUserID+1)
self.spamProfile = {}
self.spamProfile.update(self.aveAttack.spamProfile)
self.spamProfile.update(self.bandAttack.spamProfile)
self.spamProfile.update(self.randAttack.spamProfile)
def generateProfiles(self,filename):
ratings = []
path = self.outputDir + filename
with open(path, 'w') as f:
for user in self.userProfile:
for item in self.userProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.userProfile[user][item]) + '\n')
for user in self.spamProfile:
for item in self.spamProfile[user]:
ratings.append(user + ' ' + item + ' ' + str(self.spamProfile[user][item]) + '\n')
f.writelines(ratings)
print('User profiles have been output to ' + abspath(self.config['outputDir']) + '.')
def generateLabels(self,filename):
labels = []
path = self.outputDir + filename
with open(path,'w') as f:
for user in self.spamProfile:
labels.append(user+' 1\n')
for user in self.userProfile:
labels.append(user+' 0\n')
f.writelines(labels)
print('User labels have been output to '+abspath(self.config['outputDir'])+'.')
```
### Generate data
```
%%writefile config.conf
ratings=dataset/filmtrust/ratings.txt
ratings.setup=-columns 0 1 2
social=dataset/filmtrust/trust.txt
social.setup=-columns 0 1 2
attackSize=0.1
fillerSize=0.05
selectedSize=0.005
targetCount=20
targetScore=4.0
threshold=3.0
maxScore=4.0
minScore=1.0
minCount=5
maxCount=50
linkSize=0.001
outputDir=output/
attack = RR_Attack('config.conf')
attack.insertSpam()
attack.farmLink()
attack.generateLabels('labels.txt')
attack.generateProfiles('profiles.txt')
attack.generateSocialConnections('relations.txt')
```
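The cell above writes three plain-text files into `output/`: `profiles.txt` (`user item rating` per line), `labels.txt` (`user label`, where 1 marks an injected spammer), and `relations.txt` (`follower followee 1`). A quick sanity check of the output, assuming the cell ran successfully:

```
# Quick sanity check of the generated files (assumes the attack cell above ran successfully).
with open('output/labels.txt') as f:
    labels = [line.split() for line in f if line.strip()]
spammers = [u for u, lab in labels if lab == '1']
print('total users:', len(labels), ' injected spammers:', len(spammers))

with open('output/profiles.txt') as f:
    for _ in range(3):   # peek at the first few ratings
        print(f.readline().strip())
```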
## Data access objects
```
class RatingDAO(object):
'data access control'
def __init__(self,config, trainingData, testData):
self.config = config
self.ratingConfig = LineConfig(config['ratings.setup'])
self.user = {} #used to store the order of users in the training set
self.item = {} #used to store the order of items in the training set
self.id2user = {}
self.id2item = {}
self.all_Item = {}
self.all_User = {}
self.userMeans = {} #used to store the mean values of users's ratings
self.itemMeans = {} #used to store the mean values of items's ratings
self.globalMean = 0
self.timestamp = {}
# self.trainingMatrix = None
# self.validationMatrix = None
self.testSet_u = testData.copy() # used to store the test set by hierarchy user:[item,rating]
self.testSet_i = defaultdict(dict) # used to store the test set by hierarchy item:[user,rating]
self.trainingSet_u = trainingData.copy()
self.trainingSet_i = defaultdict(dict)
#self.rScale = []
self.trainingData = trainingData
self.testData = testData
self.__generateSet()
self.__computeItemMean()
self.__computeUserMean()
self.__globalAverage()
def __generateSet(self):
scale = set()
# find the maximum rating and minimum value
# for i, entry in enumerate(self.trainingData):
# userName, itemName, rating = entry
# scale.add(float(rating))
# self.rScale = list(scale)
# self.rScale.sort()
for i,user in enumerate(self.trainingData):
for item in self.trainingData[user]:
# makes the rating within the range [0, 1].
#rating = normalize(float(rating), self.rScale[-1], self.rScale[0])
#self.trainingSet_u[userName][itemName] = float(rating)
self.trainingSet_i[item][user] = self.trainingData[user][item]
# order the user
if user not in self.user:
self.user[user] = len(self.user)
self.id2user[self.user[user]] = user
# order the item
if item not in self.item:
self.item[item] = len(self.item)
self.id2item[self.item[item]] = item
self.trainingSet_i[item][user] = self.trainingData[user][item]
# userList.append
# triple.append([self.user[userName], self.item[itemName], rating])
# self.trainingMatrix = new_sparseMatrix.SparseMatrix(triple)
self.all_User.update(self.user)
self.all_Item.update(self.item)
for i, user in enumerate(self.testData):
# order the user
if user not in self.user:
self.all_User[user] = len(self.all_User)
for item in self.testData[user]:
# order the item
if item not in self.item:
self.all_Item[item] = len(self.all_Item)
#self.testSet_u[userName][itemName] = float(rating)
self.testSet_i[item][user] = self.testData[user][item]
def __globalAverage(self):
total = sum(self.userMeans.values())
if total==0:
self.globalMean = 0
else:
self.globalMean = total/len(self.userMeans)
def __computeUserMean(self):
# for u in self.user:
# n = self.row(u) > 0
# mean = 0
#
# if not self.containsUser(u): # no data about current user in training set
# pass
# else:
# sum = float(self.row(u)[0].sum())
# try:
# mean = sum/ n[0].sum()
# except ZeroDivisionError:
# mean = 0
# self.userMeans[u] = mean
for u in self.trainingSet_u:
self.userMeans[u] = sum(self.trainingSet_u[u].values())/(len(list(self.trainingSet_u[u].values()))+0.0)
for u in self.testSet_u:
self.userMeans[u] = sum(self.testSet_u[u].values())/(len(list(self.testSet_u[u].values()))+0.0)
def __computeItemMean(self):
# for c in self.item:
# n = self.col(c) > 0
# mean = 0
# if not self.containsItem(c): # no data about current user in training set
# pass
# else:
# sum = float(self.col(c)[0].sum())
# try:
# mean = sum / n[0].sum()
# except ZeroDivisionError:
# mean = 0
# self.itemMeans[c] = mean
for item in self.trainingSet_i:
self.itemMeans[item] = sum(self.trainingSet_i[item].values())/(len(list(self.trainingSet_i[item].values())) + 0.0)
for item in self.testSet_i:
self.itemMeans[item] = sum(self.testSet_i[item].values())/(len(list(self.testSet_i[item].values())) + 0.0)
def getUserId(self,u):
if u in self.user:
return self.user[u]
else:
return -1
def getItemId(self,i):
if i in self.item:
return self.item[i]
else:
return -1
def trainingSize(self):
recordCount = 0
for user in self.trainingData:
recordCount+=len(self.trainingData[user])
return (len(self.trainingSet_u),len(self.trainingSet_i),recordCount)
def testSize(self):
recordCount = 0
for user in self.testData:
recordCount += len(self.testData[user])
return (len(self.testSet_u),len(self.testSet_i),recordCount)
def contains(self,u,i):
'whether user u rated item i'
if u in self.trainingSet_u and i in self.trainingSet_u[u]:
return True
return False
def containsUser(self,u):
'whether user is in training set'
return u in self.trainingSet_u
def containsItem(self,i):
'whether item is in training set'
return i in self.trainingSet_i
def allUserRated(self, u):
if u in self.user:
return list(self.trainingSet_u[u].keys()), list(self.trainingSet_u[u].values())
else:
return list(self.testSet_u[u].keys()), list(self.testSet_u[u].values())
# def userRated(self,u):
# if self.trainingMatrix.matrix_User.has_key(self.getUserId(u)):
# itemIndex = self.trainingMatrix.matrix_User[self.user[u]].keys()
# rating = self.trainingMatrix.matrix_User[self.user[u]].values()
# return (itemIndex,rating)
# return ([],[])
#
# def itemRated(self,i):
# if self.trainingMatrix.matrix_Item.has_key(self.getItemId(i)):
# userIndex = self.trainingMatrix.matrix_Item[self.item[i]].keys()
# rating = self.trainingMatrix.matrix_Item[self.item[i]].values()
# return (userIndex,rating)
# return ([],[])
# def row(self,u):
# return self.trainingMatrix.row(self.getUserId(u))
#
# def col(self,c):
# return self.trainingMatrix.col(self.getItemId(c))
#
# def sRow(self,u):
# return self.trainingMatrix.sRow(self.getUserId(u))
#
# def sCol(self,c):
# return self.trainingMatrix.sCol(self.getItemId(c))
#
# def rating(self,u,c):
# return self.trainingMatrix.elem(self.getUserId(u),self.getItemId(c))
#
# def ratingScale(self):
# return (self.rScale[0],self.rScale[1])
# def elemCount(self):
# return self.trainingMatrix.elemCount()
class SocialDAO(object):
def __init__(self,conf,relation=list()):
self.config = conf
self.user = {} #used to store the order of users
self.relation = relation
self.followees = {}
self.followers = {}
self.trustMatrix = self.__generateSet()
def __generateSet(self):
#triple = []
for line in self.relation:
userId1,userId2,weight = line
#add relations to dict
if userId1 not in self.followees:
self.followees[userId1] = {}
self.followees[userId1][userId2] = weight
if userId2 not in self.followers:
self.followers[userId2] = {}
self.followers[userId2][userId1] = weight
# order the user
if userId1 not in self.user:
self.user[userId1] = len(self.user)
if userId2 not in self.user:
self.user[userId2] = len(self.user)
#triple.append([self.user[userId1], self.user[userId2], weight])
#return new_sparseMatrix.SparseMatrix(triple)
# def row(self,u):
# #return user u's followees
# return self.trustMatrix.row(self.user[u])
#
# def col(self,u):
# #return user u's followers
# return self.trustMatrix.col(self.user[u])
#
# def elem(self,u1,u2):
# return self.trustMatrix.elem(u1,u2)
def weight(self,u1,u2):
if u1 in self.followees and u2 in self.followees[u1]:
return self.followees[u1][u2]
else:
return 0
# def trustSize(self):
# return self.trustMatrix.size
def getFollowers(self,u):
if u in self.followers:
return self.followers[u]
else:
return {}
def getFollowees(self,u):
if u in self.followees:
return self.followees[u]
else:
return {}
def hasFollowee(self,u1,u2):
if u1 in self.followees:
if u2 in self.followees[u1]:
return True
else:
return False
return False
def hasFollower(self,u1,u2):
if u1 in self.followers:
if u2 in self.followers[u1]:
return True
else:
return False
return False
```
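`RatingDAO` only needs a mapping that contains a `ratings.setup` entry plus nested `{user: {item: rating}}` dicts, so it can be exercised on a toy split without touching any files. A minimal sketch (it assumes the classes above are defined; the toy data and the plain-dict config are illustrative):

```
# Minimal sketch exercising RatingDAO on toy data (assumes the classes above are defined).
toyTrain = {'u1': {'i1': 4.0, 'i2': 3.0}, 'u2': {'i1': 5.0}}
toyTest = {'u3': {'i2': 2.0}}
conf = {'ratings.setup': '-columns 0 1 2'}   # the only key RatingDAO reads here

dao = RatingDAO(conf, toyTrain, toyTest)
print(dao.trainingSize())    # (users, items, records) in the training split
print(dao.testSize())
print(dao.userMeans['u1'], dao.itemMeans['i1'], dao.globalMean)
print(dao.contains('u1', 'i1'), dao.containsUser('u3'))
```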
## Methods
### BayesDetector
```
#BayesDetector: Collaborative Shilling Detection Bridging Factorization and User Embedding
class BayesDetector(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(BayesDetector, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(BayesDetector, self).readConfiguration()
extraSettings = LineConfig(self.config['BayesDetector'])
self.k = int(extraSettings['-k'])
self.negCount = int(extraSettings['-negCount']) # the number of negative samples
if self.negCount < 1:
self.negCount = 1
self.regR = float(extraSettings['-gamma'])
self.filter = int(extraSettings['-filter'])
self.delta = float(extraSettings['-delta'])
learningRate = LineConfig(self.config['learnRate'])
self.lRate = float(learningRate['-init'])
self.maxLRate = float(learningRate['-max'])
self.maxIter = int(self.config['num.max.iter'])
regular = LineConfig(self.config['reg.lambda'])
self.regU, self.regI = float(regular['-u']), float(regular['-i'])
# self.delta = float(self.config['delta'])
def printAlgorConfig(self):
super(BayesDetector, self).printAlgorConfig()
print('k: %d' % self.negCount)
print('regR: %.5f' % self.regR)
print('filter: %d' % self.filter)
print('=' * 80)
def initModel(self):
super(BayesDetector, self).initModel()
# self.c = np.random.rand(len(self.dao.all_User) + 1) / 20 # bias value of context
self.G = np.random.rand(len(self.dao.all_User)+1, self.k) / 100 # context embedding
self.P = np.random.rand(len(self.dao.all_User)+1, self.k) / 100 # latent user matrix
self.Q = np.random.rand(len(self.dao.all_Item)+1, self.k) / 100 # latent item matrix
# constructing SPPMI matrix
self.SPPMI = defaultdict(dict)
D = len(self.dao.user)
print('Constructing SPPMI matrix...')
# for larger data sets with many items, this process will be time-consuming
occurrence = defaultdict(dict)
for user1 in self.dao.all_User:
iList1, rList1 = self.dao.allUserRated(user1)
if len(iList1) < self.filter:
continue
for user2 in self.dao.all_User:
if user1 == user2:
continue
if user2 not in occurrence[user1]:
iList2, rList2 = self.dao.allUserRated(user2)
if len(iList2) < self.filter:
continue
count = len(set(iList1).intersection(set(iList2)))
if count > self.filter:
occurrence[user1][user2] = count
occurrence[user2][user1] = count
maxVal = 0
frequency = {}
for user1 in occurrence:
frequency[user1] = sum(occurrence[user1].values()) * 1.0
D = sum(frequency.values()) * 1.0
# maxx = -1
for user1 in occurrence:
for user2 in occurrence[user1]:
try:
val = max([log(occurrence[user1][user2] * D / (frequency[user1] * frequency[user2]), 2) - log(
self.negCount, 2), 0])
except ValueError:
print(self.SPPMI[user1][user2])
print(self.SPPMI[user1][user2] * D / (frequency[user1] * frequency[user2]))
if val > 0:
if maxVal < val:
maxVal = val
self.SPPMI[user1][user2] = val
self.SPPMI[user2][user1] = self.SPPMI[user1][user2]
# normalize
for user1 in self.SPPMI:
for user2 in self.SPPMI[user1]:
self.SPPMI[user1][user2] = self.SPPMI[user1][user2] / maxVal
def buildModel(self):
self.dao.ratings = dict(self.dao.trainingSet_u, **self.dao.testSet_u)
#suspicous set
print('Preparing sets...')
self.sSet = defaultdict(dict)
#normal set
self.nSet = defaultdict(dict)
# self.NegativeSet = defaultdict(list)
for user in self.dao.user:
for item in self.dao.ratings[user]:
# if self.dao.ratings[user][item] >= 5 and self.labels[user]=='1':
if self.labels[user] =='1':
self.sSet[item][user] = 1
# if self.dao.ratings[user][item] >= 5 and self.labels[user] == '0':
if self.labels[user] == '0':
self.nSet[item][user] = 1
# Jointly decompose R(ratings) and SPPMI with shared user latent factors P
iteration = 0
while iteration < self.maxIter:
self.loss = 0
for item in self.sSet:
i = self.dao.all_Item[item]
if item not in self.nSet:
continue
normalUserList = list(self.nSet[item].keys())
for user in self.sSet[item]:
su = self.dao.all_User[user]
# if len(self.NegativeSet[user]) > 0:
# item_j = choice(self.NegativeSet[user])
# else:
normalUser = choice(normalUserList)
nu = self.dao.all_User[normalUser]
s = sigmoid(self.P[su].dot(self.Q[i]) - self.P[nu].dot(self.Q[i]))
self.Q[i] += (self.lRate * (1 - s) * (self.P[su] - self.P[nu]))
self.P[su] += (self.lRate * (1 - s) * self.Q[i])
self.P[nu] -= (self.lRate * (1 - s) * self.Q[i])
self.Q[i] -= self.lRate * self.regI * self.Q[i]
self.P[su] -= self.lRate * self.regU * self.P[su]
self.P[nu] -= self.lRate * self.regU * self.P[nu]
self.loss += (-log(s))
#
# for item in self.sSet:
# if not self.nSet.has_key(item):
# continue
# for user1 in self.sSet[item]:
# for user2 in self.sSet[item]:
# su1 = self.dao.all_User[user1]
# su2 = self.dao.all_User[user2]
# self.P[su1] += (self.lRate*(self.P[su1]-self.P[su2]))*self.delta
# self.P[su2] -= (self.lRate*(self.P[su1]-self.P[su2]))*self.delta
#
# self.loss += ((self.P[su1]-self.P[su2]).dot(self.P[su1]-self.P[su2]))*self.delta
for user in self.dao.ratings:
for item in self.dao.ratings[user]:
rating = self.dao.ratings[user][item]
if rating < 5:
continue
error = rating - self.predictRating(user,item)
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
p = self.P[u]
q = self.Q[i]
# self.loss += (error ** 2)*self.b
# update latent vectors
self.P[u] += (self.lRate * (error * q - self.regU * p))
self.Q[i] += (self.lRate * (error * p - self.regI * q))
for user in self.SPPMI:
u = self.dao.all_User[user]
p = self.P[u]
for context in self.SPPMI[user]:
v = self.dao.all_User[context]
m = self.SPPMI[user][context]
g = self.G[v]
diff = (m - p.dot(g))
self.loss += (diff ** 2)
# update latent vectors
self.P[u] += (self.lRate * diff * g)
self.G[v] += (self.lRate * diff * p)
self.loss += self.regU * (self.P * self.P).sum() + self.regI * (self.Q * self.Q).sum() + self.regR * (self.G * self.G).sum()
iteration += 1
print('iteration:',iteration)
# preparing examples
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
for user in self.dao.trainingSet_u:
self.training.append(self.P[self.dao.all_User[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append(self.P[self.dao.all_User[user]])
self.testLabels.append(self.labels[user])
#
# tsne = TSNE(n_components=2)
# self.Y = tsne.fit_transform(self.P)
#
# self.normalUsers = []
# self.spammers = []
# for user in self.labels:
# if self.labels[user] == '0':
# self.normalUsers.append(user)
# else:
# self.spammers.append(user)
#
#
# print len(self.spammers)
# self.normalfeature = np.zeros((len(self.normalUsers), 2))
# self.spamfeature = np.zeros((len(self.spammers), 2))
# normal_index = 0
# for normaluser in self.normalUsers:
# if normaluser in self.dao.all_User:
# self.normalfeature[normal_index] = self.Y[self.dao.all_User[normaluser]]
# normal_index += 1
#
# spam_index = 0
# for spamuser in self.spammers:
# if spamuser in self.dao.all_User:
# self.spamfeature[spam_index] = self.Y[self.dao.all_User[spamuser]]
# spam_index += 1
# self.randomNormal = np.zeros((500,2))
# self.randomSpam = np.zeros((500,2))
# # for i in range(500):
# # self.randomNormal[i] = self.normalfeature[random.randint(0,len(self.normalfeature)-1)]
# # self.randomSpam[i] = self.spamfeature[random.randint(0,len(self.spamfeature)-1)]
# plt.scatter(self.normalfeature[:, 0], self.normalfeature[:, 1], c='red',s=8,marker='o',label='NormalUser')
# plt.scatter(self.spamfeature[:, 0], self.spamfeature[:, 1], c='blue',s=8,marker='o',label='Spammer')
# plt.legend(loc='lower left')
# plt.xticks([])
# plt.yticks([])
# plt.savefig('9.png',dpi=500)
def predictRating(self,user,item):
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
return self.P[u].dot(self.Q[i])
def predict(self):
classifier = RandomForestClassifier(n_estimators=12)
# classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
print('Decision Tree:')
return pred_labels
```
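Both `BayesDetector` and `CoDetector` (next cell) embed users by factorizing a shifted positive PMI (SPPMI) co-occurrence matrix jointly with the rating matrix. For a user pair the raw entry is `max(log2(co_rated * D / (freq_u * freq_v)) - log2(k), 0)`, later normalized by the maximum. A worked toy computation of a single entry (the numbers are illustrative):

```
# Worked toy computation of a single SPPMI entry (illustrative numbers).
from math import log

coRated = 30        # items co-rated by users u and v
freq_u = 120.0      # total co-occurrence mass of u
freq_v = 90.0       # total co-occurrence mass of v
D = 5000.0          # sum of all co-occurrence masses
negCount = 5        # number of negative samples (the shift k)

pmi = log(coRated * D / (freq_u * freq_v), 2)
sppmi = max(pmi - log(negCount, 2), 0)
print(pmi, sppmi)   # only positive shifted values are kept and later normalized
```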
### CoDetector
```
#CoDetector: Collaborative Shilling Detection Bridging Factorization and User Embedding
class CoDetector(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(CoDetector, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(CoDetector, self).readConfiguration()
extraSettings = LineConfig(self.config['CoDetector'])
self.k = int(extraSettings['-k'])
self.negCount = int(extraSettings['-negCount']) # the number of negative samples
if self.negCount < 1:
self.negCount = 1
self.regR = float(extraSettings['-gamma'])
self.filter = int(extraSettings['-filter'])
learningRate = LineConfig(self.config['learnRate'])
self.lRate = float(learningRate['-init'])
self.maxLRate = float(learningRate['-max'])
self.maxIter = int(self.config['num.max.iter'])
regular = LineConfig(self.config['reg.lambda'])
self.regU, self.regI = float(regular['-u']), float(regular['-i'])
def printAlgorConfig(self):
super(CoDetector, self).printAlgorConfig()
print('k: %d' % self.negCount)
print('regR: %.5f' % self.regR)
print('filter: %d' % self.filter)
print('=' * 80)
def initModel(self):
super(CoDetector, self).initModel()
self.w = np.random.rand(len(self.dao.all_User)+1) / 20 # bias value of user
self.c = np.random.rand(len(self.dao.all_User)+1)/ 20 # bias value of context
self.G = np.random.rand(len(self.dao.all_User)+1, self.k) / 20 # context embedding
self.P = np.random.rand(len(self.dao.all_User)+1, self.k) / 20 # latent user matrix
self.Q = np.random.rand(len(self.dao.all_Item)+1, self.k) / 20 # latent item matrix
# constructing SPPMI matrix
self.SPPMI = defaultdict(dict)
D = len(self.dao.user)
print('Constructing SPPMI matrix...')
# for larger data sets with many items, this process will be time-consuming
occurrence = defaultdict(dict)
for user1 in self.dao.all_User:
iList1, rList1 = self.dao.allUserRated(user1)
if len(iList1) < self.filter:
continue
for user2 in self.dao.all_User:
if user1 == user2:
continue
if user2 not in occurrence[user1]:
iList2, rList2 = self.dao.allUserRated(user2)
if len(iList2) < self.filter:
continue
count = len(set(iList1).intersection(set(iList2)))
if count > self.filter:
occurrence[user1][user2] = count
occurrence[user2][user1] = count
maxVal = 0
frequency = {}
for user1 in occurrence:
frequency[user1] = sum(occurrence[user1].values()) * 1.0
D = sum(frequency.values()) * 1.0
# maxx = -1
for user1 in occurrence:
for user2 in occurrence[user1]:
try:
val = max([log(occurrence[user1][user2] * D / (frequency[user1] * frequency[user2]), 2) - log(
self.negCount, 2), 0])
except ValueError:
print(self.SPPMI[user1][user2])
print(self.SPPMI[user1][user2] * D / (frequency[user1] * frequency[user2]))
if val > 0:
if maxVal < val:
maxVal = val
self.SPPMI[user1][user2] = val
self.SPPMI[user2][user1] = self.SPPMI[user1][user2]
# normalize
for user1 in self.SPPMI:
for user2 in self.SPPMI[user1]:
self.SPPMI[user1][user2] = self.SPPMI[user1][user2] / maxVal
def buildModel(self):
# Jointly decompose R(ratings) and SPPMI with shared user latent factors P
iteration = 0
while iteration < self.maxIter:
self.loss = 0
self.dao.ratings = dict(self.dao.trainingSet_u, **self.dao.testSet_u)
for user in self.dao.ratings:
for item in self.dao.ratings[user]:
rating = self.dao.ratings[user][item]
error = rating - self.predictRating(user,item)
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
p = self.P[u]
q = self.Q[i]
self.loss += error ** 2
# update latent vectors
self.P[u] += self.lRate * (error * q - self.regU * p)
self.Q[i] += self.lRate * (error * p - self.regI * q)
for user in self.SPPMI:
u = self.dao.all_User[user]
p = self.P[u]
for context in self.SPPMI[user]:
v = self.dao.all_User[context]
m = self.SPPMI[user][context]
g = self.G[v]
diff = (m - p.dot(g) - self.w[u] - self.c[v])
self.loss += diff ** 2
# update latent vectors
self.P[u] += self.lRate * diff * g
self.G[v] += self.lRate * diff * p
self.w[u] += self.lRate * diff
self.c[v] += self.lRate * diff
self.loss += self.regU * (self.P * self.P).sum() + self.regI * (self.Q * self.Q).sum() + self.regR * (self.G * self.G).sum()
iteration += 1
print('iteration:',iteration)
# preparing examples
self.training = []
self.trainingLabels = []
self.test = []
self.testLabels = []
for user in self.dao.trainingSet_u:
self.training.append(self.P[self.dao.all_User[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append(self.P[self.dao.all_User[user]])
self.testLabels.append(self.labels[user])
def predictRating(self,user,item):
u = self.dao.all_User[user]
i = self.dao.all_Item[item]
return self.P[u].dot(self.Q[i])
def predict(self):
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
print('Decision Tree:')
return pred_labels
```
### DegreeSAD
```
class DegreeSAD(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(DegreeSAD, self).__init__(conf, trainingSet, testSet, labels, fold)
def buildModel(self):
self.MUD = {}
self.RUD = {}
self.QUD = {}
# computing MUD,RUD,QUD for training set
sList = sorted(iter(self.dao.trainingSet_i.items()), key=lambda d: len(d[1]), reverse=True)
maxLength = len(sList[0][1])
for user in self.dao.trainingSet_u:
self.MUD[user] = 0
for item in self.dao.trainingSet_u[user]:
self.MUD[user] += len(self.dao.trainingSet_i[item]) #/ float(maxLength)
self.MUD[user] /= float(len(self.dao.trainingSet_u[user])) # mean degree of the user's rated items
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.trainingSet_u[user]]
lengthList.sort(reverse=True)
self.RUD[user] = lengthList[0] - lengthList[-1]
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.trainingSet_u[user]]
lengthList.sort()
self.QUD[user] = lengthList[int((len(lengthList) - 1) / 4.0)]
# computing MUD,RUD,QUD for test set
for user in self.dao.testSet_u:
self.MUD[user] = 0
for item in self.dao.testSet_u[user]:
self.MUD[user] += len(self.dao.trainingSet_i[item]) #/ float(maxLength)
self.MUD[user] /= float(len(self.dao.testSet_u[user])) # keep the test features on the same scale as training
for user in self.dao.testSet_u:
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.testSet_u[user]]
lengthList.sort(reverse=True)
self.RUD[user] = lengthList[0] - lengthList[-1]
for user in self.dao.testSet_u:
lengthList = [len(self.dao.trainingSet_i[item]) for item in self.dao.testSet_u[user]]
lengthList.sort()
self.QUD[user] = lengthList[int((len(lengthList) - 1) / 4.0)]
# preparing examples
for user in self.dao.trainingSet_u:
self.training.append([self.MUD[user], self.RUD[user], self.QUD[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append([self.MUD[user], self.RUD[user], self.QUD[user]])
self.testLabels.append(self.labels[user])
def predict(self):
# classifier = LogisticRegression()
# classifier.fit(self.training, self.trainingLabels)
# pred_labels = classifier.predict(self.test)
# print 'Logistic:'
# print classification_report(self.testLabels, pred_labels)
#
# classifier = SVC()
# classifier.fit(self.training, self.trainingLabels)
# pred_labels = classifier.predict(self.test)
# print 'SVM:'
# print classification_report(self.testLabels, pred_labels)
classifier = DecisionTreeClassifier(criterion='entropy')
classifier.fit(self.training, self.trainingLabels)
pred_labels = classifier.predict(self.test)
print('Decision Tree:')
return pred_labels
```
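`DegreeSAD` classifies users from three popularity-based statistics of the items they rated: MUD (mean item degree), RUD (range of item degrees) and QUD (lower-quartile item degree). A standalone sketch of the feature computation for a single user (toy data, illustrative):

```
# Standalone sketch of the MUD/RUD/QUD features for one user (toy data, illustrative).
itemDegree = {'i1': 50, 'i2': 8, 'i3': 120, 'i4': 33}   # item -> number of ratings it received
userItems = ['i1', 'i2', 'i3', 'i4']                    # items rated by the user

degrees = sorted(itemDegree[i] for i in userItems)
MUD = sum(degrees) / float(len(degrees))                # mean degree of the rated items
RUD = degrees[-1] - degrees[0]                          # range of degrees
QUD = degrees[int((len(degrees) - 1) / 4.0)]            # lower-quartile degree
print(MUD, RUD, QUD)                                    # the feature vector fed to the classifier
```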
### FAP
```
class FAP(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(FAP, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(FAP, self).readConfiguration()
# # s is the number of seed users treated as known spammers during training
self.s =int( self.config['seedUser'])
# preserve the real spammer ID
self.spammer = []
for i in self.dao.user:
if self.labels[i] == '1':
self.spammer.append(self.dao.user[i])
sThreshold = int(0.5 * len(self.spammer))
if self.s > sThreshold :
self.s = sThreshold
print('*** seedUser is more than a half of spammer, so it is set to', sThreshold, '***')
# # predict top-k user as spammer
self.k = int(self.config['topKSpam'])
# 0.5 is the assumed ratio of spammers in the dataset; it can be adjusted for different datasets
kThreshold = int(0.5 * (len(self.dao.user) - self.s))
if self.k > kThreshold:
self.k = kThreshold
print('*** the number of top-K users is more than threshold value, so it is set to', kThreshold, '***')
# construct the transition probability matrices self.TPUI and self.TPIU
def __computeTProbability(self):
# m--user count; n--item count
m, n, tmp = self.dao.trainingSize()
self.TPUI = np.zeros((m, n))
self.TPIU = np.zeros((n, m))
self.userUserIdDic = {}
self.itemItemIdDic = {}
tmpUser = list(self.dao.user.values())
tmpUserId = list(self.dao.user.keys())
tmpItem = list(self.dao.item.values())
tmpItemId = list(self.dao.item.keys())
for users in range(0, m):
self.userUserIdDic[tmpUser[users]] = tmpUserId[users]
for items in range(0, n):
self.itemItemIdDic[tmpItem[items]] = tmpItemId[items]
for i in range(0, m):
for j in range(0, n):
user = self.userUserIdDic[i]
item = self.itemItemIdDic[j]
# if the graph contains this edge, set a value; otherwise leave it at 0
if (user not in self.bipartiteGraphUI) or (item not in self.bipartiteGraphUI[user]):
continue
else:
w = float(self.bipartiteGraphUI[user][item])
# to avoid positive feedback and reliability problems, we polish the weight w
otherItemW = 0
otherUserW = 0
for otherItem in self.bipartiteGraphUI[user]:
otherItemW += float(self.bipartiteGraphUI[user][otherItem])
for otherUser in self.dao.trainingSet_i[item]:
otherUserW += float(self.bipartiteGraphUI[otherUser][item])
# wPrime = w*1.0/(otherUserW * otherItemW)
wPrime = w
self.TPUI[i][j] = wPrime / otherItemW
self.TPIU[j][i] = wPrime / otherUserW
if i % 100 == 0:
print('progress: %d/%d' %(i,m))
def initModel(self):
# construction of the bipartite graph
print("constructing bipartite graph...")
self.bipartiteGraphUI = {}
for user in self.dao.trainingSet_u:
tmpUserItemDic = {} # user-item-point
for item in self.dao.trainingSet_u[user]:
# tmpItemUserDic = {}#item-user-point
recordValue = float(self.dao.trainingSet_u[user][item])
w = 1 + abs((recordValue - self.dao.userMeans[user]) / self.dao.userMeans[user]) + abs(
(recordValue - self.dao.itemMeans[item]) / self.dao.itemMeans[item]) + abs(
(recordValue - self.dao.globalMean) / self.dao.globalMean)
# tmpItemUserDic[user] = w
tmpUserItemDic[item] = w
# self.bipartiteGraphIU[item] = tmpItemUserDic
self.bipartiteGraphUI[user] = tmpUserItemDic
# we do the polish in computing the transition probability
print("computing transition probability...")
self.__computeTProbability()
def isConvergence(self, PUser, PUserOld):
if len(PUserOld) == 0:
return True
for i in range(0, len(PUser)):
if (PUser[i] - PUserOld[i]) > 0.01:
return True
return False
def buildModel(self):
# -------init--------
m, n, tmp = self.dao.trainingSize()
PUser = np.zeros(m)
PItem = np.zeros(n)
self.testLabels = [0 for i in range(m)]
self.predLabels = [0 for i in range(m)]
# preserve seedUser Index
self.seedUser = []
randDict = {}
for i in range(0, self.s):
randNum = random.randint(0, len(self.spammer) - 1)
while randNum in randDict:
randNum = random.randint(0, len(self.spammer) - 1)
randDict[randNum] = 0
self.seedUser.append(int(self.spammer[randNum]))
# print len(randDict), randDict
#initial user and item spam probability
for j in range(0, m):
if j in self.seedUser:
#print type(j),j
PUser[j] = 1
else:
PUser[j] = random.random()
for tmp in range(0, n):
PItem[tmp] = random.random()
# -------iterator-------
PUserOld = []
iterator = 0
while self.isConvergence(PUser, PUserOld):
#while iterator < 100:
for j in self.seedUser:
PUser[j] = 1
PUserOld = PUser
PItem = np.dot(self.TPIU, PUser)
PUser = np.dot(self.TPUI, PItem)
iterator += 1
print(self.foldInfo,'iteration', iterator)
PUserDict = {}
userId = 0
for i in PUser:
PUserDict[userId] = i
userId += 1
for j in self.seedUser:
del PUserDict[j]
self.PSort = sorted(iter(PUserDict.items()), key=lambda d: d[1], reverse=True)
def predict(self):
# predLabels
# top-k user as spammer
spamList = []
sIndex = 0
while sIndex < self.k:
spam = self.PSort[sIndex][0]
spamList.append(spam)
self.predLabels[spam] = 1
sIndex += 1
# trueLabels
for user in self.dao.trainingSet_u:
userInd = self.dao.user[user]
# print type(user), user, userInd
self.testLabels[userInd] = int(self.labels[user])
# delete seedUser labels
differ = 0
for user in self.seedUser:
user = int(user - differ)
# print type(user)
del self.predLabels[user]
del self.testLabels[user]
differ += 1
return self.predLabels
```
### PCASelectUsers
```
class PCASelectUsers(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]', k=None, n=None ):
super(PCASelectUsers, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(PCASelectUsers, self).readConfiguration()
# K = top-K vals of cov
self.k = int(self.config['kVals'])
self.userNum = len(self.dao.trainingSet_u)
self.itemNum = len(self.dao.trainingSet_i)
if self.k >= min(self.userNum, self.itemNum):
self.k = 3
print('*** k-vals is more than the number of user or item, so it is set to', self.k)
# n = attack size or the ratio of spammers to normal users
self.n = float(self.config['attackSize'])
def buildModel(self):
#array initialization
dataArray = np.zeros([self.userNum, self.itemNum], dtype=float)
self.testLabels = np.zeros(self.userNum)
self.predLabels = np.zeros(self.userNum)
#add data
print('construct matrix')
for user in self.dao.trainingSet_u:
for item in list(self.dao.trainingSet_u[user].keys()):
value = self.dao.trainingSet_u[user][item]
a = self.dao.user[user]
b = self.dao.item[item]
dataArray[a][b] = value
sMatrix = csr_matrix(dataArray)
# z-scores
sMatrix = preprocessing.scale(sMatrix, axis=0, with_mean=False)
sMT = np.transpose(sMatrix)
# cov
covSM = np.dot(sMT, sMatrix)
# eigen-value-decomposition
vals, vecs = scipy.sparse.linalg.eigs(covSM, k=self.k, which='LM')
newArray = np.dot(dataArray**2, np.real(vecs))
distanceDict = {}
userId = 0
for user in newArray:
distance = 0
for tmp in user:
distance += tmp
distanceDict[userId] = float(distance)
userId += 1
print('sort distance ')
self.disSort = sorted(iter(distanceDict.items()), key=lambda d: d[1], reverse=False)
def predict(self):
print('predict spammer')
spamList = []
i = 0
while i < self.n * len(self.disSort):
spam = self.disSort[i][0]
spamList.append(spam)
self.predLabels[spam] = 1
i += 1
# trueLabels
for user in self.dao.trainingSet_u:
userInd = self.dao.user[user]
self.testLabels[userInd] = int(self.labels[user])
return self.predLabels
```
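The core of `PCASelectUsers` is a truncated eigen-decomposition of the item-item covariance of the z-scored rating matrix; users are then ranked by their projection onto the top-k components, and those with the smallest scores are flagged first. A standalone sketch of that step on a random toy matrix (illustrative only):

```
# Standalone sketch of the PCA step used above (random toy matrix, illustrative only).
import numpy as np
import scipy.sparse.linalg
from scipy.sparse import csr_matrix
from sklearn import preprocessing

rng = np.random.RandomState(0)
dataArray = rng.randint(0, 6, size=(20, 10)).astype(float)   # 20 users x 10 items

sMatrix = preprocessing.scale(csr_matrix(dataArray), axis=0, with_mean=False)
covSM = sMatrix.T.dot(sMatrix)                               # item-item covariance (up to scaling)
vals, vecs = scipy.sparse.linalg.eigs(covSM, k=3, which='LM')

scores = np.dot(dataArray ** 2, np.real(vecs)).sum(axis=1)   # per-user distance score
suspects = np.argsort(scores)[: int(0.1 * len(scores))]      # smallest scores are flagged first
print(suspects)
```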
### SemiSAD
```
class SemiSAD(SDetection):
def __init__(self, conf, trainingSet=None, testSet=None, labels=None, fold='[1]'):
super(SemiSAD, self).__init__(conf, trainingSet, testSet, labels, fold)
def readConfiguration(self):
super(SemiSAD, self).readConfiguration()
# K = top-K vals of cov
self.k = int(self.config['topK'])
# Lambda = the lambda parameter (sample weight applied to pseudo-labeled data)
self.Lambda = float(self.config['Lambda'])
def buildModel(self):
self.H = {}
self.DegSim = {}
self.LengVar = {}
self.RDMA = {}
self.FMTD = {}
print('Begin feature engineering...')
# computing H, DegSim, LengVar, RDMA, FMTD for the labeled data set
trainingIndex = 0
testIndex = 0
trainingUserCount, trainingItemCount, trainingrecordCount = self.dao.trainingSize()
testUserCount, testItemCount, testrecordCount = self.dao.testSize()
for user in self.dao.trainingSet_u:
trainingIndex += 1
self.H[user] = 0
for i in range(10,50,5):
n = 0
for item in self.dao.trainingSet_u[user]:
if(self.dao.trainingSet_u[user][item]==(i/10.0)):
n+=1
if n==0:
self.H[user] += 0
else:
self.H[user] += (-(n/(trainingUserCount*1.0))*math.log(n/(trainingUserCount*1.0),2))
SimList = []
self.DegSim[user] = 0
for user1 in self.dao.trainingSet_u:
userA, userB, C, D, E, Count = 0,0,0,0,0,0
for item in list(set(self.dao.trainingSet_u[user]).intersection(set(self.dao.trainingSet_u[user1]))):
userA += self.dao.trainingSet_u[user][item]
userB += self.dao.trainingSet_u[user1][item]
Count += 1
if Count==0:
AverageA = 0
AverageB = 0
else:
AverageA = userA/Count
AverageB = userB/Count
for item in list(set(self.dao.trainingSet_u[user]).intersection(set(self.dao.trainingSet_u[user1]))):
C += (self.dao.trainingSet_u[user][item]-AverageA)*(self.dao.trainingSet_u[user1][item]-AverageB)
D += np.square(self.dao.trainingSet_u[user][item]-AverageA)
E += np.square(self.dao.trainingSet_u[user1][item]-AverageB)
if C==0:
SimList.append(0.0)
else:
SimList.append(C/(math.sqrt(D)*math.sqrt(E)))
SimList.sort(reverse=True)
for i in range(1,self.k+1):
self.DegSim[user] += SimList[i] / (self.k)
GlobalAverage = 0
F = 0
for user2 in self.dao.trainingSet_u:
GlobalAverage += len(self.dao.trainingSet_u[user2]) / (len(self.dao.trainingSet_u) + 0.0)
for user3 in self.dao.trainingSet_u:
F += pow(len(self.dao.trainingSet_u[user3])-GlobalAverage,2)
self.LengVar[user] = abs(len(self.dao.trainingSet_u[user])-GlobalAverage)/(F*1.0)
Divisor = 0
for item1 in self.dao.trainingSet_u[user]:
Divisor += abs(self.dao.trainingSet_u[user][item1]-self.dao.itemMeans[item1])/len(self.dao.trainingSet_i[item1])
self.RDMA[user] = Divisor/len(self.dao.trainingSet_u[user])
Minuend, index1, Subtrahend, index2 = 0, 0, 0, 0
for item3 in self.dao.trainingSet_u[user]:
if(self.dao.trainingSet_u[user][item3]==5.0 or self.dao.trainingSet_u[user][item3]==1.0) :
Minuend += sum(self.dao.trainingSet_i[item3].values())
index1 += len(self.dao.trainingSet_i[item3])
else:
Subtrahend += sum(self.dao.trainingSet_i[item3].values())
index2 += len(self.dao.trainingSet_i[item3])
if index1 == 0 and index2 == 0:
self.FMTD[user] = 0
elif index1 == 0:
self.FMTD[user] = abs(Subtrahend / index2)
elif index2 == 0:
self.FMTD[user] = abs(Minuend / index1)
else:
self.FMTD[user] = abs(Minuend / index1 - Subtrahend / index2)
if trainingIndex==(trainingUserCount/5):
print('trainingData Done 20%...')
elif trainingIndex==(trainingUserCount/5*2):
print('trainingData Done 40%...')
elif trainingIndex==(trainingUserCount/5*3):
print('trainingData Done 60%...')
elif trainingIndex==(trainingUserCount/5*4):
print('trainingData Done 80%...')
elif trainingIndex==(trainingUserCount):
print('trainingData Done 100%...')
# computing H, DegSim, LengVar, RDMA, FMTD for the unlabeled data set
for user in self.dao.testSet_u:
testIndex += 1
self.H[user] = 0
for i in range(10,50,5):
n = 0
for item in self.dao.testSet_u[user]:
if(self.dao.testSet_u[user][item]==(i/10.0)):
n+=1
if n==0:
self.H[user] += 0
else:
self.H[user] += (-(n/(testUserCount*1.0))*math.log(n/(testUserCount*1.0),2))
SimList = []
self.DegSim[user] = 0
for user1 in self.dao.testSet_u:
userA, userB, C, D, E, Count = 0,0,0,0,0,0
for item in list(set(self.dao.testSet_u[user]).intersection(set(self.dao.testSet_u[user1]))):
userA += self.dao.testSet_u[user][item]
userB += self.dao.testSet_u[user1][item]
Count += 1
if Count==0:
AverageA = 0
AverageB = 0
else:
AverageA = userA/Count
AverageB = userB/Count
for item in list(set(self.dao.testSet_u[user]).intersection(set(self.dao.testSet_u[user1]))):
C += (self.dao.testSet_u[user][item]-AverageA)*(self.dao.testSet_u[user1][item]-AverageB)
D += np.square(self.dao.testSet_u[user][item]-AverageA)
E += np.square(self.dao.testSet_u[user1][item]-AverageB)
if C==0:
SimList.append(0.0)
else:
SimList.append(C/(math.sqrt(D)*math.sqrt(E)))
SimList.sort(reverse=True)
for i in range(1,self.k+1):
self.DegSim[user] += SimList[i] / self.k
GlobalAverage = 0
F = 0
for user2 in self.dao.testSet_u:
GlobalAverage += len(self.dao.testSet_u[user2]) / (len(self.dao.testSet_u) + 0.0)
for user3 in self.dao.testSet_u:
F += pow(len(self.dao.testSet_u[user3])-GlobalAverage,2)
self.LengVar[user] = abs(len(self.dao.testSet_u[user])-GlobalAverage)/(F*1.0)
Divisor = 0
for item1 in self.dao.testSet_u[user]:
Divisor += abs(self.dao.testSet_u[user][item1]-self.dao.itemMeans[item1])/len(self.dao.testSet_i[item1])
self.RDMA[user] = Divisor/len(self.dao.testSet_u[user])
Minuend, index1, Subtrahend, index2= 0,0,0,0
for item3 in self.dao.testSet_u[user]:
if(self.dao.testSet_u[user][item3]==5.0 or self.dao.testSet_u[user][item3]==1.0):
Minuend += sum(self.dao.testSet_i[item3].values())
index1 += len(self.dao.testSet_i[item3])
else:
Subtrahend += sum(self.dao.testSet_i[item3].values())
index2 += len(self.dao.testSet_i[item3])
if index1 == 0 and index2 == 0:
self.FMTD[user] = 0
elif index1 == 0:
self.FMTD[user] = abs(Subtrahend / index2)
elif index2 == 0:
self.FMTD[user] = abs(Minuend / index1)
else:
self.FMTD[user] = abs(Minuend / index1 - Subtrahend / index2)
if testIndex == testUserCount / 5:
print('testData Done 20%...')
elif testIndex == testUserCount / 5 * 2:
print('testData Done 40%...')
elif testIndex == testUserCount / 5 * 3:
print('testData Done 60%...')
elif testIndex == testUserCount / 5 * 4:
print('testData Done 80%...')
elif testIndex == testUserCount:
print('testData Done 100%...')
# preparing examples: training set from the labeled data, test set from the unlabeled data
for user in self.dao.trainingSet_u:
self.training.append([self.H[user], self.DegSim[user], self.LengVar[user],self.RDMA[user],self.FMTD[user]])
self.trainingLabels.append(self.labels[user])
for user in self.dao.testSet_u:
self.test.append([self.H[user], self.DegSim[user], self.LengVar[user],self.RDMA[user],self.FMTD[user]])
self.testLabels.append(self.labels[user])
def predict(self):
ClassifierN = 0
classifier = GaussianNB()
X_train,X_test,y_train,y_test = train_test_split(self.training,self.trainingLabels,test_size=0.75,random_state=33)
classifier.fit(X_train, y_train)
# predict the unlabeled data
#pred_labelsForTrainingUn = classifier.predict(X_test)
print('Enhanced classifier...')
while 1:
if len(X_test)<=5: # min
break #min
proba_labelsForTrainingUn = classifier.predict_proba(X_test)
X_test_labels = np.hstack((X_test, proba_labelsForTrainingUn))
X_test_labels0_sort = sorted(X_test_labels,key=lambda x:x[5],reverse=True)
if X_test_labels0_sort[4][5]>X_test_labels0_sort[4][6]:
a = [x[:5] for x in X_test_labels0_sort]
b = a[0:5]
classifier.partial_fit(b, ['0','0','0','0','0'], classes=['0', '1'], sample_weight=np.ones(len(b), dtype=float) * self.Lambda)
X_test_labels = X_test_labels0_sort[5:]
X_test = a[5:]
if len(X_test)<6: # min
break #min
X_test_labels0_sort = sorted(X_test_labels, key=lambda x: x[5], reverse=True)
if X_test_labels0_sort[4][5]<=X_test_labels0_sort[4][6]: #min
a = [x[:5] for x in X_test_labels0_sort]
b = a[0:5]
classifier.partial_fit(b, ['1', '1', '1', '1', '1'], classes=['0', '1'], sample_weight=np.ones(len(b), dtype=float) * 1)
X_test_labels = X_test_labels0_sort[5:] # min
X_test = a[5:]
if len(X_test)<6:
break
# while 1 :
# p1 = pred_labelsForTrainingUn
# # ๅฐๅธฆฮปๅๆฐ็ๆ ๆ ็ญพๆฐๆฎๆๅๅ
ฅๅ็ฑปๅจ
# classifier.partial_fit(X_test, pred_labelsForTrainingUn,classes=['0','1'], sample_weight=np.ones(len(X_test),dtype=np.float)*self.Lambda)
# pred_labelsForTrainingUn = classifier.predict(X_test)
# p2 = pred_labelsForTrainingUn
        #     # check whether the classifier has stabilized
# if list(p1)==list(p2) :
# ClassifierN += 1
# elif ClassifierN > 0:
# ClassifierN = 0
# if ClassifierN == 20:
# break
pred_labels = classifier.predict(self.test)
print('naive_bayes with EM algorithm:')
return pred_labels
```
## Main
```
class SDLib(object):
def __init__(self,config):
self.trainingData = [] # training data
self.testData = [] # testData
self.relation = []
self.measure = []
self.config =config
self.ratingConfig = LineConfig(config['ratings.setup'])
self.labels = FileIO.loadLabels(config['label'])
if self.config.contains('evaluation.setup'):
self.evaluation = LineConfig(config['evaluation.setup'])
if self.evaluation.contains('-testSet'):
#specify testSet
self.trainingData = FileIO.loadDataSet(config, config['ratings'])
self.testData = FileIO.loadDataSet(config, self.evaluation['-testSet'], bTest=True)
elif self.evaluation.contains('-ap'):
#auto partition
self.trainingData = FileIO.loadDataSet(config,config['ratings'])
self.trainingData,self.testData = DataSplit.\
dataSplit(self.trainingData,test_ratio=float(self.evaluation['-ap']))
elif self.evaluation.contains('-cv'):
#cross validation
self.trainingData = FileIO.loadDataSet(config, config['ratings'])
#self.trainingData,self.testData = DataSplit.crossValidation(self.trainingData,int(self.evaluation['-cv']))
else:
print('Evaluation is not well configured!')
exit(-1)
if config.contains('social'):
self.socialConfig = LineConfig(self.config['social.setup'])
self.relation = FileIO.loadRelationship(config,self.config['social'])
print('preprocessing...')
def execute(self):
if self.evaluation.contains('-cv'):
k = int(self.evaluation['-cv'])
if k <= 1 or k > 10:
k = 3
#create the manager used to communication in multiprocess
manager = Manager()
m = manager.dict()
i = 1
tasks = []
for train,test in DataSplit.crossValidation(self.trainingData,k):
fold = '['+str(i)+']'
if self.config.contains('social'):
method = self.config['methodName'] + "(self.config,train,test,self.labels,self.relation,fold)"
else:
method = self.config['methodName'] + "(self.config,train,test,self.labels,fold)"
#create the process
p = Process(target=run,args=(m,eval(method),i))
tasks.append(p)
i+=1
#start the processes
for p in tasks:
p.start()
#wait until all processes are completed
for p in tasks:
p.join()
#compute the mean error of k-fold cross validation
self.measure = [dict(m)[i] for i in range(1,k+1)]
res = []
pattern = re.compile('(\d+\.\d+)')
countPattern = re.compile('\d+\\n')
labelPattern = re.compile('\s\d{1}[^\.|\n|\d]')
labels = re.findall(labelPattern, self.measure[0])
values = np.array([0]*9,dtype=float)
count = np.array([0,0,0],dtype=int)
for report in self.measure:
patterns = np.array(re.findall(pattern,report),dtype=float)
values += patterns[:9]
patterncounts = np.array(re.findall(countPattern,report),dtype=int)
count += patterncounts[:3]
values/=k
values=np.around(values,decimals=4)
res.append(' precision recall f1-score support\n\n')
res.append(' '+labels[0]+' '+' '.join(np.array(values[0:3],dtype=str).tolist())+' '+str(count[0])+'\n')
res.append(' '+labels[1]+' '+' '.join(np.array(values[3:6],dtype=str).tolist())+' '+str(count[1])+'\n\n')
res.append(' avg/total ' + ' '.join(np.array(values[6:9], dtype=str).tolist()) + ' ' + str(count[2]) + '\n')
print('Total:')
print(''.join(res))
# for line in lines[1:]:
#
# measure = self.measure[0][i].split(':')[0]
# total = 0
# for j in range(k):
# total += float(self.measure[j][i].split(':')[1])
# res.append(measure+':'+str(total/k)+'\n')
#output result
currentTime = strftime("%Y-%m-%d %H-%M-%S", localtime(time()))
outDir = LineConfig(self.config['output.setup'])['-dir']
fileName = self.config['methodName'] +'@'+currentTime+'-'+str(k)+'-fold-cv' + '.txt'
FileIO.writeFile(outDir,fileName,res)
print('The results have been output to '+abspath(LineConfig(self.config['output.setup'])['-dir'])+'\n')
else:
if self.config.contains('social'):
method = self.config['methodName'] + '(self.config,self.trainingData,self.testData,self.labels,self.relation)'
else:
method = self.config['methodName'] + '(self.config,self.trainingData,self.testData,self.labels)'
eval(method).execute()
def run(measure,algor,order):
measure[order] = algor.execute()
conf = Config('DegreeSAD.conf')
sd = SDLib(conf)
sd.execute()
print('='*80)
print('Supervised Methods:')
print('1. DegreeSAD 2.CoDetector 3.BayesDetector\n')
print('Semi-Supervised Methods:')
print('4. SemiSAD\n')
print('Unsupervised Methods:')
print('5. PCASelectUsers 6. FAP 7.timeIndex\n')
print('-'*80)
order = eval(input('please enter the num of the method to run it:'))
algor = -1
conf = -1
s = tm.clock()
if order == 1:
conf = Config('DegreeSAD.conf')
elif order == 2:
conf = Config('CoDetector.conf')
elif order == 3:
conf = Config('BayesDetector.conf')
elif order == 4:
conf = Config('SemiSAD.conf')
elif order == 5:
conf = Config('PCASelectUsers.conf')
elif order == 6:
conf = Config('FAP.conf')
elif order == 7:
conf = Config('timeIndex.conf')
else:
print('Error num!')
exit(-1)
# conf = Config('DegreeSAD.conf')
sd = SDLib(conf)
sd.execute()
e = tm.clock()
print("Run time: %f s" % (e - s))
```
| github_jupyter |
# Calculate the AMOC in density space
$VVEL \cdot DZT \cdot DXT\,(x,y,z) \;\rightarrow\; VVEL \cdot DZT \cdot DXT\,(x,y,\sigma) \;\rightarrow\; \sum_{x=W}^{E} \;\rightarrow\; \sum_{\sigma=\sigma_{max/min}}^{\sigma}$
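The actual computation below is done by `calculate_AMOC_sigma_z` from the local `MOC` module. As a rough sketch of the idea only (the names `transport` and `sigma` are assumptions, not objects defined in this notebook), the density-space overturning can be built by binning the transport into density classes and accumulating over density:
```python
# Sketch, not the notebook's implementation: assumes `transport` = VVEL*DZT*DXT and
# `sigma` = potential density, both xr.DataArrays on (z_t, nlat, nlon).
sigma_bins = np.arange(20, 28.1, 0.05)
trans_sigma = histogram(sigma.rename('sigma'), bins=[sigma_bins],
                        dim=['z_t', 'nlon'],         # collapse depth and longitude
                        weights=transport)           # transport per density class and latitude
AMOC_sigma = trans_sigma.cumsum('sigma_bin') / 1e12  # cm^3/s -> Sv
```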
```
import os
import sys
import xgcm
import numpy as np
import xarray as xr
import cmocean
import pop_tools
import matplotlib
import matplotlib.pyplot as plt
%matplotlib inline
matplotlib.rc_file('rc_file_paper')
%config InlineBackend.print_figure_kwargs={'bbox_inches':None}
%load_ext autoreload
%autoreload 2
from MOC import calculate_AMOC_sigma_z
from tqdm import notebook
from paths import path_results, path_prace, file_RMASK_ocn, file_RMASK_ocn_low, file_ex_ocn_ctrl, file_ex_ocn_lpd, path_data
from FW_plots import Atl_lats
from timeseries import lowpass
from xhistogram.xarray import histogram
from xr_DataArrays import xr_DZ_xgcm
from xr_regression import xr_lintrend, xr_linear_trend, xr_2D_trends, ocn_field_regression
RAPIDz = xr.open_dataarray(f'{path_data}/RAPID_AMOC/moc_vertical.nc')
kwargs = dict(combine='nested', concat_dim='time', decode_times=False)
ds_ctrl = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_ctrl_*.nc', **kwargs)
ds_rcp = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_rcp_*.nc' , **kwargs)
ds_lpd = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_lpd_*.nc' , **kwargs)
ds_lr1 = xr.open_mfdataset(f'{path_prace}/MOC/AMOC_sz_yz_lr1_*.nc' , **kwargs)
AMOC_ctrl = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_ctrl.nc', decode_times=False)
AMOC_rcp = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_rcp.nc' , decode_times=False)
AMOC_lpd = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_lpd.nc' , decode_times=False)
AMOC_lr1 = xr.open_dataarray(f'{path_results}/MOC/AMOC_max_lr1.nc' , decode_times=False)
mycmap = cmocean.tools.crop_by_percent(cmocean.cm.curl, 100/3, which='min', N=None)
f = plt.figure(figsize=(6.4,5))
# profiles
ax = f.add_axes([.84,.55,.15,.4])
ax.set_title(r'26.5$\!^\circ\!$N')
ax.set_ylim((-6,0))
ax.set_yticklabels([])
ax.axvline(0, c='k', lw=.5)
ax.axhline(-1, c='k', lw=.5)
r, = ax.plot(RAPIDz.mean('time'), -RAPIDz.depth/1e3, c='k', label='RAPID')
RAPID_ctrl = ds_ctrl['AMOC(y,z)'].isel(nlat_u=1456).mean('time')
RAPID_lpd = ds_lpd ['AMOC(y,z)'].isel(nlat_u= 271).mean('time')
RAPID_rcp = 365*100*xr_linear_trend(ds_rcp['AMOC(y,z)'].isel(nlat_u=1456)).rename({'dim_0':'z_t'}).assign_coords(z_t=ds_rcp.z_t) + RAPID_ctrl
RAPID_lr1 = 365*100*xr_linear_trend(ds_lr1['AMOC(y,z)'].isel(nlat_u= 271)).rename({'dim_0':'z_t'}).assign_coords(z_t=ds_lr1.z_t) + RAPID_lpd
hc, = ax.plot(RAPID_ctrl, -ds_ctrl.z_t/1e5, c='k', ls='--', label='HR CTRL')
lc, = ax.plot(RAPID_lpd , -ds_lpd .z_t/1e5, c='k', ls=':' , label='LR CTRL')
hr, = ax.plot(RAPID_rcp , -ds_ctrl.z_t/1e5, c='k', ls='--', lw=.7, label='HR RCP')
lr, = ax.plot(RAPID_lr1 , -ds_lpd .z_t/1e5, c='k', ls=':' , lw=.7, label='LR RCP')
ax.text(.01,.92, '(c)', transform=ax.transAxes)
ax.set_xlabel('AMOC [Sv]')
ax.legend(handles=[r, hc, lc, hr, lr], fontsize=5, frameon=False, handlelength=2, loc='lower right')
for i, sim in enumerate(['HIGH', 'LOW']):
axt = f.add_axes([.1+i*.37,.55,.35,.4])
axb = f.add_axes([.1+i*.37,.09,.35,.35])
# psi
axt.set_title(['HR-CESM', 'LR-CESM'][i])
axt.set_ylim((-6,0))
axt.set_xlim((-34,60))
if i==0:
axt.set_ylabel('depth [km]')
axb.set_ylabel('AMOC at 26.5$\!^\circ\!$N, 1000 m')
else:
axt.set_yticklabels([])
axb.set_yticklabels([])
(ds_mean, ds_trend) = [(ds_ctrl, ds_rcp), (ds_lpd, ds_lr1)][i]
vmaxm = 25
vmaxt = 10
mean = ds_mean['AMOC(y,z)'].mean('time')
trend = xr_2D_trends(ds_trend['AMOC(y,z)']).rolling(nlat_u=[15,3][i]).mean()*100*365
Xm,Ym = np.meshgrid(Atl_lats(sim=sim), -1e-5*mean['z_t'].values)
Xt,Yt = np.meshgrid(Atl_lats(sim=sim), -1e-5*trend['z_t'].values)
im = axt.contourf(Xm, Ym, mean, cmap=mycmap, levels=np.arange(-8,25,1))
cs = axt.contour(Xm, Ym, trend, levels=np.arange(-12,3,1),
cmap='cmo.balance', vmin=-10, vmax=10, linewidths=.5)
axt.clabel(cs, np.arange(-12,3,2), fmt='%d', fontsize=7)
axt.text(.01,.92, '('+['a','b'][i]+')', transform=axt.transAxes)
axt.scatter(26.5,-1, color='w', marker='x')
axt.set_xlabel(r'latitude $\theta$ [$\!^{\!\circ}\!$N]')
# time series
AMOC_c = [AMOC_ctrl, AMOC_lpd][i]
AMOC_r = [AMOC_rcp , AMOC_lr1][i]
axb.set_xlabel('time [model years]')
axb.plot(AMOC_c.time/365, AMOC_c, c='C0', alpha=.3, lw=.5)
axb.plot(AMOC_c.time[60:-60]/365, lowpass(AMOC_c,120)[60:-60], c='C0', label='CTRL')
axb.plot(AMOC_r.time/365-[1800,1500][i], AMOC_r, c='C1', alpha=.3, lw=.5)
axb.plot(AMOC_r.time[60:-60]/365-[1800,1500][i], lowpass(AMOC_r,120)[60:-60], c='C1', label='RCP')
axb.plot(AMOC_r.time/365-[1800,1500][i], xr_lintrend(AMOC_r), c='grey', lw=.8, ls='--', label='RCP linear fit')
axb.text(25+[200,500][i], 5.8, f'{xr_linear_trend(AMOC_r).values*100*365:3.2f} Sv/100yr', color='grey', fontsize=7)
axb.set_ylim((4,29.5))
if i==0:
axb.legend(frameon=False, fontsize=8)
axb.set_xlim([(95,305), (395,605)][i])
axb.text(.01,.91, '('+['d','e'][i]+')', transform=axb.transAxes)
cax1 = f.add_axes([.88,.12,.02,.3])
f.colorbar(im, cax=cax1)
cax1.text(1,-.1,'[Sv]', ha='right', fontsize=7, transform=cax1.transAxes)
cax1.yaxis.set_ticks_position('left')
cax2 = f.add_axes([.92,.12,.02,.3])
f.colorbar(cs, cax=cax2)
cax2.text(-.1,-.1,'[Sv/100yr]', ha='left', fontsize=7, transform=cax2.transAxes)
# plt.savefig(f'{path_results}/FW-paper/Fig5', dpi=600)
```
| github_jupyter |
# ENGR 213 Project Demonstration: Toast Falling from Counter
## Iteration AND slipping of toast
This is a Jupyter notebook created to explore the utility of notebooks as an engineering/physics tool. As I consider integrating this material into physics and engineering courses I am having a hard time clarifying the outcomes that I seek for the students. It seems plausible that understanding what it would take to implement the 'Toast Project' in a way which satisfies me would be helpful to identify those skills and outcomes I hope for.
I hope to do a good job of documentation as I go but intentions are quirky creatures and prone to change:)
### Today's Learning:
I've been working my way through this for a number of days (a couple of hours at a time). I just realized that it's getting very cumbersome to try to keep each upgrade to the model in the same notebook. I'm going to keep this notebook as an object lesson of how that can happen. In the meantime I'm going to rebuild this notebook into three new notebooks that keep each stage of the process independent. A very helpful discovery about my own workflow.....
#### Exporting this document to pdf
This has not behaved well so far for me. My current most successful strategy is to download the Jupyter notebook as an html file, open in Firefox, print the file - which gives me the option to save the html document as a pdf. This is not terrible but it's not as good as I would like it.
I also have had some luck with downloading as a .tex file and running it through TeXWorks which complains a bit but ultimately gives pretty output. This may be my strategy.
## The Problem
The basic problem is this. When toast teeters off the edge of a standard counter it seems to, remarkably consistently, land 'jelly side' down. I have read that this is no accident and that in fact the solution to the problem is to push the toast when you notice it going off the edge rather than try to stop it. I have done some loose experiments and find this to be empirically true. Now -- can I use basic dynamics and a numerical approach to explore this problem without getting caught up in the analytic solution to the problem documented in various places in AJP.
In the previous notebook I modelled this process assuming that the angular acceleration would be changing as the toast tips over the edge. Experimentally, starting with a piece of toast 3/4 of the way off the edge and then releasing it, I observe that it rotates about $2\pi$ radians before it hits the floor (a little more perhaps). My basic iteration model predicted 8-ish radians, which is still more than the observed rotation of only a little over $2\pi$ radians, but definitely getting closer to my observations. It seems likely that there is a point at which the gravitational forces will make the toast begin to slip 'laterally'. This will increase the moment of inertia and the net torque in ways that are hard to predict intuitively. That is the reason to try and model this more complex process.
# Code
The code starts the same way as the previous model. I will retain the comment block to keep things consistent.
The following just sets up the plotting, `matplotlib`, and numerical processing, `numpy`, libaries for use by the kernal. As is apparently common they are given shorter aliases as `plt` and `np`.
Here are the reference sites for [`matplotlib`](https://matplotlib.org/) and [`numpy`](http://www.numpy.org/)
Note: The plt.rcParams calls are tools for making the plots larger along with the fonts that are used in the labeling. These settings seem to give acceptable plots.
```
%matplotlib inline
import matplotlib.pyplot as plt
plt.style.use('seaborn-whitegrid')
plt.rcParams["figure.figsize"] = (20,10)
plt.rcParams.update({'font.size': 22})
import numpy as np
```
## Defining constants
In this problem it seems prudent to allow for a variety of 'toast' like objects of different materials and sizes. To that end I want to establish a set of constants that describe the particular setting I am considering. Note that I am working in cm and s for convenience which may turn out to be a bad idea. These variables define the physical form of the toast. Modeling parameters will be solicited from the user allowing a different way to change features.
* tlength (parallel to table edge - cm)
* twidth (perpendicular to table edge - cm)
* tthick (yup - cm)
* tdensity (in case it matters - gr/cm<sup>3</sup>)
* counterheight (in cm)
* gravity (cm/s<sup>2</sup>)
* anglimit (radians - generally set to $\pi/2$ but could be different)
In this model the toast will be slipping which leads to some component of downward velocity as the toast leaves the edge of the table. This leads to a more complex calculation for the time to fall to the floor which depends on the results of the model. That is why the floortime calculation from the previous models has been removed at this stage.
Calculations in this cell are constant regardless of other variations in the problem parameters.
```
tlength = 10.0
twidth = 10.0
tthick = 1.0
tdensity = 0.45
counterheight = 100.0
gravity = 981.0
anglimit = np.pi/2
lindensity = tdensity*tlength*tthick # linear density of toast
tmass = lindensity*twidth # mass of toast
tinertiacm = tmass*(twidth*twidth)/12.0 # moment of inertia around CM
tinertiamax = tmass*(twidth*twidth)/3.0 # max moment around edge of 'book'
# debug
# print ("Toast mass (gr): %.2f gr"% (tmass))
# print ("Inertia around CM: %.2f gr*cm^2" % (tinertiacm))
# print ("Max inertia around edge: %.2f gr*cm^2" % (tinertiamax))
```
## Updated Freebody Diagram
Since I know from experiment that the toast rotates almost exactly a full $2\pi$ if I start it hanging 3/4 of it's width over the edge that would mean that the rotational velocity when the toast disconnects from the table is around 12 rad/s (since the fall time is abut 0.5 s). The previous notebook and the plots therein suggest that the toast reaches that velocity when it has rotated a little less than 1 radian.
Analysis of the previous model raises the question of how slipping of the toast might affect the model. This will lead to a more complex model for sure so it seemed prudent to develop a more explicit freebody diagram using CAD software to keep variables clear.
So, here's the freebody diagram with a host of 'new' labels that represent my current thinking.
<img src="images/toastFB.png" alt="Freebody Diagram" title="Toast Freebody" />
## Rerun from here!!
When I wish to rerun this model with different parameters this is where I start....
### Set tstep and numit
To explore tools for interacting with the python code I am choosing to set the time step (tstep) and the maximum number of iterations (numit) as inputs from the user. This link from [`stackoverflow`](https://stackoverflow.com/questions/20449427/how-can-i-read-inputs-as-numbers) does the best job of explaining how to use the input() command in python 3.x which is the version I am using. This hopefully explains the format of the input statements....
```python
tstep = float(input("Define time step in ms (ms)? "))
numit = int(input("How many interations? "))
overhang = float(input("What is the initial overhang of the toast (% as in 1.0 = 100%)? "))
coeffric = float(input("What is the coefficient of friction? "))
```
The time step (tstep) can be fractions of a ms if I want, while the number of iterations (numit) must conceptually be an integer (it doesn't make much sense to repeat a process 11.3 times in this context).
```
# Solicit model parameters from user.....
tstep = float(input("Define time step in ms (ms)? "))
numit = int(input("How many interations? "))
overhang = float(input("What is the initial overhang of the toast (% as in 1.0 = 100%)? "))
coeffric = float(input("What is the coefficient of friction? "))
print("Overhang is %.3f and the coefficient of friction is %.2f ."% (overhang, coeffric))
print("time step is %.2f ms and the number of iterations is %s."% (tstep, numit))
print("Rerun this cell to change these values and then rerun the calculations. ")
```
### Set up variable arrays
Getting these arrays set up is a little bit of an iterative process itself. I set up all the arrays I think I need and invariably I find later that I need several others. Some of that process will be hidden so I apologise. I started out this just a giant list of arrays but later decided I needed to group them in a way that would help visualize how they contribute to the calculation. Much of this is very similar to the previous model.
```
# Define variable arrays needed
# time variables
count = np.linspace(0,numit,num=numit+1) # start at 0, go to numit, because it started at there is 1 more element in the array
currenttime = np.full_like(count,0.0) # same size as count will all values = 0 for starters
# moment of inertia variables
dparallel = np.full_like(count,0.0) # distance to pivot from center of mass (CM)
tinertianow = np.full_like(count,0.0) # moment of inertia from parallel axis theorem
# rotation variables
angaccel = np.full_like(count,0.0) # current angular acceleration
angvel = np.full_like(count,0.0) # current angular velocity
angpos = np.full_like(count,0.0) # current angular position
torqpos = np.full_like(count,0.0) # torque from overhanging 'right' side of toast
torqneg = np.full_like(count,0.0) # torque from 'left' side of toast still over the table
torqnet = np.full_like(count,0.0) # net torque
# general location of cm variables
rside = np.full_like(count,0.0) # length of toast hanging out over edge
lside = np.full_like(count,0.0) # length of toast to left of edge
# torque calculation variables
armr = np.full_like(count,0.0) # moment arm of overhanging toast
arml = np.full_like(count,0.0) # moment arm of toast left of pivot
weightr = np.full_like(count,0.0) # weight of overhanging toast acting at armr/2
weightl = np.full_like(count,0.0) # weight of 'left' side of toast acting at arml/2
# slipping variables
friction = np.full_like(count,0.0) # friction at pivot
latgforce = np.full_like(count,0.0) # force seeking to slide toast off
parallelaccel = np.full_like(count,0.0) # acceleration parallel to plane of toast
# These arrays had to be added later as I needed to deal with the toast slipping off the edge
slipdisplace = np.full_like(count,0.0) # displacement of toast in this interation
slipposx = np.full_like(count,0.0) # position of CM of toast in x
slipposy = np.full_like(count,0.0) # position of CM of toast in y
slipveltot = np.full_like(count,0.0) # total velocity at iteration
slipvelx = np.full_like(count,0.0) # velocity of CM in x direction
slipvely = np.full_like(count,0.0) # velocity of CM in y direction
# kinematic coefficients
quadcoef = np.zeros(3) # needed to invoke the python polynomial roots solver.
```
### Initialize the arrays....
In the process of taking my original notebook apart and creating separate notebooks for each model I am finding that I can do this in a more understandable way than I did the first time around. Feel free to look back at the original notebook which I abandoned when it got too cumbersome.
Each time I perform a set of calculations I will start by considering where it is now and whether it is slipping or not. That means the next step in the iteration only depends on the previous step and some constants. Because of this I only need to establish the first value in each of the arrays: what is the value of each variable when this process starts? Note that all array values except count[] have been set to 0 so any variables whose initial value should be 0 have been commented out.
See previous models for details of calculating the moment of inertia using the parallel axis theorem.
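For reference, the moment of inertia set in the cell below comes straight from the parallel axis theorem, with $m$ the toast mass, $w$ its width, and $d_{\parallel}$ the distance from the center of mass to the pivot:

$$I_{now} = I_{cm} + m\,d_{\parallel}^{2}, \qquad I_{cm} = \tfrac{1}{12}\,m\,w^{2}$$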
```
# Set first term of each variable
# time variables
# count : count is aready completely filled from 0 to numit
# currenttime[0] is already set to 0
# general location of cm variables
rside[0] = twidth*overhang
lside[0] = twidth-rside[0]
# torque calculation variables
armr[0] = rside[0]/2.0
arml[0] = lside[0]/2.0
weightr[0] = lindensity*rside[0]*gravity # weight of overhang
weightl[0] = lindensity*lside[0]*gravity # weight over table
# moment of inertia variables
dparallel[0] = rside[0] - twidth/2. # value changes if slipping
tinertianow[0] = tinertiacm + tmass*dparallel[0]**2
# rotation variables
#angvel[0] is already set to 0
#angpos[0] is already set to 0
torqpos[0] = (overhang*twidth/2)*(tmass*overhang*gravity)
torqneg[0] = -((1.0-overhang)*twidth/2)*(tmass*(1.0-overhang)*gravity)
torqnet[0] = torqpos[0]+torqneg[0]
angaccel[0] = torqnet[0]/tinertianow[0]
# slipping variables
# friction[0] is already set to 0
# latgforce[0] is already set to 0
# parallelaccel[0] is already set to 0
# These arrays had to be added later as I needed to deal with the toast slipping off the edge
# slipdisplace[0] is already set to 0
slipposx[0] = rside[0]- twidth/2.0 # CM relative to pivot due to overhang
# slipposy[0] is already set to 0
# slipveltot[0] is already set to 0
# slipvelx[0] is already set to 0
# slipvely[0] is already set to 0
# kinematic coefficients
# quadcoef[] depend on conditions when toast leaves edge
```
### ...same calculation but using variables differently.....
I still need to calculate torqpos and torqneg but these will be based on my new nomenclature that tries to make it more explicit how the torques are calculated as well as the normal force on the corner and the friction generated.
One of the features I have NOT dealt with yet is that the moment of inertia will change once the toast starts sliding. I'm going to let that go for now and merely calculate the normal, friction, and lateral forces on the toast to see at what point it might start to slide. Then I will worry about how to recalculate the moment of inertia after I do a first test.
Look at the analysis section immediately following for discussion of how this process developed......
### When it starts to slip...(initially ignored)
When the toast starts to slip things get complicated in a hurry. Perhaps most obviously the pivot point starts to move which means all of the torques and moment arms change as well as the moment of intertia. That will all be sort of straightforward. Tracking the motion of the toast as it slides off the edge seems painful since the acceleration and velocity will be in a slightly different direction with each successive iteration. Yikes.....
Remember that python keeps track of loops and other programming features through the indents in the code. All of this part of the model will need to take place inside the 'if-else' conditional test part way through the calculation. To be more specific, it is the 'else' part of the conditional test where all the action has to happen.
```
ndex1 = 0
while (ndex1 < numit) and (angpos[ndex1] < anglimit):
print("iteration: ",ndex1)
# These calculations take place in every iteration regardless of whether it's slipping or not.
# moment of inertia NOW - ndex1
dparallel[ndex1] = rside[ndex1] - twidth/2. # value changes if slipping
tinertianow[ndex1] = tinertiacm + tmass*dparallel[ndex1]**2
# torqnet NOW - ndex1
torqpos[ndex1] = np.cos(angpos[ndex1])*armr[ndex1]*weightr[ndex1]
torqneg[ndex1] = -np.cos(angpos[ndex1])*arml[ndex1]*weightl[ndex1]
torqnet[ndex1] = torqpos[ndex1] + torqneg[ndex1]
# angular acceleration NOW -ndex1
angaccel[ndex1] = torqnet[ndex1]/tinertianow[ndex1]
# NEXT position and velocity after tstep - ndex1+1
angvel[ndex1+1] = angvel[ndex1] + angaccel[ndex1]*(tstep/1000.0)
angpos[ndex1+1] = angpos[ndex1] + angvel[ndex1]*(tstep/1000.0) + 0.5*angaccel[ndex1]*(tstep/1000.0)*(tstep/1000.0)
currenttime[ndex1+1] = currenttime[ndex1] + tstep
# determine if the toast is slipping
# calculate normal, friction, and lateral forces NOW - ndex1
currentnormal = (weightr[ndex1] + weightl[ndex1])*np.cos(angpos[ndex1])
friction[ndex1] = currentnormal*coeffric
latgforce[ndex1] = (weightr[ndex1] + weightl[ndex1])*np.sin(angpos[ndex1])
parallelaccel[ndex1] = (latgforce[ndex1] - friction[ndex1])/(tmass)
# This is where I have to deal with the toast slipping. When the parallelaccel > 0
# then the toast is starting to slip.
if parallelaccel[ndex1] < 0.0: # NOT slipping
parallelaccel[ndex1] = 0.0
# update variables for next step = ndex1+1
rside[ndex1+1] = rside[ndex1]
lside[ndex1+1] = twidth - rside[ndex1+1]
armr[ndex1+1] = rside[ndex1+1]/2.0
arml[ndex1+1] = lside[ndex1+1]/2.0
weightr[ndex1+1] = lindensity*rside[ndex1+1]*gravity # weight of overhang
weightl[ndex1+1] = lindensity*lside[ndex1+1]*gravity # weight over table
slipangle = angpos[ndex1+1] # keep updating the slip angle until is starts slipping.
else:
print("Toast is slipping!!; ndex1: ", ndex1)
# determine NEXT sliding velocity - ndex1+1
slipvelx[ndex1+1] = slipvelx[ndex1] + np.cos(angpos[ndex1])*parallelaccel[ndex1]*tstep/1000.
slipvely[ndex1+1] = slipvely[ndex1] - np.sin(angpos[ndex1])*parallelaccel[ndex1]*tstep/1000.
slipveltot[ndex1+1] = np.sqrt(slipvelx[ndex1+1]**2 + slipvely[ndex1+1]**2)
# determine NEXT slid position - ndex1+1
slipposx[ndex1+1] = slipposx[ndex1] + slipvelx[ndex1+1]*tstep/1000.
slipposy[ndex1+1] = slipposy[ndex1] + slipvely[ndex1+1]*tstep/1000.
slipdisplace[ndex1+1] = np.sqrt(slipposx[ndex1+1]**2 + slipposy[ndex1+1]**2)
# find NEXT overhang, this affects the moment of inertia - ndex1+1
rside[ndex1+1] = rside[ndex1]+slipdisplace[ndex1+1]
lside[ndex1+1] = twidth - rside[ndex1+1]
weightr[ndex1+1] = lindensity*rside[ndex1+1]*gravity # weight of overhang
weightl[ndex1+1] = lindensity*lside[ndex1+1]*gravity # weight over table
# debugging help
# print("lateral accel (cm/s^2) : ", parallelaccel[ndex1])
# print("lateral g force: ", latgforce[ndex1])
# print("currenttime: ", currenttime[ndex1])
# print("velx: %.3f vely %.3f posx %.3f posy %.3f " % (slipvelx[ndex1],slipvely[ndex1],slipposx[ndex1],slipposy[ndex1]))
# print("slip velocity %.3f slip displacement %.3f " % (slipveltot[ndex1],slipdisplace[ndex1]))
# inputcont = input("continue?")
# debugging help
# print("Tpos: %.3f Tneg %.3f Ttot %.3f angaccel %.3f " % (torqpos2[ndex4],torqneg2[ndex4],torqtot[ndex4],angaccel2[ndex4]))
# print("cos(angpos): ", np.cos(angpos2[ndex2]))
# print("pos %.3f pos+ %.3f vel %.3f vel+ %.3f accel %.3f " % (angpos2[ndex4],angpos2[ndex4+1],angvel2[ndex4],angvel2[ndex4+1],angaccel2[ndex4]))
# inputcont = input("continue?")
# test for end point of rotation
if angpos[ndex1+1] > (np.pi/2.0):
ndex1 = ndex1 + 1
print ("Got to 90 degrees at ndex1: ", ndex1)
break # get out of the loop
ndex1 = ndex1 +1 # go to the next time increment
ndexfinal = ndex1
print("final index: ", ndex1)
print("Tpos: %.3f Tneg %.3f Ttot %.3f angaccel %.3f : torque 0.0 and angaccel 0.0" % (torqpos[ndex1],torqneg[ndex1],torqnet[ndex1],angaccel[ndex1]))
print("pos %.3f vel %.3f : angular position 1.55ish" % (angpos[ndex1],angvel[ndex1]))
print("Angle at which slipping begins is %.3f radians" % (slipangle))
```
### Plot lateral g force and friction to see crossover point.....
This introduces a different plotting requirement. I'm looking to understand where in the process the frictional force falls below the lateral g force resulting in the toast slipping. In previous dual plot I allowed the plot routines to set the scales on the vertical axes internally. Now I need to make sure both variable share the same vertical axis scale so the visual crossover point is in fact what I'm looking for. The first time I did this it looked good but because the scales on the left and right weren't the same it was misleading.
```
plt.plot(currenttime, latgforce, color = "blue", label = "lateral g force")
plt.plot(currenttime, friction, color = "red", label = "friction")
plt.title("Is it slipping?")
plt.ylabel("force");
plt.legend();
```
### Analysis
The first time I ran the analysis above with the possibility of slipping I screwed up the cos/sin thing and it started slipping right away. Fixed that and then it began slipping, with a coefficient of friction of 0.4, at the 6th iteration (60 ms). I increased the coefficient of friction to 0.8 and it went up to the 8th iteration before slipping. This is qualitatively what one would expect. Interestingly if you go back to the rotation speed plot it seems hopeful that if the toast starts to slide around 60-80 ms that would significantly reduce its rotational velocity as it starts to fall, which is what would be consistent with the experimental evidence.
Now I need to go back and build in the impact of the slipping which will be a bit of a pain. The discussion for this is back a few cells.
It feels like I have the slipping part working appropriately now. If I increase the coefficient of friction the angle at which it starts to slip is higher AND the final angular velocity is higher by a little.
### New Drop time
To get the rotation of the toast I need to calculate the drop time taking into account that because of slipping (and rotation actually) the toast has some downward velocity when it comes off the edge of the counter. That will slightly reduce the drop time.
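For clarity, the quadratic set up in the next cell is just constant-acceleration kinematics for the fall, with $h$ the counter height and $v_y$ the (downward, hence negative) vertical velocity of the toast as it leaves the edge:

$$0 = -\tfrac{g}{2}\,t^{2} + v_y\,t + h$$

The positive root is the drop time, and multiplying it by the final angular velocity gives the total rotation before the toast reaches the floor.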
```
quadcoef[0] = -gravity/2.0
quadcoef[2] = counterheight
quadcoef[1] = slipvely[ndexfinal-1]
droptime = np.roots(quadcoef)
if droptime[0] > 0.0: # assume 2 roots and only one is positive....could be a problem
finalrotation = droptime[0]*angvel[ndexfinal]
timetofloor = droptime[0]
else:
finalrotation = droptime[1]*angvel[ndexfinal]
timetofloor = droptime[1]
print("Final Report:")
print("Final Rotation at Floor (rad): ", finalrotation)
print("Angular velocity coming off the table (rad/s):", angvel[ndexfinal])
print("Time to reach floor (s):", timetofloor)
print("Initial overhang (%):", overhang)
print("Coefficient of Friction:", coeffric)
print("Angle at which slipping started (rad):", slipangle)
print("Time until comes off edge (ms): ", currenttime[ndexfinal])
print()
print()
# debug
print("coef 0 (t^2): ", quadcoef[0])
print("coef 1 (t): ", quadcoef[1])
print("coef 2 (const):", quadcoef[2])
print("root 1: ", droptime[0])
print("root 2: ", droptime[1])
```
### Analysis
Results from this reworked version of the model are quite sensitive to the coefficient of friction. With a $\mu$ of 0.8 the predicted rotation is 6.4 radians but with a $\mu$ of 0.3 the predicted rotation is roughly $3\pi/2$, which is less than a full rotation. It strikes me that it would be interesting to print out the angle at which slipping starts since this is a static calculation and could be compared to experiment. Generally though this model produces results that are consistent with observation.
My next step will be to go back and capture the angle at which slipping happens and see how that compares to some quick experiments. I also adjust the output of the last cell to provide a more useful summary of the outcome.
### Next steps....
What feels like the next step is to wrap this calculation in another loop so that I don't have to try out a range of different coefficients of friction. I could then plot the final rotation angle as function of the coefficient of friction for different overhangs. That would produce a fascinating plot....
I also want to try and figure out how hard it would be to animate this thing....hmmmm
| github_jupyter |
# Deep learning for Natural Language Processing
* Simple text representations, bag of words
* Word embedding and... not just another word2vec this time
* 1-dimensional convolutions for text
* Aggregating several data sources "the hard way"
* Solving ~somewhat~ real ML problem with ~almost~ end-to-end deep learning
Special thanks to Irina Golzmann for help with technical part.
# NLTK
You will require nltk v3.2 to solve this assignment
__It is really important that the version is 3.2, otherwise the Russian tokenizer might not work__
Install/update
* `sudo pip install --upgrade nltk==3.2`
* If you don't remember when was the last pip upgrade, `sudo pip install --upgrade pip`
If for some reason you can't or won't switch to nltk v3.2, just make sure that Russian words are tokenized properly with RegexpTokenizer.
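As a quick sanity check (a sketch, not part of the assignment), `RegexpTokenizer(r"\w+")` compiled with nltk's default Unicode flags should split words of any alphabet, including Cyrillic, into separate tokens:
```python
from nltk.tokenize import RegexpTokenizer

tokenizer = RegexpTokenizer(r"\w+")
# expected: [u'Требуется', u'программист', u'со', u'знанием', u'Python']
print(tokenizer.tokenize(u"Требуется программист со знанием Python"))
```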
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
```
# Dataset
Ex-kaggle-competition on job salary prediction

Original contest - https://www.kaggle.com/c/job-salary-prediction
### Download
Go [here](https://www.kaggle.com/c/job-salary-prediction) and download as usual
CSC cloud: data should already be here somewhere, just poke the nearest instructor.
# What's inside
Different kinds of features:
* 2 text fields - title and description
* Categorical fields - contract type, location
Only 1 target: the normalized salary of the advertised job
* a single continuous value, so this ends up as a regression problem
* diving into the data may result in prolonged sleep disorders
```
df = pd.read_csv("./Train_rev1.csv",sep=',')
print df.shape, df.SalaryNormalized.mean()
df[:5]
```
# Tokenizing
First, we create a dictionary of all existing words.
Assign each word a number - it's Id
```
from nltk.tokenize import RegexpTokenizer
from collections import Counter,defaultdict
tokenizer = RegexpTokenizer(r"\w+")
#Dictionary of tokens
token_counts = Counter()
#All texts
all_texts = np.hstack([df.FullDescription.values,df.Title.values])
#Compute token frequencies
for s in all_texts:
if type(s) is not str:
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
for token in tokens:
token_counts[token] +=1
```
### Remove rare tokens
We are unlikely to make use of words that are only seen a few times throughout the corpora.
Again, if you want to beat Kaggle competition metrics, consider doing something better.
```
#Word frequency distribution, just for kicks
_=plt.hist(token_counts.values(),range=[0,50],bins=50)
#Select only the tokens that had at least min_count occurrences in the corpora.
#Use token_counts.
min_count = 5
tokens = <tokens from token_counts keys that had at least min_count occurences throughout the dataset>
token_to_id = {t:i+1 for i,t in enumerate(tokens)}
null_token = "NULL"
token_to_id[null_token] = 0
print "# Tokens:",len(token_to_id)
if len(token_to_id) < 10000:
print "Alarm! It seems like there are too few tokens. Make sure you updated NLTK and applied correct thresholds -- unless you now what you're doing, ofc"
if len(token_to_id) > 100000:
print "Alarm! Too many tokens. You might have messed up when pruning rare ones -- unless you know what you're doin' ofc"
```
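One possible way to fill in the `tokens` blank above (a sketch, not the official solution): keep every token whose count reaches `min_count`.
```python
# assumes token_counts and min_count from the cell above
tokens = [t for t, c in token_counts.items() if c >= min_count]
```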
### Replace words with IDs
Set a maximum length for titles and descriptions.
* If string is longer than that limit - crop it, if less - pad with zeros.
* Thus we obtain a matrix of size [n_samples]x[max_length]
* Element at i,j - is an identifier of word j within sample i
```
def vectorize(strings, token_to_id, max_len=150):
token_matrix = []
for s in strings:
if type(s) is not str:
token_matrix.append([0]*max_len)
continue
s = s.decode('utf8').lower()
tokens = tokenizer.tokenize(s)
token_ids = map(lambda token: token_to_id.get(token,0), tokens)[:max_len]
token_ids += [0]*(max_len - len(token_ids))
token_matrix.append(token_ids)
return np.array(token_matrix)
desc_tokens = vectorize(df.FullDescription.values,token_to_id,max_len = 500)
title_tokens = vectorize(df.Title.values,token_to_id,max_len = 15)
```
### Data format examples
```
print "Matrix size:",title_tokens.shape
for title, tokens in zip(df.Title.values[:3],title_tokens[:3]):
print title,'->', tokens[:10],'...'
```
__As you can see, our preprocessing is somewhat crude. Let us see if that is enough for our network.__
# Non-sequences
Some data features are categorical data. E.g. location, contract type, company
They require a separate preprocessing step.
```
#One-hot-encoded category and subcategory
from sklearn.feature_extraction import DictVectorizer
categories = []
data_cat = df[["Category","LocationNormalized","ContractType","ContractTime"]]
categories = [A list of dictionaries {"category":category_name, "subcategory":subcategory_name} for each data sample]
vectorizer = DictVectorizer(sparse=False)
df_non_text = vectorizer.fit_transform(categories)
df_non_text = pd.DataFrame(df_non_text,columns=vectorizer.feature_names_)
```
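One possible way to fill in the `categories` blank above (an assumption, not the official solution): the placeholder talks about category/subcategory keys, but any per-row dict of column name to string value works, since `DictVectorizer` one-hot encodes whatever keys it sees.
```python
# assumes data_cat from the cell above; fillna keeps DictVectorizer happy with missing values
categories = data_cat.fillna("NaN").to_dict(orient="records")
```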
# Split data into training and test
```
#Target variable - the normalized salary we want to predict
target = df.SalaryNormalized.values.astype('float32')
#Preprocessed titles
title_tokens = title_tokens.astype('int32')
#Preprocessed tokens
desc_tokens = desc_tokens.astype('int32')
#Non-sequences
df_non_text = df_non_text.astype('float32')
#Split into training and test set.
#Difficulty selector:
#Easy: split randomly
#Medium: split by companies, make sure no company is in both train and test set
#Hard: do whatever you want, but score yourself using kaggle private leaderboard
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = <define_these_variables>
```
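A sketch of the easy option from the difficulty selector above (a plain random split; the 25% test fraction and random seed are arbitrary choices):
```python
from sklearn.model_selection import train_test_split  # sklearn.cross_validation in older versions

(title_tr, title_ts, desc_tr, desc_ts,
 nontext_tr, nontext_ts, target_tr, target_ts) = train_test_split(
    title_tokens, desc_tokens, df_non_text.values, target,
    test_size=0.25, random_state=42)
```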
## Save preprocessed data [optional]
* The next tab can be used to stash all the essential data matrices and get rid of the rest of the data.
* Highly recommended if you have less than 1.5GB RAM left
* To do that, you need to first run it with save_prepared_data=True, then restart the notebook and only run this tab with read_prepared_data=True.
```
save_prepared_data = True #save
read_prepared_data = False #load
#but not both at once
assert not (save_prepared_data and read_prepared_data)
if save_prepared_data:
print "Saving preprocessed data (may take up to 3 minutes)"
    import pickle
    data_tuple = (title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts) #pack everything that must survive a restart
    with open("preprocessed_data.pcl",'w') as fout:
        pickle.dump(data_tuple,fout)
with open("token_to_id.pcl",'w') as fout:
pickle.dump(token_to_id,fout)
print "done"
elif read_prepared_data:
print "Reading saved data..."
import pickle
with open("preprocessed_data.pcl",'r') as fin:
data_tuple = pickle.load(fin)
title_tr,title_ts,desc_tr,desc_ts,nontext_tr,nontext_ts,target_tr,target_ts = data_tuple
with open("token_to_id.pcl",'r') as fin:
token_to_id = pickle.load(fin)
#Re-importing libraries to allow staring noteboook from here
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
print "done"
```
# Train the monster
Since we have several data sources, our neural network may differ from what you used to work with.
* Separate input for titles
* cnn+global max or RNN
* Separate input for description
* cnn+global max or RNN
* Separate input for categorical features
* Few dense layers + some black magic if you want
These three inputs must be blended somehow - concatenated or added.
* Output: a simple regression task
```
#libraries
import lasagne
from theano import tensor as T
import theano
#3 inputs and a reference output
title_token_ids = T.matrix("title_token_ids",dtype='int32')
desc_token_ids = T.matrix("desc_token_ids",dtype='int32')
categories = T.matrix("categories",dtype='float32')
target_y = T.vector("is_blocked",dtype='float32')
```
# NN architecture
```
title_inp = lasagne.layers.InputLayer((None,title_tr.shape[1]),input_var=title_token_ids)
descr_inp = lasagne.layers.InputLayer((None,desc_tr.shape[1]),input_var=desc_token_ids)
cat_inp = lasagne.layers.InputLayer((None,nontext_tr.shape[1]), input_var=categories)
# Descriptions
#word-wise embedding. We recommend to start from some 64 and improving after you are certain it works.
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp,
input_size=len(token_to_id)+1,
output_size=?)
#reshape from [batch, time, unit] to [batch,unit,time] to allow 1d convolution over time
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
descr_nn = 1D convolution over embedding, maybe several ones in a stack
#pool over time
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn,T.max)
#Possible improvements here are adding several parallel convs with different filter sizes or stacking them the usual way
#1dconv -> 1d max pool ->1dconv and finally global pool
# Titles
title_nn = <Process titles somehow (title_inp)>
# Non-sequences
cat_nn = <Process non-sequences(cat_inp)>
nn = <merge three layers into one (e.g. lasagne.layers.concat) >
nn = lasagne.layers.DenseLayer(nn,your_lucky_number)
nn = lasagne.layers.DropoutLayer(nn,p=maybe_use_me)
nn = lasagne.layers.DenseLayer(nn,1,nonlinearity=lasagne.nonlinearities.linear)
```
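One possible way to fill in the blanks above -- a minimal sketch under my own assumptions (embedding sizes, filter counts and unit counts are arbitrary), not the only or best architecture. It restates the description branch so the whole network reads in one place:
```python
# Descriptions: embedding -> 1d conv over time -> global max pool
descr_nn = lasagne.layers.EmbeddingLayer(descr_inp, input_size=len(token_to_id)+1, output_size=64)
descr_nn = lasagne.layers.DimshuffleLayer(descr_nn, [0,2,1])
descr_nn = lasagne.layers.Conv1DLayer(descr_nn, num_filters=64, filter_size=3,
                                      nonlinearity=lasagne.nonlinearities.rectify)
descr_nn = lasagne.layers.GlobalPoolLayer(descr_nn, T.max)

# Titles: same recipe, shorter sequences
title_nn = lasagne.layers.EmbeddingLayer(title_inp, input_size=len(token_to_id)+1, output_size=64)
title_nn = lasagne.layers.DimshuffleLayer(title_nn, [0,2,1])
title_nn = lasagne.layers.Conv1DLayer(title_nn, num_filters=64, filter_size=2,
                                      nonlinearity=lasagne.nonlinearities.rectify)
title_nn = lasagne.layers.GlobalPoolLayer(title_nn, T.max)

# Non-sequences: a small dense stack
cat_nn = lasagne.layers.DenseLayer(cat_inp, 128)
cat_nn = lasagne.layers.DenseLayer(cat_nn, 64)

# Merge the three branches along the feature axis, then finish as in the cell above
nn = lasagne.layers.ConcatLayer([descr_nn, title_nn, cat_nn], axis=1)
nn = lasagne.layers.DenseLayer(nn, 256)
nn = lasagne.layers.DropoutLayer(nn, p=0.5)
nn = lasagne.layers.DenseLayer(nn, 1, nonlinearity=lasagne.nonlinearities.linear)
```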
# Loss function
* The standard way:
* prediction
* loss
* updates
* training and evaluation functions
```
#All trainable params
weights = lasagne.layers.get_all_params(nn,trainable=True)
#Simple NN prediction
prediction = lasagne.layers.get_output(nn)[:,0]
#loss function
loss = lasagne.objectives.squared_error(prediction,target_y).mean()
#Weight optimization step
updates = <your favorite optimizer>
```
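One reasonable choice for the optimizer blank above (an assumption -- any `lasagne.updates` rule works here):
```python
updates = lasagne.updates.adam(loss, weights, learning_rate=1e-3)
```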
### Determinitic prediction
* In case we use stochastic elements, e.g. dropout or noize
* Compile a separate set of functions with deterministic prediction (deterministic = True)
* Unless you think there's no neet for dropout there ofc. Btw is there?
```
#deterministic version
det_prediction = lasagne.layers.get_output(nn,deterministic=True)[:,0]
#equivalent loss function
det_loss = <an exercise in copy-pasting and editing>
```
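The copy-paste exercise spelled out (a sketch): the deterministic loss mirrors the stochastic one, just built from `det_prediction`.
```python
det_loss = lasagne.objectives.squared_error(det_prediction, target_y).mean()
```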
### Coffee-lation
```
train_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[loss,prediction],updates = updates)
eval_fun = theano.function([desc_token_ids,title_token_ids,categories,target_y],[det_loss,det_prediction])
```
# Training loop
* The regular way with loops over minibatches
* Since the dataset is huge, we define an epoch as some fixed amount of samples instead of the whole dataset
```
# Out good old minibatch iterator now supports arbitrary amount of arrays (X,y,z)
def iterate_minibatches(*arrays,**kwargs):
batchsize=kwargs.get("batchsize",100)
shuffle = kwargs.get("shuffle",True)
if shuffle:
indices = np.arange(len(arrays[0]))
np.random.shuffle(indices)
for start_idx in range(0, len(arrays[0]) - batchsize + 1, batchsize):
if shuffle:
excerpt = indices[start_idx:start_idx + batchsize]
else:
excerpt = slice(start_idx, start_idx + batchsize)
yield [arr[excerpt] for arr in arrays]
```
### Tweaking guide
* batch_size - how many samples are processed per function call
* optimization gets slower, but more stable, as you increase it.
* May consider increasing it halfway through training
* minibatches_per_epoch - max amount of minibatches per epoch
* Does not affect training. Lesser value means more frequent and less stable printing
* Setting it to less than 10 is only meaningful if you want to make sure your NN does not break down after one epoch
* n_epochs - total amount of epochs to train for
* `n_epochs = 10**10` and manual interrupting is still an option
Tips:
* With small minibatches_per_epoch, network quality may jump up and down for several epochs
* Plotting metrics over training time may be a good way to analyze which architectures work better.
* Once you are sure your network aint gonna crash, it's worth letting it train for a few hours of an average laptop's time to see its true potential
```
from sklearn.metrics import mean_squared_error,mean_absolute_error
n_epochs = 100
batch_size = 100
minibatches_per_epoch = 100
for i in range(n_epochs):
#training
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_tr,title_tr,nontext_tr,target_tr,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch:break
loss,pred_probas = train_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Train:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
if j > minibatches_per_epoch: break
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Val:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
print "If you are seeing this, it's time to backup your notebook. No, really, 'tis too easy to mess up everything without noticing. "
```
# Final evaluation
Evaluate network over the entire test set
```
#evaluation
epoch_y_true = []
epoch_y_pred = []
b_c = b_loss = 0
for j, (b_desc,b_title,b_cat, b_y) in enumerate(
iterate_minibatches(desc_ts,title_ts,nontext_ts,target_ts,batchsize=batch_size,shuffle=True)):
loss,pred_probas = eval_fun(b_desc,b_title,b_cat,b_y)
b_loss += loss
b_c +=1
epoch_y_true.append(b_y)
epoch_y_pred.append(pred_probas)
epoch_y_true = np.concatenate(epoch_y_true)
epoch_y_pred = np.concatenate(epoch_y_pred)
print "Scores:"
print '\tloss:',b_loss/b_c
print '\trmse:',mean_squared_error(epoch_y_true,epoch_y_pred)**.5
print '\tmae:',mean_absolute_error(epoch_y_true,epoch_y_pred)
```
Now tune the monster for the least MSE you can get!
# Next time in our show
* Recurrent neural networks
* How to apply them to practical problems?
* What else can they do?
* Why so much hype around LSTM?
* Stay tuned!
| github_jupyter |
# Chapter 3: Keras and TensorFlow
## Main topics
- Essential building blocks of deep learning
- A brief introduction to Keras and TensorFlow
- Setting up a deep learning workspace with TensorFlow, Keras, and a GPU
- Implementing the core components of a neural network with Keras and TensorFlow
## 3.1 Introduction to TensorFlow
### TensorFlow
- A machine learning __platform__ developed primarily by Google
    - TF-Agents: support for reinforcement learning research
    - TFX: support for running machine learning project workflows
    - TF-Hub: provides pretrained models
- Python based
- Supports tensor operations
### Differences from NumPy
- Automatically computes gradients of differentiable functions
- Can use high-performance parallel hardware accelerators such as GPUs and TPUs
- Highly scalable: used in real-world settings that demand very large amounts of data and computation, such as weather forecasting and Go-playing programs
- Models can easily be ported to domain-specific programs where other languages are preferred, such as C++ (games), JavaScript (web browsers), and TFLite (mobile devices)
## 3.2 Keras
### Keras and TensorFlow
- Provides an interface optimized for training deep learning models.
- Originally started independently of TensorFlow.
- Included as the top-level framework of the TensorFlow library since TensorFlow 2.0.
- Offers a wide range of workflows: both high-level and low-level approaches to building and training models
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/keras_and_tf.png" style="width:650px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://www.manning.com/books/deep-learning-with-python-second-edition)
## 3.3 A brief history of Keras and TensorFlow
- 2007: Theano released by a research team at the University of Montreal, Canada.
    - First to use computation graphs, automatic differentiation, etc.
- March 2015: the Keras library released.
    - A high-level package using Theano as its backend.
- November 2015: the TensorFlow library released.
- 2016: TensorFlow became the default backend of Keras.
- 2017: Theano, TensorFlow, CNTK (Microsoft), and MXNet (Amazon) all supported as Keras backends.
- September 2019: starting with TensorFlow 2.0, Keras designated as TensorFlow's top-level framework.
## 3.4 Setting up a deep learning workspace
### GPU options
- A personal PC or laptop with an NVIDIA graphics card
    - For heavy deep learning use
    - Installing Ubuntu or using WSL (Windows Subsystem for Linux) is recommended
- Google Cloud Platform or Amazon Web Services (AWS EC2)
    - For short-term access to a high-performance machine
- __Google Colab__
    - Recommended for following this course
### Using Google Colab
- For basic usage, see any online tutorial.
- Additional packages needed to run the code can be installed with pip (the Python package manager):
```python
!pip install package_name
```
- Note: the exclamation mark (`!`) is used to run a terminal command from a Jupyter notebook code cell.
- Using a GPU: simply set the runtime type to GPU.
- Using a TPU: requires a somewhat more involved setup; see Chapter 13.
## 3.5 First steps with TensorFlow
### Core ingredients of training a neural network, part 1
1. Constant tensors and variable tensors
    - Constant tensor: a tensor that does not change, e.g. input/output data
    - Variable: a tensor that gets updated, e.g. model weights and biases
1. Tensor operations: addition, `relu`, squaring, etc.
1. Backpropagation:
    - Compute the gradient of the loss function, then update the model weights
    - Done with a gradient tape (`GradientTape`)
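A minimal sketch of these three ingredients (not part of the original slides; the numbers and variable names are arbitrary):
```python
import tensorflow as tf

x = tf.constant(3.0)                 # constant tensor: does not change
w = tf.Variable(2.0)                 # variable tensors: updated during training
b = tf.Variable(0.5)

with tf.GradientTape() as tape:      # record operations for automatic differentiation
    y = w * x + b                    # tensor operations
    loss = tf.square(y - 10.0)

grads = tape.gradient(loss, [w, b])  # gradients of the loss w.r.t. the variables
w.assign_sub(0.01 * grads[0])        # one gradient-descent update step
b.assign_sub(0.01 * grads[1])
```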
## 3.6 Understanding the core Keras APIs
### Core ingredients of training a neural network, part 2
1. Layers and models: a model is built by stacking layers appropriately
1. Loss function: provides the feedback signal that guides learning
1. Optimizer: determines how the learning proceeds
1. Metrics: used to evaluate model performance, e.g. accuracy
1. Training loop: runs mini-batch gradient descent
### The role of a layer
- Stores the weights and biases that serve as the model's state (knowledge)
- Transforms data representations (the forward pass)
- A Keras deep learning model is an appropriate assembly of compatible layers
#### Layer types and the tensors they process
- Dense layers (the `Dense` class):
    datasets provided as 2D tensors of shape `(samples, features)`
- Recurrent layers (the `LSTM` class, the `Conv1D` class, etc.):
    sequential datasets provided as 3D tensors of shape `(samples, timesteps, features)`
- Layers using the `Conv2D` class, etc.:
    image datasets provided as 4D tensors of shape `(samples, height, width, channels)`
#### The `Layer` class and the `__call__()` method
- The parent class of every layer used in Keras
- The role of the `__call__()` method:
    - Create and initialize the weight and bias tensors
    - Transform input data into output data
#### A rough sketch of the `__call__()` method
```python
def __call__(self, inputs):
if not self.built:
self.build(inputs.shape)
self.built = True
return self.call(inputs)
```
- `self.built`: remembers whether the weight and bias tensors have already been initialized
- `self.build(inputs.shape)`: uses the shape of the input batch (`inputs`) to
    - create the weight tensor and initialize it randomly
    - create the bias tensor and initialize it to zeros
- `self.call(inputs)`: computes the output (the forward pass)
    - applies the affine transformation and then the activation function
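The `SimpleDense` appearing in the next code cell is a custom layer of exactly this kind (it is not a built-in `keras.layers` class); the book defines one like it. A minimal sketch under my own assumptions:
```python
import tensorflow as tf
from tensorflow import keras

class SimpleDense(keras.layers.Layer):
    def __init__(self, units, activation=None):
        super().__init__()
        self.units = units
        # accepts a string ("relu"), a callable, or None
        self.activation = keras.activations.get(activation)

    def build(self, input_shape):
        input_dim = input_shape[-1]
        # weight tensor, created from the input shape and randomly initialized
        self.W = self.add_weight(shape=(input_dim, self.units),
                                 initializer="random_normal")
        # bias tensor, initialized to zeros
        self.b = self.add_weight(shape=(self.units,), initializer="zeros")

    def call(self, inputs):
        # affine transformation followed by the activation function
        return self.activation(tf.matmul(inputs, self.W) + self.b)
```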
### From layers to models
- A layer infers the shape of its inputs the first time it sees them.
- Like the `Dense` layers used in the MNIST model, it does not require information about the input data in advance:
```python
from tensorflow import keras
from tensorflow.keras import layers
model = keras.Sequential([
layers.SimpleDense(512, activation="relu"),
layers.SimpleDense(10, activation="softmax")
])
```
#### Deep learning models
- A model is a graph of layers.
- Example: the `Sequential` model
    - A neural network whose layers are stacked in a single line
    - Each layer receives the values passed up from the layer below, transforms them, and passes them on to the layer above
- Example: the Transformer
<div align="center"><img src="https://drek4537l1klr.cloudfront.net/chollet2/v-7/Figures/transformer0001.png" style="width:400px;"></div>
Image source: [Deep Learning with Python (Manning MEAP)](https://www.manning.com/books/deep-learning-with-python-second-edition)
#### Network topology and the hypothesis space
- What a model can learn depends entirely on how its layers are arranged.
- A `Sequential` model made of several `Dense` layers:
    - transforms data representations by successively applying affine transformations and activation functions such as `relu()`
- A model arranged differently transforms tensor representations differently.
- The way the layers are arranged therefore determines the space of representations the tensors can take:
  the **network topology defines the hypothesis space**.
- Choosing a network architecture:
    - depends on the dataset and on the purpose of the model
    - there are no hard rules or theory
    - relies on experience gained through practice rather than on theory
### Compiling a model
After defining the model's architecture, three more settings must be specified:
- Optimizer: the algorithm that updates the weights in the direction that improves the model's performance.
- Loss function: the quantity that measures how well (or poorly) the model is doing during training.
  It must be differentiable, and the optimizer uses gradient descent to drive its value down.
- Metrics: measures of model performance, such as accuracy, used to monitor training and testing.
  In general they are unrelated to the optimizer or the loss function.
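A minimal example of how these three settings are passed to `compile()` (the argument values here are placeholders, not taken from the original slides):
```python
model.compile(optimizer="rmsprop",          # optimizer
              loss="mean_squared_error",    # loss function
              metrics=["accuracy"])         # metrics to monitor
```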
### How the `fit()` method works
To train a model, call the `fit()` method with the appropriate arguments:
- Training set: usually NumPy arrays or a TensorFlow `Dataset` object
- Epochs (`epochs`): how many times to iterate over the whole training set
- Batch size (`batch_size`): the size of the batches used by (mini-batch) gradient descent
The code below trains on the positive/negative dataset of shape (2000, 2) that was generated earlier with NumPy arrays.
### Using a validation set
To judge whether a trained model predicts well on completely new data, the full dataset has to be split into a training set and a **validation set**.
- Training set: the data used to train the model
- Validation set: the data used to evaluate the trained model
```python
model.fit(
training_inputs,
training_targets,
epochs=5,
batch_size=16,
validation_data=(val_inputs, val_targets)
)
```
| github_jupyter |
<h1>Linear Algebra (CpE210A)
<h3>Midterms Project
Coded and submitted by:<br>
<i>Galario, Adrian Q.<br>
201814169 <br>
58051</i>
Directions
This Jupyter Notebook will serve as your base code for your Midterm Project. You must further format and provide complete discussion on the given topic.
- Provide all necessary explanations for specific code blocks.
- Provide illustrations for key results.
- Observe clean code (intuitive variable names, proper commenting, proper code spacing)
- Provide a summary discussion at the end
Failure to use this format or failure to update the document will be given a deduction equivalent to 50% of the original score.
### Case
Bebang is back to consult you about her business. Furthering her data analytics initiative she asks you for help to compute some relevant data. Now she is asking you to compute and visualize her sales and costs for the past year. She has given you the datasets attached to her request.
### Problem
State and explain Bebang's problem here and provide the deliverables.
# Proof of Concept
Now that you have a grasp on the requirements we need to start with making a program to prove that her problem is solvable. As a Linear Algebra student, we will be focusing on applying vector operations to meet her needs. First, we need to import her data. We will use the `pandas` library for this. For more information you can look into their documentation [here](https://pandas.pydata.org/).
```
import seaborn as sns
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
%matplotlib inline
df_prices = pd.read_csv(r'C:\Users\EyyGiee\Desktop\Bebang\bebang prices.csv')
df_sales = pd.read_csv(r'C:\Users\EyyGiee\Desktop\Bebang\bebang sales.csv')
df_prices
df_sales
```
## Part 1: Monthly Sales
```
sales_mat = np.array(df_sales.set_index('flavor'))
prices_mat = np.array(df_prices.set_index('Unnamed: 0'))[0]
costs_mat = np.array(df_prices.set_index('Unnamed: 0'))[1]
price_reshaped=np.reshape(prices_mat,(12,1))
cost_reshaped=np.reshape(costs_mat,(12,1))
print(sales_mat.shape)
print(price_reshaped.shape)
print(cost_reshaped.shape)
```
#### Formulas
Take note that the formula for revenue is: <br>
$revenue = sales * price $ <br>
In this case, think that revenue, sales, and price are vectors instead of individual values <br>
The formula of cost per item sold is: <br>
$cost_{sold} = sales * cost$ <br>
The formula for profit is: <br>
$profit = revenue - cost_{sold}$ <br>
Solving for the monthly profit will be the sum of all profits made on that month.
```
## Function that returns and prints the monthly sales and profit for each month
def monthly_sales(price, cost, sales):
monthly_revenue = sum(sales*price)
monthly_costs = sum(sales*cost)
monthly_profits = (monthly_revenue - monthly_costs)
return monthly_revenue.flatten(), monthly_costs.flatten(), monthly_profits.flatten()
### Using the monthly_sales function to compute for the revenue, cost, and profit
## Then passing the values to month_rev, month_cost, and month_profit
month_rev, month_cost, month_profit = monthly_sales(prices_mat, costs_mat, sales_mat)
### printing the values
print("Monthly Revenue(Starting from the month of January): \n", month_rev)
print("\nYearly Revenue: \n", sum(month_rev))
print("\nMonthly Cost(Starting from the month of January): \n", month_cost)
print("\nYearly Cost: \n", sum(month_cost))
print("\nMonthly Profit(Starting from the month of January): \n", month_profit)
print("\nYearly Profit: \n", sum(month_profit))
```
## Part 2: Flavor Sales
```
## Function that returns and prints the flavor profits for the whole year
def flavor_sales(price, cost, sales):
flavor_revenue = sales*price
flavor_costs = sales*cost
flavor_profits = flavor_revenue - flavor_costs
return flavor_profits.flatten()
### Using the flavor_sales function to compute for the profit
## Then passing the values to flavor_profit variable
flavor_profit = flavor_sales(prices_mat, costs_mat, sales_mat)
## Values of profit of flavors will be inserted here
flavor1 = []
flavor2 = []
flavor3 = []
flavor4 = []
flavor5 = []
flavor6 = []
flavor7 = []
flavor8 = []
flavor9 = []
flavor10 = []
flavor11 = []
flavor12 = []
## Loop that will append the values(profit) to their respective variables above
## The variables above was created so that the sum can be computed by row(to get the yearly profit per flavor)
## Unlike getting the sum of flavor_profits inside the function flavor_sales, it will get the sum per column(which will get the profit of all flavor per month)
for x in flavor_profit:
if len(flavor1)<=11:
flavor1.append(x)
elif len(flavor2)<=11:
flavor2.append(x)
elif len(flavor3)<=11:
flavor3.append(x)
elif len(flavor4)<=11:
flavor4.append(x)
elif len(flavor5)<=11:
flavor5.append(x)
elif len(flavor6)<=11:
flavor6.append(x)
elif len(flavor7)<=11:
flavor7.append(x)
elif len(flavor8)<=11:
flavor8.append(x)
elif len(flavor9)<=11:
flavor9.append(x)
elif len(flavor10)<=11:
flavor10.append(x)
elif len(flavor11)<=11:
flavor11.append(x)
elif len(flavor12)<=11:
flavor12.append(x)
## Profit of each flavor per year
flavor_profits = np.array([sum(flavor1),sum(flavor2),sum(flavor3),sum(flavor4),sum(flavor5),sum(flavor6),sum(flavor7),sum(flavor8),sum(flavor9),
sum(flavor10),sum(flavor11),sum(flavor12)])
### Printing the values
print("The row represents each flavor while the column represents the months")
print("The order of flavor and months in rows and columns is the same as in df_sales\n")
print("Profit of Flavor per Month: \n", flavor_profit)
print("\nThe order of the flavor is the same as in df_sales\n")
print("Flavor Profit per Year: \n", flavor_profits)
## Putting the list of flavors into array
flavors = np.array(pd.read_csv("bebang sales.csv", usecols=[0]))
## Converting the arrays into lists
## Using list is easier to match/zip them
fprofit_list = flavor_profits.tolist()
flavor_list = flavors.tolist()
## Matched the two list, to know the profit of each flavor and to be sorted later
matched_list = list(zip(fprofit_list, flavor_list))
### Sorting of the flavors by their profit and displaying the first element(flavors) only
best_3_flavors = [x[1] for x in sorted(matched_list, reverse=True)]
worst_3_flavors = [x[1] for x in sorted(matched_list)]
## Printing of the three best and worst flavors
print("Best Selling Flavors: \n", best_3_flavors[0:3])
print("\nWorst Selling Flavors: \n", worst_3_flavors[0:3])
```
## Part 3: Visualizing the Data (Optional for +40%)
You can try to visualize the data in the most comprehensible chart that you can use.
```
import matplotlib.pyplot as plt
import matplotlib
import seaborn as sns
import pandas as pd
import csv
%matplotlib inline
```
#### Entire Dataset
```
## Graph for Sales of each flavor
## Table inside the original file(bebang sales) was transposed in the excel, columns were converted to rows
df_sales_Transposed = pd.read_csv(r"C:\Users\EyyGiee\Desktop\Bebang\bebang sales(transpose).csv")
## Transposing the table makes it easier to plot the data inside it
## The column header 'flavor' was changed to 'Months'
df_sales_Transposed.plot(x="Months", figsize=(25,15))
plt.title('Sales of Each Flavor')
## Graph for Price vs Cost per Flavor
## Declaring the font size and weight to be used in the graph
font = {'weight' : 'bold',
'size' : 15}
matplotlib.rc('font', **font)
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,4,4])
ax.set_title('Price vs Cost per Flavor')
## For the legends used in the graph
colors = {'Price':'blue', 'Cost':'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0,0),1,1, color=colors[label]) for label in labels]
plt.legend(handles, labels, loc='upper left', prop={'size': 40})
## Plotting of the values for the bar graph
## Price and cost were plotted in one x-axis per flavor to see the difference between the two variable
ax.bar('Red Velvet' ,prices_mat[0], color = 'b', width = 0.50)
ax.bar('Red Velvet' ,costs_mat[0], color = 'g', width = 0.50)
ax.bar('Oreo' ,prices_mat[1], color = 'b', width = 0.50)
ax.bar('Oreo' ,costs_mat[1], color = 'g', width = 0.50)
ax.bar('Super Glazed' ,prices_mat[2], color = 'b', width = 0.50)
ax.bar('Super Glazed' ,costs_mat[2], color = 'g', width = 0.50)
ax.bar('Almond Honey' ,prices_mat[3], color = 'b', width = 0.50)
ax.bar('Almond Honey' ,costs_mat[3], color = 'g', width = 0.50)
ax.bar('Matcha' ,prices_mat[4], color = 'b', width = 0.50)
ax.bar('Matcha' ,costs_mat[4], color = 'g', width = 0.50)
ax.bar('Strawberry Cream' ,prices_mat[5], color = 'b', width = 0.50)
ax.bar('Strawberry Cream' ,costs_mat[5], color = 'g', width = 0.50)
ax.bar('Brown \nSugar Boba' ,prices_mat[6], color = 'b', width = 0.50)
ax.bar('Brown \nSugar Boba' ,costs_mat[6], color = 'g', width = 0.50)
ax.bar('Fruits \nand Nuts' ,prices_mat[7], color = 'b', width = 0.50)
ax.bar('Fruits \nand Nuts' ,costs_mat[7], color = 'g', width = 0.50)
ax.bar('Dark \nChocolate' ,prices_mat[8], color = 'b', width = 0.50)
ax.bar('Dark \nChocolate' ,costs_mat[8], color = 'g', width = 0.50)
ax.bar('Chocolate \nand Orange' ,prices_mat[9], color = 'b', width = 0.50)
ax.bar('Chocolate \nand Orange' ,costs_mat[9], color = 'g', width = 0.50)
ax.bar('Choco Mint' ,prices_mat[10], color = 'b', width = 0.50)
ax.bar('Choco Mint' ,costs_mat[10], color = 'g', width = 0.50)
ax.bar('Choco \nButter Naught' ,prices_mat[11], color = 'b', width = 0.50)
ax.bar('Choco \nButter Naught' ,costs_mat[11], color = 'g', width = 0.50)
```
#### Monthly Sales
```
## Graph for Revenue vs Cost per Month
## Declaring the font size and weight to be used in the graph
font = {'weight' : 'bold',
'size' : 15}
matplotlib.rc('font', **font)
## Declaration of the figure to be used in the graph
fig = plt.figure()
ax = fig.add_axes([0,0,4,4])
ax.set_title('Revenue vs Cost per Month')
ax.set_ylabel('Revenue/Cost')
ax.set_xlabel('Months')
## For the legends used in the graph
colors = {'Revenue':'blue', 'Cost':'green'}
labels = list(colors.keys())
handles = [plt.Rectangle((0,0),1,1, color=colors[label]) for label in labels]
plt.legend(handles, labels, loc='upper left', prop={'size': 40})
## Plotting of the values for the bar graph
## Revenue and cost were plotted in one x-axis per month to see the difference between the two variable
ax.bar('January' ,month_rev[0], color = 'b', width = 0.50)
ax.bar('January' ,month_cost[0], color = 'g', width = 0.50)
ax.bar('February' ,month_rev[1], color = 'b', width = 0.50)
ax.bar('February' ,month_cost[1], color = 'g', width = 0.50)
ax.bar('March' ,month_rev[2], color = 'b', width = 0.50)
ax.bar('March' ,month_cost[2], color = 'g', width = 0.50)
ax.bar('April' ,month_rev[3], color = 'b', width = 0.50)
ax.bar('April' ,month_cost[3], color = 'g', width = 0.50)
ax.bar('May' ,month_rev[4], color = 'b', width = 0.50)
ax.bar('May' ,month_cost[4], color = 'g', width = 0.50)
ax.bar('June' ,month_rev[5], color = 'b', width = 0.50)
ax.bar('June' ,month_cost[5], color = 'g', width = 0.50)
ax.bar('July' ,month_rev[6], color = 'b', width = 0.50)
ax.bar('July' ,month_cost[6], color = 'g', width = 0.50)
ax.bar('August' ,month_rev[7], color = 'b', width = 0.50)
ax.bar('August' ,month_cost[7], color = 'g', width = 0.50)
ax.bar('September' ,month_rev[8], color = 'b', width = 0.50)
ax.bar('September' ,month_cost[8], color = 'g', width = 0.50)
ax.bar('October' ,month_rev[9], color = 'b', width = 0.50)
ax.bar('October' ,month_cost[9], color = 'g', width = 0.50)
ax.bar('November' ,month_rev[10], color = 'b', width = 0.50)
ax.bar('November' ,month_cost[10], color = 'g', width = 0.50)
ax.bar('December' ,month_rev[11], color = 'b', width = 0.50)
ax.bar('December' ,month_cost[11], color = 'g', width = 0.50)
## Graph for profit per month
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,3,2])
ax.set_ylabel('Profit')
ax.set_xlabel('Months')
ax.set_title('Profit per Month')
## Declaring the values of each axis
months = ['January', 'February', 'March', 'April', 'May', 'June', 'July', 'August', 'September', 'October', 'November', 'December']
Profits = month_profit
## Declaration of the axes and printing/showing them
ax.bar(months, Profits)
plt.show()
```
#### Flavor Sales
```
## Graph for Flavor profit
## Declaration of the figure to be used
fig = plt.figure()
ax = fig.add_axes([0,0,3,2])
ax.set_ylabel('Profit')
ax.set_xlabel('Flavors')
ax.set_title('Flavor Profit')
## Declaring the values of each axis
flavors = ['Red Velvet', 'Oreo', 'Super \nGlazed', 'Almond \nHoney', 'Matcha', 'Strawberry \nCream', 'Brown \nSugar Boba',
'Fruits \nand Nuts', 'Dark \nChocolate', 'Chocolate \nOrange', 'Choco Mint', 'Choco \nButter Naught']
Profits = flavor_profits
## Declaration of the axes and printing/showing them
ax.bar(flavors, Profits)
plt.show()
```
## Part 4: Business Recommendation and Conclusion
Present the findings of your data analysis and provide recommendations
After computing and plotting the data from her business, the software reveals that the three best-selling flavors for Bebang are choco butter naught, matcha, and super glazed, while the three worst-selling flavors are strawberry cream, oreo, and almond honey. The software also displays the calculated monthly cost, annual cost, yearly revenue, and monthly revenue for all flavors. According to the findings, Bebang makes the most profit in December and the least profit in September, and the software also reports her monthly profit across all flavors.

Based on this information, it is suggested that Bebang produce a larger quantity of the top three best-selling flavors, since the cost of producing them will not be wasted and will be recovered through sales. Bebang should also reduce the quantity of the three worst-selling flavors so that her company avoids spending money on them without making a profit. Another piece of advice is to keep track of the months in which each flavor sells best so she can plan the quantities she will need and avoid both shortages and excess stock.

In general, Bebang's business is doing well: her expenses per flavor are more than offset by the sales she makes. Different marketing techniques can also help her grow the business and generate large profits not just for a month but for the entire year. Bebang could also conduct a customer survey to determine what changes she can make, especially to the flavors that sell the least.
| github_jupyter |
<a href="https://colab.research.google.com/github/Prady96/Pothole-Detection/blob/avi_testing/Final_file_for_tata_innoverse.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
!pip -V
!python -V
!pip install --upgrade youtube-dl
!youtube-dl https://drive.google.com/file/d/16-xNP_Ez-3WgFF3vfsP9KJl4ka9hXDlV/view?usp=sharing
!youtube-dl https://drive.google.com/file/d/1rP5tveZgNXJZe_uipJNWUaSqJiow_LGc/view?usp=sharing
!ls
!mv main_DATASET_VAL.zip-1rP5tveZgNXJZe_uipJNWUaSqJiow_LGc.zip val.zip
!mv main_DataSETS_TRAIN.zip-16-xNP_Ez-3WgFF3vfsP9KJl4ka9hXDlV.zip train.zip
!ls
!unzip train.zip
!unzip val.zip
!ls
!rm -rf train.zip
!rm -rf val.zip
!mv main_DATASET_VAL/ val
!mv main_DataSETS_TRAIN/ train
!ls
!mkdir customImages
!rm -rf sample_data
!rm -rf __MACOSX
!mv train/ customImages/
!mv val/ customImages/
!ls
!git clone https://github.com/matterport/Mask_RCNN.git
!ls
!mv customImages/ Mask_RCNN/
%cd Mask_RCNN/
!pip install -r requirements.txt
%run setup.py install
!wget https://raw.githubusercontent.com/Prady96/Pothole-Detection/avi_testing/custom.py?token=AHIVHIOGTWT7LA4IIWMEJVS455SIO
!mv custom.py\?token\=AHIVHIOGTWT7LA4IIWMEJVS455SIO custom.py
!ls
!mkdir logs
import os
import sys
import itertools
import math
import logging
import json
import re
import random
from collections import OrderedDict
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
# Root directory of the project
ROOT_DIR = os.getcwd()
# Import Mask RCNN
sys.path.append(ROOT_DIR) # To find local version of the library
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
import mrcnn.model as modellib
from mrcnn.model import log
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
import custom
%matplotlib inline
config = custom.CustomConfig()
CUSTOM_DIR = os.path.join(ROOT_DIR, "customImages")
print(CUSTOM_DIR)
# Load dataset
# Get the dataset from the releases page
# https://github.com/matterport/Mask_RCNN/releases
dataset = custom.CustomDataset()
dataset.load_custom(CUSTOM_DIR, "train")
# Must call before using the dataset
dataset.prepare()
print("Image Count: {}".format(len(dataset.image_ids)))
print("Class Count: {}".format(dataset.num_classes))
for i, info in enumerate(dataset.class_info):
print("{:3}. {:50}".format(i, info['name']))
class InferenceConfig(custom.CustomConfig):
# Set batch size to 1 since we'll be running inference on
# one image at a time. Batch size = GPU_COUNT * IMAGES_PER_GPU
GPU_COUNT = 1
IMAGES_PER_GPU = 1
config = InferenceConfig()
config.display()
##################### MODEL FILE HERE ##################
### FOR 320 epoch
!youtube-dl https://drive.google.com/file/d/1aShefxzQmeB1qerh1Xo2Xkm1SPIy_yzy/view?usp=sharing
### FOR 160 epoch
!youtube-dl https://drive.google.com/file/d/1ex7Mo62j7wugrZbmNFZFAuujd_UguRYK/view?usp=sharing
!ls
!mv mask_rcnn_damage_0160.h5-1ex7Mo62j7wugrZbmNFZFAuujd_UguRYK.h5 mask_rcnn_damage_0160.h5
!mv mask_rcnn_damage_0160.h5 logs/
!ls
!mv mask_rcnn_damage_0320.h5-1aShefxzQmeB1qerh1Xo2Xkm1SPIy_yzy.h5 mask_rcnn_damage_0320.h5
!mv mask_rcnn_damage_0320.h5 logs/
!ls logs/
# Create model object in inference mode.
model = modellib.MaskRCNN(mode="inference", model_dir=MODEL_DIR, config=config)
# Load weights trained on MS-COCO
model.load_weights("logs/mask_rcnn_damage_0320.h5", by_name=True)
class_names = ['BG', 'damage']
!pip install utils
import os
import sys
import custom
import utils
%cd mrcnn
import model as modellib
%cd ..
import cv2
import numpy as np
## Testing
from PIL import Image, ImageDraw, ImageFont
```
Moving over to getting the testing images, similar to the S3 bucket
```
!youtube-dl https://drive.google.com/file/d/1FTvc361O9BBURgsTMb6dJoE6InAoic_O/view?usp=sharing
!ls
!mv images.zip-1FTvc361O9BBURgsTMb6dJoE6InAoic_O.zip images.zip
!mkdir S3_Images
!mv images.zip S3_Images/
%cd S3_Images/
!ls
!unzip images.zip
!ls
!rm -rf images.zip
!rm -rf __MACOSX/
!mv images\ 2 images
!ls
!ls images
!pwd
%cd /content/Mask_RCNN/
!ls /content/Mask_RCNN/S3_Images/images/
%cd /content/Mask_RCNN/S3_Images/images/
!pip install python-resize-image
from PIL import Image
import os
from resizeimage import resizeimage
count = 0
for f in os.listdir(os.getcwd()):
f_name, f_ext = os.path.splitext(f)
# f_random, f_lat_name,f_lat_val,f_long_name,f_long_val = f_name.split('-')
# f_lat_val = f_lat_val.strip() ##removing the white Space
# f_long_val = f_long_val.strip()
# new_name = '{}-{}-{}.jpg'.format(f_lat_val,f_long_val,count)
try:
with Image.open(f) as image:
count +=1
cover = resizeimage.resize_cover(image, [600,600])
cover.save('{}{}'.format(f_name,f_ext),image.format)
#os.remove(f)
print(count)
except(OSError) as e:
print('Bad Image {}{}'.format(f,count))
%cd /content/Mask_RCNN/
!wget https://github.com/Prady96/IITM_PythonTraining/blob/master/ImageWorking_add_textInImage/fonts_Dir/OpenSans-Bold.ttf?raw=true
!mv OpenSans-Bold.ttf?raw=true OpenSans-Bold.ttf
!ls
# Main file for the file iteration
import cv2
import numpy as np
from PIL import Image, ImageDraw, ImageFont
myList = [] ## area list
classList = [] ##class Id List
def random_colors(N):
np.random.seed(1)
colors = [tuple(255 * np.random.rand(3)) for _ in range(N)]
return colors
def apply_mask(image, mask, color, alpha=0.5):
"""apply mask to image"""
for n, c in enumerate(color):
image[:, :, n] = np.where(
mask == 1,
image[:, :, n] * (1 - alpha) + alpha * c,
image[:, :, n]
)
return image
def display_instances(image, boxes, masks, ids, names, scores):
"""
take the image and results and apply the mask, box, and Label
"""
n_instances = boxes.shape[0]
colors = random_colors(n_instances)
if not n_instances:
print('NO INSTANCES TO DISPLAY')
else:
assert boxes.shape[0] == masks.shape[-1] == ids.shape[0]
for i, color in enumerate(colors):
if not np.any(boxes[i]):
continue
y1, x1, y2, x2 = boxes[i]
label = names[ids[i]]
score = scores[i] if scores is not None else None
caption = '{} {:.2f}'.format(label, score) if score else label
mask = masks[:, :, i]
image = apply_mask(image, mask, color)
image = cv2.rectangle(image, (x1, y1), (x2, y2), color, 2)
image = cv2.putText(
image, caption, (x1, y1), cv2.FONT_HERSHEY_COMPLEX, 0.7, color, 2
)
return image
def save_image(image, image_name, boxes, masks, class_ids, scores, class_names, filter_classs_names=None,
scores_thresh=0.1, save_dir=None, mode=0):
"""
image: image array
image_name: image name
boxes: [num_instance, (y1, x1, y2, x2, class_id)] in image coordinates.
masks: [num_instances, height, width]
class_ids: [num_instances]
scores: confidence scores for each box
class_names: list of class names of the dataset
filter_classs_names: (optional) list of class names we want to draw
scores_thresh: (optional) threshold of confidence scores
save_dir: (optional) the path to store image
mode: (optional) select the result which you want
mode = 0 , save image with bbox,class_name,score and mask;
mode = 1 , save image with bbox,class_name and score;
mode = 2 , save image with class_name,score and mask;
mode = 3 , save mask with black background;
"""
mode_list = [0, 1, 2, 3]
assert mode in mode_list, "mode's value should in mode_list %s" % str(mode_list)
if save_dir is None:
save_dir = os.path.join(os.getcwd(), "output")
if not os.path.exists(save_dir):
os.makedirs(save_dir)
useful_mask_indices = []
N = boxes.shape[0]
if not N:
print("\n*** No instances in image %s to draw *** \n" % (image_name))
return
else:
assert boxes.shape[0] == masks.shape[-1] == class_ids.shape[0]
for i in range(N):
# filter
class_id = class_ids[i]
score = scores[i] if scores is not None else None
if score is None or score < scores_thresh:
continue
label = class_names[class_id]
if (filter_classs_names is not None) and (label not in filter_classs_names):
continue
if not np.any(boxes[i]):
# Skip this instance. Has no bbox. Likely lost in image cropping.
continue
useful_mask_indices.append(i)
if len(useful_mask_indices) == 0:
print("\n*** No instances in image %s to draw *** \n" % (image_name))
return
colors = random_colors(len(useful_mask_indices))
if mode != 3:
masked_image = image.astype(np.uint8).copy()
else:
masked_image = np.zeros(image.shape).astype(np.uint8)
if mode != 1:
for index, value in enumerate(useful_mask_indices):
masked_image = apply_mask(masked_image, masks[:, :, value], colors[index])
masked_image = Image.fromarray(masked_image)
if mode == 3:
masked_image.save(os.path.join(save_dir, '%s.jpg' % (image_name)))
return
draw = ImageDraw.Draw(masked_image)
colors = np.array(colors).astype(int) * 255
myList = []
countClassIds = 0
for index, value in enumerate(useful_mask_indices):
class_id = class_ids[value]
print('class_id value is {}'.format(class_id))
if class_id == 1:
countClassIds += 1
print('counter for the class ID {}'.format(countClassIds))
score = scores[value]
label = class_names[class_id]
y1, x1, y2, x2 = boxes[value]
# myList = []
## area of the rectangle
yVal = y2 - y1
xVal = x2 - x1
area = xVal * yVal
print('area is {}'.format(area))
myList.append(area)
if mode != 2:
color = tuple(colors[index])
draw.rectangle((x1, y1, x2, y2), outline=color)
# Label
# font = ImageFont.load('/usr/share/fonts/truetype/ttf-bitstream-vera/Vera.ttf')
font = ImageFont.truetype('OpenSans-Bold.ttf', 15)
draw.text((x1, y1), "%s %f" % (label, score), (255, 255, 255), font)
print(r['class_ids'], r['scores'])
print(myList)
# print('value of r is {}'.format(r))
print('image_name is {}'.format(image_name))
image_name = os.path.basename(image_name)
print('image name is {}'.format(image_name))
f_name, f_ext = os.path.splitext(image_name)
#f_lat_val,f_long_val,f_count = f_name.split('-')
#f_lat_val = f_lat_val.strip() ##removing the white Space
#f_long_val = f_long_val.strip()
# new_name = '{}-{}-{}.jpg'.format(f_lat_val,f_long_val,count)
# print([area for area in myList if ])
# print([i for i in range(countClassIds) ])
print("avi96 {}".format(myList[:countClassIds]))
# myList.pop(countClassIds - 1)
new_name = '{}-{}.jpg'.format(myList, r['scores'])
# masked_image.save(os.path.join(save_dir, '%s.jpg' % (image_name)))
print("New Name file is {}".format(new_name))
print('save_dir is {}'.format(save_dir))
masked_image.save(os.path.join(save_dir, '%s' % (new_name)))
print('file Saved {}'.format(new_name))
# os.rename(image_name, new_name)
if __name__ == '__main__':
"""
test everything
"""
import os
import sys
import custom
import utils
import model as modellib
#import visualize
# We use a K80 GPU with 24GB memory, which can fit 3 images.
batch_size = 3
ROOT_DIR = os.getcwd()
MODEL_DIR = os.path.join(ROOT_DIR, "logs")
VIDEO_DIR = os.path.join(ROOT_DIR, "videos")
VIDEO_SAVE_DIR = os.path.join(VIDEO_DIR, "save")
# COCO_MODEL_PATH = os.path.join(ROOT_DIR, "mask_rcnn_damage_0010.h5")
# if not os.path.exists(COCO_MODEL_PATH):
# utils.download_trained_weights(COCO_MODEL_PATH)
class InferenceConfig(custom.CustomConfig):
GPU_COUNT = 1
IMAGES_PER_GPU = batch_size
config = InferenceConfig()
config.display()
model = modellib.MaskRCNN(
mode="inference", model_dir=MODEL_DIR, config=config
)
model.load_weights("logs/mask_rcnn_damage_0160.h5", by_name=True)
class_names = [
'BG', 'damage'
]
# capture = cv2.VideoCapture(os.path.join(VIDEO_DIR, 'trailer1.mp4'))
try:
if not os.path.exists(VIDEO_SAVE_DIR):
os.makedirs(VIDEO_SAVE_DIR)
except OSError:
print ('Error: Creating directory of data')
# points to be done before final coding
"""
path_for_image_dir
list for the image array
resolve for naming convention for location basis
passing image in model
"""
# path for the data files
data_path = '/content/Mask_RCNN/S3_Images/images/'
onlyfiles = [f for f in os.listdir(data_path) if os.path.isfile(os.path.join(data_path, f))]
# empty list for the training data
frames = []
frame_count = 0
batch_count = 1
# enumerate the iteration with number of files
for j, files in enumerate(onlyfiles):
image_path = data_path + onlyfiles[j]
# print("image Path {}".format(image_path))
# print("Only Files {}".format(onlyfiles[j]))
# print('j is {}'.format(j))
# print('files is {}'.format(files))
try:
images = cv2.imread(image_path).astype(np.uint8)
# print("images {}".format(images))
frames.append(np.asarray(images, dtype=np.uint8))
# frames.append(images)
frame_count += 1
print('frame_count :{0}'.format(frame_count))
if len(frames) == batch_size:
results = model.detect(frames, verbose=0)
print('Predicted')
for i, item in enumerate(zip(frames, results)):
# print('i is {}'.format(i))
# print('item is {}'.format(item))
frame = item[0]
r = item[1]
frame = display_instances(
frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores']
)
name = '{}'.format(files)
name = os.path.join(VIDEO_SAVE_DIR, name)
# name = '{0}.jpg'.format(frame_count + i - batch_size)
# name = os.path.join(VIDEO_SAVE_DIR, name)
# cv2.imwrite(name, frame)
# print(name)
print('writing to file:{0}'.format(name))
# print(name)
save_image(images, name, r['rois'], r['masks'], r['class_ids'],
r['scores'], class_names, save_dir=VIDEO_SAVE_DIR, mode=0)
frames = []
print('clear')
# clear the frames here
except(AttributeError) as e:
print('Bad Image {}'.format(image_path))
print("Success, check the folder")
"""
## Code for the video section
frames = []
frame_count = 0
# these 2 lines can be removed if you dont have a 1080p camera.
capture.set(cv2.CAP_PROP_FRAME_WIDTH, 1920)
capture.set(cv2.CAP_PROP_FRAME_HEIGHT, 1080)
while True:
ret, frame = capture.read()
# Bail out when the video file ends
if not ret:
break
# Save each frame of the video to a list
frame_count += 1
frames.append(frame)
print('frame_count :{0}'.format(frame_count))
if len(frames) == batch_size:
results = model.detect(frames, verbose=0)
print('Predicted')
for i, item in enumerate(zip(frames, results)):
frame = item[0]
r = item[1]
frame = display_instances(
frame, r['rois'], r['masks'], r['class_ids'], class_names, r['scores']
)
# name = '{0}.jpg'.format(frame_count + i - batch_size)
# name = os.path.join(VIDEO_SAVE_DIR, name)
# cv2.imwrite(name, frame)
# print('writing to file:{0}'.format(name))
## add visualise files
# visualize.display_instances(image, r['rois'], r['masks'], r['class_ids'],
# class_names, r['scores'])
save_image(image, name, r['rois'], r['masks'], r['class_ids'],
r['scores'],class_names, save_dir=VIDEO_SAVE_DIR, mode=0)
# print(r['class_ids'], r['scores'])
# Clear the frames array to start the next batch
frames = []
capture.release()
"""
!ls /content/Mask_RCNN/videos/save/
!zip -r save.zip /content/Mask_RCNN/videos/save/
```
| github_jupyter |
# Credit Risk Resampling Techniques
```
import warnings
warnings.filterwarnings('ignore')
import numpy as np
import pandas as pd
from pathlib import Path
from collections import Counter
```
# Read the CSV into DataFrame
```
# Load the data
file_path = Path('Resources/lending_data.csv')
df = pd.read_csv(file_path)
df.head()
```
# Split the Data into Training and Testing
```
from sklearn.preprocessing import LabelEncoder
le = LabelEncoder()
le.fit(df["homeowner"])
df["homeowner"] = le.transform(df["homeowner"])
# Create our features
X = df.copy()
X.drop("loan_status", axis=1, inplace=True)
# Create our target
y = df['loan_status']
X.describe()
# Check the balance of our target values
y.value_counts()
# Create X_train, X_test, y_train, y_test
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=1)
```
## Data Pre-Processing
Scale the training and testing data using the `StandardScaler` from `sklearn`. Remember that when scaling the data, you only scale the features data (`X_train` and `X_test`).
```
# Create the StandardScaler instance
from sklearn.preprocessing import StandardScaler
scaler = StandardScaler()
# Fit the Standard Scaler with the training data
# When fitting scaling functions, only train on the training dataset
X_scaler = scaler.fit(X_train)
# Scale the training and testing data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
```
# Simple Logistic Regression
```
from sklearn.linear_model import LogisticRegression
model = LogisticRegression(solver='lbfgs', random_state=1)
model.fit(X_train, y_train)
# Calculated the balanced accuracy score
from sklearn.metrics import balanced_accuracy_score
y_pred = model.predict(X_test)
balanced_accuracy_score(y_test, y_pred)
# Display the confusion matrix
from sklearn.metrics import confusion_matrix
confusion_matrix(y_test, y_pred)
# Print the imbalanced classification report
from imblearn.metrics import classification_report_imbalanced
print(classification_report_imbalanced(y_test, y_pred))
```
# Oversampling
In this section, you will compare two oversampling algorithms to determine which algorithm results in the best performance. You will oversample the data using the naive random oversampling algorithm and the SMOTE algorithm. For each algorithm, be sure to complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Print the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
### Naive Random Oversampling
```
# Resample the training data with the RandomOversampler
from imblearn.over_sampling import RandomOverSampler
ros = RandomOverSampler(random_state=1)
X_resampled1, y_resampled1 = ros.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled1)
# Train the Logistic Regression model using the resampled data
model1 = LogisticRegression(solver='lbfgs', random_state=1)
model1.fit(X_resampled1, y_resampled1)
# Calculated the balanced accuracy score
y_pred1 = model1.predict(X_test)
balanced_accuracy_score(y_test, y_pred1)
# Display the confusion matrix
confusion_matrix(y_test, y_pred1)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred1))
```
### SMOTE Oversampling
```
# Resample the training data with SMOTE
from imblearn.over_sampling import SMOTE
X_resampled2, y_resampled2 = SMOTE(random_state=1, sampling_strategy=1.0).fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled2)
# Train the Logistic Regression model using the resampled data
model2 = LogisticRegression(solver='lbfgs', random_state=1)
model2.fit(X_resampled2, y_resampled2)
# Calculated the balanced accuracy score
y_pred2 = model2.predict(X_test)
balanced_accuracy_score(y_test, y_pred2)
# Display the confusion matrix
confusion_matrix(y_test, y_pred2)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred2))
```
# Undersampling
In this section, you will test an undersampling algorithm to determine whether it results in better performance than the oversampling algorithms above. You will undersample the data using the Cluster Centroids algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Display the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
```
# Resample the data using the ClusterCentroids resampler
from imblearn.under_sampling import ClusterCentroids
cc = ClusterCentroids(random_state=1)
X_resampled3, y_resampled3 = cc.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled3)
# Train the Logistic Regression model using the resampled data
model3 = LogisticRegression(solver='lbfgs', random_state=1)
model3.fit(X_resampled3, y_resampled3)
# Calculate the balanced accuracy score
y_pred3 = model3.predict(X_test)
balanced_accuracy_score(y_test, y_pred3)
# Display the confusion matrix
confusion_matrix(y_test, y_pred3)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred3, digits = 4))
```
# Combination (Over and Under) Sampling
In this section, you will test a combined over- and under-sampling algorithm to determine whether it results in better performance than the other sampling algorithms above. You will resample the data using the SMOTEENN algorithm and complete the following steps:
1. View the count of the target classes using `Counter` from the collections library.
2. Use the resampled data to train a logistic regression model.
3. Calculate the balanced accuracy score from sklearn.metrics.
4. Display the confusion matrix from sklearn.metrics.
5. Generate a classification report using the `imbalanced_classification_report` from imbalanced-learn.
Note: Use a random state of 1 for each sampling algorithm to ensure consistency between tests
```
# Resample the training data with SMOTEENN
from imblearn.combine import SMOTEENN
sm = SMOTEENN(random_state=1)
X_resampled4, y_resampled4 = sm.fit_resample(X_train, y_train)
# View the count of target classes with Counter
Counter(y_resampled4)
# Train the Logistic Regression model using the resampled data
model4 = LogisticRegression(solver='lbfgs', random_state=1)
model4.fit(X_resampled4, y_resampled4)
# Calculate the balanced accuracy score
y_pred4 = model4.predict(X_test)
balanced_accuracy_score(y_test, y_pred4)
# Display the confusion matrix
confusion_matrix(y_test, y_pred4)
# Print the imbalanced classification report
print(classification_report_imbalanced(y_test, y_pred4))
```
# Final Questions
1. Which model had the best balanced accuracy score?
The oversampling models had the best balanced accuracy score, at roughly 99%.
2. Which model had the best recall score?
The oversampling models had the best recall scores, at around 99%.
3. Which model had the best geometric mean score?
The oversampling models had the best geometric mean score, at around 99%.
| github_jupyter |
# Cache performance graphs
### Import libs
```
%matplotlib inline
## Imported libraries
# Library used to open CSV files
import csv
# Library for reading dates
from datetime import datetime, timedelta
# Date formatting for the plots
import matplotlib.dates as mdate
# Math library
import numpy as np
# Library for drawing the plots
import matplotlib.pyplot as plt
# Library to change the size of the displayed plot
import matplotlib.cm as cm
import operator as op
import os
import math
```
## Generate miss % graphs
```
for file in os.listdir('cache_csv/percentage'):
filepath = 'cache_csv/percentage/'+file
dados = list(csv.reader(open(filepath,'r')))
alg = file.split('.')[0]
mr1 = list()
mr2 = list()
mw1 = list()
mw2 = list()
mrw1 = list()
mrw2 = list()
mi1 = list()
mi2 = list()
for dado in dados:
mr1.append(float(dado[0]))
mw1.append(float(dado[1]))
mrw1.append(float(dado[2]))
mr2.append(float(dado[3]))
mw2.append(float(dado[4]))
mrw2.append(float(dado[5]))
mi1.append(float(dado[6]))
mi2.append(float(dado[7]))
# Cache configurations
x = np.arange(1, 9)
##### READ MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mr1)
markerline2, stemlines2, baseline2 = plt.stem(x,mr2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Axis limits
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels and title
plt.title('L1 x L2 read miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Read miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_r.png', dpi=300)
plt.show()
plt.close()
##### WRITE MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels and title
plt.title('L1 x L2 write miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Write miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_w.png', dpi=300)
plt.show()
plt.close()
##### TOTAL MISSES (DATA) #####
markerline1, stemlines1, baseline1 = plt.stem(x,mrw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mrw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Axis limits
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels and title
plt.title('L1 x L2 total data miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Total data miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_rw.png', dpi=300)
plt.show()
plt.close()
##### INSTRUCTION MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mi1)
markerline2, stemlines2, baseline2 = plt.stem(x,mi2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels and title
plt.title('L1 x L2 instruction miss rate (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Instruction miss rate (%)')
plt.savefig('img/Cache/percentage/cache_' + alg + '_i.png', dpi=300)
plt.show()
plt.close()
```
## Generate graphs of miss numbers
```
for file in os.listdir('cache_csv/num'):
filepath = 'cache_csv/num/'+file
dados = list(csv.reader(open(filepath,'r')))
alg = file.split('.')[0]
mr1 = list()
mr2 = list()
mw1 = list()
mw2 = list()
mrw1 = list()
mrw2 = list()
mi1 = list()
mi2 = list()
for dado in dados:
mr1.append(float(dado[0]))
mw1.append(float(dado[1]))
mrw1.append(int(dado[2]))
mr2.append(float(dado[3]))
mw2.append(float(dado[4]))
mrw2.append(float(dado[5]))
mi1.append(float(dado[6]))
mi2.append(float(dado[7]))
# Cache configurations
x = np.arange(1, 9)
##### READ MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mr1)
markerline2, stemlines2, baseline2 = plt.stem(x,mr2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Axis limits
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels and title
plt.title('L1 x L2 read misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Read misses')
plt.savefig('img/Cache/num/cache_' + alg + '_r.png', dpi=300)
plt.show()
plt.close()
##### WRITE MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels and title
plt.title('L1 x L2 write misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Write misses')
plt.savefig('img/Cache/num/cache_' + alg + '_w.png', dpi=300)
plt.show()
plt.close()
##### TOTAL MISSES (DATA) #####
markerline1, stemlines1, baseline1 = plt.stem(x,mrw1)
markerline2, stemlines2, baseline2 = plt.stem(x,mrw2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Axis limits
#plt.ylim([0,100])
#plt.xlim([1,9])
# Labels and title
plt.title('L1 x L2 total data misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Total data misses')
plt.savefig('img/Cache/num/cache_' + alg + '_rw.png', dpi=300)
plt.show()
plt.close()
##### INSTRUCTION MISSES #####
markerline1, stemlines1, baseline1 = plt.stem(x,mi1)
markerline2, stemlines2, baseline2 = plt.stem(x,mi2)
#plt.xticks(x, (1,2,3,4,5,6,7,8))
# Set the line styles
plt.setp(stemlines1, 'linestyle', 'none')
plt.setp(markerline1, 'linestyle', '-', 'color', 'r')
plt.setp(baseline1, 'linestyle', 'none')
plt.setp(stemlines2, 'linestyle', 'none')
plt.setp(markerline2, 'linestyle', '-', 'color', 'b')
plt.setp(baseline2, 'linestyle', 'none')
# Labels and title
plt.title('L1 x L2 instruction misses (' + alg + ')')
plt.xlabel('Cache Configurations')
plt.ylabel('Instruction misses')
plt.savefig('img/Cache/num/cache_' + alg + '_i.png', dpi=300)
plt.show()
plt.close()
```
| github_jupyter |
## Loading libraries and looking at given data
```
import numpy as np
import pandas as pd
import seaborn as sns
import re
appendix_3=pd.read_excel("Appendix_3_august.xlsx")
appendix_3
print(appendix_3["Language"].value_counts(),)
print(appendix_3["Country"].value_counts())
pd.set_option("display.max_rows", None, "display.max_columns", None)
print(appendix_3['Country'].to_string(index=False))
```
## Removing useless data
```
appendix_3=appendix_3[appendix_3.Language!="Kรธbenhavnsk"]
appendix_3=appendix_3.drop(["Meaningless_ID"], axis=1)
appendix_3
appendix_3=appendix_3[appendix_3.Licenses!=0]
appendix_3
```
## Making usefull languages
```
def language(var):
"""Function that returns languages spoken by 3Shapes present support teams.
If not spoken, return English"""
if var.lower() in ['english','american']: #If english or "american"
return 'English' #Return English
if var.lower() in ['spanish']:
return 'Spanish'
if var.lower() in ['french']:
return 'French'
if var.lower() in ['german']:
return 'German'
if var.lower() in ['russian']:
return 'Russian'
if var.lower() in ['portuguese']:
return 'Portuguese'
if var.lower() in ['italian']:
return 'Italian'
if re.search('chin.+', var.lower()): # If lettercombination 'chin' appears:
return 'Chinese' # Return 'Chinese'
if var.lower() in ['japanese']:
return 'Japanese'
if var.lower() in ['korean']:
return 'Korean'
else:
return 'English' #If not spoken, return English
appendix_3['Support_language'] = appendix_3['Language'].apply(language)
appendix_3['Support_language'].value_counts()
appendix_3["Licenses_per_language"]=appendix_3.groupby(["Support_language"])["Licenses"].transform("sum")
appendix_3['Country'] = appendix_3['Country'].str.strip() #Removing initial whitespace
appendix_3.iloc[1,0]
```
## Making a column that "groups" countries into 3 regions/timezones of the world (Americas, Europe (incl. Middle East and Africa) and Asia)
```
def region(var):
"""Function that returns region based on country"""
if var in ['United States','Canada','Brazil','Mexico','Colombia','Argentina','Uruguay',
'Costa Rica','Chile','Paraguay','Bolivia','Venezuela','Puerto Rico']:
return 'Americas'
if var in ['France','Italy','Germany','United Kingdom','Spain','Netherlands','Ireland','Poland',
'Denmark','Switzerland','United Arab Emirates','Sweden','Norway','Belgium','Austria',
'Lebanon','Israel','Slovakia','Greece','Romania','Turkey','Czech Republic','South Africa',
'Finland','Lithuania','Russia','Hungary','Ukraine','Pakistan','Croatia','Iceland','Morocco',
'Egypt','Kuwait','Bulgaria','Iran','Luxembourg','Serbia','Slovenia','Tunisia','Estonia',
'Saudi Arabia','Portugal','Jordan','Cyprus','Armenia','Moldova','Azerbaijan','Algeria',
'Monaco','Georgia','Iraq','Liechtenstein','Latvia']:
return 'Europe'
if var in ['Korea','Australia','China','Singapore','Taiwan','Thailand','India','Japan',
'Hong Kong SAR','Vietnam','New Zealand','Philippines','Indonesia','Myanmar',
'Malaysia','Nepal']:
return 'Asia'
else:
return 'No'
appendix_3['Region'] = appendix_3['Country'].apply(region)
appendix_3['Region'].head(6)
appendix_3["Licenses_per_region"]=appendix_3.groupby(["Region"])["Licenses"].transform("sum")
appendix_3[["Licenses_per_region","Region"]].head(6)
```
## New DataFrame with our three regions/support centers
```
New_regions=appendix_3.groupby(["Region"])["Licenses"].sum().sort_values(ascending=False).to_frame().reset_index()
New_regions
def employees_needed(var):
""" Function that gives number of recuired employees based on licenses"""
if var <300:
return 3
else:
return np.ceil((var-300)/200+3)
New_regions["Employ_needed"]=New_regions["Licenses"].apply(employees_needed)
New_regions.head(3)
New_regions["Revenue"]=New_regions["Licenses"]*2000
New_regions.head(3)
```
## Looking at appendix 2, cleaning useless data, and converting to int
```
appendix_2=pd.read_excel("Appendix_2_august.xlsx")
appendix_2
appendix_2=appendix_2.drop([5])
appendix_2
appendix_2['Total cost']=appendix_2['Total cost'].astype(int)
appendix_2['Average FTE']=appendix_2['Average FTE'].astype(int)
print(appendix_2.dtypes)
```
## Getting the cost per worker per support center
```
appendix_2["Cost_per_FTE"]=np.round(appendix_2["Total cost"]/appendix_2["Average FTE"])
appendix_2
```
## Because of trouble with merge, the values are transferred manually to the new DataFrame
```
def regional_center(var):
""" Quick function that gives the location of support center"""
if var in ['Europe']:
return 'Ukraine'
if var in ['Americas']:
return 'USA'
if var in ['Asia']:
return 'China'
New_regions["Support Center"]=New_regions["Region"].apply(regional_center)
New_regions.head(3)
New_regions['Cost per FTE']=[17105,83333,250000]
New_regions
```
## Altering the order of the columns to a more intuitive layout
```
print(list(New_regions.columns.values)) #
New_regions=New_regions[['Support Center','Region', 'Licenses','Revenue','Employ_needed','Cost per FTE','Total cost']]
New_regions
```
## Calculating cost and balance values
```
New_regions['Total cost']=New_regions['Employ_needed']*New_regions['Cost per FTE']
New_regions
New_regions=New_regions.assign(Balance=New_regions['Revenue'] - New_regions['Total cost'])
New_regions
```
## Making a new DataFrame for the whole project
```
Whole_project=pd.DataFrame()
Whole_project['Licenses']=[New_regions['Licenses'].sum(axis=0)]
Whole_project['Revenue']=[New_regions['Revenue'].sum(axis=0)]
Whole_project['Employ_needed']=[New_regions['Employ_needed'].sum(axis=0)]
Whole_project['Total cost']=[New_regions['Total cost'].sum(axis=0)]
Whole_project['Balance']=[New_regions['Balance'].sum(axis=0)]
Whole_project
Whole_project['Balance before']=(appendix_3['Licenses'].sum(axis=0)*2000*0.7)-appendix_2['Total cost'].sum(axis=0)
Whole_project['Gain']=Whole_project['Balance']-Whole_project['Balance before']
Whole_project
Whole_project['Balance + savings']=Whole_project['Balance']+(appendix_2.iloc[0]['Total cost']+appendix_2.iloc[3]['Total cost'])
Whole_project['Gain + savings']=Whole_project['Balance + savings']-Whole_project['Balance before']
Whole_project
```
## Looking at the 3-year forecast with different adoption rates
First off is 10% adoption rate, then 50%, and finally 100%
```
def adoption(df_out_name,df_in_name,adoption_rate):
""" A function that takes an adoption rate as input, and calculates usefull parameters
(licenses, revenue, employees needed, cost and balance) after 3 years.
An annual growth rate of 10% is given """
df_in_name[f'{adoption_rate} adoption, licenses']=round(df_in_name['Licenses']*(1.1**3)*adoption_rate)
df_in_name[f'{adoption_rate} adoption, revenue']=df_in_name[f'{adoption_rate} adoption, licenses']*2000
df_in_name[f'{adoption_rate} adoption, employ_needed']=np.ceil((df_in_name[f'{adoption_rate} adoption, licenses']-300)/200+3)
df_in_name[f'{adoption_rate} adoption, total cost']=round(df_in_name[f'{adoption_rate} adoption, employ_needed']*((New_regions.iloc[0,5]*New_regions.iloc[0,4])+(New_regions.iloc[1,5]*New_regions.iloc[1,4])+(New_regions.iloc[2,5]*New_regions.iloc[2,4]))/New_regions['Employ_needed'].sum())
df_in_name[f'{adoption_rate} adoption, balance']=df_in_name[f'{adoption_rate} adoption, revenue']-df_in_name[f'{adoption_rate} adoption, total cost']
df_out_name=df_in_name
return df_out_name
adoption('Whole_project_10',Whole_project,0.1)
adoption('Whole_project_50',Whole_project,0.5)
adoption('Whole_project_50',Whole_project,1)
# 'license_country' is not defined earlier in this excerpt; it is assumed here to be
# the per-support-language license totals computed from appendix_3.
license_country = appendix_3.groupby(["Support_language"])["Licenses"].sum().to_frame()
with pd.ExcelWriter('samlet.xlsx') as writer:
    appendix_3.to_excel(writer, sheet_name='Lande,sprog og licenser')
    appendix_2.to_excel(writer, sheet_name='Supportcenter og omkostninger')
    license_country.to_excel(writer, sheet_name='Licenser pr. supportsprog')
with pd.ExcelWriter('samlet_2.xlsx') as writer:
New_regions.to_excel(writer, sheet_name='De tre supportcentre')
Whole_project.to_excel(writer, sheet_name='Hele projektet')
```
| github_jupyter |
# DNA
Implement a program that identifies a person based on their DNA, as indicated below.
<code>$ python dna.py databases/large.csv sequences/5.txt
Lavender</code>
## Getting started
- The data/adn folder contains the information needed to solve this exercise, including a database file and txt files with the DNA sequences.
## Background
DNA, the carrier of genetic information in living things, has been used in criminal justice for decades. But how exactly does DNA profiling work? Given a DNA sequence, how can forensic investigators identify whose it is?
Well, DNA is really just a sequence of molecules called nucleotides, arranged in a particular shape (a double helix). Each DNA nucleotide contains one of four different bases: adenine (A), cytosine (C), guanine (G), or thymine (T). Every human cell has billions of these nucleotides arranged in sequence. Some portions of this sequence (i.e., the genome) are the same, or at least very similar, across almost all humans, but other portions have much higher genetic diversity and therefore vary more across the population.
One place where DNA tends to have high genetic diversity is in Short Tandem Repeats (STRs). An STR is a short sequence of DNA bases that tends to repeat consecutively many times at specific locations within a person's DNA. The number of times a particular STR repeats varies greatly between individuals. In the DNA samples below, for example, Alice has the STR <code>AGAT</code> repeated four times in her DNA, while Bob has the same STR repeated five times.
<img src="./img/adn.PNG">
Using multiple STRs, rather than just one, can improve the accuracy of DNA profiling. If the probability that two people have the same number of repeats for a single STR is 5%, and the analyst looks at 10 different STRs, then the probability that two DNA samples match purely by chance is about 1 in 1 quadrillion (assuming all STRs are independent of each other). So if two DNA samples match in the number of repeats for each of the STRs, the analyst can be quite confident they came from the same person. CODIS, the FBI's DNA database, uses 20 different STRs as part of its DNA profiling process.
What would such a DNA database look like? Well, in its simplest form, you could imagine formatting a DNA database as a CSV file, in which each row corresponds to an individual and each column corresponds to a particular STR.
<code>name,AGAT,AATG,TATC
Alice,28,42,14
Bob,17,22,19
Charlie,36,18,25</code>
The data in the file above would suggest that Alice has the sequence <code>AGAT</code> repeated 28 times consecutively somewhere in her DNA, the sequence <code>AATG</code> repeated 42 times, and <code>TATC</code> repeated 14 times. Bob, meanwhile, has those same three STRs repeated 17, 22, and 19 times, respectively. And Charlie has those same three STRs repeated 36, 18, and 25 times, respectively.
So given a DNA sequence, how might you identify whose it is? Well, imagine you searched the DNA sequence for the longest consecutive run of repeated <code>AGAT</code>s and found that the longest run was 17 repeats. If you then found that the longest run of <code>AATG</code> was 22 repeats and the longest run of <code>TATC</code> was 19 repeats, that would provide fairly good evidence that the DNA was Bob's. Of course, it is also possible that once you take the counts for each of the STRs, they will not match anyone in your DNA database, in which case you will have no match.
In practice, since analysts know on which chromosome and at which location in the DNA an STR will be found, they can localize their search to a limited section of the DNA. But we will ignore that detail for this problem.
Your task is to write a program that will take a DNA sequence and a CSV file containing STR counts for a list of individuals and then output who the DNA (most likely) belongs to.
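As a rough illustration of the longest-run idea described above, a helper like the following sketch could compute the longest consecutive run of one STR in a sequence (this is only an illustration, not the required solution, and the function name is made up):
```python
def longest_run(sequence, str_unit):
    """Return the largest number of consecutive repetitions of str_unit in sequence."""
    longest = 0
    for i in range(len(sequence)):
        count = 0
        # count back-to-back repetitions starting at position i
        while sequence.startswith(str_unit, i + count * len(str_unit)):
            count += 1
        longest = max(longest, count)
    return longest

# longest_run("CTAGATAGATAGATGG", "AGAT") -> 3
```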
## Specifications
In a file called <code>dna.py</code>, implement a program that identifies the person a DNA sequence belongs to.
- The program should require as its first command-line argument the name of a CSV file containing the STR counts for a list of individuals, and as its second command-line argument the name of a text file containing the DNA sequence to identify.
- If your program is run with the wrong number of command-line arguments, it should print an error message of your choice (with <code>print</code>). If the correct number of arguments is provided, you may assume that the first argument is indeed the filename of a valid CSV file and that the second argument is the filename of a valid text file.
- Your program should open the CSV file and read its contents into memory.
- You may assume that the first row of the CSV file will be the column names. The first column will be the word <code>name</code> and the remaining columns will be the STR sequences themselves.
- Your program should open the DNA sequence and read its contents into memory.
- For each of the STRs (from the first line of the CSV file), your program should compute the longest run of consecutive repetitions of that STR in the DNA sequence to identify.
- If the STR counts match exactly with any of the individuals in the CSV file, your program should print the name of the matching person.
- You may assume that the STR counts will not match more than one individual.
- If the STR counts do not match exactly with any of the individuals in the CSV file, your program should print <code>"No match"</code>.
## Usage
Your program should behave according to the following examples.
<code>$ python dna.py databases/large.csv sequences/5.txt
Lavender</code>
<code>$ python dna.py
Usage: python dna.py data.csv sequence.txt </code>
<code>$ python dna.py data.csv
Usage: python dna.py data.csv sequence.txt</code>
## Hints
- Python's <a href='https://docs.python.org/3/library/csv.html'><code>csv</code></a> module may be useful for reading CSV files into memory. You may want to take advantage of either <a href='https://docs.python.org/3/library/csv.html#csv.reader'><code>csv.reader</code></a> or <a href='https://docs.python.org/3/library/csv.html#csv.DictReader'><code>csv.DictReader</code></a>.
- The <a href='https://docs.python.org/3.3/tutorial/inputoutput.html#reading-and-writing-files'><code>open</code></a> and <a href='https://docs.python.org/3.3/tutorial/inputoutput.html#methods-of-file-objects'><code>read</code></a> functions may be useful for reading text files into memory.
- Consider what data structures might be helpful for keeping track of information in your program. A <code>list</code> or a <code>dict</code> may prove useful (see the sketch after this list).
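For instance, here is a minimal sketch of how these pieces could fit together. It is illustrative only; the solution cell at the end of this notebook takes a different, regex-based approach, and names such as `longest_run` are made up here.
```
# Minimal sketch of dna.py (illustrative, not the reference solution)
import csv
import sys

def longest_run(sequence, str_):
    """Longest number of consecutive repeats of str_ inside sequence."""
    longest = 0
    for i in range(len(sequence)):
        count = 0
        # Count how many copies of str_ appear back to back starting at position i
        while sequence[i + count * len(str_): i + (count + 1) * len(str_)] == str_:
            count += 1
        longest = max(longest, count)
    return longest

def main():
    if len(sys.argv) != 3:
        print("Usage: python dna.py data.csv sequence.txt")
        return
    with open(sys.argv[1], newline="") as csvfile, open(sys.argv[2]) as txtfile:
        rows = list(csv.DictReader(csvfile))
        sequence = txtfile.read()
    strs = [key for key in rows[0] if key != "name"]
    counts = {s: longest_run(sequence, s) for s in strs}
    for row in rows:
        if all(int(row[s]) == counts[s] for s in strs):
            print(row["name"])
            return
    print("No match")

if __name__ == "__main__":
    main()
```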
## Tests
Be sure to test your code for each of the following.
- Run your program as <code>python dna.py databases/small.csv sequences/1.txt</code>. Your program should output <code>Bob</code>.
- Run your program as <code>python dna.py databases/small.csv sequences/2.txt</code>. Your program should output <code>No match</code>.
- Run your program as <code>python dna.py databases/small.csv sequences/3.txt</code>. Your program should output <code>No match</code>.
- Run your program as <code>python dna.py databases/small.csv sequences/4.txt</code>. Your program should output <code>Alice</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/5.txt</code>. Your program should output <code>Lavender</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/6.txt</code>. Your program should output <code>Luna</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/7.txt</code>. Your program should output <code>Ron</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/8.txt</code>. Your program should output <code>Ginny</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/9.txt</code>. Your program should output <code>Draco</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/10.txt</code>. Your program should output <code>Albus</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/11.txt</code>. Your program should output <code>Hermione</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/12.txt</code>. Your program should output <code>Lily</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/13.txt</code>. Your program should output <code>No match</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/14.txt</code>. Your program should output <code>Severus</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/15.txt</code>. Your program should output <code>Sirius</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/16.txt</code>. Your program should output <code>No match</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/17.txt</code>. Your program should output <code>Harry</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/18.txt</code>. Your program should output <code>No match</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/19.txt</code>. Your program should output <code>Fred</code>.
- Run your program as <code>python dna.py databases/large.csv sequences/20.txt</code>. Your program should output <code>No match</code>.
```
# Import libraries
import csv
import re
# Declare helper functions
def conteomaxstr(patron, texto):
    # Return the longest run of consecutive repetitions of patron found in texto
    maxrep = 0
    count = 0
    fin = 0
    while True:
        encontrado = re.search(patron, texto)
        if encontrado:
            inicio = encontrado.start()
            end = encontrado.end()
            if fin == 0:
                count = 1
                fin = end
                texto = texto.replace(patron, patron.lower(), 1)  # lowercase this match so it is not found again on the next iteration
            else:
                if inicio == fin:
                    texto = texto.replace(patron, patron.lower(), 1)  # lowercase this match so it is not found again on the next iteration
                    count = count + 1
                    fin = end
                else:
                    # The run was broken: reset the count for the remaining occurrences of the pattern
                    fin = 0
                    if count > maxrep:
                        maxrep = count
                    count = 0
        else:
            if count > maxrep:
                maxrep = count
            break
    return maxrep
def busqueda_dna():
    nombre = "No match"
    try:
        # Ask for the name of the .csv file
        csvname = input("Enter the name of the CSV file containing the STR counts for the list of individuals:")
        # Open the .csv file; csv.DictReader below yields each row as a dictionary-like mapping
        with open('./data/dna/databases/' + csvname, newline='') as csvfile:
            try:
                # Ask for the name of the .txt file
                txtname = input("Enter the name of the text file containing the DNA sequence to identify:")
                # Open the text file with the individual's DNA and store its contents in the string ADN
                with open('./data/dna/sequences/' + txtname, mode = 'r') as txtfile:
                    ADN = txtfile.read()
                    for fila in csv.DictReader(csvfile):
                        cadena = ADN
                        if nombre == "No match":
                            for patron in fila:
                                if patron != 'name':
                                    if int(fila[patron]) == int(conteomaxstr(patron, cadena)):
                                        nombre = fila['name']
                                    else:
                                        nombre = "No match"
                                        break
                        else:
                            break
                    print(nombre)
            except:
                print(f"The file {txtname} does not exist")
    except:
        print(f"The file {csvname} does not exist")
busqueda_dna()
```
| github_jupyter |
```
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
%matplotlib inline
import matplotlib
matplotlib.rcParams["figure.figsize"] = (20,10)
df1 = pd.read_csv("Bengaluru_House_Data.csv")
df1.head()
df1.shape
df1.columns
df1["area_type"].unique()
df1["area_type"].value_counts()
# Drop features that are not required to build our model
df2=df1.drop(['area_type','availability','society','balcony'] , axis = 'columns')
df2.shape
# Data Cleaning : Handle NA values
df2.isnull().sum()
df3=df2.dropna()
df3.head()
df3.isnull().sum()
df3.shape
# Feature Engineering
# Add new feature(integer) for bhk (Bedrooms Hall Kitchen)
df3['size'].unique()
df3['BHK']=df3['size'].apply(lambda x: int(x.split(' ')[0]))
df3.head()
df3[df3.BHK>20]
df3.total_sqft.unique()
def is_float(x):
try:
float(x)
except:
return False
return True
df3[~df3['total_sqft'].apply(is_float)].head(10)
df3['total_sqft'].unique()
def convert_sqft_to_num(x):
tokens=x.split("-")
if len(tokens)==2:
return(float(tokens[0])+float(tokens[1]))/2
try:
return float(x)
except:
return None
convert_sqft_to_num('627')
convert_sqft_to_num('627-7643')
df4=df3.copy()
df4['total_sqft']=df4['total_sqft'].apply(convert_sqft_to_num)
df4.head()
df4.loc[30]
df5=df4.copy()
df5['price_per_sqft']=df5['price']*100000/df5['total_sqft']
df5.head()
len(df5.location.unique())
df=df5['price_per_sqft'].describe()
df.head()
df5.location=df5.location.apply(lambda x: x.strip())
df5.head()
df5
location_stats=df5['location'].value_counts(ascending=False)
location_stats
location_stats.values.sum()
location_stats[location_stats<=10]
len(location_stats[location_stats<=10])
other_location=location_stats[location_stats<=10]
other_location
df5['location']=df5['location'].apply(lambda x: "other" if x in other_location else x )
df5
len(df5[df5.location!="other"])
len(df5.location.unique())
df5[df5.total_sqft/df5.BHK<300].head()
df5.shape
df6=df5[~(df5.total_sqft/df5.BHK<300)]
df6.shape
df6.price_per_sqft.describe()
def remove_pps_outlier(df):
df_out=pd.DataFrame()
for key,subdf in df.groupby('location'):
m=np.mean(subdf.price_per_sqft)
st=np.std(subdf.price_per_sqft)
reduced_df=subdf[(subdf.price_per_sqft>(m-st)) & (subdf.price_per_sqft<=(m+st))]
df_out=pd.concat([df_out,reduced_df],ignore_index=True)
return df_out
df7=remove_pps_outlier(df6)
df7.shape
df7.columns
df7['price_per_sqft'].unique
df7.price_per_sqft.describe()
df7
def plot_scatter_chart(df,location):
bhk2=df[(df.location==location) & (df.BHK==2)]
bhk3=df[(df.location==location) & (df.BHK==3)]
matplotlib.rcParams['figure.figsize']=(15,10)
plt.scatter(bhk2.total_sqft,bhk2.price,marker='*',color='red',label='2 BHK',s=50)
plt.scatter(bhk3.total_sqft,bhk3.price,marker='+',color='blue',label='3 bhk',s=50)
plt.xlabel('Price(Lakh Indian Rupees)')
plt.ylabel('Total sqft Area')
plt.title(location)
plt.legend()
plot_scatter_chart(df7,'Rajaji Nagar')
def remove_bhk_outliers(df):
exclude_indices = np.array([])
for location, location_df in df.groupby('location'):
bhk_stats = {}
for BHK, bhk_df in location_df.groupby('BHK'):
bhk_stats[BHK] = {
'mean': np.mean(bhk_df.price_per_sqft),
'std': np.std(bhk_df.price_per_sqft),
'count': bhk_df.shape[0]
}
for BHK, bhk_df in location_df.groupby('BHK'):
stats = bhk_stats.get(BHK-1)
if stats and stats['count']>5:
exclude_indices = np.append(exclude_indices, bhk_df[bhk_df.price_per_sqft<(stats['mean'])].index.values)
return df.drop(exclude_indices,axis='index')
df8 = remove_bhk_outliers(df7)
# df8 = df7.copy()
df8.shape
plot_scatter_chart(df8,'Rajaji Nagar')
plot_scatter_chart(df7,'Hebbal')
plot_scatter_chart(df8,'Hebbal')
import matplotlib
matplotlib.rcParams['figure.figsize']=(20,10)
plt.hist(df8.price_per_sqft,rwidth=0.8)
plt.xlabel('price per sqaure feet')
plt.ylabel('count')
df8.bath.unique()
df8[df8.bath>10]
plt.hist(df8.bath,rwidth=0.8)
plt.xlabel('No. of bathrooms')
plt.ylabel('Count')
df8[df8.bath>df8.BHK+2]
df9=df8[df8.bath<df8.BHK+2]
df9.shape
df10 = df9.drop(['size','price_per_sqft'],axis='columns')
df10.head(3)
# Use One Hot Encoding for location
dummies = pd.get_dummies(df10.location)
dummies.head(3)
df11=pd.concat([df10,dummies.drop('other',axis='columns')],axis='columns')
df11.head()
df12=df11.drop('location',axis='columns')
df12.head()
df12.shape
```
#Drop the dependent column (price) to form X, keep it separately as y, and split into train and test sets
```
X=df12.drop(['price'],axis='columns')
X.head()
y=df12.price
y.head()
from sklearn.model_selection import train_test_split
X_train,X_test,y_train,y_test=train_test_split(X,y,test_size=0.2,random_state=10)
from sklearn.linear_model import LinearRegression
lr_reg=LinearRegression()
lr_reg.fit(X_train,y_train)
lr_reg.score(X_test,y_test)
```
#Use K Fold cross validation to measure accuracy of our LinearRegression model
```
from sklearn.model_selection import ShuffleSplit
from sklearn.model_selection import cross_val_score
cv=ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
cross_val_score(LinearRegression(),X,y,cv=cv)
```
#Find best model using GridSearchCV
```
from sklearn.model_selection import GridSearchCV
from sklearn.linear_model import Lasso
from sklearn.tree import DecisionTreeRegressor
def find_best_model_using_gridsearchcv(X,y):
algos = {
'linear_regression' : {
'model': LinearRegression(),
'params': {
'normalize': [True, False]
}
},
'lasso': {
'model': Lasso(),
'params': {
'alpha': [1,2],
'selection': ['random', 'cyclic']
}
},
'decision_tree': {
'model': DecisionTreeRegressor(),
'params': {
'criterion' : ['mse','friedman_mse'],
'splitter': ['best','random']
}
}
}
scores = []
cv = ShuffleSplit(n_splits=5, test_size=0.2, random_state=0)
for algo_name, config in algos.items():
gs = GridSearchCV(config['model'], config['params'], cv=cv, return_train_score=False)
gs.fit(X,y)
scores.append({
'model': algo_name,
'best_score': gs.best_score_,
'best_params': gs.best_params_
})
return pd.DataFrame(scores,columns=['model','best_score','best_params'])
find_best_model_using_gridsearchcv(X,y)
def predict_price(location,sqft,bath,bhk):
loc_index = np.where(X.columns==location)[0][0]
x = np.zeros(len(X.columns))
x[0] = sqft
x[1] = bath
x[2] = bhk
if loc_index >= 0:
x[loc_index] = 1
return lr_reg.predict([x])[0]
predict_price('1st Phase JP Nagar',1128, 3, 3)
predict_price('Indira Nagar',1000, 3, 3)
```
#Export the tested model to a pickle file
```
import pickle
with open('banglore_home_prices_model.pickle','wb') as f:
pickle.dump(lr_reg,f)
```
#Export location and column information to a file that will be useful later on in our prediction application
```
import json
columns={
'data_columns':[col.lower() for col in X.columns]
}
with open("columns.json","w") as f:
f.write(json.dumps(columns))
```
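As a quick sanity check (a sketch, not part of the original notebook), the two exported artifacts can be loaded back the same way the prediction application would do it:
```
# Reload the exported model and column metadata (file names as written above)
import json
import pickle

with open('banglore_home_prices_model.pickle', 'rb') as f:
    model = pickle.load(f)
with open('columns.json') as f:
    data_columns = json.load(f)['data_columns']
# Predicting on an all-zeros row simply returns the model intercept
print(len(data_columns), model.predict([np.zeros(len(data_columns))])[0])
```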
| github_jupyter |
##### Copyright 2019 The TensorFlow Authors.
```
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
```
# How to train Boosted Trees models in TensorFlow
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/tutorials/estimators/boosted_trees"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/docs/blob/master/site/en/tutorials/estimators/boosted_trees.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/docs/blob/master/site/en/tutorials/estimators/boosted_trees.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png">View source on GitHub</a>
</td>
</table>
This tutorial is an end-to-end walkthrough of training a Gradient Boosting model using decision trees with the `tf.estimator` API. Boosted Trees models are among the most popular and effective machine learning approaches for both regression and classification. It is an ensemble technique that combines the predictions from several (think 10s, 100s or even 1000s) tree models.
Boosted Trees models are popular with many machine learning practitioners as they can achieve impressive performance with minimal hyperparameter tuning.
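As a rough conceptual sketch (not part of the `tf.estimator` API used below), the ensemble's prediction is just the sum of many small corrections, one per tree. The toy `toy_trees` below are hand-written stand-ins rather than real decision trees:
```
# Conceptual sketch of an additive tree ensemble
def ensemble_logit(x, trees, learning_rate=0.1):
    # The ensemble's raw prediction is the sum of small corrections, one per tree.
    return sum(learning_rate * tree(x) for tree in trees)

# Two toy "trees" (illustrative stand-ins, not fitted trees).
toy_trees = [
    lambda x: 1.0 if x["sex"] == "female" else -1.0,
    lambda x: 0.5 if x["class"] == "First" else -0.5,
]
print(ensemble_logit({"sex": "female", "class": "Third"}, toy_trees))  # 0.1*1.0 + 0.1*(-0.5) = 0.05
```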
## Load the titanic dataset
You will be using the titanic dataset, where the (rather morbid) goal is to predict passenger survival, given characteristics such as gender, age, class, etc.
```
from __future__ import absolute_import, division, print_function, unicode_literals
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import tensorflow as tf
tf.enable_eager_execution()
tf.logging.set_verbosity(tf.logging.ERROR)
tf.set_random_seed(123)
# Load dataset.
dftrain = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_train.csv')
dfeval = pd.read_csv('https://storage.googleapis.com/tfbt/titanic_eval.csv')
y_train = dftrain.pop('survived')
y_eval = dfeval.pop('survived')
```
The dataset consists of a training set and an evaluation set:
* `dftrain` and `y_train` are the *training set*: the data the model uses to learn.
* The model is tested against the *eval set*, `dfeval`, and `y_eval`.
For training you will use the following features:
<table>
<tr>
<th>Feature Name</th>
<th>Description</th>
</tr>
<tr>
<td>sex</td>
<td>Gender of passenger</td>
</tr>
<tr>
<td>age</td>
<td>Age of passenger</td>
</tr>
<tr>
<td>n_siblings_spouses</td>
<td># siblings and partners aboard</td>
</tr>
<tr>
<td>parch</td>
<td># of parents and children aboard</td>
</tr>
<tr>
<td>fare</td>
<td>Fare passenger paid.</td>
</tr>
<tr>
<td>class</td>
<td>Passenger's class on ship</td>
</tr>
<tr>
<td>deck</td>
<td>Which deck passenger was on</td>
</tr>
<tr>
<td>embark_town</td>
<td>Which town passenger embarked from</td>
</tr>
<tr>
<td>alone</td>
<td>If passenger was alone</td>
</tr>
</table>
## Explore the data
Let's first preview some of the data and create summary statistics on the training set.
```
dftrain.head()
dftrain.describe()
```
There are 627 and 264 examples in the training and evaluation sets, respectively.
```
dftrain.shape[0], dfeval.shape[0]
```
The majority of passengers are in their 20's and 30's.
```
dftrain.age.hist(bins=20)
plt.show()
```
There are approximately twice as many male passengers as female passengers aboard.
```
dftrain.sex.value_counts().plot(kind='barh')
plt.show()
```
The majority of passengers were in the "third" class.
```
(dftrain['class']
.value_counts()
.plot(kind='barh'))
plt.show()
```
Most passengers embarked from Southampton.
```
(dftrain['embark_town']
.value_counts()
.plot(kind='barh'))
plt.show()
```
Females have a much higher chance of surviving vs. males. This will clearly be a predictive feature for the model.
```
ax = (pd.concat([dftrain, y_train], axis=1)\
.groupby('sex')
.survived
.mean()
.plot(kind='barh'))
ax.set_xlabel('% survive')
plt.show()
```
## Create feature columns and input functions
The Gradient Boosting estimator can utilize both numeric and categorical features. Feature columns work with all TensorFlow estimators and their purpose is to define the features used for modeling. Additionally they provide some feature engineering capabilities like one-hot-encoding, normalization, and bucketization. In this tutorial, the fields in `CATEGORICAL_COLUMNS` are transformed from categorical columns to one-hot-encoded columns ([indicator column](https://www.tensorflow.org/api_docs/python/tf/feature_column/indicator_column)):
```
fc = tf.feature_column
CATEGORICAL_COLUMNS = ['sex', 'n_siblings_spouses', 'parch', 'class', 'deck',
'embark_town', 'alone']
NUMERIC_COLUMNS = ['age', 'fare']
def one_hot_cat_column(feature_name, vocab):
return fc.indicator_column(
fc.categorical_column_with_vocabulary_list(feature_name,
vocab))
feature_columns = []
for feature_name in CATEGORICAL_COLUMNS:
# Need to one-hot encode categorical features.
vocabulary = dftrain[feature_name].unique()
feature_columns.append(one_hot_cat_column(feature_name, vocabulary))
for feature_name in NUMERIC_COLUMNS:
feature_columns.append(fc.numeric_column(feature_name,
dtype=tf.float32))
```
You can view the transformation that a feature column produces. For example, here is the output when using the `indicator_column` on a single example:
```
example = dftrain.head(1)
class_fc = one_hot_cat_column('class', ('First', 'Second', 'Third'))
print('Feature value: "{}"'.format(example['class'].iloc[0]))
print('One-hot encoded: ', fc.input_layer(dict(example), [class_fc]).numpy())
```
Additionally, you can view all of the feature column transformations together:
```
fc.input_layer(dict(example), feature_columns).numpy()
```
Next you need to create the input functions. These will specify how data will be read into our model for both training and inference. You will use the `from_tensor_slices` method in the [`tf.data`](https://www.tensorflow.org/api_docs/python/tf/data) API to read in data directly from Pandas. This is suitable for smaller, in-memory datasets. For larger datasets, the tf.data API supports a variety of file formats (including [csv](https://www.tensorflow.org/api_docs/python/tf/data/experimental/make_csv_dataset)) so that you can process datasets that do not fit in memory.
```
# Use entire batch since this is such a small dataset.
NUM_EXAMPLES = len(y_train)
def make_input_fn(X, y, n_epochs=None, shuffle=True):
y = np.expand_dims(y, axis=1)
def input_fn():
dataset = tf.data.Dataset.from_tensor_slices((dict(X), y))
if shuffle:
dataset = dataset.shuffle(NUM_EXAMPLES)
# For training, cycle thru dataset as many times as need (n_epochs=None).
dataset = dataset.repeat(n_epochs)
# In memory training doesn't use batching.
dataset = dataset.batch(NUM_EXAMPLES)
return dataset
return input_fn
# Training and evaluation input functions.
train_input_fn = make_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
```
## Train and evaluate the model
Below you will do the following steps:
1. Initialize the model, specifying the features and hyperparameters.
2. Feed the training data to the model using the `train_input_fn` and train the model using the `train` function.
3. You will assess model performance using the evaluation set: in this example, the `dfeval` DataFrame. You will verify that the predictions match the labels from the `y_eval` array.
Before training a Boosted Trees model, let's first train a linear classifier (logistic regression model). It is best practice to start with a simpler model to establish a benchmark.
```
linear_est = tf.estimator.LinearClassifier(feature_columns)
# Train model.
linear_est.train(train_input_fn, max_steps=100)
# Evaluation.
results = linear_est.evaluate(eval_input_fn)
print('Accuracy : ', results['accuracy'])
print('Dummy model: ', results['accuracy_baseline'])
```
Next let's train a Boosted Trees model. For boosted trees, regression (`BoostedTreesRegressor`) and classification (`BoostedTreesClassifier`) are supported, along with using any twice differentiable custom loss (`BoostedTreesEstimator`). Since the goal is to predict a class - survive or not survive, you will use the `BoostedTreesClassifier`.
```
# Since data fits into memory, use entire dataset per layer. It will be faster.
# Above one batch is defined as the entire dataset.
n_batches = 1
est = tf.estimator.BoostedTreesClassifier(feature_columns,
n_batches_per_layer=n_batches)
# The model will stop training once the specified number of trees is built, not
# based on the number of steps.
est.train(train_input_fn, max_steps=100)
# Eval.
results = est.evaluate(eval_input_fn)
print('Accuracy : ', results['accuracy'])
print('Dummy model: ', results['accuracy_baseline'])
```
For performance reasons, when your data fits in memory, it is recommended to use the `boosted_trees_classifier_train_in_memory` function. However, if training time is not a concern, or if you have a very large dataset and want to do distributed training, use the `tf.estimator.BoostedTrees` API shown above.
When using this method, you should not batch your input data, as the method operates on the entire dataset.
```
def make_inmemory_train_input_fn(X, y):
y = np.expand_dims(y, axis=1)
def input_fn():
return dict(X), y
return input_fn
train_input_fn = make_inmemory_train_input_fn(dftrain, y_train)
eval_input_fn = make_input_fn(dfeval, y_eval, shuffle=False, n_epochs=1)
est = tf.contrib.estimator.boosted_trees_classifier_train_in_memory(
train_input_fn,
feature_columns)
print(est.evaluate(eval_input_fn)['accuracy'])
```
Now you can use the trained model to make predictions on a passenger from the evaluation set. TensorFlow models are optimized to make predictions on a batch, or collection, of examples at once. Earlier, the `eval_input_fn` was defined using the entire evaluation set.
```
pred_dicts = list(est.predict(eval_input_fn))
probs = pd.Series([pred['probabilities'][1] for pred in pred_dicts])
probs.plot(kind='hist', bins=20, title='predicted probabilities')
plt.show()
```
Finally you can also look at the receiver operating characteristic (ROC) of the results, which will give us a better idea of the tradeoff between the true positive rate and false positive rate.
```
from sklearn.metrics import roc_curve
fpr, tpr, _ = roc_curve(y_eval, probs)
plt.plot(fpr, tpr)
plt.title('ROC curve')
plt.xlabel('false positive rate')
plt.ylabel('true positive rate')
plt.xlim(0,)
plt.ylim(0,)
plt.show()
```
| github_jupyter |
# Implementing the Gradient Descent Algorithm
In this lab, we'll implement the basic functions of the Gradient Descent algorithm to find the boundary in a small dataset. First, we'll start with some functions that will help us plot and visualize the data.
```
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
# Some helper functions for plotting and drawing lines
def plot_points(X, y):
admitted = X[np.argwhere(y==1)]
rejected = X[np.argwhere(y==0)]
plt.scatter([s[0][0] for s in rejected], [s[0][1] for s in rejected], s = 25, color = 'blue', edgecolor = 'k')
plt.scatter([s[0][0] for s in admitted], [s[0][1] for s in admitted], s = 25, color = 'red', edgecolor = 'k')
def display(m, b, color='g--'):
plt.xlim(-0.05,1.05)
plt.ylim(-0.05,1.05)
x = np.arange(-10, 10, 0.1)
plt.plot(x, m*x+b, color)
```
## Reading and plotting the data
```
data = pd.read_csv('data.csv', header=None)
X = np.array(data[[0,1]])
y = np.array(data[2])
plot_points(X,y)
plt.show()
```
## TODO: Implementing the basic functions
Now it's your turn to shine. Implement the following formulas, as explained in the text.
- Sigmoid activation function
$$\sigma(x) = \frac{1}{1+e^{-x}}$$
- Output (prediction) formula
$$\hat{y} = \sigma(w_1 x_1 + w_2 x_2 + b)$$
- Error function
$$Error(y, \hat{y}) = - y \log(\hat{y}) - (1-y) \log(1-\hat{y})$$
- The function that updates the weights
$$ w_i \longrightarrow w_i + \alpha (y - \hat{y}) x_i$$
$$ b \longrightarrow b + \alpha (y - \hat{y})$$
```
# Implement the following functions
# Activation (sigmoid) function
def sigmoid(x):
return 1 / (1 + np.exp(-x))
# Output (prediction) formula
def output_formula(features, weights, bias):
return sigmoid(np.dot(features, weights) + bias)
# Error (log-loss) formula
def error_formula(y, output):
return -(y * np.log(output)) - ((1 - y) * np.log(1 - output))
# Gradient descent step
def update_weights(x, y, weights, bias, learnrate):
yhat = output_formula(x, weights, bias)
d_error = y - yhat
weights += learnrate * d_error * x
bias += learnrate * d_error
return weights, bias
```
## Training function
This function will help us iterate the gradient descent algorithm through all the data, for a number of epochs. It will also plot the data, and some of the boundary lines obtained as we run the algorithm.
```
np.random.seed(44)
epochs = 100
learnrate = 0.01
def train(features, targets, epochs, learnrate, graph_lines=False):
errors = []
n_records, n_features = features.shape
last_loss = None
weights = np.random.normal(scale=1 / n_features**.5, size=n_features)
bias = 0
for e in range(epochs):
del_w = np.zeros(weights.shape)
for x, y in zip(features, targets):
output = output_formula(x, weights, bias)
error = error_formula(y, output)
weights, bias = update_weights(x, y, weights, bias, learnrate)
# Printing out the log-loss error on the training set
out = output_formula(features, weights, bias)
loss = np.mean(error_formula(targets, out))
errors.append(loss)
if e % (epochs / 10) == 0:
print("\n========== Epoch", e,"==========")
if last_loss and last_loss < loss:
print("Train loss: ", loss, " WARNING - Loss Increasing")
else:
print("Train loss: ", loss)
last_loss = loss
predictions = out > 0.5
accuracy = np.mean(predictions == targets)
print("Accuracy: ", accuracy)
if graph_lines and e % (epochs / 100) == 0:
display(-weights[0]/weights[1], -bias/weights[1])
# Plotting the solution boundary
plt.title("Solution boundary")
display(-weights[0]/weights[1], -bias/weights[1], 'black')
# Plotting the data
plot_points(features, targets)
plt.show()
# Plotting the error
plt.title("Error Plot")
plt.xlabel('Number of epochs')
plt.ylabel('Error')
plt.plot(errors)
plt.show()
```
## Time to train the algorithm!
When we run the function, we'll obtain the following:
- 10 updates with the current training loss and accuracy
- A plot of the data and some of the boundary lines obtained. The final one is in black. Notice how the lines get closer and closer to the best fit, as we go through more epochs.
- A plot of the error function. Notice how it decreases as we go through more epochs.
```
train(X, y, epochs, learnrate, True)
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_parent" href="https://github.com/giswqs/geemap/tree/master/tutorials/ImageCollection/01_image_collection_overview.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_parent" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/tutorials/ImageCollection/01_image_collection_overview.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_parent" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/tutorials/ImageCollection/01_image_collection_overview.ipynb"><img width=26px src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
# ImageCollection Overview
An `ImageCollection` is a stack or time series of images. In addition to loading an `ImageCollection` using an Earth Engine collection ID, Earth Engine has methods to create image collections. The constructor `ee.ImageCollection()` or the convenience method `ee.ImageCollection.fromImages()` create image collections from lists of images. You can also create new image collections by merging existing collections. For example:
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.foliumap`](https://github.com/giswqs/geemap/blob/master/geemap/foliumap.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.foliumap as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
The default basemap is `Google Satellite`. [Additional basemaps](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py#L13) can be added using the `Map.add_basemap()` function.
```
Map = emap.Map(center=[40,-100], zoom=4)
Map.add_basemap('ROADMAP') # Add Google Map
Map
```
## Add Earth Engine Python script
```
# Create arbitrary constant images.
constant1 = ee.Image(1)
constant2 = ee.Image(2)
# Create a collection by giving a list to the constructor.
collectionFromConstructor = ee.ImageCollection([constant1, constant2])
print('collectionFromConstructor: ', collectionFromConstructor.getInfo())
# Create a collection with fromImages().
collectionFromImages = ee.ImageCollection.fromImages(
[ee.Image(3), ee.Image(4)])
print('collectionFromImages: ', collectionFromImages.getInfo())
# Merge two collections.
mergedCollection = collectionFromConstructor.merge(collectionFromImages)
print('mergedCollection: ', mergedCollection.getInfo())
# Create an ee.Geometry.
polygon = ee.Geometry.Polygon([
[[-35, -10], [35, -10], [35, 10], [-35, 10], [-35, -10]]
])
# Create a toy FeatureCollection
features = ee.FeatureCollection(
[ee.Feature(polygon, {'foo': 1}), ee.Feature(polygon, {'foo': 2})])
print(features.getInfo())
# Create an ImageCollection from the FeatureCollection
# by mapping a function over the FeatureCollection.
images = features.map(lambda feature: ee.Image(ee.Number(feature.get('foo'))))
# Print the resultant collection.
print('Image collection: ', images.getInfo())
```
## Display Earth Engine data layers
```
Map.addLayerControl()
Map
```
Note that in this example an `ImageCollection` is created by mapping a function that returns an `Image` over a `FeatureCollection`. Learn more about mapping in the [Mapping over an ImageCollection section](https://developers.google.com/earth-engine/ic_mapping.html). Learn more about feature collections from the [FeatureCollection section](https://developers.google.com/earth-engine/feature_collections.html).
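As a quick illustration of such mapping, here is a small sketch that reuses the `images` collection created above and adds a constant to every image in it:
```
# Map a simple function over the ImageCollection created above, adding 10 to every image.
plus_ten = images.map(lambda image: ee.Image(image).add(10))
print('Mapped collection: ', plus_ten.getInfo())
```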
| github_jupyter |
# Regarding this Notebook
This is a replication of the original analysis performed in the paper by [Waade & Enevoldsen 2020](missing). This replication script will not be updated, as it is intended for reproducibility. Any deviations from the paper are marked in bold for transparency.
Footnotes and internal documentation references are removed from this example to avoid confusion.
---
# 2.2 Using tomsup
One of the advantages of computational models of cognitive processes is that the implications of the model can be worked out by simulating the model's behavior in a variety of situations. tomsup, in particular, makes it possible to test the k-ToM model as it plays a wide set of game-theoretical situations (e.g. Matching Pennies or Prisoner's Dilemma), in interaction with a variety of different agents (e.g. other k-ToM or less sophisticated agents), within different possible settings (e.g. repeated interactions with the same opponent, or round robin tournaments). In order to better understand the setup of the tomsup package, we start with the case of two simple agents interacting, followed by a simple example using k-ToM agents, which will also illustrate how one might implement tomsup in an experiment. Lastly, we will show how to run a simulation using multiple agents as well as how to plot the evolving internal states of a k-ToM agent. In this simple scenario two agents are playing the Matching Pennies game. One agent hides a penny in one hand: let's say it chooses 0 for hiding in the left hand, and 1 in the right. The other agent has to guess where the penny is. If the second agent guesses (chooses the same hand as the first), it wins and the first loses. In other words, the first agent wants to choose the hand that the second will not choose and the second wants to choose the hand that the first chooses. In this example, one of the agents implements the Random Bias strategy (e.g. has a 60 percent probability of choosing right over left), while the other implements a classic Q-learning strategy (a model-free reinforcement learning mechanism updating the expected reward of choosing a specific option on a trial by trial basis). The full list of strategies already implemented in tomsup is accessible using the function `valid_agents()`. The user first has to install the tomsup package developed using python 3.6 (Van Rossum & Drake, 2009). The package can be downloaded and installed using pip:
```pip3 install tomsup```
**However, in this notebook we will assume the user simply cloned the GitHub repository. Feel free to skip the next code chunk if that is not the case.**
```
# assuming you are in the cloned repository folder, change the path - not relevant if tomsup is installed via pip
import os
os.chdir("..") # go out of the tutorials folder
```
Both approaches will also install the required dependencies. Now tomsup can be imported into Python as follows:
```
import tomsup as ts
```
We will also set an arbitrary seed to ensure reproducibility:
```
import random
import numpy as np
np.random.seed(1995)
random.seed(1995) # The year of birth of the first author
```
First we need to set up the Matching Pennies game. As different games are defined by different payoff matrices, we set up the game by creating the appropriate payoff matrix using the ```PayoffMatrix``` class.
```
# initiate the competitive matching pennies game
penny = ts.PayoffMatrix(name="penny_competitive")
# print the payoff matrix
print(penny)
```
The Matching Pennies game is a zero sum game, meaning that for one agent to get a reward, the opponent has to lose. Agents thus have to predict their opponents' behavior, which is ideal for investigating ToM. Note that to explore other payoff matrices included in the package, or to learn how to specify a custom payoff matrix, the user can type the `help(ts.PayoffMatrix)` command.
Then we create the first of the two competing agents:
```
# define the random bias agent, which chooses 1 70 percent of the time, and call the agent "jung"
jung = ts.RB(bias=0.7)
# Examine Agent
print(f"jung is a class of type: {type(jung)}")
if isinstance(jung, ts.Agent):
print(f"but jung is also an instance of the parent class ts.Agent")
# let us have Jung make a choice
choice = jung.compete()
print(f"jung chose {choice} and its probability for choosing 1 was {jung.get_bias()}.")
```
Note that it is possible to create one or more agents simultaneously using the convenient `create_agents()` function and passing any starting parameters to it in the form of a dictionary.
```
# create a reinforcement learning agent
skinner = ts.create_agents(agents="QL", start_params={"save_history": True})
```
Now that both agents are created, we have them play against each other.
```
# have the agents compete for 30 rounds
results = ts.compete(jung, skinner, p_matrix=penny, n_rounds=30)
# examine results
print(results.head()) # inspect the first 5 rows of the dataframe
```
**Note:** you can remove the `print()` to get a nicer printout of the dataframe.
```
results.head() # inspect the first 5 rows of the dataframe
```
The data frame stores the choice of each agent as well as their resulting payoff. Simply summing the payoff columns would determine the winner.
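For example, the winner of this 30-round game could be read off by summing each agent's payoff column. The column names `payoff_agent0` and `payoff_agent1` are assumed here from tomsup's results format and should be checked against the printout above:
```
# Sum each agent's payoffs over the 30 rounds (column names assumed from the results format)
print(results[["payoff_agent0", "payoff_agent1"]].sum())
```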
## k-ToM
Here we will present some simple examples of the k-ToM agent. For a more in-depth description we recommend checking the expanded introduction on the [Github repository](https://github.com/KennethEnevoldsen/tomsup/blob/master/tutorials/introduction_to_tom.ipynb).
We will start off by creating a 1-ToM with default priors and `save_history=True` to examine its inner workings. Notice that `save_history` is turned off by default to save memory, as memory use is especially problematic for ToM agents with a high sophistication level.
```
# Creating a simple 1-ToM with default parameters
tom_1 = ts.TOM(level=1, dilution=None, save_history=True)
# Extract the parameters
tom_1.print_parameters()
```
Note that k-ToM agents by default use agnostic starting beliefs. These can be shown in detail and specified as desired, as shown in **the appendix of the paper**.
To increase the agent's tendency to choose 1 we could simply increase its bias. Similarly, if we want the agent to behave in a more deterministic fashion we can decrease the behavioural temperature. When the parameter values are set, we can play the agent against an opponent using the `.compete()` method, where `agent` denotes the agent's position in the payoff matrix (0 or 1) and `op_choice` denotes the choice of the opponent during the previous round.
```
tom_2 = ts.TOM(
level=2,
volatility=-2,
b_temp=-2, # more deterministic
bias=0,
dilution=None,
save_history=True,
)
choice = tom_2.compete(p_matrix=penny, agent=0, op_choice=None)
print("tom_2 chose:", choice)
```
For simplicity, the user is recommended to have the 1-ToM and the 2-ToM agents compete using the previously presented `ts.compete()` function. However, to make the process more transparent, in the following we create a simple for-loop:
```
tom_2.reset() # reset before start
prev_choice_1tom = None
prev_choice_2tom = None
for trial in range(1, 4):
# note that op_choice is choice on previous turn
# and that agent is the agent you respond to in the payoff matrix
choice_1 = tom_1.compete(p_matrix=penny, agent=0, op_choice=prev_choice_1tom)
choice_2 = tom_2.compete(p_matrix=penny, agent=1, op_choice=prev_choice_2tom)
# update previous choice
prev_choice_1tom = choice_1
prev_choice_2tom = choice_2
print(
f"Round {trial}",
f" 1-ToM choose {choice_1}",
f" 2-ToM choose {choice_2}",
sep="\n",
)
```
A for loop like this can be used to implement k-ToM in an experimental setting by replacing one of the agents with the behavior of a participant. Examples of such implementations (interfacing with PsychoPy) are available in the [documentation](https://github.com/KennethEnevoldsen/tomsup/tree/master/tutorials/psychopy_experiment).
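As a rough sketch of that idea (purely illustrative; the linked PsychoPy examples show the full interface), the participant's key press simply takes the place of one of the simulated agents. Here a fresh agent `tom_vs_human` is created so the states of `tom_2` above are left untouched, and `input()` stands in for response collection:
```
# Hypothetical sketch: a human participant plays against a fresh 2-ToM agent.
tom_vs_human = ts.TOM(level=2)
prev_choice_human = None
for trial in range(1, 4):
    # In an experiment this would be a PsychoPy key press; input() stands in here.
    choice_human = int(input("Choose 0 or 1: "))
    # The k-ToM agent responds to the participant's choice from the previous round
    choice_tom = tom_vs_human.compete(p_matrix=penny, agent=1, op_choice=prev_choice_human)
    print(f"Round {trial}: participant chose {choice_human}, 2-ToM chose {choice_tom}")
    prev_choice_human = choice_human
```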
```
tom_2.print_internal(
keys=["p_k", "p_op"], level=[0, 1] # print these two states
) # for the agent simulated opponents 0-ToM and 1-ToM
```
For instance, we can note that the estimate of the opponent's sophistication level (`p_k`) slightly favors a 1-ToM as opposed to a 0-ToM and that the average probability of the opponent choosing one (`p_op`) slightly favors 1 (which was indeed the option the opponent chose). These estimates are quite uncertain due to the few rounds played. More information on how to interpret the internal states of the ToM agent is available in the documentation of the package, e.g. by using the help function `help(tom_2.print_internal)`
## Multiple Agents and Visualizing Results
The above syntax is useful for small setups. However, the user might want to build larger simulations involving several agents, for example to simulate data for an experimental setup or to test underlying assumptions. The package provides syntax for quickly iterating over multiple agents, rounds and even simulations. We will here show a quick example along with how to visualize the results and internal states of ToM agents.
```
# Create a list of agents
agents = ["RB", "QL", "WSLS", "1-TOM", "2-TOM"]
# And set their starting parameters. An empty dictionary denotes default values
start_params = [{"bias": 0.7}, {"learning_rate": 0.5}, {}, {}, {}]
group = ts.create_agents(agents, start_params) # create a group of agents
# Specify the environment
# round_robin e.g. each agent will play against all other agents
group.set_env(env="round_robin")
# Finally, we make the group compete 20 simulations of 30 rounds
results = group.compete(p_matrix=penny, n_rounds=30, n_sim=20, save_history=True)
```
Following the simulation, a data frame can be extracted as before, with additional columns reporting the simulation number and the competing agent pair (`agent0` and `agent1`); if `save_history=True`, it will also add two columns denoting the internal states of each agent, e.g. estimates and expectations at each trial.
```
res = group.get_results()
print(res.head(1)) # print the first row
```
**Again, removing the print statement gives you a more readable output**
```
res.head(1)
```
**To allow other authors to examine these results, we have also saved them to a newline-delimited .ndjson file.**
```
res.to_json("tutorials/paper.ndjson", orient="records", lines=True)
```
The package also provides convenient functions for plotting the agent's choices and performance.
> for nicer plots we will increase the figure size using the following code. This is excluded from the paper for simplicity
```
import matplotlib.pyplot as plt
# Set figure size
plt.rcParams["figure.figsize"] = [10, 10]
# plot a heatmap of the rewards for all agent in the tournament
group.plot_heatmap(cmap="RdBu_r")
plt.rcParams["figure.figsize"] = [5, 5]
# plot the choices of the 1-ToM agent when competing against the WSLS agent
group.plot_choice(agent0="WSLS", agent1="1-TOM", agent=1)
# plot the choices of the 1-ToM agent when competing against the WSLS agent
group.plot_choice(agent0="RB", agent1="1-TOM", agent=1)
# plot the score of the 1-ToM agent when competing against the WSLS agent
group.plot_score(agent0="WSLS", agent1="1-TOM", agent=1)
# plot the score of the 2-ToM agent when competing against the WSLS agent
group.plot_score(agent0="WSLS", agent1="2-TOM", agent=1)
```
As seen in the heatmap, the k-ToM model compares favorably against simpler agents such as the QL agent. Furthermore, notice that the 1-ToM and 2-ToM compare especially favorably against the WSLS agent, as this agent acts as a deterministic 0-ToM. Similarly, we see that the 2-ToM agent incurs a cost for being more complex, being less able to take advantage of the deterministic nature of WSLS. We can examine this further in the figures, where we see that the 1-ToM is almost perfectly able to predict the behaviour of the WSLS agent after turn 5 across simulations, while the 2-ToM takes longer to estimate the behaviour. The figures also show that the 1-ToM exhibits different behavioural patterns depending on its opponent: when playing against an RB agent it shows bias-estimation behaviour, while when playing against the WSLS it shows an oscillating choice pattern. Ultimately these plots are meant for initial investigation, and more elaborate plots can be constructed from the results data frame.
> here we just refer to the figures, for more exact references please see the paper
Besides these general plots the package also contains a series of shortcuts for plotting $k$-ToM's internal states, such as its estimate of its opponent's sophistication level; here we see that the 2-ToM correctly estimates its opponent as having a sophistication level of 1 on average.
```
# plot 2-ToM estimate of its opponent sophistication level
group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=0)
group.plot_p_k(agent0="1-TOM", agent1="2-TOM", agent=1, level=1)
```
It is also easy to plot k-ToM's estimates of its opponent's model parameters. As an example, the following code plots the 2-ToM's estimate of 1-ToM's volatility and bias. We see that the ToM agent approaches a correct estimate of the default volatility of -2 as well as correctly estimating its opponent as having no inherent bias.
```
# plot 2-ToM estimate of its opponent's volatility while believing the opponent to be level 1.
group.plot_tom_op_estimate(
agent0="1-TOM", agent1="2-TOM", agent=1, estimate="volatility", level=1, plot="mean"
)
# plot 2-ToM estimate of its opponent's bias while believing the opponent to be level 1.
group.plot_tom_op_estimate(
agent0="1-TOM", agent1="2-TOM", agent=1, estimate="bias", level=1, plot="mean"
)
```
Use `help(ts.AgentGroup.plot_tom_op_estimate)` for information on how to plot the other estimated parameters or k-ToM's uncertainty in these parameters.
Additional information can be found in the history column in the results data frame, if needed. This includes all of k-ToM's internal states (the changing variables in the model), which for example include choice probability, gradient, and estimate uncertainties, as well as k-ToM's estimates of its opponent's internal states. Documentation, examples and further tutorials can be found on the Github repository, which also includes a more in-depth description of the dynamics of **the k-ToM model implementation**.
---
## Are you left with any questions?
Feel free to open a GitHub issue with questions and/or bug reports.
Best,
*Enevoldsen and Waade*
| github_jupyter |
<a href="https://colab.research.google.com/github/robertozerbini/blog/blob/add-license-1/Roberto_Zerbini's_Blog_Polynomial_Regression.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import math
import sklearn
axes = [-2.5, 2.5, -3.5, 3.5]
```
#Get Data
```
#generate data
def get_data(m):
np.random.seed(3)
X = np.random.randn(m, 1)
y = np.sin(3 * X) + np.random.randn(m, 1) *.5
return X,y
X,y = get_data(500)
fig, ax = plt.subplots()
plt.title('Raw Data')
ax.plot(X, y, "b.")
ax.axis(axes)
ax.set_xlabel("X")
ax.set_ylabel("y")
plt.show()
```
#Train Linear Model
```
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
from sklearn.linear_model import LinearRegression
lin_reg = LinearRegression()
lin_reg.fit(X_train, y_train)
from sklearn.metrics import mean_squared_error
y_test_prediction = lin_reg.predict(X_test)
lin_mse_test = mean_squared_error(y_test, y_test_prediction)
lin_mse_test
fig, ax = plt.subplots()
plt.title('Prediction Linear Model')
plt.axis(axes)
ax.plot(X_test, y_test_prediction, "r*", label = 'Prediction')
ax.plot(X_test, y_test, "b.", label = 'Test Data')
ax.axis(axes)
ax.set_xlabel("X")
ax.set_ylabel("y")
plt.legend(loc="upper left", fontsize=8)
plt.show()
```
#Train Polynomial Model
```
from sklearn.preprocessing import PolynomialFeatures
degree = 8
pl = PolynomialFeatures(degree = degree)
X_train_p = pl.fit_transform(X_train)
X_test_p = pl.fit_transform(X_test)
print('Number of features X = {} Number of features X_poly = {}'.format(X_train.shape[1], X_train_p.shape[1]))
poly_reg = LinearRegression()
poly_reg.fit(X_train_p, y_train)
y_prediction_p = poly_reg.predict(X_train_p)
poly_mse_train = mean_squared_error(y_train, y_prediction_p)
y_test_prediction_p = poly_reg.predict(X_test_p)
poly_mse_test = mean_squared_error(y_test, y_test_prediction_p)
print("MSE linear: {} MSE polynomial: {} %difference {}".format(lin_mse_test, poly_mse_test, 1- poly_mse_test / lin_mse_test))
fig, ax = plt.subplots()
plt.title('Prediction Polynomial Model (degree ' + str(degree) + ')')
plt.axis(axes)
ax.plot(X_test, y_test_prediction_p, "r*", label = 'Prediction')
ax.plot(X_test, y_test, "b.", label = 'Test Data')
ax.axis(axes)
ax.set_xlabel("X")
ax.set_ylabel("y")
plt.legend(loc="upper left", fontsize=8)
plt.show()
```
#Variance Analysis
```
percent_diff = []
mse_train = []
mse_test = []
iter = 17
for d in range(iter):
pl = PolynomialFeatures(degree = d)
X_train_p = pl.fit_transform(X_train)
X_test_p = pl.fit_transform(X_test)
lin_reg = LinearRegression()
lin_reg.fit(X_train_p, y_train)
y_prediction_p = lin_reg.predict(X_train_p)
lin_mse_train_p = mean_squared_error(y_train, y_prediction_p)
y_test_prediction_p = lin_reg.predict(X_test_p)
lin_mse_test_p = mean_squared_error(y_test, y_test_prediction_p)
mse_train.append(lin_mse_train_p)
mse_test.append(lin_mse_test_p)
percent_diff.append(1- lin_mse_test_p / lin_mse_train_p)
fig, ax = plt.subplots()
ax.plot(np.arange(iter), mse_train, "b-*", label = 'MSE Train')
ax.plot(np.arange(iter), mse_test, "y-*", label = 'MSE Test')
ax.set_xlabel("Polynomial Degree")
ax.set_ylabel("MSE")
plt.legend(loc="upper left", fontsize=8)
plt.title('Model Variance')
plt.show()
```
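A natural follow-up (a small sketch, not part of the original post) is to read off the degree with the lowest test error from the curves above:
```
# Degree with the lowest test MSE (degrees 0..16 were evaluated in the loop above)
best_degree = int(np.argmin(mse_test))
print(best_degree, mse_test[best_degree])
```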
| github_jupyter |
```
import keras
from IPython.display import SVG
from keras.optimizers import Adam
from keras.utils.vis_utils import model_to_dot
from tqdm import tqdm
from keras import backend as K
from keras.preprocessing.text import Tokenizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.feature_extraction.text import CountVectorizer
from numpy import array
from numpy import asarray
from numpy import zeros
from keras.preprocessing.text import Tokenizer
from keras.preprocessing.sequence import pad_sequences
from keras.models import Sequential
from keras.layers import Dense
from keras.layers import Flatten
from keras.layers import Embedding
from sklearn.metrics import mean_squared_error
from sklearn.metrics import classification_report, confusion_matrix, accuracy_score
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import warnings
warnings.filterwarnings('ignore')
import seaborn as sns
%matplotlib inline
sns.set(style='whitegrid', palette='muted', font_scale=1.2)
df_bills = pd.read_csv('../data/bill_all.csv')
print(df_bills.columns)
df_bills.tail()
df_final = pd.read_csv('../data/df_vote_final.csv')
df_final.tail()
print(len(df_final.name.unique()))
print(len(df_final.sponsor_id.unique()))
df_final.columns
dataset = df_final[['name', 'legis_num', 'vote']]
dataset['bill_id'] = dataset.legis_num.astype('category').cat.codes.values
dataset['name_id'] = dataset.name.astype('category').cat.codes.values
dataset['vote'] = dataset.vote.astype('category').cat.codes.values
# dataset.drop(columns=['name', 'legis_num'], inplace=True)
dataset = dataset.sample(frac=0.5, replace=True)
dataset.reset_index(inplace=True)
dataset.tail()
import gensim
from gensim import utils
word2vec_model = gensim.models.KeyedVectors.load_word2vec_format('/home/sonic/.keras/datasets/GoogleNews-vectors-negative300.bin',
binary=True)
%%time
import nltk
max_words = 20000
MAX_SEQUENCE_LENGTH = 1000
def process_doc(X):
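    # Tokenize and pad the bill texts, build count/tf-idf features, and average word2vec vectors per bill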
tokenizer = Tokenizer(num_words=max_words,lower=True, split=' ',
filters='"#%&()*+-/<=>@[\\]^_`{|}~\t\n',
char_level=False, oov_token=u'<UNK>')
X_text = X['billText'].values
tokenizer.fit_on_texts(X_text)
print(X.shape)
X_seq = np.array(tokenizer.texts_to_sequences(X_text))
X_seq = pad_sequences(X_seq, maxlen=MAX_SEQUENCE_LENGTH, padding='post')
print('X_seq', X_seq.shape)
count_vect = CountVectorizer()
X_train_counts = count_vect.fit_transform(X_text)
tf_transformer = TfidfTransformer().fit(X_train_counts)
X_train_tf = tf_transformer.transform(X_train_counts)
x_emb = {}
# tokens = nltk.word_tokenize(list(X))
# print('tokens.shape', tokens.shape)
for idx, doc in tqdm(X.iterrows()): #look up each doc in model
# print(doc['legis_num'], doc['billText'])
x_emb[doc['legis_num']] = document_vector(word2vec_model, nltk.word_tokenize(doc['billText'].lower()))
word_index = tokenizer.word_index
print('Found %s unique tokens.' % len(word_index))
return np.array(X_seq), word_index, x_emb, X_train_tf, X_train_counts
def document_vector(word2vec_model, doc):
# remove out-of-vocabulary words
doc = [word for word in doc if word in word2vec_model.vocab]
return np.mean(word2vec_model[doc], axis=0)
def has_vector_representation(word2vec_model, doc):
"""check if at least one word of the document is in the
word2vec dictionary"""
return not all(word not in word2vec_model.vocab for word in doc)
df_bills['billText'] = df_bills['billText'].apply(str)
X_seq, word_index, X_emb, X_train_tf, X_train_counts = process_doc(df_bills)
# df_bills['X_seq'] = X_seq
# df_bills['X_emb'] = X_emb
# df_bills['X_train_tf'] = X_train_tf
# df_bills['X_train_counts'] = X_train_counts
# print(X_emb.shape)
print(X_emb['H R 5010'].shape)
# print(X_emb.item()['H R 5010'])
# print(X_emb.shape)
df_new = pd.DataFrame(X_emb)
# df_new['legis_num'] = df_bills['legis_num']
# df_new = df_new.drop_duplicates('legis_num', keep=False)
# df_new.set_index('legis_num')
df_new.reset_index(inplace=True, drop=True)
df_new.tail()
len(dataset['name_id'].unique())
from sklearn.model_selection import train_test_split
train, test = train_test_split(dataset, test_size=0.2)
print()
print('train', train.shape)
print('test', test.shape)
train.head()
# y_train.head()
n_users, n_bill = len(dataset.name_id.unique()), len(dataset.bill_id.unique())
n_latent_factors = 50
EMBEDDING_DIM = 100
print('number of legislators:', n_users)
print('number of bills', n_bill)
def plot_history(history):
# print(history.history)
df = pd.DataFrame(history.history)
print(df.describe())
df.plot(xticks=range(epochs))
# print(history.history.keys())
#plot data
fig, ax = plt.subplots(figsize=(15,7))
# print(dataset.groupby(['name'])['legis_num'].count())
print()
dataset.groupby(['name_id'])['legis_num'].count().plot(kind='hist', bins=100, alpha=0.5)
# dataset.groupby(['name']).count()['legis_num'].plot(ax=ax, kind='hist', bins=100, alpha=0.5)
plt.show()
# print(dataset.groupby(['legis_num'])['name_id'].count())
# dataset.groupby(['legis_num'])['name_id'].count().plot(kind='hist', bins=10, alpha=0.5)
! ls /home/sonic/.keras/datasets/glove.6B.100d.txt  # check that the GloVe embeddings file exists
# load the whole embedding into memory
embeddings_index = dict()
f = open('/home/sonic/.keras/datasets/glove.6B.100d.txt')
for line in f:
values = line.split()
word = values[0]
    coefs = np.asarray(values[1:], dtype='float32')
embeddings_index[word] = coefs
f.close()
print('Loaded %s word vectors.' % len(embeddings_index))
vocab_size = len(word_index) + 1
print(len(word_index))
# create a weight matrix for words in training docs
embedding_matrix = np.zeros((vocab_size, EMBEDDING_DIM))
for word, i in word_index.items():
embedding_vector = embeddings_index.get(word)
if embedding_vector is not None:
embedding_matrix[i] = embedding_vector
# TODO
# Per congress
%%time
# KRAFT
print(len(word_index))
def getKraftEmbeddingModel():
# define the model
model = Sequential()
model.add(Embedding(vocab_size, 100,
weights=[embedding_matrix],
trainable=True,
input_length=MAX_SEQUENCE_LENGTH))
print('before flatten', model.output_shape)
model.add(Flatten())
print('after flatten', model.output_shape)
model.add(Dense(1, activation='sigmoid'))
print('dense shape', model.output_shape)
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
# summarize the model
print(model.summary())
return model
model = getKraftEmbeddingModel()
from keras.initializers import glorot_uniform # Or your initializer of choice
from keras import backend as K  # backend session used by reset_weights() below
from tqdm import tqdm
def reset_weights(model):
session = K.get_session()
for layer in model.layers:
if hasattr(layer, 'kernel_initializer'):
layer.kernel.initializer.run(session=session)
def getEmbeddingModel():
# define model
model = Sequential()
e = Embedding(300, EMBEDDING_DIM, input_length=300, name='embedding_layer', trainable=True)
model.add(e)
model.add(Flatten())
model.add(Dense(1, activation='sigmoid', name='pred'))
# compile the model
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
return model
# print(embedding_matrix.shape)
# print(vocab_size)
def getDataset(df):
dataset = df[['name', 'legis_num', 'vote']]
dataset['bill_id'] = dataset.legis_num.astype('category').cat.codes.values
dataset['name_id'] = dataset.name.astype('category').cat.codes.values
dataset['vote'] = dataset.vote.astype('category').cat.codes.values
# dataset.drop(columns=['name', 'legis_num'], inplace=True)
dataset = dataset.sample(frac=0.5, replace=True)
dataset.reset_index(inplace=True)
return dataset
def runModel(df):
embeddinwg_learnt_all = {}
accuracy_all = {}
for name, group in df.groupby(['name_id']):
print(name, group.iloc[0]['name'])
labels = []
padded_docs = []
y = []
for ind, vote in group.iterrows():
padded_docs.append(X_emb[vote['legis_num']])
labels.append(vote['vote'])
padded_docs = np.array(padded_docs)
labels = np.array(labels)
reset_weights(model)
# fit the model
history = model.fit(padded_docs, labels, epochs=epochs, verbose=0)
# plot_history(history)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
accuracy_all[name] = {'loss' : loss, 'accuracy' : accuracy}
# print('Accuracy: %f' % (accuracy*100))
embeddinwg_learnt_all[name] = model.get_layer(name='embedding_layer').get_weights()[0]
return embeddinwg_learnt_all
grouped_congress = df_final.groupby('congress')
for name, group in grouped_congress:
print('Processing congress', name)
print('congress shape', group.shape)
df_votes_filtered = df_final[df_final['congress'] == name]
    num_legislators = len(df_votes_filtered['name'].unique())
    print('number of legislators', num_legislators)
dataset = getDataset(df_votes_filtered)
train, test = train_test_split(dataset, test_size=0.2)
print('train', train.shape)
print('test', test.shape)
train.head()
# Run the embedding model
embeddinwg_learnt_all = runModel(train)
break
%%time
epochs = 20
model = getEmbeddingModel()
reset_weights(model)
embeddinwg_learnt_all = {}
accuracy_all = {}
i = 0
for name, group in train.groupby(['name_id']):
print(name, group.iloc[0]['name'])
labels = []
padded_docs = []
y = []
for ind, vote in group.iterrows():
# padded_docs.append(df_new[df_new['legis_num'] == vote['legis_num']].iloc[:,:-1])
padded_docs.append(X_emb[vote['legis_num']])
labels.append(vote['vote'])
padded_docs = np.array(padded_docs)
labels = np.array(labels)
# print('X', padded_docs.shape)
# print('y', labels.shape)
# print(len(padded_docs[0]))
reset_weights(model)
#
# summarize the model
# print(model.summary())
# fit the model
history = model.fit(padded_docs, labels, epochs=epochs, verbose=0)
# plot_history(history)
# evaluate the model
loss, accuracy = model.evaluate(padded_docs, labels, verbose=0)
accuracy_all[name] = {'loss' : loss, 'accuracy' : accuracy}
# print('Accuracy: %f' % (accuracy*100))
embeddinwg_learnt_all[name] = model.get_layer(name='embedding_layer').get_weights()[0]
# print('pred', model.get_layer(name='pred').get_weights()[0].shape)
# print('embedding_learnt.shape', embedding_learnt.shape)
i+=1
# if (5 == i):
# break
print(embeddinwg_learnt_all[0].shape)
np.save('../data/embeddinwg_learnt_all.npy', embeddinwg_learnt_all)
df_performance = pd.DataFrame(accuracy_all)
print('average accuracy', df_performance.loc['accuracy'].mean())
print('average loss', df_performance.loc['loss'].mean())
embeddinwg_learnt_all = np.load('../data/embeddinwg_learnt_all.npy')
embeddinwg_learnt_all.item()[0].shape
len(embeddinwg_learnt_all.item())
# 3 inputs
# bill emb (4062, 50) <- CF
# legislator emb (1118, 50) <- CF
# legislator_bill emb (1118, 300, 100) <- embedding
```
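Before building the individual models below, here is a minimal, purely illustrative sketch of how the three inputs listed in the comments above could be combined into one shared Keras model: the collaborative-filtering embeddings for legislators and bills plus the 300-dimensional word2vec document vector of the bill text. The layer sizes, layer names, and the sigmoid head are assumptions for illustration, not part of the analysis in this notebook.
```
# Hypothetical shared model: legislator id + bill id + 300-d bill-text vector.
# Layer sizes and names are illustrative assumptions.
import keras

legislator_input = keras.layers.Input(shape=[1], name='Legislator')
bill_input = keras.layers.Input(shape=[1], name='Bill')
text_input = keras.layers.Input(shape=[300], name='BillTextVector')

legislator_vec = keras.layers.Flatten()(
    keras.layers.Embedding(n_users + 1, n_latent_factors, name='Shared-Legislator-Embedding')(legislator_input))
bill_vec = keras.layers.Flatten()(
    keras.layers.Embedding(n_bill + 1, n_latent_factors, name='Shared-Bill-Embedding')(bill_input))

# concatenate the two learned id embeddings with the precomputed text vector
merged = keras.layers.concatenate([legislator_vec, bill_vec, text_input], name='SharedConcat')
hidden = keras.layers.Dense(100, activation='relu', name='SharedHidden')(merged)
vote_pred = keras.layers.Dense(1, activation='sigmoid', name='VotePrediction')(hidden)

shared_model = keras.Model([legislator_input, bill_input, text_input], vote_pred)
shared_model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['acc'])
shared_model.summary()
```
At training time the text input would be fed the word2vec document vectors stored in `X_emb`, looked up by `legis_num` for each vote.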
# Matrix Factorisation
```
def get_matrix_factorisation():
movie_input = keras.layers.Input(shape=[1],name='Item')
movie_embedding = keras.layers.Embedding(n_bill + 1, n_latent_factors, name='Movie-Embedding')(movie_input)
movie_vec = keras.layers.Flatten(name='FlattenMovies')(movie_embedding)
user_input = keras.layers.Input(shape=[1],name='User')
user_vec = keras.layers.Flatten(name='FlattenUsers')(keras.layers.Embedding(n_users + 1,
n_latent_factors,
name='User-Embedding')(user_input))
prod = keras.layers.dot([movie_vec, user_vec], axes=1, name='DotProduct')
model = keras.Model([user_input, movie_input], prod)
model.compile('adam', 'mean_squared_error')
# SVG(model_to_dot(model, show_shapes=True, show_layer_names=True, rankdir='HB').create(prog='dot', format='svg'))
return model
epochs=5
model = get_matrix_factorisation()
model.summary()
history = model.fit([train.name_id, train.bill_id],
train.vote, epochs=epochs, verbose=1)
plot_history(history)
y_hat = np.round(model.predict([test.name_id, test.bill_id]),0)
y_true = test.vote
from sklearn.metrics import mean_absolute_error
mean_absolute_error(y_true, y_hat)
movie_embedding_learnt = model.get_layer(name='Movie-Embedding').get_weights()[0]
user_embedding_learnt = model.get_layer(name='User-Embedding').get_weights()[0]
print('movie_embedding_learnt.shape', movie_embedding_learnt.shape)
print('user_embedding_learnt.shape', user_embedding_learnt.shape)
pd.DataFrame(movie_embedding_learnt).describe()
```
# SHARED MODEL
# Non-negative Matrix factorisation (NNMF) in Keras
```
from keras.constraints import non_neg
def get_NNMF():
movie_input = keras.layers.Input(shape=[1],name='Item')
movie_embedding = keras.layers.Embedding(n_bill + 1, n_latent_factors,
name='NonNegMovie-Embedding', embeddings_constraint=non_neg())(movie_input)
movie_vec = keras.layers.Flatten(name='FlattenMovies')(movie_embedding)
user_input = keras.layers.Input(shape=[1],name='User')
user_vec = keras.layers.Flatten(name='FlattenUsers')(keras.layers.Embedding(n_users + 1, n_latent_factors,
name='NonNegUser-Embedding',embeddings_constraint=non_neg())(user_input))
prod = keras.layers.dot([movie_vec, user_vec], axes=1,name='DotProduct')
model = keras.Model([user_input, movie_input], prod)
model.compile('adam', 'mean_squared_error')
return model
model = get_NNMF()
print(model.summary())
history_nonneg = model.fit([train.name_id, train.bill_id],
train.vote, epochs=epochs, verbose=1)
plot_history(history_nonneg)
movie_embedding_learnt = model.get_layer(name='NonNegMovie-Embedding').get_weights()[0]
print(movie_embedding_learnt.shape)
# pd.DataFrame(movie_embedding_learnt).describe()
```
# Neural networks for recommendation
```
n_latent_factors_user = 5
n_latent_factors_movie = 50
def get_neural_net():
movie_input = keras.layers.Input(shape=[1],name='Item')
movie_embedding = keras.layers.Embedding(n_bill + 1, n_latent_factors_movie, name='Movie-Embedding')(movie_input)
movie_vec = keras.layers.Flatten(name='FlattenMovies')(movie_embedding)
movie_vec = keras.layers.Dropout(0.2)(movie_vec)
user_input = keras.layers.Input(shape=[1],name='User')
user_vec = keras.layers.Flatten(name='FlattenUsers')(keras.layers.Embedding(n_users + 1, n_latent_factors_user,name='User-Embedding')(user_input))
user_vec = keras.layers.Dropout(0.2)(user_vec)
    concat = keras.layers.concatenate([movie_vec, user_vec], name='Concat')
    # chain the dense and dropout layers so that each dropout feeds the next dense layer
    concat_dropout = keras.layers.Dropout(0.2, name='ConcatDropout')(concat)
    dense = keras.layers.Dense(200, name='FullyConnected')(concat_dropout)
    dropout_1 = keras.layers.Dropout(0.2, name='Dropout-1')(dense)
    dense_2 = keras.layers.Dense(100, name='FullyConnected-1')(dropout_1)
    dropout_2 = keras.layers.Dropout(0.2, name='Dropout-2')(dense_2)
    dense_3 = keras.layers.Dense(50, name='FullyConnected-2')(dropout_2)
    dropout_3 = keras.layers.Dropout(0.2, name='Dropout-3')(dense_3)
    dense_4 = keras.layers.Dense(20, name='FullyConnected-3', activation='relu')(dropout_3)
    result = keras.layers.Dense(1, activation='relu', name='Activation')(dense_4)
adam = Adam(lr=0.005)
model = keras.Model([user_input, movie_input], result)
model.compile(optimizer=adam,loss= 'mean_absolute_error')
# SVG(model_to_dot(model, show_shapes=True, show_layer_names=True, rankdir='HB').create(prog='dot', format='svg'))
return model
model = get_neural_net()
model.summary()
history = model.fit([train.name_id, train.bill_id], train.vote,
epochs=epochs, verbose=1)
plot_history(history)
y_hat_2 = np.round(model.predict([test.name_id, test.bill_id]),0)
print(mean_absolute_error(y_true, y_hat_2))
print(mean_absolute_error(y_true, model.predict([test.name_id, test.bill_id])))
movie_embedding_learnt = model.get_layer(name='Movie-Embedding').get_weights()[0]
print(movie_embedding_learnt.shape)
```
| github_jupyter |
# Building the dataset
In this notebook, I'm going to be working with three datasets to create the dataset that the chatbot will be trained on.
```
import pandas as pd
files_path = 'D:/Sarcastic Chatbot/Input/'
```
# First dataset
**The Wordball Joke Dataset**, [link](https://www.kaggle.com/bfinan/jokes-question-and-answer/).
This dataset consists of three files, namely:
1. <i>qajokes1.1.2.csv</i>: with <i>75,114</i> pairs.
2. <i>t_lightbulbs.csv</i>: with <i>2,640</i> pairs.
3. <i>t_nosubject.csv</i>: with <i>32,120</i> pairs.
However, I'm not going to incorporate <i>t_lightbulbs.csv</i> in my dataset because I don't want that many examples of one topic. Besides, all the examples are similar in structure (they all start with <i>how many</i>).
Read the data files into pandas dataframes:
```
wordball_qajokes = pd.read_csv(files_path + 'qajokes1.1.2.csv', usecols=['Question', 'Answer'])
wordball_nosubj = pd.read_csv(files_path + 't_nosubject.csv', usecols=['Question', 'Answer'])
print(len(wordball_qajokes))
print(len(wordball_nosubj))
wordball_qajokes.head()
wordball_nosubj.head()
```
Concatenate both dataframes into one:
```
wordball = pd.concat([wordball_qajokes, wordball_nosubj], ignore_index=True)
wordball.head()
print(f"Number of question-answer pairs in the Wordball dataset: {len(wordball)}")
```
## Text Preprocessing
It turns out that not all cells are of type string. So, we can just apply the *str* function to make sure that all of them are of the same desired type.
```
wordball = wordball.applymap(str)
```
Let's look at the characters used in this dataset:
```
def distinct_chars(data, cols):
"""
This method takes in a pandas dataframe and prints all distinct characters.
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
"""
if cols is None:
cols = list(data.columns)
# join all questions into one string
questions = ' '.join(data[cols[0]])
# join all answers into one string
answers = ' '.join(data[cols[1]])
# get distinct characters used in the data (all questions and answers)
dis_chars = set(questions+answers)
# print the distinct characters that are used in the data
print(f"Number of distinct characters used in the dataset: {len(dis_chars)}")
# print(dis_chars)
dis_chars = list(dis_chars)
# Now let's print those characters in an organized way
digits = [char for char in dis_chars if char.isdigit()]
alphabets = [char for char in dis_chars if char.isalpha()]
special = [char for char in dis_chars if not (char.isdigit() | char.isalpha())]
# sort them to make them easier to read
digits = sorted(digits)
alphabets = sorted(alphabets)
special = sorted(special)
print(f"Digits: {digits}")
print(f"Alphabets: {alphabets}")
print(f"Special characters: {special}")
distinct_chars(wordball, ['Question', 'Answer'])
```
The following function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
def clean_text(text):
"""
This method takes a string, applies different text preprocessing (characters replacement, removal of unwanted characters,
removal of extra whitespaces) operations and returns a string.
text: a string.
"""
import re
text = str(text)
# REPLACEMENT
# replace " with ' (because they basically mean the same thing)
# text = text.replace('\"','\'')
text = re.sub('\"', '\'', text)
    # replace “ and ” (curly double quotes) with '
    # text = text.replace("“", '\'').replace("”", '\'')
    text = re.sub("“", '\'', text)
    text = re.sub("”", '\'', text)
    # replace ’ (curly apostrophe) with '
    # text = text.replace('’', '\'')
    text = re.sub('’', '\'', text)
# replace [] and {} with ()
#text = text.replace('[','(').replace(']',')').replace('{','(').replace('}',')')
text = re.sub('\[','(', text)
text = re.sub('\]',')', text)
text = re.sub('\{','(', text)
text = re.sub('\}',')', text)
# replace ? with itself and a whitespace preceding it
# ex. what's your name? (we want the word name and question mark to be separate tokens)
# text = re.sub('\?', ' ?', text)
# creating a space between a word and the punctuation following it
# punctuation we're using: . , : ; ' ? ! + - * / = % $ @ & ( )
text = re.sub("([?.!,:;'?!+\-*/=%$@&()])", r" \1 ", text)
# REMOVAL OF UNWANTED CHARACTERS
# accept only alphanumeric and some special characters and remove all others
# a-zA-Z0-9 : matches any alphanumeric character and the underscore.
# \. : matches .
# \, : matches ,
# \: : matches :
# \; : matches ;
# \' : matches '
# \? : matches ?
# \! : matches !
# \+ : matches +
# \- : matches -
# \* : matches *
# \/ : matches /
# \= : matches =
# \% : matches %
# \$ : matches $
# \@ : matches @
# \& : matches &
# ^ is added to the beginning of the set to express that we want the regex to recognize all other characters except
# these that are explicitly specified, so that we can omit them.
# define the pattern
pattern = re.compile('[^a-zA-Z0-9_\.\,\:\;\'\?\!\+\-\*\/\=\%\$\@\&\(\)]')
# remove unwanted characters
text = re.sub(pattern, ' ', text)
# lower case the characters in the string
text = text.lower()
# REMOVAL OF EXTRA WHITESPACES
# remove duplicated spaces
text = re.sub(' +', ' ', text)
# remove leading and trailing spaces
text = text.strip()
return text
```
Let's try it out:
```
clean_text("A nice quote I read today: โEverything that you are going through is preparing you for what you asked forโ. @hi % & =+-*/")
```
The following method prints a question-answer pair from the dataset; it will be helpful to give us a sense of what the *clean_text* function results in:
```
def print_question_answer(df, index, cols):
print(f"Question: ({index})")
print(df.loc[index][cols[0]])
print(f"Answer: ({index})")
print(df.loc[index][cols[1]])
print("Before applying text preprocessing:")
print_question_answer(wordball, 102, ['Question', 'Answer'])
print_question_answer(wordball, 200, ['Question', 'Answer'])
print_question_answer(wordball, 88376, ['Question', 'Answer'])
print_question_answer(wordball, 94351, ['Question', 'Answer'])
```
Apply text preprocessing (characters replacement, removal of unwanted characters, removal of extra whitespaces):
```
wordball = wordball.applymap(clean_text)
print("After applying text preprocessing:")
print_question_answer(wordball, 102, ['Question', 'Answer'])
print_question_answer(wordball, 200, ['Question', 'Answer'])
print_question_answer(wordball, 88376, ['Question', 'Answer'])
print_question_answer(wordball, 94351, ['Question', 'Answer'])
```
The following function applies some preprocessing operations on the data, concretely:
1. Drops unnecessary duplicate pairs (rows) but keeps only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has less than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
def preprocess_data(data, cols):
"""
    This method preprocesses the data and does the following:
    1. drops unnecessary duplicate pairs.
    2. drops rows with empty strings.
    3. drops rows with more than 30 words in either the question or the answer,
       or if the answer has less than two characters.
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
    # (1) Remove unnecessary duplicate pairs but keep only one instance of all duplicates.
    print('Removing unnecessary duplicate pairs:')
data_len_before = len(data) # len of data before removing duplicates
print(f"# of examples before removing duplicates: {data_len_before}")
# drop duplicates
data = data.drop_duplicates(keep='first')
data_len_after = len(data) # len of data after removing duplicates
print(f"# of examples after removing duplicates: {data_len_after}")
print(f"# of removed duplicates: {data_len_before-data_len_after}")
# (2) Drop rows with empty strings.
print('Removing empty string rows:')
if cols is None:
cols = list(data.columns)
data_len_before = len(data) # len of data before removing empty strings
print(f"# of examples before removing rows with empty question/answers: {data_len_before}")
# I am going to use boolean masking to filter out rows with an empty question or answer
data = data[(data[cols[0]] != '') & (data[cols[1]] != '')]
# also, the following row results in the same as the above.
# data = data.query('Answer != "" and Question != ""')
data_len_after = len(data) # len of data after removing empty strings
print(f"# of examples after removing with empty question/answers: {data_len_after}")
print(f"# of removed empty string rows: {data_len_before-data_len_after}")
# (3) Drop rows with more than 30 words in either the question or the answer
    # or if the answer has less than two characters.
def accepted_length(qa_pair):
q_len = len(qa_pair[0].split(' '))
a_len = len(qa_pair[1].split(' '))
if (q_len <= 30) & ((a_len <= 30) & (len(qa_pair[1]) > 1)):
return True
return False
print('Removing rows with more than 30 words in either the question or the answer:')
data_len_before = len(data) # len of data before dropping those rows (30+ words)
print(f"# of examples before removing rows with more than 30 words: {data_len_before}")
# filter out rows with more than 30 words
accepted_mask = data.apply(accepted_length, axis=1)
data = data[accepted_mask]
    data_len_after = len(data) # len of data after dropping those rows (30+ words)
    print(f"# of examples after removing rows with more than 30 words: {data_len_after}")
    print(f"# of removed rows with more than 30 words: {data_len_before-data_len_after}")
print("Data preprocessing is done.")
return data
wordball = preprocess_data(wordball, ['Question', 'Answer'])
print(f"# of question-answer pairs we have left in the Wordball dataset: {len(wordball)}")
```
Let's look at the characters after cleaning the data:
```
distinct_chars(wordball, ['Question', 'Answer'])
```
# Second Dataset
**reddit /r/Jokes**, [here](https://www.kaggle.com/cuddlefish/reddit-rjokes#jokes_score_name_clean.csv).
This dataset consists of two files, namely:
1. <i>jokes_score_name_clean.csv</i>: with <i>133,992</i> pairs.
2. <i>all_jokes.csv</i>
However, I'm not going to incorporate <i>all_jokes.csv</i> in the dataset because it's so messy.
```
reddit_jokes = pd.read_csv(files_path + 'jokes_score_name_clean.csv', usecols=['q', 'a'])
```
Let's rename the columns to have them aligned with the previous dataset:
```
reddit_jokes.rename(columns={'q':'Question', 'a':'Answer'}, inplace=True)
reddit_jokes.head()
print(len(reddit_jokes))
distinct_chars(reddit_jokes, ['Question', 'Answer'])
```
## Text Preprocessing
```
reddit_jokes = reddit_jokes.applymap(str)
```
Reddit data has some special tags like <i>[removed]</i> or <i>[deleted]</i> (these two mean that the comment has been removed/deleted). Also, they're written in an inconsistent way, i.e. you may find the tag <i>[removed]</i> capitalized or lowercased.<br>
The next function will address reddit tags as follows:
1. Drops rows with deleted, removed or censored tags.
2. Replaces other tags found in text with a whitespace. *(i.e. some comments have tags like <i>[censored], [gaming], [long], [request] and [dirty]</i> and we want to omit these tags from the text)*
```
def clean_reddit_tags(data, cols):
"""
This function removes reddit-related tags from the data and does the following:
1. drops rows with deleted, removed or censored tags.
2. replaces other tags found in text with a whitespace.
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
import re
if cols is None:
cols = list(data.columns)
# First, I'm going to lowercase all the text to address these tags
# however, I'm not going to alter the original dataframe because I don't want text to be lowercased.
data_copy = data.copy()
data_copy[cols[0]] = data_copy[cols[0]].str.lower()
data_copy[cols[1]] = data_copy[cols[1]].str.lower()
# drop rows with deleted, removed or censored tags.
# qa_pair[0] is the question, qa_pair[1] is the answer
mask = data_copy.apply(lambda qa_pair:
False if (qa_pair[0]=='[removed]') | (qa_pair[0]=='[deleted]') | (qa_pair[0]=='[censored]') |
(qa_pair[1]=='[removed]') | (qa_pair[1]=='[deleted]') | (qa_pair[1]=='[censored]')
else True, axis=1)
# drop the rows, notice we're using the mask to filter out those rows
# in the original dataframe 'data', because we don't need it anymore
data = data[mask]
print(f"# of rows dropped with [deleted], [removed] or [censored] tags: {mask.sum()}")
# replaces other tags found in text with a whitespace.
def sub_tag(pair):
"""
This method substitute tags (square brackets with words inside) with whitespace.
Arguments:
pair: a Pandas Series, where the first item is the question and the second is the answer.
Returns:
pair: a Pandas Series.
"""
# \[(.*?)\] is a regex to recognize square brackets [] with anything in between
p=re.compile("\[(.*?)\]")
pair[0] = re.sub(p, ' ', pair[0])
pair[1] = re.sub(p, ' ', pair[1])
return pair
# substitute tags with whitespaces.
data = data.apply(sub_tag, axis=1)
return data
print("Before addressing tags:")
print_question_answer(reddit_jokes, 1825, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 59924, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
**Note:** the following cell may take multiple seconds to finish.
```
reddit_jokes = clean_reddit_tags(reddit_jokes, ['Question', 'Answer'])
reddit_jokes
print("After addressing tags:")
# because rows with [removed], [deleted] and [censored] tags have been dropped
# we're not going to print the rows (index=1825, index=59924) since they contain
# those tags, or we're going to have a KeyError
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
**Note:** notice that the question whose index is 52906 has some leading whitespace. That's because it had the <i>[Corny]</i> tag and the function replaced it with whitespace. Also, the pair whose index is 1489 has an empty answer because the original answer was just square brackets with some whitespace in between. We're going to address all of that next!
Now, let's apply the *clean_text* function on the reddit data.<br>
**Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
reddit_jokes = reddit_jokes.applymap(clean_text)
print_question_answer(reddit_jokes, 52906, ['Question', 'Answer'])
print_question_answer(reddit_jokes, 1489, ['Question', 'Answer'])
```
Everything looks good!<br>
Now, let's apply the *preprocess_data* function on the data.<br>
**Remember:** the *preprocess_data* function applies the following preprocessing operations:
1. Drops unnecessary duplicate pairs (rows) but keeps only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has less than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
reddit_jokes = preprocess_data(reddit_jokes, ['Question', 'Answer'])
print(f"Number of question answer pairs in the reddit /r/Jokes dataset: {len(reddit_jokes)}")
distinct_chars(reddit_jokes, ['Question', 'Answer'])
```
# Third Dataset
**Question-Answer Jokes**, [here](https://www.kaggle.com/jiriroz/qa-jokes).
This dataset consists of one file, namely:
* <i>jokes.csv</i>: with <i>38,269</i> pairs.
```
qa_jokes = pd.read_csv(files_path + 'jokes.csv', usecols=['Question', 'Answer'])
qa_jokes
print(len(qa_jokes))
distinct_chars(qa_jokes, ['Question', 'Answer'])
```
## Text Preprocessing
If you look at some examples in the dataset, you notice that some examples have 'Q:' at the beginning of the question and 'A:' at the beginning of the answer, so we need to get rid of these prefixes because they don't convey useful information.<br>
You also notice some examples where both 'Q:' and 'A:' are found inside the question or the answer; I'm not going to omit these because they probably convey information and are part of the answer. However, some of them have 'Q:' in the question and 'Q: question A: answer' as the answer, where the question repeated in the answer is the same question, so we need to fix that.
```
def clean_qa_prefixes(data, cols):
"""
This function removes special prefixes ('Q:' and 'A:') found in the data.
i.e. input="Q: how's your day?" --> output=" how's your day?"
Arguments:
data: a pandas dataframe.
cols: a Python list, representing names of columns for questions and answers. First item of the list should be the name
of the questions column and the second item should be the name of the column corresponding to answers.
Returns:
a pandas dataframe.
"""
def removes_prefixes(pair):
"""
This function removes prefixes ('Q:' and 'A:') from the question and answer.
Examples:
        Input: question="Q: what is your favorite Space movie?", answer='A: Interstellar!'
        Output: question=' what is your favorite Space movie?', answer=' Interstellar!'
        Input: question="Q: how\'s your day?", answer='Q: how\'s your day? A: good, thanks.'
        Output: question=" how's your day?", answer='good, thanks.'
        Input: question='How old are you?', answer='old enough'
        Output: question='How old are you?', answer='old enough'
Arguments:
pair: a Pandas Series, where the first item is the question and the second is the answer.
Returns:
pair: a Pandas Series.
"""
# pair[0] corresponds to the question
# pair[1] corresponds to the answer
# if the question contains 'Q:' and the answer contains 'A:' but doesn't contain 'Q:'
if ('Q:' in pair[0]) and ('A:' in pair[1]) and ('Q:' not in pair[1]):
pair[0] = pair[0].replace('Q:','')
pair[1] = pair[1].replace('A:','')
# if the answer contains both 'Q:' and 'A:'
elif ('A:' in pair[1]) and ('Q:' in pair[1]):
pair[0] = pair[0].replace('Q:','')
# now we should check if the text between 'Q:' and 'A:' is the same text in the question (pair[0])
# because if they are, this means that the question is repeated in the answer and we should address that.
q_start = pair[1].find('Q:') + 2 # index of the start of the text that we want to extract
q_end = pair[1].find('A:') # index of the end of the text that we want to extract
q_txt = pair[1][q_start:q_end].strip()
# if the question is repeated in the answer
if q_txt == pair[0].strip():
# in case the question is repeated in the answer, removes it from the answer
pair[1] = pair[1][q_end+2:].strip()
return pair
return data.apply(removes_prefixes, axis=1)
print("Before removing unnecessary prefixes:")
print_question_answer(qa_jokes, 44, ['Question', 'Answer'])
print_question_answer(qa_jokes, 22, ['Question', 'Answer'])
print_question_answer(qa_jokes, 31867, ['Question', 'Answer'])
qa_jokes = clean_qa_prefixes(qa_jokes, ['Question', 'Answer'])
print("After removing unnecessary prefixes:")
print_question_answer(qa_jokes, 44, ['Question', 'Answer'])
print_question_answer(qa_jokes, 22, ['Question', 'Answer'])
print_question_answer(qa_jokes, 31867, ['Question', 'Answer'])
```
Notice that the third example both 'Q:' and 'A:' are part of the answer and conveys information.
Now, let's apply the *clean_text* function on the Question-Answer Jokes data.<br>
**Remember:** the *clean_text* function replaces some characters with others, removes unwanted characters and gets rid of extra whitespaces from the data.
```
qa_jokes = qa_jokes.applymap(clean_text)
```
Now, let's apply the *preprocess_data* function on the data.<br>
**Remember:** the *preprocess_data* function applies the following preprocessing operations:
1. Drops unnecessary duplicate pairs (rows) but keeps only one instance of all duplicates. *(For example, if the dataset contains three duplicates of the same question-answer pair, then two of them would be removed and one kept.)*
2. Drops rows with an empty question/answer. *(These may appear because of the previous step or because they happen to be empty in the original dataset.)*
3. Drops rows with more than 30 words in either the question or the answer, or if the answer has less than two characters. *(Note: this is a hyperparameter and you can try other values.)*
```
qa_jokes = preprocess_data(qa_jokes, ['Question', 'Answer'])
print(f"Number of question-answer pairs in the Question-Answer Jokes dataset: {len(qa_jokes)}")
distinct_chars(qa_jokes, ['Question', 'Answer'])
```
# Putting it together
Let's concatenate all the data we have to create our final dataset.
```
dataset = pd.concat([wordball, reddit_jokes, qa_jokes], ignore_index=True)
dataset.head()
print(f"Number of question-answer pairs in the dataset: {len(dataset)}")
```
There may be duplicate examples in the data so let's drop them:
```
data_len_before = len(dataset) # len of data before removing duplicates
print(f"# of examples before removing duplicates: {data_len_before}")
# drop duplicates
dataset = dataset.drop_duplicates(keep='first')
data_len_after = len(dataset) # len of data after removing duplicates
print(f"# of examples after removing duplicates: {data_len_after}")
print(f"# of removed duplicates: {data_len_before-data_len_after}")
```
Let's drop rows with NaN values, if there are any:
```
dataset.dropna(inplace=True)
dataset
```
Let's make sure that all our cells are of the same type:
```
dataset = dataset.applymap(str)
print(f"Number of question-answer pairs in the dataset: {len(dataset)}")
distinct_chars(dataset, ['Question', 'Answer'])
```
Finally, let's save the dataset:
```
dataset.to_csv(files_path + '/dataset.csv')
```
| github_jupyter |
# Example of Systems of Linear Equations: Simple Truss
```
# Add graph and math features
import pylab as py
import numpy as np
import numpy.linalg as nl
# Add symbolic operation capability
import sympy as sy
```
Function to draw an arrow
```
def draw_2dvec(x, y, x0=0, y0=0, color='k', name=None):
py.quiver(x0, y0, x, y, color=color, angles='xy', scale_units='xy', scale=1)
if name is not None:
if not name.startswith('$'):
vec_str = '$\\vec{%s}$' % name
else:
vec_str = name
py.text(0.5 * x + x0, 0.5 * y + y0, vec_str)
```
Function to draw an equilateral triangle
```
def triangle_support(x, y, length):
# https://matplotlib.org/gallery/lines_bars_and_markers/fill.html
height = py.cos(py.radians(30)) * length
py.fill((x, x + length*0.5, x + length*-0.5), (y, y - height, y - height))
```
## A Four Node Truss
Let's think about a truss as follows.<br>
(ref: "[Application of system of linear equations](https://www.chegg.com/homework-help/questions-and-answers/application-system-linear-equations-sure-work-matlab-problem-figure-1-shows-mechanical-str-q22676917)", Chegg Study)
```
# nodal point coordinates
xy_list = [(0, 0), (1, 1), (1, 0), (2, 0)]
# end points of each member
connectivity_list = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]
for k, i_j in enumerate(connectivity_list):
i, j = i_j
py.plot(
(xy_list[i][0], xy_list[j][0]),
(xy_list[i][1], xy_list[j][1]),
'.-'
)
# Indicate member id
py.text(0.5 * (xy_list[i][0] + xy_list[j][0]),
0.5 * (xy_list[i][1] + xy_list[j][1]), k + 1)
# Indicate node ids
for k, xy in enumerate(xy_list):
py.text(xy[0], xy[1], '(%d)' % (k+1))
draw_2dvec(0, -0.5, xy_list[2][0], xy_list[2][1], name='$F_1$')
triangle_support(xy_list[0][0], xy_list[0][1], 0.25)
triangle_support(xy_list[3][0], xy_list[3][1], 0.25)
py.axis('equal')
py.xlim((-1, 3))
py.ylim((-1, 2))
# https://stackoverflow.com/questions/9295026/matplotlib-plots-removing-axis-legends-and-white-spaces
py.axis('off')
py.savefig('triangular_truss.svg')
```
All angles are 45 degrees.
$$
\alpha = sin\left(\frac{\pi}{4}\right) = cos\left(\frac{\pi}{4}\right)
$$
Force equilibrium equations at respective nodes are as follows. $f_i$ is the tensile force of the $i$th member.
$$
\begin{align}
R_{1x} + \alpha \cdot f_{1}+f_{2} &= 0 \\
R_{1y} + \alpha \cdot f_{1} &= 0 \\
-\alpha \cdot f_{1}+\alpha \cdot f_{4} &=0 \\
-\alpha \cdot f_{1}-f_{3}-\alpha \cdot f_{4} &=0 \\
-f_{2}+f_{5}&=0 \\
f_{3}&=F_{1} \\
-\alpha \cdot f_4 - f_5 &=0 \\
\alpha \cdot f_4 + R_{4y} &=0 \\
\end{align}
$$
In matrix form:
$$
\begin{bmatrix}
1 & 0 & \alpha & 1 & 0 & 0 & 0 & 0 \\
0 & 1 & \alpha & 0 & 0 & 0 & 0 & 0 \\
0 & 0 & -\alpha & 0 & 0 & \alpha & 0 & 0 \\
0 & 0 & -\alpha & 0 & -1 & -\alpha & 0 & 0 \\
0 & 0 & 0 & -1 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 1 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & 0 & -\alpha & -1 & 0 \\
0 & 0 & 0 & 0 & 0 & \alpha & 0 & 1 \\
\end{bmatrix}
\begin{pmatrix}
R_{1x} \\ R_{1y} \\ f_1 \\ f_2 \\ f_3 \\ f_4 \\ f_5 \\ R_{4y}
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 0 \\ 0 \\ 0 \\ 0 \\ F_1 \\ 0 \\ 0
\end{pmatrix}
$$
```
alpha = py.sin(py.radians(45))
matrix = py.matrix([
[1, 0, alpha, 1, 0, 0, 0, 0],
[0, 1, alpha, 0, 0, 0, 0, 0],
[0, 0, -alpha, 0, 0, alpha, 0, 0],
[0, 0, -alpha, 0, -1, -alpha, 0, 0],
[0, 0, 0, -1, 0, 0, 1, 0],
[0, 0, 0, 0, 1, 0, 0, 0],
[0, 0, 0, 0, 0, -alpha, -1, 0],
[0, 0, 0, 0, 0, alpha, 0, 1],
])
```
Let's check the rank of the matrix.
```
nl.matrix_rank(matrix)
```
The number of unknowns and the rank of the matrix are the same; we can find a root of this system of linear equations.
Let's prepare for the right side.
```
vector = py.matrix([[0, 0, 0, 0, 0, 100, 0, 0]]).T
```
Using `solve()` from the linear algebra subpackage of `NumPy`, a Python package, let's find a solution.
```
sol = nl.solve(matrix, vector)
sol
```
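As a quick sanity check, we can substitute the solution back into the system and confirm that the equilibrium equations are satisfied (the residual should be numerically zero):
```
# Verify the solution by substituting it back into the linear system
residual = matrix * sol - vector
print(residual.T)
print('solution satisfies the equations:', np.allclose(matrix * sol, vector))
```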

## Final Bell
```
# stackoverfow.com/a/24634221
import os
os.system("printf '\a'");
```
| github_jupyter |

# Python for Data Professionals
## 02 Programming Basics
<p style="border-bottom: 1px solid lightgrey;"></p>
<dl>
<dt>Course Outline</dt>
<dt>1 - Overview and Course Setup</dt>
<dt>2 - Programming Basics <i>(This section)</i></dt>
<dd>2.1 - Getting help</dd>
<dd>2.2 Code Syntax and Structure</dd>
<dd>2.3 Variables</dd>
<dd>2.4 Operations and Functions</dd>
<dt>3 Working with Data</dt>
<dt>4 Deployment and Environments</dt>
</dl>
<p style="border-bottom: 1px solid lightgrey;"></p>
## Programming Basics Overview
From here on out, you'll focus on using Python in programming mode - you'll write code that you run from an IDE or a calling environment, not interactively from the command-line. As you work through this explanation, copy the code you see and run it to see the results. After you work through these copy-and-paste examples, you'll create your own code in the Activities that follow each section.
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"><b>2.1 - Getting help</b></p>
The very first thing you should learn in any language is how to get help. You can [find the help documents on-line](https://docs.python.org/3/index.html), or simply type
`help()`
in your code. For help on a specific topic, put the topic in the parenthesis:
`help(str)`
To see a list of topics, type
`help('topics')`
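As a purely illustrative example, the following two lines ask for help on the `str` type and then list the available help topics:
```
# Ask for help on the str type, then list the available help topics.
help(str)
help('topics')
```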
```
# Try it:
```
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/cortanalogo.png"><b>2.2 Code Syntax and Structure</b></p>
Let's cover a few basics about how Python code is written. (For a full discussion, check out the [Style Guide for Python, called PEP 8](https://www.python.org/dev/peps/pep-0008/) ) Let's use the "Zen of Python" rules from Tim Peters for this course:
<pre>
Beautiful is better than ugly.
Explicit is better than implicit.
Simple is better than complex.
Complex is better than complicated.
Flat is better than nested.
Sparse is better than dense.
Readability counts.
Special cases aren't special enough to break the rules.
Although practicality beats purity.
Errors should never pass silently.
Unless explicitly silenced.
In the face of ambiguity, refuse the temptation to guess.
There should be one-- and preferably only one --obvious way to do it.
Although that way may not be obvious at first unless you're Dutch.
Now is better than never.
Although never is often better than right now.
If the implementation is hard to explain, it's a bad idea.
If the implementation is easy to explain, it may be a good idea.
Namespaces are one honking great idea -- let's do more of those!
--Tim Peters
</pre>
In general, use standard coding practices - don't use keywords for variables, be consistent in your naming (camel-case, lower-case, etc.), comment your code clearly, understand the general syntax of your language, and follow the principles above. But the most important tip is to at least read PEP 8 and decide for yourself how well it fits into your Zen.
There is one hard-and-fast rule for Python that you *do* need to be aware of: indentation. You **must** indent your code for classes, functions (or methods), loops, conditions, and lists. You can use a tab or four spaces (spaces are the accepted way to do it) but in any case, you have to be consistent. If you use tabs, you always use tabs. If you use spaces, you have to use that throughout. It's best if you set your IDE to handle that for you, whichever way you go.
Python code files have an extension of `.py`.
Comments in Python start with the hash-tag: `#`. There are no block comments (and this makes us all sad) so each line you want to comment must have a tag in front of that line. Keep the lines short (80 characters or so) so that they don't fall off a single-line display like at the command line.
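To make the indentation and comment rules concrete, here is a small, correctly indented illustrative example showing a function, a loop, and a condition:
```
# A correctly indented example: function, loop, and condition (4 spaces per level).
def sum_positive(numbers):
    total = 0
    for n in numbers:
        if n > 0:
            total += n  # only count the positive values
    return total

print(sum_positive([1, -2, 3]))  # prints 4
```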
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png"><b>2.3 Variables</b></p>
Variables stand in for replaceable values. Python is dynamically typed, meaning you can just declare a variable name and set it to a value at the same time, and Python will infer the data type from the value. You use an `=` sign to assign values, and `==` to compare things.
Quotes \" or ticks \' are fine, just be consistent.
`# There are some keywords to be aware of, but x and y are always good choices.`
`x = "Buck" # I'm a string.`
`type(x)`
`y = 10 # I'm an integer.`
`type(y)`
To change the type of a value, just re-enter something else:
`x = "Buck" # I'm a string.`
`type(x)`
`x = 10 # Now I'm an integer.`
`type(x)`
Or cast it explicitly by converting the value:
`x = "10"`
`type(x)`
`print(int(x))`
To concatenate string values, use the `+` sign:
`x = "Buck"`
`y = " Woody"`
`print(x + y)`
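Putting the snippets above together, here is a small runnable recap (the variable names are just the ones used above):
```
# Assign values and check their types.
x = "Buck"      # a string
print(type(x))
x = 10          # now an integer
print(type(x))

# Cast a string to an integer explicitly.
x = "10"
print(int(x))

# Concatenate two strings with +.
x = "Buck"
y = " Woody"
print(x + y)
```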
```
# Try it:
```
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/checkbox.png"><b>2.4 Operations and Functions</b></p>
Python has the following operators:

- Arithmetic Operators
- Comparison (Relational) Operators
- Assignment Operators
- Logical Operators
- Bitwise Operators
- Membership Operators
- Identity Operators
You have the standard operators and functions from most every language. Here are some of the tokens:
<pre>
!= *= << ^
" + <<= ^=
""" += <= `
% , <> __
%= - ==
& -= > b"
&= . >= b'
' ... >> j
''' / >>= r"
( // @ r'
) //= J |'
* /= [ |=
** : \ ~
**= < ]
</pre>
Wait...that's it? That's all you're going to tell me? *(Hint: use what you've learned):*
`help('symbols')`
Walk through each of these operators carefully - you'll use them when you work with data in the next module.
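To make those categories concrete, here is a short illustrative example (the values are arbitrary) that touches several of the operator groups:
```
a, b = 7, 3
print(a + b, a - b, a * b, a / b, a // b, a % b, a ** b)   # arithmetic operators
print(a > b, a == b, a != b)                               # comparison operators
a += 1                                                     # assignment operator
print(a)
print(a > 0 and b > 0, not a < b)                          # logical operators
print(a & b, a | b, a << 1)                                # bitwise operators
print(3 in [1, 2, 3], 5 not in [1, 2, 3])                  # membership operators
print(a is not b)                                          # identity operator
```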
```
# Try it:
```
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/aml-logo.png"><b>Activity - Programming basics</b></p>
Open the **02_ProgrammingBasics.py** file and run the code you see there. The exercises will be marked out using comments:
`# <TODO> - Section Number`
```
# 02_ProgrammingBasics.py
# Purpose: General Programming exercises for Python
# Author: Buck Woody
# Credits and Sources: Inline
# Last Updated: 27 June 2018
# 2.1 Getting Help
help()
help(str)
# <TODO> - Write code to find help on help
# 2.2 Code Syntax and Structure
# <TODO> - Python uses spaces to indicate code blocks. Fix the code below:
x=10
y=5
if x > y:
print(str(x) + " is greater than " + str(y))
# <TODO> - Arguments on first line are forbidden when not using vertical alignment. Fix this code:
foo = long_function_name(var_one, var_two,
var_three, var_four)
# <TODO> operators sit far away from their operands. Fix this code:
income = (gross_wages +
taxable_interest +
(dividends - qualified_dividends) -
ira_deduction -
student_loan_interest)
# <TODO> - The import statement should use separate lines for each effort. You can fix the code below
# using separate lines or by using the "from" statement:
import sys, os
# <TODO> - The following code has extra spaces in the wrong places. Fix this code:
i=i+1
submitted +=1
x = x * 2 - 1
hypot2 = x * x + y * y
c = (a + b) * (a - b)
# 2.3 Variables
# <TODO> - Add a line below x=3 that changes the variable x from int to a string
x=3
type(x)
# <TODO> - Write code that prints the string "This class is awesome" using variables:
x="is awesome"
y="This Class"
# 2.4 Operations and Functions
# <TODO> - Use some basic operators to write the following code:
# Assign two variables
# Add them
# Subtract 20 from each, add those values together, save that to a new variable
# Create a new string variable with the text "The result of my operations are: "
# Print out a single string on the screen with the result of the variables
# showing that result.
# EOF: 02_ProgrammingBasics.py
```
<p><img style="float: left; margin: 0px 15px 15px 0px;" src="../graphics/thinking.jpg"><b>For Further Study</b></p>
- The PEP - https://www.python.org/dev/peps/pep-0008/
- Introduction to the Python Coding Style - http://stackabuse.com/introduction-to-the-python-coding-style/
- The Microsoft Tutorial and samples for Python - https://code.visualstudio.com/docs/languages/python
- Coding requirements and standards - PEP - https://www.python.org/dev/peps/pep-0008/
- Another free online self-paced course - https://www.w3schools.com/python/default.asp
Next, Continue to *03 Working with Data*
| github_jupyter |
# Manual Jupyter Notebook:
https://athena.brynmawr.edu/jupyter/hub/dblank/public/Jupyter%20Notebook%20Users%20Manual.ipynb
# Jupyter Notebook Users Manual
This page describes the functionality of the [Jupyter](http://jupyter.org) electronic document system. Jupyter documents are called "notebooks" and can be seen as many things at once. For example, notebooks allow:
* creation in a **standard web browser**
* direct **sharing**
* using **text with styles** (such as italics and titles) to be explicitly marked using a [wikitext language](http://en.wikipedia.org/wiki/Wiki_markup)
* easy creation and display of beautiful **equations**
* creation and execution of interactive embedded **computer programs**
* easy creation and display of **interactive visualizations**
Jupyter notebooks (previously called "IPython notebooks") are thus interesting and useful to different groups of people:
* readers who want to view and execute computer programs
* authors who want to create executable documents or documents with visualizations
<hr size="5"/>
### Table of Contents
* [1. Getting to Know your Jupyter Notebook's Toolbar](#1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar)
* [2. Different Kinds of Cells](#2.-Different-Kinds-of-Cells)
* [2.1 Code Cells](#2.1-Code-Cells)
* [2.1.1 Code Cell Layout](#2.1.1-Code-Cell-Layout)
* [2.1.1.1 Row Configuration (Default Setting)](#2.1.1.1-Row-Configuration-%28Default-Setting%29)
* [2.1.1.2 Cell Tabbing](#2.1.1.2-Cell-Tabbing)
* [2.1.1.3 Column Configuration](#2.1.1.3-Column-Configuration)
* [2.2 Markdown Cells](#2.2-Markdown-Cells)
* [2.3 Raw Cells](#2.3-Raw-Cells)
* [2.4 Header Cells](#2.4-Header-Cells)
* [2.4.1 Linking](#2.4.1-Linking)
* [2.4.2 Automatic Section Numbering and Table of Contents Support](#2.4.2-Automatic-Section-Numbering-and-Table-of-Contents-Support)
* [2.4.2.1 Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
* [2.4.2.2 Table of Contents Support](#2.4.2.2-Table-of-Contents-Support)
* [2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support](#2.4.2.3-Using-Both-Automatic-Section-Numbering-and-Table-of-Contents-Support)
* [3. Keyboard Shortcuts](#3.-Keyboard-Shortcuts)
* [4. Using Markdown Cells for Writing](#4.-Using-Markdown-Cells-for-Writing)
* [4.1 Block Elements](#4.1-Block-Elements)
* [4.1.1 Paragraph Breaks](#4.1.1-Paragraph-Breaks)
* [4.1.2 Line Breaks](#4.1.2-Line-Breaks)
* [4.1.2.1 Hard-Wrapping and Soft-Wrapping](#4.1.2.1-Hard-Wrapping-and-Soft-Wrapping)
* [4.1.2.2 Soft-Wrapping](#4.1.2.2-Soft-Wrapping)
* [4.1.2.3 Hard-Wrapping](#4.1.2.3-Hard-Wrapping)
* [4.1.3 Headers](#4.1.3-Headers)
* [4.1.4 Block Quotes](#4.1.4-Block-Quotes)
* [4.1.4.1 Standard Block Quoting](#4.1.4.1-Standard-Block-Quoting)
* [4.1.4.2 Nested Block Quoting](#4.1.4.2-Nested-Block-Quoting)
* [4.1.5 Lists](#4.1.5-Lists)
* [4.1.5.1 Ordered Lists](#4.1.5.1-Ordered-Lists)
* [4.1.5.2 Bulleted Lists](#4.1.5.2-Bulleted-Lists)
* [4.1.6 Section Breaks](#4.1.6-Section-Breaks)
* [4.2 Backslash Escape](#4.2-Backslash-Escape)
* [4.3 Hyperlinks](#4.3-Hyperlinks)
* [4.3.1 Automatic Links](#4.3.1-Automatic-Links)
* [4.3.2 Standard Links](#4.3.2-Standard-Links)
* [4.3.3 Standard Links With Mouse-Over Titles](#4.3.3-Standard-Links-With-Mouse-Over-Titles)
* [4.3.4 Reference Links](#4.3.4-Reference-Links)
* [4.3.5 Notebook-Internal Links](#4.3.5-Notebook-Internal-Links)
* [4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles](#4.3.5.1-Standard-Notebook-Internal-Links-Without-Mouse-Over-Titles)
* [4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles](#4.3.5.2-Standard-Notebook-Internal-Links-With-Mouse-Over-Titles)
* [4.3.5.3 Reference-Style Notebook-Internal Links](#4.3.5.3-Reference-Style-Notebook-Internal-Links)
* [4.4 Tables](#4.4-Tables)
* [4.4.1 Cell Justification](#4.4.1-Cell-Justification)
* [4.5 Style and Emphasis](#4.5-Style-and-Emphasis)
* [4.6 Other Characters](#4.6-Other-Characters)
* [4.7 Including Code Examples](#4.7-Including-Code-Examples)
* [4.8 Images](#4.8-Images)
* [4.8.1 Images from the Internet](#4.8.1-Images-from-the-Internet)
* [4.8.1.1 Reference-Style Images from the Internet](#4.8.1.1-Reference-Style-Images-from-the-Internet)
* [4.9 LaTeX Math](#4.9-LaTeX-Math)
* [5. Bibliographic Support](#5.-Bibliographic-Support)
* [5.1 Creating a Bibtex Database](#5.1-Creating-a-Bibtex-Database)
* [5.1.1 External Bibliographic Databases](#5.1.1-External-Bibliographic-Databases)
* [5.1.2 Internal Bibliographic Databases](#5.1.2-Internal-Bibliographic-Databases)
* [5.1.2.1 Hiding Your Internal Database](#5.1.2.1-Hiding-Your-Internal-Database)
* [5.1.3 Formatting Bibtex Entries](#5.1.3-Formatting-Bibtex-Entries)
* [5.2 Cite Commands and Citation IDs](#5.2-Cite-Commands-and-Citation-IDs)
* [6. Turning Your Jupyter Notebook into a Slideshow](#6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow)
# 1. Getting to Know your Jupyter Notebook's Toolbar
At the top of your Jupyter Notebook window there is a toolbar. It looks like this:

Below is a table which helpfully pairs a picture of each of the items in your toolbar with a corresponding explanation of its function.
Button|Function
-|-
|This is your save button. You can click this button to save your notebook at any time, though keep in mind that Jupyter Notebooks automatically save your progress very frequently.
|This is the new cell button. You can click this button any time you want a new cell in your Jupyter Notebook.
|This is the cut cell button. If you click this button, the cell you currently have selected will be deleted from your Notebook.
|This is the copy cell button. If you click this button, the currently selected cell will be duplicated and stored in your clipboard.
|This is the paste button. It allows you to paste the duplicated cell from your clipboard into your notebook.
|These buttons allow you to move the location of a selected cell within a Notebook. Simply select the cell you wish to move and click either the up or down button until the cell is in the location you want it to be.
|This button will "run" your cell, meaning that it will interpret your input and render the output in a way that depends on [what kind of cell] [cell kind] you're using.
|This is the stop button. Clicking this button will stop your cell from continuing to run. This tool can be useful if you are trying to execute more complicated code, which can sometimes take a while, and you want to edit the cell before waiting for it to finish rendering.
|This is the restart kernel button. See your kernel documentation for more information.
|This is a drop down menu which allows you to tell your Notebook how you want it to interpret any given cell. You can read more about the [different kinds of cells] [cell kind] in the following section.
|Individual cells can have their own toolbars. This is a drop down menu from which you can select the type of toolbar that you'd like to use with the cells in your Notebook. Some of the options in the cell toolbar menu will only work in [certain kinds of cells][cell kind]. "None," which is how you specify that you do not want any cell toolbars, is the default setting. If you select "Edit Metadata," a toolbar that allows you to edit data about [Code Cells][code cells] directly will appear in the corner of all the Code cells in your notebook. If you select "Raw Cell Format," a tool bar that gives you several formatting options will appear in the corner of all your [Raw Cells][raw cells]. If you want to view and present your notebook as a slideshow, you can select "Slideshow" and a toolbar that enables you to organize your cells in to slides, sub-slides, and slide fragments will appear in the corner of every cell. Go to [this section][slideshow] for more information on how to create a slideshow out of your Jupyter Notebook.
|These buttons allow you to move the location of an entire section within a Notebook. Simply select the Header Cell for the section or subsection you wish to move and click either the up or down button until the section is in the location you want it to be. If you have used [Automatic Section Numbering][section numbering] or [Table of Contents Support][table of contents], remember to rerun those tools so that your section numbers or table of contents reflects your Notebook's new organization.
|Clicking this button will automatically number your Notebook's sections. For more information, check out the Reference Guide's [section on Automatic Section Numbering][section numbering].
|Clicking this button will generate a table of contents using the titles you've given your Notebook's sections. For more information, check out the Reference Guide's [section on Table of Contents Support][table of contents].
|Clicking this button will search your document for [cite commands][] and automatically generate intext citations as well as a references cell at the end of your Notebook. For more information, you can read the Reference Guide's [section on Bibliographic Support][bib support].
|Clicking this button will toggle [cell tabbing][], which you can learn more about in the Reference Guides' [section on the layout options for Code Cells][cell layout].
|Clicking this button will toggle the [collumn configuration][] for Code Cells, which you can learn more about in the Reference Guides' [section on the layout options for Code Cells][cell layout].
|Clicking this button will toggle spell checking. Spell checking only works in unrendered [Markdown Cells][] and [Header Cells][]. When spell checking is on all incorrectly spelled words will be underlined with a red squiggle. Keep in mind that the dictionary cannot tell what are [Markdown][md writing] commands and what aren't, so it will occasionally underline a correctly spelled word surrounded by asterisks, brackets, or other symbols that have specific meaning in Markdown.
[cell kind]: #2.-Different-Kinds-of-Cells "Different Kinds of Cells"
[code cells]: #2.1-Code-Cells "Code Cells"
[raw cells]: #2.3-Raw-Cells "Raw Cells"
[slideshow]: #6.-Turning-Your-Jupyter-Notebook-into-a-Slideshow "Turning Your Jupyter Notebook Into a Slideshow"
[section numbering]: #2.4.2.1-Automatic-Section-Numbering
[table of contents]: #2.4.2.2-Table-of-Contents-Support
[cell tabbing]: #2.1.1.2-Cell-Tabbing
[cell layout]: #2.1.1-Code-Cell-Layout
[bib support]: #5.-Bibliographic-Support
[cite commands]: #5.2-Cite-Commands-and-Citation-IDs
[md writing]: #4.-Using-Markdown-Cells-for-Writing
[collumn configuration]: #2.1.1.3-Column-Configuration
[Markdown Cells]: #2.2-Markdown-Cells
[Header Cells]: #2.4-Header-Cells
# 2. Different Kinds of Cells
There are essentially four kinds of cells in your Jupyter notebook: Code Cells, Markdown Cells, Raw Cells, and Header Cells, though there are six levels of Header Cells.
## 2.1 Code Cells
By default, Jupyter Notebooks' Code Cells will execute Python. Jupyter Notebooks generally also support JavaScript, HTML, and Bash commands. For a more comprehensive list, see your Kernel's documentation.
### 2.1.1 Code Cell Layout
Code cells have both an input and an output component. You can view these components in three different ways.
#### 2.1.1.1 Row Configuration (Default Setting)
Unless you specify otherwise, your Code Cells will always be configured this way, with both the input and output components appearing as horizontal rows and with the input above the output. Below is an example of a Code Cell in this default setting:
```
2 + 3
```
#### 2.1.1.2 Cell Tabbing
Cell tabbing allows you to look at the input and output components of a cell separately. It also allows you to hide either component behind the other, which can be useful when creating visualizations of data. Below is an example of a tabbed Code Cell:
```
2+3
```
#### 2.1.1.3 Column Configuration
Like the row configuration, the column layout option allows you to look at both the input and the output components at once. In the column layout, however, the two components appear beside one another, with the input on the left and the output on the right. Below is an example of a Code Cell in the column configuration:
```
2+3
```
## 2.2 Markdown Cells
In Jupyter Notebooks, Markdown Cells are the easiest way to write and format text. For a more thorough explanation of how to write in Markdown cells, refer to [this section of the guide][writing markdown].
[writing markdown]: #4.-Using-Markdown-Cells-for-Writing "Using Markdown Cells for Writing"
## 2.3 Raw Cells
Raw Cells, unlike all other Jupyter Notebook cells, have no input-output distinction. This means that Raw Cells cannot be rendered into anything other than what they already are. If you click the run button in your tool bar with a Raw Cell selected, the cell will remain exactly as is and your Jupyter Notebook will automatically select the cell directly below it. Raw cells have no style options, just the same monospace font that you use in all other unrendered Notebook cells. You cannot bold, italicize, or enlarge any text or characters in a Raw Cell.
Because they have no rendered form, Raw Cells are mainly used to create examples. If you save and close your Notebook and then reopen it, all of the Code, Markdown, and Header Cells will automatically render in whatever form you left them when you first closed the document. This means that if you wanted to preserve the unrendered version of a cell, say if you were writing a computer science paper and needed code examples, or if you were writing [documentation on how to use Markdown] [writing markdown] and needed to demonstrate what input would yield which output, then you might want to use a Raw Cell to make sure your examples stayed in their most useful form.
[writing markdown]: #4.-Using-Markdown-Cells-for-Writing "Using Markdown Cells for Writing"
## 2.4 Header Cells
While it is possible to organize your document using [Markdown headers][], Header Cells provide a more deeply structural organization for your Notebook and thus there are several advantages to using them.
[Markdown headers]: #4.1.3-Headers "Headers"
### 2.4.1 Linking
Header Cells have specific locations inside your Notebook. This means you can use them to [create Notebook-internal links](#4.3.5-Notebook-Internal-Links "Notebook-Internal Links").
### 2.4.2 Automatic Section Numbering and Table of Contents Support
Your Jupyter Notebook has two helpful tools that utilize the structural organization that Header Cells give your document: automatic section numbering and table of contents generation.
#### 2.4.2.1 Automatic Section Numbering
Suppose you are writing a paper and, as is prone to happening when you have a lot of complicated thoughts buzzing around your brain, you've reorganized your ideas several times. Automatic section numbering will go through your Notebook and number your sections and subsections as designated by your Header Cells. This means that if you've moved one or more big sections around several times, you won't have to go through your paper and renumber it, as well as all its subsections, yourself.
**Notes:** Automatic Section Numbering is a tri-toggling tool, so when you click the Number Sections button one of three actions will occur: Automatic Section Numbering will number your sections, correct inconsistent numbering, or unnumber your sections (if all of your sections are already consistently and correctly numbered).
So, even if you have previously numbered your sections, Automatic Section Numbering will go through your document, delete the current section numbers, and replace them with the correct numbers in a linear sequence. This means that if your third section was once your second, Automatic Section Numbering will delete the "2" in front of your section's name and replace it with a "3."
While this function saves you a lot of time, it creates one limitation. Maybe you're writing a paper about children's books and one of the books you're discussing is called **`2 Cats`**. You've unsurprisingly titled the section where you summarize and analyze this book **`2 Cats`**. Automatic Section Numbering will assume the number 2 is section information and delete it, leaving just the title **`Cats`** behind. If you bold, italicize, or place the title of the section inside quotes, however, the entire section title will be preserved without any trouble. It should also be noted that even if you must title a section with a number occurring before any letters and you do not want to bold it, italicize it, or place it inside quotes, then you can always run Automatic Section Numbering and then go to that section and retype its name by hand.
Because Automatic Section Numbering uses your header cells, its performance relies somewhat on the clarity of your organization. If you have two sections that begin with Header 1 Cells in your paper, and each of the sections has two subsections that begin with Header 2 Cells, Automatic Section Numbering will number them 1, 1.1, 1.2, 2, 2.1, and 2.2 respectively. If, however, you have used a Header 3 Cell to indicate the beginning of what would have been section 2.1, Automatic Section Numbering will number that section 2.0.1 and an error message will appear telling you that "You placed a Header 3 cell under a Header 2 Cell in section 2". Similarly, if you begin your paper with any Header Cell smaller than a Header 1, say a Header 3 Cell, then Automatic Section Numbering will number your first section 0.0.3 and an error message will appear telling you that "Notebook begins with a Header 3 Cell."
#### 2.4.2.2 Table of Contents Support
The Table of Contents tool will automatically generate a table of contents for your paper by taking all your Header Cell titles and ordering them in a list, which it will place in a new cell at the very beginning of your Notebook. Because your Notebook does not utilize formal page breaks or numbers, each listed section will be hyperlinked to the actual section within your document.
**Notes:** Because Table of Contents Support uses your header cells, its performance relies somewhat on the clarity of your organization. If you have two sections that begin with Header 1 Cells in your paper, and each of the sections has two subsections that begin with Header 2 Cells, Table of Contents will order them in the following way:
* 1.
* 1.1
* 1.2
* 2.
* 2.1
* 2.2
If, however, you have used a Header 3 Cell to indicate the beginning of what would have been section 2.1, Table of Contents Support will insert a dummy line so that your table of contents looks like this:
* 1.
* 1.1
* 1.2
* 2.
*
* 2.0.1
* 2.2
#### 2.4.2.3 Using Both Automatic Section Numbering and Table of Contents Support
Automatic Section Numbering will always update every aspect of your notebook that is dependent on the title of one or more of your sections. This means that it will automatically correct an existing table of contents and all of your Notebook-internal links to reflect the new numbered section titles.
# 3. Keyboard Shortcuts
Jupyter Notebooks support many helpful Keyboard shortcuts, including ones for most of the buttons in [your toolbar][]. To view these shortcuts, you can click the help menu and then select Keyboard Shortcuts, as pictured below.
[your toolbar]: #1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar "Getting to know Your Jupyter Notebook's Toolbar"

# 4. Using Markdown Cells for Writing
**Why aren't there font and font size selection drop down menus, buttons I can press to bold and italicize my text, or other advanced style options in my Notebook?**
When you use Microsoft Word, Google Docs, Apple Pages, Open Office, or any other word processing software, you generally use your mouse to select various style options, like line spacing, font size, font color, paragraph format etc. This kind of system is often described as a WYSIWYG (What You See Is What You Get) interface. This means that the input (what you tell the computer) exactly matches the output (what the computer gives back to you). If you type the letter **`G`**, highlight it, select the color green and up the font size to 64 pt, your word processor will show you a fairly large green colored letter **`G`**. And if you print out that document you will print out a fairly large green colored letter **`G`**.
This Notebook, however, does not use a WYSIWYG interface. Instead it uses something called a "[markup language][]". When you use a markup language, your input does not necessarily exactly equal your output.
[markup language]: http://en.wikipedia.org/wiki/Markup_language "Wikipedia Article on Markup"
For example, if I type "#Header 1" at the beginning of a cell, but then press Shift-Enter (or click the play button at the top of the window), this notebook will turn my input into a somewhat different output in the following way:
<pre>
#Header 1
</pre>
#Header 1
And if I type "##Header 2" (at the beginning of a cell), this notebook will turn that input into another output:
<pre>
##Header 2
</pre>
##Header 2
In these examples, the hashtags are markers which tell the Notebook how to typeset the text. There are many markup languages, but one family, or perhaps guiding philosophy, of markup languages is called "Markdown," named somewhat jokingly for its simplicity. Your Notebook uses "marked," a Markdown library of typeset and other formatting instructions, like the hashtags in the examples above.
Markdown is a markup language that generates HTML, which the cell can interpret and render. This means that Markdown Cells can also render plain HTML code. If you're interested in learning HTML, check out this [helpful online tutorial][html tutorial].
[html tutorial]: http://www.w3schools.com/html/ "w3schools.com HTML Tutorial"
**Why Use Markdown (and not a WYSIWYG)?**
Why is Markdown better? Well, it's worth saying that maybe it isn't. Mainly, it's not actually a question of better or worse, but of what's in front of you and of who you are. A definitive answer depends on the user and on that user's goals and experience. These Notebooks don't use Markdown because it's definitely better, but rather because it's different and thus encourages users to think about their work differently.
It is very important for computer science students to learn how to conceptualize input and output as dependent, but also distinct. One good reason to use Markdown is that it encourages this kind of thinking. Relatedly, it might also promote focus on substance over surface aesthetic. Markdown is somewhat limited in its style options, which means that there are inherently fewer non-subject-specific concerns to agonize over while working. It is the conceit of this philosophy that you would, by using Markdown and this Notebook, begin to think of the specific stylistic rendering of your cells as distinct from what you type into those same cells, and thus also think of the content of your writing as necessarily separate from its formatting and appearance.
## 4.1 Block Elements
### 4.1.1 Paragraph Breaks
Paragraphs consist of one or more consecutive lines of text and they are separated by one or more blank lines. If a line contains only spaces, it is a blank line.
### 4.1.2 Line Breaks
#### 4.1.2.1 Hard-Wrapping and Soft-Wrapping
If you're used to word processing software, you've been writing with automatically hard-wrapped lines and paragraphs. In a hard-wrapped paragraph the line breaks are not dependent on the size of the viewing window. If you click and drag your mouse to expand a word processing document, for example, the shape of the paragraphs and the length of the lines will not change. In other words, the length of a hard-wrapped line is determined either by the number of words in the line (in the case of word processing software where this number is predetermined and the program wraps for the user automatically), or individual intention (when a user manually presses an Enter or Return key to control exactly how long a line is).
Soft-wrapped paragraphs and lines, however, *do* depend on the size of their viewing window. If you increase the size of a window where soft-wrapped paragraphs are displayed, they too will expand into longer lines, becoming shorter and wider to fill the increased window space horizontally. Unsurprisingly, then, if you *narrow* a window, soft-wrapped lines will shrink and the paragraphs will become longer vertically.
Markdown, unlike most word processing software, does not automatically hard-wrap. If you want your paragraphs to have a particular or deliberate shape and size, you must insert your own break by ending the line with two spaces and then typing Return.
#### 4.1.2.2 Soft-Wrapping
<tt>
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
</tt>
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
#### 4.1.2.3 Hard-Wrapping
<tt>
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah
</tt>
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah
### 4.1.3 Headers
<pre>
#Header 1
</pre>
#Header 1
<pre>
##Header 2
</pre>
##Header 2
<pre>
###Header 3
</pre>
###Header 3
<pre>
####Header 4
</pre>
####Header 4
<pre>
#####Header 5
</pre>
#####Header 5
<pre>
######Header 6
</pre>
######Header 6
### 4.1.4 Block Quotes
#### 4.1.4.1 Standard Block Quoting
<tt>
>blah blah block quote blah blah block quote blah blah block
quote blah blah block quote blah blah block
quote blah blah block quote blah blah block quote blah blah block quote
</tt>
>blah blah block quote blah blah block quote blah blah block
quote blah blah block quote blah blah block
quote blah blah block quote blah blah block quote blah blah block quote
**Note**: Block quotes work best if you intentionally hard-wrap the lines.
#### 4.1.4.2 Nested Block Quoting
<pre>
>blah blah block quote blah blah block quote blah blah block
block quote blah blah block block quote blah blah block
>>quote blah blah block quote blah blah
block block quote blah blah block
>>>quote blah blah block quote blah blah block quote blah blah block quote
</pre>
>blah blah block quote blah blah block quote blah blah block
block quote blah blah block block quote blah blah block
>>quote blah blah block quote blah blah
block block quote blah blah block
>>>quote blah blah block quote blah blah block quote blah blah block quote
### 4.1.5 Lists
#### 4.1.5.1 Ordered Lists
In Markdown, you can list items using numbers, a **`+`**, a **` - `**, or a **`*`**. However, if the first item in a list or sublist is numbered, Markdown will interpret the entire list as ordered and will automatically number the items linearly, no matter what character you use to denote any given separate item.
<pre>
####Groceries:
0. Fruit:
6. Pears
0. Peaches
3. Plums
4. Apples
2. Granny Smith
7. Gala
* Oranges
- Berries
8. Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
9. Whole Wheat
0. With oats on crust
0. Without oats on crust
0. Rye
0. White
0. Dairy:
0. Milk
0. Whole
0. Skim
0. Cheese
0. Wisconsin Cheddar
0. Pepper Jack
</pre>
####Groceries:
0. Fruit:
6. Pears
0. Peaches
3. Plums
4. Apples
2. Granny Smith
7. Gala
* Oranges
- Berries
8. Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
9. Whole Wheat
0. With oats on crust
0. Without oats on crust
0. Rye
0. White
0. Dairy:
0. Milk
0. Whole
0. Skim
0. Cheese
0. Wisconsin Cheddar
0. Pepper Jack
#### 4.1.5.2 Bulleted Lists
If you begin your list or sublist with a **`+`**, a **` - `**, or a **`*`**, then Markdown will interpret the whole list as unordered and will use bullets regardless of the characters you type before any individual list item.
<pre>
####Groceries:
* Fruit:
* Pears
0. Peaches
3. Plums
4. Apples
- Granny Smith
7. Gala
* Oranges
- Berries
- Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
* Whole Wheat
* With oats on crust
0. Without oats on crust
+ Rye
0. White
0. Dairy:
* Milk
+ Whole
0. Skim
- Cheese
- Wisconsin Cheddar
0. Pepper Jack
</pre>
####Groceries:
* Fruit:
* Pears
0. Peaches
3. Plums
4. Apples
- Granny Smith
7. Gala
* Oranges
- Berries
- Strawberries
+ Blueberries
* Raspberries
- Bananas
9. Bread:
* Whole Wheat
* With oats on crust
0. Without oats on crust
+ Rye
0. White
0. Dairy:
* Milk
+ Whole
0. Skim
- Cheese
- Wisconsin Cheddar
0. Pepper Jack
### 4.1.6 Section Breaks
<pre>
___
</pre>
___
<pre>
***
</pre>
***
<pre>------</pre>
------
<pre>
* * *
</pre>
* * *
<pre>
_ _ _
</pre>
_ _ _
<pre>
- - -
</pre>
- - -
## 4.2 Backslash Escape
What happens if you want to include a literal character, like a **`#`**, that usually has a specific function in Markdown? Backslash Escape is a function that prevents Markdown from interpreting a character as an instruction, rather than as the character itself. It works like this:
<pre>
\# Wow, this isn't a header.
# This is definitely a header.
</pre>
\# Wow, this isn't a header.
# This is definitely a header.
Markdown allows you to use a backslash to escape from the functions of the following characters:
* \ backslash
* ` backtick
* \* asterisk
* _ underscore
* {} curly braces
* [] square brackets
* () parentheses
* \# hashtag
* \+ plus sign
* \- minus sign (hyphen)
* . dot
* ! exclamation mark
## 4.3 Hyperlinks
### 4.3.1 Automatic Links
<pre>
http://en.wikipedia.org
</pre>
http://en.wikipedia.org
### 4.3.2 Standard Links
<pre>
[click this link](http://en.wikipedia.org)
</pre>
[click this link](http://en.wikipedia.org)
### 4.3.3 Standard Links With Mouse-Over Titles
<pre>
[click this link](http://en.wikipedia.org "Wikipedia")
</pre>
[click this link](http://en.wikipedia.org "Wikipedia")
### 4.3.4 Reference Links
Suppose you are writing a document in which you intend to include many links. The format above is a little arduous and if you have to do it repeatedly *while* you're trying to focus on the content of what you're writing, it's going to be a really big pain.
Fortunately, there is an alternative way to insert hyperlinks into your text, one where you indicate that there is a link, name that link, and then use the name to provide the actual URL later on when you're less in the writing zone. This method can be thought of as a "reference-style" link because it is similar to using in-text citations and then defining those citations later in a more detailed reference section or bibliography.
<pre>
This is [a reference] [identification tag for link]
[identification tag for link]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference] [identification tag for link]
[identification tag for link]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note:** The "identification tag for link" can be anything. For example:
<pre>
This is [a reference] [lfskdhflhslgfh333676]
[lfskdhflhslgfh333676]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference] [lfskdhflhslgfh333676]
[lfskdhflhslgfh333676]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
This means you can give your link an intuitive, easy to remember, and relevant ID:
<pre>
This is [a reference][Chile]
[chile]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference][Chile]
[chile]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note**: Link IDs are not case-sensitive.
If you don't want to give your link an ID, you don't have to. As a short cut, Markdown will understand if you just use the words in the first set of brackets to define the link later on. This works in the following way:
<pre>
This is [a reference][]
[a reference]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</pre>
This is [a reference][]
[a reference]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
Another really helpful feature of a reference-style link is that you can define the link anywhere in the cell (it must be in the same cell, though). For example:
<tt>
This is [a reference] [ref] blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah <br/><br/>
[ref]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
</tt>
This is [a reference] [ref] blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah blah
[ref]: http://en.wikipedia.org/wiki/Chile "Wikipedia Article About Chile"
**Note:** Providing a mouse-over title for any link, regardless of whether it is a standard or reference-style type, is optional. With reference-style links, you can include the mouse-over title by placing it in quotes, single quotes, or parentheses. For standard links, you can only define a mouse-over title in quotes.
### 4.3.5 Notebook-Internal Links
When you create a Header you also create a discrete location within your Notebook. This means that, just like you can link to a specific location on the web, you can also link to a Header Cell inside your Notebook. Internal links have very similar Markdown formatting to regular links. The only difference is that the name of the link, which is the URL in the case of external links, is just a hashtag plus the name of the Header Cell you are linking to (case-sensitive) with dashes in between every word. If you hover your mouse over a Header Cell, a blue Greek pi letter will appear next to your title. If you click on it, the URL at the top of your window will change and the internal link to that section will appear last in the address. You can copy and paste it in order to make an internal link inside a Markdown Cell.
#### 4.3.5.1 Standard Notebook-Internal Links Without Mouse-Over Titles
<pre>
[Here's a link to the section of Automatic Section Numbering](#Automatic-Section-Numbering)
</pre>
[Here's a link to the section of Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
#### 4.3.5.2 Standard Notebook-Internal Links With Mouse-Over Titles
<pre>
[Here's a link to the section on lists](#Lists "Lists")
</pre>
[Here's a link to the section of Automatic Section Numbering](#2.4.2.1-Automatic-Section-Numbering)
#### 4.3.5.3 Reference-Style Notebook-Internal Links
<pre>
[Here's a link to the section on Table of Contents Support][TOC]
[TOC]: #Table-of-Contents-Support
</pre>
[Here's a link to the section on Table of Contents Support][TOC]
[TOC]: #2.4.2.2-Table-of-Contents-Support
## 4.4 Tables
In Markdown, you can make a table by using vertical bars and dashes to define the cell and header borders:
<pre>
|Header|Header|Header|Header|
|------|------|------|------|
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
</pre>
|Header|Header|Header|Header|
|------|------|------|------|
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
|Cell |Cell |Cell | Cell |
Making a table this way might be especially useful if you want your document to be legible both rendered and unrendered. However, you don't *need* to include all of those dashes, vertical bars, and spaces for Markdown to understand that you're making a table. Here's the bare minimum you would need to create the table above:
<pre>
Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
</pre>
Header|Header|Header|Header
-|-|-|-
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
Cell|Cell|Cell|Cell
It's important to note that the second line of dashes and vertical bars is essential. If you have just the line of headers and the second line of dashes and vertical bars, that's enough for Markdown to make a table.
Another important formatting issue has to do with the vertical bars that define the left and right edges of the table. If you include all the vertical bars on the far left and right of the table, like in the first example above, Markdown will ignore them completely. *But*, if you leave out some and include others, Markdown will interpret any extra vertical bar as an additional cell on the side that the bar appears in the unrendered version of the text. This also means that if you include the far left or right vertical bar in the second line of bars and dashes, you must include all of the otherwise optional vertical bars (like in the first example above).
### 4.4.1 Cell Justification
If not otherwise specified the text in each header and cell of a table will justify to the left. If, however, you wish to specify either right justification or centering, you may do so like this:
<tt>
**Centered, Right-Justified, and Regular Cells and Headers**:
centered header | regular header | right-justified header | centered header | regular header
:-:|-|-:|:-:|-
centered cell|regular cell|right-justified cell|centered cell|regular cell
centered cell|regular cell|right-justified cell|centered cell|regular cell
</tt>
**Centered, Right-Justified, and Regular Cells and Headers**:
centered header | regular header | right-justified header | centered header | regular header
:-:|-|-:|:-:|-
centered cell|regular cell|right-justified cell|centered cell|regular cell
centered cell|regular cell|right-justified cell|centered cell|regular cell
While it is difficult to see that the headers are differently justified from one another, this is just because the longest line of characters in any column defines the width of the headers and cells in that column.
**Note:** You cannot make tables directly beneath a line of text. You must put a blank line between the end of a paragraph and the beginning of a table.
## 4.5 Style and Emphasis
<pre>
*Italics*
</pre>
*Italics*
<pre>
_Italics_
</pre>
_Italics_
<pre>
**Bold**
</pre>
**Bold**
<pre>
__Bold__
</pre>
__Bold__
**Note:** If you want actual asterisks or underscores to appear in your text, you can use the [backslash escape function] [backslash] like this:
[backslash]: #4.2-Backslash-Escape "Backslash Escape"
<pre>
\*awesome asterisks\* and \_incredible under scores\_
</pre>
\*awesome asterisks\* and \_incredible under scores\_
## 4.6 Other Characters
<pre>
Ampersand &amp; Ampersand
</pre>
Ampersand & Ampersand
<pre>
&lt; angle brackets &gt;
</pre>
< angle brackets >
<pre>
&quot; quotes &quot;
</pre>
" quotes "
## 4.7 Including Code Examples
If you want to signify that a particular section of text is actually an example of code, you can use backquotes to surround the code example. These will switch the font to monospace, which creates a clear visual formatting difference between the text that is meant to be code and the text that isn't.
Code can appear either in the middle of a paragraph or as a block. Use a single backquote to start and stop code in the middle of a paragraph. Here's an example:
<pre>
The word `monospace` will appear in a code-like form.
</pre>
The word `monospace` will appear in a code-like form.
**Note:** If you want to include a literal backquote in your code example you must surround the whole text block in double backquotes like this:
<pre>
`` Look at this literal backquote ` ``
</pre>
`` Look at this literal backquote ` ``
To include a complete code-block inside a Markdown cell, use triple backquotes. Optionally, you can put the name of the language that you are quoting after the starting triple backquotes, like this:
<pre>
```python
def function(n):
return n + 1
```
</pre>
That will format the code-block (sometimes called "fenced code") with syntax coloring. The above code block will be rendered like this:
```python
def function(n):
return n + 1
```
The language formatting names that you can currently use after the triple backquote are:
<pre>
apl django go jinja2 ntriples q smalltalk toml
asterisk dtd groovy julia octave r smarty turtle
clike dylan haml less pascal rpm smartymixed vb
clojure ecl haskell livescript pegjs rst solr vbscript
cobol eiffel haxe lua perl ruby sparql velocity
coffeescript erlang htmlembedded markdown php rust sql verilog
commonlisp fortran htmlmixed pig sass stex xml
css gas http mirc properties scheme tcl xquery
d gfm jade mllike puppet shell tiddlywiki yaml
diff gherkin javascript nginx python sieve tiki z80
</pre>
## 4.8 Images
### 4.8.1 Images from the Internet
Inserting an image from the internet is almost identical to inserting a link. You just also type a **`!`** before the first set of brackets:
<pre>

</pre>

**Note:** Unlike with a link, the words that you type in the first set of brackets do not appear when they are rendered into html by Markdown.
#### 4.8.1.1 Reference-Style Images from the Internet
Just like with links, you can also use a reference-style format when inserting images from the internet. This involves indicating where you want to place a picture, giving that picture an ID tag, and then later defining that ID tag. The process is nearly identical to using the reference-style format to insert a link:
<pre>
![][giraffe]
[giraffe]:http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/South_African_Giraffe,_head.jpg/877px-South_African_Giraffe,_head.jpg "Picture of a Giraffe"
</pre>
![][giraffe]
[giraffe]: http://upload.wikimedia.org/wikipedia/commons/thumb/b/b7/South_African_Giraffe,_head.jpg/877px-South_African_Giraffe,_head.jpg "Picture of a Giraffe"
## 4.9 LaTeX Math
Jupyter Notebooks' Markdown cells support LaTeX for formatting mathematical equations. To tell Markdown to interpret your text as LaTeX, surround your input with dollar signs like this:
<pre>
$z=\dfrac{2x}{3y}$
</pre>
$z=\dfrac{2x}{3y}$
An equation can be very complex:
<pre>
$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx$
</pre>
$F(k) = \int_{-\infty}^{\infty} f(x) e^{2\pi i k} dx$
If you want your LaTeX equations to be indented towards the center of the cell, surround your input with two dollar signs on each side like this:
<pre>
$$2x+3y=z$$
</pre>
$$2x+3y=z$$
For a comprehensive guide to the mathematical symbols and notations supported by Jupyter Notebooks' Markdown cells, check out [Martin Keefe's helpful reference materials on the subject][mkeefe].
[mkeefe]: http://martinkeefe.com/math/mathjax1 "Martin Keefe's MathJax Guide"
# 5. Bibliographic Support
Bibliographic Support makes managing references and citations in your Notebook much easier, by automating some of the bibliographic process every person goes through when doing research or writing in an academic context. There are essentially three steps to this process for which your Notebook's Bibliographic support can assist: gathering and organizing sources you intend to use, citing those sources within the text you are writing, and compiling all of the material you referenced in an organized, correctly formatted list, the kind which usually appears at the end of a paper in a section titled "References," "Bibliography," or "Works Cited."
In order to benefit from this functionality, you need to do two things while writing your paper: first, you need to create a [Bibtex database][bibdb] of information about your sources and second, you must use the [cite command][cc] in your Markdown writing cells to indicate where you want in-text citations to appear.
If you do both these things, the "Generate References" button will be able to do its job by replacing all of your cite commands with validly formatted in-text citations and creating a References section at the end of your document, which will only ever include the works you specifically cited within in your Notebook.
**Note:** References are generated without a header cell, just a [markdown header][]. This means that if you want a References section to appear in your table of contents, you will have to unrender the References cell, delete the "References" header, make a Header Cell of the appropriate level and title it "References" yourself, and then generate a table of contents using [Table of Contents Support][table of contents]. This way, you can also title your References section "Bibliography" or "Works Cited," if you want.
[markdown header]: #4.1.3-Headers
[table of contents]: #2.4.2.2-Table-of-Contents-Support
[bibdb]: #5.1-Creating-a-Bibtex-Database
[cc]:#5.2-Cite-Commands-and-Citation-IDs
## 5.1 Creating a Bibtex Database
Bibtex is reference management software for formatting lists of references ([from Wikipedia](http://en.wikipedia.org/wiki/BibTeX "Wikipedia Article on BibTeX")). While your Notebook does not use the Bibtex software, it does use [Bibtex formatting](#5.1.3-Formatting-Bibtex-Entries) for creating references within your Bibliographic database.
In order for the Generate References button to work, you need a bibliographic database for it to search and match up with the sources you've indicated you want to credit using [cite commands and citation IDs](#5.2-Cite-Commands-and-Citation-IDs).
When creating a bibliographic database for your Notebook, you have two options: you can make an external database, which will exist in a separate Notebook from the one you are writing in, or you can make an internal database which will exist in a single cell inside the Notebook in which you are writing. Below are explanations of how to use these database creation strategies, as well as a discussion of the pros and cons for each.
### 5.1.1 External Bibliographic Databases
To create an external bibliographic database, you will need to create a new Notebook and title it **`Bibliography`** in the toplevel folder of your current Jupyter session. As long as you do not also have an internal bibliographic database, when you click the Generate References button your Notebook's Bibliographic Support will search this other **`Bibliography`** Notebook for Bibtex entries. Bibtex entries can be in any cell and in any kind of cell in your **`Bibliography`** Notebook as long as the cell begins with **`<!--bibtex`** and ends with **`-->`**. Go to [this section][bibfor] for examples of valid BibTex formatting.
Not every cell has to contain BibTex entries for the external bibliographic database to work as intended with your Notebook's bibliographic support. This means you can use the same helpful organization features that you use in other Notebooks, like [Automatic Section Numbering][asn] and [Table of Contents Support][toc], to structure your own little library of references. The best part of this is that any Notebook containing validly formatted [cite commands][cc] can check your external database and find only the items that you have indicated you want to cite. So you only ever have to make the entry once and your external database can grow large and comprehensive over the course of your academic writing career.
There are several advantages to using an external database over [an internal one][internal database]. The biggest one, which has already been described, is that you will only ever need to create one and you can organize it into sections by using headers and generating [automatic section numbers][asn] and a [table of contents][toc]. These tools will help you to easily find the right [citation ID][cc] for a given source you want to cite. The other major advantage is that an external database is not visible when viewing the Notebook in which you are citing sources and generating a References list. Bibtex databases are not very attractive or readable and you probably won't want one to show up in your finished document. There are [ways to hide internal databases][hiding bibtex cell], but it's convenient not to have to worry about that.
[asn]: #2.4.2.1-Automatic-Section-Numbering
[toc]: #2.4.2.2-Table-of-Contents-Support
[cc]: #5.2-Cite-Commands-and-Citation-IDs
[hiding bibtex cell]: #5.1.2.1-Hiding-Your-Internal-Database
[bibfor]:#5.1.3-Formatting-Bibtex-Entries
### 5.1.2 Internal Bibliographic Databases
Unlike [external bibliographic databases][exd], which take up an entire separate Notebook, internal bibliographic databases consist of only one cell within the Notebook in which you are citing sources and compiling a References list. The single cell, like all of the many BibTex cells that can make up an external database, must begin with **`<!--bibtex`** and end with **`-->`** in order to be validly formatted and correctly interpreted by your Notebook's Bibliographic Support. It's probably best to keep this cell at the very end or the very beginning of your Notebook so you always know where it is. This is because when you use an internal bibliographic database it can only consist of one cell. This means that if you want to cite multiple sources you will need to keep track of the single cell that comprises your entire internal bibliographic database during every step of the research and writing process.
Internal bibliographic databases make more sense when your project is a small one and the list of total sources is short. This is especially convenient if you don't already have a built-up external database. With an internal database you don't have to create and organize a whole separate Notebook, a task that's only useful when you have to keep track of a lot of different material. Additionally, if you want to share your finished Notebook with others in a form that retains its structural validity, you only have to send one Notebook, as opposed to both the project itself and the Notebook that comprises your external bibliographic database. This is especially useful for a group project, where you want to give another reader the ability to edit, not simply read, your References section.
[exd]:#5.1.1-External-Bibliographic-Databases
#### 5.1.2.1 Hiding Your Internal Database
Even though they have some advantages, especially for smaller projects, internal databases have one major drawback. They are not very attractive or polished looking and you probably won't want one to appear in your final product. Fortunately, there are two methods for hiding your internal bibliographic database.
While your Notebook's bibliographic support will be able to interpret [correctly formatted BibTex entries][bibfor] in any [kind of cell][cell kind], if you use a [Markdown Cell][md cell] to store your internal bibliographic database, then when you run the cell all of the ugly BibTex formatting will disappear. This is handy, but it also makes the cell very difficult to find, so remember to keep careful track of where your hidden BibTex database is if you're planning to edit it later. If you want your final product to be viewed stably as HTML, then you can make your internal BibTex database inside a [Raw Cell][RC], use the [cell toolbar][] to select "Raw Cell Format", and then select "None" in the toolbar that appears in the corner of your Raw Cell BibTex database. This way, you will still be able to easily find and edit the database when you are working on your Notebook, but others won't be able to see the database when viewing your project in its final form.
[cell toolbar]: #1.-Getting-to-Know-your-Jupyter-Notebook's-Toolbar
[bibfor]:#5.1.3-Formatting-Bibtex-Entries
[RC]:#2.3-Raw-Cells
[md cell]: #2.2-Markdown-Cells
[cell kind]: #2.-Different-Kinds-of-Cells
### 5.1.3 Formatting Bibtex Entries
BibTex entries consist of three crucial components: one, the type of source you are citing (a book, article, website, etc.); two, the unique [citation ID][cc] you wish to remember the source by; and three, the fields of information about that source (author, title of work, date of publication, etc.). Below is an example entry, with each of these three components designated clearly.
<pre>
<!--bibtex
@ENTRY TYPE{CITATION ID,
FIELD 1 = {source specific information},
FIELD 2 = {source specific information},
FIELD 3 = {source specific information},
FIELD 4 = {source specific information}
}
-->
</pre>
More comprehensive documentation of what entry types and corresponding sets of required and optional fields BibTex supports can be found in the [Wikipedia article on BibTex][wikibibt].
Below is a section of the external bibliographic database for a fake history paper about the fictional island nation of Calico. (None of the entries contain information about real books or articles):
[cc]: #5.2-Cite-Commands-and-Citation-IDs
[wikibibt]: http://en.wikipedia.org/wiki/BibTeX
<pre>
<!--bibtex
@book{wellfarecut,
title = {Our Greatest Threat: The Rise of Anti-Wellfare Politics in Calico in the 21st Century},
author = {Jacob, Bernadette},
year = {2010},
publisher = {Jupyter University Press}
}
@article{militaryex2,
title = {Rethinking Calican Military Expansion for the New Century},
author = {Collier, Brian F.},
journal = {Modern Politics},
volume = {60},
issue = {25},
pages = {35 - 70},
year = {2012}
}
@article{militaryex1,
title = {Conservative Majority Passes Budget to Grow Military},
author = {Lane, Lois},
journal = {The Daily Calican},
month = {October 19th, 2011},
pages = {15 - 17},
year = {2011}
}
@article{oildrill,
title = {Oil Drilling Off the Coast of Jupyter Approved for Early Next Year},
author = {Marks, Meghan L.},
journal = {The Python Gazette},
month = {December 5th, 2012},
pages = {8 - 9},
year = {2012}
}
@article{rieseinterview,
title = {Interview with Up and Coming Freshman Senator, Alec Riese of Python},
author = {Wilmington, Oliver},
journal = {The Jupyter Times},
month = {November 24th, 2012},
pages = {4 - 7},
year = {2012}
}
@book{calicoww2:1,
title = {Calico and WWII: Untold History},
author = {French, Viola},
year = {1997},
publisher = {Calicia City Free Press}
}
@book{calicoww2:2,
title = {Rebuilding Calico After Japanese Occupation},
author = {Kepps, Milo },
year = {2002},
publisher = {Python Books}
}
-->
</pre>
## 5.2 Cite Commands and Citation IDs
When you want to cite a bibliographic entry from a database (either internal or external), you must know the citation ID, sometimes called the "key", for that entry. Citation IDs are strings of letters, numbers, and symbols that *you* make up, so they can be any word or combination of words you find easy to remember. Once you've given an entry a citation ID, however, you do need to use that same ID every time you cite that source, so it may behoove you to keep your database organized. This way it will be much easier to locate any given source's entry and its potentially forgotten citation ID.
Once you know the citation ID for a given entry, use the following format to indicate to your Notebook's bibliographic support that you'd like to insert an in-text citation:
<pre>
[](#cite-CITATION ID)
</pre>
This format is the cite command. For example, if you wanted to cite *Rebuilding Calico After Japanese Occupation* listed above, you would use the cite command and the specific citation ID for that source:
<pre>
[](#cite-calicoww2:2)
</pre>
Before clicking the "Generate References" button, your unrendered text might look like this:
<pre>
Rebuilding Calico took many years [](#cite-calicoww2:2).
</pre>
After clicking the "Generate References" button, your unrendered text might look like this:
<pre>
Rebuilding Calico took many years <a name="ref-1"/>[(Kepps, 2002)](#cite-calicoww2:2).
</pre>
and then the text would render as:
>Rebuilding Calico took many years <a name="ref-1"/>[(Kepps, 2002)](#cite-calicoww2:2).
In addition, a cell would be added at the bottom with the following contents:
>#References
><a name="cite-calicoww2:2"/><sup>[^](#ref-1) [^](#ref-2) </sup>Kepps, Milo . 2002. _Rebuilding Calico After Japanese Occupation_.
# 6. Turning Your Jupyter Notebook into a Slideshow
To install slideshow support for your Notebook, go [here](http://nbviewer.ipython.org/github/fperez/nb-slideshow-template/blob/master/install-support.ipynb).
To see a tutorial and example slideshow, go [here](http://www.damian.oquanta.info/posts/make-your-slides-with-ipython.html).
# Recommendations with IBM
In this notebook, you will be putting your recommendation skills to use on real data from the IBM Watson Studio platform.
You may either submit your notebook through the workspace here, or you may work from your local machine and submit through the next page. Either way, ensure that your code passes the project [RUBRIC](https://review.udacity.com/#!/rubrics/2322/view). **Please save regularly.**
By following the table of contents, you will build out a number of different methods for making recommendations that can be used for different situations.
## Table of Contents
I. [Exploratory Data Analysis](#Exploratory-Data-Analysis)<br>
II. [Rank Based Recommendations](#Rank)<br>
III. [User-User Based Collaborative Filtering](#User-User)<br>
IV. [Content Based Recommendations (EXTRA - NOT REQUIRED)](#Content-Recs)<br>
V. [Matrix Factorization](#Matrix-Fact)<br>
VI. [Extras & Concluding](#conclusions)
At the end of the notebook, you will find directions for how to submit your work. Let's get started by importing the necessary libraries and reading in the data.
```
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import project_tests as t
import pickle
from matplotlib.pyplot import figure
%matplotlib inline
df = pd.read_csv('data/user-item-interactions.csv')
df_content = pd.read_csv('data/articles_community.csv')
del df['Unnamed: 0']
del df_content['Unnamed: 0']
# Show df to get an idea of the data
df.head()
# Show df_content to get an idea of the data
df_content.head()
```
### <a class="anchor" id="Exploratory-Data-Analysis">Part I : Exploratory Data Analysis</a>
Use the dictionary and cells below to provide some insight into the descriptive statistics of the data.
`1.` What is the distribution of how many articles a user interacts with in the dataset? Provide a visual and descriptive statistics that show the number of times each user interacts with an article.
```
# Count interactions per user, sorted
interactions = df.groupby('email').count().drop(['title'],axis=1)
interactions.columns = ['nb_articles']
interactions_sorted = interactions.sort_values(['nb_articles'])
interactions_sorted.head()
interactions_sorted.describe()
#plt.figure(figsize=(10,30))
plt.style.use('ggplot')
interactions_plot = interactions_sorted.reset_index().groupby('nb_articles').count()
interactions_plot.plot.bar(figsize=(20,10))
plt.title('Number of users per number of interactions')
plt.xlabel('Interactions')
plt.ylabel('Users')
plt.legend(('Number of users',),prop={"size":10})
plt.show()
# Fill in the median and maximum number of user_article interactions below
median_val = 3 # 50% of individuals interact with ____ number of articles or fewer.
max_views_by_user = 364 # The maximum number of user-article interactions by any 1 user is ______.
```
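Rather than transcribing these two numbers by hand from the `describe()` output, they can also be pulled directly from the per-user counts computed above. A minimal sketch follows — it reuses the `interactions_sorted` DataFrame from the cell above and stores the results under new, illustrative names so nothing from the exercise is overwritten:
```
# A sketch: compute the summary values directly instead of reading them off describe().
# Reuses interactions_sorted from the cell above.
computed_median = int(interactions_sorted['nb_articles'].median())  # median interactions per user
computed_max = int(interactions_sorted['nb_articles'].max())        # most interactions by a single user
print(computed_median, computed_max)  # expected to match the hard-coded 3 and 364
```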
`2.` Explore and remove duplicate articles from the **df_content** dataframe.
```
row_per_article = df_content.groupby('article_id').count()
duplicates = row_per_article[row_per_article['doc_full_name'] > 1].index
df_content[df_content['article_id'].isin(duplicates)].sort_values('article_id')
# Remove any rows that have the same article_id - only keep the first
df_content_no_duplicates = df_content.drop_duplicates('article_id')
df_content_no_duplicates[df_content_no_duplicates['article_id'].isin(duplicates)].sort_values('article_id')
```
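As a quick sanity check (a sketch, not part of the original exercise), you can confirm that the deduplicated dataframe really does contain each `article_id` only once:
```
# Sanity check: every article_id should now appear exactly once.
assert df_content_no_duplicates['article_id'].is_unique
print(df_content.shape, '->', df_content_no_duplicates.shape)
```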
`3.` Use the cells below to find:
**a.** The number of unique articles that have an interaction with a user.
**b.** The number of unique articles in the dataset (whether they have any interactions or not).<br>
**c.** The number of unique users in the dataset. (excluding null values) <br>
**d.** The number of user-article interactions in the dataset.
```
# Articles with an interaction
len(df['article_id'].unique())
# Total articles
len(df_content_no_duplicates['article_id'].unique())
# Unique users
len(df[df['email'].isnull() == False]['email'].unique())
# Unique interactions
len(df)
unique_articles = 714 # The number of unique articles that have at least one interaction
total_articles = 1051 # The number of unique articles on the IBM platform
unique_users = 5148 # The number of unique users
user_article_interactions = 45993 # The number of user-article interactions
```
`4.` Use the cells below to find the most viewed **article_id**, as well as how often it was viewed. After talking to the company leaders, the `email_mapper` function was deemed a reasonable way to map users to ids. There were a small number of null values, and it was found that all of these null values likely belonged to a single user (which is how they are stored using the function below).
```
df.groupby('article_id').count().sort_values(by='email',ascending = False).head(1)
most_viewed_article_id = str(1429.0) # The most viewed article in the dataset as a string with one value following the decimal
max_views = 937 # The most viewed article in the dataset was viewed how many times?
## No need to change the code here - this will be helpful for later parts of the notebook
# Run this cell to map the user email to a user_id column and remove the email column
def email_mapper():
coded_dict = dict()
cter = 1
email_encoded = []
for val in df['email']:
if val not in coded_dict:
coded_dict[val] = cter
cter+=1
email_encoded.append(coded_dict[val])
return email_encoded
email_encoded = email_mapper()
del df['email']
df['user_id'] = email_encoded
# show header
df.head()
## If you stored all your results in the variable names above,
## you shouldn't need to change anything in this cell
sol_1_dict = {
'`50% of individuals have _____ or fewer interactions.`': median_val,
'`The total number of user-article interactions in the dataset is ______.`': user_article_interactions,
'`The maximum number of user-article interactions by any 1 user is ______.`': max_views_by_user,
'`The most viewed article in the dataset was viewed _____ times.`': max_views,
'`The article_id of the most viewed article is ______.`': most_viewed_article_id,
'`The number of unique articles that have at least 1 rating ______.`': unique_articles,
'`The number of unique users in the dataset is ______`': unique_users,
'`The number of unique articles on the IBM platform`': total_articles
}
# Test your dictionary against the solution
t.sol_1_test(sol_1_dict)
```
### <a class="anchor" id="Rank">Part II: Rank-Based Recommendations</a>
Unlike in the earlier lessons, we don't actually have ratings for whether a user liked an article or not. We only know that a user has interacted with an article. In these cases, the popularity of an article can really only be based on how often an article was interacted with.
`1.` Fill in the function below to return the **n** top articles ordered with most interactions as the top. Test your function using the tests below.
```
def get_top_articles(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
top_articles = list(df.groupby('title').count().sort_values(by='user_id',ascending = False).head(n).index)
return top_articles # Return the top article titles from df (not df_content)
def get_top_article_ids(n, df=df):
'''
INPUT:
n - (int) the number of top articles to return
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
top_articles - (list) A list of the top 'n' article titles
'''
top_articles = list(df.groupby('article_id').count().sort_values(by='user_id',ascending = False).head(n).index.astype(str))
return top_articles # Return the top article ids
print(get_top_articles(10))
print(get_top_article_ids(10))
# Test your function by returning the top 5, 10, and 20 articles
top_5 = get_top_articles(5)
top_10 = get_top_articles(10)
top_20 = get_top_articles(20)
# Test each of your three lists from above
t.sol_2_test(get_top_articles)
```
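Since these rank-based functions use no information about any individual user, they are the natural fallback for a brand-new user with no interaction history. A small usage sketch (the variable names here are illustrative only):
```
# Usage sketch: recommend the overall most-popular articles to a new user,
# since no personalization is possible without interaction history.
cold_start_ids = get_top_article_ids(10)   # article ids as strings
cold_start_titles = get_top_articles(10)   # matching article titles
print(cold_start_titles)
```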
### <a class="anchor" id="User-User">Part III: User-User Based Collaborative Filtering</a>
`1.` Use the function below to reformat the **df** dataframe to be shaped with users as the rows and articles as the columns.
* Each **user** should only appear in each **row** once.
* Each **article** should only show up in one **column**.
* **If a user has interacted with an article, then place a 1 where the user-row meets that article-column**. It does not matter how many times a user has interacted with the article; all entries where a user has interacted with an article should be a 1.
* **If a user has not interacted with an item, then place a zero where the user-row meets that article-column**.
Use the tests to make sure the basic structure of your matrix matches what is expected by the solution.
```
# create the user-article matrix with 1's and 0's
def create_user_item_matrix(df):
'''
INPUT:
df - pandas dataframe with article_id, title, user_id columns
OUTPUT:
user_item - user item matrix
Description:
Return a matrix with user ids as rows and article ids on the columns with 1 values where a user interacted with
an article and a 0 otherwise
'''
user_item = df.groupby(['user_id', 'article_id']).count().groupby(['user_id', 'article_id']).count().unstack()
user_item = user_item.fillna(0)
return user_item # return the user_item matrix
user_item = create_user_item_matrix(df)
## Tests: You should just need to run this cell. Don't change the code.
assert user_item.shape[0] == 5149, "Oops! The number of users in the user-article matrix doesn't look right."
assert user_item.shape[1] == 714, "Oops! The number of articles in the user-article matrix doesn't look right."
assert user_item.sum(axis=1)[1] == 36, "Oops! The number of articles seen by user 1 doesn't look right."
print("You have passed our quick tests! Please proceed!")
```
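The double `groupby().count()` above does produce the intended 1/0 matrix, but the same idea can be expressed more directly. The sketch below is an alternative illustration only, not a drop-in replacement — for instance, its columns are a flat index of article ids rather than the `MultiIndex` that `unstack()` produces, so the helper name and variable are hypothetical:
```
# Alternative sketch of the same binary user-item matrix using crosstab.
def create_user_item_matrix_alt(df):
    counts = pd.crosstab(df['user_id'], df['article_id'])  # raw interaction counts per user/article pair
    return (counts > 0).astype(int)                        # 1 if any interaction, else 0

user_item_alt = create_user_item_matrix_alt(df)
print(user_item_alt.shape)  # should also come out as (5149, 714)
```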
`2.` Complete the function below which should take a user_id and provide an ordered list of the most similar users to that user (from most similar to least similar). The returned result should not contain the provided user_id, as we know that each user is similar to him/herself. Because the results for each user here are binary, it (perhaps) makes sense to compute similarity as the dot product of two users.
Use the tests to test your function.
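As a tiny illustration of the dot-product idea (the vectors below are made up and are not part of the dataset): two binary interaction vectors multiply to the number of articles both users have interacted with.
```
import numpy as np

# two users' binary interaction vectors over the same five articles (illustrative values only)
user_a = np.array([1, 0, 1, 1, 0])
user_b = np.array([1, 1, 1, 0, 0])

# the dot product counts the articles both users interacted with
print(user_a.dot(user_b))  # 2
```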
```
def find_similar_users(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user_id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
similar_users - (list) an ordered list where the closest users (largest dot product users)
are listed first
Description:
Computes the similarity of every pair of users based on the dot product
    Returns an ordered list of user ids, from most to least similar
'''
# compute similarity of each user to the provided user
similarities_matrix = user_item.dot(np.transpose(user_item))
similarities_user = similarities_matrix[similarities_matrix.index == user_id].transpose()
similarities_user.columns = ['similarities']
# sort by similarity
similarities_sorted = similarities_user.sort_values(by = 'similarities', ascending=False)
# create list of just the ids
most_similar_users = list(similarities_sorted.index)
# remove the own user's id
most_similar_users.remove(user_id)
return most_similar_users # return a list of the users in order from most to least similar
# Do a spot check of your function
print("The 10 most similar users to user 1 are: {}".format(find_similar_users(1)[:10]))
print("The 5 most similar users to user 3933 are: {}".format(find_similar_users(3933)[:5]))
print("The 3 most similar users to user 46 are: {}".format(find_similar_users(46)[:3]))
```
`3.` Now that you have a function that provides the most similar users to each user, you will want to use these users to find articles you can recommend. Complete the functions below to return the articles you would recommend to each user.
```
def get_article_names(article_ids, df=df):
'''
INPUT:
article_ids - (list) a list of article ids
df - (pandas dataframe) df as defined at the top of the notebook
OUTPUT:
article_names - (list) a list of article names associated with the list of article ids
(this is identified by the title column)
'''
article_names = df[df['article_id'].isin(article_ids)]['title'].unique().tolist()
return article_names # Return the article names associated with list of article ids
def get_user_articles(user_id, user_item=user_item):
'''
INPUT:
user_id - (int) a user id
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
article_ids - (list) a list of the article ids seen by the user
article_names - (list) a list of article names associated with the list of article ids
    (this is identified by the title column in df)
Description:
Provides a list of the article_ids and article titles that have been seen by a user
'''
user_transpose = user_item[user_item.index == user_id].transpose()
user_transpose.columns = ['seen']
article_ids = list(user_transpose[user_transpose['seen'] == 1].reset_index()['article_id'].astype(str))
article_names = get_article_names(article_ids,df)
return article_ids, article_names # return the ids and names
def user_user_recs(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
Users who are the same closeness are chosen arbitrarily as the 'next' user
For the user where the number of recommended articles starts below m
and ends exceeding m, the last items are chosen arbitrarily
'''
    articles_seen = get_user_articles(user_id)[0]  # article ids only; the names are not needed here
closest_users = find_similar_users(user_id)
# Keep the recommended articles here
recs = np.array([])
# Go through the users and identify articles they have seen the user hasn't seen
for user in closest_users:
users_articles_seen_id, users_articles_seen_name = get_user_articles(user)
#Obtain recommendations for each neighbor
new_recs = np.setdiff1d(users_articles_seen_id, articles_seen, assume_unique=True)
# Update recs with new recs
recs = np.unique(np.concatenate([new_recs, recs], axis=0))
# If we have enough recommendations exit the loop
if len(recs) > m-1:
break
# Pick the first m
recs = recs[0:m]
return recs # return your recommendations for this user_id
# Test your functions here - No need to change this code - just run this cell
assert set(get_article_names(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_article_names(['1320.0', '232.0', '844.0'])) == set(['housing (2015): united states demographic measures','self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook']), "Oops! Your the get_article_names function doesn't work quite how we expect."
assert set(get_user_articles(20)[0]) == set(['1320.0', '232.0', '844.0'])
assert set(get_user_articles(20)[1]) == set(['housing (2015): united states demographic measures', 'self-service data preparation with ibm data refinery','use the cloudant-spark connector in python notebook'])
assert set(get_user_articles(2)[0]) == set(['1024.0', '1176.0', '1305.0', '1314.0', '1422.0', '1427.0'])
assert set(get_user_articles(2)[1]) == set(['using deep learning to reconstruct high-resolution audio', 'build a python app on the streaming analytics service', 'gosales transactions for naive bayes model', 'healthcare python streaming application demo', 'use r dataframes & ibm watson natural language understanding', 'use xgboost, scikit-learn & ibm watson machine learning apis'])
print("If this is all you see, you passed all of our tests! Nice job!")
```
`4.` Now we are going to improve the consistency of the **user_user_recs** function from above.
* Instead of arbitrarily choosing when we obtain users who are all the same closeness to a given user - choose the users that have the most total article interactions before choosing those with fewer article interactions.
* Instead of arbitrarily choosing articles from the user where the number of recommended articles starts below m and ends exceeding m, choose the articles with the most total interactions before choosing those with fewer total interactions. This ranking should be what would be obtained from the **top_articles** function you wrote earlier.
```
def get_top_sorted_users(user_id, df=df, user_item=user_item):
'''
INPUT:
user_id - (int)
df - (pandas dataframe) df as defined at the top of the notebook
user_item - (pandas dataframe) matrix of users by articles:
1's when a user has interacted with an article, 0 otherwise
OUTPUT:
neighbors_df - (pandas dataframe) a dataframe with:
neighbor_id - is a neighbor user_id
similarity - measure of the similarity of each user to the provided user_id
    num_interactions - the number of article interactions for the user
Other Details - sort the neighbors_df by the similarity and then by number of interactions where
highest of each is higher in the dataframe
'''
# Get similarities
similarities_matrix = user_item.dot(np.transpose(user_item))
similarities_user = similarities_matrix[similarities_matrix.index == user_id].transpose()
similarities_user.columns = ['similarities']
# Get interactions
interactions = df.groupby('user_id').count().drop(['title'],axis=1)
interactions.columns = ['interactions']
# Merge similarities with interactions
neighbors_df_not_sorted = similarities_user.join(interactions, how='left')
neighbors_df = neighbors_df_not_sorted.sort_values(by = ['similarities', 'interactions'], ascending = False)
return neighbors_df # Return the dataframe specified in the doc_string
def user_user_recs_part2(user_id, m=10):
'''
INPUT:
user_id - (int) a user id
m - (int) the number of recommendations you want for the user
OUTPUT:
recs - (list) a list of recommendations for the user by article id
rec_names - (list) a list of recommendations for the user by article title
Description:
Loops through the users based on closeness to the input user_id
For each user - finds articles the user hasn't seen before and provides them as recs
Does this until m recommendations are found
Notes:
* Choose the users that have the most total article interactions
before choosing those with fewer article interactions.
    * Choose the articles with the most total interactions
before choosing those with fewer total interactions.
'''
    articles_seen = get_user_articles(user_id)[0]  # article ids only
closest_users = get_top_sorted_users(user_id).index.tolist()
closest_users.remove(user_id)
top_articles_all = get_top_article_ids(len(df))
# Keep the recommended articles here
recs = np.array([])
# Go through the users and identify articles they have seen the user hasn't seen
for user in closest_users:
users_articles_seen_id, users_articles_seen_name = get_user_articles(user)
# Sort articles according to the number of interactions
users_articles_seen_id = sorted(users_articles_seen_id, key=lambda x: top_articles_all.index(x))
# Obtain recommendations for each neighbor
new_recs = np.setdiff1d(users_articles_seen_id, articles_seen, assume_unique=True)
# Update recs with new recs
recs = np.unique(np.concatenate([new_recs, recs], axis=0))
# If we have enough recommendations exit the loop
if len(recs) > m-1:
break
# Pick the first m
recs = recs[0:m].tolist()
# Get rec names
rec_names = get_article_names(recs)
return recs, rec_names
# Quick spot check - don't change this code - just use it to test your functions
rec_ids, rec_names = user_user_recs_part2(20, 10)
print("The top 10 recommendations for user 20 are the following article ids:")
print(rec_ids)
print()
print("The top 10 recommendations for user 20 are the following article names:")
print(rec_names)
```
`5.` Use your functions from above to correctly fill in the solutions to the dictionary below. Then test your dictionary against the solution. Provide the code you need to answer each following the comments below.
```
### Tests with a dictionary of results
user1_most_sim = get_top_sorted_users(1).iloc[1].name #Find the user that is most similar to user 1
user131_10th_sim = get_top_sorted_users(131).iloc[10].name #Find the 10th most similar user to user 131
## Dictionary Test Here
sol_5_dict = {
'The user that is most similar to user 1.': user1_most_sim,
'The user that is the 10th most similar to user 131': user131_10th_sim,
}
t.sol_5_test(sol_5_dict)
```
`6.` If we were given a new user, which of the above functions would you be able to use to make recommendations? Explain. Can you think of a better way we might make recommendations? Use the cell below to explain a better method for new users.
For a brand-new user we have no interaction history, so none of the collaborative-filtering functions above can be applied; the only usable functions are the rank-based ones (`get_top_articles` / `get_top_article_ids`), i.e. we would recommend the most popular articles to every new user. A better approach would be to gather some information during onboarding (for example topics of interest) or to switch to content-based or collaborative recommendations as soon as the user has interacted with at least one article.
`7.` Using your existing functions, provide the top 10 recommended articles you would provide for the a new user below. You can test your function against our thoughts to make sure we are all on the same page with how we might make a recommendation.
```
new_user = '0.0'
# What would your recommendations be for this new user '0.0'? As a new user, they have no observed articles.
# Provide a list of the top 10 article ids you would give to
new_user_recs = get_top_article_ids(10)
assert set(new_user_recs) == set(['1314.0','1429.0','1293.0','1427.0','1162.0','1364.0','1304.0','1170.0','1431.0','1330.0']), "Oops! It makes sense that in this case we would want to recommend the most popular articles, because we don't know anything about these users."
print("That's right! Nice job!")
```
### <a class="anchor" id="Content-Recs">Part IV: Content Based Recommendations (EXTRA - NOT REQUIRED)</a>
Another method we might use to make recommendations is to perform a ranking of the highest ranked articles associated with some term. You might consider content to be the **doc_body**, **doc_description**, or **doc_full_name**. There isn't one way to create a content based recommendation, especially considering that each of these columns hold content related information.
`1.` Use the function body below to create a content based recommender. Since there isn't one right answer for this recommendation tactic, no test functions are provided. Feel free to change the function inputs if you decide you want to try a method that requires more input values. The input values are currently set with one idea in mind that you may use to make content based recommendations. One additional idea is that you might want to choose the most popular recommendations that meet your 'content criteria', but again, there is a lot of flexibility in how you might make these recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
def make_content_recs():
'''
INPUT:
OUTPUT:
'''
```
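Although the function above is intentionally left unimplemented (this part is optional), one possible direction is sketched below. It is illustrative only: it assumes the `df_content` dataframe loaded at the top of the notebook has `article_id` and `doc_full_name` columns, and it ranks articles by TF-IDF cosine similarity of their full names. The function name `similar_articles_by_name` is hypothetical and not part of the project template.
```
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def similar_articles_by_name(article_id, df_content=df_content, n=10):
    '''Illustrative sketch: return the n articles whose doc_full_name is most
    similar (TF-IDF cosine similarity) to the given article.'''
    tfidf = TfidfVectorizer(stop_words='english')
    doc_term = tfidf.fit_transform(df_content['doc_full_name'].fillna(''))
    # positional index of the query article
    idx = int((df_content['article_id'].values == article_id).argmax())
    sims = cosine_similarity(doc_term[idx], doc_term).flatten()
    top_idx = sims.argsort()[::-1][1:n+1]  # skip the article itself
    return df_content.iloc[top_idx]['doc_full_name'].tolist()
```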
`2.` Now that you have put together your content-based recommendation system, use the cell below to write a summary explaining how your content based recommender works. Do you see any possible improvements that could be made to your function? Is there anything novel about your content based recommender?
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
**Write an explanation of your content based recommendation system here.**
`3.` Use your content-recommendation system to make recommendations for the below scenarios based on the comments. Again no tests are provided here, because there isn't one right answer that could be used to find these content based recommendations.
### This part is NOT REQUIRED to pass this project. However, you may choose to take this on as an extra way to show off your skills.
```
# make recommendations for a brand new user
# make a recommendations for a user who only has interacted with article id '1427.0'
```
### <a class="anchor" id="Matrix-Fact">Part V: Matrix Factorization</a>
In this part of the notebook, you will use matrix factorization to make article recommendations to the users on the IBM Watson Studio platform.
`1.` You should have already created a **user_item** matrix above in **question 1** of **Part III** above. This first question here will just require that you run the cells to get things set up for the rest of **Part V** of the notebook.
```
# Load the matrix here
user_item_matrix = pd.read_pickle('user_item_matrix.p')
# quick look at the matrix
user_item_matrix.head()
```
`2.` In this situation, you can use Singular Value Decomposition from [numpy](https://docs.scipy.org/doc/numpy-1.14.0/reference/generated/numpy.linalg.svd.html) on the user-item matrix. Use the cell to perform SVD, and explain why this is different than in the lesson.
```
# Perform SVD on the User-Item Matrix Here
u, s, vt = np.linalg.svd(user_item_matrix)
s.shape, u.shape, vt.shape
# Change the dimensions of u, s, and vt as necessary
# update the shape of u and store in u_new
u_new = u[:, :len(s)]
# update the shape of s and store in s_new
s_new = np.zeros((len(s), len(s)))
s_new[:len(s), :len(s)] = np.diag(s)
# vt already has shape (len(s), number of articles), so it does not need to be resized;
# vt and vt_new are the same
vt_new = vt
s_new.shape, u_new.shape, vt_new.shape
```
Unlike the lesson, the user-item matrix here contains no missing values: every cell is a 1 or a 0 indicating whether the user has interacted with an article, rather than a rating that may be absent. Traditional SVD from numpy therefore works directly; FunkSVD is only needed when the matrix contains null values.
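For contrast, a minimal FunkSVD-style sketch is shown below. It is illustrative only and is not used anywhere in this notebook: it learns the latent factors by gradient descent over the observed entries only, which is exactly what makes it suitable for matrices with missing ratings.
```
import numpy as np

def funk_svd(ratings, n_latent=12, learning_rate=0.005, n_epochs=20):
    '''Gradient-descent matrix factorization that skips NaN entries.
    ratings: 2-D numpy array with NaN for the missing user-item cells.'''
    n_users, n_items = ratings.shape
    user_mat = np.random.normal(scale=0.1, size=(n_users, n_latent))
    item_mat = np.random.normal(scale=0.1, size=(n_latent, n_items))
    observed = np.argwhere(~np.isnan(ratings))  # train only on the known cells
    for _ in range(n_epochs):
        for i, j in observed:
            error = ratings[i, j] - user_mat[i, :].dot(item_mat[:, j])
            u_row = user_mat[i, :].copy()
            user_mat[i, :] += learning_rate * error * item_mat[:, j]
            item_mat[:, j] += learning_rate * error * u_row
    return user_mat, item_mat
```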
`3.` Now for the tricky part, how do we choose the number of latent features to use? Running the below cell, you can see that as the number of latent features increases, we obtain a lower error rate on making predictions for the 1 and 0 values in the user-item matrix. Run the cell below to get an idea of how the accuracy improves as we increase the number of latent features.
```
num_latent_feats = np.arange(10,700+10,20)
sum_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_new, u_new, vt_new = np.diag(s[:k]), u[:, :k], vt[:k, :]
# take dot product
user_item_est = np.around(np.dot(np.dot(u_new, s_new), vt_new))
# compute error for each prediction to actual value
diffs = np.subtract(user_item_matrix, user_item_est)
# total errors and keep track of them
err = np.sum(np.sum(np.abs(diffs)))
sum_errs.append(err)
plt.plot(num_latent_feats, 1 - np.array(sum_errs)/df.shape[0]);
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
```
`4.` From the above, we can't really be sure how many features to use, because simply having a better way to predict the 1's and 0's of the matrix doesn't exactly give us an indication of if we are able to make good recommendations. Instead, we might split our dataset into a training and test set of data, as shown in the cell below.
Use the code from question 3 to understand the impact on accuracy of the training and test sets of data with different numbers of latent features. Using the split below:
* How many users can we make predictions for in the test set?
* How many users are we not able to make predictions for because of the cold start problem?
* How many articles can we make predictions for in the test set?
* How many articles are we not able to make predictions for because of the cold start problem?
```
df_train = df.head(40000)
df_test = df.tail(5993)
def create_test_and_train_user_item(df_train, df_test):
'''
INPUT:
df_train - training dataframe
df_test - test dataframe
OUTPUT:
user_item_train - a user-item matrix of the training dataframe
(unique users for each row and unique articles for each column)
user_item_test - a user-item matrix of the testing dataframe
(unique users for each row and unique articles for each column)
test_idx - all of the test user ids
test_arts - all of the test article ids
'''
# Get user_item_matrices
user_item_train = create_user_item_matrix(df_train)
user_item_test = create_user_item_matrix(df_test)
# Get user ids
test_idx = user_item_test.index.tolist()
# Get article ids
test_arts = user_item_test.columns.droplevel().tolist()
return user_item_train, user_item_test, test_idx, test_arts
user_item_train, user_item_test, test_idx, test_arts = create_test_and_train_user_item(df_train, df_test)
print('1. How many users can we make predictions for in the test set?')
qst_1 = len(np.intersect1d(test_idx, user_item_train.index.tolist(), assume_unique=True))
print(qst_1)
print('')
print('2. How many users in the test set are we not able to make predictions for because of the cold start problem?')
print(len(test_idx) - qst_1)
print('')
print('3. How many movies can we make predictions for in the test set?')
qst_3 = len(np.intersect1d(test_arts, user_item_train.columns.droplevel().tolist(), assume_unique=True))
print(qst_3)
print('')
print('4. How many movies in the test set are we not able to make predictions for because of the cold start problem')
print(len(test_arts) - qst_3)
print('')
# Replace the values in the dictionary below
a = 662
b = 574
c = 20
d = 0
sol_4_dict = {
'How many users can we make predictions for in the test set?': c,
'How many users in the test set are we not able to make predictions for because of the cold start problem?': a,
'How many movies can we make predictions for in the test set?': b,
'How many movies in the test set are we not able to make predictions for because of the cold start problem?': d
}
t.sol_4_test(sol_4_dict)
```
Please note that the dictionary keys above say 'movies' instead of 'articles' because the solution-test function only accepts that wording; the data itself contains articles, not movies.
`5.` Now use the **user_item_train** dataset from above to find U, S, and V transpose using SVD. Then find the subset of rows in the **user_item_test** dataset that you can predict using this matrix decomposition with different numbers of latent features to see how many features makes sense to keep based on the accuracy on the test data. This will require combining what was done in questions `2` - `4`.
Use the cells below to explore how well SVD works towards making predictions for recommendations on the test data.
```
# fit SVD on the user_item_train matrix
u_train, s_train, vt_train = np.linalg.svd(user_item_train)# fit svd similar to above then use the cells below
s_train.shape, u_train.shape, vt_train.shape
# Find users to predict in test matrix
users_to_predict = np.intersect1d(test_idx, user_item_train.index.tolist(), assume_unique=True).tolist()
# Get filtered test matrix
user_item_test_f = user_item_test[user_item_test.index.isin(users_to_predict)]
# Get position of the users to predict in the train matrix
users_train_pos = user_item_train.reset_index()[user_item_train.reset_index()['user_id'].isin(users_to_predict)].index.tolist()
# Find the test articles that also appear in the train matrix
articles_to_predict = np.intersect1d(test_arts, user_item_train.columns.droplevel().tolist(), assume_unique=True).tolist()
# Get position of the articles to predict in the train matrix
articles_train_pos = user_item_train.columns.droplevel().isin(articles_to_predict)
# Slice u and vt down to the test users and test articles
u_test = u_train[users_train_pos,:]
vt_test = vt_train[:,articles_train_pos]
from sklearn.metrics import accuracy_score
# Use these cells to see how well you can use the training
# decomposition to predict on test data
num_latent_feats = np.arange(10,700+10,20)
sum_errs_train = []
sum_errs_test = []
train_errs = []
test_errs = []
for k in num_latent_feats:
# restructure with k latent features
s_train_new, u_train_new, vt_train_new = np.diag(s_train[:k]), u_train[:, :k], vt_train[:k, :]
u_test_new, vt_test_new = u_test[:, :k], vt_test[:k, :]
# take dot product
user_item_est_train = np.around(np.dot(np.dot(u_train_new, s_train_new), vt_train_new))
user_item_est_test = np.around(np.dot(np.dot(u_test_new, s_train_new), vt_test_new))
# compute error for each prediction to actual value
diffs_train = np.subtract(user_item_train, user_item_est_train)
diffs_test = np.subtract(user_item_test_f, user_item_est_test)
# total errors and keep track of them
err_train = np.sum(np.sum(np.abs(diffs_train)))
sum_errs_train.append(err_train)
err_test = np.sum(np.sum(np.abs(diffs_test)))
sum_errs_test.append(err_test)
# number of interactions
nb_interactions_train = user_item_est_train.shape[0] * user_item_est_train.shape[1]
nb_interactions_test = user_item_est_test.shape[0] * user_item_est_test.shape[1]
plt.plot(num_latent_feats, 1 - np.array(sum_errs_train)/nb_interactions_train, label = 'Train');
plt.plot(num_latent_feats, 1 - np.array(sum_errs_test)/nb_interactions_test, label = 'Test');
plt.xlabel('Number of Latent Features');
plt.ylabel('Accuracy');
plt.title('Accuracy vs. Number of Latent Features');
plt.legend()
plt.show()
```
`6.` Use the cell below to comment on the results you found in the previous question. Given the circumstances of your results, discuss what you might do to determine if the recommendations you make with any of the above recommendation systems are an improvement to how users currently find articles?
When using SVD on the test dataset, the accuracy decreases as the number of latent features grows. Only a small number of users (20) are common to the train and test sets, so there is very little signal to evaluate on. In addition, the data (0s and 1s) is heavily imbalanced: the 0s vastly outnumber the 1s, so the reported accuracy is dominated by correctly predicted 0s.
Moreover, increasing the number of latent features increases overfitting to the training data (training accuracy keeps rising), which also explains why the accuracy on the test set decreases.
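One quick way to back up the imbalance argument is to look at the trivial "predict all zeros" baseline on the filtered test matrix `user_item_test_f` from the previous code cell (a sanity check only):
```
# share of 1s in the filtered test matrix vs. the accuracy of an all-zeros baseline
n_cells = user_item_test_f.shape[0] * user_item_test_f.shape[1]
n_ones = user_item_test_f.values.sum()
print('share of 1s in the test matrix: {:.4f}'.format(n_ones / n_cells))
print('accuracy of predicting all 0s: {:.4f}'.format(1 - n_ones / n_cells))
```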
In order to understand if our results are working in practice, I would conduct an experiment. I would split my users into three groups with different treatments:
- Group 1: do not receive any recommendation
- Group 2: receives recommendations from a mix of collaborative filtering and top ranked
- Group 3: receives recommendations from matrix factorization
We would split the users on a cookie basis, so that each user sees the same experience every time they visit the website.
I would do the following tests:
- Group 1 vs. Group 2, where the null hypothesis is that there is no difference between not providing recommendations to users and providing collaborative + top ranked-based recommendations
- Group 1 vs. Group 3, where the null hypothesis is that there is no difference between not providing recommendations to users and providing matrix factorization-based recommendations
The success metric will be the number of clicks on articles per user per session on the website. This would mainly focus on the novelty effect of the recommendations, as we would assume users would click on articles if they have not seen them before. It could also be beneficial if the users could rate the article (even with just thumbs up or down), in order to know if the article was interesting for them, and therefore if the recommendations are relevant.
The difference in this success metric would need to be statistically significant before going ahead with the recommendation engine, and the comparison would also tell us which method to deploy. If it is not significant, we would need to assess whether other factors still justify implementing the engine.
We would also check an invariant metric across the groups to make sure they are comparable, e.g. that one group does not contain a disproportionate share of heavy users who have already viewed hundreds of articles while another group consists mostly of users who have only viewed a handful.
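As a sketch of how one of these pairwise comparisons could be evaluated (the click counts below are simulated placeholders, not real experiment data):
```
import numpy as np
from scipy import stats

# simulated clicks-per-user-per-session for two of the experimental groups
group_1 = np.random.poisson(lam=2.0, size=500)  # control: no recommendations
group_2 = np.random.poisson(lam=2.3, size=500)  # collaborative + rank-based recommendations

t_stat, p_value = stats.ttest_ind(group_2, group_1, equal_var=False)
print('t = {:.2f}, p = {:.4f}'.format(t_stat, p_value))
# reject the null hypothesis of "no difference" only if p_value is below the chosen alpha (e.g. 0.05)
```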
```
from subprocess import call
call(['python', '-m', 'nbconvert', 'Recommendations_with_IBM.ipynb'])
```
| github_jupyter |
# 0.0. IMPORTS
```
import inflection
import math
import datetime
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import seaborn as sns
from IPython.core.display import HTML
from IPython.display import Image
```
## 0.1. Helper Functions
```
def load_csv(path):
df = pd.read_csv(path, low_memory=False)
return df
def rename_columns(df, old_columns):
snakecase = lambda x: inflection.underscore(x)
cols_new = list(map(snakecase, old_columns))
print(f"Old columns: {df.columns.to_list()}")
# Rename
df.columns = cols_new
print(f"\nNew columns: {df.columns.to_list()}")
print('\n', df.columns)
return df
def show_dimensions(df):
print(f"Number of Rows: {df1.shape[0]}")
print(f"Number of Columns: {df1.shape[1]}")
print(f"Shape: {df1.shape}")
return None
def show_data_types(df):
print(df.dtypes)
return None
def check_na(df):
print(df.isna().sum())
return None
def show_descriptive_statistical(df):
# Central Tendency - mean, median
ct1 = pd.DataFrame(df.apply(np.mean)).T
ct2 = pd.DataFrame(df.apply(np.median)).T
# Dispersion - std, min, max, range, skew, kurtosis
d1 = pd.DataFrame(df.apply(np.std)).T
d2 = pd.DataFrame(df.apply(min)).T
d3 = pd.DataFrame(df.apply(max)).T
d4 = pd.DataFrame(df.apply(lambda x: x.max() - x.min())).T
d5 = pd.DataFrame(df.apply(lambda x: x.skew())).T
d6 = pd.DataFrame(df.apply(lambda x: x.kurtosis())).T
m = pd.concat([d2, d3, d4, ct1, ct2, d1, d5, d6]).T.reset_index()
m.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
print(m)
def jupyter_settings():
%matplotlib inline
%pylab inline
plt.style.use( 'ggplot')
plt.rcParams['figure.figsize'] = [24, 9]
plt.rcParams['font.size'] = 24
display( HTML( '<style>.container { width:100% !important; }</style>') )
pd.options.display.max_columns = None
pd.options.display.max_rows = None
pd.set_option( 'display.expand_frame_repr', False )
sns.set()
jupyter_settings()
```
## 0.2. Path Definition
```
# path
home_path = 'C:\\Users\\sindolfo\\rossmann-stores-sales\\'
raw_data_path = 'data\\raw\\'
interim_data_path = 'data\\interim\\'
figures = 'reports\\figures\\'
```
## 0.3. Loading Data
```
## Historical data including Sales
df_sales_raw = load_csv(home_path+raw_data_path+'train.csv')
## Supplemental information about the stores
df_store_raw = load_csv(home_path+raw_data_path+'store.csv')
# Merge
df_raw = pd.merge(df_sales_raw, df_store_raw, how='left', on='Store')
```
# 1.0. DATA DESCRIPTION
```
df1 = df_raw.copy()
df1.to_csv(home_path+interim_data_path+'df1.csv')
```
### Data fields
Most of the fields are self-explanatory. The following are descriptions for those that aren't.
- **Id** - an Id that represents a (Store, Date) duple within the test set
- **Store** - a unique Id for each store
- **Sales** - the turnover for any given day (this is what you are predicting)
- **Customers** - the number of customers on a given day
- **Open** - an indicator for whether the store was open: 0 = closed, 1 = open
- **StateHoliday** - indicates a state holiday. Normally all stores, with few exceptions, are closed on state holidays. Note that all schools are closed on public holidays and weekends. a = public holiday, b = Easter holiday, c = Christmas, 0 = None
- **SchoolHoliday** - indicates if the (Store, Date) was affected by the closure of public schools
- **StoreType** - differentiates between 4 different store models: a, b, c, d
- **Assortment** - describes an assortment level: a = basic, b = extra, c = extended
- **CompetitionDistance** - distance in meters to the nearest competitor store
- **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened
- **Promo** - indicates whether a store is running a promo on that day
- **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating
- **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2
- **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store
## 1.1. Rename Columns
```
cols_old = [
'Store', 'DayOfWeek', 'Date', 'Sales', 'Customers', 'Open', 'Promo',
'StateHoliday', 'SchoolHoliday', 'StoreType', 'Assortment',
'CompetitionDistance', 'CompetitionOpenSinceMonth',
'CompetitionOpenSinceYear', 'Promo2', 'Promo2SinceWeek',
'Promo2SinceYear', 'PromoInterval'
]
df1 = rename_columns(df1, cols_old)
```
## 1.2. Data Dimensions
```
show_dimensions(df1)
```
## 1.3. Data Types
```
show_data_types(df1)
## Date is a object type. This is wrong. In the section "Types Changes" others chages is made.
df1['date'] = pd.to_datetime(df1['date'])
```
## 1.4. Check NA
```
check_na(df1)
## Columns with NA vales
## competition_distance 2642
## competition_open_since_month 323348
## competition_open_since_year 323348
## promo2_since_week 508031
## promo2_since_year 508031
## promo_interval 508031
```
## 1.5. Fillout NA
```
# competition_distance: distance in meters to the nearest competitor store
#
# Assumption: if there is a row that is NA in this column,
# it is because there is no close competitor.
# The way I used to represent this is to put
# a number much larger than the maximum value
# of the competition_distance variable.
#
# The number is 250000.
df1['competition_distance'] = df1['competition_distance'].apply(lambda x : 250000 if math.isnan(x) else x)
# competition_open_since_month:
# gives the approximate year and month of the
# time the nearest competitor was opened
#
# Assumption: I'm going to keep this variable because
# it's important to have something that expresses
# the feeling of "since it happened" or "until when".
#
# If it's NA I'll copy the month of sale of that line.
df1['competition_open_since_month'] = df1.apply(lambda x: x['date'].month if math.isnan(x['competition_open_since_month']) else x['competition_open_since_month'], axis=1)
#competition_open_since_year
# The same assumption from competition_open_since_month
df1['competition_open_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['competition_open_since_year']) else x['competition_open_since_year'], axis=1)
# promo2_since_week:
# describes the year and calendar week when the store started participating in Promo2
#
# The same assumption from competition_open_since_month
df1['promo2_since_week'] = df1.apply(lambda x: x['date'].week if math.isnan(x['promo2_since_week']) else x['promo2_since_week'], axis=1)
# promo2_since_year:
# describes the year and calendar week when the store started participating in Promo2
df1['promo2_since_year'] = df1.apply(lambda x: x['date'].year if math.isnan(x['promo2_since_year']) else x['promo2_since_year'], axis=1)
# promo_interval
month_map = {1: 'Jan', 2: 'Feb', 3: 'Mar', 4: 'Apr', 5: 'May', 6: 'Jun', 7: 'Jul', 8: 'Aug',9: 'Sep', 10: 'Oct', 11: 'Nov', 12: 'Dec'}
df1['promo_interval'].fillna(0, inplace=True)
df1['month_map'] = df1['date'].dt.month.map(month_map)
df1['is_promo'] = df1[['promo_interval', 'month_map']].apply(lambda x: 0 if x['promo_interval'] == 0 else 1 if x['month_map'] in x['promo_interval'].split(',') else 0, axis=1)
```
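A quick sanity check with the `check_na` helper defined earlier confirms whether the imputations above removed every missing value:
```
# verify that no NA values remain after the fills above
check_na(df1)
```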
## 1.6. Type Changes
```
df1['competition_open_since_month'] = df1['competition_open_since_month'].astype('int64')
df1['competition_open_since_year'] = df1['competition_open_since_year'].astype('int64')
df1['promo2_since_week'] = df1['promo2_since_week'].astype('int64')
df1['promo2_since_year'] = df1['promo2_since_year'].astype('int64')
```
## 1.7. Descriptive Statistical
```
num_attributes = df1.select_dtypes(include=['int64', 'float64'])
cat_attributes = df1.select_dtypes(exclude=['int64', 'float64', 'datetime64[ns]'])
```
### 1.7.1 Numerical Attributes
```
show_descriptive_statistical(num_attributes)
sns.displot(df1['sales'])
```
### 1.7.2 Categorical Attributes
```
cat_attributes.apply(lambda x: x.unique().shape[0])
aux1 = df1[(df1['state_holiday'] != '0') & (df1['sales'] > 0)]
plt.subplot(1, 3, 1)
sns.boxplot(x='state_holiday', y='sales', data=aux1)
plt.subplot(1, 3, 2)
sns.boxplot(x='store_type', y='sales', data=aux1)
plt.subplot(1, 3, 3)
sns.boxplot(x='assortment', y='sales', data=aux1)
```
# 2.0. FEATURE ENGINEERING
```
df2 = df1.copy()
df2.to_csv(home_path+interim_data_path+'df2.csv')
```
## 2.1. Hypothesis Mind Map
```
Image(home_path+figures+'mind-map-hypothesis.png')
```
## 2.2 Creating hypotheses
### 2.2.1 Store Hypotheses
**1.** Stores with larger staff should sell more.
**2.** Stores with more inventory should sell more.
**3.** Stores with close competitors should sell less.
**4.** Stores with a larger assortment should sell more.
**5.** Stores with more employees should sell more.
**6.** Stores with longer-term competitors should sell more.
### 2.2.2 Product Hypotheses
**1.** Stores with more marketing investment should sell more.
**2.** Stores with more products on display should sell more.
**3.** Stores that have cheaper products should sell more.
**4.** Stores that have more inventory should sell more.
**5.** Stores that do more promotions should sell more.
**6.** Stores with promotions active for longer should sell more.
**7.** Stores with more promotion days should sell more.
**8.** Stores with more consecutive promotions should sell more.
### 2.2.3 Temporal Hypotheses
**1.** Stores that have more holidays should sell less.
**2.** Stores that open within the first six months should sell more.
**3.** Stores that open on weekends should sell more.
**4.** Stores open during the Christmas holiday should sell more.
**5.** Stores should sell more over the years.
**6.** Stores should sell more after the 10th of each month.
**7.** Stores should sell more in the second half of the year.
**8.** Stores should sell less on weekends.
**8.** Stores should sell less during school holidays.
## 2.3. Final List of Hypotheses
**1.** Stores with close competitors should sell less.
**2.** Stores with a larger assortment should sell more.
**3.** Stores with longer-term competitors should sell more.
**4.** Stores with promotions active for longer should sell more.
**5.** Stores with more promotion days should sell more.
**6.** Stores with more consecutive promotions should sell more.
**7.** Stores open during the Christmas holiday should sell more.
**8.** Stores should sell more over the years.
**9.** Stores should sell more after the 10th of each month.
**10.** Stores should sell more in the second half of the year.
**11.** Stores should sell less on weekends.
**12.** Stores should sell less during school holidays.
## 2.4. Feature Engineering
**Note:** re-run the feature engineering separately for each variable.
```
# year
df2['year'] = df2['date'].dt.year
# month
df2['month'] = df2['date'].dt.month
# day
df2['day'] = df2['date'].dt.day
# week of year
df2['week_of_year'] = df2['date'].dt.isocalendar().week
# year week
df2['year_week'] = df2['date'].dt.strftime('%Y-%W')
# competition since
# I have competition measured in months and years.
# Now I'm going to put the two together and create a date.
df2['competition_since'] = df2.apply(
lambda x: datetime.datetime(
year=x['competition_open_since_year'],
month=x['competition_open_since_month'],
day=1),
axis=1)
## competition_time_month
df2['competition_time_month'] = ((df2['date'] - df2['competition_since'])/30).apply(lambda x: x.days).astype('int64')
# promo since
df2['promo_since'] = df2['promo2_since_year'].astype(str) + '-' + df2['promo2_since_week'].astype(str)
print(df2['promo_since'].sample())
df2['promo_since'] = df2['promo_since'].apply(lambda x: datetime.datetime.strptime(x + '-1', '%Y-%W') - datetime.timedelta(days=7))
# promo_time_week
df2['promo_time_week'] = ((df2['date'] - df2['promo_since'])/7).apply(lambda x: x.days).astype('int64')
# assortment
df2['assortment'] = df2['assortment'].apply(lambda x: 'basic' if x == 'a' else 'extra' if x == 'b' else 'extended')
# state holiday
df2.head().T
```
| github_jupyter |
```
import json
import pandas as pd
import numpy as np
import re
from sqlalchemy import create_engine
import psycopg2
from config import db_password
import time
# Add the clean movie function that takes in the argument, "movie".
def clean_movie(movie):
movie = dict(movie) #create a non-destructive copy
alt_titles = {}
for key in ['Also known as','Arabic','Cantonese','Chinese','French',
'Hangul','Hebrew','Hepburn','Japanese','Literally',
                'Mandarin','McCune–Reischauer','Original title','Polish',
'Revised Romanization','Romanized','Russian',
'Simplified','Traditional','Yiddish']:
if key in movie:
alt_titles[key] = movie[key]
movie.pop(key)
if len(alt_titles) > 0:
movie['alt_titles'] = alt_titles
# merge column names
def change_column_name(old_name, new_name):
if old_name in movie:
movie[new_name] = movie.pop(old_name)
change_column_name('Adaptation by', 'Writer(s)')
change_column_name('Country of origin', 'Country')
change_column_name('Directed by', 'Director')
change_column_name('Distributed by', 'Distributor')
change_column_name('Edited by', 'Editor(s)')
change_column_name('Length', 'Running time')
change_column_name('Original release', 'Release date')
change_column_name('Music by', 'Composer(s)')
change_column_name('Produced by', 'Producer(s)')
change_column_name('Producer', 'Producer(s)')
change_column_name('Productioncompanies ', 'Production company(s)')
change_column_name('Productioncompany ', 'Production company(s)')
change_column_name('Released', 'Release Date')
change_column_name('Release Date', 'Release date')
change_column_name('Screen story by', 'Writer(s)')
change_column_name('Screenplay by', 'Writer(s)')
change_column_name('Story by', 'Writer(s)')
change_column_name('Theme music composer', 'Composer(s)')
change_column_name('Written by', 'Writer(s)')
return movie
# 1 Add the function that takes in three arguments;
# Wikipedia data, Kaggle metadata, and MovieLens rating data (from Kaggle)
def extract_transform_load(wiki_file, kaggle_file, ratings_file):
# Read in the kaggle metadata and MovieLens ratings CSV files as Pandas DataFrames.
kaggle_metadata = pd.read_csv(kaggle_file, low_memory=False)
ratings = pd.read_csv(ratings_file)
# Open and read the Wikipedia data JSON file.
with open(wiki_file, mode='r') as file:
wiki_movies_raw = json.load(file)
# Write a list comprehension to filter out TV shows.
wiki_movies = [
movie for movie in wiki_movies_raw
if ('Director' in movie or 'Directed by' in movie)
and 'imdb_link' in movie
]
# Write a list comprehension to iterate through the cleaned wiki movies list
# and call the clean_movie function on each movie.
clean_movies = [clean_movie(movie) for movie in wiki_movies]
# Read in the cleaned movies list from Step 4 as a DataFrame.
wiki_movies_df = pd.DataFrame(clean_movies)
# Write a try-except block to catch errors while extracting the IMDb ID using a regular expression string and
# dropping any imdb_id duplicates. If there is an error, capture and print the exception.
try:
wiki_movies_df['imdb_id'] = wiki_movies_df['imdb_link'].str.extract(r'(tt\d{7})')
wiki_movies_df = wiki_movies_df.drop_duplicates(['imdb_id'],keep='first')
except:
print('An error occurred while getting the id or deleting the duplicates')
# Write a list comprehension to keep the columns that don't have null values from the wiki_movies_df DataFrame.
wiki_columns_to_keep = [column for column in wiki_movies_df.columns if wiki_movies_df[column].isnull().sum() < len(wiki_movies_df) * 0.9]
wiki_movies_df = wiki_movies_df[wiki_columns_to_keep]
# Create a variable that will hold the non-null values from the โBox officeโ column.
box_office = wiki_movies_df['Box office'].dropna()
# Convert the box office data created in Step 8 to string values using the lambda and join functions.
box_office = box_office.apply(lambda x: ' '.join(x) if type(x) == list else x)
    box_office = box_office.str.replace(r'\$.*[-–—](?![a-z])', '$', regex=True)
# Write a regular expression to match the six elements of "form_one" of the box office data.
form_one = r'\$\s*\d+\.?\d*\s*[mb]illi?on'
# Write a regular expression to match the three elements of "form_two" of the box office data.
form_two = r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)'
# Add the parse_dollars function.
    def parse_dollars(s):
        # if s is not a string, there is nothing to parse
        if type(s) != str:
            return np.nan
        # if input is of the form $###.# million
        if re.match(r'\$\s*\d+\.?\d*\s*milli?on', s, flags=re.IGNORECASE):
            # remove dollar sign, whitespace and letters, then multiply by a million
            s = re.sub('\$|\s|[a-zA-Z]','', s)
            value = float(s) * 10**6
            return value
        # if input is of the form $###.# billion
        elif re.match(r'\$\s*\d+\.?\d*\s*billi?on', s, flags=re.IGNORECASE):
            # remove dollar sign, whitespace and letters, then multiply by a billion
            s = re.sub('\$|\s|[a-zA-Z]','', s)
            value = float(s) * 10**9
            return value
        # if input is of the form $###,###,###
        elif re.match(r'\$\s*\d{1,3}(?:[,\.]\d{3})+(?!\s[mb]illion)', s, flags=re.IGNORECASE):
            # remove dollar sign and commas, then convert directly to a float
            s = re.sub('\$|,','', s)
            value = float(s)
            return value
        # otherwise, the value could not be parsed
        else:
            return np.nan
# Clean the box office column in the wiki_movies_df DataFrame.
wiki_movies_df['box_office'] = box_office.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
wiki_movies_df.drop('Box office', axis=1, inplace=True)
# Clean the budget column in the wiki_movies_df DataFrame.
budget = wiki_movies_df['Budget'].dropna()
budget = budget.map(lambda x: ' '.join(x) if type(x) == list else x)
    budget = budget.str.replace(r'\$.*[-–—](?![a-z])', '$', regex=True)
    budget = budget.str.replace(r'\[\d+\]\s*', '', regex=True)
wiki_movies_df['budget'] = budget.str.extract(f'({form_one}|{form_two})', flags=re.IGNORECASE)[0].apply(parse_dollars)
# Clean the release date column in the wiki_movies_df DataFrame.
release_date = wiki_movies_df['Release date'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)
date_form_one = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s[123]?\d,\s\d{4}'
date_form_two = r'\d{4}.[01]\d.[0123]\d'
date_form_three = r'(?:January|February|March|April|May|June|July|August|September|October|November|December)\s\d{4}'
date_form_four = r'\d{4}'
wiki_movies_df['release_date'] = pd.to_datetime(release_date.str.extract(f'({date_form_one}|{date_form_two}|{date_form_three}|{date_form_four})')[0], infer_datetime_format=True)
# Clean the running time column in the wiki_movies_df DataFrame.
running_time = wiki_movies_df['Running time'].dropna().apply(lambda x: ' '.join(x) if type(x) == list else x)
running_time_extract = running_time.str.extract(r'(\d+)\s*ho?u?r?s?\s*(\d*)|(\d+)\s*m')
running_time_extract = running_time_extract.apply(lambda col: pd.to_numeric(col, errors='coerce')).fillna(0)
wiki_movies_df['running_time'] = running_time_extract.apply(lambda row: row[0]*60 + row[1] if row[2] == 0 else row[2], axis=1)
wiki_movies_df.drop('Running time', axis=1, inplace=True)
# 2. Clean the Kaggle metadata.
kaggle_metadata = kaggle_metadata[kaggle_metadata['adult'] == 'False'].drop('adult',axis='columns')
kaggle_metadata['video'] = kaggle_metadata['video'] == 'True'
kaggle_metadata = kaggle_metadata[~kaggle_metadata['budget'].str.contains(r'/\D')]
kaggle_metadata['budget'] = kaggle_metadata['budget'].astype(int)
kaggle_metadata['id'] = pd.to_numeric(kaggle_metadata['id'], errors='raise')
kaggle_metadata['popularity'] = pd.to_numeric(kaggle_metadata['popularity'], errors='raise')
kaggle_metadata['release_date'] = pd.to_datetime(kaggle_metadata['release_date'])
ratings['timestamp'] = pd.to_datetime(ratings['timestamp'], unit='s')
# 3. Merged the two DataFrames into the movies DataFrame.
movies_df = pd.merge(wiki_movies_df, kaggle_metadata, on='imdb_id', suffixes=['_wiki','_kaggle'])
# 4. Drop unnecessary columns from the merged DataFrame.
movies_df.drop(columns=['title_wiki','release_date_wiki','Language','Production company(s)'], inplace=True)
# 5. Add in the function to fill in the missing Kaggle data.
def fill_missing_kaggle_data(df, kaggle_column, wiki_column):
df[kaggle_column] = df.apply(
lambda row: row[wiki_column] if row[kaggle_column] == 0 else row[kaggle_column]
, axis=1)
df.drop(columns=wiki_column, inplace=True)
# 6. Call the function in Step 5 with the DataFrame and columns as the arguments.
fill_missing_kaggle_data(movies_df, 'runtime', 'running_time')
fill_missing_kaggle_data(movies_df, 'budget_kaggle', 'budget_wiki')
fill_missing_kaggle_data(movies_df, 'revenue', 'box_office')
movies_df['video'].value_counts(dropna=False)
# 7. Filter the movies DataFrame for specific columns.
movies_df = movies_df.loc[:, ['imdb_id','id','title_kaggle','original_title','tagline','belongs_to_collection','url','imdb_link',
'runtime','budget_kaggle','revenue','release_date_kaggle','popularity','vote_average','vote_count',
'genres','original_language','overview','spoken_languages','Country',
'production_companies','production_countries','Distributor',
'Producer(s)','Director','Starring','Cinematography','Editor(s)','Writer(s)','Composer(s)','Based on'
]]
# 8. Rename the columns in the movies DataFrame.
movies_df.rename({'id':'kaggle_id',
'title_kaggle':'title',
'url':'wikipedia_url',
'budget_kaggle':'budget',
'release_date_kaggle':'release_date',
'Country':'country',
'Distributor':'distributor',
'Producer(s)':'producers',
'Director':'director',
'Starring':'starring',
'Cinematography':'cinematography',
'Editor(s)':'editors',
'Writer(s)':'writers',
'Composer(s)':'composers',
'Based on':'based_on'
}, axis='columns', inplace=True)
# 9. Transform and merge the ratings DataFrame.
    # count ratings per (movie, rating) pair, rename the count column, and pivot so each rating value becomes a column
    rating_counts = ratings.groupby(['movieId','rating'], as_index=False).count() \
                    .rename({'userId':'count'}, axis=1) \
                    .pivot(index='movieId',columns='rating', values='count')
rating_counts.columns = ['rating_' + str(col) for col in rating_counts.columns]
movies_with_ratings_df = pd.merge(movies_df, rating_counts, left_on='kaggle_id', right_index=True, how='left')
movies_with_ratings_df[rating_counts.columns] = movies_with_ratings_df[rating_counts.columns].fillna(0)
db_string = f"postgresql://postgres:{db_password}@127.0.0.1:5432/movie_data"
engine = create_engine(db_string)
start_time_movies = time.time()
movies_df.to_sql(name='movies', con=engine, if_exists='replace')
print(f'Done movies. {time.time() - start_time_movies} total seconds elapsed')
rows_imported = 0
start_time = time.time()
for data in pd.read_csv(f'./Resources/ratings.csv', chunksize=1000000):
print(f'importing rows {rows_imported} to {rows_imported + len(data)}...', end='')
data.to_sql(name='ratings', con=engine, if_exists='append')
rows_imported += len(data)
print(f'Done ratings. {time.time() - start_time} total seconds elapsed')
# 10. Create the path to your file directory and variables for the three files.
file_dir = './Resources'
# The Wikipedia data
wiki_file = f'{file_dir}/wikipedia-movies.json'
# The Kaggle metadata
kaggle_file = f'{file_dir}/movies_metadata.csv'
# The MovieLens rating data.
ratings_file = f'{file_dir}/ratings.csv'
# 11. Set the three variables equal to the function created in D1.
extract_transform_load(wiki_file, kaggle_file, ratings_file)
```
| github_jupyter |
<style type="text/css">
.tg {border-collapse:collapse;border-spacing:0;}
.tg td{border-color:white;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
overflow:hidden;padding:10px 5px;word-break:normal;}
.tg th{border-color:black;border-style:solid;border-width:1px;font-family:Arial, sans-serif;font-size:14px;
font-weight:normal;overflow:hidden;padding:10px 5px;word-break:normal;}
.tg .tg-baqh{text-align:left;vertical-align:top}
</style>
# 09 Compute best-fit ellipsoid approximation of the whole fruit
We now go for macro modeling.
For each fruit, a point cloud, a collection of $(x,y,z)$ coordinates in the space, was defined by the centers of all its individual oil glands.
- We compute an ellipsoid that fits the best this point cloud
- To that end, we solve a constrained least-squares problem for the coefficients of the quadric equation that best approximates the point cloud.
- The algebraic-fit ellipsoid was adapted from [Li and Griffiths (2004)](https://doi.org/10.1109/GMAP.2004.1290055).
- This produces a 10-dimensional vector that algebraically defines an ellipsoid.
- See [Panou et al. (2020)](https://doi.org/10.1515/jogs-2020-0105) on how to convert this vector into geometric parameters.
<table class="tg">
<tbody>
<tr>
<td class="tg-baqh" style="text-align:left">
<img src="https://www.egr.msu.edu/~amezqui3/citrus/figs/SW01_CRC3030_12B-8-5_L02_frontal_ell_projection.jpg" style="width:500px">
<p style="text-align:center;font-size:20px">Approximating a sweet orange</p>
</td>
<td class="tg-baqh" style="text-align:left">
<img src = "https://www.egr.msu.edu/~amezqui3/citrus/figs/SR01_CRC3289_12B-19-9_L02_frontal_ell_projection.jpg" alt = "barley" style="width:500px;"/>
<p style="text-align:center;font-size:20px">Approximating a sour orange</p>
</td>
</tr>
</tbody>
</table>
```
import numpy as np
import pandas as pd
import glob
import os
import tifffile as tf
from importlib import reload
import warnings
warnings.filterwarnings( "ignore")
import matplotlib.pyplot as plt
%matplotlib inline
import citrus_utils as vitaminC
```
### Define the appropriate base/root name and label name
- This is where having consistent file naming pays off
```
tissue_src = '../data/tissue/'
oil_src = '../data/oil/'
bnames = [os.path.split(x)[-1] for x in sorted(glob.glob(oil_src + 'WR*'))]
for i in range(len(bnames)):
print(i, '\t', bnames[i])
bname = bnames[0]
L = 3
lname = 'L{:02d}'.format(L)
rotateby = [2,1,0]
```
### Load voxel-size data
- The micron size of each voxel depends on the scanning parameters
```
voxel_filename = '../data/citrus_voxel_size.csv'
voxel_size = pd.read_csv(voxel_filename)
voxsize = (voxel_size.loc[voxel_size.ID == bname, 'voxel_size_microns'].values)[0]
print('Each voxel is of side', voxsize, 'microns')
```
## Load oil gland centers and align based on spine
- From the previous step, retrieve the `vh` rotation matrix to align the fruit
- The point cloud is made to have mean zero and it is scaled according to its voxel size
- The scale now should be in cm
- Plot 2D projections of the oil glands to make sure the fruit is standing upright after rotation
```
savefig= False
filename = tissue_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vh_alignment.csv'
vh = np.loadtxt(filename, delimiter = ',')
print(vh)
oil_dst = oil_src + bname + '/' + lname + '/'
filename = oil_dst + bname + '_' + lname + '_glands.csv'
glands = np.loadtxt(filename, delimiter=',', dtype=float)
glands = np.matmul(glands, np.transpose(vh))
centerby = np.mean(glands, axis = 0)
scaleby = 1e4/voxsize
glands = (glands - centerby)/scaleby
dst = oil_src + bname + '/'
vitaminC.plot_3Dprojections(glands, title=bname+'_'+lname, writefig=savefig, dst=dst)
```
# Compute the general conic parameters
Here we follow the algorithm laid out by [Li and Griffiths (2004)](https://doi.org/10.1109/GMAP.2004.1290055).
A general quadratic surface is defined by the equation
$$ax^{2}+by^{2}+cz^{2}+2fxy+2gyz+2hxz+2px+2qy+2rz+d=0. \tag{1}$$

Let $$\rho = \frac{4J-I}{a^2 + b^2 + c^2},$$ where

$$I = a+b+c, \tag{2}$$

$$J = ab+bc+ac-f^{2}-g^{2}-h^{2}, \tag{3}$$

$$K=\begin{vmatrix} a & f & h \\ f & b & g \\ h & g & c \end{vmatrix}. \tag{4}$$

These values are invariant under rotation and translation, and equation (1) represents an ellipsoid if $J > 0$ and $IK>0$.
With our observations $\{(x_i,y_i,z_i)\}_i$, we would ideally want a vector of parameters $(a,b,c,f,g,h,p,q,r,d)$ such that
$$
\begin{pmatrix}
x_1^2 & y_1^2 & z_1^2 & 2x_1y_1 & 2y_1z_1 & 2x_1z_1 & x_1 & y_1 & z_1 & 1\\
x_2^2 & y_2^2 & z_2^2 & 2x_2y_2 & 2y_2z_2 & 2x_2z_2 & x_2 & y_2 & z_2 & 1\\
\vdots& \vdots& \vdots& \vdots & \vdots & \vdots & \vdots & \vdots & \vdots \\
x_n^2 & y_n^2 & z_n^2 & 2x_ny_n & 2y_nz_n & 2x_nz_n & x_n & y_n & z_n & 1
\end{pmatrix}
\begin{pmatrix}
a \\ b \\ \vdots \\ d
\end{pmatrix}
=
\begin{pmatrix}
0 \\ 0 \\ \vdots \\ 0
\end{pmatrix}
$$
or
$$
\mathbf{D}\mathbf{v} = 0
$$
The solution to the system above can be obtained via Lagrange multipliers
$$\min_{\mathbf{v}\in\mathbb{R}^{10}}\left\|\mathbf{D}\mathbf{v}\right\|^2, \quad \mathrm{s.t.}\; kJ - I^2 = 1$$
If $k=4$, the resulting vector $\mathbf{v}$ is guaranteed to define an ellipsoid.
- Experimental results suggest that the optimization problem also yields ellipsoids for higher $k$'s if there are enough sample points.
---
- This whole procedure yields a 10-dimensional vector $(a,b,c,f,g,h,p,q,r,1)$, which is then translated to geometric parameters as shown in [Panou et al. (2020)](https://doi.org/10.1515/jogs-2020-0105)
We obtain finally a `6 x 3` matrix with all the geometric parameters
```
[ x,y,z coordinates of ellipsoid center ]
[ semi-axes lengths ]
[ | ]
[ -- 3 x 3 rotation matrix -- ]
[ | ]
[ x,y,z rotation angles ]
```
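The actual computation below relies on `vitaminC.ell_algebraic_fit2` and `vitaminC.get_ell_params_from_vector`. For illustration only, here is a minimal, unconstrained variant of the algebraic fit (normalizing $d=-1$ instead of enforcing the $kJ-I^2=1$ constraint); the function name and the center/axes recovery follow the standard quadric recipe and are not taken from `citrus_utils`.
```
import numpy as np

def fit_ellipsoid_lstsq(points):
    '''Illustrative sketch: least-squares quadric fit with d = -1.'''
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    # design matrix with columns matching the coefficients (a, b, c, f, g, h, p, q, r)
    D = np.column_stack([x*x, y*y, z*z, 2*x*y, 2*y*z, 2*x*z, 2*x, 2*y, 2*z])
    coeffs, *_ = np.linalg.lstsq(D, np.ones(len(points)), rcond=None)
    a, b, c, f, g, h, p, q, r = coeffs
    A4 = np.array([[a, f, h, p],
                   [f, b, g, q],
                   [h, g, c, r],
                   [p, q, r, -1.0]])
    # ellipsoid center: solve A3 @ center = -(p, q, r)
    center = np.linalg.solve(-A4[:3, :3], [p, q, r])
    # translate to the center and read off the semi-axes lengths from the eigenvalues
    T = np.eye(4)
    T[3, :3] = center
    R = T @ A4 @ T.T
    evals, evecs = np.linalg.eigh(R[:3, :3] / -R[3, 3])
    axes = 1.0 / np.sqrt(evals)
    return center, axes, evecs
```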
```
bbox = (np.max(glands, axis=0) - np.min(glands, axis=0))*.5
guess = np.argsort(np.argsort(bbox))
print(bbox)
print(guess[rotateby])
bbox[rotateby]
datapoints = glands.T
filename = oil_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vox_v_ell.csv'
ell_v_params, flag = vitaminC.ell_algebraic_fit2(datapoints, k=4)
print(np.around(ell_v_params,3), '\n', flag, 'ellipsoid\n')
np.savetxt(filename, ell_v_params, delimiter=',')
filename = oil_src + bname + '/' + lname + '/' + bname + '_' + lname + '_vox_m_ell.csv'
ell_params = vitaminC.get_ell_params_from_vector(ell_v_params, guess[rotateby])
np.savetxt(filename, np.vstack(tuple(ell_params.values())), delimiter=',')
# sanity check: the stacked geometric parameters form a 6 x 3 matrix
np.vstack(tuple(ell_params.values())).shape
ell_params
oil_src + bname + '/' + lname + '/' + bname + '_' + lname + '_ell_m.csv'
```
## Project the oil gland centers to the best-fit ellipsoid
- The oil gland point cloud is translated to the center of the best-fit ellipsoid.
- Projection will be **geocentric**: trace a ray from the origin to the oil gland and see where it intercepts the ellipsoid.
Additionally, we can compute these projection in terms of geodetic coordinates:
- longitude $\lambda\in[-\pi,\pi]$
- latitude $\phi\in[-\frac\pi2,\frac\pi2]$
- See [Diaz-Toca _et al._ (2020)](https://doi.org/10.1016/j.cageo.2020.104551) for more details.
The geodetic coordinates are invariant with respect to ellipsoid size, as long as the ratio between its semi-major axes lengths remains constant.
- These geodetic coordinates are a natural way to translate our data to the sphere
- Later, it will allow us to draw machinery from directional statistics ([Pewsey and Garcรญa-Portuguรฉs, 2021](https://doi.org/10.1007/s11749-021-00759-x)).
Results are saved in a `N x 3` matrix, where `N` is the number of individual oil glands
- Each row of the matrix is
```
[ longitude latitude residue ]
```
- The residue is the perpendicular distance from the oil gland to the ellipsoid surface.
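For intuition, the geocentric projection described above has a closed form once the glands are centered and expressed in the ellipsoid's own frame: each point is rescaled along its ray through the origin until it lands on the surface. The sketch below assumes an axis-aligned ellipsoid and made-up numbers; the real pipeline uses `vitaminC.get_footpoints`, which also handles rotation and the geodetic conversion.
```
import numpy as np

def geocentric_projection(points, axes):
    '''Scale each centered point onto (x/a)^2 + (y/b)^2 + (z/c)^2 = 1
    along the ray through the origin.'''
    t = 1.0 / np.sqrt(np.sum((points / axes) ** 2, axis=1))  # per-point scale factor
    return points * t[:, None]

# toy example with hypothetical semi-axes (cm) and fake centered gland coordinates
axes = np.array([3.0, 3.2, 4.1])
pts = np.random.randn(5, 3)
proj = geocentric_projection(pts, axes)
lon = np.arctan2(proj[:, 1], proj[:, 0])                                # longitude in [-pi, pi]
lat_geocentric = np.arcsin(proj[:, 2] / np.linalg.norm(proj, axis=1))   # geocentric (not geodetic) latitude
```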
```
footpoints = 'geocentric'
_, xyz = vitaminC.get_footpoints(datapoints, ell_params, footpoints)
rho = vitaminC.ell_rho(ell_params['axes'])
print(rho)
eglands = xyz - ell_params['center'].reshape(-1,1)
eglands = eglands[rotateby]
cglands = datapoints - ell_params['center'].reshape(-1,1)
cglands = cglands[rotateby]
eglands_params = {'center': np.zeros(len(eglands)),
'axes': ell_params['axes'],
'rotation': np.identity(len(eglands))}
geodetic, _ = vitaminC.get_footpoints(eglands, eglands_params, footpoints)
filename = oil_dst + bname + '_' + lname + '_' + footpoints + '.csv'
np.savetxt(filename, geodetic.T, delimiter=',')
print('Saved', filename)
pd.DataFrame(geodetic.T).describe()
```
### Plot the best-fit ellipsoid and the gland projections
- Visual sanity check
```
domain_lon = [-np.pi, np.pi]
domain_lat = [-.5*np.pi, 0.5*np.pi]
lonN = 25
latN = 25
longitude = np.linspace(*domain_lon, lonN)
latitude = np.linspace(*domain_lat, latN)
shape_lon, shape_lat = np.meshgrid(longitude, latitude)
lonlat = np.vstack((np.ravel(shape_lon), np.ravel(shape_lat)))
ecoords = vitaminC.ellipsoid(*(lonlat), *ell_params['axes'])
title = bname + '_' + lname + ' - ' + footpoints.title() + ' projection'
markersize = 2
sidestep = np.min(bbox)
alpha = .5
fs = 20
filename = oil_dst + '_'.join(np.array(title.split(' '))[[0,2]])
vitaminC.plot_ell_comparison(cglands, eglands, ecoords, title, sidestep, savefig=savefig, filename=filename)
```
# References
- **Diaz-Toca, GM**, **Marin, L**, **Necula, I** (2020) Direct transformation from Cartesian into geodetic coordinates on a triaxial ellipsoid. _Computers & Geosciences_ **142**, 104551. [DOI: 10.1016/j.cageo.2020.104551](https://doi.org/10.1016/j.cageo.2020.104551)
- **Li, Q**, **Griffiths, J** (2004) Least squares ellipsoid specific fitting. _Geometric Modeling and Processing. Proceedings, 2004_. 335-340. [DOI: 10.1109/GMAP.2004.1290055](https://doi.org/10.1109/GMAP.2004.1290055)
- **Panou, G**, **Korakitis, R**, **Pantazis, G** (2020) Fitting a triaxial ellipsoid to a geoid model. _Journal of Geodetic Science_ **10**(1), 69-82. [DOI: 10.1515/jogs-2020-0105](https://doi.org/10.1515/jogs-2020-0105)
- **Pewsey, A**, **García-Portugués, E** (2021) Recent advances in directional statistics. _TEST_ **30**(1), 1-58. [DOI: 10.1007/s11749-021-00759-x](https://doi.org/10.1007/s11749-021-00759-x)
| github_jupyter |
# LAB 4c: Create Keras Wide and Deep model.
**Learning Objectives**
1. Set CSV Columns, label column, and column defaults
1. Make dataset of features and label from CSV files
1. Create input layers for raw features
1. Create feature columns for inputs
1. Create wide layer, deep dense hidden layers, and output layer
1. Create custom evaluation metric
1. Build wide and deep model tying all of the pieces together
1. Train and evaluate
## Introduction
In this notebook, we'll be using Keras to create a wide and deep model to predict the weight of a baby before it is born.
We'll start by defining the CSV column names, label column, and column defaults for our data inputs. Then, we'll construct a tf.data Dataset of features and the label from the CSV files and create input layers for the raw features. Next, we'll set up feature columns for the model inputs and build a wide and deep neural network in Keras. We'll create a custom evaluation metric and build our wide and deep model. Finally, we'll train and evaluate our model.
Each learning objective will correspond to a __#TODO__ in this student lab notebook -- try to complete this notebook first and then review the [solution notebook](../solutions/4c_keras_wide_and_deep_babyweight.ipynb).
## Load necessary libraries
```
import datetime
import os
import shutil
import matplotlib.pyplot as plt
import numpy as np
import tensorflow as tf
print(tf.__version__)
```
## Verify CSV files exist
In the seventh lab of this series [4a_sample_babyweight](../solutions/4a_sample_babyweight.ipynb), we sampled from BigQuery our train, eval, and test CSV files. Verify that they exist, otherwise go back to that lab and create them.
```
%%bash
ls *.csv
%%bash
head -5 *.csv
```
## Create Keras model
### Lab Task #1: Set CSV Columns, label column, and column defaults.
Now that we have verified that our CSV files exist, we need to set a few things that we will be using in our input function.
* `CSV_COLUMNS` are going to be our header names of our columns. Make sure that they are in the same order as in the CSV files
* `LABEL_COLUMN` is the header name of the column that is our label. We will need to know this to pop it from our features dictionary.
* `DEFAULTS` is a list with the same length as `CSV_COLUMNS`, i.e. there is a default for each column in our CSVs. Each element is a list itself with the default value for that CSV column.
```
# Determine CSV, label, and key columns
# TODO: Create list of string column headers, make sure order matches.
CSV_COLUMNS = ["weight_pounds", "is_male", "mother_age", "plurality", "gestation_weeks"]
# TODO: Add string name for label column
LABEL_COLUMN = "weight_pounds"
# Set default values for each CSV column as a list of lists.
# Treat is_male and plurality as strings.
DEFAULTS = [[0.0], ["null"], [0.0], ["null"], [0.0]]
```
### Lab Task #2: Make dataset of features and label from CSV files.
Next, we will write an input_fn to read the data. Since we are reading from CSV files, we can save ourselves from reinventing the wheel and use `tf.data.experimental.make_csv_dataset`. This will create a CSV dataset object. However, we will need to divide the columns up into features and a label. We can do this by applying the map method to our dataset and popping our label column off of our dictionary of feature tensors.
```
def features_and_labels(row_data):
"""Splits features and labels from feature dictionary.
Args:
row_data: Dictionary of CSV column names and tensor values.
Returns:
Dictionary of feature tensors and label tensor.
"""
label = row_data.pop(LABEL_COLUMN)
return row_data, label # features, label
def load_dataset(pattern, batch_size=1, mode=tf.estimator.ModeKeys.EVAL):
"""Loads dataset using the tf.data API from CSV files.
Args:
pattern: str, file pattern to glob into list of files.
batch_size: int, the number of examples per batch.
mode: tf.estimator.ModeKeys to determine if training or evaluating.
Returns:
`Dataset` object.
"""
# TODO: Make a CSV dataset
dataset = tf.data.experimental.make_csv_dataset(
file_pattern=pattern,
batch_size=batch_size,
column_names=CSV_COLUMNS,
column_defaults=DEFAULTS)
# TODO: Map dataset to features and label
dataset = dataset.map(map_func=features_and_labels) # features, label
# Shuffle and repeat for training
if mode == tf.estimator.ModeKeys.TRAIN:
dataset = dataset.shuffle(buffer_size=1000).repeat()
# Take advantage of multi-threading; 1=AUTOTUNE
dataset = dataset.prefetch(buffer_size=1)
return dataset
```
### Lab Task #3: Create input layers for raw features.
We'll need to get the data read in by our input function to our model function, but just how do we go about connecting the dots? We can use Keras input layers [(tf.Keras.layers.Input)](https://www.tensorflow.org/api_docs/python/tf/keras/Input) by defining:
* shape: A shape tuple (integers), not including the batch size. For instance, shape=(32,) indicates that the expected input will be batches of 32-dimensional vectors. Elements of this tuple can be None; 'None' elements represent dimensions where the shape is not known.
* name: An optional name string for the layer. Should be unique in a model (do not reuse the same name twice). It will be autogenerated if it isn't provided.
* dtype: The data type expected by the input, as a string (float32, float64, int32...)
```
def create_input_layers():
"""Creates dictionary of input layers for each feature.
Returns:
Dictionary of `tf.Keras.layers.Input` layers for each feature.
"""
# TODO: Create dictionary of tf.keras.layers.Input for each dense feature
deep_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="float32")
for colname in ["mother_age", "gestation_weeks"]}
# TODO: Create dictionary of tf.keras.layers.Input for each sparse feature
wide_inputs = {
colname: tf.keras.layers.Input(
name=colname, shape=(), dtype="string")
for colname in ["is_male", "plurality"]}
inputs = {**wide_inputs, **deep_inputs}
return inputs
```
### Lab Task #4: Create feature columns for inputs.
Next, define the feature columns. `mother_age` and `gestation_weeks` should be numeric. The others, `is_male` and `plurality`, should be categorical. Remember, only dense feature columns can be inputs to a DNN.
```
def categorical_fc(name, values):
"""Helper function to wrap categorical feature by indicator column.
Args:
name: str, name of feature.
values: list, list of strings of categorical values.
Returns:
Categorical and indicator column of categorical feature.
"""
cat_column = tf.feature_column.categorical_column_with_vocabulary_list(
key=name, vocabulary_list=values)
ind_column = tf.feature_column.indicator_column(
categorical_column=cat_column)
return cat_column, ind_column
def create_feature_columns(nembeds):
"""Creates wide and deep dictionaries of feature columns from inputs.
Args:
nembeds: int, number of dimensions to embed categorical column down to.
Returns:
Wide and deep dictionaries of feature columns.
"""
# TODO: Create deep feature columns for numeric features
deep_fc = {
colname: tf.feature_column.numeric_column(key=colname)
for colname in ["mother_age", "gestation_weeks"]
}
# TODO: Create wide feature columns for categorical features
wide_fc = {}
is_male, wide_fc["is_male"] = categorical_fc(
"is_male", ["True", "False", "Unknown"])
plurality, wide_fc["plurality"] = categorical_fc(
"plurality", ["Single(1)", "Twins(2)", "Triplets(3)", "Quadruplets(4)", "Quintuplets(5)", "Multiple(2+)"])
# TODO: Bucketize the float fields. This makes them wide
age_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["mother_age"],
boundaries=np.arange(15, 45, 1).tolist())
wide_fc["age_buckets"] = tf.feature_column.indicator_column(
categorical_column=age_buckets)
gestation_buckets = tf.feature_column.bucketized_column(
source_column=deep_fc["gestation_weeks"],
boundaries=np.arange(17, 47, 1).tolist())
wide_fc["gestation_buckets"] = tf.feature_column.indicator_column(
categorical_column=gestation_buckets)
# TODO: Cross all the wide cols, have to do the crossing before we one-hot
crossed = tf.feature_column.crossed_column(
keys=[age_buckets, gestation_buckets],
hash_bucket_size=1000)
# TODO: Embed cross and add to deep feature columns
    deep_fc["crossed_embeds"] = tf.feature_column.embedding_column(
categorical_column=crossed, dimension=nembeds)
return wide_fc, deep_fc
```
### Lab Task #5: Create wide and deep model and output layer.
So we've figured out how to get our inputs ready for machine learning but now we need to connect them to our desired output. Our model architecture is what links the two together. We need to create a wide and deep model now. The wide side will just be a linear regression or dense layer. For the deep side, let's create some hidden dense layers. All of this will end with a single dense output layer. This is regression so make sure the output layer activation is correct and that the shape is right.
```
def get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units):
"""Creates model architecture and returns outputs.
Args:
wide_inputs: Dense tensor used as inputs to wide side of model.
deep_inputs: Dense tensor used as inputs to deep side of model.
dnn_hidden_units: List of integers where length is number of hidden
layers and ith element is the number of neurons at ith layer.
Returns:
Dense tensor output from the model.
"""
# Hidden layers for the deep side
layers = [int(x) for x in dnn_hidden_units]
deep = deep_inputs
# TODO: Create DNN model for the deep side
for layerno, numnodes in enumerate(layers):
deep = tf.keras.layers.Dense(
units=numnodes,
activation="relu",
name="dnn_{}".format(layerno+1))(deep)
deep_out = deep
# TODO: Create linear model for the wide side
wide_out = tf.keras.layers.Dense(
units=10, activation="relu", name="linear")(wide_inputs)
# Concatenate the two sides
both = tf.keras.layers.concatenate(
inputs=[deep_out, wide_out], name="both")
# TODO: Create final output layer
output=tf.keras.layers.Dense(
units=1, activation="linear", name="weight")(both)
return output
```
### Lab Task #6: Create custom evaluation metric.
We want to make sure that we have some useful way to measure model performance. Since this is regression, we would like to know the RMSE of the model on our evaluation dataset; however, this does not exist as a standard evaluation metric, so we'll have to create our own using the true and predicted labels.
```
def rmse(y_true, y_pred):
"""Calculates RMSE evaluation metric.
Args:
y_true: tensor, true labels.
y_pred: tensor, predicted labels.
Returns:
Tensor with value of RMSE between true and predicted labels.
"""
# TODO: Calculate RMSE from true and predicted labels
return tf.sqrt(tf.reduce_mean((y_pred-y_true)**2))
```
### Lab Task #7: Build wide and deep model tying all of the pieces together.
Excellent! We've assembled all of the pieces, now we just need to tie them all together into a Keras Model. This is NOT a simple feedforward model with no branching, side inputs, etc. so we can't use Keras' Sequential Model API. We're instead going to use Keras' Functional Model API. Here we will build the model using [tf.keras.models.Model](https://www.tensorflow.org/api_docs/python/tf/keras/Model) giving our inputs and outputs and then compile our model with an optimizer, a loss function, and evaluation metrics.
```
def build_wide_deep_model(dnn_hidden_units=[64, 32], nembeds=3):
"""Builds wide and deep model using Keras Functional API.
Returns:
`tf.keras.models.Model` object.
"""
# Create input layers
inputs = create_input_layers()
# Create feature columns
wide_fc, deep_fc = create_feature_columns(nembeds)
# The constructor for DenseFeatures takes a list of numeric columns
# The Functional API in Keras requires: LayerConstructor()(inputs)
# TODO: Add wide and deep feature colummns
wide_inputs = tf.keras.layers.DenseFeatures(
feature_columns=wide_fc.values(), name="wide_inputs")(inputs)
deep_inputs = tf.keras.layers.DenseFeatures(
feature_columns=deep_fc.values(), name="deep_inputs")(inputs)
# Get output of model given inputs
output = get_model_outputs(wide_inputs, deep_inputs, dnn_hidden_units)
# Build model and compile it all together
model = tf.keras.models.Model(inputs=inputs, outputs=output)
# TODO: Add custom eval metrics to list
model.compile(optimizer="adam", loss="mse", metrics=["mse", rmse])
return model
print("Here is our wide and deep architecture so far:\n")
model = build_wide_deep_model()
print(model.summary())
```
We can visualize the wide and deep network using the Keras plot_model utility.
```
tf.keras.utils.plot_model(
model=model, to_file="wd_model.png", show_shapes=False, rankdir="LR")
```
## Run and evaluate model
### Lab Task #8: Train and evaluate.
We've built our Keras model using our inputs from our CSV files and the architecture we designed. Let's now run our model by training our model parameters and periodically running an evaluation to track how well we are doing on outside data as training goes on. We'll need to load both our train and eval datasets and send those to our model through the fit method. Make sure you have the right pattern, batch size, and mode when loading the data. Also, don't forget to add the callback to TensorBoard.
```
TRAIN_BATCH_SIZE = 32
NUM_TRAIN_EXAMPLES = 10000 * 5 # training dataset repeats, it'll wrap around
NUM_EVALS = 5 # how many times to evaluate
# Enough to get a reasonable sample, but not so much that it slows down
NUM_EVAL_EXAMPLES = 10000
# TODO: Load training dataset
trainds = load_dataset(
pattern="train*",
batch_size=TRAIN_BATCH_SIZE,
mode=tf.estimator.ModeKeys.TRAIN)
# TODO: Load evaluation dataset
evalds = load_dataset(
pattern="eval*",
batch_size=1000,
mode=tf.estimator.ModeKeys.EVAL).take(count=NUM_EVAL_EXAMPLES // 1000)
steps_per_epoch = NUM_TRAIN_EXAMPLES // (TRAIN_BATCH_SIZE * NUM_EVALS)
logdir = os.path.join(
"logs", datetime.datetime.now().strftime("%Y%m%d-%H%M%S"))
tensorboard_callback = tf.keras.callbacks.TensorBoard(
log_dir=logdir, histogram_freq=1)
# TODO: Fit model on training dataset and evaluate every so often
history = model.fit(
trainds,
validation_data=evalds,
epochs=NUM_EVALS,
steps_per_epoch=steps_per_epoch,
callbacks=[tensorboard_callback])
```
### Visualize loss curve
```
# Plot
nrows = 1
ncols = 2
fig = plt.figure(figsize=(10, 5))
for idx, key in enumerate(["loss", "rmse"]):
ax = fig.add_subplot(nrows, ncols, idx+1)
plt.plot(history.history[key])
plt.plot(history.history["val_{}".format(key)])
plt.title("model {}".format(key))
plt.ylabel(key)
plt.xlabel("epoch")
plt.legend(["train", "validation"], loc="upper left");
```
### Save the model
```
OUTPUT_DIR = "babyweight_trained_wd"
shutil.rmtree(OUTPUT_DIR, ignore_errors=True)
EXPORT_PATH = os.path.join(
OUTPUT_DIR, datetime.datetime.now().strftime("%Y%m%d%H%M%S"))
tf.saved_model.save(
obj=model, export_dir=EXPORT_PATH) # with default serving function
print("Exported trained model to {}".format(EXPORT_PATH))
!ls $EXPORT_PATH
```
## Lab Summary:
In this lab, we started by defining the CSV column names, label column, and column defaults for our data inputs. Then, we constructed a tf.data Dataset of features and the label from the CSV files and created input layers for the raw features. Next, we set up feature columns for the model inputs and built a wide and deep neural network in Keras. We created a custom evaluation metric and built our wide and deep model. Finally, we trained and evaluated our model.
Copyright 2020 Google LLC
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
https://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| github_jupyter |
```
import cv2
bild = cv2.imread("data//ped2//training//frames//01//000.jpg")
bild2 = cv2.imread("data//ped2//training//frames//01//001.jpg")
import numpy as np
lista = list()
lista.append(bild)
lista.append(bild2)
lista = np.array(lista)
import cv2
import os
import numpy as np
bilder = list()
for folder in os.listdir("data//avenue//testing//frames"):
path = os.path.join("data//avenue//testing//frames",folder)
for img in os.listdir(path):
bild = os.path.join(path,img)
#bilder.append(cv2.imread(bild))
bilder.append(bild)
#bilder = np.array(bilder)
labels = np.load("data/frame_labels_ped2_2.npy")
#labels = np.reshape(labels,labels.shape[1])
import pandas as pd
fjant = pd.DataFrame(data={"x_col":bilder,"y_col":labels})#columns=(["x_col","y_col"]))
fjant["y_col"] = fjant["y_col"].astype(str)
from keras_preprocessing.image import ImageDataGenerator
dataget = ImageDataGenerator(rescale=1. / 255)
train_get = dataget.flow_from_dataframe(dataframe=fjant,x_col="x_col",y_col="y_col",class_mode="sparse",target_size=(360,240),batch_size=64)
```
__________________________
```
import numpy as np
labels = np.load("data/frame_labels_avenue.npy")
labels = np.reshape(labels,labels.shape[1])
noll = 0
ett = 0
for x in Y_test:
if x == 0:
noll += 1
else:
ett +=1
print("Noll: ",noll)
print("Ett: ",ett)
from sklearn.model_selection import train_test_split
X_train, X_test, Y_train, Y_test = train_test_split(bilder,labels,test_size=0.2, random_state= 10)
#nylabels = np.concatenate((labels,nollor))
np.save("data/frame_labels_ped2_2.npy",nylabels)
bilder = bilder.reshape(bilder.shape[0],bilder.shape[1],bilder.shape[2],bilder.shape[3],1)
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
bilder = scaler.fit_transform(bilder)
output = np.full((2550,1),0)
ett = bilder[0,:,:,:]
import tensorflow.keras as keras
batch_size = 4
model = keras.Sequential()
inputs = keras.Input((240, 360, 3, 1))  # unused below; the Sequential model defines its own input shape
model.add(keras.layers.Conv3D(input_shape=(240, 360, 3, 1), activation="relu", filters=64, kernel_size=3, padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv3D(activation="relu",filters=64,kernel_size=3,padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=3,padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(128,activation="relu"))
model.add(keras.layers.Dense(64,activation="relu"))
model.add(keras.layers.Dense(10,activation="relu"))
model.add(keras.layers.Dense(1,activation="sigmoid"))
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])  # single sigmoid output -> binary cross-entropy
model.summary()
model = keras.Sequential()
model.add(keras.layers.Conv3D(input_shape =(240, 360, 3, 1),activation="relu",filters=64,kernel_size=3,padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.BatchNormalization())
model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=3,padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.Conv3D(activation="relu",filters=128,kernel_size=2,padding="same"))
model.add(keras.layers.MaxPooling3D(pool_size=(2,2,1)))
model.add(keras.layers.Dense(64,activation="relu"))
#model.add(keras.layers.GlobalAveragePooling3D())
model.add(keras.layers.Flatten())
model.add(keras.layers.Dense(256,activation="relu"))
model.add(keras.layers.Dense(64,activation="relu"))
model.add(keras.layers.Dense(10,activation="relu"))
model.add(keras.layers.Dense(1,activation="sigmoid"))
from tensorflow.keras.layers import Dense,Conv3D,MaxPooling3D,BatchNormalization,Flatten,Input, Add
from tensorflow.keras.models import Model
input = Input((240,360,3,1))
x = Conv3D(64,3,padding="same")(input)
x = MaxPooling3D(pool_size=(3,3,3))(x)
x = Flatten()(x)
x = Dense(128)(x)
#y = Dense(128)(input)
y = Flatten()(input)
y = Dense(128)(y)
y = Dense(128)(y)
x = Add()([x,y])
x = Dense(10)(x)
x = Dense(1)(x)
model = Model(inputs = input,outputs = x)
model.compile()
model.summary()
from tensorflow.keras.utils import plot_model
plot_model(model,show_shapes=True)
from tensorflow.keras.utils import plot_model
plot_model(model,show_shapes=True)
with open('data//UCFCrime2Local//UCFCrime2Local//Train_split_AD.txt') as f:
lines = f.readlines()
import cv2
import numpy as np
import os
from pathlib import *
path = "data/UFC"
films = list()
files = (x for x in Path(path).iterdir() if x.is_file())
for file in files:
#print(str(file.name).split("_")[0], "is a file!")
films.append(str(file.name).split("_")[0])
for x in range(len(lines)):
if lines[x].strip() != films[x]:
print(lines[x])
break
import cv2
import numpy as np
import os
from pathlib import *
path = "data//UCFCrime2Local//UCFCrime2Local//Txt annotations"
files = (x for x in Path(path).iterdir() if x.is_file())
for file in files:
films = list()
name = file.name.split(".")[0]
with open(file) as f:
lines = f.readlines()
for line in lines:
lost = int(line.split(" ")[6])
if lost == 0:
lost = 1
else:
lost = 0
films.append(lost)
films = np.array(films)
np.save(os.path.join("data//UFC//training",name + ".npy"),films)
#print(str(file.name).split("_")[0], "is a file!")
#films.append(str(file.name).split(" ")[6])
import cv2
import numpy as np
import os
from pathlib import *
file = "data//UCFCrime2Local//UCFCrime2Local//Txt annotations//Burglary099.txt"
films = list()
name = "Burglary099"
with open(file) as f:
lines = f.readlines()
for line in lines:
lost = int(line.split(" ")[6])
if lost == 0:
lost = 1
else:
lost = 0
films.append(lost)
films = np.array(films)
np.save(os.path.join("data//UFC//testing",name + ".npy"),films)
import numpy as np
assult = np.load("data//UFC//testing//NormalVideos004.npy")
sub = os.listdir("data//UFC//training//frames")
sub = os.listdir("data//UFC//testing//frames")
import numpy as np
for name in sub:
if "Normal" in name:
files = os.listdir(os.path.join("data//UFC//training//frames",name))
name = name.split("_")[0:2]
name = name[0] + name[1]
tom = np.zeros((len(files),),np.int8)
np.save(os.path.join("data//UFC//training",name),tom)
import tensorflow.keras as keras
keras.models.load_model("flow_inception_i3d_kinetics_only_tf_dim_ordering_tf_kernels_no_top.h5")
import math
import tensorflow.keras as keras
import cv2
import os
import numpy as np
from sklearn.model_selection import train_test_split
from tensorflow import config
from utils import Dataloader
from sklearn.metrics import roc_auc_score , roc_curve
from pathlib import *
gpus = config.experimental.list_physical_devices('GPU')
config.experimental.set_memory_growth(gpus[0], True)
test_bilder = list()
for folder in os.listdir("data//UFC//testing//frames"):
path = os.path.join("data//UFC//testing//frames",folder)
#bildmappar.append(folder)
for img in os.listdir(path):
bild = os.path.join(path,img)
test_bilder.append(bild)
test_etiketter = list()
path = "data//UFC//testing"
testnings_ettiketter = (x for x in Path(path).iterdir() if x.is_file())
for ettiket in testnings_ettiketter:
test_etiketter.append(np.load(ettiket))
test_etiketter = np.concatenate(test_etiketter,axis=0)
batch_size = 16
test_gen = Dataloader(test_bilder,test_etiketter,batch_size)
reconstructed_model = keras.models.load_model("modelUFC3D_4-ep004-loss0.367-val_loss0.421.tf")
validation_steps = math.floor( len(test_bilder) / batch_size)
y_score = reconstructed_model.predict(test_gen,verbose=1)
auc = roc_auc_score(test_etiketter,y_score=y_score)
print('AUC: ', auc*100, '%')
with open('y_score.npy', 'wb') as f:
np.save(f, y_score)
from sklearn.metrics import RocCurveDisplay
import matplotlib.pyplot as plt
RocCurveDisplay.from_predictions(test_etiketter,y_score)
plt.figure(figsize=(18, 6))
plt.get_figlabels()
plt.show
```
| github_jupyter |
# Neural Network Example
Build a 2-hidden layers fully connected neural network (a.k.a multilayer perceptron) with TensorFlow.
- Author: Aymeric Damien
- Project: https://github.com/aymericdamien/TensorFlow-Examples/
## Neural Network Overview
<img src="http://cs231n.github.io/assets/nn1/neural_net2.jpeg" alt="nn" style="width: 400px;"/>
## MNIST Dataset Overview
This example is using MNIST handwritten digits. The dataset contains 60,000 examples for training and 10,000 examples for testing. The digits have been size-normalized and centered in a fixed-size image (28x28 pixels) with values from 0 to 1. For simplicity, each image has been flattened and converted to a 1-D numpy array of 784 features (28*28).

More info: http://yann.lecun.com/exdb/mnist/
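As a quick illustration of that flattening (not part of the original example):
```
import numpy as np

# A 28x28 image becomes a length-784 feature vector
image = np.arange(28 * 28).reshape(28, 28)
flattened = image.reshape(-1)
print(flattened.shape)  # (784,)
```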
```
from __future__ import print_function
# Import MNIST data
from tensorflow.examples.tutorials.mnist import input_data
mnist = input_data.read_data_sets("/tmp/data/", one_hot=True)
import tensorflow as tf
# Parameters
learning_rate = 0.1
num_steps = 500
batch_size = 128
display_step = 100
# Network Parameters
n_hidden_1 = 256 # 1st layer number of neurons
n_hidden_2 = 256 # 2nd layer number of neurons
num_input = 784 # MNIST data input (img shape: 28*28)
num_classes = 10 # MNIST total classes (0-9 digits)
# tf Graph input
X = tf.placeholder("float", [None, num_input])
Y = tf.placeholder("float", [None, num_classes])
# Store layers weight & bias
weights = {
'h1': tf.Variable(tf.random_normal([num_input, n_hidden_1])),
'h2': tf.Variable(tf.random_normal([n_hidden_1, n_hidden_2])),
'out': tf.Variable(tf.random_normal([n_hidden_2, num_classes]))
}
biases = {
'b1': tf.Variable(tf.random_normal([n_hidden_1])),
'b2': tf.Variable(tf.random_normal([n_hidden_2])),
'out': tf.Variable(tf.random_normal([num_classes]))
}
# Create model
def neural_net(x):
# Hidden fully connected layer with 256 neurons
layer_1 = tf.add(tf.matmul(x, weights['h1']), biases['b1'])
# Hidden fully connected layer with 256 neurons
layer_2 = tf.add(tf.matmul(layer_1, weights['h2']), biases['b2'])
# Output fully connected layer with a neuron for each class
out_layer = tf.matmul(layer_2, weights['out']) + biases['out']
return out_layer
# Construct model
logits = neural_net(X)
# Define loss and optimizer
loss_op = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(
logits=logits, labels=Y))
optimizer = tf.train.AdamOptimizer(learning_rate=learning_rate)
train_op = optimizer.minimize(loss_op)
# Evaluate model (with test logits, for dropout to be disabled)
correct_pred = tf.equal(tf.argmax(logits, 1), tf.argmax(Y, 1))
accuracy = tf.reduce_mean(tf.cast(correct_pred, tf.float32))
# Initialize the variables (i.e. assign their default value)
init = tf.global_variables_initializer()
# Start training
with tf.Session() as sess:
# Run the initializer
sess.run(init)
for step in range(1, num_steps+1):
batch_x, batch_y = mnist.train.next_batch(batch_size)
# Run optimization op (backprop)
sess.run(train_op, feed_dict={X: batch_x, Y: batch_y})
if step % display_step == 0 or step == 1:
# Calculate batch loss and accuracy
loss, acc = sess.run([loss_op, accuracy], feed_dict={X: batch_x,
Y: batch_y})
print("Step " + str(step) + ", Minibatch Loss= " + \
"{:.4f}".format(loss) + ", Training Accuracy= " + \
"{:.3f}".format(acc))
print("Optimization Finished!")
# Calculate accuracy for MNIST test images
print("Testing Accuracy:", \
sess.run(accuracy, feed_dict={X: mnist.test.images,
Y: mnist.test.labels}))
```
| github_jupyter |
<a href="https://colab.research.google.com/github/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_10_3_text_generation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# T81-558: Applications of Deep Neural Networks
**Module 10: Time Series in Keras**
* Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)
* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/).
# Module 10 Material
* Part 10.1: Time Series Data Encoding for Deep Learning [[Video]](https://www.youtube.com/watch?v=dMUmHsktl04&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_1_timeseries.ipynb)
* Part 10.2: Programming LSTM with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=wY0dyFgNCgY&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_2_lstm.ipynb)
* **Part 10.3: Text Generation with Keras and TensorFlow** [[Video]](https://www.youtube.com/watch?v=6ORnRAz3gnA&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_3_text_generation.ipynb)
* Part 10.4: Image Captioning with Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=NmoW_AYWkb4&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_4_captioning.ipynb)
* Part 10.5: Temporal CNN in Keras and TensorFlow [[Video]](https://www.youtube.com/watch?v=i390g8acZwk&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_10_5_temporal_cnn.ipynb)
# Google CoLab Instructions
The following code ensures that Google CoLab is running the correct version of TensorFlow.
```
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
```
# Part 10.3: Text Generation with LSTM
Recurrent neural networks are also known for their ability to generate text. As a result, the output of the neural network can be free-form text. In this section, we will see how an LSTM can be trained on a textual document, such as classic literature, and learn to output new text that appears to be of the same form as the training material. If you train your LSTM on [Shakespeare](https://en.wikipedia.org/wiki/William_Shakespeare), it will learn to crank out new prose similar to what Shakespeare had written.
Don't get your hopes up. You are not going to teach your deep neural network to write the next [Pulitzer Prize for Fiction](https://en.wikipedia.org/wiki/Pulitzer_Prize_for_Fiction). The prose generated by your neural network will be nonsensical. However, it will usually be nearly grammatically correct and of a similar style to the source training documents.
A neural network generating nonsensical text based on literature may not seem useful at first glance. However, this technology gets so much interest because it forms the foundation for many more advanced technologies. The fact that the LSTM will typically learn human grammar from the source document opens a wide range of possibilities. You can use similar technology to complete sentences as a user is entering text. The ability to output free-form text is, by itself, the foundation of many other technologies. In the next part, we will use this technique to create a neural network that can write captions for images to describe what is going on in the picture.
### Additional Information
The following are some of the articles that I found useful in putting this section together.
* [The Unreasonable Effectiveness of Recurrent Neural Networks](http://karpathy.github.io/2015/05/21/rnn-effectiveness/)
* [Keras LSTM Generation Example](https://keras.io/examples/lstm_text_generation/)
### Character-Level Text Generation
There are several different approaches to teaching a neural network to output free-form text. The most basic question is whether you wish the neural network to learn at the word or character level. In many ways, learning at the character level is the more interesting of the two. The LSTM is learning to construct its own words without even being shown what a word is. We will begin with character-level text generation. In the next part, we will see how to use nearly the same technique at the word level, where we will implement word-level automatic image captioning.
We begin by importing the needed Python packages and defining the sequence length, named **maxlen**. Time-series neural networks always accept their input as a fixed-length array. Because you might not use all of the sequence elements, it is common to fill extra elements with zeros. You will divide the text into sequences of this length, and the neural network will train to predict what comes after this sequence.
```
from tensorflow.keras.callbacks import LambdaCallback
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
from tensorflow.keras.layers import LSTM
from tensorflow.keras.optimizers import RMSprop
from tensorflow.keras.utils import get_file
import numpy as np
import random
import sys
import io
import requests
import re
```
For this simple example, we will train the neural network on the classic children's book [Treasure Island](https://en.wikipedia.org/wiki/Treasure_Island). We begin by loading this text into a Python string and displaying the first 1,000 characters.
```
r = requests.get("https://data.heatonresearch.com/data/t81-558/text/"\
"treasure_island.txt")
raw_text = r.text
print(raw_text[0:1000])
```
We will extract all unique characters from the text and sort them. This technique allows us to assign a unique ID to each character. Because we sorted the characters, these IDs should remain the same. If we add new characters to the original text, then the IDs would change. We build two dictionaries. The first, **char_indices**, is used to convert a character into its ID. The second, **indices_char**, converts an ID back into its character.
```
processed_text = raw_text.lower()
processed_text = re.sub(r'[^\x00-\x7f]',r'', processed_text)
print('corpus length:', len(processed_text))
chars = sorted(list(set(processed_text)))
print('total chars:', len(chars))
char_indices = dict((c, i) for i, c in enumerate(chars))
indices_char = dict((i, c) for i, c in enumerate(chars))
```
We are now ready to build the actual sequences. Just like previous neural networks, there will be an $x$ and $y$. However, for the LSTM, each $x$ is a sequence of characters, and the corresponding $y$ is the expected next character. The following code generates all possible sequences.
```
# cut the text in semi-redundant sequences of maxlen characters
maxlen = 40
step = 3
sentences = []
next_chars = []
for i in range(0, len(processed_text) - maxlen, step):
sentences.append(processed_text[i: i + maxlen])
next_chars.append(processed_text[i + maxlen])
print('nb sequences:', len(sentences))
sentences
print('Vectorization...')
x = np.zeros((len(sentences), maxlen, len(chars)), dtype=np.bool)
y = np.zeros((len(sentences), len(chars)), dtype=np.bool)
for i, sentence in enumerate(sentences):
for t, char in enumerate(sentence):
x[i, t, char_indices[char]] = 1
y[i, char_indices[next_chars[i]]] = 1
x.shape
y.shape
```
The dummy variables for $y$ are shown below.
```
y[0:10]
```
Next, we create the neural network. This neural network's primary feature is the LSTM layer, which allows the sequences to be processed.
```
# build the model: a single LSTM
print('Build model...')
model = Sequential()
model.add(LSTM(128, input_shape=(maxlen, len(chars))))
model.add(Dense(len(chars), activation='softmax'))
optimizer = RMSprop(lr=0.01)
model.compile(loss='categorical_crossentropy', optimizer=optimizer)
model.summary()
```
The LSTM will produce new text character by character. We will need to sample the correct letter from the LSTM predictions each time. The **sample** function accepts the following two parameters:
* **preds** - The output neurons.
* **temperature** - Controls sampling randomness: values near 0.0 make the sampling most confident and conservative, while higher values (1.0 and above) make it more random and more willing to make spelling and other errors.
The sample function below essentially performs a softmax, rescaled by the temperature, on the neural network predictions. This causes each output neuron to become a probability of its particular letter.
```
def sample(preds, temperature=1.0):
# helper function to sample an index from a probability array
preds = np.asarray(preds).astype('float64')
preds = np.log(preds) / temperature
exp_preds = np.exp(preds)
preds = exp_preds / np.sum(exp_preds)
probas = np.random.multinomial(1, preds, 1)
return np.argmax(probas)
```
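To build intuition for the temperature parameter, the toy snippet below (not part of the original notebook) applies the same rescaling to a fixed probability vector; lower temperatures sharpen the distribution toward the most likely character, while higher temperatures flatten it:
```
import numpy as np

preds_demo = np.array([0.6, 0.3, 0.1])
for t in [0.2, 0.5, 1.0, 1.2]:
    scaled = np.exp(np.log(preds_demo) / t)
    scaled /= scaled.sum()
    print(t, np.round(scaled, 3))
```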
Keras calls the following function at the end of each training Epoch. The code generates sample text that visually demonstrates how the neural network improves at text generation. As the neural network trains, the generated text should look more realistic.
```
def on_epoch_end(epoch, _):
# Function invoked at end of each epoch. Prints generated text.
print("******************************************************")
print('----- Generating text after Epoch: %d' % epoch)
start_index = random.randint(0, len(processed_text) - maxlen - 1)
for temperature in [0.2, 0.5, 1.0, 1.2]:
print('----- temperature:', temperature)
generated = ''
sentence = processed_text[start_index: start_index + maxlen]
generated += sentence
print('----- Generating with seed: "' + sentence + '"')
sys.stdout.write(generated)
for i in range(400):
x_pred = np.zeros((1, maxlen, len(chars)))
for t, char in enumerate(sentence):
x_pred[0, t, char_indices[char]] = 1.
preds = model.predict(x_pred, verbose=0)[0]
next_index = sample(preds, temperature)
next_char = indices_char[next_index]
generated += next_char
sentence = sentence[1:] + next_char
sys.stdout.write(next_char)
sys.stdout.flush()
print()
```
We are now ready to train. It can take up to an hour to train this network, depending on how fast your computer is. If you have a GPU available, please make sure to use it.
```
# Ignore useless W0819 warnings generated by TensorFlow 2.0. Hopefully can remove this ignore in the future.
# See https://github.com/tensorflow/tensorflow/issues/31308
import logging, os
logging.disable(logging.WARNING)
os.environ["TF_CPP_MIN_LOG_LEVEL"] = "3"
# Fit the model
print_callback = LambdaCallback(on_epoch_end=on_epoch_end)
model.fit(x, y,
batch_size=128,
epochs=60,
callbacks=[print_callback])
```
| github_jupyter |
```
%matplotlib inline
import pandas as pd
from os.path import join
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
import skbio
# from q2d2 import get_within_between_distances, filter_dm_and_map
from stats import mc_t_two_sample
from skbio.stats.distance import anosim, permanova
from skbio.stats.composition import ancom, multiplicative_replacement
import itertools
```
## Define a couple of helper functions
```
def get_within_between_distances(map_df, dm, col):
    """Return a DataFrame of pairwise distances labeled 'Within' or 'Between' the groups defined by `col`."""
    filtered_dm, filtered_map = filter_dm_and_map(dm, map_df)
    groups = []
    distances = []
    map_dict = filtered_map[col].to_dict()
    for id_1, id_2 in itertools.combinations(filtered_map.index.tolist(), 2):
        if map_dict[id_1] == map_dict[id_2]:
            groups.append('Within')
        else:
            groups.append('Between')
        distances.append(filtered_dm[(id_1, id_2)])
    groups = zip(groups, distances)
    distances_df = pd.DataFrame(data=list(groups), columns=['Groups', 'Distance'])
    return distances_df

def filter_dm_and_map(dm, map_df):
    """Restrict the distance matrix and mapping file to their shared sample IDs."""
    ids_to_exclude = set(dm.ids) - set(map_df.index.values)
    ids_to_keep = set(dm.ids) - ids_to_exclude
    filtered_dm = dm.filter(ids_to_keep)
    filtered_map = map_df.loc[ids_to_keep]
    return filtered_dm, filtered_map
colors = sns.color_palette("YlGnBu", 100)
sns.palplot(colors)
```
Load mapping file and munge it
-----------------
```
home = '/home/office-microbe-files'
map_fp = join(home, 'master_map_150908.txt')
sample_md = pd.read_csv(map_fp, sep='\t', index_col=0, dtype=str)
sample_md = sample_md[sample_md['16SITS'] == 'ITS']
sample_md = sample_md[sample_md['OfficeSample'] == 'yes']
replicate_ids = '''F2F.2.Ce.021
F2F.2.Ce.022
F2F.3.Ce.021
F2F.3.Ce.022
F2W.2.Ca.021
F2W.2.Ca.022
F2W.2.Ce.021
F2W.2.Ce.022
F3W.2.Ce.021
F3W.2.Ce.022
F1F.3.Ca.021
F1F.3.Ca.022
F1C.3.Ca.021
F1C.3.Ca.022
F1W.2.Ce.021
F1W.2.Ce.022
F1W.3.Dr.021
F1W.3.Dr.022
F1C.3.Dr.021
F1C.3.Dr.022
F2W.3.Dr.059
F3F.2.Ce.078'''.split('\n')
reps = sample_md[sample_md['Description'].isin(replicate_ids)]
reps = reps.drop(reps.drop_duplicates('Description').index).index
sample_md.drop(reps, inplace=True)
```
Load alpha diversity
----------------------
```
alpha_div_fp = '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/arare_max999/alpha_div_collated/observed_species.txt'
alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0)
alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration'])
alpha_cols = [e for e in alpha_div.columns if '990' in e]
alpha_div = alpha_div[alpha_cols]
sample_md = pd.concat([sample_md, alpha_div], axis=1, join='inner')
sample_md['MeanAlpha'] = sample_md[alpha_cols].mean(axis=1)
sample_md['MedianAlpha'] = sample_md[alpha_cols].median(axis=1)
alpha_div = pd.read_csv(alpha_div_fp, sep='\t', index_col=0)
alpha_div = alpha_div.T.drop(['sequences per sample', 'iteration'])
alpha_cols = [e for e in alpha_div.columns if '990' in e]
alpha_div = alpha_div[alpha_cols]
```
add alpha diversity to map
-------------
```
sample_md = pd.concat([sample_md, alpha_div], axis=1, join='inner')
sample_md['MeanAlpha'] = sample_md[alpha_cols].mean(axis=1)
```
Filter the samples so that only corrosponding row 2, 3 samples are included
-----------------------------------------------------------
```
sample_md['NoRow'] = sample_md['Description'].apply(lambda x: x[:3] + x[5:])
row_df = sample_md[sample_md.duplicated('NoRow', keep=False)].copy()
row_df['SampleType'] = 'All Row 2/3 Pairs (n={0})'.format(int(len(row_df)/2))
plot_row_df = row_df[['Row', 'MeanAlpha', 'SampleType']]
sample_md_wall = row_df[row_df['PlateLocation'] != 'floor'].copy()
sample_md_wall['SampleType'] = 'Wall and Ceiling Pairs (n={0})'.format(int(len(sample_md_wall)/2))
plot_sample_md_wall = sample_md_wall[['Row', 'MeanAlpha', 'SampleType']]
sample_md_floor = row_df[row_df['PlateLocation'] == 'floor'].copy()
sample_md_floor['SampleType'] = 'Floor Pairs (n={0})'.format(int(len(sample_md_floor)/2))
plot_sample_md_floor = sample_md_floor[['Row', 'MeanAlpha', 'SampleType']]
plot_df = pd.concat([plot_row_df, plot_sample_md_wall, plot_sample_md_floor])
with plt.rc_context(dict(sns.axes_style("darkgrid"),
**sns.plotting_context("notebook", font_scale=2.5))):
plt.figure(figsize=(20, 11))
ax = sns.violinplot(x='SampleType', y='MeanAlpha', data=plot_df, hue='Row', hue_order=['3', '2'],
palette="YlGnBu")
ax.set_xlabel('')
handles, labels = ax.get_legend_handles_labels()
ax.set_ylabel('OTU Counts')
ax.set_title('OTU Counts')
ax.legend(handles, ['Frequent', 'Infrequent'], title='Sampling Frequency')
ax.get_legend().get_title().set_fontsize('15')
plt.savefig('figure-3-its-A.svg', dpi=300)
row_2_values = list(row_df[(row_df['Row'] == '2')]['MeanAlpha'])
row_3_values = list(row_df[(row_df['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
obs_t, param_p_val
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values),np.mean(row_3_values)))
row_2_values = list(sample_md_wall[(sample_md_wall['Row'] == '2')]['MeanAlpha'])
row_3_values = list(sample_md_wall[(sample_md_wall['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values),np.mean(row_3_values)))
row_2_values = list(sample_md_floor[(sample_md_floor['Row'] == '2')]['MeanAlpha'])
row_3_values = list(sample_md_floor[(sample_md_floor['Row'] == '3')]['MeanAlpha'])
obs_t, param_p_val, perm_t_stats, nonparam_p_val = mc_t_two_sample(row_2_values, row_3_values)
print((obs_t, param_p_val), "row 2 mean: {0}, row 1 mean: {1}".format(np.mean(row_2_values),np.mean(row_3_values)))
```
# Beta Diversity!
Create beta diversity boxplots of within- and between-group distances for row. It may not make sense to do this for all samples, as the location and/or city effect may drown out the row effect.
Load the distance matrix
----------------------
```
dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/bdiv_even999/binary_jaccard_dm.txt'))
```
Run PERMANOVA and record within/between distances for various categories
----------------------
All of these will be based on the row 2/3 paired samples, though they may be further filtered to avoid confounding variables.
### Row distances
```
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
row_dists = get_within_between_distances(filt_map, filt_dm, 'Row')
row_dists['Category'] = 'Row (n=198)'
permanova(filt_dm, filt_map, column='Row', permutations=999)
```
### Plate location
We can use the same samples for this as the previous test
```
plate_dists = get_within_between_distances(filt_map, filt_dm, 'PlateLocation')
plate_dists['Category'] = 'Plate Location (n=198)'
permanova(filt_dm, filt_map, column='PlateLocation', permutations=999)
```
### Run
```
filt_map = row_df[(row_df['City'] == 'flagstaff')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
run_dists = get_within_between_distances(filt_map, filt_dm, 'Run')
run_dists['Category'] = 'Run (n=357)'
permanova(filt_dm, filt_map, column='Run', permutations=999)
```
### Material
```
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
material_dists = get_within_between_distances(filt_map, filt_dm, 'Material')
material_dists['Category'] = 'Material (n=198)'
permanova(filt_dm, filt_map, column='Material', permutations=999)
all_dists = material_dists.append(row_dists).append(plate_dists).append(run_dists)
with plt.rc_context(dict(sns.axes_style("darkgrid"),
**sns.plotting_context("notebook", font_scale=1.8))):
plt.figure(figsize=(20,11))
ax = sns.boxplot(x="Category", y="Distance", hue="Groups", hue_order=['Within', 'Between'], data=all_dists, palette=sns.color_palette(['#f1fabb', '#2259a6']))
ax.set_ylim([0.9, 1.02])
ax.set_xlabel('')
ax.set_title('Binary-Jaccard')
plt.legend(loc='upper right')
plt.savefig('figure-3-its-B.svg', dpi=300)
dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_open/bdiv_even999/bray_curtis_dm.txt'))
```
## Row Distances
```
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
row_dists = get_within_between_distances(filt_map, filt_dm, 'Row')
row_dists['Category'] = 'Row (n=198)'
permanova(filt_dm, filt_map, column='Row', permutations=999)
```
## Plate Location
```
plate_dists = get_within_between_distances(filt_map, filt_dm, 'PlateLocation')
plate_dists['Category'] = 'Plate Location (n=198)'
permanova(filt_dm, filt_map, column='PlateLocation', permutations=999)
```
## Run
```
filt_map = row_df[(row_df['City'] == 'flagstaff')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
run_dists = get_within_between_distances(filt_map, filt_dm, 'Run')
run_dists['Category'] = 'Run (n=357)'
permanova(filt_dm, filt_map, column='Run', permutations=999)
```
## Material
```
filt_map = row_df[(row_df['City'] == 'flagstaff') & (row_df['Run'] == '2')]
filt_dm, filt_map = filter_dm_and_map(dm, filt_map)
material_dists = get_within_between_distances(filt_map, filt_dm, 'Material')
material_dists['Category'] = 'Material (n=198)'
permanova(filt_dm, filt_map, column='Material', permutations=999)
all_dists = material_dists.append(row_dists).append(plate_dists).append(run_dists)
with plt.rc_context(dict(sns.axes_style("darkgrid"),
**sns.plotting_context("notebook", font_scale=1.8))):
plt.figure(figsize=(20,11))
ax = sns.boxplot(x="Category", y="Distance", hue="Groups", hue_order=['Within', 'Between'], data=all_dists, palette=sns.color_palette(['#f1fabb', '#2259a6']))
ax.set_ylim([0.9, 1.02])
ax.set_xlabel('')
ax.set_title('Bray-Curtis')
plt.legend(loc='upper right')
plt.savefig('figure-3-its-C.svg', dpi=300)
```
ANCOM
-----
```
table_fp = join(home, 'core_div_out/table_even1000.txt')
table = pd.read_csv(table_fp, sep='\t', skiprows=1, index_col=0).T
table.index = table.index.astype(str)
table_ancom = table.loc[:, table[:3].sum(axis=0) > 0]
table_ancom = pd.DataFrame(multiplicative_replacement(table_ancom), index=table_ancom.index, columns=table_ancom.columns)
table_ancom.dropna(axis=0, inplace=True)
intersect_ids = list(set(row_df.index) & set(table_ancom.index))  # samples present in both the paired-row map and the ANCOM table
row_md_ancom = row_df.loc[intersect_ids, ]
table_ancom = table_ancom.loc[intersect_ids, ]
%time
results = ancom(table_ancom, row_md_ancom['Row'])
sigs = results[results['reject'] == True]
tax_fp = '/home/office-microbe-files/pick_otus_out_97/uclust_assigned_taxonomy/rep_set_tax_assignments.txt'
taxa_map = pd.read_csv(tax_fp, sep='\t', index_col=0, names=['Taxa', 'none', 'none'])
taxa_map.drop('none', axis=1, inplace=True)
taxa_map.index = taxa_map.index.astype(str)
taxa_map.loc[sigs.sort_values('W').index.astype(str)]
pd.options.display.max_colwidth = 200
sigs
w_dm = skbio.DistanceMatrix.read(join(home, '/home/johnchase/office-project/office-microbes/notebooks/UNITE-analysis/core_div/core_div_closed/bdiv_even1000/bray_curtis_dm.txt'))
np.mean(w_dm.data)
np.median(w_dm.data)
4980239/22783729
```
| github_jupyter |
## Training with Chainer
[VGG](https://arxiv.org/pdf/1409.1556v6.pdf) is an architecture for deep convolution networks. In this example, we train a convolutional network to perform image classification using the CIFAR-10 dataset. CIFAR-10 consists of 60000 32x32 colour images in 10 classes, with 6000 images per class. There are 50000 training images and 10000 test images. We'll train a model on SageMaker, deploy it to Amazon SageMaker hosting, and then classify images using the deployed model.
The Chainer script runs inside of a Docker container running on SageMaker. For more information about the Chainer container, see the sagemaker-chainer-containers repository and the sagemaker-python-sdk repository:
* https://github.com/aws/sagemaker-chainer-containers
* https://github.com/aws/sagemaker-python-sdk
For more on Chainer, please visit the Chainer repository:
* https://github.com/chainer/chainer
This notebook is adapted from the [CIFAR-10](https://github.com/chainer/chainer/tree/master/examples/cifar) example in the Chainer repository.
```
# Setup
from sagemaker import get_execution_role
import sagemaker
sagemaker_session = sagemaker.Session()
# This role retrieves the SageMaker-compatible role used by this Notebook Instance.
role = get_execution_role()
```
## Downloading training and test data
We use helper functions provided by `chainer` to download and preprocess the CIFAR10 data.
```
import chainer
from chainer.datasets import get_cifar10
train, test = get_cifar10()
```
## Uploading the data
We save the preprocessed data to the local filesystem, and then use the `sagemaker.Session.upload_data` function to upload our datasets to an S3 location. The return value `inputs` identifies the S3 location, which we will use when we start the Training Job.
```
import os
import shutil
import numpy as np
train_data = [element[0] for element in train]
train_labels = [element[1] for element in train]
test_data = [element[0] for element in test]
test_labels = [element[1] for element in test]
try:
os.makedirs("/tmp/data/train_cifar")
os.makedirs("/tmp/data/test_cifar")
np.savez("/tmp/data/train_cifar/train.npz", data=train_data, labels=train_labels)
np.savez("/tmp/data/test_cifar/test.npz", data=test_data, labels=test_labels)
train_input = sagemaker_session.upload_data(
path=os.path.join("/tmp", "data", "train_cifar"), key_prefix="notebook/chainer_cifar/train"
)
test_input = sagemaker_session.upload_data(
path=os.path.join("/tmp", "data", "test_cifar"), key_prefix="notebook/chainer_cifar/test"
)
finally:
shutil.rmtree("/tmp/data")
print("training data at %s" % train_input)
print("test data at %s" % test_input)
```
## Writing the Chainer script to run on Amazon SageMaker
### Training
We need to provide a training script that can run on the SageMaker platform. The training script is very similar to a training script you might run outside of SageMaker, but you can access useful properties about the training environment through various environment variables, such as:
* `SM_MODEL_DIR`: A string representing the path to the directory to write model artifacts to.
These artifacts are uploaded to S3 for model hosting.
* `SM_NUM_GPUS`: An integer representing the number of GPUs available to the host.
* `SM_OUTPUT_DIR`: A string representing the filesystem path to write output artifacts to. Output artifacts may
include checkpoints, graphs, and other files to save, not including model artifacts. These artifacts are compressed
and uploaded to S3 to the same S3 prefix as the model artifacts.
Supposing two input channels, 'train' and 'test', were used in the call to the Chainer estimator's ``fit()`` method,
the following will be set, following the format `SM_CHANNEL_[channel_name]`:
* `SM_CHANNEL_TRAIN`: A string representing the path to the directory containing data in the 'train' channel
* `SM_CHANNEL_TEST`: Same as above, but for the 'test' channel.
A typical training script loads data from the input channels, configures training with hyperparameters, trains a model, and saves a model to `model_dir` so that it can be hosted later. Hyperparameters are passed to your script as arguments and can be retrieved with an `argparse.ArgumentParser` instance. For example, the script run by this notebook starts with the following:
```python
import argparse
import os
if __name__ =='__main__':
parser = argparse.ArgumentParser()
# retrieve the hyperparameters we set from the client (with some defaults)
parser.add_argument('--epochs', type=int, default=50)
parser.add_argument('--batch-size', type=int, default=64)
parser.add_argument('--learning-rate', type=float, default=0.05)
# Data, model, and output directories These are required.
parser.add_argument('--output-data-dir', type=str, default=os.environ['SM_OUTPUT_DATA_DIR'])
parser.add_argument('--model-dir', type=str, default=os.environ['SM_MODEL_DIR'])
parser.add_argument('--train', type=str, default=os.environ['SM_CHANNEL_TRAIN'])
    parser.add_argument('--test', type=str, default=os.environ['SM_CHANNEL_TEST'])
args, _ = parser.parse_known_args()
num_gpus = int(os.environ['SM_NUM_GPUS'])
# ... load from args.train and args.test, train a model, write model to args.model_dir.
```
Because the Chainer container imports your training script, you should always put your training code in a main guard (`if __name__=='__main__':`) so that the container does not inadvertently run your training code at the wrong point in execution.
For more information about training environment variables, please visit https://github.com/aws/sagemaker-containers.
### Hosting and Inference
We use a single script to train and host the Chainer model. You can also write separate scripts for training and hosting. In contrast with the training script, the hosting script requires you to implement functions with particular function signatures (or rely on defaults for those functions).
These functions load your model, deserialize data sent by a client, obtain inferences from your hosted model, and serialize predictions back to a client:
* **`model_fn(model_dir)` (always required for hosting)**: This function is invoked to load model artifacts from those that were written into `model_dir` during training.
The script that this notebook runs uses the following `model_fn` function for hosting:
```python
def model_fn(model_dir):
chainer.config.train = False
model = L.Classifier(net.VGG(10))
serializers.load_npz(os.path.join(model_dir, 'model.npz'), model)
return model.predictor
```
* `input_fn(input_data, content_type)`: This function is invoked to deserialize prediction data when a prediction request is made. The return value is passed to predict_fn. `input_data` is the serialized input data in the body of the prediction request, and `content_type`, the MIME type of the data.
* `predict_fn(input_data, model)`: This function accepts the return value of `input_fn` as the `input_data` parameter and the return value of `model_fn` as the `model` parameter and returns inferences obtained from the model.
* `output_fn(prediction, accept)`: This function is invoked to serialize the return value from `predict_fn`, which is passed in as the `prediction` parameter, back to the SageMaker client in response to prediction requests.
`model_fn` is always required, but default implementations exist for the remaining functions. These default implementations deserialize a NumPy array, invoke the model's `__call__` method on the input data, and serialize the resulting NumPy array back to the client.
This notebook relies on the default `input_fn`, `predict_fn`, and `output_fn` implementations. See the Chainer sentiment analysis notebook for an example of how one can implement these hosting functions.
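For reference, here is a rough sketch of what custom implementations of these hosting functions could look like. This is an illustrative assumption rather than the code this notebook actually uses (it relies on the defaults): it assumes JSON-encoded nested lists as the request and response payload, and the exact signatures and return conventions can vary between container versions.

```python
import json

import numpy as np


def input_fn(input_data, content_type):
    # Deserialize the request body; here we assume a JSON-encoded nested list
    if content_type == 'application/json':
        return np.array(json.loads(input_data), dtype=np.float32)
    raise ValueError('Unsupported content type: {}'.format(content_type))


def predict_fn(input_data, model):
    # 'model' is whatever model_fn returned; run the batch through it without backprop
    import chainer
    with chainer.using_config('train', False), chainer.no_backprop_mode():
        predictions = model(input_data)
    return predictions.data


def output_fn(prediction, accept):
    # Serialize the predictions back to the client as JSON
    if accept == 'application/json':
        return json.dumps(prediction.tolist())
    raise ValueError('Unsupported accept type: {}'.format(accept))
```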
Please examine the script below. Training occurs behind the main guard, which prevents the function from being run when the script is imported, and `model_fn` loads the model saved into `model_dir` during training.
For more on writing Chainer scripts to run on SageMaker, or for more on the Chainer container itself, please see the following repositories:
* For writing Chainer scripts to run on SageMaker: https://github.com/aws/sagemaker-python-sdk
* For more on the Chainer container and default hosting functions: https://github.com/aws/sagemaker-chainer-containers
```
!pygmentize 'src/chainer_cifar_vgg_single_machine.py'
```
## Running the training script on SageMaker
To train a model with a Chainer script, we construct a ```Chainer``` estimator using the [sagemaker-python-sdk](https://github.com/aws/sagemaker-python-sdk). We pass in an `entry_point`, the name of a script that contains a couple of functions with certain signatures (`train` and `model_fn`), and a `source_dir`, a directory containing all code to run inside the Chainer container. This script will be run on SageMaker in a container that invokes these functions to train and load Chainer models.
The ```Chainer``` class allows us to run our training function as a training job on SageMaker infrastructure. We need to configure it with our training script, an IAM role, the number of training instances, and the training instance type. In this case we will run our training job on one `ml.p2.xlarge` instance.
```
from sagemaker.chainer.estimator import Chainer
chainer_estimator = Chainer(
entry_point="chainer_cifar_vgg_single_machine.py",
source_dir="src",
role=role,
sagemaker_session=sagemaker_session,
train_instance_count=1,
train_instance_type="ml.p2.xlarge",
hyperparameters={"epochs": 50, "batch-size": 64},
)
chainer_estimator.fit({"train": train_input, "test": test_input})
```
Our Chainer script writes various artifacts, such as plots, to a directory `output_data_dir`, the contents of which SageMaker uploads to S3. Now we download and extract these artifacts.
```
from s3_util import retrieve_output_from_s3
chainer_training_job = chainer_estimator.latest_training_job.name
desc = sagemaker_session.sagemaker_client.describe_training_job(
TrainingJobName=chainer_training_job
)
output_data = desc["ModelArtifacts"]["S3ModelArtifacts"].replace("model.tar.gz", "output.tar.gz")
retrieve_output_from_s3(output_data, "output/single_machine_cifar")
```
These plots show the accuracy and loss over epochs:
```
from IPython.display import Image
from IPython.display import display
accuracy_graph = Image(filename="output/single_machine_cifar/accuracy.png", width=800, height=800)
loss_graph = Image(filename="output/single_machine_cifar/loss.png", width=800, height=800)
display(accuracy_graph, loss_graph)
```
## Deploying the Trained Model
After training, we use the Chainer estimator object to create and deploy a hosted prediction endpoint. We can use a CPU-based instance for inference (in this case an `ml.m4.xlarge`), even though we trained on GPU instances.
The predictor object returned by `deploy` lets us call the new endpoint and perform inference on our sample images.
```
predictor = chainer_estimator.deploy(initial_instance_count=1, instance_type="ml.m4.xlarge")
```
### CIFAR10 sample images
We'll use these CIFAR10 sample images to test the service:
<img style="display: inline; height: 32px; margin: 0.25em" src="images/airplane1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/automobile1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/bird1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/cat1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/deer1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/dog1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/frog1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/horse1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/ship1.png" />
<img style="display: inline; height: 32px; margin: 0.25em" src="images/truck1.png" />
## Predicting using SageMaker Endpoint
We batch the images together into a single NumPy array to obtain multiple inferences with a single prediction request.
```
from skimage import io
import numpy as np
def read_image(filename):
img = io.imread(filename)
img = np.array(img).transpose(2, 0, 1)
img = np.expand_dims(img, axis=0)
img = img.astype(np.float32)
img *= 1.0 / 255.0
img = img.reshape(3, 32, 32)
return img
def read_images(filenames):
return np.array([read_image(f) for f in filenames])
filenames = [
"images/airplane1.png",
"images/automobile1.png",
"images/bird1.png",
"images/cat1.png",
"images/deer1.png",
"images/dog1.png",
"images/frog1.png",
"images/horse1.png",
"images/ship1.png",
"images/truck1.png",
]
image_data = read_images(filenames)
```
The predictor runs inference on our input data and returns a list of predictions whose argmax gives the predicted label of the input data.
```
response = predictor.predict(image_data)
for i, prediction in enumerate(response):
print("image {}: prediction: {}".format(i, prediction.argmax(axis=0)))
```
## Cleanup
After you have finished with this example, remember to delete the prediction endpoint to release the instance(s) associated with it.
```
chainer_estimator.delete_endpoint()
```
| github_jupyter |
<img src="images/utfsm.png" alt="" width="200px" align="right"/>
# USM Numérica
## Notebook Topic
### Objectives
1. Understand how the sklearn Machine Learning library works
2. Apply the sklearn library to solve Machine Learning problems
## About the author
### Sebastián Flores
#### ICM UTFSM
#### [email protected]
## About this presentation
#### Content created in ipython notebook (jupyter)
#### Slides version thanks to RISE by Damián Avila
Software:
* python 2.7 or python 3.1
* pandas 0.16.1
* sklearn 0.16.1
Optional:
* numpy 1.9.2
* matplotlib 1.3.1
```
from sklearn import __version__ as vsn
print(vsn)
```
## 0.1 Instructions
Instructions for installing and using an ipython notebook can be found at the following [link](link).
After downloading and opening this notebook, remember to:
* Work through the problems sequentially.
* Save frequently with *`Ctr-S`* to avoid surprises.
* Replace *`FIX_ME`* in the code cells with the corresponding code.
* Run each code cell with *`Ctr-Enter`*
## 0.2 Licensing and Setup
Run the following cell with *`Ctr-Enter`*.
```
"""
IPython Notebook v4.0 for python 3.0
Additional libraries: numpy, scipy, matplotlib. (EDIT ACCORDING TO THE NOTEBOOK!!!)
Content under CC-BY 4.0 license. Code under MIT license.
(c) Sebastian Flores, Christopher Cooper, Alberto Rubio, Pablo Bunout.
"""
# Configuration to reload modules and libraries dynamically
%reload_ext autoreload
%autoreload 2
# Configuration for inline plots
%matplotlib inline
```
## 1.- About the sklearn library
#### History
- Born in 2007 as a Google Summer of Code project by David Cournapeau.
- Taken up by Matthieu Brucher for his thesis project.
- Supported by INRIA since 2010.
- Currently +35 contributors.
## 1.- About the sklearn library
#### Installation
In python, with a bit of luck:
```
pip install -U scikit-learn
```
Using Anaconda:
```
conda install scikit-learn
```
## 1.- About the sklearn library
#### Why sklearn?
sklearn comes from scientific toolbox for Machine Learning.
scikit learn to its friends.
There are multiple scikits, which are "scientific toolboxes" built on top of SciPy: [https://scikits.appspot.com/scikits](https://scikits.appspot.com/scikits).
First of all... what is Machine Learning?
## 2.- Machine Learning 101
#### Example
Consider a dataset consisting of features of various animals.
```
legs, width, length, height, weight, species
[number],[meters],[meters],[meters],[kilograms],[]
2, 0.6, 0.4, 1.7, 75, human
2, 0.6, 0.4, 1.8, 90, human
...
2, 0.5, 0.5, 1.7, 85, human
4, 0.2, 0.5, 0.3, 30, cat
...
4, 0.25, 0.55, 0.32, 32, cat
4, 0.5, 0.8, 0.3, 50, dog
...
4, 0.4, 0.4, 0.32, 40, dog
```
## 2.- Machine Learning 101
### Clustering
Suppose nobody has told us the species of each animal.
Could we recognize the different species?
Could we recognize that there are 3 distinct groups of animals?
## 2.- Machine Learning 101
### Classification
Suppose we know the data for each animal and also its species.
If someone shows up with the measurements of an animal... can we say which species it is?
## 2.- Machine Learning 101
### Regression
Suppose we know the data for each animal and its species.
If someone shows up with the data of an animal, except its weight... can we predict how much the animal will weigh? (A small sketch of this idea follows below.)
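As a purely illustrative sketch of this regression idea (the numbers below are made up and are not part of any real dataset), one could fit a linear model that predicts the weight from the other measurements:
```
import numpy as np
from sklearn.linear_model import LinearRegression

# Hypothetical measurements: legs, width, length, height (weight is what we predict)
X = np.array([[2, 0.6, 0.4, 1.7],
              [2, 0.5, 0.5, 1.7],
              [4, 0.2, 0.5, 0.3],
              [4, 0.5, 0.8, 0.3]])
y = np.array([75.0, 85.0, 30.0, 50.0])  # weight in kilograms

reg = LinearRegression()
reg.fit(X, y)

# Predict the weight of a new animal from its other measurements
print(reg.predict([[4, 0.25, 0.55, 0.32]]))
```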
## 2.- Machine Learning 101
### Definitions
* The data used to predict are called predictors (features) and are typically named `X`.
* The value we want to predict is called the label, can be numerical or categorical, and is typically named `y`.
## 3- sklearn overview
### Summary image
<img src="images/ml_map.png" alt="" width="1400px" align="middle"/>
## 3- sklearn overview
### General procedure
```
from sklearn import HelpfulMethods
from sklearn import AlgorithmIWantToUse
# split data into train and test datasets
# train model with train dataset
# compute error on test dataset
# Optional: Train model with all available data
# Use model for some prediction
```
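To make those steps concrete, a minimal sketch could look as follows; the data here are randomly generated placeholders and the choice of `KNeighborsClassifier` is arbitrary:
```
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier  # any estimator could be used here
from sklearn.metrics import accuracy_score

# Hypothetical data, just to make the steps explicit: 100 samples, 4 features, 3 classes
X = np.random.rand(100, 4)
y = np.random.randint(0, 3, size=100)

# split data into train and test datasets
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25)

# train model with train dataset
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(X_train, y_train)

# compute error on test dataset
print(accuracy_score(y_test, clf.predict(X_test)))

# Optional: train model with all available data, then use it for some prediction
clf.fit(X, y)
print(clf.predict(np.random.rand(5, 4)))  # 5 hypothetical new observations
```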
## 4- Clustering with sklearn
#### Wine Dataset
The [Wine Dataset](https://archive.ics.uci.edu/ml/datasets/Wine) is a classic dataset for testing clustering algorithms.
<img src="images/wine.jpg" alt="" width="600px" align="middle"/>
The data correspond to 3 different wine cultivars from the same region of Italy, which have been identified with the labels 1, 2 and 3.
## 4- Clustering with sklearn
#### Wine Dataset
For each type of wine, 13 chemical analyses were performed:
1. Alcohol
2. Malic acid
3. Ash
4. Alcalinity of ash
5. Magnesium
6. Total phenols
7. Flavanoids
8. Nonflavanoid phenols
9. Proanthocyanins
10. Color intensity
11. Hue
12. OD280/OD315 of diluted wines
13. Proline
The database contains 178 distinct samples in total.
```
%%bash
head data/wine_data.csv
```
## 4- Clustering with sklearn
#### Reading the data
```
import pandas as pd
data = pd.read_csv("data/wine_data.csv")
data
```
## 4- Clustering with sklearn
#### Data exploration
```
data.columns
data["class"].value_counts()
data.describe(include="all")
```
## 4- Clustering with sklearn
#### Graphical data exploration
```
from matplotlib import pyplot as plt
data.hist(figsize=(12,20))
plt.show()
from matplotlib import pyplot as plt
#pd.scatter_matrix(data, figsize=(12,12), range_padding=0.2)
#plt.show()
```
## 4- Clustering with sklearn
#### Splitting the data
We need to separate the data into the predictors (features) and the labels.
```
X = data.drop("class", axis=1)
true_labels = data["class"] - 1 # labels must be 0, 1, 2, ..., n-1
```
## 4- Clustering with sklearn
#### Data magnitudes
```
print(X.mean())
print(X.std())
```
## 4- Clustering with sklearn
#### Clustering algorithm
For clustering we will use the KMeans algorithm.
Let's apply a clustering algorithm directly, without any preprocessing.
```
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
# Parameters
n_clusters = 3
# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X)
pred_labels = kmeans.labels_
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
```
## 4- Clustering with sklearn
#### Data normalization
It is convenient to scale the data so that the clustering algorithm works better.
```
from sklearn import preprocessing
X_scaled = preprocessing.scale(X)
print(X_scaled.mean())
print(X_scaled.std())
```
## 4- Clustering with sklearn
#### Clustering algorithm
Now we can apply a clustering algorithm to the scaled data.
```
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
# Parameters
n_clusters = 3
# Running the algorithm
kmeans = KMeans(n_clusters)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
```
## 4- Clustering with sklearn
#### Elbow rule
In all cases we have used the fact that the number of clusters equals 3. If we did not know this, we would plot the sum of the distances from each point to its assigned cluster, as a function of the number of clusters.
```
from sklearn.cluster import KMeans
clusters = range(2,20)
total_distance = []
for n_clusters in clusters:
kmeans = KMeans(n_clusters)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_
centroids = kmeans.cluster_centers_
# Get the distances
distance_for_n = 0
for k in range(n_clusters):
points = X_scaled[pred_labels==k]
aux = (points - centroids[k,:])**2
distance_for_n += (aux.sum(axis=1)**0.5).sum()
total_distance.append(distance_for_n)
```
## 4- Clustering with sklearn
Plotting the above, we obtain:
```
from matplotlib import pyplot as plt
fig = plt.figure(figsize=(16,8))
plt.plot(clusters, total_distance, 'rs')
plt.xlim(min(clusters)-1, max(clusters)+1)
plt.ylim(0, max(total_distance)*1.1)
plt.show()
```
## 4- Clustering with sklearn
How hard is it to use another clustering algorithm?
Not hard at all.
Available algorithms:
* K-Means
* Mini-batch K-means
* Affinity propagation
* Mean-shift
* Spectral clustering
* Ward hierarchical clustering
* Agglomerative clustering
* DBSCAN
* Gaussian mixtures
* Birch
Detailed list: [http://scikit-learn.org/stable/modules/clustering.html](http://scikit-learn.org/stable/modules/clustering.html)
```
from sklearn.cluster import KMeans
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing
# Normalization of data
X_scaled = preprocessing.scale(X)
# Running the algorithm
kmeans = KMeans(n_clusters=3)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_
# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
from sklearn.cluster import MiniBatchKMeans
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing
# Normalization of data
X_scaled = preprocessing.scale(X)
# Running the algorithm
kmeans = MiniBatchKMeans(n_clusters=3)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_
# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
from sklearn.cluster import AffinityPropagation
from sklearn.metrics import confusion_matrix
from sklearn import preprocessing
# Normalization of data
X_scaled = preprocessing.scale(X)
# Running the algorithm
kmeans = AffinityPropagation(preference=-300)
kmeans.fit(X_scaled)
pred_labels = kmeans.labels_
# Evaluating the output
cm = confusion_matrix(true_labels, pred_labels)
print(cm)
```
## 5- Classification
#### Digit recognition
The data are found in 2 files, `data/optdigits.train` and `data/optdigits.test`.
As its name indicates, the set `data/optdigits.train` contains the examples to be used to train the model, while the set `data/optdigits.test` will be used to obtain an estimate of the prediction error.
Both files share the same format: each line contains 65 values. The first 64 correspond to the grayscale representation of the image (0-white, 255-black), and the 65th value corresponds to the digit in the image (0-9).
## 5- Classification
#### Loading the data
To load the data, we use np.loadtxt with the extra parameter delimiter (to indicate that the separator is in this case a comma) and with the dtype np.int8 (so that the in-memory representation is as small as possible, 8 bits instead of 32/64 bits for a float).
```
import numpy as np
XY_tv = np.loadtxt("data/optdigits.train", delimiter=",", dtype=np.int8)
print(XY_tv)
X_tv = XY_tv[:,:64]
Y_tv = XY_tv[:, 64]
print(X_tv.shape)
print(Y_tv.shape)
print(X_tv[0,:])
print(X_tv[0,:].reshape(8,8))
print(Y_tv[0])
```
## 5- Classification
#### Visualizing the data
To visualize the data we will use pyplot's imshow method. It is necessary to convert the array from dimensions (1,64) to (8,8) so that the image is square and the digit can be distinguished. We will also overlay the label corresponding to the digit using the text method. We do this for the first 25 examples in the file.
```
from matplotlib import pyplot as plt
# We'll plot the first nx*ny examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j+ny*i
data = X_tv[index,:].reshape(8,8)
label = Y_tv[index]
ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
ax[i][j].text(7, 0, str(int(label)), horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
```
## 5- Classification
#### Trivial training
For classification we will use the K Nearest Neighbours algorithm.
We will train the model with 1 neighbour and check the prediction error on the training set.
```
from sklearn.neighbors import KNeighborsClassifier
k = 1
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)
Y_pred = kNN.predict(X_tv)
n_errors = sum(Y_pred!=Y_tv)
print("There are %d errors out of a total of %d training examples" %(n_errors, len(Y_tv)))
```
The best prediction for a point is the point itself!
But this would generalize catastrophically.
It is extremely important to **train** on one dataset and then check how it generalizes/performs on a **completely new** dataset.
## 5- Classification
#### Selecting the right number of neighbours
Searching for the most appropriate value of k.
From the analysis above, we realize the need to:
1. Compute the error on a set different from the one used for training.
2. Find the best number of neighbours for the algorithm.
(This will take a while)
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
template = "k={0:,d}: {1:.1f} +- {2:.1f} % classification error over a total of {3:,d} points"
# Fitting the model
mean_error_for_k = []
std_error_for_k = []
k_range = range(1,8)
for k in k_range:
errors_k = []
for i in range(10):
kNN = KNeighborsClassifier(n_neighbors=k)
X_train, X_valid, Y_train, Y_valid = train_test_split(X_tv, Y_tv, train_size=0.75)
kNN.fit(X_train, Y_train)
# Predicting values
Y_valid_pred = kNN.predict(X_valid)
# Count the errors
n_errors = sum(Y_valid!=Y_valid_pred)
# Add them to vector
errors_k.append(100.*n_errors/len(Y_valid))
errors = np.array(errors_k)
print(template.format(k, errors.mean(), errors.std(), len(Y_valid)))
mean_error_for_k.append(errors.mean())
std_error_for_k.append(errors.std())
```
## 5- Classification
We can visualize the above data using the following code, which requires that `std_error_for_k` and `mean_error_for_k` have been appropriately defined.
```
mean = np.array(mean_error_for_k)
std = np.array(std_error_for_k)
plt.figure(figsize=(12,8))
plt.plot(k_range, mean - std, "k:")
plt.plot(k_range, mean , "r.-")
plt.plot(k_range, mean + std, "k:")
plt.xlabel("Number of neighbours k")
plt.ylabel("Classification error (%)")
plt.show()
```
## 5- Classification
#### Training the full model
Based on the above, the number of neighbours is set to $k=3$ and the model is trained with all the data.
```
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import train_test_split
import numpy as np
k = 3
kNN = KNeighborsClassifier(n_neighbors=k)
kNN.fit(X_tv, Y_tv)
```
## 5- Classification
#### Prediction on the testing dataset
Now that the kNN model has been fully trained, we will compute the prediction error on a completely new dataset: the testing set.
```
# Loading the file data/optdigits.test
XY_test = np.loadtxt("data/optdigits.test", delimiter=",")
X_test = XY_test[:,:64]
Y_test = XY_test[:, 64]
# Predicting the labels
Y_pred = kNN.predict(X_test)
```
## 5- Classification
Since we have the true labels for the testing set, we can visualize which digits have been correctly labeled.
```
from matplotlib import pyplot as plt
# Show the correctly classified examples
mask = (Y_pred==Y_test)
X_aux = X_test[mask]
Y_aux_true = Y_test[mask]
Y_aux_pred = Y_pred[mask]
# We'll plot the first nx*ny examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j+ny*i
data = X_aux[index,:].reshape(8,8)
label_pred = str(int(Y_aux_pred[index]))
label_true = str(int(Y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
ax[i][j].text(0, 0, label_pred, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='green')
ax[i][j].text(7, 0, label_true, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
```
## 5- Classification
#### Visualizing incorrect labels
More interesting than the previous plot is to consider the cases where digits have been incorrectly labeled.
```
from matplotlib import pyplot as plt
# Show the misclassified examples
mask = (Y_pred!=Y_test)
X_aux = X_test[mask]
Y_aux_true = Y_test[mask]
Y_aux_pred = Y_pred[mask]
# We'll plot the first nx*ny examples
nx, ny = 5, 5
fig, ax = plt.subplots(nx, ny, figsize=(12,12))
for i in range(nx):
for j in range(ny):
index = j+ny*i
data = X_aux[index,:].reshape(8,8)
label_pred = str(int(Y_aux_pred[index]))
label_true = str(int(Y_aux_true[index]))
ax[i][j].imshow(data, interpolation='nearest', cmap=plt.get_cmap('gray_r'))
ax[i][j].text(0, 0, label_pred, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='red')
ax[i][j].text(7, 0, label_true, horizontalalignment='center',
verticalalignment='center', fontsize=10, color='blue')
ax[i][j].get_xaxis().set_visible(False)
ax[i][j].get_yaxis().set_visible(False)
plt.show()
```
## 5- Classification
#### Error analysis
After the visual exploration of the results, we want to obtain the real prediction error of the model.
Are there digits that are easier or harder to classify?
```
# Overall error
mask = (Y_pred!=Y_test)
error_prediccion = 100.*sum(mask) / len(mask)
print("Total prediction error of {0:.1f} %".format(error_prediccion))
for digito in range(0,10):
mask_digito = Y_test==digito
Y_test_digito = Y_test[mask_digito]
Y_pred_digito = Y_pred[mask_digito]
mask = Y_test_digito!=Y_pred_digito
error_prediccion = 100.*sum((Y_pred_digito!=Y_test_digito)) / len(Y_pred_digito)
    print("Prediction error for digit {0:d}: {1:.1f} %".format(digito, error_prediccion))
```
## 5- Classification
#### Error analysis (continued)
The following code shows the classification errors as a confusion matrix, which lets us check which digits get confused with each other.
```
from sklearn.metrics import confusion_matrix as cm
cm = cm(Y_test, Y_pred)
print(cm)
# As in http://scikit-learn.org/stable/auto_examples/model_selection/plot_confusion_matrix.html
def plot_confusion_matrix(cm, title='Confusion matrix', cmap=plt.cm.jet):
plt.figure(figsize=(10,10))
plt.imshow(cm, interpolation='nearest', cmap=cmap)
plt.title(title)
plt.colorbar()
tick_marks = np.arange(10)
plt.xticks(tick_marks, tick_marks)
plt.yticks(tick_marks, tick_marks)
plt.tight_layout()
plt.ylabel('True label')
plt.xlabel('Predicted label')
plt.show()
return None
# Compute confusion matrix
plt.figure()
plot_confusion_matrix(cm)
# Normalize the confusion matrix by row (i.e by the number of samples in each class)
cm_normalized = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
plot_confusion_matrix(cm_normalized, title='Normalized confusion matrix')
```
## 5- Classification
From the above, we observe that the largest errors are:
* A 2 can be misclassified as a 1 (but not vice versa).
* A 7 can be misclassified as a 9 (but not vice versa).
* An 8 can be misclassified as a 1 (but not vice versa).
* A 9 can be misclassified as a 3 (but not vice versa).
## 5- Classification
#### Questions
Is this the best classification method? What other methods could be used?
Multiple families of algorithms (a quick sketch with one of them follows after the list):
* Logistic Regression
* Naive Bayes
* Decision Trees
* Random Forests
* Support Vector Machines
* Neural Networks
* Etc etc
link: [http://scikit-learn.org/stable/supervised_learning.html](http://scikit-learn.org/stable/supervised_learning.html)
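As a quick illustration of trying another family of models on the same digits data (assuming `X_tv` and `Y_tv` from the cells above are still in memory), a random forest can be evaluated in exactly the same way; this is just one option among the families listed above:
```
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

X_train, X_valid, Y_train, Y_valid = train_test_split(X_tv, Y_tv, train_size=0.75)

rf = RandomForestClassifier(n_estimators=100)
rf.fit(X_train, Y_train)

# Percentage of misclassified digits on the validation split
error = 100. * sum(rf.predict(X_valid) != Y_valid) / len(Y_valid)
print("Validation classification error: {0:.1f} %".format(error))
```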
## 5- Conclusion
Sklearn has many algorithms implemented and is easy to use.
However, keep GIGO in mind: Garbage In, Garbage Out:
* Initial exploration and visualization of the data.
* Data cleaning.
* Using an algorithm requires understanding how it works in order to tune its parameters well.
* It is good, and easy, to try more than one algorithm.
## 5- Conclusion
And finally:
* Applying ML algorithms is delicate because it requires (1) knowing the data well and (2) understanding the limitations of the algorithm.
* Always keep a training sample and a testing sample: a prediction is useless if it does not come with a margin of error.
## 5- Conclusion
# Thank you!
| github_jupyter |
```
import tensorflow as tf
print(tf.__version__)
import numpy as np
import matplotlib.pyplot as plt
def plot_series(time, series, format="-", start=0, end=None):
plt.plot(time[start:end], series[start:end], format)
plt.xlabel("Time")
plt.ylabel("Value")
plt.grid(True)
#!wget --no-check-certificate \
# https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv \
# -O /tmp/daily-min-temperatures.csv
root = r'D:\Users\Arkady\Verint\Coursera_2019_Tensorflow_Specialization\Course4_Sequences_TimeSeries_Prediction'
srcurl = 'https://raw.githubusercontent.com/jbrownlee/Datasets/master/daily-min-temperatures.csv'
#import pandas as pd
#df = pd.read_csv(srcurl)
#df.to_csv(root + '/tmp/daily-min-temperatures.csv')
import csv
time_step = []
temps = []
with open(root + '/tmp/daily-min-temperatures.csv') as csvfile:
reader = csv.reader(csvfile, delimiter=',')
next(reader)
step=0
for row in reader:
temps.append(float(row[2]))
time_step.append(step)
step = step + 1
series = np.array(temps)
time = np.array(time_step)
plt.figure(figsize=(10, 6))
plot_series(time, series)
split_time = 2500
time_train = time[:split_time]
x_train = series[:split_time]
time_valid = time[split_time:]
x_valid = series[split_time:]
window_size = 30
batch_size = 32
shuffle_buffer_size = 1000
def windowed_dataset(series, window_size, batch_size, shuffle_buffer):
    # Add a trailing feature dimension so every time step is a length-1 vector
    series = tf.expand_dims(series, axis=-1)
    ds = tf.data.Dataset.from_tensor_slices(series)
    # Sliding windows of window_size + 1 values: inputs plus one-step-ahead targets
    ds = ds.window(window_size + 1, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size + 1))
    ds = ds.shuffle(shuffle_buffer)
    # Split each window into (inputs, targets shifted by one step) for training
    ds = ds.map(lambda w: (w[:-1], w[1:]))
    return ds.batch(batch_size).prefetch(1)
def model_forecast(model, series, window_size):
    # Build non-shuffled windows over the whole series and predict on each of them
    ds = tf.data.Dataset.from_tensor_slices(series)
    ds = ds.window(window_size, shift=1, drop_remainder=True)
    ds = ds.flat_map(lambda w: w.batch(window_size))
    ds = ds.batch(32).prefetch(1)
    forecast = model.predict(ds)
    return forecast
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
window_size = 64
batch_size = 256
train_set = windowed_dataset(x_train, window_size, batch_size, shuffle_buffer_size)
print(train_set)
print(x_train.shape)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=32, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.LSTM(64, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
lr_schedule = tf.keras.callbacks.LearningRateScheduler(
lambda epoch: 1e-8 * 10**(epoch / 20))
optimizer = tf.keras.optimizers.SGD(lr=1e-8, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set, epochs=100, callbacks=[lr_schedule])
plt.semilogx(history.history["lr"], history.history["loss"])
plt.axis([1e-8, 1e-4, 0, 60])
tf.keras.backend.clear_session()
tf.random.set_seed(51)
np.random.seed(51)
train_set = windowed_dataset(x_train, window_size=60, batch_size=100, shuffle_buffer=shuffle_buffer_size)
model = tf.keras.models.Sequential([
tf.keras.layers.Conv1D(filters=60, kernel_size=5,
strides=1, padding="causal",
activation="relu",
input_shape=[None, 1]),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.LSTM(60, return_sequences=True),
tf.keras.layers.Dense(30, activation="relu"),
tf.keras.layers.Dense(10, activation="relu"),
tf.keras.layers.Dense(1),
tf.keras.layers.Lambda(lambda x: x * 400)
])
optimizer = tf.keras.optimizers.SGD(lr=1e-5, momentum=0.9)
model.compile(loss=tf.keras.losses.Huber(),
optimizer=optimizer,
metrics=["mae"])
history = model.fit(train_set,epochs=150)
rnn_forecast = model_forecast(model, series[..., np.newaxis], window_size)
rnn_forecast = rnn_forecast[split_time - window_size:-1, -1, 0]
plt.figure(figsize=(10, 6))
plot_series(time_valid, x_valid)
plot_series(time_valid, rnn_forecast)
tf.keras.metrics.mean_absolute_error(x_valid, rnn_forecast).numpy()
print(rnn_forecast)
```
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import h5py
archive = h5py.File('/Users/bmmorris/git/aesop/notebooks/spectra.hdf5', 'r+')
targets = list(archive)
list(archive['HD122120'])#['2017-09-11T03:27:13.140']['flux'][:]
from scipy.ndimage import gaussian_filter1d
spectrum1 = archive['HATP11']['2017-06-12T07:28:06.310'] # K4
spectrum2 = archive['HD110833']['2017-03-17T05:47:24.899'] # K3
spectrum3 = archive['HD122120']['2017-06-15T03:52:13.690'] # K5
wavelength1 = spectrum1['wavelength'][:]
flux1 = spectrum1['flux'][:]
wavelength2 = spectrum2['wavelength'][:]
flux2 = spectrum2['flux'][:]
wavelength3 = spectrum3['wavelength'][:]
flux3 = spectrum3['flux'][:]
plt.plot(wavelength1, flux1)
plt.plot(wavelength2, gaussian_filter1d(flux2, 1))# + 0.2)
plt.plot(wavelength3, gaussian_filter1d(flux3, 1))# + 0.4)
plt.ylim([0.5, 1.1])
#plt.xlim([3900, 4000])
# plt.xlim([7035, 7075])
plt.xlim([8850, 8890])
import sys
sys.path.insert(0, '../')
from toolkit import SimpleSpectrum
import astropy.units as u
target = SimpleSpectrum(wavelength1, flux1, dispersion_unit=u.Angstrom)
source1 = SimpleSpectrum(wavelength2, flux2, dispersion_unit=u.Angstrom)
source2 = SimpleSpectrum(wavelength3, flux3, dispersion_unit=u.Angstrom)
from toolkit import instr_model
from toolkit import slice_spectrum, concatenate_spectra, bands_TiO
spec_band = []
first_n_bands = 5
width = 5
for band in bands_TiO[:first_n_bands]:
target_slice = slice_spectrum(target, band.min-width*u.Angstrom, band.max+width*u.Angstrom)
target_slice.flux /= target_slice.flux.max()
spec_band.append(target_slice)
target_slices = concatenate_spectra(spec_band)
target_slices.plot(color='k', lw=2, marker='.')
spec_band = []
for band, inds in zip(bands_TiO[:first_n_bands], target_slices.wavelength_splits):
target_slice = slice_spectrum(source1, band.min-width*u.Angstrom, band.max+width*u.Angstrom,
force_length=abs(np.diff(inds))[0])
target_slice.flux /= target_slice.flux.max()
spec_band.append(target_slice)
source1_slices = concatenate_spectra(spec_band)
source1_slices.plot(color='r', lw=2, marker='.')
spec_band = []
for band, inds in zip(bands_TiO[:first_n_bands], target_slices.wavelength_splits):
target_slice = slice_spectrum(source2, band.min-width*u.Angstrom, band.max+width*u.Angstrom,
force_length=abs(np.diff(inds))[0])
target_slice.flux /= target_slice.flux.max()
spec_band.append(target_slice)
source2_slices = concatenate_spectra(spec_band)
source2_slices.plot(color='b', lw=2, marker='.')
def plot_spliced_spectrum(observed_spectrum, model_flux, other_model=None):
n_chunks = len(observed_spectrum.wavelength_splits)
fig, ax = plt.subplots(n_chunks, 1, figsize=(8, 10))
for i, inds in enumerate(observed_spectrum.wavelength_splits):
min_ind, max_ind = inds
ax[i].errorbar(observed_spectrum.wavelength[min_ind:max_ind].value,
observed_spectrum.flux[min_ind:max_ind],
0.025*np.ones(max_ind-min_ind))
ax[i].plot(observed_spectrum.wavelength[min_ind:max_ind],
model_flux[min_ind:max_ind])
if other_model is not None:
ax[i].plot(observed_spectrum.wavelength[min_ind:max_ind],
other_model[min_ind:max_ind], alpha=0.4)
ax[i].set_xlim([observed_spectrum.wavelength[min_ind].value,
observed_spectrum.wavelength[max_ind-1].value])
ax[i].set_ylim([0.9*observed_spectrum.flux[min_ind:max_ind].min(),
1.1])
return fig, ax
plot_spliced_spectrum(target_slices, source1_slices.flux, source2_slices.flux)
model, resid = instr_model(target_slices, source1_slices, source2_slices, np.log(0.5), 1, 1, 0, 0, 0, 0, 0)
plt.plot(target_slices.flux - model)
# from scipy.optimize import fmin_l_bfgs_b
# def chi2(p, target, temp_phot, temp_spot):
# spotted_area, lam_offset0, lam_offset1, lam_offset2, res = p
# lam_offsets = [lam_offset0, lam_offset1, lam_offset1]
# model, residuals = instr_model(target, temp_phot, temp_spot, spotted_area,
# res, *lam_offsets)
# return residuals
# bounds = [[-30, 0], [-2, 2], [-2, 2], [-2, 2], [1, 15]]
# initp = [np.log(0.03), 0.0, 0.0, 0.0, 1]
# bfgs_options_fast = dict(epsilon=1e-3, approx_grad=True,
# m=10, maxls=20)
# bfgs_options_precise = dict(epsilon=1e-3, approx_grad=True,
# m=30, maxls=50)
# result = fmin_l_bfgs_b(chi2, initp, bounds=bounds,
# args=(target_slices, source1_slices, source2_slices),
# **bfgs_options_precise)
# #**bfgs_options_fast)
# model, resid = instr_model(target_slices, source1_slices, source2_slices, *result[0])
# plot_spliced_spectrum(target_slices, model)
import emcee
yerr = 0.01
def random_in_range(min, max):
return (max-min)*np.random.rand(1)[0] + min
def lnprior(theta):
log_spotted_area, res = theta[:2]
dlambdas = theta[2:]
if (-15 < log_spotted_area <= 0 and 0. <= res < 3 and all([-3 < dlambda < 3 for dlambda in dlambdas])):
return 0.0
return -np.inf
def lnlike(theta, target, source1, source2):
log_spotted_area, res = theta[:2]
dlambdas = theta[2:]
model, residuals = instr_model(target, source1, source2, np.exp(log_spotted_area),
res, *dlambdas)
return -0.5*residuals/yerr**2
def lnprob(theta, target, source1, source2):
lp = lnprior(theta)
if not np.isfinite(lp):
return -np.inf
return lp + lnlike(theta, target, source1, source2)
from emcee import EnsembleSampler
dlam_init = -0.2
# initp = np.array([np.log(0.01), 1, dlam_init, dlam_init, dlam_init, dlam_init, dlam_init])
ndim, nwalkers = 6, 30
pos = []
counter = -1
while len(pos) < nwalkers:
realization = [random_in_range(-10, -8), random_in_range(0, 1),
random_in_range(dlam_init-0.1, dlam_init+0.1), random_in_range(dlam_init-0.1, dlam_init+0.1),
random_in_range(dlam_init-0.1, dlam_init+0.1), random_in_range(dlam_init-0.1, dlam_init+0.1)]
if np.isfinite(lnprior(realization)):
pos.append(realization)
sampler = EnsembleSampler(nwalkers, ndim, lnprob, threads=8,
args=(target_slices, source1_slices, source2_slices))
sampler.run_mcmc(pos, 4000);
from corner import corner
samples = sampler.chain[:, 1500:, :].reshape((-1, ndim))
corner(samples, labels=['$\log f_s$', '$R$', '$\Delta \lambda_0$', '$\Delta \lambda_1$',
'$\Delta \lambda_2$', '$\Delta \lambda_3$']);#, '$\Delta \lambda_4$']);
best_params = sampler.flatchain[np.argmax(sampler.flatlnprobability, axis=0), :]
best_model = instr_model(target_slices, source1_slices, source2_slices,
*best_params)[0]
best_params
# maximum spotted area
np.exp(np.percentile(samples[:, 0], 98))
n_chunks = len(target_slices.wavelength_splits)
fig, ax = plt.subplots(n_chunks, 1, figsize=(8, 10))
from copy import deepcopy
from toolkit.analysis import gaussian_kernel
for i, inds in enumerate(target_slices.wavelength_splits):
min_ind, max_ind = inds
ax[i].errorbar(target_slices.wavelength[min_ind:max_ind].value,
target_slices.flux[min_ind:max_ind],
yerr*np.ones_like(target_slices.flux[min_ind:max_ind]),
fmt='o', color='k')
#0.025*np.ones(max_ind-min_ind), fmt='.')
ax[i].plot(target_slices.wavelength[min_ind:max_ind],
best_model[min_ind:max_ind], color='r')
ax[i].set_xlim([target_slices.wavelength[min_ind].value,
target_slices.wavelength[max_ind-1].value])
#ax[i].set_ylim([0.9*target_slices.flux[min_ind:max_ind].min(),
# 1.1])
n_random_draws = 100
# draw models from posteriors
for j in range(n_random_draws):
step = np.random.randint(0, samples.shape[0])
random_step = samples[step, :]
rand_model = instr_model(target_slices, source1_slices, source2_slices, *random_step)[0]
for i, inds in enumerate(target_slices.wavelength_splits):
min_ind, max_ind = inds
ax[i].plot(target_slices.wavelength[min_ind:max_ind],
rand_model[min_ind:max_ind], color='#389df7', alpha=0.1)
```
| github_jupyter |
<h1>Table of Contents<span class="tocSkip"></span></h1>
<div class="toc"><ul class="toc-item"><li><span><a href="#Download-and-Clean-Data" data-toc-modified-id="Download-and-Clean-Data-1"><span class="toc-item-num">1 </span>Download and Clean Data</a></span></li><li><span><a href="#Making-Recommendations" data-toc-modified-id="Making-Recommendations-2"><span class="toc-item-num">2 </span>Making Recommendations</a></span><ul class="toc-item"><li><span><a href="#BERT" data-toc-modified-id="BERT-2.1"><span class="toc-item-num">2.1 </span>BERT</a></span></li><li><span><a href="#Doc2vec" data-toc-modified-id="Doc2vec-2.2"><span class="toc-item-num">2.2 </span>Doc2vec</a></span></li><li><span><a href="#LDA" data-toc-modified-id="LDA-2.3"><span class="toc-item-num">2.3 </span>LDA</a></span></li><li><span><a href="#TFIDF" data-toc-modified-id="TFIDF-2.4"><span class="toc-item-num">2.4 </span>TFIDF</a></span></li></ul></li></ul></div>
**rec_books**
Downloads an English Wikipedia dump and parses it for all available books. All available models are then run to compare recommendation efficacy.
If using this notebook in [Google Colab](https://colab.research.google.com/github/andrewtavis/wikirec/blob/main/examples/rec_books.ipynb), you can activate GPUs by following `Edit > Notebook settings > Hardware accelerator` and selecting `GPU`.
```
# pip install wikirec -U
```
The following gensim update might be necessary in Google Colab as the default version is very low.
```
# pip install gensim -U
```
In Colab you'll also need to download nltk's names data.
```
# import nltk
# nltk.download("names")
import os
import json
import pickle
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="darkgrid")
sns.set(rc={"figure.figsize": (15, 5)})
from wikirec import data_utils, model, utils
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:99% !important; }</style>"))
```
# Download and Clean Data
```
files = data_utils.download_wiki(
language="en", target_dir="./enwiki_dump", file_limit=-1, dump_id=False
)
len(files)
topic = "books"
data_utils.parse_to_ndjson(
topics=topic,
output_path="./enwiki_books.ndjson",
input_dir="./enwiki_dump",
partitions_dir="./enwiki_book_partitions",
limit=None,
delete_parsed_files=True,
multicore=True,
verbose=True,
)
with open("./enwiki_books.ndjson", "r") as fin:
books = [json.loads(l) for l in fin]
print(f"Found a total of {len(books)} books.")
titles = [m[0] for m in books]
texts = [m[1] for m in books]
if os.path.isfile("./book_corpus_idxs.pkl"):
print(f"Loading book corpus and selected indexes")
with open(f"./book_corpus_idxs.pkl", "rb") as f:
text_corpus, selected_idxs = pickle.load(f)
selected_titles = [titles[i] for i in selected_idxs]
else:
print(f"Creating book corpus and selected indexes")
text_corpus, selected_idxs = data_utils.clean(
texts=texts,
language="en",
min_token_freq=5, # 0 for Bert
min_token_len=3, # 0 for Bert
min_tokens=50,
max_token_index=-1,
min_ngram_count=3,
remove_stopwords=True, # False for Bert
ignore_words=None,
remove_names=True,
sample_size=1,
verbose=True,
)
selected_titles = [titles[i] for i in selected_idxs]
with open("./book_corpus_idxs.pkl", "wb") as f:
print("Pickling book corpus and selected indexes")
pickle.dump([text_corpus, selected_idxs], f, protocol=4)
```
# Making Recommendations
```
single_input_0 = "Harry Potter and the Philosopher's Stone"
single_input_1 = "The Hobbit"
multiple_inputs = ["Harry Potter and the Philosopher's Stone", "The Hobbit"]
def load_or_create_sim_matrix(
method,
corpus,
metric,
topic,
path="./",
bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
**kwargs,
):
"""
    Loads or creates a similarity matrix to deliver recommendations
NOTE: the .pkl files made are 5-10GB or more in size
"""
if os.path.isfile(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl"):
print(f"Loading {method} {topic} {metric} similarity matrix")
with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "rb") as f:
sim_matrix = pickle.load(f)
else:
print(f"Creating {method} {topic} {metric} similarity matrix")
embeddings = model.gen_embeddings(
method=method, corpus=corpus, bert_st_model=bert_st_model, **kwargs,
)
sim_matrix = model.gen_sim_matrix(
method=method, metric=metric, embeddings=embeddings,
)
with open(f"{path}{topic}_{metric}_{method}_sim_matrix.pkl", "wb") as f:
print(f"Pickling {method} {topic} {metric} similarity matrix")
pickle.dump(sim_matrix, f, protocol=4)
return sim_matrix
```
## BERT
```
# Remove n-grams for BERT training
corpus_no_ngrams = [
" ".join([t for t in text.split(" ") if "_" not in t]) for text in text_corpus
]
# We can pass kwargs for sentence_transformers.SentenceTransformer.encode
bert_sim_matrix = load_or_create_sim_matrix(
method="bert",
corpus=corpus_no_ngrams,
metric="cosine", # euclidean
topic=topic,
path="./",
bert_st_model="xlm-r-bert-base-nli-stsb-mean-tokens",
show_progress_bar=True,
batch_size=32,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=bert_sim_matrix,
n=10,
metric="cosine",
)
```
## Doc2vec
```
# We can pass kwargs for gensim.models.doc2vec.Doc2Vec
doc2vec_sim_matrix = load_or_create_sim_matrix(
method="doc2vec",
corpus=text_corpus,
metric="cosine", # euclidean
topic=topic,
path="./",
vector_size=100,
epochs=10,
alpha=0.025,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=doc2vec_sim_matrix,
n=10,
metric="cosine",
)
```
## LDA
```
topic_nums_to_compare = [1, 10, 20, 30, 40, 50, 60, 70, 80, 90, 100]
# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
utils.graph_lda_topic_evals(
corpus=text_corpus,
num_topic_words=10,
topic_nums_to_compare=topic_nums_to_compare,
metrics=True,
verbose=True,
)
plt.show()
# We can pass kwargs for gensim.models.ldamulticore.LdaMulticore
lda_sim_matrix = load_or_create_sim_matrix(
method="lda",
corpus=text_corpus,
metric="cosine", # euclidean not an option at this time
topic=topic,
path="./",
num_topics=90,
passes=10,
decay=0.5,
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=lda_sim_matrix,
n=10,
metric="cosine",
)
```
## TFIDF
```
# We can pass kwargs for sklearn.feature_extraction.text.TfidfVectorizer
tfidf_sim_matrix = load_or_create_sim_matrix(
method="tfidf",
corpus=text_corpus,
metric="cosine", # euclidean
topic=topic,
path="./",
max_features=None,
norm='l2',
)
model.recommend(
inputs=single_input_0,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=single_input_1,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
model.recommend(
inputs=multiple_inputs,
titles=selected_titles,
sim_matrix=tfidf_sim_matrix,
n=10,
metric="cosine",
)
```
| github_jupyter |
# CI coverage, length and bias
For event related design.
```
# Directories of the data for different scenario's
DATAwd <- list(
'Take[8mmBox10]' = "/Volumes/2_TB_WD_Elements_10B8_Han/PhD/IBMAvsGLM/Results/Cambridge/ThirdLevel/8mm/boxcar10",
'Take[8mmEvent2]' = "/Volumes/2_TB_WD_Elements_10B8_Han/PhD/IBMAvsGLM/Results/Cambridge/ThirdLevel/8mm/event2"
)
NUMDATAwd <- length(DATAwd)
currentWD <- 2
# Number of confidence intervals
CIs <- c('MA-weightVar','GLM-t')
NumCI <- length(CIs)
# Number of executed runs
nruns.tmp <- matrix(c(
1,2500,
2,500
), ncol=2, byrow=TRUE)
nruns <- nruns.tmp[currentWD,2]
# Number of subjects and studies
nsub <- 20
nstud <- 5
# Dimension of brain
DIM <- c(91,109,91)
# True value
trueVal <- 0
# Load in libraries
library(oro.nifti)
library(dplyr)
library(lattice)
library(grDevices)
library(ggplot2)
library(data.table)
library(gridExtra)
# Function to count the number of instances in which true value is between lower and upper CI.
indicator <- function(UPPER, LOWER, trueval){
IND <- trueval >= LOWER & trueval <= UPPER
IND[is.na(IND)] <- 0
return(IND)
}
# Function to count the number of recorded values
counting <- function(UPPER, LOWER){
count <- (!is.na(UPPER) & !is.na(LOWER))
return(count)
}
##
###############
### Data Wrangling
###############
##
######################################################
# First we create a universal mask over all iterations
######################################################
# Set warnings off
# options(warn = -1)
# Vector to check progress
CheckProgr <- floor(seq(1,nruns,length.out=10))
# Vector of simulations where we have a missing mask
missingMask <- c()
# Do you want to make an universal mask again?
WRITEMASK <- FALSE
if(isTRUE(WRITEMASK)){
# Vector with all masks in it
AllMask <- c()
# Load in the masks
for(i in 1:nruns){
# Print progress
if(i %in% CheckProgr) print(paste('LOADING MASKS. NOW AT ', (i/nruns)*100, '%', sep = ''))
# Try reading in mask, then go to one column and convert to data frame.
CheckMask <- try(readNIfTI(paste(DATAwd[[currentWD]], '/', i,'/mask.nii', sep = ''))[,,,1] %>%
matrix(.,ncol = 1) %>% data.frame(), silent = TRUE)
# If there is no mask, skip iteration
if(class(CheckMask) == "try-error"){ missingMask <- c(missingMask, i); next}
# Some masks are broken: if all values are zero: REPORT
if(all(CheckMask == 0)){print(paste("CHECK MASK AT ITERATION ", i, sep = "")); next}
# Bind the masks of all iterations together
AllMask <- bind_cols(AllMask, CheckMask)
rm(CheckMask)
}
# Take product to have universal mask
UnivMask <- apply(AllMask, 1, prod)
# Better write this to folder
niftiimage <- nifti(img=array(UnivMask, dim = DIM),dim=DIM)
writeNIfTI(niftiimage,filename=paste(DATAwd[[currentWD]],'/universalMask',sep=''),gzipped=FALSE)
}
if(isTRUE(!WRITEMASK)){
# Read in mask
UnivMask <- readNIfTI(paste(DATAwd[[currentWD]],'/universalMask.nii', sep = ''))[,,] %>%
matrix(.,ncol = 1)
}
# Load the naming structure of the data
load(paste(paste(DATAwd[['Take[8mmBox10]']], '/1/ObjectsRestMAvsGLM_1.RData',sep=''))); objects <- names(ObjectsRestMAvsGLM); rm(ObjectsRestMAvsGLM)
OBJ.ID <- c(rep(objects[!objects %in% c("STHEDGE","STWEIGHTS")], each=prod(DIM)), rep(c("STHEDGE","STWEIGHTS"), each=c(prod(DIM)*nstud)))
objects.CI <- objects[grepl(c('upper'), objects) | grepl(c('lower'), objects)]
# Pre-define the CI coverage and length vectors in which we sum the values
# After running nruns, divide by amount of obtained runs.
# For bias, we work with VAR(X) = E(X**2) - E(X)**2 and a vector in which we sum the bias.
# Hence, we need to sum X**2 and X in a separate vector.
summed.coverage.IBMA <- summed.coverage.GLM <-
summed.length.IBMA <- summed.length.GLM <-
summed.X.IBMA <- summed.X.GLM <-
summed.X2.IBMA <- summed.X2.GLM <-
array(0,dim=c(sum(UnivMask == 1),1))
# Keeping count of amount of values
counterMA <- counterGLM <- 0
# Load in the data
t1 <- Sys.time()
for(i in 1:nruns){
if(i %in% CheckProgr) print(paste('PROCESSING. NOW AT ', (i/nruns)*100, '%', sep = ''))
# CI coverage: loop over the two procedures
for(p in 1:2){
objUP <- objects.CI[grepl(c('upper'), objects.CI)][p] %>% gsub(".", "_",.,fixed = TRUE)
objLOW <- objects.CI[grepl(c('lower'), objects.CI)][p] %>% gsub(".", "_",.,fixed = TRUE)
UP <- try(fread(file = paste(DATAwd[[currentWD]], '/', i, '/', objUP, '.txt', sep = ''), header = FALSE) %>% filter(., UnivMask == 1), silent = TRUE)
if(class(UP) == "try-error"){print(paste('Missing data in iteration ', i, sep = '')); next}
LOW <- fread(file = paste(DATAwd[[currentWD]], '/',i, '/', objLOW, '.txt', sep = ''), header = FALSE) %>% filter(., UnivMask == 1)
if(grepl('MA', x = objUP)){
# CI coverage: add when true value in CI
summed.coverage.IBMA[,1] <- summed.coverage.IBMA[,1] +
indicator(UPPER = UP, LOWER = LOW, trueval = 0)
# CI length: sum the length
summed.length.IBMA[,1] <- summed.length.IBMA[,1] + as.matrix(UP - LOW)
# Add one to the count (if data is available)
counterMA <- counterMA + counting(UPPER = UP, LOWER = LOW)
}else{
# GLM procedure: CI coverage
summed.coverage.GLM[,1] <- summed.coverage.GLM[,1] +
indicator(UPPER = UP, LOWER = LOW, trueval = 0)
# CI length: sum the length
summed.length.GLM[,1] <- summed.length.GLM[,1] + as.matrix(UP - LOW)
# Count
counterGLM <- counterGLM + counting(UPPER = UP, LOWER = LOW)
}
rm(objUP, objLOW, UP, LOW)
}
# Standardized bias: read in weighted average / cope
WAVG <- fread(file = paste(DATAwd[[currentWD]], '/', i, '/MA_WeightedAvg.txt', sep = ''), header = FALSE) %>% filter(., UnivMask == 1)
GLMCOPE <- fread(file = paste(DATAwd[[currentWD]], '/', i, '/GLM_COPE', '.txt', sep = ''), header = FALSE) %>% filter(., UnivMask == 1)
# Sum X
summed.X.IBMA[,1] <- summed.X.IBMA[,1] + as.matrix(WAVG)
summed.X.GLM[,1] <- summed.X.GLM[,1] + as.matrix(GLMCOPE)
# Sum X**2
summed.X2.IBMA[,1] <- summed.X2.IBMA[,1] + as.matrix(WAVG ** 2)
summed.X2.GLM[,1] <- summed.X2.GLM[,1] + as.matrix(GLMCOPE ** 2)
}
Sys.time() - t1
# Calculate the average (over nsim) CI coverage, length and bias
Coverage.IBMA <- summed.coverage.IBMA/counterMA
Coverage.GLM <- summed.coverage.GLM/counterGLM
Length.IBMA <- summed.length.IBMA/counterMA
Length.GLM <- summed.length.GLM/counterGLM
# Formula: Var(X) = E(X**2) - [E(X)]**2
# E(X**2) = sum(X**2) / n
# E(X) = sum(X) / n
# \hat{var(X)} = var(X) * (N / N-1)
# \hat{SD} = sqrt(\hat{var(X)})
samplingSD.IBMA <- sqrt(((summed.X2.IBMA/(counterMA)) - ((summed.X.IBMA/counterMA)**2)) * (counterMA / (counterMA - 1)))
samplingSD.GLM <- sqrt(((summed.X2.GLM/(counterGLM)) - ((summed.X.GLM/counterGLM)**2)) * (counterGLM / (counterGLM - 1)))
# Standardized bias: true beta = 0
Bias.IBMA <- ((summed.X.IBMA / counterMA) - 0) / samplingSD.IBMA
Bias.GLM <- ((summed.X.GLM / counterGLM) - 0) / samplingSD.GLM
# Heatmap of the coverages
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- c(summed.coverage.IBMA/counterMA)
emptBrainGLM[UnivMask == 1] <- c(summed.coverage.GLM/counterGLM)
LevelPlotMACoV <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'CI coverage meta-analysis')
LevelPlotGLMCoV <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'CI coverage GLM')
# Bias
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- Bias.IBMA
emptBrainGLM[UnivMask == 1] <- Bias.GLM
LevelPlotMABias <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'Standardized bias Meta-Analysis')
LevelPlotGLMBias <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'Standardized bias GLM')
DifferenceBias <- levelplot(array(emptBrainIBMA - emptBrainGLM, dim = DIM)[,,c(36:46)], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'Bias MA - GLM')
# CI length
emptBrainIBMA <- emptBrainGLM <- array(NA, dim = prod(DIM))
emptBrainIBMA[UnivMask == 1] <- Length.IBMA
emptBrainGLM[UnivMask == 1] <- Length.GLM
LevelPlotMACL <- levelplot(array(emptBrainIBMA, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'CI length Meta-Analysis')
LevelPlotGLMCL <- levelplot(array(emptBrainGLM, dim = DIM)[,,40], col.regions = topo.colors,
xlim=c(0,DIM[1]),ylim=c(0,DIM[2]), xlab = 'x', ylab = 'y',
main = 'CI length GLM')
grid.arrange(LevelPlotMACoV,LevelPlotGLMCoV, ncol = 2)
grid.arrange(LevelPlotMABias,LevelPlotGLMBias, ncol = 2)
grid.arrange(LevelPlotMACL,LevelPlotGLMCL, ncol = 2)
```
| github_jupyter |
## Mixture Density Networks with PyTorch ##
Related posts:
JavaScript [implementation](http://blog.otoro.net/2015/06/14/mixture-density-networks/).
TensorFlow [implementation](http://blog.otoro.net/2015/11/24/mixture-density-networks-with-tensorflow/).
```
import matplotlib.pyplot as plt
import numpy as np
import torch
import math
from torch.autograd import Variable
import torch.nn as nn
```
### Simple Data Fitting ###
Before we talk about MDN's, we try to perform some simple data fitting using PyTorch to make sure everything works. To get started, let's try to quickly build a neural network to fit some fake data. As neural nets of even one hidden layer can be universal function approximators, we can see if we can train a simple neural network to fit a noisy sinusoidal data, like this ( $\epsilon$ is just standard gaussian random noise):
$y=7.0 \sin( 0.75 x) + 0.5 x + \epsilon$
After importing the libraries, we generate the sinusoidal data we will train a neural net to fit later:
```
NSAMPLE = 1000
x_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE,1)))
y_data = np.float32(np.sin(0.75*x_data)*7.0+x_data*0.5+r_data*1.0)
plt.figure(figsize=(8, 8))
plot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)
plt.show()
```
We will define this simple neural network with one hidden layer and 100 nodes:
$Y = W_{out} \max( W_{in} X + b_{in}, 0) + b_{out}$
```
# N is batch size; D_in is input dimension;
# H is hidden dimension; D_out is output dimension.
# from (https://github.com/jcjohnson/pytorch-examples)
N, D_in, H, D_out = NSAMPLE, 1, 100, 1
# Create random Tensors to hold inputs and outputs, and wrap them in Variables.
# since NSAMPLE is not large, we train entire dataset in one minibatch.
x = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))
y = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)
model = torch.nn.Sequential(
torch.nn.Linear(D_in, H),
torch.nn.ReLU(),
torch.nn.Linear(H, D_out),
)
```
We can define a loss function as the sum of square error of the output vs the data (we can add regularisation if we want).
```
loss_fn = torch.nn.MSELoss()
```
We will also define a training loop to minimise the loss function later. We can use the RMSProp gradient descent optimisation method.
```
learning_rate = 0.01
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)
for t in range(100000):
y_pred = model(x)
loss = loss_fn(y_pred, y)
if (t % 10000 == 0):
print(t, loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
x_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))
y_test = model(x_test)
plt.figure(figsize=(8, 8))
plt.plot(x_data,y_data,'ro', x_test.data.numpy(),y_test.data.numpy(),'bo',alpha=0.3)
plt.show()
```
We see that the neural network can fit this sinusoidal data quite well, as expected. However, this type of fitting method only works well when the function we want to approximate with the neural net is a one-to-one, or many-to-one function. Take for example, if we invert the training data:
$x=7.0 \sin( 0.75 y) + 0.5 y+ \epsilon$
```
temp_data = x_data
x_data = y_data
y_data = temp_data
plt.figure(figsize=(8, 8))
plot_out = plt.plot(x_data,y_data,'ro',alpha=0.3)
plt.show()
```
If we were to use the same method to fit this inverted data, obviously it wouldn't work well, and we would expect the trained network to predict only something close to the conditional average of $y$ for each $x$.
```
x = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, D_in)))
y = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, D_out)), requires_grad=False)
learning_rate = 0.01
optimizer = torch.optim.RMSprop(model.parameters(), lr=learning_rate, alpha=0.8)
for t in range(3000):
y_pred = model(x)
loss = loss_fn(y_pred, y)
if (t % 300 == 0):
print(t, loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
x_test = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test.reshape(NSAMPLE, D_in)))
y_test = model(x_test)
plt.figure(figsize=(8, 8))
plt.plot(x_data,y_data,'ro', x_test.data.numpy(),y_test.data.numpy(),'bo',alpha=0.3)
plt.show()
```
Our current model only predicts one output value for each input, so this approach will fail miserably. What we want is a model that has the capacity to predict a range of different output values for each input. In the next section we implement a Mixture Density Network (MDN) to achieve this task.
## Mixture Density Networks ##
Mixture Density Networks, developed by Christopher Bishop in the 1990s, are an attempt to address this problem. Rather than have the network predict a single output value, the MDN predicts an entire *probability distribution* of the output, so we can sample several possible output values for a given input.
This concept is quite powerful, and can be employed in many current areas of machine learning research. It also allows us to calculate a sort of confidence factor in the predictions that the network is making.
The inverted sinusoidal data we chose is not just a toy problem: there are applications in the field of robotics, for example, where we want to determine which angle we need to move the robot arm to reach a target location. MDNs are also used to model handwriting, where the next stroke is drawn from a probability distribution of multiple possibilities, rather than being a single fixed prediction.
Bishop's implementation of MDNs will predict a class of probability distributions called Mixture Gaussian distributions, where the output value is modelled as a sum of many gaussian random values, each with different means and standard deviations. So for each input $x$, we will predict a probability distribution function $P(Y = y | X = x)$ that is approximated by a weighted sum of different gaussian distributions.
$P(Y = y | X = x) = \sum_{k=0}^{K-1} \Pi_{k}(x) \phi(y, \mu_{k}(x), \sigma_{k}(x)), \sum_{k=0}^{K-1} \Pi_{k}(x) = 1$
Our network will therefore predict the *parameters* of the pdf, in our case the set of $\mu$, $\sigma$, and $\Pi$ values for each input $x$. Rather than predict $y$ directly, we will need to sample from our distribution to sample $y$. This will allow us to have multiple possible values of $y$ for a given $x$.
Each of the parameters $\Pi_{k}(x), \mu_{k}(x), \sigma_{k}(x)$ of the distribution will be determined by the neural network, as a function of the input $x$. There is a restriction that the sum of $\Pi_{k}(x)$ add up to one, to ensure that the pdf integrates to 1. In addition, $\sigma_{k}(x)$ must be strictly positive.
In our implementation, we will use a neural network with one hidden layer of 100 nodes, and generate 20 mixtures, hence there will be 60 actual outputs of our neural network for a single input. Our definition will be split into 2 parts:
$Z = W_{out} \max( W_{in} X + b_{in}, 0) + b_{out}$
In the first part, $Z$ is a vector of 60 values that will then be split up into three equal parts, $[Z_{\Pi}, Z_{\sigma}, Z_{\mu}] = Z$, where each of $Z_{\Pi}$, $Z_{\sigma}$, $Z_{\mu}$ is a vector of length 20.
In this PyTorch implementation, unlike the TF version, we will implement this operation with 3 separate Linear layers, rather than splitting a large $Z$, for clarity:
$Z_{\Pi} = W_{\Pi} \max( W_{in} X + b_{in}, 0) + b_{\Pi}$
$Z_{\sigma} = W_{\sigma} \max( W_{in} X + b_{in}, 0) + b_{\sigma}$
$Z_{\mu} = W_{\mu} \max( W_{in} X + b_{in}, 0) + b_{\mu}$
In the second part, the parameters of the pdf will be defined as below to satisfy the earlier conditions:
$\Pi = \frac{\exp(Z_{\Pi})}{\sum_{i=0}^{19} \exp(Z_{\Pi, i})}, \\ \sigma = \exp(Z_{\sigma}), \\ \mu = Z_{\mu}$
$\Pi_{k}$ are put into a *softmax* operator to ensure that the sum adds to one, and that each mixture probability is positive. Each $\sigma_{k}$ will also be positive due to the exponential operator.
Below is the PyTorch implementation of the MDN network:
```
NHIDDEN = 100 # hidden units
KMIX = 20 # number of mixtures
class MDN(nn.Module):
def __init__(self, hidden_size, num_mixtures):
super(MDN, self).__init__()
self.fc_in = nn.Linear(1, hidden_size)
self.relu = nn.ReLU()
self.pi_out = torch.nn.Sequential(
nn.Linear(hidden_size, num_mixtures),
nn.Softmax()
)
self.sigma_out = nn.Linear(hidden_size, num_mixtures)
self.mu_out = nn.Linear(hidden_size, num_mixtures)
def forward(self, x):
out = self.fc_in(x)
out = self.relu(out)
out_pi = self.pi_out(out)
out_sigma = torch.exp(self.sigma_out(out))
out_mu = self.mu_out(out)
return (out_pi, out_sigma, out_mu)
```
Let's define the inverted data we want to train our MDN to predict later. As this is a more involved prediction task, I used a higher number of samples compared to the simple data fitting task earlier.
```
NSAMPLE = 2500
y_data = np.float32(np.random.uniform(-10.5, 10.5, (1, NSAMPLE))).T
r_data = np.float32(np.random.normal(size=(NSAMPLE,1))) # random noise
x_data = np.float32(np.sin(0.75*y_data)*7.0+y_data*0.5+r_data*1.0)
x_train = Variable(torch.from_numpy(x_data.reshape(NSAMPLE, 1)))
y_train = Variable(torch.from_numpy(y_data.reshape(NSAMPLE, 1)), requires_grad=False)
plt.figure(figsize=(8, 8))
plt.plot(x_train.data.numpy(),y_train.data.numpy(),'ro', alpha=0.3)
plt.show()
```
We cannot simply use the mean squared error (L2) loss function in this task, because the output is an entire description of the probability distribution. A more suitable loss function is to minimise the negative logarithm of the likelihood of the distribution given the training data:
$CostFunction(y | x) = -\log[ \sum_{k}^K \Pi_{k}(x) \phi(y, \mu(x), \sigma(x)) ]$
So for every $(x,y)$ point in the training data set, we can compute a cost based on the predicted distribution versus the actual point, and then attempt to minimise the sum of all the costs combined. To those who are familiar with logistic regression and cross-entropy minimisation of softmax, this is a similar approach, but with non-discretised states.
We have to implement this cost function ourselves:
```
oneDivSqrtTwoPI = 1.0 / math.sqrt(2.0*math.pi) # normalisation factor for gaussian.
def gaussian_distribution(y, mu, sigma):
    # broadcast subtraction with mean and normalisation by sigma
result = (y.expand_as(mu) - mu) * torch.reciprocal(sigma)
result = - 0.5 * (result * result)
return (torch.exp(result) * torch.reciprocal(sigma)) * oneDivSqrtTwoPI
def mdn_loss_function(out_pi, out_sigma, out_mu, y):
epsilon = 1e-3
result = gaussian_distribution(y, out_mu, out_sigma) * out_pi
result = torch.sum(result, dim=1)
result = - torch.log(epsilon + result)
return torch.mean(result)
```
Let's define our model, and use the Adam optimizer to train our model below:
```
model = MDN(hidden_size=NHIDDEN, num_mixtures=KMIX)
learning_rate = 0.00001
optimizer = torch.optim.Adam(model.parameters(), lr=learning_rate)
for t in range(20000):
(out_pi, out_sigma, out_mu) = model(x_train)
loss = mdn_loss_function(out_pi, out_sigma, out_mu, y_train)
if (t % 1000 == 0):
print(t, loss.data[0])
optimizer.zero_grad()
loss.backward()
optimizer.step()
```
We want to use our network to generate the parameters of the pdf for us to sample from. In the code below, we will sample $M=10$ values of $y$ for every $x$ input, and compare the sampled results with the training data.
```
x_test_data = np.float32(np.random.uniform(-15, 15, (1, NSAMPLE))).T
x_test = Variable(torch.from_numpy(x_test_data.reshape(NSAMPLE, 1)))
(out_pi_test, out_sigma_test, out_mu_test) = model(x_test)
out_pi_test_data = out_pi_test.data.numpy()
out_sigma_test_data = out_sigma_test.data.numpy()
out_mu_test_data = out_mu_test.data.numpy()
def get_pi_idx(x, pdf):
N = pdf.size
accumulate = 0
for i in range(0, N):
accumulate += pdf[i]
if (accumulate >= x):
return i
print('error with sampling ensemble')
return -1
def generate_ensemble(M = 10):
# for each point in X, generate M=10 ensembles
NTEST = x_test_data.size
result = np.random.rand(NTEST, M) # initially random [0, 1]
rn = np.random.randn(NTEST, M) # normal random matrix (0.0, 1.0)
mu = 0
std = 0
idx = 0
# transforms result into random ensembles
for j in range(0, M):
for i in range(0, NTEST):
idx = get_pi_idx(result[i, j], out_pi_test_data[i])
mu = out_mu_test_data[i, idx]
std = out_sigma_test_data[i, idx]
result[i, j] = mu + rn[i, j]*std
return result
y_test_data = generate_ensemble()
plt.figure(figsize=(8, 8))
plt.plot(x_test_data,y_test_data,'b.', x_data,y_data,'r.',alpha=0.3)
plt.show()
```
In the above graph, we plot out the generated data we sampled from the MDN distribution, in blue. We also plot the original training data in red over the predictions. Apart from a few outliers, the distributions seem to match the data. We can also plot a graph of $\mu(x)$ as well to interpret what the neural net is actually doing:
```
plt.figure(figsize=(8, 8))
plt.plot(x_test_data,out_mu_test_data,'g.', x_data,y_data,'r.',alpha=0.3)
plt.show()
```
In the plot above, we see that for every point on the $x$-axis, there are multiple lines or states where $y$ may be, and we select these states with probabilities modelled by $\Pi$ .
| github_jupyter |
```
%load_ext autoreload
%autoreload 2
import preprocess
preprocess.main('CC*.pkl','BIOS.pkl')
!ls
import pickle
all_bios = pickle.load( open( "BIOS.pkl", "rb" ) )
```
## Dictionary Details
1. r["title"] tells you the noramlized title
2. r["gender"] tells you the gender (binary for simplicity, determined from the pronouns)3.
3. r["start_pos"] indicates the length of the first sentence.
4. r["raw"] has the entire bio
5. The field r["bio"] contains a scrubbed version of the bio (with the person's name and obvious gender words (like she/he removed)
## Problem Statement
So the classification task is to predict r["title"] from r["raw"][r["start_pos"]:]
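As a rough sketch of that setup (a hypothetical helper using the `all_bios` list loaded above, not part of the original pipeline), the inputs and labels can be pulled out like this:
```
# Build (text, label) pairs for the task described above.
# The keys 'raw', 'start_pos' and 'title' are the ones listed in the dictionary details.
texts = [r["raw"][r["start_pos"]:] for r in all_bios]   # bio without the first sentence
titles = [r["title"] for r in all_bios]                  # occupation to predict
print(titles[0], "::", texts[0][:80])
```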
#### Example Dictionary Element
```
test_bio = all_bios[0]
test_bio['bio']
test_bio['raw']
```
### Distribution of occupation
```
occupation_dict = {}
for bio in all_bios:
    occupation = bio['title']
    try:
        occupation_dict[occupation] += 1
    except KeyError:
        occupation_dict[occupation] = 1
import matplotlib.pyplot as plt
import numpy as np
keys = occupation_dict.keys()
vals = occupation_dict.values()
plt.bar(keys, np.divide(list(vals), sum(vals)), label="Real distribution")
plt.ylim(0,1)
plt.ylabel('Fraction of bios')
plt.xlabel('Occupation')
plt.xticks(list(keys), rotation='vertical')
plt.legend (bbox_to_anchor=(1, 1), loc="upper right", borderaxespad=0.)
plt.show()
import pandas as pd
from matplotlib import pyplot as plt
import matplotlib as mpl
import seaborn as sns
%matplotlib inline
#Read in data & create total column
import pandas as pd
train_data=pd.read_csv('Data/Train.csv')
val_data =pd.read_csv('Data/Val.csv')
test_data =pd.read_csv('Data/Test.csv')
total_data = pd.concat([train_data,test_data,val_data],axis=0)
# #stacked_bar_data["total"] = stacked_bar_data.Series1 + stacked_bar_data.Series2
# # #Set general plot properties
# sns.set_style("white")
# sns.set_context({"figure.figsize": (24, 10)})
# # #Plot 1 - background - "total" (top) series
# sns.barplot(x = stacked_bar_data.title, y = stacked_bar_data., color = "red")
# # #Plot 2 - overlay - "bottom" series
# # bottom_plot = sns.barplot(x = stacked_bar_data.Group, y = stacked_bar_data.Series1, color = "#0000A3")
# # topbar = plt.Rectangle((0,0),1,1,fc="red", edgecolor = 'none')
# # bottombar = plt.Rectangle((0,0),1,1,fc='#0000A3', edgecolor = 'none')
# # l = plt.legend([bottombar, topbar], ['Bottom Bar', 'Top Bar'], loc=1, ncol = 2, prop={'size':16})
# # l.draw_frame(False)
# # #Optional code - Make plot look nicer
# # sns.despine(left=True)
# # bottom_plot.set_ylabel("Y-axis label")
# # bottom_plot.set_xlabel("X-axis label")
# # #Set fonts to consistent 16pt size
# # for item in ([bottom_plot.xaxis.label, bottom_plot.yaxis.label] +
# # bottom_plot.get_xticklabels() + bottom_plot.get_yticklabels()):
# # item.set_fontsize(16)
df=total_data.groupby(['title','gender'])['path'].count()
total_data['title'].unique()
df_to_plot=pd.DataFrame(columns=['title','M','F'])
list1=[]
for title in list(total_data['title'].unique()):
try:
list1.append((title, df[title,'M'],df[title,'F']))
except:
pass
df_to_plot=pd.DataFrame(list1,columns=['title','M','F'])
#total_data = pd.concat([train_data,test_data,val_data],axis=0)
df_to_plot["total"] = df_to_plot['M'] + df_to_plot['F']
df_to_plot=df_to_plot.sort_values(['total'],ascending=False)
# #Set general plot properties
sns.set_style("white")
sns.set_context({"figure.figsize": (24, 10)})
# #Plot 1 - background - "total" (top) series
sns.barplot(x = df_to_plot.title, y = df_to_plot.total, color = "green")
# #Plot 2 - overlay - "bottom" series
bottom_plot = sns.barplot(x = df_to_plot.title, y = df_to_plot['M'], color = "blue")
topbar = plt.Rectangle((0,0),1,1,fc="green", edgecolor = 'none')
bottombar = plt.Rectangle((0,0),1,1,fc='blue', edgecolor = 'none')
l = plt.legend([bottombar, topbar], ['Male', 'Female'], loc=1, ncol = 2, prop={'size':16})
l.draw_frame(False)
#Optional code - Make plot look nicer
sns.despine(left=True)
bottom_plot.set_ylabel("Log frequency")
plt.yscale('log')
#Set fonts to consistent 16pt size
for item in ([bottom_plot.xaxis.label, bottom_plot.yaxis.label] +
bottom_plot.get_xticklabels() + bottom_plot.get_yticklabels()):
item.set_fontsize(28)
item.set_rotation('vertical')
#bottom_plot.set_xlabel("Occupation")
plt.tight_layout()
bottom_plot.set_xlabel('')
plt.savefig('data_distribution.png')
```
### Mithun add your codes here
### Model 1 : Bag of words
```
word_dict={}
for bio in all_bios:
index_to_start=bio['start_pos']
tokens=bio['raw'][index_to_start:].split()
for tok in tokens:
tok = tok.strip().lower()
try:
word_dict[tok] += 1
except:
word_dict[tok] = 1
len(list(word_dict))
import nltk
import pandas as pd
from scipy.sparse import vstack, csr_matrix, save_npz, load_npz
!pip install scipy
df = pd.DataFrame(all_bios, columns =list(all_bios[0].keys()))
from sklearn.model_selection import train_test_split
df_train,df_test_val=train_test_split(df, test_size=0.35, random_state=42,stratify=df['title'])
df_test,df_val=train_test_split(df_test_val, test_size=0.28, random_state=42,stratify=df_test_val['title'])
df_train.to_csv('Train.csv',index=False)
df_test.to_csv('Test.csv',index=False)
df_val.to_csv('Val.csv',index=False)
import heapq
most_freq = heapq.nlargest(50000, word_dict, key=word_dict.get)
# Build binary bag-of-words vectors over the 50,000 most frequent tokens
sentence_vectors = []
for bio in all_bios:
    index_to_start = bio['start_pos']
    tokens = set(tok.strip().lower() for tok in bio['raw'][index_to_start:].split())
    sent_vec = []
    for token in most_freq:
        if token in tokens:
            sent_vec.append(1)
        else:
            sent_vec.append(0)
    sentence_vectors.append(sent_vec)
```
| github_jupyter |
```
#pandas
#indexes are visible
#2 types of data structure
#1. series - vector - 1d
#2. data frame - 2d
#3. index - index is visible
import numpy as np
import pandas as pd
#descriptive statistics
data = pd.Series([0.25,0.5,0.75,1])
data
data.values
data.index
data.shape
data.describe
data.describe()
#explicit indexing - assigned manually, difficult for huge data
#if it is given as an integer it will replace the default indexing
data = pd.Series([0.25,0.5,0.75,1],
index=['a','b','c','d'])
data
data['b']
data[1]
population_dict={'California':351264283,
'Texas':54123648,
'New York':5545865245,
'Florida':745565323,
'Illinois':1243579}
population = pd.Series(population_dict)
population
population.values
population.index
population.describe
population.describe()
population[1]
population['Texas']
pd.Series([2,5,7,8])
pd.Series(5,index=[100,200,300,400,500])
pd.Series({2:'a',3:'b',8:'c'}, index=[2,8])
##dataframe objects
area_dict={'California':1264283,'Texas':5413648,'New York':565245,'Florida':745523,'Illinois':3454569}
area = pd.Series(area_dict)
area
states = pd.DataFrame({'population': population,
'area':area})
states
states.keys()
states.ndim
states.info()
data = np.arange(12,24).reshape(4,3)
df = pd.DataFrame(data)
df
data = np.arange(12,24).reshape(4,3)
df = pd.DataFrame(data,columns=['a','b','c'],index=[5,8,9,4])
df
x= np.random.random(20).reshape(4,5)
df= pd.DataFrame(x, columns=[1,2,3,4,5],index=['a','b','c','d'])
df
y = [{'a':i,'b':2*i,'c':2**i+2}
for i in range(1,4)]
pd.DataFrame(y)
a= np.zeros(3, dtype=[('A','i8'),('B','f8')])
a
pd.DataFrame(a)
#indexes are immutable arrays
#indexes are ordered sets
indA = pd.Index([1,2,3,4,])
indB = pd.Index([2,4,6,8,10])
indA&indB
indA | indB
indA ^ indB
data = pd.read_csv("births.csv")
data
data.head() #top5 raws
data.tail() #last 5 raws
data.describe
data.info()
data.columns
data.index
data.shape
data.ndim
data.values
data['year']
import seaborn as sns
sns.pairplot(data)
datas = pd.read_csv("../Desktop/data/file.csv")
datas
sns.pairplot(datas)
Data = pd.read_excel('sales.xlsx')
Data
sns.pairplot(Data)
path = "../Desktop/1.txt"
with open(path,'r') as file: #"r" indicates read
print (file.read())
Data = pd.DataFrame(file)
Data
Data = pd.read_csv('iris.data')
Data.head()
##operations in Pandas
rng = np.random.RandomState()
ser = pd.Series(rng.random(10))
ser
id(rng)
type(ser)
df = pd.DataFrame(rng.randint(0,10,(4,4)),
columns=['A','B','C','D'])
df
df.mean()
df.mean(1)
df.median()
np.exp(df)
np.sin(df*np.pi/4)
area = pd.Series({'Alaska':465356,'Texas':89535,'California':89553},
name='Area')
population = pd.Series({'Newyork':8658656,'Texas':583569535,'California':87856553},
name='Population')
population
area
population/area
area.index|population.index
A = pd.Series([2,6,3],index =[3,2,1])
B = pd.Series([1,5,3],index =[0,3,2])
A+B
A.add(B,fill_value=(0))
X= pd.DataFrame(rng.randint(0,20,(3,5)),
columns=list('ABCDE'))
X
Y = pd.DataFrame(rng.randint(0,20,(3,3)),
columns=list('ABC'))
Y
Z =X.stack().mean()
Z
X.add(Y,fill_value =Z)
p= rng.randint(10,size=(3,4))
p
p[2]
p-p[2]
s = pd.DataFrame(p, columns = list('QRST'))
s
s.iloc[0] #iloc is used to select a particular row
s-s.iloc[0]
s.subtract(s['R'],axis=0)
halfrow = s.iloc[0,::2]
halfrow
s-halfrow
#Handling the missing data
#nan
#na
#none
#missing values has to be filled
#1. finding NA
#1Aa. identify NA
#1ai. is null
#1aii. not a null
#2. fill NA
#2a. fill_value
#2b.fill_NA
#2bi.bfill
#2bii.ffill
#3. drop - entire row will be deleted
a= np.array([1,None,2,3])
a
a.dtype #object datatype is for None values
b= np.array([1,4,2,3])
b
b.dtype
a[0]
a[2]
for dtype in['object','int','float']:
print("dtype=", dtype)
%timeit np.arange(1E6,dtype=dtype).sum()
print()
for dtype in['object','int','float']:
print("dtype=", dtype)
%timeit np.arange(1E6,dtype=dtype).mean()
print()
#NAN
vals2= np.array([1,np.nan,3,4])
vals2.dtype #nan data type is float
1+np.nan
vals2.sum(),vals2.min(),vals2.max(),vals2.mean()
np.nansum(vals2),np.nanmin(vals2),np.nanmax(vals2),np.nanmean(vals2) #can avoid nan values and perform operation
#nan and None
pd.Series([1,np.nan,2,None])
x= pd.Series(range(4),dtype=int)
x
x[3]=None
x
x.add(2)
#DATA TYPE CHANGES
#1. FLOAT -no change
#2. object - No change
#3. intiger - cast to float - np.nan
#4. Boolean - cast to object - None or np.nan
y = np.array([True,False,10,20,None,np.nan])
y.dtype
y = pd.Series([True,False,10,20,None,np.nan])
y.dtype
#null values
#isnull - find the null values in the data set
#notnull - opposite of is null
#dropna - filter the data with or without null values
#fillna()-fill the values for NAN
data = pd.Series([1,np.nan,3,None,'hello'])
data
data.isnull()
Df= pd.DataFrame(data)
Df
Df.isnull()
Df['age']=np.array([24,23,45,56,35])
Df
Df.notnull()
data[data.notnull()]
Df[Df.notnull()]
#dropna() delete the null values without affecting the original data
#fillna() will affect the original data
data.dropna()
data
data.fillna(1)
data.fillna(value = 10)
mean = np.nanmean(data)
print (mean)
data.fillna(value =mean) #data is manipulated
df = pd.DataFrame([[1, np.nan,2],
[2,3,5],
[np.nan,4,None]],columns=['A','B','C'])
df
df.dropna()
df.dropna(axis='columns')
df.dropna(axis='rows')
df['D']=np.nan
df
df.dropna(axis = 'columns',how = 'all')
df.dropna(axis = 'columns',how = 'any')
df.dropna(axis = 'rows',how = 'all')
df.dropna(axis = 'rows',how = 'any')
df.dropna(axis = 'rows',thresh =2) #number of available non null values =2
df.dropna(axis = 'columns', thresh =2)
df2 = pd.DataFrame({'Data':[10,20,30,np.nan,50,60],
'float':[1.5,2.5,3.2,4.5,5.5,np.nan],
'complex':[np.nan,2j+3,np.nan,23j+2,5j+2,np.nan]})
df2
df2.isnull()
df2.notnull()
df2.notna()
df2.dropna()
fill= np.nanmean(df2)
fill
Fill_Missing_Value = df2.fillna(fill)
Fill_Missing_Value
Fill_Missing_Value = df2.fillna(fill,axis=1)
Fill_Missing_Value
data = pd.Series([1,np.nan,2,None,3],index=list('abcde'))
data
x=data.fillna(0)
x
```
```
data.fillna(method='ffill')
df3 = pd.DataFrame({'Data':[10,20,30,np.nan,50,60],
'float':[1.5,2.5,3.2,4.5,5.5,np.nan],
})
df3
data.fillna(method='bfill')
import numpy as np
import pandas as pd
Data = pd.read_csv('california_cities.csv')
Data
Data.head()
Data.tail()
Data.describe()
Data.info()
Data.columns
Data.index
Data.isnull().info()
Data1 = Data.drop(['Unnamed: 0'],axis =1)
Data1
Data1.isnull()
Data1.info()
#Data1['elevation_m']=np.nanmean(Data1['elevation_m'])
#Data1 #original data will be affected so not a recommended method
Fill = np.nanmean(Data1['elevation_m'])
Data1['elevation_m']= Data1['elevation_m'].fillna(Fill)
Data1
Data1.info()
#hierarchical indexing
pd.__version__
import numpy as np
import pandas as pd
index = [('California', 2000), ('California', 2010),
('New York', 2000), ('New York', 2010),
('Texas', 2000), ('Texas', 2010)]
populations = [33871648, 37253956,
18976457, 19378102,
20851820, 25145561]
populations
pop = pd.Series(populations, index=index)
pop
#pop [1:4]
pop['California',2000]
pop[[i for i in pop.index if i[1]==2010]]
for i in pop.index:
if i[1]==2010:
print(i,pop[i])
index = pd.MultiIndex.from_tuples(index)
index
pop =pop.reindex(index)
pop
pop['California']
pop[:,2010]
pop_df = pop.unstack()
pop_df
pop_df.stack()
pop
pop_df = pd.DataFrame ({'Total': pop,
'under18': [8865325656,35689545,
656898,458545545,
4455687,965856]})
pop_df
df = pd.DataFrame(np.random.rand(4,2),
index =[['a','a','b','b'],[1,2,1,2]],
columns=['data1','data2'])
df
data ={('california',2000):5589865365,
('california',2010):89888556,
('Texas',2000):78454533,
('Texas',2010):58963568,
('Newyork',2000):57989656,
('Newyork',2010):555655878}
pd.Series(data)
pd.MultiIndex.from_arrays([['a','a','b','b'],[1,2,1,2]])
pd.MultiIndex.from_tuples([('a',1),('a',2),('b',1),('b',2)])
pd.MultiIndex.from_product([['a','b'],[1,2]])
pd.MultiIndex(levels = [['a','b'],[1,2]],
codes = [[0,0,1,1],[0,1,0,1]])
pop.index.names=['state','year']
pop
index = pd.MultiIndex.from_product([[2013,2014],[1,2,3]],
names=['year','visit'])
columns =pd.MultiIndex.from_product([['Rani','Raju','Sam'],['BMI','TEMP','WGHT']],
names =['subject','type'])
data=np.round(np.random.rand(6,9),2)
data+=37
health_data = pd.DataFrame(data,index=index,columns=columns)
health_data
health_data['Rani']
import numpy as np
import pandas as pd
health_data.iloc[:3,:-3] # starts with oth row and ends with 2nd row, from right side till -3
health_data.iloc[:3]
idx = pd.IndexSlice
health_data.loc[idx[:,1],idx[:,'TEMP']] #performing integer and string together
#sorted and unsorted indices
index = pd.MultiIndex.from_product([['a','c','b','d'],[1,2]])
data = pd.Series(np.random.rand(8),index=index)
data.index.names = ['char','int']
data
try:
data[ 'a':'b']
except KeyError as e:
print(type(e))
print(e)
data = data.sort_index()
data
pop.unstack(level =0)
pop.unstack().stack()
pop_flat = pop.reset_index(name='population')
pop_flat
health_data
data_mean1 = health_data.mean(level='year')
data_mean1
data_mean2 = health_data.mean(level='visit')
data_mean2
data_mean1.mean(axis =1 ,level='type')
```
| github_jupyter |
## Environment
```
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=0
from pathlib import Path
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
%autosave 20
import csv
import pandas as pd
from keras.backend import tf as ktf
import sys
import cv2
import six
# keras
import keras
from keras.models import Model
from keras.models import Sequential
from keras.regularizers import l2
from keras.layers.core import Lambda
from keras.optimizers import Adam
from keras.layers.normalization import BatchNormalization
from keras.callbacks import LearningRateScheduler
from keras.models import Model
from keras.layers import (
Input,
Activation,
Dense,
Flatten,
Dropout
)
from keras.layers.convolutional import (
Conv2D,
MaxPooling2D,
AveragePooling2D
)
from keras.layers.merge import add
from keras import backend as K
import math
import tensorflow as tf
config = tf.ConfigProto()
config.gpu_options.allow_growth = True
session = tf.Session(config=config)
ROOT_PATH = Path('/home/downloads/CarND-Behavioral-Cloning-P3/')
#ROOT_PATH=Path('/src')
from tensorflow.python.client import device_lib
print(device_lib.list_local_devices())
#SAMPLE_DATA_PATH = ROOT_PATH/'data/sample_data'
SAMPLE_DATA_PATH = ROOT_PATH/'data/all'
print('tensorflow version: ', tf.__version__)
print('keras version: ', keras.__version__)
print('python version: ', sys.version_info)
```
## Load images
```
#[str(x) for x in list(SAMPLE_DATA_PATH.iterdir())]
logs = pd.DataFrame()
num_tracks = [0, 0]
include_folders = [
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/IMG',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_recovery.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive4.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_curve.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_sampledata.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive3.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/backup',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive5.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_reverse.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive2.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track2_drive1.csv',
'/home/downloads/CarND-Behavioral-Cloning-P3/data/all/driving_log_track1_drive.csv'
]
for log_file in SAMPLE_DATA_PATH.glob('*.csv'):
if str(log_file) not in include_folders:
continue
one_log = pd.read_csv(log_file)
num_rows = one_log.shape[0]
print(log_file, '\t', num_rows)
if str(log_file).find('track1') != -1:
num_tracks[0] += num_rows
else:
num_tracks[1] += num_rows
logs = pd.concat([logs, one_log], axis=0)
print('\ntrack 1: ', num_tracks[0])
print('track 2: ', num_tracks[1])
logs.tail()
```
## Preprocessing and Augmentation
```
IMG_FOLDER_PATH = SAMPLE_DATA_PATH/'IMG'
def get_img_files(img_folder_path):
image_files = []
labels = dict()
correction = 0.2
for log in logs.iterrows():
center, left, right, y = log[1][:4]
for i, img_path in enumerate([center, left, right]):
img_path = img_path.split('/')[-1].strip()
abs_img_path = str(img_folder_path/img_path)
if i == 1:
y_corrected = y + correction # left
elif i == 2:
y_corrected = y - correction # right
else:
y_corrected = y
image_files.append(abs_img_path)
labels[abs_img_path] = y_corrected
np.random.shuffle(image_files)
trn_end_idx = int(len(image_files)*0.8)
train_img_files = image_files[:trn_end_idx]
val_img_files = image_files[trn_end_idx:]
return train_img_files, val_img_files, labels
TRAIN_IMG_FILES, VAL_IMG_FILES, LABELS = get_img_files(IMG_FOLDER_PATH)
len(TRAIN_IMG_FILES), len(VAL_IMG_FILES), len(LABELS.keys())
def augment_data(img, y, probs=0.5):
# flip
if np.random.rand() > probs:
img = np.fliplr(img)
y = -y
return img, y
```
## Create data generator for Keras model training
```
# adapted from https://stanford.edu/~shervine/blog/keras-how-to-generate-data-on-the-fly.html
class GeneratorFromFiles(keras.utils.Sequence):
'''Generate data from list of image files.'''
def __init__(self, list_files, labels, batch_size=64,
dim=(160, 320, 3),
post_dim=(66, 200, 3),
shuffle=True,
data_aug=None,
resize=False):
'''
        Parameters
        ----------
        list_files : a list of absolute paths to image files
        labels : a dictionary mapping image files to labels (classes/continuous value)
batch_size : size for each batch
dim : dimension for input image, height x width x number of channel
shuffle : whether to shuffle data at each epoch
'''
self.dim = dim
self.post_dim = post_dim if resize else dim
self.batch_size = batch_size
self.list_files = list_files
self.labels = labels
self.shuffle = shuffle
self.data_aug = data_aug
self.resize=resize
self.on_epoch_end()
def __len__(self):
return int(len(self.list_files) / self.batch_size)
def __getitem__(self, index):
# generate indexes of the batch
indexes = self.indexes[index*self.batch_size:(index+1)*self.batch_size]
# find list of files
list_files_batch = [self.list_files[k] for k in indexes]
X, ys = self._generate(list_files_batch, self.data_aug)
return X, ys
def on_epoch_end(self):
self.indexes = np.arange(len(self.list_files))
if self.shuffle:
np.random.shuffle(self.indexes)
def _generate(self, list_files_batch, data_aug=None):
X = np.zeros((self.batch_size, ) + self.post_dim)
ys = np.zeros((self.batch_size))
for i, img_file in enumerate(list_files_batch):
x = plt.imread(img_file)
if self.resize:
x = cv2.resize(x, (self.post_dim[1], self.post_dim[0]))
y = self.labels[img_file]
if data_aug is not None:
x, y = data_aug(x, y)
X[i, ] = x
ys[i] = y
return X, ys
```
Visualize flipping the image
```
data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS)
res = next(iter(data_generator))
plt.imshow(res[0][56].astype(int))
plt.imshow(augment_data(res[0][56], res[1][60], 0.0)[0].astype(int))
plt.imshow(cv2.resize(res[0][56], (200, 66)).astype(int))
plt.imshow(cv2.resize(augment_data(res[0][56], res[1][60], 0.0)[0], (200, 66)).astype(int))
```
## Model Architecture and Parameter
### Nvidia model
```
def _bn_act_dropout(input, dropout_rate):
"""Helper to build a BN -> activation block
"""
norm = BatchNormalization(axis=2)(input)
relu = Activation('elu')(norm)
return Dropout(dropout_rate)(relu)
def _conv_bn_act_dropout(**conv_params):
'''Helper to build a conv -> BN -> activation block -> dropout
'''
filters = conv_params['filters']
kernel_size = conv_params['kernel_size']
strides = conv_params.setdefault('strides', (1, 1))
kernel_initializer = conv_params.setdefault('kernel_initializer', 'he_normal')
padding = conv_params.setdefault('padding', 'valid')
kernel_regularizer = conv_params.setdefault('kernel_regularizer', l2(1.e-4))
dropout_rate = conv_params.setdefault('dropout_rate', 0.1)
def f(input):
conv = Conv2D(filters=filters, kernel_size=kernel_size,
strides=strides, padding=padding,
kernel_initializer=kernel_initializer,
kernel_regularizer=kernel_regularizer)(input)
return _bn_act_dropout(conv, dropout_rate)
return f
def _dense_dropout(input, n, dropout_rate, dropout_multi=1):
return Dropout(dropout_rate*dropout_multi)(Dense(n, activation='elu')(input))
def build_nvidia(in_shape, num_outputs, dropout_rate, dropout_multi=1):
input = Input(shape=in_shape)
in_layer = Lambda(lambda x: (x / 255.0) - 0.5, input_shape=(in_shape))(input)
in_layer = _conv_bn_act_dropout(filters=24, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
in_layer = _conv_bn_act_dropout(filters=36, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
in_layer = _conv_bn_act_dropout(filters=48, kernel_size=(5, 5), strides=(2, 2), dropout_rate=dropout_rate)(in_layer)
in_layer = _conv_bn_act_dropout(filters=64, kernel_size=(3, 3), strides=(1, 1), dropout_rate=dropout_rate)(in_layer)
in_layer = _conv_bn_act_dropout(filters=64, kernel_size=(3, 3), strides=(1, 1), dropout_rate=dropout_rate)(in_layer)
flatten = Flatten()(in_layer)
flatten = _dense_dropout(flatten, 1000, dropout_rate, dropout_multi)
flatten = _dense_dropout(flatten, 100, dropout_rate, dropout_multi)
#flatten = _dense_dropout(flatten, 50, dropout_rate)
flatten = Dense(50)(flatten)
dense = Dense(units=num_outputs)(flatten)
model = Model(inputs=input, outputs=dense)
return model
# learning rate schedule
def step_decay(epoch):
initial_lrate = 1e-3
drop = 0.5
epochs_drop = 3
lrate = initial_lrate * math.pow(drop, math.floor((1+epoch)/epochs_drop))
return lrate
in_shape = (66, 200, 3)
#in_shape = (160, 320, 3)
dropout_rate = 0.2
model = build_nvidia(in_shape, 1, dropout_rate, dropout_multi=2)
opt = Adam(lr=1e-4)
model.compile(loss='mse', optimizer=opt)
lrate = LearningRateScheduler(step_decay)
callbacks_list = [lrate]
model.summary()
```
## Training and Validation
```
%%time
trn_data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS, resize=True)
val_data_generator = GeneratorFromFiles(VAL_IMG_FILES, LABELS, resize=True)
model.fit_generator(trn_data_generator,
validation_data=val_data_generator,
epochs=12,
workers=2,
callbacks=callbacks_list,
use_multiprocessing=False,
verbose=1)
# model.load_weights(ROOT_PATH/'models/model-nvidia-base-2.h5')
# trn_data_generator = GeneratorFromFiles(TRAIN_IMG_FILES, LABELS, resize=True)
# val_data_generator = GeneratorFromFiles(VAL_IMG_FILES, LABELS, resize=True)
```
## Fine-Tuning the Model
```
%%time
opt = Adam(lr=1e-5)
model.compile(loss='mse', optimizer=opt)
model.fit_generator(trn_data_generator,
validation_data=val_data_generator,
epochs=5,
workers=3,
use_multiprocessing=True,
verbose=1)
%%time
opt = Adam(lr=8e-6)
model.compile(loss='mse', optimizer=opt)
model.fit_generator(trn_data_generator,
validation_data=val_data_generator,
epochs=5,
workers=3,
use_multiprocessing=True,
verbose=1)
```
## Saving Model
```
model.save(ROOT_PATH/'models/model-nvidia-base-3.h5', include_optimizer=False)
```
| github_jupyter |
```
import numpy as np
import os
import pandas as pd
from sklearn.feature_extraction.text import CountVectorizer, TfidfVectorizer
from sklearn.metrics import classification_report, accuracy_score
from sklearn.model_selection import cross_val_predict
from nltk.corpus import stopwords
stop_words=stopwords.words('english')
trainX =np.array([])
labels = []
path = '../Downloads/C50train/'
authors = os.listdir(path)[:10];
authors = [i for i in authors if '.DS_Store' not in i]
for auth in authors:
    if auth != '.DS_Store':
files = os.listdir(path + auth + '/');
tmpX, tmpY = np.array([]), []
for file in files:
f = open(os.path.join(path, auth, file), 'r')
data = f.read().replace('\n', ' ')
tmpX = np.append(tmpX,data)
tmpY = tmpY + [auth]
f.close()
trainX = np.append(trainX, tmpX)
labels = labels + tmpY
# initializing Count Vector
vect = CountVectorizer()
vect
# Split in to train and test data
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(trainX, labels, train_size = 0.8)
# converting into pandas series
X_train = pd.Series(X_train)
X_test = pd.Series(X_test)
X_train.head()
# learn the 'vocabulary' of the training data
vect.fit(X_train)
# examine the fitted vocabulary
vect.get_feature_names()[:50]
train_vectors = vect.transform(X_train)
train_vectors
train_vectors
test_vectors = vect.transform(X_test)
test_vectors
pd.DataFrame(train_vectors.toarray(), columns=vect.get_feature_names()).head(4)
from sklearn.naive_bayes import GaussianNB, MultinomialNB, BernoulliNB
nb = GaussianNB()
nb.fit(train_vectors.toarray(), y_train)
# generate your cross-validation prediction with 10 fold
#Stratified sampling
y_pred = cross_val_predict(nb, test_vectors.toarray(), y_test, cv=10)
print(classification_report(y_test, y_pred))
print("ACCURACY::", accuracy_score(y_pred, y_test))
def classifier():
X_train, X_test, y_train, y_test = train_test_split(trainX, labels, train_size = 0.8)
# converting into pandas series
X_train = pd.Series(X_train)
X_test = pd.Series(X_test)
vect.fit(X_train)
# examine the fitted vocabulary
vect.get_feature_names()[:50]
train_vectors = vect.transform(X_train)
train_vectors
test_vectors = vect.transform(X_test)
test_vectors
pd.DataFrame(train_vectors.toarray(), columns=vect.get_feature_names()).head(4)
nb.fit(train_vectors.toarray(), y_train)
# generate your cross-validation prediction with 10 fold
#Stratified sampling
y_pred = cross_val_predict(nb, test_vectors.toarray(), y_test, cv=10)
print(classification_report(y_test, y_pred))
print("ACCURACY::", accuracy_score(y_pred, y_test))
# initializing Count Vector
vect = CountVectorizer(ngram_range=(1, 2))
classifier()
# initializing Count Vector
vect = CountVectorizer(ngram_range=(1, 3))
classifier()
# initializing Count Vector
vect = CountVectorizer(ngram_range=(1, 2),stop_words=stop_words)
classifier()
vect = CountVectorizer(ngram_range=(1, 2),stop_words=stop_words, max_df=0.7)
classifier()
vect = CountVectorizer(ngram_range=(1, 2),stop_words=stop_words, max_df=0.75)
classifier()
vect = CountVectorizer(ngram_range=(1, 2),stop_words=stop_words, max_df=0.8)
classifier()
vect = CountVectorizer(ngram_range=(1, 3),stop_words=stop_words, max_df=0.7)
classifier()
vect = CountVectorizer(ngram_range=(1, 3),stop_words=stop_words, max_df=0.75)
classifier()
vect = CountVectorizer(ngram_range=(1, 3),stop_words=stop_words, max_df=0.8)
classifier()
vect = TfidfVectorizer()  # reassign the shared vectorizer so classifier() uses TF-IDF features
classifier()
```
| github_jupyter |
```
import os
os.environ['MACOSX_DEPLOYMENT_TARGET'] = '10.9' # HACK: needed for stan...
import astropy.coordinates as coord
from astropy.table import Table, join, hstack
import astropy.units as u
import matplotlib.pyplot as plt
%matplotlib inline
import numpy as np
import pystan
from pyia import GaiaData
sm = pystan.StanModel('../stan/plz.stan')
tbl1 = Table.read('/Users/adrian/data/GaiaDR2/rrlyrae.fits')
tbl2 = Table.read('/Users/adrian/data/GaiaDR2/vari_rrlyrae.fits')
tbl1.meta = None
tbl2.meta = None
tbl2 = tbl2[[x for x in tbl2.colnames if x not in tbl1.colnames or x == 'source_id']]
tbl = join(tbl1, tbl2, keys='source_id')
g = GaiaData(tbl)
c = g.get_skycoord(distance=False)
k2014 = Table.read('/Users/adrian/data/Misc/Klein2014_rrlyrae.fit')
kc = coord.SkyCoord(k2014['_RA'], k2014['_DE'], unit=u.deg)
idx, sep, _ = kc.match_to_catalog_sky(c)
join_tbl = hstack((tbl[idx[sep < 5*u.arcsec]], k2014[sep < 5*u.arcsec]))
# join_tbl = tbl
g = GaiaData(join_tbl)
g.data['ebv'] = g.get_ebv()
plx_snr_mask = (g.parallax / g.parallax_error) > 10
# plx_snr_mask = np.ones(len(g), dtype=bool)
rp_nobs_mask = g.phot_rp_n_obs >= 20
# rp_good_mask = np.isfinite(g.int_average_rp)
rp_good_mask = np.isfinite(g.W1mag.value)
feh_mask = np.isfinite(g.metallicity) & (g.metallicity.value > -3) & (g.metallicity.value < 0)
ab_mask = g.best_classification == 'RRab'
ebv_mask = g.ebv < 0.5
pf_mask = (g.pf > 0.1*u.day) & (g.pf < 1*u.day)
M = g.int_average_rp.value - g.get_distance(allow_negative=True).distmod.value
abs_rp_mask = M < 1.
all_mask = plx_snr_mask & rp_nobs_mask & rp_good_mask & feh_mask & ab_mask & ebv_mask & pf_mask & abs_rp_mask
sub_g = g[all_mask]
# HACK:
sub_g = sub_g[sub_g.parallax > 0.5*u.mas]
len(sub_g)
data = dict()
data['n_stars'] = len(sub_g)
data['Alambda'] = 0.1
# kRP = 0.6104 # from https://www.aanda.org/articles/aa/pdf/2018/08/aa32843-18.pdf - taking just c1
# data['Alambda'] = kRP * 3.1
data['plx'] = sub_g.parallax.value
data['plx_err'] = sub_g.parallax_error.value
# data['mag'] = sub_g.int_average_rp.value
# data['mag_err'] = sub_g.int_average_rp_error.value
data['mag'] = sub_g.W1mag.value
data['mag_err'] = sub_g.e_W1mag.value
data['EBV'] = sub_g.get_ebv()
data['EBV_err'] = 0.1 * data['EBV']
data['FeH'] = sub_g.metallicity.value
data['FeH_err'] = sub_g.metallicity_error.value
data['log10P'] = np.log10(sub_g.pf.value)
data['log10P_err'] = np.log10(np.e) * (sub_g.pf_error.value / sub_g.pf.value)
# Sesar+2017
data['log10P_ref'] = np.log10(0.52854)
data['FeH_ref'] = -1.4
plx_samples = np.random.normal(sub_g.parallax.value, sub_g.parallax_error.value,
size=(1024, len(sub_g)))
DM_samples = coord.Distance(parallax=plx_samples * sub_g.parallax.unit).distmod.value
DM_err = np.std(DM_samples, axis=0)
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.errorbar(sub_g.pf.value, data['mag'] - sub_g.distmod.value,
yerr=np.sqrt(data['mag_err']**2 + DM_err**2),
marker='o', ls='none', ecolor='#555555', color='k')
ax.set_xlabel('period [day]')
ax.set_ylabel('$M$ [day]')
ax.set_xlim(0.35, 0.8)
ax.set_ylim(0.1, -1.3)
# ax.set_ylim(1.5, -0.5)
ax.set_xscale('log')
init = dict()
init['ln_s_M'] = np.log(0.1)
init['plx0'] = 0.
init['ln_sig_plx_add'] = np.log(1e-8)
init['f_plx'] = 1.
init['r'] = 1000 / sub_g.parallax.value
init['L'] = 250.
init['FeH_int'] = data['FeH']
# init['EBV_int'] = data['EBV']
# init['log10P_int'] = data['log10P']
init['a'] = -2.
init['b'] = 0.15
init['M_ref'] = -0.5
fit = sm.optimizing(data=data, init=init, iter=4096)
samples = sm.sampling(data=data, chains=1, init=[fit], control=dict(adapt_delta=1))
fit
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
ax.errorbar(sub_g.pf.value, data['mag'] - sub_g.distmod.value,
yerr=np.sqrt(data['mag_err']**2 + DM_err**2),
marker='o', ls='none', ecolor='#555555', color='k')
xx = np.linspace(0., 1, 128)
# for feh in np.arange(-2.5, -0.5+1e-3, 0.5):
# M = (fit['a'] * (np.log10(xx) - data['log10P_ref']) +
# # fit['b'] * (feh - data['FeH_ref']) +
# fit['M_ref'])
# ax.plot(xx, M, marker='')
M = fit['a'] * (np.log10(xx) - data['log10P_ref']) + fit['M_ref']
ax.plot(xx, M, marker='')
M = np.mean(samples['a']) * (np.log10(xx) - data['log10P_ref']) + np.mean(samples['M_ref'])
ax.plot(xx, M, marker='')
ax.set_xlabel('period [day]')
ax.set_ylabel('$M$ [day]')
ax.set_xlim(0.35, 0.8)
# ax.set_ylim(0.1, -1.3)
ax.set_ylim(1.5, -0.5)
ax.set_xscale('log')
ax.axvline(10 ** data['log10P_ref'], zorder=-100, color='tab:blue', alpha=0.5)
ax.axhline(fit['M_ref'], zorder=-100, color='tab:blue', alpha=0.5)
fig, ax = plt.subplots(1, 1, figsize=(6, 6))
# ax.errorbar(data['log10P'], data['plx'] - 1000 / fit['r'], data['plx_err'], ls='none')
ax.errorbar(data['log10P'], data['plx'] - 1000 / np.mean(samples['r'], axis=0),
data['plx_err'],
marker='o', ls='none', ecolor='#555555', color='k')
ax.axhline(np.mean(samples['plx0']), color='tab:red', zorder=-10)
ax.axhline(0, color='tab:blue', zorder=-100)
ax.set_ylim(-0.1, 0.1)
```
| github_jupyter |
```
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
import seaborn as sns
from tqdm import tnrange, tqdm_notebook
import gc
import operator
import xgboost as xgb  # used in the training cells below
sns.set_context('talk')
pd.set_option('display.max_columns', 500)
import warnings
warnings.filterwarnings('ignore', message='Changing the shape of non-C contiguous array')
```
# Read the data
```
dfXtrain = pd.read_csv('preprocessed_csv/train_tree.csv', index_col='id', sep=';')
dfXtest = pd.read_csv('preprocessed_csv/test_tree.csv', index_col='id', sep=';')
dfYtrain = pd.read_csv('preprocessed_csv/y_train_tree.csv', header=None, names=['ID', 'COTIS'], sep=';')
dfYtrain = dfYtrain.set_index('ID')
```
# Preprocessing
Take var14, department and subreg out of the feature set.
```
dropped_col_names = ['var14', 'department', 'subreg']
def drop_cols(df):
return df.drop(dropped_col_names, axis=1), df[dropped_col_names]
train, dropped_train = drop_cols(dfXtrain)
test, dropped_test = drop_cols(dfXtest)
```
Add information about city size derived from the subreg code.
```
def add_big_city_cols(df, dropped_df):
df['big'] = np.where(dropped_df['subreg'] % 100 == 0, 1, 0)
df['average'] = np.where(dropped_df['subreg'] % 10 == 0, 1, 0)
df['average'] = df['average'] - df['big']
df['small'] = 1 - df['big'] - df['average']
return df
train = add_big_city_cols(train, dropped_train)
test = add_big_city_cols(test, dropped_test)
```
Encode the remaining categorical features.
```
categorical = list(train.select_dtypes(exclude=[np.number]).columns)
categorical
list(test.select_dtypes(exclude=[np.number]).columns)
for col in categorical:
print(col, train[col].nunique())
```
energie_veh and var6 are encoded with get_dummies.
```
small_cat = ['energie_veh', 'var6']
train = pd.get_dummies(train, columns=small_cat)
test = pd.get_dummies(test, columns=small_cat)
```
For the remaining categorical features, compute smoothed target means (target encoding).
```
big_cat = ['marque', 'profession', 'var8']
```
Some descriptive statistics first.
```
df = pd.concat([dfYtrain.describe()] + [train[col].value_counts().describe() for col in big_cat], axis=1)
df
```
The smoothing coefficient will be 500.
We will use the mean and the 25%, 50% and 75% percentiles.
Encoding
```
class EncodeWithAggregates():
def __init__(self, cols, y_train, train, *tests):
self.cols = cols
self.y_train = y_train
self.train = train
self.tests = tests
self.Xs = (self.train,) + self.tests
self.smooth_coef = 500
self.miss_val = 'NAN'
self.percentiles = [25, 50, 75]
self.names = ['Mean'] + [str(q) for q in self.percentiles]
self.aggs = [np.mean] + [self.percentile_fix(q) for q in self.percentiles]
self.miss_val_fills = [agg(y_train) for agg in self.aggs]
self.train_aggs = [agg(y_train) for agg in self.aggs]
def percentile_fix(self, q):
def wrapped(a):
return np.percentile(a, q)
return wrapped
def transform(self):
for col in self.cols:
self.encode(col)
gc.collect()
return self.Xs
def encode(self, col):
df = pd.concat([self.y_train, self.train[col]], axis=1)
dfgb = df.groupby(col)
dfsize = dfgb.size()
dfsize.ix[self.miss_val] = 0
for name, agg, miss_val_fill, train_agg in zip(self.names, self.aggs, self.miss_val_fills, self.train_aggs):
dfm = dfgb.agg(agg)
dfm.ix[self.miss_val] = miss_val_fill
for X in self.Xs:
agg_df = dfm.ix[X[col].fillna(self.miss_val)].set_index(X.index)[self.y_train.name]
agg_size = dfsize.ix[X[col].fillna(self.miss_val)]
agg_size = pd.DataFrame({'size': agg_size}).set_index(X.index)['size']
agg_name = "{}_{}".format(col, name)
X[agg_name] = (agg_df * agg_size + self.smooth_coef * train_agg) / (self.smooth_coef + agg_size)
self.Xs = [X.drop(col, axis=1) for X in self.Xs]
train, test = EncodeWithAggregates(big_cat, dfYtrain['COTIS'], train, test).transform()
test.shape
train.shape
train.fillna(-9999, inplace=True)
test.fillna(-9999, inplace=True)
y_train = np.array(dfYtrain)
x_train = np.array(train)
x_test = np.array(test)
```
# Save routines
```
dfYtest = pd.DataFrame({'ID': dfXtest.index, 'COTIS': np.zeros(test.shape[0])})
dfYtest = dfYtest[['ID', 'COTIS']]
dfYtest.head()
def save_to_file(y, file_name):
dfYtest['COTIS'] = y
dfYtest.to_csv('results/{}'.format(file_name), index=False, sep=';')
model_name = 'lmse_without_size_xtr'
dfYtest_stacking = pd.DataFrame({'ID': dfXtrain.index, model_name: np.zeros(train.shape[0])})
dfYtest_stacking = dfYtest_stacking[['ID', model_name]]
dfYtest_stacking.head()
def save_to_file_stacking(y, file_name):
dfYtest_stacking[model_name] = y
dfYtest_stacking.to_csv('stacking/{}'.format(file_name), index=False, sep=';')
```
# Train XGB
```
from sklearn.ensemble import ExtraTreesRegressor
def plot_quality(grid_searcher, param_name):
means = []
stds = []
for elem in grid_searcher.grid_scores_:
means.append(np.mean(elem.cv_validation_scores))
stds.append(np.sqrt(np.var(elem.cv_validation_scores)))
means = np.array(means)
stds = np.array(stds)
params = grid_searcher.param_grid
plt.figure(figsize=(10, 6))
plt.plot(params[param_name], means)
plt.fill_between(params[param_name], \
means + stds, means - stds, alpha = 0.3, facecolor='blue')
plt.xlabel(param_name)
plt.ylabel('MAPE')
def mape(y_true, y_pred):
return -np.mean(np.abs((y_true - y_pred) / y_true)) * 100
def mape_scorer(est, X, y):
gc.collect()
return mape(y, est.predict(X))
class MyGS():
class Element():
def __init__(self):
self.cv_validation_scores = []
def add(self, score):
self.cv_validation_scores.append(score)
def __init__(self, param_grid, name, n_folds):
self.param_grid = {name: param_grid}
self.grid_scores_ = [MyGS.Element() for item in param_grid]
def add(self, score, param_num):
self.grid_scores_[param_num].add(score)
validation_index = (dropped_train.department == 1) | (dropped_train.department > 90)
train_index = ~validation_index
subtrain, validation = train[train_index], train[validation_index]
x_subtrain = np.array(subtrain)
x_validation = np.array(validation)
ysubtrain, yvalidation = dfYtrain[train_index], dfYtrain[validation_index]
y_subtrain = np.array(ysubtrain).flatten()
y_validation = np.array(yvalidation).flatten()
%%time
est = ExtraTreesRegressor(n_estimators=10, max_features=51,
max_depth=None, n_jobs=-1, random_state=42).fit(X=x_subtrain, y=np.log(y_subtrain))
y_pred = est.predict(x_validation)
mape(y_validation, np.exp(y_pred))
est
sample_weight_subtrain = np.power(y_subtrain, -1)
from sklearn.tree import DecisionTreeRegressor
%%time
count = 10000
est = DecisionTreeRegressor(criterion='mae', max_depth=2,
max_features=None, random_state=42).fit(
X=x_subtrain[:count], y=y_subtrain[:count], sample_weight=sample_weight_subtrain[:count])
gc.collect()
y_pred = est.predict(x_validation)
mape(y_validation, y_pred)
```
# Save
```
save_to_file_stacking(y_lmse_pred * 0.995, 'xbg_tune_eta015_num300_dropped_lmse.csv')
%%time
param = {'base_score':0.5, 'colsample_bylevel':1, 'colsample_bytree':1, 'gamma':0,
'eta':0.15, 'max_delta_step':0, 'max_depth':9,
'min_child_weight':1, 'nthread':-1,
'objective':'reg:linear', 'alpha':0, 'lambda':1,
'scale_pos_weight':1, 'seed':56, 'silent':True, 'subsample':1}
num_round = 180
dtrain = xgb.DMatrix(x_train,
label=np.log(y_train),
missing=-9999,)
#weight=weight_coef * np.power(y_train[train_index], -2) )
dtest = xgb.DMatrix(x_test, missing=-9999)
param['base_score'] = np.percentile(np.log(y_train), 25)
bst = xgb.train(param, dtrain, num_round)
y_pred = np.exp(bst.predict(dtest))
gc.collect()
save_to_file(y_pred * 0.995, 'xbg_tune_eta015_num300_dropped_lmse.csv')
```
| github_jupyter |
# Test For The Best Machine Learning Algorithm For Prediction
This notebook takes about 40 minutes to run, but we've already run it and saved the data for you. Please read through it, though, so that you understand how we came to the conclusions we'll use moving forward.
## Six Algorithms
We're going to compare six different algorithms to determine the best one to produce an accurate model for our predictions.
### Logistic Regression
Logistic Regression (LR) is a technique borrowed from the field of statistics. It is the go-to method for binary classification problems (problems with two class values).

Logistic Regression is named for the function used at the core of the method: the logistic function. The logistic function is a probabilistic method used to determine whether or not the driver will be the winner. Logistic Regression predicts probabilities.
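As a minimal sketch (the feature matrix `X` and labels `y` below are synthetic stand-ins, not the race data used later), Scikit-learn's `LogisticRegression` exposes those probabilities through `predict_proba`:
```
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.RandomState(0)
X = rng.rand(100, 3)                               # made-up driver features
y = (X[:, 0] + rng.rand(100) > 1.2).astype(int)    # 1 = winner, 0 = not

clf = LogisticRegression().fit(X, y)
print(clf.predict_proba(X[:5]))   # probability of [not winner, winner] per row
```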
### Decision Tree
A tree has many analogies in real life, and it turns out that it has influenced a wide area of machine learning, covering both classification and regression. In decision analysis, a decision tree can be used to visually and explicitly represent decisions and decision making.

This methodology is more commonly known as a "learning decision tree" from data, and the above tree is called a Classification tree because the goal is to classify a driver as the winner or not.
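A minimal sketch on synthetic stand-in data (the feature names are made up for illustration); `export_text` prints the learned tree as explicit if/else rules, which is what makes this method easy to explain:
```
import numpy as np
from sklearn.tree import DecisionTreeClassifier, export_text

rng = np.random.RandomState(0)
X = rng.rand(100, 3)
y = (X[:, 0] > 0.6).astype(int)     # pretend label: winner or not

tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=["grid", "points", "laps"]))
```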
### Random Forest
Random forest is a supervised learning algorithm. The "forest" it builds is an **ensemble of decision trees**, usually trained with the "bagging" method, a combination of learning models which increases the accuracy of the result.
A random forest eradicates the limitations of a decision tree algorithm. It reduces the overfitting of datasets and increases precision. It generates predictions without requiring many configurations.

Here's the difference between the Decision Tree and Random Forest methods:

### Support Vector Machine Algorithm (SVC)
Support Vector Machines (SVMs) are a set of supervised learning methods used for classification, regression and detection of outliers.
The advantages of support vector machines are:
- Effective in high dimensional spaces
- Still effective in cases where number of dimensions is greater than the number of samples
- Uses a subset of training points in the decision function (called support vectors), so it is also memory efficient
- Versatile: different kernel functions can be specified for the decision function. Common kernels are provided, but it is also possible to specify custom kernels
The objective of a SVC (Support Vector Classifier) is to fit to the data you provide, returning a "best fit" hyperplane that divides, or categorizes, your data.
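A minimal sketch of `SVC` on synthetic stand-in data; `support_` shows which training points the classifier actually keeps, which is where the memory efficiency comes from:
```
import numpy as np
from sklearn.svm import SVC

rng = np.random.RandomState(0)
X = rng.rand(200, 4)
y = (X[:, 0] - X[:, 1] > 0).astype(int)

svc = SVC(kernel="rbf", gamma="scale").fit(X, y)
print(svc.score(X, y))       # accuracy on the data it was fit to
print(len(svc.support_))     # how many training points became support vectors
```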
### Gaussian Naive Bayes Algorithm
Naive Bayes is a classification algorithm for binary (two-class) and multi-class classification problems. The technique is easiest to understand when described using binary or categorical input values. The representation used for naive Bayes is probabilities.
A list of probabilities is stored to a file for a learned Naive Bayes model. This includes:
- **Class Probabilities:** The probabilities of each class in the training dataset.
- **Conditional Probabilities:** The conditional probabilities of each input value given each class value.
Naive Bayes can be extended to real-value attributes, most commonly by assuming a Gaussian distribution. This extension of Naive Bayes is called Gaussian Naive Bayes. Other functions can be used to estimate the distribution of the data, but the Gaussian (or normal distribution) is the easiest to work with because you only need to estimate the mean and the standard deviation from your training data.
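A minimal sketch of `GaussianNB` on synthetic stand-in data; after fitting, you can inspect the stored class priors and the per-class feature means that get plugged into the Gaussian:
```
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
X = rng.randn(200, 3)
y = (X[:, 0] > 0).astype(int)

nb = GaussianNB().fit(X, y)
print(nb.class_prior_)   # class probabilities learned from the labels
print(nb.theta_)         # mean of each feature per class
```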
### k Nearest Neighbor Algorithm (kNN)
The k-Nearest Neighbors (KNN) algorithm is a simple, supervised machine learning algorithm that can be used to solve both classification and regression problems.
kNN works by finding the distances between a query and all of the examples in the data, selecting the specified number examples (k) closest to the query, then voting for the most frequent label (in the case of classification) or averages the labels (in the case of regression).
The kNN algorithm assumes the similarity between the new case/data and available cases, and puts the new case into the category that is most similar to the available categories.

## Analyzing the Data
### Feature Importance
Another great quality of the random forest algorithm is that it's easy to measure the relative importance of each feature to the prediction.
The Scikit-learn Python Library provides a great tool for this which measures a feature's importance by looking at how much the tree nodes that use that feature reduce impurity across all trees in the forest. It computes this score automatically for each feature after training, and scales the results so the sum of all importance is equal to one.
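A minimal sketch of reading those scores from a fitted forest (synthetic data and made-up feature names):
```
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X_toy, y_toy = make_classification(n_samples=200, n_features=5, random_state=1)

forest = RandomForestClassifier(random_state=1).fit(X_toy, y_toy)
importances = pd.Series(forest.feature_importances_,
                        index=[f'feature_{i}' for i in range(X_toy.shape[1])])
print(importances.sort_values(ascending=False))  # the importances sum to 1
```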
### Data Visualization When Building a Model
How do you visualize the influence of the data? How do you frame the problem?
An important tool in the data scientist's toolkit is the ability to visualize data using several excellent libraries such as Seaborn or Matplotlib. Representing your data visually might allow you to uncover hidden correlations that you can leverage. Your visualizations might also help you to uncover bias or unbalanced data.

### Splitting the Dataset
Prior to training, you need to split your dataset into two or more parts of unequal size that still represent the data well (a minimal example follows the list below).
1. Training. This part of the dataset is fit to your model to train it. This set constitutes the majority of the original dataset.
2. Testing. A test dataset is an independent group of data, often a subset of the original data, that you use to confirm the performance of the model you built.
3. Validating. A validation set is a smaller independent group of examples that you use to tune the model's hyperparameters, or architecture, to improve the model. Depending on your data's size and the question you are asking, you might not need to build this third set.
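A minimal sketch of such a split with scikit-learn's `train_test_split` (synthetic data, 80/20 split):
```
from sklearn.model_selection import train_test_split
from sklearn.datasets import make_classification

X_toy, y_toy = make_classification(n_samples=200, n_features=5, random_state=1)

# 80% training / 20% testing; a validation set could be carved out of the training part the same way
X_train, X_test, y_train, y_test = train_test_split(X_toy, y_toy, test_size=0.2, random_state=1)
print(X_train.shape, X_test.shape)
```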
## Building the Model
Using your training data, your goal is to build a model, or a statistical representation of your data, using various algorithms to train it. Training a model exposes it to data and allows it to make assumptions about perceived patterns it discovers, validates, and accepts or rejects.
### Decide on a Training Method
Depending on your question and the nature of your data, you will choose a method to train it. Stepping through Scikit-learn's documentation, you can explore many ways to train a model, and depending on the results you get, you might have to try several different methods to build the best one. In practice, data scientists evaluate the performance of a model by feeding it unseen data, checking for accuracy, bias, and other quality-degrading issues, and selecting the most appropriate training method for the task at hand.
### Train a Model
Armed with your training data, you are ready to "fit" it to create a model. In many ML libraries you will find the code 'model.fit' - it is at this time that you send in your data as an array of feature values (usually 'X') and a target variable (usually 'y').
### Evaluate the Model
Once the training process is complete, you will be able to evaluate the model's quality by using test data to gauge its performance. This data is a subset of the original data that the model has not previously analyzed. You can print out a table of metrics about your model's quality.
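A minimal sketch of that kind of metrics table, using held-out synthetic data and scikit-learn's `classification_report`:
```
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, classification_report
from sklearn.model_selection import train_test_split

X_toy, y_toy = make_classification(n_samples=200, n_features=5, random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X_toy, y_toy, test_size=0.2, random_state=1)

model = LogisticRegression().fit(X_train, y_train)
y_pred = model.predict(X_test)                 # evaluate only on data the model has not seen
print(accuracy_score(y_test, y_pred))
print(classification_report(y_test, y_pred))
```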
#### Model Fitting
In the Machine Learning context, model fitting refers to the accuracy of the model's underlying function as it attempts to analyze data with which it is not familiar.
#### Underfitting and Overfitting
Underfitting and overfitting are common problems that degrade the quality of the model, as the model either doesn't fit well enough, or it fits too well. This causes the model to make predictions either too closely aligned or too loosely aligned with its training data. An overfit model predicts training data too well because it has learned the data's details and noise too well. An underfit model is not accurate as it can neither accurately analyze its training data nor data it has not yet 'seen'.

Let's test out some algorithms to choose our path for modelling our predictions.
```
import warnings
warnings.filterwarnings("ignore")
import time
start = time.time()
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
import pickle
from sklearn.metrics import confusion_matrix, precision_score
from sklearn.metrics import accuracy_score
from sklearn.preprocessing import StandardScaler,LabelEncoder,OneHotEncoder
from sklearn.model_selection import cross_val_score,StratifiedKFold,RandomizedSearchCV
from sklearn.linear_model import LogisticRegression
from sklearn.ensemble import RandomForestClassifier, RandomForestRegressor
from sklearn.svm import SVC
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.metrics import confusion_matrix,precision_score,f1_score,recall_score
from sklearn.neural_network import MLPClassifier, MLPRegressor
plt.style.use('seaborn')
np.set_printoptions(precision=4)
data = pd.read_csv('./data_f1/data_filtered.csv')
data.head()
len(data)
dnf_by_driver = data.groupby('driver').sum()['driver_dnf']
driver_race_entered = data.groupby('driver').count()['driver_dnf']
driver_dnf_ratio = (dnf_by_driver/driver_race_entered)
driver_confidence = 1-driver_dnf_ratio
driver_confidence_dict = dict(zip(driver_confidence.index,driver_confidence))
driver_confidence_dict
dnf_by_constructor = data.groupby('constructor').sum()['constructor_dnf']
constructor_race_entered = data.groupby('constructor').count()['constructor_dnf']
constructor_dnf_ratio = (dnf_by_constructor/constructor_race_entered)
constructor_reliability = 1-constructor_dnf_ratio
constructor_reliability_dict = dict(zip(constructor_reliability.index,constructor_reliability))
constructor_reliability_dict
data['driver_confidence'] = data['driver'].apply(lambda x:driver_confidence_dict[x])
data['constructor_reliability'] = data['constructor'].apply(lambda x:constructor_reliability_dict[x])
#removing retired drivers and constructors
active_constructors = ['Alpine F1', 'Williams', 'McLaren', 'Ferrari', 'Mercedes',
'AlphaTauri', 'Aston Martin', 'Alfa Romeo', 'Red Bull',
'Haas F1 Team']
active_drivers = ['Daniel Ricciardo', 'Mick Schumacher', 'Carlos Sainz',
'Valtteri Bottas', 'Lance Stroll', 'George Russell',
'Lando Norris', 'Sebastian Vettel', 'Kimi Rรคikkรถnen',
'Charles Leclerc', 'Lewis Hamilton', 'Yuki Tsunoda',
'Max Verstappen', 'Pierre Gasly', 'Fernando Alonso',
'Sergio Pรฉrez', 'Esteban Ocon', 'Antonio Giovinazzi',
'Nikita Mazepin','Nicholas Latifi']
data['active_driver'] = data['driver'].apply(lambda x: int(x in active_drivers))
data['active_constructor'] = data['constructor'].apply(lambda x: int(x in active_constructors))
data.head()
data.columns
```
## Directory to store Models
```
import os
if not os.path.exists('./models'):
os.mkdir('./models')
def position_index(x):
if x<4:
return 1
if x>10:
return 3
else :
return 2
```
## Model considering only Drivers
```
x_d= data[['GP_name','quali_pos','driver','age_at_gp_in_days','position','driver_confidence','active_driver']]
x_d = x_d[x_d['active_driver']==1]
sc = StandardScaler()
le = LabelEncoder()
x_d['GP_name'] = le.fit_transform(x_d['GP_name'])
x_d['driver'] = le.fit_transform(x_d['driver'])
x_d['age_at_gp_in_days'] = sc.fit_transform(x_d[['age_at_gp_in_days']])
X_d = x_d.drop(['position','active_driver'],1)
y_d = x_d['position'].apply(lambda x: position_index(x))
# cross validation for different models
models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()]
names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier']
model_dict = dict(zip(models,names))
mean_results_dri = []
results_dri = []
name = []
for model in models:
cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True)
result = cross_val_score(model,X_d,y_d,cv=cv,scoring='accuracy')
mean_results_dri.append(result.mean())
results_dri.append(result)
name.append(model_dict[model])
print(f'{model_dict[model]} : {result.mean()}')
plt.figure(figsize=(15,10))
plt.boxplot(x=results_dri,labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers only)')
plt.show()
```
## Model considering only Constructors
```
x_c = data[['GP_name','quali_pos','constructor','position','constructor_reliability','active_constructor']]
x_c = x_c[x_c['active_constructor']==1]
sc = StandardScaler()
le = LabelEncoder()
x_c['GP_name'] = le.fit_transform(x_c['GP_name'])
x_c['constructor'] = le.fit_transform(x_c['constructor'])
X_c = x_c.drop(['position','active_constructor'],1)
y_c = x_c['position'].apply(lambda x: position_index(x))
# cross validation for different models
models = [LogisticRegression(),DecisionTreeClassifier(),RandomForestClassifier(),SVC(),GaussianNB(),KNeighborsClassifier()]
names = ['LogisticRegression','DecisionTreeClassifier','RandomForestClassifier','SVC','GaussianNB','KNeighborsClassifier']
model_dict = dict(zip(models,names))
mean_results_const = []
results_const = []
name = []
for model in models:
cv = StratifiedKFold(n_splits=10,random_state=1,shuffle=True)
result = cross_val_score(model,X_c,y_c,cv=cv,scoring='accuracy')
mean_results_const.append(result.mean())
results_const.append(result)
name.append(model_dict[model])
print(f'{model_dict[model]} : {result.mean()}')
plt.figure(figsize=(15,10))
plt.boxplot(x=results_const,labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (Teams only)')
plt.show()
```
# Model considering both Drivers and Constructors
```
cleaned_data = data[['GP_name','quali_pos','constructor','driver','position','driver_confidence','constructor_reliability','active_driver','active_constructor']]
cleaned_data = cleaned_data[(cleaned_data['active_driver']==1)&(cleaned_data['active_constructor']==1)]
cleaned_data.to_csv('./data_f1/cleaned_data.csv',index=False)
```
### Build your X dataset with next columns:
- GP_name
- quali_pos to predict the classification cluster (1,2,3)
- constructor
- driver
- position
- driver confidence
- constructor_reliability
- active_driver
- active_constructor
### Filter the dataset for this Model "Driver + Constructor" all active drivers and constructors
### Create Standard Scaler and Label Encoder for the different features in order to have a similar scale for all features
### Prepare the X (Features dataset) and y for predicted value.
In our case, we want to calculate the cluster of the final position for each driver using the "position_index" function.
```
# Implement X, y
```
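One possible implementation, mirroring the driver-only and constructor-only cells above (the variable names below are illustrative, and `cleaned_data` is the filtered dataframe saved in the previous cell):
```
x_dc = cleaned_data.copy()
sc = StandardScaler()
le = LabelEncoder()
x_dc['GP_name'] = le.fit_transform(x_dc['GP_name'])
x_dc['driver'] = le.fit_transform(x_dc['driver'])
x_dc['constructor'] = le.fit_transform(x_dc['constructor'])
# one reasonable choice: scale the two continuous reliability features
x_dc[['driver_confidence', 'constructor_reliability']] = sc.fit_transform(
    x_dc[['driver_confidence', 'constructor_reliability']])
X = x_dc.drop(['position', 'active_driver', 'active_constructor'], 1)
y = x_dc['position'].apply(lambda x: position_index(x))
```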
### Applied the same list of ML Algorithms for cross validation of different models
And Store the accuracy Mean Value in order to compare with previous ML Models
```
mean_results = []
results = []
name = []
# cross validation for different models
```
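One possible implementation, reusing the same model list and `StratifiedKFold` setup as the driver-only and constructor-only cells (it assumes `X` and `y` were built in the previous step):
```
models = [LogisticRegression(), DecisionTreeClassifier(), RandomForestClassifier(), SVC(), GaussianNB(), KNeighborsClassifier()]
names = ['LogisticRegression', 'DecisionTreeClassifier', 'RandomForestClassifier', 'SVC', 'GaussianNB', 'KNeighborsClassifier']
model_dict = dict(zip(models, names))
for model in models:
    cv = StratifiedKFold(n_splits=10, random_state=1, shuffle=True)
    result = cross_val_score(model, X, y, cv=cv, scoring='accuracy')
    mean_results.append(result.mean())
    results.append(result)
    name.append(model_dict[model])
    print(f'{model_dict[model]} : {result.mean()}')
```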
### Use the same boxplot plotter used in the previous Models
```
# Implement boxplot
```
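One possible implementation, following the plots used for the two previous models:
```
plt.figure(figsize=(15, 10))
plt.boxplot(x=results, labels=name)
plt.xlabel('Models')
plt.ylabel('Accuracy')
plt.title('Model performance comparison (drivers + constructors)')
plt.show()
```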
# Comparing The 3 ML Models
Let's see mean score of our three assumptions.
```
lr = [mean_results[0],mean_results_dri[0],mean_results_const[0]]
dtc = [mean_results[1],mean_results_dri[1],mean_results_const[1]]
rfc = [mean_results[2],mean_results_dri[2],mean_results_const[2]]
svc = [mean_results[3],mean_results_dri[3],mean_results_const[3]]
gnb = [mean_results[4],mean_results_dri[4],mean_results_const[4]]
knn = [mean_results[5],mean_results_dri[5],mean_results_const[5]]
font1 = {
'family':'serif',
'color':'black',
'weight':'normal',
'size':18
}
font2 = {
'family':'serif',
'color':'black',
'weight':'bold',
'size':12
}
x_ax = np.arange(3)
plt.figure(figsize=(30,15))
bar1 = plt.bar(x_ax,lr,width=0.1,align='center', label="Logistic Regression")
bar2 = plt.bar(x_ax+0.1,dtc,width=0.1,align='center', label="DecisionTree")
bar3 = plt.bar(x_ax+0.2,rfc,width=0.1,align='center', label="RandomForest")
bar4 = plt.bar(x_ax+0.3,svc,width=0.1,align='center', label="SVC")
bar5 = plt.bar(x_ax+0.4,gnb,width=0.1,align='center', label="GaussianNB")
bar6 = plt.bar(x_ax+0.5,knn,width=0.1,align='center', label="KNN")
plt.text(0.05,1,'CV score for combined data',fontdict=font1)
plt.text(1.04,1,'CV score only driver data',fontdict=font1)
plt.text(2,1,'CV score only team data',fontdict=font1)
for bar in bar1.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar2.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar3.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar4.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar5.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
for bar in bar6.patches:
yval = bar.get_height()
plt.text(bar.get_x()+0.01,yval+0.01,f'{round(yval*100,2)}%',fontdict=font2)
plt.legend(loc='center', bbox_to_anchor=(0.5, -0.10), shadow=False, ncol=6)
plt.show()
end = time.time()
import datetime
str(datetime.timedelta(seconds=(end - start)))
print(str(end - start)+" seconds")
```
| github_jupyter |
# Model building
https://www.kaggle.com/vadbeg/pytorch-nn-with-embeddings-and-catboost/notebook#PyTorch
Mostly based on this example, plus parts of the code from tutorial 5, lab 3.
```
# import load_data function from
%load_ext autoreload
%autoreload 2
# fix system path
import sys
sys.path.append("/home/jovyan/work")
import numpy as np
import pandas as pd
import torch
from torch.utils.data import Dataset, DataLoader
import random
def set_seed(seed):
random.seed(seed)
np.random.seed(seed)
torch.manual_seed(seed)
    torch.backends.cudnn.deterministic = True
torch.backends.cudnn.benchmark = False
set_seed(27)
from src.data.sets import load_sets
X_train, y_train, X_val, y_val, X_test, y_test = load_sets()
X_test.shape
X_train.shape
X_val.shape
# need to convert to tensors
from src.models.pytorch import EmbeddingDataset
train_dataset = EmbeddingDataset(X_train,
targets=y_train,
cat_cols_idx=[0],
cont_cols_idx=[1,2,3,4])
val_dataset = EmbeddingDataset(X_val,
targets=y_val,
cat_cols_idx=[0],
cont_cols_idx=[1,2,3,4])
test_dataset = EmbeddingDataset(X_test,
cat_cols_idx=[0],
cont_cols_idx=[1,2,3,4],
is_train=False)
print(f'First element of train_dataset: {train_dataset[1]}',
f'First element of val_dataset: {val_dataset[1]}',
f'First element of test_dataset: {test_dataset[1]}',sep='\n')
# embedding example
class ClassificationEmbdNN(torch.nn.Module):
def __init__(self, emb_dims, no_of_cont=None):
super(ClassificationEmbdNN, self).__init__()
self.emb_layers = torch.nn.ModuleList([torch.nn.Embedding(x, y)
for x, y in emb_dims])
no_of_embs = sum([y for x, y in emb_dims])
self.no_of_embs = no_of_embs
self.emb_dropout = torch.nn.Dropout(0.2)
self.no_of_cont = 0
if no_of_cont:
self.no_of_cont = no_of_cont
self.bn_cont = torch.nn.BatchNorm1d(no_of_cont)
self.fc1 = torch.nn.Linear(in_features=self.no_of_embs + self.no_of_cont,
out_features=208)
self.dropout1 = torch.nn.Dropout(0.2)
self.bn1 = torch.nn.BatchNorm1d(208)
self.act1 = torch.nn.ReLU()
self.fc2 = torch.nn.Linear(in_features=208,
out_features=208)
self.dropout2 = torch.nn.Dropout(0.2)
self.bn2 = torch.nn.BatchNorm1d(208)
self.act2 = torch.nn.ReLU()
# self.fc3 = torch.nn.Linear(in_features=256,
# out_features=64)
# self.dropout3 = torch.nn.Dropout(0.2)
# self.bn3 = torch.nn.BatchNorm1d(64)
# self.act3 = torch.nn.ReLU()
self.fc3 = torch.nn.Linear(in_features=208,
out_features=104)
self.act3 = torch.nn.Softmax()
def forward(self, x_cat, x_cont=None):
if self.no_of_embs != 0:
x = [emb_layer(x_cat[:, i])
for i, emb_layer in enumerate(self.emb_layers)]
x = torch.cat(x, 1)
x = self.emb_dropout(x)
if self.no_of_cont != 0:
x_cont = self.bn_cont(x_cont)
if self.no_of_embs != 0:
x = torch.cat([x, x_cont], 1)
else:
x = x_cont
x = self.fc1(x)
x = self.dropout1(x)
x = self.bn1(x)
x = self.act1(x)
x = self.fc2(x)
x = self.dropout2(x)
x = self.bn2(x)
x = self.act2(x)
# x = self.fc3(x)
# x = self.dropout3(x)
# x = self.bn3(x)
# x = self.act3(x)
x = self.fc3(x)
x = self.act3(x)
return x
model = ClassificationEmbdNN(emb_dims=[[5742, 252]],
no_of_cont=4)
from src.models.pytorch import get_device
device = get_device()
model.to(device)
print(model)
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.1)
BATCH_SIZE = 300
N_EPOCHS = 10
train_loader = DataLoader(train_dataset,batch_size=BATCH_SIZE)
valid_loader = DataLoader(val_dataset,batch_size=BATCH_SIZE)
next(iter(train_loader))
next(iter(valid_loader))
from tqdm import tqdm_notebook as tqdm
def train_network(model, train_loader, valid_loader,
loss_func, optimizer, n_epochs=20,
saved_model='model.pt'):
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.to(device)
train_losses = list()
valid_losses = list()
valid_loss_min = np.Inf
for epoch in range(n_epochs):
train_loss = 0.0
valid_loss = 0.0
# train_auc = 0.0
# valid_auc = 0.0
train_acc = 0.0
valid_acc = 0.0
model.train()
for batch in tqdm(train_loader):
optimizer.zero_grad()
output = model(batch['data'][0].to(device,
dtype=torch.long),
batch['data'][1].to(device,
dtype=torch.float))
loss = loss_func(output, batch['target'].to(device,
dtype=torch.long))
loss.backward()
optimizer.step()
# Calculate global accuracy
train_acc += (output.argmax(1) == batch['target']).sum().item()
# train_auc += roc_auc_score(batch['target'].cpu().numpy(),
# output.detach().cpu().numpy(),
# multi_class = "ovo")
train_loss += loss.item() * batch['data'][0].size(0) #!!!
model.eval()
for batch in tqdm(valid_loader):
output = model(batch['data'][0].to(device,
dtype=torch.long),
batch['data'][1].to(device,
dtype=torch.float))
loss = loss_func(output, batch['target'].to(device,
dtype=torch.long))
# valid_auc += roc_auc_score(batch['target'].cpu().numpy(),
# output.detach().cpu().numpy(),
# multi_class = "ovo")
valid_loss += loss.item() * batch['data'][0].size(0) #!!!
# Calculate global accuracy
valid_acc += (output.argmax(1) == batch['target']).sum().item()
# train_loss = np.sqrt(train_loss / len(train_loader.sampler.indices))
# valid_loss = np.sqrt(valid_loss / len(valid_loader.sampler.indices))
# train_auc = train_auc / len(train_loader)
# valid_auc = valid_auc / len(valid_loader)
# train_losses.append(train_loss)
# valid_losses.append(valid_loss)
print('Epoch: {}. Training loss: {:.6f}. Validation loss: {:.6f}'
.format(epoch, train_loss, valid_loss))
        print('Training correct predictions: {:.0f}. Validation correct predictions: {:.0f}'
              .format(train_acc, valid_acc))
if valid_loss < valid_loss_min: # let's save the best weights to use them in prediction
print('Validation loss decreased ({:.6f} --> {:.6f}). Saving model...'
.format(valid_loss_min, valid_loss))
torch.save(model.state_dict(), saved_model)
valid_loss_min = valid_loss
return train_losses, valid_losses
train_losses, valid_losses = train_network(model=model,
train_loader=train_loader,
valid_loader=valid_loader,
loss_func=criterion,
optimizer=optimizer,
n_epochs=N_EPOCHS,
saved_model='../models/embed_3layers.pt')
```
#### Note: the accuracy counts above were not divided by the length of the dataset
```
print('Training Accuracy: {:.2f}%'.format(5926.0/300.0))
print('Validation Accuracy: {:.2f}%'.format(2361.0/300.0))
```
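A corrected sketch would divide the raw counts of correct predictions by the number of samples rather than by the batch size (this assumes the `EmbeddingDataset` objects support `len()`, as standard map-style PyTorch datasets do):
```
# Hypothetical fix: divide correct-prediction counts by the dataset sizes, not the batch size
train_accuracy = 5926.0 / len(train_dataset)
valid_accuracy = 2361.0 / len(val_dataset)
print('Training Accuracy: {:.2f}%'.format(train_accuracy * 100))
print('Validation Accuracy: {:.2f}%'.format(valid_accuracy * 100))
```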
# Predict with test set
```
def predict(data_loader, model):
model.eval()
device = torch.device('cuda:0' if torch.cuda.is_available() else 'cpu')
model.to(device)
with torch.no_grad():
predictions = None
for i, batch in enumerate(tqdm(data_loader)):
output = model(batch['data'][0].to(device,
dtype=torch.long),
batch['data'][1].to(device,
dtype=torch.float)).cpu().numpy()
if i == 0:
predictions = output
else:
predictions = np.vstack((predictions, output))
return predictions
model.load_state_dict(torch.load('../models/embed_3layers.pt'))
test_loader = DataLoader(test_dataset,
batch_size=BATCH_SIZE)
nn_predictions = predict(test_loader, model)
nn_predictions
test_acc = (nn_predictions.argmax(1) == y_test).sum().item()
test_acc/300
from sklearn.metrics import roc_auc_score, classification_report
# compute other metrics
roc_auc_score(y_test,nn_predictions, multi_class='ovr', average='macro')
print(y_test)
print(nn_predictions.argmax(1))
def convert_cr_to_dataframe(report_dict:dict) -> pd.DataFrame:
"""
Converts the dictionary format of the Classification Report (CR) to a
dataframe for easy of sorting
:param report_dict: The dictionary returned by
sklearn.metrics.classification_report.
:return: Returns a dataframe of the same information.
"""
beer_style = list(report_dict.keys())
beer_style.remove('accuracy')
beer_style.remove('macro avg')
beer_style.remove('weighted avg')
precision = []
recall = []
f1 = []
support = []
for key, value in report_dict.items():
if key not in ['accuracy', 'macro avg', 'weighted avg']:
precision.append(value['precision'])
recall.append(value['recall'])
f1.append(value['f1-score'])
support.append(value['support'])
result = pd.DataFrame({'beer_style': beer_style,
'precision': precision,
'recall': recall,
'f1': f1,
'support': support})
return result
from joblib import load
label_encoders = load('../models/label_encoders.joblib')
report_dict = classification_report(label_encoders['beer_style'].inverse_transform(y_test),
label_encoders['beer_style'].inverse_transform(nn_predictions.argmax(1)),
output_dict=True)
report_df = convert_cr_to_dataframe(report_dict)
print(report_df)
#classification_report(y_test, nn_predictions.argmax(1))
torch.save(model, "../models/model.pt")
```
| github_jupyter |
* Compare the performance of different portfolio optimizers on problems of different sizes;
* The results below mainly compare the optimizers in ``alphamind`` against other optimizers available in Python; we use the optimizers from ``cvxopt`` wherever possible, and fall back to ``scipy`` otherwise;
* Since ``scipy`` performs very poorly on the ``ashare_ex`` universe, its results on that stock pool are generally ignored;
* All timings are in milliseconds.
* Please set the environment variable `DB_URI` to point to the database.
```
import os
import timeit
import numpy as np
import pandas as pd
import cvxpy
from alphamind.api import *
from alphamind.portfolio.linearbuilder import linear_builder
from alphamind.portfolio.meanvariancebuilder import mean_variance_builder
from alphamind.portfolio.meanvariancebuilder import target_vol_builder
pd.options.display.float_format = '{:,.2f}'.format
```
## 0. Data Preparation
------------------
```
ref_date = '2018-02-08'
u_names = ['sh50', 'hs300', 'zz500', 'zz800', 'zz1000', 'ashare_ex']
b_codes = [16, 300, 905, 906, 852, None]
risk_model = 'short'
factor = 'EPS'
lb = 0.0
ub = 0.1
data_source = os.environ['DB_URI']
engine = SqlEngine(data_source)
universes = [Universe(u_name) for u_name in u_names]
codes_set = [engine.fetch_codes(ref_date, universe=universe) for universe in universes]
data_set = [engine.fetch_data(ref_date, factor, codes, benchmark=b_code, risk_model=risk_model) for codes, b_code in zip(codes_set, b_codes)]
```
## 1. Linear Optimization (with linear constraints)
---------------------------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
factor_data = sample_data['factor']
er = factor_data[factor].values
n = len(er)
lbound = np.ones(n) * lb
ubound = np.ones(n) * ub
risk_constraints = np.ones((n, 1))
risk_target = (np.array([1.]), np.array([1.]))
status, y, x1 = linear_builder(er, lbound, ubound, risk_constraints, risk_target)
elasped_time1 = timeit.timeit("linear_builder(er, lbound, ubound, risk_constraints, risk_target)", number=number, globals=globals()) / number * 1000
A_eq = risk_constraints.T
b_eq = np.array([1.])
w = cvxpy.Variable(n)
curr_risk_exposure = w * risk_constraints
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0]]
objective = cvxpy.Minimize(-w.T * er)
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
prob.value
```
## 2. Linear Optimization (with an L1 constraint)
-----------------------
```
from cvxpy import pnorm
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind (clp simplex)', 'alphamind (clp interior)', 'alphamind (ecos)'])
turn_over_target = 0.5
number = 1
for u_name, sample_data in zip(u_names, data_set):
factor_data = sample_data['factor']
er = factor_data[factor].values
n = len(er)
lbound = np.ones(n) * lb
ubound = np.ones(n) * ub
if 'weight' in factor_data:
current_position = factor_data.weight.values
else:
current_position = np.ones_like(er) / len(er)
risk_constraints = np.ones((len(er), 1))
risk_target = (np.array([1.]), np.array([1.]))
status, y, x1 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='interior')
elasped_time1 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='interior')""", number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0],
pnorm(w - current_position, 1) <= turn_over_target]
objective = cvxpy.Minimize(-w.T * er)
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
status, y, x2 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='simplex')
elasped_time3 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='simplex')""", number=number, globals=globals()) / number * 1000
status, y, x3 = linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='ecos')
elasped_time4 = timeit.timeit("""linear_builder(er,
lbound,
ubound,
risk_constraints,
risk_target,
turn_over_target=turn_over_target,
current_position=current_position,
method='ecos')""", number=number, globals=globals()) / number * 1000
np.testing.assert_almost_equal(x1 @ er, np.array(w.value).flatten() @ er, 4)
np.testing.assert_almost_equal(x2 @ er, np.array(w.value).flatten() @ er, 4)
np.testing.assert_almost_equal(x3 @ er, np.array(w.value).flatten() @ er, 4)
df.loc['alphamind (clp interior)', u_name] = elasped_time1
df.loc['alphamind (clp simplex)', u_name] = elasped_time3
df.loc['alphamind (ecos)', u_name] = elasped_time4
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 3. Mean-Variance Optimization (unconstrained)
-----------------------
```
from cvxpy import *
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = -np.ones(n) * np.inf
ubound = np.ones(n) * np.inf
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None,
lam=1)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None,
lam=1)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
prob = cvxpy.Problem(objective)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 4. Mean-Variance Optimization (with box constraints)
---------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
None,
None)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
constraints = [w >= lbound,
w <= ubound]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 5. Mean-Variance Optimization (with box and linear constraints)
----------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
bm = np.zeros(n)
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_constraints = np.ones((len(er), 1))
risk_target = (np.array([1.]), np.array([1.]))
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target)
elasped_time1 = timeit.timeit("""mean_variance_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er + 0.5 * risk)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0]]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er + 0.5 * x1 @ sec_cov @ x1
x2 = np.array(w.value).flatten()
u2 = -x2 @ er + 0.5 * x2 @ sec_cov @ x2
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
## 6. Linear Optimization (with a quadratic constraint)
-------------------------
```
df = pd.DataFrame(columns=u_names, index=['cvxpy', 'alphamind'])
number = 1
target_vol = 0.5
for u_name, sample_data in zip(u_names, data_set):
all_styles = risk_styles + industry_styles + ['COUNTRY']
factor_data = sample_data['factor']
risk_cov = sample_data['risk_cov'][all_styles].values
risk_exposure = factor_data[all_styles].values
special_risk = factor_data.srisk.values
sec_cov = risk_exposure @ risk_cov @ risk_exposure.T / 10000 + np.diag(special_risk ** 2) / 10000
er = factor_data[factor].values
n = len(er)
if 'weight' in factor_data:
bm = factor_data.weight.values
else:
bm = np.ones_like(er) / n
lbound = np.zeros(n)
ubound = np.ones(n) * 0.1
risk_constraints = np.ones((n, 1))
risk_target = (np.array([bm.sum()]), np.array([bm.sum()]))
risk_model = dict(cov=None, factor_cov=risk_cov/10000., factor_loading=risk_exposure, idsync=(special_risk**2)/10000.)
status, y, x1 = target_vol_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target,
vol_target=target_vol)
elasped_time1 = timeit.timeit("""target_vol_builder(er,
risk_model,
bm,
lbound,
ubound,
risk_constraints,
risk_target,
vol_target=target_vol)""",
number=number, globals=globals()) / number * 1000
w = cvxpy.Variable(n)
risk = sum_squares(multiply(special_risk / 100., w)) + quad_form((w.T * risk_exposure).T, risk_cov / 10000.)
objective = cvxpy.Minimize(-w.T * er)
curr_risk_exposure = risk_constraints.T @ w
constraints = [w >= lbound,
w <= ubound,
curr_risk_exposure == risk_target[0],
risk <= target_vol * target_vol]
prob = cvxpy.Problem(objective, constraints)
prob.solve(solver='ECOS')
elasped_time2 = timeit.timeit("prob.solve(solver='ECOS')",
number=number, globals=globals()) / number * 1000
u1 = -x1 @ er
x2 = np.array(w.value).flatten()
u2 = -x2 @ er
np.testing.assert_array_almost_equal(u1, u2, 4)
df.loc['alphamind', u_name] = elasped_time1
df.loc['cvxpy', u_name] = elasped_time2
alpha_logger.info(f"{u_name} is finished")
df
```
| github_jupyter |
# Control Flow
### Python if else
```
def multiply(a, b):
"""Function to multiply"""
print(a * b)
print(multiply.__doc__)
multiply(5,2)
def func():
"""Function to check i is greater or smaller"""
i=10
if i>5:
print("i is greater than 5")
else:
print("i is less than 15")
print(func.__doc__)
func()
```
### Nested if
```
i = 10
if i == 10:
    print("i is 10")
    if i < 15:
        print("i is less than 15")
    if i > 15:
        print("i is greater than 15")
    else:
        print("Not present")
```
### if-elif-else ladder
```
def func():
i=10
if i==10:
print("i is equal to 10")
elif i==15:
print("Not present")
elif i==20:
print('i am there')
else:
print("none")
func()
```
### Python for loop
```
def func():
var = input("enter number:")
x = int(var)
for i in range(x):
print(i)
func()
## Lists iteration
def func():
print("List Iteration")
l = ["tulasi", "ram", "ponaganti"]
for i in l:
print(i)
func()
# Iterating over a tuple (immutable)
def func():
print("\nTuple Iteration")
t = ("tulasi", "ram", "ponaganti")
for i in t:
print(i)
func()
# Iterating over a String
def func():
print("\nString Iteration")
s = "tulasi"
for i in s:
print(i)
func()
# Iterating over dictionary
def func():
print("\nDictionary Iteration")
d = dict()
d['xyz'] = 123
d['abc'] = 345
for i in d:
print("% s % d" % (i, d[i]))
func()
```
### Python for Loop with Continue Statement
```
def func():
for letter in 'tulasiram':
if letter == 'a':
continue
print(letter)
func()
```
### Python For Loop with Break Statement
```
def func():
for letter in 'tulasiram':
if letter == 'a':
break
print('Current Letter :', letter)
func()
```
### Python For Loop with Pass Statement
```
list = ['tulasi','ram','ponaganti']
def func():
#An empty loop
for list in 'ponaganti':
pass
print('Last Letter :', list)
func()
```
### Python range
```
def func():
sum=0
for i in range(1,5):
sum = sum + i
print(sum)
func()
def func():
i=5
for x in range(i):
i = i+x
print(i)
func()
```
### Python for loop with else
```
for i in range(1, 4):
print(i)
else: # Executed because no break in for
print("No Break\n")
for i in range(1, 4):
print(i)
break
else: # Not executed as there is a break
print("No Break")
### Using all for loop statements in small program
def func():
var = input("enter number:")
x = int(var)
for i in range(x):
option = input("print, skip, or exit")
if option=="print":
print(i)
elif option=='skip':
continue
elif option=='exit':
break
print("Good bye....!")
func()
### Working with lists
def func():
product_prices = []
for i in range(5):
product = input("How much the product cost ?")
product = float(product)
product_prices.append(product)
print(product_prices)
print("Total price : " , sum(product_prices))
print("High cost of product :" , max(product_prices))
print("average price of products", sum(product_prices)/len(product_prices))
func()
### Nested for loop
### one to Twelve time tables using for loop
def func():
for num1 in range(1,13):
for num2 in range(1,13):
print(num1, "*", num2, "=", num1*num2)
func()
```
### Python while loop
```
## Single line statement
def func():
'''first one'''
count = 0
while (count < 5): count = count + 1; print("Tulasi Ram")
print(func.__doc__)
func()
### or
def func():
'''Second one'''
count = 0
while (count < 5):
count = count + 1
print("Tulasi Ram")
print(func.__doc__)
func()
def func():
list = ["ram","tulasi","ponaganti"]
while list:
print(list.pop())
func()
def func():
i=0
for i in range(10):
i+=1
return i
func()
def func():
i = 0
a = ['tulasi','ram','ponaganti']
while i < len(a):
if a[i] == 'tulasi' or a[i] == 'ram':
i += 1
continue
print('Current word :', a[i])
i+=1
func()
def func():
i = 0
a = ['tulasi','ram','ponaganti']
while i < len(a):
if a[i] == 'ponaganti':
i += 1
break
print('Current word :', a[i])
i+=1
func()
def func():
i = 0
a = ['tulasi','ram','ponaganti']
while i < len(a):
if a[i] == 'tulasi':
i += 1
pass
print('Current word :', a[i])
i+=1
func()
def whileElseFunc():
i=0
while i<10:
i+=1
print(i)
else:
print('no break')
whileElseFunc()
```
### using break in loops
```
def func():
i=0
for i in range(10):
i+=1
print(i)
break
else:
print('no break')
func()
```
### using continue in loops
```
def func():
i=0
for i in range(10):
i+=1
print(i)
continue
else:
for i in range(5):
i+=1
print(i)
break
func()
def func():
i=0
for i in range(10):
i+=1
print(i)
pass
else:
for i in range(5):
i+=1
print(i)
func()
```
#### Looping techniques using enumerate()
```
def enumearteFunc():
list =['tulasi','ram','ponaganti']
for key in enumerate(list):
print(key)
enumearteFunc()
def enumearteFunc():
list =['tulasi','ram','ponaganti']
for key, value in enumerate(list):
print(value)
enumearteFunc()
def zipFunc():
list1 = ['name', 'firstname', 'lastname']
list2 = ['ram', 'tulasi', 'ponaganti']
# using zip() to combine two containers
# and print values
for list1, list2 in zip(list1, list2):
print('What is your {0}? I am {1}.'.format(list1, list2))
zipFunc()
```
""" Using iteritem(): iteritems() is used to loop through the
dictionary printing the dictionary key-value pair sequentially
which is used before Python 3 version
Using items(): items() performs the similar task on dictionary as
iteritems() but have certain disadvantages when compared with iteritems() """
```
def itemFunc():
name = {"name": "tulasi", "firstname": "ram"}
print("The key value pair using items is : ")
for key, value in name.items():
print(key, value)
itemFunc()
```
Sorting the list items using a loop:
```
def sortedFunc():
list = ['ram','tulasi','ponaganti']
for i in list:
print(sorted(i))
continue
for i in reversed(list):
print(i, end=" ")
sortedFunc()
```
| github_jupyter |
```
# Code for artery tracking
#simplified from 1-1
%load_ext autoreload
%autoreload 2
import os
os.environ["CUDA_VISIBLE_DEVICES"]="0"
import torch
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
import json
import cv2
import os
import matplotlib.pyplot as plt
import copy
import numpy as np
import pickle
import glob
import datetime
import pickle
%%javascript
$('<div id="toc"></div>').css({position: 'fixed', top: '120px', left: 0}).appendTo(document.body);
$.getScript('https://kmahelona.github.io/ipython_notebook_goodies/ipython_notebook_toc.js');
# Load CNN Tracker
import sys
sys.path.append(r'U:\LiChen\AICafe\CNNTracker')
from models.centerline_net import CenterlineNet
from centerline_train_tools.data_provider_argu import DataGenerater
from centerline_train_tools.centerline_trainner import Trainer
import torch
#import iCafe Python
import numpy as np
import sys
#sys.path.append(r'\\DESKTOP2\Ftensorflow\LiChen\iCafe')
sys.path.insert(0,r'\\DESKTOP4\Dtensorflow\LiChen\iCafePython')
from iCafePython import iCafe
from iCafePython import SnakeList,Snake,SWCNode,Point3D
```
# Load CNNTracker
```
#only need to select one model
#Model 1 CNN tracker for ICA TOF MRA
swc_name = 'cnn_snake'
import sys
sys.path.append(r'U:\LiChen\AICafe\CNNTracker')
from models.centerline_net import CenterlineNet
max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker1-1\classification_checkpoints\centerline_net_model_Epoch_29.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()
#Model 2 CNN tracker for Coronary CTA
swc_name = 'cnn_snake'
max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker2-1\classification_checkpoints\centerline_net_model_Epoch_81.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()
#Model 3 CNN tracker for LATTE
swc_name = 'cnn_snake'
max_points = 500
prob_thr = 0.85
infer_model = CenterlineNet(n_classes=max_points)
checkpoint_path_infer = r"D:\tensorflow\LiChen\AICafe\CNNTracker\CNNTracker4-1\classification_checkpoints\centerline_net_model_Epoch_99.pkl"
checkpoint = torch.load(checkpoint_path_infer)
net_dict = checkpoint['net_dict']
infer_model.load_state_dict(net_dict)
infer_model.to(device)
infer_model.eval()
```
# Load datasets
```
dbname = 'BRAVEAI'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg2-3'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
dblist = pickle.load(fp)
train_list = dblist['train']
val_list = dblist['val']
test_list = dblist['test']
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)
dbname = 'RotterdanCoronary'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
pilist = ['0_dataset05_U']
seg_model_name = 'CoronarySeg1-8-5'
dbname = 'UNC'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg5-1'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)
dbname = 'HarborViewT1Pre'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
pilist = ['0_ID%d_U'%i for i in [2,9,10,11,12]]
len(pilist)
# MERGE
dbname = 'CAREIIMERGEGT'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg6-1'
with open(icafe_dir+'/'+dbname+'/db.list','rb') as fp:
dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)
dbname = 'IPH-Sup-TOF-FullCoverage'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg7-1'
dblist_name = icafe_dir+'/'+dbname+'/db.list'
with open(dblist_name,'rb') as fp:
dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist)
dbname = 'WALLIAI'
icafe_dir = r'\\DESKTOP2\GiCafe\result/'
seg_model_name = 'LumenSeg8-1'
dblist_name = icafe_dir+'/'+dbname+'/db.list'
with open(dblist_name,'rb') as fp:
dblist = pickle.load(fp)
pilist = [pi.split('/')[1] for pi in dblist['test']]
len(pilist),pilist
```
# Tracking
```
# from s.whole.modelname to swc traces
from iCafePython.connect.ext import extSnake
import SimpleITK as sitk
#redo artery tracing
RETRACE = 1
#redo artery tree contraint
RETREE = 1
#segmentation src
seg_src = 's.whole.'+seg_model_name
#Lumen segmentation threshold.
# Lower value will cause too many noise branches, and neighboring branches will merge as one
# Higher value will reduce the traces detectable
SEGTHRES = 0.5
#max search range in merge/branch, unit in mm
# Higher value will allow larger gap and merge parts of broken arteries,
# but will also force noise branches to be merged in the tree
search_range_thres = 10
#which ves to build graph for artery labeling
graph_ves = 'seg_ves_ext_tree2'
DEBUG = 0
for pi in pilist[20:19:-1]:
print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
if not os.path.exists(icafe_dir+'/'+dbname+'/'+pi):
os.mkdir(icafe_dir+'/'+dbname+'/'+pi)
icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
#select correct version of s.whole from potentially multiple segmentation versions and save as s.whole
icafem.loadImg(seg_src)
icafem.saveImg('s.whole',icafem.I[seg_src],np.float16)
icafem.loadImg('s.whole')
#export v.tif for 3d visualization if icafe project does not have one already
if 'v' not in icafem.listAvailImgs():
vimg = copy.copy(icafem.I['s.whole'])
vimg[vimg<0] = 0
vimg = (vimg*255).astype(np.uint16)
icafem.saveImg('v',vimg,np.int16)
#Tracing
if RETRACE or not icafem.existPath('seg_ves_ext.swc'):
if 's.whole' not in icafem.I:
icafem.loadImg('s.whole')
seg_ves_snakelist = icafem.constructSkeleton(icafem.I['s.whole']>SEGTHRES)
#load image
file_name = icafem.getPath('o')
re_spacing_img = sitk.GetArrayFromImage(sitk.ReadImage(file_name))
seg_ves_snakelist = icafem.readSnake('seg_ves')
seg_ves_ext_snakelist = extSnake(seg_ves_snakelist,infer_model,re_spacing_img,DEBUG=DEBUG)
icafem.writeSWC('seg_ves_ext',seg_ves_ext_snakelist)
else:
seg_ves_ext_snakelist = icafem.readSnake('seg_ves_ext')
print('read from existing seg ves ext')
if seg_ves_ext_snakelist.NSnakes==0:
print('no snake found in seg ves, abort',pi)
continue
if RETREE or not icafem.existPath('seg_ves_ext_tree.swc'):
if 's.whole' not in icafem.I:
icafem.loadImg('s.whole')
if icafem.xml.res is None:
icafem.xml.setResolution(0.296875)
icafem.xml.writexml()
tree_snakelist = seg_ves_ext_snakelist.tree(icafem,search_range=search_range_thres/icafem.xml.res,int_src='o',DEBUG=DEBUG)
icafem.writeSWC('seg_ves_ext_tree', tree_snakelist)
tree_snakelist = tree_snakelist.tree(icafem,search_range=search_range_thres/3/icafem.xml.res,int_src='s.whole',DEBUG=DEBUG)
icafem.writeSWC('seg_ves_ext_tree2', tree_snakelist)
tree_main_snakelist = tree_snakelist.mainArtTree(dist_thres=10)
icafem.writeSWC('seg_ves_ext_tree2_main',tree_main_snakelist)
```
# Artery labeling
```
from iCafePython.artlabel.artlabel import ArtLabel
art_label_predictor = ArtLabel()
for pi in pilist[:]:
print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
if not os.path.exists(icafe_dir+'/'+dbname+'/'+pi):
os.mkdir(icafe_dir+'/'+dbname+'/'+pi)
icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
#generate (simplified node!=2) graph for GNN art labeling
G = icafem.generateGraph(graph_ves,None,graphtype='graphsim', mode='test', trim=1)
if len(G.nodes())<5:
print('too few snakes for artlabeling')
continue
icafem.writeGraph(G,graphtype='graphsim')
#predict landmarks
pred_landmark, ves_end_pts = art_label_predictor.pred(icafem.getPath('graphsim'),icafem.xml.res)
#complete graph Gcom for finding the pts in the path
Gcom = icafem.generateGraph(graph_ves, None, graphtype='graphcom')
ves_snakelist = findSnakeFromPts(Gcom,G,ves_end_pts)
print('@@@predict',len(pred_landmark),'landmarks',ves_snakelist)
#save landmark and ves
icafem.xml.landmark = pred_landmark
icafem.xml.writexml()
icafem.writeSWC('ves_pred', ves_snakelist)
vimg = vimg[:,:,::-1]
np.max(vimg)
icafem.saveImg('v',vimg,np.float16)
import tifffile
a = tifffile.imread(r"\\DESKTOP2\GiCafe\result\WALLI\47_WALLI-V-09-1-B_M\TH_47_WALLI-V-09-1-B_Mv.tif")
np.max(a[118])
```
# Eval
```
def eval_simple(snakelist):
snakelist = copy.deepcopy(snakelist)
_ = snakelist.resampleSnakes(1)
#ground truth snakelist from icafem.veslist
all_metic = snakelist.motMetric(icafem.veslist)
metric_dict = all_metic.metrics(['MOTA','IDF1','MOTP','IDS'])
#ref_snakelist = icafem.readSnake('ves')
snakelist.compRefSnakelist(icafem.vessnakelist)
metric_dict['OV'], metric_dict['OF'], metric_dict['OT'], metric_dict['AI'], metric_dict['UM'], metric_dict['UMS'], metric_dict['ref_UM'], metric_dict['ref_UMS'], metric_dict['mean_diff'] = snakelist.evalCompDist()
str = ''
metric_dict_simple = ['MOTA','IDF1','MOTP','IDS','OV']
for key in metric_dict_simple:
str += key+'\t'
str += '\n'
for key in metric_dict_simple:
if type(metric_dict[key]) == int:
str += '%d\t'%metric_dict[key]
else:
str += '%.3f\t'%metric_dict[key]
print(str)
return metric_dict
# calculate metric and save in each pi folder
REFEAT = 0
for pi in pilist[:1]:
print('='*10,'Start processing',pilist.index(pi),'/',len(pilist),pi,'='*10)
    icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
if REFEAT or not icafem.existPath('metric.pickle'):
print('init metric')
all_metric_dict = {}
else:
print('load metric')
with open(icafem.getPath('metric.pickle'),'rb') as fp:
all_metric_dict = pickle.load(fp)
for vesname in ['seg_ves_ext_tree2_main']:
#for vesname in ['seg_raw','seg_ves_ext_main','seg_ves_ext_tree2']:
#comparison methods
#for vesname in ['frangi_ves','seg_unet','seg_raw','raw_sep','cnn_snake','dcat_snake','seg_ves_ext_tree2_main']:
if vesname in all_metric_dict:
continue
print('-'*10,vesname,'-'*10)
pred_snakelist = icafem.readSnake(vesname)
if pred_snakelist.NSnakes==0:
print('no snake',pi,vesname)
continue
all_metric_dict[vesname] = eval_simple(pred_snakelist.resampleSnakes(1))
with open(icafem.getPath('metric.pickle'),'wb') as fp:
pickle.dump(all_metric_dict,fp)
#check feat
pi = pilist[0]
icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
with open(icafem.getPath('metric.pickle'),'rb') as fp:
all_metric_dict = pickle.load(fp)
all_metric_dict
#collect feats from pickle
eval_vesname = {'frangi_ves':'Frangi','seg_unet':'U-Net','seg_raw':'DDT',
'raw_sep':'iCafe','cnn_snake':'CNN Tracker','dcat_snake':'DCAT','seg_ves_ext_tree2_main':'DOST (ours)',
'seg_ves':'DOST (initial curve)','seg_ves_ext_main':'DOST (deep snake)','seg_ves_ext_tree2':'DOST tree'}
feats = {}
for vesname in eval_vesname:
feats[vesname] = {}
for pi in pilist[:]:
icafem = iCafe(icafe_dir+'/'+dbname+'/'+pi)
if not icafem.existPath('metric.pickle'):
continue
with open(icafem.getPath('metric.pickle'),'rb') as fp:
all_metric_dict = pickle.load(fp)
#for vesname in all_metric_dict:
for vesname in eval_vesname:
if vesname not in all_metric_dict:
print('no',vesname,'in',pi)
continue
for metric in all_metric_dict[vesname]:
if metric not in feats[vesname]:
feats[vesname][metric] = []
feats[vesname][metric].append(all_metric_dict[vesname][metric])
sel_metrics = ['OV','AI', 'MOTA', 'IDF1', 'IDS']
print('\t'.join(['']+sel_metrics))
for vesname in feats:
featstr = eval_vesname[vesname]+'\t'
for metric in sel_metrics:
if metric in ['IDS']:
featstr += '%.1f\t'%np.mean(feats[vesname][metric])
else:
featstr += '%.3f\t'%np.mean(feats[vesname][metric])
print(featstr)
```
| github_jupyter |
# Day and Night Image Classifier
---
The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.
We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!
*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).*
### Import resources
Before you get started on the project code, import the libraries and resources that you'll need.
```
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
```
## Training and Testing Data
The 200 day/night images are separated into training and testing datasets.
* 60% of these images are training images, for you to use as you create a classifier.
* 40% are test images, which will be used to test the accuracy of your classifier.
First, we set some variables to keep track of where our images are stored:
image_dir_training: the directory where our training image data is stored
image_dir_test: the directory where our test image data is stored
```
# Image data directories
image_dir_training = "day_night_images/training/"
image_dir_test = "day_night_images/test/"
```
## Load the datasets
These first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night").
For example, the first image-label pair in `IMAGE_LIST` can be accessed by index:
``` IMAGE_LIST[0][:]```.
```
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
```
## Construct a `STANDARDIZED_LIST` of input images and output labels.
This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
```
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
```
## Visualize the standardized data
Display a standardized image from STANDARDIZED_LIST.
```
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
```
# Feature Extraction
Create a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image.
## RGB to HSV conversion
Below, a test image is converted from RGB to HSV colorspace and each component is displayed in an image.
```
# Convert and image to HSV colorspace
# Visualize the individual color channels
image_num = 0
test_im = STANDARDIZED_LIST[image_num][0]
test_label = STANDARDIZED_LIST[image_num][1]
# Convert to HSV
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)
# Print image label
print('Label: ' + str(test_label))
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
# Plot the original image and the three channels
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('Standardized image')
ax1.imshow(test_im)
ax2.set_title('H channel')
ax2.imshow(h, cmap='gray')
ax3.set_title('S channel')
ax3.imshow(s, cmap='gray')
ax4.set_title('V channel')
ax4.imshow(v, cmap='gray')
```
---
### Find the average brightness using the V channel
This function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
```
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
## TODO: Calculate the average brightness using the area of the image
# and the sum calculated above
avg = 0
avg = sum_brightness/rgb_image.shape[0]/rgb_image.shape[1]
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
```
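Although this section stops at feature extraction, a minimal threshold classifier built on top of `avg_brightness` could look like the sketch below; the threshold of 100 is an assumed starting point to be tuned on the training images, not a value taken from this notebook:
```
# Hypothetical classifier: label an image as day (1) if its average brightness
# exceeds a chosen threshold, otherwise night (0)
def estimate_label(rgb_image, threshold=100):
    return 1 if avg_brightness(rgb_image) > threshold else 0

print(estimate_label(test_im))
```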
| github_jupyter |
<a href="https://colab.research.google.com/github/Abhishekauti21/dsmp-pre-work/blob/master/practice_project.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
```
class test:
def __init__(self,a):
self.a=a
def display(self):
print(self.a)
obj=test()
obj.display()
def f1():
x=100
print(x)
x=+1
f1()
area = { 'living' : [400, 450], 'living' : [650, 800], 'kitchen' : [300, 250], 'garage' : [250, 0]}
print (area['living'])
List_1=[2,6,7,8]
List_2=[2,6,7,8]
print(List_1[-2] + List_2[2])
d = {0: 'a', 1: 'b', 2: 'c'}
for x, y in d.items():
print(x, y)
Numbers=[10,5,7,8,9,5]
print(max(Numbers)-min(Numbers))
fo = open("foo.txt", "read+")
print("Name of the file: ", fo.name)
# Assuming file has following 5 lines
# This is 1st line
# This is 2nd line
# This is 3rd line
# This is 4th line
# This is 5th line
for index in range(5):
line = fo.readline()
print("Line No {} - {}".format(index, line))
#Close opened file
fo.close()
x = "abcdef"
while i in x:
print(i, end=" ")
def cube(x):
return x * x * x
x = cube(3)
print (x)
print(((True) or (False) and (False) or (False)))
x1=int('16')
x2=8 + 8
x3= (4**2)
print(x1 is x2 is x3)
Word='warrior knights' ,A=Word[9:14],B=Word[-13:-16:-1]
B+A
def to_upper(k):
return k.upper()
x = ['ab', 'cd']
print(list(map(to_upper, x)))
my_string = "hello world"
k = [(i.upper(), len(i)) for i in my_string]
print(k)
from csv import reader
def explore_data(dataset, start, end, rows_and_columns=False):
"""Explore the elements of a list.
    Print the elements of a list starting from the index 'start' (included) up to the index 'end' (excluded).
Keyword arguments:
dataset -- list of which we want to see the elements
start -- index of the first element we want to see, this is included
end -- index of the stopping element, this is excluded
    rows_and_columns -- this parameter is optional while calling the function. It takes binary values, either True or False. If True, print the dimensions of the list; otherwise, don't.
"""
dataset_slice = dataset[start:end]
for row in dataset_slice:
print(row)
print('\n') # adds a new (empty) line between rows
if rows_and_columns:
print('Number of rows:', len(dataset))
print('Number of columns:', len(dataset[0]))
def duplicate_and_unique_movies(dataset, index_):
"""Check the duplicate and unique entries.
    We have a nested list. This function checks whether each row in the list is unique or duplicated based on the element at index 'index_'.
    It prints the number of duplicate entries, along with some examples of duplicated entries.
Keyword arguments:
dataset -- two dimensional list which we want to explore
    index_ -- column index at which the element in each row is checked for duplication
"""
duplicate = []
unique = []
for movie in dataset:
name = movie[index_]
if name in unique:
duplicate.append(name)
else:
unique.append(name)
print('Number of duplicate Movies:', len(duplicate))
print('\n')
print('Examples of duplicate Movies:', duplicate[:15])
def movies_lang(dataset, index_, lang_):
"""Extract the movies of a particular language.
    Of all the movies available in all languages, this function extracts all the movies in a particular language.
    Once you have extracted the movies, call explore_data() to print the first few rows.
Keyword arguments:
dataset -- list containing the details of the movie
index_ -- index which is to be compared for langauges
lang_ -- desired language for which we want to filter out the movies
Returns:
movies_ -- list with details of the movies in selected language
"""
movies_ = []
    for movie in dataset:
lang = movie[index_]
if lang == lang_:
movies_.append(movie)
print("Examples of Movies in English Language:")
explore_data(movies_, 0, 3, True)
return movies_
def rate_bucket(dataset, rate_low, rate_high):
"""Extract the movies within the specified ratings.
This function extracts all the movies that has rating between rate_low and high_rate.
    Once you have extracted the movies, call explore_data() to print the first few rows.
Keyword arguments:
dataset -- list containing the details of the movie
rate_low -- lower range of rating
rate_high -- higher range of rating
Returns:
rated_movies -- list of the details of the movies with required ratings
"""
rated_movies = []
for movie in dataset:
vote_avg = float(movie[-4])
if ((vote_avg >= rate_low) & (vote_avg <= rate_high)):
rated_movies.append(movie)
print("Examples of Movies in required rating bucket:")
explore_data(rated_movies, 0, 3, True)
return rated_movies
# Read the data file and store it as a list 'movies'
opened_file = open(path, encoding="utf8")
read_file = reader(opened_file)
movies = list(read_file)
# The first row is header. Extract and store it in 'movies_header'.
movies_header = movies[0]
print("Movies Header:\n", movies_header)
# Subset the movies dataset such that the header is removed from the list and store it back in movies
movies = movies[1:]
# Delete wrong data
# Explore the row #4553. You will see that, apart from the id, description, status and title, no other information is available.
# Hence drop this row.
print("Entry at index 4553:")
explore_data(movies, 4553, 4554)
del movies[4553]
# Using explore_data() with appropriate parameters, view the details of the first 5 movies.
print("First 5 Entries:")
explore_data(movies, 0, 5, True)
# Our dataset might have more than one entry for a movie. Call duplicate_and_unique_movies() with index of the name to check the same.
duplicate_and_unique_movies(movies, 13)
# We saw that there are 3 movies for which there are multiple entries.
# Create a dictionary, 'reviews_max' that will have the name of the movie as key, and the maximum number of reviews as values.
reviews_max = {}
for movie in movies:
name = movie[13]
n_reviews = float(movie[12])
if name in reviews_max and reviews_max[name] < n_reviews:
reviews_max[name] = n_reviews
elif name not in reviews_max:
reviews_max[name] = n_reviews
len(reviews_max)
# Create a list 'movies_clean', which will filter out the duplicate movies and contain the rows with maximum number of reviews for duplicate movies, as stored in 'review_max'.
movies_clean = []
already_added = []
for movie in movies:
name = movie[13]
n_reviews = float(movie[12])
if (reviews_max[name] == n_reviews) and (name not in already_added):
movies_clean.append(movie)
already_added.append(name)
len(movies_clean)
# Calling movies_lang(), extract all the english movies and store it in movies_en.
movies_en = movies_lang(movies_clean, 3, 'en')
# Call the rate_bucket function to see the movies with rating higher than 8.
high_rated_movies = rate_bucket(movies_en, 8, 10)
```
| github_jupyter |
# Detecting COVID-19 with Chest X Ray using PyTorch
Image classification of chest X-rays into one of two classes: Non-COVID and COVID-19
Dataset from [COVID-19 Radiography Dataset](https://www.kaggle.com/tawsifurrahman/covid19-radiography-database) on Kaggle
# Importing Libraries
```
from google.colab import drive
drive.mount('/gdrive')
%matplotlib inline
import os
import shutil
import copy
import random
import torch
import torch.nn as nn
import torchvision
import torch.optim as optim
from torch.optim import lr_scheduler
import numpy as np
import seaborn as sns
import time
from sklearn.metrics import confusion_matrix
from PIL import Image
import matplotlib.pyplot as plt
torch.manual_seed(0)
print('Using PyTorch version', torch.__version__)
```
# Preparing Training and Test Sets
```
class_names = ['Non-Covid', 'Covid']
root_dir = '/gdrive/My Drive/Research_Documents_completed/Data/Data/'
source_dirs = ['non', 'covid']
```
# Creating Custom Dataset
```
class ChestXRayDataset(torch.utils.data.Dataset):
def __init__(self, image_dirs, transform):
def get_images(class_name):
images = [x for x in os.listdir(image_dirs[class_name]) if x.lower().endswith('png') or x.lower().endswith('jpg')]
print(f'Found {len(images)} {class_name} examples')
return images
self.images = {}
self.class_names = ['Non-Covid', 'Covid']
for class_name in self.class_names:
self.images[class_name] = get_images(class_name)
self.image_dirs = image_dirs
self.transform = transform
def __len__(self):
return sum([len(self.images[class_name]) for class_name in self.class_names])
def __getitem__(self, index):
class_name = random.choice(self.class_names)
index = index % len(self.images[class_name])
image_name = self.images[class_name][index]
image_path = os.path.join(self.image_dirs[class_name], image_name)
image = Image.open(image_path).convert('RGB')
return self.transform(image), self.class_names.index(class_name)
```
# Image Transformations
```
train_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.RandomHorizontalFlip(),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225])
])
test_transform = torchvision.transforms.Compose([
torchvision.transforms.Resize(size=(224, 224)),
torchvision.transforms.ToTensor(),
torchvision.transforms.Normalize([0.485, 0.456, 0.406], [0.229, 0.224, 0.225])
])
```
# Prepare DataLoader
```
train_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/covid/'
}
#train_dirs = {
# 'Non-Covid': '/gdrive/My Drive/Data/Data/non/',
# 'Covid': '/gdrive/My Drive/Data/Data/covid/'
#}
train_dataset = ChestXRayDataset(train_dirs, train_transform)
test_dirs = {
'Non-Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/non/',
'Covid': '/gdrive/My Drive/Research_Documents_completed/Data/Data/test/covid/'
}
test_dataset = ChestXRayDataset(test_dirs, test_transform)
batch_size = 25
dl_train = torch.utils.data.DataLoader(train_dataset, batch_size=batch_size, shuffle=True)
dl_test = torch.utils.data.DataLoader(test_dataset, batch_size=batch_size, shuffle=True)
print(dl_train)
print('Number of training batches', len(dl_train))
print('Number of test batches', len(dl_test))
```
# Data Visualization
```
class_names = train_dataset.class_names
def show_images(images, labels, preds):
plt.figure(figsize=(30, 20))
for i, image in enumerate(images):
plt.subplot(1, 25, i + 1, xticks=[], yticks=[])
image = image.numpy().transpose((1, 2, 0))
mean = np.array([0.485, 0.456, 0.406])
std = np.array([0.229, 0.224, 0.225])
image = image * std + mean
image = np.clip(image, 0., 1.)
plt.imshow(image)
col = 'green'
if preds[i] != labels[i]:
col = 'red'
plt.xlabel(f'{class_names[int(labels[i].numpy())]}')
plt.ylabel(f'{class_names[int(preds[i].numpy())]}', color=col)
plt.tight_layout()
plt.show()
images, labels = next(iter(dl_train))
show_images(images, labels, labels)
images, labels = next(iter(dl_test))
show_images(images, labels, labels)
```
# Creating the Model
```
resnet18 = torchvision.models.resnet18(pretrained=True)
print(resnet18)
resnet18.fc = torch.nn.Linear(in_features=512, out_features=2)
loss_fn = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(resnet18.parameters(), lr=3e-5)
print(resnet18)
def show_preds():
resnet18.eval()
images, labels = next(iter(dl_test))
outputs = resnet18(images)
_, preds = torch.max(outputs, 1)
show_images(images, labels, preds)
show_preds()
```
# Training the Model
```
def train(epochs):
best_model_wts = copy.deepcopy(resnet18.state_dict())
b_acc = 0.0
t_loss = []
t_acc = []
avg_t_loss=[]
avg_t_acc=[]
v_loss = []
v_acc=[]
avg_v_loss = []
avg_v_acc = []
ep = []
print('Starting training..')
for e in range(0, epochs):
ep.append(e+1)
print('='*20)
print(f'Starting epoch {e + 1}/{epochs}')
print('='*20)
train_loss = 0.
val_loss = 0.
train_accuracy = 0
total_train = 0
correct_train = 0
resnet18.train() # set model to training phase
for train_step, (images, labels) in enumerate(dl_train):
optimizer.zero_grad()
outputs = resnet18(images)
_, pred = torch.max(outputs, 1)
loss = loss_fn(outputs, labels)
loss.backward()
optimizer.step()
train_loss += loss.item()
train_loss /= (train_step + 1)
_, predicted = torch.max(outputs, 1)
total_train += labels.nelement()
correct_train += sum((predicted == labels).numpy())
train_accuracy = correct_train / total_train
t_loss.append(train_loss)
t_acc.append(train_accuracy)
if train_step % 20 == 0:
print('Evaluating at step', train_step)
print(f'Training Loss: {train_loss:.4f}, Training Accuracy: {train_accuracy:.4f}')
accuracy = 0.
resnet18.eval() # set model to eval phase
for val_step, (images, labels) in enumerate(dl_test):
outputs = resnet18(images)
loss = loss_fn(outputs, labels)
val_loss += loss.item()
_, preds = torch.max(outputs, 1)
accuracy += sum((preds == labels).numpy())
val_loss /= (val_step + 1)
accuracy = accuracy/len(test_dataset)
print(f'Validation Loss: {val_loss:.4f}, Validation Accuracy: {accuracy:.4f}')
v_loss.append(val_loss)
v_acc.append(accuracy)
show_preds()
resnet18.train()
if accuracy > b_acc:
b_acc = accuracy
avg_t_loss.append(sum(t_loss)/len(t_loss))
avg_v_loss.append(sum(v_loss)/len(v_loss))
avg_t_acc.append(sum(t_acc)/len(t_acc))
avg_v_acc.append(sum(v_acc)/len(v_acc))
best_model_wts = copy.deepcopy(resnet18.state_dict())
print('Best validation Accuracy: {:4f}'.format(b_acc))
print('Training complete..')
plt.plot(ep, avg_t_loss, 'g', label='Training loss')
plt.plot(ep, avg_v_loss, 'b', label='validation loss')
plt.title('Training and Validation loss for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Loss')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_loss.png')
plt.show()
plt.plot(ep, avg_t_acc, 'g', label='Training accuracy')
plt.plot(ep, avg_v_acc, 'b', label='validation accuracy')
plt.title('Training and Validation Accuracy for each epoch')
plt.xlabel('Epochs')
plt.ylabel('Accuracy')
plt.legend()
plt.savefig('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18_accuarcy.png')
plt.show()
torch.save(resnet18.state_dict(),'/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt')
%%time
train(epochs=5)
```
# Final Results
Training and validation loss vs. epoch
Training and validation accuracy vs. epoch
Best validation accuracy
```
show_preds()
```
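For completeness, here is a hedged sketch of how the checkpoint saved by `train()` could be reloaded for single-image inference. It reuses `test_transform` and `class_names` defined above; the image path `'example.png'` is a placeholder, not a file from the dataset.
```
# Rebuild the architecture and load the saved weights (path taken from train() above)
inference_model = torchvision.models.resnet18(pretrained=False)
inference_model.fc = torch.nn.Linear(in_features=512, out_features=2)
inference_model.load_state_dict(torch.load('/gdrive/My Drive/Research_Documents_completed/Resnet18_completed/resnet18.pt'))
inference_model.eval()

# 'example.png' is a placeholder path for a single chest X-ray image
image = Image.open('example.png').convert('RGB')
batch = test_transform(image).unsqueeze(0)  # add a batch dimension

with torch.no_grad():
    output = inference_model(batch)
    _, pred = torch.max(output, 1)
print('Predicted class:', class_names[int(pred)])
```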
| github_jupyter |
# Plotting Target Pixel Files with Lightkurve
## Learning Goals
By the end of this tutorial, you will:
- Learn how to download and plot target pixel files from the data archive using [Lightkurve](https://docs.lightkurve.org).
- Be able to plot the target pixel file background.
- Be able to extract and plot flux from a target pixel file.
## Introduction
The [*Kepler*](https://www.nasa.gov/mission_pages/kepler/main/index.html), [*K2*](https://www.nasa.gov/mission_pages/kepler/main/index.html), and [*TESS*](https://tess.mit.edu/) telescopes observe stars for long periods of time, from just under a month to four years. By doing so they observe how the brightnesses of stars change over time.
Pixels around targeted stars are cut out and stored as *target pixel files* at each observing cadence. In this tutorial, we will learn how to use Lightkurve to download and understand the different photometric data stored in a target pixel file, and how to extract flux using basic aperture photometry.
It is useful to read the accompanying tutorial discussing how to use target pixel file products with Lightkurve before starting this tutorial. It is recommended that you also read the tutorial on using *Kepler* light curve products with Lightkurve, which will introduce you to some specifics on how *Kepler*, *K2*, and *TESS* make observations, and how these are displayed as light curves. It also introduces some important terms and concepts that are referred to in this tutorial.
*Kepler* observed a single field in the sky, although not all stars in this field were recorded. Instead, pixels were selected around certain targeted stars. These cutout images are called target pixel files, or TPFs. By combining the amount of flux in the pixels where the star appears, you can make a measurement of the amount of light from a star in that observation. The pixels chosen to include in this measurement are referred to as an *aperture*.
TPFs are typically the first port of call when studying a star with *Kepler*, *K2*, or *TESS*. They allow us to see where our data is coming from, and identify potential sources of noise or systematic trends. In this tutorial, we will use the *Kepler* mission as the main example, but these tools equally apply to *TESS* and *K2* as well.
## Imports
This tutorial requires:
- **[Lightkurve](https://docs.lightkurve.org)** to work with TPF files.
- [**Matplotlib**](https://matplotlib.org/) for plotting.
```
import lightkurve as lk
import matplotlib.pyplot as plt
%matplotlib inline
```
## 1. Downloading a TPF
A TPF contains the original imaging data from which a light curve is derived. Besides the brightness data measured by the charge-coupled device (CCD) camera, a TPF also includes post-processing information such as an estimate of the astronomical background, and a recommended pixel aperture for extracting a light curve.
First, we download a target pixel file. We will use one quarter's worth of *Kepler* data for the star named [Kepler-8](http://www.openexoplanetcatalogue.com/planet/Kepler-8%20b/), a star somewhat larger than the Sun, and the host of a [hot Jupiter planet](https://en.wikipedia.org/wiki/Hot_Jupiter).
```
search_result = lk.search_targetpixelfile("Kepler-8", author="Kepler", quarter=4, cadence="long")
search_result
tpf = search_result.download()
```
This TPF contains data for every cadence in the quarter we downloaded. Let's focus on the first cadence for now, which we can select using zero-based indexing as follows:
```
first_cadence = tpf[0]
first_cadence
```
## 2. Flux and Background
At each cadence the TPF has a number of photometry data properties. These are:
- `flux_bkg`: the astronomical background of the image.
- `flux_bkg_err`: the statistical uncertainty on the background flux.
- `flux`: the stellar flux after the background is removed.
- `flux_err`: the statistical uncertainty on the stellar flux after background removal.
These properties can be accessed via a TPF object as follows:
```
first_cadence.flux.value
```
And you can plot the data as follows:
```
first_cadence.plot(column='flux');
```
Alternatively, if you are working directly with a FITS file, you can access the data in extension 1 (for example, `first_cadence.hdu[1].data['FLUX']`). Note that you can find all of the details on the structure and contents of TPF files in Section 2.3.2 of the [*Kepler* Archive Manual](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/archive_manual.pdf).
When plotting data using the `plot()` function, what you are seeing in the TPF is the flux *after* the background has been removed. This background flux typically consists of [zodiacal light](https://en.wikipedia.org/wiki/Zodiacal_light) or earthshine (especially in *TESS* observations). The background is typically smooth and changes on scales much larger than a single TPF. In *Kepler*, the background is estimated for the CCD as a whole, before being extracted from each TPF in that CCD. You can learn more about background removal in Section 4.2 of the [*Kepler* Data Processing Handbook](http://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf).
Now, let's compare the background to the background-subtracted flux to get a sense of scale. We can do this using the `plot()` function's `column` keyword. By default the function plots the flux, but we can change this to plot the background, as well as other data such as the error on each pixel's flux.
```
fig, axes = plt.subplots(2,2, figsize=(16,16))
first_cadence.plot(ax=axes[0,0], column='FLUX')
first_cadence.plot(ax=axes[0,1], column='FLUX_BKG')
first_cadence.plot(ax=axes[1,0], column='FLUX_ERR')
first_cadence.plot(ax=axes[1,1], column='FLUX_BKG_ERR');
```
From looking at the color scale on both plots, you may see that the background flux is very low compared to the total flux emitted by a star. This is expected: stars are bright! But these small background corrections become important when looking at the very small scale changes caused by planets or stellar oscillations. Understanding the background is an important part of astronomy with *Kepler*, *K2*, and *TESS*.
If the background is particularly bright and you want to see what the TPF looks like with it included, passing the `bkg=True` argument to the `plot()` method will show the TPF with the flux added on top of the background, representing the total flux recorded by the spacecraft.
```
first_cadence.plot(bkg=True);
```
In this case, the background is low and the star is bright, so it doesn't appear to make much of a difference.
## 3. Apertures
As part of the data processing done by the *Kepler* pipeline, each TPF includes a recommended *optimal aperture mask*. This aperture mask is optimized to ensure that the stellar signal has a high signal-to-noise ratio, with minimal contamination from the background.
The optimal aperture is stored in the TPF as the `pipeline_mask` property. We can have a look at it by calling it here:
```
first_cadence.pipeline_mask
```
As you can see, it is a Boolean array detailing which pixels are included. We can plot this aperture over the top of our TPF using the `plot()` function, and passing in the mask to the `aperture_mask` keyword. This will highlight the pixels included in the aperture mask using red hatched lines.
```
first_cadence.plot(aperture_mask=first_cadence.pipeline_mask);
```
You don't necessarily have to pass in the `pipeline_mask` to the `plot()` function; it can be any mask you create yourself, provided it is the right shape. An accompanying tutorial explains how to create such custom apertures, and goes into aperture photometry in more detail. For specifics on the selection of *Kepler*'s optimal apertures, read the [*Kepler* Data Processing Handbook](https://archive.stsci.edu/files/live/sites/mast/files/home/missions-and-data/kepler/_documents/KSCI-19081-002-KDPH.pdf), Section 7, *Finding Optimal Apertures in Kepler Data*.
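For a quick taste of that (the accompanying tutorial covers it properly), Lightkurve can also build a custom aperture by thresholding: `create_threshold_mask()` selects pixels that sit a given number of standard deviations above the median flux. The threshold of 3 below is just an illustrative choice.
```
# Build a custom aperture mask from bright pixels and overplot it
custom_mask = first_cadence.create_threshold_mask(threshold=3)
first_cadence.plot(aperture_mask=custom_mask);
```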
## 4. Simple Aperture Photometry
Finally, let's learn how to perform simple aperture photometry (SAP) using the provided optimal aperture in `pipeline_mask` and the TPF.
Using the full TPF for all cadences in the quarter, we can perform aperture photometry using the `to_lightcurve()` method as follows:
```
lc = tpf.to_lightcurve()
```
This method returns a `LightCurve` object which details the flux and flux centroid position at each cadence:
```
lc
```
Note that this [`KeplerLightCurve`](https://docs.lightkurve.org/api/lightkurve.lightcurve.KeplerLightCurve.html) object has fewer data columns than in light curves downloaded directly from MAST. This is because we are extracting our light curve directly from the TPF using minimal processing, whereas light curves created using the official pipeline include more processing and more columns.
We can visualize the light curve as follows:
```
lc.plot();
```
This light curve is similar to the SAP light curve we previously encountered in the light curve tutorial.
### Note
The background flux can be plotted in a similar way, using the [`get_bkg_lightcurve()`](https://docs.lightkurve.org/api/lightkurve.targetpixelfile.KeplerTargetPixelFile.html#lightkurve.targetpixelfile.KeplerTargetPixelFile.get_bkg_lightcurve) method. This does not require an aperture, but instead sums the flux in the TPF's `FLUX_BKG` column at each timestamp.
```
bkg = tpf.get_bkg_lightcurve()
bkg.plot();
```
Inspecting the background in this way is useful to identify signals which appear to be present in the background rather than in the astronomical object under study.
---
## Exercises
Some stars, such as the planet-hosting star Kepler-10, have been observed both with *Kepler* and *TESS*. In this exercise, download and plot both the *TESS* and *Kepler* TPFs, along with the optimal apertures. You can do this by either selecting the TPFs from the list returned by [`search_targetpixelfile()`](https://docs.lightkurve.org/api/lightkurve.search.search_targetpixelfile.html), or by using the `mission` keyword argument when searching.
Both *Kepler* and *TESS* produce target pixel file data products, but these can look different across the two missions. *TESS* is focused on brighter stars and has larger pixels, so a star that might occupy many pixels in *Kepler* may only occupy a few in *TESS*.
How do light curves extracted from both of them compare?
```
#datalist = lk.search_targetpixelfile(...)
#soln:
datalist = lk.search_targetpixelfile("Kepler-10")
datalist
kep = datalist[6].download()
tes = datalist[15].download()
fig, axes = plt.subplots(1, 2, figsize=(14,6))
kep.plot(ax=axes[0], aperture_mask=kep.pipeline_mask, scale='log')
tes.plot(ax=axes[1], aperture_mask=tes.pipeline_mask)
fig.tight_layout();
lc_kep = kep.to_lightcurve()
lc_tes = tes.to_lightcurve()
fig, axes = plt.subplots(1, 2, figsize=(14,6), sharey=True)
lc_kep.flatten().plot(ax=axes[0], c='k', alpha=.8)
lc_tes.flatten().plot(ax=axes[1], c='k', alpha=.8);
```
If you plot the light curves for both missions side by side, you will see a stark difference. The *Kepler* data has a much smaller scatter, and repeating transits are visible. This is because *Kepler*'s pixels were smaller, and so could achieve a higher precision on fainter stars. *TESS* has larger pixels and therefore focuses on brighter stars. For stars like Kepler-10, it would be hard to detect a planet using *TESS* data alone.
## About this Notebook
**Authors:** Oliver Hall ([email protected]), Geert Barentsen
**Updated On**: 2020-09-15
## Citing Lightkurve and Astropy
If you use `lightkurve` or `astropy` for published research, please cite the authors. Click the buttons below to copy BibTeX entries to your clipboard.
lk.show_citation_instructions()
<img style="float: right;" src="https://raw.githubusercontent.com/spacetelescope/notebooks/master/assets/stsci_pri_combo_mark_horizonal_white_bkgd.png" alt="Space Telescope Logo" width="200px"/>
| github_jupyter |
##### Copyright 2018 The TensorFlow Authors. [Licensed under the Apache License, Version 2.0](#scrollTo=ByZjmtFgB_Y5).
```
// #@title Licensed under the Apache License, Version 2.0 (the "License"); { display-mode: "form" }
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// https://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.
```
<table class="tfo-notebook-buttons" align="left">
<td>
<a target="_blank" href="https://www.tensorflow.org/swift/tutorials/python_interoperability"><img src="https://www.tensorflow.org/images/tf_logo_32px.png" />View on TensorFlow.org</a>
</td>
<td>
<a target="_blank" href="https://colab.research.google.com/github/tensorflow/swift/blob/main/docs/site/tutorials/python_interoperability.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" />Run in Google Colab</a>
</td>
<td>
<a target="_blank" href="https://github.com/tensorflow/swift/blob/main/docs/site/tutorials/python_interoperability.ipynb"><img src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" />View source on GitHub</a>
</td>
</table>
# Python interoperability
Swift For TensorFlow supports Python interoperability.
You can import Python modules from Swift, call Python functions, and convert values between Swift and Python.
```
import PythonKit
print(Python.version)
```
## Setting the Python version
By default, when you `import Python`, Swift searches system library paths for the newest version of Python installed.
To use a specific Python installation, set the `PYTHON_LIBRARY` environment variable to the `libpython` shared library provided by the installation. For example:
`export PYTHON_LIBRARY="~/anaconda3/lib/libpython3.7m.so"`
The exact filename will differ across Python environments and platforms.
Alternatively, you can set the `PYTHON_VERSION` environment variable, which instructs Swift to search system library paths for a matching Python version. Note that `PYTHON_LIBRARY` takes precedence over `PYTHON_VERSION`.
In code, you can also call the `PythonLibrary.useVersion` function, which is equivalent to setting `PYTHON_VERSION`.
```
// PythonLibrary.useVersion(2)
// PythonLibrary.useVersion(3, 7)
```
__Note: you should run `PythonLibrary.useVersion` right after `import Python`, before calling any Python code. It cannot be used to dynamically switch Python versions.__
Set `PYTHON_LOADER_LOGGING=1` to see [debug output for Python library loading](https://github.com/apple/swift/pull/20674#discussion_r235207008).
## Basics
In Swift, `PythonObject` represents an object from Python.
All Python APIs use and return `PythonObject` instances.
Basic types in Swift (like numbers and arrays) are convertible to `PythonObject`. In some cases (for literals and functions taking `PythonConvertible` arguments), conversion happens implicitly. To explicitly cast a Swift value to `PythonObject`, use the `PythonObject` initializer.
`PythonObject` defines many standard operations, including numeric operations, indexing, and iteration.
```
// Convert standard Swift types to Python.
let pythonInt: PythonObject = 1
let pythonFloat: PythonObject = 3.0
let pythonString: PythonObject = "Hello Python!"
let pythonRange: PythonObject = PythonObject(5..<10)
let pythonArray: PythonObject = [1, 2, 3, 4]
let pythonDict: PythonObject = ["foo": [0], "bar": [1, 2, 3]]
// Perform standard operations on Python objects.
print(pythonInt + pythonFloat)
print(pythonString[0..<6])
print(pythonRange)
print(pythonArray[2])
print(pythonDict["bar"])
// Convert Python objects back to Swift.
let int = Int(pythonInt)!
let float = Float(pythonFloat)!
let string = String(pythonString)!
let range = Range<Int>(pythonRange)!
let array: [Int] = Array(pythonArray)!
let dict: [String: [Int]] = Dictionary(pythonDict)!
// Perform standard operations.
// Outputs are the same as Python!
print(Float(int) + float)
print(string.prefix(6))
print(range)
print(array[2])
print(dict["bar"]!)
```
`PythonObject` defines conformances to many standard Swift protocols:
* `Equatable`
* `Comparable`
* `Hashable`
* `SignedNumeric`
* `Strideable`
* `MutableCollection`
* All of the `ExpressibleBy_Literal` protocols
Note that these conformances are not type-safe: crashes will occur if you attempt to use protocol functionality from an incompatible `PythonObject` instance.
```
let one: PythonObject = 1
print(one == one)
print(one < one)
print(one + one)
let array: PythonObject = [1, 2, 3]
for (i, x) in array.enumerated() {
print(i, x)
}
```
To convert tuples from Python to Swift, you must statically know the arity of the tuple.
Call one of the following instance methods:
- `PythonObject.tuple2`
- `PythonObject.tuple3`
- `PythonObject.tuple4`
```
let pythonTuple = Python.tuple([1, 2, 3])
print(pythonTuple, Python.len(pythonTuple))
// Convert to Swift.
let tuple = pythonTuple.tuple3
print(tuple)
```
## Python builtins
Access Python builtins via the global `Python` interface.
```
// `Python.builtins` is a dictionary of all Python builtins.
_ = Python.builtins
// Try some Python builtins.
print(Python.type(1))
print(Python.len([1, 2, 3]))
print(Python.sum([1, 2, 3]))
```
## Importing Python modules
Use `Python.import` to import a Python module. It works like the `import` keyword in `Python`.
```
let np = Python.import("numpy")
print(np)
let ones = np.ones([2, 3])
print(ones)
```
Use the throwing function `Python.attemptImport` to perform safe importing.
```
let maybeModule = try? Python.attemptImport("nonexistent_module")
print(maybeModule)
```
## Conversion with `numpy.ndarray`
The following Swift types can be converted to and from `numpy.ndarray`:
- `Array<Element>`
- `ShapedArray<Scalar>`
- `Tensor<Scalar>`
Conversion succeeds only if the `dtype` of the `numpy.ndarray` is compatible with the `Element` or `Scalar` generic parameter type.
For `Array`, conversion from `numpy` succeeds only if the `numpy.ndarray` is 1-D.
```
import TensorFlow
let numpyArray = np.ones([4], dtype: np.float32)
print("Swift type:", type(of: numpyArray))
print("Python type:", Python.type(numpyArray))
print(numpyArray.shape)
// Examples of converting `numpy.ndarray` to Swift types.
let array: [Float] = Array(numpy: numpyArray)!
let shapedArray = ShapedArray<Float>(numpy: numpyArray)!
let tensor = Tensor<Float>(numpy: numpyArray)!
// Examples of converting Swift types to `numpy.ndarray`.
print(array.makeNumpyArray())
print(shapedArray.makeNumpyArray())
print(tensor.makeNumpyArray())
// Examples with different dtypes.
let doubleArray: [Double] = Array(numpy: np.ones([3], dtype: np.float))!
let intTensor = Tensor<Int32>(numpy: np.ones([2, 3], dtype: np.int32))!
```
## Displaying images
You can display images in-line using `matplotlib`, just like in Python notebooks.
```
// This cell is here to display plots inside a Jupyter Notebook.
// Do not copy it into another environment.
%include "EnableIPythonDisplay.swift"
print(IPythonDisplay.shell.enable_matplotlib("inline"))
let np = Python.import("numpy")
let plt = Python.import("matplotlib.pyplot")
let time = np.arange(0, 10, 0.01)
let amplitude = np.exp(-0.1 * time)
let position = amplitude * np.sin(3 * time)
plt.figure(figsize: [15, 10])
plt.plot(time, position)
plt.plot(time, amplitude)
plt.plot(time, -amplitude)
plt.xlabel("Time (s)")
plt.ylabel("Position (m)")
plt.title("Oscillations")
plt.show()
```
| github_jupyter |
```
import numpy as np
import pandas as pd
import scipy as sp
from scipy import sparse
import nltk
from nltk.corpus import stopwords
from nltk.stem.porter import PorterStemmer
import string
import re
import glob
from sklearn.linear_model import LogisticRegression
from sklearn.feature_extraction.text import CountVectorizer, FeatureHasher
import keras
from keras.optimizers import Adam
from keras.callbacks import ModelCheckpoint
from keras.layers import Dense, Embedding, LSTM, Dropout
from keras.models import Sequential
import matplotlib.pyplot as plt
print('Keras version: %s' % keras.__version__)
PATH = "data/aclImdb"
# or use nltk or spacy
htmltag = re.compile(r'<.*?>')
numbers = re.compile(r'[0-9]')
quotes = re.compile(r'\"|`')
punctuation = re.compile(r'([%s])'% string.punctuation)
english_stopwords =set(stopwords.words('english'))
stemmer = PorterStemmer()
# read files in the given tree, using subfolders as the target classes
def read_files(folder, subfolders):
corpus, labels = [], []
for index, label in enumerate(subfolders):
path = '/'.join([folder, label, '*.txt'])
for filename in glob.glob(path):
corpus.append(open(filename, 'r').read())
labels.append(index)
    return corpus, np.array(labels).astype(int)
# pre-processor
def preprocess(s):
# lowercase
s = s.lower()
# remove html tags
s = htmltag.sub(' ', s)
# remove numbers
s = numbers.sub(' ', s)
# remove quotes
s = quotes.sub(' ', s)
# replace puctuation
s = punctuation.sub(' ', s)
return s
# tokenization
def tokenize(s):
# use a serious tokenizer
tokens = nltk.word_tokenize(s)
# remove stopwords
tokens = filter(lambda w: not w in english_stopwords, tokens)
# stem words
tokens = [stemmer.stem(token) for token in tokens]
return tokens
#coprus_train_pos = [open(filename, 'r').read() for filename in glob.glob(PATH + '/train/pos/*.txt')]
#coprus_train_neg = [open(filename, 'r').read() for filename in glob.glob(PATH + '/train/neg/*.txt')]
corpus_train, y_train = read_files(PATH + '/train', ['neg', 'pos'])
corpus_test, y_test = read_files(PATH + '/test', ['neg', 'pos'])
len(corpus_train), len(y_train), corpus_train[0], y_train[0], corpus_train[24999], y_train[24999]
len(corpus_test), len(y_test), corpus_test[0], y_test[0]
vectorizer = CountVectorizer(preprocessor=preprocess, tokenizer=tokenize)
term_doc_train = vectorizer.fit_transform(corpus_train)
term_doc_test = vectorizer.transform(corpus_test)
vocab = vectorizer.get_feature_names()
vocab[100:102]
vocab_size = len(vocab)
h = FeatureHasher(n_features=10, input_type='string')
f = h.fit_transform(['q', 'w'])
f.shape, f.toarray()
term_doc_train[0]
term_doc_train[100].toarray()
vectorizer.vocabulary_['cool']
# Multinomial Naive Bayes
alpha = 0.1 # smoothing parameter
class MultinomialNaiveBayes():
"""
Arguments:
alpha: smoothing parameter
"""
def __init__(self, alpha=0.1):
self.b = 0
self.r = 0
self.alpha = alpha
def fit(self, X, y):
# bias
        N_pos = (y==1).sum()
        N_neg = (y==0).sum()
self.b = np.log(N_pos / N_neg)
        # count of occurrences for every token in vocabulary as they appear in positive samples
        p = self.alpha + X[y==1].sum(axis=0)
p_l1 = np.linalg.norm(p, ord=1) # L1 norm
        # count of occurrences for every token in vocabulary as they appear in negative samples
        q = self.alpha + X[y==0].sum(axis=0)
q_l1 = np.linalg.norm(q, ord=1) # L1 norm
# log count ratio
self.r = np.log((p/p_l1) / (q/q_l1))
#self.r = sp.sparse.csr_matrix(self.r.T)
return self.r, self.b
def predict(self, X):
y_pred = np.sign(sp.sparse.csr_matrix.dot(X, self.r.T) + self.b)
y_pred[y_pred==-1] = 0
return y_pred
def score(self, X, y):
y_predict = self.predict(X)
y_reshaped = np.reshape(y, y_predict.shape)
return (y_reshaped == y_predict).mean()
model = MultinomialNaiveBayes()
r, b = model.fit(term_doc_train, y_train)
b, r.shape, term_doc_train.shape
term_doc_train.shape, r.shape, term_doc_train[0], r
# accuracy on training set
y_pred = model.predict(term_doc_train)
#y_train = np.reshape(y_train, (25000, 1))
(np.reshape(y_train, (25000, 1)) == y_pred).mean()
# accuracy on validation set
y_pred2 = model.predict(term_doc_test)
#y_test = np.reshape(y_test, (25000, 1))
(np.reshape(y_test, (25000, 1)) == y_pred2).mean()
# now let's binary term document
term_doc_train = term_doc_train.sign() # turn everything into 1 or 0
term_doc_test = term_doc_test.sign() # turn everything into 1 or 0
term_doc_train.shape, term_doc_test.shape
model = MultinomialNaiveBayes()
model.fit(term_doc_train, y_train)
accuracy_train = model.score(term_doc_train, y_train)
accuracy_test = model.score(term_doc_test, y_test)
accuracy_train, accuracy_test
term_doc_train.shape, y_train.shape, term_doc_train[y_train==0].sum(axis=0).shape, term_doc_train[y_train==1].sum(axis=0).shape
(y_train==0).shape, (y_train==1).shape, y_pred.shape
# now with plain logistic regression
model = LogisticRegression()
model.fit(term_doc_train, y_train)
# accuracy on training
y_pred = model.predict(term_doc_train)
accuracy_train = (y_train == y_pred).mean()
# accuracy on validation
y_pred = model.predict(term_doc_test)
accuracy_test = (y_test == y_pred).mean()
accuracy_train, accuracy_test
# now with regularized logistic regression
model = LogisticRegression(C=0.01, dual=True)
model.fit(term_doc_train, y_train)
# accuracy on training
y_pred = model.predict(term_doc_train)
accuracy_train = (y_train == y_pred).mean()
# accuracy on validation
y_pred = model.predict(term_doc_test)
accuracy_test = (y_test == y_pred).mean()
accuracy_train, accuracy_test
# now combining Naive Bayes and Logistic Regression
"""
class NBLR(keras.Model):
def __init__(self):
super(NBLR, self).__init__(name='NBLR')
self.softmax = keras.layers.Activation('softmax')
def call(self, inputs):
out = self.softmax(inputs)
return out
model = NBLR()
model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['accuracy'])
losses = model.fit(x=term_doc_train, y=y_train)
"""
```
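The commented-out Keras class above was left unfinished. As a point of reference (not necessarily what was intended here), a common way to combine the two models is to scale the binarized term-document matrix by the Naive Bayes log-count ratio `r` and fit a regularized logistic regression on those weighted features; the sketch below follows that assumption and reuses the classes and matrices defined above.
```
# Hedged sketch: logistic regression on Naive Bayes-weighted features (NB-LR)
nb = MultinomialNaiveBayes()
r_nb, b_nb = nb.fit(term_doc_train, y_train)

r_dense = np.asarray(r_nb)                          # 1 x vocab_size log-count ratios
X_train_nb = term_doc_train.multiply(r_dense).tocsr()
X_test_nb = term_doc_test.multiply(r_dense).tocsr()

nblr = LogisticRegression(C=0.1, dual=True)         # same regularized setup as above
nblr.fit(X_train_nb, y_train)
print('train accuracy:', nblr.score(X_train_nb, y_train))
print('test accuracy:', nblr.score(X_test_nb, y_test))
```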
| github_jupyter |
## LDA
The graphical model representation of LDA is given blow:
<img src="figures/LDA.png">
The basic idea of LDA is that documents are represented as random mixtures over latent topics, where each topic is characterized by a distribution over words.
LDA assumes the following generative process for each document $\mathbf{w}$ in a corpus $\mathcal{D}$:
1. Choose $N\sim$ Poisson($\xi$).
2. Choose $\theta \sim $Dir($\alpha$).
3. For each of the $N$ words $w_n$:
a) Choose a topic $z_n\sim$ Multinomial($\theta$)
b) Choose a word $w_n$ from $p(w_n|z_n, \beta)$, a multinomial probability conditioned on the topic $z_n$.
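To make the generative story concrete, here is a small NumPy simulation of the process for a single document; the topic count, vocabulary size, Poisson mean, and Dirichlet parameters below are arbitrary illustrative choices, not values from these notes.
```
import numpy as np

rng = np.random.default_rng(0)
k, V, xi = 3, 10, 8                       # number of topics, vocabulary size, Poisson mean (assumed)
alpha = np.ones(k)                        # symmetric Dirichlet prior (assumed)
beta = rng.dirichlet(np.ones(V), size=k)  # k x V topic-word distributions (assumed)

N = rng.poisson(xi)                       # 1. document length
theta = rng.dirichlet(alpha)              # 2. topic mixture for this document
words = []
for n in range(N):                        # 3. for each word position
    z_n = rng.choice(k, p=theta)          #    a) choose a topic
    w_n = rng.choice(V, p=beta[z_n])      #    b) choose a word from that topic
    words.append(w_n)
print(theta, words)
```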
Several simplifying assumptions are made in this basic model:
1. The dimensionality $k$ of the Dirichlet distribution (and thus the dimensionality of the topic variable $z$) is assumed known and fixed.
2. The word probabilities are parameterized by a $k\times V$ matrix $\beta$ where $\beta_{ij} = p(w^j=1 | z^i=1)$, which for now we treat as a fixed quantity that is to be estimated.
3. The Poisson assumption is not critical to anything that follows and more realistic document length distributions can be used as needed.
A $k$-dimensional Dirichlet random variable $\theta$ can take values in the $(k-1)$-simplex (a $k$-vector $\theta$ lies in the $(k-1)$-simplex if $\theta_i\ge 0$ and $\sum_{i=1}^k\theta_i=1$), and has the following probability density on this simplex:
$$
p(\theta|\alpha) = \frac{\Gamma(\sum_{i=1}^k\alpha_i)}{\prod_{i=1}^k\Gamma(\alpha_i)}\theta_1^{\alpha_1-1}\cdots\theta_k^{\alpha_k-1},
$$
where the parameter $\alpha$ is a $k$-vector with components $\alpha_i > 0$, and where $\Gamma(x)$ is the Gamma function.
Given the parameters $\alpha$ and $\beta$, the joint distribution of a topic mixture $\theta$, a set of $N$ topics $\mathbf{z}$, and a set of $N$ words $\mathbf{w}$ is given by:
$$
p(\theta, \mathbf{z}, \mathbf{w}|\alpha, \beta) = p(\theta | \alpha) \prod_{n=1}^N p(z_n|\theta)p(w_n|z_n,\beta),
$$
where $p(z_n|\theta)$ is simply $\theta_i$ for the unique $i$ such that $z_n^i = 1$. Integrating over $\theta$ and summing over $z$, we obtain the marginal distribution of a document:
$$
p(\mathbf{w}|\alpha,\beta) = \int p(\theta|\alpha)\left(\prod_{n=1}^N\sum_{z_n}p(z_n|\theta) p(w_n|z_n,\beta)\right)d\theta.
$$
Finally, taking the product of the marginal probabilities of single documents, we obtain the probability of a corpus:
$$
p(\mathcal{D}|\alpha,\beta) = \prod_{d=1}^M\int p(\theta_d|\alpha)\left(\prod_{n=1}^{N_d}\sum_{z_{d_n}}p(z_{d_n}|\theta_d)p(w_{d_n}| z_{d_n},\beta)\right)d\theta_d.
$$
There are **three** levels to the LDA representation. The parameters $\alpha$ and $\beta$ are corpus-level parameters, assumed to be sampled once in the process of generating a corpus. The variables $\theta_d$ are document-level variables, sampled once per document. Finally, the variables $z_{dn}$ and $w_{dn}$ are word-level variables and are sampled once for each word in each document.
Note the topic node is sampled *repeatedly* within the document. Under LDA, documents can be associated with multiple topics.
### Inference
The key inference problem that we need to solve in order to use LDA is that of computing the posterior distribution of the hidden variables given a document:
$$
p(\theta, \mathbf{z}|\mathbf{w}, \alpha, \beta) = \frac{p(\theta, \mathbf{z}, \mathbf{w}| \alpha, \beta)}{p(\mathbf{w}|\alpha, \beta)}.
$$
Unfortunately, this distribution is intractable to compute in general. We can however use a variety of approximate inference algorithms.
### Variational Inference
A convexity-based variational algorithm can be used for approximate inference in LDA.
Basic idea:
- Use Jensen's inequality to obtain an adjustable lower bound on the log likelihood.
- Index the family of lower bounds by free variational parameters, and tighten the bound by minimizing the KL divergence between the variational distribution and the true posterior (the bound is written out below).
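Concretely, introduce a variational distribution $q(\theta, \mathbf{z}\,|\,\gamma, \phi)$ with free parameters $\gamma, \phi$ (this notation is not used elsewhere in these notes and is added here for illustration). Jensen's inequality then gives
$$
\log p(\mathbf{w}|\alpha,\beta) \ge E_q[\log p(\theta,\mathbf{z},\mathbf{w}|\alpha,\beta)] - E_q[\log q(\theta,\mathbf{z})] = L(\gamma,\phi;\alpha,\beta),
$$
and the gap between the two sides is exactly the KL divergence between $q(\theta,\mathbf{z}|\gamma,\phi)$ and the true posterior $p(\theta,\mathbf{z}|\mathbf{w},\alpha,\beta)$, so maximizing the lower bound $L$ with respect to $(\gamma,\phi)$ is equivalent to minimizing that KL divergence.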
## Probabilistic latent semantic indexing
This model posits that a document label $d$ and a word $w_n$ are conditionally independent given an unobserved topic $z$:
$$
p(d, w_n) = p(d)\sum_{z}p(w_n|z)p(z|d).
$$
The pLSI model attempts to relax the simplifying assumption made in the mixture of unigrams model that each document is generated from only one topic. In a sense, it does capture the possibility that a document may contain multiple topics since $p(z|d)$ serves as the mixture weights of the topics for a particular document $d$.
However, we need to note several problems:
1. $d$ is a dummy index into the list of documents in the *training set*. Thus, $d$ is a multinomial random variable with as many possible values as there are training documents and the model learns the topic mixtures $p(z|d)$ only for those documents on which it is trained.
2. A related problem, which also stems from the use of a distribution indexed by training documents, is that the number of parameters which must be estimated grows linearly with the number of training documents. The parameters for a $k$-topic pLSI model are $k$ multinomial distributions of size $V$ and $M$ mixtures over the $k$ hidden topics. This gives $kV + kM$ parameters and therefore linear growth in $M$.
$\therefore$ pLSI is not a well-defined generative model of documents; there is no natural way to use it to assign probability to a previously unseen document. Also, this linear growth in parameters suggests that the model is prone to overfitting.
The principal advantages of generative models such as LDA include their modularity and their extensibility. As a probabilistic module, LDA can be readily embedded in a more complex model.
LDA overcomes both problems of pLSI by treating the topic mixture weights as a $k$-parameter hidden *random variable* rather than a large set of individual parameters which are explicitly linked to the training set.
| github_jupyter |
# Example 1: Sandstone Model
```
# Importing
import theano.tensor as T
import theano
import sys, os
sys.path.append("../GeMpy")
sys.path.append("../")
# Importing GeMpy modules
import gempy as GeMpy
# Reloading (only for development purposes)
import importlib
importlib.reload(GeMpy)
# Usuful packages
import numpy as np
import pandas as pn
import matplotlib.pyplot as plt
# This was to choose the gpu
os.environ['CUDA_LAUNCH_BLOCKING'] = '1'
# Default options of printing
np.set_printoptions(precision = 6, linewidth= 130, suppress = True)
#%matplotlib inline
%matplotlib inline
# Importing the data from csv files and settign extent and resolution
geo_data = GeMpy.create_data([696000,747000,6863000,6950000,-20000, 200],[ 50, 50, 50],
path_f = os.pardir+"/input_data/a_Foliations.csv",
path_i = os.pardir+"/input_data/a_Points.csv")
# Assigning series to formations as well as their order (timewise)
GeMpy.set_data_series(geo_data, {"EarlyGranite_Series":geo_data.formations[-1],
"BIF_Series":(geo_data.formations[0], geo_data.formations[1]),
"SimpleMafic_Series":geo_data.formations[2]},
order_series = ["EarlyGranite_Series",
"BIF_Series",
"SimpleMafic_Series"], verbose=0)
GeMpy.data_to_pickle(geo_data, 'sandstone')
inter = GeMpy.InterpolatorInput(geo_data)
inter.interpolator.tg.n_formation.get_value()
import numpy as np
np.zeros((100,0))
100000/1000
GeMpy.plot_data(geo_data)
geo_data.formations
di = GeMpy.InterpolatorInput(geo_data)
di.data.get_formation_number()
geo_data_s = GeMpy.select_series(geo_data, ['EarlyGranite_Series'])
# Preprocessing data to interpolate: This rescales the coordinates between 0 and 1 for stability issues.
# Here we can choose also the drift degree (in new updates I will change it to be possible to change the
# grade after compilation). From here we can set also the data type of the operations in case you want to
# use the GPU. Verbose is huge. There is a large list of strings that select what you want to print while
# the computation.
data_interp = GeMpy.set_interpolator(geo_data,
dtype="float32",
verbose=[])
# This cell will go to the backend
# Set all the theano shared parameters and return the symbolic variables (the input of the theano function)
input_data_T = data_interp.interpolator.tg.input_parameters_list()
# Prepare the input data (interfaces, foliations data) to call the theano function.
#Also set a few theano shared variables with the len of formations series and so on
input_data_P = data_interp.interpolator.data_prep()
# Compile the theano function.
debugging = theano.function(input_data_T, data_interp.interpolator.tg.whole_block_model(), on_unused_input='ignore',
allow_input_downcast=True, profile=True)
%%timeit
# Solve model calling the theano function
sol = debugging(input_data_P[0], input_data_P[1], input_data_P[2], input_data_P[3],input_data_P[4], input_data_P[5])
lith = sol[-1,0,:]
np.save('sandstone_lith', lith)
a = geo_data.grid.grid[:,0].astype(bool)
a2 = a.reshape(50,50,50)
a2[:,:,0]
geo_data.grid.grid
50*50
geo_data.data_to_pickle('sandstone')
a2 = a1[:2500]
g = geo_data.grid.grid
h = geo_data.grid.grid[:2500]
%%timeit
eu(g,h)
def squared_euclidean_distances(x_1, x_2):
    """
    Compute the euclidean distances in 3D between all the points in x_1 and x_2
    Args:
        x_1 (theano.tensor.matrix): shape n_points x number dimension
        x_2 (theano.tensor.matrix): shape n_points x number dimension
    Returns:
        theano.tensor.matrix: Distances matrix. shape n_points x n_points
    """
    # T.maximum avoids negative numbers, increasing numerical stability
    sqd = T.sqrt(T.maximum(
        (x_1**2).sum(1).reshape((x_1.shape[0], 1)) +
        (x_2**2).sum(1).reshape((1, x_2.shape[0])) -
        2 * x_1.dot(x_2.T), 0
    ))
    return sqd
x_1 = T.matrix()
x_2 = T.matrix()
sqd = squared_euclidean_distances(x_1, x_2)
eu = theano.function([x_1, x_2], sqd)
from evtk.hl import gridToVTK
import numpy as np
# Dimensions
nx, ny, nz = 50, 50, 50
lx = geo_data.extent[0]-geo_data.extent[1]
ly = geo_data.extent[2]-geo_data.extent[3]
lz = geo_data.extent[4]-geo_data.extent[5]
dx, dy, dz = lx/nx, ly/ny, lz/nz
ncells = nx * ny * nz
npoints = (nx + 1) * (ny + 1) * (nz + 1)
# Coordinates
x = np.arange(0, lx + 0.1*dx, dx, dtype='float64')
y = np.arange(0, ly + 0.1*dy, dy, dtype='float64')
z = np.arange(0, lz + 0.1*dz, dz, dtype='float64')
x += geo_data.extent[0]
y +=geo_data.extent[2]
z +=geo_data.extent[5]
# Variables
litho = sol[-1,0,:].reshape( (nx, ny, nz))
gridToVTK("./sandstone", x, y, z, cellData = {"lithology" : litho},)
geo_data.extent[4]
# Plot the block model.
GeMpy.plot_section(geo_data, 13, block = sol[-1,0,:], direction='x', plot_data = True)
geo_res = pn.read_csv('olaqases.vox')
geo_res = geo_res.iloc[9:]
geo_res['nx 50'].unique(), geo_data.formations
ip_addresses = geo_data.interfaces["formation"].unique()
ip_dict = dict(zip(ip_addresses, range(1, len(ip_addresses) + 1)))
ip_dict['Murchison'] = 0
ip_dict['out'] = 0
ip_dict['SimpleMafic'] = 4
geo_res_num = geo_res['nx 50'].replace(ip_dict)
geo_res_num
ip_dict
(geo_res_num.shape[0]), sol[-1,0,:].shape[0]
sol[-1,0, :][7]
geo_res_num
geo_res_num.as_matrix().astype(int)
plt.imshow( geo_res_num.as_matrix().reshape(50, 50, 50)[:, 23, :], origin="bottom", cmap="viridis" )
plt.imshow( sol[-1,0,:].reshape(50, 50, 50)[:, 23, :].T, origin="bottom", cmap="viridis" )
# Plot the block model.
GeMpy.plot_section(geo_data, 13, block = geo_res_num.as_matrix(), direction='y', plot_data = True)
50*50*50
np.unique(sol[-1,0,:])
# Formation number and formation
data_interp.interfaces.groupby('formation number').formation.unique()
data_interp.interpolator.tg.u_grade_T.get_value()
np.unique(sol)
#np.save('SandstoneSol', sol)
np.count_nonzero(np.load('SandstoneSol.npy') == sol)
sol.shape
GeMpy.PlotData(geo_data).plot3D_steno(sol[-1,0,:], 'Sandstone', description='The sandstone model')
np.linspace(geo_data.extent[0], geo_data.extent[1], geo_data.resolution[0], retstep=True)
np.diff(np.linspace(geo_data.extent[0], geo_data.extent[1], geo_data.resolution[0], retstep=False)).shape
(geo_data.extent[1]- geo_data.extent[0])/ geo_data.resolution[0]-4
(geo_data.extent[1]- geo_data.extent[0])/39
# So far this is a simple 3D visualization. I have to adapt it into GeMpy
lith0 = sol == 0
lith1 = sol == 1
lith2 = sol == 2
lith3 = sol == 3
lith4 = sol == 4
np.unique(sol)
import ipyvolume.pylab as p3
p3.figure(width=800)
p3.scatter(geo_data.grid.grid[:,0][lith0],
geo_data.grid.grid[:,1][lith0],
geo_data.grid.grid[:,2][lith0], marker='box', color = 'blue', size = 0.1 )
p3.scatter(geo_data.grid.grid[:,0][lith1],
geo_data.grid.grid[:,1][lith1],
geo_data.grid.grid[:,2][lith1], marker='box', color = 'yellow', size = 1 )
p3.scatter(geo_data.grid.grid[:,0][lith2],
geo_data.grid.grid[:,1][lith2],
geo_data.grid.grid[:,2][lith2], marker='box', color = 'green', size = 1 )
p3.scatter(geo_data.grid.grid[:,0][lith3],
geo_data.grid.grid[:,1][lith3],
geo_data.grid.grid[:,2][lith3], marker='box', color = 'pink', size = 1 )
p3.scatter(geo_data.grid.grid[:,0][lith4],
geo_data.grid.grid[:,1][lith4],
geo_data.grid.grid[:,2][lith4], marker='box', color = 'red', size = 1 )
p3.xlim(np.min(geo_data.grid.grid[:,0]),np.min(geo_data.grid.grid[:,0])+2175.0*40)
p3.ylim(np.min(geo_data.grid.grid[:,1]),np.max(geo_data.grid.grid[:,1]))
p3.zlim(np.min(geo_data.grid.grid[:,2]),np.min(geo_data.grid.grid[:,2])+2175.0*40)#np.max(geo_data.grid.grid[:,2]))
p3.show()
# The profile at the moment sucks because everything within a scan is not subdivided
debugging.profile.summary()
```
#### Below here so far is deprecated
First we make a GeMpy instance with most of the parameters left at their defaults (except the range, which is given by the project). Then we also fix the extent and the resolution of the domain we want to interpolate. Finally we compile the function, which is only needed once every time we open the project (the Theano developers are working on support for loading compiled functions, although in our case it is not a big deal).
*General note: so far the rescaling factor is calculated for all series at the same time. GeoModeller does it individually for every potential field. I have to look more closely at what this parameter exactly means.*
All input data is stored in pandas dataframes under ```self.Data.Interfaces``` and ```self.Data.Foliations```:
In case of disconformities, we can define which formations belong to which series using a dictionary. Until Python 3.6 it is important to specify the order of the series, otherwise it is random.
Now in the data frame we should have the series column too
The next step is the creation of a grid; so far only regular grids are supported. By default it takes the extent and the resolution given in the `import_data` method.
```
# Create a class Grid so far just regular grid
#GeMpy.set_grid(geo_data)
#GeMpy.get_grid(geo_data)
```
## Plotting raw data
The object Plot is created automatically as we call the methods above. This object contains some methods to plot the data and the results.
It is possible to plot a 2D projection of the data in a specific direction using the following method. It is also possible to choose the series you want to plot. Additionally, all the keyword arguments of seaborn's lmplot can be used.
```
#GeMpy.plot_data(geo_data, 'y', geo_data.series.columns.values[1])
```
## Class Interpolator
This class takes the data from the class Data and calculates the potential fields and the block. We can pass all the interpolation variables as keyword arguments. I recommend not touching them if you do not know what you are doing; the default values should be good enough. Also, the first time we execute the method the Theano function is compiled, so it can take a bit of time.
```
%debug
geo_data.interpolator.results
geo_data.interpolator.tg.c_o_T.get_value(), geo_data.interpolator.tg.a_T.get_value()
geo_data.interpolator.compile_potential_field_function()
geo_data.interpolator.compute_potential_fields('BIF_Series',verbose = 3)
geo_data.interpolator.potential_fields
geo_data.interpolator.results
geo_data.interpolator.tg.c_resc.get_value()
```
Now we could visualize the individual potential fields as follow:
### Early granite
```
GeMpy.plot_potential_field(geo_data,10, n_pf=0)
```
### BIF Series
```
GeMpy.plot_potential_field(geo_data,13, n_pf=1, cmap = "magma", plot_data = True,
verbose = 5)
```
### SImple mafic
```
GeMpy.plot_potential_field(geo_data, 10, n_pf=2)
```
## Optimizing the export of lithologies
But usually the final result we want to get is the final block. The method `compute_block_model` will compute the block model, updating the attribute `block`. This attribute is a Theano shared variable whose (raveled) 3D array can be retrieved using the method `get_value()`.
```
GeMpy.compute_block_model(geo_data)
#GeMpy.set_interpolator(geo_data, u_grade = 0, compute_potential_field=True)
```
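As a quick illustration of the `get_value()` call mentioned above, the raveled block can be pulled out of the shared variable and reshaped to the grid resolution. The attribute path `geo_data.interpolator.block` is an assumption based on the text; adjust it to wherever the `block` attribute lives in your GeMpy version.
```
# Hedged sketch: retrieve the computed block as a 3D numpy array
block_raveled = geo_data.interpolator.block.get_value()  # attribute path assumed
block_3d = block_raveled.reshape(geo_data.resolution)
print(block_3d.shape)
```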
And again, after computing the model, we can use the Plot object's method `plot_block_section` to see a 2D section of the model.
```
GeMpy.plot_section(geo_data, 13, direction='y')
```
## Export to vtk. (*Under development*)
```
"""Export model to VTK
Export the geology blocks to VTK for visualisation of the entire 3-D model in an
external VTK viewer, e.g. Paraview.
..Note:: Requires pyevtk, available for free on: https://github.com/firedrakeproject/firedrake/tree/master/python/evtk
**Optional keywords**:
- *vtk_filename* = string : filename of VTK file (default: output_name)
- *data* = np.array : data array to export to VKT (default: entire block model)
"""
vtk_filename = "noddyFunct2"
extent_x = 10
extent_y = 10
extent_z = 10
delx = 0.2
dely = 0.2
delz = 0.2
from pyevtk.hl import gridToVTK
# Coordinates
x = np.arange(0, extent_x + 0.1*delx, delx, dtype='float64')
y = np.arange(0, extent_y + 0.1*dely, dely, dtype='float64')
z = np.arange(0, extent_z + 0.1*delz, delz, dtype='float64')
# self.block = np.swapaxes(self.block, 0, 2)
gridToVTK(vtk_filename, x, y, z, cellData = {"geology" : sol})
```
## Performance Analysis
One of the advantages of Theano is the possibility to create a full profile of the function. This has to be enabled at the time the function is created. At the moment it should be active (the downside is a longer compilation time and, I think, also a small computation overhead, so be careful if you need a fast call).
### CPU
The following profile is with a 2 core laptop. Nothing spectacular.
Looking at the profile we can see that most of the time is spent in the pow operation (exponential). This is probably because the extent is huge and we are computing with too much precision. I am working on it.
### GPU
```
%%timeit
# Compute the block
GeMpy.compute_block_model(geo_data, [0,1,2], verbose = 0)
geo_data.interpolator._interpolate.profile.summary()
```
| github_jupyter |
<a href="https://colab.research.google.com/github/100rab-S/TensorFlow-Advanced-Techniques/blob/main/C1W3_L3_CustomLayerWithActivation.ipynb" target="_parent"><img src="https://colab.research.google.com/assets/colab-badge.svg" alt="Open In Colab"/></a>
# Ungraded Lab: Activation in Custom Layers
In this lab, we extend our knowledge of building custom layers by adding an activation parameter. The implementation is pretty straightforward as you'll see below.
## Imports
```
try:
# %tensorflow_version only exists in Colab.
%tensorflow_version 2.x
except Exception:
pass
import tensorflow as tf
from tensorflow.keras.layers import Layer
```
## Adding an activation layer
To use the built-in activations in Keras, we can specify an `activation` parameter in the `__init__()` method of our custom layer class. From there, we initialize it with the `tf.keras.activations.get()` method, which takes a string identifier corresponding to one of the [available activations](https://keras.io/api/layers/activations/#available-activations) in Keras. We then pass the forward computation through this activation in the `call()` method.
```
class SimpleDense(Layer):
# add an activation parameter
def __init__(self, units=32, activation=None):
super(SimpleDense, self).__init__()
self.units = units
# define the activation to get from the built-in activation layers in Keras
self.activation = tf.keras.activations.get(activation)
def build(self, input_shape): # we don't need to change anything in this method to add activation to our custom layer
w_init = tf.random_normal_initializer()
self.w = tf.Variable(name="kernel",
initial_value=w_init(shape=(input_shape[-1], self.units),
dtype='float32'),
trainable=True)
b_init = tf.zeros_initializer()
self.b = tf.Variable(name="bias",
initial_value=b_init(shape=(self.units,), dtype='float32'),
trainable=True)
#super().build(input_shape)
def call(self, inputs):
# pass the computation to the activation layer
return self.activation(tf.matmul(inputs, self.w) + self.b)
```
We can now pass in an activation parameter to our custom layer. The string identifier is mostly the same as the function name so 'relu' below will get `tf.keras.activations.relu`.
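As a quick aside (not part of the original lab), you can check what the string identifier resolves to:
```
relu_fn = tf.keras.activations.get('relu')
print(relu_fn)                                 # the tf.keras.activations.relu function
print(relu_fn(tf.constant([-2.0, 0.0, 3.0])))  # negative values are clipped to 0
```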
```
mnist = tf.keras.datasets.mnist
(x_train, y_train),(x_test, y_test) = mnist.load_data()
x_train, x_test = x_train / 255.0, x_test / 255.0
model = tf.keras.models.Sequential([
tf.keras.layers.Flatten(input_shape=(28, 28)),
SimpleDense(128, activation='relu'),
tf.keras.layers.Dropout(0.2),
tf.keras.layers.Dense(10, activation='softmax')
])
model.compile(optimizer='adam',
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
model.fit(x_train, y_train, epochs=5)
model.evaluate(x_test, y_test)
```
| github_jupyter |
# Using Google Cloud Functions to support event-based triggering of Cloud AI Platform Pipelines
> This post shows how you can run a Cloud AI Platform Pipeline from a Google Cloud Function, providing a way for Pipeline runs to be triggered by events.
- toc: true
- badges: true
- comments: true
- categories: [ml, pipelines, mlops, kfp, gcf]
This example shows how you can run a [Cloud AI Platform Pipeline](https://cloud.google.com/blog/products/ai-machine-learning/introducing-cloud-ai-platform-pipelines) from a [Google Cloud Function](https://cloud.google.com/functions/docs/), thus providing a way for Pipeline runs to be triggered by events (in the interim before this is supported by Pipelines itself).
In this example, the function is triggered by the addition of or update to a file in a [Google Cloud Storage](https://cloud.google.com/storage/) (GCS) bucket, but Cloud Functions can have other triggers too (including [Pub/Sub](https://cloud.google.com/pubsub/docs/)-based triggers).
The example is Google Cloud Platform (GCP)-specific, and requires a [Cloud AI Platform Pipelines](https://cloud.google.com/ai-platform/pipelines/docs) installation using Pipelines version >= 0.4. To run this example as a notebook, click on one of the badges at the top of the page or see [here](https://github.com/amygdala/code-snippets/blob/master/ml/notebook_examples/functions/hosted_kfp_gcf.ipynb).
(If you are instead interested in how to do this with a Kubeflow-based pipelines installation, see [this notebook](https://github.com/amygdala/kubeflow-examples/blob/cookbook/cookbook/pipelines/notebooks/gcf_kfp_trigger.ipynb)).
## Setup
### Create a Cloud AI Platform Pipelines installation
Follow the instructions in the [documentation](https://cloud.google.com/ai-platform/pipelines/docs) to create a Cloud AI Platform Pipelines installation.
### Identify (or create) a Cloud Storage bucket to use for the example
**Before executing the next cell**, edit it to **set the `TRIGGER_BUCKET` environment variable** to a Google Cloud Storage bucket ([create a bucket first](https://console.cloud.google.com/storage/browser) if necessary). Do *not* include the `gs://` prefix in the bucket name.
We'll deploy the GCF function so that it will trigger on new and updated files (blobs) in this bucket.
```
%env TRIGGER_BUCKET=REPLACE_WITH_YOUR_GCS_BUCKET_NAME
```
### Give Cloud Function's service account the necessary access
First, make sure the Cloud Function API [is enabled](https://console.cloud.google.com/apis/library/cloudfunctions.googleapis.com?q=functions).
Cloud Functions uses the project's 'appspot' account as its service account. It will have the form:
`[email protected]`. (This is also the project's App Engine service account).
- Go to your project's [IAM - Service Account page](https://console.cloud.google.com/iam-admin/serviceaccounts).
- Find the ` [email protected]` account and copy its email address.
- Find the project's Compute Engine (GCE) default service account (this is the default account used for the Pipelines installation). It will have a form like this: `[email protected]`.
Click the checkbox next to the GCE service account, and in the 'INFO PANEL' to the right, click **ADD MEMBER**. Add the Functions service account (`[email protected]`) as a **Project Viewer** of the GCE service account.

Next, configure your `TRIGGER_BUCKET` to allow the Functions service account access to that bucket.
- Navigate in the console to your list of buckets in the [Storage Browser](https://console.cloud.google.com/storage/browser).
- Click the checkbox next to the `TRIGGER_BUCKET`. In the 'INFO PANEL' to the right, click **ADD MEMBER**. Add the service account (`[email protected]`) with `Storage Object Admin` permissions. (While not tested, giving both Object view and create permissions should also suffice).

## Create a simple GCF function to test your configuration
First we'll generate and deploy a simple GCF function, to test that the basics are properly configured.
```
%%bash
mkdir -p functions
```
We'll first create a `requirements.txt` file, to indicate what packages the GCF code requires to be installed. (We won't actually need `kfp` for this first 'sanity check' version of a GCF function, but we'll need it below for the second function we'll create, that deploys a pipeline).
```
%%writefile functions/requirements.txt
kfp
```
Next, we'll create a simple GCF function in the `functions/main.py` file:
```
%%writefile functions/main.py
import logging
def gcs_test(data, context):
"""Background Cloud Function to be triggered by Cloud Storage.
This generic function logs relevant data when a file is changed.
Args:
data (dict): The Cloud Functions event payload.
context (google.cloud.functions.Context): Metadata of triggering event.
Returns:
None; the output is written to Stackdriver Logging
"""
logging.info('Event ID: {}'.format(context.event_id))
logging.info('Event type: {}'.format(context.event_type))
logging.info('Data: {}'.format(data))
logging.info('Bucket: {}'.format(data['bucket']))
logging.info('File: {}'.format(data['name']))
file_uri = 'gs://%s/%s' % (data['bucket'], data['name'])
logging.info('Using file uri: %s', file_uri)
logging.info('Metageneration: {}'.format(data['metageneration']))
logging.info('Created: {}'.format(data['timeCreated']))
logging.info('Updated: {}'.format(data['updated']))
```
Deploy the GCF function as follows. (You'll need to **wait a moment or two for output of the deployment to display in the notebook**). You can also run this command from a notebook terminal window in the `functions` subdirectory.
```
%%bash
cd functions
gcloud functions deploy gcs_test --runtime python37 --trigger-resource ${TRIGGER_BUCKET} --trigger-event google.storage.object.finalize
```
After you've deployed, test your deployment by adding a file to the specified `TRIGGER_BUCKET`. You can do this easily by visiting the **Storage** panel in the Cloud Console, clicking on the bucket in the list, and then clicking on **Upload files** in the bucket details view.
Then, check in the logs viewer panel (https://console.cloud.google.com/logs/viewer) to confirm that the GCF function was triggered and ran correctly. You can select 'Cloud Function' in the first pulldown menu to filter on just those log entries.
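Alternatively, you can trigger the function directly from this notebook. A minimal sketch using the `google-cloud-storage` client library (assuming it is installed and the notebook is authenticated against your project):
```
import os
from google.cloud import storage

client = storage.Client()
bucket = client.bucket(os.environ['TRIGGER_BUCKET'])
# creating or overwriting any object in the bucket fires the 'finalize' event
bucket.blob('gcf-trigger-test.txt').upload_from_string('hello from the notebook')
```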
## Deploy a Pipeline from a GCF function
Next, we'll create a GCF function that deploys an AI Platform Pipeline when triggered. First, preserve your existing main.py in a backup file:
```
%%bash
cd functions
mv main.py main.py.bak
```
Then, **before executing the next cell**, **edit the `HOST` variable** in the code below. You'll replace `<your_endpoint>` with the correct value for your installation.
To find this URL, visit the [Pipelines panel](https://console.cloud.google.com/ai-platform/pipelines/) in the Cloud Console.
From here, you can find the URL by clicking on the **SETTINGS** link for the Pipelines installation you want to use, and copying the 'host' string displayed in the client example code (prepend `https://` to that string in the code below).
You can alternately click on **OPEN PIPELINES DASHBOARD** for the Pipelines installation, and copy that URL, removing the `/#/pipelines` suffix.
```
%%writefile functions/main.py
import logging
import datetime
import logging
import time
import kfp
import kfp.compiler as compiler
import kfp.dsl as dsl
import requests
# TODO: replace with your Pipelines endpoint URL
HOST = 'https://<your_endpoint>.pipelines.googleusercontent.com'
@dsl.pipeline(
name='Sequential',
description='A pipeline with two sequential steps.'
)
def sequential_pipeline(filename='gs://ml-pipeline-playground/shakespeare1.txt'):
"""A pipeline with two sequential steps."""
op1 = dsl.ContainerOp(
name='filechange',
image='library/bash:4.4.23',
command=['sh', '-c'],
arguments=['echo "%s" > /tmp/results.txt' % filename],
file_outputs={'newfile': '/tmp/results.txt'})
op2 = dsl.ContainerOp(
name='echo',
image='library/bash:4.4.23',
command=['sh', '-c'],
arguments=['echo "%s"' % op1.outputs['newfile']]
)
def get_access_token():
url = 'http://metadata.google.internal/computeMetadata/v1/instance/service-accounts/default/token'
r = requests.get(url, headers={'Metadata-Flavor': 'Google'})
r.raise_for_status()
access_token = r.json()['access_token']
return access_token
def hosted_kfp_test(data, context):
logging.info('Event ID: {}'.format(context.event_id))
logging.info('Event type: {}'.format(context.event_type))
logging.info('Data: {}'.format(data))
logging.info('Bucket: {}'.format(data['bucket']))
logging.info('File: {}'.format(data['name']))
file_uri = 'gs://%s/%s' % (data['bucket'], data['name'])
logging.info('Using file uri: %s', file_uri)
logging.info('Metageneration: {}'.format(data['metageneration']))
logging.info('Created: {}'.format(data['timeCreated']))
logging.info('Updated: {}'.format(data['updated']))
token = get_access_token()
logging.info('attempting to launch pipeline run.')
ts = int(datetime.datetime.utcnow().timestamp() * 100000)
client = kfp.Client(host=HOST, existing_token=token)
compiler.Compiler().compile(sequential_pipeline, '/tmp/sequential.tar.gz')
exp = client.create_experiment(name='gcstriggered') # this is a 'get or create' op
res = client.run_pipeline(exp.id, 'sequential_' + str(ts), '/tmp/sequential.tar.gz',
params={'filename': file_uri})
logging.info(res)
```
Next, deploy the new GCF function. As before, **it will take a moment or two for the results of the deployment to display in the notebook**.
```
%%bash
cd functions
gcloud functions deploy hosted_kfp_test --runtime python37 --trigger-resource ${TRIGGER_BUCKET} --trigger-event google.storage.object.finalize
```
Add another file to your `TRIGGER_BUCKET`. This time you should see both GCF functions triggered. The `hosted_kfp_test` function will deploy the pipeline. You'll be able to see it running at your Pipeline installation's endpoint, `https://<your_endpoint>.pipelines.googleusercontent.com/#/pipelines`, under the given Pipelines Experiment (`gcstriggered` as default).
------------------------------------------
Copyright 2020, Google, LLC.
Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at
http://www.apache.org/licenses/LICENSE-2.0
Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.
| github_jupyter |
# Statistical Tools
# Contents:
1. Statistics:
 - Mean.
 - Median.
 - Standard deviation.
2. Histograms:
 - Histograms with matplotlib.
 - Histograms with numpy.
 - How to normalize a histogram.
3. Distributions:
 - How to obtain a distribution from a histogram.
 - Normal distribution
 - Poisson distribution
 - Binomial distribution
# 1. Statistics
## Mean
The mean of a variable $x$ is defined as:
$\bar{x} = \dfrac{\sum{x_i}}{N} $
## Median
The median of a data set is the value that splits the data set in two:
Example:
let $x$ = [1, 4, 7, 7, 3, 3, 1]; then $median(x) = 3$
Formally, the median is defined as the value $x_m$ that divides the cumulative distribution $F(x)$ into equal parts:
$F(x_m) = \dfrac{1}{2}$
## The most probable value
It is the value with the highest probability, $x_p$.
Example:
let $x$ = [1, 4, 7, 7, 3, 2, 1]; the most probable value is $x_p = 7$
```
import matplotlib.pyplot as plt
import numpy as np
# %pylab inline
def mi_mediana(lista):
x = sorted(lista)
d = int(len(x)/2)
if(len(x)%2==0):
return (x[d-1] + x[d])*0.5
else:
        return x[d]  # middle element for an odd number of values
x_input = [1,3,4,5,5,7,7,6,8,6]
mi_mediana(x_input)
print(mi_mediana(x_input) == np.median(x_input))
```
## The problems of not knowing statistics
These concepts look simple, but they are not always clear to everyone.
```
x = np.arange(1, 12)
y = np.random.random(11)*10
plt.figure(figsize=(12, 5))
fig = plt.subplot(1, 2, 1)
plt.scatter(x, y, c='purple', alpha=0.8, s=60)
y_mean = np.mean(y)
y_median = np.median(y)
plt.axhline(y_mean, c='g', lw=3, label=r"$\rm{Mean}$")
plt.axhline(y_median, c='r', lw=3, label=r"$\rm{Median}$")
plt.legend(fontsize=20)
fig = plt.subplot(1, 2, 2)
h = plt.hist(x, alpha=0.6, histtype='bar', ec='black')
print(y)
```
# Standard deviation
It measures the average spread of the measurements $x_i$ around the mean:
$\sigma = \sqrt{\dfrac{1}{n-1} \sum(x_{i} - \bar{x})^2}$
where $n$ is the sample size.
Additionally, the ${\bf{variance}}$ is defined as:
$\bar{x^2} - \bar{x}^{2}$
$\sigma^2 = \dfrac{1}{N} \sum(x_{i} - \bar{x})^2$
It is a quantity, similar to the standard deviation, that accounts for the dispersion of the data around the mean,
where $N$ is the total population size.
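Note that the formula above uses the sample normalization ($n-1$), while NumPy's `np.std` and `np.var` divide by $N$ by default; the `ddof` argument switches between the two conventions:
```
y = np.random.random(11)*10
print(np.std(y), np.std(y, ddof=1))              # population vs. sample standard deviation
print(np.var(y), np.mean(y**2) - np.mean(y)**2)  # two equivalent ways to compute the variance
```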
# Correlation Function
$cor(x, y) = \dfrac{\langle (x-\bar{x})(y-\bar{y}) \rangle}{\sigma_x \sigma_{y}} $
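A quick numerical check of this definition with `np.corrcoef` (the exercise below asks you to verify the properties yourself):
```
x = np.random.random(100)
y = np.random.random(100)
print(np.corrcoef(x, y)[0, 1], np.corrcoef(y, x)[0, 1])   # cor(x, y) = cor(y, x)
print(np.corrcoef(x, x)[0, 1], np.corrcoef(x, -x)[0, 1])  # 1 and -1
print(np.corrcoef(2*x + 3, 5*y + 1)[0, 1])                # equals cor(x, y) for positive a and c
```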
# Exercise:
Check whether the following properties hold:
1. Cor(X,Y) = Cor(Y, X)
2. Cor(X,X) = 1
3. Cor(X,-X) = -1
4. Cor(aX+b, cY + d) = Cor(X, Y), if a and c != 0
```
x = np.arange(1, 12)
y = np.random.random(11)*10
plt.figure(figsize=(9, 5))
y_mean = np.mean(y)
y_median = np.median(y)
plt.axhline(y_mean, c='g', lw=3, label=r"$\rm{Mean}$")
plt.axhline(y_median, c='r', lw=3, label=r"$\rm{Median}$")
sigma_y = np.std(y)
plt.axhspan(y_mean-sigma_y, y_mean + sigma_y, facecolor='g', alpha=0.5, label=r"$\rm{\sigma}$")
plt.legend(fontsize=20)
plt.scatter(x, y, c='purple', alpha=0.8, s=60)
plt.ylim(-2, 14)
print ("Variance = ", np.var(y))
print ("Standard deviation = ", np.std(y))
```
## References:
For more statistical functions that can be used in Python, see:
- NumPy: http://docs.scipy.org/doc/numpy/reference/routines.statistics.html
- SciPy: http://docs.scipy.org/doc/scipy/reference/stats.html
# Histograms
## 1. hist
`hist` is a matplotlib function that generates a histogram from an array of data.
```
x = np.random.random(200)
plt.subplot(2,2,1)
plt.title("A simple hist")
h = plt.hist(x)
plt.subplot(2,2,2)
plt.title("bins")
h = plt.hist(x, bins=20)
plt.subplot(2,2,3)
plt.title("alpha")
h = plt.hist(x, bins=20, alpha=0.6)
plt.subplot(2,2,4)
plt.title("histtype")
h = plt.hist(x, bins=20, alpha=0.6, histtype='stepfilled')
```
## 2. Numpy-histogram
```
N, bins = np.histogram(x, bins=15)
plt.plot(bins[0:-1], N)
```
# 2D Histograms
```
x = np.random.random(500)
y = np.random.random(500)
plt.subplot(4, 2, 1)
plt.hexbin(x, y, gridsize=15, cmap="gray")
plt.colorbar()
plt.subplot(4, 2, 2)
data = plt.hist2d(x, y, bins=15, cmap="binary")
plt.colorbar()
plt.subplot(4, 2, 3)
plt.hexbin(x, y, gridsize=15)
plt.colorbar()
plt.subplot(4, 2, 4)
data = plt.hist2d(x, y, bins=15)
plt.colorbar()
```
# How to normalize a histogram
Normalizing a histogram means that the integral of the histogram equals 1.
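A quick way to check the normalization is to integrate the histogram numerically: the bar heights times the bin widths should sum to 1.
```
data = np.random.random(1000)*4
counts, bin_edges = np.histogram(data, bins=4, density=True)
print(np.sum(counts * np.diff(bin_edges)))  # ~1.0, i.e. the histogram integrates to one
```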
```
x = np.random.random(10)*4
plt.title("How not to normalize a histogram", fontsize=25)
h = plt.hist(x, density=True)
print("The bin width must be equal to one for the bar heights to read as probabilities")
plt.title("How to normalize a histogram", fontsize=25)
h = plt.hist(x, density=True, bins=4)
```
What is the probability of getting heads 9 times out of 10 tosses?
# Probability Distribution:
Probability distributions tell us how likely it is for a random variable $x$ to fall in a given interval. If we have a data set, how can we find its probability distribution?
```
x = np.random.random(100)*10
plt.subplot(1, 2, 1)
h = plt.hist(x)
plt.subplot(1, 2, 2)
histo, bin_edges = np.histogram(x, density=True)
plt.bar(bin_edges[:-1], histo, width=1)
plt.xlim(min(bin_edges), max(bin_edges))
```
# Normal Distribution: Mathematical Description
$f(x, \mu, \sigma) = \dfrac{1}{\sigma \sqrt{2\pi}} e^{-\dfrac{(x-\mu)^2}{2\sigma^2}} $
where $\sigma$ is the standard deviation and $\mu$ the mean of the data $x$.
It is a probability distribution function that is completely determined by the parameters $\mu$ and $\sigma$.
The function is symmetric around $\mu$.
In Python we can use scipy to work with the normal distribution.
```
import scipy.stats
x = np.linspace(0, 1, 100)
n_dist = scipy.stats.norm(0.5, 0.1)
plt.plot(x, n_dist.pdf(x))
```
## We can generate random numbers drawn from a normal distribution:
```
x = np.random.normal(0.0, 1.0, 1000)
y = np.random.normal(0.0, 2.0, 1000)
w = np.random.normal(0.0, 3.0, 1000)
z = np.random.normal(0.0, 4.0, 1000)
histo = plt.hist(z, alpha=0.2, histtype="stepfilled", color='r')
histo = plt.hist(w, alpha=0.4, histtype="stepfilled", color='b')
histo = plt.hist(y, alpha=0.6, histtype="stepfilled", color='k')
histo = plt.hist(x, alpha=0.8, histtype="stepfilled", color='g')
plt.title(r"$\rm{Normal\ distributions\ with\ different\ \sigma}$", fontsize=20)
```
**Confidence intervals**
$\sigma_1$: 68% of the data lie within 1$\sigma$
$\sigma_2$: 95% of the data lie within 2$\sigma$
$\sigma_3$: 99.7% of the data lie within 3$\sigma$
### Exercise: Generate normal distributions with:
- $\mu = 5$ and $\sigma = 2$
- $\mu = -3$ and $\sigma = -2$
- $\mu = 4$ and $\sigma = 5$
#### Plot the PDFs and the CDFs on the same axes, with different colors and legends. What do you observe? (One figure with the PDFs and another with the CDFs.)
# Exercise:
1. Make plots of:
 1. The difference heads - tails for 40 and 20 runs, each with a larger number of tosses than the previous one (abs(heads - tails) vs. number of tosses).
 2. The ratio (heads/tails) as a function of the number of tosses.
 Comment on the results.
2. Repeat the previous plots, but now on a logarithmic scale.
 Comment on the results.
3. Plot the average of abs(heads - tails) as a function of the number of tosses on a logarithmic scale,
 and another plot with the average of (heads/tails).
 Comment on the results.
4. Repeat the previous point, but this time with the standard deviation.
 Comment on the results.
Let's imagine for a moment the following experiment:
We want to study the probability of getting heads or tails when tossing a coin; we know beforehand that it is 50%.
But let's dig a bit deeper: what is the probability of getting 10 heads in a row?
To answer this we propose the following method:
1. Toss a coin 10 times, record whether each toss comes up heads or tails, and save the data.
2. Repeat this procedure 1000 times.
## Function that tosses the coin N times
```
def coinflip(N):
cara = 0
sello = 0
i=0
while i < N:
x = np.random.randint(0, 10)/5.0
if x >= 1.0:
cara+=1
elif x<1.0:
sello+=1
i+=1
return cara/N, sello/N
```
## Function that performs N tosses M times
```
def realizaciones(M, N):
caras=[]
for i in range(M):
x, y = coinflip(N)
caras.append(x)
return caras
caras = realizaciones(100000, 30.)
h = plt.hist(caras, density=True, bins=20)
```
# PDF
```
N, bins = np.histogram(x, density=True)
plt.plot(bins[0:-1], N)
```
# CDF
```
h = plt.hist(x, cumulative=True, bins=20)
```
# References:
- Coin example: Introduction to Computation and Programming Using Python, John Guttag, page 179.
- Examples of statistics in Python: http://nbviewer.ipython.org/github/dhuppenkothen/ClassicalStatsPython/blob/master/classicalstatsexamples.ipynb
- For a mathematical derivation: A Modern Course in Statistical Physics, Reichl, page 191.
| github_jupyter |
# 01 - Sentence Classification Model Building
# Parse & clearn labeled training data
```
import pandas as pd
import xml.etree.ElementTree as ET
tree = ET.parse('../data/Restaurants_Train.xml')
root = tree.getroot()
root
# Use this dataframe for multilabel classification
# Must use scikitlearn's multilabel binarizer
labeled_reviews = []
for sentence in root.findall("sentence"):
entry = {}
aterms = []
aspects = []
if sentence.find("aspectTerms"):
for aterm in sentence.find("aspectTerms").findall("aspectTerm"):
aterms.append(aterm.get("term"))
if sentence.find("aspectCategories"):
for aspect in sentence.find("aspectCategories").findall("aspectCategory"):
aspects.append(aspect.get("category"))
entry["text"], entry["terms"], entry["aspects"]= sentence[0].text, aterms, aspects
labeled_reviews.append(entry)
labeled_df = pd.DataFrame(labeled_reviews)
print("there are",len(labeled_reviews),"reviews in this training set")
# print(sentence.find("aspectCategories").findall("aspectCategory").get("category"))
# Save annotated reviews
labeled_df.to_pickle("annotated_reviews_df.pkl")
labeled_df.head()
```
# Training the model with Naive Bayes
1. replace pronouns with neural coref
2. train the model with naive bayes
```
from neuralcoref import Coref
import en_core_web_lg
spacy = en_core_web_lg.load()
coref = Coref(nlp=spacy)
# Define function for replacing pronouns using neuralcoref
def replace_pronouns(text):
coref.one_shot_coref(text)
return coref.get_resolved_utterances()[0]
# Read annotated reviews df, which is the labeled dataset for training
# This is located in the pickled files folder
annotated_reviews_df = pd.read_pickle("../pickled_files/annotated_reviews_df.pkl")
annotated_reviews_df.head(3)
# Create a new column for text whose pronouns have been replaced
annotated_reviews_df["text_pro"] = annotated_reviews_df.text.map(lambda x: replace_pronouns(x))
# uncomment below to pickle the new df
# annotated_reviews_df.to_pickle("annotated_reviews_df2.pkl")
# Read pickled file with replaced pronouns if it exists already
annotated_reviews_df = pd.read_pickle("annotated_reviews_df2.pkl")
import pickle
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import MultiLabelBinarizer
# Convert the multi-labels into arrays
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(annotated_reviews_df.aspects)
X = annotated_reviews_df.text_pro
# Split data into train and test set
X_train, X_test, y_train, y_test = train_test_split(
X, y, test_size=0.25, random_state=0)
# save the the fitted binarizer labels
# This is important: it contains the how the multi-label was binarized, so you need to
# load this in the next folder in order to undo the transformation for the correct labels.
filename = 'mlb.pkl'
pickle.dump(mlb, open(filename, 'wb'))
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.feature_extraction.text import TfidfTransformer
from sklearn.pipeline import Pipeline
from sklearn.naive_bayes import MultinomialNB
from skmultilearn.problem_transform import LabelPowerset
import numpy as np
# LabelPowerset allows for multi-label classification
# Build a pipeline for multinomial naive bayes classification
text_clf = Pipeline([('vect', CountVectorizer(stop_words = "english",ngram_range=(1, 1))),
('tfidf', TfidfTransformer(use_idf=False)),
('clf', LabelPowerset(MultinomialNB(alpha=1e-1))),])
text_clf = text_clf.fit(X_train, y_train)
predicted = text_clf.predict(X_test)
# Calculate accuracy
np.mean(predicted == y_test)
# Test if SVM performs better
from sklearn.linear_model import SGDClassifier
text_clf_svm = Pipeline([('vect', CountVectorizer()),
('tfidf', TfidfTransformer()),
('clf-svm', LabelPowerset(
SGDClassifier(loss='hinge', penalty='l2',
alpha=1e-3, max_iter=6, random_state=42)))])
_ = text_clf_svm.fit(X_train, y_train)
predicted_svm = text_clf_svm.predict(X_test)
#Calculate accuracy
np.mean(predicted_svm == y_test)
import pickle
# Train naive bayes on full dataset and save model
text_clf = Pipeline([('vect', CountVectorizer(stop_words = "english",ngram_range=(1, 1))),
('tfidf', TfidfTransformer(use_idf=False)),
('clf', LabelPowerset(MultinomialNB(alpha=1e-1))),])
text_clf = text_clf.fit(X, y)
# save the model to disk
filename = 'naive_model1.pkl'
pickle.dump(text_clf, open(filename, 'wb'))
```
At this point, we can move on to the 02-Sentiment analysis notebook, which will load the fitted Naive Bayes model.
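For reference, a minimal sketch of how those pickled artifacts can be loaded back (the exact code in the next notebook may differ):
```
import pickle

with open('naive_model1.pkl', 'rb') as f:
    loaded_clf = pickle.load(f)
with open('mlb.pkl', 'rb') as f:
    loaded_mlb = pickle.load(f)

# predict aspect categories for a new sentence and undo the label binarization
sample = ["The pasta was great but the waiter ignored us."]
print(loaded_mlb.inverse_transform(loaded_clf.predict(sample)))
```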
```
#mlb.inverse_transform(predicted)
pred_df = pd.DataFrame(
{'text_pro': X_test,
'pred_category': mlb.inverse_transform(predicted)
})
pd.set_option('display.max_colwidth', -1)
pred_df.head()
```
## Some scrap code below which wasn't used
```
# Save annotated reviews
labeled_df.to_pickle("annotated_reviews_df.pkl")
labeled_df.head()
# This code was for parsing out terms & their relations to aspects
# However, the terms were not always hyponyms of the aspects, so they were unusable
aspects = {"food":[],"service":[],"anecdotes/miscellaneous":[], "ambience":[], "price":[]}
for i in range(len(labeled_df)):
if len(labeled_df.aspects[i]) == 1:
if labeled_df.terms[i] != []:
for terms in labeled_df.terms[i]:
aspects[labeled_df.aspects[i][0]].append(terms.lower())
for key in aspects:
aspects[key] = list(set(aspects[key]))
terms = []
for i in labeled_df.terms:
for j in i:
if j not in terms:
terms.append(j)
print("there are", len(terms),"unique terms")
# Use this dataframe if doing the classifications separately as binary classifications
labeled_reviews2 = []
for sentence in root.findall("sentence"):
entry = {"food":0,"service":0,"anecdotes/miscellaneous":0, "ambience":0, "price":0}
aterms = []
aspects = []
if sentence.find("aspectTerms"):
for aterm in sentence.find("aspectTerms").findall("aspectTerm"):
aterms.append(aterm.get("term"))
if sentence.find("aspectCategories"):
for aspect in sentence.find("aspectCategories").findall("aspectCategory"):
if aspect.get("category") in entry.keys():
entry[aspect.get("category")] = 1
entry["text"], entry["terms"] = sentence[0].text, aterms
labeled_reviews2.append(entry)
labeled_df2 = pd.DataFrame(labeled_reviews2)
# print(sentence.find("aspectCategories").findall("aspectCategory").get("category"))
labeled_df2.iloc[:,:5].sum()
```
| github_jupyter |
# Network inference of categorical variables: non-sequential data
```
import sys
import numpy as np
from scipy import linalg
from sklearn.preprocessing import OneHotEncoder
import matplotlib.pyplot as plt
%matplotlib inline
import inference
import fem
# setting parameter:
np.random.seed(1)
n = 20 # number of positions
m = 5 # number of values at each position
l = int(((n*m)**2)) # number of samples
g = 2.
nm = n*m
def itab(n,m):
i1 = np.zeros(n)
i2 = np.zeros(n)
for i in range(n):
i1[i] = i*m
i2[i] = (i+1)*m
return i1.astype(int),i2.astype(int)
# generate coupling matrix w0:
def generate_interactions(n,m,g):
nm = n*m
w = np.random.normal(0.0,g/np.sqrt(nm),size=(nm,nm))
i1tab,i2tab = itab(n,m)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,:] -= w[i1:i2,:].mean(axis=0)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
w[i1:i2,i1:i2] = 0. # no self-interactions
for i in range(nm):
for j in range(nm):
if j > i: w[i,j] = w[j,i]
return w
i1tab,i2tab = itab(n,m)
w0 = inference.generate_interactions(n,m,g)
#plt.imshow(w0,cmap='rainbow',origin='lower')
#plt.clim(-0.5,0.5)
#plt.colorbar(fraction=0.045, pad=0.05,ticks=[-0.5,0,0.5])
#plt.show()
#print(w0)
def generate_sequences2(w,n,m,l):
i1tab,i2tab = itab(n,m)
# initial s (categorical variables)
s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
print(s)
nrepeat = 500
for irepeat in range(nrepeat):
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
h = s.dot(w[i1:i2,:].T) # h[t,i1:i2]
h_old = (s[:,i1:i2]*h).sum(axis=1) # h[t,i0]
k = np.random.randint(0,m,size=l)
for t in range(l):
if np.exp(h[t,k[t]] - h_old[t]) > np.random.rand():
s[t,i1:i2] = 0.
s[t,i1+k[t]] = 1.
return s
# 2018.11.07: Tai
def nrgy(s,w):
l = s.shape[0]
    n,m = 20,5  # must match the global n (positions) and m (states) used to generate the sequences
i1tab,i2tab = itab(n,m)
p = np.zeros((l,n))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
h = s.dot(w[i1:i2,:].T)
#e = (s[:,i1:i2]*h).sum(axis=1)
#p[:,i] = np.exp(e)
#p_sum = np.sum(np.exp(h),axis=1)
#p[:,i] /= p_sum
p[:,i] = np.exp((s[:,i1:i2]*h).sum(axis=1))/(np.exp(h).sum(axis=1))
#like = p.sum(axis=1)
return np.sum(np.log(p),axis=1)
# Vipul:
def nrgy_vp(onehot,w):
nrgy = onehot*(onehot.dot(w.T))
# print(nrgy - np.log(2*np.cosh(nrgy)))
return np.sum(nrgy - np.log(2*np.cosh(nrgy)),axis=1) #ln prob
def generate_sequences_vp(w,n_positions,n_residues,n_seq):
n_size = n_residues*n_positions
n_trial = 100*(n_size) #monte carlo steps to find the right sequences
b = np.zeros((n_size))
trial_seq = np.tile(np.random.randint(0,n_residues,size=(n_positions)),(n_seq,1))
print(trial_seq[0])
enc = OneHotEncoder(n_values=n_residues)
onehot = enc.fit_transform(trial_seq).toarray()
old_nrgy = nrgy(onehot,w) #+ n_positions*(n_residues-1)*np.log(2)
for trial in range(n_trial):
# print('before',np.mean(old_nrgy))
index_array = np.random.choice(range(n_positions),size=2,replace=False)
index,index1 = index_array[0],index_array[1]
r_trial = np.random.randint(0,n_residues,size=(n_seq))
r_trial1 = np.random.randint(0,n_residues,size=(n_seq))
mod_seq = np.copy(trial_seq)
mod_seq[:,index] = r_trial
mod_seq[:,index1] = r_trial1
mod_nrgy = nrgy(enc.fit_transform(mod_seq).toarray(),w) #+ n_positions*(n_residues-1)*np.log(2)
seq_change = mod_nrgy-old_nrgy > np.log(np.random.rand(n_seq))
#seq_change = mod_nrgy/(old_nrgy+mod_nrgy) > np.random.rand(n_seq)
if trial>n_size:
trial_seq[seq_change,index] = r_trial[seq_change]
trial_seq[seq_change,index1] = r_trial1[seq_change]
old_nrgy[seq_change] = mod_nrgy[seq_change]
else:
best_seq = np.argmax(mod_nrgy-old_nrgy)
trial_seq = np.tile(mod_seq[best_seq],(n_seq,1))
old_nrgy = np.tile(mod_nrgy[best_seq],(n_seq))
if trial%(10*n_size) == 0: print('after',np.mean(old_nrgy))#,trial_seq[0:5])
print(trial_seq[:10,:10])
#return trial_seq
return enc.fit_transform(trial_seq).toarray()
s = generate_sequences_vp(w0,n,m,l)
def generate_sequences_time_series(s_ini,w,n,m):
i1tab,i2tab = itab(n,m)
l = s_ini.shape[0]
# initial s (categorical variables)
#s_ini = np.random.randint(0,m,size=(l,n)) # integer values
#print(s_ini)
# onehot encoder
enc = OneHotEncoder(n_values=m)
s = enc.fit_transform(s_ini).toarray()
#print(s)
ntrial = 20*m
for t in range(l-1):
h = np.sum(s[t,:]*w[:,:],axis=1)
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
k = np.random.randint(0,m)
for itrial in range(ntrial):
k2 = np.random.randint(0,m)
while k2 == k:
k2 = np.random.randint(0,m)
if np.exp(h[i1+k2]- h[i1+k]) > np.random.rand():
k = k2
s[t+1,i1:i2] = 0.
s[t+1,i1+k] = 1.
return s
# generate non-sequences from time series
#l1 = 100
#s_ini = np.random.randint(0,m,size=(l1,n)) # integer values
#s = np.zeros((l,nm))
#for t in range(l):
# np.random.seed(t+10)
# s[t,:] = generate_sequences_time_series(s_ini,w0,n,m)[-1,:]
print(s.shape)
print(s[:10,:10])
## 2018.11.07: for non sequencial data
def fit_additive(s,n,m):
nloop = 10
i1tab,i2tab = itab(n,m)
nm = n*m
nm1 = nm - m
w_infer = np.zeros((nm,nm))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
# remove column i
x = np.hstack([s[:,:i1],s[:,i2:]])
x_av = np.mean(x,axis=0)
dx = x - x_av
c = np.cov(dx,rowvar=False,bias=True)
c_inv = linalg.pinv(c,rcond=1e-15)
#print(c_inv.shape)
h = s[:,i1:i2].copy()
for iloop in range(nloop):
h_av = h.mean(axis=0)
dh = h - h_av
dhdx = dh[:,:,np.newaxis]*dx[:,np.newaxis,:]
dhdx_av = dhdx.mean(axis=0)
w = np.dot(dhdx_av,c_inv)
#w = w - w.mean(axis=0)
h = np.dot(x,w.T)
p = np.exp(h)
p_sum = p.sum(axis=1)
#p /= p_sum[:,np.newaxis]
for k in range(m):
p[:,k] = p[:,k]/p_sum[:]
h += s[:,i1:i2] - p
w_infer[i1:i2,:i1] = w[:,:i1]
w_infer[i1:i2,i2:] = w[:,i1:]
return w_infer
w2 = fit_additive(s,n,m)
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w2)
i1tab,i2tab = itab(n,m)
nloop = 5
nm1 = nm - m
w_infer = np.zeros((nm,nm))
wini = np.random.normal(0.0,1./np.sqrt(nm),size=(nm,nm1))
for i in range(n):
i1,i2 = i1tab[i],i2tab[i]
x = np.hstack([s[:,:i1],s[:,i2:]])
y = s.copy()
# covariance[ia,ib]
cab_inv = np.empty((m,m,nm1,nm1))
eps = np.empty((m,m,l))
for ia in range(m):
for ib in range(m):
if ib != ia:
eps[ia,ib,:] = y[:,i1+ia] - y[:,i1+ib]
which_ab = eps[ia,ib,:] !=0.
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
cab = np.cov(dxab,rowvar=False,bias=True)
cab_inv[ia,ib,:,:] = linalg.pinv(cab,rcond=1e-15)
w = wini[i1:i2,:].copy()
for iloop in range(nloop):
h = np.dot(x,w.T)
for ia in range(m):
wa = np.zeros(nm1)
for ib in range(m):
if ib != ia:
which_ab = eps[ia,ib,:] !=0.
eps_ab = eps[ia,ib,which_ab]
xab = x[which_ab]
# ----------------------------
xab_av = np.mean(xab,axis=0)
dxab = xab - xab_av
h_ab = h[which_ab,ia] - h[which_ab,ib]
ha = np.divide(eps_ab*h_ab,np.tanh(h_ab/2.), out=np.zeros_like(h_ab), where=h_ab!=0)
dhdx = (ha - ha.mean())[:,np.newaxis]*dxab
dhdx_av = dhdx.mean(axis=0)
wab = cab_inv[ia,ib,:,:].dot(dhdx_av) # wa - wb
wa += wab
w[ia,:] = wa/m
w_infer[i1:i2,:i1] = w[:,:i1]
w_infer[i1:i2,i2:] = w[:,i1:]
#return w_infer
plt.plot([-1,1],[-1,1],'r--')
plt.scatter(w0,w_infer)
#plt.scatter(w0[0:3,3:],w[0:3,:])
```
| github_jupyter |
1. You are provided the Titanic dataset. Load the dataset and split it randomly into training and test sets with a 70:30 ratio using train_test_split.
2. Use the logistic regression implemented from scratch (from the previous question) in this question as well.
3. Data cleaning plays a major role in this question. Report all the methods used by you in the ipynb:
   i. Check for missing values
   ii. Drop columns & handle missing values
   iii. Create dummies for categorical features
   You are free to perform other data cleaning to improve your results.
4. Report the accuracy score, confusion matrix, heat map, classification report and any other metrics you find useful.
dataset link :
https://iiitaphyd-my.sharepoint.com/:f:/g/personal/apurva_jadhav_students_iiit_ac_in/Eictt5_qmoxNqezgQQiMWeIBph4sxlfA6jWAJNPnV2SF9Q?e=mQmYN0
(titanic.csv)
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score, f1_score,confusion_matrix,r2_score
sns.set(style="darkgrid")
df = pd.read_csv('titanic.csv')
df.head()
print('Missing Values in the columns : \n')
print(df.isnull().sum())
df.describe(include='all')
```
## Data cleaning
1. **Removal** :-
- Remove *Name* column as this attribute does not affect the *Survived* status of the passenger. And moreover we can see that each person has a unique name hence there is no point considering this column.
    - Remove *Ticket* because it has 681 unique values; moreover, any correlation between the ticket and the *Survived* status can be captured by *Fare*.
    - Remove *Cabin* as it has a lot of missing values.
```
df = df.drop(columns=['Name', 'Ticket', 'Cabin', 'PassengerId'])
s1 = sns.barplot(data = df, y='Survived' , hue='Sex' , x='Sex')
s1.set_title('Male-Female Survival')
plt.show()
```
Females had a better survival rate than males.
```
sns.pairplot(df, hue='Survived')
```
### Categorical data
For categorical variables where no ordinal relationship exists, the integer encoding may be insufficient at best, or misleading to the model at worst.
Forcing an ordinal relationship via an ordinal encoding and allowing the model to assume a natural ordering between categories may result in poor performance or unexpected results (predictions halfway between categories).
In this case, a one-hot encoding can be applied to the ordinal representation. This is where the integer encoded variable is removed and one new binary variable is added for each unique integer value in the variable.
### Dummy Variables
The one-hot encoding creates one binary variable for each category.
The problem is that this representation includes redundancy. For example, if we know that [1, 0, 0] represents โblueโ and [0, 1, 0] represents โgreenโ we donโt need another binary variable to represent โredโ, instead we could use 0 values for both โblueโ and โgreenโ alone, e.g. [0, 0].
This is called a dummy variable encoding, and always represents C categories with C-1 binary variables.
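For illustration, `pandas.get_dummies` can produce this C-1 encoding directly with `drop_first=True` (the cell below keeps all C columns, which also works but carries the redundancy described above):
```
demo = pd.DataFrame({'Embarked': ['S', 'C', 'Q', 'S']})
print(pd.get_dummies(demo, columns=['Embarked']))                   # one-hot: 3 binary columns
print(pd.get_dummies(demo, columns=['Embarked'], drop_first=True))  # dummy encoding: 2 binary columns
```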
```
from numpy import mean
s1 = sns.barplot(data = df, y='Survived' , hue='Embarked' , x='Embarked', estimator=mean)
s1.set_title('Survival vs Boarding place')
plt.show()
carrier_count = df['Embarked'].value_counts()
sns.barplot(x=carrier_count.index, y=carrier_count.values, alpha=0.9)
plt.title('Frequency Distribution of Boarding place')
plt.ylabel('Number of Occurrences', fontsize=12)
plt.xlabel('Places', fontsize=12)
plt.show()
df = pd.get_dummies(df, columns=['Sex', 'Embarked'], prefix=['Sex', 'Embarked'])
df.head()
print('Missing Values in the columns : \n')
print(df.isnull().sum())
df = df.fillna(df['Age'].mean())
print('Missing Values in the columns : \n')
print(df.isnull().sum())
df = df.astype(np.float64)
Y = df['Survived']
Y = np.array(Y)
df.drop(columns=['Survived'], inplace=True)
def standardise(df, col):
df[col] = (df[col] - df[col].mean())/df[col].std()
return df
for col in df.columns:
df = standardise(df, col)
import copy
X = copy.deepcopy(df.to_numpy())
X.shape
x_train, x_test, y_train, y_test = train_test_split(X, Y, test_size=0.3, shuffle=True)
x_train.shape
class MyLogisticRegression:
def __init__(self, train_data, Y):
self.data = train_data # It is assumed that data is normalized and shuffled (rows, cols)
self.Y = Y[:, np.newaxis]
self.b = np.random.randn()
self.cols = self.data.shape[1]
self.rows = self.data.shape[0]
self.weights = np.random.randn(self.cols, 1) # Initialising weights to 1, shape (cols, 1)
self.num_iterations = 600
self.learning_rate = 0.0001
self.batch_size = 20
self.errors = []
@staticmethod
def sigmoid(x):
return 1/(1 + np.exp(-x))
def calc_mini_batches(self):
new_data = np.hstack((self.data, self.Y))
np.random.shuffle(new_data)
rem = self.rows % self.batch_size
num = self.rows // self.batch_size
till = self.batch_size * num
if num > 0:
dd = np.array(np.vsplit(new_data[ :till, :], num))
X_batch = dd[:, :, :-1]
Y_batch = dd[:, :, -1]
return X_batch, Y_batch
def update_weights(self, X, Y):
Y_predicted = self.predict(X) # Remember that X has data stored along the row for one sample
gradient = np.dot(np.transpose(X), Y_predicted - Y)
self.b = self.b - np.sum(Y_predicted - Y)
self.weights = self.weights - (self.learning_rate * gradient) # vector subtraction
def print_error(self):
Y_Predicted = self.predict(self.data)
class_one = self.Y == 1
class_two = np.invert(class_one)
val = np.sum(np.log(Y_Predicted[class_one]))
val += np.sum(np.log(1 - Y_Predicted[class_two]))
self.errors.append(-val)
print(-val)
def gradient_descent(self):
for j in range(self.num_iterations):
X, Y = self.calc_mini_batches()
num_batches = X.shape[0]
for i in range(num_batches):
self.update_weights(X[i, :, :], Y[i, :][:, np.newaxis]) # update the weights
if (j+1)%100 == 0:
self.print_error()
plt.plot(self.errors)
plt.style.use('ggplot')
plt.xlabel('iteration')
plt.ylabel('')
plt.title('Error Vs iteration')
plt.show()
def predict(self, X):
# X is 2 dimensional array, samples along the rows
return self.sigmoid(np.dot(X, self.weights) + self.b)
reg = MyLogisticRegression(x_train, y_train)
reg.gradient_descent()
y_pred = reg.predict(x_test)
pred = y_pred >= 0.5
pred = pred.astype(int)
print('accuracy : {a}'.format(a=accuracy_score(y_test, pred)))
print('f1 score : {a}'.format(a = f1_score(y_test, pred)))
confusion_matrix(y_test, pred)
sns.heatmap(confusion_matrix(y_test, pred))
from sklearn.metrics import classification_report
print(classification_report(y_test, pred))
```
| github_jupyter |
To start, import numpy and matplotlib.
```
from PIL import Image
import numpy as np
%matplotlib inline
import matplotlib
import matplotlib.pyplot as plt
matplotlib.style.use('bmh')
matplotlib.rcParams['figure.figsize']=(8,5)
```
Load the MNIST data downloaded earlier: the training data `train_set` and the test data `test_set`.
```
import gzip
import pickle
with gzip.open('../Week02/mnist.pkl.gz', 'rb') as f:
train_set, validation_set, test_set = pickle.load(f, encoding='latin1')
train_X, train_y = train_set
validation_X, validation_y = validation_set
test_X, test_y = test_set
```
The image-display helper function from before:
```
from IPython.display import display
def showX(X):
int_X = (X*255).clip(0,255).astype('uint8')
# N*784 -> N*28*28 -> 28*N*28 -> 28 * 28N
int_X_reshape = int_X.reshape(-1,28,28).swapaxes(0,1).reshape(28,-1)
display(Image.fromarray(int_X_reshape))
# training data: show the first 20 entries of X
showX(train_X[:20])
```
train_set is used to train our model.
Our model is a very simple logistic regression model; its only parameters are a 784x10 matrix W and a vector b of length 10.
We first initialize W and b with uniform random numbers.
```
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
```
The complete model works as follows:
Treat the image as a vector x of length 784.
Compute $Wx+b$ and then take $\exp$ of each entry; this gives ten values, which we divide by their sum.
We want the resulting numbers to match the probability that the image shows each digit.
### $ \Pr(Y=i|x, W, b) = \frac {e^{W_i x + b_i}} {\sum_j e^{W_j x + b_j}}$
Let's try the first training sample: x is the input and y is the digit the image corresponds to (in this example, y=5).
```
x = train_X[0]
y = train_y[0]
showX(x)
y
```
First compute $e^{Wx+b}$
```
Pr = np.exp(x @ W + b)
Pr.shape
```
Then normalize so that the values sum to 1 (consistent with the meaning of a probability).
```
Pr = Pr/Pr.sum()
Pr
```
Since $W$ and $b$ were set randomly, the probabilities computed above are also random.
The correct answer is $y=5$; only with luck would it come out as the most probable digit.
To judge the quality of our prediction we need a way of scoring the error. We use the measure below (not the usual squared error but an entropy-style score; the advantage is that it is easy to differentiate and works well):
## $ loss = - \log(\Pr(Y=y|x, W,b)) $
This way of scoring the error is usually called the error or the loss. The formula may look intimidating, but the actual computation is very simple; it is just the expression below.
```
loss = -np.log(Pr[y])
loss
```
### Let's find a way to improve.
We use a method called gradient descent to reduce our error.
Since the gradient is the direction in which a function increases fastest, if we move a small step in the direction opposite to the gradient (that is, the direction of fastest descent), the function value should become a little smaller.
Remember that our variables are $W$ and $b$ (784*10+10 parameters in total), so we need the partial derivative of $loss$ with respect to every entry of $W$ and $b$.
Fortunately these partial derivatives can be worked out by hand, and the resulting expressions are not complicated.
Expanding, $loss$ can be written as
$loss = \log(\sum_j e^{W_j x + b_j}) - W_i x - b_i$
When $k \neq i$, the partial derivative of $loss$ with respect to $b_k$ is
$$ \frac{e^{W_k x + b_k}}{\sum_j e^{W_j x + b_j}} = \Pr(Y=k | x, W, b)$$
When $k = i$, the partial derivative of $loss$ with respect to $b_k$ is
$$ \Pr(Y=k | x, W, b) - 1$$
```
gradb = Pr.copy()
gradb[y] -= 1
print(gradb)
```
The partial derivatives with respect to $W$ are not hard either.
When $k \neq i$, the partial derivative of $loss$ with respect to $W_{k,t}$ is
$$ \frac{e^{W_k x + b_k} x_t}{\sum_j e^{W_j x + b_j}} = \Pr(Y=k | x, W, b) x_t$$
When $k = i$, the partial derivative of $loss$ with respect to $W_{k,t}$ is
$$ \Pr(Y=k | x, W, b) x_t - x_t$$
```
print(Pr.shape, x.shape, W.shape)
gradW = x.reshape(784,1) @ Pr.reshape(1,10)
gradW[:, y] -= x
```
After computing the gradients, move W and b a small step in the negative gradient direction to obtain the new W and b.
```
W -= 0.1 * gradW
b -= 0.1 * gradb
```
Compute $\Pr$ and the $loss$ once more.
```
Pr = np.exp(x @ W + b)
Pr = Pr/Pr.sum()
loss = -np.log(Pr[y])
loss
```
### Q
* Look at Pr, find the entry with the largest probability, and use it to predict the y value (see the sketch right after this list).
* Run the steps above once more. Does the error get smaller?
* Try some other test samples: what have our W and b learned?
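A minimal sketch for the first question: the predicted digit is simply the index of the largest probability.
```
pred_y = Pr.argmax()
print("predicted:", pred_y, " true:", y)
```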
We now apply the same procedure, cycling through the fifty thousand training samples, and see what happens.
```
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
score = 0
N=50000*20
d = 0.001
learning_rate = 1e-2
for i in range(N):
if i%50000==0:
print(i, "%5.3f%%"%(score*100))
x = train_X[i%50000]
y = train_y[i%50000]
Pr = np.exp( x @ W +b)
Pr = Pr/Pr.sum()
loss = -np.log(Pr[y])
score *=(1-d)
if Pr.argmax() == y:
score += d
gradb = Pr.copy()
gradb[y] -= 1
gradW = x.reshape(784,1) @ Pr.reshape(1,10)
gradW[:, y] -= x
W -= learning_rate * gradW
b -= learning_rate * gradb
```
The resulting accuracy is about 92%, but this is measured on the training data, not on the test data.
Moreover, training one sample at a time is a bit slow. The strength of linear algebra is vectorized computation: if we stack many samples $x$ as the rows of a matrix (call it $X$), then by the rules of matrix multiplication we can compute $XW+b$ (written `X @ W + b` in the code) in one go and obtain many results at the same time.
The functions below accept many samples $x$ at once and compute the predictions and the accuracy for all of them in a single call.
```
def compute_Pr(X):
Pr = np.exp(X @ W + b)
return Pr/Pr.sum(axis=1, keepdims=True)
def compute_accuracy(Pr, y):
return (Pr.argmax(axis=1)==y).mean()
```
Below is the updated training loop; every 2000 iterations it also computes the test accuracy and the training accuracy.
```
%%timeit -r 1 -n 1
def compute_Pr(X):
Pr = np.exp(X @ W + b)
return Pr/Pr.sum(axis=1, keepdims=True)
def compute_accuracy(Pr, y):
return (Pr.argmax(axis=1)==y).mean()
W = np.random.uniform(low=-1, high=1, size=(28*28,10))
b = np.random.uniform(low=-1, high=1, size=10)
score = 0
N=20000
batch_size = 128
learning_rate = 0.5
for i in range(0, N):
if (i+1)%2000==0:
test_score = compute_accuracy(compute_Pr(test_X), test_y)*100
train_score = compute_accuracy(compute_Pr(train_X), train_y)*100
print(i+1, "%5.2f%%"%test_score, "%5.2f%%"%train_score)
    # randomly pick a batch of training samples
rndidx = np.random.choice(train_X.shape[0], batch_size, replace=False)
X, y = train_X[rndidx], train_y[rndidx]
    # compute Pr for the whole batch at once
Pr = compute_Pr(X)
    # compute the average gradient over the batch
Pr_one_y = Pr-np.eye(10)[y]
gradb = Pr_one_y.mean(axis=0)
gradW = X.T @ (Pr_one_y) / batch_size
    # update W and b
W -= learning_rate * gradW
b -= learning_rate * gradb
```
The final accuracy is 92%-93%.
Not perfect, but after all the model is just a single matrix.
The raw numbers alone don't give much of a feel for it, so let's look at how the first ten test samples come out.
We can see that only one of the first ten is wrong.
```
Pr = compute_Pr(test_X[:10])
pred_y =Pr.argmax(axis=1)
for i in range(10):
print(pred_y[i], test_y[i])
showX(test_X[i])
```
Let's look at which of the first hundred test samples are misclassified.
```
Pr = compute_Pr(test_X[:100])
pred_y = Pr.argmax(axis=1)
for i in range(100):
if pred_y[i] != test_y[i]:
print(pred_y[i], test_y[i])
showX(test_X[i])
```
| github_jupyter |
# Python For Bioinformatics
Introduction to Python for Bioinformatics - available at https://github.com/kipkurui/Python4Bioinformatics.
<small><small><i>
## Attribution
These tutorials are an adaptation of the Introduction to Python for Maths by [Andreas Ernst](http://users.monash.edu.au/~andreas), available from https://gitlab.erc.monash.edu.au/andrease/Python4Maths.git. The original version was written by Rajath Kumar and is available at https://github.com/rajathkumarmp/Python-Lectures.
These notes have been greatly amended and updated for the MSC Bioinformatics and Molecular Biology at Pwani university, sponsored by EANBiT by [Caleb Kibet](https://twitter.com/calkibet)
</small></small></i>
# Quick Introduction to Jupyter Notebooks
Throughout this course, we will be using Jupyter Notebooks. Although the HPC you will be using will have Jupyter setup, these notes are provided for you want to set it up in your Computer.
## Introduction
The Jupyter Notebook is an interactive computing environment that enables users to author notebooks, which contain a complete and self-contained record of a computation. These notebooks can be shared more efficiently. The notebooks may contain:
* Live code
* Interactive widgets
* Plots
* Narrative text
* Equations
* Images
* Video
It is good to note that "Jupyter" is a loose acronym meaning Julia, Python, and R; the primary languages supported by Jupyter.
The notebook can allow a computational researcher to create reproducible documentation of their research. As Bioinformatics is datacentric, use of Jupyter Notebooks increases research transparency, hence promoting open science.
## First Steps
### Installation
1. [Download Miniconda](https://www.anaconda.com/download/) for your specific OS to your home directory
- Linux: `wget https://repo.continuum.io/miniconda/Miniconda3-latest-Linux-x86_64.sh`
    - Mac: `curl -O https://repo.continuum.io/miniconda/Miniconda3-latest-MacOSX-x86_64.sh`
2. Run:
- `bash Miniconda3-latest-Linux-x86_64.sh`
- `bash Miniconda3-latest-MacOSX-x86_64.sh`
3. Follow all the prompts: if unsure, accept defaults
4. Close and re-open your terminal
5. If the installation is successful, you should see a list of installed packages with
- `conda list`
If the command cannot be found, you can add Anaconda bin to the path using:
`export PATH=~/miniconda3/bin:$PATH`
For reproducible analysis, you can [create a conda environment](https://conda.io/docs/user-guide/tasks/manage-environments.html) with all the Python packages you used.
`conda create --name bioinf python jupyter`
To activate the conda environment:
`source activate bioinf`
Having set-up conda environment, you can install any package you need using pip.
`conda install jupyter`
`conda install -c conda-forge jupyterlab`
or by using pip
`pip3 install jupyter`
Then you can quickly launch it using:
`jupyter notebook` or `jupyter lab`
NB: We will use a jupyter lab for training.
A Jupyter notebook is made up of many cells. Each cell can contain Python code. You can execute a cell by clicking on it and pressing `Shift-Enter` or `Ctrl-Enter` (run without moving to the next line).
### Further help
To learn more about Jupyter notebooks, check [the official introduction](http://nbviewer.jupyter.org/github/jupyter/notebook/blob/master/docs/source/examples/Notebook/Notebook%20Basics.ipynb) and [some useful Jupyter Tricks](https://www.dataquest.io/blog/jupyter-notebook-tips-tricks-shortcuts/).
Book: http://www.ict.ru.ac.za/Resources/cspw/thinkcspy3/thinkcspy3.pdf
# Python for Bioinformatics
## Introduction
Python is a modern, robust, high-level programming language. It is straightforward to pick up even if you are entirely new to programming.
Python, similar to other languages like Matlab or R, is interpreted and hence runs more slowly than C++, Fortran or Java. However, writing programs in Python is very quick. Python has an extensive collection of libraries for everything from scientific computing to web services. It caters for object-oriented and functional programming, with a module system that allows large and complex applications to be developed in Python.
These lectures are using Jupyter notebooks which mix Python code with documentation. The python notebooks can be run on a web server or stand-alone on a computer.
## Contents
This course is broken up into a number of notebooks (lectures).
### Session 1
* [01](01.ipynb) Basic data types and operations (numbers, strings)
* [02](02.ipynb) String manipulation
### Session 2
* [03](03.ipynb) Data structures: Lists and Tuples
* [04](04.ipynb) Data structures (continued): dictionaries
### Session 3
* [05](05.ipynb) Control statements: if, for, while, try statements
* [06](06.ipynb) Functions
* [07](07.ipynb) Files, Scripting and Modules
### Session 4
* [08](08.ipynb) Data Analysis and plotting with Pandas
* [09](09.ipynb) Reproducible Bioinformatics Research
* [10](10.ipynb) Introduction to Biopython
This is a tutorial style introduction to Python. For a quick reminder/summary of Python syntax, the following [Quick Reference Card](http://www.cs.put.poznan.pl/csobaniec/software/python/py-qrc.html) may be useful. A longer and more detailed tutorial style introduction to python is available from the python site at: https://docs.python.org/3/tutorial/.
## How to learn from this resource?
Download all the notebooks from [Python4Bioinformatics](https://github.com/kipkurui/Python4Bioinformatics2019). The easiest way to do that is to clone the GitHub repository to your working directory using any of the following commands:
git clone https://github.com/kipkurui/Python4Bioinformatics2019.git
or
wget https://github.com/kipkurui/Python4Bioinformatics2019/archive/master.zip
unzip master.zip
rm master.zip
## How to Contribute
To contribute, fork the repository, make some updates and send me a pull request.
Alternatively, you can open an issue.
## License
This work is licensed under the Creative Commons Attribution 3.0 Unported License. To view a copy of this license, visit http://creativecommons.org/licenses/by/3.0/.
| github_jupyter |
Skorodumov Alexander
Group IVT1904
Laboratory work No. 2: Search Methods
Task 1
```
# Imports
from IPython.display import HTML, display
from tabulate import tabulate
import random
import time
# Random matrix generation
def random_matrix(m = 50, n = 50, min_limit = -250, max_limit = 1016):
return [[random.randint(min_limit, max_limit) for _ in range(n)] for _ in range(m)]
# Binary search
class BinarySearchMap:
def __init__(self):
        self.data = []  # storage for (key, value) pairs
    def search(self, key):
        """Find the index for key (always the leftmost match, so that insert keeps the keys ordered)."""
l = 0
r = len(self.data)
while l < r:
m = (l + r) // 2
if self.data[m][0] < key:
l = m + 1
else:
r = m
return l
    def __setitem__(self, key, value):
        """Add an element."""
index = self.search(key)
        # if the key is already in the table, replace its value
if index < len(self.data) and self.data[index][0] == key:
self.data[index] = (key, value)
else:
            # otherwise insert a new record
self.data.insert(index, (key, value))
    def __delitem__(self, key):
        """Remove an element."""
index = self.search(key)
self.data.pop(index)
    def __getitem__(self, key):
        """Get an element."""
index = self.search(key)
found_key, val = self.data[index]
        # if the index found holds the requested key
if found_key == key:
return val
raise KeyError()
# Fibonacci search
fib_c = [0, 1]
def fib(n):
if len(fib_c) - 1 < n:
fib_c.append(fib(n - 1) + fib(n - 2))
return fib_c[n]
class FibonacciMap(BinarySearchMap):
def search(self, key):
m = 0
while fib(m) < len(self.data):
m += 1
offset = 0
while fib(m) > 1:
i = min(offset + fib(m - 1), len(self.data) - 1)
if key > self.data[i][0]:
offset = i
elif key == self.data[i][0]:
return i
m -= 1
if len(self.data) and self.data[offset][0] < key:
return offset + 1
return 0
# Interpolation search
def nearest_mid(input_list, lower_bound_index, upper_bound_index, search_value):
return lower_bound_index + \
(upper_bound_index - lower_bound_index) * \
           (search_value - input_list[lower_bound_index][0]) // \
(input_list[upper_bound_index][0] - input_list[lower_bound_index][0])
class InterpolateMap(BinarySearchMap):
def interpolation_search(self, term):
size_of_list = len(self.data) - 1
index_of_first_element = 0
index_of_last_element = size_of_list
while index_of_first_element <= index_of_last_element:
mid_point = nearest_mid(self.data, index_of_first_element, index_of_last_element, term)
if mid_point > index_of_last_element or mid_point < index_of_first_element:
return None
if self.data[mid_point][0] == term:
return mid_point
if term > self.data[mid_point][0]:
index_of_first_element = mid_point + 1
else:
index_of_last_element = mid_point - 1
if index_of_first_element > index_of_last_element:
return None
# Binary tree
class Tree:
def __init__(self, key, value):
self.key = key
self.value = value
self.left = self.right = None
class BinaryTreeMap:
root = None
def insert(self, tree, key, value):
if tree is None:
return Tree(key, value)
if tree.key > key:
tree.left = self.insert(tree.left, key, value)
elif tree.key < key:
tree.right = self.insert(tree.right, key, value)
else:
tree.value = value
return tree
def search(self, tree, key):
if tree is None or tree.key == key:
return tree
if tree.key > key:
return self.search(tree.left, key)
return self.search(tree.right, key)
def __getitem__(self, key):
tree = self.search(self.root, key)
if tree is not None:
return tree.value
raise KeyError()
def __setitem__(self, key, value):
if self.root is None:
self.root = self.insert(self.root, key, value)
else: self.insert(self.root, key, value)
```
Task 2
```
# Simple hashing
class HashMap:
def __init__(self):
self.size = 0
self.data = []
self._resize()
def _hash(self, key, i):
return (hash(key) + i) % len(self.data)
def _find(self, key):
i = 0;
index = self._hash(key, i);
while self.data[index] is not None and self.data[index][0] != key:
i += 1
index = self._hash(key, i);
return index;
def _resize(self):
temp = self.data
self.data = [None] * (2*len(self.data) + 1)
for item in temp:
if item is not None:
self.data[self._find(item[0])] = item
def __setitem__(self, key, value):
if self.size + 1 > len(self.data) // 2:
self._resize()
index = self._find(key)
if self.data[index] is None:
self.size += 1
self.data[index] = (key, value)
def __getitem__(self, key):
index = self._find(key)
if self.data[index] is not None:
return self.data[index][1]
raise KeyError()
# Rehashing using pseudo-random numbers
class RandomHashMap(HashMap):
_rand_c = [5323]
def _rand(self, i):
if len(self._rand_c) - 1 < i:
self._rand_c.append(self._rand(i - 1))
return (123456789 * self._rand_c[i] + 987654321) % 65546
def _hash(self, key, i):
return (hash(key) + self._rand(i)) % len(self.data)
# Chaining method
class ChainMap:
def __init__(self):
self.size = 0
self.data = []
self._resize()
def _hash(self, key):
return hash(key) % len(self.data)
def _insert(self, index, item):
if self.data[index] is None:
self.data[index] = [item]
return True
else:
for i, item_ in enumerate(self.data[index]):
if item_[0] == item[0]:
self.data[index][i] = item
return False
self.data[index].append(item)
return True
def _resize(self):
temp = self.data
self.data = [None] * (2*len(self.data) + 1)
for bucket in temp:
if bucket is not None:
for key, value in bucket:
self._insert(self._hash(key), (key, value))
def __setitem__(self, key, value):
if self.size + 1 > len(self.data) // 1.5:
self._resize()
if self._insert(self._hash(key), (key, value)):
self.size += 1
def __getitem__(self, key):
index = self._hash(key)
if self.data[index] is not None:
for key_, value in self.data[index]:
if key_ == key:
return value
raise KeyError()
```
Comparison of the algorithms
```
algorithms = {
    'Binary search': BinarySearchMap,
    'Fibonacci search': FibonacciMap,
    'Interpolation search': InterpolateMap,
    'Binary tree': BinaryTreeMap,
    'Simple hashing': HashMap,
    'Rehashing with pseudo-random numbers': RandomHashMap,
    'Chaining method': ChainMap,
    'Standard search function (dict)': dict
}
elapsed_time = {}
test_sets = random_matrix(50, 1000)
for algorithm_name, Map in algorithms.items():
    sets_copy = test_sets.copy()
    start_time = time.perf_counter()
    for keys in sets_copy:
        table = Map()
        for value, key in enumerate(keys):
            table[key] = value
            assert table[key] == value, 'The element found does not match the one stored'
    end_time = time.perf_counter()
    elapsed_time[algorithm_name] = (end_time - start_time) / len(test_sets)
sorted_elapsed_time = sorted(elapsed_time.items(), key=lambda kv: kv[1])
tabulate(sorted_elapsed_time, headers=['Algorithm', 'Time'], tablefmt='html', showindex="always")
```
Task 3
```
# Displaying the result
def tag(x, color='white'):
return f'<td style="width:24px;height:24px;text-align:center;" bgcolor="{color}">{x}</td>'
th = ''.join(map(tag, ' abcdefgh '))
def chessboard(data):
row = lambda i: ''.join([
tag('<span style="font-size:24px">*</span>' * v,
color='white' if (i+j+1)%2 else 'silver')
for j, v in enumerate(data[i])])
tb = ''.join([f'<tr>{tag(8-i)}{row(i)}{tag(8-i)}</tr>' for i in range(len(data))])
return HTML(f'<table>{th}{tb}{th}</table>')
# Creating the board
arr = [[0] * 8 for i in range(8)]
arr[1][2] = 1
chessboard(arr)
# Algorithm
def check_place(rows, row, column):
""" ะัะพะฒะตััะตั, ะตัะปะธ board[column][row] ะฟะพะด ะฐัะฐะบะพะน ะดััะณะธั
ัะตัะทะตะน """
for i in range(row):
if rows[i] == column or \
rows[i] - i == column - row or \
rows[i] + i == column + row:
return False
return True
total_shown = 0
def put_queen(rows=[0]*8, row=0):
""" ะััะฐะตััั ะฟะพะดะพะฑัะฐัั ะผะตััะพ ะดะปั ัะตัะทั, ะบะพัะพัะพะต ะฝะต ะฝะฐั
ะพะดะธััั ะฟะพะด ะฐัะฐะบะพะน ะดััะณะธั
"""
if row == 8: # ะผั ัะผะตััะธะปะธ ะฒัะตั
8 ัะตัะทะตะน ะธ ะผะพะถะตะผ ะฟะพะบะฐะทะฐัั ะดะพัะบั
arr = [[0] * 8 for i in range(8)]
for row, column in enumerate(rows):
arr[row][column] = 1
return chessboard(arr)
else:
for column in range(8):
if check_place(rows, row, column):
rows[row] = column
board = put_queen(rows, row + 1)
if board: return board
put_queen()
```
| github_jupyter |
```
import numpy as np
import tensorflow_datasets as tfds
import tensorflow as tf
tf.config.run_functions_eagerly(False)
#tfds.disable_progress_bar()
tf.version.VERSION
import pandas as pd
dataset = pd.read_csv("/content/drive/MyDrive/sentiment-dataset/airline_sentiment_analysis.csv")
print (dataset[:10])
print (dataset[len(dataset) - 10:])
def process(txt):
return ' '.join(word for word in txt.split(' ') if not word.startswith('@'))
process(" word1 word2 word3 @word4 word5 word6")
dataset_processed = pd.DataFrame.copy(dataset, deep=True)
dataset_processed['text'] = dataset['text'].apply(process)
print(dataset_processed[:3])
print(dataset_processed[len(dataset_processed) - 3:])
from sklearn.model_selection import train_test_split
def process_label(label):
if label == "negative":
return 0
elif label == "positive":
return 1
raise Exception("unrecognized label")
dataset_processed['airline_sentiment'] = dataset_processed['airline_sentiment'].apply(process_label)
dataset_train, dataset_test = train_test_split(dataset_processed, test_size = 0.2)
dataset_train[100:125]
len(dataset_train)
BUFFER_SIZE = 10000
BATCH_SIZE = 64
dataset_train_text_tf = tf.convert_to_tensor(dataset_train['text'], dtype=tf.string)
dataset_train_label_tf = tf.convert_to_tensor(dataset_train['airline_sentiment'], dtype=tf.float32)
dataset_test_text_tf = tf.convert_to_tensor(dataset_test['text'], dtype=tf.string)
dataset_test_label_tf = tf.convert_to_tensor(dataset_test['airline_sentiment'], dtype=tf.float32)
dataset_train_tf = tf.data.Dataset.from_tensor_slices((dataset_train_text_tf, dataset_train_label_tf))
dataset_test_tf = tf.data.Dataset.from_tensor_slices((dataset_test_text_tf, dataset_test_label_tf))
count = 10
i = 0
for ele in dataset_train_tf.as_numpy_iterator():
if i >= count:
break
print (ele)
i += 1
train_dataset_batched_tf = dataset_train_tf.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset_batched_tf = dataset_test_tf.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
count = 1
i = 0
for ele in train_dataset_batched_tf.as_numpy_iterator():
if i >= count:
break
print (ele)
i += 1
print(dataset_train_tf)
print(dataset_test_tf)
#VOCAB_SIZE = 1000
encoder = tf.keras.layers.TextVectorization()
#max_tokens=VOCAB_SIZE)
encoder.adapt(train_dataset_batched_tf.map(lambda text, label: text))
count_0 = len(train_dataset_batched_tf)
count = 0
for ds in train_dataset_batched_tf:
count += len(ds[0])
print(len(ds[0]))
count
encoder("hello world HELLO WORLD")[:].numpy()
import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
vocab = np.array(encoder.get_vocabulary())
vocab[100:150]
for example, label in dataset_train_tf.take(1):
print('texts: ', example.numpy())
print()
print('labels: ', label.numpy())
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])
model.summary()
encoder("hello world. This is great").numpy()
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)
history = model.fit(train_dataset_batched_tf, epochs=10,
validation_data=test_dataset_batched_tf,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset_batched_tf)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
sample_text = ('good it\'s great')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model.predict(np.array([sample_text]))
print(predictions)
model.set_weights
encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()
encoder_new.get_config()
encoder_new.adapt(np.array([['hell']], dtype=object), batch_size=None)
encoder_new.set_weights(encoder.get_weights())
encoder("hello world").numpy()
encoder_new("hello world").numpy()
model2 = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])
model2.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)
layers = []
for layer in model.layers:
layers.append(layer.get_weights())
i = 0
for layer in model2.layers:
# if i == 0:
# i += 1
# continue
print(layer.get_weights()[0].dtype)
layer.set_weights(layers[i])
i += 1
sample_text = ('good it\'s great')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model2.predict(np.array([sample_text]))
print(predictions)
import json
class NdarrayEncoder(json.JSONEncoder):
def default(self, obj):
if isinstance(obj, np.ndarray):
return obj.tolist()
if isinstance(obj, bytes):
return obj.decode('utf-8')
print(obj)
return json.JSONEncoder().default(self, obj)
layersInList = []
for layer in model.layers:
layersInList.append(layer.get_weights())
weightsInJson = json.dumps(layersInList, cls=NdarrayEncoder)
with open("weights.json", "w") as json_file:
json_file.write(weightsInJson)
with open("weights.json", "r") as json_file_r:
weightsInListRead = json_file_r.read()
weightsReadData = json.loads(weightsInListRead)
# def isIterable(obj):
# if hasattr(obj, '__iter__') and hasattr(obj, '__next__') and hasattr('__getitem__'):
# return True
# return False
def convertStringToBytesInObject(convertableObj):
if isinstance(convertableObj, list):
i = 0
for item in convertableObj:
if isinstance(item, str):
convertableObj[i] = item.encode()
elif isinstance(item, list):
convertStringToBytesInObject(item)
i += 1
else:
print(convertableObj)
raise Exception(" expected to be iterable ")
# isIterable([])  # the helper is defined only in the commented-out block above
# convertStringToBytesInObject(weightsReadData)
encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()
encoder_new.get_config()
encoder_new.adapt(np.array([['hell']], dtype=object), batch_size=None)
encoder_new.set_weights(weightsReadData[0])
encoder_new("hello world").numpy()
encoder("hello world").numpy()
model3 = tf.keras.Sequential([
encoder_new,
tf.keras.layers.Embedding(
input_dim=len(encoder_new.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(32, activation='relu'),
tf.keras.layers.Dense(16, activation='relu'),
tf.keras.layers.Dense(8, activation='relu'),
tf.keras.layers.Dense(1) #, activation='sigmoid')
])
model3.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=True),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)
layers2 = []
for layerWeights in weightsReadData:
layers2.append(layerWeights)
print(len(layerWeights))
def convertToNdarray(obj):
if isinstance(obj, list):
return np.asarray([convertToNdarray(o) for o in obj])
else:
return obj
layers2 = convertToNdarray(layers2)
layers2 = [np.array(layer, dtype=object) for layer in layers2]
i = 0
for layer in model3.layers:
# if i == 0:
# i += 1
# continue
print(layer.get_weights()[0].dtype)
layer.set_weights(layers2[i])
i += 1
sample_text = ('good it\'s great')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad. It\'s very bad. Worse')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('This airlines is the best')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never fly with you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will never recommend you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I will always recommend you')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('Will be a long time before I recommend you to anyone.')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
sample_text = ('I liked the way you guys organize yourself')
predictions = model3.predict(np.array([sample_text]))
print(predictions)
len(weightsReadData)
len(model3.layers)
len(weightsReadData[2])
len(model3.layers[2].get_weights())
len(model2.layers[2].get_weights())
hw = b'hello world'
json.dumps(hw.decode('utf-8'))
tf.keras.layers.serialize(encoder)
encoder.get_weights()
encoder_new= None
encoder_new = tf.keras.layers.TextVectorization()
encoder_new.get_config()
encoder_new.adapt([['hell']], batch_size=None)
encoder_new.set_weights(encoder.get_weights())
# encoder_new.set_vocabulary(encoder.get_vocabulary())
encoder("hello world").numpy()
encoder_new("hello world").numpy()
dataset_train_batched_text = np.array_split(dataset_train['text'],len(dataset_train['text'])/BATCH_SIZE)
dataset_train_batched_class = np.array_split(dataset_train['airline_sentiment'], len(dataset_train['airline_sentiment'])/BATCH_SIZE)
dataset_test_batched_text = np.array_split(dataset_test['text'],len(dataset_test['text'])/BATCH_SIZE)
dataset_test_batched_class = np.array_split(dataset_test['airline_sentiment'], len(dataset_test['airline_sentiment'])/BATCH_SIZE)
print (len(dataset_train))
print (len(dataset_test))
print (" ------------------------ ")
print (len(dataset_train_batched_text))
print (len(dataset_train_batched_class))
print (len(dataset_train_batched_text[len(dataset_train_batched_text)- 1]))
print (len(dataset_train_batched_class[len(dataset_train_batched_text)- 1]))
print (" ------------------------ ")
print (len(dataset_test_batched_text))
print (len(dataset_test_batched_class))
print (len(dataset_test_batched_text[len(dataset_test_batched_text)- 1]))
print (len(dataset_test_batched_class[len(dataset_test_batched_class)- 1]))
dataset_test_batched_text_tmp = np.asarray(dataset_test_batched_text, dtype=object)
dataset_test_batched_class_tmp = np.asarray(dataset_test_batched_class, dtype=object)
dataset_train_batched_text_tmp = np.asarray(dataset_train_batched_text, dtype=object)
dataset_train_batched_class_tmp = np.asarray(dataset_train_batched_class, dtype=object)
np_dataset_test_batched_text = []
np_dataset_test_batched_class = []
np_dataset_train_batched_text = []
np_dataset_train_batched_class = []
for itr in dataset_test_batched_text_tmp:
np_dataset_test_batched_text.append(itr.to_numpy())
for itr in dataset_test_batched_class_tmp:
np_dataset_test_batched_class.append(itr.to_numpy())
for itr in dataset_train_batched_text_tmp:
np_dataset_train_batched_text.append(itr.to_numpy())
for itr in dataset_train_batched_class_tmp:
np_dataset_train_batched_class.append(itr.to_numpy())
np_dataset_test_batched_text = np.asarray(np_dataset_test_batched_text, dtype=object)
np_dataset_test_batched_class = np.asarray(np_dataset_test_batched_class, dtype=object)
np_dataset_train_batched_text = np.asarray(np_dataset_train_batched_text, dtype=object)
np_dataset_train_batched_class = np.asarray(np_dataset_train_batched_class, dtype=object)
np_dataset_test_batched_text[len(np_dataset_test_batched_text)- 1][0]
tf_dataset_test_batched_text = tf.data.Dataset.from_tensor_slices(np_dataset_test_batched_text)
tf_dataset_test_batched_text
VOCAB_SIZE = 1000
encoder = tf.keras.layers.TextVectorization()
#max_tokens=VOCAB_SIZE)
encoder.adapt(np_dataset_train_batched_text)
import matplotlib.pyplot as plt
def plot_graphs(history, metric):
plt.plot(history.history[metric])
plt.plot(history.history['val_'+metric], '')
plt.xlabel("Epochs")
plt.ylabel(metric)
plt.legend([metric, 'val_'+metric])
dataset_2, info = tfds.load('imdb_reviews', with_info=True,
as_supervised=True)
train_dataset, test_dataset = dataset_2['train'], dataset_2['test']
train_dataset.element_spec
for example, label in train_dataset.take(1):
print('text: ', example.numpy())
print('label: ', label.numpy())
BUFFER_SIZE = 10000
BATCH_SIZE = 64
train_dataset = train_dataset.shuffle(BUFFER_SIZE).batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
test_dataset = test_dataset.batch(BATCH_SIZE).prefetch(tf.data.AUTOTUNE)
train_dataset.as_numpy_iterator()
for example, label in train_dataset.take(1):
print('texts: ', example.numpy()[:3])
print()
print('labels: ', label.numpy()[:])
VOCAB_SIZE = 1000
encoder = tf.keras.layers.experimental.preprocessing.TextVectorization(
max_tokens=VOCAB_SIZE)
encoder.adapt(train_dataset.map(lambda text, label: text))
vocab = np.array(encoder.get_vocabulary())
vocab[:]
encoded_example = encoder(example)[:3].numpy()
encoded_example
for n in range(3):
print("Original: ", example[n].numpy())
print("Round-trip: ", " ".join(vocab[encoded_example[n]]))
print()
import os
model2 = None
print(os.listdir('/content/drive/MyDrive/sentiment/'))
if len(os.listdir('/content/drive/MyDrive/sentiment/')) == 0:
model = tf.keras.Sequential([
encoder,
tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True),
tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(64)),
tf.keras.layers.Dense(64, activation='relu'),
tf.keras.layers.Dense(1, activation='sigmoid')
])
print ("created model")
else:
model2 = tf.keras.models.load_model ("/content/drive/MyDrive/sentiment/")
print ("loaded model")
if model2 is not None:
model = model2
print([layer.supports_masking for layer in model.layers])
# predict on a sample text without padding.
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model.predict(np.array([sample_text]))
print(predictions[0])
# predict on a sample text with padding
padding = "the " * 2000
predictions = model.predict(np.array([sample_text, padding]))
print(predictions[0])
model.compile(loss=tf.keras.losses.BinaryCrossentropy(from_logits=False),
optimizer=tf.keras.optimizers.Adam(1e-4),
metrics=['accuracy']) #run_eagerly=True)
history = model.fit(train_dataset, epochs=5,
validation_data=test_dataset,
validation_steps=30)
test_loss, test_acc = model.evaluate(test_dataset)
print('Test Loss:', test_loss)
print('Test Accuracy:', test_acc)
# predict on a sample text without padding.
sample_text = ('good is great')
predictions = model.predict(np.array([sample_text]))
print(predictions)
sample_text = ('bad equals very bad. Worse')
predictions = model.predict(np.array([sample_text]))
print(predictions)
x = tfds.as_numpy(test_dataset)
for ele in train_dataset.as_numpy_iterator():
print (ele)
print ("---------------------")
plt.figure(figsize=(16, 6))
plt.subplot(1, 2, 1)
plot_graphs(history, 'accuracy')
plt.subplot(1, 2, 2)
plot_graphs(history, 'loss')
m = tf.keras.metrics.Accuracy()
m.update_state([[0], [2], [3], [4]], [[0], [2], [3], [4]])
m.result().numpy()
import copy
vicab2 = copy.deepcopy(vocab)
vicab2.sort()
vicab2
lst = []
def func(text, label):
lst.append([text, label])
return text, label
test_dataset.map(func)
lst
for ele in test_dataset.as_numpy_iterator():
print (ele)
tf.keras.models.save_model(model=model, filepath="/content/drive/MyDrive/sentiment/")
tf.saved_model.save(obj=model, export_dir="/content/drive/MyDrive/sentiment")
model.save("/content/drive/MyDrive/sentiment/model", save_format="tf")
for layer in model.layers: print(layer.get_config(), layer.get_weights())
input_array = np.random.randint(len(encoder.get_vocabulary()), size=(3, 1))
model_temp = tf.keras.Sequential()
model_temp.add(encoder)
model_temp.add(tf.keras.layers.Embedding(
input_dim=len(encoder.get_vocabulary()),
output_dim=64,
# Use masking to handle the variable sequence lengths
mask_zero=True))
model_temp.compile('rmsprop', 'mse')
# output_array = model_temp.predict("hello world this is great!")
# print(output_array.shape)
sample_text = ('The movie was cool. The animation and the graphics '
'were out of this world. I would recommend this movie.')
predictions = model_temp.predict(np.array([sample_text]))
print(len(predictions))
print(len(predictions[0]))
print(len(predictions[0][0]))
# The model will take as input an integer matrix of size (batch,
# input_length), and the largest integer (i.e. word index) in the input
# should be no larger than 999 (vocabulary size).
# Now model.output_shape is (None, 10, 64), where `None` is the batch
# dimension.
# input_array = np.random.randint(900, size=(3, 10))
# model_temp.compile('rmsprop', 'mse')
# output_array = model_temp.predict(input_array)
# print(output_array.shape)
# print(output_array[0][0])  # output_array exists only if the commented-out prediction above is run
input_array[0][0]
```
| github_jupyter |
```
import tensorflow as tf
from tensorflow.python.keras.utils import HDF5Matrix
from tensorflow.python.keras.models import Model
from tensorflow.python.keras.layers import (Input, Lambda, Conv2D, MaxPooling2D, Flatten, Dense, Dropout,
                                            Activation, BatchNormalization, concatenate, UpSampling2D,
                                            ZeroPadding2D)
from sklearn.metrics import mean_squared_error, mean_absolute_error, confusion_matrix
from matplotlib import pyplot as plt
%matplotlib inline
import numpy as np
```
```Python
x_train = HDF5Matrix("data.h5", "x_train")
x_valid = HDF5Matrix("data.h5", "x_valid")
```
shapes should be:
* (1355578, 432, 560, 1)
* (420552, 432, 560, 1)
```
def gen_data(shape=0, name="input"):
data = np.random.rand(512, 512, 4)
label = data[:,:,-1]
return tf.constant(data.reshape(1,512,512,4).astype(np.float32)), tf.constant(label.reshape(1,512,512,1).astype(np.float32))
## NOTE:
## Tensor 4D -> Batch,X,Y,Z
## Tensor dtype: at most float32!
d, l = gen_data(0,0)
print(d.shape, l.shape)
def unet():
inputs, label = gen_data()
input_shape = inputs.shape
#down0a = Conv2D(16, (3, 3), padding='same')(inputs)
down0a = Conv2D(16, kernel_size=(3, 3), padding='same', input_shape=input_shape)(inputs)
down0a_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0a)
print("down0a.shape:",down0a.shape,"\ndwnpool.shap:", down0a_pool.shape)#?!? letztes != Batch?
#dim0 = Batch
#dim1,dim2 = X,Y
    #dim3 = channels
up1 = UpSampling2D((3, 3))(down0a)
print("upsamp.shape:",up1.shape) #UpSampling รคndert dim1, dim2... somit (?,X,Y,?) evtl. Batch auf dim0 ?
unet()
def unet2(input_shape, output_length):
inputs = Input(shape=input_shape, name="input")
# 512
down0a = Conv2D(16, (3, 3), padding='same')(inputs)
down0a = BatchNormalization()(down0a)
down0a = Activation('relu')(down0a)
down0a = Conv2D(16, (3, 3), padding='same')(down0a)
down0a = BatchNormalization()(down0a)
down0a = Activation('relu')(down0a)
down0a_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0a)
# 256
down0 = Conv2D(32, (3, 3), padding='same')(down0a_pool)
down0 = BatchNormalization()(down0)
down0 = Activation('relu')(down0)
down0 = Conv2D(32, (3, 3), padding='same')(down0)
down0 = BatchNormalization()(down0)
down0 = Activation('relu')(down0)
down0_pool = MaxPooling2D((2, 2), strides=(2, 2))(down0)
# 128
down1 = Conv2D(64, (3, 3), padding='same')(down0_pool)
down1 = BatchNormalization()(down1)
down1 = Activation('relu')(down1)
down1 = Conv2D(64, (3, 3), padding='same')(down1)
down1 = BatchNormalization()(down1)
down1 = Activation('relu')(down1)
down1_pool = MaxPooling2D((2, 2), strides=(2, 2))(down1)
# 64
down2 = Conv2D(128, (3, 3), padding='same')(down1_pool)
down2 = BatchNormalization()(down2)
down2 = Activation('relu')(down2)
down2 = Conv2D(128, (3, 3), padding='same')(down2)
down2 = BatchNormalization()(down2)
down2 = Activation('relu')(down2)
down2_pool = MaxPooling2D((2, 2), strides=(2, 2))(down2)
    # 32
center = Conv2D(1024, (3, 3), padding='same')(down2_pool)
center = BatchNormalization()(center)
center = Activation('relu')(center)
center = Conv2D(1024, (3, 3), padding='same')(center)
center = BatchNormalization()(center)
center = Activation('relu')(center)
# center
up2 = UpSampling2D((2, 2))(center)
up2 = concatenate([down2, up2], axis=3)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
up2 = Conv2D(128, (3, 3), padding='same')(up2)
up2 = BatchNormalization()(up2)
up2 = Activation('relu')(up2)
# 64
up1 = UpSampling2D((2, 2))(up2)
up1 = concatenate([down1, up1], axis=3)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
up1 = Conv2D(64, (3, 3), padding='same')(up1)
up1 = BatchNormalization()(up1)
up1 = Activation('relu')(up1)
# 128
up0 = UpSampling2D((2, 2))(up1)
up0 = concatenate([down0, up0], axis=3)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
up0 = Conv2D(32, (3, 3), padding='same')(up0)
up0 = BatchNormalization()(up0)
up0 = Activation('relu')(up0)
# 256
up0a = UpSampling2D((2, 2))(up0)
up0a = concatenate([down0a, up0a], axis=3)
up0a = Conv2D(16, (3, 3), padding='same')(up0a)
up0a = BatchNormalization()(up0a)
up0a = Activation('relu')(up0a)
up0a = Conv2D(16, (3, 3), padding='same')(up0a)
up0a = BatchNormalization()(up0a)
up0a = Activation('relu')(up0a)
up0a = Conv2D(16, (3, 3), padding='same')(up0a)
up0a = BatchNormalization()(up0a)
up0a = Activation('relu')(up0a)
# 512
output = Conv2D(1, (1, 1), activation='relu')(up0a)
model = Model(inputs=inputs, outputs=output)
model.compile(loss="mean_squared_error", optimizer='adam')
return model
d = unet2((512,512,4),(512,512,1))
```
Afterwards:
```Python
output_length = 1
input_length = output_length + 1
input_shape=(432, 560, input_length)
model_1 = unet(input_shape, output_length)
model_1.fit(x_train_1, y_train_1, batch_size = 16, epochs = 25,
validation_data=(x_valid_1, y_valid_1))
```
```
d.summary()
#ToDo: now learn something!
```
| github_jupyter |
```
import numpy as np
class PCA:
def __init__(self, n_components):
"""
ๅๅงๅPCA
"""
        assert n_components>=1, "n_components must be at least 1"
self.n_components=n_components
self.components_=None
def fit(self, X, eta=0.01,n_iters=1e4):
"""
่ทๅพๆฐๆฎ้X็n_componentsไธปๆๅ
"""
        assert self.n_components <= X.shape[1], "the number of principal components cannot exceed the dimensionality of the data"
def demean(X):
"""
ๅฐๆฐๆฎ้็ๅๅผๅไธบ0
"""
return X - np.mean(X, axis=0)
def f(w, X):
"""
็ฎๆ ๅฝๆฐ
"""
return np.sum(X.dot(w)**2)/len(X)
def df(w, X):
"""
็ฎๆ ๅฝๆฐ็ๆขฏๅบฆ
"""
return X.T.dot(X.dot(w))*2/len(X)
def direction(w):
"""
ๅฐๅ้่ฝฌๅไธบๆ ๅๅ้
"""
return w/np.linalg.norm(w)
def first_components(X, initial_w, eta=0.01, n_iters=1e4, epsilon=1e-8):
w=direction(initial_w)
cur_iter=0
while cur_iter<n_iters:
gradient=df(w, X)
last_w=w
w=w+eta*gradient
w=direction(w)
if abs(f(w,X) - f(last_w, X))<epsilon:
break
cur_iter+=1
return w
X_pca=demean(X)
self.components_=np.empty((self.n_components, X.shape[1]))
for i in range(self.n_components):
initial_w=np.random.random(X_pca.shape[1])
w=first_components(X_pca, initial_w, eta, n_iters)
self.components_[i,:]=w
X_pca=X_pca - X_pca.dot(w).reshape(-1,1)*w
return self
def transform(self, X):
"""
ๅฐX๏ผๆ ๅฐๅฐๅไธชไธปๆๅๅ้ไธญ
"""
        assert X.shape[1]==self.components_.shape[1], "dimensions must match"
return X.dot(self.components_.T)
def inverse_transform(self,X):
"""
ๅฐX๏ผๅๅๆ ๅฐๅฐๅๆฅ็็นๅพ็ฉบ้ด
"""
assert X.shape[1]==self.components_.shape[0]
return X.dot(self.components_)
def __repr__(self):
return "PCA(n_components=%d)" % self.n_components
X=np.empty((100,2))
X[:,0]=np.random.uniform(0.0,100.0,size=100)
X[:,1]=0.75*X[:,0]+3.0+np.random.normal(0.0,10.0,size=100)
X.shape
pca=PCA(n_components=2)
pca.fit(X)
pca.components_
pca=PCA(n_components=1)
pca.fit(X)
pca.components_
x_reduction=pca.transform(X)
x_reduction.shape
x_restore=pca.inverse_transform(x_reduction)
x_restore.shape
import matplotlib.pyplot as plt
plt.scatter(X[:,0],X[:,1], color='b',alpha=0.5)
plt.scatter(x_restore[:,0], x_restore[:,1], color='r', alpha=0.5)
plt.show()
```
| github_jupyter |
# Implementing a Neural Network
In this exercise we will develop a neural network with fully-connected layers to perform classification, and test it out on the CIFAR-10 dataset.
```
# A bit of setup
import numpy as np
import matplotlib.pyplot as plt
from cs231n.classifiers.neural_net import TwoLayerNet
%matplotlib inline
plt.rcParams['figure.figsize'] = (10.0, 8.0) # set default size of plots
plt.rcParams['image.interpolation'] = 'nearest'
plt.rcParams['image.cmap'] = 'gray'
# for auto-reloading external modules
# see http://stackoverflow.com/questions/1907993/autoreload-of-modules-in-ipython
%load_ext autoreload
%autoreload 2
def rel_error(x, y):
""" returns relative error """
return np.max(np.abs(x - y) / (np.maximum(1e-8, np.abs(x) + np.abs(y))))
```
We will use the class `TwoLayerNet` in the file `cs231n/classifiers/neural_net.py` to represent instances of our network. The network parameters are stored in the instance variable `self.params` where keys are string parameter names and values are numpy arrays. Below, we initialize toy data and a toy model that we will use to develop your implementation.
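For orientation, the parameter dictionary described above typically has the following shape (a hedged sketch with assumed initialization; `init_params` is an illustrative helper, not the graded implementation):

```
# Sketch only: assumed layout of self.params for a two-layer net
import numpy as np

def init_params(input_size, hidden_size, num_classes, std=1e-4):
    return {
        'W1': std * np.random.randn(input_size, hidden_size),   # first-layer weights
        'b1': np.zeros(hidden_size),                            # first-layer biases
        'W2': std * np.random.randn(hidden_size, num_classes),  # second-layer weights
        'b2': np.zeros(num_classes),                            # second-layer biases
    }
```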
```
# Create a small net and some toy data to check your implementations.
# Note that we set the random seed for repeatable experiments.
input_size = 4
hidden_size = 10
num_classes = 3
num_inputs = 5
def init_toy_model():
np.random.seed(0)
return TwoLayerNet(input_size, hidden_size, num_classes, std=1e-1)
def init_toy_data():
np.random.seed(1)
X = 10 * np.random.randn(num_inputs, input_size)
y = np.array([0, 1, 2, 2, 1])
return X, y
net = init_toy_model()
X, y = init_toy_data()
```
# Forward pass: compute scores
Open the file `cs231n/classifiers/neural_net.py` and look at the method `TwoLayerNet.loss`. This function is very similar to the loss functions you have written for the SVM and Softmax exercises: It takes the data and weights and computes the class scores, the loss, and the gradients on the parameters.
Implement the first part of the forward pass, which uses the weights and biases to compute the scores for all inputs.
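As a hedged sketch of what this part computes (assuming a ReLU hidden activation; the helper name is illustrative, not the assignment's exact code):

```
import numpy as np

def forward_scores(X, params):
    # Affine transform followed by a ReLU non-linearity for the hidden layer
    hidden = np.maximum(0, X.dot(params['W1']) + params['b1'])
    # Second affine transform produces the unnormalized class scores
    return hidden.dot(params['W2']) + params['b2']
```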
```
scores = net.loss(X)
print 'Your scores:'
print scores
print
print 'correct scores:'
correct_scores = np.asarray([
[-0.81233741, -1.27654624, -0.70335995],
[-0.17129677, -1.18803311, -0.47310444],
[-0.51590475, -1.01354314, -0.8504215 ],
[-0.15419291, -0.48629638, -0.52901952],
[-0.00618733, -0.12435261, -0.15226949]])
print correct_scores
print
# The difference should be very small. We get < 1e-7
print 'Difference between your scores and correct scores:'
print np.sum(np.abs(scores - correct_scores))
```
# Forward pass: compute loss
In the same function, implement the second part that computes the data and regularization loss.
```
loss, _ = net.loss(X, y, reg=0.1)
correct_loss = 1.30378789133
# should be very small, we get < 1e-12
print 'Difference between your loss and correct loss:'
print np.sum(np.abs(loss - correct_loss))
```
# Backward pass
Implement the rest of the function. This will compute the gradient of the loss with respect to the variables `W1`, `b1`, `W2`, and `b2`. Now that you (hopefully!) have a correctly implemented forward pass, you can debug your backward pass using a numeric gradient check:
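A minimal sketch of the centered-difference check that a helper like `eval_numerical_gradient` typically performs (illustrative only, not the course's exact implementation):

```
import numpy as np

def numerical_gradient(f, x, h=1e-5):
    grad = np.zeros_like(x)
    it = np.nditer(x, flags=['multi_index'], op_flags=['readwrite'])
    while not it.finished:
        ix = it.multi_index
        old = x[ix]
        x[ix] = old + h
        fxph = f(x)                           # f(x + h)
        x[ix] = old - h
        fxmh = f(x)                           # f(x - h)
        x[ix] = old                           # restore the original value
        grad[ix] = (fxph - fxmh) / (2 * h)    # centered difference
        it.iternext()
    return grad
```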
```
from cs231n.gradient_check import eval_numerical_gradient
# Use numeric gradient checking to check your implementation of the backward pass.
# If your implementation is correct, the difference between the numeric and
# analytic gradients should be less than 1e-8 for each of W1, W2, b1, and b2.
loss, grads = net.loss(X, y, reg=0.1)
# these should all be less than 1e-8 or so
for param_name in grads:
f = lambda W: net.loss(X, y, reg=0.1)[0]
param_grad_num = eval_numerical_gradient(f, net.params[param_name], verbose=False)
print param_grad_num.shape
print '%s max relative error: %e' % (param_name, rel_error(param_grad_num, grads[param_name]))
```
# Train the network
To train the network we will use stochastic gradient descent (SGD), similar to the SVM and Softmax classifiers. Look at the function `TwoLayerNet.train` and fill in the missing sections to implement the training procedure. This should be very similar to the training procedure you used for the SVM and Softmax classifiers. You will also have to implement `TwoLayerNet.predict`, as the training process periodically performs prediction to keep track of accuracy over time while the network trains.
Once you have implemented the method, run the code below to train a two-layer network on toy data. You should achieve a training loss less than 0.2.
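The minibatch step inside such a training loop usually looks roughly like the sketch below (hedged: the helper and its arguments are illustrative, not the assignment's exact code):

```
import numpy as np

def sgd_step(net, X, y, batch_size, learning_rate, reg):
    # Sample a random minibatch of training points
    batch_indices = np.random.choice(X.shape[0], batch_size)
    X_batch, y_batch = X[batch_indices], y[batch_indices]
    # Evaluate loss and gradients on the minibatch, then take a gradient step
    loss, grads = net.loss(X_batch, y_batch, reg=reg)
    for param_name in net.params:
        net.params[param_name] -= learning_rate * grads[param_name]
    return loss
```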
```
net = init_toy_model()
stats = net.train(X, y, X, y,
learning_rate=1e-1, reg=1e-5,
num_iters=100, verbose=False)
print 'Final training loss: ', stats['loss_history'][-1]
# plot the loss history
plt.plot(stats['loss_history'])
plt.xlabel('iteration')
plt.ylabel('training loss')
plt.title('Training Loss history')
plt.show()
```
# Load the data
Now that you have implemented a two-layer network that passes gradient checks and works on toy data, it's time to load up our favorite CIFAR-10 data so we can use it to train a classifier on a real dataset.
```
from cs231n.data_utils import load_CIFAR10
def get_CIFAR10_data(num_training=49000, num_validation=1000, num_test=1000):
"""
Load the CIFAR-10 dataset from disk and perform preprocessing to prepare
it for the two-layer neural net classifier. These are the same steps as
we used for the SVM, but condensed to a single function.
"""
# Load the raw CIFAR-10 data
cifar10_dir = 'cs231n/datasets/cifar-10-batches-py'
X_train, y_train, X_test, y_test = load_CIFAR10(cifar10_dir)
# Subsample the data
mask = range(num_training, num_training + num_validation)
X_val = X_train[mask]
y_val = y_train[mask]
mask = range(num_training)
X_train = X_train[mask]
y_train = y_train[mask]
mask = range(num_test)
X_test = X_test[mask]
y_test = y_test[mask]
# Normalize the data: subtract the mean image
mean_image = np.mean(X_train, axis=0)
X_train -= mean_image
X_val -= mean_image
X_test -= mean_image
# Reshape data to rows
X_train = X_train.reshape(num_training, -1)
X_val = X_val.reshape(num_validation, -1)
X_test = X_test.reshape(num_test, -1)
return X_train, y_train, X_val, y_val, X_test, y_test
# Invoke the above function to get our data.
X_train, y_train, X_val, y_val, X_test, y_test = get_CIFAR10_data()
print 'Train data shape: ', X_train.shape
print 'Train labels shape: ', y_train.shape
print 'Validation data shape: ', X_val.shape
print 'Validation labels shape: ', y_val.shape
print 'Test data shape: ', X_test.shape
print 'Test labels shape: ', y_test.shape
```
# Train a network
To train our network we will use SGD with momentum. In addition, we will adjust the learning rate with an exponential learning rate schedule as optimization proceeds; after each epoch, we will reduce the learning rate by multiplying it by a decay rate.
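A hedged sketch of the two ideas mentioned above — a momentum update and a per-epoch exponential decay of the learning rate (function and parameter names are illustrative):

```
import numpy as np

def momentum_update(w, dw, velocity, learning_rate, mu=0.9):
    # The velocity is a running, decaying sum of past gradients
    velocity = mu * velocity - learning_rate * dw
    return w + velocity, velocity

def decay_learning_rate(learning_rate, learning_rate_decay=0.95):
    # Applied once per epoch so the step size shrinks as training proceeds
    return learning_rate * learning_rate_decay
```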
```
input_size = 32 * 32 * 3
hidden_size = 50
num_classes = 10
net = TwoLayerNet(input_size, hidden_size, num_classes)
# Train the network
stats = net.train(X_train, y_train, X_val, y_val,
num_iters=1000, batch_size=200,
learning_rate=1e-4, learning_rate_decay=0.95,
reg=0.5, verbose=True)
# Predict on the validation set
val_acc = (net.predict(X_val) == y_val).mean()
print 'Validation accuracy: ', val_acc
```
# Debug the training
With the default parameters we provided above, you should get a validation accuracy of about 0.29 on the validation set. This isn't very good.
One strategy for getting insight into what's wrong is to plot the loss function and the accuracies on the training and validation sets during optimization.
Another strategy is to visualize the weights that were learned in the first layer of the network. In most neural networks trained on visual data, the first layer weights typically show some visible structure when visualized.
```
# Plot the loss function and train / validation accuracies
plt.subplot(2, 1, 1)
plt.plot(stats['loss_history'])
plt.title('Loss history')
plt.xlabel('Iteration')
plt.ylabel('Loss')
plt.subplot(2, 1, 2)
plt.plot(stats['train_acc_history'], label='train')
plt.plot(stats['val_acc_history'], label='val')
plt.title('Classification accuracy history')
plt.xlabel('Epoch')
plt.ylabel('Classification accuracy')
plt.show()
from cs231n.vis_utils import visualize_grid
# Visualize the weights of the network
def show_net_weights(net):
W1 = net.params['W1']
W1 = W1.reshape(32, 32, 3, -1).transpose(3, 0, 1, 2)
plt.imshow(visualize_grid(W1, padding=3).astype('uint8'))
plt.gca().axis('off')
plt.show()
show_net_weights(net)
```
# Tune your hyperparameters
**What's wrong?**. Looking at the visualizations above, we see that the loss is decreasing more or less linearly, which seems to suggest that the learning rate may be too low. Moreover, there is no gap between the training and validation accuracy, suggesting that the model we used has low capacity, and that we should increase its size. On the other hand, with a very large model we would expect to see more overfitting, which would manifest itself as a very large gap between the training and validation accuracy.
**Tuning**. Tuning the hyperparameters and developing intuition for how they affect the final performance is a large part of using Neural Networks, so we want you to get a lot of practice. Below, you should experiment with different values of the various hyperparameters, including hidden layer size, learning rate, number of training epochs, and regularization strength. You might also consider tuning the learning rate decay, but you should be able to get good performance using the default value.
**Approximate results**. You should be aim to achieve a classification accuracy of greater than 48% on the validation set. Our best network gets over 52% on the validation set.
**Experiment**: Your goal in this exercise is to get as good a result on CIFAR-10 as you can with a fully-connected Neural Network. For every 1% above 52% on the Test set we will award you with one extra bonus point. Feel free to implement your own techniques (e.g. PCA to reduce dimensionality, adding dropout, adding features to the solver, etc.).
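One possible way to automate the sweep (a sketch under assumed search ranges, to be adapted inside the cell below rather than taken as the intended solution):

```
import numpy as np

best_val_acc, best_combo = -1, None
for _ in range(20):
    hidden_size = int(np.random.choice([64, 128, 256]))
    lr = 10 ** np.random.uniform(-4, -3)    # assumed range
    reg = 10 ** np.random.uniform(-5, -1)   # assumed range
    candidate = TwoLayerNet(32 * 32 * 3, hidden_size, 10)
    candidate.train(X_train, y_train, X_val, y_val, num_iters=1000,
                    batch_size=200, learning_rate=lr,
                    learning_rate_decay=0.95, reg=reg, verbose=False)
    val_acc = (candidate.predict(X_val) == y_val).mean()
    if val_acc > best_val_acc:
        best_val_acc, best_combo = val_acc, (hidden_size, lr, reg)
print best_val_acc, best_combo
```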
```
best_net = None # store the best model into this
#################################################################################
# TODO: Tune hyperparameters using the validation set. Store your best trained #
# model in best_net. #
# #
# To help debug your network, it may help to use visualizations similar to the #
# ones we used above; these visualizations will have significant qualitative #
# differences from the ones we saw above for the poorly tuned network. #
# #
# Tweaking hyperparameters by hand can be fun, but you might find it useful to #
# write code to sweep through possible combinations of hyperparameters #
# automatically like we did on the previous exercises. #
#################################################################################
pass
#################################################################################
# END OF YOUR CODE #
#################################################################################
# visualize the weights of the best network
show_net_weights(best_net)
```
# Run on the test set
When you are done experimenting, you should evaluate your final trained network on the test set; you should get above 48%.
**We will give you extra bonus point for every 1% of accuracy above 52%.**
```
test_acc = (best_net.predict(X_test) == y_test).mean()
print 'Test accuracy: ', test_acc
```
| github_jupyter |
```
import pandas as pd
import numpy as np
# import pymssql
# from fuzzywuzzy import fuzz
import json
import tweepy
from collections import defaultdict
from datetime import datetime
import re
# import pyodbc
from wordcloud import WordCloud
import seaborn as sns
import matplotlib.pyplot as plt
import string, nltk, re, json, tweepy, gensim, scipy.sparse, pickle, pyLDAvis, pyLDAvis.gensim
from sklearn.feature_extraction.text import CountVectorizer
from nltk.stem import WordNetLemmatizer
from nltk.corpus import stopwords
from gensim import matutils, models, corpora
import warnings
warnings.filterwarnings("ignore")
```
# Social Media Analysis
## EDA
```
df = pd.read_csv('./meme_cleaning.csv')
df_sentiment = pd.read_csv('563_df_sentiments.csv')
df_sentiment = df_sentiment.drop(columns=['Unnamed: 0', 'Unnamed: 0.1', 'Unnamed: 0.1.1'])
df_sentiment.head()
#Extract all words that begin with # and turn the results into a dataframe
temp = df_sentiment['Tweet'].str.lower().str.extractall(r"(#\w+)")
temp.columns = ['unnamed']
# Convert the multiple hashtag values into a list
temp = temp.groupby(level = 0)['unnamed'].apply(list)
# Save the result as a feature in the original dataset
df_sentiment['hashtags'] = temp
for i in range(len(df_sentiment)):
if df_sentiment.loc[i, 'No_of_Retweets'] >= 4:
df_sentiment.loc[i, 'No_of_Retweets'] = 4
for i in range(len(df_sentiment)):
if df_sentiment.loc[i, 'No_of_Likes'] >= 10:
df_sentiment.loc[i, 'No_of_Likes'] = 10
retweet_df = df_sentiment.groupby(['No_of_Retweets', 'vaderSentiment']).vaderSentimentScores.agg(count='count').reset_index()
like_df = df_sentiment.groupby(['No_of_Likes', 'vaderSentiment']).vaderSentimentScores.agg(count='count').reset_index()
classify_df = df_sentiment.vaderSentiment.value_counts().reset_index()
df_sentiment.Labels = df_sentiment.Labels.fillna('')
df_likes_dict = df_sentiment.groupby('No_of_Likes').vaderSentimentScores.agg(count='count').to_dict()['count']
df_retweet_dict = df_sentiment.groupby('No_of_Retweets').vaderSentimentScores.agg(count='count').to_dict()['count']
for i in range(len(like_df)):
like_df.loc[i, 'Normalized_count'] = like_df.loc[i, 'count'] / df_likes_dict[like_df.loc[i, 'No_of_Likes']]
for i in range(len(retweet_df)):
retweet_df.loc[i, 'Normalized_count'] = retweet_df.loc[i, 'count'] / df_retweet_dict[retweet_df.loc[i, 'No_of_Retweets']]
```
## Sentiment
```
g = sns.catplot(x = "No_of_Likes", y = "Normalized_count", hue = "vaderSentiment", data = like_df, kind = "bar")
g = sns.catplot(x = "No_of_Retweets", y = "Normalized_count", hue = "vaderSentiment", data = retweet_df, kind = "bar")
plt.pie(classify_df['vaderSentiment'], labels=classify_df['index']);
l = []
for i in range(len(df_sentiment)):
for element in df_sentiment.loc[i, 'Labels'].split():
if element != 'Font':
l.append(element)
```
## Word Cloud
```
wordcloud = WordCloud(width = 800, height = 800,
background_color ='white',
min_font_size = 10).generate(str(l))
plt.figure(figsize = (8, 8), facecolor = None)
plt.imshow(wordcloud, interpolation='bilinear')
plt.axis("off")
plt.tight_layout(pad = 0)
plt.show()
```
## Topic Modeling
```
cv = CountVectorizer(stop_words='english')
data_cv = cv.fit_transform(df.Tweet)
words = cv.get_feature_names()
data_dtm = pd.DataFrame(data_cv.toarray(), columns=cv.get_feature_names())
pickle.dump(cv, open("cv_stop.pkl", "wb"))
data_dtm_transpose = data_dtm.transpose()
sparse_counts = scipy.sparse.csr_matrix(data_dtm_transpose)
corpus = matutils.Sparse2Corpus(sparse_counts)
cv = pickle.load(open("cv_stop.pkl", "rb"))
id2word = dict((v, k) for k, v in cv.vocabulary_.items())
word2id = dict((k, v) for k, v in cv.vocabulary_.items())
d = corpora.Dictionary()
d.id2token = id2word
d.token2id = word2id
lda = gensim.models.ldamodel.LdaModel(corpus=corpus, id2word=id2word, num_topics=3, passes=10)
lda.print_topics()
pyLDAvis.enable_notebook()
vis = pyLDAvis.gensim.prepare(lda, corpus, d)
vis
```
| github_jupyter |
### Project: Create a neural network class
---
Based on previous code examples, develop a neural network class that is able to classify any dataset provided. The class should create objects based on the desired network architecture:
1. Number of inputs
2. Number of hidden layers
3. Number of neurons per layer
4. Number of outputs
5. Learning rate
The class must have the train, and predict functions.
Test the neural network class on the datasets provided below: Use the input data to train the network, and then pass new inputs to predict on. Print the expected label and the predicted label for the input you used. Print the accuracy of the training after predicting on different inputs.
Use matplotlib to plot the error that the train method generates.
**Don't forget to install Keras and tensorflow in your environment!**
---
### Import the needed Packages
```
import numpy as np
import matplotlib.pyplot as plt
# Needed for the mnist data
from keras.datasets import mnist
from keras.utils import to_categorical
```
### Define the class
```
class NeuralNetwork:
def __init__(self, architecture, alpha):
'''
        architecture: List of integers [n_inputs, n_hidden_layers, n_neurons_per_layer, n_outputs].
alpha: Learning rate.
'''
# TODO: Initialize the list of weights matrices, then store
# the network architecture and learning rate
self.layers = architecture
self.alpha = alpha
np.random.seed(13)
self.FW = np.random.randn(architecture[0], architecture[2])
        # one weight matrix between each pair of consecutive hidden layers
        self.MW = np.empty((architecture[1] - 1, architecture[2], architecture[2]))
        for x in range(architecture[1] - 1):
            self.MW[x] = np.random.randn(architecture[2], architecture[2])
self.LW = np.random.randn(architecture[2], architecture[3])
self.FB = np.random.randn(architecture[2])
self.MB = np.random.randn(architecture[1] - 1, architecture[2])
self.LB = np.random.randn(architecture[3])
pass
def __repr__(self):
return "NeuralNetwork: {}".format( "-".join(str(l) for l in self.layers))
def softmax(self, X):
expX = np.exp(X)
return expX / expX.sum(axis=1, keepdims=True)
def sigmoid(self, x):
# the sigmoid for a given input value
return 1.0 / (1.0 + np.exp(-x))
def sigmoid_deriv(self, x):
# the derivative of the sigmoid
return x * (1 - x)
def predict(self, inputs):
# TODO: Define the predict function
self.newWeights = np.empty((self.layers[1], inputs.shape[0], self.layers[2]))
self.newWeights[0] = self.sigmoid(np.dot(inputs, self.FW) + self.FB)
        for x in range(self.layers[1] - 1):
            self.newWeights[x+1] = self.sigmoid(np.dot(self.newWeights[x], self.MW[x]) + self.MB[x])
finalLevel = self.softmax( np.dot(self.newWeights[len(self.newWeights)-1], self.LW) + self.LB)
return finalLevel
    def train(self, inputs, labels, epochs = 1000, displayUpdate = 100):
        fail = []
        for i in range(epochs):
            output = self.predict(inputs)
            error = labels - output
            # delta at the last hidden layer, backpropagated from the output error
            delta = np.dot(error * self.sigmoid_deriv(output), self.LW.T) * self.sigmoid_deriv(self.newWeights[len(self.newWeights) - 1])
            self.LB += np.sum(error * self.sigmoid_deriv(output)) * self.alpha
            self.LW += np.dot(self.newWeights[len(self.newWeights) - 1].T, error * self.sigmoid_deriv(output)) * self.alpha
            for x in range(self.layers[1] - 1):
                self.MW[(len(self.MW) - 1) - x] += np.dot(self.newWeights[(len(self.newWeights) - 2) - x].T, delta) * self.alpha
                self.MB[(len(self.MB) - 1) - x] += np.sum(delta) * self.alpha
                delta = np.dot(delta, self.MW[(len(self.MW) - 1) - x].T) * self.sigmoid_deriv(self.newWeights[(len(self.newWeights) - 2) - x])
            self.FW += np.dot(inputs.T, delta) * self.alpha
            self.FB += np.sum(delta) * self.alpha
            fail.append(np.mean(np.abs(error)))
        return fail
```
### Test datasets
#### XOR
```
# input dataset
XOR_inputs = np.array([
[0,0],
[0,1],
[1,0],
[1,1]
])
# labels dataset
XOR_labels = np.array([[0,1,1,0]]).T
#TODO: Test the class with the XOR data
```
#### Multiple classes
```
# Creates the data points for each class
class_1 = np.random.randn(700, 2) + np.array([0, -3])
class_2 = np.random.randn(700, 2) + np.array([3, 3])
class_3 = np.random.randn(700, 2) + np.array([-3, 3])
feature_set = np.vstack([class_1, class_2, class_3])
labels = np.array([0]*700 + [1]*700 + [2]*700)
one_hot_labels = np.zeros((2100, 3))
for i in range(2100):
one_hot_labels[i, labels[i]] = 1
plt.figure(figsize=(10,10))
plt.scatter(feature_set[:,0], feature_set[:,1], c=labels, s=30, alpha=0.5)
plt.show()
#TODO: Test the class with the multiple classes data
r = NeuralNetwork([2,2,5,3], 0.01)
fails = r.train(feature_set, one_hot_labels, 10000, 1000)
fig, ax = plt.subplots(1,1)
ax.set_ylabel('Error')
ax.plot(fails)
test = np.array([[0,-4]])
print(str(test) + str(r.predict(test)))
```
#### On the mnist data set
---
Train the network to classify hand drawn digits.
For this data set, if the training step is taking too long, you can try to adjust the architecture of the network to have fewer layers, or you could try to train it with fewer input. The data has already been loaded and preprocesed so that it can be used with the network.
---
```
# Load the train and test data from the mnist data set
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Plot a sample data point
plt.title("Label: " + str(train_labels[0]))
plt.imshow(train_images[0], cmap="gray")
# Standardize the data
# Flatten the images
train_images = train_images.reshape((60000, 28 * 28))
# turn values from 0-255 to 0-1
train_images = train_images.astype('float32') / 255
test_images = test_images.reshape((10000, 28 * 28))
test_images = test_images.astype('float32') / 255
# Create one hot encoding for the labels
train_labels = to_categorical(train_labels)
test_labels = to_categorical(test_labels)
# TODO: Test the class with the mnist data. Test the training of the network with the test_images data, and
# record the accuracy of the classification.
# Train the network on a subset of the MNIST training data
# (784 inputs, 2 hidden layers, 64 neurons per layer, 10 outputs)
r = NeuralNetwork([28 * 28, 2, 64, 10], 0.01)
fails = r.train(train_images[0:2000], train_labels[0:2000], 500, 100)
fig, ax = plt.subplots(1, 1)
ax.set_ylabel('Error')
ax.plot(fails)
# Predict on part of the test set; test_labels were already one-hot encoded above
predictions = r.predict(test_images[0:1000])
one_hot_test_labels = test_labels[0:1000]
np.set_printoptions(precision=10, suppress=True, linewidth=75)
# Threshold the network outputs into hard 0/1 guesses
guesses = np.copy(predictions)
guesses[guesses > 0.5] = 1
guesses[guesses <= 0.5] = 0
# Collect the misclassified samples and report the accuracy
misclassified = []
for index, (guess, label) in enumerate(zip(guesses, one_hot_test_labels)):
    if not np.array_equal(guess, label):
        misclassified.append((index, guess, label))
print('Classification accuracy:', 1 - len(misclassified) / len(guesses))
# Display a few of the images that were not correctly classified
fig, plots = plt.subplots(1, min(5, len(misclassified)), figsize=(15, 3))
for img, plot in zip(misclassified, plots):
    plot.imshow(test_images[img[0]].reshape(28, 28), cmap="gray")
    plot.set_title(str(np.argmax(img[1])))
```
After predicting on the *test_images*, use matplotlib to display some of the images that were not correctly classified. Then, answer the following questions:
1. **Why do you think those were incorrectly classified?**
2. **What could you try doing to improve the classification accuracy?**
| github_jupyter |
# Titanic 4
> ### `Pclass, Sex, Age`
```
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
plt.style.use('seaborn')
sns.set(font_scale=2.5)
import missingno as msno
import warnings
warnings.filterwarnings('ignore')
%matplotlib inline
df_train=pd.read_csv('C:/Users/ehfus/Downloads/titanic/train.csv')
df_test=pd.read_csv('C:/Users/ehfus/Downloads/titanic/test.csv')
# violin plot
fig,ax=plt.subplots(1,2,figsize=(18,8))
sns.violinplot('Pclass','Age',hue='Survived',data=df_train,scale='count',split=True,ax=ax[0])
ax[0].set_title('Pclass and Age vs Survived')
ax[0].set_yticks(range(0,110,10))
sns.violinplot('Sex','Age',hue='Survived',data=df_train,scale='count',split=True, ax=ax[1])
ax[1].set_title('Sex and Age vs Survived')
ax[1].set_yticks(range(0,110,10))
plt.show()
```
- The `scale` parameter has several options; look them up on Google.
> ### `Embarked : port of embarkation`
```
f,ax=plt.subplots(1,1,figsize=(7,7))
df_train[['Embarked','Survived']]\
.groupby(['Embarked'], as_index=True).mean()\
.sort_values(by='Survived', ascending=False)\
.plot.bar(ax=ax)
```
- Either `sort_values` or `sort_index` can be used.
```
f,ax=plt.subplots(2,2,figsize=(20,15))  # 2D array of axes; (1,2) would give a 1D array
sns.countplot('Embarked',data=df_train, ax=ax[0,0])
ax[0,0].set_title('(1) No. Of Passengers Boarded')
sns.countplot('Embarked',hue='Sex',data=df_train, ax=ax[0,1])
ax[0,1].set_title('(2) Male-Female split for embarked')
sns.countplot('Embarked', hue='Survived', data=df_train, ax=ax[1,0])
ax[1,0].set_title('(3) Embarked vs Survived')
sns.countplot('Embarked', hue='Pclass', data=df_train, ax=ax[1,1])
ax[1,1].set_title('(4) Embarked vs Pclass')
plt.subplots_adjust(wspace=0.2, hspace=0.5)  # adjust the horizontal/vertical spacing between subplots
plt.show()
```
> ### `Family - SibSp + ParCh`
```
df_train['FamilySize']=df_train['SibSp'] + df_train['Parch'] + 1
print('Maximum size of Family : ',df_train['FamilySize'].max())
print('Minimum size of Family : ',df_train['FamilySize'].min())
```
- Arithmetic can be applied directly to pandas Series.
```
f,ax=plt.subplots(1,3,figsize=(40,10))
sns.countplot('FamilySize', data=df_train, ax=ax[0])
ax[0].set_title('(1) No. Of Passenger Boarded', y=1.02)
sns.countplot('FamilySize', hue='Survived',data=df_train, ax=ax[1])
ax[1].set_title('(2) Survived countplot depending on FamilySize', y=1.02)
df_train[['FamilySize','Survived']].groupby(['FamilySize'],as_index=True).mean().sort_values(by='Survived',ascending=False).plot.bar(ax=ax[2])
ax[2].set_title('(3) Survived rate depending on FamilySize',y=1.02)
plt.subplots_adjust(wspace=0.2,hspace=0.5)
plt.show()
```
> ### `Fare : ticket fare (a continuous variable)`
- `distplot` draws a histogram of a Series; beyond the histogram's shape, skewness and kurtosis are also worth checking
- What exactly are skewness and kurtosis?
- Which Python functions compute them? (see the sketch below)
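Pandas exposes both statistics directly on a Series (a quick sketch; `scipy.stats` offers equivalents as well):
```
# Skewness and kurtosis of the raw Fare distribution
print('Skewness: {:.2f}'.format(df_train['Fare'].skew()))
print('Kurtosis: {:.2f}'.format(df_train['Fare'].kurt()))
```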
```
fig,ax=plt.subplots(1,1,figsize=(8,8))
g=sns.distplot(df_train['Fare'], color='b',label='Skewness{:.2f}'.format(df_train['Fare'].skew()),ax=ax)
g=g.legend(loc='best')
```
- The skewness is around 5, which is quite large -> the distribution is piled up heavily on the left -> training a model on it as-is may lower performance
```
df_train['Fare']=df_train['Fare'].map(lambda i: np.log(i) if i>0 else 0)
```
Here we transform the values of `df_train['Fare']` appropriately (a log transform for positive fares)
```
fig,ax=plt.subplots(1,1,figsize=(8,8))
g=sns.distplot(df_train['Fare'], color='b',label='Skewness{:.2f}'.format(df_train['Fare'].skew()),ax=ax)
g=g.legend(loc='best')
```
Through this kind of work (the log transformation), the skewness has been brought close to 0.
```
df_train['Ticket'].value_counts()
```
A wide variety of formats are mixed together in the ticket values -> an appropriate transformation looks necessary (one possibility is sketched below)
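One possible transformation (a rough sketch only; the `Ticket_prefix` column name is made up here) is to split off the alphabetic prefix of each ticket and mark purely numeric tickets as `'NONE'`:
```
# Extract the alphabetic prefix of each ticket; purely numeric tickets become 'NONE'
df_train['Ticket_prefix'] = (df_train['Ticket']
                             .str.replace(r'[./]', '', regex=True)
                             .str.split()
                             .map(lambda parts: parts[0] if not parts[0].isdigit() else 'NONE'))
df_train['Ticket_prefix'].value_counts()
```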
| github_jupyter |
# Section 1.2: Dimension reduction and principal component analysis (PCA)
One of the iron laws of data science is known as the "curse of dimensionality": as the number of features (dimensions) under consideration increases, the number of possible data configurations can grow exponentially, and thus the number of observations (data points) needed to account for these configurations must also increase. Because this fact of life has huge ramifications for the time, computational effort, and memory required, it is often desirable to reduce the number of dimensions we have to work with.
One way to accomplish this is by reducing the number of features considered in an analysis. After all, not all features are created equal, and some yield more insight for a given analysis than others. While this type of feature engineering is necessary in any data-science project, we can only take it so far; up to a point, considering more features can often increase the accuracy of a classifier. (For example, consider how many features could increase the accuracy of classifying images as cats or dogs.)
## PCA in theory
Another way to reduce the number of dimensions that we have to work with is by projecting our feature space into a lower dimensional space. The reason why we can do this is that in most real-world problems, data points are not spread uniformly across all dimensions. Some features might be near constant, while others are highly correlated, which means that those data points lie close to a lower-dimensional subspace.
In the image below, the data points are not spread across the entire plane, but are nicely clumped, roughly in an oval. Because the cluster (or, indeed, any cluster) is roughly elliptical, it can be mathematically described by two values: its major (long) axis and its minor (short) axis. These axes form the *principal components* of the cluster.
<img align="center" style="padding-right:10px;" src="Images/PCA.png">
In fact, we can construct a whole new feature space around this cluster, defined by two *eigenvectors* (the vectors that define the linear transformation to this new feature space), $c_{1}$ and $c_{2}$. Better still, we don't have to consider all of the dimensions of this new space. Intuitively, we can see that most of the points lie on or close to the line that runs through $c_{1}$. So, if we project the cluster down from two dimensions to that single dimension, we capture most of the information about this dataset while simplifying our analysis. This ability to extract most of the information from a dataset by considering only a fraction of its definitive eigenvectors forms the heart of principal component analysis (PCA).
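As a quick, self-contained illustration of this idea (separate from the dataset analysis below), the sketch here builds a synthetic elliptical cloud of 2-D points and projects it onto its first principal component:
```
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.RandomState(0)
# An elongated, roughly elliptical cloud of correlated 2-D points
points = rng.multivariate_normal(mean=[0, 0], cov=[[3, 2], [2, 2]], size=500)

pca_2d = PCA(n_components=2).fit(points)
print('Principal axes (eigenvectors):\n', pca_2d.components_)
print('Share of variance along each axis:', pca_2d.explained_variance_ratio_)

# Keeping only the first component projects the cloud down to a single dimension
projected = PCA(n_components=1).fit_transform(points)
print('Projected shape:', projected.shape)
```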
## Import modules and dataset
You will need to clean and prepare the data in order to conduct PCA on it, so pandas will be essential. You will also need NumPy, a bit of Scikit Learn, and pyplot.
```
import pandas as pd
import numpy as np
from sklearn.decomposition import PCA
from sklearn.preprocessing import StandardScaler
import matplotlib.pyplot as plt
%matplotlib inline
```
The dataset we'll use here is the same one drawn from the [U.S. Department of Agriculture National Nutrient Database for Standard Reference](https://www.ars.usda.gov/northeast-area/beltsville-md-bhnrc/beltsville-human-nutrition-research-center/nutrient-data-laboratory/docs/usda-national-nutrient-database-for-standard-reference/) that you prepared in Section 1.1. Remember to set the encoding to `latin_1` (for those darn µg).
```
df = pd.read_csv('Data/USDA-nndb-combined.csv', encoding='latin_1')
```
We can check the number of columns and rows using the `info()` method for the `DataFrame`.
```
df.info()
```
> **Exercise**
>
> Can you think of a more concise way to check the number of rows and columns in a `DataFrame`? (***Hint:*** Use one of the [attributes](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.html) of the `DataFrame`.)
## Handle `null` values
Because this is a real-world dataset, it is a safe bet that it has `null` values in it. We could first check to see if this is true. However, later on in this section, we will have to transform our data using a function that cannot use `NaN` values, so we might as well drop rows containing those values.
> **Exercise**
>
> Drop rows from the `DataFrame` that contain `NaN` values. (If you need help remembering which method to use, see [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.dropna.html).)
> **Exercise solution**
>
> The correct code to use is `df = df.dropna()`.
Now letโs see how many rows we have left.
```
df.shape
```
Dropping those rows eliminated 76 percent of our data (8989 entries to 2190). An imperfect state of affairs, but we still have enough for our purposes in this section.
> **Key takeaway:** Another solution to removing `null` values is to impute values for them, but this can be tricky. Should we handle missing values as equal to 0? What about a fatty food with `NaN` for `Lipid_Tot_(g)`? We could try taking the averages of values surrounding a `NaN`, but what about foods that are right next to rows containing foods from radically different food groups? It is possible to make justifiable imputations for missing values, but it can be important to involve subject-matter experts (SMEs) in that process.
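If one did decide to impute (we do not do so here), a per-food-group mean imputation on the original, pre-drop `DataFrame` might look like this sketch:
```
# A sketch of group-wise mean imputation (NOT applied in this section): fill each
# missing numeric value with the mean of that nutrient within its food group.
numeric_cols = df.select_dtypes(include='number').columns
df_imputed = df.copy()
df_imputed[numeric_cols] = df.groupby('FoodGroup')[numeric_cols].transform(
    lambda col: col.fillna(col.mean()))
```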
## Split off descriptive columns
Our descriptive columns (such as `FoodGroup` and `Shrt_Desc`) pose challenges for us when it comes time to perform PCA because they are categorical rather than numerical features, so we will split our `DataFrame` into one containing the descriptive information and one containing the nutritional information.
```
desc_df = df.iloc[:, [0, 1, 2]+[i for i in range(50,54)]]
desc_df.set_index('NDB_No', inplace=True)
desc_df.head()
```
> **Question**
>
> Why was it necessary to structure the `iloc` method call the way we did in the code cell above? What did it accomplish? Why was it necessary set the `desc_df` index to `NDB_No`?
```
nutr_df = df.iloc[:, :-5]
nutr_df.head()
```
> **Question**
>
> What did the `iloc` syntax do in the code cell above?
```
nutr_df = nutr_df.drop(['FoodGroup', 'Shrt_Desc'], axis=1)
```
> **Exercise**
>
> Now set the index of `nutr_df` to use `NDB_No`.
> **Exercise solution**
>
> The correct code for students to use here is `nutr_df.set_index('NDB_No', inplace=True)`.
Now letโs take a look at `nutr_df`.
```
nutr_df.head()
```
## Check for correlation among features
One thing that can skew our classification results is correlation among our features. Recall that the whole reason that PCA works is that it exploits the correlation among data points to project our feature space into a lower-dimensional space. However, if some of our features are highly correlated to begin with, these relationships might create spurious clusters of data in our PCA.
The code to check for correlations in our data isn't long, but it takes too long (up to 10 to 20 minutes) to run for a course like this. Instead, the table below shows the output from that code:
| | column | row | corr |
|--:|------------------:|------------------:|-----:|
| 0 | Folate\_Tot\_(µg) | Folate\_DFE\_(µg) | 0.98 |
| 1 | Folic\_Acid\_(µg) | Folate\_DFE\_(µg) | 0.95 |
| 2 | Folate\_DFE\_(µg) | Folate\_Tot\_(µg) | 0.98 |
| 3 | Vit\_A\_RAE | Retinol\_(µg) | 0.99 |
| 4 | Retinol\_(µg) | Vit\_A\_RAE | 0.99 |
| 5 | Vit\_D\_µg | Vit\_D\_IU | 1 |
| 6 | Vit\_D\_IU | Vit\_D\_µg | 1 |
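For reference, the kind of scan described above might look like the sketch below (this version leans on `DataFrame.corr()` and may differ from the original, slower implementation):
```
# Scan all column pairs for absolute correlations above 0.9
corr_matrix = nutr_df.corr().abs()
high_corr = [(col, row, round(corr_matrix.loc[row, col], 2))
             for col in corr_matrix.columns
             for row in corr_matrix.index
             if col != row and corr_matrix.loc[row, col] > 0.9]
pd.DataFrame(high_corr, columns=['column', 'row', 'corr'])
```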
As it turns out, dropping `Folate_DFE_(µg)`, `Vit_A_RAE`, and `Vit_D_IU` will eliminate the correlations enumerated in the table above.
```
nutr_df.drop(['Folate_DFE_(µg)', 'Vit_A_RAE', 'Vit_D_IU'],
inplace=True, axis=1)
nutr_df.head()
```
## Normalize and center the data
Our numeric data comes in a variety of mass units (grams, milligrams, and micrograms) and one energy unit (kilocalories). In order to make an apples-to-apples comparison (pun intended) of the nutritional data, we need to first *normalize* the data and make it more normally distributed (that is, make the distribution of the data look more like a familiar bell curve).
To help see why we need to normalize the data, let's look at a histogram of all of the columns.
```
ax = nutr_df.hist(bins=50, xlabelsize=-1, ylabelsize=-1, figsize=(11,11))
```
Not a bell curve in sight. Worse, a lot of the data is clumped at or around 0. We will use the Box-Cox Transformation on the data, but it requires strictly positive input, so we will add 1 to every value in each column.
```
nutr_df = nutr_df + 1
```
Now for the transformation. The [Box-Cox Transformation](https://www.statisticshowto.datasciencecentral.com/box-cox-transformation/) performs the transformation $y(\lambda) = \dfrac{y^{\lambda}-1}{\lambda}$ for $\lambda \neq 0$ and $y(\lambda) = \log y$ for $\lambda = 0$ for all values $y$ in a given column. SciPy has a particularly useful `boxcox()` function that can automatically calculate the $\lambda$ for each column that best normalizes the data in that column. (However, it does not support `NaN` values; scikit-learn has a comparable `boxcox()` function that is `NaN`-safe, but it is not available on the version of scikit-learn that comes with Azure notebooks.)
```
from scipy.stats import boxcox
nutr_df_TF = pd.DataFrame(index=nutr_df.index)
for col in nutr_df.columns.values:
nutr_df_TF['{}_TF'.format(col)] = boxcox(nutr_df.loc[:, col])[0]
```
Let's now take a look at the `DataFrame` containing the transformed data.
```
ax = nutr_df_TF.hist(bins=50, xlabelsize=-1, ylabelsize=-1, figsize=(11,11))
```
Few of these columns look properly normal, but the data is now in good enough shape to center.
Our data units were incompatible to begin with, and the transformations have not improved that. But we can address that by centering the data around 0; that is, we will again transform the data, this time so that every column has a mean of 0 and a standard deviation of 1. Scikit-learn has a convenient function for this.
```
nutr_df_TF = StandardScaler().fit_transform(nutr_df_TF)
```
You can satisfy yourself that the data is now centered by calling the `mean()` method on the transformed data.
```
print("mean: ", np.round(nutr_df_TF.mean(), 2))
```
> **Exercise**
>
> Find the standard deviation for the `nutr_df_TF`. (If you need a hint as to which method to use, see [this page](https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.std.html).)
> **Exercise solution**
>
> The correct code to use here is `print("s.d.: ", np.round(nutr_df_TF.std(), 2))`.
## PCA in practice
It is finally time to perform the PCA on our data. (As stated before, even with pretty clean data, a lot of effort has to go into preparing the data for analysis.)
```
fit = PCA()
pca = fit.fit_transform(nutr_df_TF)
```
So, now that we have performed the PCA on our data, what do we actually have? Remember that PCA is foremost about finding the eigenvectors for our data. We then want to select some subset of those vectors to form the lower-dimensional subspace in which to analyze our data.
Not all of the eigenvectors are created equal. Just a few of them will account for the majority of the variance in the data. (Put another way, a subspace composed of just a few of the eigenvectors will retain the majority of the information from our data.) We want to focus on those vectors.
To help us get a sense of how many vectors we should use, consider this scree graph of the variance for the PCA components, which plots the variance explained by the components from greatest to least.
```
plt.plot(fit.explained_variance_ratio_)
```
This is where data science can become an art. As a rule of thumb, we want to look for an "elbow" in the graph, the point at which the first few components have captured the majority of the variance in the data (after that point, we are only adding complexity to the analysis for increasingly diminishing returns). In this particular case, that appears to be at about five components.
We can take the cumulative sum of the first five components to see how much variance they capture in total.
```
print(fit.explained_variance_ratio_[:5].sum())
```
So our five components capture about 70 percent of the variance. We can see what fewer or additional components would yield by looking at the cumulative variance for all of the components.
```
print(fit.explained_variance_ratio_.cumsum())
```
We can also examine this visually.
```
plt.plot(np.cumsum(fit.explained_variance_ratio_))
plt.title("Cumulative Explained Variance Graph")
```
Ultimately, it is a matter of judgment as to how many components to use, but five vectors (and 70 percent of the variance) will suffice for our purposes in this section.
To aid further analysis, let's now put those five components into a DataFrame.
```
pca_df = pd.DataFrame(pca[:, :5], index=df.index)
pca_df.head()
```
Each column corresponds to one of the first five principal components, and each row gives a food item's coordinates in that five-dimensional component space.
We will want to add the FoodGroup column back in to aid with our interpretation of the data later on. Let's also rename the component-columns $c_{1}$ through $c_{5}$ so that we know what we are looking at.
```
pca_df = pca_df.join(desc_df)
pca_df.drop(['Shrt_Desc', 'GmWt_Desc1', 'GmWt_2', 'GmWt_Desc2', 'Refuse_Pct'],
axis=1, inplace=True)
pca_df.rename(columns={0:'c1', 1:'c2', 2:'c3', 3:'c4', 4:'c5'},
inplace=True)
pca_df.head()
```
Don't worry that the FoodGroup column has all `NaN` values: it is not a vector, so it has no vector coordinates.
One last thing we should demonstrate is that the components are mutually perpendicular (orthogonal, in math-speak). One way of expressing that condition is that each component should correlate perfectly with itself and not correlate at all (positively or negatively) with any other component.
```
np.round(pca_df.corr(), 5)
```
## Interpreting the results
What do our vectors mean? Put another way, what kinds of foods populate the different clusters we have discovered among the data?
To see these results, we will create pandas Series for each of the components, index them by feature, and then sort them in decreasing order (so that a large positive number represents a feature that is strongly positively associated with that component, and a negative number represents a feature that is negatively associated with it).
```
vects = fit.components_[:5]
c1 = pd.Series(vects[0], index=nutr_df.columns)
c1.sort_values(ascending=False)
```
Our first cluster is defined by foods that are high in protein and minerals like selenium and zinc while also being low in sugars and vitamin C. Even to a non-specialist, these sound like foods such as meat, poultry, or legumes.
> **Key takeaway:** Particularly when it comes to interpretation, subject-matter expertise can prove essential to producing high-quality analysis. For this reason, you should also try to include SMEs in your data-science projects.
```
c2 = pd.Series(vects[1], index=nutr_df.columns)
c2.sort_values(ascending=False)
```
Our second group is foods that are high in fiber and folic acid and low in cholesterol.
> **Exercise**
>
> Find the sorted output for $c_{3}$, $c_{4}$, and $c_{5}$.
>
> ***Hint:*** Remember that Python uses zero-indexing.
Even without subject-matter expertise, is it possible to get a more accurate sense of the kinds of foods defined by each component? Yes! This is the reason we merged the `FoodGroup` column back into `pca_df`. We will sort that `DataFrame` by the components and count the values from `FoodGroup` for the top items.
```
pca_df.sort_values(by='c1')['FoodGroup'][:500].value_counts()
```
We can do the same thing for $c_{2}$.
```
pca_df.sort_values(by='c2')['FoodGroup'][:500].value_counts()
```
> **Exercise**
>
> Repeat this process for $c_{3}$, $c_{4}$, and $c_{5}$.
> **A parting note:** `Baby Foods` and some other food groups might seem to dominate several of the components. This is a product of all of the rows we had to drop that had `NaN` values. If we look at all of the value counts for `FoodGroup`, we will see that they are not evenly distributed, with some categories far more represented than others.
```
df['FoodGroup'].value_counts()
```
| github_jupyter |
<table class="ee-notebook-buttons" align="left">
<td><a target="_blank" href="https://github.com/giswqs/geemap/tree/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=32px src="https://www.tensorflow.org/images/GitHub-Mark-32px.png" /> View source on GitHub</a></td>
<td><a target="_blank" href="https://nbviewer.jupyter.org/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img width=26px src="https://upload.wikimedia.org/wikipedia/commons/thumb/3/38/Jupyter_logo.svg/883px-Jupyter_logo.svg.png" />Notebook Viewer</a></td>
<td><a target="_blank" href="https://colab.research.google.com/github/giswqs/geemap/blob/master/examples/notebooks/geemap_and_ipyleaflet.ipynb"><img src="https://www.tensorflow.org/images/colab_logo_32px.png" /> Run in Google Colab</a></td>
</table>
## Install Earth Engine API and geemap
Install the [Earth Engine Python API](https://developers.google.com/earth-engine/python_install) and [geemap](https://github.com/giswqs/geemap). The **geemap** Python package is built upon the [ipyleaflet](https://github.com/jupyter-widgets/ipyleaflet) and [folium](https://github.com/python-visualization/folium) packages and implements several methods for interacting with Earth Engine data layers, such as `Map.addLayer()`, `Map.setCenter()`, and `Map.centerObject()`.
The following script checks if the geemap package has been installed. If not, it will install geemap, which automatically installs its [dependencies](https://github.com/giswqs/geemap#dependencies), including earthengine-api, folium, and ipyleaflet.
**Important note**: A key difference between folium and ipyleaflet is that ipyleaflet is built upon ipywidgets and allows bidirectional communication between the front-end and the backend enabling the use of the map to capture user input, while folium is meant for displaying static data only ([source](https://blog.jupyter.org/interactive-gis-in-jupyter-with-ipyleaflet-52f9657fa7a)). Note that [Google Colab](https://colab.research.google.com/) currently does not support ipyleaflet ([source](https://github.com/googlecolab/colabtools/issues/60#issuecomment-596225619)). Therefore, if you are using geemap with Google Colab, you should use [`import geemap.eefolium`](https://github.com/giswqs/geemap/blob/master/geemap/eefolium.py). If you are using geemap with [binder](https://mybinder.org/) or a local Jupyter notebook server, you can use [`import geemap`](https://github.com/giswqs/geemap/blob/master/geemap/geemap.py), which provides more functionalities for capturing user input (e.g., mouse-clicking and moving).
```
# Installs geemap package
import subprocess
try:
import geemap
except ImportError:
print('geemap package not installed. Installing ...')
subprocess.check_call(["python", '-m', 'pip', 'install', 'geemap'])
# Checks whether this notebook is running on Google Colab
try:
import google.colab
import geemap.eefolium as emap
except:
import geemap as emap
# Authenticates and initializes Earth Engine
import ee
try:
ee.Initialize()
except Exception as e:
ee.Authenticate()
ee.Initialize()
```
## Create an interactive map
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
Map.add_minimap(position='bottomright')
Map
```
## Add tile layers
For example, you can add a Google Maps tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=m&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Map', attribution='Google')
```
Add Google Terrain tile layer:
```
url = 'https://mt1.google.com/vt/lyrs=p&x={x}&y={y}&z={z}'
Map.add_tile_layer(url, name='Google Terrain', attribution='Google')
```
## Add WMS layers
More WMS layers can be found at <https://viewer.nationalmap.gov/services/>.
For example, you can add NAIP imagery.
```
url = 'https://services.nationalmap.gov/arcgis/services/USGSNAIPImagery/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='0', name='NAIP Imagery', format='image/png')
```
Add USGS 3DEP Elevation Dataset
```
url = 'https://elevation.nationalmap.gov/arcgis/services/3DEPElevation/ImageServer/WMSServer?'
Map.add_wms_layer(url=url, layers='3DEPElevation:None', name='3DEP Elevation', format='image/png')
```
## Capture user inputs
```
import geemap
from ipywidgets import Label
from ipyleaflet import Marker
Map = geemap.Map(center=(40, -100), zoom=4)
label = Label()
display(label)
coordinates = []
def handle_interaction(**kwargs):
latlon = kwargs.get('coordinates')
if kwargs.get('type') == 'mousemove':
label.value = str(latlon)
elif kwargs.get('type') == 'click':
coordinates.append(latlon)
Map.add_layer(Marker(location=latlon))
Map.on_interaction(handle_interaction)
Map
print(coordinates)
```
## A simpler way for capturing user inputs
```
import geemap
Map = geemap.Map(center=(40, -100), zoom=4)
cluster = Map.listening(event='click', add_marker=True)
Map
# Get the last mouse clicked coordinates
Map.last_click
# Get all the mouse clicked coordinates
Map.all_clicks
```
## SplitMap control
```
import geemap
from ipyleaflet import *
Map = geemap.Map(center=(47.50, -101), zoom=7)
right_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2017_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2017_CIR',
name = 'AerialImage_ND_2017_CIR',
format = 'image/png'
)
left_layer = WMSLayer(
url = 'https://ndgishub.nd.gov/arcgis/services/Imagery/AerialImage_ND_2018_CIR/ImageServer/WMSServer?',
layers = 'AerialImage_ND_2018_CIR',
name = 'AerialImage_ND_2018_CIR',
format = 'image/png'
)
control = SplitMapControl(left_layer=left_layer, right_layer=right_layer)
Map.add_control(control)
Map.add_control(LayersControl(position='topright'))
Map.add_control(FullScreenControl())
Map
import geemap
Map = geemap.Map()
Map.split_map(left_layer='HYBRID', right_layer='ESRI')
Map
```
| github_jupyter |
# Gender and Age Detection
```
import pandas as pd
import numpy as np
import os
import matplotlib.pyplot as plt
import cv2
from tensorflow.keras.models import Sequential, load_model, Model
from tensorflow.keras.layers import Conv2D, MaxPool2D, Dense, Dropout, BatchNormalization, Flatten, Input
from sklearn.model_selection import train_test_split
# Defining the path to the dataset folder.
datasetFolder = r"C:\Users\ACER\Documents\Gender Detection\DataSets\UTKFace"
# Creating empty list.
pixels = []
age = []
gender = []
for img in os.listdir(datasetFolder):  # os.listdir lists every file in "datasetFolder"
    # The filename of each image is split on "_" and the age and gender fields are stored in their own variables.
    ages = img.split("_")[0]
    genders = img.split("_")[1]
    img = cv2.imread(str(datasetFolder) + "/" + str(img))  # Reading each image from the folder path provided.
    img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)  # Converting the image from BGR to RGB, since OpenCV loads images as BGR by default.
    # Appending the data to the respective lists.
    pixels.append(np.array(img))
    age.append(np.array(ages))
    gender.append(np.array(genders))
# Converting list to array
age = np.array(age, dtype = np.int64)
pixels = np.array(pixels)
gender = np.array(gender, np.uint64)
# Printing the length of the pixel .
p = len(pixels)
print(f"No. of images working upon {p}")
# Splitting the images in train and test dataset.
x_train, x_test, y_train, y_test = train_test_split(pixels, age, random_state = 100)
# Splitting the dataset in train and test dataset as gender as.
x_train_2, x_test_2, y_train_2, y_test_2 = train_test_split(pixels, gender, random_state = 100)
# Checking the shape of the images set. Here (200, 200, 3) are height, width and channel of the images respectively.
x_train.shape, x_train_2.shape, x_test.shape, x_test_2.shape,
# Checking the shape of the target variable.
y_train.shape, y_train_2.shape, y_test.shape, y_test_2.shape
```
###### The cell below creates the layers of a convolutional neural network model. The layers in a CNN model are:
* Input Layer
* Convolution Layer
* ReLu Layer
* Pooling Layer
* Fully Connected Network
```
inputLayer = Input(shape = (200, 200, 3))  # Input layer from keras.layers; again, (200, 200, 3) are the height, width, and channels of the images.
convLayer1 = Conv2D(140,(3,3), activation = 'relu')(inputLayer)
'''An activation function is a simple function that transforms its inputs into outputs within a certain range.
The ReLU activation maps negative values to 0 and leaves positive values unchanged, which is why it is also
known as a half rectifier.'''
convLayer2 = Conv2D(130,(3,3), activation = 'relu')(convLayer1) # Creating seccond layer of CNN.
batch1 = BatchNormalization()(convLayer2) # Normalizing the data.
poolLayer3 = MaxPool2D((2,2))(batch1) # Creating third, Pool Layer of the CNN.
convLayer3 = Conv2D(120,(3,3), activation = 'relu')(poolLayer3) # Adding the third Layer.
batch2 = BatchNormalization()(convLayer3) # Normalizing the layer.
poolLayer4 = MaxPool2D((2,2))(batch2) #Adding fourth layer of CNN.
flt = Flatten()(poolLayer4) # Flattening the data.
age_model = Dense(128,activation="relu")(flt) # Here 128 is the no. of neurons connected with the flatten data layer.
age_model = Dense(64,activation="relu")(age_model) #Now as we move down, no. of neurons are reducing with previous neurons connected to them.
age_model = Dense(32,activation="relu")(age_model)
age_model = Dense(1,activation="relu")(age_model)
gender_model = Dense(128,activation="relu")(flt) # The same work as above with 128 neurons is done for gender predictive model.
gender_model = Dense(80,activation="relu")(gender_model)
gender_model = Dense(64,activation="relu")(gender_model)
gender_model = Dense(32,activation="relu")(gender_model)
gender_model = Dropout(0.5)(gender_model) # Drop-out layer is added to dodge the overfitting of the model.
'''Softmax is a mathematical function that converts a vector of numbers into a vector of probabilities, where the probabilities
of each value are proportional to the relative scale of each value in the vector. Here it is used as an activation function.'''
gender_model = Dense(2,activation="softmax")(gender_model)
```
###### The cell below makes an object of the Model class from keras.models.
```
model = Model(inputs=inputLayer,outputs=[age_model,gender_model]) # Adding the input layer and the output layer in our model and making the object.
model.compile(optimizer="adam",loss=["mse","sparse_categorical_crossentropy"],metrics=['mae','accuracy'])
model.summary() # To get the summary of our model.
save = model.fit(x_train,[y_train,y_train_2], validation_data=(x_test,[y_test,y_test_2]),epochs=50)
model.save("model.h5")
```
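Once saved, the model can be loaded back and used for inference. The sketch below assumes a single face image at a made-up path (`test_face.jpg`); in UTKFace filenames, gender is encoded as 0 = male, 1 = female.
```
# A sketch of loading the saved model and predicting age and gender for one image
loaded_model = load_model("model.h5")

img = cv2.imread("test_face.jpg")            # hypothetical test image path
img = cv2.cvtColor(img, cv2.COLOR_BGR2RGB)   # match the training colour space
img = cv2.resize(img, (200, 200))            # match the training input size
img = np.expand_dims(img, axis=0)            # add a batch dimension

pred_age, pred_gender = loaded_model.predict(img)
print("Predicted age:", pred_age[0][0])
print("Predicted gender (0 = male, 1 = female):", np.argmax(pred_gender[0]))
```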
| github_jupyter |
# An Introduction to SageMaker LDA
***Finding topics in synthetic document data using Spectral LDA algorithms.***
---
1. [Introduction](#Introduction)
1. [Setup](#Setup)
1. [Training](#Training)
1. [Inference](#Inference)
1. [Epilogue](#Epilogue)
# Introduction
***
Amazon SageMaker LDA is an unsupervised learning algorithm that attempts to describe a set of observations as a mixture of distinct categories. Latent Dirichlet Allocation (LDA) is most commonly used to discover a user-specified number of topics shared by documents within a text corpus. Here each observation is a document, the features are the presence (or occurrence count) of each word, and the categories are the topics. Since the method is unsupervised, the topics are not specified up front, and are not guaranteed to align with how a human may naturally categorize documents. The topics are learned as a probability distribution over the words that occur in each document. Each document, in turn, is described as a mixture of topics.
In this notebook we will use the Amazon SageMaker LDA algorithm to train an LDA model on some example synthetic data. We will then use this model to classify (perform inference on) the data. The main goals of this notebook are to,
* learn how to obtain and store data for use in Amazon SageMaker,
* create an AWS SageMaker training job on a data set to produce an LDA model,
* use the LDA model to perform inference with an Amazon SageMaker endpoint.
The following are ***not*** goals of this notebook:
* understand the LDA model,
* understand how the Amazon SageMaker LDA algorithm works,
* interpret the meaning of the inference output
If you would like to know more about these things take a minute to run this notebook and then check out the SageMaker LDA Documentation and the **LDA-Science.ipynb** notebook.
```
!conda install -y scipy
%matplotlib inline
import os, re
import boto3
import matplotlib.pyplot as plt
import numpy as np
np.set_printoptions(precision=3, suppress=True)
# some helpful utility functions are defined in the Python module
# "generate_example_data" located in the same directory as this
# notebook
from generate_example_data import generate_griffiths_data, plot_lda, match_estimated_topics
# accessing the SageMaker Python SDK
import sagemaker
from sagemaker.amazon.common import RecordSerializer
from sagemaker.serializers import CSVSerializer
from sagemaker.deserializers import JSONDeserializer
```
# Setup
***
*This notebook was created and tested on an ml.m4.xlarge notebook instance.*
Before we do anything at all, we need data! We also need to setup our AWS credentials so that AWS SageMaker can store and access data. In this section we will do four things:
1. [Setup AWS Credentials](#SetupAWSCredentials)
1. [Obtain Example Data](#ObtainExampleData)
1. [Inspect Example Data](#InspectExampleData)
1. [Store Data on S3](#StoreDataonS3)
## Setup AWS Credentials
We first need to specify some AWS credentials; specifically data locations and access roles. This is the only cell of this notebook that you will need to edit. In particular, we need the following data:
* `bucket` - An S3 bucket accessible by this account.
* Used to store input training data and model data output.
* Should be within the same region as this notebook instance, training, and hosting.
* `prefix` - The location in the bucket where this notebook's input and output data will be stored. (The default value is sufficient.)
* `role` - The IAM Role ARN used to give training and hosting access to your data.
* See documentation on how to create these.
* The script below will try to determine an appropriate Role ARN.
```
from sagemaker import get_execution_role
session = sagemaker.Session()
role = get_execution_role()
bucket = session.default_bucket()
prefix = 'sagemaker/DEMO-lda-introduction'
print('Training input/output will be stored in {}/{}'.format(bucket, prefix))
print('\nIAM Role: {}'.format(role))
```
## Obtain Example Data
We generate some example synthetic document data. For the purposes of this notebook we will omit the details of this process. All we need to know is that each piece of data, commonly called a *"document"*, is a vector of integers representing *"word counts"* within the document. In this particular example there are a total of 25 words in the *"vocabulary"*.
$$
\underbrace{w}_{\text{document}} = \overbrace{\big[ w_1, w_2, \ldots, w_V \big] }^{\text{word counts}},
\quad
V = \text{vocabulary size}
$$
These data are based on that used by Griffiths and Steyvers in their paper [Finding Scientific Topics](http://psiexp.ss.uci.edu/research/papers/sciencetopics.pdf). For more information, see the **LDA-Science.ipynb** notebook.
```
print('Generating example data...')
num_documents = 6000
num_topics = 5
known_alpha, known_beta, documents, topic_mixtures = generate_griffiths_data(
num_documents=num_documents, num_topics=num_topics)
vocabulary_size = len(documents[0])
# separate the generated data into training and test subsets
num_documents_training = int(0.9*num_documents)
num_documents_test = num_documents - num_documents_training
documents_training = documents[:num_documents_training]
documents_test = documents[num_documents_training:]
topic_mixtures_training = topic_mixtures[:num_documents_training]
topic_mixtures_test = topic_mixtures[num_documents_training:]
print('documents_training.shape = {}'.format(documents_training.shape))
print('documents_test.shape = {}'.format(documents_test.shape))
```
## Inspect Example Data
*What does the example data actually look like?* Below we print an example document as well as its corresponding known *topic-mixture*. A topic-mixture serves as the "label" in the LDA model. It describes the ratio of topics from which the words in the document are found.
For example, if the topic mixture of an input document $\mathbf{w}$ is,
$$\theta = \left[ 0.3, 0.2, 0, 0.5, 0 \right]$$
then $\mathbf{w}$ is 30% generated from the first topic, 20% from the second topic, and 50% from the fourth topic. For more information see **How LDA Works** in the SageMaker documentation as well as the **LDA-Science.ipynb** notebook.
Below, we compute the topic mixtures for the first few training documents. As we can see, each document is a vector of word counts from the 25-word vocabulary and its topic-mixture is a probability distribution across the five topics used to generate the sample dataset.
```
print('First training document =\n{}'.format(documents[0]))
print('\nVocabulary size = {}'.format(vocabulary_size))
print('Known topic mixture of first document =\n{}'.format(topic_mixtures_training[0]))
print('\nNumber of topics = {}'.format(num_topics))
print('Sum of elements = {}'.format(topic_mixtures_training[0].sum()))
```
Later, when we perform inference on the training data set we will compare the inferred topic mixture to this known one.
---
Human beings are visual creatures, so it might be helpful to come up with a visual representation of these documents. In the below plots, each pixel of a document represents a word. The greyscale intensity is a measure of how frequently that word occurs. Below we plot the first few documents of the training set reshaped into 5x5 pixel grids.
```
%matplotlib inline
fig = plot_lda(documents_training, nrows=3, ncols=4, cmap='gray_r', with_colorbar=True)
fig.suptitle('Example Document Word Counts')
fig.set_dpi(160)
```
## Store Data on S3
A SageMaker training job needs access to training data stored in an S3 bucket. Although training can accept data of various formats, we convert the documents to MXNet RecordIO Protobuf format before uploading to the S3 bucket defined at the beginning of this notebook. We do so by making use of the SageMaker Python SDK utility `RecordSerializer`.
```
# convert documents_training to Protobuf RecordIO format
recordio_protobuf_serializer = RecordSerializer()
fbuffer = recordio_protobuf_serializer.serialize(documents_training)
# upload to S3 in bucket/prefix/train
fname = 'lda.data'
s3_object = os.path.join(prefix, 'train', fname)
boto3.Session().resource('s3').Bucket(bucket).Object(s3_object).upload_fileobj(fbuffer)
s3_train_data = 's3://{}/{}'.format(bucket, s3_object)
print('Uploaded data to S3: {}'.format(s3_train_data))
```
# Training
***
Once the data is preprocessed and available in a recommended format, the next step is to train our model on the data. There are a number of parameters required by SageMaker LDA for configuring the model and defining the computational environment in which training will take place.
First, we specify a Docker container containing the SageMaker LDA algorithm. For your convenience, a region-specific container is automatically chosen for you to minimize cross-region data communication. Information about the locations of each SageMaker algorithm is available in the documentation.
```
from sagemaker.amazon.amazon_estimator import get_image_uri
# select the algorithm container based on this notebook's current location
region_name = boto3.Session().region_name
container = get_image_uri(region_name, 'lda')
print('Using SageMaker LDA container: {} ({})'.format(container, region_name))
```
Particular to a SageMaker LDA training job are the following hyperparameters:
* **`num_topics`** - The number of topics or categories in the LDA model.
* Usually, this is not known a priori.
* In this example, however, we know that the data is generated by five topics.
* **`feature_dim`** - The size of the *"vocabulary"*, in LDA parlance.
* In this example, this is equal to 25.
* **`mini_batch_size`** - The number of input training documents.
* **`alpha0`** - *(optional)* a measurement of how "mixed" are the topic-mixtures.
* When `alpha0` is small the data tends to be represented by one or few topics.
* When `alpha0` is large the data tends to be an even combination of several or many topics.
* The default value is `alpha0 = 1.0`.
In addition to these LDA model hyperparameters, we provide additional parameters defining things like the EC2 instance type on which training will run, the S3 bucket containing the data, and the AWS access role. Note that,
* Recommended instance type: `ml.c4`
* Current limitations:
* SageMaker LDA *training* can only run on a single instance.
* SageMaker LDA does not take advantage of GPU hardware.
* (The Amazon AI Algorithms team is working hard to provide these capabilities in a future release!)
```
# specify general training job information
lda = sagemaker.estimator.Estimator(
container,
role,
output_path='s3://{}/{}/output'.format(bucket, prefix),
train_instance_count=1,
train_instance_type='ml.c4.2xlarge',
sagemaker_session=session,
)
# set algorithm-specific hyperparameters
lda.set_hyperparameters(
num_topics=num_topics,
feature_dim=vocabulary_size,
mini_batch_size=num_documents_training,
alpha0=1.0,
)
# run the training job on input data stored in S3
lda.fit({'train': s3_train_data})
```
If you see the message
> `===== Job Complete =====`
at the bottom of the output logs, then training completed successfully and the output LDA model was stored in the specified output path. You can also view information about, and the status of, a training job using the AWS SageMaker console. Just click on the "Jobs" tab and select the training job matching the training job name, below:
```
print('Training job name: {}'.format(lda.latest_training_job.job_name))
```
# Inference
***
A trained model does nothing on its own. We now want to use the model we computed to perform inference on data. For this example, that means predicting the topic mixture representing a given document.
We create an inference endpoint using the SageMaker Python SDK `deploy()` function from the job we defined above. We specify the instance type where inference is computed as well as an initial number of instances to spin up.
```
lda_inference = lda.deploy(
initial_instance_count=1,
instance_type='ml.m4.xlarge', # LDA inference may work better at scale on ml.c4 instances
)
```
Congratulations! You now have a functioning SageMaker LDA inference endpoint. You can confirm the endpoint configuration and status by navigating to the "Endpoints" tab in the AWS SageMaker console and selecting the endpoint matching the endpoint name, below:
```
print('Endpoint name: {}'.format(lda_inference.endpoint_name))
```
With this realtime endpoint at our fingertips we can finally perform inference on our training and test data.
We can pass a variety of data formats to our inference endpoint. In this example we will demonstrate passing CSV-formatted data. Other available formats are JSON-formatted, JSON-sparse-formatted, and RecordIO Protobuf. We make use of the SageMaker Python SDK utilities `CSVSerializer` and `JSONDeserializer` when configuring the inference endpoint.
```
lda_inference.serializer = CSVSerializer()
lda_inference.deserializer = JSONDeserializer()
```
We pass some test documents to the inference endpoint. Note that the serializer and deserializer will automatically take care of the datatype conversion from NumPy NDArrays.
```
results = lda_inference.predict(documents_test[:12])
print(results)
```
It may be hard to see, but the output of the SageMaker LDA inference endpoint is a Python dictionary with the following format.
```
{
'predictions': [
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
{'topic_mixture': [ ... ] },
...
]
}
```
We extract the topic mixtures, themselves, corresponding to each of the input documents.
```
computed_topic_mixtures = np.array([prediction['topic_mixture'] for prediction in results['predictions']])
print(computed_topic_mixtures)
```
If you decide to compare these results to the known topic mixtures generated in the [Obtain Example Data](#ObtainExampleData) section, keep in mind that SageMaker LDA discovers topics in no particular order. That is, the approximate topic mixtures computed above may be permutations of the known topic mixtures corresponding to the same documents.
```
print(topic_mixtures_test[0]) # known test topic mixture
print(computed_topic_mixtures[0]) # computed topic mixture (topics permuted)
```
## Stop / Close the Endpoint
Finally, we should delete the endpoint before we close the notebook.
To do so execute the cell below. Alternately, you can navigate to the "Endpoints" tab in the SageMaker console, select the endpoint with the name stored in the variable `endpoint_name`, and select "Delete" from the "Actions" dropdown menu.
```
sagemaker.Session().delete_endpoint(lda_inference.endpoint_name)
```
# Epilogue
---
In this notebook we,
* generated some example LDA documents and their corresponding topic-mixtures,
* trained a SageMaker LDA model on a training set of documents,
* created an inference endpoint,
* used the endpoint to infer the topic mixtures of a test input.
There are several things to keep in mind when applying SageMaker LDA to real-world data such as a corpus of text documents. Note that input documents to the algorithm, both in training and inference, need to be vectors of integers representing word counts. Each index corresponds to a word in the corpus vocabulary. Therefore, one will need to "tokenize" the corpus vocabulary.
$$
\text{"cat"} \mapsto 0, \; \text{"dog"} \mapsto 1 \; \text{"bird"} \mapsto 2, \ldots
$$
Each text document then needs to be converted to a "bag-of-words" format document.
$$
w = \text{"cat bird bird bird cat"} \quad \longmapsto \quad w = [2, 0, 3, 0, \ldots, 0]
$$
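As a rough sketch of that preprocessing step (using scikit-learn's `CountVectorizer`, which is one reasonable choice and is not part of this notebook's pipeline):
```
# A sketch of turning raw text into bag-of-words count vectors
from sklearn.feature_extraction.text import CountVectorizer

corpus = ["cat bird bird bird cat",
          "dog dog cat"]

vectorizer = CountVectorizer()
word_counts = vectorizer.fit_transform(corpus)   # sparse matrix, one row per document
print(vectorizer.vocabulary_)                    # token -> column-index mapping
print(word_counts.toarray())                     # dense integer word counts
```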
Also note that many real-world applications have large vocabulary sizes. It may be necessary to represent the input documents in sparse format. Finally, the use of stemming and lemmatization in data preprocessing provides several benefits. Doing so can improve training and inference compute time since it reduces the effective vocabulary size. More importantly, though, it can improve the quality of learned topic-word probability matrices and inferred topic mixtures. For example, the words *"parliament"*, *"parliaments"*, *"parliamentary"*, *"parliament's"*, and *"parliamentarians"* are all essentially the same word, *"parliament"*, but with different inflections. For the purposes of detecting topics, such as a *"politics"* or *"governments"* topic, the inclusion of all five does not add much additional value as they all essentially describe the same feature.
| github_jupyter |